openstack team mailing list archive
[Cinder][HyperV][KVM] iSCSI / NFS/ FCoE for Block Storage Implementation
Dear stackers, salute o/
As I'm moving forward to finally deploy OpenStack to production,
I'd like to hear your thoughts on Cinder backends: block storage
(iSCSI) vs. filesystem storage (NFS).
As far as I've read, iSCSI is extremely resilient and more reliable than NFS,
since it already addresses issues like network faults by using multiple
channels as individual paths, making sure the data reaches its target.
NFS, on the other hand, would require the infrastructure itself to guarantee
network connectivity (not that this would be a major issue, though).
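For reference, this is how I understand multipath would be enabled; a minimal sketch (option names are my assumption from the docs, please verify them against your release):

```ini
# nova.conf on the KVM compute node -- let libvirt attach iSCSI
# volumes through dm-multipath instead of a single session
[libvirt]
iscsi_use_multipath = True

# cinder.conf -- also use multipath during image-to-volume transfers
use_multipath_for_image_xfer = True
```
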
That is very important for production use indeed, but I can't ignore
the performance difference between the two. I've also read that NFS may
offer superior read I/O thanks to its read cache, but it falls behind
on writes -- unless there is some sort of write cache, like the one
provided by the ZFS filesystem.
Note: we currently use a Sun/Oracle storage appliance running the ZFS
filesystem. I've read a lot on the internet but, as you know, I'm not
sure how practical those articles are, or whether they only compare
the two theoretically.
Performance-wise, in a very high-throughput network
(supposedly 10G), would iSCSI perform better than the other alternatives?
Would you guys have any thoughts on FCoE over iSCSI? According to this
blueprint, it's already implemented and available for libvirt to use.
If NFS is the best option out of the three, I'm not sure how to deal with it
on Hyper-V hosts -- would it be technically possible at all?
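In case it helps the discussion, here is the kind of Cinder NFS setup I have in mind; a minimal sketch (the driver path is from the docs, the share file contents are my assumption for our ZFS appliance):

```ini
# cinder.conf -- NFS backend pointing at the ZFS appliance
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = $state_path/mnt
```

where /etc/cinder/nfs_shares would list one export per line, e.g. `zfs-appliance:/export/cinder` (hypothetical hostname/path for our setup).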
**Question 4** [Cloudbase-question]
Since the "Cinder Volume for Windows Storage Server 2012"
doesn't use the libvirt driver, may I ask whether the current build already
supports FCoE (Fibre Channel over Ethernet)? Thank you very much.
Thank you a lot, Stackers.
Developer, Software Engineer
irc: lychinus | skype: brunnop.oliveira