
Re: [Xen-users] iscsi vs nfs for xen VMs

2011/1/26 Christian Zoffoli <czoffoli@xxxxxxxxxxx>
> On 26/01/2011 18:58, Roberto Bifulco wrote:
> [cut]
>> from comparisons over the same hardware we can be more confident that
>> the results we get are still valid over a similar (clearly not exactly
>> the same!!) configuration.


> Typically, tests are quite incomparable. If you change disks (type,
> brand, size, number, RAID level), some settings, or the hardware, you
> can obtain very different results.

That's why I said I'm interested in comparisons over the same hardware.
Then, results can be generalized if you keep some variables (such as the
"architecture") unchanged. To be clearer, if I find that NFS is slower
than LVM over iSCSI, this is likely to be true on fast disks and on slow
ones alike, assuming that the network isn't a bottleneck.
 

> IMHO the right way is to find out how many IOPS you need to achieve
> for your load, and then you can choose the disk type, RAID type, rpm,
> etc.

I'm actually not interested in the numbers themselves. I was just
saying: each of us performs some tests to define the storage
architecture that best fits his needs, so let's just share the results,
so that others can decide in terms of "this one is better for
performance, but worse for flexibility" and so on...
 

> Typically, the SAN type (iSCSI, FC, etc.) doesn't affect IOPS, so if
> you need 4000 IOPS with a mixed 70/30 R/W workload you can simply
> calculate the iron you need to achieve this.
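
For illustration, here's a back-of-the-envelope version of that sizing
in Python; the per-disk IOPS figures and RAID write penalties below are
generic rules of thumb I'm assuming, not numbers from this thread:

import math

# Classic write penalties: RAID 0 = 1, RAID 1/10 = 2, RAID 5 = 4,
# RAID 6 = 6.
RAID_WRITE_PENALTY = {"raid0": 1, "raid10": 2, "raid5": 4, "raid6": 6}

# Rule-of-thumb small random IOPS per spindle: 7.2k SATA ~75,
# 10k SAS ~125, 15k SAS ~175. Adjust for your actual disks.
DISK_IOPS = {"7.2k": 75, "10k": 125, "15k": 175}

def disks_needed(target_iops, read_fraction, raid, disk):
    # Backend IOPS = reads + writes * penalty, then divide by the
    # per-disk IOPS to get a spindle count.
    penalty = RAID_WRITE_PENALTY[raid]
    backend = target_iops * (read_fraction + (1 - read_fraction) * penalty)
    return backend, math.ceil(backend / DISK_IOPS[disk])

# The 4000 IOPS, 70/30 R/W example above, on 15k spindles:
for raid in ("raid10", "raid5"):
    backend, n = disks_needed(4000, 0.70, raid, "15k")
    print(f"{raid}: {backend:.0f} backend IOPS -> {n} disks")

With RAID 10 that works out to 5200 backend IOPS (~30 disks at 175 IOPS
each); RAID 5's higher write penalty pushes it to 7600 (~44 disks).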

> Nevertheless, the connection type affects the bandwidth between
> servers and storage(s), the latency, and how many VMs you can put on a
> single piece of hardware.

> In other words, if you have good iron on the disk/controller side you
> can achieve, for example, 100 VMs, but if the bottleneck is your
> connection you probably have to reduce the overbooking level.
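
As a toy example of that ceiling (the link speed, efficiency factor,
and per-VM traffic below are hypothetical figures, chosen only to show
the shape of the calculation):

def vms_per_link(link_gbit, efficiency, per_vm_mbyte_s):
    # Convert the link speed from Gbit/s to usable MB/s, then divide
    # by the average storage traffic each VM generates.
    usable_mbyte_s = link_gbit * 1000 / 8 * efficiency
    return usable_mbyte_s / per_vm_mbyte_s

print(vms_per_link(1, 0.9, 5))   # 1 GbE, 5 MB/s per VM -> ~22 VMs
print(vms_per_link(10, 0.9, 5))  # 10 GbE lifts the ceiling to ~225

If the disks could feed 100 VMs but the wire tops out at ~22, the wire
is what sets the overbooking level.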

> iSCSI typically has quite a big overhead due to the protocol; FC,
> SAS, native InfiniBand, and AoE have very low overhead.
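
To put a rough number on that overhead, here's a sketch counting the
header bytes a 4 KiB block picks up as a single iSCSI PDU crossing
TCP/IP over Ethernet. The header sizes are the standard ones; it
ignores TCP ACKs, the Ethernet preamble, and optional iSCSI digests,
so the real cost on the wire is somewhat higher:

import math

ETH = 14 + 4     # Ethernet header + FCS (carried outside the IP MTU)
IP, TCP = 20, 20
ISCSI_BHS = 48   # iSCSI basic header segment per PDU

def overhead_pct(payload_bytes, mtu=1500):
    tcp_payload = mtu - IP - TCP  # 1460 bytes with a 1500-byte MTU
    frames = math.ceil((payload_bytes + ISCSI_BHS) / tcp_payload)
    extra = ISCSI_BHS + frames * (ETH + IP + TCP)
    return 100 * extra / payload_bytes

print(f"4 KiB, MTU 1500: {overhead_pct(4096):.1f}%")        # ~5.4%
print(f"4 KiB, MTU 9000: {overhead_pct(4096, 9000):.1f}%")  # ~2.6%

That headroom is one reason jumbo frames (MTU 9000) are common on iSCSI
networks; FC and AoE frame the same block with far fewer header bytes.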

Things like bandwidth consumption, latency, CPU cost, and so on should
be included in the evaluation of a storage architecture for virtualized
systems. Again, I'm talking about a high-level view of the performance
of the system as a whole, not solely of the disks, RAID controller, etc.

Do you think that such an approach is useless?
I'm not an expert in storage devices, but I'm quite interested in the
flexibility you can get by abstracting and combining them. That's why
I'm asking about "architecture" performance.

Regards,
Roberto

--
Roberto Bifulco, Ph.D. Student
robertobifulco.it
COMICS Lab - www.comics.unina.it
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

