
Re: [Xen-users] Strange failures of Xen 4.3.1, PVHVM storage VM, iSCSI and Windows+GPLPV VM combination

> A 1Gbps ethernet link provides roughly 100MB/s (theoretical maximum
> 125MB/s), so simply bonding 2 x 1Gbps links can usually provide more
> disk bandwidth than the disks themselves can deliver.
> In my setup, the iSCSI server uses 8 x 1Gbps ethernet bonded, and the
> xen machines use 2 x 1Gbps bonded for iSCSI, plus 1 x 1Gbps which is
> bridged to the domUs and used for dom0 "management". You can get a couple of
> cheap-ish 1Gbps network cards easily enough, and your disk subsystem
> probably won't provide more than 200MB/s anyway (we can get a max of
> 2.5GB/s read from the disk subsystem, but the limited bandwidth for each
> dom0 helps to stop any one domU from stealing all the disk IO). In
> practice, I can run 4 x dom0 and obtain over 180MB/s on each of them in
> parallel.
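For reference, a bonded setup like the one described above can be sketched with iproute2. This is only a sketch: the interface names (eth0, eth1), addresses, and the bond mode are assumptions, not taken from the poster's actual configuration:

```shell
# Sketch only: bond two 1Gbps NICs for iSCSI traffic.
# Interface names, address, and mode are assumed; adjust for your hardware/switch.
ip link add bond0 type bond mode 802.3ad miimon 100   # LACP; requires switch support
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 10.0.0.2/24 dev bond0   # example iSCSI network address
```

Note that with 802.3ad the transmit hash normally keeps a single TCP flow (i.e. one iSCSI session) on one slave, so a lone session tops out at roughly 1Gbps; balance-rr can push a single flow across both links but may reorder packets.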

If your workload is iSCSI only, can you comment on the choice of bonding
instead of multipath?
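For comparison, the multipath alternative keeps the NICs unbonded and lets dm-multipath aggregate two independent iSCSI sessions to the same target. A minimal sketch, assuming the target exposes two portals on separate subnets; the portal addresses and IQN below are placeholders:

```shell
# Sketch: two iSCSI sessions over separate NICs, aggregated by dm-multipath.
# Portal addresses and the IQN are placeholders.
iscsiadm -m discovery -t sendtargets -p 10.0.0.10
iscsiadm -m discovery -t sendtargets -p 10.0.1.10
iscsiadm -m node -T iqn.2013-01.example:storage -p 10.0.0.10 --login
iscsiadm -m node -T iqn.2013-01.example:storage -p 10.0.1.10 --login

# /etc/multipath.conf fragment: spread I/O across both paths.
# defaults {
#     path_grouping_policy multibus
# }
multipath -ll   # verify both paths appear under one multipath device
```

Unlike bonding, this works per-session at the SCSI layer, so a single LUN can use both links without switch-side LACP support.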


Xen-users mailing list


