
[Xen-users] Performance of network block devices (iSCSI)



I have a 'backup' server to which a number of machines dump their filesystems using rdiff-backup. The backup server stores this data on a volume mounted off an iSCSI store (Dell/EMC AX100i). I've found the performance to be 'very poor' and asked about it on the rdiff-backup list; one response I got was:

> I found that the network and I/O scheduler in Xen was a single pipeline and contention was terrible. We got terrible performance when we used network block devices with Xen, as the VMs would just sit in wait I/O all the time when accessing the network block devices (we tried AoE, NBD, iSCSI).
> ...
> We ended up moving to OpenVZ and haven't looked back.

I've done a test after copying the store to a local disk (xvda), which is another volume in the LVM setup of the Xen host. It's notable that copying the backup off the iSCSI volume ran at only about 1/2 GB/hr. The difference is quite dramatic: a backup from one client takes 36 s to the local disk but 9 1/2 minutes to the iSCSI box, roughly a 15-fold difference.
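For anyone who wants to reproduce the raw-throughput side of this outside rdiff-backup, something along these lines should do it. The mount points (/mnt/local, /mnt/iscsi) and the 512 MB test size are placeholders, not my actual layout:

#!/usr/bin/env python3
# Crude sequential-write comparison between two mount points.
# The paths and the test size below are placeholders, not my real setup.
import os
import time

TARGETS = ["/mnt/local/throughput-test", "/mnt/iscsi/throughput-test"]
SIZE_MB = 512
CHUNK = b"\0" * (1024 * 1024)   # 1 MiB of zeroes

for path in TARGETS:
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())    # make sure the data has actually hit the device
    elapsed = time.time() - start
    os.remove(path)
    print("%-30s %7.1f s   %6.1f MB/s" % (path, elapsed, SIZE_MB / elapsed))

A dd with oflag=direct would be the more usual way to do this, but the fsync at the end should keep a buffered write reasonably honest.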

While copying to or from the iSCSI volume the backup server sits at 100% (occasionally 99%) wait-io; when backing up to the local virtual disk it shows the normal levels of processor activity I would expect, with minimal wait-io.
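For anyone who wants to log the iowait figure over the course of a copy rather than eyeballing it, a rough sketch that samples /proc/stat directly might look like this (the 5 second sampling interval is arbitrary):

#!/usr/bin/env python3
# Sample the aggregate "cpu" line in /proc/stat twice and report what
# fraction of the interval was spent in iowait.
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()  # "cpu user nice system idle iowait irq ..."
    return [int(x) for x in fields[1:]]

INTERVAL = 5  # seconds between samples; arbitrary choice

before = cpu_times()
time.sleep(INTERVAL)
after = cpu_times()

delta = [b - a for a, b in zip(before, after)]
iowait_pct = 100.0 * delta[4] / sum(delta)   # 5th counter after "cpu" is iowait
print("iowait over last %ds: %.1f%%" % (INTERVAL, iowait_pct))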

The systems are Debian Lenny, running on a Dell 2650 with hardware RAID (PERC) and plenty of RAM.

Is there something I've missed? Is there anything I can do?

--
Simon Hobson




 

