Re: [Xen-users] iSCSI initiator on Dom0, exported to DomU via xvd, Disk IO Drops in Half...
On Tue, Jan 13, 2009 at 3:53 PM, Ross Walker <rswwalker@xxxxxxxxx> wrote:
>
> On Jan 13, 2009, at 6:37 PM, Ross Walker <rswwalker@xxxxxxxxx> wrote:
>
>> On Jan 13, 2009, at 5:48 PM, "Christopher Chen" <muffaleta@xxxxxxxxx> wrote:
>>
>>> Hi there!
>>>
>>> I've been wrestling with an issue for a little bit now--
>>>
>>> In my test environment, I have tgtd running on a CentOS 5.2 box, with
>>> a RAID 10 array backing it.
>>>
>>> The initiators are also CentOS 5.2 boxes running the Xen 3.0.3 userland
>>> with a Xen 3.1.2/Linux 2.6.18 kernel (as from the repos).
>>>
>>> Bonnie++ on the Dom0 shows about 110MB/sec writes and 45MB/sec reads.
>>
>> That's kind of lopsided; I'd expect it the other way around.
>>
>> Is this hardware RAID on the backend with write-back cache?
>>
>>> I've attached the iSCSI LUN to the DomU as a virtual block device, and
>>> I'm seeing 47MB/sec writes and 39MB/sec reads.
>>
>> How did you attach it? Which Xen driver did you use, phy: or file:?
>
> Sorry, missed the virtual block device bit...
>
>>> I've tried a few things, like running against a local disk, and
>>> surprisingly, writes on the DomU are faster than on the Dom0--can I
>>> assume the writes are buffered by the Dom0?
>>
>> I'm confused.
>>
>> I thought you said above you got 110MB/s on the Dom0 and 45MB/s on the DomU?
>
> Never mind my comment; writes are only buffered using file: I/O, but they
> are buffered in the DomU's page cache, which is where you might be seeing
> the performance difference.
>
>>> I'm going to give a shot doing the initialization from the DomU (just
>>> for kicks...)...and wow! 129MB/sec writes, 49MB/sec reads.
>>
>> You've completely lost me now. What do you mean by initialization? Do
>> you mean booting the DomU off of iSCSI directly?
>
> After re-reading, I guess you meant you attached the iSCSI LUN after
> booting into the VM, not as the OS disk.
>
> Again, you are most likely seeing a cache effect and not the real I/O.
>
>>> This is all with bonnie++ -d /mnt -f -u root:root
>>>
>>> Anyone seen this, or have any ideas?
>>>
>>> Is additional latency introduced by the Xen virtual block device
>>> causing a degradation in TCP performance (i.e. a window size or
>>> delayed-ACK problem), or is the buffering also causing pain? I'm going
>>> to keep looking, but I thought I'd ask all of you.
>>
>> Any layer you add is going to create latency.
>>
>> If you can be a little clearer, I'm sure an accurate explanation can
>> be made.
>
> Try increasing the size of the bonnie test file to defeat the cache, say
> 2x the memory of the dom0 or domU or target, whichever is largest.

The nice thing about bonnie++ -f is that it sizes the test file to 2x
memory; these are the numbers.

In any case, the ~110MB/sec write rate to the iSCSI target is our baseline
for writing across the network. The Dom0 has 4G allocated to it--bonnie++'s
test file is 8G. Any reading lower than that (in my mind) is degradation.
I, of course, expect some effect from the layering, but 50%?

cc

--
Chris Chen <muffaleta@xxxxxxxxx>
"I want the kind of six pack you can't drink."
-- Micah

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
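
For reference, a minimal sketch of the two attachment styles Ross is asking
about, as they would look in a Xen 3.x domU config file. The by-path device,
IQN, and image path below are made-up examples, not taken from the thread:

  # Example /etc/xen/mydomu.cfg disk lines -- paths and IQN are hypothetical.

  # phy: hands a dom0 block device (here, the LUN the dom0 iSCSI initiator
  # exposes) straight to blkback; writes bypass dom0's page cache.
  disk = [ 'phy:/dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2009-01.example.com:tgt1-lun-0,xvdb,w' ]

  # file: goes through the loopback driver and dom0's page cache, so domU
  # benchmarks can look faster than the underlying storage until memory fills.
  # disk = [ 'file:/srv/xen/mydomu-data.img,xvdb,w' ]

With phy:, the cache effect Ross describes would mostly be the domU's own
page cache rather than dom0's.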
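
And to follow Ross's suggestion of sizing the test past every cache in the
chain, a sketch of how such a run might look. The 16G file size, 8G RAM
figure, and /mnt mount point are assumptions for illustration only:

  # Flush this host's page cache before each run so reads aren't served
  # from memory (the drop_caches knob exists on 2.6.16+ kernels, so
  # CentOS 5.2's 2.6.18 qualifies):
  sync
  echo 3 > /proc/sys/vm/drop_caches

  # -s sets the test-file size in MB explicitly instead of the 2x-RAM
  # default, -r tells bonnie++ how much RAM to assume, -n 0 skips the
  # small-file tests, and -f skips the per-character phase as in the
  # original run. 16G should exceed dom0, domU, and target memory here.
  bonnie++ -d /mnt -s 16384 -r 8192 -n 0 -f -u root:root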