Re: [Xen-users] blk[front|back] does not hand over minimum and optimal_io_size to domU
On Wed, Feb 23, 2011 at 02:26:41PM +0100, Adi Kriegisch wrote:
> Dear all,
>
> I investigated a serious performance drop between Dom0 and DomU with
> LVM on top of RAID6 and blkback devices.
> While I have around 130MB/s write performance in Dom0, I only get
> 30MB/s in DomU. Inspecting this with dstat/iostat revealed a read rate
> of about 17-25MB/s while writing at around 40MB/s.
> The reading only occurs on the disk devices assembled into the RAID6,
> not on the md device itself, so it is caused by RAID6 activity alone.
> The reason for this is recalculation of parity due to a too small
> optimal_io_size:
> On Dom0:
> blockdev --getiomin /dev/space/test
> 524288 (which is the chunk size)
> blockdev --getioopt /dev/space/test
> 3145728 (which is 6 * chunk size)
>
> On DomU:
> blockdev --getiomin /dev/xvdb1
> 512
> blockdev --getioopt /dev/xvdb1
> 0 (so the kernel will use 1MB by default, IIRC)
>
> minimum_io_size -- if not set -- is the hardware block size, which
> seems to be set to 512 in xlvbd_init_blk_queue (blkfront.c). Btw:
> blockdev --getbsz /dev/space/test gives 4096 on Dom0 while DomU
> reports 512.
>
> I can somewhat mitigate the issue by using a much smaller chunk size,
> but that is IMHO just working around the problem.
>
> Is this a bug or a regression? Or does this happen to everyone using
> RAID6 (and probably RAID5 as well) and no one noticed the drop until
> now? Is there any way to work around this issue?
>
> Thanks,
> Adi Kriegisch
>
> PS: I am using a stock Debian/Squeeze kernel on top of Debian's Xen
> 4.0.1-2.

Hello,

Did you find more info about this issue?

-- Pasi
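A note on where a fix would live: the frontend sets up its request queue
in xlvbd_init_blk_queue() in blkfront.c, which is where
blk_queue_io_min()/blk_queue_io_opt() would have to be called with the
backend's values. Below is a minimal sketch, assuming -- hypothetically
-- that blkback published its limits as xenbus nodes named "io-min" and
"io-opt"; those node names and the helper xlvbd_set_io_hints() are
invented for illustration, as the blkif protocol of that era defines no
such keys:

    #include <linux/blkdev.h>
    #include <xen/xenbus.h>

    /*
     * Sketch only, not actual blkfront code: pick up I/O topology
     * hints from the backend and apply them to the frontend's
     * request queue. "io-min"/"io-opt" are hypothetical xenbus nodes.
     */
    static void xlvbd_set_io_hints(struct request_queue *rq,
                                   struct xenbus_device *xbdev)
    {
            unsigned int io_min = 0, io_opt = 0;

            /* Read hints published by the backend; stay 0 if absent. */
            xenbus_gather(XBT_NIL, xbdev->otherend,
                          "io-min", "%u", &io_min,
                          "io-opt", "%u", &io_opt,
                          NULL);

            if (io_min)
                    blk_queue_io_min(rq, io_min);  /* e.g. chunk: 524288 */
            if (io_opt)
                    blk_queue_io_opt(rq, io_opt);  /* e.g. stripe: 3145728 */
    }

With hints like these in place, blockdev --getiomin/--getioopt in the
domU would report the dom0 values, letting the guest align writes to
full RAID6 stripes and avoid the parity read-modify-write that produces
the extra reads.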