
Re: [Xen-users] blk[front|back] does not hand over minimum and optimal_io_size to domU



Dear Pasi,

I am still investigating this... (and I also wrote a bug report about it
which is still waiting for an update).

> > I investigated some serious performance drop between Dom0 and DomU with
> > LVM on top of RAID6 and blkback devices.
[SNIP]
> > minimum_io_size -- if not set -- is hardware block size which seems to be
> > set to 512 in xlvbd_init_blk_queue (blkfront.c). Btw: blockdev --getbsz
> > /dev/space/test gives 4096 on Dom0 while DomU reports 512.
I recompiled the kernel with those values hardcoded. It had no direct
impact on the benchmark results, so this assumption was wrong.
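
For reference, this is roughly how I compare what the block layer reports
on Dom0 and in the DomU (device names are only examples from my setup;
dm-X stands for whatever dm device the LV maps to):

  # on Dom0, for the LV that is exported via blkback
  blockdev --getbsz /dev/space/test
  cat /sys/block/dm-X/queue/minimum_io_size
  cat /sys/block/dm-X/queue/optimal_io_size

  # on the DomU, for the corresponding blkfront device
  blockdev --getbsz /dev/xvda
  cat /sys/block/xvda/queue/minimum_io_size
  cat /sys/block/xvda/queue/optimal_io_size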

> > I can somehow mitigate the issue by using a way smaller chunk size but this
> > is IMHO just working around the issue.
Using a smaller chunk size indeed helps to improve write speeds, but read
speeds then get worse.
Benchmarking different chunk sizes with different kernels is quite
time-consuming; therefore I have not provided an update on that yet.
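
In case someone wants to check their own setup: the chunk size is visible
via mdadm or /proc/mdstat (md0 is just an example name here), and the
different chunk sizes I tried have to be set when the array is created:

  mdadm --detail /dev/md0 | grep -i 'chunk size'
  cat /proc/mdstat

  # example: recreating the array with a 64K chunk size instead of 512K
  # (this of course destroys the data on the member devices)
  mdadm --create /dev/md0 --level=6 --raid-devices=4 --chunk=64 /dev/sd[bcde]1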

> > Is this a bug or a regression? Or does this happen to anyone using RAID6
> > (and probably RAID5 as well) and noone noticed the drop until now?
I'd be really glad if someone who is using RAID5 or RAID6 on Dom0 could
provide some numbers on this.
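
Something simple along these lines, run once on Dom0 and once inside a
DomU on the same storage, would already be comparable to my numbers (the
LV path is only an example, and the write test overwrites the target):

  # sequential write, bypassing the page cache
  dd if=/dev/zero of=/dev/space/test bs=1M count=4096 oflag=direct

  # sequential read, also with O_DIRECT
  dd if=/dev/space/test of=/dev/null bs=1M count=4096 iflag=direct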

Perhaps this is also related to the weak hardware I am using: this machine
is an Atom D525 with 4 (hyperthreaded) cores. Maybe the issue is related to
in-order vs. out-of-order execution or something like that?

> Did you find more info about this issue?
To sum it up: no, not yet! ;-)

Thanks for asking,
        Adi 

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

