
Re: [Xen-users] GPLPV Disk performance block device vs. file based backend



On 09/20/2012 12:16 PM, Fajar A. Nugraha wrote:
> On Thu, Sep 20, 2012 at 3:57 PM, Dion Kant <dion@xxxxxxxxxx> wrote:
>
>
>> Now I measure more than a factor 3 better I/O performance on the file
>> based VM as compared to the block device based VM. I don't think it is a
>> cache issue which is tricking me.
> I'm 99.9% sure it tricks you :)

Hi Fajar,

Thank you for leaving me 0.1% of uncertainty ;)
>> sync; time (dd if=/dev/zero of=test.bin bs=4096 count=5000000; sync)
> dd is terrible for benchmark purposes. I'd suggest fio, random rw,
> data size at least twice RAM.
I don't care about random rw; I am looking at the speed at which nicely
ordered data is streamed to a set of disks, as observed with vmstat in dom0.
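What I watch in dom0 is roughly the following (the exact column layout
depends on the procps version, but "bo" is the blocks written out per
interval):

  # in dom0, print statistics every second; the "bo" column shows
  # blocks written out to the block devices during each interval
  vmstat 1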

Note that I write 20 GB, so I have plenty of time to make sure that all
caches in dom0 are filled and writing to the disks has to start.

There is 8 GB of memory left in dom0; furthermore, the sync in Cygwin
really does its job.
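
If it matters, a dd invocation that flushes the file before reporting the
time could look like this (just a sketch, assuming the Cygwin dd is GNU
coreutils and supports conv=fdatasync):

  # write ~20 GB and fdatasync the file before dd exits, so the
  # reported time includes flushing the guest page cache
  time dd if=/dev/zero of=test.bin bs=4096 count=5000000 conv=fdatasync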

I'll have a look at fio anyway.....
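
If I get to it, a sequential-write job roughly matching the dd case would
be something like this (a sketch only; the job and file names are made up,
and libaio assumes a Linux guest, under Windows the windowsaio engine
would be the rough equivalent):

  # sequential write, 20 GB, direct I/O to bypass the guest page cache
  fio --name=seqwrite --rw=write --bs=1M --size=20g --direct=1 \
      --ioengine=libaio --filename=test.bin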

>> I noticed that for the file based VM, the "bo" results from vmstat are
>> doubled, i.e. the bytes written to the disk file living on the LV are
>> counted twice.
> probably because file:/ uses loopback, which is counted as another block 
> device.
OK, that must be the reason.
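
If I want to confirm it, something like the following in dom0 should show
the loop device that file:/ set up and the per-device counters (the device
names will of course differ here):

  losetup -a   # list loop devices and the files backing them
  vmstat -d    # per-device totals; the writes should show up on both
               # the loop device and the underlying LV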

>> I can provide more details if required and I can do more testing as well.
> There are many factors involved: file vs phy, file-backed vs
> LV-backed, windows, gplpv, etc. What I suggest is:
>
> - use linux pv domU, one backed with LV, the other with file
> - use tap:aio for the file-backed one (NOT file:/)
> - use fio for testing
>
> That SHOULD eliminate most other factors, and allow you to focus on
> file-tap vs LV-phy.
I don't have this issue with Linux PV domUs, so I think it is something
related to GPLPV or HVM. I'll do this again anyway and report on the
results. If I recall correctly from my past tests with PV Linux, using
phy:, tap:aio, or file: only makes a small difference (<10%). Here we
are talking about a factor of more than 3.
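
For the record, the two test domUs would then differ only in their disk
line, roughly like this (xm-style syntax; the paths are just placeholders):

  # LV-backed domU
  disk = [ 'phy:/dev/vg0/test-lv,xvda,w' ]

  # file-backed domU via blktap (not the loopback file:/ driver)
  disk = [ 'tap:aio:/srv/xen/test.img,xvda,w' ]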

Thanks,

Dion

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users