
Re: [Xen-users] GPLPV Disk performance block device vs. file based backend



On Thu, Sep 20, 2012 at 3:57 PM, Dion Kant <dion@xxxxxxxxxx> wrote:

> name="wsrv-file"
> disk=[ 'file:/var/lib/xen/images/wsrv-file/disk0.raw,xvda,w', ]


> name="wsrv-bd"
> disk=[ 'phy:/dev/vg0/wsrv-bd,xvda,w', ]

> Now I measure more than a factor of 3 better I/O performance on the
> file-based VM compared to the block-device-based VM. I don't think it is a
> cache issue that is tricking me.

I'm 99.9% sure it is tricking you :)

> sync; time (dd if=/dev/zero of=test.bin bs=4096 count=5000000; sync)

dd is terrible for benchmarking purposes. I'd suggest fio instead, doing
random read/write with a data size of at least twice the domU's RAM, so
the page cache can't hide the real disk performance.
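
Something like this, run inside the domU (a minimal sketch; the job name,
size and runtime are just placeholders, pick size >= 2x the domU's RAM):

fio --name=randrw-test --rw=randrw --bs=4k --size=8G \
    --direct=1 --ioengine=libaio \
    --runtime=300 --time_based --group_reporting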


> I noticed that for the file based VM, the "bo" results from vmstat are
> doubled, i.e. the bytes written to the disk file living on the LV are
> counted twice.

Probably because file:/ uses the loop driver, so the same writes are
counted once for the loop device and again for the block device underneath it.
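
You can check this on dom0 while the file-backed domU is running (the
device names below are only examples, yours will differ):

losetup -a
# e.g. /dev/loop0: (/var/lib/xen/images/wsrv-file/disk0.raw)
iostat -d -x 1
# the same write traffic shows up on loop0 and again on the device
# that actually holds the image file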

> I can provide more details if required and I can do more testing as well.

There are many factors involved: file vs phy, file-backed vs LV-backed,
Windows, GPLPV, etc. What I suggest is:

- use linux pv domU, one backed with LV, the other with file
- use tap:aio for the file-backed one (NOT file:/)
- use fio for testing

That SHOULD eliminate most other factors, and allow you to focus on
file-tap vs LV-phy.
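
For example, something like this for the two test domUs (hypothetical
names and paths, adjust to your volume group and image location):

# LV-backed test domU
name="test-lv"
disk=[ 'phy:/dev/vg0/test-lv,xvda,w', ]

# file-backed test domU, using blktap instead of the loop driver
name="test-file"
disk=[ 'tap:aio:/var/lib/xen/images/test-file/disk0.raw,xvda,w', ]

Then run the same fio job in both and compare.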

-- 
Fajar

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

