Re: [Xen-users] Xen Partition Performance



Matt Ayres <matta@xxxxxxxxxxxx> wrote on 04/26/2006 09:49:58 AM:


> Steve Dobbelstein wrote:
> > "Sylvain Coutant" <sco@xxxxxxxxxx> wrote on 04/25/2006 07:47:09 AM:
> >
> >>> I'm setting up a Xen system and I have different choices for
> >>> creating the domU's partitions: raw partition, LVM, or files.
> >>>
> >>> I've done some tests with hdparm and it all seems to be the same.
> >> It can if you have enough CPU in dom0. The interesting point would
> >> then be: how much does each of those cost in dom0 CPU share? The
> >> fewer layers you have, the better it should behave. A raw partition
> >> should be the cheapest, closely followed by LVM, and lastly files
> >> (which could have different results depending on the fs in use: ext2,
> >> ext3, reiser, xfs, etc.). I'm not sure the difference would be very
> >> significant. It probably depends on usage, number of domUs, ...
> >
> > Also be aware that the devices go through the buffer cache on dom0.  I
> > would not be surprised if the performance to a partition, LVM volume,
> > or loop device is not that different, since they all hit the buffer
> > cache, especially if your tests do mostly reads.
> >
>
> I have to disagree here.  A physical partition / LVM volume gets passed
> directly to the domU and bypasses the dom0 buffer cache.  Since an image
> (loop) file resides on the host filesystem, it will be cached by dom0.

Thanks, Matt and Mark, for the correction.  What you state is true for
paravirtualized domains, where the device is opened in the dom0 kernel.
HVM domains, however, open the device in user space, which means their I/O
goes through the dom0 VFS cache.  I have been spending most of my time with
HVM domains and incorrectly assumed that all domains behave that way.
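
For reference, the backend choice being discussed comes down to the disk
line in the guest config.  A rough sketch (the device and file names here
are just made-up examples):

    # PV guest on a raw partition or LVM volume: the block backend opens
    # the device in the dom0 kernel, so guest I/O bypasses dom0's page cache.
    disk = [ 'phy:/dev/vg0/domu1-root,sda1,w' ]

    # PV guest on a file-backed (loop) disk: the image lives on a dom0
    # filesystem, so reads and writes are cached by dom0.
    disk = [ 'file:/var/xen/images/domu1-root.img,sda1,w' ]

For an HVM guest the device is opened from user space by qemu-dm either
way, so the I/O passes through the dom0 VFS cache even with 'phy:'.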

Steve D.


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

