
Re: [Xen-devel] Shadow page tables?



On Tue, Oct 12, 2004 at 09:09:24PM +0100, Andrew Warfield wrote:
>   I'd be interested to know what sort of overhead results you've seen
> on lvm -- I've been looking at block devices myself a bit lately and
> if lvm doesn't look like it can be made to scale as a solution for
> cow, I might be able to arrange some sort of alternative.  Any sort of
> summary of your lvm thoughts/experiences would be great.

My primary concern with LVM snapshot scalability is kernel memory
consumption.  (I haven't done any performance benchmarking.)  For each
LVM snapshot that is created, the kernel allocates:
  - 1 MB of pages for a kcopyd_client.
  - Up to a 2 MB hash table for tracking exceptions (areas of the disk
    that have been changed).
  - The exceptions themselves.  (Not very large individually, but it all
    has to be in kernel memory--on the order of 16 bytes per area of
    disk [say ~16 KB] that has been remapped, so that could grow to 1 MB
    of memory per 1 GB COW disk.)
(These numbers are gathered from looking through the snapshot code.)
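
To make the arithmetic concrete, here is a back-of-envelope estimate
as a standalone C program.  (The chunk size and per-exception cost are
assumptions based on my reading of the snapshot code, not exact
figures.)

    /* Rough per-snapshot kernel memory estimate for one 1 GB COW disk.
     * All constants are assumptions from reading the snapshot code. */
    #include <stdio.h>

    #define KCOPYD_CLIENT_BYTES (1ULL << 20)  /* ~1 MB page pool */
    #define HASH_TABLE_BYTES    (2ULL << 20)  /* up to ~2 MB */
    #define CHUNK_SIZE          (16 << 10)    /* ~16 KB remapped area */
    #define EXCEPTION_BYTES     16            /* ~16 bytes per exception */

    int main(void)
    {
        unsigned long long disk_bytes = 1ULL << 30;  /* 1 GB COW disk */
        unsigned long long nchunks = disk_bytes / CHUNK_SIZE;
        unsigned long long worst = KCOPYD_CLIENT_BYTES + HASH_TABLE_BYTES
                                 + nchunks * EXCEPTION_BYTES;

        printf("exceptions, fully remapped: %llu KB\n",
               nchunks * EXCEPTION_BYTES >> 10);
        printf("worst case per snapshot:    %llu KB\n", worst >> 10);
        return 0;
    }

For a fully remapped 1 GB disk that comes to about 4 MB pinned per
snapshot, dominated by the two fixed-size allocations until the
exception table fills in.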

The kcopyd_client can (I think) be safely shared between snapshots--I
patched the kernel to do this, and it worked fine under the limited
testing I gave it.  So this issue could be worked around.  But the other
two are more troublesome: the assumption that all the COW mapping
tables are available in kernel memory can't be changed without a lot
of work.
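
For what it's worth, the shape of my patch was just reference
counting: the first snapshot allocates the client, later snapshots
reuse it, and the last release frees it.  Here is a minimal userspace
model of the idea (the names below are made up for illustration; the
real code manipulates kcopyd in drivers/md, and error handling is
omitted for brevity):

    #include <stdlib.h>

    /* Hypothetical stand-in for the per-snapshot copy client. */
    struct copy_client {
        int refcount;
        void *pages;                 /* the ~1 MB page pool */
    };

    static struct copy_client *shared_client;  /* one for all snapshots */

    static struct copy_client *client_get(void)
    {
        if (!shared_client) {
            shared_client = calloc(1, sizeof(*shared_client));
            shared_client->pages = malloc(1 << 20);  /* allocated once */
        }
        shared_client->refcount++;
        return shared_client;
    }

    static void client_put(struct copy_client *c)
    {
        if (--c->refcount == 0) {
            free(c->pages);
            free(c);
            shared_client = NULL;
        }
    }

    int main(void)
    {
        struct copy_client *a = client_get();  /* allocates the pool */
        struct copy_client *b = client_get();  /* reuses it */
        client_put(a);
        client_put(b);                         /* last put frees it */
        return 0;
    }

With N snapshots this turns N 1 MB pools into one, at the cost of
contention on the shared pool.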

The problem is currently worse than it needs to be, since the LVM
code won't sleep when kernel memory isn't immediately available; it
errors out instead.  Even with that fixed (which may happen eventually:
http://www.redhat.com/archives/dm-devel/2004-January/msg00079.html), LVM
is still pinning on the order of megabytes of kernel memory per
snapshot.  That is fine for some workloads, but I'd like to see if I can
get to hundreds or thousands of COW disks--in which case this is too
much memory per disk.

I'm currently wondering whether a solution implemented partially in
userspace is viable: put the COW driver in userspace and let the
on-disk data structures be cached in the page cache, so that any COW
disks not in active use have their data paged out of memory.  Perhaps
communicate with the kernel via the nbd interface or something
similar?  I haven't put much thought into this yet.  I expect
performance would be worse than LVM's, but that's a tradeoff I'm
willing to make.
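
Roughly, the userspace driver would keep an exception map from virtual
chunk to COW-store chunk, allocate a copy the first time a chunk is
written, and let the page cache decide what stays resident.  A
standalone C sketch of just the remapping logic (all names are
hypothetical, and the actual I/O against the origin and COW store is
elided):

    #include <stdint.h>
    #include <string.h>

    #define CHUNK_SHIFT 14                  /* 16 KB chunks, as above */
    #define NCHUNKS     (1 << 16)           /* enough for a 1 GB disk */
    #define UNMAPPED    UINT32_MAX

    /* Exception map: virtual chunk -> chunk in the COW store.  In the
     * real driver this would live in a file mapped into the page
     * cache, so idle disks get paged out rather than pinned. */
    static uint32_t exception_map[NCHUNKS];
    static uint32_t next_free_chunk;

    static void cow_init(void)
    {
        memset(exception_map, 0xff, sizeof(exception_map));
    }

    /* On write: allocate a COW chunk the first time it is touched. */
    static uint32_t cow_remap_for_write(uint64_t offset)
    {
        uint32_t chunk = offset >> CHUNK_SHIFT;
        if (exception_map[chunk] == UNMAPPED) {
            exception_map[chunk] = next_free_chunk++;
            /* ...copy the original chunk into the COW store... */
        }
        return exception_map[chunk];
    }

    /* On read: use the COW copy if one exists, else the origin. */
    static int cow_is_remapped(uint64_t offset)
    {
        return exception_map[offset >> CHUNK_SHIFT] != UNMAPPED;
    }

    int main(void)
    {
        cow_init();
        cow_remap_for_write(5ULL << CHUNK_SHIFT);  /* first write */
        return cow_is_remapped(5ULL << CHUNK_SHIFT) ? 0 : 1;
    }

The point is that nothing here has to be pinned: the map is ordinary
pageable memory, so a COW disk that sees no I/O costs nothing once the
VM evicts it.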

--Michael Vrable




 

