Re: [Xen-devel] [PATCH 0/2] Bulk mem-share identical domains
On 06/10/15 15:26, Ian Campbell wrote:
> On Sun, 2015-10-04 at 14:25 -0600, Tamas K Lengyel wrote:
>> The following patches add a convenience memop to the mem_sharing system,
>> allowing for the rapid deduplication of memory pages between identical
>> domains.
>>
>> The envisioned use-case for this is the following:
>> 1) Create two domains from the same snapshot using xl.
>>    This step can also be performed by piping an existing domain's memory
>>    with "xl save -c <domain> <pipe> | xl restore -p <new_cfg> <pipe>".
>>    It is up to the user to create the appropriate configuration for the
>>    clone, including setting up a CoW disk.
>> 2) Enable memory sharing on both domains.
>> 3) Execute bulk dedup between the domains.
>
> This is a neat trick, but it has the downside of first shovelling all the
> data over a pipe and then needing to allocate it transiently before
> dedupping it again.
>
> Have you looked at the possibility of doing the save+restore in the same
> process, with a cut-through for the RAM part which just dups the pages
> into the target domain?
>
> Once upon a time (migration v1) that would certainly have been impossibly
> hard, but with migration v2 it might be a lot easier to integrate
> something like that (although surely not as easy as what you've done
> here!).
>
> Just an idea, and not intended at all as an argument against taking this
> series or anything.

If we are making modifications like this, make something like
XEN_DOMCTL_domain_clone, which takes a source domid (must exist), pauses
it, creates a new domain, copies some state, and shares all memory CoW
from the source to the new domain.  This will be far more efficient still
than moving all the memory through userspace in dom0.

~Andrew
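
For concreteness, steps 2 and 3 of the envisioned use-case could be driven
from dom0 via libxc roughly as in the minimal sketch below.
xc_memshr_control() is the existing libxc call for enabling sharing on a
domain; xc_memshr_bulk_share() stands in for whatever wrapper this series
exposes, so its name and exact signature here are assumptions, not
something confirmed by this thread.

/*
 * Minimal sketch of steps 2 and 3 of the envisioned use-case.
 * Assumes the clone (step 1) already exists, and that the series
 * exposes a wrapper here called xc_memshr_bulk_share() -- name and
 * signature assumed for illustration.
 */
#include <stdio.h>
#include <stdlib.h>
#include <xenctrl.h>

int main(int argc, char *argv[])
{
    xc_interface *xch;
    uint32_t source, clone;
    int rc = EXIT_FAILURE;

    if ( argc != 3 )
    {
        fprintf(stderr, "usage: %s <source-domid> <clone-domid>\n", argv[0]);
        return EXIT_FAILURE;
    }

    source = strtoul(argv[1], NULL, 10);
    clone  = strtoul(argv[2], NULL, 10);

    xch = xc_interface_open(NULL, NULL, 0);
    if ( !xch )
    {
        fprintf(stderr, "failed to open libxc handle\n");
        return EXIT_FAILURE;
    }

    /* Step 2: enable memory sharing on both domains. */
    if ( xc_memshr_control(xch, source, 1) ||
         xc_memshr_control(xch, clone, 1) )
    {
        fprintf(stderr, "enabling mem_sharing failed\n");
        goto out;
    }

    /*
     * Step 3: deduplicate all identical pages between the two domains
     * in a single memop (assumed wrapper, see lead-in above).
     */
    if ( xc_memshr_bulk_share(xch, source, clone) )
    {
        fprintf(stderr, "bulk share failed\n");
        goto out;
    }

    rc = EXIT_SUCCESS;

 out:
    xc_interface_close(xch);
    return rc;
}

Covering the whole guest in one memop avoids issuing a hypercall per page,
which is presumably where the "rapid" in the cover letter comes from
compared with per-gfn nominate/share calls.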
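
Andrew's XEN_DOMCTL_domain_clone idea could look something like the
hypothetical interface below. Nothing like this exists in the tree; the
struct and field names are invented purely to illustrate the shape of the
proposal.

/* Hypothetical sketch of Andrew's suggestion; not real Xen code. */
struct xen_domctl_domain_clone {
    uint32_t source_domid; /* IN:  domain to clone; paused while cloning. */
    uint32_t clone_domid;  /* OUT: new domain, all memory shared CoW.     */
};

Since every page would simply be entered into the clone's p2m as a shared
entry backed by the source's page, no guest memory would ever cross into
dom0 userspace, and the first write by either side would take the existing
unshare (copy) path.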