
Re: [Xen-devel] [PATCH v2 0/9] Memory scrubbing from idle loop



On 03/04/17 17:50, Boris Ostrovsky wrote:
> When a domain is destroyed the hypervisor must scrub the domain's pages before
> giving them to another guest in order to prevent leaking the deceased
> guest's data. Currently this is done during the guest's destruction, possibly
> causing a very lengthy cleanup process.
> 
> This series adds support for scrubbing released pages from the idle loop,
> making guest destruction significantly faster. For example, destroying a
> 1TB guest can now be completed in 40+ seconds, as opposed to about 9 minutes
> using the existing scrubbing algorithm.
> 
> The downside of this series is that we sometimes fail to allocate high-order
> sets of pages since dirty pages may not yet be merged into higher-order sets
> while they are waiting to be scrubbed.
> 
> Briefly, the new algorithm places dirty pages at the end of the heap's page
> list for each node/zone/order, to avoid having to scan the full list while
> searching for dirty pages. One processor from each node checks whether the
> node has any dirty pages and, if such pages are found, scrubs them. Scrubbing
> itself happens without holding the heap lock, so other users may access the
> heap in the meantime. If, while the idle loop is scrubbing a particular chunk
> of pages, that chunk is requested by the heap allocator, scrubbing is
> immediately stopped.
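
The "scrub without holding the heap lock, but stop on contention" idea can be
sketched roughly as below. All names here are illustrative, not the actual
spin_lock_cb() interface from the series: the point is only that a spinner can
pass a callback that runs while it waits, which the idle-loop scrubber uses to
notice contention and abandon the chunk.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy lock; the real heap lock is a Xen spinlock. */
struct simple_lock {
    volatile int taken;
};

/* Set by the contention callback; the scrubber polls this between pages
 * and, if set, drops the chunk so the allocator can take it. */
static bool scrub_preempted;

static void on_contention(void *data)
{
    *(bool *)data = true;
}

/* Hypothetical analogue of spin_lock_cb(): while the lock is contended,
 * keep invoking the callback instead of spinning silently. */
static void lock_with_cb(struct simple_lock *l,
                         void (*cb)(void *), void *data)
{
    while (__sync_lock_test_and_set(&l->taken, 1))
        if (cb)
            cb(data);
}

static void unlock(struct simple_lock *l)
{
    __sync_lock_release(&l->taken);
}
```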
> 
> On the allocation side, alloc_heap_pages() first tries to satisfy the
> allocation request using only clean pages. If this is not possible, the search
> is repeated and dirty pages are scrubbed by the allocator.
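
The two-pass allocation can be modeled as follows. This is a deliberately tiny
simulation, not the real alloc_heap_pages() (which walks per-node/zone/order
buddy lists); it only shows the "prefer clean, fall back to scrub-on-demand"
structure described above.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Toy "page": a dirty flag plus some contents left behind by a guest. */
struct toy_page {
    bool dirty;
    unsigned char data[16];
};

static void scrub_one(struct toy_page *pg)
{
    memset(pg->data, 0, sizeof(pg->data));  /* erase stale guest data */
    pg->dirty = false;
}

/* Return the index of a usable page, or -1 if none. Pass 1 takes only
 * clean pages; pass 2 accepts a dirty page and scrubs it synchronously,
 * mirroring the allocator-side fallback in the series. */
static int toy_alloc(struct toy_page pages[], size_t n)
{
    size_t i;

    for (i = 0; i < n; i++)
        if (!pages[i].dirty)
            return (int)i;

    for (i = 0; i < n; i++)
        if (pages[i].dirty) {
            scrub_one(&pages[i]);
            return (int)i;
        }

    return -1;
}
```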
> 
> This series is somewhat based on earlier work by Bob Liu.
> 
> V1:
> * Only set PGC_need_scrub bit for the buddy head, thus making it unnecessary
>   to scan the whole buddy
> * Fix spin_lock_cb()
> * Scrub CPU-less nodes
> * ARM support. Note that I have not been able to test this, only built the
>   binary
> * Added scrub test patch (last one). Not sure whether it should be considered
>   for committing but I have been running with it.
> 
> V2:
> * merge_chunks() returns new buddy head
> * scrub_free_pages() returns softirq pending status in addition to (factored
>   out) status of unscrubbed memory
> * spin_lock uses inlined spin_lock_cb()
> * scrub debugging code checks whole page, not just the first word.
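
The whole-page debug check mentioned in the last V2 item amounts to verifying
every byte of a freed page against the scrub value, rather than only the first
word. A rough sketch, with a made-up pattern byte and page size (the real
series has its own poison pattern and uses PAGE_SIZE):

```c
#include <stdbool.h>
#include <stddef.h>

#define SCRUB_BYTE    0x00  /* hypothetical scrub fill value */
#define TOY_PAGE_SIZE 64    /* stand-in for PAGE_SIZE */

/* Return true iff the entire page holds the scrub pattern.  Checking
 * only the first word would miss stale guest data further in. */
static bool page_fully_scrubbed(const unsigned char *page)
{
    size_t i;

    for (i = 0; i < TOY_PAGE_SIZE; i++)
        if (page[i] != SCRUB_BYTE)
            return false;
    return true;
}
```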
> 
> Deferred:
> * Per-node heap locks. In addition to (presumably) improving performance in
>   general, once they are available we can parallelize scrubbing further by
>   allowing more than one core per node to do idle loop scrubbing.
> * AVX-based scrubbing
> * Use idle loop scrubbing during boot.
> 
> 
> Boris Ostrovsky (9):
>   mm: Separate free page chunk merging into its own routine
>   mm: Place unscrubbed pages at the end of pagelist
>   mm: Scrub pages in alloc_heap_pages() if needed
>   mm: Scrub memory from idle loop
>   mm: Do not discard already-scrubbed pages if softirqs are pending
>   spinlock: Introduce spin_lock_cb()
>   mm: Keep pages available for allocation while scrubbing
>   mm: Print number of unscrubbed pages in 'H' debug handler
>   mm: Make sure pages are scrubbed
> 
>  xen/Kconfig.debug          |    7 +
>  xen/arch/arm/domain.c      |   13 +-
>  xen/arch/x86/domain.c      |    3 +-
>  xen/common/page_alloc.c    |  475 +++++++++++++++++++++++++++++++++++++-------
>  xen/common/spinlock.c      |    9 +-
>  xen/include/asm-arm/mm.h   |    8 +
>  xen/include/asm-x86/mm.h   |    8 +
>  xen/include/xen/mm.h       |    1 +
>  xen/include/xen/spinlock.h |    3 +

FAOD, even though this series has 'mm' in the title of almost every
patch, none of it comes under my explicit maintainership (which is
arch/x86/mm), so I don't think it needs any Acks specifically from me to
get in.

On the whole the series looks like it's going in the right direction, so
I won't give a detailed review unless someone specifically asks for it.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
