
Re: [Xen-devel] heap_lock optimizations?



On 15/07/13 16:15, Konrad Rzeszutek Wilk wrote:
Hey Tim,

I was looking at making the 'Scrubbing Free RAM:' code faster on 1TB
boxes with 128 CPUs. Naively, I wrote code that set up a tasklet on
each CPU to scrub a swath of MFNs. Unfortunately, even on 8-VCPU
machines the end result was a slower boot time!

The culprit looks to be the heap_lock, which is taken and released
on every MFN. (For fun I added a bit of code to do batches of 32 MFNs
and to iterate over those 32 MFNs while holding the lock; that did
make it a bit faster, but not by much.)
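
Roughly, the batched variant looked like this (a simplified sketch,
as if living in page_alloc.c next to heap_lock, not the exact code I
ran; scrub_one_page(), mfn_to_page() and page_state_is() are the real
Xen interfaces, while scrub_mfn_range() is just a name for the
example):

static void scrub_mfn_range(unsigned long mfn, unsigned long nr)
{
    while ( nr )
    {
        unsigned long i, batch = min(nr, 32UL);

        /* Take heap_lock once per 32 MFNs instead of once per MFN. */
        spin_lock(&heap_lock);
        for ( i = 0; i < batch; i++ )
        {
            struct page_info *pg = mfn_to_page(mfn + i);

            /* Skip pages the allocator has already handed out. */
            if ( page_state_is(pg, free) )
                scrub_one_page(pg);
        }
        spin_unlock(&heap_lock);

        mfn += batch;
        nr  -= batch;
    }
}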

What I am wondering is:
  - Have you ever thought about optimizing this? If so, how?
  - Another idea to potentially make this faster is to separate the
    scrubbing into two stages (see the sketch after this list):
     1) (under the heap_lock) reserve/take a giant set of MFN pages
        (perhaps also consulting NUMA affinity). This would usurp
        the whole heap[zone].
     2) Hand it out to the CPUs to scrub (this would be done without
        holding the spinlock). The heap[zone] would be split equally
        amongst the CPUs.
     3) Goto 1 until done.
     3) Goto 1 until done.
  - Look for examples in the Linux kernel to see how it does it.
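
Concretely, the shape I have in mind is something like the sketch
below. It is purely illustrative: struct chunk, reserve_chunk(),
return_chunk() and scrub_tasklet[] are made-up names, and the wait
for the per-CPU tasklets to finish is elided; heap_lock,
tasklet_schedule_on_cpu() and for_each_online_cpu() are the real
interfaces.

struct chunk {
    unsigned long first_mfn;
    unsigned long nr_mfns;
};

static void scrub_zone_in_chunks(unsigned int zone)
{
    struct chunk c;
    unsigned int cpu;

    for ( ; ; )
    {
        /* Stage 1: under heap_lock, take a big run of free MFNs off
         * heap[zone] so the allocator cannot hand them out while we
         * scrub without the lock. */
        spin_lock(&heap_lock);
        c = reserve_chunk(zone, 1UL << 18); /* e.g. 1GB of 4K pages */
        spin_unlock(&heap_lock);

        if ( c.nr_mfns == 0 )
            break;                          /* whole zone scrubbed */

        /* Stage 2: no lock held; each CPU's tasklet scrubs an equal
         * slice of [first_mfn, first_mfn + nr_mfns). */
        for_each_online_cpu ( cpu )
            tasklet_schedule_on_cpu(&scrub_tasklet[cpu], cpu);

        /* ... wait for all the tasklets to complete ... */

        spin_lock(&heap_lock);
        return_chunk(zone, c);              /* Stage 3: goto 1. */
        spin_unlock(&heap_lock);
    }
}

That way heap_lock is only taken twice per chunk rather than once per
MFN, and all the actual page clearing happens outside the lock.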

Thanks!

Hi Konrad,

Did you see a patch I posted for this last year? http://lists.xen.org/archives/html/xen-devel/2012-05/msg00701.html

Unfortunately I made some minor errors and it didn't apply cleanly, but I'll fix it up now and repost so you can test it.

Malcolm
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel




 

