
Re: [Xen-devel] [PATCH 0 of 5] v2: Nested-p2m cleanups and locking changes



At 14:15 +0100 on 27 Jun (1309184128), Tim Deegan wrote:
> At 14:23 +0200 on 27 Jun (1309184586), Christoph Egger wrote:
> > >  - Why is there a 10x increase in IPIs after this series?  I don't see
> > >    what sequence of events sets the relevant cpumask bits to make this
> > >    happen.
> > 
> > In patch 1, the code that sends the IPIs was moved from outside
> > the loop to inside it.
> 
> Well, yes, but I don't see why that causes 10x the IPIs, unless the vcpus
> are burning through np2m tables very quickly indeed.  Maybe removing the
> extra flushes for TLB control will do the trick.  I'll make a patch...
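
For the archives, here is a toy model of why the placement matters (a
self-contained C sketch with invented names: flush_tlb_mask() below is
just a counter, and NP2M_COUNT stands in for the number of np2m tables;
this is not the actual Xen code):

    #include <stdio.h>

    #define NP2M_COUNT 10   /* stand-in for the number of np2m tables */

    static int ipis;        /* how many flush IPIs were "sent" */

    /* Stand-in for a TLB flush: one IPI per non-empty dirty mask. */
    static void flush_tlb_mask(unsigned long mask)
    {
        if ( mask )
            ipis++;
    }

    int main(void)
    {
        unsigned long dirty[NP2M_COUNT] = { 1, 2, 4, 1, 2, 4, 1, 2, 4, 1 };
        unsigned long combined = 0;
        int i;

        /* Flush hoisted out of the loop: masks are ORed, one IPI total. */
        for ( i = 0; i < NP2M_COUNT; i++ )
            combined |= dirty[i];
        flush_tlb_mask(combined);
        printf("flush outside the loop: %d IPI(s)\n", ipis);

        /* Flush inside the loop: one IPI per table with a dirty mask. */
        ipis = 0;
        for ( i = 0; i < NP2M_COUNT; i++ )
            flush_tlb_mask(dirty[i]);
        printf("flush inside the loop:  %d IPI(s)\n", ipis);

        return 0;
    }

With all ten slots carrying a non-empty mask the difference is exactly
10x, so the question becomes what keeps all of those cpumasks populated.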

I think I get it - it's a race between p2m_flush_nestedp2m() on one CPU
flushing all the nested P2M tables and a VCPU on another CPU repeatedly
getting fresh ones.  Try the attached patch, which should cut back the
major source of p2m_flush_nestedp2m() calls. 
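
As a rough picture of the suspected feedback loop, here is a toy
user-space model (the thread names only echo the hypervisor functions;
the bodies, counters and timings are invented for illustration): one
thread keeps "flushing" every table, and the other re-fetches its table
whenever that happens, so each flush manufactures the next fresh table.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_int generation;  /* bumped by each "flush everything" */
    static atomic_int refills;     /* fresh tables fetched by the vcpu */
    static atomic_bool stop;

    /* CPU A: stands in for repeated p2m_flush_nestedp2m() calls. */
    static void *flusher(void *arg)
    {
        (void)arg;
        while ( !atomic_load(&stop) )
        {
            atomic_fetch_add(&generation, 1);
            usleep(100);
        }
        return NULL;
    }

    /* CPU B: stands in for a vcpu calling p2m_get_nestedp2m() again
     * every time its table has been flushed under its feet. */
    static void *vcpu(void *arg)
    {
        int seen = atomic_load(&generation);
        (void)arg;
        while ( !atomic_load(&stop) )
        {
            int now = atomic_load(&generation);
            if ( now != seen )
            {
                seen = now;
                atomic_fetch_add(&refills, 1);
            }
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;

        pthread_create(&a, NULL, flusher, NULL);
        pthread_create(&b, NULL, vcpu, NULL);
        sleep(1);
        atomic_store(&stop, true);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("flushes forced %d np2m refills in one second\n",
               atomic_load(&refills));
        return 0;
    }

Presumably each refill repopulates a dirty cpumask for the next flush
to IPI, which would account for the extra IPIs.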

Writing it, I realised that after my locking fix, p2m_flush_nestedp2m()
isn't safe: it can run in parallel with p2m_get_nestedp2m(), which
reorders the array that the flush is walking.  I'll have to make the
LRU-fu independent of the array order; that should be easy enough, but
I'll hold off committing the current series until I've done it.
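
One way to decouple the LRU from the array order (a sketch of a common
scheme, with made-up names and all locking elided -- not necessarily
what the final patch will look like): stamp each table with a monotonic
counter when it is used and evict the minimum stamp, so the slots never
move and a concurrent walker never sees the array reshuffled.

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_NESTEDP2M 10        /* illustrative table count */

    struct np2m {
        uint64_t last_used;         /* 0 = never used; else LRU stamp */
        /* ... the real p2m state would live here ... */
    };

    static struct np2m tables[MAX_NESTEDP2M];
    static uint64_t lru_clock;      /* bumped under the same lock */

    /* Touch on every use: stamps change, slots stay put. */
    static void np2m_touch(struct np2m *p)
    {
        p->last_used = ++lru_clock;
    }

    /* Pick the least-recently-used slot without moving anything. */
    static struct np2m *np2m_evict_victim(void)
    {
        unsigned int i, victim = 0;

        for ( i = 1; i < MAX_NESTEDP2M; i++ )
            if ( tables[i].last_used < tables[victim].last_used )
                victim = i;
        return &tables[victim];
    }

    int main(void)
    {
        np2m_touch(&tables[3]);
        np2m_touch(&tables[7]);
        /* Slot 0 has never been touched, so it is the victim. */
        printf("victim slot: %d\n", (int)(np2m_evict_victim() - tables));
        return 0;
    }

A flush can then walk the slots in index order while lookups only
update the stamps, so the two no longer trip over each other.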

Cheers,

Tim.

-- 
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

Attachment: np2m-range-update
Description: Text document


 

