
Re: [Xen-devel] [PATCH 5 of 9] Fine-grained concurrency control structure for the p2m



Hi, 

At 07:20 -0700 on 02 Nov (1320218409), andres@xxxxxxxxxxxxxxxx wrote:
> > I suspect that if this is a contention point, allowing multiple readers
> > will become important, especially if there are particular pages that
> > often get emulated access.
> >
> > And also, I'd like to get some sort of plan for handling long-lived
> > foreign mappings, if only to make sure that this phase-1 fix doesn't
> > conflict with it.
> >
> 
> If foreign mappings will hold a lock/ref on a p2m subrange, then they'll
> disallow global operations, and you'll get a clash between log-dirty and,
> say, qemu. Ka-blam live migration.

Yep.  That's a tricky one.  Log-dirty could be special-cased but I guess
we'll have the same problem with paging, mem-event &c. :(
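The clash Andres describes can be modelled in a few lines. This is a hypothetical sketch, not Xen code: the ranges, `pin_range`, and `try_global_op` are invented names standing in for per-subrange p2m locks and a global operation such as enabling log-dirty, which must see every range free.

```c
/* Hypothetical model (NOT Xen code): per-range p2m locks vs. a
 * global operation that needs the whole p2m at once. */
#include <assert.h>
#include <stdbool.h>

#define NR_RANGES 8

static bool range_held[NR_RANGES];   /* one "lock" per p2m subrange */

/* A long-lived foreign mapping pins its subrange for its lifetime. */
static bool pin_range(int i)
{
    if (range_held[i])
        return false;
    range_held[i] = true;
    return true;
}

static void unpin_range(int i)
{
    range_held[i] = false;
}

/* A global operation (e.g. switching on log-dirty for live
 * migration) must find every subrange unheld. */
static bool try_global_op(void)
{
    for (int i = 0; i < NR_RANGES; i++)
        if (range_held[i])
            return false;   /* clashes with a long-lived mapping */
    return true;
}
```

As long as even one qemu mapping pins a range, the global operation can never start; that is the "ka-blam live migration" scenario.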

> Read-only foreign mappings are only problematic insofar as paging
> happens. With proper p2m update/lookup serialization (global or
> fine-grained) that problem is gone.
> 
> Writeable foreign mappings are trickier because of sharing and w^x.
> Is there a reason left, today, not to type an hvm-domain's page
> PGT_writable when a foreign mapping happens?

Unfortunately, yes.  The shadow pagetable code uses the typecount to
detect whether the guest has any writeable mappings of the page; without
that it would have to brute-force search all the L1 shadows in order to
be sure that it had write-protected a page.
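Tim's point can be illustrated with a simplified sketch. This is not the real Xen `page_info` or type machinery (only the name `PGT_writable` comes from the thread); it just shows why a writable typecount turns "does the guest map this page writeably?" into an O(1) counter check rather than a scan of every L1 shadow.

```c
/* Hypothetical sketch (NOT the real Xen structures): each writeable
 * guest mapping takes a PGT_writable type reference, so the shadow
 * code can test a counter instead of brute-force searching shadows. */
#include <assert.h>
#include <stdbool.h>

enum page_type { PGT_none, PGT_writable };

struct page_info {
    enum page_type type;
    unsigned int type_count;    /* outstanding writeable references */
};

static bool get_writable_ref(struct page_info *pg)
{
    if (pg->type != PGT_none && pg->type != PGT_writable)
        return false;           /* page typed for something else */
    pg->type = PGT_writable;
    pg->type_count++;
    return true;
}

static void put_writable_ref(struct page_info *pg)
{
    if (--pg->type_count == 0)
        pg->type = PGT_none;
}

/* The cheap check the shadow code depends on: if this is false, the
 * page is already write-protected everywhere, no search needed. */
static bool guest_has_writeable_mapping(const struct page_info *pg)
{
    return pg->type == PGT_writable && pg->type_count > 0;
}
```

Typing foreign mappings as PGT_writable would pollute exactly this counter, which is why the shadow code still needs them kept separate.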

> That would solve sharing problems. w^x
> really can't be solved short of putting the vcpu on a waitqueue
> (preferable to me), or destroying the mapping and forcing the foreign OS
> to remap later. All a few steps ahead, I hope.

OK, so if I understand correctly your plan is to add this mutual
exclusion for all other users of the p2m (emulation &c) but leave
foreign mappings alone for now, with the general plan of fixing that up
using waitqueues.  That's OK by me.

> Who/what's using w^x by the way? If the refcount is zero, I think I know
> what I'll do ;)

I think the original authors are using it in their product.  I haven't
heard of any other users but there might be some. 

> What is a real problem is that PoD sweeps can cause deadlocks. There is
> a simple step to mitigate this: start the sweep from the current gfn and
> never wrap around -- too bad if the gfn is too high. But this alters the
> sweeping algorithm. I'll deal with it when it's its turn.

OK.  If there's some chance that Olaf can make PoD a special case of
paging maybe we can get rid of the sweeps altogether (i.e., have the
domain pause when it runs out of PoD and let the pager fix it up).  But
I know George has spent a fair amount of time tuning the performance of
PoD so that may not be acceptable. 
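Andres's no-wraparound mitigation can be sketched as follows. This is a hypothetical illustration, not the Xen PoD code: `sweep_no_wrap` and its parameters are invented. The idea is that if every sweep only ever moves toward higher gfns, two concurrent sweeps can never wait on each other's ranges in opposite orders, which removes the lock-ordering deadlock; the price is that candidates below the starting gfn are not reclaimed on that pass.

```c
/* Hypothetical illustration (NOT the Xen PoD code): a sweep that
 * starts at the current gfn and never wraps around, so lock
 * acquisition order is always monotonic in gfn. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Returns the first reclaimable gfn at or above 'start', or -1. */
static long sweep_no_wrap(const bool *reclaimable, size_t nr_gfns,
                          size_t start)
{
    for (size_t gfn = start; gfn < nr_gfns; gfn++)   /* never wraps */
        if (reclaimable[gfn])
            return (long)gfn;
    return -1;   /* "too bad if the gfn is too high" */
}
```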

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
