
Re: [Xen-devel] [PATCH 10/11] x86/altp2m: fix log-dirty handling.



At 12:49 -0800 on 15 Jan (1421322565), Ed White wrote:
> On 01/15/2015 09:20 AM, Tim Deegan wrote:
> > Hi,
> > 
> > The locking changes look OK at first glance, but...
> > 
> > At 13:26 -0800 on 09 Jan (1420806400), Ed White wrote:
> >> @@ -793,6 +793,10 @@ int p2m_change_type_one(struct domain *d, unsigned long gfn,
> >>  
> >>      gfn_unlock(p2m, gfn, 0);
> >>  
> >> +    if ( pt == ot && altp2mhvm_active(d) )
> >> +        /* make sure this page isn't valid in any alternate p2m */
> >> +        p2m_remove_altp2m_page(d, gfn);
> >> +
> >>      return rc;
> >>  }
> > 
> > ...this is the wrong level to be making this change at.  The hook needs
> > to be right at the bottom, in atomic_write_ept_entry() (and
> > hap_write_p2m_entry() for AMD, I think), to catch _every_ update of a
> > p2m entry in the host p2m.
> > 
> > Otherwise a guest frame could be removed entirely and the altp2m would
> > still map it.  Or am I missing some other path that handles that case?
> > Nested-p2m handles this by fairly aggressively flushing nested p2m
> > tables, but that doesn't sound suitable here, since there's state
> > in the alt-p2m that needs to be retained.
> 
> Hmm. Is that going to give me even more locking order problems?

Potentially.  Having given yourself a separate altp2m lock, you might
be able to nest it the other way around, so the p2m lock is always
taken first.
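
For illustration, a rough sketch of that nesting, with the host p2m lock
always outermost.  altp2m_lock(), altp2m_unlock() and
altp2m_propagate_change() below are placeholder names for whatever the
series ends up defining, not existing interfaces:

static void set_hostp2m_entry(struct domain *d, unsigned long gfn,
                              mfn_t mfn, p2m_type_t t)
{
    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);

    p2m_lock(hostp2m);                   /* outer: host p2m lock */

    /* ... update the host p2m entry here ... */

    if ( altp2mhvm_active(d) )
    {
        altp2m_lock(d);                  /* inner: never taken before p2m_lock */
        altp2m_propagate_change(d, gfn, mfn, t);
        altp2m_unlock(d);
    }

    p2m_unlock(hostp2m);
}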

> I don't want to go down the nested p2m route. That is seriously bad
> for performance, and I've also seen plenty of cases where a flush on
> one vcpu breaks instruction emulation on another. Also, as you say,
> I don't have enough information to rebuild the alt p2m.

Yep, it's clearly not going to work for you.

Tim.
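
[For reference, a rough sketch of the per-entry hook suggested above, sitting
at the atomic_write_ept_entry() level so that every host p2m update is
mirrored into the alternate p2ms rather than flushing whole tables as nested
p2m does.  p2m_altp2m_propagate_change() is a hypothetical helper name, not
an existing interface:

static int write_hostp2m_ept_entry(struct p2m_domain *hostp2m,
                                   unsigned long gfn,
                                   ept_entry_t *entryptr, ept_entry_t new,
                                   int level)
{
    int rc = atomic_write_ept_entry(entryptr, new, level);

    if ( rc == 0 && altp2mhvm_active(hostp2m->domain) )
        /*
         * Keep the alternate p2ms consistent with the host p2m, so a frame
         * removed from the host p2m can never remain mapped by an altp2m.
         */
        p2m_altp2m_propagate_change(hostp2m->domain, gfn, new, level);

    return rc;
}
]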
