
Re: [Xen-devel] [PATCH for 4.6] x86/hvm.c: fix regression in guest destruction



On Thu, 2015-07-30 at 10:38 +0100, George Dunlap wrote:
> On 07/30/2015 10:32 AM, Ian Campbell wrote:
> > On Thu, 2015-07-30 at 10:21 +0100, Ian Campbell wrote:
> > 
> > > which I have applied. I still don't think the commit message is
> > > very satisfactory, but I'm not a maintainer of any of this code,
> > > so meh.
> > 
> > For the benefit of the archives, perhaps someone could explain why
> > gating a per-vcpu teardown on a host-level feature setting is
> > correct?
> > 
> > In particular, what ensures that altp2m_vcpu_initialise has been
> > called, given that it is only called from
> > HVMOP_altp2m_set_domain_state? What happens if that HVMOP is never
> > invoked?
> > 
> > Do things work both with altp2m disabled on the Xen command line and
> > with it disabled/enabled in the guest config? If so, how?
> > 
> > Also, how come HVMOP_altp2m_set_domain_state does not have an
> > hvm_altp2m_supported() check?
> 
> So this was all acked & stuff before I had much of a chance to comment
> on it, but on my to-do list for 4.7 is to rework a lot of the
> initialization / teardown stuff.  In particular:
> 
> - Always and only check whether something has been initialized (e.g.,
> non-NULL, non-INVALID_MFN) before tearing it down.
> 
> - Do *all* of the initialization for both altp2m and nestedhvm when
> they're actually enabled for the domain, rather than doing a bunch of
> the initialization unconditionally up front.
> 
> This is all part of the "technical debt" we were talking about when we
> considered giving it a freeze exception.
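
For the archives, I take it the teardown rule you're after for 4.7 is
roughly this shape (a sketch with invented names and a plain libc
free(), not the actual hvm.c code):

/* Sketch only: altp2m_vcpu, veinfo_pg and INVALID_ALTP2M are all
 * made-up stand-ins for whatever the real per-vcpu state is. */
#include <stddef.h>
#include <stdlib.h>

#define INVALID_ALTP2M (~0u)

struct altp2m_vcpu {
    void *veinfo_pg;        /* NULL until the vcpu is initialised */
    unsigned int p2midx;    /* INVALID_ALTP2M until initialised */
};

static void altp2m_vcpu_teardown(struct altp2m_vcpu *av)
{
    /* Tear down only what was demonstrably set up; no host-level
     * feature check required. */
    if ( av->veinfo_pg != NULL )
    {
        free(av->veinfo_pg);
        av->veinfo_pg = NULL;
    }
    if ( av->p2midx != INVALID_ALTP2M )
        av->p2midx = INVALID_ALTP2M;
}

Written that way the teardown is idempotent and safe whether or not
HVMOP_altp2m_set_domain_state was ever invoked.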

Thanks. I'm inferring that everything I asked about in the
second-from-last paragraph is somehow fine, just confusingly achieved
in the current code...
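
The shape that prompted my question, building on the sketch above and
again with invented names rather than the real code, is destruction
gated on a host capability even though initialisation only ever
happens via HVMOP_altp2m_set_domain_state:

#include <stdbool.h>

static bool host_altp2m_supported(void)
{
    return true;    /* stand-in for the real host capability check */
}

static void hvm_vcpu_destroy_sketch(struct altp2m_vcpu *av)
{
    if ( host_altp2m_supported() )  /* host-level gate ... */
        altp2m_vcpu_teardown(av);   /* ... around per-vcpu state */
}

int main(void)
{
    /* The "HVMOP never invoked" case: the fields still hold their
     * creation-time sentinels, so teardown must cope. */
    struct altp2m_vcpu av = { .veinfo_pg = NULL,
                              .p2midx = INVALID_ALTP2M };
    hvm_vcpu_destroy_sketch(&av);
    return 0;
}

I presume that is only safe because enough of the per-vcpu state is
set up unconditionally at vcpu creation -- exactly the up-front
initialisation George wants to get rid of.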

I'm done grumping now...

Ian.
