
Re: [Xen-devel] [PATCH v3] xen/x86: vpmu: Unmap per-vCPU PMU page when the domain is destroyed



> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: 28 November 2019 10:23
> To: Durrant, Paul <pdurrant@xxxxxxxxxx>; Boris Ostrovsky
> <boris.ostrovsky@xxxxxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Grall, Julien <jgrall@xxxxxxxxxx>;
> Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Roger Pau Monné
> <roger.pau@xxxxxxxxxx>; Jun Nakajima <jun.nakajima@xxxxxxxxx>; Kevin Tian
> <kevin.tian@xxxxxxxxx>; Wei Liu <wl@xxxxxxx>
> Subject: Re: [PATCH v3] xen/x86: vpmu: Unmap per-vCPU PMU page when the
> domain is destroyed
> 
> On 28.11.2019 10:38, Paul Durrant wrote:
> > From: Julien Grall <jgrall@xxxxxxxxxx>
> >
> > A guest will setup a shared page with the hypervisor for each vCPU via
> > XENPMU_init. The page will then get mapped in the hypervisor and only
> > released when XENPMU_finish is called.
> >
> > This means that if the guest fails to invoke XENPMU_finish, e.g. if it
> > is destroyed rather than cleanly shut down, the page will stay mapped
> > in the hypervisor. One of the consequences is that the domain can never
> > be fully destroyed as a page reference is still held.
> >
> > As Xen should never rely on the guest to correctly clean up any
> > allocation in the hypervisor, we should also unmap such pages during
> > domain destruction if any are left.
> >
> > We can re-use the same logic as in pvpmu_finish(). To avoid
> > duplication, move the logic into a new function that can also be
> > called from vpmu_destroy().
> >
> > NOTE: - The call to vpmu_destroy() must also be moved from
> >         arch_vcpu_destroy() into domain_relinquish_resources() such that
> >         the reference on the mapped page does not prevent domain_destroy()
> >         (which calls arch_vcpu_destroy()) from being called.
> >       - Whilst it appears that vpmu_arch_destroy() is idempotent, it is
> >         by no means obvious. Hence make sure the VPMU_CONTEXT_ALLOCATED
> >         flag is cleared at the end of vpmu_arch_destroy().
> >       - This is not an XSA because vPMU is not security supported (see
> >         XSA-163).
> >
> > Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>
> > Signed-off-by: Paul Durrant <pdurrant@xxxxxxxxxx>
> > ---
> > Cc: Jan Beulich <jbeulich@xxxxxxxx>
> > Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > Cc: Wei Liu <wl@xxxxxxx>
> > Cc: "Roger Pau Monné" <roger.pau@xxxxxxxxxx>
> > Cc: Jun Nakajima <jun.nakajima@xxxxxxxxx>
> > Cc: Kevin Tian <kevin.tian@xxxxxxxxx>
> >
> > v2:
> >  - Re-word commit comment slightly
> >  - Re-enforce idempotency of vpmu_arch_destroy()
> >  - Move invocation of vpmu_destroy() earlier in
> >    domain_relinquish_resources()
> 
> What about v3?

Oh, sorry:

v3:
 - Add comment regarding XSA-163
 - Revert changes setting VPMU_CONTEXT_ALLOCATED in common code

  Paul
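
For context, the refactoring the commit message describes (pulling the
pvpmu_finish() unmap logic into one helper that vpmu_destroy() can also
call) might look roughly like the sketch below. This is only an
illustration; the helper name vpmu_cleanup() and the exact details are
assumptions on my part, not lifted from the patch itself.

static void vpmu_cleanup(struct vcpu *v)
{
    struct vpmu_struct *vpmu = vcpu_vpmu(v);
    void *xenpmu_data = vpmu->xenpmu_data;

    /* Arch teardown; made idempotent so both callers can reach it. */
    vpmu_arch_destroy(v);
    vpmu->xenpmu_data = NULL;

    /* Drop the global mapping and the page reference taken when the
     * guest registered the page via XENPMU_init. */
    if ( xenpmu_data )
    {
        mfn_t mfn = domain_page_map_to_mfn(xenpmu_data);

        ASSERT(mfn_valid(mfn));
        unmap_domain_page_global(xenpmu_data);
        put_page_and_type(mfn_to_page(mfn));
    }
}

Both pvpmu_finish() and vpmu_destroy() (now reached from
domain_relinquish_resources()) would then funnel through this single
path, so a guest that never issues XENPMU_finish no longer leaves a
mapping or page reference behind.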

> 
> > --- a/xen/arch/x86/cpu/vpmu.c
> > +++ b/xen/arch/x86/cpu/vpmu.c
> > @@ -576,11 +576,36 @@ static void vpmu_arch_destroy(struct vcpu *v)
> >
> >           vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
> >      }
> > +
> > +    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
> >  }
> 
> Boris, to be on the safe side - are you in agreement with this
> change, now that the setting of the flag is being left untouched?
> 
> Jan
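
To spell out the idempotency point the quoted hunk addresses:
vpmu_arch_destroy() already returns early when VPMU_CONTEXT_ALLOCATED is
not set, so clearing the flag on the way out is what turns a repeated
call into a no-op. Roughly, as an illustrative outline with the existing
body elided (and assuming the early check stays as it is today):

static void vpmu_arch_destroy(struct vcpu *v)
{
    struct vpmu_struct *vpmu = vcpu_vpmu(v);

    /* No context was allocated, or an earlier call already tore it down. */
    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
        return;

    /* ... existing teardown, including arch_vpmu_ops->arch_vpmu_destroy() ... */

    /* Added by the hunk above: make any further call take the early exit. */
    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
}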
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
