
Re: [Xen-devel] [PATCH v3] xen/x86: vpmu: Unmap per-vCPU PMU page when the domain is destroyed



On 28.11.2019 10:38, Paul Durrant wrote:
> From: Julien Grall <jgrall@xxxxxxxxxx>
> 
> A guest will set up a shared page with the hypervisor for each vCPU via
> XENPMU_init. The page will then be mapped in the hypervisor and only
> released when XENPMU_finish is called.
> 
> This means that if the guest fails to invoke XENPMU_finish, e.g. if it is
> destroyed rather than cleanly shut down, the page will stay mapped in the
> hypervisor. One consequence is that the domain can never be fully
> destroyed, as a page reference is still held.
> 
> As Xen should never rely on the guest to correctly clean up any
> allocation in the hypervisor, we should also unmap such pages during
> domain destruction if any are left.
> 
> We can reuse the same logic as in pvpmu_finish(). To avoid
> duplication, move the logic into a new function that can also be called
> from vpmu_destroy().
> 
> NOTE: - The call to vpmu_destroy() must also be moved from
>         arch_vcpu_destroy() into domain_relinquish_resources() such that
>         the reference on the mapped page does not prevent domain_destroy()
>         (which calls arch_vcpu_destroy()) from being called.
>       - Whilst it appears that vpmu_arch_destroy() is idempotent, it is
>         by no means obvious. Hence make sure the VPMU_CONTEXT_ALLOCATED
>         flag is cleared at the end of vpmu_arch_destroy().
>       - This is not an XSA because vPMU is not security supported (see
>         XSA-163).
> 
> Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>
> Signed-off-by: Paul Durrant <pdurrant@xxxxxxxxxx>

Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
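
For context, the cleanup the quoted description refers to has roughly the
shape sketched below. This is only an illustration, not the patch itself:
the helper name vpmu_unmap_pmu_page() and the omission of locking are
assumptions made for brevity, while vcpu_vpmu(), the xenpmu_data field and
the unmap/put calls follow the existing pvpmu_finish() path.

    /*
     * Sketch only: the unmap logic that pvpmu_finish() already contains,
     * factored into a helper so that vpmu_destroy() can call it as well.
     * Names and locking are illustrative, not taken verbatim from the patch.
     */
    static void vpmu_unmap_pmu_page(struct vcpu *v)
    {
        struct vpmu_struct *vpmu = vcpu_vpmu(v);
        void *xenpmu_data = vpmu->xenpmu_data;

        if ( !xenpmu_data )
            return; /* Guest never called XENPMU_init, or already finished. */

        vpmu->xenpmu_data = NULL;

        /* Drop the global mapping and the page reference taken at init. */
        unmap_domain_page_global(xenpmu_data);
        put_page_and_type(mfn_to_page(domain_page_map_to_mfn(xenpmu_data)));
    }

    void vpmu_destroy(struct vcpu *v)
    {
        vpmu_arch_destroy(v);

        /*
         * Also release the shared page here, so a guest that is destroyed
         * without ever issuing XENPMU_finish cannot leave it mapped.
         */
        vpmu_unmap_pmu_page(v);
    }

With vpmu_destroy() invoked from domain_relinquish_resources() rather than
arch_vcpu_destroy(), the reference on the mapped page is dropped early
enough that it no longer prevents the domain from being fully destroyed.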

