[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH v2 1/6] x86/vmx: Fix handling of MSR_DEBUGCTL on VMExit



> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> Sent: Thursday, May 31, 2018 1:35 AM
> 
> Currently, whenever the guest writes a nonzero value to MSR_DEBUGCTL, Xen
> updates a host MSR load list entry with the current hardware value of
> MSR_DEBUGCTL.
> 
> On VMExit, hardware automatically resets MSR_DEBUGCTL to 0.  Later, when
> the guest writes to MSR_DEBUGCTL, the current value in hardware (0) is
> fed back into the guest load list.  As a practical result, `ler`
> debugging gets lost on any PCPU which has ever scheduled an HVM vcpu,
> and in the common case, when `ler` debugging isn't active, guest actions
> result in an unnecessary load list entry repeating the MSR_DEBUGCTL
> reset.
> 
> Restoration of Xen's debugging setting needs to happen from the very first
> vmexit.  Due to the automatic reset, Xen need take no action in the general
> case, and only needs to load a value when debugging is active.
> 
> This could be fixed by using a host MSR load list entry set up during
> construct_vmcs().  However, a more efficient option is to use an alternative
> block in the VMExit path, keyed on whether hypervisor debugging has been
> enabled.
> 
> In order to set this up, drop the per cpu ler_msr variable (as there is
> no point having it per cpu when it will be the same everywhere), and use
> a single read_mostly variable instead.  Split calc_ler_msr() out of
> percpu_traps_init() for clarity.
> 
> Finally, clean up do_debug().  Reinstate LBR early to help catch cascade
> errors, which allows for the removal of the out label.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

nice cleanup. 

Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
