
Re: [Xen-devel] [PATCH 4/4] x86/vmx: Drop vmx_msr_state infrastructure

On 13/02/17 16:12, Andrew Cooper wrote:
> On 13/02/17 16:01, Jan Beulich wrote:
>>>>> On 13.02.17 at 15:32, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> To avoid leaking host MSR state into guests, guest LSTAR, STAR and
>>> SYSCALL_MASK state is unconditionally loaded when switching into guest
>>> context.
>>> Attempting to dirty-track the state is pointless; host state is always
>>> restored upon exit from guest context, meaning that guest state is always
>>> considered dirty.
>>> Drop struct vmx_msr_state, enum VMX_INDEX_MSR_* and msr_index[].  The
>>> guest's MSR values are stored plainly in arch_vmx_struct, in the same way
>>> as shadow_gs and cstar are.  vmx_restore_guest_msrs() and
>>> long_mode_do_msr_write() ensure that the hardware MSR values are always
>>> up-to-date.
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
>> However, the description above made me think whether always
>> saving/restoring these MSRs is really needed (and desirable):
>> We don't need the host values in place unless we context switch
>> to a PV guest, so perhaps we should rather write them in
>> paravirt_ctxt_switch_to()?
> That would leak the values between different HVM guests.
> In principle we could skip the update if context switching to the idle
> cpu, but that would involve leaking a VT-x-ism into the common code. 
> SVM on the other hand automatically switches these MSRs on all
> vmentries/exits so Xen always has its MSRs in context.

Furthermore, I did consider whether we should allow the guest to write
to those MSRs directly, and handle them like shadow_gs.

I don't expect a plain OS to change them after initial setup, but a
nested hypervisor (particularly Xen) is taking quite a performance hit
on its context switch path because of these MSRs being intercepted at L0.


Xen-devel mailing list