
Re: [Xen-devel] [PATCH V2 2/2] x86/hvm: fix domain crash when CR3 has the noflush bit set



On Wed, Jan 31, 2018 at 11:44 AM, Tamas K Lengyel <tamas@xxxxxxxxxxxxx> wrote:
>
> On Tue, Jan 30, 2018 at 2:16 AM, Razvan Cojocaru
> <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>> The emulation layers of Xen lack PCID support, and as we only offer
>> PCID to HAP guests, all writes to CR3 are handled by hardware,
>> except when introspection is involved. Consequently, trying to set
>> CR3 when the noflush bit is set in hvm_set_cr3() leads to domain
>> crashes. The workaround is to clear the noflush bit in
>> hvm_set_cr3(). CR3 values in hvm_monitor_cr() are also sanitized.
>> Additionally, a bool parameter now propagates to
>> {svm,vmx}_update_guest_cr(), so that no flushes occur when
>> the bit was set.
>>
>> Signed-off-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
>> Reported-by: Bitweasil <bitweasil@xxxxxxxxxxxxxx>
>> Suggested-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>
>> ---
>> Changes since V1:
>>  - Added the bool noflush parameter and code to propagate it to
>>    {svm,vmx}_update_guest_cr().
>>  - Added X86_CR3_NOFLUSH_DISABLE_MASK and X86_CR3_NOFLUSH_DISABLE.
>>  - No longer sanitizing the old value in hvm_monitor_cr().
>> ---
>>  xen/arch/x86/hvm/domain.c         |  6 +++---
>>  xen/arch/x86/hvm/hvm.c            | 25 ++++++++++++++++---------
>>  xen/arch/x86/hvm/monitor.c        |  3 +++
>>  xen/arch/x86/hvm/svm/nestedsvm.c  |  4 ++--
>>  xen/arch/x86/hvm/svm/svm.c        | 22 ++++++++++++++--------
>>  xen/arch/x86/hvm/svm/vmcb.c       |  4 ++--
>>  xen/arch/x86/hvm/vmx/vmcs.c       |  4 ++--
>>  xen/arch/x86/hvm/vmx/vmx.c        | 16 +++++++++-------
>>  xen/arch/x86/mm.c                 |  2 +-
>>  xen/arch/x86/mm/hap/hap.c         |  6 +++---
>>  xen/arch/x86/mm/shadow/common.c   |  2 +-
>>  xen/arch/x86/mm/shadow/multi.c    |  6 +++---
>>  xen/arch/x86/mm/shadow/none.c     |  2 +-
>>  xen/arch/x86/monitor.c            |  2 +-
>>  xen/include/asm-x86/hvm/hvm.h     | 10 +++++++---
>>  xen/include/asm-x86/hvm/svm/svm.h |  2 +-
>>  xen/include/asm-x86/paging.h      |  7 ++++---
>>  17 files changed, 73 insertions(+), 50 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/domain.c b/xen/arch/x86/hvm/domain.c
>> index 6047464..9be085e 100644
>> --- a/xen/arch/x86/hvm/domain.c
>> +++ b/xen/arch/x86/hvm/domain.c
>> @@ -287,9 +287,9 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
>>          return -EINVAL;
>>      }
>>
>> -    hvm_update_guest_cr(v, 0);
>> -    hvm_update_guest_cr(v, 3);
>> -    hvm_update_guest_cr(v, 4);
>> +    hvm_update_guest_cr(v, 0, false);
>> +    hvm_update_guest_cr(v, 3, false);
>> +    hvm_update_guest_cr(v, 4, false);
>>      hvm_update_guest_efer(v);
>>
>>      if ( hvm_paging_enabled(v) && !paging_mode_hap(v->domain) )
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index c4287a3..b42fbd1 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -2184,7 +2184,7 @@ static void hvm_update_cr(struct vcpu *v, unsigned int cr, unsigned long value)
>>  {
>>      v->arch.hvm_vcpu.guest_cr[cr] = value;
>>      nestedhvm_set_cr(v, cr, value);
>> -    hvm_update_guest_cr(v, cr);
>> +    hvm_update_guest_cr(v, cr, false);
>>  }
>>
>>  int hvm_set_cr0(unsigned long value, bool_t may_defer)
>> @@ -2310,6 +2310,7 @@ int hvm_set_cr3(unsigned long value, bool_t may_defer)
>>      struct vcpu *v = current;
>>      struct page_info *page;
>>      unsigned long old = v->arch.hvm_vcpu.guest_cr[3];
>> +    bool noflush = false;
>>
>>      if ( may_defer && unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
>>                                 monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3)) )
>
> In this if block shouldn't we save the "noflush" into
> "v->arch.vm_event->write_data" so that it can be used during
> hvm_do_resume as well?
>

Never mind, I see that this only applies on the return path already.

Tamas
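
For context, the "noflush" bit in question is bit 63 of CR3: with PCID
enabled, setting it on a CR3 write asks the processor not to flush TLB
entries, and it is not part of the page-table base address. Below is a
minimal sketch of the sanitizing step the patch description refers to;
the identifiers (CR3_NOFLUSH, sanitize_cr3_value) are illustrative and
not necessarily the ones the patch introduces (its changelog mentions
X86_CR3_NOFLUSH_DISABLE_MASK).

#include <stdbool.h>

/* Illustrative sketch only -- not the exact patch code. */
#define CR3_NOFLUSH (1UL << 63)

static unsigned long sanitize_cr3_value(unsigned long value, bool *noflush)
{
    /* Record whether the guest asked to skip the TLB flush ... */
    *noflush = value & CR3_NOFLUSH;

    /* ... and clear the bit so the rest of the CR3 handling (page
     * lookup, monitor event payload, VMCS/VMCB update) only ever sees
     * a valid page-table base address. */
    return value & ~CR3_NOFLUSH;
}

The recorded flag is what the new bool parameter to
{svm,vmx}_update_guest_cr() would carry, so the eventual hardware
update can honour the guest's request and skip the TLB flush.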
