
Re: [Xen-devel] [PATCH 2/2] x86/hvm: Drop the may_defer boolean from hvm_* helpers


  • To: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 24 Oct 2018 11:00:43 +0100
  • Cc: Kevin Tian <kevin.tian@xxxxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, Brian Woods <brian.woods@xxxxxxx>, Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>
  • Delivery-date: Wed, 24 Oct 2018 10:01:04 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 23/10/18 16:24, Razvan Cojocaru wrote:
> On 10/23/18 5:35 PM, Andrew Cooper wrote:
>> The may_defer booleans were introduced with the monitor infrastructure, but
>> their purpose is not obvious and not described anywhere.
>>
>> They exist to avoid triggering nested monitoring events from introspection
>> activities, but with the introduction of the general monitor.suppress
>> infrastructure, they are no longer needed.  Drop them.
> I admit their purpose may not be obvious, but they don't exist only for
> the reason you've given. They exist so that we may be able to send out
> vm_events _before_ a write happens (so that we are then able to veto the
> CR or MSR write from the introspection agent).
>
> So "defer" means that we defer the write until after the introspection
> agent replies. The "may" part refers to the fact that the introspection
> agent may not be interested in that event, so you're telling the function
> "please don't write the value to this MSR, just send a vm_event for now,
> _unless_ the introspection agent hasn't subscribed to writes to this
> particular MSR".
>
> The actual write is done in the code called by hvm_vm_event_do_resume(),
> if the vm_event reply allows it.
>
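[ For context, a minimal standalone sketch of the defer/apply pattern being
described; the struct layout and helper names below are simplified stand-ins
rather than the actual Xen code, which works through
v->arch.vm_event->write_data and hvm_do_resume(). ]

/*
 * Sketch only: defer a monitored write on the intercept path, apply it on
 * the resume path with deferral disabled.
 */
#include <stdbool.h>
#include <stdint.h>

struct monitor_write_data {
    struct { bool msr; } do_write;
    uint32_t msr;
    uint64_t value;
};

/*
 * Intercept path: if deferral is allowed and the MSR is monitored, stash
 * the write and return without touching the register; a vm_event goes out
 * instead.
 */
static int msr_write_intercept(struct monitor_write_data *wd, uint32_t msr,
                               uint64_t val, bool may_defer, bool monitored)
{
    if ( may_defer && monitored )
    {
        wd->do_write.msr = true;    /* remember the pending write */
        wd->msr = msr;
        wd->value = val;
        return 0;                   /* deferred; wait for the agent's reply */
    }

    /* ... perform the actual register write here ... */
    return 0;
}

/*
 * Resume path: if the introspection agent permitted the write, apply it
 * now, with may_defer clear so it cannot generate a second vm_event.
 */
static void vm_event_do_resume(struct monitor_write_data *wd, bool monitored)
{
    if ( wd->do_write.msr )
    {
        msr_write_intercept(wd, wd->msr, wd->value, false, monitored);
        wd->do_write.msr = false;
    }
}
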
>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> ---
>> CC: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
>> CC: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
>> CC: Jan Beulich <JBeulich@xxxxxxxx>
>> CC: Wei Liu <wei.liu2@xxxxxxxxxx>
>> CC: Jun Nakajima <jun.nakajima@xxxxxxxxx>
>> CC: Kevin Tian <kevin.tian@xxxxxxxxx>
>> CC: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
>> CC: Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>
>> CC: Brian Woods <brian.woods@xxxxxxx>
>> ---
>>  xen/arch/x86/hvm/emulate.c        |  8 ++++----
>>  xen/arch/x86/hvm/hvm.c            | 31 +++++++++++++++----------------
>>  xen/arch/x86/hvm/svm/nestedsvm.c  | 14 +++++++-------
>>  xen/arch/x86/hvm/svm/svm.c        |  2 +-
>>  xen/arch/x86/hvm/vm_event.c       |  9 ++++-----
>>  xen/arch/x86/hvm/vmx/vmx.c        |  4 ++--
>>  xen/arch/x86/hvm/vmx/vvmx.c       | 16 ++++++++--------
>>  xen/include/asm-x86/hvm/support.h |  8 ++++----
>>  8 files changed, 45 insertions(+), 47 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
>> index cd1d9a7..43f18c2 100644
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -2024,7 +2024,7 @@ static int hvmemul_write_cr(
>>      switch ( reg )
>>      {
>>      case 0:
>> -        rc = hvm_set_cr0(val, 1);
>> +        rc = hvm_set_cr0(val);
>>          break;
>>  
>>      case 2:
>> @@ -2033,11 +2033,11 @@ static int hvmemul_write_cr(
>>          break;
>>  
>>      case 3:
>> -        rc = hvm_set_cr3(val, 1);
>> +        rc = hvm_set_cr3(val);
>>          break;
>>  
>>      case 4:
>> -        rc = hvm_set_cr4(val, 1);
>> +        rc = hvm_set_cr4(val);
>>          break;
>>  
>>      default:
>> @@ -2092,7 +2092,7 @@ static int hvmemul_write_msr(
>>      uint64_t val,
>>      struct x86_emulate_ctxt *ctxt)
>>  {
>> -    int rc = hvm_msr_write_intercept(reg, val, 1);
>> +    int rc = hvm_msr_write_intercept(reg, val);
>>  
>>      if ( rc == X86EMUL_EXCEPTION )
>>          x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index 4b4d9d6..296b967 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -2046,15 +2046,15 @@ int hvm_mov_to_cr(unsigned int cr, unsigned int gpr)
>>      switch ( cr )
>>      {
>>      case 0:
>> -        rc = hvm_set_cr0(val, 1);
>> +        rc = hvm_set_cr0(val);
>>          break;
>>  
>>      case 3:
>> -        rc = hvm_set_cr3(val, 1);
>> +        rc = hvm_set_cr3(val);
>>          break;
>>  
>>      case 4:
>> -        rc = hvm_set_cr4(val, 1);
>> +        rc = hvm_set_cr4(val);
>>          break;
>>  
>>      case 8:
>> @@ -2150,7 +2150,7 @@ static void hvm_update_cr(struct vcpu *v, unsigned int cr, unsigned long value)
>>      hvm_update_guest_cr(v, cr);
>>  }
>>  
>> -int hvm_set_cr0(unsigned long value, bool_t may_defer)
>> +int hvm_set_cr0(unsigned long value)
>>  {
>>      struct vcpu *v = current;
>>      struct domain *d = v->domain;
>> @@ -2176,8 +2176,8 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
>>           (value & (X86_CR0_PE | X86_CR0_PG)) == X86_CR0_PG )
>>          return X86EMUL_EXCEPTION;
>>  
>> -    if ( may_defer && unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
>> -                               monitor_ctrlreg_bitmask(VM_EVENT_X86_CR0)) )
>> +    if ( unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
>> +                  monitor_ctrlreg_bitmask(VM_EVENT_X86_CR0)) )
>>      {
>>          ASSERT(v->arch.vm_event);
>>  
>> @@ -2268,15 +2268,15 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
>>      return X86EMUL_OKAY;
>>  }
>>  
>> -int hvm_set_cr3(unsigned long value, bool_t may_defer)
>> +int hvm_set_cr3(unsigned long value)
>>  {
>>      struct vcpu *v = current;
>>      struct page_info *page;
>>      unsigned long old = v->arch.hvm.guest_cr[3];
>>      bool noflush = false;
>>  
>> -    if ( may_defer && unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
>> -                               monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3)) )
>> +    if ( unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
>> +                  monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3)) )
>>      {
>>          ASSERT(v->arch.vm_event);
>>  
>> @@ -2322,7 +2322,7 @@ int hvm_set_cr3(unsigned long value, bool_t may_defer)
>>      return X86EMUL_UNHANDLEABLE;
>>  }
>>  
>> -int hvm_set_cr4(unsigned long value, bool_t may_defer)
>> +int hvm_set_cr4(unsigned long value)
>>  {
>>      struct vcpu *v = current;
>>      unsigned long old_cr;
>> @@ -2356,8 +2356,8 @@ int hvm_set_cr4(unsigned long value, bool_t may_defer)
>>          return X86EMUL_EXCEPTION;
>>      }
>>  
>> -    if ( may_defer && unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
>> -                               monitor_ctrlreg_bitmask(VM_EVENT_X86_CR4)) )
>> +    if ( unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
>> +                  monitor_ctrlreg_bitmask(VM_EVENT_X86_CR4)) )
>>      {
>>          ASSERT(v->arch.vm_event);
>>  
>> @@ -2989,7 +2989,7 @@ void hvm_task_switch(
>>      if ( task_switch_load_seg(x86_seg_ldtr, tss.ldt, new_cpl, 0) )
>>          goto out;
>>  
>> -    rc = hvm_set_cr3(tss.cr3, 1);
>> +    rc = hvm_set_cr3(tss.cr3);
>>      if ( rc == X86EMUL_EXCEPTION )
>>          hvm_inject_hw_exception(TRAP_gp_fault, 0);
>>      if ( rc != X86EMUL_OKAY )
>> @@ -3497,8 +3497,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>>      goto out;
>>  }
>>  
>> -int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
>> -                            bool may_defer)
>> +int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>  {
>>      struct vcpu *v = current;
>>      struct domain *d = v->domain;
>> @@ -3507,7 +3506,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
>>      HVMTRACE_3D(MSR_WRITE, msr,
>>                 (uint32_t)msr_content, (uint32_t)(msr_content >> 32));
>>  
>> -    if ( may_defer && unlikely(monitored_msr(v->domain, msr)) )
>> +    if ( unlikely(monitored_msr(v->domain, msr)) )
>>      {
>>          uint64_t msr_old_content;
>>  
> I don't see how this could work. The beginning of this function looks as
> follows:
>
> 3492 int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
> 3493                             bool may_defer)
> 3494 {
> 3495     struct vcpu *v = current;
> 3496     struct domain *d = v->domain;
> 3497     int ret;
> 3498
> 3499     HVMTRACE_3D(MSR_WRITE, msr,
> 3500                (uint32_t)msr_content, (uint32_t)(msr_content >> 32));
> 3501
> 3502     if ( may_defer && unlikely(monitored_msr(v->domain, msr)) )
> 3503     {
> 3504         uint64_t msr_old_content;
> 3505
> 3506         ret = hvm_msr_read_intercept(msr, &msr_old_content);
> 3507         if ( ret != X86EMUL_OKAY )
> 3508             return ret;
> 3509
> 3510         ASSERT(v->arch.vm_event);
> 3511
> 3512         /* The actual write will occur in hvm_do_resume() (if permitted). */
> 3513         v->arch.vm_event->write_data.do_write.msr = 1;
> 3514         v->arch.vm_event->write_data.msr = msr;
> 3515         v->arch.vm_event->write_data.value = msr_content;
> 3516
> 3517         hvm_monitor_msr(msr, msr_content, msr_old_content);
> 3518         return X86EMUL_OKAY;
> 3519     }
> 3520
> 3521     if ( (ret = guest_wrmsr(v, msr, msr_content)) != X86EMUL_UNHANDLEABLE )
> 3522         return ret;
>
> By dropping may_defer, you're now making sure that this function can
> never reach guest_wrmsr() as long as we're dealing with a monitored MSR.
>
> But the code currently calls hvm_msr_write_intercept() with a 0 value
> for may_defer not only in hvm_vm_event_do_resume(), but also in
> load_shadow_guest_state() in vvmx.c, for example.
>
> Speaking of which, removing may_defer from these functions without
> looking at v->monitor.suppress won't work. I think what you were aiming
> at was perhaps to replace may_defer with an equivalent test on
> v->monitor.suppress in the body of the function instead of simply
> erasing may_defer from everywhere.

Hmm - good point.  This will break things even more.  The
monitor.suppress check needs to be at this point, rather than later.

I'll see about wrapping the monitor checks up into some static inlines
which better model the intercepts they are built from, and check
suppress as the first action.  That should resolve the issues here.
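
Something of roughly this shape, checking suppress up front (illustrative
only - the helper name, and the exact name and location of the suppress
flag, are guesses rather than anything from the series):

/* Sketch: would the write of control register 'index' raise a monitor event? */
static inline bool hvm_monitor_cr_write_enabled(const struct vcpu *v,
                                                unsigned int index)
{
    if ( v->arch.vm_event && v->arch.vm_event->monitor.suppress )
        return false;   /* introspection activity: don't re-trigger events */

    return v->domain->arch.monitor.write_ctrlreg_enabled &
           monitor_ctrlreg_bitmask(index);
}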

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

