
Re: [Xen-devel] [PATCH v2 2/2] vmx/hap: optimize CR4 trapping



On 02/16/2018 02:37 PM, Roger Pau Monné wrote:
> On Fri, Feb 16, 2018 at 02:30:55PM +0200, Razvan Cojocaru wrote:
>> On 02/16/2018 02:10 PM, Roger Pau Monne wrote:
>>> diff --git a/xen/arch/x86/monitor.c b/xen/arch/x86/monitor.c
>>> index f229e69948..4317658c56 100644
>>> --- a/xen/arch/x86/monitor.c
>>> +++ b/xen/arch/x86/monitor.c
>>> @@ -189,10 +189,11 @@ int arch_monitor_domctl_event(struct domain *d,
>>>              ad->monitor.write_ctrlreg_enabled &= ~ctrlreg_bitmask;
>>>          }
>>>  
>>> -        if ( VM_EVENT_X86_CR3 == mop->u.mov_to_cr.index )
>>> +        if ( VM_EVENT_X86_CR3 == mop->u.mov_to_cr.index ||
>>> +             VM_EVENT_X86_CR4 == mop->u.mov_to_cr.index )
>>>          {
>>>              struct vcpu *v;
>>> -            /* Latches new CR3 mask through CR0 code. */
>>> +            /* Latches new CR3 or CR4 mask through CR0 code. */
>>>              for_each_vcpu ( d, v )
>>>                  hvm_update_guest_cr(v, 0);
>>>          }
>>
>> Did you, by any chance, test this code with xen-access.c (it already has
>> a test for CR4 for the PGE stuff)? I'm not convinced the
>> hvm_update_guest_cr(v, 0); call suffices to enable CR4 load exits.
> 
> hvm_update_guest_cr is just a wrapper to vmx_update_guest_cr when
> using vmx, which will unconditionally re-calculate the CR4 mask when
> called with cr == 0 or cr == 4.
> 
> I have not tested it with xen-access, but it seems quite
> straightforward to me. Are you seeing any other path that could
> enable CR4 load accesses without calling hvm_update_guest_cr?

No, I thought I had, but as it turns out I hadn't. I'll run a quick test
on the patches just to make sure, though. They should be alright.
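For reference, the mechanism Roger describes works out to roughly the
following on the VMX side. This is a simplified sketch rather than the
literal vmx_update_guest_cr() code: the helper name and the exact set of
always-trapped bits are illustrative, only the monitor fields and macros
are taken from the tree.

/* Illustrative sketch only -- not the actual Xen implementation. */
static void recalc_cr4_guest_host_mask(struct vcpu *v)
{
    /* Bits the hypervisor must always intercept (illustrative minimum). */
    unsigned long mask = X86_CR4_VMXE;

    /*
     * If a monitor subscriber enabled CR4 write events via
     * arch_monitor_domctl_event(), intercept every CR4 write so the
     * event can be delivered before the guest value is committed.
     */
    if ( v->domain->arch.monitor.write_ctrlreg_enabled &
         monitor_ctrlreg_bitmask(VM_EVENT_X86_CR4) )
        mask = ~0UL;

    __vmwrite(CR4_GUEST_HOST_MASK, mask);
}

As Roger notes above, the real recalculation lives in
vmx_update_guest_cr() and runs for cr == 0 as well as cr == 4, which is
why the hunk only needs the hvm_update_guest_cr(v, 0) latch on every vCPU
to pick up the new CR4 mask.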


Thanks,
Razvan

