
Re: [Xen-devel] [PATCH] mem_access: Use monitor_traps instead of mem_access_send_req



On Thu, Jul 28, 2016 at 2:54 PM, Andrew Cooper
<andrew.cooper3@xxxxxxxxxx> wrote:
> On 28/07/2016 20:35, Tamas K Lengyel wrote:
>> The two functions monitor_traps and mem_access_send_req duplicate
>> some of the same functionality. mem_access_send_req, however, leaves
>> many of the standard vm_event fields to be filled in by other functions.
>>
>> Since mem_access events go on the monitor ring, this patch consolidates
>> all paths to use monitor_traps to place events on the ring and to fill
>> in the common parts of the requests.
>>
>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@xxxxxxxxxxxx>
>> ---
>> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
>> Cc: Julien Grall <julien.grall@xxxxxxx>
>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> Cc: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
>> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
>
> Common and x86 bits Reviewed-by: Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx>, but a few suggestions.
>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index d82349c..df898a3 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1642,12 +1642,41 @@ void __init setup_virt_paging(void)
>>      smp_call_function(setup_virt_paging_one, (void *)val, 1);
>>  }
>>
>> +static int
>> +__p2m_mem_access_send_req(paddr_t gpa, vaddr_t gla, const struct npfec npfec,
>> +                          xenmem_access_t xma)
>> +{
>> +    struct vcpu *v = current;
>> +    vm_event_request_t req = {};
>> +    bool_t sync = (xma == XENMEM_access_n2rwx) ? 0 : 1;
>> +
>> +    req.reason = VM_EVENT_REASON_MEM_ACCESS;
>> +
>> +    /* Send request to mem access subscriber */
>> +    req.u.mem_access.gfn = gpa >> PAGE_SHIFT;
>> +    req.u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
>
> I see this is only code motion, but ~PAGE_MASK here instead of
> open-coding it.

Sounds good.
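
Something like this then (a sketch, assuming Xen's usual definition of
PAGE_MASK as ~(PAGE_SIZE - 1), so ~PAGE_MASK selects the in-page offset
bits):

    req.u.mem_access.gfn = gpa >> PAGE_SHIFT;
    req.u.mem_access.offset = gpa & ~PAGE_MASK;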

>
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index daaee1d..688370d 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -1846,11 +1846,12 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>>                  }
>>              }
>>
>> -            if ( p2m_mem_access_check(gpa, gla, npfec, &req_ptr) )
>> -            {
>> +            sync = p2m_mem_access_check(gpa, gla, npfec, &req_ptr);
>> +
>> +            if ( !sync ) {
>
> Please keep this brace on the newline (inline with the style), even if
> it doesn't match the style of the else clause.

Sure.
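
So the hunk would end up along these lines (a sketch of the requested
brace placement; the else clause keeps its existing style):

    if ( !sync )
    {
        fall_through = 1;
    } else {
        /* Rights not promoted (aka. sync event), work here is done */
        rc = 1;
        goto out_put_gfn;
    }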

>
>>                  fall_through = 1;
>>              } else {
>> -                /* Rights not promoted, vcpu paused, work here is done */
>> +                /* Rights not promoted (aka. sync event), work here is done */
>>                  rc = 1;
>>                  goto out_put_gfn;
>>              }
>> @@ -1956,7 +1957,12 @@ out:
>>      }
>>      if ( req_ptr )
>>      {
>> -        mem_access_send_req(currd, req_ptr);
>> +        if ( hvm_monitor_mem_access(curr, sync, req_ptr) < 0 )
>> +        {
>> +            /* Crash the domain */
>> +            rc = 0;
>> +        }
>
> It is reasonable to omit the braces here.

Yeah, but with the comment sitting in between I just didn't like the
look of it without the braces.

>
>> +
>>          xfree(req_ptr);
>>      }
>>      return rc;
>> diff --git a/xen/arch/x86/hvm/monitor.c b/xen/arch/x86/hvm/monitor.c
>> index 7277c12..c7285c6 100644
>> --- a/xen/arch/x86/hvm/monitor.c
>> +++ b/xen/arch/x86/hvm/monitor.c
>> @@ -152,6 +152,12 @@ int hvm_monitor_cpuid(unsigned long insn_length)
>>      return monitor_traps(curr, 1, &req);
>>  }
>>
>> +int hvm_monitor_mem_access(struct vcpu* v, bool_t sync,
>
> vcpu *v.
>
> ~Andrew
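
Ack. So the first line of the declaration becomes (only the pointer
placement changes; the rest of the prototype stays as in the patch):

    int hvm_monitor_mem_access(struct vcpu *v, bool_t sync,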

Thanks,
Tamas
