
Re: [Xen-devel] [PATCH v2 2/4] x86/mm: use optional cache in guest_walk_tables()



>>> On 11.09.18 at 18:17, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: 11 September 2018 14:15
>> 
>> --- a/xen/arch/x86/mm/guest_walk.c
>> +++ b/xen/arch/x86/mm/guest_walk.c
>> @@ -92,8 +92,13 @@ guest_walk_tables(struct vcpu *v, struct
>>  #if GUEST_PAGING_LEVELS >= 4 /* 64-bit only... */
>>      guest_l3e_t *l3p = NULL;
> 
> Shouldn't the above line be...
> 
>>      guest_l4e_t *l4p;
>> +    paddr_t l4gpa;
>> +#endif
>> +#if GUEST_PAGING_LEVELS >= 3 /* PAE or 64... */
> 
> ...here?

No (and note that's not code this patch changes).
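
For reference, a condensed (not verbatim) sketch of how those
declarations get consumed further down in guest_walk_tables() - the
exact upstream code differs, but the structure is:

    #if GUEST_PAGING_LEVELS >= 4 /* 64-bit only... */
        /* 4-level walk: a separate l3 table is mapped into l3p. */
        gw->l3e = l3p[guest_l3_table_offset(gla)];
    #else /* PAE only... */
        /* 3-level walk: top_map itself is the l3 table; l3p is unused. */
        gw->l3e = ((guest_l3e_t *)top_map)[guest_l3_table_offset(gla)];
    #endif

Since l3p is only ever dereferenced on the 64-bit path, its
declaration stays under the >= 4 guard.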

>> @@ -134,7 +139,15 @@ guest_walk_tables(struct vcpu *v, struct
>>      /* Get the l4e from the top level table and check its flags*/
>>      gw->l4mfn = top_mfn;
>>      l4p = (guest_l4e_t *) top_map;
>> -    gw->l4e = l4p[guest_l4_table_offset(gla)];
>> +    l4gpa = gfn_to_gaddr(top_gfn) +
>> +            guest_l4_table_offset(gla) * sizeof(gw->l4e);
>> +    if ( !cache ||
>> +         !hvmemul_read_cache(cache, l4gpa, 4, &gw->l4e, sizeof(gw->l4e)) )
>> +    {
>> +        gw->l4e = l4p[guest_l4_table_offset(gla)];
>> +        if ( cache )
>> +            hvmemul_write_cache(cache, l4gpa, 4, &gw->l4e, sizeof(gw->l4e));
> 
> No need to test cache here or below, since neither the read nor the 
> write function (yet) dereferences it.

I don't understand: the patch here prepares for a fully implemented
set of backing functions, i.e. in particular I don't want to have to
touch this code again once I add actual bodies to those functions.
Apart from that, I don't think callers should make assumptions like
this about callee behavior.
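
To spell out the pattern the hunk establishes (a sketch only: the
hvmemul_read_cache()/hvmemul_write_cache() bodies shown are the
assumed do-nothing stubs of this stage of the series, and their
signatures are approximated, not authoritative):

    /* Assumed stub helpers: they never dereference 'cache' yet, which
     * is why the NULL checks aren't strictly needed at this point. */
    bool hvmemul_read_cache(const struct hvmemul_cache *cache, paddr_t gpa,
                            unsigned int level, void *buffer,
                            unsigned int size)
    {
        return false; /* always a miss for now */
    }

    void hvmemul_write_cache(struct hvmemul_cache *cache, paddr_t gpa,
                             unsigned int level, const void *buffer,
                             unsigned int size)
    {
        /* nothing gets cached for now */
    }

    /* Per-level call-site pattern: try the cache first, fall back to
     * the mapped table on a miss, then populate the cache so that a
     * replayed walk observes the same PTE value. */
    if ( !cache ||
         !hvmemul_read_cache(cache, l4gpa, 4, &gw->l4e, sizeof(gw->l4e)) )
    {
        gw->l4e = l4p[guest_l4_table_offset(gla)];
        if ( cache )
            hvmemul_write_cache(cache, l4gpa, 4, &gw->l4e, sizeof(gw->l4e));
    }

Keeping the explicit NULL checks means the call sites stay correct
once the helpers grow real bodies and start dereferencing the
pointer.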

Jan


