
Re: [Xen-devel] [PATCH 3/6] x86/HVM: implement memory read caching



>>> On 19.07.18 at 16:20, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: 19 July 2018 11:49
>> @@ -1046,6 +1060,8 @@ static int __hvmemul_read(
>>          pfec |= PFEC_implicit;
>>      else if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
>>          pfec |= PFEC_user_mode;
>> +    if ( access_type == hvm_access_insn_fetch )
>> +        pfec |= PFEC_insn_fetch;
> 
> Since you OR the insn_fetch flag in here...
> 
>> 
>>      rc = hvmemul_virtual_to_linear(
>>          seg, offset, bytes, &reps, access_type, hvmemul_ctxt, &addr);
>> @@ -1059,7 +1075,8 @@ static int __hvmemul_read(
>> 
>>      rc = ((access_type == hvm_access_insn_fetch) ?
>>            hvm_fetch_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo) :
> 
> ...could you not just use hvm_copy_from_guest_linear() here regardless of 
> access_type (and just NULL out the cache argument if it is an insn_fetch)?
> 
> AFAICT the only reason hvm_fetch_from_guest_linear() exists is to OR the 
> extra flag in.
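
For reference, a sketch of the relationship referred to here, based on the
pre-series helpers in xen/arch/x86/hvm/hvm.c and leaving out the cache
argument this series introduces: the fetch variant merely ORs the extra flag
into the error-code bits before taking the same copy path, so

    /* Sketch only - the cache argument added by this series is omitted. */
    hvm_fetch_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);

behaves like

    hvm_copy_from_guest_linear(p_data, addr, bytes, pfec | PFEC_insn_fetch,
                               &pfinfo);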

Well, technically it looks like I indeed could. I'm not sure that's a good
idea though - the visual separation of "copy" vs "fetch" is helpful, I think.
Let's see whether others voice an opinion in one direction or the other.
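
For illustration, the collapsed call site under discussion could look roughly
like the below - a sketch only, using the pre-series five-argument
hvm_copy_from_guest_linear() shape and omitting the cache argument this
series adds (which, per the suggestion, would be NULL for insn fetches):

    /*
     * pfec already carries PFEC_insn_fetch for fetches (ORed in by the
     * hunk further up), so no separate fetch call would be needed here.
     */
    rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);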

Jan




 

