
Re: [PATCH] x86/paravirt: merge activate_mm and dup_mmap callbacks



On 1/15/23 10:43 PM, Juergen Gross wrote:
> On 16.01.23 05:27, Srivatsa S. Bhat wrote:
>>
>> Hi Juergen,
>>
>> On 1/12/23 7:21 AM, Juergen Gross wrote:
>>> The two paravirt callbacks .mmu.activate_mm and .mmu.dup_mmap share
>>> the same implementation in all cases: for Xen PV guests they pin the
>>> PGD of the new mm_struct, and for all other cases they are a NOP.
>>>
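[ For the archives, the pre-patch Xen PV state is roughly as below -- a
  simplified sketch from my reading of arch/x86/xen/mmu_pv.c, not the
  literal code. Both callbacks boil down to pinning the new mm's PGD
  under page_table_lock:

  static void xen_activate_mm(struct mm_struct *prev,
                              struct mm_struct *next)
  {
          /* Pin the PGD of the mm being switched to. */
          spin_lock(&next->page_table_lock);
          xen_pgd_pin(next);
          spin_unlock(&next->page_table_lock);
  }

  static void xen_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm)
  {
          /* Pin the PGD of the freshly duplicated mm. */
          spin_lock(&mm->page_table_lock);
          xen_pgd_pin(mm);
          spin_unlock(&mm->page_table_lock);
  } ]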
>>
>> I was expecting that the duplicated functions xen_activate_mm() and
>> xen_dup_mmap() would be merged into a common one, and that both the
>> .mmu.activate_mm and .mmu.dup_mmap callbacks would be mapped to that
>> common implementation for Xen PV.
>>
>>> So merge them into a common callback .mmu.enter_mmap (mirroring the
>>> already existing .mmu.exit_mmap).
>>>
>>
>> Instead, this patch seems to be merging the callbacks themselves...
>>
>> I see that's not an issue right now, since there is no other actual
>> user of these callbacks. But are we sure that merging the callbacks is
>> a good idea just because the current user (Xen PV) has the same
>> implementation for both? The callbacks are invoked at distinct points,
>> from fork and exec respectively, so wouldn't it be valuable to retain
>> that distinction in the callbacks' semantics as well?
>>
>> However, if you believe that two separate callbacks are not really
>> required here (because there is no significant difference in what they
>> mean, rather than because their implementations happen to be the same
>> right now), then could you please expand on this and call it out in
>> the commit message?
> 
> Would you be fine with:
> 
>   In the end both callbacks are meant to register an address space with the
>   underlying hypervisor, so there needs to be only a single callback for that
>   purpose.
> 

Sure, that looks good. Thank you!
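
For anyone reading along in the archives, the merged shape would then
look roughly like the sketch below (my understanding of the proposal,
not the final patch): a single .mmu.enter_mmap callback, invoked from
both the fork and exec paths, that defaults to a NOP and that Xen PV
overrides to pin the new mm's PGD:

  static void xen_enter_mmap(struct mm_struct *mm)
  {
          /* Register the new address space with Xen by pinning its PGD. */
          spin_lock(&mm->page_table_lock);
          xen_pgd_pin(mm);
          spin_unlock(&mm->page_table_lock);
  }

  /* Xen PV setup would then install it along the lines of: */
  pv_ops.mmu.enter_mmap = xen_enter_mmap;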
 
Regards,
Srivatsa
VMware Photon OS
