
Re: [PATCH v2 3/3] x86/mem_sharing: make fork_reset more configurable





On Wed, Mar 30, 2022, 9:34 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
On 30.03.2022 15:19, Tamas K Lengyel wrote:
> On Wed, Mar 30, 2022, 6:31 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>> On 29.03.2022 16:03, Tamas K Lengyel wrote:
>>> --- a/xen/arch/x86/include/asm/mem_sharing.h
>>> +++ b/xen/arch/x86/include/asm/mem_sharing.h
>>> @@ -85,6 +85,9 @@ static inline bool mem_sharing_is_fork(const struct domain *d)
>>>  int mem_sharing_fork_page(struct domain *d, gfn_t gfn,
>>>                            bool unsharing);
>>>
>>> +int mem_sharing_fork_reset(struct domain *d, bool reset_state,
>>> +                           bool reset_memory);
>>
>> Please avoid passing multiple booleans, even more so when you already
>> derive them from a single "flags" value. This would likely be easiest
>> if you re-used the VM_EVENT_FLAG_RESET_FORK_* values also for
>> XENMEM_FORK_RESET_*, with suitable BUILD_BUG_ON() put in place to
>> prove they're the same.
>
> I don't see why that would be an improvement in any way. I also don't want
> to tie the VM_EVENT flags to the XENMEM ones, as they are separate
> interfaces. I'd rather just drop the changes to the XENMEM interface than
> do that.

If the function gained two or three more flags, how would that look to
you? And how would you easily identify the correct order of all the
booleans?

IMHO we can cross that bridge if and when it becomes necessary. I'm also not sure what you mean by the order of the booleans - isn't that irrelevant, since the booleans are named?
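(For illustration only, a minimal sketch of the flag-aliasing approach being suggested, assuming the exact XENMEM_FORK_RESET_STATE/MEMORY spellings and a made-up function body; the BUILD_BUG_ON()s are what would tie the two interfaces together:)

int mem_sharing_fork_reset(struct domain *d, unsigned int flags)
{
    /* Prove the two interfaces use identical bit values. */
    BUILD_BUG_ON(XENMEM_FORK_RESET_STATE != VM_EVENT_FLAG_RESET_FORK_STATE);
    BUILD_BUG_ON(XENMEM_FORK_RESET_MEMORY != VM_EVENT_FLAG_RESET_FORK_MEMORY);

    /* Reject any bit neither interface defines. */
    if ( flags & ~(XENMEM_FORK_RESET_STATE | XENMEM_FORK_RESET_MEMORY) )
        return -EINVAL;

    if ( flags & XENMEM_FORK_RESET_STATE )
        ; /* reset vCPU/emulation state of the fork */

    if ( flags & XENMEM_FORK_RESET_MEMORY )
        ; /* drop pages populated into the fork since forking/last reset */

    return 0;
}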


>>> @@ -394,6 +399,16 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved)
>>>              if ( rsp.reason == VM_EVENT_REASON_MEM_PAGING )
>>>                  p2m_mem_paging_resume(d, &rsp);
>>>  #endif
>>> +#ifdef CONFIG_MEM_SHARING
>>> +            if ( mem_sharing_is_fork(d) )
>>> +            {
>>> +                bool reset_state = rsp.flags & VM_EVENT_FLAG_RESET_FORK_STATE;
>>> +                bool reset_mem = rsp.flags & VM_EVENT_FLAG_RESET_FORK_MEMORY;
>>> +
>>> +                if ( reset_state || reset_mem )
>>> +                    ASSERT(!mem_sharing_fork_reset(d, reset_state, reset_mem));
>>> +            }
>>> +#endif
>>
>> Should the two flags be rejected in the "else" case, rather than
>> being silently ignored?
>
> What do you mean by rejected? There is no feasible "rejection" that could
> be done in this path other than skipping it.

IOW no return value being handed back to the requester? The function
does have an error return path, so I don't see why you couldn't return
-EINVAL.

The way vm_event response flags work right now is "best effort". An error code from this path never reaches the callers, which are usually callback functions that return these flags. The best way for an error code to reach the caller would be to create a separate vm_event on the ring and send that, but such an event doesn't exist today and also hasn't been needed. It certainly isn't needed here, since there should be no feasible way for a fork to fail a reset request (hence the assert).
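For reference, the rejection being discussed would only be a small variation of the quoted hunk, roughly along these lines (a sketch only, not the patch; whether the -EINVAL would ever reach the requester is exactly the open question above):

#ifdef CONFIG_MEM_SHARING
            if ( rsp.flags & (VM_EVENT_FLAG_RESET_FORK_STATE |
                              VM_EVENT_FLAG_RESET_FORK_MEMORY) )
            {
                /*
                 * Reject the flags for non-fork domains instead of
                 * silently ignoring them.
                 */
                if ( !mem_sharing_is_fork(d) )
                    return -EINVAL;

                if ( mem_sharing_fork_reset(
                         d, rsp.flags & VM_EVENT_FLAG_RESET_FORK_STATE,
                         rsp.flags & VM_EVENT_FLAG_RESET_FORK_MEMORY) )
                    return -EINVAL;
            }
#endif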

Tamas

 

