
Re: [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity()


  • To: Juergen Gross <jgross@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 3 Aug 2022 10:30:00 +0200
  • Cc: George Dunlap <george.dunlap@xxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 03 Aug 2022 08:30:18 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 03.08.2022 10:01, Juergen Gross wrote:
> On 03.08.22 09:50, Jan Beulich wrote:
>> On 02.08.2022 15:27, Juergen Gross wrote:
>>> --- a/xen/common/sched/core.c
>>> +++ b/xen/common/sched/core.c
>>> @@ -1790,28 +1790,14 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
>>>       return ret;
>>>   }
>>>   
>>> -void domain_update_node_affinity(struct domain *d)
>>> +void domain_update_node_affinity_noalloc(struct domain *d,
>>> +                                         const cpumask_t *online,
>>> +                                         struct affinity_masks *affinity)
>>>   {
>>> -    cpumask_var_t dom_cpumask, dom_cpumask_soft;
>>>       cpumask_t *dom_affinity;
>>> -    const cpumask_t *online;
>>>       struct sched_unit *unit;
>>>       unsigned int cpu;
>>>   
>>> -    /* Do we have vcpus already? If not, no need to update node-affinity. */
>>> -    if ( !d->vcpu || !d->vcpu[0] )
>>> -        return;
>>> -
>>> -    if ( !zalloc_cpumask_var(&dom_cpumask) )
>>> -        return;
>>> -    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
>>> -    {
>>> -        free_cpumask_var(dom_cpumask);
>>> -        return;
>>> -    }
>>
>> Instead of splitting the function, did you consider using
>> cond_zalloc_cpumask_var() here, thus allowing (but not requiring)
>> callers to pre-allocate the masks? Would imo be quite a bit less
>> code churn (I think).
> 
> This would require changing all callers of domain_update_node_affinity()
> to add the additional mask parameter. The struct affinity_masks, which is
> currently local to common/sched, would then need to be made globally
> visible.
> 
> I'm not sure this is a good idea.

Hmm, I see there are quite a few callers (so there would be code churn
elsewhere). But I don't think the struct would need to be made globally
visible - the majority of callers could simply pass NULL, making the
function use a local instance of the struct instead. Personally I think
that would still be neater than having a _noalloc-suffixed variant of the
function (especially in this case, where the name is already long).
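
Roughly what I have in mind (an untested sketch only - the actual
node-affinity calculation is elided, the parameter name is merely
exemplary, and the hard/soft members are the ones from your struct
affinity_masks):

void domain_update_node_affinity(struct domain *d,
                                 struct affinity_masks *affinity)
{
    struct affinity_masks masks = {};

    /* Do we have vcpus already? If not, no need to update node-affinity. */
    if ( !d->vcpu || !d->vcpu[0] )
        return;

    /* Callers without pre-allocated masks simply pass NULL. */
    if ( !affinity )
        affinity = &masks;

    /* Allocate (zeroed) only what the caller didn't supply. */
    if ( !cond_zalloc_cpumask_var(&affinity->hard) ||
         !cond_zalloc_cpumask_var(&affinity->soft) )
        goto out;

    /*
     * If callers may pass masks holding stale contents, an explicit
     * cpumask_clear() of both masks would be needed here.
     */

    /* ... existing calculation, using affinity->hard / affinity->soft ... */

 out:
    if ( affinity == &masks )
    {
        free_cpumask_var(masks.soft);
        free_cpumask_var(masks.hard);
    }
}
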
But I guess this is then up to you / the scheduler maintainers.

>>> --- a/xen/common/sched/cpupool.c
>>> +++ b/xen/common/sched/cpupool.c
>>> @@ -410,6 +410,48 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
>>>       return ret;
>>>   }
>>>   
>>> +/* Update affinities of all domains in a cpupool. */
>>> +static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
>>> +{
>>> +    if ( !alloc_cpumask_var(&masks->hard) )
>>> +        return -ENOMEM;
>>> +    if ( alloc_cpumask_var(&masks->soft) )
>>> +        return 0;
>>> +
>>> +    free_cpumask_var(masks->hard);
>>> +    return -ENOMEM;
>>> +}
>>
>> Wouldn't this be a nice general helper function, also usable from
>> outside of this CU?
> 
> I considered that, but wasn't sure it would really be helpful. The only
> potential other user would be domain_update_node_affinity(), which would
> require using the zalloc variant of the allocation in the helper (not
> that this would be a major problem, though).

I was actually thinking the other way around - the clearing of the masks
might better be moved into what is domain_update_node_affinity_noalloc()
in this version of the patch, so that the helper could continue to use
the non-clearing allocations.
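
For illustration only (a minimal sketch; "affinity" stands for whatever
the mask-holding parameter ends up being called):

    /* At the top of what is domain_update_node_affinity_noalloc() here: */
    cpumask_clear(affinity->hard);
    cpumask_clear(affinity->soft);

That way a single helper using plain alloc_cpumask_var() could serve both
potential users.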

Jan
