
Re: [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity()


  • To: Juergen Gross <jgross@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 3 Aug 2022 09:50:18 +0200
  • Cc: George Dunlap <george.dunlap@xxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 03 Aug 2022 07:50:34 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 02.08.2022 15:27, Juergen Gross wrote:
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -1790,28 +1790,14 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t 
> cmd,
>      return ret;
>  }
>  
> -void domain_update_node_affinity(struct domain *d)
> +void domain_update_node_affinity_noalloc(struct domain *d,
> +                                         const cpumask_t *online,
> +                                         struct affinity_masks *affinity)
>  {
> -    cpumask_var_t dom_cpumask, dom_cpumask_soft;
>      cpumask_t *dom_affinity;
> -    const cpumask_t *online;
>      struct sched_unit *unit;
>      unsigned int cpu;
>  
> -    /* Do we have vcpus already? If not, no need to update node-affinity. */
> -    if ( !d->vcpu || !d->vcpu[0] )
> -        return;
> -
> -    if ( !zalloc_cpumask_var(&dom_cpumask) )
> -        return;
> -    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
> -    {
> -        free_cpumask_var(dom_cpumask);
> -        return;
> -    }

Instead of splitting the function, did you consider using
cond_zalloc_cpumask_var() here, thus allowing (but not requiring)
callers to pre-allocate the masks? Imo that would mean quite a bit
less code churn.
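
To illustrate the suggestion: with a conditional allocator, one function can serve both callers that pass NULL-initialised masks (allocation on demand, as before) and callers that pre-allocated them, so no *_noalloc split is needed. This is only a sketch of the pattern with stand-in types and plain C allocation, since cpumask_var_t and cond_zalloc_cpumask_var() are Xen-internal:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MASK_WORDS 4                    /* stand-in for a cpumask's size */
typedef unsigned long *cpumask_var_t;   /* stand-in for Xen's cpumask_var_t */

/* Allocate-and-zero only if not yet allocated, otherwise just clear --
 * modelling the behaviour of Xen's cond_zalloc_cpumask_var() with calloc. */
static int cond_zalloc_cpumask_var(cpumask_var_t *mask)
{
    if ( *mask == NULL )
        *mask = calloc(MASK_WORDS, sizeof(unsigned long));
    else
        memset(*mask, 0, MASK_WORDS * sizeof(unsigned long));
    return *mask != NULL;
}

/*
 * A caller passing NULL-initialised masks gets on-demand allocation;
 * a caller that pre-allocated them (e.g. once before a loop over many
 * domains, as in the cpupool case) has them cleared and reused.
 */
static int update_node_affinity(cpumask_var_t *hard, cpumask_var_t *soft)
{
    if ( !cond_zalloc_cpumask_var(hard) || !cond_zalloc_cpumask_var(soft) )
        return 0;
    /* ... compute the domain's node affinity into *hard / *soft ... */
    return 1;
}
```

The second call with already-allocated masks performs no allocation at all, which is the property the loop over a cpupool's domains wants.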

> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -410,6 +410,48 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
>      return ret;
>  }
>  
> +/* Update affinities of all domains in a cpupool. */
> +static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
> +{
> +    if ( !alloc_cpumask_var(&masks->hard) )
> +        return -ENOMEM;
> +    if ( alloc_cpumask_var(&masks->soft) )
> +        return 0;
> +
> +    free_cpumask_var(masks->hard);
> +    return -ENOMEM;
> +}

Wouldn't this be a nice general helper function, also usable from
outside of this CU?

As a nit - right now the only caller treats the return value as boolean,
so perhaps the function would better return bool?
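
For illustration, the bool-returning shape might look like the sketch below, keeping the cleanup-on-partial-failure of the posted patch. Stand-in types and plain calloc replace the Xen-internal cpumask helpers, so this only shows the pattern, not the real code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

#define MASK_BYTES 32  /* stand-in for a cpumask's size */

/* Stand-in for struct affinity_masks: two heap-allocated masks. */
struct affinity_masks {
    unsigned char *hard;
    unsigned char *soft;
};

/* Allocate both masks; on partial failure free what was obtained.
 * Returns bool rather than -ENOMEM, matching how the sole caller
 * already uses the result. */
static bool alloc_affin_masks(struct affinity_masks *masks)
{
    masks->hard = calloc(1, MASK_BYTES);
    if ( masks->hard == NULL )
        return false;
    masks->soft = calloc(1, MASK_BYTES);
    if ( masks->soft != NULL )
        return true;
    free(masks->hard);
    masks->hard = NULL;
    return false;
}

static void free_affin_masks(struct affinity_masks *masks)
{
    free(masks->soft);
    free(masks->hard);
}
```

A bool return also sidesteps the question of which errno value to pick when only one caller exists and it never propagates the code.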

Jan



 

