
Re: [PATCH v2 1/3] xen/sched: introduce cpupool_update_node_affinity()


  • To: Juergen Gross <jgross@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 15 Aug 2022 13:41:53 +0200
  • Cc: George Dunlap <george.dunlap@xxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 15 Aug 2022 11:42:06 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 15.08.2022 13:04, Juergen Gross wrote:
> For updating the node affinities of all domains in a cpupool add a new
> function cpupool_update_node_affinity().
> 
> In order to avoid multiple allocations of cpumasks carve out memory
> allocation and freeing from domain_update_node_affinity() into new
> helpers, which can be used by cpupool_update_node_affinity().
> 
> Modify domain_update_node_affinity() to take an additional parameter
> for passing the allocated memory in and to allocate and free the memory
> via the new helpers in case NULL was passed.
> 
> This will help later to pre-allocate the cpumasks in order to avoid
> allocations in stop-machine context.
> 
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>

Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
with the observation that ...

> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -1824,9 +1824,28 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
>      return ret;
>  }
>  
> -void domain_update_node_affinity(struct domain *d)
> +bool update_node_aff_alloc(struct affinity_masks *affinity)
>  {
> -    cpumask_var_t dom_cpumask, dom_cpumask_soft;
> +    if ( !alloc_cpumask_var(&affinity->hard) )
> +        return false;
> +    if ( !alloc_cpumask_var(&affinity->soft) )
> +    {
> +        free_cpumask_var(affinity->hard);
> +        return false;
> +    }
> +
> +    return true;
> +}
> +
> +void update_node_aff_free(struct affinity_masks *affinity)
> +{
> +    free_cpumask_var(affinity->soft);
> +    free_cpumask_var(affinity->hard);
> +}
> +
> +void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity)
> +{
> +    struct affinity_masks masks = { };

... the initializer doesn't really look to be needed here, just like
you don't have one in cpupool_update_node_affinity(). The one thing
I'm not sure about is whether an old gcc might wrongly warn about a
potentially uninitialized variable once the initializer is dropped ...
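
Purely for illustration, this is the allocate-on-NULL pattern as I read
it; the struct layout and the function body below are my reconstruction
from the hunk above, not the actual patch code:

/* Assumed from the patch (definition not visible in this hunk): */
struct affinity_masks {
    cpumask_var_t hard;
    cpumask_var_t soft;
};

void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity)
{
    struct affinity_masks masks;          /* no "= { }" initializer */
    bool alloc = !affinity;

    if ( alloc )
    {
        /* Fills both masks.hard and masks.soft before any read. */
        if ( !update_node_aff_alloc(&masks) )
            return;
        affinity = &masks;
    }

    /* ... compute d->node_affinity using affinity->hard / affinity->soft ... */

    if ( alloc )
        update_node_aff_free(&masks);
}

Since the local is only ever reached through "affinity", and only after
update_node_aff_alloc() has set both fields, the initializer adds
nothing; the sole risk is a compiler that can't follow the two
"if ( alloc )" blocks emitting a maybe-uninitialized warning.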

Jan