[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH] x86/dom0: Add log for dom0_nodes and dom0_max_vcpus_max conflict


  • To: Jane Malalane <jane.malalane@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 9 Feb 2022 12:37:25 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 09 Feb 2022 11:37:38 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 09.02.2022 11:31, Jane Malalane wrote:
> This is not a bug. The xen cmdline can request both a NUMA restriction
> and a vcpu count restriction for Dom0. The node restriction will always
> be respected, which might mean either using dom0_max_vcpus <
> opt_dom0_max_vcpus_max

This is quite a normal case if a range was specified, or did you mean
opt_dom0_max_vcpus_min? But min and max get applied last anyway, so
they always override what was derived from dom0_nr_pxms.

> or using more vCPUs than pCPUs on a node. In
> the case where dom0_max_vcpus gets capped at the maximum number of
> pCPUs for the number of nodes chosen, it can be useful particularly
> for debugging to print a message in the serial log.

The number of vCPU-s Dom0 gets is logged in all cases. And the
reason why a certain value is used depends on more than just
the number-of-nodes restriction. I therefore wonder whether the
wording as you've chosen it is potentially misleading, and
properly expressing everything in a single message is going to
be quite a bit too noisy. Furthermore ...

> --- a/xen/arch/x86/dom0_build.c
> +++ b/xen/arch/x86/dom0_build.c
> @@ -240,6 +240,11 @@ unsigned int __init dom0_max_vcpus(void)
>      if ( max_vcpus > limit )
>          max_vcpus = limit;
>  
> +    if ( max_vcpus < opt_dom0_max_vcpus_max && max_vcpus > opt_dom0_max_vcpus_min )
> +        printk(XENLOG_INFO "Dom0 using %d vCPUs conflicts with request to use"
> +               " %d node(s), using up to %d vCPUs\n", opt_dom0_max_vcpus_max,
> +               dom0_nr_pxms, max_vcpus);

... the function can be called more than once, whereas such a
message (if we really want it) would better be issued just once.

To answer your later reply to yourself: I think printk() is fine
here (again assuming we want such a message in the first place);
it's a boot-time-only message after all.

Jan



