
RE: [PATCH v6 5/6] xen/x86: move NUMA process nodes code from x86 to common


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Wei Chen <Wei.Chen@xxxxxxx>
  • Date: Wed, 19 Oct 2022 02:17:37 +0000
  • Cc: nd <nd@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 19 Oct 2022 02:18:08 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: 18 October 2022 22:08
> To: Wei Chen <Wei.Chen@xxxxxxx>
> Cc: nd <nd@xxxxxxx>; Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Roger Pau
> Monné <roger.pau@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>; George Dunlap
> <george.dunlap@xxxxxxxxxx>; Julien Grall <julien@xxxxxxx>; Stefano
> Stabellini <sstabellini@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH v6 5/6] xen/x86: move NUMA process nodes code
> from x86 to common
> 
> On 11.10.2022 13:17, Wei Chen wrote:
> > --- a/xen/arch/x86/numa.c
> > +++ b/xen/arch/x86/numa.c
> > @@ -46,6 +46,11 @@ bool arch_numa_disabled(void)
> >      return acpi_numa < 0;
> >  }
> >
> > +bool arch_numa_unavailable(void)
> 
> __init ?

Yes, this function is only called from init code.
I will add the __init annotation.
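
For clarity, the helper would then look roughly like this (just a sketch;
the body shown simply mirrors arch_numa_disabled() above, the actual
condition stays whatever the patch already has, only the annotation is new):

    /* Sketch: the added __init annotation is the point here. */
    bool __init arch_numa_unavailable(void)
    {
        return acpi_numa < 0;
    }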

> 
> > @@ -31,11 +46,334 @@ nodemask_t __read_mostly node_online_map = { { [0] = 1UL } };
> >
> >  bool __ro_after_init numa_off;
> >
> > +const char *__ro_after_init numa_fw_nid_name = "NONAME";
> 
> Didn't you mean to leave this at NULL for the DT case? (But yes, this
> way you avoid a conditional at every printk() using it.)
> 

Yes, exactly. Keeping a non-NULL default avoids a conditional at every
printk() that uses it.

> I'm also uncertain of "NONAME" - personally I think e.g. "???" would
> be better, just in case a message actually is logged with this still
> un-overridden.
> 

Ok, I will use "???" for this default value.
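
So the declaration will become something like the following (sketch only;
the ACPI and DT setup paths still override the name with their own term,
e.g. "PXM" on the ACPI/SRAT side):

    /* Fallback, only printed if firmware-specific code never overrides it. */
    const char *__ro_after_init numa_fw_nid_name = "???";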

> > +bool __init numa_update_node_memblks(nodeid_t node, unsigned int arch_nid,
> > +                                     paddr_t start, paddr_t size, bool hotplug)
> > +    node_memblk_range[i].start = start;
> > +    node_memblk_range[i].end = end;
> > +
> > +    memmove(&memblk_nodeid[i + 1], &memblk_nodeid[i],
> > +            (num_node_memblks - i) * sizeof(*memblk_nodeid));
> > +    memblk_nodeid[i] = node;
> > +
> > +    if ( hotplug ) {
> 
> Nit: Placement of brace.
> 

Ok.
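
That is, following the Xen coding style, the block becomes (body elided,
it is unchanged from this patch):

    if ( hotplug )
    {
        ...
    }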

> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -159,6 +159,8 @@
> >  #define PGT_TYPE_INFO_INITIALIZER 0
> >  #endif
> >
> > +paddr_t __read_mostly mem_hotplug;
> 
> Not __ro_after_init?

I will add it.
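
That is, the declaration will read:

    paddr_t __ro_after_init mem_hotplug;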

Thanks,
Wei Chen

> 
> Jan

 

