RE: [PATCH 30/37] xen/arm: introduce a helper to parse device tree memory node
Hi Stefano,
> -----Original Message-----
> From: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> Sent: 2021年9月24日 11:05
> To: Wei Chen <Wei.Chen@xxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; sstabellini@xxxxxxxxxx; julien@xxxxxxx;
> Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
> Subject: Re: [PATCH 30/37] xen/arm: introduce a helper to parse device
> tree memory node
>
> On Thu, 23 Sep 2021, Wei Chen wrote:
> > Memory blocks' NUMA ID information is stored in device tree's
> > memory nodes as "numa-node-id". We need a new helper to parse
> > and verify this ID from memory nodes.
> >
> > Signed-off-by: Wei Chen <wei.chen@xxxxxxx>
>
> There are tabs for indentation in this patch, we use spaces.
>
OK
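
For reference, the devicetree binding this helper consumes pairs a standard memory node with the "numa-node-id" property. A minimal, hypothetical fragment (addresses and sizes invented for illustration, with #address-cells = <2> and #size-cells = <2>):

```dts
/* Hypothetical example: one memory node per NUMA node, with the
 * physical ranges in "reg" and the owning node in "numa-node-id". */
memory@80000000 {
    device_type = "memory";
    reg = <0x0 0x80000000 0x0 0x40000000>;
    numa-node-id = <0>;
};

memory@c0000000 {
    device_type = "memory";
    reg = <0x0 0xc0000000 0x0 0x40000000>;
    numa-node-id = <1>;
};
```
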
>
> > ---
> > xen/arch/arm/numa_device_tree.c | 80 +++++++++++++++++++++++++++++++++
> > 1 file changed, 80 insertions(+)
> >
> > diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
> > index 2428fbae0b..7918a397fa 100644
> > --- a/xen/arch/arm/numa_device_tree.c
> > +++ b/xen/arch/arm/numa_device_tree.c
> > @@ -42,6 +42,35 @@ static int __init fdt_numa_processor_affinity_init(nodeid_t node)
> > return 0;
> > }
> >
> > +/* Callback for parsing of the memory regions affinity */
> > +static int __init fdt_numa_memory_affinity_init(nodeid_t node,
> > + paddr_t start, paddr_t size)
>
> Please align the parameters
>
OK
>
> > +{
> > + int ret;
> > +
> > + if ( srat_disabled() )
> > + {
> > + return -EINVAL;
> > + }
> > +
> > + if ( !numa_memblks_available() )
> > + {
> > + dprintk(XENLOG_WARNING,
> > + "Too many numa entry, try bigger NR_NODE_MEMBLKS \n");
> > + bad_srat();
> > + return -EINVAL;
> > + }
> > +
> > + ret = numa_update_node_memblks(node, start, size, false);
> > + if ( ret != 0 )
> > + {
> > + bad_srat();
> > + return -EINVAL;
> > + }
> > +
> > + return 0;
> > +}
>
> Aside from spaces/tabs, this is a lot better!
>
ok
>
> > /* Parse CPU NUMA node info */
> > static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
> > {
> > @@ -56,3 +85,54 @@ static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
> >
> > return fdt_numa_processor_affinity_init(nid);
> > }
> > +
> > +/* Parse memory node NUMA info */
> > +static int __init fdt_parse_numa_memory_node(const void *fdt, int node,
> > + const char *name, uint32_t addr_cells, uint32_t size_cells)
>
> Please align the parameters
>
ok
>
> > +{
> > + uint32_t nid;
> > + int ret = 0, len;
> > + paddr_t addr, size;
> > + const struct fdt_property *prop;
> > + uint32_t idx, ranges;
> > + const __be32 *addresses;
> > +
> > + nid = device_tree_get_u32(fdt, node, "numa-node-id", MAX_NUMNODES);
> > + if ( nid >= MAX_NUMNODES )
> > + {
> > +     printk(XENLOG_WARNING "Node id %u exceeds maximum value\n", nid);
> > + return -EINVAL;
> > + }
> > +
> > + prop = fdt_get_property(fdt, node, "reg", &len);
> > + if ( !prop )
> > + {
> > + printk(XENLOG_WARNING
> > + "fdt: node `%s': missing `reg' property\n", name);
> > + return -EINVAL;
> > + }
> > +
> > + addresses = (const __be32 *)prop->data;
> > + ranges = len / (sizeof(__be32)* (addr_cells + size_cells));
> > + for ( idx = 0; idx < ranges; idx++ )
> > + {
> > +        device_tree_get_reg(&addresses, addr_cells, size_cells, &addr, &size);
> > + /* Skip zero size ranges */
> > + if ( !size )
> > + continue;
> > +
> > + ret = fdt_numa_memory_affinity_init(nid, addr, size);
> > + if ( ret ) {
> > + return -EINVAL;
> > + }
> > + }
>
> I take it it would be difficult to parse numa-node-id and call
> fdt_numa_memory_affinity_init from
> xen/arch/arm/bootfdt.c:device_tree_get_meminfo. Is it because
> device_tree_get_meminfo is called too early?
>
When I was composing this patch, Penny's patch hadn't been merged.
I will look into it.
>
> > + if ( idx == 0 )
> > + {
> > + printk(XENLOG_ERR
> > +           "bad property in memory node, idx=%d ret=%d\n", idx, ret);
> > + return -EINVAL;
> > + }
> > +
> > + return 0;
> > +}
> > --
> > 2.25.1
> >