
Re: [Xen-devel] [PATCH for-4.13] xen/arm: fix duplicate memory node in DT



On Mon, 7 Oct 2019, Julien Grall wrote:
> Hi,
> 
> On 07/10/2019 22:30, Stefano Stabellini wrote:
> > On Mon, 7 Oct 2019, Julien Grall wrote:
> >> On 05/10/2019 00:09, Stefano Stabellini wrote:
> >>> When reserved-memory regions are present in the host device tree, dom0
> >>> is started with multiple memory nodes. Each memory node should have a
> >>> unique name, but today they are all called "memory" leading to Linux
> >>> printing the following warning at boot:
> >>>
> >>>     OF: Duplicate name in base, renamed to "memory#1"
> >>>
> >>> This patch fixes the problem by appending a "@<unit-address>" to the
> >>> name, as per the Device Tree specification, where <unit-address> matches
> >>> the base address of the first region.
> >>>
> >>> Reported-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
> >>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
> >>>
> >>> ---
> >>>
> >>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> >>> index 921b054520..a4c07db383 100644
> >>> --- a/xen/arch/arm/domain_build.c
> >>> +++ b/xen/arch/arm/domain_build.c
> >>> @@ -646,16 +646,22 @@ static int __init make_memory_node(const struct domain *d,
> >>>        int res, i;
> >>>        int reg_size = addrcells + sizecells;
> >>>        int nr_cells = reg_size * mem->nr_banks;
> >>> +    /* Placeholder for memory@ + a 32-bit number + \0 */
> >>> +    char buf[18];
> >>>        __be32 reg[NR_MEM_BANKS * 4 /* Worst case addrcells + sizecells */];
> >>>        __be32 *cells;
> >>> 
> >>>      BUG_ON(nr_cells >= ARRAY_SIZE(reg));
> >>> +    /* Nothing to do */
> >>
> >> This is a departure from the current solution, where a node will be
> >> created with no "reg" property. I think this change of behavior should
> >> at least be described in the commit message, if not implemented in a
> >> separate patch. But...
> >>
> >>> +    if ( mem->nr_banks == 0 )
> >>> +        return 0;
> >>
> >> ... I don't think we want to ignore it. The caller most likely messed up 
> >> the
> >> banks and we should instead report an error.
> > 
> > I admit it wasn't my intention to change the current behavior. As I was
> > looking through the code I noticed that we call make_memory_node for
> > both normal memory and reserved_memory. Of course, reserved_memory could
> > have no banks. So I thought it would be good to check whether there are
> > any banks before continuing because now we are going to access
> > mem->bank[0].start, which would be a mistake if there are no banks.
> 
> Ok, so this is not a theoretical bug as I first thought, but a real bug on
> platforms where the DT does not have a reserved-regions node.
> 
> In this case, this should be in a separate patch, as there are now two
> different bugs being solved in one patch.

OK
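
(For reference, once split, the rename-only change boils down to deriving the
node name from the base of the first bank before starting the node. A rough
sketch of that part, sitting inside make_memory_node() and reusing its
fdt/mem/res parameters and locals -- not the final patch, and truncating the
unit-address to 32 bits to match the buf size quoted above:)

    /* Placeholder for "memory@" + a 32-bit hex number + '\0' */
    char buf[18];

    /* Unit-address: low 32 bits of the first bank's base address. */
    snprintf(buf, sizeof(buf), "memory@%x",
             (unsigned int)mem->bank[0].start);

    /* Start the node with a unique name instead of plain "memory". */
    res = fdt_begin_node(fdt, buf);
    if ( res )
        return res;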


> > Regarding your comment about returning an error: we could return ENOENT,
> > but then we would also have to handle ENOENT specially on the caller
> > side (handle_node). Or we would have to add a check for
> > ( mem->nr_banks > 0 ) to avoid calling make_memory_node when nr_banks
> > is zero.
> 
> I would much prefer if we check mem->nr_banks > 0 for reserved-regions
> beforehand.

All right
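
(Concretely, something along these lines at the call site -- a sketch only:
handle_node is the caller mentioned above, while kinfo->fdt and
bootinfo.reserved_mem are assumptions about how the builder reaches the fdt
and the reserved-memory banks:)

    /* Only create the second memory node when reserved-memory banks exist. */
    if ( bootinfo.reserved_mem.nr_banks > 0 )
    {
        res = make_memory_node(d, kinfo->fdt, addrcells, sizecells,
                               &bootinfo.reserved_mem);
        if ( res )
            return res;
    }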


> Both will need a "Fixes:" tag to keep track of the original patch.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

