
Re: [Xen-devel] [PATCH v6 7/7] xen/arm: export shared memory regions as reserved-memory on device tree



On Wed, 1 Aug 2018, Julien Grall wrote:
> Hi,
> 
> On 31/07/18 19:23, Stefano Stabellini wrote:
> > Shared memory regions need to be advertised to the guest. Fortunately, a
> > device tree binding for special memory regions already exists:
> > reserved-memory.
> > 
> > Add a reserved-memory node for each shared memory region, for both
> > masters and slaves.
> > 
> > Signed-off-by: Stefano Stabellini <stefanos@xxxxxxxxxx>
> > ---
> >   tools/libxl/libxl_arch.h |  2 +-
> >   tools/libxl/libxl_arm.c  | 52 +++++++++++++++++++++++++++++++++++++++++++++---
> >   tools/libxl/libxl_dom.c  |  2 +-
> >   tools/libxl/libxl_x86.c  |  2 +-
> >   4 files changed, 52 insertions(+), 6 deletions(-)
> > 
> > diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
> > index 6a07ccf..3626e4a 100644
> > --- a/tools/libxl/libxl_arch.h
> > +++ b/tools/libxl/libxl_arch.h
> > @@ -36,7 +36,7 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
> >   /* setup arch specific hardware description, i.e. DTB on ARM */
> >   _hidden
> >   int libxl__arch_domain_init_hw_description(libxl__gc *gc,
> > -                                           libxl_domain_build_info *info,
> > +                                           libxl_domain_config *d_config,
> >                                              libxl__domain_build_state *state,
> >                                              struct xc_dom_image *dom);
> >   /* finalize arch specific hardware description. */
> > diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> > index 5f62e78..4020453 100644
> > --- a/tools/libxl/libxl_arm.c
> > +++ b/tools/libxl/libxl_arm.c
> > @@ -461,6 +461,49 @@ static int make_memory_nodes(libxl__gc *gc, void *fdt,
> >       return 0;
> >   }
> >  
> > +static int make_reserved_nodes(libxl__gc *gc, void *fdt,
> > +                               libxl_domain_config *d_config)
> > +{
> > +    int res, i;
> > +    const char *name;
> > +
> > +    if (d_config->num_sshms == 0)
> > +        return 0;
> > +
> > +    res = fdt_begin_node(fdt, "reserved-memory");
> > +    if (res) return res;
> > +
> > +    res = fdt_property_cell(fdt, "#address-cells", ROOT_ADDRESS_CELLS);
> > +    if (res) return res;
> > +
> > +    res = fdt_property_cell(fdt, "#size-cells", ROOT_SIZE_CELLS);
> > +    if (res) return res;
> > +
> > +    res = fdt_property(fdt, "ranges", NULL, 0);
> > +    if (res) return res;
> > +
> > +    for (i = 0; i < d_config->num_sshms; i++) {
> > +        uint64_t start = d_config->sshms[i].begin;
> > +        if (d_config->sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
> > +            start += d_config->sshms[i].offset;
> > +        name = GCSPRINTF("memory@%"PRIx64, start);
> 
> I understand the node will be useful to avoid the guest using it as normal
> memory.

Yes, that's right.


> We also need to make sure the guest can detect what it is used for
> (imagine a driver for it). So you probably want a new compatible string and
> more information in it.

Something like "xen,shared-memory" ?
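
For illustration, a minimal sketch of what that could look like inside the
per-region loop of make_reserved_nodes(), right after fdt_begin_node(). The
"xen,shared-memory" string is only the suggestion above, not an agreed
binding, and fdt_property_string() is the standard libfdt helper for
NUL-terminated string properties:

    /* Hypothetical: advertise what the region is, so a guest driver
     * can bind to it ("xen,shared-memory" is only a proposal here). */
    res = fdt_property_string(fdt, "compatible", "xen,shared-memory");
    if (res) return res;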


> Also, this would be clearer if the node were called xen-shmem@ rather than
> memory@.

Sure, we are free to choose the node name.
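
Putting both suggestions together, the per-region node generation could end
up looking roughly like the sketch below. This is only an illustration
against the v6 code quoted in this mail: the helper name make_one_shmem_node()
is made up, while fdt_property_regs(), GCSPRINTF(), ROOT_ADDRESS_CELLS and
ROOT_SIZE_CELLS are the existing libxl_arm.c helpers already used above.

    /* Sketch: one reserved-memory child per shared region, named
     * xen-shmem@<addr> and carrying the proposed compatible string. */
    static int make_one_shmem_node(libxl__gc *gc, void *fdt,
                                   uint64_t start, uint64_t size)
    {
        int res;
        const char *name = GCSPRINTF("xen-shmem@%"PRIx64, start);

        res = fdt_begin_node(fdt, name);
        if (res) return res;

        res = fdt_property_string(fdt, "compatible", "xen,shared-memory");
        if (res) return res;

        res = fdt_property_regs(gc, fdt, ROOT_ADDRESS_CELLS, ROOT_SIZE_CELLS,
                                1, start, size);
        if (res) return res;

        return fdt_end_node(fdt);
    }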


> > +
> > +        res = fdt_begin_node(fdt, name);
> > +        if (res) return res;
> > +
> > +        res = fdt_property_regs(gc, fdt, ROOT_ADDRESS_CELLS, ROOT_SIZE_CELLS,
> > +                                1, start, d_config->sshms[i].size);
> > +        if (res) return res;
> > +
> > +        res = fdt_end_node(fdt);
> > +        if (res) return res;
> > +    }
> > +
> > +    res = fdt_end_node(fdt);
> > +    if (res) return res;
> > +
> > +    return 0;
> > +}
