
Re: [Xen-devel] [PATCH 3/6] libxl/xl: add cacheability option to iomem



On Wed, 27 Feb 2019, Julien Grall wrote:
> Hi Stefano,
> 
> On 2/26/19 11:07 PM, Stefano Stabellini wrote:
> > Parse a new cacheability option for the iomem parameter, it can be
> > "devmem" for device memory mappings, which is the default, or "memory"
> > for normal memory mappings.
> > 
> > Store the parameter in a new field in libxl_iomem_range.
> > 
> > Pass the cacheability option to xc_domain_memory_mapping.
> > 
> > Signed-off-by: Stefano Stabellini <stefanos@xxxxxxxxxx>
> > CC: ian.jackson@xxxxxxxxxxxxx
> > CC: wei.liu2@xxxxxxxxxx
> > ---
> >   docs/man/xl.cfg.pod.5.in    |  4 +++-
> >   tools/libxl/libxl_create.c  | 12 +++++++++++-
> >   tools/libxl/libxl_types.idl |  7 +++++++
> 
> Extension of libxl_types.idl should be accompanied by a new define in
> libxl.h, so a toolstack can deal with multiple libxl versions.

I'll do that.
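
For reference, libxl advertises new libxl_types.idl fields to toolstacks
through LIBXL_HAVE_* guards in tools/libxl/libxl.h. A minimal sketch of what
such a define could look like for this series (the macro name below is an
assumption for illustration, not taken from the posted patch or a later
revision):

    /* Hypothetical name; the define actually chosen for the series may
     * differ.  Its presence lets a toolstack test at compile time whether
     * this libxl has the cacheability field in libxl_iomem_range. */
    #define LIBXL_HAVE_IOMEM_CACHEABILITY 1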


> >   tools/xl/xl_parse.c         | 17 +++++++++++++++--
> >   4 files changed, 36 insertions(+), 4 deletions(-)
> > 
> > diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in
> > index b1c0be1..655008a 100644
> > --- a/docs/man/xl.cfg.pod.5.in
> > +++ b/docs/man/xl.cfg.pod.5.in
> > @@ -1222,7 +1222,7 @@ is given in hexadecimal format and may either be a
> > range, e.g. C<2f8-2ff>
> >   It is recommended to only use this option for trusted VMs under
> >   administrator's control.
> >   -=item B<iomem=[ "IOMEM_START,NUM_PAGES[@GFN]",
> > "IOMEM_START,NUM_PAGES[@GFN]", ...]>
> > +=item B<iomem=[ "IOMEM_START,NUM_PAGES[@GFN],CACHEABILITY",
> > "IOMEM_START,NUM_PAGES[@GFN],CACHEABILITY", ...]>
> 
> Below you suggest the cacheability is optional. However, that is not
> reflected here. I think you want to put ',CACHEABILITY' between [] as is
> done for '@GFN'.

Good point
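
With the brackets added, the documented syntax would read along these lines
(a sketch of the suggested change, not wording taken from a later revision
of the patch):

    =item B<iomem=[ "IOMEM_START,NUM_PAGES[@GFN][,CACHEABILITY]",
    "IOMEM_START,NUM_PAGES[@GFN][,CACHEABILITY]", ...]>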


> >     Allow auto-translated domains to access specific hardware I/O memory
> > pages.
> >   @@ -1233,6 +1233,8 @@ B<GFN> is not specified, the mapping will be
> > performed using B<IOMEM_START>
> >   as a start in the guest's address space, therefore performing a 1:1
> > mapping
> >   by default.
> >   All of these values must be given in hexadecimal format.
> > +B<CACHEABILITY> can be "devmem" for device memory, the default if not
> > +specified, or it can be "memory" for normal memory.
> 
> I was planning to comment about the naming and documentation. But I will do it
> in patch #1 where Jan already started a discussion about it.

All right. I don't have good ideas about naming. I am happy to change.


> > diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
> > index 352cd21..1da2670 100644
> > --- a/tools/xl/xl_parse.c
> > +++ b/tools/xl/xl_parse.c
> > @@ -1883,6 +1883,7 @@ void parse_config_data(const char *config_source,
> >           }
> >           for (i = 0; i < num_iomem; i++) {
> >               int used;
> > +            char cache[7];
> >                 buf = xlu_cfg_get_listitem (iomem, i);
> >               if (!buf) {
> > @@ -1891,15 +1892,27 @@ void parse_config_data(const char *config_source,
> >                   exit(1);
> >               }
> >               libxl_iomem_range_init(&b_info->iomem[i]);
> > -            ret = sscanf(buf, "%" SCNx64",%" SCNx64"%n@%" SCNx64"%n",
> > +            ret = sscanf(buf, "%" SCNx64",%" SCNx64"%n@%" SCNx64"%n,%6s%n",
> >                            &b_info->iomem[i].start,
> >                            &b_info->iomem[i].number, &used,
> > -                         &b_info->iomem[i].gfn, &used);
> > +                         &b_info->iomem[i].gfn, &used,
> > +                         cache, &used);
> 
> If I read it correctly, you will require the GFN to be specified in order to
> get the "cacheability". Am I correct?

Yes, I was lazy :-)  I'll fix it.
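
For context, one way to make both the @GFN and the ,CACHEABILITY suffixes
independently optional is to scan the string in stages rather than with a
single sscanf format string. The sketch below only illustrates that approach
under assumed names (parse_iomem_one and struct iomem_range are invented for
the example; the actual fix in a later revision may look quite different):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct iomem_range {
        uint64_t start, number, gfn;
        char cache[7];          /* "devmem" or "memory" plus NUL */
    };

    /* Parse "IOMEM_START,NUM_PAGES[@GFN][,CACHEABILITY]" so that the
     * cacheability suffix is accepted whether or not @GFN is present. */
    static int parse_iomem_one(const char *buf, struct iomem_range *r)
    {
        int used = 0;

        if (sscanf(buf, "%" SCNx64 ",%" SCNx64 "%n",
                   &r->start, &r->number, &used) < 2)
            return -1;
        buf += used;

        if (*buf == '@') {                       /* optional @GFN */
            if (sscanf(buf, "@%" SCNx64 "%n", &r->gfn, &used) < 1)
                return -1;
            buf += used;
        } else {
            r->gfn = r->start;                   /* default: 1:1 mapping */
        }

        if (*buf == ',') {                       /* optional ,CACHEABILITY */
            if (sscanf(buf, ",%6s%n", r->cache, &used) < 1)
                return -1;
            buf += used;
        } else {
            strcpy(r->cache, "devmem");          /* default cacheability */
        }

        return *buf == '\0' ? 0 : -1;
    }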
