Re: [Xen-devel] [PATCH v2 7/7] tools: add tools support for Intel CAT
On Tue, 2015-03-24 at 19:20 +0800, Chao Peng wrote:
> On Thu, Mar 19, 2015 at 12:51:19PM +0000, Ian Campbell wrote:
> > On Thu, 2015-03-19 at 18:41 +0800, Chao Peng wrote:
> > > This is the xc/xl changes to support Intel Cache Allocation
> > > Technology (CAT). Two commands are introduced:
> > > - xl psr-cat-cbm-set [-s socket] <domain> <cbm>
> > >   Set cache capacity bitmasks (CBM) for a domain.
> > > - xl psr-cat-show <domain>
> > >   Show Cache Allocation Technology information.
> > >
> > > Signed-off-by: Chao Peng <chao.p.peng@xxxxxxxxxxxxxxx>
> > > ---
> > >  tools/libxc/include/xenctrl.h | 15 +++
> > >  tools/libxc/xc_psr.c          | 74 ++++++++++++++
> >
> > At the libxc level this looks like a reasonable wrapping of the
> > hypercall interface, so if the hypervisor folks are happy with that
> > then I'm happy with this.
> >
> > >  tools/libxl/libxl.h           | 18 ++++
> >
> > At the libxl level things seem to be rather opaque to me, i.e. what
> > is domain_data? What does it contain? Does it have any more
> > semantics than a 64-bit number?
>
> The 64-bit number now holds the cache bitmask (CBM) for the domain. In
> the future, the number may represent the memory bandwidth throttling
> for the domain. So use a neutral name here.

How is the memory bandwidth throttling for a domain represented? Is it
necessary and/or desirable to funnel such distinct operations through a
single libxl API call? Why not add more semantically meaningful APIs
for each case or set of related cases?

> > What future new values might there be for libxl_psr_cat_type?
>
> As far as I can see, besides L3_CBM, libxl_psr_cat_type may be L2_CBM
> or MEM_BANDWIDTH_THROTTLING in the future.
>
> > Also, does this function need to be in an ifdef, since it isn't
> > called unless LIBXL_HAVE_... is set? (IOW: does this compile for
> > ARM?)
>
> libxl_get_physinfo is for all archs, so I think it compiles for ARM.

I was meaning the other way around: the callers of get_phy_socket_num
are, I think, all within LIBXL_HAVE_.... So on architectures which
don't set those defines this static function will be unused, which will
cause the compiler to complain and therefore the build will fail.

> > > @@ -8057,6 +8070,202 @@ int main_psr_cmt_show(int argc, char **argv)
> > >  }
> > >  #endif
> > >
> > > +#ifdef LIBXL_HAVE_PSR_CAT
> > > +static int psr_cat_l3_cbm_set(uint32_t domid, uint32_t socket,
> > > +                              uint64_t cbm)
> > > +{
> > > +    int rc;
> > > +    uint32_t i, nr_sockets;
> > > +
> > > +    if (socket != ALL_SOCKETS) {
> > > +        return libxl_psr_cat_set_domain_data(ctx, domid,
> > > +                                             LIBXL_PSR_CAT_TYPE_L3_CBM,
> > > +                                             socket, cbm);
> > > +    } else {
> > > +        nr_sockets = get_phy_socket_num();
> >
> > I wonder if the libxl API ought to allow for an "all sockets"
> > argument and then do all this internally instead of making the
> > callers do it?
>
> I can of course. But I am just thinking of the API's semantic
> consistency for the future: the "target" may be a cpu rather than a
> socket if we support L2_CBM, so will that value still represent all
> cpus?

By extension, having a way to say all cpus (or indeed "all whatevers")
seems likely to be useful too.
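For what it's worth, a rough sketch of the "do it inside libxl" variant
might look like the following. To be clear, this is not the posted
patch: LIBXL_PSR_TARGET_ALL, set_one_target() and nr_targets_for_type()
are invented names, and the signature of libxl_psr_cat_set_domain_data
is only inferred from the call sites quoted above.

    /* Rough sketch only: let libxl interpret an "all targets" sentinel
     * so callers need not open-code the loop.  The sentinel and both
     * helpers are hypothetical names, not part of the posted patch. */
    #define LIBXL_PSR_TARGET_ALL (~(uint32_t)0)

    int libxl_psr_cat_set_domain_data(libxl_ctx *ctx, uint32_t domid,
                                      libxl_psr_cat_type type,
                                      uint32_t target, uint64_t data)
    {
        uint32_t i, nr_targets;
        int rc;

        if (target != LIBXL_PSR_TARGET_ALL)
            return set_one_target(ctx, domid, type, target, data);

        /* "All targets" means every socket for L3_CBM today; for a
         * future L2_CBM type it could equally mean every cpu. */
        nr_targets = nr_targets_for_type(ctx, type);
        if (nr_targets == 0)
            return ERROR_FAIL;

        for (i = 0; i < nr_targets; i++) {
            rc = set_one_target(ctx, domid, type, i, data);
            if (rc)
                return rc;
        }

        return 0;
    }

A thin, type-specific wrapper (say, a hypothetical
libxl_psr_cat_set_l3_cbm(ctx, domid, socket, cbm)) could then sit on
top of this if the more semantically meaningful entry points mentioned
above are wanted.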
> > > +        if (nr_sockets == 0) {
> > > +            fprintf(stderr, "Failed getting physinfo\n");
> > > +            return -1;
> > > +        }
> > > +        for (i = 0; i < nr_sockets; i++) {
> > > +            rc = libxl_psr_cat_set_domain_data(ctx, domid,
> > > +                                               LIBXL_PSR_CAT_TYPE_L3_CBM,
> > > +                                               i, cbm);
> > > +    /* Total L3 cache size */
> > > +    printf("%-46s", "Total L3 Cache Size");
> >
> > How wide are these lines going to be? Can we try and keep it to <80
> > columns? Perhaps you could include an example of the output of each
> > of the show functions in the commit message?
>
> As the socket number is variable, it's hard to strictly ensure <80
> columns if all the sockets are displayed in one line. Perhaps
> socket-by-socket display is the right direction.

Each socket is a column? I had assumed each was a row (which highlights
why examples of the output are useful!).

Ian
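To make the row-per-socket idea concrete, a purely hypothetical layout
along these lines stays well under 80 columns however many sockets the
host has (the helper and field names are invented, not taken from the
patch):

    /* Purely hypothetical socket-per-row layout for psr-cat-show: one
     * line per socket keeps output short regardless of the socket
     * count.  The names l3_size_kb and cbm are illustrative only. */
    static void print_cat_info(uint32_t nr_sockets,
                               const uint32_t *l3_size_kb,
                               const uint64_t *cbm)
    {
        uint32_t socketid;

        for (socketid = 0; socketid < nr_sockets; socketid++)
            printf("Socket %u: L3 size %u KiB, CBM %#llx\n",
                   socketid, l3_size_kb[socketid],
                   (unsigned long long)cbm[socketid]);
    }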