
Re: [Xen-devel] [PATCH v10 4/4] tools, docs: add total/local memory bandwidth monitoring



On Thu, 2015-02-26 at 16:45 +0800, Chao Peng wrote:
> Add Memory Bandwidth Monitoring (MBM) for VMs. Two types of monitoring
> are supported: total and local memory bandwidth monitoring. To use it,
> CMT should be enabled in the hypervisor.
> 
> Signed-off-by: Chao Peng <chao.p.peng@xxxxxxxxxxxxxxx>

This looks good. I have one question and one small comment/idea:

[...]
> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index 09d819f..54043ee 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h
> @@ -2688,6 +2688,8 @@ int xc_resource_op(xc_interface *xch, uint32_t nr_ops, xc_resource_op_t *ops);
>  #if defined(__i386__) || defined(__x86_64__)
>  enum xc_psr_cmt_type {
>      XC_PSR_CMT_L3_OCCUPANCY,
> +    XC_PSR_CMT_TOTAL_MEM_BANDWIDTH,
> +    XC_PSR_CMT_LOCAL_MEM_BANDWIDTH,

Is "bandwidth" still the correct term here (and more importantly in the
libxl interface e.g. enum), given that we now do the sampling at the
application level and just expose the current count from Xen via libxl?

I'm not sure what a better term would be, though. "count"?
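
(For reference, the application-level calculation I have in mind is
something like the below; the names are mine for the sketch, not from
the patch. Xen just hands back a raw count plus a TSC timestamp, and
the tool derives the bandwidth from two samples:)

    #include <stdint.h>

    /* Hypothetical sketch: one sample as obtained via libxc/libxl. */
    struct mbm_sample {
        uint64_t count; /* raw MBM event count (already upscaled) */
        uint64_t tsc;   /* TSC value read together with the counter */
    };

    /* Bytes/second between two samples, given the TSC frequency in Hz. */
    static double mbm_bandwidth(const struct mbm_sample *prev,
                                const struct mbm_sample *cur,
                                uint64_t tsc_hz)
    {
        uint64_t bytes = cur->count - prev->count;
        uint64_t ticks = cur->tsc - prev->tsc;

        return ticks ? (double)bytes * tsc_hz / (double)ticks : 0.0;
    }

i.e. from Xen's point of view "count" seems closer to what is actually
exposed.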

> @@ -167,6 +194,16 @@ int xc_psr_cmt_get_data(xc_interface *xch, uint32_t rmid, uint32_t cpu,
>      entries[nr].rsvd = 0;
>      nr++;
>  
> +    if ( tsc != NULL )
> +    {
> +        tsc_entry = &entries[nr];
> +        entries[nr].u.cmd = XEN_RESOURCE_OP_MSR_READ;
> +        entries[nr].idx = MSR_IA32_TSC;
> +        entries[nr].val = 0;
> +        entries[nr].rsvd = 0;
> +        nr++;
> +    }

Perhaps consider an assertion here that nr <= the number of elements in
the array (i.e. ARRAY_SIZE(entries))? (Either added here or in the patch
which switched to using nr++.)
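
Concretely, something along these lines (a sketch, assuming entries is
a fixed-size local array and the usual ARRAY_SIZE macro is available):

    #include <assert.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    /* After the last entries[nr++] above, before the resource op is
     * issued: */
    assert(nr <= ARRAY_SIZE(entries));

That would catch the array being overrun if another entry gets added
later without the array being resized.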

Ian.

