
Re: [Xen-devel] [PATCH v6 0/7] enable Cache QoS Monitoring (CQM) feature



> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxx
> [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Dongxiao Xu
> Sent: Thursday, December 05, 2013 5:39 PM
> To: xen-devel@xxxxxxxxxxxxx
> Cc: keir@xxxxxxx; Ian.Campbell@xxxxxxxxxx; stefano.stabellini@xxxxxxxxxxxxx;
> andrew.cooper3@xxxxxxxxxx; dario.faggioli@xxxxxxxxxx;
> Ian.Jackson@xxxxxxxxxxxxx; JBeulich@xxxxxxxx; dgdegra@xxxxxxxxxxxxx
> Subject: [Xen-devel] [PATCH v6 0/7] enable Cache QoS Monitoring (CQM) feature
> 

Any more comments about this version?

Thanks,
Dongxiao

> Changes from v5:
>  - Address comments from Dario Faggioli, including:
>    * Define a new libxl_cqminfo structure to avoid reference of xc
>      structure in libxl functions.
>    * Use LOGE() instead of the LIBXL__LOG() functions.
> 
> Changes from v4:
>  - When comparing the xl cqm parameter, use strcmp instead of strncmp;
>    otherwise "xl pqos-attach cqmabcd domid" would be considered a valid
>    command line.
>  - Address comments from Andrew Cooper, including:
>    * Adjust the pqos parameter parsing function.
>    * Modify the pqos related documentation.
>    * Add a check for opt_cqm_max_rmid in initialization code.
>    * Do not IPI a CPU that is in the same socket as the current CPU.
>  - Address comments from Dario Faggioli, including:
>    * Fix a typo in exported symbols.
>    * Return the correct libxl error code for qos related functions.
>    * Abstract the error printing logic into a function.
>  - Address comments from Daniel De Graaf, including:
>    * Add a return value for the pqos related check.
>  - Address comments from Konrad Rzeszutek Wilk, including:
>    * Modify the GPLv2 related file header, remove the address.
> 
> Changes from v3:
>  - Use a structure to better organize CQM related global variables.
>  - Address comments from Andrew Cooper, including:
>    * Remove the domain creation flag for CQM RMID allocation.
>    * Adjust the boot parameter format, use custom_param().
>    * Add documentation for the new added boot parameter.
>    * Change QoS type flag to be uint64_t.
>    * Initialize the per socket cpu bitmap in system boot time.
>    * Remove get_cqm_avail() function.
>    * Misc format changes.
>  - Address comments from Daniel De Graaf, including:
>    * Use avc_current_has_perm() for XEN2__PQOS_OP, which belongs to
>      SECCLASS_XEN2.
> 
> Changes from v2:
>  - Address comments from Andrew Cooper, including:
>    * Merge the tools stack changes into one patch.
>    * Reduce the IPI number to one per socket.
>    * Change structures for CQM data exchange between tools and Xen.
>    * Misc format/variable/function name changes.
>  - Address comments from Konrad Rzeszutek Wilk, including:
>    * Simplify the error printing logic.
>    * Add xsm check for the new added hypercalls.
> 
> Changes from v1:
>  - Address comments from Andrew Cooper, including:
>    * Change function names, e.g., alloc_cqm_rmid(), system_supports_cqm(), etc.
>    * Change some structure element order to save packing cost.
>    * Correct some function's return value.
>    * Some programming styles change.
>    * ...
> 
> Future generations of Intel Xeon processors may offer a monitoring capability
> in each logical processor to measure a specific quality-of-service metric,
> for example, Cache QoS Monitoring to get L3 cache occupancy.
> For detailed information, please refer to Intel SDM chapter 17.14.
> 
> Cache QoS Monitoring provides a layer of abstraction between applications and
> logical processors through the use of Resource Monitoring IDs (RMIDs).
> In the Xen design, each guest in the system can be assigned an RMID
> independently, while RMID=0 is reserved for domains that do not enable the
> CQM service.
> When any of a domain's vcpus is scheduled on a logical processor, the
> domain's RMID is activated by programming its value into a specific MSR;
> when the vcpu is scheduled out, RMID=0 is programmed into that MSR.
> The Cache QoS hardware tracks the cache utilization of memory accesses
> according to the RMIDs and reports the monitored data via a counter
> register. With this solution, we can learn how much L3 cache is used by a
> certain guest.
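> 
> As an illustration, a minimal sketch (not the code from this series) of the
> schedule-in programming, assuming the IA32_PQR_ASSOC MSR (0xC8F) as
> described in the Intel SDM:
> 
>     #define MSR_IA32_PQR_ASSOC 0x0c8f
> 
>     static void cqm_assoc_rmid(unsigned int rmid)
>     {
>         uint64_t val;
> 
>         /* The RMID lives in bits [9:0] of IA32_PQR_ASSOC; preserve the
>          * remaining bits when programming it. */
>         rdmsrl(MSR_IA32_PQR_ASSOC, val);
>         wrmsrl(MSR_IA32_PQR_ASSOC, (val & ~0x3ffULL) | rmid);
>     }
> 
> Scheduling out is the same operation with rmid = 0.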
> 
> To attach the CQM service to a certain guest, two approaches are provided:
> 1) Create the guest with "pqos_cqm=1" set in configuration file.
> 2) Use "xl pqos-attach cqm domid" for a running guest.
> 
> To detach the CQM service from a guest, users can:
> 1) Use "xl pqos-detach cqm domid" for a running guest.
> 2) Destroy the guest, which also detaches the CQM service.
> 
> To get the L3 cache usage, users can use the following command:
> $ xl pqos-list cqm (domid)
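> 
> The reported usage comes from the event selection / counter MSR pair
> described in the Intel SDM (IA32_QM_EVTSEL = 0xC8D, IA32_QM_CTR = 0xC8E).
> A hypothetical sketch of the per-RMID read, not the code from this series:
> 
>     #define MSR_IA32_QM_EVTSEL  0x0c8d
>     #define MSR_IA32_QM_CTR     0x0c8e
>     #define QM_EVT_L3_OCCUPANCY 0x1   /* event ID 1: L3 occupancy */
> 
>     static int cqm_read_l3_occupancy(unsigned int rmid, uint64_t *val)
>     {
>         uint64_t ctr;
> 
>         /* Select the <RMID, event> pair, then read the counter. */
>         wrmsrl(MSR_IA32_QM_EVTSEL,
>                ((uint64_t)rmid << 32) | QM_EVT_L3_OCCUPANCY);
>         rdmsrl(MSR_IA32_QM_CTR, ctr);
> 
>         /* Bit 63 = Error, bit 62 = Unavailable. */
>         if ( ctr & (3ULL << 62) )
>             return -1;
> 
>         /* Raw 62-bit count; multiply by the upscaling factor from
>          * CPUID leaf 0xF to convert to bytes. */
>         *val = ctr & ((1ULL << 62) - 1);
>         return 0;
>     }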
> 
> The data below is just an example showing how the CQM related data is
> exposed to the end user.
> 
> [root@localhost]# xl pqos-list cqm
> Name               ID  SocketID  L3C_Usage  SocketID  L3C_Usage
> Domain-0            0         0   20127744         1   25231360
> ExampleHVMDomain    1         0    3211264         1   10551296
> 
> RMID count    56        RMID available    53
> 
> Dongxiao Xu (7):
>   x86: detect and initialize Cache QoS Monitoring feature
>   x86: dynamically attach/detach CQM service for a guest
>   x86: initialize per socket cpu map
>   x86: collect CQM information from all sockets
>   x86: enable CQM monitoring for each domain RMID
>   xsm: add platform QoS related xsm policies
>   tools: enable Cache QoS Monitoring feature for libxl/libxc
> 
>  docs/misc/xen-command-line.markdown          |    7 +
>  tools/flask/policy/policy/modules/xen/xen.if |    2 +-
>  tools/flask/policy/policy/modules/xen/xen.te |    5 +-
>  tools/libxc/xc_domain.c                      |   47 +++++
>  tools/libxc/xenctrl.h                        |   11 ++
>  tools/libxl/Makefile                         |    3 +-
>  tools/libxl/libxl.h                          |    6 +
>  tools/libxl/libxl_pqos.c                     |  163 +++++++++++++++++
>  tools/libxl/libxl_types.idl                  |   13 ++
>  tools/libxl/xl.h                             |    3 +
>  tools/libxl/xl_cmdimpl.c                     |  133 ++++++++++++++
>  tools/libxl/xl_cmdtable.c                    |   15 ++
>  xen/arch/x86/Makefile                        |    1 +
>  xen/arch/x86/cpu/intel.c                     |    6 +
>  xen/arch/x86/domain.c                        |    8 +
>  xen/arch/x86/domctl.c                        |   40 +++++
>  xen/arch/x86/pqos.c                          |  242 ++++++++++++++++++++++++++
>  xen/arch/x86/setup.c                         |    3 +
>  xen/arch/x86/smp.c                           |    7 +-
>  xen/arch/x86/smpboot.c                       |   19 +-
>  xen/arch/x86/sysctl.c                        |   65 +++++++
>  xen/include/asm-x86/cpufeature.h             |    1 +
>  xen/include/asm-x86/domain.h                 |    2 +
>  xen/include/asm-x86/msr-index.h              |    5 +
>  xen/include/asm-x86/pqos.h                   |   51 ++++++
>  xen/include/asm-x86/smp.h                    |    2 +
>  xen/include/public/domctl.h                  |   20 +++
>  xen/include/public/sysctl.h                  |   10 ++
>  xen/include/xen/cpumask.h                    |    1 +
>  xen/xsm/flask/hooks.c                        |    8 +
>  xen/xsm/flask/policy/access_vectors          |   17 +-
>  31 files changed, 907 insertions(+), 9 deletions(-)
>  create mode 100644 tools/libxl/libxl_pqos.c
>  create mode 100644 xen/arch/x86/pqos.c
>  create mode 100644 xen/include/asm-x86/pqos.h
> 
> --
> 1.7.9.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
