[Xen-devel] [PATCH v7 00/14] enable Cache Allocation Technology (CAT) for VMs
Changes in v7:
Address comments from Jan/Ian, mainly:
  * Introduce total_cpus to calculate nr_sockets.
  * Clear the init/enable flag when a socket goes offline.
  * Reorder the statements in init_psr_cat.
  * Copy psr_cat_op back only for XEN_SYSCTL_PSR_CAT_get_l3_info.
  * Broadcast LIBXL_HAVE_SOCKET_BITMAP_ALLOC.
  * Add a top-level PSR section to the xl man page and make CMT/CAT
    its subsections.

Changes in v6:
Address comments from Andrew/Dario/Ian, mainly:
  * Introduce cat_socket_init(_enable)_bitmap.
  * Merge xl psr-cmt/cat-hwinfo => xl psr-hwinfo.
  * Add a function header explaining the 'target' parameter.
  * Use a bitmap instead of TARGETS_ALL.
  * Documentation fixes.

Changes in v5:
  * Address comments from Andrew and Ian (details in the patches).
  * Add socket_to_cpumask.
  * Add xl psr-cmt/cat-hwinfo.
  * Add some libxl CMT enhancements.

Changes in v4:
  * Address comments from Andrew and Ian (details in the patches).
  * Split the COS/CBM management patch into 4 small patches.
  * Add documentation xl-psr.markdown.

Changes in v3:
  * Address comments from Jan and Ian (details in the patches).
  * Add xl sample output to the cover letter.

Changes in v2:
  * Address comments from Konrad and Jan (details in the patches).
  * Move all CAT-unrelated changes into the preparation patches.

This patch series enables the new Cache Allocation Technology (CAT)
feature found on Intel Broadwell and later server platforms. In Xen's
implementation, CAT is used to control cache allocation on a per-VM
basis. The detailed hardware specification can be found in section
17.15 of the Intel SDM [1]; the design for Xen can be found at [2]. A
short sketch of the MSR interface involved follows the patch list
below.

Patches 1-2:   preparation.
Patches 3-9:   the real work for CAT.
Patches 10-11: enhancements for CMT.
Patch 12:      libxl preparation.
Patch 13:      tools-side work for CAT.
Patch 14:      xl documentation for CMT/MBM/CAT.

[1] Intel SDM
    (http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-manual-325462.pdf)
[2] CAT design for Xen
    (http://lists.xen.org/archives/html/xen-devel/2014-12/msg01382.html)

Chao Peng (14):
  x86: add socket_to_cpumask
  x86: improve psr scheduling code
  x86: detect and initialize Intel CAT feature
  x86: maintain COS to CBM mapping for each socket
  x86: add COS information for each domain
  x86: expose CBM length and COS number information
  x86: dynamically get/set CBM for a domain
  x86: add scheduling support for Intel CAT
  xsm: add CAT related xsm policies
  tools/libxl: minor name changes for CMT commands
  tools/libxl: add command to show PSR hardware info
  tools/libxl: introduce some socket helpers
  tools: add tools support for Intel CAT
  docs: add xl-psr.markdown
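For readers new to the feature, the mechanism the series builds on is
small: each domain is assigned a class of service (COS), each COS has
a capacity bitmask (CBM) programmed per socket, and on context switch
the scheduler tags the CPU with the incoming domain's COS. Below is a
minimal C sketch of those two MSR operations; the MSR numbers and bit
layouts are from the SDM [1], while wrmsrl() and both helper names are
illustrative stand-ins, not the functions these patches introduce:

#include <stdint.h>

/* Assumed MSR write helper, in the style of Xen's wrmsrl(). */
extern void wrmsrl(unsigned int msr, uint64_t val);

#define MSR_IA32_PQR_ASSOC  0x0c8f         /* RMID[9:0], COS[63:32] */
#define MSR_IA32_L3_MASK(n) (0x0c90 + (n)) /* CBM for COS n, per socket */

/*
 * Bind a capacity bitmask to a class of service on the current socket.
 * The CBM must be a contiguous run of set bits (e.g. 0x3f) no wider
 * than the CBM length enumerated via CPUID leaf 0x10.
 */
static void cat_set_cbm(unsigned int cos, uint64_t cbm)
{
    wrmsrl(MSR_IA32_L3_MASK(cos), cbm);
}

/*
 * On context switch, tag the CPU with the incoming domain's COS so
 * that its subsequent L3 cache fills are constrained by that COS's
 * CBM. The RMID in the low bits (used by CMT/MBM) is written
 * alongside.
 */
static void cat_ctxt_switch_to(unsigned int cos, unsigned int rmid)
{
    wrmsrl(MSR_IA32_PQR_ASSOC, ((uint64_t)cos << 32) | rmid);
}

COS 0 is the hardware default for every domain, so its CBM is
conventionally left all-ones; the tools-side patches then expose
per-domain CBM get/set through xl on top of primitives like these.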
 docs/man/xl.pod.1                            |  76 ++++-
 docs/misc/xen-command-line.markdown          |  15 +-
 docs/misc/xl-psr.markdown                    | 133 +++++++++
 tools/flask/policy/policy/modules/xen/xen.if |   2 +-
 tools/flask/policy/policy/modules/xen/xen.te |   4 +-
 tools/libxc/include/xenctrl.h                |  15 +
 tools/libxc/xc_psr.c                         |  76 +++++
 tools/libxl/libxl.h                          |  42 +++
 tools/libxl/libxl_internal.h                 |   2 +
 tools/libxl/libxl_psr.c                      | 143 ++++++++-
 tools/libxl/libxl_types.idl                  |  10 +
 tools/libxl/libxl_utils.c                    |  46 +++
 tools/libxl/libxl_utils.h                    |   2 +
 tools/libxl/xl.h                             |   5 +
 tools/libxl/xl_cmdimpl.c                     | 262 ++++++++++++++++-
 tools/libxl/xl_cmdtable.c                    |  27 +-
 xen/arch/x86/domain.c                        |  13 +-
 xen/arch/x86/domctl.c                        |  20 ++
 xen/arch/x86/mpparse.c                       |   5 +
 xen/arch/x86/psr.c                           | 422 +++++++++++++++++++++++++--
 xen/arch/x86/smpboot.c                       |  25 +-
 xen/arch/x86/sysctl.c                        |  18 ++
 xen/include/asm-x86/cpufeature.h             |   1 +
 xen/include/asm-x86/domain.h                 |   5 +-
 xen/include/asm-x86/msr-index.h              |   1 +
 xen/include/asm-x86/psr.h                    |  13 +-
 xen/include/asm-x86/smp.h                    |  16 +
 xen/include/public/domctl.h                  |  12 +
 xen/include/public/sysctl.h                  |  16 +
 xen/xsm/flask/hooks.c                        |   6 +
 xen/xsm/flask/policy/access_vectors          |   4 +
 31 files changed, 1385 insertions(+), 52 deletions(-)
 create mode 100644 docs/misc/xl-psr.markdown

--
1.9.1