
[Xen-devel] [PATCH 00/10] x86: Default vs Max policies



This series builds on several years' worth of building blocks to finally create
a real distinction between default and max policies.

See the final patch for a concrete example.

Everything but the final patch is ready to go in now.  The final patch depends
on the still-in-review migration series, to provide suitable backwards
compatibility for VMs coming from older versions of Xen.
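
To make the distinction concrete for readers who haven't followed the earlier
building blocks: below is a minimal, hypothetical sketch of the relationship.
This is not the Xen implementation; the names (featureset, host_max_fs,
guest_default_fs, feature_visible_to_guest) are illustrative assumptions.
"Max" is everything Xen can offer a guest on this host, while "default" is the
conservative subset a guest gets unless the toolstack explicitly opts in,
which is how the final patch can leave MPX available in the max policy while
removing it from the default one.

    /* Illustrative sketch only -- not the Xen implementation. */
    #include <stdbool.h>
    #include <stdint.h>

    #define NR_FEATURE_WORDS 4

    struct featureset {
        uint32_t fs[NR_FEATURE_WORDS];
    };

    /* Hypothetical globals: what *could* be offered vs. what is offered by
     * default.  The default set is always a subset of the max set. */
    static struct featureset host_max_fs;
    static struct featureset guest_default_fs;

    static bool fs_test(const struct featureset *f, unsigned int feat)
    {
        /* Caller must pass feat < NR_FEATURE_WORDS * 32. */
        return f->fs[feat / 32] & (1u << (feat % 32));
    }

    /*
     * A guest is built against the default policy unless its configuration
     * explicitly asks for a feature, in which case the request is honoured
     * only if the feature is present in the max policy.
     */
    static bool feature_visible_to_guest(unsigned int feat, bool requested)
    {
        if ( requested )
            return fs_test(&host_max_fs, feat);      /* opt-in: up to max */

        return fs_test(&guest_default_fs, feat);     /* otherwise: default */
    }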

Andrew Cooper (10):
  x86/sysctl: Don't return cpu policy data for compiled-out support (2)
  tools/libxc: Simplify xc_get_static_cpu_featuremask()
  x86/gen-cpuid: Rework internal logic to ease future changes
  x86/gen-cpuid: Create max and default variations of INIT_*_FEATURES
  x86/msr: Compile out unused logic/objects
  x86/msr: Introduce and use default MSR policies
  x86/cpuid: Compile out unused logic/objects
  x86/cpuid: Introduce and use default CPUID policies
  x86/gen-cpuid: Distinguish default vs max in feature annotations
  x86/hvm: Do not enable MPX by default

 tools/libxc/include/xenctrl.h               |  10 ++-
 tools/libxc/xc_cpuid_x86.c                  |  55 +++++--------
 tools/misc/xen-cpuid.c                      |  35 ++++++---
 xen/arch/x86/cpuid.c                        | 118 ++++++++++++++++++++++------
 xen/arch/x86/msr.c                          |  62 +++++++++++----
 xen/arch/x86/sysctl.c                       |  25 ++++--
 xen/include/asm-x86/cpuid.h                 |   3 +-
 xen/include/asm-x86/msr.h                   |   4 +-
 xen/include/public/arch-x86/cpufeatureset.h |   4 +-
 xen/include/public/sysctl.h                 |   2 +
 xen/tools/gen-cpuid.py                      | 100 ++++++++++++-----------
 11 files changed, 268 insertions(+), 150 deletions(-)

-- 
2.11.0


