
[Xen-devel] [PATCH v5 0/8] Display IO topology when PXM data is available (plus some cleanup)



Changes in v5:
* Make CPU topology and NUMA info sysctls behave more like
  XEN_DOMCTL_get_vcpu_msrs when passed NULL buffers (see the sketch after
  this list). This required toolstack changes as well
* Don't use 8-bit data types in interfaces
* Fold interface version update into patch#3
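
For clarity, the NULL-buffer convention looks roughly like this from the
toolstack side. This is a minimal sketch only: the xc_cputopoinfo()
prototype and the xc_cputopo_t name are assumptions based on this series,
not necessarily the final code.

/*
 * Sketch of the two-call sizing pattern (assumed prototype:
 * int xc_cputopoinfo(xc_interface *xch, unsigned *max_cpus,
 *                    xc_cputopo_t *cputopo); details may differ).
 */
#include <stdlib.h>
#include <xenctrl.h>

static xc_cputopo_t *get_cputopo(xc_interface *xch, unsigned *num_cpus)
{
    xc_cputopo_t *cputopo;

    *num_cpus = 0;

    /* First call with a NULL buffer: only the required count comes back. */
    if ( xc_cputopoinfo(xch, num_cpus, NULL) != 0 )
        return NULL;

    cputopo = calloc(*num_cpus, sizeof(*cputopo));
    if ( !cputopo )
        return NULL;

    /* Second call with a buffer large enough for all entries. */
    if ( xc_cputopoinfo(xch, num_cpus, cputopo) != 0 )
    {
        free(cputopo);
        return NULL;
    }

    return cputopo;
}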

Changes in v4:
* Split cputopology and NUMA info changes into separate patches
* Added patch#1 (partly because patch#4 needs to know when a distance is
  invalid, i.e. NUMA_NO_DISTANCE); see the sketch after this list
* Split sysctl version update into a separate patch
* Other changes are listed in each patch
* NOTE: I did not test the python xc changes since I am not sure how to.
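
The gist of the patch#1 convention is sketched below. The
NUMA_NO_DISTANCE value and the SLIT lookup are simplified assumptions;
node-to-PXM translation and the no-SLIT fallback are omitted.

/*
 * Sketch only: the u8 return convention with a NUMA_NO_DISTANCE
 * sentinel. The value 0xFF is an assumption, and the lookup into the
 * cached ACPI SLIT (acpi_slit, as kept in srat.c) is simplified.
 */
#define NUMA_NO_DISTANCE 0xFF

u8 __node_distance(nodeid_t a, nodeid_t b)
{
    if ( !acpi_slit || a >= acpi_slit->locality_count ||
         b >= acpi_slit->locality_count )
        return NUMA_NO_DISTANCE;

    return acpi_slit->entry[acpi_slit->locality_count * a + b];
}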

Changes in v3:
* Added patch #1 to more consistently define nodes as a u8 and properly
  use NUMA_NO_NODE.
* Make changes to xen_sysctl_numainfo, similar to those made to
  xen_sysctl_topologyinfo. (Q: I kept both sets of changes in the same
  patch #3 to avoid bumping interface version twice. Perhaps it's better
  to split it into two?)
* Instead of copying data for each loop index allocate a buffer and copy
  once for all three queries in sysctl.c.
* Move hypercall buffer management from libxl to libxc (as requested by
  Dario, patches #5 and #6).
* Report topology info for offlined CPUs as well
* Added LIBXL_HAVE_PCITOPO macro

Changes in v2:
* Split the topology sysctl into two: one for CPU topology and the other
  for devices
* Avoid long loops in the hypervisor by using continuations; see the
  sketch after this list. (I am not particularly happy about using
  first_dev in the interface; suggestions for a better interface would
  be appreciated)
* Use proper libxl conventions for interfaces
* Avoid hypervisor stack corruption when copying PXM data from guest
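
The continuation approach has roughly the following shape. This is a
sketch only: first_dev/num_devs follow the interface described above,
while the handler structure, type names and how the continuation is
actually created are simplified.

/*
 * Fragment of the sysctl handler (not the actual patch code): process
 * devices in bounded chunks and let the hypercall be continued rather
 * than looping over every device in one go.
 */
case XEN_SYSCTL_pcitopoinfo:
{
    xen_sysctl_pcitopoinfo_t *ti = &op->u.pcitopoinfo;
    unsigned int i;

    for ( i = ti->first_dev; i < ti->num_devs; i++ )
    {
        /* ... look up device i and copy its node to the guest buffer ... */

        if ( (i + 1) < ti->num_devs && hypercall_preempt_check() )
        {
            /*
             * Record progress; the updated first_dev is copied back to
             * the caller and the hypercall resumes from there.
             */
            ti->first_dev = i + 1;
            ret = -ERESTART;
            break;
        }
    }
    break;
}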


A few patches that add an interface for querying the hypervisor about
device topology and allow 'xl info -n' to display this information if a
PXM object is provided by ACPI.
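
For example, a toolstack caller could use the new libxl interface roughly
like this. The libxl_get_pci_topology()/libxl_pcitopology names follow
the patch titles and the LIBXL_HAVE_PCITOPO guard, but treat the exact
prototype and fields as assumptions.

/*
 * Sketch of a libxl caller; names and fields are assumptions based on
 * the patch descriptions, guarded by LIBXL_HAVE_PCITOPO.
 */
#include <stdio.h>
#include <libxl.h>

static void print_pci_topology(libxl_ctx *ctx)
{
#ifdef LIBXL_HAVE_PCITOPO
    int i, num_devs;
    libxl_pcitopology *devs = libxl_get_pci_topology(ctx, &num_devs);

    if ( !devs )
        return;

    for ( i = 0; i < num_devs; i++ )
        printf("%04x:%02x:%02x.%u -> node %u\n",
               devs[i].seg, devs[i].bus,
               devs[i].devfn >> 3, devs[i].devfn & 7,
               devs[i].node);

    libxl_pcitopology_list_free(devs, num_devs);
#endif
}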

This series also optimizes and cleans up the existing CPU topology and
NUMA sysctl queries.



Boris Ostrovsky (8):
  numa: __node_distance() should return u8
  pci: Stash device's PXM information in struct pci_dev
  sysctl: Make XEN_SYSCTL_topologyinfo sysctl a little more efficient
  sysctl: Make XEN_SYSCTL_numainfo a little more efficient
  sysctl: Add sysctl interface for querying PCI topology
  libxl/libxc: Move libxl_get_cpu_topology()'s hypercall buffer
    management to libxc
  libxl/libxc: Move libxl_get_numainfo()'s hypercall buffer management
    to libxc
  libxl: Add interface for querying hypervisor about PCI topology

 tools/libxc/include/xenctrl.h     |   12 ++-
 tools/libxc/xc_misc.c             |  102 ++++++++++++++++---
 tools/libxl/libxl.c               |  183 ++++++++++++++++------------------
 tools/libxl/libxl.h               |   12 ++
 tools/libxl/libxl_freebsd.c       |   12 ++
 tools/libxl/libxl_internal.h      |    5 +
 tools/libxl/libxl_linux.c         |   69 +++++++++++++
 tools/libxl/libxl_netbsd.c        |   12 ++
 tools/libxl/libxl_types.idl       |    7 ++
 tools/libxl/libxl_utils.c         |    8 ++
 tools/libxl/xl_cmdimpl.c          |   40 ++++++--
 tools/misc/xenpm.c                |  101 ++++++++----------
 tools/python/xen/lowlevel/xc/xc.c |  105 +++++++-------------
 xen/arch/x86/physdev.c            |   23 ++++-
 xen/arch/x86/srat.c               |   13 ++-
 xen/common/page_alloc.c           |    4 +-
 xen/common/sysctl.c               |  200 +++++++++++++++++++++++++++----------
 xen/drivers/passthrough/pci.c     |   13 ++-
 xen/include/asm-x86/numa.h        |    2 +-
 xen/include/public/physdev.h      |    6 +
 xen/include/public/sysctl.h       |  138 ++++++++++++++++---------
 xen/include/xen/numa.h            |    3 +-
 xen/include/xen/pci.h             |    5 +-
 23 files changed, 715 insertions(+), 360 deletions(-)




 

