
[Xen-devel] [PATCH v6 0/5] Display IO topology when PXM data is available (plus some cleanup)



Changes in v6:
* PCI topology interface changes: no more continuations; userspace now deals
  with an "unfinished" sysctl and resumes it (patches 2 and 5; see the sketch
  after this list)
* An unknown device now causes the sysctl to fail with ENODEV
* No NULL tests in libxc
* Loop control initialization fix (similar to commit 26da081ac91a)
* Other minor changes (see per-patch notes)
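
For patches 2 and 5, the caller is expected to re-issue the sysctl until all
devices have been processed. A minimal sketch of that resume loop, assuming a
hypothetical libxc wrapper xc_pcitopo_query() (not the final API) that returns
the number of entries Xen filled in:

    /* Hypothetical wrapper: asks Xen for the NUMA node of each device in
     * devs[done .. num_devs-1]; returns how many entries were filled
     * before the hypervisor stopped, or -1 with errno set. */
    uint32_t done = 0;

    while (done < num_devs) {
        int filled = xc_pcitopo_query(xch, devs + done, nodes + done,
                                      num_devs - done);
        if (filled < 0) {
            if (errno == ENODEV) /* unknown device somewhere in the batch */
                fprintf(stderr, "unknown PCI device\n");
            return -1;
        }
        done += filled; /* sysctl came back "unfinished": resume here */
    }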

Changes in v5:
* Make CPU topology and NUMA info sysctls behave more like
  XEN_DOMCTL_get_vcpu_msrs when passed NULL buffers (see the example after
  this list). This required toolstack changes as well
* Don't use 8-bit data types in interfaces
* Fold interface version update into patch#3
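
The NULL-buffer convention mirrors XEN_DOMCTL_get_vcpu_msrs: call once with a
NULL buffer to learn the required element count, then allocate and call again.
An illustrative caller (the xc_cputopoinfo() signature here is my assumption,
not necessarily what the patches end up with):

    unsigned int num_cpus = 0;
    xc_cputopo_t *cputopo;

    /* First call: NULL buffer; Xen only reports how many entries exist. */
    if (xc_cputopoinfo(xch, &num_cpus, NULL) < 0)
        return -1;

    /* Second call: a buffer large enough for num_cpus entries. */
    cputopo = calloc(num_cpus, sizeof(*cputopo));
    if (!cputopo || xc_cputopoinfo(xch, &num_cpus, cputopo) < 0) {
        free(cputopo);
        return -1;
    }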

Changes in v4:
* Split cputopology and NUMA info changes into separate patches
* Added patch#1 (partly because patch#4 needs to know when a distance is
  invalid, i.e. NUMA_NO_DISTANCE; see the snippet after this list)
* Split sysctl version update into a separate patch
* Other changes are listed in each patch
* NOTE: I did not test the python xc changes since I am not sure how to.
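
For reference, the sentinel from patch#1 works along these lines (the 0xFF
values reflect my reading of the series and should be treated as
illustrative):

    /* Node IDs and SLIT distances are u8, so an all-ones value can act
     * as the "invalid" sentinel without clashing with real data. */
    #define NUMA_NO_NODE     0xFF
    #define NUMA_NO_DISTANCE 0xFF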

Changes in v3:
* Added patch #1 to more consistently define nodes as a u8 and properly
  use NUMA_NO_NODE.
* Make changes to xen_sysctl_numainfo, similar to those made to
  xen_sysctl_topologyinfo. (Q: I kept both sets of changes in the same
  patch #3 to avoid bumping interface version twice. Perhaps it's better
  to split it into two?)
* Instead of copying data for each loop index, allocate a buffer and copy
  once for all three queries in sysctl.c.
* Move hypercall buffer management from libxl to libxc (as requested by
  Dario, patches #5 and #6; see the sketch after this list).
* Report topology info for offlined CPUs as well
* Added LIBXL_HAVE_PCITOPO macro
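
The buffer handling moved into libxc follows the usual hypercall-buffer
pattern; a rough sketch (the cputopoinfo field name is illustrative):

    /* Bounce buffer shared with Xen; libxc manages the mapping. */
    DECLARE_HYPERCALL_BUFFER(xc_cputopo_t, cputopo);

    cputopo = xc_hypercall_buffer_alloc(xch, cputopo,
                                        num_cpus * sizeof(*cputopo));
    if (!cputopo)
        return -1;

    set_xen_guest_handle(sysctl.u.cputopoinfo.cputopo, cputopo);
    /* ... issue the sysctl and copy the results out ... */
    xc_hypercall_buffer_free(xch, cputopo);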

Changes in v2:
* Split topology sysctls into two --- one for CPU topology and the other
  for devices
* Avoid long loops in the hypervisor by using continuations. (I am not
  particularly happy about using first_dev in the interface; suggestions
  for a better interface would be appreciated. A sketch of this approach
  follows this list.)
* Use proper libxl conventions for interfaces
* Avoid hypervisor stack corruption when copying PXM data from guest


A few patches that add an interface for querying the hypervisor about device
topology, and that allow 'xl info -n' to display this information when a PXM
object is provided by ACPI.
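
As a rough illustration of the new toolstack entry point (function and field
names follow my reading of the series and may not match the final patches
exactly):

    /* Query each PCI device's NUMA node via libxl and print the mapping. */
    int i, num_devs;
    libxl_pcitopology *pt = libxl_get_pci_topology(ctx, &num_devs);

    if (pt) {
        for (i = 0; i < num_devs; i++)
            printf("%04x:%02x:%02x.%u -> node %u\n",
                   pt[i].seg, pt[i].bus,
                   pt[i].devfn >> 3, pt[i].devfn & 7, pt[i].node);
        libxl_pcitopology_list_free(pt, num_devs);
    }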

This series also optimizes and cleans up the current CPU topology and NUMA
sysctl queries.

Boris Ostrovsky (5):
  sysctl: Make XEN_SYSCTL_numainfo a little more efficient
  sysctl: Add sysctl interface for querying PCI topology
  libxl/libxc: Move libxl_get_cpu_topology()'s hypercall buffer
    management to libxc
  libxl/libxc: Move libxl_get_numainfo()'s hypercall buffer management
    to libxc
  libxl: Add interface for querying hypervisor about PCI topology

 docs/misc/xsm-flask.txt             |    1 +
 tools/libxc/include/xenctrl.h       |   12 ++-
 tools/libxc/xc_misc.c               |  103 +++++++++++++++++++---
 tools/libxl/libxl.c                 |  160 ++++++++++++++++++-----------------
 tools/libxl/libxl.h                 |   12 +++
 tools/libxl/libxl_freebsd.c         |   12 +++
 tools/libxl/libxl_internal.h        |    5 +
 tools/libxl/libxl_linux.c           |   69 +++++++++++++++
 tools/libxl/libxl_netbsd.c          |   12 +++
 tools/libxl/libxl_types.idl         |    7 ++
 tools/libxl/libxl_utils.c           |    8 ++
 tools/libxl/xl_cmdimpl.c            |   40 +++++++--
 tools/misc/xenpm.c                  |   51 +++++------
 tools/python/xen/lowlevel/xc/xc.c   |   74 ++++++----------
 xen/common/sysctl.c                 |  136 ++++++++++++++++++++++--------
 xen/include/public/sysctl.h         |   83 +++++++++++++-----
 xen/xsm/flask/hooks.c               |    1 +
 xen/xsm/flask/policy/access_vectors |    1 +
 18 files changed, 554 insertions(+), 233 deletions(-)




 

