
[Xen-devel] [PATCH v11 0/3] Refactor ioreq server for better performance.

XenGT leverages ioreq server to track and forward accesses to
GPU I/O resources, e.g. the PPGTT (per-process graphic translation
tables). Currently, ioreq server uses rangeset to track the BDF/
PIO/MMIO ranges to be emulated. To select an ioreq server, the
rangeset is searched to see if the I/O range is recorded. However,
traversing the linked list inside rangeset can be time consuming
when the number of ranges is large. On the HSW platform, the number
of PPGTTs for each vGPU can be several hundred; on BDW, it can be
several thousand. This patch series refactors rangeset to base
it on a red-black tree, so that searching is more efficient.

Besides, this patchset also splits the tracking of MMIO and guest
RAM ranges into separate rangesets. And to accommodate more ranges,
a new parameter, max_wp_ram_ranges, is introduced in the hvm
configuration.
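As a usage sketch, the new parameter would be set in the guest config
file roughly as below; the value 8192 is purely illustrative, and the
exact syntax and default are documented in docs/man/xl.cfg.pod.5 as
updated by patch 3:

```
# HVM guest config fragment (illustrative value)
builder = "hvm"
max_wp_ram_ranges = 8192
```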

Changes in v11:
1> Rename the new parameter to "max_wp_ram_ranges", and use it
specifically for write-protected RAM ranges.
2> Clarify the documentation.

Changes in v10: 
1> Add a new patch to configure the range limit inside ioreq server.
2> Commit message changes. 
3> The previous patch "[1/3] Remove identical relationship between
   ioreq type and rangeset type." has already been merged, and is not
   included in this series now.

Changes in v9: 
1> Change the order of patch 2 and patch 3.
2> Introduce a const static array before hvm_ioreq_server_alloc_rangesets().
3> Coding style changes.

Changes in v8: 
Use a clearer API name to map/unmap the write-protected memory in
ioreq server.

Changes in v7: 
1> Coding style changes;
2> Fix a typo in hvm_select_ioreq_server().

Changes in v6: 
Break the identical relationship between ioreq type and rangeset
index inside ioreq server.

Changes in v5:
1> Use gpfn, instead of gpa, to track guest write-protected pages;
2> Remove redundant conditional statement in routine find_range().

Changes in v4:
Keep the name HVMOP_IO_RANGE_MEMORY for MMIO resources, and add
a new one, HVMOP_IO_RANGE_WP_MEM, for write-protected memory.

Changes in v3:
1> Use a separate rangeset for guest RAM pages in ioreq server;
2> Refactor rangeset, instead of introduce a new data structure.

Changes in v2:
1> Split the original patch into 2;
2> Address Paul Durrant's comments:
  a> Add a name member in the struct rb_rangeset, and use the 'q'
debug key to dump the ranges in ioreq server;
  b> Keep original routine names for hvm ioreq server;
  c> Commit message changes - mention that a future patch will change
the maximum number of ranges inside ioreq server.

Yu Zhang (3):
  Refactor rangeset structure for better performance.
  Differentiate IO/mem resources tracked by ioreq server
  tools: introduce parameter max_wp_ram_ranges.

 docs/man/xl.cfg.pod.5            | 18 +++++++++
 tools/libxc/include/xenctrl.h    | 31 +++++++++++++++
 tools/libxc/xc_domain.c          | 61 ++++++++++++++++++++++++++++++
 tools/libxl/libxl.h              |  5 +++
 tools/libxl/libxl_dom.c          |  3 ++
 tools/libxl/libxl_types.idl      |  1 +
 tools/libxl/xl_cmdimpl.c         |  4 ++
 xen/arch/x86/hvm/hvm.c           | 37 +++++++++++++++---
 xen/common/rangeset.c            | 82 +++++++++++++++++++++++++++++-----------
 xen/include/asm-x86/hvm/domain.h |  2 +-
 xen/include/public/hvm/hvm_op.h  |  1 +
 xen/include/public/hvm/params.h  |  5 ++-
 12 files changed, 221 insertions(+), 29 deletions(-)

