
[Xen-devel] [PATCH v2 0/2] Refactor ioreq server for better performance.

XenGT leverages the ioreq server to track and forward accesses to
GPU I/O resources, e.g. the PPGTTs (per-process graphic translation
tables). Currently, the ioreq server uses a rangeset to track the
BDF/PIO/MMIO ranges to be emulated. To select an ioreq server, the
rangeset is searched to see if the I/O range is recorded. However,
traversing the linked list inside the rangeset can be time consuming
when the number of ranges is high. On the HSW platform, the number of
PPGTTs for each vGPU can reach several hundred; on BDW, several
thousand.
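
For illustration, below is a minimal sketch of the list-based lookup
described above. The struct and function names are made up for this
example and do not match Xen's actual rangeset.c.

    /*
     * Illustrative only: a sorted, singly-linked rangeset and its
     * O(n) lookup.  Not the actual Xen rangeset implementation.
     */
    #include <stdbool.h>
    #include <stdint.h>

    struct range {
        struct range *next;
        uint64_t s, e;            /* inclusive start/end */
    };

    struct rangeset {
        struct range *head;       /* sorted by 's', non-overlapping */
    };

    /* Walk the list until we pass 'addr' -- linear in the range count. */
    static bool rangeset_contains(const struct rangeset *r, uint64_t addr)
    {
        const struct range *x;

        for ( x = r->head; x && x->s <= addr; x = x->next )
            if ( addr <= x->e )
                return true;

        return false;
    }

With several thousand PPGTT ranges per vGPU, every emulated access
pays this linear cost.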

To accommodate more ranges, the limit on the number of ranges in an
ioreq server, MAX_NR_IO_RANGES, is raised - future patches will be
provided to tune this with other approaches. And to improve the ioreq
server's lookup performance, a new data structure, rb_rangeset, is
introduced.
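
For comparison, here is a minimal sketch of the lookup side of a
tree-based rangeset: ranges keyed by their start address in a binary
search tree, giving O(log n) lookups. The names are illustrative and
this is not the rb_rangeset.c added by patch 2; the rebalancing that
makes the tree red-black (in Xen, via xen/rbtree.h) is omitted here.

    /*
     * Illustrative only: lookup in a BST of non-overlapping ranges
     * keyed by start address.  A real rb_rangeset would keep the
     * tree balanced so this descent stays O(log n).
     */
    #include <stdbool.h>
    #include <stdint.h>

    struct rb_range {
        struct rb_range *left, *right;
        uint64_t s, e;            /* inclusive start/end */
    };

    static bool rb_rangeset_contains(const struct rb_range *node,
                                     uint64_t addr)
    {
        while ( node )
        {
            if ( addr < node->s )
                node = node->left;
            else if ( addr > node->e )
                node = node->right;
            else
                return true;      /* node->s <= addr <= node->e */
        }

        return false;
    }

With a few thousand ranges, this reduces a lookup from thousands of
node visits to roughly a dozen.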

Changes in v2:
1> Split the original patch into 2;
2> Address Paul Durrant's comments:
  a> Add a name member to struct rb_rangeset, and use the 'q'
debug key to dump the ranges in the ioreq server;
  b> Keep the original routine names for the hvm ioreq server;
  c> Commit message changes - mention that a future patch will change
the maximum number of ranges inside the ioreq server.

Yu Zhang (2):
  Resize the MAX_NR_IO_RANGES for ioreq server
  Add new data structure to track ranges.

 xen/arch/x86/domain.c            |   3 +
 xen/arch/x86/hvm/hvm.c           |  56 ++++++--
 xen/common/Makefile              |   1 +
 xen/common/rb_rangeset.c         | 281 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/domain.h |   4 +-
 xen/include/asm-x86/hvm/hvm.h    |   1 +
 xen/include/xen/rb_rangeset.h    |  49 +++++++
 7 files changed, 379 insertions(+), 16 deletions(-)
 create mode 100644 xen/common/rb_rangeset.c
 create mode 100644 xen/include/xen/rb_rangeset.h

