
[Xen-devel] [RFC PATCH 0/2] gnttab: refactor locking for better scalability

From: Matt Wilson <msw@xxxxxxxxxx>

As discussed in the Xen Developer Summit Storage Performance BoF,
there is a lot of room for improvement in grant table locking. Anthony
and I have been working on refactoring the locking over the past few
weeks. The performance improvement is considerable, and I'd like to
hear from others whether this approach is fundamentally wrong in some
way.

The previous single spinlock per grant table is split into multiple
locks. The heavily modified components of the grant table (the
maptrack state and the active entries) are now protected by their own
spinlocks. The remaining elements of the grant table are read-mostly,
so I converted the main grant table lock to an rwlock to improve
concurrency on the read paths.

On the performance improvement: without persistent grants, a domU with
24 VBDs plumbed to local HDDs in a streaming 2M write workload
achieved 1,400 MB/sec before this change. Performance more than
doubles with this patch, reaching 3,000 MB/sec before tuning and 3,600
MB/sec after adjusting event channel vCPU bindings.

I included the previously posted patch to __gnttab_unmap_common() in
the series since it makes more sense in this context, and the
follow-on refactoring patch is built on top of it.

DISCLAIMER: I ported this patch series from a different Xen version
earlier today, and I've only compile tested it so far. In its original
form we've pushed a lot of concurrent I/O through dom0 and haven't
seen any stability issues.

Matt Wilson (2):
  gnttab: lock the local grant table earlier in __gnttab_unmap_common()
  gnttab: refactor locking for better scalability

 docs/misc/grant-tables.txt    |   56 +++++++-
 xen/arch/x86/mm.c             |    4 +-
 xen/common/grant_table.c      |  308 ++++++++++++++++++++++++++---------------
 xen/include/xen/grant_table.h |    9 +-
 4 files changed, 261 insertions(+), 116 deletions(-)


Xen-devel mailing list


