
[Xen-devel] [PATCH v3] Tmem bug-fixes and cleanups.



From Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> # This line is ignored.
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: [PATCH v3] Tmem bug-fixes and cleanups. 
In-Reply-To: 

Hey!

Since v2 [http://lists.xen.org/archives/html/xen-devel/2015-08/msg02134.html]:
 - Addressed all (I hope?) comments.
 - Added 'tmem_oid' structure and made it work with compat layer.
 - Went wild with peppering full stops.
v1:
 - Internal review.

----------------------------------------------------------------------
NEW PATCHES in this series:

 tmem: Spelling and full stop surgery.
 tmem: Use 'struct tmem_oid' in tmem_handle and move it to sysctl header.
 tmem/sysctl: Use 'struct tmem_oid' for every user.
 tmem: Make the uint64_t oid[3] a proper structure: tmem_oid

The rest are the same except where I modified them as requested.
----------------------------------------------------------------------
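For reference, the 'tmem_oid' change mentioned above boils down to wrapping
the bare object-id array in a named type, so callers pass a structure around
instead of a raw uint64_t[3]. A minimal sketch of the idea (the exact name and
header placement may differ from what the patches end up using):

    struct tmem_oid {
        uint64_t oid[3];   /* same layout as the old bare uint64_t oid[3] */
    };

The intent is that the compat layer and all the sysctl users share this one
definition instead of each open-coding the array.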

At the Xen Hackathon we discussed that the tmem code needs a bit of cleanup
and simplification. One of the things that Andrew mentioned was that
TMEM_CONTROL should really be part of the sysctl hypercall. As I ventured
down this path I realized there were some other issues that needed to be
taken care of (like shared pools blowing up).
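To make the shape of that move concrete: the control parameters that used to
ride on the guest-visible tmem hypercall become the payload of a sysctl
sub-op instead. A rough sketch, assuming a layout along these lines (the
field and type names here are illustrative, not necessarily what the patches
end up with):

    struct xen_sysctl_tmem_op {
        uint32_t cmd;                  /* TMEMC_* control command           */
        int32_t  pool_id;              /* pool the command operates on      */
        uint32_t cli_id;               /* client (domain) id, or "all"      */
        uint32_t arg1;
        uint32_t arg2;
        struct tmem_oid oid;           /* object id, see the sketch above   */
        XEN_GUEST_HANDLE_64(char) buf; /* output buffer for list/save ops   */
    };

The toolstack then reaches it through the normal sysctl path, and the XSM
check becomes the generic sysctl one - hence the removal of the dedicated
tmem control XSM hooks further down in the series.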

This patchset has been tested with 32-bit and 64-bit guests, on a 64-bit
hypervisor with a 32-bit toolstack (and also with a 64-bit toolstack),
with success.

For fun I've also created a Linux module:
http://xenbits.xen.org/gitweb/?p=xentesttools/bootstrap.git;a=blob;f=root_image/drivers/tmem_test/tmem_test.c
which I will expand in the future to cover more interesting hypercall
uses.
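For anyone who wants to poke at the hypercall from a guest without pulling
that module in, the skeleton is small. A stripped-down sketch (not the
tmem_test.c linked above), assuming the tmem_op interface from the Xen
headers the kernel ships; the TMEM_DESTROY_POOL probe on a non-existent
pool is just a harmless way to get a return code back:

    #include <linux/module.h>
    #include <linux/init.h>
    #include <xen/xen.h>               /* xen_domain() */
    #include <xen/interface/xen.h>     /* struct tmem_op, TMEM_* commands */
    #include <asm/xen/hypercall.h>     /* HYPERVISOR_tmem_op() */

    static int __init tmem_smoke_init(void)
    {
        struct tmem_op op = {
            .cmd     = TMEM_DESTROY_POOL,
            .pool_id = -1,             /* no such pool; expect an error back */
        };

        if (!xen_domain())
            return -ENODEV;

        pr_info("tmem_smoke: TMEM_DESTROY_POOL(-1) returned %d\n",
                HYPERVISOR_tmem_op(&op));
        return 0;
    }

    static void __exit tmem_smoke_exit(void) { }

    module_init(tmem_smoke_init);
    module_exit(tmem_smoke_exit);
    MODULE_LICENSE("GPL");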

Going forward, the next steps will be to:
 - move the 'tmem_control' function to its own file to simplify the code.
 - remove some of the unsafe type uses of the tmem control commands.
 - make migration work.
 
The patches are also in my git tree:

git://xenbits.xen.org/people/konradwilk/xen.git for-4.6/tmem.cleanups.v3

NOTE that I've also cross-built it under ARM without any issues.

 tools/libxc/include/xenctrl.h          |   6 +-
 tools/libxc/xc_tmem.c                  | 117 ++++----
 tools/libxl/libxl.c                    |  22 +-
 tools/python/xen/lowlevel/xc/xc.c      |  27 +-
 tools/xenstat/libxenstat/src/xenstat.c |   6 +-
 xen/common/compat/tmem_xen.c           |   6 +-
 xen/common/sysctl.c                    |   7 +-
 xen/common/tmem.c                      | 477 +++++++++++++++++----------------
 xen/include/public/sysctl.h            |  56 ++++
 xen/include/public/tmem.h              |  58 ++--
 xen/include/xen/tmem.h                 |   3 +
 xen/include/xen/tmem_xen.h             |   4 -
 xen/include/xlat.lst                   |   1 +
 xen/include/xsm/dummy.h                |   6 -
 xen/include/xsm/xsm.h                  |   6 -
 xen/xsm/dummy.c                        |   1 -
 xen/xsm/flask/hooks.c                  |   9 +-
 xen/xsm/flask/policy/access_vectors    |   2 +-
 18 files changed, 423 insertions(+), 391 deletions(-)

Konrad Rzeszutek Wilk (11):
      tmem: Don't crash/hang/leak hypervisor when using shared pools within a guest.
      tmem: Add ASSERT in obj_rb_insert for pool->rwlock lock.
      tmem: Remove in xc_tmem_control_oid duplicate set_xen_guest_handle call
      tmem: Remove xc_tmem_control mystical arg3
      tmem: Move TMEM_CONTROL subop of tmem hypercall to sysctl.
      tmem: Remove the old tmem control XSM checks as it is part of sysctl hypercall.
      tmem: Make the uint64_t oid[3] a proper structure: tmem_oid
      tmem/sysctl: Use 'struct tmem_oid' for every user.
      tmem: Use 'struct tmem_oid' in tmem_handle and move it to sysctl header.
      tmem: Remove extra spaces at end and some hard tabbing.
      tmem: Spelling and full stop surgery.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

