[Xen-devel] [PATCH RFC 0/8] libxl, xl, public/io: PV backends feature control



Hey folks,

Presented herewith is an attempt to implement PV backend feature control,
as discussed on the list
(https://lists.xen.org/archives/html/xen-devel/2017-09/msg00766.html).

Given that this is a simple proposal, I thought to include all the changes
involved in the same patchset, so that everyone sees all the changes and can
better assess them (but it is restricted to xen-devel just for RFC purposes).

The motivation here is to allow system administrators more fine-grained
control over the device features used by a guest.

The only change I made compared to the proposal discussed above was to use
"require" instead of "request" as the prefix, because there is already a
feature with "request" in its name. But if "request" is still preferred as a
prefix, I can change it.

The scheme proposed is quite simple:

* A directory named "require" is created inside the backend path, and within
that directory the feature/capability names and values are written (see the
layout sketch after this list).

* The toolstack constructs a key-value store of features, which the user
specifies through special entry names also prefixed with "require". The
toolstack is stateless, so the sysadmin has full control over what is passed
to the backend; in other words, the toolstack does not interpret particular
feature names/values.

* The backend then uses those values for seeding the maximum feature set it
advertises to the frontend.
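
As a rough illustration, the backend path gains a "require" directory with
one entry per feature (the path components and feature name below are just
placeholders):

  /local/domain/<backend-domid>/backend/<type>/<domid>/<devid>/require/<feature> = "<value>"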

For example, a domain config could look like this:

vif = ["bridge=br0,require-multi-queue-max-queues=2"]
disk = [ "phy:/path/to/disk,hda,w,require-feature-persistent=0" ]

And if the backend supports it, this would create a vif with a maximum of 2
queues, and a vbd with persistent grants disabled.
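
For the example above this would translate into xenstore entries along these
lines (domain and device IDs are hypothetical, and hda corresponds to virtual
device number 768 in the traditional vbd encoding):

  /local/domain/0/backend/vif/1/0/require/multi-queue-max-queues = "2"
  /local/domain/0/backend/vbd/1/768/require/feature-persistent = "0"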

I only implemented this for blkback and netback, but there is nothing really
specific about how it's done, and it could be implemented in other PV
interfaces as well. However, there wasn't a protocol-agnostic header to put
all this in, so I went ahead and added it to the two individual I/O types
(block and netif) I am most interested in.
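
To sketch the backend side, something along these lines (using Linux's
xenbus API; the helper name and the clamping policy are illustrative and not
taken verbatim from the patches) lets a backend cap the value it advertises
with whatever the toolstack required:

#include <linux/kernel.h>
#include <xen/xenbus.h>         /* xenbus_scanf(), struct xenbus_device */

/*
 * Illustrative helper (not from the patches): read an unsigned value
 * from "require/<feature>" under the backend's own xenstore node and
 * clamp the driver default to it.  An absent key means "use the default".
 */
static unsigned int require_feature(struct xenbus_device *dev,
                                    const char *feature, unsigned int def)
{
        unsigned int val;
        char node[64];

        snprintf(node, sizeof(node), "require/%s", feature);
        if (xenbus_scanf(XBT_NIL, dev->nodename, node, "%u", &val) != 1)
                return def;               /* key absent or unreadable */
        return val < def ? val : def;     /* only restrict, never extend */
}

A required value can thus only restrict, never extend, what the backend
already supports, matching the "maximum feature set" semantics above.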

Any comments appreciated :)

Thanks!
Joao

For Linux, the diffstat/changeset is as follows (the last two patches):

Joao Martins (2):
  xen-blkback: frontend feature control
  xen-netback: frontend feature control

 drivers/block/xen-blkback/blkback.c |   2 +-
 drivers/block/xen-blkback/common.h  |   1 +
 drivers/block/xen-blkback/xenbus.c  |  66 ++++++++++++++++---
 drivers/net/xen-netback/xenbus.c    | 122 +++++++++++++++++++++++++++++-------
 4 files changed, 159 insertions(+), 32 deletions(-)

And for Xen the diffstat/changeset is:

Joao Martins (6):
  public/io/blkif: add directory for backend parameters
  public/io/netif: add directory for backend parameters
  libxl: add backend_features to libxl_device_disk
  libxl: add backend_features to libxl_device_nic
  libxlu: parse disk backend features parameters
  xl: parse vif backend features parameters

 tools/libxl/libxl.h           | 16 +++++++++++++++
 tools/libxl/libxl_9pfs.c      |  2 +-
 tools/libxl/libxl_console.c   |  7 ++++---
 tools/libxl/libxl_device.c    | 47 +++++++++++++++++++++++++++++++++++--------
 tools/libxl/libxl_disk.c      | 17 ++++++++++++++--
 tools/libxl/libxl_internal.h  |  6 ++++--
 tools/libxl/libxl_nic.c       | 13 +++++++++++-
 tools/libxl/libxl_pci.c       |  2 +-
 tools/libxl/libxl_types.idl   |  2 ++
 tools/libxl/libxl_usb.c       |  2 +-
 tools/libxl/libxl_vdispl.c    |  3 ++-
 tools/libxl/libxl_vtpm.c      |  2 +-
 tools/libxl/libxlu_disk_l.l   | 42 ++++++++++++++++++++++++++++++++++++++
 tools/xl/xl_parse.c           | 37 ++++++++++++++++++++++++++++++++++
 tools/xl/xl_parse.h           |  2 ++
 xen/include/public/io/blkif.h | 14 +++++++++++++
 xen/include/public/io/netif.h | 16 +++++++++++++++
 17 files changed, 209 insertions(+), 21 deletions(-)

-- 
2.11.0

