
Re: [PATCH V4 09/24] xen/ioreq: Make x86's IOREQ related dm-op handling common




On 18.01.21 11:17, Paul Durrant wrote:

Hi Paul



-----Original Message-----
From: Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of Oleksandr 
Tyshchenko
Sent: 12 January 2021 21:52
To: xen-devel@xxxxxxxxxxxxxxxxxxxx
Cc: Julien Grall <julien.grall@xxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; 
Andrew Cooper
<andrew.cooper3@xxxxxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>; Wei Liu 
<wl@xxxxxxx>; George
Dunlap <george.dunlap@xxxxxxxxxx>; Ian Jackson <iwj@xxxxxxxxxxxxxx>; Julien Grall 
<julien@xxxxxxx>;
Stefano Stabellini <sstabellini@xxxxxxxxxx>; Paul Durrant <paul@xxxxxxx>; 
Daniel De Graaf
<dgdegra@xxxxxxxxxxxxx>; Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
Subject: [PATCH V4 09/24] xen/ioreq: Make x86's IOREQ related dm-op handling 
common

From: Julien Grall <julien.grall@xxxxxxx>

As a lot of x86 code can be re-used on Arm later on, this patch
moves the IOREQ related dm-op handling to the common code.

The idea is to have the top level dm-op handling arch-specific
and call into ioreq_server_dm_op() for otherwise unhandled ops.
Pros:
- More natural than doing it the other way around (top-level dm-op
handling common).
- Leaves compat_dm_op() in x86 code.
Cons:
- Code duplication. Both arches have to duplicate do_dm_op(), etc.

Also update XSM code a bit to let dm-op be used on Arm.

This support is going to be used on Arm to be able to run a device
emulator outside of the Xen hypervisor.

Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
[On Arm only]
Tested-by: Wei Chen <Wei.Chen@xxxxxxx>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
I decided to keep a common dm.h for the struct dmop_args declaration
(to be included by Arm's dm.c). Alternatively we could avoid
introducing a new header by moving the declaration into an existing
one, but I failed to find a suitable header whose context would fit.
***

Changes RFC -> V1:
    - update XSM, related changes were pulled from:
      [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM 
features

Changes V1 -> V2:
    - update the author of a patch
    - update patch description
    - introduce xen/dm.h and move definitions here

Changes V2 -> V3:
    - no changes

Changes V3 -> V4:
    - rework to have the top level dm-op handling arch-specific
    - update patch subject/description, was "xen/dm: Make x86's DM feature 
common"
    - make a few functions static in common ioreq.c
---
  xen/arch/x86/hvm/dm.c   | 101 +-----------------------------------
  xen/common/ioreq.c      | 135 ++++++++++++++++++++++++++++++++++++++++++------
  xen/include/xen/dm.h    |  39 ++++++++++++++
  xen/include/xen/ioreq.h |  17 +-----
  xen/include/xsm/dummy.h |   4 +-
  xen/include/xsm/xsm.h   |   6 +--
  xen/xsm/dummy.c         |   2 +-
  xen/xsm/flask/hooks.c   |   5 +-
  8 files changed, 171 insertions(+), 138 deletions(-)
  create mode 100644 xen/include/xen/dm.h

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index d3e2a9e..dc8e47d 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -16,6 +16,7 @@

  #include <xen/event.h>
  #include <xen/guest_access.h>
+#include <xen/dm.h>
  #include <xen/hypercall.h>
  #include <xen/ioreq.h>
  #include <xen/nospec.h>
@@ -29,13 +30,6 @@

  #include <public/hvm/hvm_op.h>

-struct dmop_args {
-    domid_t domid;
-    unsigned int nr_bufs;
-    /* Reserve enough buf elements for all current hypercalls. */
-    struct xen_dm_op_buf buf[2];
-};
-
  static bool _raw_copy_from_guest_buf_offset(void *dst,
                                              const struct dmop_args *args,
                                              unsigned int buf_idx,
@@ -408,71 +402,6 @@ static int dm_op(const struct dmop_args *op_args)

      switch ( op.op )
      {
-    case XEN_DMOP_create_ioreq_server:
-    {
-        struct xen_dm_op_create_ioreq_server *data =
-            &op.u.create_ioreq_server;
-
-        const_op = false;
-
-        rc = -EINVAL;
-        if ( data->pad[0] || data->pad[1] || data->pad[2] )
-            break;
-
-        rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
-        break;
-    }
-
-    case XEN_DMOP_get_ioreq_server_info:
-    {
-        struct xen_dm_op_get_ioreq_server_info *data =
-            &op.u.get_ioreq_server_info;
-        const uint16_t valid_flags = XEN_DMOP_no_gfns;
-
-        const_op = false;
-
-        rc = -EINVAL;
-        if ( data->flags & ~valid_flags )
-            break;
-
-        rc = hvm_get_ioreq_server_info(d, data->id,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : &data->ioreq_gfn,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : &data->bufioreq_gfn,
-                                       &data->bufioreq_port);
-        break;
-    }
-
-    case XEN_DMOP_map_io_range_to_ioreq_server:
-    {
-        const struct xen_dm_op_ioreq_server_range *data =
-            &op.u.map_io_range_to_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
-                                              data->start, data->end);
-        break;
-    }
-
-    case XEN_DMOP_unmap_io_range_from_ioreq_server:
-    {
-        const struct xen_dm_op_ioreq_server_range *data =
-            &op.u.unmap_io_range_from_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
-                                                  data->start, data->end);
-        break;
-    }
-
      case XEN_DMOP_map_mem_type_to_ioreq_server:
      {
          struct xen_dm_op_map_mem_type_to_ioreq_server *data =
@@ -523,32 +452,6 @@ static int dm_op(const struct dmop_args *op_args)
          break;
      }

-    case XEN_DMOP_set_ioreq_server_state:
-    {
-        const struct xen_dm_op_set_ioreq_server_state *data =
-            &op.u.set_ioreq_server_state;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
-        break;
-    }
-
-    case XEN_DMOP_destroy_ioreq_server:
-    {
-        const struct xen_dm_op_destroy_ioreq_server *data =
-            &op.u.destroy_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_destroy_ioreq_server(d, data->id);
-        break;
-    }
-
      case XEN_DMOP_track_dirty_vram:
      {
          const struct xen_dm_op_track_dirty_vram *data =
@@ -703,7 +606,7 @@ static int dm_op(const struct dmop_args *op_args)
      }

      default:
-        rc = -EOPNOTSUPP;
+        rc = ioreq_server_dm_op(&op, d, &const_op);
          break;
      }

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index a319c88..72b5da0 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -591,8 +591,8 @@ static void hvm_ioreq_server_deinit(struct ioreq_server *s)
      put_domain(s->emulator);
  }

-int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
+static int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
+                                   ioservid_t *id)
Would this not be a good opportunity to drop the 'hvm_' prefix (here and 
elsewhere)?

It would be, I will drop.


May I ask, are you ok with the alternative approach proposed by Jan and already implemented in the current version (the top-level dm-op
handling is arch-specific and calls into ioreq_server_dm_op() for otherwise unhandled ops)?

Initial discussion here:
https://lore.kernel.org/xen-devel/1606732298-22107-10-git-send-email-olekstysh@xxxxxxxxx/

--
Regards,

Oleksandr Tyshchenko




 

