
Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common




On 22.09.20 13:54, Jan Beulich wrote:

Hi Jan

On 22.09.2020 11:58, Oleksandr wrote:
On 22.09.20 09:33, Jan Beulich wrote:
On 21.09.2020 21:02, Oleksandr wrote:
On 14.09.20 17:17, Jan Beulich wrote:
On 10.09.2020 22:21, Oleksandr Tyshchenko wrote:
+#define GET_IOREQ_SERVER(d, id) \
+    (d)->arch.hvm.ioreq_server.server[id]
arch.hvm.* feels like a layering violation when used in this header.
Got it. The only reason why GET_IOREQ_SERVER is here is inline
get_ioreq_server(). I will make it non-inline and move both to
common/ioreq.c.
Which won't make the layering violation go away. It's still
common rather than per-arch code then. As suggested elsewhere,
I think the whole ioreq_server struct wants to move into
struct domain itself, perhaps inside a new #ifdef (iirc one of
the patches introduces a suitable Kconfig option).
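For illustration, such a move could look roughly as follows -- a sketch
only; the CONFIG_IOREQ_SERVER option name and the exact field layout are
assumed here, mirroring the current x86 struct hvm_domain:

    /* xen/include/xen/sched.h */
    struct domain {
        ...
    #ifdef CONFIG_IOREQ_SERVER
        /* Moved out of x86's struct hvm_domain, now common to all archs. */
        struct {
            spinlock_t               lock;
            struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS];
        } ioreq_server;
    #endif
        ...
    };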
Well, your advice regarding ioreq_server sounds reasonable, but the
common ioreq.c will still have other *arch.hvm.* uses for both vcpu
and domain. So it looks like the other involved structs should be
moved into the *common* struct domain/vcpu as well, correct? Some of
them could be moved easily since they contain the same fields on both
architectures (arch.hvm.ioreq_gfn), but some of them couldn't, and
moving them seems to require pulling a lot of changes into the Xen
code (arch.hvm.params, arch.hvm.hvm_io), I am afraid.
Or have I missed something?
arch.hvm.params, iirc, is an x86 concept, and hence would need
abstracting away anyway. I expect this will be a common pattern:
Either you want things to become generic (structure fields
living directly in struct domain, or at least not under arch.hvm),
or things need abstracting for per-arch handling.
Got it.

Please let me clarify one more question.
In order to avoid the layering violation in the current patch, we could apply a combined approach:

1. *arch.hvm.ioreq_gfn* and *arch.hvm.ioreq_server*: Both structs move into the common struct domain.

2. *arch.hvm.params*: The two functions that use it (hvm_alloc_legacy_ioreq_gfn/hvm_free_legacy_ioreq_gfn) either move into arch code completely, or
    a specific macro is used in the common code:

   #define ioreq_get_params(d, i) ((d)->arch.hvm.params[i])

   I would prefer the macro over moving the functions to arch code (the two copies would be identical and would have to be kept in sync); see the x86 counterpart sketched after this list.

3. *arch.hvm.hvm_io*: We could also use the following:

   #define ioreq_get_io_completion(v) ((v)->arch.hvm.hvm_io.io_completion)
   #define ioreq_get_io_req(v) ((v)->arch.hvm.hvm_io.io_req)

   This way struct hvm_vcpu_io won't be used in the common code either.

   Would #2 and #3 be appropriate to go with?
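   As the patch below targets Arm only, the x86 side would presumably
   carry the identical macro over its own params array (a sketch; the
   exact placement is assumed):

       /* xen/include/asm-x86/hvm/domain.h */
       #define ioreq_get_params(d, i) ((d)->arch.hvm.params[i])

   The two definitions stay textually identical, but each lives next to
   its arch's struct hvm_domain, so the common code never dereferences
   arch.hvm.* directly.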


The dirty, untested patch below (which applies on top of the whole series and targets Arm only) shows how it could look.
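Note that ioreq_get_io_req(v) and ioreq_get_io_completion(v) expand to
lvalues, which is why the patch can both take the address of the former
(the local must be an ioreq_t * rather than a copy, or the state update
at the end of handle_hvm_io_completion() would only modify the local)
and assign HVMIO_no_completion through the latter.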


diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 2e85ea7..5894bdab 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -194,7 +194,7 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
 bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
+    ioreq_t *io_req = &ioreq_get_io_req(v);
     struct hvm_ioreq_server *s;
     struct hvm_ioreq_vcpu *sv;
     enum hvm_io_completion io_completion;
@@ -209,14 +209,14 @@ bool handle_hvm_io_completion(struct vcpu *v)
     if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
         return false;

-    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
+    io_req->state = hvm_ioreq_needs_completion(io_req) ?
         STATE_IORESP_READY : STATE_IOREQ_NONE;

     msix_write_completion(v);
     vcpu_end_shutdown_deferral(v);

-    io_completion = vio->io_completion;
-    vio->io_completion = HVMIO_no_completion;
+    io_completion = ioreq_get_io_completion(v);
+    ioreq_get_io_completion(v) = HVMIO_no_completion;

     switch ( io_completion )
     {
@@ -227,8 +227,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
         return ioreq_handle_complete_mmio();

     case HVMIO_pio_completion:
-        return handle_pio(vio->io_req.addr, vio->io_req.size,
-                          vio->io_req.dir);
+        return handle_pio(io_req->addr, io_req->size,
+                          io_req->dir);

     default:
         return arch_handle_hvm_io_completion(io_completion);
@@ -247,7 +247,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
     for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
     {
         if ( !test_and_clear_bit(i, &d->ioreq_gfn.legacy_mask) )
-            return _gfn(d->arch.hvm.params[i]);
+            return _gfn(ioreq_get_params(d, i));
     }

     return INVALID_GFN;
@@ -279,7 +279,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,

     for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
     {
-        if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
+        if ( gfn_eq(gfn, _gfn(ioreq_get_params(d, i))) )
              break;
     }
     if ( i > HVM_PARAM_BUFIOREQ_PFN )
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 0e3ef20..ff761f5 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -21,6 +21,8 @@ struct hvm_domain
     uint64_t              params[HVM_NR_PARAMS];
 };

+#define ioreq_get_params(d, i) ((d)->arch.hvm.params[i])
+
 #ifdef CONFIG_ARM_64
 enum domain_type {
     DOMAIN_32BIT,
@@ -120,6 +122,9 @@ struct hvm_vcpu_io {
     unsigned long       mmio_gpfn;
 };

+#define ioreq_get_io_completion(v) ((v)->arch.hvm.hvm_io.io_completion)
+#define ioreq_get_io_req(v) ((v)->arch.hvm.hvm_io.io_req)
+
 struct arch_vcpu
 {
     struct {



This goes
alongside my suggestion to drop the "hvm" prefixes and infixes
from involved function names.
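(Presumably along the lines of struct hvm_ioreq_server -> struct
ioreq_server and handle_hvm_io_completion() -> handle_io_completion();
the exact target names are illustrative only.)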
Well, I assume this request, as well as the request above, should be
addressed in follow-up patches, since we want to keep the code movement
in the current patch an (almost) verbatim copy.
Am I correct?
The renaming could imo be done before or after the move, but within
a single series. Doing it (or some of it) during the move may be
acceptable, but this primarily depends on the overall effect it would
have on the patch. I.e. the patch had better not become gigantic just
because all the renaming gets done in one go and hundreds of places
need touching.

Got it as well.

Thank you for the explanation.

--
Regards,

Oleksandr Tyshchenko




 

