
Re: [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it common




On 12.11.20 12:58, Jan Beulich wrote:

Hi Jan.

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>

As a lot of x86 code can be re-used on Arm later on, this
patch makes some preparation to x86/hvm/ioreq.c before moving
to the common code. This way we will get a verbatim copy for
a code movement in subsequent patch (arch/x86/hvm/ioreq.c
will be *just* renamed to common/ioreq).

This patch does the following:
1. Introduce *inline* arch_hvm_ioreq_init(), arch_hvm_ioreq_destroy(),
    arch_hvm_io_completion(), arch_hvm_destroy_ioreq_server() and
    hvm_ioreq_server_get_type_addr() to abstract arch specific materials.
2. Make hvm_map_mem_type_to_ioreq_server() *inline*. It is not going
    to be called from the common code.
As already indicated on another sub-thread, I think some of these
are too large to be inline functions. Additionally, considering
their single-use purpose, I don't think they should be placed in
a header consumed by more than the producer and the sole consumer.
Ok, the only reason I made these inline was to be able to move the whole of x86/hvm/ioreq.c to the common code verbatim.
I will move some of them back to ioreq.c.
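For illustration, a minimal sketch of that split, keeping only declarations in the header (the placement and the target-domain field are assumptions on my side):

/* asm-x86/hvm/ioreq.h: declarations only, no large inline bodies. */
void arch_hvm_ioreq_init(struct domain *d);
bool arch_hvm_ioreq_destroy(struct domain *d);
void arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s);

/* arch/x86/hvm/ioreq.c: out-of-line definition, e.g. */
void arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s)
{
    /* Assumes the server keeps a pointer to its target domain. */
    p2m_set_ioreq_server(s->target, 0, s);
}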



3. Make get_ioreq_server() global. It is going to be called from
    a few places.
And with this its name ought to change, to fit the general naming
model of global functions of this subsystem.
I think that, with the new requirement (making hvm_map_mem_type_to_ioreq_server() common), this helper no longer needs to be global. I will make it static again.
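i.e. roughly back to a private helper in ioreq.c (sketch; the bounds check and macro follow the existing x86 code):

static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
                                                 unsigned int id)
{
    if ( id >= MAX_NR_IOREQ_SERVERS )
        return NULL;

    return GET_IOREQ_SERVER(d, id);
}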



4. Add IOREQ_STATUS_* #define-s and update candidates for moving.
This, it seems to me, could be a separate patch.

Well, will do.
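As a sketch of the idea for x86 (each architecture maps the common names onto its own emulation return codes):

#define IOREQ_STATUS_HANDLED     X86EMUL_OKAY
#define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
#define IOREQ_STATUS_RETRY       X86EMUL_RETRY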



@@ -855,7 +841,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     domain_pause(d);
 
-    p2m_set_ioreq_server(d, 0, s);
+    arch_hvm_destroy_ioreq_server(s);
Iirc there are plans to rename hvm_destroy_ioreq_server() in the
course of the generalization. If so, this arch hook would imo
better be named following the new scheme right away.
Could you please clarify: are you referring to the plans discussed in

"[PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved function names"?

Copying the text here for convenience:
At least some of the functions touched here would be nice to be
moved to a more consistent new naming scheme right away, to
avoid having to touch all the same places again. I guess ioreq
server functions would be nice to all start with ioreq_server_
and ioreq functions to all start with ioreq_. E.g. ioreq_send()
and ioreq_server_select().

or some other plans I am not aware of?


What I really want to avoid with the IOREQ enabling work is round-trips related to naming things; this patch series has become quite big (and consumes some time to rebase and test), and I expect it to become bigger.

So the arch_hvm_destroy_ioreq_server() should be arch_ioreq_server_destroy()?
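For illustration, a hypothetical rename following that scheme (nothing else changes):

/* Declaration, renamed to fit the proposed ioreq_server_ prefix: */
void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);

/* ... and the call site in hvm_destroy_ioreq_server() becomes: */
    domain_pause(d);

    arch_ioreq_server_destroy(s);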



@@ -1215,7 +1153,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
      struct hvm_ioreq_server *s;
      unsigned int id;
-    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
+    if ( !arch_hvm_ioreq_destroy(d) )
There's no ioreq being destroyed here, so I think this wants
renaming (and again ideally right away following the planned
new scheme).
I agree that no ioreq is being destroyed here. Perhaps ioreq_server_check_for_destroy()?
I couldn't think of a better name.
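As a sketch, such a "check"-style hook would simply wrap the predicate (name as proposed above, purely illustrative):

static inline bool ioreq_server_check_for_destroy(struct domain *d)
{
    /* The hook only decides whether the teardown below may proceed. */
    return relocate_portio_handler(d, 0xcf8, 0xcf8, 4);
}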



+static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
+                                                   ioservid_t id,
+                                                   uint32_t type,
+                                                   uint32_t flags)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    if ( type != HVMMEM_ioreq_server )
+        return -EINVAL;
+
+    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
+        return -EINVAL;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    rc = p2m_set_ioreq_server(d, flags, s);
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    if ( rc == 0 && flags == 0 )
+    {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
I realize I may be asking too much, but would it be possible if,
while moving code, you made simple and likely uncontroversial
adjustments like adding const here? (Such adjustments would be
less desirable to make if they increased the size of the patch,
e.g. if you were touching only nearby code.)
This function, as well as the one below, won't be moved to this header in the next version of the patch.

Ok, will add const (a sketch follows the quoted hunk below).



+        if ( read_atomic(&p2m->ioreq.entry_count) )
+            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
+    }
+
+    return rc;
+}
+
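A sketch of the const adjustment asked for above (only the tail of the function changes; assuming read_atomic() is fine with a pointer to const):

    if ( rc == 0 && flags == 0 )
    {
        const struct p2m_domain *p2m = p2m_get_hostp2m(d);

        if ( read_atomic(&p2m->ioreq.entry_count) )
            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
    }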
+static inline int hvm_ioreq_server_get_type_addr(const struct domain *d,
+                                                 const ioreq_t *p,
+                                                 uint8_t *type,
+                                                 uint64_t *addr)
+{
+    uint32_t cf8 = d->arch.hvm.pci_cf8;
Similarly, for example, neither this nor ...

+    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
+        return -EINVAL;
+
+    if ( p->type == IOREQ_TYPE_PIO &&
+         (p->addr & ~3) == 0xcfc &&
+         CF8_ENABLED(cf8) )
+    {
+        uint32_t x86_fam;
... this really needs to use a fixed width type - unsigned int is
going to be quite fine. But since you're only moving this code,
I guess I'm not going to insist.

Will use unsigned int.
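i.e., as a sketch, the two locals simply become:

    unsigned int cf8 = d->arch.hvm.pci_cf8;   /* was uint32_t */

    unsigned int x86_fam;                     /* was uint32_t */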



+static inline bool arch_hvm_ioreq_destroy(struct domain *d)
+{
+    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
+        return false;
+
+    return true;
Any reason this cannot simply be

     return relocate_portio_handler(d, 0xcf8, 0xcf8, 4);

Yes, good point.


--
Regards,

Oleksandr Tyshchenko




 

