
Re: [PATCH V4 11/24] xen/mm: Make x86's XENMEM_resource_ioreq_server handling common

On 15.01.21 16:35, Alex Bennée wrote:

Hi Alex

Oleksandr <olekstysh@xxxxxxxxx> writes:

On 14.01.21 05:58, Wei Chen wrote:
Hi Oleksandr,
Hi Wei
@@ -1090,6 +1091,40 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
      return 0;
  }

+static int acquire_ioreq_server(struct domain *d,
+                                unsigned int id,
+                                unsigned long frame,
+                                unsigned int nr_frames,
+                                xen_pfn_t mfn_list[])
+{
+#ifdef CONFIG_IOREQ_SERVER
+    ioservid_t ioservid = id;
+    unsigned int i;
+    int rc;
+
+    if ( !is_hvm_domain(d) )
+        return -EINVAL;
+
+    if ( id != (unsigned int)ioservid )
+        return -EINVAL;
+
+    for ( i = 0; i < nr_frames; i++ )
+    {
+        mfn_t mfn;
+
+        rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
+        if ( rc )
+            return rc;
+
+        mfn_list[i] = mfn_x(mfn);
+    }
+
+    return 0;
+#else
+    return -EOPNOTSUPP;
+#endif
+}
This change could not be applied to the latest staging branch.
Yes, thank you for noticing that. The surrounding code was changed a bit (the patch
series is based on a 10-day-old staging), so I will update it for the next version.
I think the commit that introduced config ARCH_ACQUIRE_RESOURCE could
probably be reverted as it achieves pretty much the same thing as the
above code by moving the logic into the common code path.

The only real practical difference is an inline stub versus a general-purpose
function with IOREQ-specific #ifdeferry.
Hmm, thank you for noticing that.
So, yes, for V5 I should either add an extra patch to revert ARCH_ACQUIRE_RESOURCE before applying this one, or rebase it onto the current codebase (and likely drop all collected R-bs because of the additional changes needed to remove the ARCH_ACQUIRE_RESOURCE bits).


Oleksandr Tyshchenko


