
[PATCH 4/5] libxc: use multicall for memory-op on Linux (and Solaris)


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 18 Jun 2021 12:24:54 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • Delivery-date: Fri, 18 Jun 2021 10:25:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Some sub-ops, XENMEM_maximum_gpfn in particular, can return values
requiring more than 31 bits to represent, which don't survive being
squeezed through ioctl()'s "int" return value. Hence on OSes where the
ioctl() return value is used to propagate the hypercall result (Linux
and Solaris) we cannot issue the hypercall directly, and instead wrap
it in a single-element multicall, whose result field is wide enough.
(The BSDs are unaffected, as their privcmd interface structures have a
dedicated hypercall return value field, and MiniOS already wraps all
hypercalls in a multicall.)

Suggested-by: Jürgen Groß <jgross@xxxxxxxx>
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
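
For illustration (not part of the patch, and the helper name below is
made up), a minimal sketch of the truncation being worked around,
assuming the hypercall result travels through ioctl()'s plain C "int"
return value as it does on Linux and Solaris:

#include <stdio.h>

/* Stand-in for the privcmd path where ioctl()'s int return value
 * carries the hypercall result. */
static int fake_privcmd_ioctl_return(long hypercall_result)
{
    return (int)hypercall_result; /* silently drops the upper bits */
}

int main(void)
{
    /* A GPFN at or above 2^31 (guest memory at or above 8TiB with
     * 4KiB pages) needs more than 31 bits to represent. */
    long gpfn = 0x123456789L;
    long seen = fake_privcmd_ioctl_return(gpfn);

    printf("hypervisor returned %#lx, caller saw %#lx\n", gpfn, seen);
    return 0;
}

On an LP64 build this prints 0x123456789 and 0x23456789 respectively;
the single-element multicall below exists to preserve the former.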

--- a/tools/libs/ctrl/xc_private.c
+++ b/tools/libs/ctrl/xc_private.c
@@ -337,8 +337,47 @@ long do_memory_op(xc_interface *xch, int
         goto out1;
     }
 
-    ret = xencall2(xch->xcall, __HYPERVISOR_memory_op,
-                   cmd, HYPERCALL_BUFFER_AS_ARG(arg));
+#if defined(__linux__) || defined(__sun__)
+    /*
+     * Some sub-ops return values which don't fit in "int". On platforms
+     * without a specific hypercall return value field in the privcmd
+     * interface structure, issue the request as a single-element multicall,
+     * to be able to capture the full return value.
+     */
+    if ( sizeof(long) > sizeof(int) )
+    {
+        multicall_entry_t multicall = {
+            .op = __HYPERVISOR_memory_op,
+            .args[0] = cmd,
+            .args[1] = HYPERCALL_BUFFER_AS_ARG(arg),
+        }, *call = &multicall;
+        DECLARE_HYPERCALL_BOUNCE(call, sizeof(*call),
+                                 XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+
+        if ( xc_hypercall_bounce_pre(xch, call) )
+        {
+            PERROR("Could not bounce buffer for memory_op hypercall");
+            goto out1;
+        }
+
+        ret = do_multicall_op(xch, HYPERCALL_BUFFER(call), 1);
+
+        xc_hypercall_bounce_post(xch, call);
+
+        if ( !ret )
+        {
+            ret = multicall.result;
+            if ( multicall.result > ~0xfffUL )
+            {
+                errno = -ret;
+                ret = -1;
+            }
+        }
+    }
+    else
+#endif
+        ret = xencall2L(xch->xcall, __HYPERVISOR_memory_op,
+                        cmd, HYPERCALL_BUFFER_AS_ARG(arg));
 
     xc_hypercall_bounce_post(xch, arg);
  out1:
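
For context, and purely illustrative rather than quoted from the tree,
the shape of a caller that needs do_memory_op()'s full-width return
value, modelled on libxc's xc_domain_maximum_gpfn():

/* Sketch of a consumer; the real implementation in libxc may differ
 * in detail. */
int xc_domain_maximum_gpfn(xc_interface *xch, uint32_t domid,
                           xen_pfn_t *gpfns)
{
    long rc = do_memory_op(xch, XENMEM_maximum_gpfn, &domid, sizeof(domid));

    if ( rc >= 0 )
    {
        *gpfns = rc;   /* full-width GPFN, intact thanks to the multicall */
        rc = 0;
    }

    return rc;
}

Two notes on the hunk above: the "multicall.result > ~0xfffUL" check
treats only the topmost 4095 values of the unsigned range, i.e. results
corresponding to -errno in [-4095, -1], as failures, so arbitrarily
large legitimate results pass through unmodified; and
sizeof(long) > sizeof(int) is a compile-time constant, so the compiler
can discard whichever of the two branches doesn't apply to the build.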




 

