
Re: [PATCH 07/15] xen/overlay: Enable device tree overlay assignment to running domains


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>
  • From: Henry Wang <xin.wang2@xxxxxxx>
  • Date: Mon, 6 May 2024 11:14:56 +0800
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, "Volodymyr Babchuk" <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 06 May 2024 03:16:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Stefano, Julien,

On 5/3/2024 2:02 AM, Stefano Stabellini wrote:
> On Tue, 30 Apr 2024, Henry Wang wrote:
>> Hi Julien,
>>
>> On 4/30/2024 1:34 AM, Julien Grall wrote:
>>> On 29/04/2024 04:36, Henry Wang wrote:
>>>> Hi Jan, Julien, Stefano,
>>> Hi Henry,
>>>
>>>> On 4/24/2024 2:05 PM, Jan Beulich wrote:
>>>>> On 24.04.2024 05:34, Henry Wang wrote:
>>>>>> --- a/xen/include/public/sysctl.h
>>>>>> +++ b/xen/include/public/sysctl.h
>>>>>> @@ -1197,7 +1197,9 @@ struct xen_sysctl_dt_overlay {
>>>>>>  #define XEN_SYSCTL_DT_OVERLAY_ADD                   1
>>>>>>  #define XEN_SYSCTL_DT_OVERLAY_REMOVE                2
>>>>>>      uint8_t overlay_op;                     /* IN: Add or remove. */
>>>>>> -    uint8_t pad[3];                         /* IN: Must be zero. */
>>>>>> +    bool domain_mapping;                    /* IN: True of False. */
>>>>>> +    uint8_t pad[2];                         /* IN: Must be zero. */
>>>>>> +    uint32_t domain_id;
>>>>>>  };
>>>>> If you merely re-purposed padding fields, all would be fine without
>>>>> bumping the interface version. Yet you don't, albeit for an unclear
>>>>> reason: Why uint32_t rather than domid_t? And on top of that - why a
>>>>> separate boolean when you could use e.g. DOMID_INVALID to indicate
>>>>> "no domain mapping"?
>>>> I think both of your suggestions make great sense. I will follow them
>>>> in v2.
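
For reference, and independent of the sysctl-vs-domctl question below: re-purposing the padding as suggested keeps the struct size unchanged, so no interface version bump would be needed. A rough sketch of how the tail of the struct could then look in v2 (just for illustration, not the final layout):

    /* ... preceding fields unchanged ... */
    uint8_t overlay_op;                     /* IN: Add or remove. */
    uint8_t pad;                            /* IN: Must be zero. */
    domid_t domain_id;                      /* IN: DOMID_INVALID == no domain mapping. */
};

With domid_t being 16 bits, overlay_op + pad + domain_id occupy the same 4 bytes that overlay_op + pad[3] do today.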

>>>>> That said - anything taking a domain ID is certainly suspicious in a
>>>>> sysctl. Judging from the description you really mean this to be a
>>>>> domctl. Anything else will require extra justification.
>>>> I also think a domctl is better. I had a look at the history of the
>>>> already merged series: in the first version of the merged part 1 [1],
>>>> the hypercall was implemented as a domctl, but it was changed to a
>>>> sysctl in v2. I think that made sense, as the scope at that time was
>>>> just to make Xen aware of the device tree node via the Xen device
>>>> tree.
>>>>
>>>> However, this is now a problem for the current part, where the scope
>>>> (and the end goal) is extended to assigning the added device to Linux
>>>> Dom0/DomU via device tree overlays. I am not sure which way is better:
>>>> should we repurpose the sysctl as a domctl, or add another domctl (I
>>>> am worried about duplication, because we basically need the same
>>>> sysctl functionality but now with a domid in it)? What do you think?
>>> I am not entirely sure it is a good idea to try to add the device to
>>> Xen and attach it to the guest at the same time. Imagine the following
>>> situation:
>>>
>>> 1) Add and attach devices
>>> 2) The domain is rebooted
>>> 3) Detach and remove devices
>>>
>>> After step 2, you technically have a new domain. You could also have a
>>> case where this is a completely different guest. So the flow would look
>>> a little bit weird (you create the DT overlay with domain A but remove
>>> it with domain B).
>>>
>>> So, at the moment, it feels like the add/attach (resp. detach/remove)
>>> operations should happen separately.
>>>
>>> Can you clarify why you want to add devices to Xen and attach them to a
>>> guest within a single hypercall?
>> Sorry, I don't know if there were any specific thoughts behind the
>> design of using a single hypercall to both add the device to the Xen
>> device tree and assign it to the guest. In fact, seeing your comments
>> above, I think separating these two functionalities into two xl commands
>> using separate hypercalls would indeed be a better idea. Thank you for
>> the suggestion!
>>
>> To make sure I understand correctly, would you mind confirming if the
>> below actions for v2 make sense to you? Thanks!
>> - Only use the XEN_SYSCTL_DT_OVERLAY_{ADD, REMOVE} sysctls to add/remove
>>   overlays to/from the Xen device tree.
>> - Introduce an xl dt-overlay attach <domid> command and the respective
>>   domctls to do the device assignment of the overlay to the domain.
> I think two hypercalls is OK. The original idea was to have a single xl
> command to do the operation for user convenience (even that is not a
> hard requirement), but that can easily result in two hypercalls.

Ok, sounds good. I will split the command into two hypercalls and try to reuse the existing domctls for assigning/removing IRQ/MMIO ranges.
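
As a rough sketch of that direction (names below are placeholders only, nothing here is a settled interface), the attach/detach side could become a domctl along these lines, with the target domain carried by the common struct xen_domctl wrapper rather than by the payload, which should also address the concern about a domid inside a sysctl:

/* Hypothetical sketch only: attach/detach an already-added overlay
 * to/from a domain. Names are placeholders. */
struct xen_domctl_dt_overlay {
    XEN_GUEST_HANDLE_64(const_void) overlay_fdt;  /* IN: overlay fdt. */
    uint32_t overlay_fdt_size;                    /* IN: overlay dtb size. */
#define XEN_DOMCTL_DT_OVERLAY_ATTACH    1
#define XEN_DOMCTL_DT_OVERLAY_DETACH    2
    uint8_t overlay_op;                           /* IN: Attach or detach. */
    uint8_t pad[3];                               /* IN: Must be zero. */
};
/* The target domain comes from the domctl's common 'domain' field,
 * so no domid is needed in the payload itself. */

On the tooling side the idea would then presumably be that "xl dt-overlay add/remove" keeps operating on the Xen device tree via the existing sysctl, while the new "xl dt-overlay attach <domid>" issues a domctl like the above together with the existing IRQ/MMIO assignment domctls.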

Kind regards,
Henry
