
Re: [PATCH 1/4] xen/xsm: Add XSM_HW_PRIV


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Jason Andryuk <jason.andryuk@xxxxxxx>
  • Date: Tue, 10 Jun 2025 23:13:04 -0400
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 11 Jun 2025 15:52:13 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 2025-06-11 09:02, Jan Beulich wrote:
> On 11.06.2025 00:57, Jason Andryuk wrote:
>> Xen includes distinct concepts of a control domain (privileged) and a
>> hardware domain, but there is only a single XSM_PRIV check.  For dom0
>> this is not an issue, as the two are one and the same.
>>
>> With hyperlaunch and its domain-building capabilities, a non-privileged
>> hwdom and a privileged control domain should be possible.  Today the
>> hwdom fails the XSM_PRIV checks for hardware-related hooks which it
>> should be allowed access to.
>>
>> Introduce XSM_HW_PRIV, and use it to mark many of the physdev_op and
>> platform_op operations.  The hwdom is allowed access for XSM_HW_PRIV.
>>
>> Make XSM_HW_PRIV a new privilege level that is given to the hardware
>> domain, but is not exclusive: the control domain can still execute
>> XSM_HW_PRIV commands.  This is a little questionable, since it's unclear
>> how the control domain can meaningfully execute them.  But this approach
>> is chosen to maintain the increasing privilege levels and keep the
>> control domain fully privileged.

> I consider this conceptually wrong.  "Control" aiui refers to software
> (e.g. VMs or system-wide settings), but there ought to be a (pretty?)
> clear boundary between control and hardware domains, imo.  As to
> "pretty": should any overlap be necessary (xsm_machine_memory_map()
> comes to mind), such overlap would need special handling then, I think.
> At the same time: the more of an overlap there is, the less clear it is
> why the two want/need separating in the first place.
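
For concreteness, the behaviour described in the commit message would look roughly like the sketch below, modelled on xsm_default_action() in xen/include/xsm/dummy.h.  It is a simplified illustration of the superset model, not the literal patch; the function name is made up, and it assumes XSM_HW_PRIV has been added to xsm_default_t.

static int xsm_hw_priv_sketch(xsm_default_t action, struct domain *src)
{
    switch ( action )
    {
    case XSM_HOOK:
        /* Any domain may make these calls. */
        return 0;

    case XSM_HW_PRIV:
        /* The hardware domain is allowed ... */
        if ( is_hardware_domain(src) )
            return 0;
        /*
         * ... and, in the superset model described above, a fully
         * privileged control domain still passes as well.
         */
        if ( src->is_privileged )
            return 0;
        return -EPERM;

    case XSM_PRIV:
        /* Control-domain-only operations, as today. */
        if ( src->is_privileged )
            return 0;
        return -EPERM;

    default:
        return -EPERM;
    }
}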

So you are in favor of splitting control and hardware into distinct permission sets?  I am okay with this.  I implemented it that way originally, but I started doubting it.  Mainly, should the control domain be denied any permission?
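
To make sure we are comparing the same thing, the distinct-sets variant would differ only in the XSM_HW_PRIV case of the sketch above, roughly as follows (again illustrative, with a made-up helper name, not the actual code I had):

/*
 * Distinct-sets variant (sketch only): XSM_HW_PRIV admits the hardware
 * domain and nothing else, so a control domain that is not also the
 * hardware domain gets -EPERM on hardware-only hooks.
 */
static int xsm_hw_priv_distinct_sketch(struct domain *src)
{
    if ( is_hardware_domain(src) )
        return 0;

    /* No src->is_privileged escape hatch here. */
    return -EPERM;
}

The question above is exactly whether it is acceptable for the control domain to take the -EPERM path here.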

We aren't using the toolstack to build domains - dom0less or Hyperlaunch handles that. This avoids issues that might arise from running the toolstack.

Thanks for your feedback.

-Jason



 

