
[PATCH 14/15] x86/hyperlaunch: add max vcpu parsing of hyperlaunch device tree


  • To: xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>
  • Date: Sat, 23 Nov 2024 13:20:43 -0500
  • Cc: "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, jason.andryuk@xxxxxxx, christopher.w.clark@xxxxxxxxx, stefano.stabellini@xxxxxxx, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Sat, 23 Nov 2024 18:30:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Introduce the `cpus` property, named as such for dom0less compatibility, which
represents the maximum number of vcpus to allocate for a domain. In the device
tree, it is encoded as a u32 value.
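
For illustration, a domain node carrying this property might look like the
fragment below. The surrounding node layout follows the dom0less binding and
earlier patches in this series, so treat everything other than the `cpus`
line as assumed context rather than part of this patch:

    chosen {
        domain {
            compatible = "xen,domain";

            /* Maximum number of vCPUs to allocate for the domain (u32). */
            cpus = <4>;
        };
    };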

Signed-off-by: Daniel P. Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>
---
 xen/arch/x86/dom0_build.c             |  3 +++
 xen/arch/x86/domain_builder/fdt.c     | 12 ++++++++++++
 xen/arch/x86/include/asm/bootdomain.h |  2 ++
 3 files changed, 17 insertions(+)

diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 1c3b7ff0e658..7ff052016bfd 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -617,6 +617,9 @@ int __init construct_dom0(struct boot_domain *bd)
     if ( !get_memsize(&dom0_max_size, LONG_MAX) && bd->max_pages )
         dom0_size.nr_pages = bd->max_pages;
 
+    if ( opt_dom0_max_vcpus_max == UINT_MAX && bd->max_vcpus )
+        opt_dom0_max_vcpus_max = bd->max_vcpus;
+
     if ( is_hvm_domain(d) )
         rc = dom0_construct_pvh(bd);
     else if ( is_pv_domain(d) )
diff --git a/xen/arch/x86/domain_builder/fdt.c b/xen/arch/x86/domain_builder/fdt.c
index b8ace5c18c6a..d24e265f2378 100644
--- a/xen/arch/x86/domain_builder/fdt.c
+++ b/xen/arch/x86/domain_builder/fdt.c
@@ -197,6 +197,18 @@ static int __init process_domain_node(
             bd->max_pages = PFN_DOWN(kb * SZ_1K);
             printk("  max memory: %ld\n", bd->max_pages << PAGE_SHIFT);
         }
+        if ( match_fdt_property(fdt, prop, "cpus") )
+        {
+            uint32_t val = UINT_MAX;
+            if ( fdt_prop_as_u32(prop, &val) != 0 )
+            {
+                printk("  failed processing max_vcpus for domain %s\n",
+                       name == NULL ? "unknown" : name);
+                return -EINVAL;
+            }
+            bd->max_vcpus = val;
+            printk("  max vcpus: %d\n", bd->max_vcpus);
+        }
     }
 
     fdt_for_each_subnode(node, fdt, dom_node)
diff --git a/xen/arch/x86/include/asm/bootdomain.h b/xen/arch/x86/include/asm/bootdomain.h
index 9a5ba2931665..d144d6173400 100644
--- a/xen/arch/x86/include/asm/bootdomain.h
+++ b/xen/arch/x86/include/asm/bootdomain.h
@@ -28,6 +28,8 @@ struct boot_domain {
     unsigned long min_pages;
     unsigned long max_pages;
 
+    unsigned int max_vcpus;
+
     struct boot_module *kernel;
     struct boot_module *ramdisk;
 
-- 
2.30.2
