
[Xen-devel] [PATCH 0/4] [HVM] NUMA support in HVM guests


These four patches allow forwarding NUMA characteristics into HVM
guests. This works by allocating memory explicitly from different NUMA nodes and creating an appropriate ACPI SRAT table that describes the topology. It needs a reasonably recent guest kernel, one that uses the SRAT table to discover the NUMA topology. This breaks the current de-facto limitation of guests to one NUMA node: one can use more memory and/or more VCPUs than are available on a single node.

        Patch 1/4: introduce the numanodes=n config file option.
This states how many NUMA nodes the guest should see; the default is 0,
which turns off most parts of the code.
        Patch 2/4: introduce CPU affinity for the allocate_physmap call.
Currently the NUMA node to take the memory from is chosen by simply
using the currently scheduled CPU; this patch allows a CPU to be
specified explicitly and provides XENMEM_DEFAULT_CPU for the old
behavior.
        Patch 3/4: allocate memory with NUMA in mind.
Actually look at the numanodes=n option to split the memory request up
into n parts and allocate each part from a different node. Also change
the VCPUs' affinity to match the nodes.
        Patch 4/4: inject the created SRAT table into the guest.
Create a SRAT table, fill it with the desired NUMA topology and
inject it into the guest.
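The splitting described in patch 3/4 can be pictured as an equal division of the guest's pages with the remainder spread over the first nodes (an illustrative sketch only; split_pages is a hypothetical helper, not the actual Xen allocator code):

```c
#include <stdio.h>

/* Sketch: divide a guest's total page count across numanodes nodes,
 * giving each node an equal share and one extra page to the first
 * (total % numanodes) nodes. Hypothetical illustration. */
static void split_pages(unsigned long total_pages, int numanodes,
                        unsigned long *per_node)
{
    unsigned long share = total_pages / numanodes;
    unsigned long rem   = total_pages % numanodes;

    for (int i = 0; i < numanodes; i++)
        per_node[i] = share + (i < (int)rem ? 1 : 0);
}

int main(void)
{
    unsigned long per_node[2];
    /* e.g. a 4 GB guest (1048576 pages of 4 KB) over two nodes */
    split_pages(1048576, 2, per_node);
    printf("node0: %lu pages, node1: %lu pages\n",
           per_node[0], per_node[1]);
    return 0;
}
```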
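The SRAT table built in patch 4/4 mainly consists of two entry types defined by the ACPI specification: processor affinity entries (type 0) binding each VCPU's APIC ID to a proximity domain, and memory affinity entries (type 1) binding physical address ranges to a domain. A sketch of their layouts (field names are illustrative, not taken from the patch):

```c
#include <stdint.h>

#pragma pack(push, 1)
/* ACPI SRAT Processor Local APIC Affinity Structure (type 0), 16 bytes */
struct srat_processor_affinity {
    uint8_t  type;                    /* 0 */
    uint8_t  length;                  /* 16 */
    uint8_t  proximity_domain_lo;     /* NUMA node (low byte) */
    uint8_t  apic_id;                 /* local APIC ID of the VCPU */
    uint32_t flags;                   /* bit 0: entry enabled */
    uint8_t  local_sapic_eid;
    uint8_t  proximity_domain_hi[3];
    uint32_t reserved;
};

/* ACPI SRAT Memory Affinity Structure (type 1), 40 bytes; the spec
 * stores base and length as low/high 32-bit pairs, folded here into
 * 64-bit fields with identical packed layout. */
struct srat_memory_affinity {
    uint8_t  type;                    /* 1 */
    uint8_t  length;                  /* 40 */
    uint32_t proximity_domain;        /* NUMA node of this memory range */
    uint16_t reserved1;
    uint64_t base_address;            /* start of the range */
    uint64_t range_length;            /* size of the range */
    uint32_t reserved2;
    uint32_t flags;                   /* bit 0: entry enabled */
    uint64_t reserved3;
};
#pragma pack(pop)
```

A NUMA-aware guest kernel walks these entries to rebuild the node topology that patch 3/4 used when placing the memory.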

Applies against staging c/s #15719.

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>


Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 277-84917
----to satisfy European Law for business letters:
AMD Saxony Limited Liability Company & Co. KG
Registered office (business address): Wilschdorfer Landstr. 101, 01109 Dresden, Germany
Court of registration Dresden: HRA 4896
General partner authorized to represent: AMD Saxony LLC (registered office Wilmington, Delaware, USA)
Managing directors of AMD Saxony LLC: Dr. Hans-R. Deppe, Thomas McCoy
