
Re: [Xen-devel] [PATCH RFC 0/8] x86/hvm, libxl: HVM SMT topology support




On 02/25/2016 05:21 PM, Andrew Cooper wrote:
> On 22/02/16 21:02, Joao Martins wrote:
>> Hey!
>>
>> This series is a follow-up to the thread about the performance
>> of hard-pinned HVM guests. Here we propose allowing libxl to
>> change what the CPU topology looks like for the HVM guest, which can
>> favor certain workloads, as shown by Elena in this thread [0].
>> It shows around a 22-23% gain on I/O-bound workloads with the guest
>> vCPUs hard-pinned to the pCPUs with a matching core+thread.
>>
>> This series is divided as follows:
>> * Patch 1     : Sets the initial APIC ID to be the vcpuid, as opposed
>>                 to vcpuid * 2 for each core;
>> * Patch 2     : Whitespace cleanup;
>> * Patch 3     : Adds new leaves to describe Intel/AMD cache
>>                 topology, though it's only internal to libxl;
>> * Patch 4     : Internal call to set per-package CPUID values;
>> * Patch 5 - 8 : Interfaces for xl and libxl for setting the topology.
>>
>> I couldn't quite figure out which user interface was better, so I
>> included both: our "smt" option, and a full description of the topology,
>> i.e. "sockets", "cores" and "threads" options, the same as QEMU's
>> "-smp" option. Note that the latter could also be used by
>> libvirt, since the topology is described in its XML configs.
>>
>> It's also an RFC as AMD support isn't implemented yet.
>>
>> Any comments are appreciated!
> 
> Hey.  Sorry I am late getting to this - I am currently swamped.  Some
> general observations.
Hey Andrew, thanks for the pointers!
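
For reference, this is roughly what the two interfaces from the cover letter
would look like in an xl config (option names as proposed in the series; the
exact syntax is of course still up for discussion, which is partly why both
are included):

    # variant 1: single knob, expose an SMT topology matching the pinning
    vcpus = 8
    smt = 1

    # variant 2: full topology description, same as QEMU's -smp
    vcpus = 8
    sockets = 1
    cores = 4
    threads = 2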

> 
> The cpuid policy code in Xen was never re-thought through after
> multi-vcpu guests were introduced, which means it has no
> understanding of per-package, per-core and per-thread values.
> 
> As part of my further cpuid work, I will need to fix this.  I was
> planning to fix it by requiring full CPU topology information to be
> passed as part of the domaincreate or max_vcpus hypercall (not yet
> chosen which).  This would include cores-per-package, threads-per-core
> etc., and allow Xen to correctly fill in the per-core cpuid values in
> leaves 4, 0xB and 0x80000008.
FWIW, CPU topology on domaincreate sounds nice. Or would the max_vcpus
hypercall serve other purposes too (CPU hotplug, migration)?
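
Just to check I'm reading this right: something along these lines (purely a
sketch with made-up names on my side, not what your series will actually look
like) is what carrying the topology in the hypervisor would enable for, say,
leaf 0xB:

    #include <stdint.h>

    /* Sketch only: fill the topology-enumeration values of CPUID leaf 0xB
     * from threads-per-core and cores-per-package.  regs[s][0..2] are
     * EAX..ECX for subleaf s; EDX (the x2APIC ID) is per-vCPU and omitted. */
    void fill_leaf_0xb(unsigned int threads_per_core,
                       unsigned int cores_per_package,
                       uint32_t regs[2][3])
    {
        unsigned int t_bits = 0, c_bits = 0;

        /* Field widths get rounded up to a power of two. */
        while ( (1u << t_bits) < threads_per_core )
            t_bits++;
        while ( (1u << c_bits) < cores_per_package )
            c_bits++;

        /* Subleaf 0 (SMT level): shift to reach the core ID, and the
         * number of logical processors sharing a core. */
        regs[0][0] = t_bits;
        regs[0][1] = threads_per_core;
        regs[0][2] = (1u << 8) | 0;            /* level type 1 = SMT */

        /* Subleaf 1 (core level): shift to reach the package ID, and the
         * number of logical processors per package. */
        regs[1][0] = t_bits + c_bits;
        regs[1][1] = threads_per_core * cores_per_package;
        regs[1][2] = (2u << 8) | 1;            /* level type 2 = Core */
    }

Leaf 4's EAX[31:26] (cores per package - 1) and AMD's 0x80000008 would
presumably be derived from the same two numbers.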

> 
> In particular, I am concerned about giving the toolstack the ability to
> blindly control the APIC IDs.  Their layout is very closely linked to
> topology, and in particular to the HTT flag.
> 
> Overall, I want to avoid any possibility of generating APIC layouts
> (including the emulated IOAPIC with HVM guests) which don't conform to
> the appropriate AMD/Intel manuals.
I see, so overall having Xen control the topology would be a better approach
than "mangling" the APIC IDs in the cpuid policy as I am proposing. One good
thing about Xen handling the topology bits is that on Intel CPUs with CPUID
faulting support, PV guests could also see the topology info. And given that
word 10 of hw_caps won't be exposed (as per your CPUID work), handling the PV
case in the cpuid policy wouldn't be as clean.
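
To spell out how I understand the "layout is closely linked to topology"
point (again just a sketch with invented names, not a proposed
implementation): the initial APIC ID is essentially the vcpuid packed into
fields whose widths are rounded up to powers of two, and the HTT flag and
leaf 1's logical processor count then have to stay consistent with that
packing:

    /* Sketch: derive a vCPU's initial APIC ID from its position in the
     * configured topology. */
    static unsigned int order(unsigned int n)    /* ceil(log2(n)) */
    {
        unsigned int o = 0;

        while ( (1u << o) < n )
            o++;
        return o;
    }

    unsigned int vcpu_apic_id(unsigned int vcpuid,
                              unsigned int threads_per_core,
                              unsigned int cores_per_package)
    {
        unsigned int t_bits = order(threads_per_core);
        unsigned int c_bits = order(cores_per_package);
        unsigned int thread = vcpuid % threads_per_core;
        unsigned int core   = (vcpuid / threads_per_core) % cores_per_package;
        unsigned int pkg    = vcpuid / (threads_per_core * cores_per_package);

        return (pkg << (c_bits + t_bits)) | (core << t_bits) | thread;
    }

With threads_per_core = 1 this collapses to apic_id == vcpuid (patch 1),
while the current vcpuid * 2 scheme effectively hard-codes a one-bit thread
field with only one thread populated per core.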

Joao

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

