
Re: [Xen-devel] [PATCH] libxc/x86: avoid overflow in CPUID APIC ID adjustments



On 20.09.2019 12:05, Andrew Cooper wrote:
> On 19/09/2019 12:48, Jan Beulich wrote:
>> Recent AMD processors may report up to 128 logical processors in CPUID
>> leaf 1. Doubling this value produces 0 (which OSes sincerely dislike),
>> as the respective field is only 8 bits wide. Suppress doubling the value
>> (and its leaf 0x80000008 counterpart) in such a case.
>>
>> Additionally don't even do any adjustment when the host topology already
>> includes room for multiple threads per core.
>>
>> Furthermore don't double the Maximum Cores Per Package at all - by us
>> introducing a fake HTT effect, the core count doesn't need to change.
>> Instead adjust the Maximum Logical Processors Sharing Cache field, which
>> so far was zapped altogether.
>>
>> Also zap leaf 4 (and at the same time leaf 2) EDX output for AMD.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> TBD: Using xc_physinfo() output here needs a better solution. The
>>      threads_per_core value returned is the count of active siblings of
>>      CPU 0, rather than a system wide applicable value (and constant
>>      over the entire session). Using CPUID output (leaves 4 and
>>      8000001e) doesn't look viable either, due to this not really being
>>      the host values on PVH. Judging from the host feature set's HTT
>>      flag also wouldn't tell us whether there actually are multiple
>>      threads per core.
> 
> The key thing is that htt != "more than one thread per core".  HTT is
> strictly a bit indicating that topology information is available in a
> new form in the CPUID leaves (or in AMD's case, the same information
> should be interpreted in a new way).  Just because HTT is set (and it
> does get set on non-HT-capable systems), doesn't mean there is space for
> more than one thread per core in the topology information.
> 
> For PV guests, my adjustment in the CPUID series shows (what I believe
> to be) the only correct way of propagating the host HTT/CMP_LEGACY
> settings through.
> 
> For HVM guests, it really shouldn't have anything to do with the
> host setting.  We should be choosing how many threads/core to give to
> the guest, then constructing the topology information from first principles.
> 
> Ignore the PVH case.  It is totally broken for several other reasons as
> well, and PVH Dom0 isn't a production-ready thing yet.
> 
> This gets us back to the PV case where the host information is actually
> in view, and (for backport purposes) can be trusted.

Okay, this means I'll revive and finish the partly cpuid()-based attempt
I had made initially. A fundamental question from your reply remains
open though: Do you agree with the idea of avoiding the multiplication
by 2 when the host topology already provides at least one bit of thread
ID within the APIC ID? Related to that is the question of whether you
also agree with my approach of ditching the adjustment to Maximum Cores
Per Package. (I'm asking because I'd like to avoid several more round
trips of the patch itself.)
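For reference, the wrap the patch description talks about can be sketched
like this (a minimal, hypothetical helper for illustration, not the actual
libxc code): the "logical processors per package" count lives in the 8-bit
field CPUID leaf 1 EBX[23:16], so doubling 128 produces 256, which truncates
to 0; the fix is to suppress the doubling when it would overflow.

```c
#include <stdint.h>

/* Hypothetical helper (illustration only): adjust the 8-bit
 * "logical processors per package" count from CPUID leaf 1
 * EBX[23:16].  Doubling 128 would yield 256, which wraps to 0
 * in 8 bits, so suppress the doubling in that case and keep
 * the original value instead. */
static uint8_t adjust_logical_procs(uint8_t count)
{
    if (count > 0x7f)
        return count;           /* doubling would overflow; leave as-is */
    return (uint8_t)(count * 2); /* safe to double */
}
```

With this, a host reporting 64 logical processors yields 128 for the guest,
while a host already reporting 128 is passed through unchanged rather than
wrapping to 0.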

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

