
Re: [Xen-devel] [RFC PATCH 2/8] x86/vlapic: use apic_id array to set initial (x2)APIC ID

>>> On 24.04.18 at 09:53, <chao.gao@xxxxxxxxx> wrote:
> On Mon, Apr 23, 2018 at 10:04:56AM -0600, Jan Beulich wrote:
>>>>> On 08.01.18 at 05:01, <chao.gao@xxxxxxxxx> wrote:
>>> --- a/xen/include/asm-x86/hvm/domain.h
>>> +++ b/xen/include/asm-x86/hvm/domain.h
>>> @@ -213,6 +213,9 @@ struct hvm_domain {
>>>      uint8_t thread_per_core;
>>>  };
>>> +#define hvm_vcpu_x2apic_id(v) \
>>> +    (v->domain->arch.hvm_domain.apic_id[v->vcpu_id])
>> I can't seem to find where you set up this array.
>>> +#define hvm_vcpu_apic_id(v) (hvm_vcpu_x2apic_id(v) % 255)
>> I don't think the % 255 is appropriate here - the macro simply shouldn't
>> be invoked in such a case.
>> On the whole I'm not convinced using an array is appropriate -
>> calculating the APIC ID shouldn't be very involved, and should require
>> much less than possibly multiple kB of storage.
> APIC ID can be inferred from a 3-tuple (socket ID, core ID and thread
> ID). If we want to give the admin the ability to set the mapping between
> vcpu_id and this 3-tuple to anything he wants (such as vcpu0 -> socket ID 1,
> core ID 0, thread ID 3 and vcpu1 -> socket ID 0, core ID 1, thread ID 0, ...),
> IMO, we have no way to avoid storing some related information (an APIC ID
> array or a 3-tuple array) except by limiting the flexibility of the guest
> CPU topology. At least, I want to emulate a CPU and cache topology similar
> to KNM's, namely each core has 4 logical threads and every two cores
> share the same L2 cache.

I think it was pointed out to you already that the CPUID handling here
needs overhaul. We cannot allow the admin to specify things which are
impossible on real hardware. IOW the input here ought to be number of
sockets, number of cores per socket, and number of threads per core.
Everything else will need to be calculated from these.
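
A minimal sketch of that calculation (all names here are illustrative, not
an existing Xen interface): round each per-level count up to a power of two,
derive the 3-tuple from vcpu_id, and shift the fields together:

    #include <stdint.h>

    /* Smallest number of bits able to hold 'count' distinct IDs. */
    static unsigned int bits_for(unsigned int count)
    {
        unsigned int bits = 0;

        while ( (1u << bits) < count )
            bits++;

        return bits;
    }

    /* Hypothetical derivation of a vCPU's x2APIC ID from the topology
     * inputs alone, with no per-vCPU array needed. */
    static uint32_t vcpu_x2apic_id(unsigned int vcpu_id,
                                   unsigned int threads_per_core,
                                   unsigned int cores_per_socket)
    {
        unsigned int thread = vcpu_id % threads_per_core;
        unsigned int core = (vcpu_id / threads_per_core) % cores_per_socket;
        unsigned int socket = vcpu_id / (threads_per_core * cores_per_socket);
        unsigned int thread_bits = bits_for(threads_per_core);
        unsigned int core_bits = bits_for(cores_per_socket);

        return (socket << (core_bits + thread_bits)) |
               (core << thread_bits) | thread;
    }

With the inputs constrained like this, something like hvm_vcpu_x2apic_id()
could be computed on the fly instead of being read from a stored array.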

