
Re: [Xen-devel] [PATCH for-4.8] libxc/x86: Report consistent initial APIC value for PV guests



On 10/11/16 15:05, Boris Ostrovsky wrote:
> On 11/10/2016 09:55 AM, Andrew Cooper wrote:
>> On 10/11/16 14:50, Boris Ostrovsky wrote:
>>> Currently the hypervisor fills a PV guest's CPUID(1).EBX[31:24] (initial
>>> APIC ID) with the contents of that field on the processor that launched
>>> the guest. This results in the guest reporting different initial
>>> APIC IDs across runs.
>>>
>>> We should be consistent in how this value is reported; let's set
>>> it to 0 (which is also what Linux guests expect).
>>>
>>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
>> This surely wants to go along with:
> Probably, although Linux PV always reports the APIC ID as zero (the
> whole PV APIC is a mess there, as it is tied to topology discovery,
> which we don't do well, to put it charitably).

If PV linux always overrides this to 0, why do you need the toolstack
fix in the first place?

>
>> andrewcoop@andrewcoop:/local/xen.git/xen$ git diff
>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
>> index b51b51b..bdf9339 100644
>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -985,6 +985,10 @@ void pv_cpuid(struct cpu_user_regs *regs)
>>          uint32_t tmp, _ecx, _ebx;
>>  
>>      case 0x00000001:
>> +        /* Fix up VLAPIC details. */
>> +        b &= 0x00FFFFFFu;
>> +        b |= (curr->vcpu_id * 2) << 24;
> Do we also need to multiply by two for PV guests? Or is it just to be
> consistent with HVM?

Frankly, until I get CPUID phase 2 sorted, this is all held together
with good wishes rather than duct tape.  I am astounded it has held
together this long.

HVM chooses an even APIC ID to prevent the VM thinking it has hyperthreads.
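To illustrate, here is a minimal standalone sketch of the fix-up in the diff above. The helper name fixup_leaf1_ebx is hypothetical (the real code operates in-place inside pv_cpuid()); it shows how bits [31:24] of CPUID leaf 1's EBX carry the initial APIC ID, and why doubling the vCPU ID keeps bit 0 of the APIC ID clear so no sibling hyperthread is implied.

```c
#include <stdint.h>

/*
 * Hypothetical helper mirroring the quoted patch: replace the initial
 * APIC ID reported in CPUID(1).EBX[31:24] with a per-vCPU value,
 * leaving the low 24 bits (brand index, CLFLUSH size, logical
 * processor count) untouched.
 */
static uint32_t fixup_leaf1_ebx(uint32_t ebx, unsigned int vcpu_id)
{
    ebx &= 0x00FFFFFFu;                    /* Clear the host's APIC ID. */
    ebx |= (uint32_t)(vcpu_id * 2) << 24;  /* Even ID: bit 0 clear, so the
                                            * guest doesn't infer a
                                            * hyperthread sibling. */
    return ebx;
}
```

For example, vCPU 3 yields an initial APIC ID of 6, and any vCPU's APIC ID has its low bit clear.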

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

