Re: [Xen-devel] [PATCH] x86/cpuid: Deal with broken firmware once more



On 11/12/2016 05:05 PM, M. Vefa Bicakci wrote:
> On 11/10/2016 06:31 PM, Boris Ostrovsky wrote:
>> On 11/10/2016 10:05 AM, Charles (Chas) Williams wrote:
>>>
>>> On 11/10/2016 09:02 AM, Boris Ostrovsky wrote:
>>>> On 11/10/2016 06:13 AM, Thomas Gleixner wrote:
>>>>> On Thu, 10 Nov 2016, M. Vefa Bicakci wrote:
>>>>>
>>>>>> I have found that your patch unfortunately does not improve the
>>>>>> situation
>>>>>> for me. Here is an excerpt obtained from the dmesg of a kernel
>>>>>> compiled
>>>>>> with this patch *as well as* Sebastian's patch:
>>>>>> [    0.002561] CPU: Physical Processor ID: 0
>>>>>> [    0.002566] CPU: Processor Core ID: 0
>>>>>> [    0.002572] [Firmware Bug]: CPU0: APIC id mismatch. Firmware:
>>>>>> ffff CPUID: 2
>>>>> So apic->cpu_present_to_apicid() gives us a completely bogus APIC id
>>>>> which
>>>>> translates to a bogus package id. And looking at the XEN code:
>>>>>
>>>>>    xen_pv_apic.cpu_present_to_apicid = xen_cpu_present_to_apicid,
>>>>>
>>>>> and xen_cpu_present_to_apicid does:
>>>>>
>>>>> static int xen_cpu_present_to_apicid(int cpu)
>>>>> {
>>>>>         if (cpu_present(cpu))
>>>>>                 return xen_get_apic_id(xen_apic_read(APIC_ID));
>>>>>         else
>>>>>                 return BAD_APICID;
>>>>> }
>>>>>
>>>>> So independent of which present CPU we query, we get just some random
>>>>> information; in the above case we get BAD_APICID from xen_apic_read(),
>>>>> not from the else path, as this CPU _IS_ present.
>>>>>
>>>>> What's so wrong with storing the fricking firmware-supplied APIC id as
>>>>> everybody else does and reporting it back when queried?
>>>> By firmware you mean ACPI? It is most likely not available to PV guests.
>>>> How about returning cpu_data(cpu).initial_apicid?
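
For concreteness, a minimal, untested sketch of that suggestion -- report
the value recorded in cpu_data() for the queried CPU instead of reading
whichever local APIC the caller happens to be running on (this assumes
initial_apicid is actually populated for PV vCPUs):

static int xen_cpu_present_to_apicid(int cpu)
{
        /* Return the per-CPU value saved at boot rather than reading
         * the local APIC of the CPU executing this query. */
        if (cpu_present(cpu))
                return cpu_data(cpu).initial_apicid;
        else
                return BAD_APICID;
}
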
>>>>
>>>> And what was the original problem?
>>> The original issue I found was that VMware was returning a different set
>>> of APIC IDs in the ACPI tables than what it advertised on the CPUs.
>>>
>>> http://www.mail-archive.com/linux-kernel@xxxxxxxxxxxxxxx/msg1266716.html
>> For Xen, we recently added a6a198bc60e6 ("xen/x86: Update topology map
>> for PV VCPUs") to at least temporarily work around some topology map
>> problems that PV guests have with RAPL (which I think is what Vefa's
>> problem was).
> Hello Boris,
>
> (Sorry for the delay!)
>
> It appears that the problem is a bit different from the one corrected
> by a6a198bc60e6, because my kernel tree -- based on 4.8.6 -- already
> includes the -stable backport of that commit, i.e.
>   88540ad0820ddfb05626e0136c0e5a79cea85fd1
>
> The patch I included in my previous e-mail (dated 2016-11-10) corrects
> the root cause of the issue I am having with 4.8.6. Sebastian's original
> patch, which adds error checking to the RAPL module, prevents the RAPL
> module from causing a kernel oops when my patch is not applied.

I don't see any messages from you on that date. Can you provide a link
to it (and to Sebastian's patch)?

(BTW, it's generally a good idea to copy the xen-devel list on any
Xen-related issues.)

>
> The issue I am experiencing is caused by the boot-up code in the
> 'init_apic_mappings' function switching the APIC ops structure from
> Xen's structure to a no-op structure by calling the 'apic_disable'
> function. Please let me know if I can clarify or elaborate.
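
For reference, the path being described is roughly the following
(a condensed paraphrase of arch/x86/kernel/apic/apic.c, not a verbatim
copy of the 4.8 code):

void __init init_apic_mappings(void)
{
        /* ... */
        /* If no local APIC can be found, switch to the no-op driver */
        if (!smp_found_config && detect_init_APIC()) {
                pr_info("APIC: disable apic facility\n");
                apic_disable();         /* installs the apic_noop ops */
        }
        /* ... */
}
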

apic_disable() is only invoked if there is no APIC present (i.e.
detect_init_APIC() returns a non-zero value) and I don't think this can
happen. Is your CPUID[1].edx[9] not set?
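
(If it helps, here is a quick way to check that bit from inside the guest --
a standalone userspace sketch using GCC's <cpuid.h>, not kernel code:)

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1 feature flags; EDX bit 9 is the APIC flag */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                return 1;

        printf("CPUID[1].edx[9] (APIC) = %u\n", (edx >> 9) & 1);
        return 0;
}
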

-boris

>
> For the record, using 4.8.7 without my correction patch does not
> rectify the issue at hand. 4.8.7 changes the call site of the
> 'init_apic_mappings' function, so I had thought that it could be helpful.
>
> Thank you,
>
> Vefa



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel