
Re: [Xen-devel] [PATCH 7 of 7] KEXEC: correctly revert x2apic state when kexecing




On 15/06/11 11:14, Jan Beulich wrote:
>>>> On 15.06.11 at 11:40, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 15/06/11 10:26, Jan Beulich wrote:
>>>>>> On 13.06.11 at 19:02, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> @@ -345,6 +346,33 @@ void disable_local_APIC(void)
>>>>          wrmsrl(MSR_IA32_APICBASE, msr_content &
>>>>                 ~(MSR_IA32_APICBASE_ENABLE|MSR_IA32_APICBASE_EXTD));
>>>>      }
>>>> +
>>>> +    if ( kexecing )
>>>> +    {
>>>> +        uint64_t msr_content;
>>>> +        rdmsrl(MSR_IA32_APICBASE, msr_content);
>>>> +        msr_content &= ~(MSR_IA32_APICBASE_ENABLE|MSR_IA32_APICBASE_EXTD);
>>>> +        wrmsrl(MSR_IA32_APICBASE, msr_content);
>>>> +
>>>> +        switch ( apic_boot_mode )
>>> Did you verify this gets executed only for the single remaining CPU?
>> It most definitely runs on all CPUs.  Because of the difference between
>> x2apic and xapic interrupts, it is stark raving mad to try to run a
>> system with different lapics in different modes as the default operating
>> state.
> But you're not returning each CPU to its boot state - you're instead
> forcing them all into the state the boot CPU was in (and hence possibly
> out of sync with other firmware-provided information).

My point was that the apic state of the boot processor really can't be
different from the boot state of any other processor, due to the
triple-faulting fun you get when lapics receive interrupts in the wrong
mode.
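
For illustration, a rough sketch of the shape of that kexec-path revert
(not the exact patch hunk - the apic_boot_mode variable and APIC_MODE_*
values are the ones introduced earlier in this series, and the case
bodies here are an assumption).  The EXTD bit may only be set once
ENABLE is already set, which is why entering x2apic mode takes two MSR
writes:

    /* Sketch only: revert this CPU's local APIC to its boot mode. */
    if ( kexecing )
    {
        uint64_t msr_content;

        /* First drop back to the fully disabled state. */
        rdmsrl(MSR_IA32_APICBASE, msr_content);
        msr_content &= ~(MSR_IA32_APICBASE_ENABLE | MSR_IA32_APICBASE_EXTD);
        wrmsrl(MSR_IA32_APICBASE, msr_content);

        switch ( apic_boot_mode )
        {
        case APIC_MODE_DISABLED:
            /* Nothing to do - already disabled above. */
            break;

        case APIC_MODE_XAPIC:
            msr_content |= MSR_IA32_APICBASE_ENABLE;
            wrmsrl(MSR_IA32_APICBASE, msr_content);
            break;

        case APIC_MODE_X2APIC:
            /* x2apic may only be entered from xapic mode. */
            msr_content |= MSR_IA32_APICBASE_ENABLE;
            wrmsrl(MSR_IA32_APICBASE, msr_content);
            msr_content |= MSR_IA32_APICBASE_EXTD;
            wrmsrl(MSR_IA32_APICBASE, msr_content);
            break;
        }
    }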

>>>> +    if ( current_local_apic_mode() == APIC_MODE_X2APIC )
>>>> +        x2apic_enabled = 1;
>>>> +    else
>>>> +        x2apic_enabled = 0;
>>> Do you really need to force x2apic_enabled *both* ways to avoid the
>>> described protection fault? And I really still don't follow why the
>>> variable suddenly needs tweaking at the end of the system's life, when
>>> the system lived happily with its "normal" value.
>> You do need to force it both ways.  disable_IO_APIC(), which is the
>> following call, runs the risk of causing a protection fault when
>> setting virtual wire mode back up.  However, in the opposite case,
>> where the local APIC is in x2apic mode but x2apic_enabled is false,
>> all the APIC code will attempt to use MMIO and get confused when
>> twiddling it does nothing.  (This is one of the problems the Linux
>> kdump kernel had until very recently.)
> How would the APIC end up in x2apic mode when x2apic_enabled
> is not set (or vice versa)?
>
> Jan
>
You get into that state when tearing down the local APICs on the kexec
path, which is why we need to go and tweak the x2apic_enabled variable.
Without this patch, nothing in the Xen code ever sets x2apic_enabled
back to 0 (it defaults to 0 in the .data section).
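
For reference, that mode check relies on reading the APICBASE MSR
directly rather than trusting x2apic_enabled - roughly along these
lines (an assumed sketch of current_local_apic_mode(), not the verbatim
patch; the apic_mode enumeration names match the sketch above):

    /* Sketch: derive the current mode of this CPU's local APIC from
     * hardware state, independent of the x2apic_enabled variable. */
    enum apic_mode current_local_apic_mode(void)
    {
        uint64_t msr_content;

        rdmsrl(MSR_IA32_APICBASE, msr_content);

        if ( msr_content & MSR_IA32_APICBASE_EXTD )
            return APIC_MODE_X2APIC;  /* EXTD is only valid with ENABLE set. */
        else if ( msr_content & MSR_IA32_APICBASE_ENABLE )
            return APIC_MODE_XAPIC;
        else
            return APIC_MODE_DISABLED;
    }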

~Andrew

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com

