
Re: [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load



On 06/17/20 05:16, Igor Druzhinin wrote:
> On 16/06/2020 19:42, Laszlo Ersek wrote:
>> If I understand correctly, TimerInterruptHandler()
>> [OvmfPkg/8254TimerDxe/Timer.c] currently does the following:
>>
>> - RaiseTPL (TPL_HIGH_LEVEL) --> mask interrupts from being delivered
>>
>> - mLegacy8259->EndOfInterrupt() --> permit the PIC to generate further
>> interrupts (= make them pending)
>>
>> - RestoreTPL() --> unmask interrupts (allow delivery)
>>
>> RestoreTPL() is always expected to invoke handlers (on its own stack)
>> that have just been unmasked, so that behavior is not unexpected, in my
>> opinion.
> 
> Yes, this is where I'd like to have a confirmation - opening a window
> for uncontrollable number of nested interrupts with a small stack
> looks dangerous.

Sorry, I meant the above more generally. The sentence

  RestoreTPL() is always expected to invoke handlers (on its own stack)
  that have just been unmasked

doesn't only refer to actual timer hardware interrupts (in connection to
TPL_HIGH_LEVEL), but also to invoking event notification functions that
have been queued while running at the raised TPL.

Quoting "EFI_BOOT_SERVICES.CreateEvent()" from the spec:

    Events exist in one of two states, “waiting” or “signaled.” When an
    event is created, firmware puts it in the “waiting” state. When the
    event is signaled, firmware changes its state to “signaled” and, if
    EVT_NOTIFY_SIGNAL is specified, places a call to its notification
    function in a FIFO queue. There is a queue for each of the “basic”
    task priority levels defined in Section 7.1 (TPL_CALLBACK, and
    TPL_NOTIFY). The functions in these queues are invoked in FIFO
    order, starting with the highest priority level queue and proceeding
    to the lowest priority queue that is unmasked by the current TPL. If
    the current TPL is equal to or greater than the queued notification,
    it will wait until the TPL is lowered via
    EFI_BOOT_SERVICES.RestoreTPL().

In practice, when the event is signaled and the current TPL does not
mask the TPL of the associated notification function, the notification
function is called as part of signaling the event (that is, internally
to the SignalEvent() call). Otherwise, if the unmasking occurs later via
RestoreTPL(), the queued notification functions are invoked on the stack
of RestoreTPL() -- in other words, internally to the RestoreTPL()
function call itself.
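
Just to illustrate the mechanism (this is only a sketch, not code from
any driver; the NotifyFunc / Demo names are made up, and error handling
is omitted):

#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>

STATIC
VOID
EFIAPI
NotifyFunc (
  IN EFI_EVENT  Event,
  IN VOID       *Context
  )
{
  //
  // Queued at TPL_CALLBACK; runs on the stack of whatever call lowers the
  // current TPL below TPL_CALLBACK -- in the sketch below, that is
  // gBS->RestoreTPL().
  //
}

VOID
Demo (
  VOID
  )
{
  EFI_EVENT  DemoEvent;
  EFI_TPL    OldTpl;

  gBS->CreateEvent (
         EVT_NOTIFY_SIGNAL,
         TPL_CALLBACK,
         NotifyFunc,
         NULL,
         &DemoEvent
         );

  OldTpl = gBS->RaiseTPL (TPL_HIGH_LEVEL);

  //
  // TPL_CALLBACK is masked here, so the notification function is only
  // queued, not invoked.
  //
  gBS->SignalEvent (DemoEvent);

  //
  // Assuming the caller ran at TPL_APPLICATION, the unmasking happens
  // here: NotifyFunc() is invoked internally to this RestoreTPL() call,
  // on the current stack.
  //
  gBS->RestoreTPL (OldTpl);

  gBS->CloseEvent (DemoEvent);
}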

So all I meant was that notification functions running internally to
RestoreTPL() is by design.

What's unexpected is the "uncontrollable number" of nested interrupts.
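
For reference, the ordering under discussion in TimerInterruptHandler()
[OvmfPkg/8254TimerDxe/Timer.c] is roughly the following (paraphrased
from memory, not a verbatim copy of the driver):

VOID
EFIAPI
TimerInterruptHandler (
  IN EFI_EXCEPTION_TYPE  InterruptType,
  IN EFI_SYSTEM_CONTEXT  SystemContext
  )
{
  EFI_TPL  OriginalTPL;

  //
  // Mask further event notifications / interrupt delivery.
  //
  OriginalTPL = gBS->RaiseTPL (TPL_HIGH_LEVEL);

  //
  // EOI: the PIC may raise the next timer interrupt already, but it
  // stays pending while we remain at TPL_HIGH_LEVEL.
  //
  mLegacy8259->EndOfInterrupt (mLegacy8259, Efi8259Irq0);

  if (mTimerNotifyFunction != NULL) {
    mTimerNotifyFunction (mTimerPeriod);
  }

  //
  // Unmasking: a pending timer interrupt can be delivered right here,
  // and the nested handler runs on this same stack, internally to
  // RestoreTPL().
  //
  gBS->RestoreTPL (OriginalTPL);
}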

> 
>> What seems unexpected is the queueing of a huge number of timer
>> interrupts. I would think a timer interrupt is either pending or not
>> pending (i.e. if it's already pending, then the next generated interrupt
>> is coalesced, not queued). While there would still be a window between
>> the EOI and the unmasking, I don't think it would normally allow for a
>> *huge* number of queued interrupts (and consequently a stack overflow).
> 
> It's not the window between EOI and unmasking, but the very fact that
> the vCPU is descheduled for a considerable amount of time, that causes
> a backlog of timer interrupts to build up. This is Xen's default
> behavior and is configurable (there are several timer modes, including
> the coalescing you mention). It is done for compatibility with some
> guests that base their time accounting on the number of periodic
> interrupts they receive.

OK, thanks for explaining.

> 
>> So I basically see the root of the problem in the interrupts being
>> queued rather than coalesced. I'm pretty unfamiliar with this x86 area
>> (= the 8259 PIC in general), but the following wiki article seems to
>> agree with my suspicion:
>>
>> https://wiki.osdev.org/8259_PIC#How_does_the_8259_PIC_chip_work.3F
>>
>>     [...] and whether there's an interrupt already pending. If the
>>     channel is unmasked and there's no interrupt pending, the PIC will
>>     raise the interrupt line [...]
>>
>> Can we say that the interrupt queueing (as opposed to coalescing) is a
>> Xen issue?
> 
> I can admit that the whole issue might be Xen-specific if that form
> of timer mode is not used in QEMU-KVM. What mode is typical there,
> then?

That question is too difficult for me to answer :(

> We might consider switching Xen to a different mode if so, as I believe
> those guests have been out of support for many years.

Can you perhaps test this hypothesis? If you select the coalescing timer
mode for the Xen guest in question, does the symptom go away?

> 
>> (Hmmm... maybe the hypervisor *has* to queue the timer interrupts,
>> otherwise some of them would simply be lost, and the guest would lose
>> track of time.)
>>
>> Either way, I'm not sure what the best approach is. This driver was
>> moved under OvmfPkg from PcAtChipsetPkg in commit 1a3ffdff82e6
>> ("OvmfPkg: Copy 8254TimerDxe driver from PcAtChipsetPkg", 2019-04-11).
>> HpetTimerDxe also lives under PcAtChipsetPkg.
>>
>> So I think I'll have to rely on the expertise of Ray here (CC'd).
> 
> Also note that, since the issue might be Xen-specific, we might want to
> try to fix it in XenTimer only - I modified 8254Timer because Xen is
> still present in the general config (but that should go away soon).

We could also modify 8254TimerDxe like this:

- provide the new variant of the TimerInterruptHandler() function for
Xen only, without touching the existing one -- simply introduce it as a
new function,

- in TimerDriverInitialize(), first call XenDetected() from
XenPlatformLib, then choose the argument for the
mCpu->RegisterInterruptHandler() call accordingly.

This wouldn't be difficult to locate and revert when
<https://bugzilla.tianocore.org/show_bug.cgi?id=2122> is addressed. (It
would be easy to find by grepping for XenDetected().)
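
For illustration, the registration in TimerDriverInitialize() could look
roughly like this (a sketch only; TimerInterruptHandlerXen is a made-up
name for the new, Xen-only variant, and mCpu / mTimerVector refer to the
existing module-level variables in Timer.c):

#include <Library/XenPlatformLib.h>

  //
  // Excerpt from TimerDriverInitialize(): pick the handler variant based
  // on XenDetected(); the generic handler stays untouched on QEMU/KVM.
  //
  Status = mCpu->RegisterInterruptHandler (
             mCpu,
             mTimerVector,
             XenDetected () ? TimerInterruptHandlerXen : TimerInterruptHandler
             );
  ASSERT_EFI_ERROR (Status);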

[...]

Thanks!
Laszlo




 

