
[Xen-devel] [PATCH V3] firmware: Change level-triggered GPE event to a edge one for qemu-xen

This should help reduce a CPU hotplug race window where a CPU hotplug
event will not be seen by the OS.

When hotplugging more than one vcpu, some of those vcpus might not be
seen as plugged by the guest.

This is what currently happens:

1. hw adds cpu, sets GPE.2 bit and sends SCI
2. OSPM gets SCI, reads GPE00.sts and masks GPE.2 bit in GPE00.en
3. OSPM executes _L02 (the level-triggered event associated with cpu hotplug)
4. hw adds second cpu and sets GPE.2 bit but SCI is not asserted
    since GPE00.en masks event
5. OSPM resets GPE.2 bit in GPE00.sts and unmasks it in GPE00.en

As a result, the event from step 4 is lost because step 5 clears it,
and the OS will not see the second added cpu.

The ACPI 5.0 spec (5.6.4 General-Purpose Event Handling)
defines GPE event handling as follows:

1. Disables the interrupt source (GPEx_BLK EN bit).
2. If an edge event, clears the status bit.
3. Performs one of the following:
* Dispatches to an ACPI-aware device driver.
* Queues the matching control method for execution.
* Manages a wake event using device _PRW objects.
4. If a level event, clears the status bit.
5. Enables the interrupt source.

So, by using an edge-triggered General-Purpose Event instead of a
level-triggered GPE, OSPM is less likely to clear the status bit set by
the addition of the second CPU. At step 5, QEMU will re-assert the
interrupt if the status bit is still set.

This description also applies to PCI hotplug, since QEMU follows the same
steps, so we change the GPE event type for PCI hotplug as well.

This does not apply to qemu-xen-traditional, because it does not re-assert
the interrupt at step 5 when the status bit is still set.

Patch and description inspired by SeaBIOS's commit:
Replace level gpe event with edge gpe event for hot-plug handlers
from Igor Mammedov <imammedo@xxxxxxxxxx>

Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
---
Changes in V3:
  - add description: does not apply to qemu-dm
Changes in V2:
  - better patch comment:
    the patch does not fix the race, but reduces the window;
    include the patch description of the quoted commit
  - also apply the change to PCI hotplug
 tools/firmware/hvmloader/acpi/mk_dsdt.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/tools/firmware/hvmloader/acpi/mk_dsdt.c b/tools/firmware/hvmloader/acpi/mk_dsdt.c
index 996f30b..a4b693b 100644
--- a/tools/firmware/hvmloader/acpi/mk_dsdt.c
+++ b/tools/firmware/hvmloader/acpi/mk_dsdt.c
@@ -220,9 +220,13 @@ int main(int argc, char **argv)
-    /* Define GPE control method '_L02'. */
+    /* Define GPE control method. */
     push_block("Scope", "\\_GPE");
-    push_block("Method", "_L02");
+    if (dm_version == QEMU_XEN_TRADITIONAL) {
+        push_block("Method", "_L02");
+    } else {
+        push_block("Method", "_E02");
+    }
     stmt("Return", "\\_SB.PRSC()");
@@ -428,7 +432,7 @@ int main(int argc, char **argv)
         decision_tree(0x00, 0x100, "SLT", pci_hotplug_notify);
     } else {
-        push_block("Method", "_L01");
+        push_block("Method", "_E01");
         for (slot = 1; slot <= 31; slot++) {
             push_block("If", "And(PCIU, ShiftLeft(1, %i))", slot);
             stmt("Notify", "\\_SB.PCI0.S%i, 1", slot);
Anthony PERARD
