[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[PATCH v4 4/8] arm/irq: Migrate IRQs from dying CPUs


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Mykyta Poturai <Mykyta_Poturai@xxxxxxxx>
  • Date: Wed, 12 Nov 2025 10:51:48 +0000
  • Cc: Mykyta Poturai <Mykyta_Poturai@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Wed, 12 Nov 2025 10:51:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Move IRQs from a dying CPU to the online ones.
Guest-bound IRQs are already handled by the scheduler in the process of
moving vCPUs to active pCPUs, so we only need to handle IRQs used by Xen
itself.

If an IRQ is to be migrated, its affinity is set to the mask of all
online CPUs. With the current GIC implementation, this means it is
routed to a random online CPU. This may cause extra moves if multiple
cores are disabled in sequence, but it prevents all interrupts from
piling up on CPU0 in case of repeated up-down cycles on different cores.

IRQs are never migrated away from CPU 0, as CPU 0 dying means we are
either shutting down completely or entering system suspend.

All Xen-used IRQs are currently allocated during init on CPU 0, and
setup_irq() uses smp_processor_id() for the initial affinity, so this
change is not strictly required for correct operation yet. However, it
future-proofs CPU hotplug and system suspend support in case some kind
of IRQ balancing is implemented later.

Signed-off-by: Mykyta Poturai <mykyta_poturai@xxxxxxxx>

v3->v4:
* patch introduced
---
 xen/arch/arm/include/asm/irq.h |  2 ++
 xen/arch/arm/irq.c             | 39 ++++++++++++++++++++++++++++++++++
 xen/arch/arm/smpboot.c         |  2 ++
 3 files changed, 43 insertions(+)

diff --git a/xen/arch/arm/include/asm/irq.h b/xen/arch/arm/include/asm/irq.h
index 09788dbfeb..6e6e27bb80 100644
--- a/xen/arch/arm/include/asm/irq.h
+++ b/xen/arch/arm/include/asm/irq.h
@@ -126,6 +126,8 @@ bool irq_type_set_by_domain(const struct domain *d);
 void irq_end_none(struct irq_desc *irq);
 #define irq_end_none irq_end_none
 
+void evacuate_irqs(unsigned int from);
+
 #endif /* _ASM_HW_IRQ_H */
 /*
  * Local variables:
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 28b40331f7..b383d71930 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -158,6 +158,45 @@ static int init_local_irq_data(unsigned int cpu)
     return 0;
 }
 
+static void evacuate_irq(int irq, unsigned int from)
+{
+    struct irq_desc *desc = irq_to_desc(irq);
+    unsigned long flags;
+
+    /* Don't move irqs from CPU 0 as it is always last to be disabled */
+    if ( from == 0 )
+        return;
+
+    ASSERT(!cpumask_empty(&cpu_online_map));
+    ASSERT(!cpumask_test_cpu(from, &cpu_online_map));
+
+    spin_lock_irqsave(&desc->lock, flags);
+    if ( likely(!desc->action) )
+        goto out;
+
+    if ( likely(test_bit(_IRQ_GUEST, &desc->status) ||
+                test_bit(_IRQ_MOVE_PENDING, &desc->status)) )
+        goto out;
+
+    if ( cpumask_test_cpu(from, desc->affinity) )
+        irq_set_affinity(desc, &cpu_online_map);
+
+out:
+    spin_unlock_irqrestore(&desc->lock, flags);
+    return;
+}
+
+void evacuate_irqs(unsigned int from)
+{
+    int irq;
+
+    for ( irq = NR_LOCAL_IRQS; irq < NR_IRQS; irq++ )
+        evacuate_irq(irq, from);
+
+    for ( irq = ESPI_BASE_INTID; irq < ESPI_MAX_INTID; irq++ )
+        evacuate_irq(irq, from);
+}
+
 static int cpu_callback(struct notifier_block *nfb, unsigned long action,
                         void *hcpu)
 {
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 7f3cfa812e..46b24783dd 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -425,6 +425,8 @@ void __cpu_disable(void)
 
     smp_mb();
 
+    evacuate_irqs(cpu);
+
     /* Return to caller; eventually the IPI mechanism will unwind and the 
      * scheduler will drop to the idle loop, which will call stop_cpu(). */
 }
-- 
2.51.2



 

