[PATCH] x86/irq: Skip unmap_domain_pirq XSM during destruction
xsm_unmap_domain_irq was seen denying unmap_domain_pirq when called from
complete_domain_destroy as an RCU callback.  The source context was an
unexpected, random domain.  Since this is a Xen-internal operation, we
don't want the XSM hook denying it.

Check d->is_dying and skip the XSM check when the domain is dead.  The
RCU callback runs while the domain is in that state.
Suggested-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Signed-off-by: Jason Andryuk <jandryuk@xxxxxxxxx>
---
Dan wants to change current to point at the idle domain (DOMID_IDLE)
when the RCU callback runs.  I think Juergen's commit 53594c7bd197
("rcu: don't use stop_machine_run() for rcu_barrier()") may have changed
this, since it mentions that stop_machine_run() scheduled the idle vcpus
to run the callbacks in the old code.
Would that be as easy as changing rcu_do_batch() to do:

+        /* Run as "Xen" not a random domain's vcpu. */
+        vcpu = get_current();
+        set_current(idle_vcpu[smp_processor_id()]);
         list->func(list);
+        set_current(vcpu);

or is using set_current() only acceptable as part of context_switch?
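
In case a fuller picture helps, here is a rough sketch of the shape I
have in mind, not a tested change.  The wrapper name run_rcu_callback()
and its placement are mine; only get_current()/set_current(),
idle_vcpu[] and smp_processor_id() are existing interfaces.

/*
 * Sketch only: run an RCU callback with current pointing at this CPU's
 * idle vcpu, i.e. as "Xen", and restore the interrupted vcpu afterwards.
 */
static void run_rcu_callback(struct rcu_head *head)
{
    struct vcpu *saved = get_current();

    set_current(idle_vcpu[smp_processor_id()]);
    head->func(head);
    set_current(saved);
}

rcu_do_batch() would then call run_rcu_callback(list) instead of calling
list->func(list) directly, assuming none of the callbacks rely on
current still pointing at the interrupted vcpu.
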
xen/arch/x86/irq.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 285ac399fb..16488d287c 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2340,10 +2340,14 @@ int unmap_domain_pirq(struct domain *d, int pirq)
             nr = msi_desc->msi.nvec;
     }
 
-    ret = xsm_unmap_domain_irq(XSM_HOOK, d, irq,
-                               msi_desc ? msi_desc->dev : NULL);
-    if ( ret )
-        goto done;
+    /* When called by complete_domain_destroy via RCU, current is a random
+     * domain.  Skip the XSM check since this is a Xen-initiated action. */
+    if ( d->is_dying != DOMDYING_dead ) {
+        ret = xsm_unmap_domain_irq(XSM_HOOK, d, irq,
+                                   msi_desc ? msi_desc->dev : NULL);
+        if ( ret )
+            goto done;
+    }
 
     forced_unbind = pirq_guest_force_unbind(d, info);
     if ( forced_unbind )
--
2.35.1