
Re: [PATCH 03/12] evtchn: don't call Xen consumer callback with per-channel lock held



Hi Jan,

On 28/09/2020 11:57, Jan Beulich wrote:
While there don't look to be any problems with this right now, the lock
order implications from holding the lock can be very difficult to follow
(and may be easy to violate unknowingly).

I think this is a good idea, given that the lock is now held with interrupts disabled. Unfortunately...

The present callbacks don't
(and no such callback should) have any need for the lock to be held.

... I think the lock is necessary for the vm_event subsystem to avoid racing with vm_event_disable().

The notification callback uses a data structure that vm_event_disable() frees. There is a lock protecting it, but that lock is itself part of the data structure, so the callback cannot use it to synchronise against the free...

One solution would be to move the lock outside of the data structure, so that it outlives the free.
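
Something along those lines (rough sketch only; the names here, such as d->events, struct domain_events and process_requests(), are made up for illustration and are not the actual vm_event layout):

struct domain_events {
    spinlock_t lock;                /* lives outside the structure that gets freed */
    struct vm_event_domain *ved;    /* allocated on enable, freed on disable */
};

static void event_notification(struct vcpu *v, unsigned int port)
{
    struct domain_events *de = &v->domain->events;

    spin_lock(&de->lock);
    if ( de->ved )                  /* teardown may already have happened */
        process_requests(v->domain, de->ved);
    spin_unlock(&de->lock);
}

static void event_disable(struct domain *d)
{
    struct domain_events *de = &d->events;
    struct vm_event_domain *ved;

    spin_lock(&de->lock);
    ved = de->ved;
    de->ved = NULL;                 /* the callback now sees it as gone */
    spin_unlock(&de->lock);

    xfree(ved);                     /* safe: no callback can still be using it */
}

With the lock living somewhere that outlives the freed structure, the callback no longer relies on the per-channel lock being held across it.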


Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -746,9 +746,18 @@ int evtchn_send(struct domain *ld, unsig
          rport = lchn->u.interdomain.remote_port;
          rchn  = evtchn_from_port(rd, rport);
          if ( consumer_is_xen(rchn) )
-            xen_notification_fn(rchn)(rd->vcpu[rchn->notify_vcpu_id], rport);
-        else
-            evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
+        {
+            /* Don't keep holding the lock for the call below. */
+            xen_event_channel_notification_t fn = xen_notification_fn(rchn);
+            struct vcpu *rv = rd->vcpu[rchn->notify_vcpu_id];
+
+            rcu_lock_domain(rd);
+            spin_unlock_irqrestore(&lchn->lock, flags);
+            fn(rv, rport);
+            rcu_unlock_domain(rd);
+            return 0;
+        }
+        evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
          break;
      case ECS_IPI:
          evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);


Cheers,

--
Julien Grall



 

