Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with per-channel lock held
Hi Jan,

On 23/11/2020 13:30, Jan Beulich wrote:
> While there don't look to be any problems with this right now, the
> lock order implications from holding the lock can be very difficult
> to follow (and may be easy to violate unknowingly). The present
> callbacks don't (and no such callback should) have any need for the
> lock to be held.
>
> However, vm_event_disable() frees the structures used by respective
> callbacks and isn't otherwise synchronized with invocations of these
> callbacks, so maintain a count of in-progress calls, for
> evtchn_close() to wait to drop to zero before freeing the port (and
> dropping the lock).

AFAICT, this callback is not the only place where the synchronization
is missing in the VM event code. For instance, vm_event_put_request()
can also race against vm_event_disable(). So shouldn't we handle this
issue properly in VM event?

> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> Should we make this accounting optional, to be requested through a new
> parameter to alloc_unbound_xen_event_channel(), or derived from other
> than the default callback being requested?

Aside from the VM event, do you see any value for the other caller?

> ---
> v3: Drain callbacks before proceeding with closing. Re-base.
>
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -397,6 +397,7 @@ static long evtchn_bind_interdomain(evtc
>      rchn->u.interdomain.remote_dom  = ld;
>      rchn->u.interdomain.remote_port = lport;
> +    atomic_set(&rchn->u.interdomain.active_calls, 0);
>      rchn->state = ECS_INTERDOMAIN;
>
>      /*
> @@ -720,6 +721,10 @@ int evtchn_close(struct domain *d1, int
>          double_evtchn_lock(chn1, chn2);
>
> +        if ( consumer_is_xen(chn1) )
> +            while ( atomic_read(&chn1->u.interdomain.active_calls) )
> +                cpu_relax();
> +
>          evtchn_free(d1, chn1);
>          chn2->state = ECS_UNBOUND;
>
> @@ -781,9 +786,15 @@ int evtchn_send(struct domain *ld, unsig
>          rport = lchn->u.interdomain.remote_port;
>          rchn  = evtchn_from_port(rd, rport);
>          if ( consumer_is_xen(rchn) )
> +        {
> +            /* Don't keep holding the lock for the call below. */
> +            atomic_inc(&rchn->u.interdomain.active_calls);
> +            evtchn_read_unlock(lchn);
>              xen_notification_fn(rchn)(rd->vcpu[rchn->notify_vcpu_id], rport);
> -        else
> -            evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);

atomic_dec() doesn't contain any memory barrier, so we will want one
between xen_notification_fn() and atomic_dec() to avoid re-ordering.

> +            atomic_dec(&rchn->u.interdomain.active_calls);
> +            return 0;
> +        }
> +        evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
>          break;
>
>      case ECS_IPI:
>          evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
>
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -104,6 +104,7 @@ struct evtchn
>      } unbound;     /* state == ECS_UNBOUND */
>      struct {
>          evtchn_port_t  remote_port;
> +        atomic_t       active_calls;
>          struct domain *remote_dom;
>      } interdomain; /* state == ECS_INTERDOMAIN */
>      struct {

Cheers,

--
Julien Grall