[xen staging] evtchn: cut short evtchn_reset()'s loop in the common case
commit bb3d31e03771353ed546b0290fb3f4d8f34d962c
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Fri Oct 2 08:37:04 2020 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Oct 2 08:37:04 2020 +0200

    evtchn: cut short evtchn_reset()'s loop in the common case

    The general expectation is that there are only a few open ports left
    when a domain asks its event channel configuration to be reset.
    Similarly on average half a bucket worth of event channels can be
    expected to be inactive. Try to avoid iterating over all channels, by
    utilizing usage data we're maintaining anyway.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Paul Durrant <paul@xxxxxxx>
    Acked-by: Julien Grall <jgrall@xxxxxxxxxx>
---
 xen/common/event_channel.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index bd7894832f..e365b5498f 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -231,7 +231,11 @@ void evtchn_free(struct domain *d, struct evtchn *chn)
     evtchn_port_clear_pending(d, chn);
 
     if ( consumer_is_xen(chn) )
+    {
         write_atomic(&d->xen_evtchns, d->xen_evtchns - 1);
+        /* Decrement ->xen_evtchns /before/ ->active_evtchns. */
+        smp_wmb();
+    }
     write_atomic(&d->active_evtchns, d->active_evtchns - 1);
 
     /* Reset binding to vcpu0 when the channel is freed. */
@@ -1069,6 +1073,19 @@ int evtchn_unmask(unsigned int port)
     return 0;
 }
 
+static bool has_active_evtchns(const struct domain *d)
+{
+    unsigned int xen = read_atomic(&d->xen_evtchns);
+
+    /*
+     * Read ->xen_evtchns /before/ active_evtchns, to prevent
+     * evtchn_reset() exiting its loop early.
+     */
+    smp_rmb();
+
+    return read_atomic(&d->active_evtchns) > xen;
+}
+
 int evtchn_reset(struct domain *d, bool resuming)
 {
     unsigned int i;
@@ -1093,7 +1110,7 @@ int evtchn_reset(struct domain *d, bool resuming)
     if ( !i )
         return -EBUSY;
 
-    for ( ; port_is_valid(d, i); i++ )
+    for ( ; port_is_valid(d, i) && has_active_evtchns(d); i++ )
     {
         evtchn_close(d, i, 1);
 
@@ -1332,6 +1349,10 @@ int alloc_unbound_xen_event_channel(
 
     spin_unlock_irqrestore(&chn->lock, flags);
 
+    /*
+     * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
+     * barrier needed due to spin-locked region just above.
+     */
     write_atomic(&ld->xen_evtchns, ld->xen_evtchns + 1);
 
  out:
--
generated by git-patchbot for /home/xen/git/xen.git#staging
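
For illustration, below is a minimal user-space sketch of the counter/barrier
pairing the patch introduces. It is not Xen code: struct domain is reduced to
the two usage counters, C11 atomics and fences stand in for Xen's
read_atomic()/write_atomic() and smp_rmb()/smp_wmb(), and free_xen_channel()
is a hypothetical stand-in for the relevant part of evtchn_free().

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Reduced model of the two usage counters maintained per domain. */
struct domain {
    atomic_uint active_evtchns;   /* all open ports */
    atomic_uint xen_evtchns;      /* ports bound to Xen-internal consumers */
};

/*
 * Free path, modelled on the evtchn_free() hunk: retire ->xen_evtchns
 * /before/ ->active_evtchns, with a write barrier in between.
 */
static void free_xen_channel(struct domain *d)
{
    atomic_fetch_sub_explicit(&d->xen_evtchns, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);    /* stands in for smp_wmb() */
    atomic_fetch_sub_explicit(&d->active_evtchns, 1, memory_order_relaxed);
}

/*
 * Reader, modelled on has_active_evtchns(): sample ->xen_evtchns
 * /before/ ->active_evtchns, with a read barrier in between.
 */
static bool has_active_evtchns(struct domain *d)
{
    unsigned int xen = atomic_load_explicit(&d->xen_evtchns, memory_order_relaxed);

    atomic_thread_fence(memory_order_acquire);    /* stands in for smp_rmb() */

    return atomic_load_explicit(&d->active_evtchns, memory_order_relaxed) > xen;
}

int main(void)
{
    /* Two guest-bound ports plus one Xen-internal port are open. */
    struct domain d = { .active_evtchns = 3, .xen_evtchns = 1 };

    printf("guest ports open: %s\n", has_active_evtchns(&d) ? "yes" : "no");

    /* Pretend the reset loop closed both guest ports... */
    atomic_fetch_sub_explicit(&d.active_evtchns, 2, memory_order_relaxed);

    /* ...now only the Xen-internal port remains, so the loop may stop. */
    printf("guest ports open: %s\n", has_active_evtchns(&d) ? "yes" : "no");

    /* Freeing the Xen-internal port keeps the predicate false. */
    free_xen_channel(&d);
    printf("guest ports open: %s\n", has_active_evtchns(&d) ? "yes" : "no");

    return 0;
}

The single-threaded demo only exercises the arithmetic; the fences matter when
updates to the counters race with evtchn_reset() on another CPU, where the
patch's comments require the two counters to be updated and sampled in
matching order so the loop is not cut short too early.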