Re: [PATCH v2 1/8] evtchn: avoid race in get_xen_consumer()
On Tue, Oct 20, 2020 at 04:08:13PM +0200, Jan Beulich wrote:
> There's no global lock around the updating of this global piece of data.
> Make use of cmpxchgptr() to avoid two entities racing with their
> updates.
>
> While touching the functionality, mark xen_consumers[] read-mostly (or
> else the if() condition could use the result of cmpxchgptr(), writing to
> the slot unconditionally).

I'm not sure I get this; it's likely related to the comment I have below.

> The use of cmpxchgptr() here points out (by way of clang warning about
> it) that its original use of const was slightly wrong. Adjust the
> placement, or else undefined behavior of const qualifying a function
> type will result.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>
> ---
> v2: Use (and hence generalize) cmpxchgptr(). Add comment. Expand /
>     adjust description.
>
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -57,7 +57,8 @@
>   * with a pointer, we stash them dynamically in a small lookup array which
>   * can be indexed by a small integer.
>   */
> -static xen_event_channel_notification_t xen_consumers[NR_XEN_CONSUMERS];
> +static xen_event_channel_notification_t __read_mostly
> +    xen_consumers[NR_XEN_CONSUMERS];
>
>  /* Default notification action: wake up from wait_on_xen_event_channel(). */
>  static void default_xen_notification_fn(struct vcpu *v, unsigned int port)
> @@ -80,8 +81,9 @@ static uint8_t get_xen_consumer(xen_even
>
>      for ( i = 0; i < ARRAY_SIZE(xen_consumers); i++ )
>      {
> +        /* Use cmpxchgptr() in lieu of a global lock. */
>          if ( xen_consumers[i] == NULL )
> -            xen_consumers[i] = fn;
> +            cmpxchgptr(&xen_consumers[i], NULL, fn);
>          if ( xen_consumers[i] == fn )
>              break;

I think you could join it as:

    if ( !xen_consumers[i] && !cmpxchgptr(&xen_consumers[i], NULL, fn) )
        break;

since cmpxchgptr() will return the previous value stored at &xen_consumers[i]?

Thanks, Roger.
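[Editorial note: below is a minimal, standalone sketch of the slot-claiming
pattern discussed above, outside the Xen tree.  It uses C11 atomics in place
of Xen's cmpxchgptr(), and the names claim_slot(), consumers[],
notification_fn_t and NR_SLOTS are invented for illustration only.]

    /*
     * Standalone illustration (not Xen code) of claiming a free slot in a
     * small table without a global lock, relying on compare-and-exchange
     * returning the slot's previous contents.
     */
    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef void (*notification_fn_t)(unsigned int port);

    #define NR_SLOTS 8
    static _Atomic(notification_fn_t) consumers[NR_SLOTS];

    /* Return the slot index claimed for fn, or -1 if the table is full. */
    static int claim_slot(notification_fn_t fn)
    {
        for ( size_t i = 0; i < NR_SLOTS; i++ )
        {
            notification_fn_t expected = NULL;

            /*
             * Try to install fn into an empty slot.  On failure, "expected"
             * is updated to the slot's previous contents, mirroring how
             * cmpxchgptr() returns the old value: if that value is already
             * fn, the slot belongs to this consumer and can be reused.
             */
            if ( atomic_compare_exchange_strong(&consumers[i], &expected, fn) ||
                 expected == fn )
                return (int)i;
        }
        return -1;
    }

    static void demo_fn(unsigned int port) { (void)port; }

    int main(void)
    {
        printf("first claim:  %d\n", claim_slot(demo_fn));  /* expect 0 */
        printf("second claim: %d\n", claim_slot(demo_fn));  /* same slot, 0 */
        return 0;
    }

Even if two callers race on the same empty slot, exactly one
compare-and-exchange succeeds; the loser either sees its own function already
installed (and reuses the slot) or moves on to the next index, which is the
property the patch relies on.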