
Re: [Xen-devel] [PATCH 2/2] evtchn/fifo: don't corrupt queues if an old tail is linked



>>> On 20.11.13 at 18:21, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
> 2. Check for the old_q changing after locking old_q->lock and use
>    test_and_set_bit(LINKED) to bail early if another CPU linked it (see
>    below patch).
> 
> Any opinions on either of these solutions?

I'd favor 2, but ...

> --- a/xen/common/event_fifo.c Tue Nov 19 11:06:54 2013 +0000
> +++ b/xen/common/event_fifo.c Wed Nov 20 16:41:32 2013 +0000
> @@ -34,6 +34,30 @@ static inline event_word_t *evtchn_fifo_
>      return d->evtchn_fifo->event_array[p] + w;
>  }
>  
> +static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
> +                                                struct evtchn *evtchn,
> +                                                unsigned long *flags)
> +{
> +    struct vcpu *v;
> +    struct evtchn_fifo_queue *q, *old_q;
> +
> +    for (;;)
> +    {
> +        v = d->vcpu[evtchn->last_vcpu_id];
> +        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
> +
> +        spin_lock_irqsave(&old_q->lock, *flags);
> +
> +        v = d->vcpu[evtchn->last_vcpu_id];
> +        q = &v->evtchn_fifo->queue[evtchn->last_priority];
> +
> +        if ( old_q == q )
> +            return old_q;
> +
> +        spin_unlock_irqrestore(&old_q->lock, *flags);
> +    }

... is there a guaranteed upper bound to this loop?
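For illustration only, a bounded variant might look like the sketch
below; the retry cap of 3 and the NULL fallback (which the caller would
then have to handle somehow) are assumptions of mine, not part of the
posted patch:

    static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
                                                    struct evtchn *evtchn,
                                                    unsigned long *flags)
    {
        struct vcpu *v;
        struct evtchn_fifo_queue *q, *old_q;
        unsigned int try;

        for ( try = 0; try < 3; try++ )
        {
            v = d->vcpu[evtchn->last_vcpu_id];
            old_q = &v->evtchn_fifo->queue[evtchn->last_priority];

            spin_lock_irqsave(&old_q->lock, *flags);

            /* Re-read under the lock: did another CPU move the event? */
            v = d->vcpu[evtchn->last_vcpu_id];
            q = &v->evtchn_fifo->queue[evtchn->last_priority];

            if ( old_q == q )
                return old_q;

            spin_unlock_irqrestore(&old_q->lock, *flags);
        }

        /* Retries exhausted; give up rather than spin indefinitely. */
        return NULL;
    }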

> +}
> +
>  static int try_set_link(event_word_t *word, event_word_t *w, uint32_t link)
>  {
>      event_word_t new, old;
> @@ -127,7 +151,6 @@ static void evtchn_fifo_set_pending(stru
>      if ( !test_bit(EVTCHN_FIFO_MASKED, word)
>           && !test_bit(EVTCHN_FIFO_LINKED, word) )
>      {
> -        struct vcpu *old_v;
>          struct evtchn_fifo_queue *q, *old_q;
>          event_word_t *tail_word;
>          bool_t linked = 0;
> @@ -139,10 +162,13 @@ static void evtchn_fifo_set_pending(stru
>           */
>          q = &v->evtchn_fifo->queue[evtchn->priority];
>  
> -        old_v = d->vcpu[evtchn->last_vcpu_id];
> -        old_q = &old_v->evtchn_fifo->queue[evtchn->last_priority];
> +        old_q = lock_old_queue(d, evtchn, &flags);
>  
> -        spin_lock_irqsave(&old_q->lock, flags);
> +        if ( test_and_set_bit(EVTCHN_FIFO_LINKED, word) )
> +        {
> +            spin_unlock_irqrestore(&old_q->lock, flags);
> +            goto done;
> +        }
>  
>          /*
>           * If this event was a tail, the old queue is now empty and
> @@ -152,12 +178,9 @@ static void evtchn_fifo_set_pending(stru
>          if ( old_q->tail == port )
>              old_q->tail = 0;
>  
> -        set_bit(EVTCHN_FIFO_LINKED, word);
> -
>          /* Moved to a different queue? */
> -        if ( old_q != q)
> +        if ( old_q != q )
>          {
> -
>              evtchn->last_vcpu_id = evtchn->notify_vcpu_id;
>              evtchn->last_priority = evtchn->priority;
>  
> @@ -191,7 +214,7 @@ static void evtchn_fifo_set_pending(stru
>                                    &v->evtchn_fifo->control_block->ready) )
>              vcpu_mark_events_pending(v);
>      }
> -
> +done:

Labels indented by at least one space please.
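I.e. something like:

 done:
    if ( !was_pending )
        evtchn_check_pollers(d, port);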

>      if ( !was_pending )
>          evtchn_check_pollers(d, port);
>  }

Apart from that, what does this mean for the 2/2 patch you're replying
to here? Should it be applied, or should it wait (I assume the latter)?
If it should wait, is 1/2 still fine to apply?

Jan

