
Re: [Xen-devel] [PATCH 2/2] evtchn/fifo: don't corrupt queues if an old tail is linked



On 22/11/13 12:02, Jan Beulich wrote:
>>>> On 20.11.13 at 18:21, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>> 2. Check for the old_q changing after locking old_q->lock and use
>>    test_and_set_bit(LINKED) to bail early if another CPU linked it (see
>>    below patch).
>>
>> Any opinions on either of these solutions?
> 
> I'd favor 2, but ...
> 
>> --- a/xen/common/event_fifo.c        Tue Nov 19 11:06:54 2013 +0000
>> +++ b/xen/common/event_fifo.c        Wed Nov 20 16:41:32 2013 +0000
>> @@ -34,6 +34,30 @@ static inline event_word_t *evtchn_fifo_
>>      return d->evtchn_fifo->event_array[p] + w;
>>  }
>>  
>> +static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
>> +                                                struct evtchn *evtchn,
>> +                                                unsigned long *flags)
>> +{
>> +    struct vcpu *v;
>> +    struct evtchn_fifo_queue *q, *old_q;
>> +
>> +    for (;;)
>> +    {
>> +        v = d->vcpu[evtchn->last_vcpu_id];
>> +        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
>> +
>> +        spin_lock_irqsave(&old_q->lock, *flags);
>> +
>> +        v = d->vcpu[evtchn->last_vcpu_id];
>> +        q = &v->evtchn_fifo->queue[evtchn->last_priority];
>> +
>> +        if ( old_q == q )
>> +            return old_q;
>> +
>> +        spin_unlock_irqrestore(&old_q->lock, *flags);
>> +    }
> 
> ... is there a guaranteed upper bound to this loop?

No :(  But the only attack I could think of seems highly implausible.

A malicious guest A with two co-operating guests (B and C) can ping-pong
one of its queue locks (Q) between two VCPUs with repeated EVTCHNOP_send
calls on two interdomain event channels bound to A.  The two senders need
to be in different domains, otherwise there is a window where Q will not
be locked.  If the time spent while holding Q is less than the time spent
in the hypercall while not holding the lock, then the guest will need
more co-operating guests to keep Q constantly locked.
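
For concreteness, each co-operating sender only needs to spin on
EVTCHNOP_send.  Roughly the following guest-side loop (a sketch only:
the usual HYPERVISOR_event_channel_op() wrapper is assumed, and the
port is illustrative):

    /* Co-operating guest (B or C): hammer EVTCHNOP_send on an
     * interdomain event channel bound to guest A, so that A's queue
     * lock Q is taken and released as often as possible. */
    static void spam_send(evtchn_port_t port)
    {
        struct evtchn_send send = { .port = port };

        for ( ; ; )
            HYPERVISOR_event_channel_op(EVTCHNOP_send, &send);
    }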

If Guest A then has another two co-operating guests (D and E), it can
arrange for them to ping-pong another queue lock (R) between two VCPUs.

Guest A can also repeatedly change the priority of these four events.
With careful timing it will be able to change the priority such that
every send call moves the event between the two queues.
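
Flipping the priority is just another hypercall in a loop; something
like this, assuming the FIFO ABI's EVTCHNOP_set_priority (port and
priority values purely illustrative):

    /* Guest A: alternate an event's priority between two values so
     * that successive sends link the event onto different queues and
     * therefore take different queue locks. */
    static void flip_priority(evtchn_port_t port)
    {
        struct evtchn_set_priority sp = { .port = port, .priority = 0 };

        for ( ; ; )
        {
            HYPERVISOR_event_channel_op(EVTCHNOP_set_priority, &sp);
            sp.priority ^= 1;
        }
    }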

Guest A must also immediately clear any LINKED bit to prevent the unmask
calls from taking the 'already LINKED' fast path in
evtchn_fifo_set_pending().  This is trivial to do by just repeatedly
writing 0 to the relevant event words.
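
That is, roughly the following, where event_array stands for the
guest's mapping of its event array pages and the two ports are
illustrative:

    /* Guest A: keep the relevant event words all-zero so the LINKED
     * bit is never seen set by evtchn_fifo_set_pending(). */
    static void scrub_event_words(volatile event_word_t *event_array,
                                  evtchn_port_t p1, evtchn_port_t p2)
    {
        for ( ; ; )
        {
            event_array[p1] = 0;
            event_array[p2] = 0;
        }
    }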

Guest V (the victim) then attempts to acquire the old queue lock Q.  If
it manages to lock it, it will now be the wrong lock and it must try to
acquire R.  If it manages to acquire R, that will again be the wrong
lock, and so on.

There might be an easier attack but I couldn't see it.

Do you think this is a real problem that should be resolved?

> Apart from that - what does this mean for the 2/2 patch you reply
> to here? Apply it or wait (I assume the latter)? If wait, is 1/2 still
> fine to apply?

Please apply 1/2 and wait for a revised 2/2.
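
For reference, the caller side of option 2 would be shaped roughly like
this (a sketch only, not the revised patch; 'word' is the event word
for the port being sent, and the rest of the PENDING/MASKED handling in
evtchn_fifo_set_pending() is elided):

    /* Lock whatever queue the event was last linked on, then use
     * test_and_set_bit() so we bail out early if another CPU has
     * already linked the event. */
    old_q = lock_old_queue(d, evtchn, &flags);

    if ( test_and_set_bit(EVTCHN_FIFO_LINKED, word) )
    {
        spin_unlock_irqrestore(&old_q->lock, flags);
        return;
    }

    /* ... otherwise go on to link the event onto its (possibly new)
     * queue, dropping old_q->lock once it is safe to do so ... */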

David
