Re: [Xen-devel] [PATCH 3/3] evtchn/fifo: don't spin indefinitely when setting LINK
On 04/11/13 14:57, Jan Beulich wrote:
>>>> On 04.11.13 at 15:52, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>> On 04/11/13 14:39, Jan Beulich wrote:
>>>>>> On 31.10.13 at 16:03, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>>>> If the buggy or malicious guest does cause the loop to exit early, the
>>>> newly pending event will be unreachable by the guest and it and
>>>> subsequent events may be lost.
>>>
>>> ... yet here it is not really clear which guest the last "guest" refers
>>> to (i.e. it's fine if the malicious guest harms itself, but the change
>>> would be pointless if the malicious guest could still harm the other
>>> one).
>>
>> The malicious guest loses the event.
>
> So then "that guest" would perhaps be more precise than "the
> guest"?

Yes.  I'll improve the commit message to be clearer.

>>>> @@ -90,6 +92,8 @@ static void evtchn_fifo_set_pending(struct vcpu *v,
>>>> struct evtchn *evtchn)
>>>>      event_word_t *tail_word;
>>>>      bool_t linked = 0;
>>>>
>>>> +    evtchn->q = q;
>>>> +
>>>>      spin_lock_irqsave(&q->lock, flags);
>>>>
>>>>      /*
>>>
>>> I fail to see how this change is related to the rest of the patch.
>>
>> This is needed so the correct queue lock is used in evtchn_fifo_unmask().
>
> I guessed that, but I wasn't able to immediately see why that would
> not have been a requirement before.

An event now has two queues associated with it:

1. The queue it is supposed to be on.  This is found with
   evtchn->notify_vcpu_id and evtchn->priority when an event is being
   linked.

2. The queue the event is on right now (the new evtchn->q).  Until this
   patch we didn't care which queue an event was on, only that it was on
   some queue.

These are not necessarily the same, since an event may be moved between
VCPUs or have its priority changed while it is already on a queue.

Some additional notes on the locking:

The locking in the FIFO-based ABI is designed on the assumption that
every hypervisor operation on an event channel (set pending, unmask) is
done while holding a per-event-channel spin lock.  Currently, the
per-event locking is done with the coarser per-domain event_lock.

The queue lock serializes adding events and clearing MASKED on tail
events.  The event lock also protects evtchn->q and the
evtchn->q->tail == evtchn->port test.

David
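[Editor's sketch, not part of the thread above.]  As a reader's aid, here is a
minimal standalone C model of the two-queue distinction David describes: the
queue an event should be linked onto next is derived from
evtchn->notify_vcpu_id and evtchn->priority, while the queue it is currently
on is recorded in the new evtchn->q field, so that unmask can take the same
per-queue lock that set_pending used and perform the
evtchn->q->tail == evtchn->port test safely.  This is not the actual Xen
code: the type and function names (fifo_queue, target_queue, set_pending,
unmask) are simplified stand-ins, pthread mutexes stand in for the per-queue
spin locks taken with spin_lock_irqsave in the real patch, and the real ABI
links events through event words rather than a single tail field.

/*
 * Standalone model -- NOT the real Xen implementation.  All names are
 * simplified stand-ins; pthread mutexes stand in for per-queue spin locks.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_VCPUS      2
#define NR_PRIORITIES 16

struct fifo_queue {
    pthread_mutex_t lock;         /* stands in for the per-queue spin lock */
    unsigned int tail;            /* port currently at the tail of the queue */
};

struct vcpu {
    struct fifo_queue queues[NR_PRIORITIES];
};

struct evtchn {
    unsigned int port;
    unsigned int notify_vcpu_id;  /* target vCPU for *future* linking */
    unsigned int priority;        /* priority for *future* linking */
    struct fifo_queue *q;         /* queue the event is on *right now* */
};

static struct vcpu vcpus[NR_VCPUS];

/* Queue the event is supposed to go on next (point 1 in the mail). */
static struct fifo_queue *target_queue(struct evtchn *evtchn)
{
    return &vcpus[evtchn->notify_vcpu_id].queues[evtchn->priority];
}

/* Model of set_pending: record which queue the event actually lands on. */
static void set_pending(struct evtchn *evtchn)
{
    struct fifo_queue *q = target_queue(evtchn);

    evtchn->q = q;                /* the event is now "on" this queue */

    pthread_mutex_lock(&q->lock);
    q->tail = evtchn->port;       /* event becomes the tail of this queue */
    pthread_mutex_unlock(&q->lock);
}

/* Model of unmask: lock the queue the event is on, not the target queue. */
static void unmask(struct evtchn *evtchn)
{
    struct fifo_queue *q = evtchn->q;

    if (!q)
        return;                   /* never linked; nothing to serialize with */

    pthread_mutex_lock(&q->lock);
    if (q->tail == evtchn->port)
        printf("port %u is the tail of the queue it is currently on\n",
               evtchn->port);
    pthread_mutex_unlock(&q->lock);
}

int main(void)
{
    struct evtchn e = { .port = 42, .notify_vcpu_id = 1, .priority = 7 };

    for (int v = 0; v < NR_VCPUS; v++)
        for (int p = 0; p < NR_PRIORITIES; p++)
            pthread_mutex_init(&vcpus[v].queues[p].lock, NULL);

    set_pending(&e);

    /* Re-target the event: the queue it *should* go on next changes... */
    e.notify_vcpu_id = 0;
    e.priority = 0;

    /* ...but unmask still takes the lock of the queue it is actually on. */
    unmask(&e);

    return 0;
}

The point of the model is only the ordering: the queue pointer is recorded
before the event is linked, so any later operation on the event can find and
lock the queue it is actually on, even if notify_vcpu_id or priority has been
changed in the meantime.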