
Re: [Xen-devel] [PATCH] evtchn/fifo: don't corrupt queues if an old tail is linked

On 09/12/13 12:21, Jan Beulich wrote:
>>>> On 09.12.13 at 12:49, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>> On 09/12/13 09:32, Jan Beulich wrote:
>>>>>> On 06.12.13 at 18:38, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>>>> --- a/xen/include/xen/sched.h
>>>> +++ b/xen/include/xen/sched.h
>>>> @@ -98,6 +98,8 @@ struct evtchn
>>>>      } u;
>>>>      u8 priority;
>>>>      u8 pending:1;
>>>> +    u16 last_vcpu_id;
>>>> +    u8 last_priority;
>>> Is it really correct for these two new fields to remain uninitialized
>>> until evtchn_fifo_set_pending() would get run the first time (and
>>> hence thinking there was a move this first time through)?
>> They're initialized to zero and I think this is fine. The code as-is is
>> simpler than having to special case events that have never been on a queue.
> I'm not asking to add a special case, I'm only asking to initialize all
> fields correctly. Just like you ought to set up ->priority, you
> likely ought to set up the two new fields.

It's not clear to me why you think they're uninitialized.  They're
initialized to zero when the evtchn is allocated, and they must then only
be updated in evtchn_fifo_set_pending() when the event moves to a new queue.

Do you think they should be initialized when an event is (re)bound?
Because this would be broken as an unbound event might be an old tail.

>>> Which also gets me to ask whether it's really correct to only set
>>> the priority to EVTCHN_FIFO_PRIORITY_DEFAULT in setup_ports(),
>>> but not on any subsequently allocated/bound ones?
>> This patch fixes this but would you prefer a new evtchn_port_op hook for
>> the init?
> Not really, and yes please (or perhaps it would be cheaper to have
> struct evtchn_port_ops specify the intended default priority, and
> common code simply copy that field).



Xen-devel mailing list