
Re: [Xen-devel] [PATCH] x86/HVM: avoid pointer wraparound in bufioreq handling



On 16/06/15 07:40, Jan Beulich wrote:
>>>> On 15.06.15 at 17:55, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 15/06/15 15:30, Jan Beulich wrote:
>>> +    /* Canonicalize read/write pointers to prevent their overflow. */
>>> +    while ( s->bufioreq_atomic &&
>>> +            pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM )
>>> +    {
>>> +        union bufioreq_pointers old = pg->ptrs, new;
>>> +        unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM;
>>> +
>>> +        new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
>>> +        new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
>>> +        cmpxchg(&pg->ptrs.full, old.full, new.full);
>> This gives a misbehaving emulator the ability to livelock Xen by
>> playing with the pointers: the page is writable by the emulator, so
>> it can keep the loop condition true while defeating the cmpxchg().
>>
>> I think you need to break and kill the ioreq server if the read pointer
>> is ever observed going backwards, or overtaking the write pointer.  It
>> is however legitimate to observe the read pointer stepping forwards one
>> entry at a time, as processing is occurring.
> Watching for the pointer to step backwards isn't nice; what I
> would do instead is to limit the loop count here to
> IOREQ_BUFFER_SLOT_NUM (on the basis that we're not
> creating new entries, and hence the reader can't legitimately
> update the pointer more than that number of times); for
> simplicity's sake I wouldn't try to limit the loop further (e.g. to
> write_pointer - read_pointer iterations).

That seems like a reasonable compromise.  511 (IOREQ_BUFFER_SLOT_NUM)
cmpxchg()s isn't over the top in terms of time.

~Andrew
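
For reference, a minimal sketch of the bounded loop Jan describes,
derived from the hunk quoted above (hypothetical, not the actual
follow-up patch; s, pg, union bufioreq_pointers, cmpxchg() and
IOREQ_BUFFER_SLOT_NUM are as in the quoted code):

    unsigned int count = 0;

    /* Canonicalize read/write pointers to prevent their overflow. */
    while ( s->bufioreq_atomic &&
            count++ < IOREQ_BUFFER_SLOT_NUM &&
            pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM )
    {
        union bufioreq_pointers old = pg->ptrs, new;
        unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM;

        /* Rebase both pointers by whole multiples of the ring size;
         * their relative offset is preserved. */
        new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
        new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;

        /* A failed cmpxchg() means the emulator changed the pointers
         * under our feet; the iteration cap above bounds the damage a
         * hostile emulator can do to at most 511 retries. */
        cmpxchg(&pg->ptrs.full, old.full, new.full);
    }

Capping by iteration count rather than validating pointer movement
keeps the logic simple, at the cost of up to 511 wasted cmpxchg()s
against a hostile emulator.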
