Re: [Xen-devel] [PATCH] x86/HVM: avoid pointer wraparound in bufioreq handling



On 15/06/15 15:30, Jan Beulich wrote:
> The number of slots per page being 511 (i.e. not a power of two) means
> that the (32-bit) read and write indexes going beyond 2^32 will likely
> disturb operation. Extend I/O req server creation so the caller can
> indicate that it is using suitable atomic accesses where needed (not
> all accesses to the two pointers really need to be atomic), allowing
> the hypervisor to atomically canonicalize both pointers when both have
> gone through at least one cycle.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Oh dear.  How did we end up with a circular buffer of non-power-of-two
size?
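
To spell out why the non-power-of-two size bites: with 511 slots the
slot index computed from a free-running 32-bit pointer is not preserved
across the 2^32 overflow, since 2^32 is not a multiple of 511.  A
minimal standalone sketch (IOREQ_BUFFER_SLOT_NUM is from the patch;
everything else is illustrative):

#include <stdint.h>
#include <stdio.h>

#define IOREQ_BUFFER_SLOT_NUM 511  /* not a power of two */

int main(void)
{
    uint32_t wp = UINT32_MAX;  /* write pointer just before the 2^32 wrap */

    /* Last slot written before the pointer overflows... */
    printf("slot before wrap: %u\n", wp % IOREQ_BUFFER_SLOT_NUM);
    /* ...and the slot the wrapped pointer selects next. */
    printf("slot after wrap:  %u\n",
           (uint32_t)(wp + 1) % IOREQ_BUFFER_SLOT_NUM);
    /* Prints 31, then 0: the consumer expects the producer to continue
     * at slot 32, so the two sides disagree once the pointer wraps. */
    return 0;
}

With a power-of-two slot count the modulo would depend only on the low
bits of the pointer, so the 32-bit wrap would be harmless and no
canonicalization would be needed.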

> ---
> TBD: Do we need to be worried about non-libxc users of the changed
>      (tools only) interface?
>      Do we also need a way for default servers to flag atomicity?

It should only be qemu-trad using the default server these days, but
this issue probably does want fixing there as well.

> @@ -2568,17 +2575,29 @@ int hvm_buffered_io_send(ioreq_t *p)
>          return 0;
>      }
>  
> -    pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
> +    pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
>  
>      if ( qw )
>      {
>          bp.data = p->data >> 32;
> -        pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp;
> +        pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp;
>      }
>  
>      /* Make the ioreq_t visible /before/ write_pointer. */
>      wmb();
> -    pg->write_pointer += qw ? 2 : 1;
> +    pg->ptrs.write_pointer += qw ? 2 : 1;
> +
> +    /* Canonicalize read/write pointers to prevent their overflow. */
> +    while ( s->bufioreq_atomic &&
> +            pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM )
> +    {
> +        union bufioreq_pointers old = pg->ptrs, new;
> +        unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM;
> +
> +        new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
> +        new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
> +        cmpxchg(&pg->ptrs.full, old.full, new.full);

This opens up the possibility for a misbehaving emulator to livelock
Xen by playing with the pointers: as long as it keeps republishing an
out-of-range read pointer, the while loop above never terminates.

I think you need to break and kill the ioreq server if the read pointer
is ever observed going backwards, or overtaking the write pointer.  It
is however legitimate to observe the read pointer stepping forwards one
entry at a time, as processing is occurring.
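
Something along these lines would do, I think (a sketch only; the union
layout matches your patch, but the helper name, the last_read_pointer
bookkeeping and the calling convention are all illustrative):

#include <stdbool.h>
#include <stdint.h>

#define IOREQ_BUFFER_SLOT_NUM 511

/* Stand-in for the shared pointer pair introduced by the patch. */
union bufioreq_pointers {
    struct { uint32_t read_pointer, write_pointer; };
    uint64_t full;
};

/*
 * Returns false on a state which only a misbehaving emulator can
 * produce; the caller would then tear down the ioreq server instead
 * of retrying the cmpxchg (and risking the livelock noted above).
 * last_read_pointer would be new per-server bookkeeping, re-based
 * whenever the hypervisor itself canonicalizes the pointers.
 */
static bool bufioreq_pointers_sane(union bufioreq_pointers ptrs,
                                   uint32_t last_read_pointer)
{
    /* The read pointer must never move backwards... */
    if ( (int32_t)(ptrs.read_pointer - last_read_pointer) < 0 )
        return false;

    /* ...nor overtake the write pointer; the unsigned distance also
     * catches read > write, which shows up as a huge value here. */
    if ( ptrs.write_pointer - ptrs.read_pointer > IOREQ_BUFFER_SLOT_NUM )
        return false;

    return true;
}

Both checks still tolerate the legitimate case of the read pointer
catching up one entry at a time.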

~Andrew
