

On Wed, Mar 12, 2014 at 03:23:09PM +0000, Paul Durrant wrote:
> > > Actually ancient memory tells me that, unfortunately, netback's
> > > backend->frontend GSO protocol is broken in this way... it requires one
> > > more response slot than the number of requests it consumes (for the
> > > extra metadata), which means that if the frontend keeps the ring full
> > > you can get overflow. It's a bit of a tangent though, because that code
> > > doesn't use this macro (or in fact check the ring has space in any way
> > > IIRC). The prefix variant of the protocol is ok though.
> > 
> > I think it's not: it consumes a request for the metadata, and when the
> > packet is grant copied to the guest, it creates a response for that slot
> > as well.
> As explained verbally, it doesn't consume a request for the 'extra' info.
> Let me elaborate here for the benefit of the list...
> In xenvif_gop_skb(), in the non-prefix GSO case, a single request is
> consumed for the header, along with a meta slot which is used to hold the
> GSO data. Later on, in xenvif_rx_action(), the code calls
> make_rx_response() for the header, but then *before* moving on to the next
> meta slot it makes an 'extra' response for the GSO metadata. So: one meta
> slot, one request consumed, but two responses produced.
> This mechanism therefore relies entirely on the netfront driver never
> completely filling the shared ring. If it ever does, you'll get overflow.

(... which reminds me of the heisenbug Sander is seeing.)

But don't we check that there's enough space in the ring before ...


>   Paul

Xen-devel mailing list