
Re: netback behaviour (mirage under xen)



Adding to what Dave said, another way to trigger this is to load the system with a large TCP transfer across a fat pipe.  If TCP reaches the point where some packet loss starts to occur (possibly because the TX ring is full), then the segments retransmitted by TCP's fast retransmit path look like what Dave described: the header is fine but the payload is all zeros.  Such a segment is dropped by the receiving TCP (bad checksum) and the whole transfer grinds to a halt until the retransmission timer kicks in.  Interestingly, when that same packet is retransmitted by the timer (by which point the load is back down to zero), the segment is fine and the transfer starts up again.

Thanks,

Balraj


On Thu, Jan 10, 2013 at 4:31 PM, David Scott <scott.dj@xxxxxxxxx> wrote:
Hi,

Balraj and I are seeing some strange network transmit behaviour which is either a bug in our frontend or in the Linux backend -- does anyone know which is more likely? :-)

Most of our transmitted packets are split into two fragments: one containing the headers and the other containing the application data. When we push the fragments onto the netfront transmit ring, we set the size of the first fragment's request to the whole packet length and set the "more data" bit on it. The vast majority of the time this works fine, so I'm confident we've written the requests properly.
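
For concreteness, here is a minimal sketch (in C, following the request layout in Xen's public io/netif.h; the grant references, offsets and lengths are placeholder values, not our real ones) of the two requests we write for a split packet:

/* Sketch of describing one packet as two TX requests, following the Xen
 * netif protocol (xen/include/public/io/netif.h).  Grant refs, offsets
 * and sizes below are made-up example values. */
#include <stdint.h>

typedef uint32_t grant_ref_t;

struct netif_tx_request {
    grant_ref_t gref;   /* grant reference of the page holding the data  */
    uint16_t offset;    /* offset of the data within the granted page    */
    uint16_t flags;     /* NETTXF_* flags                                */
    uint16_t id;        /* echoed back in the corresponding response     */
    uint16_t size;      /* total packet size on the first request,
                           fragment size on the others                   */
};

#define NETTXF_more_data (1U << 2)  /* more requests follow for this packet */

static void queue_two_fragment_packet(struct netif_tx_request *slot0,
                                      struct netif_tx_request *slot1,
                                      grant_ref_t hdr_gref, uint16_t hdr_len,
                                      grant_ref_t data_gref, uint16_t data_len)
{
    /* First fragment: protocol headers.  size is the WHOLE packet length
     * and "more data" tells the backend that another fragment follows. */
    slot0->gref   = hdr_gref;
    slot0->offset = 0;
    slot0->flags  = NETTXF_more_data;
    slot0->id     = 0;
    slot0->size   = hdr_len + data_len;

    /* Second (final) fragment: application data.  size is just this
     * fragment's length and "more data" is clear. */
    slot1->gref   = data_gref;
    slot1->offset = 0;
    slot1->flags  = 0;
    slot1->id     = 1;
    slot1->size   = data_len;
}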

However, if I put a "sleep 1" between writing the two fragments then netback in dom0 will (from my PoV) prematurely transmit the packet with only the first fragment. The packet is the full requested size but there are zeroes where the second fragment's data should be. The second fragment is then transmitted afterwards as its own packet, and is obviously dropped pretty quickly because it has application data where the Ethernet/IP/... headers should be.

The only thing I can see that we might be doing wrong is that we're updating the shared request pointer per fragment rather than per packet. This allows the backend to see the initial fragment by itself if it has time to look (hence the "sleep 1" triggering the problem). However, given that the packet is clearly incomplete, I'm surprised the backend is prepared to transmit it anyway.

I suspect the workaround will be to update the shared request pointer once per packet rather than per fragment; a rough sketch of what I mean follows.
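
To make that concrete, here is a minimal sketch of the workaround, assuming the standard shared-ring macros from Xen's public io/ring.h; struct fragment and fill_tx_request() are made-up helpers for illustration, not our actual code:

/* Sketch: write all of a packet's fragments via the private producer
 * index, then publish them to the backend in one go.  Assumes the Xen
 * ring macros (xen/include/public/io/ring.h) and the netif TX ring
 * types (xen/include/public/io/netif.h) are in scope. */

struct fragment {            /* made-up descriptor for one fragment */
    grant_ref_t gref;
    uint16_t offset;
    uint16_t len;
};

/* Hypothetical helper: fills in gref/offset/flags/id/size for one
 * fragment, putting the total packet length and NETTXF_more_data on
 * the first request, as before. */
static void fill_tx_request(struct netif_tx_request *req,
                            const struct fragment *frag,
                            int is_first, int is_last, uint16_t total_len);

static int tx_packet(struct netif_tx_front_ring *tx,
                     const struct fragment *frags, unsigned int nr_frags,
                     uint16_t total_len)
{
    unsigned int i;
    int notify;

    for (i = 0; i < nr_frags; i++) {
        /* Writing via req_prod_pvt does not advance the shared req_prod,
         * so the backend cannot yet see a half-written packet. */
        struct netif_tx_request *req =
            RING_GET_REQUEST(tx, tx->req_prod_pvt);
        fill_tx_request(req, &frags[i], i == 0, i == nr_frags - 1, total_len);
        tx->req_prod_pvt++;
    }

    /* Publish the whole packet at once: this copies req_prod_pvt to the
     * shared req_prod (with a write barrier) and says whether the backend
     * needs an event-channel kick. */
    RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(tx, notify);
    return notify;
}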

Anyone have any thoughts?

Cheers,
--
Dave Scott

