
Re: [PATCH 6/6] Under conditions of high load, it was possible for xenvif to queue up very large numbers of packets before pushing them all upstream in one go. This was partially ameliorated by use of the NDIS_RECEIVE_FLAGS_RESOURCES flag.



On 23/07/2021 17:59, Martin Harvey wrote:
[snip]

>>> Let me know if you're happy with this approach.
>>
>> Not really. How about the single DPC idea?

> Humm. So in a single DPC, we'd go from top to bottom:
>
> - Indicate as much out of packetcomplete as we could to NDIS (but we're
>   inventing a fixed limit here)

Well, we could do away with low resources and have NDIS take everything we can throw at it, but some upper bound seems reasonable, and then it's an upper bound in one place.
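
Something like the following is all I have in mind (a sketch only; RX_INDICATE_LIMIT, RECEIVER and CompleteList are made-up names, not the actual xenvif structures):

/* Sketch only: hypothetical names, not the real xenvif code. */
#include <ndis.h>

#define RX_INDICATE_LIMIT 64            /* the single upper bound */

typedef struct _RECEIVER {
    NDIS_HANDLE      NdisHandle;
    PNET_BUFFER_LIST CompleteList;      /* NBLs awaiting indication */
} RECEIVER, *PRECEIVER;

static VOID
ReceiverIndicate(PRECEIVER Receiver)
{
    PNET_BUFFER_LIST head = Receiver->CompleteList;
    PNET_BUFFER_LIST *tail = &head;
    ULONG count = 0;

    /* Detach at most RX_INDICATE_LIMIT NBLs from the pending chain;
       anything left over waits for the next pass. */
    while (*tail != NULL && count < RX_INDICATE_LIMIT) {
        tail = &NET_BUFFER_LIST_NEXT_NBL(*tail);
        count++;
    }

    Receiver->CompleteList = *tail;
    *tail = NULL;

    /* No NDIS_RECEIVE_FLAGS_RESOURCES: NDIS owns the NBLs until it
       returns them, but we hand over at most the limit per pass. */
    if (count != 0)
        NdisMIndicateReceiveNetBufferLists(Receiver->NdisHandle,
                                           head,
                                           NDIS_DEFAULT_PORT_NUMBER,
                                           count,
                                           NDIS_RECEIVE_FLAGS_DISPATCH_LEVEL);
}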

> - Move as much from packetqueue to packetcomplete as we could (inventing
>   another fixed limit)
> - Move from ring to packetqueue (same limit as packetcomplete above)

No, with a single DPC we don't have separate packet queue and packet complete; that's the point.
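
To illustrate the shape (again just a sketch, reusing the made-up RECEIVER and RX_INDICATE_LIMIT above; ReceiverRingPoll stands in for whatever pulls the next completed packet off the shared ring):

/* Sketch: one DPC, straight from ring to NDIS, nothing in between. */
static PNET_BUFFER_LIST ReceiverRingPoll(PRECEIVER Receiver); /* hypothetical */

static VOID
ReceiverSingleDpc(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
{
    PRECEIVER        Receiver = Context;
    PNET_BUFFER_LIST head = NULL;
    PNET_BUFFER_LIST *tail = &head;
    ULONG            count = 0;

    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);

    /* Chain NBLs straight off the shared ring; there is no packetqueue
       or packetcomplete to size, and hence nothing to overflow. */
    while (count < RX_INDICATE_LIMIT) {
        PNET_BUFFER_LIST Nbl = ReceiverRingPoll(Receiver);

        if (Nbl == NULL)
            break;

        *tail = Nbl;
        tail = &NET_BUFFER_LIST_NEXT_NBL(Nbl);
        count++;
    }

    if (count != 0)
        NdisMIndicateReceiveNetBufferLists(Receiver->NdisHandle, head,
                                           NDIS_DEFAULT_PORT_NUMBER, count,
                                           NDIS_RECEIVE_FLAGS_DISPATCH_LEVEL);
}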

> - And if ring overflows, tough cookies.


How does a ring overflow? If there's no space in the ring then netback has nowhere to put stuff, so it applies back-pressure to its queueing discipline, which is pfifo by default but could be something that doesn't tail-drop.
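
Crudely modelled (plain C; not the actual netback code, just the idea):

/* Model of why a fixed-size ring can't overflow: the producer
   (netback) checks for a free slot and, finding none, leaves the
   packet on its qdisc, which is where any drop or back-pressure
   policy lives. */
#include <stdbool.h>

#define RING_SIZE 256                   /* fixed, power of two */

struct ring {
    unsigned int prod;                  /* producer index, free-running */
    unsigned int cons;                  /* consumer index, free-running */
    void        *slot[RING_SIZE];
};

static bool ring_put(struct ring *r, void *pkt)
{
    if (r->prod - r->cons == RING_SIZE)
        return false;                   /* full: back-pressure, not overflow */

    r->slot[r->prod++ % RING_SIZE] = pkt;
    return true;
}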

> I just think that I'd far prefer to rate-limit per DPC unless/until NDIS
> provided some useful mechanism for telling us how to size rings / buffers /
> the amount of stuff you can reasonably have outstanding up the stack (most
> of which gets processed at dispatch anyway).

> The above notwithstanding, we have 6 months of testing with this approach,
> and it seems to work quite nicely. I can do some degree of rework, but this
> then invalidates all our performance results, and I'll have to resubmit you
> a rather different patch in 3-6 months' time.
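
For concreteness, I read "rate-limit per DPC" as something like this sketch (reusing the RECEIVER above; ReceiverProcessOnePacket and ReceiverWorkPending are stand-ins, not real xenvif functions):

/* Sketch: per-DPC rate limit. When the budget runs out with work still
   pending, re-queue the DPC rather than looping, so we never sit at
   DISPATCH_LEVEL indefinitely. */
#define RX_DPC_BUDGET 64

static BOOLEAN ReceiverProcessOnePacket(PRECEIVER Receiver);  /* hypothetical */
static BOOLEAN ReceiverWorkPending(PRECEIVER Receiver);       /* hypothetical */

static VOID
ReceiverDpc(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
{
    PRECEIVER Receiver = Context;
    ULONG     budget = RX_DPC_BUDGET;

    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);

    while (budget != 0 && ReceiverProcessOnePacket(Receiver))
        budget--;

    if (budget == 0 && ReceiverWorkPending(Receiver))
        KeInsertQueueDpc(Dpc, NULL, NULL);  /* come back for the rest */
}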


I would very much prefer that we don't band-aid the two-DPC approach if it is not doing the right thing.

  Paul



 

