Re: [Xen-devel] [PATCH net v2 1/3] xen-netback: remove pointless clause from if statement
From: Sander Eikelenboom

> Friday, March 28, 2014, 11:35:58 AM, you wrote:
>
> > From: Paul Durrant
> >> > A reasonable high estimate for the number of slots required for a
> >> > specific message is 'frag_count + total_size/4096'.
> >> > So if there are that many slots free it is definitely OK to add the
> >> > message.
> >>
> >> Hmm, that may work. By total_size, I assume you mean skb->len, so that
> >> calculation is based on an overhead of 1 non-optimally packed slot per
> >> frag. There'd still need to be a +1 for the GSO 'extra' though.
>
> > Except I meant '2 * frag_count + size/4096' :-(
> >
> > You have to assume that every fragment starts at n*4096-1 (so it needs
> > at least two slots). A third slot is only needed for fragments longer
> > than 1+4096+2 - but an extra one is needed for every 4096 bytes after
> > that.
>
> He did that in his followup patch series .. that works .. for small packets.
> But for larger ones it's an extremely wasteful estimate and it quickly gets
> larger than the MAX_SKB_FRAGS we had before, and even too large, causing
> stalls. I tried doing this type of calculation with a cap of the old
> MAX_SKB_FRAGS calculation and that works.

I'm confused (easily done).

If you are trying to guess at the number of packets to queue waiting for the
thread that sets things up to run, then you want an underestimate, since any
packets that can't actually be transferred will stay on the queue. A suitable
estimate might be max(frag_count, size/4096).

The '2*frag_count + size/4096' is right for checking whether there is enough
space for the current packet, since it gets corrected as soon as the packet
is transferred to the ring slots.

	David
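[Editorial note: a minimal user-space sketch, not part of the thread or the
patch series, of the three slot estimates discussed above, assuming a
4096-byte grant page; the function names are made up for illustration.]

#include <stdio.h>
#include <stddef.h>

#define GRANT_PAGE_SIZE 4096u

/* Optimistic estimate: one slot of packing overhead per fragment. */
static unsigned int slots_optimistic(unsigned int frag_count, size_t len)
{
        return frag_count + (unsigned int)(len / GRANT_PAGE_SIZE);
}

/*
 * Worst-case estimate for admission control: every fragment may start just
 * before a page boundary and therefore span two slots, plus one more slot
 * per further 4096 bytes.  Safe, but very pessimistic for large packets.
 */
static unsigned int slots_worst_case(unsigned int frag_count, size_t len)
{
        return 2 * frag_count + (unsigned int)(len / GRANT_PAGE_SIZE);
}

/*
 * Underestimate suggested above for deciding how many packets to queue:
 * any shortfall is corrected once the packet is copied into ring slots.
 */
static unsigned int slots_queue_guess(unsigned int frag_count, size_t len)
{
        size_t pages = len / GRANT_PAGE_SIZE;
        return frag_count > pages ? frag_count : (unsigned int)pages;
}

int main(void)
{
        /* Example: a 64KB GSO skb with 17 frags (numbers are illustrative). */
        unsigned int frags = 17;
        size_t len = 65536;

        printf("optimistic:  %u\n", slots_optimistic(frags, len));  /* 33 */
        printf("worst case:  %u\n", slots_worst_case(frags, len));  /* 50 */
        printf("queue guess: %u\n", slots_queue_guess(frags, len)); /* 17 */
        return 0;
}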