
[Xen-devel] Re: SKB paged fragment lifecycle on receive


  • To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
  • From: Eric Dumazet <eric.dumazet@xxxxxxxxx>
  • Date: Fri, 24 Jun 2011 19:56:23 +0200
  • Cc: netdev@xxxxxxxxxxxxxxx, Rusty Russell <rusty@xxxxxxxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  • Delivery-date: Fri, 24 Jun 2011 10:57:27 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On Friday 24 June 2011 at 10:29 -0700, Jeremy Fitzhardinge wrote:
> On 06/24/2011 08:43 AM, Ian Campbell wrote:
> > We've previously looked into solutions using the skb destructor callback
> > but that falls over if the skb is cloned since you also need to know
> > when the clone is destroyed. Jeremy Fitzhardinge and I subsequently
> > looked at the possibility of a no-clone skb flag (i.e. always forcing a
> > copy instead of a clone) but IIRC honouring it universally turned into a
> > very twisty maze with a number of nasty corner cases. It also seemed
> > that the proportion of SKBs which get cloned at least once could be
> > quite high, which would presumably make the performance impact of the
> > flag unacceptable. Another issue with using the
> > skb destructor is that functions such as __pskb_pull_tail will eat (and
> > free) pages from the start of the frag array such that by the time the
> > skb destructor is called they are no longer there.
> >
> > AIUI Rusty Russell had previously looked into a per-page destructor in
> > the shinfo but found that it couldn't be made to work (I don't remember
> > why, or if I even knew at the time). Could that be an approach worth
> > reinvestigating?
> >
> > I can't really think of any other solution which doesn't involve some
> > sort of driver callback at the time a page is free()d.
> 
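
A minimal sketch of the clone problem described above, with
hypothetical function names: skb_clone() shares the paged frags through
the common skb_shared_info, but the destructor lives on the sk_buff
itself and is not copied to the clone, so it can fire while the clone
still holds the pages.

#include <linux/skbuff.h>
#include <linux/gfp.h>

static void grant_skb_destructor(struct sk_buff *skb)
{
	/* We would like to reclaim the granted frag pages here... */
}

static void demo_clone_hazard(struct sk_buff *skb)
{
	struct sk_buff *clone;

	skb->destructor = grant_skb_destructor;

	/* skb_clone() shares skb_shinfo(skb)->frags[] with the clone,
	 * but leaves clone->destructor NULL. */
	clone = skb_clone(skb, GFP_ATOMIC);

	kfree_skb(skb);   /* destructor runs now...                  */
	kfree_skb(clone); /* ...but the frag pages are only put here,
			   * when the shared dataref drops to zero.  */
}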

This reminds me of the packet mmap (tx path) games we play with pages.

net/packet/af_packet.c : tpacket_destruct_skb(), poking
TP_STATUS_AVAILABLE back to user space to tell it the ring slot can be
reused...
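
From memory, that destructor looks roughly like this (paraphrased, not
a verbatim copy of net/packet/af_packet.c):

static void tpacket_destruct_skb(struct sk_buff *skb)
{
	struct packet_sock *po = pkt_sk(skb->sk);

	if (likely(po->tx_ring.pg_vec)) {
		void *ph = skb_shinfo(skb)->destructor_arg;

		atomic_dec(&po->tx_ring.pending);
		/* Hand the TX ring frame back to user space. */
		__packet_set_status(po, ph, TP_STATUS_AVAILABLE);
	}
	sock_wfree(skb);
}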

> One simple approach would be to make sure that we retain a page
> reference on any granted pages so that the network stack's put pages
> will never result in them being released back to the kernel.  We can
> also install an skb destructor.  If it sees a page being released with a
> refcount of 1, then we know it's our own reference and can free the page
> immediately.  If the refcount is > 1 then we can add it to a queue of
> pending pages, which can be periodically polled to free pages whose
> other references have been dropped.
> 
> However, the question is how large will this queue get?  If it remains
> small then this scheme could be entirely practical.  But if almost every
> page ends up having transient stray references, it could become very
> awkward.
> 
> So it comes down to "how useful is an skb destructor callback as a
> heuristic for page free"?
> 
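
A rough sketch of that scheme, with hypothetical names
(granted_skb_destructor, pending_pages); it assumes an extra
get_page() was taken when each page was granted, and omits locking and
the periodic poll that drains the list:

#include <linux/skbuff.h>
#include <linux/list.h>
#include <linux/mm.h>

static LIST_HEAD(pending_pages);	/* drained by a periodic poll */

static void granted_skb_destructor(struct sk_buff *skb)
{
	int i;

	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		struct page *page = skb_shinfo(skb)->frags[i].page;

		if (page_count(page) == 1) {
			/* Only our extra reference remains: revoke the
			 * grant and free the page immediately. */
			put_page(page);
		} else {
			/* Stray references still exist; park the page
			 * until a poller sees its count drop to 1. */
			list_add_tail(&page->lru, &pending_pages);
		}
	}
}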

Dangerous, I would say. You could have an skb1 page transferred to
another skb2, and skb1's destructor called way before the page is
released.

The TCP stack could do that in tcp_collapse() [ it currently doesn't
play with pages ].
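
To illustrate with a hypothetical helper: a frag page migrated from
skb1 to skb2 carries its page reference with it, so skb1 can be freed
(and its destructor run) long before the page itself is released.

#include <linux/skbuff.h>

/* Move skb1's only frag page onto skb2; skb2 inherits the page
 * reference, so no get_page()/put_page() pair is needed. */
static void move_only_frag(struct sk_buff *skb1, struct sk_buff *skb2)
{
	skb_frag_t *frag = &skb_shinfo(skb1)->frags[0];

	skb_fill_page_desc(skb2, skb_shinfo(skb2)->nr_frags,
			   frag->page, frag->page_offset, frag->size);

	skb_shinfo(skb1)->nr_frags = 0;
	/* (len/data_len/truesize accounting on both skbs omitted.) */
}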




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

