
Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots properly when larger MTU sizes are used



On Wed, Dec 05, 2012 at 11:56:32AM +0000, Palagummi, Siva wrote:
> Matt,
[...]
> You are right. The chunk above, which is already upstream, is
> unfortunately incorrect in some cases. We also ran into issues in
> our environment around a week back and found this problem. The count
> will differ based on the head length because of the optimization
> that start_new_rx_buffer() tries to do for large buffers. A hole of
> size "offset_in_page" will be left in the first page during the copy
> if the remaining buffer size is >= PAGE_SIZE, and this subsequently
> affects copy_off as well.
>
> So xen_netbk_count_skb_slots() needs a fix to calculate the count
> correctly based on the head length, and also a fix to calculate the
> copy_off at which the data from the fragments gets copied.

Can you explain more about the copy_off problem? I'm not seeing it.
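
For concreteness, here is how I model the head copy. This is a
userspace sketch only: PAGE_SIZE and the example sizes below are
assumptions, and the loop paraphrases my reading of netbk_gop_skb()
and start_new_rx_buffer() rather than quoting them.

#include <stdio.h>

#define PAGE_SIZE         4096UL
#define MAX_BUFFER_OFFSET PAGE_SIZE

/*
 * Feed the linear area to the copy loop one page-aligned chunk at a
 * time, starting a fresh ring buffer whenever the next chunk would
 * straddle the current one (the "complex case" of
 * start_new_rx_buffer()).  This leaves a hole of offset_in_page()
 * bytes in the first buffer when the remaining data is large.
 */
static void model_head_copy(unsigned long headlen, unsigned long offset)
{
    unsigned long data = offset;          /* offset_in_page(skb->data) */
    unsigned long end = offset + headlen;
    unsigned long copy_off = 0;           /* fill level of current buffer */
    unsigned int slots = 1;
    unsigned long estimate = (headlen + PAGE_SIZE - 1) / PAGE_SIZE;

    while (data < end) {
        unsigned long len = PAGE_SIZE - (data % PAGE_SIZE);

        if (data + len > end)
            len = end - data;

        if (copy_off + len > MAX_BUFFER_OFFSET) {
            slots++;                      /* start a new ring buffer */
            copy_off = 0;
        }
        copy_off += len;
        data += len;
    }

    printf("headlen=%5lu offset=%4lu: estimated %lu slot(s), used %u, "
           "final copy_off=%lu\n",
           headlen, offset, estimate, slots, copy_off);
}

int main(void)
{
    model_head_copy(8192, 64);   /* estimate says 2 slots; 3 are used */
    model_head_copy(2000, 0);    /* estimate and actual agree here */
    return 0;
}

With a non-zero starting offset the head consumes one more slot than
xen_netbk_count_skb_slots() estimates, and the copy_off at which the
first frag lands differs from the estimate too. Is that the problem
you mean?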

> max_required_rx_slots may also need a fix to account for the
> additional slot that can be required when the MTU is >= PAGE_SIZE;
> for the worst-case scenario, at least another +1. One thing that is
> still puzzling here is that max_required_rx_slots seems to assume
> that the linear length in the head will never be greater than the
> MTU size, but that doesn't seem to be the case all the time. I
> wonder if it requires some kind of fix there, or special handling
> for when count_skb_slots exceeds max_required_rx_slots.

We should only be using the number of pages required to copy the
data. The fix shouldn't be to anticipate wasting ring space by
increasing the return value of max_required_rx_slots().
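
For reference, the estimate under discussion looks roughly like this.
This is a compilable paraphrase with the xenvif fields stubbed out,
not a verbatim quote of the driver:

#include <stdio.h>

#define PAGE_SIZE     4096
#define MAX_SKB_FRAGS 18    /* 65536/PAGE_SIZE + 2 on 4K pages */

/* stub of the xenvif fields the function consults */
struct vif_model {
    unsigned int mtu;
    int can_sg;
    int gso;
};

/*
 * Worst case per skb: the MTU-sized linear area, plus (for SG/GSO
 * vifs) one slot per possible fragment and one for the GSO
 * extra-info slot.  Note the assumption Siva points out: the linear
 * length is taken to be bounded by the MTU.
 */
static int max_required_rx_slots(const struct vif_model *vif)
{
    int max = (vif->mtu + PAGE_SIZE - 1) / PAGE_SIZE; /* DIV_ROUND_UP */

    if (vif->can_sg || vif->gso)
        max += MAX_SKB_FRAGS + 1; /* extra_info + frags */

    return max;
}

int main(void)
{
    struct vif_model vif = { .mtu = 9000, .can_sg = 1, .gso = 1 };

    printf("worst case for mtu %u: %d slots\n",
           vif.mtu, max_required_rx_slots(&vif));
    return 0;
}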

[...]

> > Why increment count by the /estimated/ count instead of the actual
> > number of slots used? We have the number of slots in the line just
> > above, in sco->meta_slots_used.
> > 
>
> Count actually refers to the ring slots consumed rather than the
> meta slots used; count can be different from meta_slots_used.

Aah, indeed. This can end up being too pessimistic if you have lots of
frags that require multiple copy operations. I still think that it
would be better to calculate the actual number of ring slots consumed
by netbk_gop_skb() to avoid other bugs like the one you originally
fixed.
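
Something like the following is what I have in mind -- an untested
sketch against the xen_netbk_rx_action() dequeue loop, assuming (as I
read the code) that netbk_gop_skb() advances rx.req_cons once per
ring slot it consumes:

    /* untested sketch: measure the slots actually consumed,
     * rather than carrying the estimate forward */
    RING_IDX old_req_cons = vif->rx.req_cons;

    sco = (struct skb_cb_overlay *)skb->cb;
    sco->meta_slots_used = netbk_gop_skb(skb, &npo);

    sco->count = vif->rx.req_cons - old_req_cons;
    count += sco->count;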

> > > >                 __skb_queue_tail(&rxq, skb);
> > > >
> > > > +               skb = skb_peek(&netbk->rx_queue);
> > > > +               if (skb == NULL)
> > > > +                       break;
> > > > +               sco = (struct skb_cb_overlay *)skb->cb;
> > > >                 /* Filled the batch queue? */
> > > > -               if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
> > > > +               if (count + sco->count >= XEN_NETIF_RX_RING_SIZE)
> > > >                         break;
> > > >         }
> > > >
> > 
> > This change I like.
> > 
> > We're working on a patch to improve buffer efficiency and fix the
> > miscalculation problem. Siva, I'd be happy to re-base and
> > re-submit this patch (with minor adjustments) as part of that
> > work, unless you want to handle that.
> > 
> > Matt
> 
> Thanks!!  Please feel free to re-base and re-submit :-)

OK, thanks!

Matt




 

