
Re: [Xen-devel] [PATCH v2 net-next 5/7] xen-netback: process guest rx packets in batches



> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@xxxxxxxxxx]
> Sent: 04 October 2016 13:48
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: netdev@xxxxxxxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxxx; Wei Liu
> <wei.liu2@xxxxxxxxxx>; David Vrabel <david.vrabel@xxxxxxxxxx>
> Subject: Re: [Xen-devel] [PATCH v2 net-next 5/7] xen-netback: process
> guest rx packets in batches
> 
> On Tue, Oct 04, 2016 at 10:29:16AM +0100, Paul Durrant wrote:
> > From: David Vrabel <david.vrabel@xxxxxxxxxx>
> >
> > Instead of only placing one skb on the guest rx ring at a time,
> > process a batch of up to 64.  This improves performance by ~10% in
> > some tests.

I believe the tests are mainly throughput tests, but David would know the 
specifics.

> 
> And does it regress latency workloads?
> 

It shouldn't, although I have not run ping-pong tests to verify. If packets 
are only placed on the vif queue singly, though, then the batching should 
have no effect, since rx_action will complete and do the push as before.

  Paul

> What are those 'some tests' you speak of?
> 
> Thanks.
> >
> > Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx> [re-based]
> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > ---
> > Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> > ---
> >  drivers/net/xen-netback/rx.c | 15 ++++++++++++++-
> >  1 file changed, 14 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
> > index 9548709..ae822b8 100644
> > --- a/drivers/net/xen-netback/rx.c
> > +++ b/drivers/net/xen-netback/rx.c
> > @@ -399,7 +399,7 @@ static void xenvif_rx_extra_slot(struct xenvif_queue *queue,
> >     BUG();
> >  }
> >
> > -void xenvif_rx_action(struct xenvif_queue *queue)
> > +void xenvif_rx_skb(struct xenvif_queue *queue)
> >  {
> >     struct xenvif_pkt_state pkt;
> >
> > @@ -425,6 +425,19 @@ void xenvif_rx_action(struct xenvif_queue *queue)
> >     xenvif_rx_complete(queue, &pkt);
> >  }
> >
> > +#define RX_BATCH_SIZE 64
> > +
> > +void xenvif_rx_action(struct xenvif_queue *queue)
> > +{
> > +   unsigned int work_done = 0;
> > +
> > +   while (xenvif_rx_ring_slots_available(queue) &&
> > +          work_done < RX_BATCH_SIZE) {
> > +           xenvif_rx_skb(queue);
> > +           work_done++;
> > +   }
> > +}
> > +
> >  static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue)
> >  {
> >     RING_IDX prod, cons;
> > --
> > 2.1.4
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

