
Re: [Xen-devel] [PATCH net] xen-netback: Fix slot estimation



On 03/06/14 14:37, Paul Durrant wrote:
-----Original Message-----
From: Zoltan Kiss
Sent: 03 June 2014 14:32
To: xen-devel@xxxxxxxxxxxxxxxxxxxx; Ian Campbell; Wei Liu; Paul Durrant; linux@xxxxxxxxxxxxxx
Cc: netdev@xxxxxxxxxxxxxxx; David Vrabel; davem@xxxxxxxxxxxxx; Zoltan Kiss
Subject: [PATCH net] xen-netback: Fix slot estimation

A recent commit (a02eb4 "xen-netback: worse-case estimate in xenvif_rx_action
is underestimating") capped the slot estimate at MAX_SKB_FRAGS, but that
triggers the next BUG_ON a few lines down, as the packet can consume more
slots than estimated.
This patch removes that cap. If the frontend doesn't provide enough slots,
the skb is put back at the top of the queue and rx_last_skb_slots is capped.
If the next try also fails, the packet is dropped.
Capping rx_last_skb_slots is needed because if the frontend never gives
enough slots, the ring gets stalled.

Signed-off-by: Zoltan Kiss <zoltan.kiss@xxxxxxxxxx>
Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
Cc: David Vrabel <david.vrabel@xxxxxxxxxx>
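
For reference, the stall mentioned above comes from the RX kthread's wakeup
check: it only reports work when the ring can currently satisfy the slot count
recorded in rx_last_skb_slots, so an estimate the frontend can never post keeps
the thread asleep. Roughly, paraphrasing the check in this version of the
driver (not a verbatim quote):

static inline int rx_work_todo(struct xenvif *vif)
{
        /* RX work is only reported when the ring can currently satisfy
         * the slot estimate recorded for the skb at the head of the
         * queue, so a value the frontend never posts leaves this false.
         */
        return !skb_queue_empty(&vif->rx_queue) &&
               xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
}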

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index da85ffb..7164157 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -600,13 +600,6 @@ static void xenvif_rx_action(struct xenvif *vif)
                                                         PAGE_SIZE);
                }

-               /* To avoid the estimate becoming too pessimal for some
-                * frontends that limit posted rx requests, cap the estimate
-                * at MAX_SKB_FRAGS.
-                */
-               if (max_slots_needed > MAX_SKB_FRAGS)
-                       max_slots_needed = MAX_SKB_FRAGS;
-
                /* We may need one more slot for GSO metadata */
                if (skb_is_gso(skb) &&
                   (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 ||
@@ -615,9 +608,27 @@ static void xenvif_rx_action(struct xenvif *vif)

                /* If the skb may not fit then bail out now */
                if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
+                       /* If the skb needs more than MAX_SKB_FRAGS slots, it
+                        * can happen that the frontend never gives us enough.
+                        * To avoid spinning on that packet, first we put it back
+                        * at the top of the queue, but if the next try fails,
+                        * we drop it.
+                        */
+                       if (max_slots_needed > MAX_SKB_FRAGS &&
+                           vif->rx_last_skb_slots == MAX_SKB_FRAGS) {

Isn't it sufficient to say:

if (vif->rx_last_skb_slots != 0)

here? We should not ordinarily wake before the requisite number of slots is
available.

  Paul

Yep, that would be enough.

+                               kfree_skb(skb);
+                               vif->rx_last_skb_slots = 0;
+                               continue;
+                       }
                        skb_queue_head(&vif->rx_queue, skb);
                        need_to_notify = true;
-                       vif->rx_last_skb_slots = max_slots_needed;
+                       /* Cap this, otherwise if the guest never gives us
+                        * enough slots, rx_work_todo will spin
+                        */
+                       vif->rx_last_skb_slots =
+                               max_slots_needed > MAX_SKB_FRAGS ?
+                               MAX_SKB_FRAGS :
+                               max_slots_needed;
                        break;
                } else
                        vif->rx_last_skb_slots = 0;
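
With Paul's simplification, the two-part test above collapses: rx_last_skb_slots
is non-zero exactly when a previous pass already put this skb back, so being
woken again without enough slots is reason to drop it. A sketch of how the
branch would then read (my rewording, not a posted follow-up patch):

                        /* A non-zero rx_last_skb_slots means a previous
                         * attempt already put this skb back and we were
                         * woken again without enough slots, so give up on
                         * the packet rather than retrying forever.
                         */
                        if (vif->rx_last_skb_slots != 0) {
                                kfree_skb(skb);
                                vif->rx_last_skb_slots = 0;
                                continue;
                        }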

