
[PATCH] Add upper bound to receiver ring poll to reduce DPC latency


  • To: <win-pv-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: <Rachel.Yan@xxxxxxxxxx>
  • Date: Tue, 7 Feb 2023 14:05:09 +0000
  • Cc: Rachel Yan <Rachel.Yan@xxxxxxxxxx>, Rachel Yan <rachel.yan@xxxxxxxxxx>
  • Delivery-date: Thu, 09 Feb 2023 11:37:45 +0000
  • List-id: Developer list for the Windows PV Drivers subproject <win-pv-devel.lists.xenproject.org>

From: Rachel Yan <Rachel.Yan@xxxxxxxxxx>

Add an upper bound to the ring poll iteration count, with the value chosen
through experimentation, to avoid going around the ring an unbounded (or very
large) number of times when netback keeps producing. When the bound is reached,
polling stops and the DPC is re-queued so the remaining work is picked up on a
later pass. This has been tested to show improvements in DPC latencies and
file transfer speeds.

Signed-off-by: Rachel Yan <rachel.yan@xxxxxxxxxx>

---
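Note for reviewers: the shape of the change is sketched below as a small
stand-alone C program. The ring size, index variables and main() driver are
purely illustrative assumptions; only the 10 * ring-size bound and the decision
to stop and request another pass (which the patch implements by re-queueing the
DPC) correspond to the real change.

/*
 * Stand-alone sketch of the bounded-poll pattern (user-mode, illustrative
 * names; only the 10x-ring-size bound and the "stop and re-poll" decision
 * mirror the actual change below).
 */
#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE   256                     /* hypothetical ring size */
#define MAX_COUNT   (10 * RING_SIZE)        /* same 10x bound as the patch */

static unsigned int consumer;               /* free-running consumer index */
static unsigned int producer;               /* free-running producer index */

static void consume_entry(unsigned int idx)
{
    (void)idx;                              /* real code would process a response here */
}

/* Drain at most MAX_COUNT entries; return true if another pass is needed. */
static bool ring_poll(void)
{
    unsigned int count = 0;

    while (consumer != producer) {
        if (count >= MAX_COUNT)
            return true;                    /* bound hit: defer the rest */

        consume_entry(consumer % RING_SIZE);
        consumer++;
        count++;
    }

    return false;                           /* ring fully drained */
}

int main(void)
{
    producer = 3000;                        /* pretend the producer raced ahead */

    if (ring_poll())
        printf("bound hit after %u entries; would re-queue the DPC here\n", consumer);
    else
        printf("ring drained after %u entries\n", consumer);

    return 0;
}

In the driver itself the re-poll request is expressed by setting NeedQueueDpc
and letting the existing KeInsertQueueDpc() call at the end of
ReceiverRingPoll() schedule another pass.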
 src/xenvif/receiver.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/src/xenvif/receiver.c b/src/xenvif/receiver.c
index 2145133..d469de4 100644
--- a/src/xenvif/receiver.c
+++ b/src/xenvif/receiver.c
@@ -2013,11 +2013,15 @@ ReceiverRingPoll(
     PXENVIF_RECEIVER            Receiver;
     PXENVIF_FRONTEND            Frontend;
     ULONG                       Count;
+    ULONG                       MaxCount;
+    BOOLEAN                     NeedQueueDpc;
 
     Receiver = Ring->Receiver;
     Frontend = Receiver->Frontend;
 
     Count = 0;
+    MaxCount = 10 * XENVIF_RECEIVER_RING_SIZE;
+    NeedQueueDpc = FALSE;
 
     if (!Ring->Enabled || Ring->Paused)
         goto done;
@@ -2068,6 +2072,15 @@ ReceiverRingPoll(
             PXENVIF_RECEIVER_FRAGMENT   Fragment;
             PMDL                        Mdl;
 
+            // Avoid going around the ring an infinite (or very large) number of times
+            // if the netback producer happens to fill in just enough packets to cause us
+            // to go around the ring multiple times. This should reduce DPC latencies.
+
+            if (Count >= MaxCount) {
+                NeedQueueDpc = TRUE;
+                break;
+            }
+
             rsp = RING_GET_RESPONSE(&Ring->Front, rsp_cons);
 
             // netback is required to complete requests in order and place
@@ -2247,7 +2260,7 @@ ReceiverRingPoll(
     if (!__ReceiverRingIsStopped(Ring))
         ReceiverRingFill(Ring);
 
-    if (Ring->PacketQueue != NULL &&
+    if ((NeedQueueDpc || Ring->PacketQueue != NULL) &&
         KeInsertQueueDpc(&Ring->QueueDpc, NULL, NULL))
         Ring->QueueDpcs++;
 
-- 
2.38.0.windows.1