
Re: [PATCH v2] Limit the amount of work done for each Receiver DPC



On 06/06/2022 13:00, Martin Harvey wrote:
+    if (!IsListEmpty(&Ring->PacketComplete)) {
+        // Re-run for the remainder from the back of DPC queue.
+        Ring->BackPressured = TRUE;
+        if (KeInsertQueueDpc(&Ring->QueueDpc, NULL, NULL))
+            Ring->QueueDpcs++;
+    } else {
+        Ring->BackPressured = FALSE;
+
+        // PollDpc is zeroed before final flush, don't queue it here.
+        if (!Ring->Flush && KeInsertQueueDpc(&Ring->PollDpc, NULL, NULL))
+            Ring->PollDpcs++;
+    }
  }

Additionally, there's a problem here in your rewrite. We only want to queue the 
PollDpc when we transition from the backpressured to the non-backpressured 
state. Doing this on every pass in the low-load case will run the entire 
PollDpc rx path continuously, which is definitely a mistake.

Yes, that is a mistake.

I'll rebase the 2nd patch on top and let's see how that looks.

  Paul






 

