
Re: [Xen-devel] [PATCH] evtchn_do_upcall: search a snapshot of level 2 bits for pending upcalls



On Jan 31, 2010, at 2:24 AM, Keir Fraser wrote:

> On 31/01/2010 00:40, "Kaushik Kumar Ram" <kaushik@xxxxxxxx> wrote:
> 
>>> So how about making the clear of l1i in the l1 mask unconditional? I think
>>> that would be better, but I wasn't sure it is safe, since the first l1i you
>>> scan you may start halfway through, and thus legitimately have more work to
>>> do on that l1i on a later iteration of the outer loop. But I think that is
>>> the only case it is good to leave the l1 unmasked? Also, even better, on
>>> that second scan of that l1i, you would preferably want to scan only those
>>> bits in the l2 mask that you didn't scan on the first iteration of the outer
>>> loop!
>> 
>> OK. I agree the following is a good compromise.
>> - Unconditionally clear l1 bits except the first l1i (but only if l2 is
>> scanned from halfway).
>> - Remember where the scanning began (both l1i and l2i) and stop scanning at
>> that point after wrapping around.
>> - Read active_evtchns() once per l1i (except the first l1i where you might
>> have to do it twice).
> 
> Yes, sounds good. Are you going to make the patch?

Here is a first version of the patch. It turned out more complicated than I
expected, and I found it hard to judge whether it is efficient enough; I can
keep improving it based on your feedback. A rough standalone sketch of the
scan order follows, and then the patch itself.
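
To make the patch easier to review, here is a rough standalone model of the
scan order we agreed on above; it is only a sketch, not the patched function.
pending_l1, pending_l2[], start_l1i/start_l2i, handle_port() and
record_resume() are made-up stand-ins for the shared-info bitmaps, the
per-CPU current_l1i/current_l2i and do_IRQ()/evtchn_device_upcall(), and
__builtin_ctzl() stands in for __ffs().

#include <stdio.h>

#define BITS_PER_LONG (8 * (int)sizeof(unsigned long))

static unsigned long pending_l1;                 /* level-1 selector word */
static unsigned long pending_l2[BITS_PER_LONG];  /* level-2 bitmap words  */
static unsigned int  start_l1i, start_l2i;       /* where the last scan stopped */

static void handle_port(unsigned int port)
{
        printf("port %u\n", port);
        /* The real code clears the pending bit in the IRQ path; the toy
         * clears it here so a port is only reported once. */
        pending_l2[port / BITS_PER_LONG] &= ~(1UL << (port % BITS_PER_LONG));
}

/* Remember that (l1i, l2i) was processed last; the next scan resumes just
 * past it, which is what makes the processing fair. */
static void record_resume(unsigned int l1i, unsigned int l2i)
{
        start_l2i = (l2i + 1) % BITS_PER_LONG;
        start_l1i = start_l2i ? l1i : (l1i + 1) % BITS_PER_LONG;
}

static void scan(void)
{
        unsigned long l1 = pending_l1, l2, deferred = 0;
        unsigned int l1i = start_l1i, l2i = start_l2i, first_l1i = start_l1i;

        pending_l1 = 0;
        if (l1 == 0)
                return;

        /* Resuming mid-word: scan its upper bits now and keep its unscanned
         * lower bits for one final pass, so nothing is skipped. */
        if ((l1 & (1UL << l1i)) && l2i != 0)
                deferred = pending_l2[l1i] & ((~0UL) >> (BITS_PER_LONG - l2i));
        else
                l2i = 0;

        while (l1 != 0) {
                unsigned long masked_l1 = l1 & ((~0UL) << l1i);

                if (masked_l1 == 0) {           /* wrap to the beginning */
                        l1i = l2i = 0;
                        continue;
                }
                l1i = __builtin_ctzl(masked_l1);

                l2 = pending_l2[l1i] & ((~0UL) << l2i);
                while (l2 != 0) {
                        l2i = __builtin_ctzl(l2);
                        handle_port(l1i * BITS_PER_LONG + l2i);
                        record_resume(l1i, l2i);
                        l2 &= ~(1UL << l2i);
                }

                /* Clear the selector bit unconditionally; the partially
                 * scanned starting word is finished via 'deferred'. */
                l1 &= ~(1UL << l1i);
                l1i = (l1i + 1) % BITS_PER_LONG;
                l2i = 0;
        }

        /* Finally, the lower bits of the word the scan started inside. */
        while (deferred != 0) {
                l2i = __builtin_ctzl(deferred);
                handle_port(first_l1i * BITS_PER_LONG + l2i);
                record_resume(first_l1i, l2i);
                deferred &= ~(1UL << l2i);
        }
}

int main(void)
{
        /* Mark ports 3, 70 and 130 pending and scan once; the next scan
         * would resume just past port 130. */
        pending_l2[0] |= 1UL << 3;
        pending_l2[1] |= 1UL << 6;
        pending_l2[2] |= 1UL << 2;
        pending_l1 = (1UL << 0) | (1UL << 1) | (1UL << 2);
        scan();
        printf("next scan resumes at l1i=%u l2i=%u\n", start_l1i, start_l2i);
        return 0;
}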

# HG changeset patch
# User kaushik@xxxxxxxxxxxxxxxx
# Date 1264994160 21600
# Branch evtchn2
# Node ID e539a849a3348c3e87e8d50eba5998b7cdb9394d
# Parent  c88a02a22a057a632e6c21442e42e56e07904988
Fair processing of pending upcalls.

Signed-off-by: Kaushik Kumar Ram <kaushik@xxxxxxxx>

diff -r c88a02a22a05 -r e539a849a334 drivers/xen/core/evtchn.c
--- a/drivers/xen/core/evtchn.c Fri Jan 29 07:57:48 2010 +0000
+++ b/drivers/xen/core/evtchn.c Sun Jan 31 21:16:00 2010 -0600
@@ -236,9 +236,9 @@ static DEFINE_PER_CPU(unsigned int, curr
 /* NB. Interrupts are disabled on entry. */
 asmlinkage void evtchn_do_upcall(struct pt_regs *regs)
 {
-       unsigned long       l1, l2;
+       unsigned long       l1, l2, l2_start = 0;
        unsigned long       masked_l1, masked_l2;
-       unsigned int        l1i, l2i, port, count;
+       unsigned int        l1i, l2i = 0, port, count, l1i_start, l2i_start;
        int                 irq;
        unsigned int        cpu = smp_processor_id();
        shared_info_t      *s = HYPERVISOR_shared_info;
@@ -261,48 +261,96 @@ asmlinkage void evtchn_do_upcall(struct 
 #endif
                l1 = xchg(&vcpu_info->evtchn_pending_sel, 0);

-               l1i = per_cpu(current_l1i, cpu);
-               l2i = per_cpu(current_l2i, cpu);
+               l1i = l1i_start = per_cpu(current_l1i, cpu);
+               l2i_start = per_cpu(current_l2i, cpu);
+       
+               if(l1 != 0)
+               {
+                       masked_l1 = l1 & ((~0UL) << l1i);
+                       if (masked_l1 != 0) {
+                               l1i = __ffs(masked_l1);
+                               if(l1i == l1i_start && l2i_start != 0) {
+                                       l2 = active_evtchns(cpu, s, l1i);
+                                       l2i = l2i_start;
+                                       l2_start = l2 & ((~0UL) >> (BITS_PER_LONG - l2i));
+                               }       
+                       }       
+                       else {
+                               l1i = 0;
+                               masked_l1 = l1 & ((~0UL) << l1i);
+                               l1i = __ffs(masked_l1);
+                               l2 = active_evtchns(cpu, s, l1i);
+                       }

-               while (l1 != 0) {
-                       masked_l1 = l1 & ((~0UL) << l1i);
-                       /* If we masked out all events, wrap to beginning. */
-                       if (masked_l1 == 0) {
-                               l1i = l2i = 0;
-                               continue;
-                       }
-                       l1i = __ffs(masked_l1);
+                       while (1) {

-                       do {
-                               l2 = active_evtchns(cpu, s, l1i);
-                               masked_l2 = l2 & ((~0UL) << l2i);
-                               if (masked_l2 == 0)
-                                       break;
-                               l2i = __ffs(masked_l2);
+                               do {
+                                       masked_l2 = l2 & ((~0UL) << l2i);
+                                       if (masked_l2 == 0)
+                                               break;
+                                       l2i = __ffs(masked_l2);

-                               /* process port */
-                               port = (l1i * BITS_PER_LONG) + l2i;
-                               if ((irq = evtchn_to_irq[port]) != -1)
-                                       do_IRQ(irq, regs);
-                               else
-                                       evtchn_device_upcall(port);
+                                       /* process port */
+                                       port = (l1i * BITS_PER_LONG) + l2i;
+                                       if ((irq = evtchn_to_irq[port]) != -1)
+                                               do_IRQ(irq, regs);
+                                       else
+                                               evtchn_device_upcall(port);

-                               l2i = (l2i + 1) % BITS_PER_LONG;
+                                       l2i = (l2i + 1) % BITS_PER_LONG;

-                               /* Next caller starts at last processed + 1 */
-                               per_cpu(current_l1i, cpu) =
-                                       l2i ? l1i : (l1i + 1) % BITS_PER_LONG;
-                               per_cpu(current_l2i, cpu) = l2i;
+                                       /* Next caller starts at last processed + 1 */
+                                       per_cpu(current_l1i, cpu) =
+                                               l2i ? l1i : (l1i + 1) % BITS_PER_LONG;
+                                       per_cpu(current_l2i, cpu) = l2i;

-                       } while (l2i != 0);
+                               } while (l2i != 0);

-                       l2 = active_evtchns(cpu, s, l1i);
-                       /* If we handled all ports, clear the selector bit. */
-                       if (l2 == 0)
                                l1 &= ~(1UL << l1i);

-                       l1i = (l1i + 1) % BITS_PER_LONG;
-                       l2i = 0;
+                               if(l1 == 0)
+                                       break;
+
+                               l1i = (l1i + 1) % BITS_PER_LONG;
+                       
+                               masked_l1 = l1 & ((~0UL) << l1i);
+                               /* If we masked out all events, wrap to beginning. */
+                               if (masked_l1 == 0) {
+                                       l1i = 0;
+                                       masked_l1 = l1 & ((~0UL) << l1i);
+                               }
+                               l1i = __ffs(masked_l1);
+                               l2i = 0;
+                               l2 = active_evtchns(cpu, s, l1i);
+                       }
+
+                       /* Check and process any pending events in the 
+                       * unprocessed portion of bits selected by l1i_start.
+                       */
+                       if(l2_start != 0) {
+                               l1i = l1i_start;
+                               l2i = 0;
+                               do {
+                                       masked_l2 = l2_start & ((~0UL) << l2i);
+                                       if (masked_l2 == 0)
+                                               break;
+                                       l2i = __ffs(masked_l2);
+                               
+                                       /* process port */
+                                       port = (l1i * BITS_PER_LONG) + l2i;
+                                       if ((irq = evtchn_to_irq[port]) != -1)
+                                               do_IRQ(irq, regs);
+                                       else
+                                               evtchn_device_upcall(port);
+
+                                       l2i = (l2i + 1) % BITS_PER_LONG;
+
+                                       /* Next caller starts at last processed + 1 */
+                                       per_cpu(current_l1i, cpu) =
+                                               l2i ? l1i : (l1i + 1) % BITS_PER_LONG;
+                                       per_cpu(current_l2i, cpu) = l2i;
+                               }while(1);
+                       }
                }

                /* If there were nested callbacks then we have more to do. */
 
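
One more note that may help with review: the patch relies on two
complementary masks. word & ((~0UL) << i) keeps the bits at or above
position i (the part still to be scanned), while
word & ((~0UL) >> (BITS_PER_LONG - i)) keeps the bits below i (the part
already passed over), which is what l2_start captures for the word the scan
resumed in the middle of. The second form is only well defined for i > 0,
hence the l2i_start != 0 check. A tiny illustration, with made-up helpers
and __builtin_ctzl() standing in for __ffs():

#include <stdio.h>

#define BITS_PER_LONG (8 * (int)sizeof(unsigned long))

/* Lowest set bit at or above position 'i', or -1 if there is none. */
static int next_bit_from(unsigned long word, unsigned int i)
{
        unsigned long masked = word & ((~0UL) << i);
        return masked ? (int)__builtin_ctzl(masked) : -1;
}

/* Bits strictly below position 'i'; 'i' must be non-zero, as in the patch. */
static unsigned long bits_below(unsigned long word, unsigned int i)
{
        return word & ((~0UL) >> (BITS_PER_LONG - i));
}

int main(void)
{
        unsigned long word = (1UL << 3) | (1UL << 40);

        printf("%d\n", next_bit_from(word, 10));   /* 40 */
        printf("%d\n", next_bit_from(word, 41));   /* -1 */
        printf("%#lx\n", bits_below(word, 10));    /* 0x8: only bit 3 is below 10 */
        return 0;
}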