
Re: [XenARM] [Xen-devel] [PATCH] arm: support fewer LRs register than virtual irqs



On Tue, 14 Feb 2012, David Vrabel wrote:
> > +void gic_set_guest_irq(unsigned int virtual_irq,
> > +        unsigned int state, unsigned int priority)
> > +{
> > +    int i;
> > +    struct pending_irq *iter, *n;
> > +
> > +    spin_lock(&gic.lock);
> > +    for (i = 0; i < nr_lrs; i++) {
> > +        if (!test_and_set_bit(i, &gic.lr_mask))
> > +        {
> > +            gic_set_lr(i, virtual_irq, state, priority);
> > +            spin_unlock(&gic.lock);
> > +            return;
> > +        }
> > +    }
> 
> You can skip this loop if gic.lr_pending is non-empty as there won't be
> any spare bits in gic.lr_mask.

Right
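
Something along these lines, I think -- an untested sketch on top of
the hunk above, reusing the same gic.lock / gic.lr_mask / gic.lr_pending
fields:

    spin_lock(&gic.lock);
    /* if anything is queued on lr_pending, every LR must already be in
     * use, so there is no point scanning lr_mask for a free slot */
    if ( list_empty(&gic.lr_pending) )
    {
        for (i = 0; i < nr_lrs; i++) {
            if ( !test_and_set_bit(i, &gic.lr_mask) )
            {
                gic_set_lr(i, virtual_irq, state, priority);
                spin_unlock(&gic.lock);
                return;
            }
        }
    }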


> > +    n = irq_to_pending(current, virtual_irq);
> > +    list_for_each_entry ( iter, &gic.lr_pending, lr_link )
> > +    {
> > +        if ( iter->priority < priority )
> > +        {
> > +            list_add_tail(&n->lr_link, &iter->lr_link);
> > +            spin_unlock(&gic.lock);
> > +            return;
> > +        }
> > +    }
> 
> How many pending irqs are expected?  If it's lots then looping through a
> simple list like this might be slow.   Something to keep in mind -- I
> wouldn't try and fix it now.

How many interrupts is a guest going to receive while 4 are already in
service (i.e. while all the LRs are busy)? I am not sure yet.
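
If it does turn out to matter, a crude way to find out would be to
track the deepest lr_pending list we ever see -- an untested sketch
with a made-up counter name, dropped in gic_set_guest_irq while
gic.lock is held:

    static unsigned int max_lr_pending_depth;
    unsigned int depth = 0;
    struct pending_irq *tmp;

    /* count the entries the sorted insert would have to walk past */
    list_for_each_entry ( tmp, &gic.lr_pending, lr_link )
        depth++;
    if ( depth > max_lr_pending_depth )
    {
        max_lr_pending_depth = depth;
        printk("gic: lr_pending depth now %u\n", depth);
    }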


> > +    list_add(&n->lr_link, &gic.lr_pending);
> > +    spin_unlock(&gic.lock);
> > +    return;
> > +}
> > +
> >  void gic_inject_irq_start(void)
> >  {
> >      uint32_t hcr;
> > @@ -435,13 +471,26 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
> >      uint32_t lr;
> >      uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
> >  
> > -    for ( i = 0; i < 64; i++ ) {
> > +    for ( i = 0; i < nr_lrs; i++ ) {
> >          if ( eisr & ((uint64_t)1 << i) ) {
> >              struct pending_irq *p;
> >  
> > +            spin_lock(&gic.lock);
> >              lr = GICH[GICH_LR + i];
> >              virq = lr & GICH_LR_VIRTUAL_MASK;
> >              GICH[GICH_LR + i] = 0;
> > +            clear_bit(i, &gic.lr_mask);
> > +
> > +            if ( !list_empty(gic.lr_pending.next) ) {
> > +                p = list_entry(gic.lr_pending.next, typeof(*p), lr_link);
> > +                gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
> > +                list_del(&p->lr_link);
> > +                INIT_LIST_HEAD(&p->lr_link);
> 
> I don't think you need the INIT_LIST_HEAD() here (and even if you did
> you should use list_del_init()).  You only need to init nodes if you
> need to test if they are in a list or not.

OK
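
So the refill path shrinks to just (sketch):

                p = list_entry(gic.lr_pending.next, typeof(*p), lr_link);
                gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
                list_del(&p->lr_link);
                set_bit(i, &gic.lr_mask);

with no re-initialisation of lr_link, since, as you point out, nothing
needs to test whether a pending_irq is still on lr_pending.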


> > +                set_bit(i, &gic.lr_mask);
> > +            } else {
> > +                gic_inject_irq_stop();
> > +            }
> > +            spin_unlock(&gic.lock);
> >  
> >              spin_lock(&current->arch.vgic.lock);
> >              p = irq_to_pending(current, virq);
> > @@ -449,7 +498,6 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
> >                  p->desc->status &= ~IRQ_INPROGRESS;
> >                  GICC[GICC_DIR] = virq;
> >              }
> > -            gic_inject_irq_stop();
> >              list_del(&p->link);
> >              INIT_LIST_HEAD(&p->link);
> 
> Similarly, here (but this should be fixed up in a separate patch).

OK
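
Will do that in a separate patch; it would reduce to a plain

            list_del(&p->link);

(or list_del_init(&p->link), if it turns out something still needs to
test whether p is queued).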

_______________________________________________
Xen-arm mailing list
Xen-arm@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-arm


 

