
Re: [Xen-devel] [PATCH v6 for-xen-4.5 1/3] dpci: Move from domain centric model to hvm_dirq_dpci model.



On Thu, Sep 25, 2014 at 03:24:53PM +0100, Jan Beulich wrote:
> >>> On 23.09.14 at 04:10, <konrad.wilk@xxxxxxxxxx> wrote:
> > @@ -130,6 +146,7 @@ int pt_irq_create_bind(
> >          return -ENOMEM;
> >      }
> >      pirq_dpci = pirq_dpci(info);
> > +    pt_pirq_reset(d, pirq_dpci);
> >  
> >      switch ( pt_irq_bind->irq_type )
> >      {
> > @@ -232,7 +249,6 @@ int pt_irq_create_bind(
> >          {
> >              unsigned int share;
> >  
> > -            pirq_dpci->dom = d;
> >              if ( pt_irq_bind->irq_type == PT_IRQ_TYPE_MSI_TRANSLATE )
> >              {
> >                  pirq_dpci->flags = HVM_IRQ_DPCI_MAPPED |
> > @@ -258,7 +274,6 @@ int pt_irq_create_bind(
> >              {
> >                  if ( pt_irq_need_timer(pirq_dpci->flags) )
> >                      kill_timer(&pirq_dpci->timer);
> > -                pirq_dpci->dom = NULL;
> >                  list_del(&girq->list);
> >                  list_del(&digl->list);
> >                  hvm_irq_dpci->link_cnt[link]--;
> > @@ -391,7 +406,6 @@ int pt_irq_destroy_bind(
> >          msixtbl_pt_unregister(d, pirq);
> >          if ( pt_irq_need_timer(pirq_dpci->flags) )
> >              kill_timer(&pirq_dpci->timer);
> > -        pirq_dpci->dom   = NULL;
> >          pirq_dpci->flags = 0;
> >          pirq_cleanup_check(pirq, d);
> >      }
> 
> Is all of the above really necessary? I.e. I can neither see why setting
> ->dom earlier is needed, nor why clearing it on the error paths should
> be dropped.

Yes. We need ->dom so that hvm_dirq_assist can run without hitting a
NULL pointer dereference. Please keep in mind that the moment we set up
the IRQ action handler, we are "live" - though to be fair we have not
yet told QEMU the PIRQ value, so the device driver hasn't written it in
yet. But I can see some drivers doing a mix of
pt_irq_create_bind->pt_irq_destroy_bind->pt_irq_create_bind->...

 setup MSI
        [calls QEMU, which does the pt_irq_create_bind; QEMU returns the PIRQ value]
 sets up IDT
 tells the hardware chip to use the PIRQ value that QEMU gave it.
 destroys MSI
 setup MSI
 destroys MSI
 setup MSI

.. while forgetting to tell the chip to stop sending interrupts. During
those windows (between a destroy and the next create) interrupts can
still come in, and since hvm_dirq_assist needs ->dom, we cannot clear it.

The '->dom = d' assignment in pt_irq_create_bind is now redundant since
pt_pirq_reset does it.


> 
> > @@ -459,7 +480,7 @@ int hvm_do_IRQ_dpci(struct domain *d, struct pirq *pirq)
> >          return 0;
> >  
> >      pirq_dpci->masked = 1;
> > -    tasklet_schedule(&dpci->dirq_tasklet);
> > +    tasklet_schedule(&pirq_dpci->tasklet);
> 
> Please also drop the effectively no longer used local variable "dpci"
> here (the NULL check of it needs to stay though, but you can simply
> check the return value of domain_get_irq_dpci() without using a local
> variable now).

That can certainly be done.
> 
> > @@ -564,7 +585,7 @@ static int _hvm_dirq_assist(struct domain *d, struct 
> > hvm_pirq_dpci *pirq_dpci,
> >  
> >  static void hvm_dirq_assist(unsigned long _d)
> >  {
> > -    struct domain *d = (struct domain *)_d;
> > +    struct domain *d = ((struct hvm_pirq_dpci *)_d)->dom;
> >  
> >      ASSERT(d->arch.hvm_domain.irq.dpci);
> >  
> 
> This seems too little of a change - there's no point in calling
> pt_pirq_iterate() here anymore, as you already have the struct
> hvm_pirq_dpci instance that needs acting on (any others will get
> dealt with in their own tasklets). I.e. the current hvm_dirq_assist()
> can go away, and what right now is _hvm_dirq_assist() should
> become hvm_dirq_assist().

Right :-)
> 
> Oh - peeking ahead I see this is patch 2 of your series. Not sure
> this should be separate patches...

I've been conditioned to do 'one logical change per patch', so I figured
it would be far easier if I split it apart.

Of course since you prefer to squash them I will do so promptly!
> 
> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

