
Re: [Xen-devel] [PATCH 8/9] xen/smp/pvhvm: Don't initialize IRQ_WORKER as we are using the native one.



On Fri, Apr 26, 2013 at 05:27:20PM +0100, Stefano Stabellini wrote:
> On Tue, 16 Apr 2013, Konrad Rzeszutek Wilk wrote:
> > There is no need to use the PV version of the IRQ_WORKER mechanism,
> > as under PVHVM we use the native version. The native version uses
> > the SMP API.
> > 
> > The PV irqwork IPIs just sit around unused:
> > 
> >   69:          0          0  xen-percpu-ipi       irqwork0
> >   83:          0          0  xen-percpu-ipi       irqwork1
> > 
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> 
> Might it be worth trying to make it work instead?
> Is it just because we don't set the apic->send_IPI_* functions to the
> Xen-specific versions on PVHVM?
> 

Right. We use the baremetal mechanism (the native apic->send_IPI_* path)
to do it, and it works fine.
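
For reference, the native path we rely on here is arch_irq_work_raise()
in arch/x86/kernel/irq_work.c, which (roughly, as of v3.9) self-IPIs
through the local APIC -- under PVHVM the hypervisor emulates the LAPIC,
so the IPI is delivered without any Xen-specific plumbing:

void arch_irq_work_raise(void)
{
#ifdef CONFIG_X86_LOCAL_APIC
        if (!cpu_has_apic)
                return;

        /* Raise IRQ_WORK_VECTOR on the local CPU. */
        apic->send_IPI_self(IRQ_WORK_VECTOR);
        apic_wait_icr_idle();
#endif
}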

> 
> >  arch/x86/xen/smp.c | 13 ++++++++++++-
> >  1 file changed, 12 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> > index 22c800a..415694c 100644
> > --- a/arch/x86/xen/smp.c
> > +++ b/arch/x86/xen/smp.c
> > @@ -144,6 +144,13 @@ static int xen_smp_intr_init(unsigned int cpu)
> >             goto fail;
> >     per_cpu(xen_callfuncsingle_irq, cpu) = rc;
> >  
> > +   /*
> > +    * The IRQ worker on PVHVM goes through the native path and uses the
> > +    * IPI mechanism.
> > +    */
> > +   if (xen_hvm_domain())
> > +           return 0;
> > +
> >     callfunc_name = kasprintf(GFP_KERNEL, "irqwork%d", cpu);
> >     rc = bind_ipi_to_irqhandler(XEN_IRQ_WORK_VECTOR,
> >                                 cpu,
> > @@ -167,6 +174,9 @@ static int xen_smp_intr_init(unsigned int cpu)
> >     if (per_cpu(xen_callfuncsingle_irq, cpu) >= 0)
> >             unbind_from_irqhandler(per_cpu(xen_callfuncsingle_irq, cpu),
> >                                    NULL);
> > +   if (xen_hvm_domain())
> > +           return rc;
> > +
> >     if (per_cpu(xen_irq_work, cpu) >= 0)
> >             unbind_from_irqhandler(per_cpu(xen_irq_work, cpu), NULL);
> >  
> > @@ -661,7 +671,8 @@ static void xen_hvm_cpu_die(unsigned int cpu)
> >     unbind_from_irqhandler(per_cpu(xen_callfunc_irq, cpu), NULL);
> >     unbind_from_irqhandler(per_cpu(xen_debug_irq, cpu), NULL);
> >     unbind_from_irqhandler(per_cpu(xen_callfuncsingle_irq, cpu), NULL);
> > -   unbind_from_irqhandler(per_cpu(xen_irq_work, cpu), NULL);
> > +   if (!xen_hvm_domain())
> > +           unbind_from_irqhandler(per_cpu(xen_irq_work, cpu), NULL);
> >     xen_uninit_lock_cpu(cpu);
> >     xen_teardown_timer(cpu);
> >     native_cpu_die(cpu);
> > -- 
> > 1.8.1.4
> > 
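FWIW, for anyone not familiar with irq_work: a minimal user (hypothetical,
not part of this patch) looks roughly like the sketch below. Queuing the
work raises IRQ_WORK_VECTOR on the local CPU, which after this patch is
the path used on PVHVM as well as on baremetal:

#include <linux/irq_work.h>
#include <linux/printk.h>
#include <linux/smp.h>

static void my_work_func(struct irq_work *work)
{
        /* Runs in hard interrupt context on the CPU that queued it. */
        pr_info("irq_work ran on CPU %d\n", smp_processor_id());
}

static struct irq_work my_work;

static void example(void)
{
        init_irq_work(&my_work, my_work_func);
        irq_work_queue(&my_work);       /* self-IPI via IRQ_WORK_VECTOR */
}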
