
Re: [Xen-devel] [PATCH] x86/HPET: mask interrupt while changing affinity



Hi Jan,

Could this change have an adverse effect on AMD systems?
With this patch, booting the dom0 kernel slowly grinds to a halt
(sometimes while trying to mount the rootfs, sometimes a little further on,
while bringing networking up).
I don't see any evident warnings or errors; reverting this commit makes the
system boot OK again.

(The system is an 890FX motherboard with an AMD Phenom II X6.)

--
Sander

Monday, March 18, 2013, 12:12:50 PM, you wrote:

> While being unable to reproduce the "No irq handler for vector ..."
> messages observed on other systems, the change done by 5dc3fd2 ('x86:
> extend diagnostics for "No irq handler for vector" messages') appears
> to point at the lack of masking - at least I can't see what else might
> be wrong with the HPET MSI code that could trigger these warnings.
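
[As an illustration, here is a minimal, self-contained C sketch of the race
the patch closes. The names (msi_msg, msi_mask, set_affinity_*) are
hypothetical stand-ins, not the actual Xen API. The idea: if the MSI message
is rewritten while the source is unmasked, a delivery can sample a
half-updated address/data pair and raise the old vector on the new CPU,
where no handler is registered - hence the "No irq handler for vector"
warnings.

    #include <stdio.h>

    struct msi_msg {
        unsigned int address;   /* encodes the target CPU */
        unsigned int data;      /* encodes the vector */
    };

    static int masked;
    static struct msi_msg live; /* what the device samples on delivery */

    static void msi_mask(void)   { masked = 1; }
    static void msi_unmask(void) { masked = 0; }

    /* Unsafe: a delivery between the two stores is routed with the new
     * address but the old data, i.e. the stale vector on the new CPU. */
    static void set_affinity_unmasked(struct msi_msg next)
    {
        live.address = next.address;
        /* window: delivery here sees new address + old vector */
        live.data = next.data;
    }

    /* Safe (the pattern the patch adopts): nothing can be delivered
     * while the source is masked, so no intermediate state is visible. */
    static void set_affinity_masked(struct msi_msg next)
    {
        msi_mask();
        live.address = next.address;
        live.data = next.data;
        msi_unmask();
    }

    int main(void)
    {
        struct msi_msg next = { 0xfee01000u, 0xd1u };
        set_affinity_masked(next);
        printf("address=%08x data=%02x masked=%d\n",
               live.address, live.data, masked);
        return 0;
    }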

> While at it, also adjust the message printed by aforementioned commit
> to not pointlessly insert spaces - we don't need aligned tabular output
> here.
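
[For reference, a tiny standalone C example of what that format change does:
"%-15s" left-justifies the string and pads it with trailing spaces to fill
15 columns, which only pays off in tabular output; plain "%s" prints the
string as-is. The type name below is purely illustrative.

    #include <stdio.h>

    int main(void)
    {
        /* pads "HPET-MSI" (8 chars) with 7 trailing spaces to 15 columns */
        printf("t=%-15s s=%08x\n", "HPET-MSI", 0x12u);
        /* no padding between the type name and the status field */
        printf("t=%s s=%08x\n", "HPET-MSI", 0x12u);
        return 0;
    }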

> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

> --- a/xen/arch/x86/hpet.c
> +++ b/xen/arch/x86/hpet.c
> @@ -466,7 +466,9 @@ static void set_channel_irq_affinity(con
>  
>      ASSERT(!local_irq_is_enabled());
>      spin_lock(&desc->lock);
> +    hpet_msi_mask(desc);
>      hpet_msi_set_affinity(desc, cpumask_of(ch->cpu));
> +    hpet_msi_unmask(desc);
>      spin_unlock(&desc->lock);
>  }
>  
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -826,7 +826,7 @@ void do_IRQ(struct cpu_user_regs *regs)
>                  if ( ~irq < nr_irqs && irq_desc_initialized(desc) )
>                  {
>                      spin_lock(&desc->lock);
> -                    printk("IRQ%d a=%04lx[%04lx,%04lx] v=%02x[%02x] t=%-15s s=%08x\n",
> +                    printk("IRQ%d a=%04lx[%04lx,%04lx] v=%02x[%02x] t=%s s=%08x\n",
>                             ~irq, *cpumask_bits(desc->affinity),
>                             *cpumask_bits(desc->arch.cpu_mask),
>                             *cpumask_bits(desc->arch.old_cpu_mask),




