
Re: [Xen-devel] How to display dom0 kernel printk on hvc0



On 08/07/2014 03:36 PM, Stefano Stabellini wrote:
> On Thu, 7 Aug 2014, manish jaggi wrote:
>> Thanks,
>> I managed to do something similar in the meantime. I am seeing a crash after
>> I do /etc/init.d/xencommons start:
>>
>> [<ffffffc00038fee4>] clear_bit+0x14/0x30
>> [<ffffffc0003d9ca4>] ack_dynirq+0x44/0x58
>> [<ffffffc0000e6a34>] handle_edge_irq+0x74/0x178
>> [<ffffffc0003dc0e8>] evtchn_fifo_handle_events+0x280/0x288
>> [<ffffffc0003d8f50>] __xen_evtchn_do_upcall+0x68/0xd0
>> [<ffffffc0003d8fc0>] xen_hvm_evtchn_do_upcall+0x8/0x18
>> [<ffffffc00009271c>] xen_arm_callback+0x4c/0x68
>> [<ffffffc0000e7560>] handle_percpu_devid_irq+0x88/0x120
>> [<ffffffc0000e38b4>] generic_handle_irq+0x24/0x40
>> [<ffffffc000084890>] handle_IRQ+0x40/0xa8
>> [<ffffffc000081348>] gic_handle_irq+0x50/0xa0
>>
>> I found that consume_one_event calls handle_irq_for_port, which gets IRQ=7 in
>> the crashing case.
>> What is IRQ 7 used for?
>> IRQ 1 is the UART, which I saw in cat /proc/interrupts.
> 
> What kernel version are you using?
> 
> It looks like the FIFO event channel is not properly initialized.
> You could try switching to the old style 2-level ABI:
> 
> diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
> index 84b4bfb..4a23e08 100644
> --- a/drivers/xen/events/events_fifo.c
> +++ b/drivers/xen/events/events_fifo.c
> @@ -428,6 +428,8 @@ int __init xen_evtchn_fifo_init(void)
>       int cpu = get_cpu();
>       int ret;
>  
> +     return -1;
> +
>       ret = evtchn_fifo_init_control_block(cpu);
>       if (ret < 0)
>               goto out;
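For context, the early return above simply makes xen_evtchn_fifo_init() fail;
the event channel core then falls back to the 2-level ABI. A rough sketch of
that fallback, paraphrasing the logic in drivers/xen/events/events_base.c for
kernels of this era (the exact body may differ):

	void __init xen_init_IRQ(void)
	{
		int ret = xen_evtchn_fifo_init();

		/* The FIFO ABI is unavailable (or was forced to fail,
		 * as in the patch above): fall back to the 2-level ABI. */
		if (ret < 0)
			xen_evtchn_2l_init();

		/* ... rest of event channel setup ... */
	}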

You don't need to recompile the kernel.

Adding xen.fifo_events=0 to the kernel command line will select the 2-level ABI.
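
Since the trace above shows gic_handle_irq, this looks like an ARM system,
where the dom0 kernel command line is commonly passed through the device tree
chosen node. A minimal sketch, assuming your existing bootargs are just
"console=hvc0" (keep whatever you already pass and just append the option):

	chosen {
		/* Append xen.fifo_events=0 to the existing dom0 bootargs;
		 * the console= value here is only a placeholder. */
		xen,dom0-bootargs = "console=hvc0 xen.fifo_events=0";
	};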

Regards,

-- 
Julien Grall
