
Re: [Xen-users] Debugging DomU


  • To: Julien Grall <julien.grall@xxxxxxxxxx>
  • From: "Chris (Christopher) Brand" <chris.brand@xxxxxxxxxxxx>
  • Date: Fri, 29 May 2015 23:07:27 +0000
  • Accept-language: en-US
  • Cc: xen-users <xen-users@xxxxxxxxxxxxx>, Ian Campbell <ian.campbell@xxxxxxxxxx>
  • Delivery-date: Fri, 29 May 2015 23:08:53 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>
  • Thread-topic: [Xen-users] Debugging DomU

Hi Julien,

>When you create a guest you only need to provide the kernel. The device tree 
>will be created by the toolstack.

That I didn't know. I was appending a dt blob to the kernel. Without that, it's 
definitely happier (see below).

>Sorry if I already asked it before. Can you summarize your status:
>       - Version of Xen used and modification you made

Built from a git checkout (cb34a7c8d741), with a couple of small hacks that we 
needed to get to the point of Dom0 running on our platform (there's a change to 
xen/drivers/char/ns16550.c to work around a UART bug, and one to 
xen/arch/arm/platforms/brcm.c to set dom0_gnttab_start and dom0_gnttab_end).

>       - Version of Linux DOM0 used
>       - Version of Linux DOMU used

Both are built from the same source tree, which is 3.14 plus some changes for 
our hardware that haven't yet made it upstream. I took the .config we use 
standalone, enabled the various Xen options, and rebuilt to create the Dom0 
kernel. I then tweaked the config slightly per your instructions (mostly to 
switch to CONFIG_ARCH_VIRT, IIRC). There are also a couple of patches that 
I've backported into that tree.

>       - Do you append a device tree to DOMU?

I was until today, yes.

>       - xl configuration file used to create the DOMU.

name = "VM2"
kernel = "/mnt/hd/vmlinuz-domu"
#extra = "earlyprintk=xen console=hvc0 debug rw init=/sbin/poweroff"
extra = "earlyprintk=xen console=hvc0 debug rw init=/bin/bash"
#extra = "console=hvc0 debug rw init=/bin/bash"
vcpus = 1
memory = 512

I have now seen DomU come up. More often, though, I see some "Division by zero 
in kernel" errors from clocksource_of_init(), then nothing after it reports the 
sched clock (but it reports it at "0 Hz"). Log extract below. It looks like the 
initial problem is that "Architected timer frequency not available". If I hack 
arch_timer_detect_rate() to set arch_timer_rate to the 27MHz value that Dom0 
gets, DomU comes up ok (Yay!).

Chris

(d1) 6NR_IRQS:16 nr_irqs:16 16
(d1) 4Architected timer frequency not available
(d1) Division by zero in kernel.
(d1) dCPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.14.13-1.0pre-g1ba194f963e0-dirty #107
(d1) [<c0015c04>] (unwind_backtrace) from [<c0012748>] (show_stack+0x10/0x14)
(d1) [<c0012748>] (show_stack) from [<c04e9548>] (dump_stack+0x80/0x90)
(d1) [<c04e9548>] (dump_stack) from [<c0258da4>] (Ldiv0_64+0x8/0x18)
(d1) [<c0258da4>] (Ldiv0_64) from [<c006edd4>] (clockevents_config.part.3+0x24/0x88)
(d1) [<c006edd4>] (clockevents_config.part.3) from [<c006ee58>] (clockevents_config_and_register+0x20/0x2c)
(d1) [<c006ee58>] (clockevents_config_and_register) from [<c03f260c>] (arch_timer_setup+0xb8/0x1a4)
(d1) [<c03f260c>] (arch_timer_setup) from [<c06c6510>] (arch_timer_init+0x1f4/0x25c)
(d1) [<c06c6510>] (arch_timer_init) from [<c06c60a8>] (clocksource_of_init+0x4c/0x8c)
(d1) [<c06c60a8>] (clocksource_of_init) from [<c06aa9e0>] (start_kernel+0x238/0x378)
(d1) [<c06aa9e0>] (start_kernel) from [<40008084>] (0x40008084)
(d1) 4------------[ cut here ]------------
(d1) 4WARNING: CPU: 0 PID: 0 at kernel/time/clockevents.c:44 cev_delta2ns.isra.1+0xe8/0x100()
(d1) dModules linked in:
(d1) dCPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.14.13-1.0pre-g1ba194f963e0-dirty #107
(d1) [<c0015c04>] (unwind_backtrace) from [<c0012748>] (show_stack+0x10/0x14)
(d1) [<c0012748>] (show_stack) from [<c04e9548>] (dump_stack+0x80/0x90)
(d1) [<c04e9548>] (dump_stack) from [<c002106c>] (warn_slowpath_common+0x6c/0x88)
(d1) [<c002106c>] (warn_slowpath_common) from [<c0021124>] (warn_slowpath_null+0x1c/0x24)
(d1) [<c0021124>] (warn_slowpath_null) from [<c006ed84>] (cev_delta2ns.isra.1+0xe8/0x100)
(d1) [<c006ed84>] (cev_delta2ns.isra.1) from [<c006ee14>] (clockevents_config.part.3+0x64/0x88)
(d1) [<c006ee14>] (clockevents_config.part.3) from [<c006ee58>] (clockevents_config_and_register+0x20/0x2c)
(d1) [<c006ee58>] (clockevents_config_and_register) from [<c03f260c>] (arch_timer_setup+0xb8/0x1a4)
(d1) [<c03f260c>] (arch_timer_setup) from [<c06c6510>] (arch_timer_init+0x1f4/0x25c)
(d1) [<c06c6510>] (arch_timer_init) from [<c06c60a8>] (clocksource_of_init+0x4c/0x8c)
(d1) [<c06c60a8>] (clocksource_of_init) from [<c06aa9e0>] (start_kernel+0x238/0x378)
(d1) [<c06aa9e0>] (start_kernel) from [<40008084>] (0x40008084)
(d1) 4---[ end trace 3406ff24bd97382e ]---
(d1) 6Architected cp15 timer(s) running at 0.00MHz (virt).
(d1) Division by zero in kernel.
(d1) dCPU: 0 PID: 0 Comm: swapper/0 Tainted: G        W    3.14.13-1.0pre-g1ba194f963e0-dirty #107
(d1) [<c0015c04>] (unwind_backtrace) from [<c0012748>] (show_stack+0x10/0x14)
(d1) [<c0012748>] (show_stack) from [<c04e9548>] (dump_stack+0x80/0x90)
(d1) [<c04e9548>] (dump_stack) from [<c0258da4>] (Ldiv0_64+0x8/0x18)
(d1) [<c0258da4>] (Ldiv0_64) from [<c006bfa8>] (__clocksource_updatefreq_scale+0x34/0x1a8)
(d1) [<c006bfa8>] (__clocksource_updatefreq_scale) from [<c006c130>] (__clocksource_register_scale+0x14/0xa4)
(d1) [<c006c130>] (__clocksource_register_scale) from [<c06c62c4>] (arch_timer_common_init+0x1dc/0x234)
(d1) [<c06c62c4>] (arch_timer_common_init) from [<c06c60a8>] (clocksource_of_init+0x4c/0x8c)
(d1) [<c06c60a8>] (clocksource_of_init) from [<c06aa9e0>] (start_kernel+0x238/0x378)
(d1) [<c06aa9e0>] (start_kernel) from [<40008084>] (0x40008084)
(d1) Division by zero in kernel.
(d1) dCPU: 0 PID: 0 Comm: swapper/0 Tainted: G        W    3.14.13-1.0pre-g1ba194f963e0-dirty #107
(d1) [<c0015c04>] (unwind_backtrace) from [<c0012748>] (show_stack+0x10/0x14)
(d1) [<c0012748>] (show_stack) from [<c04e9548>] (dump_stack+0x80/0x90)
(d1) [<c04e9548>] (dump_stack) from [<c0258da4>] (Ldiv0_64+0x8/0x18)
(d1) [<c0258da4>] (Ldiv0_64) from [<c006be48>] (clocks_calc_mult_shift+0xa4/0xdc)
(d1) [<c006be48>] (clocks_calc_mult_shift) from [<c006c01c>] (__clocksource_updatefreq_scale+0xa8/0x1a8)
(d1) [<c006c01c>] (__clocksource_updatefreq_scale) from [<c006c130>] (__clocksource_register_scale+0x14/0xa4)
(d1) [<c006c130>] (__clocksource_register_scale) from [<c06c62c4>] (arch_timer_common_init+0x1dc/0x234)
(d1) [<c06c62c4>] (arch_timer_common_init) from [<c06c60a8>] (clocksource_of_init+0x4c/0x8c)
(d1) [<c06c60a8>] (clocksource_of_init) from [<c06aa9e0>] (start_kernel+0x238/0x378)
(d1) [<c06aa9e0>] (start_kernel) from [<40008084>] (0x40008084)
(d1) Division by zero in kernel.
(d1) dCPU: 0 PID: 0 Comm: swapper/0 Tainted: G        W    3.14.13-1.0pre-g1ba194f963e0-dirty #107
(d1) [<c0015c04>] (unwind_backtrace) from [<c0012748>] (show_stack+0x10/0x14)
(d1) [<c0012748>] (show_stack) from [<c04e9548>] (dump_stack+0x80/0x90)
(d1) [<c04e9548>] (dump_stack) from [<c0258da4>] (Ldiv0_64+0x8/0x18)
(d1) [<c0258da4>] (Ldiv0_64) from [<c006be48>] (clocks_calc_mult_shift+0xa4/0xdc)
(d1) [<c006be48>] (clocks_calc_mult_shift) from [<c06b5394>] (sched_clock_register+0x64/0x284)
(d1) [<c06b5394>] (sched_clock_register) from [<c06c62f8>] (arch_timer_common_init+0x210/0x234)
(d1) [<c06c62f8>] (arch_timer_common_init) from [<c06c60a8>] (clocksource_of_init+0x4c/0x8c)
(d1) [<c06c60a8>] (clocksource_of_init) from [<c06aa9e0>] (start_kernel+0x238/0x378)
(d1) [<c06aa9e0>] (start_kernel) from [<40008084>] (0x40008084)
(d1) 6sched_clock: 56 bits at 0 Hz, resolution 0ns, wraps every 0ns


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

