
Re: [Xen-users] No DomU boot after upgrade to xen 4.0



Matej Zary wrote:
You can find a nice "howto" for 2.6.31 kernel compilation (including
.config files) in the second half of this page -
http://wiki.xensource.com/xenwiki/XenParavirtOps.

I would still run "make menuconfig" on the provided .config files;
IIRC some debug options are turned on, which can negatively impact
performance (and that could matter on production servers).

Thank you, Matej. With that howto I built a working kernel.
But why is TUN disabled in that config? I use a fairly standard config for my HVMs, which then always fails.
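
In case it helps anyone else, something along these lines should cover
both points (switching the debug options off and getting TUN back) -
this is only an untested sketch, and the debug option names below are
examples, not the actual contents of that .config:

  # see which common debug options the provided .config switches on
  grep -E 'CONFIG_(DEBUG_KERNEL|DEBUG_SLAB|DEBUG_SPINLOCK|PROVE_LOCKING|LOCKDEP)=y' .config

  # build TUN/TAP as a module (Device Drivers -> Network device support
  # -> Universal TUN/TAP device driver support) and drop the debug options
  ./scripts/config --module TUN --disable DEBUG_SLAB --disable PROVE_LOCKING
  make oldconfig

  # rebuild and install as usual
  make -j4 && make modules_install && make install

scripts/config ships in the kernel source tree, so this avoids walking
through menuconfig by hand.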

Another problem:

Sometimes I get this trace while booting:

[  257.653395] BUG: scheduling while atomic: xenwatch/28/0x00000002
[  257.791008] Modules linked in: [last unloaded: scsi_wait_scan]
[  257.926138] Pid: 28, comm: xenwatch Not tainted 2.6.31.13 #2
[  258.062008] Call Trace:
[  258.118138]  [<ffffffff81060a70>] __schedule_bug+0x5c/0x60
[  258.242138]  [<ffffffff8159c8af>] schedule+0xd1/0x8ec
[  258.359139]  [<ffffffff8120a347>] ? kvasprintf+0x45/0x6e
[  258.481140]  [<ffffffff8159e7d5>] ? _spin_unlock_irqrestore+0x34/0x3f
[  258.631139]  [<ffffffff8126bd80>] read_reply+0x9a/0x12f
[  258.749139]  [<ffffffff81080dfb>] ? autoremove_wake_function+0x0/0x38
[  258.899139]  [<ffffffff8126bfb8>] xs_talkv+0xc7/0x184
[  259.014140]  [<ffffffff8102ef25>] ? xen_force_evtchn_callback+0xd/0xf
[  259.162139]  [<ffffffff8126c19d>] xs_single+0x42/0x44
[  259.278138]  [<ffffffff8126c908>] xenbus_read+0x3d/0x54
[  259.404139]  [<ffffffff8126c9f5>] xenbus_gather+0xd6/0x166
[  259.531139]  [<ffffffff8126c8b7>] ? xenbus_printf+0xdd/0xf1
[  259.659139]  [<ffffffff8103034d>] ? kzalloc+0xf/0x11
[  259.772138]  [<ffffffff8126a8e5>] xenbus_read_driver_state+0x29/0x39
[  259.919138]  [<ffffffff81276d66>] pciback_attach+0x49/0x1b5
[  260.048138]  [<ffffffff8126ad25>] ? xenbus_switch_state+0x5d/0x97
[  260.187139]  [<ffffffff812773dd>] pciback_be_watch+0x251/0x263
[  260.321139]  [<ffffffff8159e7d5>] ? _spin_unlock_irqrestore+0x34/0x3f
[  260.469139]  [<ffffffff81205ea6>] ? __up_read+0x92/0x9c
[  260.593139]  [<ffffffff81084273>] ? up_read+0x9/0xb
[  260.706138]  [<ffffffff8126c5a3>] ? register_xenbus_watch+0xfd/0x108
[  260.852138]  [<ffffffff812778b8>] pciback_xenbus_probe+0x143/0x167
[  260.994139]  [<ffffffff8126d8d6>] xenbus_dev_probe+0x96/0x134
[  261.127008]  [<ffffffff812dccba>] driver_probe_device+0x97/0x13c
[  261.263139]  [<ffffffff812dcdda>] ? __device_attach+0x0/0x3c
[  261.395139]  [<ffffffff812dce0d>] __device_attach+0x33/0x3c
[  261.523138]  [<ffffffff812dc2c4>] bus_for_each_drv+0x51/0x88
[  261.654139]  [<ffffffff812dce98>] device_attach+0x5e/0x73
[  261.776138]  [<ffffffff812dc133>] bus_probe_device+0x1f/0x36
[  261.910139]  [<ffffffff812dac43>] device_add+0x3bd/0x546
[  262.031139]  [<ffffffff812026fa>] ? kobject_init+0x43/0x83
[  262.159138]  [<ffffffff812dade5>] device_register+0x19/0x1d
[  262.285008]  [<ffffffff8126d496>] xenbus_probe_node+0x126/0x1aa
[  262.422139]  [<ffffffff812dc5cb>] ? bus_for_each_dev+0x75/0x85
[  262.556138]  [<ffffffff8126d683>] xenbus_dev_changed+0x169/0x187
[  262.694139]  [<ffffffff8126db2d>] backend_changed+0x16/0x18
[  262.821139]  [<ffffffff8126bcb3>] xenwatch_thread+0x11a/0x14d
[  262.955139]  [<ffffffff81080dfb>] ? autoremove_wake_function+0x0/0x38
[  263.110139]  [<ffffffff8159e7d5>] ? _spin_unlock_irqrestore+0x34/0x3f
[  263.259139]  [<ffffffff8126bb99>] ? xenwatch_thread+0x0/0x14d
[  263.390138]  [<ffffffff810809e4>] kthread+0x8f/0x97
[  263.502138]  [<ffffffff81034d9a>] child_rip+0xa/0x20
[  263.617139]  [<ffffffff81033f27>] ? int_ret_from_sys_call+0x7/0x1b
[  263.760008]  [<ffffffff810346e1>] ? retint_restore_args+0x5/0x6
[  263.894138]  [<ffffffff81034d90>] ? child_rip+0x0/0x20

It is a known bug, but how can it be solved?
Best regards

Ralf

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

