
[Xen-users] Random XCP guest reboots.



Hello,

The issue is quite simple: after running scp from another machine to write data to the XCP guest, the guest starts rebooting.
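For reference, the kind of transfer that triggers it is just an ordinary multi-gigabyte copy into the guest; the hostname and file below are placeholders, not the real ones:

    # run from another machine on the LAN; "archguest" and the file are placeholders
    scp /tmp/testdata-4G.img root@archguest:/var/tmp/
    # the guest typically reboots partway through the copy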

The log of the control domain (dom0) clearly shows that the virtual machine received some kind of unwanted "force shutdown":

Mar 2 13:21:23 morgoth ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl list-br
Mar  2 13:25:47 morgoth kernel: device tap161.0 left promiscuous mode
Mar  2 13:25:47 morgoth BLKTAP-DAEMON[6309]: /local/domain/0/backend/tap/161/5632: got start/shutdown watch on /local/domain/0/backend/tap/161/5632/tapdisk-request
Mar  2 13:25:47 morgoth BLKTAP-DAEMON[6309]: /local/domain/0/backend/tap/161/5632: got tapdisk-request 'shutdown-force', shutdown state down (force): 0
Mar  2 13:25:47 morgoth BLKTAP-DAEMON[6309]: driving channel state running, vbd down, unpaused to closed (11)
Mar  2 13:25:47 morgoth BLKTAP-DAEMON[6309]: /local/domain/0/backend/tap/161/5632: sending 'force shutdown' message to 23:418, state running
Mar  2 13:25:47 morgoth TAPDISK[12548]: received 'force shutdown' message (uuid = 418)
Mar  2 13:25:47 morgoth TAPDISK[12548]: /dev/VG_XenStorage-60a2b615-473b-9b5c-c72f-3520844e0f2f/VHD-f1678b62-d5ba-4772-806c-ef4ca579becb: state: 0x00000000, new: 0x00, pending: 0x00, failed: 0x00, completed: 0x00
Mar  2 13:25:47 morgoth TAPDISK[12548]: last activity: 1299065394.699241, errors: 0x0000, retries: 0x0000, received: 0x0000006b, returned: 0x0000006b, kicked: 0x0000006b, kicks in: 0x00000044, out: 0x00000076
Mar  2 13:25:47 morgoth TAPDISK[12548]: gaps written/skipped: 0/0
Mar  2 13:25:47 morgoth BLKTAP-DAEMON[6309]: /local/domain/0/backend/tap/161/5632: handled start/shutdown watch on /local/domain/0/backend/tap/161/5632/tapdisk-request
Mar  2 13:25:47 morgoth TAPDISK[12548]: /dev/VG_XenStorage-60a2b615-473b-9b5c-c72f-3520844e0f2f/VHD-f1678b62-d5ba-4772-806c-ef4ca579becb: b: 485, a: 2, f: 0, n: 16656
Mar  2 13:25:47 morgoth TAPDISK[12548]: closed image /dev/VG_XenStorage-60a2b615-473b-9b5c-c72f-3520844e0f2f/VHD-f1678b62-d5ba-4772-806c-ef4ca579becb (0 users, state: 0x00000000, type: 4)
Mar  2 13:25:47 morgoth TAPDISK[12548]: sending 'close response' message (uuid = 418)
Mar  2 13:25:47 morgoth BLKTAP-DAEMON[6309]: got 'close response' message from 23:418
Mar  2 13:25:47 morgoth BLKTAP-DAEMON[6309]: driving channel state closed, vbd down, unpaused to closed (11)
Mar  2 13:25:47 morgoth TAPDISK[12548]: tapdisk-log: closing after 0 errors
Mar  2 13:25:47 morgoth kernel: tap_blkif_schedule[12555]: exiting
Mar  2 13:25:47 morgoth xapi: [error|morgoth|144 xal_listen|VM (domid: 161) device_event = device shutdown {tap,5632} D:2c22dbe5c04e|event] device_event could not be processed because VM record not in database
Mar  2 13:25:47 morgoth BLKTAP-DAEMON[6309]: got remove watch on /local/domain/0/backend/tap/161/5632
Mar  2 13:25:47 morgoth BLKTAP-DAEMON[6309]: /local/domain/0/backend/tap/161/5632: marking channel dead, uuid 418
Mar  2 13:25:47 morgoth BLKTAP-DAEMON[6309]: driving channel state closed, vbd down, dead to closed (11)
Mar  2 13:25:47 morgoth BLKTAP-DAEMON[6309]: destroying channel 23:418, state closed

So what are the conditions for this kind of behavior? At first I thought the XCP host was running out of memory (only about 750 MB was free), so I freed an additional 4 GB by shutting down an unimportant guest.
But the issue still occurs whenever large disk-write operations are performed.
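For what it's worth, this is roughly how I checked dom0 memory and shut the other guest down; the guest name is a placeholder:

    # in dom0: check free memory
    free -m
    # list guests, then cleanly shut down the unimportant one
    xe vm-list params=name-label,power-state
    xe vm-shutdown vm=unimportant-guest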

We're talking about an HVM Arch Linux guest installed on XCP 0.5.

Does anybody have any thoughts?

P.S. This issue looks similar, but as I said, insufficient memory has been ruled out: http://serverfault.com/questions/209686/finding-the-reason-of-a-force-shutdown-of-a-vm

Thanks in advance.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

