
[Xen-devel] XCP 0.5 killing umanaged domain (BUG?)


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: George Shuklin <george.shuklin@xxxxxxxxx>
  • Date: Tue, 26 Oct 2010 19:33:40 +0400
  • Delivery-date: Tue, 26 Oct 2010 08:35:06 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

1) The message 'killing umanaged domain' is misspelled (it should read
'killing UNmanaged domain').

2) More serious: I found this message after a virtual machine created
earlier disappeared. I restarted the XAPI toolstack and the machine was
killed (not shut down, but completely removed). The log below is from
the host where the domain was resident.

The line in the log file I suspect:

[20101026T14:50:09.639Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|dbsync] killing umanaged domain:
4e118da2-8485-23c9-de75-a6c6862df33c
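For anyone trying to check whether they are hit by the same problem, here is a
small sketch (mine, not part of xapi) that scans a xensource.log excerpt for
these "killing umanaged domain" events and pulls out the timestamp and VM UUID
of each one. It assumes the log line format shown above:

```python
import re

# Match lines like:
# [20101026T14:50:09.639Z|debug|...|dbsync] killing umanaged domain: <uuid>
# Note: "umanaged" is the actual (misspelled) string xapi logs.
KILL_RE = re.compile(
    r"^\[(?P<ts>[^|\]]+)\|.*\bdbsync\] killing umanaged domain:\s*"
    r"(?P<uuid>[0-9a-f-]{36})",
    re.MULTILINE,
)

def killed_domains(log_text):
    """Return (timestamp, uuid) pairs for every kill event in log_text."""
    return [(m.group("ts"), m.group("uuid")) for m in KILL_RE.finditer(log_text)]

sample = ("[20101026T14:50:09.639Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env) "
          "D:634f731b810a|dbsync] killing umanaged domain: "
          "4e118da2-8485-23c9-de75-a6c6862df33c")
print(killed_domains(sample))
# [('20101026T14:50:09.639Z', '4e118da2-8485-23c9-de75-a6c6862df33c')]
```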


Full log part:

[20101026T14:50:09.518Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|dbsync] updating VM states
[20101026T14:50:09.523Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|dbsync] Updating the list of VMs
[20101026T14:50:09.639Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|dbsync] killing umanaged domain:
4e118da2-8485-23c9-de75-a6c6862df33c
[20101026T14:50:09.640Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Domain.destroy: all known devices = [ frontend
(domid=369 | kind=vbd | devid=51712); backend (domid=0 | kind=tap |
devid=51712); frontend (domid=369 | kind=vif | devid=0); backend
(domid=0 | kind=vif | devid=0) ]
[20101026T14:50:09.640Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Domain.destroy calling Xc.domain_destroy (domid
369)
[20101026T14:50:09.681Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] No qemu-dm pid in xenstore; assuming this domain
was PV
[20101026T14:50:09.681Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Device.Vbd.hard_shutdown frontend (domid=369 |
kind=vbd | devid=51712); backend (domid=0 | kind=tap | devid=51712)
[20101026T14:50:09.681Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Device.Vbd.request_shutdown frontend (domid=369 |
kind=vbd | devid=51712); backend (domid=0 | kind=tap | devid=51712)
force
[20101026T14:50:09.681Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops]
xenstore-write /local/domain/0/backend/tap/369/51712/shutdown-request =
force
[20101026T14:50:09.683Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] watch: watching xenstore paths:
[ /local/domain/0/backend/tap/369/51712/shutdown-done ] with timeout
1200.000000 seconds
[20101026T14:50:09.690Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Device.rm_device_state frontend (domid=369 |
kind=vbd | devid=51712); backend (domid=0 | kind=tap | devid=51712)
[20101026T14:50:09.690Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] xenstore-rm /local/domain/369/device/vbd/51712
[20101026T14:50:09.691Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] xenstore-rm /local/domain/0/backend/tap/369/51712
[20101026T14:50:09.692Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] xenstore-rm /local/domain/0/error/backend/tap/369
[20101026T14:50:09.692Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops]
xenstore-rm /local/domain/369/error/device/vbd/51712
[20101026T14:50:09.693Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Device.Vbd.hard_shutdown complete
[20101026T14:50:09.693Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Device.Vif.hard_shutdown frontend (domid=369 |
kind=vif | devid=0); backend (domid=0 | kind=vif | devid=0)
[20101026T14:50:09.693Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops]
xenstore-write /local/domain/0/backend/vif/369/0/online = 0
[20101026T14:50:09.693Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Device.Vif.hard_shutdown about to blow away
frontend
[20101026T14:50:09.693Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] xenstore-rm /local/domain/369/device/vif/0
[20101026T14:50:09.694Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] watch: watching xenstore paths:
[ /xapi/369/hotplug/vif/0/hotplug ] with timeout 1200.000000 seconds
[20101026T14:50:09.827Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Device.Vif.hard_shutdown about to blow away
backend and error paths
[20101026T14:50:09.827Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Device.rm_device_state frontend (domid=369 |
kind=vif | devid=0); backend (domid=0 | kind=vif | devid=0)
[20101026T14:50:09.827Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] xenstore-rm /local/domain/369/device/vif/0
[20101026T14:50:09.827Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] xenstore-rm /local/domain/0/backend/vif/369/0
[20101026T14:50:09.827Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] xenstore-rm /local/domain/0/error/backend/vif/369
[20101026T14:50:09.827Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] xenstore-rm /local/domain/369/error/device/vif/0
[20101026T14:50:09.828Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|hotplug] Hotplug.release: frontend (domid=369 | kind=vbd
| devid=51712); backend (domid=0 | kind=tap | devid=51712)
[20101026T14:50:09.828Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|hotplug] Hotplug.wait_for_unplug: frontend (domid=369 |
kind=vbd | devid=51712); backend (domid=0 | kind=tap | devid=51712)
[20101026T14:50:09.828Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] watch: watching xenstore paths:
[ /xapi/369/hotplug/tap/51712/hotplug ] with timeout 1200.000000 seconds
[20101026T14:50:09.828Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|hotplug] Synchronised ok with hotplug script: frontend
(domid=369 | kind=vbd | devid=51712); backend (domid=0 | kind=tap |
devid=51712)
[20101026T14:50:09.828Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|hotplug] Hotplug.release: frontend (domid=369 | kind=vif
| devid=0); backend (domid=0 | kind=vif | devid=0)
[20101026T14:50:09.828Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|hotplug] Hotplug.wait_for_unplug: frontend (domid=369 |
kind=vif | devid=0); backend (domid=0 | kind=vif | devid=0)
[20101026T14:50:09.828Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] watch: watching xenstore paths:
[ /xapi/369/hotplug/vif/0/hotplug ] with timeout 1200.000000 seconds
[20101026T14:50:09.829Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|hotplug] Synchronised ok with hotplug script: frontend
(domid=369 | kind=vif | devid=0); backend (domid=0 | kind=vif | devid=0)
[20101026T14:50:09.829Z| warn|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|hotplug] Warning, deleting 'vif' entry
from /xapi/369/hotplug/vif/0
[20101026T14:50:09.829Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Domain.destroy: rm /local/domain/369
[20101026T14:50:09.829Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Domain.destroy: deleting backend paths
[20101026T14:50:09.830Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Xc.domain_getinfo 369 threw: getinfo failed:
domain 369: hypercall 36 fail: 11: Resource temporarily unavailable (ret
-1) -- assuming domain nolonger exists
[20101026T14:50:09.830Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|xenops] Xc.domain_getinfo 369 threw: getinfo failed:
domain 369: getinfo failed: domain 369: hypercall 36 fail: 11: Resource
temporarily unavailable (ret -1) -- assuming domain nolonger exists
[20101026T14:50:09.977Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|dbsync] syncing devices and registering vm for
monitoring: ad68556d-3914-4f7e-9c92-78e969657706
[20101026T14:50:09.977Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|locking_helpers] Acquired lock on VM
OpaqueRef:9a5fa24f-b674-e7a4-f732-5893788d6bb1 with token 0
[20101026T14:50:09.977Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|locking_helpers] Released lock on VM
OpaqueRef:9a5fa24f-b674-e7a4-f732-5893788d6bb1 with token 0
[20101026T14:50:10.004Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|monitor_rrds] Loading RRD from local filesystem for
object uuid=ad68556d-3914-4f7e-9c92-78e969657706
[20101026T14:50:10.029Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|monitor_rrds] RRD loaded from local filesystem for object
uuid=ad68556d-3914-4f7e-9c92-78e969657706
[20101026T14:50:10.067Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|dbsync] syncing devices and registering vm for
monitoring: 4ada4125-6173-cd33-6307-e879b8a4e866
[20101026T14:50:10.067Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|locking_helpers] Acquired lock on VM
OpaqueRef:68afb203-224f-f5b8-f756-c12b399c88ce with token 1
[20101026T14:50:10.075Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|event] VM OpaqueRef:68afb203-224f-f5b8-f756-c12b399c88ce
(domid: 85) Resync.vbd OpaqueRef:4cdccb5f-06b7-3308-71e2-fe4a59d0efe1
[20101026T14:50:10.100Z|debug|cvt-xh4|0 thread_zero|dbsync (update_env)
D:634f731b810a|sm] SM lvmoiscsi sr_content_type
sr=OpaqueRef:1d5d61a2-06be-136b-6526-a265f66750cd




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

