[Xen-users] live migration fails with device still active
I am trying to do a test live migration between my two Domain-0s running RHEL 5.3, but it fails with the error that the disk device is still active. The disk in this case is an iSCSI disk which both dom0s can see and access without any problem; udev is used to make the device name (/dev/iscsi) identical on the two dom0s. Below is the relevant part of /var/log/xen/xend.log, followed by a sketch of the disk and udev configuration. Any advice is greatly appreciated.

Thanks,
Vu

[2009-04-22 10:52:00 xend.XendDomainInfo 4274] DEBUG (XendDomainInfo:791) Storing domain details: {'console/ring-ref': '2211136', 'console/port': '2', 'name': 'migrating-rh401', 'console/limit': '1048576', 'vm': '/vm/d84f3f23-4f31-1322-cb5a-67ba2e6cdfb7', 'domid': '4', 'cpu/0/availability': 'online', 'memory/target': '1048576', 'store/ring-ref': '2211137', 'store/port': '1'}
[2009-04-22 10:52:00 xend 4274] DEBUG (blkif:24) exception looking up device number for xvda: [Errno 2] No such file or directory: '/dev/xvda'
[2009-04-22 10:52:00 xend 4274] DEBUG (DevController:110) DevController: writing {'virtual-device': '51712', 'device-type': 'disk', 'protocol': 'x86_32-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vbd/4/51712'} to /local/domain/4/device/vbd/51712.
[2009-04-22 10:52:00 xend 4274] DEBUG (DevController:112) DevController: writing {'domain': 'migrating-rh401', 'frontend': '/local/domain/4/device/vbd/51712', 'dev': 'xvda', 'state': '1', 'params': '/dev/iscsi1', 'mode': 'w', 'online': '1', 'frontend-id': '4', 'type': 'phy'} to /local/domain/0/backend/vbd/4/51712.
[2009-04-22 10:52:00 xend 4274] DEBUG (DevController:110) DevController: writing {'mac': '00:16:3e:3a:a9:57', 'handle': '0', 'protocol': 'x86_32-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vif/4/0'} to /local/domain/4/device/vif/0.
[2009-04-22 10:52:00 xend 4274] DEBUG (DevController:112) DevController: writing {'bridge': 'xenbr0', 'domain': 'migrating-rh401', 'handle': '0', 'script': '/etc/xen/scripts/vif-bridge', 'state': '1', 'frontend': '/local/domain/4/device/vif/0', 'mac': '00:16:3e:3a:a9:57', 'online': '1', 'frontend-id': '4'} to /local/domain/0/backend/vif/4/0.
[2009-04-22 10:52:00 xend.XendDomainInfo 4274] DEBUG (XendDomainInfo:1623) XendDomainInfo.resumeDomain: devices created
[2009-04-22 10:52:00 xend.XendDomainInfo 4274] ERROR (XendDomainInfo:1628) XendDomainInfo.resume: xc.domain_resume failed on domain 4.
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 1625, in resumeDomain
    xc.domain_resume(self.domid, fast)
Error: (1, 'Internal error', "Couldn't map p2m_frame_list_list")
[2009-04-22 10:52:00 xend 4274] DEBUG (XendCheckpoint:136) XendCheckpoint.save: resumeDomain
[2009-04-22 10:52:00 xend.XendDomainInfo 4274] INFO (XendDomainInfo:1719) Dev 51712 still active, looping...
[2009-04-22 10:52:00 xend.XendDomainInfo 4274] INFO (XendDomainInfo:1719) Dev 51712 still active, looping...
[2009-04-22 10:52:01 xend.XendDomainInfo 4274] INFO (XendDomainInfo:1719) Dev 51712 still active, looping...
[2009-04-22 10:52:01 xend.XendDomainInfo 4274] INFO (XendDomainInfo:1719) Dev 51712 still active, looping...
[2009-04-22 10:52:01 xend.XendDomainInfo 4274] INFO (XendDomainInfo:1719) Dev 51712 still active, looping...
[2009-04-22 10:52:01 xend.XendDomainInfo 4274] INFO (XendDomainInfo:1719) Dev 51712 still active, looping...
[2009-04-22 10:52:01 xend.XendDomainInfo 4274] INFO (XendDomainInfo:1719) Dev 51712 still active, looping...
[2009-04-22 10:52:01 xend.XendDomainInfo 4274] INFO (XendDomainInfo:1719) Dev 51712 still active, looping...
[2009-04-22 10:52:01 xend.XendDomainInfo 4274] INFO (XendDomainInfo:1719) Dev 51712 still active, looping...
[2009-04-22 10:52:01 xend.XendDomainInfo 4274] INFO (XendDomainInfo:1719) Dev 51712 still active, looping...
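For reference, the guest's disk mapping is the usual phy: line, using the same paths and mode that appear in the log above:

    disk = [ 'phy:/dev/iscsi1,xvda,w' ]

and the udev rule on each dom0 is something like the following (the rule file name and the SCSI WWID here are placeholders, not the actual values):

    # /etc/udev/rules.d/55-iscsi.rules -- give the iSCSI LUN the same stable name on both dom0s
    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/%k", RESULT=="<WWID-of-the-LUN>", SYMLINK+="iscsi1"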