
Re: [Xen-users] Xen 3.4.1-rc10: Cannot restore/migrate 32-bit HVM domU (W2K3 Server) on 64-bit dom0



On Fri, Jul 31, 2009 at 10:19:04AM +0300, Pasi Kärkkäinen wrote:
> On Thu, Jul 30, 2009 at 08:00:21PM -0400, Joshua West wrote:
> > Hey all,
> > 
> > I've been attempting to get 'xm restore' (and thus 'xm migrate') to work
> > for a 32-bit HVM domU on a 64-bit dom0 running Xen 3.4.1-rc10.  Has
> > anyone else been able to do so?  I can boot the VM and work within it
> > just fine.  The 'xm save' command also functions properly; the issue is
> > only with 'restore' and therefore 'migrate'.
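> >
> > For reference, the sequence I'm running boils down to something like the
> > following (the save-file path and destination host are just placeholders,
> > not my actual values):
> >
> >   # save the running HVM guest to a file, then restore it on the same host
> >   xm save winxen /var/lib/xen/save/winxen.chk
> >   xm restore /var/lib/xen/save/winxen.chk
> >
> >   # live migration to a second 64-bit dom0 fails the same way, since it
> >   # goes through the same save/restore code
> >   xm migrate --live winxen other-dom0.example.org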
> > 
> 
> What dom0 kernel are you running?
> 
> If it is something other than linux-2.6.18-xen.hg, please try with that instead.
> 

Also, if you can reproduce this problem while running linux-2.6.18-xen in
dom0, please send an email to xen-devel about it. Keir wants to release the
final Xen 3.4.1 early next week, and this sounds like something that would be
good to check before the release.
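
In case it helps, the linux-2.6.18-xen.hg tree can be fetched with Mercurial
and built much like any other kernel tree. Roughly (the repository URL is from
memory, and you will want to start from your own dom0 .config):

  # fetch the 2.6.18 Xen dom0 kernel tree (URL from memory)
  hg clone http://xenbits.xensource.com/linux-2.6.18-xen.hg
  cd linux-2.6.18-xen.hg
  cp /boot/config-$(uname -r) .config   # start from the current dom0 config
  make oldconfig
  make -j4 && make modules_install && make install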

If this problem happens when running, for example, the Debian lenny
2.6.26-xen kernel in dom0, or some other 2.6.27, 2.6.29 or 2.6.30 dom0 kernel,
then it _could_ be a bug in those kernels rather than in Xen.
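
If you're not sure what the dom0 is actually booted with, the quickest checks
are something like this (field names as printed by a stock xend):

  uname -r                                  # running dom0 kernel version
  xm info | grep -E 'release|xen_version'   # kernel release and Xen version as xend sees them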

-- Pasi

> 
> > When I watch the system via "xm top" during the restoration process, I
> > do notice the memory allocation for the VM increase all the way to about
> > 1024MB.  Suddenly, the amount of memory allocated to the VM decreases by
> > a bit, and then finally the VM disappears.
> > 
> > It may be of interest that I don't have issues
> > saving/restoring/migrating 32-bit PV domU's on this same set of 64-bit
> > dom0's.  This seems to be an issue only with HVM domU's.
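> >
> > For what it's worth, since the guest boots fine, 32-bit HVM support itself
> > is clearly there; the capabilities the hypervisor advertises can be
> > double-checked via the xen_caps line of 'xm info':
> >
> >   # a 64-bit hypervisor able to run 32-bit HVM guests should list
> >   # hvm-3.0-x86_32 (and hvm-3.0-x86_32p) here
> >   xm info | grep xen_caps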
> > 
> > The following is taken from /var/log/xen/xend.log and demonstrates the
> > failure:
> > 
> > [2009-07-30 19:48:18 4839] INFO (image:745) Need to create platform
> > device.[domid:37]
> > [2009-07-30 19:48:18 4839] DEBUG (XendCheckpoint:261)
> > restore:shadow=0x9, _static_max=0x40000000, _static_min=0x0,
> > [2009-07-30 19:48:18 4839] DEBUG (balloon:166) Balloon: 31589116 KiB
> > free; need 1061888; done.
> > [2009-07-30 19:48:18 4839] DEBUG (XendCheckpoint:278) [xc_restore]:
> > /usr/lib64/xen/bin/xc_restore 4 37 2 3 1 1 1
> > [2009-07-30 19:48:18 4839] INFO (XendCheckpoint:417) xc_domain_restore
> > start: p2m_size = 100000
> > [2009-07-30 19:48:18 4839] INFO (XendCheckpoint:417) Reloading memory
> > pages:   0%
> > [2009-07-30 19:48:27 4839] INFO (XendCheckpoint:417) Failed allocation
> > for dom 37: 1024 extents of order 0
> > [2009-07-30 19:48:27 4839] INFO (XendCheckpoint:417) ERROR Internal
> > error: Failed to allocate memory for batch.!
> > [2009-07-30 19:48:27 4839] INFO (XendCheckpoint:417)
> > [2009-07-30 19:48:27 4839] INFO (XendCheckpoint:417) Restore exit with rc=1
> > [2009-07-30 19:48:27 4839] DEBUG (XendDomainInfo:2724)
> > XendDomainInfo.destroy: domid=37
> > [2009-07-30 19:48:27 4839] ERROR (XendDomainInfo:2738)
> > XendDomainInfo.destroy: domain destruction failed.
> > Traceback (most recent call last):
> >   File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py",
> > line 2731, in destroy
> >     xc.domain_pause(self.domid)
> > Error: (3, 'No such process')
> > [2009-07-30 19:48:27 4839] DEBUG (XendDomainInfo:2204) No device model
> > [2009-07-30 19:48:27 4839] DEBUG (XendDomainInfo:2206) Releasing devices
> > [2009-07-30 19:48:27 4839] DEBUG (XendDomainInfo:2219) Removing vif/0
> > [2009-07-30 19:48:27 4839] DEBUG (XendDomainInfo:1134)
> > XendDomainInfo.destroyDevice: deviceClass = vif, device = vif/0
> > [2009-07-30 19:48:27 4839] DEBUG (XendDomainInfo:2219) Removing vbd/768
> > [2009-07-30 19:48:27 4839] DEBUG (XendDomainInfo:1134)
> > XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/768
> > [2009-07-30 19:48:27 4839] DEBUG (XendDomainInfo:2219) Removing vfb/0
> > [2009-07-30 19:48:27 4839] DEBUG (XendDomainInfo:1134)
> > XendDomainInfo.destroyDevice: deviceClass = vfb, device = vfb/0
> > [2009-07-30 19:48:27 4839] DEBUG (XendDomainInfo:2219) Removing console/0
> > [2009-07-30 19:48:27 4839] DEBUG (XendDomainInfo:1134)
> > XendDomainInfo.destroyDevice: deviceClass = console, device = console/0
> > [2009-07-30 19:48:27 4839] ERROR (XendDomain:1149) Restore failed
> > Traceback (most recent call last):
> >   File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomain.py", line
> > 1147, in domain_restore_fd
> >     return XendCheckpoint.restore(self, fd, paused=paused,
> > relocating=relocating)
> >   File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py",
> > line 282, in restore
> >     forkHelper(cmd, fd, handler.handler, True)
> >   File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py",
> > line 405, in forkHelper
> >     raise XendError("%s failed" % string.join(cmd))
> > XendError: /usr/lib64/xen/bin/xc_restore 4 37 2 3 1 1 1 failed
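> >
> > The "Failed allocation" message above presumably has a counterpart in the
> > hypervisor log; these are the standard commands I'd use to look for it and
> > to double-check free memory at restore time:
> >
> >   xm dmesg | tail -50          # hypervisor messages around the failed allocation
> >   xm info | grep free_memory   # memory (MiB) still available for new domains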
> > 
> > And here is the xmconfig file:
> > 
> > #---------------------------------------------------------------#
> > import os, re
> > arch_libdir = 'lib'
> > arch = os.uname()[4]
> > if os.uname()[0] == 'Linux' and re.search('64', arch):
> >     arch_libdir = 'lib64'
> > kernel = "/usr/lib/xen/boot/hvmloader"
> > builder='hvm'
> > memory = 1024
> > name = "winxen"
> > vcpus=1
> > vif = [ 'type=ioemu, bridge=xenbr100, mac=aa:bb:cc:00:00:99' ]
> > disk = [ 'phy:/dev/drbd/by-res/vm_winxen,hda,w' ]
> > device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
> > boot="c"
> > sdl=0
> > opengl=1
> > vnc=1
> > vncpasswd='...'
> > stdvga=1
> > monitor=1
> > usbdevice='tablet'
> > #---------------------------------------------------------------#
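> >
> > The domain gets started from that file in the usual way (the path below is
> > just where I happen to keep the config):
> >
> >   xm create /etc/xen/winxen
> >   xm list winxen               # confirm it is up before trying save/migrate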
> > 
> > Another thing of interest I noticed in the logs, this time during the
> > "xm save" process:
> > 
> > [2009-07-30 19:47:26 4839] DEBUG (XendCheckpoint:110) [xc_save]:
> > /usr/lib64/xen/bin/xc_save 56 36 0 0 4
> > [2009-07-30 19:47:26 4839] DEBUG (XendCheckpoint:388) suspend
> > [2009-07-30 19:47:26 4839] DEBUG (XendCheckpoint:113) In
> > saveInputHandler suspend
> > [2009-07-30 19:47:26 4839] DEBUG (XendCheckpoint:115) Suspending 36 ...
> > [2009-07-30 19:47:26 4839] DEBUG (XendDomainInfo:511)
> > XendDomainInfo.shutdown(suspend)
> > [2009-07-30 19:47:26 4839] INFO (XendCheckpoint:417) xc_save: failed to
> > get the suspend evtchn port
> > [2009-07-30 19:47:26 4839] INFO (XendCheckpoint:417)
> > [2009-07-30 19:47:26 4839] DEBUG (XendDomainInfo:1709)
> > XendDomainInfo.handleShutdownWatch
> > [2009-07-30 19:47:27 4839] INFO (XendDomainInfo:1895) Domain has
> > shutdown: name=migrating-winxen id=36 reason=suspend.
> > [2009-07-30 19:47:27 4839] INFO (XendCheckpoint:121) Domain 36 suspended.
> > [2009-07-30 19:47:27 4839] INFO (image:479) signalDeviceModel:restore dm
> > state to running
> > [2009-07-30 19:47:27 4839] DEBUG (XendCheckpoint:130) Written done
> >  1: sent 266240, skipped 0, delta 8484ms, dom0 46%, target 0%, sent
> > 1028Mb/s, dirtied 0Mb/s 0 pages
> > [2009-07-30 19:47:36 4839] INFO (XendCheckpoint:417) Total pages sent=
> > 266240 (0.25x)
> > [2009-07-30 19:47:36 4839] INFO (XendCheckpoint:417) (of which 0 were
> > fixups)
> > [2009-07-30 19:47:36 4839] INFO (XendCheckpoint:417) All memory is saved
> > [2009-07-30 19:47:36 4839] INFO (XendCheckpoint:417) Save exit rc=0
> > 
> > Not sure if that xc_save error message has anything to do with this...
> > 
> > If there is any additional information you need, such as how I built Xen
> > or even the 'xm save' copy of the VM itself, just let me know and I'll
> > make it available.
> > 
> > Thanks.
> > 
> > -- 
> > Joshua West
> > Senior Systems Engineer
> > Brandeis University
> > http://www.brandeis.edu
> > 

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

