
Re: [Xen-users] Live checkpointing not working in 3.4.x?



On Thu, Mar 04, 2010 at 08:05:24AM -0600, Tom Verbiscer wrote:
> Normal 'xm save' and 'xm restore' work just fine.  My PV guest kernel is:
>

Ok.
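(For reference, the plain save/restore cycle being confirmed here is roughly the following sketch. The domain name "guest1" and the image path are placeholders, and DRY_RUN=1, the default, only echoes the commands, since xm needs a live Xen dom0 and root privileges.)

```shell
#!/bin/sh
# Non-live checkpoint: 'xm save' writes the domain's state to an image file
# and destroys the domain; 'xm restore' recreates it from that image.
# DRY_RUN=1 (the default here) only echoes each command instead of running
# it, because xm requires a live Xen hypervisor. "guest1" is a placeholder.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run xm save guest1 /tmp/guest1.save      # suspend guest1 and write its state
run xm restore /tmp/guest1.save          # recreate guest1 from the saved image
```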

> [root@ovm-pv-01 ~]# uname -a
> Linux ovm-pv-01.example.com 2.6.18-164.el5xen #1 SMP Thu Sep 3 04:41:04  
> EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
>

This kernel should be OK. You should update to the latest hotfix release, though
(-164.something).

Does it work with the default Xen 3.1.2 that comes with RHEL5? 
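(When switching back to the distro Xen, 'xm info' on the host reports the running hypervisor version; parsing it is an easy sanity check before retesting. The snippet below feeds a captured sample instead of live output, since xm only runs on a real dom0 -- there you would pipe 'xm info' directly.)

```shell
#!/bin/sh
# Extract the hypervisor version from 'xm info' output, which reports it as
# separate xen_major / xen_minor / xen_extra fields.
xen_version() {
    awk '/^xen_major/ {maj=$3} /^xen_minor/ {min=$3} /^xen_extra/ {ext=$3}
         END {print maj "." min ext}'
}

# Sample input for illustration; on a real host: xm info | xen_version
printf 'xen_major              : 3\nxen_minor              : 1\nxen_extra              : .2\n' | xen_version
```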

-- Pasi

> Thanks,
> Tom
>
>
> Pasi Kärkkäinen wrote:
>> On Thu, Mar 04, 2010 at 12:12:56AM -0600, Tom Verbiscer wrote:
>>   
>>> I've been banging my head against a wall for a couple of days now.
>>> Does anyone know if live checkpointing ('xm save -c') is currently
>>> working in 3.4.x?  I've now tried with 3.4.0 on OracleVM, 3.4.1 on
>>> CentOS 5.4, and 3.4.2 on OpenSolaris.  Each platform gives me the
>>> same results.  It seems like the suspend works but does not release
>>> the devices, so when the resume runs, it freaks out because the
>>> devices are already attached.  I don't know enough about Xen to know
>>> whether the devices are supposed to remain attached (because it
>>> doesn't destroy the domain) or not.  Every time I try to live
>>> checkpoint, the VM winds up suspended and the only way to bring it
>>> back to life is to run 'xm destroy' on it and then 'xm resume'.
>>> I'll be happy to provide more logs if I'm leaving something out.
>>> The following is on an OracleVM hypervisor (yes, OracleVM doesn't
>>> support checkpointing, but the results are the same with vanilla
>>> Xen).  It also doesn't matter if I use a file-backed device for the
>>> disk, a physical device, or a file on an NFS share -- same result.
>>>
>>>     
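(For the archives: the sequence described above -- live checkpoint, then recovery when the guest stays stuck suspended -- is roughly the sketch below. "guest1" and the checkpoint path are placeholders; the poster used 'xm destroy' followed by 'xm resume', and restoring from the checkpoint file is shown here as a generic equivalent. DRY_RUN=1, the default, only echoes the commands, since xm needs a real Xen host.)

```shell
#!/bin/sh
# Live checkpoint ('xm save -c'): the domain is supposed to keep running
# after its state is written.  If it instead stays suspended (the problem
# described above), one recovery is to destroy the stuck domain and
# recreate it from the checkpoint image.
# DRY_RUN=1 (default) echoes the commands rather than running them, since
# xm requires a live Xen host; "guest1" is a placeholder domain name.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run xm save -c guest1 /tmp/guest1.ckpt   # checkpoint; guest should stay up
# Recovery if the guest remains suspended:
run xm destroy guest1                    # tear down the stuck domain
run xm restore /tmp/guest1.ckpt          # recreate it from the checkpoint
```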
>>
>> Does a normal "xm save" and then "xm restore" work for you?
>>
>> What's the guest kernel version? save/restore depends heavily on the
>> guest kernel version/features (for PV guests).
>>
>> -- Pasi
>>
>>   
>>> Thanks,
>>> Tom
>>>
>>> [root@compute-01 ~]# rpm -qa | grep xen
>>> xen-devel-3.4.0-0.0.23.el5
>>> xen-tools-3.4.0-0.0.23.el5
>>> xen-debugger-3.4.0-0.0.23.el5
>>> xen-3.4.0-0.0.23.el5
>>> xen-64-3.4.0-0.0.23.el5
>>> [root@compute-01 ~]# uname -a
>>> Linux compute-01.example.com 2.6.18-128.2.1.4.9.el5xen #1 SMP Fri Oct 
>>> 9  14:57:31 EDT 2009 i686 i686 i386 GNU/Linux
>>>
>>> [root@compute-01 ~]# cat /OVS/running_pool/1_ovm_pv_01_example_com/vm.cfg
>>> bootargs = 'bridge=xenbr0,mac=00:16:3E:AA:EB:08,type=netfront'
>>> bootloader = '/usr/bin/pypxeboot'
>>> disk = ['file:/tmp/System.img,xvda,w']
>>> maxmem = 512
>>> memory = 512
>>> name = '1_ovm_pv_01_example_com'
>>> on_crash = 'restart'
>>> on_reboot = 'restart'
>>> uuid = '7408c627-3232-4c1d-b5e3-1cf05cb015c8'
>>> vcpus = 1
>>> vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0,vncpasswd=<removed>']
>>> vif = ['bridge=xenbr0,mac=00:16:3E:AA:EB:08,type=netfront']
>>> vif_other_config = []
>>>
>>>     
>>
>>

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

