
Re: [Xen-users] fail to migrate xen 4.1pv guest to xen 4.3


  • To: xen-users@xxxxxxxxxxxxx
  • From: Cyrus Tam <cyrustam@xxxxxxxxx>
  • Date: Fri, 11 Apr 2014 22:32:44 +0800
  • Delivery-date: Fri, 11 Apr 2014 14:33:59 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

Hi All,

I found a method to bring up my FC17 (the shell commands are sketched
after the steps):

1. mount the FC17 domU logical volume from dom0
2. vi the domU's /boot/grub2/grub.cfg
3. set the "timeout" to 60 seconds
4. umount the logical volume
5. bring up the domU with "xl create fc17.cfg"
6. "xl console fc17" to access the console; I can see the pygrub menu
   and press the "up" or "down" key to stop the countdown
7. open another terminal; in "xl list" the fc17 domU is in the paused state "p"
8. "xl unpause fc17"
9. go back to the first terminal and press "Enter" to start the FC17
10. the FC17 boots normally.
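
Roughly, as shell commands (a sketch; /mnt as the mount point is just an
example, the LV path /dev/vg01/vg01_fc17 is from my config below, and it
assumes the LV holds the guest's root filesystem directly):

    # dom0: raise the grub timeout inside the guest's filesystem
    mount /dev/vg01/vg01_fc17 /mnt
    vi /mnt/boot/grub2/grub.cfg      # change "set timeout=..." to 60
    umount /mnt

    # start the guest and hold the pygrub menu open
    xl create fc17.cfg
    xl console fc17                  # press Up or Down to stop the countdown

    # in a second terminal: the domU shows state "p", unpause it
    xl list
    xl unpause fc17

    # back in the console, press Enter on the menu entry to boot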

Every time I reboot the fc17 domU, I need to connect to the console,
stop the countdown, "unpause" the domU, and choose the entry in the
pygrub menu.

This is very annoying, and the FC20 domU has no such problem.

I found this error message, but I'm not sure whether it is related:

#cat /var/log/xen/qemu-dm-fc17.log
domid: 3985
Warning: vlan 0 is not connected to host network
-videoram option does not work with cirrus vga device model. Videoram set to 4M.
/builddir/build/BUILD/xen-4.3.2/tools/qemu-xen-traditional/hw/xen_blktap.c:628:
Init blktap pipes
Could not open /var/run/tap/qemu-read-3985
xs_read(): target get error. /local/domain/3985/target.


Can anyone help?

Thanks
Cyrus



On Fri, Apr 11, 2014 at 8:45 PM, Cyrus Tam <cyrustam@xxxxxxxxx> wrote:
>

>     I have a Fedora 17 host running Xen 4.1.4-4 with 5 PV Fedora 17 guests;
>     it is running fine.
>     # xl info
>     host : xen01
>     release : 3.3.4-5.fc17.x86_64
>     version : #1 SMP Mon May 7 17:29:34 UTC 2012
>     machine : x86_64
>     nr_cpus : 12
>     nr_nodes : 1
>     cores_per_socket : 6
>     threads_per_core : 2
>     cpu_mhz : 2000
>     hw_caps :
>     bfebfbff:2c100800:00000000:00003f40:13bee3ff:00000000:00000001:00000000
>     virt_caps : hvm hvm_directio
>     total_memory : 24498
>     free_memory : 4150
>     free_cpus : 0
>     xen_major : 4
>     xen_minor : 1
>     xen_extra : .4
>     xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p
>     hvm-3.0-x86_64
>     xen_scheduler : credit
>     xen_pagesize : 4096
>     platform_params : virt_start=0xffff800000000000
>     xen_changeset : unavailable
>     xen_commandline : placeholder dom0_max_vcpus=4 dom0_vcpus_pin
>     cc_compiler : gcc version 4.7.2 20120921 (Red Hat 4.7.2-2) (GCC)
>     cc_compile_by : mockbuild
>     cc_compile_domain : [unknown]
>     cc_compile_date : Wed Feb 6 21:24:13 UTC 2013
>     xend_config_format : 4
>
>
>     I have a new machine with more memory and CPUs.
>     I installed a Fedora 20 host with Xen 4.3.2-1; a PV Fedora 20 guest runs
>     without any problem.
>
>     # xl info
>     host : xen02
>     release : 3.11.10-301.fc20.x86_64
>     version : #1 SMP Thu Dec 5 14:01:17 UTC 2013
>     machine : x86_64
>     nr_cpus : 32
>     max_cpu_id : 63
>     nr_nodes : 2
>     cores_per_socket : 8
>     threads_per_core : 2
>     cpu_mhz : 2194
>     hw_caps :
>     bfebfbff:2c100800:00000000:00003f00:17bee3ff:00000000:00000001:00000000
>     virt_caps : hvm hvm_directio
>     total_memory : 98269
>     free_memory : 5486
>     sharing_freed_memory : 0
>     sharing_used_memory : 0
>     outstanding_claims : 0
>     free_cpus : 0
>     xen_major : 4
>     xen_minor : 3
>     xen_extra : .2
>     xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p
>     hvm-3.0-x86_64
>     xen_scheduler : credit
>     xen_pagesize : 4096
>     platform_params : virt_start=0xffff800000000000
>     xen_changeset :
>     xen_commandline : placeholder dom0_max_vcpus=6 dom0_vcpus_pin
>     cc_compiler : gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)
>     cc_compile_by : mockbuild
>     cc_compile_domain : [unknown]
>     cc_compile_date : Tue Feb 18 21:00:14 UTC 2014
>     xend_config_format : 4
>
>
>     Now I want to migrate these five PV FC17 guests to the new machine.
>
>     I used dd to copy each logical volume to the new machine (roughly the
>     commands sketched below), and tried to start the domU, but it crashes
>     repeatedly.
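>     A sketch of the copy step (the source VG/LV name vg00/fc17 and the 20G
>     size below are illustrative, not my exact values):
>
>         # on the new host: create a target LV at least as large as the source
>         lvcreate -L 20G -n vg01_fc17 vg01
>
>         # copy the raw volume from the old host over ssh
>         ssh root@xen01 "dd if=/dev/vg00/fc17 bs=4M" | dd of=/dev/vg01/vg01_fc17 bs=4M
>
>         # read-only filesystem check of the copy
>         fsck -n /dev/vg01/vg01_fc17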
>
>     # cat fc17.cfg
>     name = "fc17"
>     memory = 4096
>     vcpus = 4
>     bootloader = "pygrub"
>     localtime = 0
>     on_poweroff = "destroy"
>     on_reboot = "restart"
>     on_crash = "restart"
>     vfb = [ 'type=vnc,vncdisplay=6,vnclisten=0.0.0.0,vncpasswd=password' ]
>     vnc = 1
>     vncunused = 0
>     vncdisplay = 3
>     disk = [ "phy:/dev/vg01/vg01_fc17,xvda,w" ]
>     vif = [ "bridge=xenbrdum,script=vif-bridge" ]
>     parallel = "none"
>     serial = "none"
>
>
>
>     # xl create fc17.cfg
>     Parsing config from fc17.cfg
>     Daemon running with PID 4778
>     #
>
>
>
>     # xl vncviewer fc17
>     fc17 is an invalid domain identifier (rc=-6)
>
>
>     # xl list
>     Name                ID   Mem VCPUs      State   Time(s)
>     Domain-0             0 81244     6     r-----     1857.5
>     fc17                22     0     0     --p---        0.0
>
>     # xl list
>     Name                ID   Mem VCPUs      State   Time(s)
>     Domain-0             0 81244     6     r-----     1859.7
>     fc17                22  4096     1     --psc-        0.4
>
>
>     # cat qemu-dm-fc17.log
>     domid: 5
>     Warning: vlan 0 is not connected to host network
>     -videoram option does not work with cirrus vga device model. Videoram set
>     to 4M.
>     /builddir/build/BUILD/xen-4.3.2/tools/qemu-xen-traditional/hw/xen_blktap.c:628:
>     Init blktap pipes
>     Could not open /var/run/tap/qemu-read-5
>     xs_read(): target get error. /local/domain/5/target.
>
>
>     Sometimes I can open the console with "xl console fc17" and see the pygrub
>     menu. After the timeout expires, it tries to boot the PV guest but fails;
>     it seems to crash and restart again.
>
>
>     Am I missing something??
>
>
>     Thanks
>
>     Cyrus

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

