
[Xen-users] Run KVM guest on Xen



I have a lot of KVM guests running a big benchmark that I now have to port to Xen for my PhD experiments. I'm trying to migrate the KVM guests to Xen directly, to see if I can avoid reconfiguring the whole setup.

Has anyone successfully done this kind of migration?

In my tests I couldn't get it to work!
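
As an aside: virsh can, in principle, translate the domain XML into a native Xen config, roughly as in the untested sketch below (whether this libvirt build's libxl driver accepts the xen-xm format is an assumption on my part); I'd still like to understand why the hand-ported configs further down fail.

# Untested sketch: let libvirt translate the domain XML into a native Xen
# config. "xen-xm" is an assumption about the format name this libxl
# driver accepts; newer libvirt versions also know "xen-xl".
virsh -c xen:/// domxml-to-native xen-xm testmigration.xml > testmig-generated.cfg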

I created a KVM guest with libvirt that uses qcow disk images and has the following configuration:

<domain type='kvm'>
  <name>test</name>
  <uuid>8b86300d-8279-453e-88c3-958a10182597</uuid>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-xenial'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/uvtool/libvirt/images/test.qcow'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/uvtool/libvirt/images/test-ds.qcow'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <interface type='network'>
      <mac address='52:54:00:d0:74:26'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>


I then tried to port the guest as-is, just rewriting it as a Xen libvirt config file:

<domain type='xen'>
  <name>testmig</name>
  <uuid>8b86300d-8279-453e-88c3-958a10182597</uuid>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <bootloader>/usr/lib/xen-4.6/bin/pygrub</bootloader>
  <os>
    <type arch='x86_64' machine='xenpv'>linux</type>
    <cmdline>root=/dev/xvda1 ro (null)</cmdline>
  </os>
  <clock offset='utc' adjustment='reset'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <driver name='tap' type='qcow'/>
      <source file='/var/lib/uvtool/libvirt/images/test.qcow'/>
      <target dev='xvda1' bus='xen'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='tap' type='raw'/>
      <source file='/var/lib/uvtool/libvirt/images/test-ds.qcow'/>
      <target dev='xvda2' bus='xen'/>
    </disk>
    <interface type='bridge'>
      <mac address='52:54:00:d0:74:26'/>
      <source bridge='virbr0'/>
    </interface>
    <console type='pty'>
      <target type='xen' port='0'/>
    </console>
  </devices>
</domain>


No luck; libvirt says:

root@charles-OptiPlex-7010:/home/charles# virsh -c xen:/// create testmigration.xml 
error: Failed to create domain from testmigration.xml
error: internal error: libxenlight failed to create new domain 'testmig'


Finally, I tried Xen itself, porting the configuration by hand:
root@charles-OptiPlex-7010:/etc/xen# cat testmig.cfg 
#
# Configuration file for the Xen instance testxn4, created
# by xen-tools 4.6.2 on Fri Dec  9 09:38:15 2016.
#

#
#  Kernel + memory size
#


bootloader = '/usr/lib/xen-4.6/bin/pygrub'

vcpus       = '1'
memory      = '512'


#
#  Disk device(s).
#
root        = '/dev/xvda2 ro'
disk        = [
                  'tap:qcow:/var/lib/uvtool/libvirt/images/test.qcow,xvda2,w',
                  'tap:qcow:/var/lib/uvtool/libvirt/images/test-ds.qcow,xvda1,w',
              ]


#
#  Physical volumes
#


#
#  Hostname
#
name        = 'testmig'

#
#  Networking
#
dhcp        = 'dhcp'
vif         = [ 'mac=52:54:00:d0:74:26,bridge=virbr0' ]

#
#  Behaviour
#
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'

Again, no success:

root@charles-OptiPlex-7010:/etc/xen# xl create testmig.cfg
Parsing config from testmig.cfg
libxl: error: libxl_device.c:1269:libxl__wait_for_backend: Backend /local/domain/0/backend/qdisk/0/51760 not ready
libxl: error: libxl_bootloader.c:408:bootloader_disk_attached_cb: failed to attach local disk for bootloader execution
libxl: error: libxl_bootloader.c:279:bootloader_local_detached_cb: unable to detach locally attached disk
libxl: error: libxl_create.c:1144:domcreate_rebuild_done: cannot (re-)build domain: -3
libxl: error: libxl.c:1610:libxl__destroy_domid: non-existant domain 11
libxl: error: libxl.c:1568:domain_destroy_callback: unable to destroy guest with domid 11
libxl: error: libxl.c:1495:domain_destroy_cb: destruction of domain 11 failed



Any help? 
It would save me a LOT of time!



--
Charles F.'. Gonçalves
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
https://lists.xen.org/xen-users

 

