Folks,
I'm trying to run a FreeBSD HVM domU on an Alpine Linux dom0, installed from the Alpine Linux Xen x86_64 installer image from the web site. An Alpine Linux HVM domU runs with no problem, but when I try to start a FreeBSD-based domU the whole system crashes hard and reboots. Hardware is a Supermicro X11SDV-4C-TLN2F with 64 GB RAM and a 32 GB SSD (the drive is just for testing purposes).
My config file:
builder="hvm"
# Path to HDD and iso file
disk = [ 'format=raw, vdev=xvda, access=w, target=/root/freebsd-hvm.img',
         'format=raw, vdev=xvdc, access=r, devtype=cdrom, target=/root/FreeBSD-12.0-RELEASE-amd64-disc1.iso' ]
# Network configuration
vif = ['bridge=br0']
serial = "pty"
# DomU settings
memory = 2048
name = "freebsd-hvm"
vcpus = 1
maxvcpus = 1
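For what it's worth, xl has a dry-run flag that should sanity-check the config without actually building the domain (if I'm reading the xl(1) man page right):

localhost:~# xl create -n freebsd-hvm-xen-install.cfg

Judging from the "Parsing config" line in the output below, though, I don't think config syntax itself is the problem.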
Output from create command:
localhost:~# xl -v create freebsd-hvm-xen-install.cfg -c
Parsing config from freebsd-hvm-xen-install.cfg
domainbuilder: detail: xc_dom_allocate: cmdline="", features=""
domainbuilder: detail: xc_dom_kernel_file: filename="/usr/lib/xen/boot/hvmloader"
domainbuilder: detail: xc_dom_malloc_filemap    : 466 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.11, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ...
domainbuilder: detail: loader probe OK
xc: detail: ELF: phdr: paddr=0x100000 memsz=0x7e524
xc: detail: ELF: memory: 0x100000 -> 0x17e524
domainbuilder: detail: xc_dom_mem_init: mem 2040 MB, pages 0x7f800 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x7f800 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: range: start=0x0 end=0x7f800000
domainbuilder: detail: xc_dom_malloc            : 4080 kB
xc: detail: PHYSICAL MEMORY ALLOCATION:
xc: detail:   4KB PAGES: 0x0000000000000200
xc: detail:   2MB PAGES: 0x00000000000003fb
xc: detail:   1GB PAGES: 0x0000000000000000
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x100+0x7f at 0x7efe961ec000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x100000 -> 0x17f000  (pfn 0x100 + 0x7f pages)
xc: detail: ELF: phdr 0 at 0x7efe9616d000 -> 0x7efe961e1980
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x17f+0x40 at 0x7efe961ac000
domainbuilder: detail: xc_dom_alloc_segment:   System Firmware module : 0x17f000 -> 0x1bf000  (pfn 0x17f + 0x40 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x1bf+0x1 at 0x7efe961ab000
domainbuilder: detail: xc_dom_alloc_segment:   HVM start info : 0x1bf000 -> 0x1c0000  (pfn 0x1bf + 0x1 pages)
domainbuilder: detail: alloc_pgtables_hvm: doing nothing
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x1c0000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 <= matches
^^^^^^^^^^^^^^^^^^ this line seems odd. Why would it be using x86_32? (Though maybe hvmloader itself is just built as a 32-bit binary and this is a red herring?)
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 4087 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 466 kB
domainbuilder: detail:       domU mmap          : 768 kB
domainbuilder: detail: vcpu_hvm: called
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0xff000
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0xff001
domainbuilder: detail: xc_dom_release: called
Connection to 192.168.X.X closed by remote host.
Connection to 192.168.X.X closed.
Relevant lines from /var/log/messages:
Mar 21 09:40:45 localhost daemon.debug root: /etc/xen/scripts/block: add XENBUS_PATH=backend/vbd/1/51712
Mar 21 09:40:45 localhost daemon.debug root: /etc/xen/scripts/block: Writing backend/vbd/1/51712/node /dev/loop0 to xenstore.
Mar 21 09:40:45 localhost daemon.debug root: /etc/xen/scripts/block: Writing backend/vbd/1/51712/physical-device 7:0 to xenstore.
Mar 21 09:40:45 localhost daemon.debug root: /etc/xen/scripts/block: Writing backend/vbd/1/51712/physical-device-path /dev/loop0 to xenstore.
Mar 21 09:40:45 localhost daemon.debug root: /etc/xen/scripts/block: Writing backend/vbd/1/51712/hotplug-status connected to xenstore.
Mar 21 09:40:46 localhost daemon.debug root: /etc/xen/scripts/vif-bridge: online type_if=vif XENBUS_PATH=backend/vif/1/0
Mar 21 09:40:46 localhost daemon.debug root: /etc/xen/scripts/vif-bridge: Successful vif-bridge online for vif1.0, bridge br0.
Mar 21 09:40:46 localhost daemon.debug root: /etc/xen/scripts/vif-bridge: Writing backend/vif/1/0/hotplug-status connected to xenstore.
Mar 21 09:40:46 localhost daemon.debug root: /etc/xen/scripts/vif-bridge: add type_if=tap XENBUS_PATH=backend/vif/1/0
Mar 21 09:40:46 localhost daemon.debug root: /etc/xen/scripts/vif-bridge: Successful vif-bridge add for vif1.0-emu, bridge br0.
^^^^^^ Crash at this point, next log line is after reboot
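Since nothing after that point makes it to disk before the reset, my next step is probably to capture the hypervisor's own panic output over serial. If I'm reading the Xen command-line docs right, appending something like the following to the xen.gz entry in the bootloader config (extlinux.conf on this Alpine install) should route the panic to COM1 and hold the box instead of rebooting:

console=com1,vga com1=115200,8n1 noreboot

I haven't actually tried this yet, so treat the exact syntax as my reading of the docs rather than something verified on this box.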
As for the cause, the timing suggests that some interaction with the networking stack (right as the emulated tap interface vif1.0-emu is added to the bridge) triggers the panic, maybe?
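The cheapest test I can think of is to drop the NIC from the config entirely and see whether the domU comes up at all:

# Network configuration (disabled to isolate the crash)
# vif = ['bridge=br0']
vif = [ ]

If the installer boots cleanly with no vif, that would point pretty squarely at the vif/tap hotplug path. I haven't run this experiment yet.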
Any ideas out there?
--Paul

Paul Suh
VP of Deployment
Cypient Black