
[Xen-users] FreeBSD 11.0 (pfSense 2.4) kernel panic with more than 3 Xen (4.8.1) VIFs


  • To: xen-users@xxxxxxxxxxxxx
  • From: John Keates <john@xxxxxxxxx>
  • Date: Wed, 2 Aug 2017 04:59:13 +0200
  • Delivery-date: Thu, 03 Aug 2017 19:18:46 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

Hi,

I’m using FreeBSD 11.0 (technically the pfSense 2.4 distribution) in a VM (HVM with PV drivers enabled for optimised virtual devices) on Xen 4.8.1; the management OS is Debian Stretch.
With 3 or fewer network interfaces the VM starts fine, but as soon as I add a 4th interface the VM gets a kernel panic, dumps core, and reboots. With a 5th interface or more it doesn’t even dump anymore; it panics and drops straight into the db> debugger prompt.

Configuration file for the VM:

name = 'firewall'
bios = "ovmf"
uuid="11d1366c-40e2-43b2-83be-52cfe0c6542d"
builder = 'hvm'
memory = '2024'
vcpus = 4
disk = ['file:/dev/hypervisor-local/firewall-system,xvda,w']
vif = [
'mac=00:16:3e:e5:68:fd, bridge=wan0, script=vif-openvswitch-no-offload, type=vif',
'mac=00:16:3e:e8:20:db, bridge=lan10, script=vif-openvswitch-no-offload, type=vif',
#'mac=00:16:3e:ee:02:44, bridge=lan20, script=vif-openvswitch-no-offload, type=vif',
#'mac=00:16:3e:e1:03:f1, bridge=lan30, script=vif-openvswitch-no-offload, type=vif',
'mac=00:16:3e:e2:30:44, bridge=lan40, script=vif-openvswitch-no-offload, type=vif'
]
boot = 'c'
serial = 'pty'


Initial panic lines (with 4 interfaces):

panic: HYPERVISOR_memory_op failed to map gnttab
cpuid = 2
KDB: enter: panic
[ thread pid 26 tid 100093 ]
Stopped at      kdb_enter+0x3b: movq    $0,kdb_why
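One avenue that may be worth ruling out (an assumption on my part, not a confirmed diagnosis: each VIF consumes grant references, and the hypervisor caps the number of grant-table frames) is Xen's hypervisor-wide grant-table limit, controlled by the `gnttab_max_frames` boot option. A sketch of raising it on a Debian dom0, shown here against a scratch copy of the GRUB defaults file rather than the live `/etc/default/grub` (the sample command line below is hypothetical):

```shell
# Demonstration on a sample file; the real change goes in /etc/default/grub.
printf 'GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M"\n' > /tmp/grub.sample

# Append gnttab_max_frames=64 to the Xen command line (default is 32 in
# Xen 4.8-era hypervisors).
sed -i 's/^GRUB_CMDLINE_XEN_DEFAULT="\(.*\)"/GRUB_CMDLINE_XEN_DEFAULT="\1 gnttab_max_frames=64"/' /tmp/grub.sample

grep GRUB_CMDLINE_XEN_DEFAULT /tmp/grub.sample
```

After editing the real file you would run `update-grub` and reboot the host; whether this actually cures the panic with 4+ VIFs is exactly what I'd want to test.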

I can supply the dumps themselves as well, I suppose; they should still be stored on the system. If needed, I should be able to reproduce this on plain FreeBSD 11.0 too.

I am wondering what to do next. I could work around this with some VLAN magic, or chain a few VMs together to get more interfaces interconnected,
but I’d rather just fix the problem. It’s probably not Xen’s fault, but I do wonder whether this has happened before with the Xen netback/netfront combination and FreeBSD, and if so, what the solution is/was.
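For the VLAN workaround mentioned above, a sketch of how it could look with the openvswitch vif script (the bridge names and tag are hypothetical, and I'm assuming the `bridge=BRIDGE.VLAN` access-port syntax that the stock vif-openvswitch script understands; the no-offload variant used here would need to support the same convention):

```
# One trunked VIF instead of several, with the guest doing the VLAN split:
vif = [
    'mac=00:16:3e:e5:68:fd, bridge=wan0, script=vif-openvswitch-no-offload, type=vif',
    # access port tagged into VLAN 40 on the OVS side:
    'mac=00:16:3e:e2:30:44, bridge=lan0.40, script=vif-openvswitch-no-offload, type=vif',
]
```

That keeps the VIF count at 3 or fewer, at the cost of pushing VLAN handling into pfSense.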

Regards,
John
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
https://lists.xen.org/xen-users
