Re: [Xen-users] DomU fails to reboot with storage driver domain
On Wed, Mar 23, 2016 at 6:56 AM, Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> Hello,
>
> On Mon, 21 Mar 2016, Alex Velazquez wrote:
>> Hello,
>>
>> I am running Xen 4.6.0, with Ubuntu 14.04 as my Domain-0.
>>
>> I have a storage driver domain (PV guest running Ubuntu 14.04) that
>> serves a disk backend to a PV DomU (also running Ubuntu 14.04).
>>
>> Here is the XL config file of StorageDom:
>>
>> > name = "StorageDom"
>> > memory = 1024
>> > maxmem = 1024
>> > vcpus = 2
>> > maxvcpus = 2
>> > driver_domain = 1
>> > pci = [ "84:00.0" ]
>> > builder = "generic"
>> > kernel = "/var/lib/xen/images/vmlinuz-3.19.0-56-generic"
>> > ramdisk = "/var/lib/xen/images/initrd.img-3.19.0-56-generic"
>> > cmdline = "root=/dev/sda1 ro"
>>
>> Here is the XL config file of ClientDom:
>>
>> > name = "ClientDom"
>> > memory = 1024
>> > maxmem = 1024
>> > vcpus = 2
>> > maxvcpus = 2
>> > builder = "generic"
>> > kernel = "/usr/local/lib/xen/boot/pv-grub-x86_64.gz"
>> > cmdline = "(hd0,0)/boot/grub/menu.lst"
>> > disk = [ "format=raw,vdev=xvda,access=rw,backend=StorageDom,target=/dev/sdb" ]
>>
>> When I start ClientDom, everything looks good. Here is the backend
>> entry in xenstore:
>>
>> > user@ubuntu ~> $ sudo xenstore-ls /local/domain/1/backend/vbd
>> > 2 = ""
>> > 51712 = ""
>> > frontend = "/local/domain/2/device/vbd/51712"
>> > params = "/dev/sdb"
>> > script = "/etc/xen/scripts/block"
>> > frontend-id = "2"
>> > online = "1"
>> > removable = "0"
>> > bootable = "1"
>> > state = "4"
>> > dev = "xvda"
>> > type = "phy"
>> > mode = "w"
>> > device-type = "disk"
>> > discard-enable = "1"
>> > physical-device = "8:10"
>> > feature-flush-cache = "1"
>> > feature-discard = "0"
>> > feature-barrier = "1"
>> > feature-persistent = "1"
>> > feature-max-indirect-segments = "256"
>> > sectors = "1562824368"
>> > info = "2"
>> > sector-size = "512"
>> > physical-sector-size = "512"
>> > hotplug-status = "connected"
>>
>> And here is the corresponding frontend entry:
>>
>> > user@ubuntu ~> $ sudo xenstore-ls /local/domain/2/device/vbd
>> > 51712 = ""
>> > backend = "/local/domain/1/backend/vbd/2/51712"
>> > backend-id = "1"
>> > state = "4"
>> > virtual-device = "51712"
>> > device-type = "disk"
>> > protocol = "x86_64-abi"
>> > ring-ref = "8"
>> > event-channel = "17"
>> > feature-persistent = "1"
>>
>> I run into problems if I try to reboot ClientDom (either from within
>> the VM, or by calling "xl reboot ClientDom" from Domain-0). As
>> ClientDom goes down, the backend entry is cleared out:
>>
>> > user@ubuntu ~> $ sudo xenstore-ls /local/domain/1/backend/vbd
>> > 2 = ""
>>
>> Then ClientDom comes back up with ID 3, but the new backend/frontend
>> are not created:
>>
>> > user@ubuntu ~> $ sudo xenstore-ls /local/domain/1/backend/vbd
>> > 2 = ""
>>
>> > user@ubuntu ~> $ sudo xenstore-ls /local/domain/3/device/vbd
>> > xenstore-ls: xs_directory (/local/domain/3/device/vbd): No such file or directory
>>
>> > user@ubuntu ~> $ sudo xenstore-ls /local/domain/3/device
>> > suspend = ""
>> > event-channel = ""
>>
>> Connecting to ClientDom's console shows the PvGrub prompt, because it
>> can't find its boot disk:
>>
>> >   GNU GRUB  version 0.97  (1048576K lower / 0K upper memory)
>> >
>> >  [ Minimal BASH-like line editing is supported.  For
>> >    the first word, TAB lists possible command
>> >    completions.  Anywhere else TAB lists the possible
>> >    completions of a device/filename. ]
>> >
>> > grubdom> root (hd0,0)
>> >
>> > Error 21: Selected disk does not exist
>> >
>> > grubdom>
>
> That's certainly not expected, do you see any kind of error messages in
> the xl logs inside of /var/log/xen? You should look into the
> xl-<domain_name>.log.X files.
>
>> If I shutdown ClientDom and start it again ("xl destroy", followed by
>> "xl create"), everything works again:
>>
>> > user@ubuntu ~> $ sudo xenstore-ls /local/domain/1/backend/vbd
>> > 2 = ""
>> > 4 = ""
>> > 51712 = ""
>> > frontend = "/local/domain/4/device/vbd/51712"
>> > params = "/dev/sdb"
>> > script = "/etc/xen/scripts/block"
>> > frontend-id = "4"
>> > online = "1"
>> > removable = "0"
>> > bootable = "1"
>> > state = "4"
>> > dev = "xvda"
>> > type = "phy"
>> > mode = "w"
>> > device-type = "disk"
>> > discard-enable = "1"
>> > physical-device = "8:10"
>> > feature-flush-cache = "1"
>> > feature-discard = "0"
>> > feature-barrier = "1"
>> > feature-persistent = "1"
>> > feature-max-indirect-segments = "256"
>> > sectors = "1562824368"
>> > info = "2"
>> > sector-size = "512"
>> > physical-sector-size = "512"
>> > hotplug-status = "connected"
>>
>> > user@ubuntu ~> $ sudo xenstore-ls /local/domain/4/device/vbd
>> > 51712 = ""
>> > backend = "/local/domain/1/backend/vbd/4/51712"
>> > backend-id = "1"
>> > state = "4"
>> > virtual-device = "51712"
>> > device-type = "disk"
>> > protocol = "x86_64-abi"
>> > ring-ref = "8"
>> > event-channel = "17"
>> > feature-persistent = "1"
>>
>> However, I can't prevent a user from attempting to reboot ClientDom
>> from within the VM, and when it drops to the PvGrub prompt,
>> intervention on Domain-0 is required to restart it.
>>
>> I don't run into this problem if Domain-0 is the disk backend. When
>> the domain reboots, the old backend entry is removed from xenstore (not
>> just cleared out, as in the case of StorageDom), the new
>> backend/frontend entries are created, and ClientDom starts up
>> correctly. One difference I can see is that StorageDom is running an
>> "xl devd" process (which is started by the xendriverdomain init
>> script), whereas Domain-0 is not. Is the use-case of rebooting a DomU
>> supported by storage driver domains?
>
> Yes, this use-case should be supported, so it looks that what you are
> seeing is a bug. Could you try the same with xen-unstable?
>
> Roger.

Hi Roger,

I installed xen-unstable:

> xen_version   : 4.7-unstable
> xen_changeset : Thu Mar 17 13:50:39 2016 +0100 git:a6f2cdb

I get the same result as before: ClientDom boots fine, the xenstore
backend/frontend entries look fine, ClientDom reboots, the xenstore
backend/frontend entries are missing, and ClientDom drops to the PvGrub
prompt.

Here are the contents of the xl log files before the reboot:

> user@ubuntu ~> $ cat /var/log/xen/xl-StorageDom.log
> Waiting for domain StorageDom (domid 1) to die [pid 2831]

> user@ubuntu ~> $ cat /var/log/xen/xl-ClientDom.log
> Waiting for domain ClientDom (domid 2) to die [pid 2855]

And here are the contents of the xl log files after the reboot:

> user@ubuntu ~> $ cat /var/log/xen/xl-StorageDom.log
> Waiting for domain StorageDom (domid 1) to die [pid 2831]

> user@ubuntu ~> $ cat /var/log/xen/xl-ClientDom.log
> Waiting for domain ClientDom (domid 2) to die [pid 2855]
> Domain 2 has shut down, reason code 1 0x1
> Action for shutdown reason code 1 is restart
> libxl: warning: libxl.c:6770:libxl_retrieve_domain_configuration: Device present in JSON but not in xenstore, ignored
> Domain 2 needs to be cleaned up: destroying the domain
> Done. Rebooting now
> Waiting for domain ClientDom (domid 3) to die [pid 2855]
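The "Device present in JSON but not in xenstore" warning makes me wonder
whether libxl's retained copy of the domain configuration and what is
actually in xenstore have simply diverged across the reboot. If it helps,
the two can be compared from Domain-0 roughly like this (just a sketch,
using the domain names and the StorageDom domid 1 from my setup above):

  # libxl's stored JSON configuration for the guest; I would expect the
  # disk entry to still list backend=StorageDom after the failed reboot.
  sudo xl list -l ClientDom

  # What the driver domain actually has in xenstore; after the reboot
  # only the stale, empty "2" node is left under backend/vbd.
  sudo xenstore-ls /local/domain/1/backend/vbd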
If I type 'halt' at the PvGrub prompt to bring down ClientDom, some
additional lines are appended to the log file:

> user@ubuntu ~> $ cat /var/log/xen/xl-ClientDom.log
> Waiting for domain ClientDom (domid 2) to die [pid 2855]
> Domain 2 has shut down, reason code 1 0x1
> Action for shutdown reason code 1 is restart
> libxl: warning: libxl.c:6770:libxl_retrieve_domain_configuration: Device present in JSON but not in xenstore, ignored
> Domain 2 needs to be cleaned up: destroying the domain
> Done. Rebooting now
> Waiting for domain ClientDom (domid 3) to die [pid 2855]
> Domain 3 has shut down, reason code 3 0x3
> Action for shutdown reason code 3 is destroy
> Domain 3 needs to be cleaned up: destroying the domain
> Done. Exiting now

There is also a log file inside StorageDom at '/var/log/xen/xldevd.log',
but its contents are empty.
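Since xldevd.log stays empty, one thing I could try next (assuming the
xendriverdomain init script is only a thin wrapper around "xl devd", and
that its -F option keeps the daemon in the foreground) is to run the
backend daemon interactively inside StorageDom while watching the backend
path from Domain-0 during the reboot, to see whether a vbd node for the
new domid is ever written:

  # Inside StorageDom: stop the packaged daemon and run it in the
  # foreground so any hotplug activity goes straight to the console.
  sudo service xendriverdomain stop
  sudo xl devd -F

  # From Domain-0, in parallel: watch the driver domain's backend
  # directory and trigger the reboot.
  sudo xenstore-watch /local/domain/1/backend/vbd &
  sudo xl reboot ClientDom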
Thanks for your help,
Alex