[Xen-users] PV DomU can't access disk from storage driver domain
Hi all,

I've been trying to set up a storage driver domain that provides a disk to another DomU, but I'm having trouble getting it to work. If anyone has experience with this and could offer some suggestions, I'd be grateful.

I'm running Xen 4.6.0-rc3. Both the storage driver domain ("storagedd") and the other DomU ("client") are PV guests running Ubuntu 14.04.3 (kernel 3.13.0-63-generic).

To set up the storage domain, I followed the instructions from the "Storage driver domains" page on the wiki (http://wiki.xenproject.org/wiki/Storage_driver_domains) and got some additional ideas from "XCP storage driver domains" (http://wiki.xenproject.org/wiki/XCP_storage_driver_domains). I'll summarize my steps here:

1) clone the xen.git repository (using the 4.6.0-rc3 release, same as Dom0)
2) make tools && make install-tools
3) apt-get install blktap-utils
4) mount -t xenfs xenfs /proc/xen
5) modprobe xen-blkback, xen-gntalloc, and xen-gntdev (not sure whether any/all of these are necessary, or whether they're loaded automatically)

My testing so far has been without PCI passthrough. I've tried files located on an NFS share mounted by storagedd (qcow2 and raw), files located on storagedd's own filesystem (again, qcow2 and raw), as well as a block device (a loop device created using losetup).
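For the block-device case, the loop device inside storagedd was set up roughly like this (a sketch; the backing image name here is just a placeholder, not my actual path):

# inside storagedd: attach a raw image file to a loop device
losetup /dev/loop0 /var/lib/xen/images/client-disk.img
losetup -a    # confirm the association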
Here is the xl config file for "storagedd":

xenuser@xenhost:~$ cat storagedd.cfg
name = "storagedd"
builder = "generic"
kernel = "/usr/local/lib/xen/boot/pv-grub-x86_64.gz"
extra = "(hd0,0)/boot/grub/menu.lst"
driver_domain = 1
vcpus = 1
memory = 2048
disk = [ "format=qcow2,vdev=xvda,access=rw,target=/var/lib/xen/images/storagedd.qcow2" ]
vif = [ "mac=00:16:3e:36:00:01,bridge=xenbr0,script=vif-openvswitch" ]

And here is the xl config file for "client":

xenuser@xenhost:~$ cat client.cfg
name = "client"
builder = "generic"
kernel = "/usr/local/lib/xen/boot/pv-grub-x86_64.gz"
extra = "(hd0,0)/boot/grub/menu.lst"
vcpus = 1
memory = 1024
disk = [ "format=raw,backendtype=phy,backend=storagedd,vdev=xvda,target=/dev/loop0" ]
vif = [ "mac=00:16:3e:37:00:02,bridge=xenbr0,script=vif-openvswitch" ]

storagedd starts up and runs normally. Here are the Xen kernel modules that are loaded:

admin@storagedd:~$ sudo lsmod | grep xen
xen_gntalloc           13626  0
xen_gntdev             18675  0
xen_blkback            37209  0
xenfs                  12978  1
xen_privcmd            13243  1 xenfs

Initially, there are no backend entries in xenstore:

admin@storagedd:~$ sudo xenstore-ls /local/domain/1/backend
xenstore-ls: xs_directory (/local/domain/1/backend): No such file or directory

Then I start the client.
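One detail I should mention: I wasn't sure from the wiki whether "xl devd" needs to be running inside storagedd for the hotplug scripts (e.g. /etc/xen/scripts/block) to actually be executed in the driver domain; it is not among my steps above. If it is required, I assume the invocation inside storagedd would simply be:

# inside storagedd: run the libxl device daemon, which handles
# backend setup/hotplug for devices whose backend is this domain
xl devd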
Here are some excerpts of the output of "xl -vvv create client.cfg":

xenuser@xenhost:~$ sudo xl -vvv create client.cfg
Parsing config from xlcfg/client.cfg
libxl: debug: libxl_create.c:1556:do_domain_create: ao 0x21d46d0: create: how=(nil) callback=(nil) poller=0x21d3b10
libxl: debug: libxl_device.c:269:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=phy
libxl: debug: libxl_device.c:201:disk_try_backend: Disk vdev=xvda, is using a storage driver domain, skipping physical device check
libxl: debug: libxl_create.c:944:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:330:libxl__bootloader_run: no bootloader configured, using user supplied kernel
libxl: debug: libxl_event.c:691:libxl__ev_xswatch_deregister: watch w=0x21d19b0: deregister unregistered
domainbuilder: detail: xc_dom_allocate: cmdline="(hd0,0)/boot/grub/menu.lst", features="(null)"
libxl: debug: libxl_dom.c:623:libxl__build_pv: pv kernel mapped 0 path /usr/local/lib/xen/boot/pv-grub-x86_64.gz
domainbuilder: detail: xc_dom_kernel_file: filename="/usr/local/lib/xen/boot/pv-grub-x86_64.gz"
[....]
domainbuilder: detail: launch_vm: called, ctxt=0x7f9631c10004
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_device.c:269:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=phy
libxl: debug: libxl_device.c:201:disk_try_backend: Disk vdev=xvda, is using a storage driver domain, skipping physical device check
libxl: debug: libxl_event.c:639:libxl__ev_xswatch_register: watch w=0x21d2df0 wpath=/local/domain/1/backend/vbd/3/51712/state token=3/0: register slotnum=3
libxl: debug: libxl_create.c:1573:do_domain_create: ao 0x21d46d0: inprogress: poller=0x21d3b10, flags=i
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x21d2df0 wpath=/local/domain/1/backend/vbd/3/51712/state token=3/0: event epath=/local/domain/1/backend/vbd/3/51712/state
libxl: debug: libxl_event.c:884:devstate_callback: backend /local/domain/1/backend/vbd/3/51712/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x21d2df0 wpath=/local/domain/1/backend/vbd/3/51712/state token=3/0: event epath=/local/domain/1/backend/vbd/3/51712/state
libxl: debug: libxl_event.c:880:devstate_callback: backend /local/domain/1/backend/vbd/3/51712/state wanted state 2 ok
libxl: debug: libxl_event.c:677:libxl__ev_xswatch_deregister: watch w=0x21d2df0 wpath=/local/domain/1/backend/vbd/3/51712/state token=3/0: deregister slotnum=3
libxl: debug: libxl_device.c:938:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:691:libxl__ev_xswatch_deregister: watch w=0x21d2df0: deregister unregistered
libxl: debug: libxl_device.c:993:device_hotplug: Backend domid 1, domid 0, assuming driver domains
libxl: debug: libxl_device.c:996:device_hotplug: Not a remove, not executing hotplug scripts
libxl: debug: libxl_event.c:691:libxl__ev_xswatch_deregister: watch w=0x21d2ef0: deregister unregistered
libxl: debug: libxl_event.c:639:libxl__ev_xswatch_register: watch w=0x21d6010 wpath=/local/domain/0/backend/vif/3/0/state token=3/1: register slotnum=3
[....]

Meanwhile, client's console prints the following:

xenuser@xenhost:~$ sudo xl console client
Xen Minimal OS!
  start_info: 0xba4000(VA)
    nr_pages: 0x40000
  shared_inf: 0xa166c000(MA)
     pt_base: 0xba7000(VA)
nr_pt_frames: 0xb
    mfn_list: 0x9a4000(VA)
   mod_start: 0x0(VA)
     mod_len: 0
       flags: 0x0
    cmd_line: (hd0,0)/boot/grub/menu.lst
       stack: 0x9630e0-0x9830e0
MM: Init
      _text: 0x0(VA)
     _etext: 0x75374(VA)
   _erodata: 0x90000(VA)
     _edata: 0x95d20(VA)
stack start: 0x9630e0(VA)
       _end: 0x9a36e0(VA)
  start_pfn: bb5
    max_pfn: 40000
Mapping memory range 0x1000000 - 0x40000000
setting 0x0-0x90000 readonly
skipped 1000
MM: Initialise page allocator for dad000(dad000)-40000000(40000000)
MM: done
Demand map pfns at 40001000-0x2040001000.
Heap resides at 2040002000-4040002000.
Initialising timer interface
Initialising console ... done.
gnttab_table mapped at 0x40001000.
Initialising scheduler
Thread "Idle": pointer: 0x0x2040002050, stack: 0x0xfc0000
Thread "xenstore": pointer: 0x0x2040002800, stack: 0x0xfd0000
xenbus initialised on irq 1 mfn 0x240fa5
Thread "shutdown": pointer: 0x0x2040002fb0, stack: 0x0xfe0000
main.c: dummy main: start_info=0x9831e0
Thread "main": pointer: 0x0x2040003760, stack: 0x0xff0000
"main" "(hd0,0)/boot/grub/menu.lst"
vbd 51712 is hd0
******************* BLKFRONT for device/vbd/51712 **********
backend at /local/domain/1/backend/vbd/3/51712

And it never advances beyond this point. "xl list" indicates that client is blocked and waiting, and it remains at 0.1s of CPU time:

xenuser@xenhost:~$ sudo xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0 13024     8     r-----     123.6
storagedd                                    1  2048     1     -b----      23.4
client                                       3  1024     1     -b----       0.1

"xl block-list client" yields an empty list of block devices on the client.
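For reference while reading the logs: the numeric xenbus states that show up in the xl -vvv output and in the backend's "state" node decode as follows (names taken from Xen's public io/xenbus.h header); a quick shell helper I use to remind myself:

```shell
# Map a xenbus state number (as seen in "xl -vvv" output and in a
# backend's "state" node in xenstore) to its XenbusState name.
state_name() {
  case "$1" in
    0) echo Unknown ;;
    1) echo Initialising ;;
    2) echo InitWait ;;
    3) echo Initialised ;;
    4) echo Connected ;;
    5) echo Closing ;;
    6) echo Closed ;;
    *) echo "Invalid ($1)" ;;
  esac
}

state_name 2   # -> InitWait
state_name 4   # -> Connected
```

So the backend reaches 2 (InitWait) and, as far as I can tell, never progresses to 4 (Connected).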
In xenstore, the storage domain does, however, have a backend entry for this disk:

admin@storagedd:~$ sudo xenstore-ls /local/domain/1/backend
vbd = ""
 3 = ""
  51712 = ""
   frontend = "/local/domain/3/device/vbd/51712"
   params = "/dev/loop0"
   script = "/etc/xen/scripts/block"
   frontend-id = "3"
   online = "1"
   removable = "0"
   bootable = "1"
   state = "2"
   dev = "xvda"
   type = "phy"
   mode = "w"
   device-type = "disk"
   discard-enable = "1"

Upon destroying client, the backend entry disappears from xenstore.

If you notice any mistakes in my approach, or have ideas for next steps I could take to debug this, I'd appreciate any input.

Thanks,
Alex

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users