
Re: [Xen-users] Error: Device 2049 (vbd) could not be connected. Hotplug scripts not working.



Hi,

I think your config is wrong: the disk prefix has to be "phy:/...", not "pty:/...".
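
As far as I can tell, /etc/xen/scripts/block only acts on the prefixes it
knows about ("phy:" for block devices, "file:" for loopback images), so with
an unknown prefix like "pty:" it never writes the hotplug-status node that
xend is waiting on, which gives exactly the "Hotplug scripts not working"
timeout. That would also explain why your file-backed test works. A sketch of
the corrected disk lines, reusing the LVM volumes from your config:

  disk = [ 'phy:/dev/Alyseo/VMroot01,sda1,w',
           'phy:/dev/Alyseo/VMswap01,sda2,w' ]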

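If it still does not connect after the change, you can check by hand whether
the block script ever reports back, e.g. (assuming the xenstore tools from
your build are installed; the path below is the one from your xend.log):

  # xenstore-read /local/domain/0/backend/vbd/11/2049/hotplug-status

For a working device this returns "connected", the same way your syslog shows
vif-bridge writing hotplug-status for vif11.0.
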
regards,
 Ralph

On Tuesday 20 December 2005 09:28, Yacine Kheddache wrote:
> Hi,
>
> I saw a few posts with the same error in the mailing list archive, but none
> seem to be related to the issue I have.
>
> OS is Ubuntu 5.10 "Breezy Badger"
>
> The issue reproduces with the latest xen_changeset:
> # xm info
> system                 : Linux
> host                   : srv002
> release                : 2.6.12.6-xen0
> version                : #2 SMP Mon Dec 19 21:05:01 CET 2005
> machine                : i686
> nr_cpus                : 4
> nr_nodes               : 1
> sockets_per_node       : 2
> cores_per_socket       : 1
> threads_per_core       : 2
> cpu_mhz                : 3591
> hw_caps                : bfebfbff:20100000:00000000:00000080:0000659d
> total_memory           : 6144
> free_memory            : 3394
> xen_major              : 3
> xen_minor              : 0
> xen_extra              : .0
> xen_caps               : xen-3.0-x86_32p
> platform_params        : virt_start=0xf5800000
> xen_changeset          : Thu Dec 15 20:57:27 2005 +0100 8259:5baa96bedc13
> cc_compiler            : gcc version 3.4.5 20050809 (prerelease) (Ubuntu 3.4.4-6ubuntu8)
> cc_compile_by          : root
> cc_compile_domain      : (none)
> cc_compile_date        : Mon Dec 19 21:04:20 CET 2005
>
> # cat debian.3-1.xen-lvm.cfg
> kernel = "/boot/vmlinuz-2.6-xenU"
> memory = 128
> name = "debian.3-1.lvm"
> nics = 1
> dhcp = "dhcp"
> disk = ['pty:/dev/Alyseo/VMroot01,sda1,w',
> 'pty:/dev/Alyseo/VMswap01,sda2,w']
> root = "/dev/sda1 ro"
>
> # xm create debian.3-1.xen-lvm.cfg
> Using config file "debian.3-1.xen-lvm.cfg".
> Error: Device 2049 (vbd) could not be connected. Hotplug scripts not
> working.
>
> # tail -10 syslog
> Dec 20 00:32:05 srv002 logger: /etc/xen/scripts/vif-bridge: online
> XENBUS_PATH=backend/vif/11/0
> Dec 20 00:32:05 srv002 logger: /etc/xen/scripts/block: add
> XENBUS_PATH=backend/vbd/11/2050
> Dec 20 00:32:05 srv002 kernel: device vif11.0 entered promiscuous mode
> Dec 20 00:32:05 srv002 kernel: xenbr0: port 3(vif11.0) entering learning state
> Dec 20 00:32:05 srv002 kernel: xenbr0: topology change detected, propagating
> Dec 20 00:32:05 srv002 kernel: xenbr0: port 3(vif11.0) entering forwarding state
> Dec 20 00:32:05 srv002 logger: /etc/xen/scripts/vif-bridge: Successful
> vif-bridge online for vif11.0, bridge xenbr0.
> Dec 20 00:32:05 srv002 logger: /etc/xen/scripts/vif-bridge: Writing
> backend/vif/11/0/hotplug-status connected to xenstore.
> Dec 20 00:32:05 srv002 logger: /etc/xen/scripts/vif-bridge: online
> XENBUS_PATH=backend/vif/11/0
> Dec 20 00:32:05 srv002 logger: /etc/xen/scripts/vif-bridge: vif11.0 already
> attached to a bridge
>
> # tail -10 xend.log
> [2005-12-20 00:32:05 xend] DEBUG (DevController:409) hotplugStatusCallback
> /local/domain/0/backend/vbd/11/2049/hotplug-status.
> [2005-12-20 00:32:15 xend] ERROR (SrvBase:87) Request wait_for_devices
> failed.
> Traceback (most recent call last):
>   File "/space1/Xen/HG/xen-3.0-testing.hg/dist/install/usr/lib/python/xen/web/SrvBase.py", line 85, in perform
>   File "/space1/Xen/HG/xen-3.0-testing.hg/dist/install/usr/lib/python/xen/xend/server/SrvDomain.py", line 72, in op_wait_for_devices
>   File "/space1/Xen/HG/xen-3.0-testing.hg/dist/install/usr/lib/python/xen/xend/XendDomainInfo.py", line 1349, in waitForDevices
>   File "/space1/Xen/HG/xen-3.0-testing.hg/dist/install/usr/lib/python/xen/xend/XendDomainInfo.py", line 977, in waitForDevices_
>   File "/space1/Xen/HG/xen-3.0-testing.hg/dist/install/usr/lib/python/xen/xend/server/DevController.py", line 135, in waitForDevices
>   File "/space1/Xen/HG/xen-3.0-testing.hg/dist/install/usr/lib/python/xen/xend/server/DevController.py", line 145, in waitForDevice
> VmError: Device 2049 (vbd) could not be connected. Hotplug scripts not working.
>
> # brctl show
> bridge name     bridge id       STP enabled     interfaces
> xenbr0          8000.feffffffffff       no              peth0
>                                                 vif0.0
>                                                 vif11.0
>                                                 vif2.0
>
>
> After many destroys and creates, the domain ID and VIF numbers keep increasing:
> # ip a
> 1: peth0: <BROADCAST,MULTICAST,NOARP,UP> mtu 1500 qdisc pfifo_fast qlen 1000
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 2: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
>     link/ether 00:11:43:e7:56:b2 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.0.11/24 brd 192.168.0.255 scope global eth1
> 3: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
> 4: vif0.0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 5: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether 00:11:43:e7:56:b1 brd ff:ff:ff:ff:ff:ff
> 6: vif0.1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 7: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 8: vif0.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 9: veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 10: vif0.3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 11: veth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 12: vif0.4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 13: veth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 14: vif0.5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 15: veth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 16: vif0.6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 17: veth6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 18: vif0.7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 19: veth7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 20: xenbr0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 22: vif2.0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 24: vif4.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 25: vif5.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 26: vif6.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 27: vif7.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 28: vif8.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 29: vif9.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 30: vif10.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 31: vif11.0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
>
> # ip route sh
> 192.168.0.0/24 dev eth1  proto kernel  scope link  src 192.168.0.11
> default via 192.168.0.1 dev eth1
>
> # lvscan
>   ACTIVE            '/dev/Alyseo/root' [200.00 MB] inherit
>   ACTIVE            '/dev/Alyseo/var' [1.00 GB] inherit
>   ACTIVE            '/dev/Alyseo/usr' [500.00 MB] inherit
>   ACTIVE            '/dev/Alyseo/home' [100.00 MB] inherit
>   ACTIVE            '/dev/Alyseo/tmp' [100.00 MB] inherit
>   ACTIVE            '/dev/Alyseo/space1' [80.00 GB] inherit
>   ACTIVE            '/dev/Alyseo/swap' [1.00 GB] inherit
>   ACTIVE            '/dev/Alyseo/VMroot01' [4.00 GB] inherit
>   ACTIVE            '/dev/Alyseo/VMswap01' [512.00 MB] inherit
>   ACTIVE            '/dev/Alyseo/VMroot02' [4.00 GB] inherit
>   ACTIVE            '/dev/Alyseo/VMswap02' [512.00 MB] inherit
>
> # blockdev --getsize /dev/Alyseo/VMroot01
> 8388608
>
> # blockdev --getsize /dev/Alyseo/VMswap01
> 1048576
>
> I also tried with:
> disk = ['pty:Alyseo/VMroot01,sda1,w', 'pty:Alyseo/VMswap01,sda2,w']
> or
> disk = ['pty:mapper/Alyseo-VMroot01,sda1,w',
> 'pty:mapper/Alyseo-VMswap01,sda2,w']
>
> without any success.
>
> FYI:
> - xm unpause simply kills the domU VM
> - creating the domU VM with a file-backed disk instead of pty works fine
> (same config tested)
> - I do not know if it is related to bug #392
> - /etc/hotplug/xen-backend.agent has been copied by hand (not done during
> dist install)
>
> I'm stuck at this point and would appreciate any help or suggestions.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users