Re: [Xen-users] problem for add second bridge xenbr1
On Fri, Feb 18, 2011 at 11:07 AM, Bruno Steven <aspenbr@xxxxxxxxx> wrote:
>
> Dear,
>
> I need to create a second bridge, xenbr1, to separate the traffic
> between my virtual machines. I followed the steps below, but I get a
> message that link veth0 is missing. How can I create veth0?

Creating bridges manually is often a better approach. There are some
examples in the Xen 4.1+ guide:

http://wiki.xensource.com/xenwiki/MigrationGuideToXen4.1%2B

The network examples should apply equally well to older versions of
Xen. You'll just want to disable the network-bridge and vif-bridge
scripts in your xend config file by commenting them out. A few hedged
sketches of both approaches follow below the quoted message.

Hope that helps,
Todd

> When I run the script to create xenbr1:
>
>   ./network-bridge netdev=eth1 bridge=xenbr1 start
>
> it shows this message:
>
>   Link veth0 is missing.
>   This may be because you have reached the limit of the number of
>   interfaces that the loopback driver supports. If the loopback driver
>   is a module, you may raise this limit by passing it as a parameter
>   (nloopbacks=<N>); if the driver is compiled statically into the
>   kernel, then you may set the parameter using loopback.nloopbacks=<N>
>   on the domain 0 kernel command line.
>
> There is no veth0 interface:
>
> [root@XEN01 scripts]# ip link
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: peth0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 100
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
>     link/ether 00:30:4f:79:2d:fa brd ff:ff:ff:ff:ff:ff
> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
>     link/ether 00:30:4f:79:2d:ff brd ff:ff:ff:ff:ff:ff
> 5: sit0: <NOARP> mtu 1480 qdisc noop
>     link/sit 0.0.0.0 brd 0.0.0.0
> 6: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 7: vif0.0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>     link/ether 00:15:17:24:d5:a8 brd ff:ff:ff:ff:ff:ff
> 9: vif0.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 10: veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 11: vif0.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 12: veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 13: vif0.3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 14: veth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 15: vif0.4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 16: veth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 17: vif0.5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> 18: veth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 19: xenbr0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue
>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
>
> --
> Bruno Steven - Systems administrator.
> CompTIA Security+ - Code: JYN7BD9BJGRECFM8
> LPIC-1 - LPI ID: lpi000119659 / Code: p2e4wz47e4
> MCP/MCSA Windows 2003 - TranscriptID: 793804 / Access Code: 080089100
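For reference, a minimal sketch of the manual-bridge approach, assuming
bridge-utils is installed and that eth1 (from the quoted ip link output)
is the NIC to dedicate to xenbr1; the commands are illustrative, not
taken from the linked guide:

  # create the second bridge by hand and attach the physical NIC to it
  brctl addbr xenbr1
  brctl addif xenbr1 eth1

  # bring both up; the bridge needs no IP address of its own if it
  # only carries guest traffic
  ip link set eth1 up
  ip link set xenbr1 up

On Red Hat-style systems the same bridge can be made persistent with an
ifcfg-xenbr1 network script; the exact mechanism varies by distribution.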
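On the xend side, a sketch of the config change Todd mentions, assuming
the stock /etc/xen/xend-config.sxp layout; the MAC address in the guest
line is hypothetical:

  # /etc/xen/xend-config.sxp: comment out the network script so xend
  # stops creating and renaming bridges itself
  #(network-script network-bridge)

  # each guest config then names its bridge explicitly, e.g.
  vif = [ 'mac=00:16:3e:aa:bb:cc, bridge=xenbr1' ]

Whether the vif-script line also needs to be disabled depends on the
setup; the migration guide linked above covers the details.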
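Finally, if you stay with the network-bridge script, a sketch of the
remedy the quoted error message itself proposes, raising the number of
loopback/veth pairs. The parameter names come from the message; the
module name netloop and the value 8 are assumptions:

  # if the loopback driver is a module (netloop is an assumption;
  # the name depends on the dom0 kernel build):
  modprobe -r netloop
  modprobe netloop nloopbacks=8

  # if it is compiled statically into the kernel, append the boot
  # parameter to the dom0 kernel line in grub.conf instead, e.g.:
  #   kernel /vmlinuz-2.6.18-xen ro root=... loopback.nloopbacks=8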
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users