
Re: [Xen-users] Bridging



It's amazing how much clearer things are when I actually read all of the instructions...

Everything seems good now: the boot.ini option is in place, only the single Xen NIC shows up, and NIC bridging is working fine.
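(For anyone searching the archives later: the /GPLPV switch is appended to the end of the OS entry in boot.ini. A minimal sketch below; the ARC paths and other option flags are illustrative and will differ per install. Keeping a second entry without the switch gives you a fallback boot if the PV drivers misbehave.)

```ini
[boot loader]
timeout=5
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP (GPLPV)" /noexecute=optin /fastdetect /GPLPV
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP (standard)" /noexecute=optin /fastdetect
```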

Thanks!


----- "Dustin Henning" <Dustin.Henning@xxxxxxxxxxx> wrote:

Regarding the GPLPV drivers, I am running 0.9.10, and I don't ever see both NICs. A lot has changed in the 0.9.11-preX versions, though, so you might have to review discussions on the list. Is this what you see when you boot with /GPLPV in boot.ini, or have you not taken that step? This may be normal now when that step isn't taken.

Dustin

 


> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Wendell Dingus
> Sent: Monday, October 06, 2008 13:55
> To: xen-users
> Subject: Re: [Xen-users] Bridging

 

> FYI, we apparently resolved this with a bit of trial and error today. We removed the references to xenbr0 or any other device from the vif line, and the interface is bridged and Windows DHCP's an address from the physical network just fine:
>
> vif = [ "mac=00:16:3e:6b:e7:e2" ]
>
> instead of the previous:
>
> vif = [ "mac=00:16:3e:6b:e7:e2,bridge=virbr0,script=vif-bridge" ]
>
> Also at the same time we edited xend-config.sxp to add a netdev= line here:
>
> (network-script network-bridge netdev=eth0)
>
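(A rough way to confirm that change took effect, for later readers: restart xend, then check that the guest's vif landed on the same bridge as peth0. The domain name below is illustrative.)

```shell
# Restart xend so the new (network-script network-bridge netdev=eth0)
# line is picked up; shut down running domUs first.
service xend restart

# peth0 and the guest's vifN.0 should now appear on the same bridge.
brctl show

# List the guest's virtual NICs and which bridge each is attached to
# ("winxp" is an illustrative domain name).
xm network-list winxp
```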
> I still have to wonder if something isn't 100%, though: the Windows guest still has two NICs, a RealTek one and a Xen one, with the Xen one still showing "cable unplugged". Is that normal? Just an indicator of the maturity of the PV drivers? Not that I'm complaining, this ability is fantastic...
>
> Thanks.
>
>
> ----- "Wendell Dingus" <wendell@xxxxxxxxxxxxx> wrote:
> >

> > We just installed CentOS 5.2 on a SuperMicro server and went to install a VM, and the "share a physical network device" option had nothing that could be selected. I'm thinking that possibly it has to do with the peculiarities of the NICs in this box, maybe...
> >
> > Before the domu is started:
> >
> > # brctl show
> > bridge name     bridge id               STP enabled     interfaces
> > virbr0          8000.000000000000       yes
> > xenbr0          8000.feffffffffff       no              peth0
> >                                                         vif0.0
> >
> > # /etc/xen/scripts/network-bridge status
> > ============================================================
> > 7: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> >     link/ether 00:30:48:c4:55:fa brd ff:ff:ff:ff:ff:ff
> >     inet 204.87.213.163/24 brd 204.87.213.255 scope global eth0
> >     inet6 fe80::230:48ff:fec4:55fa/64 scope link
> >        valid_lft forever preferred_lft forever
> > 14: xenbr0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue
> >     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
> >  
> > bridge name     bridge id               STP enabled     interfaces
> > virbr0          8000.000000000000       yes
> > xenbr0          8000.feffffffffff       no              peth0
> >                                                         vif0.0
> >  
> > 204.87.xx.0/24 dev eth0  proto kernel  scope link  src 204.87.xx.163
> > 192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1
> > 169.254.0.0/16 dev eth0  scope link
> > default via 204.87.xx.1 dev eth0
> >  
> > Kernel IP routing table
> > Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> > 204.87.xx.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
> > 192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
> > 169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
> > 0.0.0.0         204.87.xx.1    0.0.0.0         UG    0      0        0 eth0
> > ============================================================
> >
> > After it is started:
> >
> > # brctl show
> > bridge name     bridge id               STP enabled     interfaces
> > virbr0          8000.0a9c119ed9dd       yes             vif10.0
> >                                                         tap0
> > xenbr0          8000.feffffffffff       no              peth0
> >                                                         vif0.0
> >
> > The odd thing I noticed, graphically in system-config-network actually, is that peth0 seems to be associated with eth1 instead of eth0. Might just be an oddity of how this graphical utility represents things, not sure. The eth0 hardware (according to this pointy-clicky tool) is e1000e and the eth1 hardware is "Intel Corporation 80003". Physically the eth0 interface has a cable and nothing in the other. Yet in the hardware tab of this tool it shows peth0 as being the "Intel Corporation 80003" hardware. Like it's linked to the physical eth1(?)
> >
> > Since I couldn't select it initially I went ahead and completed the install with default networking then edited the config and changed the vif line to have ,bridge=xenbr0 instead of the ,bridge=virbr0,script=vif-bridge that was put in there automatically. In neither case is the domu successfully bridged though.
> >
> > # lspci | grep -i eth
> > 06:00.0 Ethernet controller: Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper) (rev 01)
> > 06:00.1 Ethernet controller: Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper) (rev 01)
> >
> > # grep -i eth /etc/modprobe.conf
> > alias eth0 e1000e
> > alias eth1 e1000e
> > alias peth0 e1000e
> >
> > Thanks.
> >
> > PS. I installed both another CentOS domu as well as a Windows XP one. On the XP one I got the newest compiled PV drivers and installed those. Afterwards I have both a Realtek and a Xen NIC defined in Windows. The Realtek one says it has a cable plugged in but the Xen one does not. Not sure if that's a symptom of the previous problem leaking through or if I did something wrong setting that up in Windows.
> >



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

