
Re: [Xen-devel] question about PXE



On Sat, 25 Sep 2004 09:47:25 +0100
Ian Pratt <Ian.Pratt@xxxxxxxxxxxx> wrote:

>  
> > > It creates a bridge and then attempts to transfer the original
> > > network setup over to the bridge. Posting the output of
> > > "ip link show" and "ip route show" before and after should help
> > > figure out what's going wrong.
> > 
> > Adding 'bash -x' didn't produce any output, but having the console back
> > let me see the problem: NFS was hanging. Problem solved.  Sorry, I should
> > have connected the dots; there was a message about this earlier...
> 
> Adding '-x' should have generated more log output to
> /var/log/xend

I see, whoops.  I guess Twisted diverts stdout.
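
In case it helps anyone else, here is the before/after snapshot Ian asked
for, plus a rough sketch of the sort of thing the bridge setup has to do
by hand (interface names and addresses below are only examples, and I'm
assuming the default xen-br0 bridge name):

    # snapshot the host networking before and after xend starts
    ip link show
    ip route show

    # roughly the kind of moves the network script has to make, since the
    # kernel bridge won't carry the config over by itself (don't run this
    # over the connection you're about to move)
    brctl addbr xen-br0
    brctl addif xen-br0 eth0
    ip link set xen-br0 up
    ip addr del 192.168.0.2/24 dev eth0      # dropping this drops the old routes too
    ip addr add 192.168.0.2/24 dev xen-br0   # same example address, now on the bridge
    ip route add default via 192.168.0.1 dev xen-br0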
 
> It's rather unfortunate that we need to move all the addresses
> and routes to the bridge. We should probably try convincing the
> linux bridge maintainers that the current behaviour isn't helpful.
> 
> > They are.  As an experiment, I wiped the /lib/modules/2.4.27-xen0
> > directory and redid 'make world' and 'make install' (after recompiling
> > to fix the serial IRQ 4 issue) and still hit the same error.  Earlier I
> > didn't include this message from the kernel ring buffer: 
> > 
> > "Universal TUN/TAP device driver 1.5 (C)1999-2002 Maxim Krasnyansky
> > tun: Can't register misc device 200"
> > (/dev/net/tun created by "mknod /dev/net/tun c 10 200")
> 
> Damn. Guess what major/minor got picked for Xen's evtchn
> driver: 10,200 :-(   
> 
> Moving this shouldn't be too bad, as it's only xend that talks to
> evtchn, so we could modify xend to re-mknod our evtchn device. 
> 
> The question is, what major,minor would be best to go for? Pretty
> much all the numbers in the misc range have already been taken.
> At the very least, stealing something like the atarimouse's number
> would have been smarter than tun's.
> 

Great.  I'm of no help with the question, but:

On Sat, 25 Sep 2004 10:05:55 +0100
Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> wrote:
[...]
> I'm moving it to 10,201 and have added code to xend so it will
> automatically recreate the device file if it has the wrong
> major,minor. 

Awesome, I'll be moving along then.
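
For the archives, recreating the device nodes by hand is just a couple of
mknods.  The tun numbers come from the kernel message quoted above; the
evtchn path below is only my guess, so check where your install actually
expects it:

    # tun at its usual misc minor (from the message above)
    mknod /dev/net/tun c 10 200
    # evtchn at its new minor, 10,201; /dev/xen/evtchn is a guess
    mkdir -p /dev/xen
    mknod /dev/xen/evtchn c 10 201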


On Sat, 25 Sep 2004 09:47:25 +0100
Ian Pratt <Ian.Pratt@xxxxxxxxxxxx> wrote:
> > The objective is to start VMs on remote grid nodes and L2 bridge all of
> > their network traffic to another network.  There, they are used as a
> > backend for a grid node, i.e., a completely portable and custom
> > environment to run jobs in.
> 
> When it's ready for release, you might find that Mike Wray's vnet
> driver is actually a more convenient way of doing this.

You mentioned that before when I was considering 'VNET', but I never
found any information about vnet (nor did I realize Mike Wray was on
this list!).  OpenVPN is going well, though: it runs in userspace, which
helps when convincing the remote admins (unless of course they're running
Xen, in which case they're our friends), and it supports a slew of options,
including the PKI we need.  But I would be extremely interested in
hearing more about vnet.

Mike, is this meant to be incorporated into Xen?  Is it L2 or
point-to-point?  Does it require any participation from the VMs?
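
For comparison, the per-VM plumbing on my end currently looks roughly
like this (interface and bridge names are only examples, and vif1.0
stands in for whichever vif the VM actually gets):

    # persistent tap for this VM; it becomes one end of the L2 tunnel
    openvpn --mktun --dev tap0
    # bridge the VM's vif straight onto the tap
    brctl addbr vmbr0
    brctl addif vmbr0 tap0
    brctl addif vmbr0 vif1.0
    ip link set tap0 up
    ip link set vmbr0 up
    # OpenVPN then carries tap0's frames to the remote network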

Thanks,

Tim


> 
> > The VMs themselves have no hand in the bridging (by design).  I make a
> > tap interface on the host resource for each VM and bridge the VM
> > directly to each tap interface (this tap interface is one end of the L2
> 
> Ian
> 




 

