Re: [Xen-devel] Communication between Domain0 and Domain1
> Hello,
>
> I am using 'xen unstable / xenolinux 2.4.26' and I am having some problems
> getting domain0 and domain1 to communicate with each other. The python
> script xc_dom_create.py is not available on the xen unstable release. If
> I try to run the copy of xc_dom_create.py that I have from xen 1.2, I get
> an error on the line "xc = Xc.new()". So this is what I did to start a new
> domain1 on xen unstable. From within domain0 I ran the following:

Don't mix the tools! The 'xm' command replaces all the old xc_* tools.

> 1.) run 'xend start'
>
> 2.) run the script xen_nat_enable that I got from xen 1.2

I'm not sure this script is still useful. We tend to use bridging rather
than routing by default, but I believe bridge-nf still enables you to NAT
onto an outgoing interface. I haven't tried this, though.

> 3.) I ran the following command from within a bash script
>        xm create -c vmid=1 \
>             name=domain1 \
>             kernel=/boot/vmlinuz-2.4.26-xenU \
>             memory=64 \
>             disk=phy:hda9,hda9,w \
>             ipaddr=169.254.1.1 \
>             root=/dev/hda9 ro \
>             ip=169.254.1.1 \
>             gateway=169.254.1.0 \
>             netmask=255.255.0.0 \
>             hostname=host_dom1
>
> Domain1 is loaded and I can successfully log in to domain1
> through the console. Note: leaving 'mingetty tty1' in the
> inittab file of domain1 is fine. I read somewhere that I should
> disable that.

tty1 works fine now, as you've observed.

> Here is the network configuration of domain0
> ===================================================================
> [root@domain0 /]# ifconfig
> eth0      Link encap:Ethernet  HWaddr 00:0E:A6:6B:70:CC
>           inet addr:128.100.241.161  Bcast:128.100.241.255  Mask:255.255.255.0
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:4228 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:400 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:493895 (482.3 Kb)  TX bytes:60050 (58.6 Kb)
>           Interrupt:22 Memory:feafc000-0
>
> eth0:xen  Link encap:Ethernet  HWaddr 00:0E:A6:6B:70:CC
>           inet addr:169.254.1.0  Bcast:169.254.255.255  Mask:255.255.0.0
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
>           Interrupt:22 Memory:feafc000-0
>
> lo        Link encap:Local Loopback
>           inet addr:127.0.0.1  Mask:255.0.0.0
>           UP LOOPBACK RUNNING  MTU:16436  Metric:1
>           RX packets:19836 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:19836 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:1357481 (1.2 Mb)  TX bytes:1357481 (1.2 Mb)
>
> vif1.0    Link encap:Ethernet  HWaddr AA:00:01:75:09:CB
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

When you ran 'xend start' a bridge device should have been created.
Have you got the bridge utils installed (/sbin/brctl)? (There's a quick
sketch of what to check at the end of this mail.)

> Also, it seems that the way to control Xen on the xen unstable release is
> a bit different from xen 1.2. The python scripts to use are a bit
> different. There are a few scripts that I do not know what they do, for
> example xend, netfix and xfrd. That is probably because this is my first
> experience with Python. Is there any documentation?

'xm' is the main user tool -- netfix and xfrd are internal rather than
user tools. We do desperately need to update the documentation -- sorry.
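On the bridging point, here is a rough sketch of what to check from
domain0. This assumes bridge-utils is installed; 'xen-br0' is the bridge
name I'd expect xend's network script to create, but substitute whatever
'brctl show' actually reports on your box:

    # list the bridges and the interfaces attached to each of them
    brctl show

    # if vif1.0 (domain1's backend interface) is not attached, add it by hand
    brctl addif xen-br0 vif1.0

    # bring the backend interface up without giving it an address of its own
    ifconfig vif1.0 0.0.0.0 up

Once vif1.0 is attached to the bridge and up, traffic from domain1 should
reach domain0; if it still doesn't, that narrows the problem down to the
addressing rather than the bridge setup.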
Ian