
[Xen-users] hvm networking with vif-route



Greetings xen gurus...
 
I'm working with xen-3.0-testing downloaded on July 3, 2006.  I have configured an HVM domain and want to use the vif-route and network-route scripts for its networking.
 
The domU starts up fine.  I was able to complete the installation of the OS (RHES3) and can restart the domain repeatedly.  However, I cannot ping the domU's network interface from dom0 using the standard configuration scripts.  The VIF interface gets created by vif-route in dom0, but appears to have no link to the domU.  The qemu-dm log file says that the qemu-ifup script could not be run, but that script only sets up a bridge, which I am not using.
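 
For reference, the stock /etc/xen/qemu-ifup on my system boils down to roughly the following (paraphrased from memory; I'm assuming the tap/tun interface name comes in as $1 and the bridge as $2, which is part of what I'm asking about below):
 
    #!/bin/sh
    # Rough sketch of the stock script: it just puts the emulated NIC's
    # tap/tun interface onto a bridge, which is no use for a routed setup.
    ifconfig "$1" 0.0.0.0 up
    brctl addif "$2" "$1"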
 
I noticed that a tun0 interface also gets created.  I searched through the source code and, as far as I can understand it, this interface is created as a tunnel into the HVM domU and then bridged in dom0.  I added a bridge name in the sxp script for the domU (rhes3.hvm), but no bridge gets created.  I cannot find the syntax of the qemu-ifup invocation to determine what arguments are passed to it.
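 
One thing I plan to try (untested sketch) is temporarily replacing /etc/xen/qemu-ifup with a wrapper that just records whatever qemu-dm passes to it, so I can at least see the interface name and any bridge argument (the log path is just one I picked):
 
    #!/bin/sh
    # Temporary stand-in for /etc/xen/qemu-ifup: log the arguments and
    # environment qemu-dm hands us, then exit 0 so the script is not
    # reported as failed.
    echo "qemu-ifup called with: $*" >> /var/log/qemu-ifup-debug.log
    env >> /var/log/qemu-ifup-debug.log
    exit 0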
 
I was finally able to ping the domU by manually assigning an IP address to tun0 on the same subnet as the domU and updating the routing table in dom0 to point that subnet at tun0.  My problem with this configuration is that I can't find any way to identify the tun# interface created for each domU, so I can assign the correct subnet to it.  Also, if I specify the 'ip=' parameter in the sxp script, that address is attached to the VIF interface and a second route is added to that interface, blocking traffic to the tunnel.
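 
Concretely, the manual workaround looks something like this in dom0 (addresses are examples only; substitute the real domU subnet):
 
    # Give tun0 an address with a /32 netmask, the way vif-route does for
    # VIF interfaces, then point the domU's subnet at the tunnel.
    ifconfig tun0 192.168.11.1 netmask 255.255.255.255 up
    route add -net 192.168.11.0 netmask 255.255.255.0 dev tun0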
 
In Xen 2.0.7 I was able to route by setting the vifname in the sxp script and assigning IP addresses to the VIF interfaces in vif-route.  Xen 3.0 doesn't let me assign or configure the tun# interfaces used by HVM domains.
 
Can someone help me figure out how to identify and/or specify the tunnel interface attached to a domU?  Or maybe show me how and where to customize the ioemu source code (vl.c) so I can hardcode the networking parameters?
 
 
 
SXP script:
 
*******
 
cat /etc/xen/rhes3.hvm
#  -*- mode: python; -*-
#============================================================================
# Python configuration setup for 'xm create'.
# This script sets the parameters used when a domain is created using 'xm create'.
# You use a separate script for each domain you want to create, or
# you can set the parameters for the domain on the xm command line.
#============================================================================
 
import os, re
arch = os.uname()[4]
if re.search('64', arch):
    arch_libdir = 'lib64'
else:
    arch_libdir = 'lib'
 
#----------------------------------------------------------------------------
# Kernel image file.
kernel = "/usr/lib/xen/boot/hvmloader"
 
# The domain build function. HVM domain uses 'hvm'.
builder='hvm'
 
# Initial memory allocation (in megabytes) for the new domain.
memory = 512
 
# A name for your domain. All domains must have different names.
name = "VTD1"
 
#-----------------------------------------------------------------------------
# the number of CPUs the guest platform has, default=1
vcpus=1
 
# enable/disable HVM guest PAE, default=0 (disabled)
#pae=0
 
# enable/disable HVM guest ACPI, default=0 (disabled)
#acpi=0
 
# enable/disable HVM guest APIC, default=0 (disabled)
#apic=0
 
# List of which CPUS this domain is allowed to use, default Xen picks
#cpus = ""         # leave to Xen to pick
#cpus = "0"        # all vcpus run on CPU0
#cpus = "0-3,5,^1" # run on cpus 0,2,3,5
 
# Optionally define mac and/or bridge for the network interfaces.
# Random MACs are assigned if not given.
#vif = [ 'type=ioemu, mac=00:16:3e:00:00:11, bridge=xenbr0' ]
# type=ioemu specifies that the NIC is an ioemu device, not netfront
vif = [ 'type=ioemu, bridge=xenbr1, mac=00:16:3e:00:10:11' ]
 
#----------------------------------------------------------------------------
# Define the disk devices you want the domain to have access to, and
# what you want them accessible as.
# Each disk entry is of the form phy:UNAME,DEV,MODE
# where UNAME is the device, DEV is the device name the domain will see,
# and MODE is r for read-only, w for read-write.
 
#disk = [ 'phy:hda1,hda1,r' ]
#disk = [ 'file:/var/images/min-el3-i386.img,ioemu:hda,w' ]
 
disk = [ 'phy:/dev/fsivg01/xenlv01,ioemu:hda,r' ,
  'phy:/dev/fsivg01/d3lv01,ioemu:hdb,r' ]
 
#----------------------------------------------------------------------------
# Configure the behaviour when a domain exits.  There are three 'reasons'
# for a domain to stop: poweroff, reboot, and crash.  For each of these you
# may specify:
#
#   "destroy",        meaning that the domain is cleaned up as normal;
#   "restart",        meaning that a new domain is started in place of the old
#                     one;
#   "preserve",       meaning that no clean-up is done until the domain is
#                     manually destroyed (using xm destroy, for example); or
#   "rename-restart", meaning that the old domain is not cleaned up, but is
#                     renamed and a new domain started in its place.
#
# The default is
#
#   on_poweroff = 'destroy'
#   on_reboot   = 'restart'
#   on_crash    = 'restart'
#
# For backwards compatibility we also support the deprecated option restart
#
# restart = 'onreboot' means
#                            on_reboot   = 'restart'
#                            on_crash    = 'destroy'
#
# restart = 'always'   means
#                            on_reboot   = 'restart'
#                            on_crash    = 'restart'
#
# restart = 'never'    means
#                            on_reboot   = 'destroy'
#                            on_crash    = 'destroy'
 
#
#on_reboot   = 'restart'
#on_crash    = 'restart'
 
#============================================================================
 
# New stuff
device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
 
#-----------------------------------------------------------------------------
# Disk image for the (optional) CD-ROM drive.
#cdrom='/tmp/rhes3u5.iso'
 
#-----------------------------------------------------------------------------
# boot on floppy (a), hard disk (c) or CD-ROM (d)
#boot=[a|c|d]
boot='c'
 
#-----------------------------------------------------------------------------
#  write to temporary files instead of disk image files
#snapshot=1
 
#----------------------------------------------------------------------------
# enable SDL library for graphics, default = 0
sdl=1
 
#----------------------------------------------------------------------------
# enable VNC library for graphics, default = 1
vnc=0
 
#----------------------------------------------------------------------------
# enable spawning vncviewer (only valid when vnc=1), default = 1
vncviewer=0
 
#----------------------------------------------------------------------------
# no graphics, use serial port
#nographic=0
 
#----------------------------------------------------------------------------
# enable stdvga, default = 0 (use cirrus logic device model)
stdvga=0
 
#-----------------------------------------------------------------------------
#   serial port re-directed to a pty device, /dev/pts/n
#   then xm console or minicom can connect
serial='pty'
 
#----------------------------------------------------------------------------
# enable ne2000, default = 0 (use pcnet)
ne2000=0
 

#-----------------------------------------------------------------------------
#   enable audio support
#audio=1
 

#-----------------------------------------------------------------------------
#    set the real time clock to local time [default=0 i.e. set to utc]
#localtime=1
 

#-----------------------------------------------------------------------------
#    start in full screen
#full-screen=1  
***********
 
 
QEMU-DM Log File:
 

domid: 8
qemu: the number of cpus is 1
Connected to host network interface: tun0
/etc/xen/qemu-ifup: could not launch network script
shared page at pfn:20401, mfn: 3898f
Could not open CD /dev/fsivg01/d3lv01.
char device redirected to /dev/pts/1
False I/O request ... in-service already: 0, pvalid: 0, port: 0, data: 0, count: 0, size: 0
HVM Loader
Loading ROMBIOS ...
Loading Cirrus VGABIOS ...
Loading VMXAssist ...
VMX go ...
VMXAssist (Jul  3 2006)
Memory size 512 MB
E820 map:
0000000000000000 - 000000000009F800 (RAM)
000000000009F800 - 00000000000A0000 (Reserved)
00000000000A0000 - 00000000000C0000 (Type 16)
00000000000F0000 - 0000000000100000 (Reserved)
0000000000100000 - 000000001FFFE000 (RAM)
000000001FFFE000 - 000000001FFFF000 (Type 18)
000000001FFFF000 - 0000000020000000 (Type 17)
0000000020000000 - 0000000020003000 (ACPI NVS)
0000000020003000 - 000000002000D000 (ACPI Data)
00000000FEC00000 - 0000000100000000 (Type 16)
 
Start BIOS ...
Starting emulated 16-bit real-mode: ip=F000:FFF0
 rombios.c,v 1.138 2005/05/07 15:55:26 vruppert Exp $
HVM_PIT:guest init pit channel 0!
HVM_PIT:pass info 0xc00e90b to HV!
Remapping master: ICW2 0x8 -> 0x20
Remapping slave: ICW2 0x70 -> 0x28
VGABios $Id: vgabios.c,v 1.61 2005/05/24 16:50:50 vruppert Exp $
 
set_map result i 0 result 3898b
set_map result i 1 result 38988
...yatta, yatta, yatta,
 
set_map result i 3ff result 38589
HVMAssist BIOS, 1 cpu, $Revision: 1.138 $ $Date: 2005/05/07 15:55:26 $
 
ata0-0: PCHS=10402/16/63 translation=lba LCHS=652/255/63
ata0 master: QEMU HARDDISK ATA-2 Hard-Disk (5120 MBytes)
ata0-1: PCHS=10402/16/63 translation=lba LCHS=652/255/63
ata0  slave: QEMU HARDDISK ATA-2 Hard-Disk (5120 MBytes)
 
Booting from Hard Disk...
int13_harddisk: function 41, unmapped device for ELDL=82
int13_harddisk: function 08, unmapped device for ELDL=82
*** int 15h function AX=00C0, BX=0000 not yet supported!
KBD: unsupported int 16h function 03
int13_harddisk: function 41, unmapped device for ELDL=82
HVM_PIT:guest init pit channel 0!
HVM_PIT:pass info 0xc002e9c to HV!
Bad SWSTYLE=0x04
Thanks...

Max Baro
meb@xxxxxxxxxxxxxxxxx

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

