
RE: [Xen-users] Xen a couple of questions


  • To: "Octavian Teodorescu" <octav@xxxxxxxxxxxxxxxx>
  • From: "Petersson, Mats" <Mats.Petersson@xxxxxxx>
  • Date: Tue, 5 Jun 2007 16:12:53 +0200
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 05 Jun 2007 07:12:11 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: Acene3IJ1eg9pgsXSu+tKnba1E/tIAAAByKg
  • Thread-topic: [Xen-users] Xen a couple of questions

 

> -----Original Message-----
> From: Octavian Teodorescu [mailto:octav@xxxxxxxxxxxxxxxx] 
> Sent: 05 June 2007 15:13
> To: Petersson, Mats
> Cc: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-users] Xen a couple of questions
> 
> Tried to create the ramdisk with the blkfront module:
> mkinitrd --with=blkfront initrd-test.img 2.6.20-2925.9.fc7xen
> ... but I get the same error.

What does your config file look like?

--
Mats
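For reference, a minimal PV guest config with a "ramdisk" line might look like the fragment below. This is only a sketch: every path and name in it is invented, not taken from this thread. Xen reads these config files as plain Python assignments, so the fragment can be sanity-checked by running it through a Python interpreter:

```python
# Illustrative PV domU config -- all paths/names here are hypothetical.
kernel  = "/boot/vmlinuz-2.6.20-2925.9.fc7xen"
ramdisk = "/boot/initrd-test.img"   # the initrd built with --with=blkfront
memory  = 256
name    = "fc7-guest"
vif     = ["bridge=xenbr0"]
disk    = ["phy:/dev/VolGroup00/guestvol,xvda,w"]
root    = "/dev/xvda ro"
```

The point relevant to the panic discussed below is that "ramdisk" names an initrd containing blkfront, and that "disk" and "root" agree on the guest-side device name (xvda in this sketch).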
> 
> 
> >>
> >>
> >>> -----Original Message-----
> >>> From: Octavian Teodorescu [mailto:octav@xxxxxxxxxxxxxxxx]
> >>> Sent: 05 June 2007 13:50
> >>> To: Petersson, Mats
> >>> Cc: xen-users@xxxxxxxxxxxxxxxxxxx
> >>> Subject: RE: [Xen-users] Xen a couple of questions
> >>>
> >>> Yep, I have the ramdisk of dom0 also. Without the ramdisk set
> >>> up, I've seen that it can't mount logical volumes as partitions
> >>> (with any kind of kernel).
> >>
> >> Yes, but you need one that uses the blkfront driver if you are
> >> using it in domU's. You can technically create the same one for
> >> both DomU and Dom0, but I think it's easier/better to have
> >> separate ones, as at least you don't risk breaking the working
> >> Dom0 one while trying to set the DomU one up.
> >>
> >> --
> >> Mats
> >>>
> >>> Best regards.
> >>>
> >>> >>
> >>> >>
> >>> >>> -----Original Message-----
> >>> >>> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> >>> >>> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
> >>> >>> Geert Janssens
> >>> >>> Sent: 05 June 2007 13:26
> >>> >>> To: xen-users@xxxxxxxxxxxxxxxxxxx
> >>> >>> Cc: Octavian Teodorescu
> >>> >>> Subject: Re: [Xen-users] Xen a couple of questions
> >>> >>>
> >>> >>> I am sorry, this is way over my head. I'm just a (relatively
> >>> >>> new) user of xen
> >>> >>> myself. I hope someone else can help you here.
> >>> >>>
> >>> >>> Regards,
> >>> >>>
> >>> >>> Geert
> >>> >>>
> >>> >>> On Tuesday 5 June 2007 14:19, Octavian Teodorescu wrote:
> >>> >>> > I had stopped the virtual domain and modified the config
> >>> >>> > file to use the dom0 kernel.
> >>> >>> > Here are the last lines of the guest boot, where you can
> >>> >>> > see the error:
> >>> >>> > "SCSI subsystem initialized
> >>> >>> > device-mapper: ioctl: 4.11.0-ioctl (2006-10-12) initialised:
> >>> >>> > dm-devel@xxxxxxxxxx
> >>> >>> > Kernel panic - not syncing: Attempted to kill init!"
> >>> >>> >
> >>> >>> > But if I use the old kernelU I have, which is actually for
> >>> >>> > Fedora Core 5, then everything is OK.
> >>> >>
> >>> >> Do you have a "ramdisk" entry in your PV domain config? You
> >>> >> probably need one.
> >>> >>
> >>> >> --
> >>> >> Mats
> >>> >>> >
> >>> >>> > >> On Tuesday 5 June 2007 13:11, you wrote:
> >>> >>> > >>> Thanks a lot, I appreciate your help.
> >>> >>> > >>> 1. I tried with the same configuration and with the
> >>> >>> > >>> kernel of dom0, but I receive a lot of errors, on both
> >>> >>> > >>> Fedora Core 7 and CentOS 5 (the systems with which I've
> >>> >>> > >>> tried xen).
> >>> >>> > >>
> >>> >>> > >> Hmm, I don't know what exactly you tried and what is
> >>> >>> > >> failing.
> >>> >>> > >>
> >>> >>> > >> I simply used virt-manager to create a CentOS 5 guest
> >>> >>> > >> on my CentOS 5 dom0. I followed (more or less) the
> >>> >>> > >> guidelines that come with CentOS 5's release:
> >>> >>> > >> http://www.centos.org/docs/5/html/Virtualization-en-US/
> >>> >>> > >>
> >>> >>> > >> I don't remember having particular difficulty with this.
> >>> >>> > >> The only caveat I remember was that Anaconda insists on
> >>> >>> > >> a block device that can hold a partition map. So while
> >>> >>> > >> installing, you can't provide the guest with separate
> >>> >>> > >> partitions, because it will treat these separate
> >>> >>> > >> partitions as complete disks that still have to be
> >>> >>> > >> partitioned.
> >>> >>> > >>
> >>> >>> > >> Regards,
> >>> >>> > >>
> >>> >>> > >> Geert
> >>> >>> > >>
> >>> >>> > >>> 2. Yep, that's a bridge interface. I'll look into your
> >>> >>> > >>> mails about advanced bridging, thanks.
> >>> >>> > >>>
> >>> >>> > >>> Best regards.
> >>> >>> > >>>
> >>> >>> > >>> >> 1. On CentOS 5, Red Hat Enterprise 5, and Fedora
> >>> >>> > >>> >> Core 6 and up, the xen kernel can be used for both
> >>> >>> > >>> >> dom0 and domU. There is no need anymore for two
> >>> >>> > >>> >> kernels.
> >>> >>> > >>> >>
> >>> >>> > >>> >> 2. I don't know the complete answer to your second
> >>> >>> > >>> >> question. From your ifconfig output, it looks as if
> >>> >>> > >>> >> Fedora Core 7 creates a virbr0 interface instead of
> >>> >>> > >>> >> a xenbr0 interface. You could check if this is
> >>> >>> > >>> >> really a bridge with the command "brctl show". You
> >>> >>> > >>> >> probably have to execute this as root.
> >>> >>> > >>> >>
> >>> >>> > >>> >> Then, if I understand your question correctly, you
> >>> >>> > >>> >> are trying to set up a xen guest domain to act as a
> >>> >>> > >>> >> firewall/router/gateway/whatever for your LAN. So I
> >>> >>> > >>> >> assume you only want this guest domain to use the
> >>> >>> > >>> >> external network card (your eth0). There are two
> >>> >>> > >>> >> ways of accomplishing this:
> >>> >>> > >>> >> * either use PCI passthrough so that your dom0
> >>> >>> > >>> >> won't see eth0, but instead it's passed to your
> >>> >>> > >>> >> guest system (search for pciback on Google for more
> >>> >>> > >>> >> info). Unfortunately, I didn't manage to set this up
> >>> >>> > >>> >> in my particular case, so I used the second option:
> >>> >>> > >>> >> * create two xen bridges, one for your external
> >>> >>> > >>> >> network interface and one for your internal network
> >>> >>> > >>> >> interface. Then configure dom0 such that it isn't
> >>> >>> > >>> >> allowed to use the bridge for the external
> >>> >>> > >>> >> interface. You can do this by disabling the virtual
> >>> >>> > >>> >> interface in dom0 (which will be called eth0), by
> >>> >>> > >>> >> setting some firewall rules in dom0, or both.
> >>> >>> > >>> >> You can search this list for one of my earlier
> >>> >>> > >>> >> mails, where I explain my configuration (on CentOS
> >>> >>> > >>> >> 5). It's titled "advanced bridging..." and dated
> >>> >>> > >>> >> May 16th, 2007.
> >>> >>> > >>> >>
> >>> >>> > >>> >> Hopefully this will help you along the way.
> >>> >>> > >>> >>
> >>> >>> > >>> >> Cheers,
> >>> >>> > >>> >>
> >>> >>> > >>> >> Geert
> >>> >>> > >>> >>
> >>> >>> > >>> >> On Tuesday 5 June 2007 10:34, Octavian Teodorescu wrote:
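As a rough sketch of the two options Geert describes above (classic Xen 3.x tooling assumed; the PCI address and interface names below are invented for illustration):

```
# Option 1: PCI passthrough. Hide the NIC from dom0 on the dom0 kernel
# line in grub.conf (the address 0000:02:00.0 is made up):
#   pciback.hide=(0000:02:00.0)
# ...then hand it to the guest in its config file:
#   pci = [ '02:00.0' ]

# Option 2: two bridges (xenbr0 external, xenbr1 internal), with dom0
# kept off the external one, e.g. in dom0:
#   ifconfig eth0 down        # dom0's virtual interface on xenbr0
# ...and/or iptables rules in dom0 dropping traffic on that interface.
```

Whether pciback.hide works as a kernel parameter or needs pciback built as a module (with the hide list passed via modprobe options) depends on how the dom0 kernel was built, so treat the exact syntax as something to verify on your system.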
> >>> >>> > >>> >>> Hi guys,
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> 1. Regarding CentOS and Fedora Core 7 compared
> >>> >>> > >>> >>> with Fedora Core 5: I've seen that on Fedora Core
> >>> >>> > >>> >>> 5, when you want to install xen, you have to
> >>> >>> > >>> >>> install the following packages: xen, kernel-xen0
> >>> >>> > >>> >>> and kernel-xenU (of course with the dependencies
> >>> >>> > >>> >>> needed). But on CentOS, FC7 and, I think, the Red
> >>> >>> > >>> >>> Hat versions, you only have to install xen and
> >>> >>> > >>> >>> kernel-xen; you don't have any kernel for the
> >>> >>> > >>> >>> guest system. In my case I could only start a xen
> >>> >>> > >>> >>> guest (on FC7) with an older kernel-xenU installed
> >>> >>> > >>> >>> from FC version 5.
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> My question is: why do the newer releases of Linux
> >>> >>> > >>> >>> have a prebuilt xen kernel just for dom0, not for
> >>> >>> > >>> >>> the guest systems, so that you can't even find a
> >>> >>> > >>> >>> special domU kernel for those systems?
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> 2. My home network topology is like this:
> >>> >>> > >>> >>> --------
> >>> >>> > >>> >>> -router-
> >>> >>> > >>> >>> --------
> >>> >>> > >>> >>>
> >>> >>> > >>> >>>
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> -----------         ------------
> >>> >>> > >>> >>> -linux xen-   ----  -other 2 pc-
> >>> >>> > >>> >>> -----------         ------------
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> The Linux xen machine has two network interfaces
> >>> >>> > >>> >>> and xen installed. I want:
> >>> >>> > >>> >>> - one Windows machine, virtualized
> >>> >>> > >>> >>> - one Linux machine for which I want to have a
> >>> >>> > >>> >>>   public IP address (to put the IP in the DMZ on
> >>> >>> > >>> >>>   the router), and I want it to use eth0 (so in
> >>> >>> > >>> >>>   this case the traffic cannot be sniffed by other
> >>> >>> > >>> >>>   guest systems or dom0).
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> Running "ifconfig -a" on dom0 shows this:
> >>> >>> > >>> >>> eth0      Link encap:Ethernet  HWaddr 00:00:E8:76:E2:4D
> >>> >>> > >>> >>>           UP BROADCAST MULTICAST  MTU:1500  Metric:1
> >>> >>> > >>> >>>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> >>> >>> > >>> >>>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> >>> >>> > >>> >>>           collisions:0 txqueuelen:1000
> >>> >>> > >>> >>>           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
> >>> >>> > >>> >>>           Interrupt:21 Base address:0x2000
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> eth1      Link encap:Ethernet  HWaddr 00:16:76:B3:16:AB
> >>> >>> > >>> >>>           inet addr:192.168.0.101  Bcast:192.168.0.255  Mask:255.255.255.0
> >>> >>> > >>> >>>           inet6 addr: fe80::216:76ff:feb3:16ab/64 Scope:Link
> >>> >>> > >>> >>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >>> >>> > >>> >>>           RX packets:198578 errors:0 dropped:0 overruns:0 frame:0
> >>> >>> > >>> >>>           TX packets:117290 errors:0 dropped:0 overruns:0 carrier:0
> >>> >>> > >>> >>>           collisions:0 txqueuelen:0
> >>> >>> > >>> >>>           RX bytes:267328989 (254.9 MiB)  TX bytes:8294632 (7.9 MiB)
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> lo        Link encap:Local Loopback
> >>> >>> > >>> >>>           inet addr:127.0.0.1  Mask:255.0.0.0
> >>> >>> > >>> >>>           inet6 addr: ::1/128 Scope:Host
> >>> >>> > >>> >>>           UP LOOPBACK RUNNING  MTU:16436  Metric:1
> >>> >>> > >>> >>>           RX packets:2689 errors:0 dropped:0 overruns:0 frame:0
> >>> >>> > >>> >>>           TX packets:2689 errors:0 dropped:0 overruns:0 carrier:0
> >>> >>> > >>> >>>           collisions:0 txqueuelen:0
> >>> >>> > >>> >>>           RX bytes:12510296 (11.9 MiB)  TX bytes:12510296 (11.9 MiB)
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> peth1     Link encap:Ethernet  HWaddr 00:16:76:B3:16:AB
> >>> >>> > >>> >>>           inet6 addr: fe80::216:76ff:feb3:16ab/64 Scope:Link
> >>> >>> > >>> >>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >>> >>> > >>> >>>           RX packets:198588 errors:0 dropped:0 overruns:0 frame:0
> >>> >>> > >>> >>>           TX packets:117311 errors:0 dropped:0 overruns:0 carrier:0
> >>> >>> > >>> >>>           collisions:0 txqueuelen:100
> >>> >>> > >>> >>>           RX bytes:270906777 (258.3 MiB)  TX bytes:8813848 (8.4 MiB)
> >>> >>> > >>> >>>           Base address:0x40c0 Memory:92200000-92220000
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> vif4.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
> >>> >>> > >>> >>>           inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
> >>> >>> > >>> >>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >>> >>> > >>> >>>           RX packets:9 errors:0 dropped:0 overruns:0 frame:0
> >>> >>> > >>> >>>           TX packets:1 errors:0 dropped:6 overruns:0 carrier:0
> >>> >>> > >>> >>>           collisions:0 txqueuelen:1
> >>> >>> > >>> >>>           RX bytes:1068 (1.0 KiB)  TX bytes:342 (342.0 b)
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> virbr0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
> >>> >>> > >>> >>>           inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
> >>> >>> > >>> >>>           inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
> >>> >>> > >>> >>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >>> >>> > >>> >>>           RX packets:43 errors:0 dropped:0 overruns:0 frame:0
> >>> >>> > >>> >>>           TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
> >>> >>> > >>> >>>           collisions:0 txqueuelen:0
> >>> >>> > >>> >>>           RX bytes:3208 (3.1 KiB)  TX bytes:2018 (1.9 KiB)
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> I don't see any xen bridge, though that's what I
> >>> >>> > >>> >>> think I need: one network card and one xen bridge.
> >>> >>> > >>> >>> I found on Google that I could use the following
> >>> >>> > >>> >>> script:
> >>> >>> > >>> >>> #!/bin/sh
> >>> >>> > >>> >>> dir=$(dirname "$0")
> >>> >>> > >>> >>> "$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
> >>> >>> > >>> >>> "$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=xenbr1
> >>> >>> > >>> >>> "$dir/network-bridge" "$@" vifnum=2 netdev=eth2 bridge=xenbr2
> >>> >>> > >>> >>> And then set it in xend-config.sxp:
> >>> >>> > >>> >>> (network-script matrix-network)
> >>> >>> > >>> >>> But it gives errors that network-script has only
> >>> >>> > >>> >>> start, stop and status. The only thing that
> >>> >>> > >>> >>> succeeds is that I can see a xen bridge.
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> If this would work, doesn't this affect other
> >>> >>> > >>> >>> guest domains also?
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> My question is: how can I set a guest dom to use a
> >>> >>> > >>> >>> network card directly, with a different IP range?
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> Best regards,
> >>> >>> > >>> >>> Octav
> >>> >>> > >>> >>>
> >>> >>> > >>> >>>
> >>> >>> > >>> >>>
> >>> >>> > >>> >>> _______________________________________________
> >>> >>> > >>> >>> Xen-users mailing list
> >>> >>> > >>> >>> Xen-users@xxxxxxxxxxxxxxxxxxx
> >>> >>> > >>> >>> http://lists.xensource.com/xen-users
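A wrapper script along the lines of the one quoted above can be exercised outside xend. The sketch below is illustrative only: the file names are invented, it assumes only two NICs, and it uses a stub in place of Xen's real network-bridge script. The point it demonstrates is the call pattern: xend invokes the script with a verb (start/stop/status), and the wrapper must forward that verb to each network-bridge invocation via "$@":

```shell
# Demo directory standing in for /etc/xen/scripts.
demo=/tmp/xen-scripts-demo
mkdir -p "$demo"

# The multi-bridge wrapper, as in the quoted mail (two NICs assumed):
cat > "$demo/matrix-network" <<'EOF'
#!/bin/sh
# xend calls this as: matrix-network {start|stop|status}
dir=$(dirname "$0")
"$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
"$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=xenbr1
EOF
chmod +x "$demo/matrix-network"

# Stub network-bridge so the wrapper can be tested without Xen:
cat > "$demo/network-bridge" <<'EOF'
#!/bin/sh
echo "network-bridge $*"
EOF
chmod +x "$demo/network-bridge"

"$demo/matrix-network" start
# -> network-bridge start vifnum=0 netdev=eth0 bridge=xenbr0
# -> network-bridge start vifnum=1 netdev=eth1 bridge=xenbr1
```

On a real system the wrapper would live in /etc/xen/scripts and be referenced from /etc/xen/xend-config.sxp as (network-script matrix-network). Adding bridges this way does make more bridges available to all guests, but each guest only attaches to the bridge named in its own vif line, e.g. vif = ["bridge=xenbr1"].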
> >>> >>> > >>> >>
> >>> >>> > >>> >> --
> >>> >>> > >>> >> Kobalt W.I.T.
> >>> >>> > >>> >> Web & Information Technology
> >>> >>> > >>> >> Brusselsesteenweg 152
> >>> >>> > >>> >> 1850 Grimbergen
> >>> >>> > >>> >>
> >>> >>> > >>> >> Tel  : +32 479 339 655
> >>> >>> > >>> >> Email: info@xxxxxxxxxxxx
> >>> >>> > >>> >>
> >>> >>> > >>> >> _______________________________________________
> >>> >>> > >>> >> Xen-users mailing list
> >>> >>> > >>> >> Xen-users@xxxxxxxxxxxxxxxxxxx
> >>> >>> > >>> >> http://lists.xensource.com/xen-users
> >>> >>> > >>
> >>> >>> > >> --
> >>> >>> > >> Kobalt W.I.T.
> >>> >>> > >> Web & Information Technology
> >>> >>> > >> Brusselsesteenweg 152
> >>> >>> > >> 1850 Grimbergen
> >>> >>> > >>
> >>> >>> > >> Tel  : +32 479 339 655
> >>> >>> > >> Email: info@xxxxxxxxxxxx
> >>> >>> > >>
> >>> >>> > >> _______________________________________________
> >>> >>> > >> Xen-users mailing list
> >>> >>> > >> Xen-users@xxxxxxxxxxxxxxxxxxx
> >>> >>> > >> http://lists.xensource.com/xen-users
> >>> >>> >
> >>> >>> > _______________________________________________
> >>> >>> > Xen-users mailing list
> >>> >>> > Xen-users@xxxxxxxxxxxxxxxxxxx
> >>> >>> > http://lists.xensource.com/xen-users
> >>> >>>
> >>> >>> --
> >>> >>> Kobalt W.I.T.
> >>> >>> Web & Information Technology
> >>> >>> Brusselsesteenweg 152
> >>> >>> 1850 Grimbergen
> >>> >>>
> >>> >>> Tel  : +32 479 339 655
> >>> >>> Email: info@xxxxxxxxxxxx
> >>> >>>
> >>> >>> _______________________________________________
> >>> >>> Xen-users mailing list
> >>> >>> Xen-users@xxxxxxxxxxxxxxxxxxx
> >>> >>> http://lists.xensource.com/xen-users
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>
> >>> >>
> >>> >>
> >>> >> _______________________________________________
> >>> >> Xen-users mailing list
> >>> >> Xen-users@xxxxxxxxxxxxxxxxxxx
> >>> >> http://lists.xensource.com/xen-users
> >>> >>



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

