
RE: [Xen-users] Problems installing guest domains


  • To: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>, Nathan Eisenberg <nathan@xxxxxxxxxxxxxxxx>
  • From: Boris Derzhavets <bderzhavets@xxxxxxxxx>
  • Date: Mon, 30 Mar 2009 01:00:07 -0700 (PDT)
  • Cc:
  • Delivery-date: Mon, 30 Mar 2009 01:01:55 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

>I don't think installation from an ISO is supported or generally done.
>Typically, you'd use xen-tools or 'rinse' directly to do the install
> for a CentOS guest, just like debootstrap is used for Debian-based guests.

Debootstrap has nothing to do with the issues described below.
The standard virt-install procedure on Xen-enabled RHEL 5.x servers requires
a local (or remote) NFS share, or an HTTP mirror created by loop-mounting the
corresponding ISO image onto an NFS-exported directory or onto a directory
such as /var/www/html/rhel, with the local httpd daemon up and running
(Apache HTTP Server on RHEL).
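
For example, a rough sketch of the HTTP-mirror variant (the directory name
and ISO path below are only placeholders, assuming the default RHEL/CentOS
Apache DocumentRoot of /var/www/html):

# mkdir -p /var/www/html/centos52
# mount -o loop /vm/CentOS-5.2-x86_64-bin-DVD.iso /var/www/html/centos52
# service httpd start

The installation tree is then reachable as http://localhost/centos52 and can
be handed to the installer or to virt-install's --location option.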

Boris.


--- On Sun, 3/29/09, Nathan Eisenberg <nathan@xxxxxxxxxxxxxxxx> wrote:
From: Nathan Eisenberg <nathan@xxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] Problems installing guest domains
To: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Date: Sunday, March 29, 2009, 3:16 PM

I don't think installation from an ISO is supported or generally done.
Typically, you'd use xen-tools or 'rinse' directly to do the install
for a CentOS guest, just like debootstrap is used for Debian-based guests.

Best Regards
Nathan Eisenberg
Sr. Systems Administrator
Atlas Networks, LLC
support@xxxxxxxxxxxxxxxx
http://support.atlasnetworks.us/portal

-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Andrew Kilham
Sent: Saturday, March 28, 2009 8:17 PM
To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Problems installing guest domains

Hi,

I am trying out Xen for the first time and I am having a few problems
getting it working. The computer is a quad-core Intel Xeon with VT
enabled, 8 GB of RAM and two 15,000 RPM SAS drives in RAID 1.

I have installed CentOS 5 64-bit and installed Xen 3.3.0 via yum. I have
successfully booted into dom0. Here is my grub.conf on dom0:

> # grub.conf generated by anaconda
> #
> # Note that you do not have to rerun grub after making changes to this file
> # NOTICE: You have a /boot partition. This means that
> # all kernel and initrd paths are relative to /boot/, eg.
> # root (hd0,0)
> # kernel /vmlinuz-version ro root=/dev/sda2
> # initrd /initrd-version.img
> #boot=/dev/sda
> default=0
> timeout=5
> splashimage=(hd0,0)/grub/splash.xpm.gz
> hiddenmenu
> title CentOS (2.6.18-92.1.22.el5xen)
>         root (hd0,0)
>         kernel /xen.gz-3.3.0
>         module /vmlinuz-2.6.18-92.1.22.el5xen ro root=LABEL=/
>         module /initrd-2.6.18-92.1.22.el5xen.img
> title CentOS (2.6.18-92.el5)
>         root (hd0,0)
>         kernel /vmlinuz-2.6.18-92.el5 ro root=LABEL=/
>         initrd /initrd-2.6.18-92.el5.img

However, I can't for the life of me install a guest domain. I have been
Googling for the last 3 days and I am extremely confused - it seems like
there are multiple ways to do it, but none are working for me.

I want to use file-based HDDs for the guests, and for now I want to
install from an ISO of CentOS 5 that is on my hard drive.


First I tried using the "virt-install" script. Should I be able to
install a fully virtualized guest, or will only paravirtualized work?
I will show what happens when I try both.

If I try to install it as a paravirtualized guest, I run this
command:

> virt-install -n test1 -r 512 -f /vm/test1.img -s 5
> --location=/vm/CentOS-5.2-x86_64-bin-DVD.iso
************************************************************
A walkthrough for virt-install:
http://lxer.com/module/newswire/view/95262/index.html
In general, --location should point at an NFS share or at a local
HTTP mirror served by Apache, not at an ISO file.
************************************************************
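For instance, assuming the DVD ISO has already been loop-mounted under the
Apache document root (http://localhost/centos52 is only an example URL), the
paravirtualized install would look roughly like:

# virt-install -n test1 -r 512 -f /vm/test1.img -s 5 --paravirt \
      --location=http://localhost/centos52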
This creates the domain fine and starts what I assume is the CentOS
installation - it asks me to first select a language, and once I have done
that it says "What type of media contains the packages to be installed?"
and gives me a list of Local CDROM, Hard drive, NFS, FTP and HTTP. What
is this asking me for?
********************************************************
Create a local NFS share via a loop mount and point the
installer at it, and you will be done.
Alternatively, create a local HTTP mirror:
# mkdir -p /var/www/html/rhel
# mount -o loop /etc/xen/isos/rhel.iso /var/www/html/rhel
and point the installer at:
http://localhost/rhel
All of the above is the standard technique described in
Red Hat's online manuals.
*******************************************************
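A comparable sketch of the NFS variant (the export path and options below are
illustrative only):

# mkdir -p /srv/centos52
# mount -o loop /etc/xen/isos/rhel.iso /srv/centos52
# echo '/srv/centos52 *(ro,no_root_squash)' >> /etc/exports
# exportfs -ra
# service nfs start

The installer's NFS option (or virt-install's --location) can then be pointed
at <dom0-ip>:/srv/centos52.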
If it has already started the installation, then surely it knows where to
get the packages from? Anyway, if I select Local CDROM it says "Unable to
find any devices of the type needed for this installation type. Would you
like to manually select your driver or use a driver disk?" I have no idea
what to do from here.



If I try to install a fully virtualized guest using virt-install, here
is the command I am running:

> virt-install -n test1 -r 512 -f /vm/test1.img -s 5
> --location=/vm/CentOS-5.2-x86_64-bin-DVD.iso --hvm
****************************************
This command line is in error: for a fully virtualized guest the ISO
should be attached as a virtual CD-ROM with
-c /vm/CentOS-5.2-x86_64-bin-DVD.iso
rather than passed to --location.
*****************************************
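A hedged sketch of what the fully virtualized install could look like
instead, with the ISO attached as a virtual CD-ROM (the --vnc console is
optional and only an example):

# virt-install -n test1 -r 512 -f /vm/test1.img -s 5 --hvm \
      -c /vm/CentOS-5.2-x86_64-bin-DVD.iso --vnc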
This comes up and it just hangs here:

> Starting install...
> Creating storage file... 100% |=========================| 5.0 GB 00:00
> Creating domain... 0 B 00:00
My xend-debug.log file says this:

XendInvalidDomain: <Fault 3: 'b5e19b10-7540-902c-b585-f8783447521f'>
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/web/httpserver.py", line 140, in process
    resource = self.getResource()
  File "/usr/lib64/python2.4/site-packages/xen/web/httpserver.py", line 172, in getResource
    return self.getServer().getResource(self)
  File "/usr/lib64/python2.4/site-packages/xen/web/httpserver.py", line 351, in getResource
    return self.root.getRequestResource(req)
  File "/usr/lib64/python2.4/site-packages/xen/web/resource.py", line 39, in getRequestResource
    return findResource(self, req)
  File "/usr/lib64/python2.4/site-packages/xen/web/resource.py", line 26, in findResource
    next = resource.getPathResource(pathElement, request)
  File "/usr/lib64/python2.4/site-packages/xen/web/resource.py", line 49, in getPathResource
    val = self.getChild(path, request)
  File "/usr/lib64/python2.4/site-packages/xen/web/SrvDir.py", line 71, in getChild
    val = self.get(x)
  File "/usr/lib64/python2.4/site-packages/xen/xend/server/SrvDomainDir.py", line 52, in get
    return self.domain(x)
  File "/usr/lib64/python2.4/site-packages/xen/xend/server/SrvDomainDir.py", line 44, in domain
    dom = self.xd.domain_lookup(x)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomain.py", line 529, in domain_lookup
    raise XendInvalidDomain(str(domid))
XendInvalidDomain: <Fault 3: 'test1'>







So, now I try doing what looks like the manual way - creating a config
file in /etc/xen and using xm create.

First I created a file for the HDD:

> dd if=/dev/zero of=test1.img bs=1M count=1 seek=1023
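
(Note: this dd only allocates an empty sparse file of roughly 1 GB - one 1 MB
block written at offset 1023 MB - with no partition table or filesystem in it,
which is consistent with the "could not find filesystem '/dev/root'" panic
further down. A quick way to see that nothing is in it yet:

# ls -lh test1.img
# du -h test1.img

Something still has to create a filesystem in the image and install an OS
into it before a guest can boot from it.)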

Then I created this config file and placed it at /etc/xen/test:

> # -*- mode: python; -*-
> #============================================================================
> # Python configuration setup for 'xm create'.
> # This script sets the parameters used when a domain is created using
> # 'xm create'.
> # You use a separate script for each domain you want to create, or
> # you can set the parameters for the domain on the xm command line.
> #============================================================================
>
> #----------------------------------------------------------------------------
> # Kernel image file.
> kernel = "/boot/vmlinuz-2.6.18-92.1.22.el5xen"
>
> # Optional ramdisk.
> #ramdisk = "/boot/initrd.gz"
> ramdisk = "/boot/initrd-2.6.18-92.1.22.el5xen.img"
> #ramdisk = "/boot/initrd-centos5-xen.img"
>
> # The domain build function. Default is 'linux'.
> #builder='linux'
>
> # Initial memory allocation (in megabytes) for the new domain.
> #
> # WARNING: Creating a domain with insufficient memory may cause out of
> # memory errors. The domain needs enough memory to boot kernel
> # and modules. Allocating less than 32MBs is not recommended.
> memory = 512
>
> # A name for your domain. All domains must have different names.
> name = "Test1"
>
> # 128-bit UUID for the domain.  The default behavior is to generate a new UUID
> # on each call to 'xm create'.
> #uuid = "06ed00fe-1162-4fc4-b5d8-11993ee4a8b9"
>
> # List of which CPUS this domain is allowed to use, default Xen picks
> #cpus = ""         # leave to Xen to pick
> #cpus = "0"        # all vcpus run on CPU0
> #cpus = "0-3,5,^1" # all vcpus run on cpus 0,2,3,5
> #cpus = ["2", "3"] # VCPU0 runs on CPU2, VCPU1 runs on CPU3
>
> # Number of Virtual CPUS to use, default is 1
> #vcpus = 1
>
> #----------------------------------------------------------------------------
> # Define network interfaces.
>
> # By default, no network interfaces are configured.  You may have one created
> # with sensible defaults using an empty vif clause:
> #
> # vif = [ '' ]
> #
> # or optionally override backend, bridge, ip, mac, script, type, or vifname:
> #
> # vif = [ 'mac=00:16:3e:00:00:11, bridge=xenbr0' ]
> #
> # or more than one interface may be configured:
> #
> # vif = [ '', 'bridge=xenbr1' ]
>
> vif = [ '' ]
>
> #----------------------------------------------------------------------------
> # Define the disk devices you want the domain to have access to, and
> # what you want them accessible as.
> # Each disk entry is of the form phy:UNAME,DEV,MODE
> # where UNAME is the device, DEV is the device name the domain will see,
> # and MODE is r for read-only, w for read-write.
>
> #disk = [ 'phy:hda1,hda1,w' ]
> #disk = [ 'file:/vm/test1.img,ioemu:sda1,w', 'phy:/dev/cdrom,hdc:cdrom,r' ]
> disk = [ 'file:/vm/test1.img,ioemu:sda1,w' ]
>
> #----------------------------------------------------------------------------
> # Define frame buffer device.
> #
> # By default, no frame buffer device is configured.
> #
> # To create one using the SDL backend and sensible defaults:
> #
> # vfb = [ 'type=sdl' ]
> #
> # This uses environment variables XAUTHORITY and DISPLAY. You
> # can override that:
> #
> # vfb = [ 'type=sdl,xauthority=/home/bozo/.Xauthority,display=:1' ]
> #
> # To create one using the VNC backend and sensible defaults:
> #
> # vfb = [ 'type=vnc' ]
> #
> # The backend listens on 127.0.0.1 port 5900+N by default, where N is
> # the domain ID. You can override both address and N:
> #
> # vfb = [ 'type=vnc,vnclisten=127.0.0.1,vncdisplay=1' ]
> #
> # Or you can bind the first unused port above 5900:
> #
> # vfb = [ 'type=vnc,vnclisten=0.0.0.0,vncunused=1' ]
> #
> # You can override the password:
> #
> # vfb = [ 'type=vnc,vncpasswd=MYPASSWD' ]
> #
> # Empty password disables authentication. Defaults to the vncpasswd
> # configured in xend-config.sxp.
>
> #----------------------------------------------------------------------------
> # Define to which TPM instance the user domain should communicate.
> # The vtpm entry is of the form 'instance=INSTANCE,backend=DOM'
> # where INSTANCE indicates the instance number of the TPM the VM
> # should be talking to and DOM provides the domain where the backend
> # is located.
> # Note that no two virtual machines should try to connect to the same
> # TPM instance. The handling of all TPM instances does require
> # some management effort in so far that VM configuration files (and thus
> # a VM) should be associated with a TPM instance throughout the lifetime
> # of the VM / VM configuration file. The instance number must be
> # greater or equal to 1.
> #vtpm = [ 'instance=1,backend=0' ]
>
> #----------------------------------------------------------------------------
> # Set the kernel command line for the new domain.
> # You only need to define the IP parameters and hostname if the domain's
> # IP config doesn't, e.g. in ifcfg-eth0 or via DHCP.
> # You can use 'extra' to set the runlevel and custom environment
> # variables used by custom rc scripts (e.g. VMID=, usr= ).
>
> # Set if you want dhcp to allocate the IP address.
> #dhcp="dhcp"
> # Set netmask.
> #netmask=
> # Set default gateway.
> #gateway=
> # Set the hostname.
> #hostname= "vm%d" % vmid
>
> # Set root device.
> root = "/dev/sda1 ro"
>
> # Root device for nfs.
> #root = "/dev/nfs"
> # The nfs server.
> #nfs_server = '192.0.2.1'
> # Root directory on the nfs server.
> #nfs_root = '/full/path/to/root/directory'
>
> # Sets runlevel 4.
> extra = "4"
>
> #----------------------------------------------------------------------------
> # Configure the behaviour when a domain exits.  There are three 'reasons'
> # for a domain to stop: poweroff, reboot, and crash.  For each of these you
> # may specify:
> #
> #   "destroy",        meaning that the domain is cleaned up as normal;
> #   "restart",        meaning that a new domain is started in place of the old
> #                     one;
> #   "preserve",       meaning that no clean-up is done until the domain is
> #                     manually destroyed (using xm destroy, for example); or
> #   "rename-restart", meaning that the old domain is not cleaned up, but is
> #                     renamed and a new domain started in its place.
> #
> # In the event a domain stops due to a crash, you have the additional options:
> #
> #   "coredump-destroy", meaning dump the crashed domain's core and then destroy;
> #   "coredump-restart", meaning dump the crashed domain's core and then restart.
> #
> # The default is
> #
> #   on_poweroff = 'destroy'
> #   on_reboot   = 'restart'
> #   on_crash    = 'restart'
> #
> # For backwards compatibility we also support the deprecated option restart
> #
> # restart = 'onreboot' means on_poweroff = 'destroy'
> #                            on_reboot   = 'restart'
> #                            on_crash    = 'restart'
> #
> # restart = 'always'   means on_poweroff = 'restart'
> #                            on_reboot   = 'restart'
> #                            on_crash    = 'restart'
> #
> # restart = 'never'    means on_poweroff = 'destroy'
> #                            on_reboot   = 'destroy'
> #                            on_crash    = 'destroy'
>
> #-----------------------------------------------------------------------------
> # Configure PVSCSI devices:
> #
> #vscsi=[ 'PDEV, VDEV' ]
> #
> # PDEV gives physical SCSI device to be attached to specified guest
> # domain by one of the following identifier format.
> # - XX:XX:XX:XX (4-tuples with decimal notation which shows
> # "host:channel:target:lun")
> # - /dev/sdxx or sdx
> # - /dev/stxx or stx
> # - /dev/sgxx or sgx
> # - result of 'scsi_id -gu -s'.
> # ex. # scsi_id -gu -s /block/sdb
> # 36000b5d0006a0000006a0257004c0000
> #
> # VDEV gives virtual SCSI device by 4-tuples (XX:XX:XX:XX) as
> # which the specified guest domain recognize.
> #
>
> #vscsi = [ '/dev/sdx, 0:0:0:0' ]
>
> #============================================================================
I then ran this command:

> xm create -c test1

And these are the last few lines of the output before it stops:

> Scanning and configuring dmraid supported devices
> Creating root device.
> Mounting root filesystem.
> mount: could not find filesystem '/dev/root'
> Setting up other filesystems.
> Setting up new root fs
> setuproot: moving /dev failed: No such file or directory
> no fstab.sys, mounting internal defaults
> setuproot: error mounting /proc: No such file or directory
> setuproot: error mounting /sys: No such file or directory
> Switching to new root and running init.
> unmounting old /dev
> unmounting old /proc
> unmounting old /sys
> switchroot: mount failed: No such file or directory
> Kernel panic - not syncing: Attempted to kill init!


So I am utterly stumped and by now extremely frustrated that I cannot
get something that is seemingly simple to work!

Any advice and help would be greatly appreciated!

Thanks in advance :)

Andrew






_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

