
[Xen-users] Raidproblem booting dom0

  • From: Jan Peters-Anders <petersja@xxxxxx>
  • Date: Mon, 21 Nov 2005 00:05:22 +0100
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Sun, 20 Nov 2005 23:05:27 +0000
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hello list,

I am just at the beginning of installing Xen. After two days of trying, I
finally managed to boot the xen0 kernel. Now I am trying to create my first
domain. Since the system is a RAID system, I know there is a problem with my
root entry (I had the same problem with the xen0 kernel in the beginning).
I receive this error message when trying to create the domain with
"xm create -c xmjan1 vmid=1" (the system is FC4 on a Dell 450 WS, 2 CPUs,
SATA RAID, 2 GB RAM):

Making device-mapper control node
Unable to find device-mapper major/minor
Scanning logical volumes
 Reading all physical volumes.  This may take a while...
 Found volume group "VolGroup00" using metadata type lvm2
Activating logical volumes
 /proc/misc: No entry for device-mapper found
 Is device-mapper driver missing from kernel?
 Failure to communicate with kernel device-mapper driver.
 0 logical volume(s) in volume group "VolGroup00" now active
Creating root device
Mounting root filesystem
mount: error 6 mounting ext3
Switching to new root
ERROR opening /dev/console!!!!: 2
error dup2'ing fd of 0 to 0
error dup2'ing fd of 0 to 1
error dup2'ing fd of 0 to 2
unmounting old /proc
unmounting old /sys
switchroot: mount failed: 22
Kernel panic - not syncing: Attempted to kill init!
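[The panic above follows from LVM failing to activate: the "/proc/misc: No
entry for device-mapper found" line usually means the kernel booting the
domain was built without the dm-mod driver (CONFIG_BLK_DEV_DM). A quick
sketch of the check, as shell; the helper name and the sample /proc/misc
contents are made up for illustration:]

```shell
# When device-mapper is compiled in (or loaded), the kernel registers a
# "device-mapper" misc device, and it shows up in /proc/misc.
check_dm() {
    # $1: contents of a /proc/misc-style listing
    if printf '%s\n' "$1" | grep -q 'device-mapper'; then
        echo "device-mapper present"
    else
        echo "device-mapper missing"
    fi
}

check_dm "$(cat /proc/misc 2>/dev/null)"   # check the running kernel
check_dm " 63 device-mapper"               # sample input: prints "device-mapper present"
```

[If the check reports "missing" in the guest, the domU kernel needs
device-mapper built in (or the initrd must load dm-mod) before an LVM root
can be mounted.]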

The config file looks as follows:

# Kernel image file.
kernel = "/boot/vmlinuz-"

# Optional ramdisk.
ramdisk = "/boot/xen-3.0.jan.img"

# The domain build function. Default is 'linux'.

# Initial memory allocation (in megabytes) for the new domain.
memory = 128

# A name for your domain. All domains must have different names.
name = "FirstDomain"

# Which CPU to start domain on?
#cpu = -1   # leave to Xen to pick

# Number of Virtual CPUS to use, default is 1
#vcpus =

# Define network interfaces.

# Number of network interfaces. Default is 1.

# Optionally define mac and/or bridge for the network interfaces.
# Random MACs are assigned if not given.
#vif = [ 'mac=aa:00:00:00:00:11, bridge=xenbr0' ]

# Define the disk devices you want the domain to have access to, and
# what you want them accessible as.
# Each disk entry is of the form phy:UNAME,DEV,MODE
# where UNAME is the device, DEV is the device name the domain will see,
# and MODE is r for read-only, w for read-write.

disk = [ 'phy:/dev/sda2,/dev/sda2,w' ]

# Define to which TPM instance the user domain should communicate.
# The vtpm entry is of the form 'instance=INSTANCE,backend=DOM'
# where INSTANCE indicates the instance number of the TPM the VM
# should be talking to and DOM provides the domain where the backend
# is located.
# Note that no two virtual machines should try to connect to the same
# TPM instance. The handling of all TPM instances does require
# some management effort insofar as VM configuration files (and thus
# a VM) should be associated with a TPM instance throughout the lifetime
# of the VM / VM configuration file. The instance number must be
# greater or equal to 1.
#vtpm = [ 'instance=1,backend=0' ]
# Set the kernel command line for the new domain.
# You only need to define the IP parameters and hostname if the domain's
# IP config doesn't, e.g. in ifcfg-eth0 or via DHCP.
# You can use 'extra' to set the runlevel and custom environment
# variables used by custom rc scripts (e.g. VMID=, usr= ).

# Set if you want dhcp to allocate the IP address.
# Set netmask.
# Set default gateway.
# Set the hostname.
#hostname= "vm%d" % vmid

# Set root device.
root = "/dev/mapper/VolGroup00-LogVol00 ro"

# Root device for nfs.
#root = "/dev/nfs"
# The nfs server.
#nfs_server = ''
# Root directory on the nfs server.
#nfs_root   = '/full/path/to/root/directory'

# Sets runlevel 4.
extra = "4"
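
[For comparison, one common workaround when the guest kernel lacks
device-mapper is to export the logical volume itself as the guest's disk,
so dom0 does the LVM mapping and the guest sees a plain block device. A
hypothetical fragment; the volume group, LV, and guest device names are
assumptions, not taken from the system above:]

```
# Hypothetical alternative: hand the LV to the guest as a plain block
# device; dom0's device-mapper resolves it, the guest just sees /dev/sda1.
disk = [ 'phy:/dev/VolGroup00/LogVol00,sda1,w' ]
root = "/dev/sda1 ro"
```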


Can anyone point me to a solution, e.g. where to look to determine the
correct root drive/device in a RAID system? Since this is my first machine
with RAID installed, I am a bit stuck here...

Thanks in advance

Xen-users mailing list


