
[Xen-users] centos5 guest in opensuse 10.2 with lvm


  • To: Xen-users@xxxxxxxxxxxxxxxxxxx
  • From: mark pryor <tlviewer@xxxxxxxxx>
  • Date: Sun, 6 May 2007 19:33:09 -0700 (PDT)
  • Delivery-date: Sun, 06 May 2007 19:31:54 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

hello,

I have a default install of openSUSE 10.2 x86_64 with everything Xen-related.
I have several guests working:
   WinXP Pro SP2 (fully virtualized)
   FC6 3m respin (paravirtualized)

My latest adventure is to try to install CentOS 5.0 fully virtualized onto an LVM filesystem.
CentOS boots from the DVD into the installer; it sees my LVs and starts a text-mode install.

I select the default proposal. It then fails, saying there is not enough room for /boot.

Maybe this is not meant to be -- perhaps CentOS can't boot from an LV, even under Xen?

I set up the LVs as detailed here:
http://www.profi-admin.eu/xeninst/index_eng.html
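For reference, the three LVs in the lvs listing below could have been created with commands roughly like these (a sketch, not the exact commands I ran; the VG name xenvm and the sizes are taken from that listing, everything else is assumption):

```shell
# Assumed lvcreate invocations matching the lvs output below.
# Requires an existing volume group named xenvm and root privileges.
lvcreate -L 4G     -n centh  xenvm   # 4.00G LV
lvcreate -L 11.77G -n centr  xenvm   # 11.77G LV, attached to the guest as hda
lvcreate -L 512M   -n vmswap xenvm   # 512.00M swap LV
```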

thanks for helping,
Mark

--------------- traces --------------------------
KN9ULTRA:/home/tlviewer # lvs
  LV     VG    Attr   LSize   Origin Snap%  Move Log Copy%
  centh  xenvm -wi-a-   4.00G
  centr  xenvm -wi-a-  11.77G
  vmswap xenvm -wi-a- 512.00M


-------------- centos5  (hvm) --------------------------
kernel = "/usr/lib/xen/boot/hvmloader"
#kernel = "/boot/vmlinuz-xen"
#ramdisk = "/boot/initrd-xen"

# The domain build function. HVM domain uses 'hvm'.
builder='hvm'
memory = 512
name = "centos5"
# 128-bit UUID for the domain.  The default behavior is to generate a new UUID
# on each call to 'xm create'.
#uuid = "06ed00fe-1162-4fc4-b5d8-11993ee4a8b9"

# the number of cpus guest platform has, default=1
#vcpus=1

pae=1
acpi=1
apic=1
# List of which CPUS this domain is allowed to use, default Xen picks
cpus = ""         # leave to Xen to pick

vif = [ 'type=ioemu, bridge=xenbr0' ]

disk = ['file:/windows/D/download/CentOS-5.0-x86_64-bin-DVD/CentOS-5.0-x86_64-bin-DVD.iso,hdc:cdrom,r' , 'phy:xenvm/centr,hda,w' ]

device_model = '/usr/lib/xen/bin/qemu-dm'
boot="dca"

#  write to temporary files instead of disk image files
#snapshot=1
sdl=1
vnc=0
#vnclisten="127.0.0.1"
#vncdisplay=1
#nographic=0
#full-screen=1
serial = "pty"
vncviewer = 0
ne2000 = 0
localtime = 1
------------------------------- end cfg --------------------------
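Note that the disk line in the cfg above only attaches the centr LV to the guest (as hda); the 4G centh LV from the lvs listing is never exposed to the domain. If centh were meant to hold /boot or serve as a second disk, a hypothetical variant of that disk line would look like this (the hdb mapping for centh is my assumption, not part of the original config):

```python
# Hypothetical extension of the disk line from the cfg above:
# same install ISO and root LV, plus the otherwise-unused centh LV as hdb.
disk = [ 'file:/windows/D/download/CentOS-5.0-x86_64-bin-DVD/CentOS-5.0-x86_64-bin-DVD.iso,hdc:cdrom,r',
         'phy:xenvm/centr,hda,w',
         'phy:xenvm/centh,hdb,w' ]   # centh attached as a second disk (assumption)
```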

#xm create -c centos5

---------------- #xm dmesg -----------------------
(XEN) (GUEST: 6) HVM Loader
(XEN) (GUEST: 6) Detected Xen v3.0.3_11774-20
(XEN) (GUEST: 6) Writing SMBIOS tables ...
(XEN) (GUEST: 6) Loading ROMBIOS ...
(XEN) (GUEST: 6) Creating MP tables ...
(XEN) (GUEST: 6) Loading Cirrus VGABIOS ...
(XEN) (GUEST: 6) Loading ACPI ...
(XEN) (GUEST: 6) SVM go ...
(XEN) (GUEST: 6)  rombios.c,v 1.138 2005/05/07 15:55:26 vruppert Exp $
(XEN) (GUEST: 6) VGABios $Id: vgabios.c,v 1.61 2005/05/24 16:50:50 vruppert Exp $
(XEN) (GUEST: 6) HVMAssist BIOS, 1 cpu, $Revision: 1.138 $ $Date: 2005/05/07 15:55:26 $
(XEN) (GUEST: 6)
(XEN) (GUEST: 6) ata0-0: PCHS=2/16/63 translation=none LCHS=2/16/63
(XEN) (GUEST: 6) ata0 master: QEMU HARDDISK ATA-7 Hard-Disk (0 MBytes)
(XEN) (GUEST: 6) ata0  slave: Unknown device
(XEN) (GUEST: 6) ata1 master: QEMU CD-ROM ATAPI-4 CD-Rom/DVD-Rom
(XEN) (GUEST: 6) ata1  slave: Unknown device
(XEN) (GUEST: 6)
(XEN) (GUEST: 6) Booting from CD-Rom...

KN9ULTRA:/home/tlviewer # xm shutdown centos5
-------------------- end log ------------------


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

