RE: [Xen-users] Xen 3.0.4 on RH EL 4.4 - can't make it boot kernel
> I did some googling and found some indications that it might have been
> related to using the ext3 filesystem, so I rebuilt the server with ext2
> and got the same results.
>
> Any suggestions?

Here's my config for a stock RHEL4u4 installation:

---------------
disk = [ 'file:/RAID/data/xen/images/xen-rh64-gold/hda,hda,w' ]
kernel = "/boot/vmlinuz-2.6.16.33-xenU"
ramdisk = "/boot/initrd-2.6.16.33-xenU.img"
root = "/dev/VolGroup00/LogVol00"
memory = 4096
vcpus = 1
builder = 'linux'
name = 'xen-rh64-gold'
vif = [ 'mac=00:16:3e:00:00:22, bridge=xenbr2' ]
localtime = 0
on_poweroff = 'preserve'
on_reboot = 'restart'
on_crash = 'restart'
extra = ' TERM=xterm'
#
#sdl = 1
---------------

As you can see, I'm installed on an image file, which I installed with Qemu. I had to do nothing special there, except make one modification to /etc/modprobe.conf (add "alias eth0 xennet"), run system-config-network, run "service network restart", and then run system-config-network again (the double network config is because the text-mode version doesn't offer the option to auto-configure eth0 on boot). My xenU kernel handles RH out of the box.

I often had your problem until I figured out that I had to point root to a valid logical volume. Without the root= entry... kernel panic exactly the way you're getting it. At a glance you didn't look like you were using logical volumes, but I didn't look too carefully.

I forget why, but another thing I had to do was add IDE support to the kernel. Probably I could have done that in the initrd, but I didn't know how, so I compiled it in (if you were having that problem, it would manifest differently: "unreadable block (0,..)" or some such).

Joe.

p.s. Nothing to do with ext3, I think.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
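[Editorial example] A minimal shell sketch of the in-guest network tweak Joe describes. The helper name `add_xennet_alias` and the idempotency check are additions for illustration, not from the post; the interactive steps are left as comments since they only make sense on the running RHEL4 guest:

```shell
#!/bin/sh
# Hypothetical helper: append the Xen netfront alias line to a
# modprobe.conf file exactly once (safe to re-run).
add_xennet_alias() {
    grep -q '^alias eth0 xennet$' "$1" 2>/dev/null \
        || echo 'alias eth0 xennet' >> "$1"
}

# On the real RHEL4 domU the sequence from the post would be:
#   add_xennet_alias /etc/modprobe.conf
#   system-config-network        # first pass: configure eth0
#   service network restart
#   system-config-network        # second pass: mark eth0 to start on boot
```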