
Re: [Xen-users] pygrub howto (Was:PV guest kernel location)




Well, since nobody answered, I'll do it myself. After much trial-and-error pain and suffering, I figured it out. It turns out to be a very poorly documented feature called pygrub. The following is how I converted the PV guest mentioned below to carry its own kernel and boot it with pygrub.

My host system is running Xen 3.0.4 x86_64 on Ubuntu 6.06 (soon to be 3.1.0), and my guests are all x86_64 Ubuntu 6.06 as well.

**** FROM THE PV GUEST ****
1. Download the Xen 3.0.4 and 3.1.0 install tarballs for x86_64 (since my guests are x86_64):
xen-3.0.4_1-install-x86_64.tgz
xen-3.1.0-install-x86_64.tgz

2. Remove or rename the old /lib/modules/2.6.16.33-xen, which held the modules for my kernel from the 3.0.4 install.
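Something along these lines (the .old suffix is just my own choice of name):

 mv /lib/modules/2.6.16.33-xen /lib/modules/2.6.16.33-xen.old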

3. Run the install.sh for each.
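For each tarball, roughly this (if I remember right they unpack into a dist/ directory; adjust if yours differs):

 tar xzf xen-3.1.0-install-x86_64.tgz
 cd dist
 sh ./install.sh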

4. Create a file called /boot/grub/menu.lst with the following.

default         2
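# grub's 'default' is zero-indexed, so 2 boots the third title below (Xen 3.1.0)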
timeout         5

title           Ubuntu, memtest86+
root            (hd0,0)
kernel          /boot/memtest86+.bin
boot

title           Xen 3.0.4 vmlinuz-2.6.16.33-xen
root            (hd0,0)
kernel          /boot/vmlinuz-2.6.16.33-xen root=/dev/hda1 ro splash
savedefault
boot

title           Xen 3.1.0 vmlinux-syms-2.6.18-xen
root            (hd0,0)
kernel          /boot/vmlinux-syms-2.6.18-xen root=/dev/hda1 ro splash
savedefault
boot

5. shutdown -h now

**** FROM THE HOST ****
6. Change the cfg file (the one quoted below) to look like this: remove the kernel-related lines and add a bootloader line.
#  -*- mode: python; -*-
memory = 256
name = "mail"
vif = ['bridge=xenbr0']
disk = ['file:/vserver/mail/guest_base2G.img,hda1,w','file:/vserver/mail/guest_swap256M.img,hda2,w','phy:data/mail,hda3,w']
bootloader = '/usr/bin/pygrub'
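If you want to sanity-check pygrub before booting the domU, I believe you can run it by hand from dom0 against the image that holds the guest's /boot (adjust the path to your own setup):

 /usr/bin/pygrub /vserver/mail/guest_base2G.img

If it can read the guest filesystem, it should show the same menu as the menu.lst above.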

7. Start the new domU.
 xm create -c xmguest-mail
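pygrub should present the grub menu from the guest's /boot/grub/menu.lst and then boot the entry you pick (or the default after the timeout). To double-check the domU afterwards from another dom0 shell, something like:

 xm list mail

should show it running.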

*******

That's it. It's nice that the domU now has its own copy of the kernel and no longer boots a kernel stored on the host filesystem. It will also make it easier if I have to move this domU to another system.

I did run into one strange problem. The first time I ran 'xm create' for each domU I converted, it failed with 'nothing returned'. I found that strange, but the second try worked for each. I hope it has nothing to do with running Xen 3.1.0 guests via pygrub on a 3.0.4 host.

Hope people find this useful. I wish there were more info on this topic from people who know how it works, and also more info on the difference between the file-backed 'file:' driver and 'tap:aio:'.
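For what it's worth, I think the same disk line would look like this with the blktap driver (untested on my guests, so take it as a guess):

 disk = ['tap:aio:/vserver/mail/guest_base2G.img,hda1,w','tap:aio:/vserver/mail/guest_swap256M.img,hda2,w','phy:data/mail,hda3,w']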

Greg


Gregory Gee wrote:

Still new at this, and I was wondering if someone could give me some pointers. I have Xen 3.0.4 running great with 5 PV guests; I just followed the setup instructions from a few wiki pages. But the one thing I noticed is that the guest kernel lives on the host. Below is the cfg file for my mail server guest.

#  -*- mode: python; -*-
kernel = "/boot/vmlinuz-2.6.16.33-xen"
ramdisk = "/boot/initrd.img-2.6.16.33-xen"
memory = 256
name = "mail"
vif = ['bridge=xenbr0']
disk = ['file:/vserver/mail/guest_base2G.img,hda1,w','file:/vserver/mail/guest_swap256M.img,hda2,w','phy:data/mail,hda3,w']
ip = "192.168.10.43"
netmask = "255.255.255.0"
gateway = "192.168.10.1"
hostname = "mail"
root = "/dev/hda1 ro"
extra = "4"

I see posts talking about 32-bit guests on 64-bit hosts. How do you do that? My main question is: how can I create a PV guest whose files are entirely self-contained? I want to be able to pick up my guest and move it to another system without worrying whether the same kernel is available on the new host. I can't seem to find a wiki that gives these kinds of instructions.

Can the kernel be copied into the guest so that the guest boots from those files? How would my cfg above change?

Thanks in advance for any help provided.
Greg


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users




 

