
Re: [Xen-users] CentOS 4 and CentOS 5 DomU on CentOS 5 Dom0: problems with hand-built OS and LVM



On Wed, May 02, 2007 at 09:47:31AM +0100, Nico Kadel-Garcia wrote:
> I've been running any number of CentOS 4 based Xen hosts and guests 
> using CentOS 4.4 and the Xen RPMs from www.xensource.com, plus various 
> hand-built OS images using jailtime and my own OS images.
> 
> I like the jailtime approach:
> 
>    One partition for / on the DomU, /dev/sda1, built as a local file or 
> local LVM partition
>    One partition for swap on the DomU, /dev/sda2, built as a local file 
> or local LVM partition
> 
> The default .cfg from jailtime, and its variants, looks like this:
> 
>    kernel = "/boot/vmlinuz-2.6-xenU"
>    memory = 256
>    name = "centos.5-0"
>    vif = [ '' ]
>    dhcp = "dhcp"
>    disk = ['file:/xen/centos/centos.5-0.img,sda1,w', 
> 'file:/xen/centos/centos.swap,sda2,w']
>    root = "/dev/sda1 ro"
> 
> Unfortunately, this does *not* seem to work well with the CentOS 5 
> kernel-xen: it does work with the xensource.com Dom0 kernels, but the 
> old xensource.com DomU kernels seem to be just too darned old. The 
> jailtime images and their like have *no* grub or bootloader on DomU; 
> they boot straight from the kernel installed on Dom0 for 
> para-virtualization, with the arguments in "root =" above.

What kernel is that config pointing to? The standard RHEL-5
(and thus CentOS-5) kernel-xen RPMs are fully modular and thus require
the use of an initrd. If you install the kernel-xen RPM in the host Dom0,
the initrd that is built is set up with drivers for booting Dom0. If
you install the kernel-xen RPM inside the DomU, the initrd is set up for
booting DomU. If you want to boot a DomU using the kernel installed
in Dom0, then you'll need to build yourself a custom initrd, explicitly
asking for the xen paravirt drivers to be included.
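
As a minimal sketch (the kernel-xen version and file names below are
just placeholders; substitute whatever kernel-xen is actually installed
on your Dom0), something along these lines should do it:

  # Build an initrd for the Dom0 kernel-xen that includes the Xen
  # paravirt block and network frontend drivers, for booting a DomU
  mkinitrd --with=xennet --preload=xenblk \
      /boot/initrd-2.6.18-8.el5xen-domU.img 2.6.18-8.el5xen

and then point the guest config at it:

  kernel  = "/boot/vmlinuz-2.6.18-8.el5xen"
  ramdisk = "/boot/initrd-2.6.18-8.el5xen-domU.img"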

> 
> By the way: the differences between the jailtime setups, which I really 
> like, and the virt-install and virt-manager built systems are legion and 
> favor the stripped down jailtime setups. The virt-install systems use 
> the Dom0 partitions as disk drives, *ALWAYS* called /dev/xvda and the 
> like. This means that extracting the data from a paused or shut down 
> domain for backup purposes requires me to look *inside* that disk image, 
> extract the partition information (with tools like kpartx, I assume), 
> and successfully mount the partitions on Dom0 to image their 
> contents. I haven't worked that out yet, and I like having my 
> partitions managed out of Dom0, not leaving DomU to play with them.

Take a look at section 9.4, "Accessing data on a guest disk image", in:

  http://fedoraproject.org/wiki/FedoraXenQuickstartFC6
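
Roughly, for a virt-install style whole-disk image that boils down to
something like this (the image path, loop device and mount point are
only placeholders, and the guest must be shut down first):

  # Attach the guest disk image to a loop device & map its partitions
  losetup /dev/loop0 /xen/images/centos5-guest.img
  kpartx -a /dev/loop0

  # The partitions appear as /dev/mapper/loop0p1, loop0p2, ...
  mount /dev/mapper/loop0p1 /mnt/guest-root

  # ...copy out whatever you need for backup, then tear it down...
  umount /mnt/guest-root
  kpartx -d /dev/loop0
  losetup -d /dev/loop0

For an LVM-backed guest you can skip losetup and point kpartx at the
logical volume directly.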

virt-install/virt-manager doesn't really care whether you use /dev/xvda
directly, or whether you sub-partition it - that's a decision made by
the guest OS installer (e.g. Anaconda in this case). The reason for
sub-partitioning xvda into xvda1, xvda2, etc. is to give better isolation
between Dom0 and DomU - it prevents administrator accidents mixing up
filesystems.

If you had a DomU using a real physical partition from Dom0, for example
with a config of    disk = ['phy:/dev/hdb1'], and your guest then put
its root filesystem directly onto /dev/xvda instead of /dev/xvda1,
that filesystem would be directly visible in Dom0. If the DomU filesystems
were given the same labels as Dom0's, then mounting by label in Dom0 may
end up picking the DomU's disk.
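
If that's a concern, it's worth checking (and if need be changing) the
labels from Dom0 before handing the partition over to a guest; for
ext2/ext3 that's e2label (the device name here is just an example):

  # Show the current filesystem label, then set one that won't
  # clash with Dom0's own "/" or "/boot" labels
  e2label /dev/hdb1
  e2label /dev/hdb1 domU-root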

Dan.
-- 
|=- Red Hat, Engineering, Emerging Technologies, Boston.  +1 978 392 2496 -=|
|=-           Perl modules: http://search.cpan.org/~danberr/              -=|
|=-               Projects: http://freshmeat.net/~danielpb/               -=|
|=-  GnuPG: 7D3B9505   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505  -=| 

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users