
Re: [Xen-users] Re: upgrade lenny-squeeze, xen3.2-xen4.0, what's wrong?


  • To: Henrik Langos <hlangos-xen@xxxxxxxxxxxxxx>
  • From: Mauro <mrsanna1@xxxxxxxxx>
  • Date: Fri, 11 Feb 2011 11:38:34 +0000
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 11 Feb 2011 03:39:40 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

On 11 February 2011 11:12, Henrik Langos <hlangos-xen@xxxxxxxxxxxxxx> wrote:
> On Fri, Feb 11, 2011 at 10:04:44AM +0000, Mauro wrote:
>> On 11 February 2011 08:59, Henrik Langos <hlangos-xen@xxxxxxxxxxxxxx> wrote:
>> > Hi Mauro,
>> >
>> > It would be more useful if you had included /etc/xen/mail1.cfg,
>> > as that is the file ultimately used to create the domU. xen-tools.conf
>> > just drives a tool that does some (actually quite good) work for you by
>> > creating that file and the disk/partition volumes. xen-tools is not
>> > involved in *running* those VMs though.
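>> >
>> > (For reference, an invocation roughly like this one would have produced
>> > mail1.cfg and the LVs; the exact options here are illustrative, not
>> > taken from your setup:
>> >
>> > xen-create-image --hostname=mail1 --lvm=vg00 --dist=squeeze \
>> >                  --memory=2048M --ip=172.16.10.154
>> > )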
>> >
> ...
>> >
>> > My (wild) guess on this would be that you'd have to change the "root="
>> > parameter in your config file to point to some "sda*" device/partition
>> > instead of "xvda*".
>> >
>> > root = "/dev/xvda2"
>> > to
>> > root = "/dev/sda2"
>> >
>> >
>> > Alternatively you'd have to change the "disk=" parameters to map your
>> > LVs to xvda* instead of sda*.
>> >
>> > disk    = [
>> >             'phy:/dev/vg00/mail1-swap,sda1,w',
>> >             'phy:/dev/vg00/mail1-disk,sda2,w',
>> >         ]
>> > to
>> > disk    = [
>> >             'phy:/dev/vg00/mail1-swap,xvda1,w',
>> >             'phy:/dev/vg00/mail1-disk,xvda2,w',
>> >         ]
>> >
>> >
>> >> I've solved it by using pygrub.
>> >> So it seems that without pygrub I can't run DomUs?
>> >
>> > You shouldn't have to use pygrub for something that simple.
>> >
>> > My guess is that there was a lack of communication between the
>> > maintainers of xen and xen-tools in regard to the default devices.
>> >
>>
>> mail1.cfg:
>>
>> bootloader = '/usr/lib/xen-default/bin/pygrub'
>>
>> vcpus       = '4'
>> memory      = '2048'
>>
>> #
>> #  Disk device(s).
>> #
>> root        = '/dev/xvda2 ro'
>> disk        = [
>>                 'phy:/dev/vg00/mail1-disk,xvda2,w',
>>                 'phy:/dev/vg00/mail1-swap,xvda1,w',
>>               ]
>>
>>
>> #
>> #  Physical volumes
>> #
>>
>>
>> #
>> #  Hostname
>> #
>> name        = 'mail1'
>>
>> #
>> #  Networking
>> #
>> vif         = [ 'ip=172.16.10.154,mac=00:16:3E:01:D7:24' ]
>>
>> #
>> #  Behaviour
>> #
>> on_poweroff = 'destroy'
>> on_reboot   = 'restart'
>> on_crash    = 'restart'
>>
>> It's exactly the same as another DomU that runs with xen 3.2 on a
>> debian lenny machine.
>> The only difference is the bootloader = '/usr/lib/xen-default/bin/pygrub' line.
>
> Well,
>
> There is no "kernel=" or "ramdisk=" parameter in your mail1.cfg.
> So I guess you removed those when you switched to using pygrub?
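>
> A lenny-style config (without pygrub) would typically carry lines like
> these; the exact image names are just an example:
>
> kernel  = '/boot/vmlinuz-2.6.26-2-xen-amd64'
> ramdisk = '/boot/initrd.img-2.6.26-2-xen-amd64'
>
> Both paths point into dom0's /boot, not into the domU's file system.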

No, I didn't remove them; that is the default.

> If you want to know what really changed when switching from lenny to
> squeeze you could change it back to try the two fixes that I mentioned.
>
>> I've noticed that when debootstrap installs the system it installs
>> linux-image and not linux-modules.
>> From mail1.log, debian squeeze and xen4.0:
>>
>> The following extra packages will be installed:
>>   firmware-linux-free libuuid-perl linux-base linux-image-2.6.32-5-xen-amd64
>> Suggested packages:
>>   linux-doc-2.6.32 grub
>> The following NEW packages will be installed:
>>   firmware-linux-free libuuid-perl linux-base linux-image-2.6.32-5-xen-amd64
>>   linux-image-xen-amd64
>>
>> While from mail1.log with xen3.2 and debian lenny:
>>
>> The following NEW packages will be installed:
>>   linux-modules-2.6.26-2-xen-amd64
>>
>
> Wow, that, I think, is a good thing! The linux-image package contains
> the modules, but by also including the kernel and the base packages it
> will leave you with a much more "complete" domU.
>
> You could not have booted the old lenny domU with pygrub, because the
> domU file system didn't even contain a kernel image or initrd. Those
> were always taken from dom0's /boot/. That's what the "kernel=" and
> "ramdisk=" parameters in your mail1.cfg are for. BUT that made your
> domU depend much more on the state of your dom0.
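>
> A quick way to see this is to mount the domU's disk from dom0 and look
> for a kernel (mount point and LV name as in your config, purely as a
> sketch):
>
> mount /dev/vg00/mail1-disk /mnt
> ls /mnt/boot    # on the old lenny domU: no vmlinuz or initrd.img here
> umount /mnt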
>
> Now with pygrub you can update the kernel in the domU, have it reboot,
> and pygrub will pick up the new kernel from your domU's file system
> when xen creates your domU VM. (Be sure to read and *understand* the
> release notes on xen/pygrub/grub2 issues though.)
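>
> The update cycle then looks roughly like this (first inside the domU,
> then on dom0; xm syntax for xen 4.0 assumed):
>
> domU# apt-get install linux-image-2.6.32-5-xen-amd64
> domU# reboot                  # on_reboot = 'restart' recreates the VM
> dom0# xm console mail1        # watch pygrub boot the new kernel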
>
> I personally like that change. It will make it easier to migrate the
> PV domUs that I create with squeeze to the next debian release.
> Currently I have PV domUs from Etch and Lenny on top of a squeeze
> xen-4.0.1 / linux-2.6.32, and I had to copy the old domU kernels and
> initrds to my squeeze box, as well as the /etc/xen/machine.cfg files
> and the domUs' disk images.
>
> In the future I will only need the config file and the disk images.
> (Actually my domUs' disk images are now on iSCSI storage, so I will not
> have to worry about migrating those, and ultimately I want to move even
> the config files to a location that is shared by all my xen dom0 hosts.
> But that's a different story and I guess I'll have to read up on GFS(2?)
> or some other cluster file system.)

Ok, you convinced me to use pygrub.
I still have many doubts about running DomUs on a SAN, but I'll open a
new thread for that.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

