
Re: [Xen-devel] [OSSTEST Nested PATCH v11 5/7] Add new script to customize nested test configuration



longtao.pang writes ("[OSSTEST Nested PATCH v11 5/7] Add new script to customize nested test configuration"):
> 1. In this script, set some appropriate runvars which selecthost will
> recognise.
> 2. Prepare the configuration for installing the L2 guest VM.
> 3. Create an LV disk in L0 and hot-attach it to L1; inside L1, use this
> newly added disk to create a VG which will be used for installing the L2 guest.
...

Thanks.  This is roughly the right approach.  I have some minor
comments.


> +target_cmd_root($l1, "update-rc.d osstest-confirm-booted start 99 2 .");

This is an open-coded copy of the code from ts-host-install.  It
should be broken out into a shared helper, I think.
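
Something like this in Osstest/TestSupport.pm, perhaps (the helper name
is only a suggestion, not an existing function):

# Hypothetical shared helper; name and placement are just a suggestion.
# Arranges for osstest-confirm-booted to run late in the boot sequence,
# which ts-host-install currently open-codes.
sub target_install_confirm_booted_initscript ($) {
    my ($ho) = @_;
    target_cmd_root($ho,
        "update-rc.d osstest-confirm-booted start 99 2 .");
}

ts-host-install and this script could then both call it.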


> +target_install_packages_norec($l1, qw(lvm2 rsync genisoimage));

Do we need to install genisoimage here?  The guest install scripts do
it.  Or are we just doing it here for efficiency reasons?


> +# We need to attach an extra disk to the L1 guest to be used as L2
> +# guest storage.
> +#
> +# When running in a nested HVM environment the L1 domain is acting
> +# as both a guest to L0 and a host to L2 guests and therefore potentially
> +# sees connections to two independent xenstore instances, one provided by
> +# the L0 host and one which is provided by the L1 instance of xenstore.
> +#
> +# Unfortunately the kernel is not capable of dealing with this and is only
> +# able to cope with a single xenstore connection. Since the L1 toolstack and
> +# L2 guests absolutely require xenstore to function we therefore cannot use
> +# the L0 xenstore and therefore cannot use PV devices (xvdX etc) in the L1
> +# guest and must use emulated devices (sdX etc).
> +#
> +# However at the moment we have not yet rebooted L1 into Xen and so it does
> +# have PV devices available and sdb actually appears as xvdb. We could
> +# disable the Xen platform device and use emulated devices for the install
> +# phase too but that would be needlessly slow.
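
(Not something this patch needs to change, but for illustration: once L1
has been rebooted into Xen one could sanity-check that the disk really
does show up as an emulated device, and that the VG created below is
still visible even though the device has changed name.  The exact device
name here is an assumption on my part:

# Hypothetical sanity check, run inside L1 after it has rebooted into
# Xen: the disk should now appear as the emulated sdb, and LVM finds
# the VG by scanning PV labels, so the xvdb -> sdb rename is harmless.
target_cmd_root($l1, "test -b /dev/sdb && vgs ${l1_gn}-disk");

LVM does not mind the kernel device name changing because the VG
metadata lives on the PV itself.)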
> +
> +my $vgname = $l1->{Vg};
> +my $guest_storage_lv_name = "${l1_ident}_guest_storage";
> +my $guest_storage_lv_size = guest_var($l1,'guest_storage_size',undef);
> +die "guest_storage_lv_size is undefined" unless $guest_storage_lv_size;
> +my $guest_storage_lvdev = "/dev/${vgname}/${guest_storage_lv_name}";
> +
> +die "toolstack $r{toolstack}" unless $r{toolstack} eq "xl";
> +target_cmd_root($l0, <<END);
> +    lvremove -f $guest_storage_lvdev ||:
> +    lvcreate -L ${guest_storage_lv_size}M -n $guest_storage_lv_name $vgname
> +    dd if=/dev/zero of=$guest_storage_lvdev count=10
> +    xl block-attach $l1->{Name} ${guest_storage_lvdev},raw,sdb,rw
> +END
> +
> +# Create a vg in L1 guest and vg name is ${l1_gn}-disk
> +target_cmd_root($l1, "pvcreate /dev/xvdb && vgcreate ${l1_gn}-disk /dev/xvdb");

We would avoid having to mention /dev/xvdb if we created the VG in the
host, before doing block-attach.  I'm not sure whether that's an
improvement.
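
Roughly the following, as an untested sketch rather than a proposal
(whether the L0 LVM tools then also seeing that VG causes trouble is
part of why I am not sure it is an improvement):

# Sketch of the alternative: create the PV and VG on the new LV in the
# L0 host, then hand the whole device to L1 with block-attach.
target_cmd_root($l0, <<END);
    lvremove -f $guest_storage_lvdev ||:
    lvcreate -L ${guest_storage_lv_size}M -n $guest_storage_lv_name $vgname
    dd if=/dev/zero of=$guest_storage_lvdev count=10
    pvcreate $guest_storage_lvdev
    vgcreate ${l1_gn}-disk $guest_storage_lvdev
    xl block-attach $l1->{Name} ${guest_storage_lvdev},raw,sdb,rw
END
target_cmd_root($l1, "vgscan && vgchange -ay ${l1_gn}-disk");

The L1 side would then only need to activate the VG, rather than
knowing which device node the disk arrived as.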


Ian.
