
RE: [Xen-users] Making system templates


  • To: "Jeff Williams" <jeffw@xxxxxxxxxxxxxx>, "Fajar A. Nugraha" <fajar@xxxxxxxxx>
  • From: "Robert Dunkley" <Robert@xxxxxxxxx>
  • Date: Tue, 9 Jun 2009 08:21:05 +0100
  • Cc: Xen User-List <xen-users@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 09 Jun 2009 00:21:53 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AcnoxzOkIcam1PZjTc2oYNx+wXIX5QACvJFQ
  • Thread-topic: [Xen-users] Making system templates

I use DD :)

I use LVM partitioning at the Dom0 level with DRBD and pass each VM a
partition as a disk (for DBs I have faster disks for DB storage, so I
pass two disks to the PV guest: one for boot and one for DB storage). I
specify a fixed MAC in the config file; that way, when the cloned
machine starts it won't apply any previous IP addresses the image had
to a different MAC. Works fine.
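
A rough sketch of what that ends up looking like (the VG, LV and MAC
values below are just examples):

  # Dom0: clone the template LV onto the new guest's LV with dd
  lvcreate -L 10G -n web01-disk xenvg
  dd if=/dev/xenvg/template-disk of=/dev/xenvg/web01-disk bs=1M

  # relevant bits of the guest config, with a fixed MAC
  disk = [ 'phy:/dev/xenvg/web01-disk,xvda,w',
           'phy:/dev/fastvg/web01-db,xvdb,w' ]
  vif  = [ 'mac=00:16:3e:12:34:56,bridge=xenbr0' ]

(00:16:3e is the Xen OUI; pick the last three octets yourself.)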

Resizing seems to work OK; I just use LVM on Dom0 and then LVM again
inside the DomU.
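
Roughly, the resize goes like this (names are examples, and this
assumes the whole virtual disk is used as a PV inside the guest; the
DomU normally needs a restart before it sees the bigger disk):

  # Dom0: grow the LV backing the guest's disk
  lvextend -L +5G /dev/xenvg/web01-disk

  # DomU, after a restart: grow the PV, LV and filesystem
  pvresize /dev/xvda
  lvextend -L +5G /dev/guestvg/root
  resize2fs /dev/guestvg/root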

Rob



-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jeff
Williams
Sent: 09 June 2009 06:56
To: Fajar A. Nugraha
Cc: Xen User-List
Subject: Re: [Xen-users] Making system templates

On 09/06/09 12:04, Fajar A. Nugraha wrote:
> On Tue, Jun 9, 2009 at 7:08 AM, Jeff Williams<jeffw@xxxxxxxxxxxxxx>
wrote:
>   
>> So to confirm, rather than making /dev/xenvg/domudisk and doing:
>>
>> disk = ['phy:/dev/xenvg/domudisk,xvda,w']
>>
>> and partitioning /dev/xenvg/domudisk in the guest, you'd make (for
example):
>>
>> /dev/xenvg/domudisk-root
>> /dev/xenvg/domudisk-home
>> /dev/xenvg/domudisk-swap
>>
>> and configure it like:
>>
>> disk = [
>>   'phy:/dev/xenvg/domudisk-root,xvda1,w',
>>   'phy:/dev/xenvg/domudisk-home,xvda2,w',
>>   'phy:/dev/xenvg/domudisk-swap,xvda3,w'
>> ]
>>
>> Is that right?
>>     
>
> That's what I do with templates-based installation.
>
>   
>> The idea had crossed my mind, but all the tools seemed to
>> want to do it the other way.
>>     
>
> For some tools (like virt-manager), yes. Other tools (like eucalyptus)
> seem to use tar.gz images.
> Personally I don't use provisioning tools, but rather do it all
> manually (lvcreate, mkfs, tar xfvz, etc.).
>
>   
Interesting. So what I have so far is:

- no-one seems to use LVM snapshots for provisioning, I guess because
they are too inflexible.
- no-one seems to use dd either
- no-one seems to use virt-clone either
- most people seem to do one of:

1) Create the disks and file systems and do a file-level copy of the
template (tar seems to be preferred over cpio); a rough sketch of this
is below.
2) Use some sort of bootstrap procedure to do a network install of the
OS, optionally followed by a start-up script which installs and
configures the required packages.
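
For 1), the sequence people described boils down to something like
this (paths and sizes are just examples):

  # Dom0: create and format an LV for the new guest's root
  lvcreate -L 8G -n newguest-root xenvg
  mkfs.ext3 /dev/xenvg/newguest-root

  # file-level copy of the template into it
  mount /dev/xenvg/newguest-root /mnt
  tar xzf /srv/templates/base.tar.gz -C /mnt
  umount /mnt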

Also interesting was that a few people are doing "partitioning" at the 
Dom0/LVM level with a separate LV per partition and passing those 
partitions through to the DomU rather than passing an LV as a disk and 
partitioning at the DomU level.

Thanks for all the input. At this point I'll be using templates with a
file-level copy and a separate LV per partition to make resizing easier.
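
(With that layout, a resize should be roughly: grow the LV on Dom0 and
grow the filesystem in the DomU once it sees the new size, which will
probably mean a restart. Device names below are only examples.)

  # Dom0
  lvextend -L +5G /dev/xenvg/domudisk-home

  # DomU, after a restart so the larger disk is visible
  resize2fs /dev/xvda2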

Regards,
Jeff


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

