
Re: [Xen-users] Questions on qcow, qcow2 versus LVM



On Thu, Dec 24, 2009 at 9:04 PM, Matthew Law <matt@xxxxxxxxxxxxxxxxxx> wrote:
> I do care about space saving but would trade a little for flexibility and
> performance.

Good to hear that. So we can rule out LVM snapshot problems, at least
for now.

>  The dom0 in question has 8 disks on an Adaptec 5805Z
> controller split into a 2 disk RAID1 volume for the OS and a 6 disk RAID6
> volume for domU storage and vm images.  I don't want to use individual
> disks or create scads of partitions for each domU.

I use LVM for domU disks as well. Never had the problem you mentioned.

> I have a bunch of shell scripts to
> provision domUs by creating an LV, formatting it, mounting it and
> untarring the template image into it.

That's basically what I do. I use RHEL 5.4, some hosts with the
built-in Xen, others with Gitco's Xen RPM.
You mentioned disklabels and partitions. Does that mean you use
kpartx? Did you remember to delete the mappings afterwards with
"kpartx -d"?

I seriously suspect your problem is related to kpartx. Try changing
your setup a little so that it maps LVs as partitions instead of
disks. Something like this in your domU config file:

disk =  [
        'phy:/dev/vg/rootlv,xvda1,w',
        'phy:/dev/vg/swaplv,xvda2,w',
        ]

You could use sda1 or hda1 instead of xvda1 if your existing domUs
already use those names. There shouldn't be any change necessary to
your domU tar image (including fstab or initramfs) as long as you
don't use LVM on the domU side as well.
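
For reference, the domU's fstab in that scenario just keeps referencing the
partition devices directly. A sketch, assuming ext3 and the xvda names above:

        /dev/xvda1   /      ext3   defaults   1 1
        /dev/xvda2   swap   swap   defaults   0 0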

A successful run through the series lvcreate - mkfs - mount - untar -
unmount - xm create - xm destroy - lvremove should at least narrow
down your problem.
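
A rough sketch of that sequence, with the LV name, size, template path
and config file made up for illustration:

        lvcreate -L 10G -n testlv vg
        mkfs.ext3 /dev/vg/testlv
        mount /dev/vg/testlv /mnt
        tar xzf /path/to/template.tar.gz -C /mnt
        umount /mnt
        xm create /etc/xen/testdomu.cfg
        xm destroy testdomu
        lvremove /dev/vg/testlv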

-- 
Fajar

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

