
Re: [Xen-users] xvda1, xvda2 disk configuration with hvm



Hi Juan,

> Am 07.03.2018 um 22:59 schrieb Juan Rossi <juan@xxxxxxxxxxxxxxx>:
> 
> Hi,
> 
> We are converting some VMs from pv to hvm. We used to map the disks in the 
> configuration file as follows:
> 
> disk = [ 'phy:/dev/users/debian.img,xvda1,w', 
> 'phy:/dev/users/debian.swapfs.swp,xvda9,w' ]
> 
> so we are basically mapping block devices that have no partition tables, just 
> file systems, to xvdaX-style devices.
> 
> When moving to type=hvm we hit issues with this. If I am correct, it is due to 
> qemu-dm and the need for partition tables: the images have to be mapped as raw 
> disks, and the devices need to use different letters, e.g. 
> /dev/users/debian.img,xvda1 /dev/users/debian.swapfs.swp,xvdb1.
> 
> Is there a workaround or setting we may be missing that would allow us to map 
> filesystems in the form of xvdaX using hvm?

your life will be easiest if you switch to "valid" disk images, i.e. whole 
disks with partition tables. It's overdue these days, and I don't think you'll 
gain much by postponing it through another migration.
Nonetheless it could be a few days of work if it's a lot of VMs.
That's basically the debt you need to get rid of, and nothing (except a smarter 
solution than I could think of) will save you there. I hope I'm not too blunt, 
but the thing is, LVM has been around since AIX in 1992.
It's made for this kind of stuff. Below I'm generally assuming you don't have 
that option.
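
For reference, a rough sketch of what the disk line could look like once each 
VM has a single whole-disk volume with a partition table inside it (the volume 
name here is only an assumption based on your example):

  # partitioned inside the guest, e.g. xvda1 = / and xvda2 = swap
  disk = [ 'phy:/dev/users/debian.disk,xvda,w' ]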

Depending on the number of VMs I can imagine two approaches:

0) (not counted) - if you've only got very few, just scrap and rebuild them.

1) For 10-100 VMs I would suggest attaching a "new" (whole) disk to each VM and 
mirroring its contents into partitions/LVs on the new disk - and making that 
disk bootable, of course.
If you already use LVM inside the guests this could be done with mirrors, and 
you could, without too much headache, even keep using multiple PVs if there's a 
real performance benefit behind this (a rough sketch of that follows below).
An MD mirror is also fine, but it is quite risky (more testing needed). You'd 
build a RAID, fill it with the old contents, then boot and be happy.
If you do it wrong, the likely outcome is booting to nothing, and possibly 
overwriting your old disks in the process. :->
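
To make the LVM flavour concrete, the general shape is roughly this (untested 
sketch; it assumes the guest already keeps root/swap as LVs in a VG called vg0 
on a PV xvda1, and that the new empty disk shows up as xvdb - adjust all names):

  echo ',,8e' | sfdisk /dev/xvdb   # one MBR partition spanning the disk, type LVM
  pvcreate /dev/xvdb1
  vgextend vg0 /dev/xvdb1
  pvmove /dev/xvda1 /dev/xvdb1     # moves the extents online via a temporary mirror
  vgreduce vg0 /dev/xvda1
  grub-install /dev/xvdb           # make the new disk bootable
  # afterwards: check /etc/fstab and point the Xen disk= line at the new whole disk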

A good tool to mention at this point is blocksync.py, which can do rsync-style 
block updates between devices (they should be block devices, not files) with 
multiple threads.
That way you can do incremental updates and limit the downtime you need. I 
*have* used it that way: stopped services, done one more sync and xl destroy'd 
the VMs, to good effect.
I would always go with an "offline" final sync though if there's any liability 
involved.
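
The exact invocation differs between the blocksync.py forks floating around, 
but the general shape of a long live pass plus a short final offline pass is 
something like this (host and target device names are made up):

  # first pass while the VM is still running (long, copies everything)
  python blocksync.py /dev/users/debian.img desthost /dev/vg0/debian-new
  # ...later: stop services / shut the VM down, then one short incremental pass
  xl shutdown debian
  python blocksync.py /dev/users/debian.img desthost /dev/vg0/debian-new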

2) For 100s or 1000s of VMs I would write something that scans the existing 
disks and builds a partition table & boot loader:
- redirect that > into a file
- append all existing partitions >> to the file (you can cp -s or dd to 
  sparsify, if needed)
- if using GPT you also need to write a second table to the end.
If you're afraid of that, you could work with a full-sized image that gets all 
this info, use kpartx to make it visible as a disk, and then just copy your 
individual parts into it (a rough sketch of that variant follows below).
Of course, just concatenating them will be a lot faster.
Finally, your script would need to edit grub and fstab.
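
The kpartx variant could look roughly like this (completely untested sketch; 
all paths, sizes and the partition layout are assumptions):

  SRC_ROOT=/dev/users/debian.img          # existing root FS, no partition table inside
  SRC_SWAP=/dev/users/debian.swapfs.swp   # existing swap
  OUT=/var/lib/xen/images/debian.disk     # new full-sized sparse image for the whole disk

  # create the image and write an MBR table: part 1 = root (bootable), part 2 = swap
  truncate -s 20G "$OUT"
  printf 'label: dos\n,18G,83,*\n,,82\n' | sfdisk "$OUT"

  # expose the partitions and copy the old contents into them
  LOOP=$(losetup --show -f "$OUT")
  kpartx -av "$LOOP"                      # creates /dev/mapper/loop0p1, loop0p2, ...
  dd if="$SRC_ROOT" of=/dev/mapper/$(basename "$LOOP")p1 bs=1M conv=sparse
  dd if="$SRC_SWAP" of=/dev/mapper/$(basename "$LOOP")p2 bs=1M
  kpartx -dv "$LOOP"; losetup -d "$LOOP"

  # still missing: install grub into the image and fix /etc/fstab, as noted above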

Y) I think you can also do something with virt-image if you can take the VMs 
offline one by one for a prolonged period (as long as the copying takes).
Z) There are options with things like nbd or dmsetup where you could do step 2) 
without moving data, at least up to a point (building your own linear devices; 
see the sketch after this paragraph).
It is possible, but you would be the only person on the planet doing that, and 
you'd probably be proud for a month and then regret it for a decade.
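
For the curious, the dmsetup idea boils down to stitching a small header device 
(holding only the MBR/partition table) and the existing filesystem devices into 
one linear device, without copying anything (untested sketch; all names and 
sizes are assumptions):

  # a tiny device that only holds the MBR/partition table
  truncate -s 1M /var/lib/xen/debian-header.img
  HDR=$(losetup --show -f /var/lib/xen/debian-header.img)
  ROOT=/dev/users/debian.img
  SWAP=/dev/users/debian.swapfs.swp

  HDR_S=$(blockdev --getsz "$HDR")        # sizes in 512-byte sectors
  ROOT_S=$(blockdev --getsz "$ROOT")
  SWAP_S=$(blockdev --getsz "$SWAP")

  # each table line: <start> <length> linear <backing device> <offset>
  {
    echo "0 $HDR_S linear $HDR 0"
    echo "$HDR_S $ROOT_S linear $ROOT 0"
    echo "$((HDR_S + ROOT_S)) $SWAP_S linear $SWAP 0"
  } | dmsetup create debian-disk

  # then write a partition table into /dev/mapper/debian-disk whose partitions
  # start exactly at $HDR_S and $HDR_S+$ROOT_S sectors - that alignment is the
  # fiddly part, and grub/fstab still need fixing as in 2)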

Closing words:
My own migrations were always the least painful when I slapped in iSCSI targets 
and LVM mirrors or even MD. I like migrations to be online, and to *add* 
redundancy while they are in progress...
Your example had file-based images. Fragmentation might become a serious 
nightmare if you parallelize your migration.

Myself, I love storage management, and I hope whatever you end up with is fun 
and not horribly tedious.

Flo