
Re: [Xen-users] xvda1, xvda2 disk configuration with hvm


  • To: xen-users@xxxxxxxxxxxxxxxxxxxx
  • From: Juan Rossi <juan@xxxxxxxxxxxxxxx>
  • Date: Thu, 8 Mar 2018 13:27:45 +1300
  • List-id: Xen user discussion <xen-users.lists.xenproject.org>

Hi


Thanks for the detailed answer and the time you have put into it.


On 08/03/18 11:48, Florian Heigl wrote:
> Hi Juan,
>
> On 07.03.2018 at 22:59, Juan Rossi <juan@xxxxxxxxxxxxxxx> wrote:
>
>> Hi,
>>
>> We are converting some VMs from PV to HVM. We used to map the disks in
>> the configuration file as follows:
>>
>> disk = [ 'phy:/dev/users/debian.img,xvda1,w',
>>          'phy:/dev/users/debian.swapfs.swp,xvda9,w' ]
>>
>> So we are basically mapping block devices that have no partition tables,
>> just file systems, to xvdaX-style devices.
>>
>> When moving to type=hvm we hit issues with this. If I am correct, this is
>> due to qemu-dm and the need for partition tables; when mapping raw
>> devices, each one needs its own disk letter, e.g.
>> /dev/users/debian.img,xvda1 /dev/users/debian.swapfs.swp,xvdb1.
>>
>> Is there a workaround or setting we may be missing that would allow us to
>> map file systems in the xvdaX form using HVM?

> Your life will be easiest if you switch to "valid" disk images. It's
> overdue these days, and I don't think you'll gain much by postponing it
> through yet another migration.
> Nonetheless, it could be a few days of work if you have a lot of VMs.
> That's basically the debt you need to get rid of, and nothing (except a
> smarter solution than I could think of) will save you there. I hope I'm
> not too blunt, but the thing is, LVM has shipped with AIX since 1992.
> It's made for this kind of stuff. Below I'm generally assuming you don't
> have that option.
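
For reference, the "valid images" layout we are being pointed at would look
roughly like this in the domU config (a sketch only; the LV name and the
partition layout inside it are examples, not our current setup):

    type = 'hvm'
    disk = [ 'phy:/dev/users/debian.disk,xvda,w' ]

i.e. a single LV carrying a partition table, with the root file system on
xvda1 and swap on xvda2, instead of bare file systems on separate LVs.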


Just for clarification: the block devices such as /dev/users/debian.img are LVs.


> Depending on the number of VMs, I can imagine two approaches.
> 0) (not counted) - if you have very few, just scrap them.
>
> 1) For 10-100 VMs I would suggest attaching a "new" (whole) disk to each
> VM and mirroring their disks into partitions/LVs on the new disk - and
> making that disk bootable, of course.
> If you already use LVM, this could be done with mirrors, and you could
> even keep using multiple PVs without too much headache if there's a real
> performance benefit behind that.
> An MD mirror is also fine, but it is quite risky (more testing needed).
> You'd build a RAID, fill it with the old contents, then boot and be happy.
> If you do it wrong, the likely outcome is booting with nothing, and
> optionally overwriting your old disks as well. :->
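
If we go that route, I imagine the MD variant would look roughly like this
from inside the guest (untested sketch; device names and sizes are examples,
and the new partition must be at least as large as the old file system):

    # New whole disk attached as /dev/xvdb (hypothetical); old root FS on /dev/xvda1.
    sfdisk /dev/xvdb <<'EOF'
    label: dos
    ,8G,L,*
    ,1G,S
    EOF
    # Degraded RAID1 with only the new partition, so the old disk is never
    # a sync target:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/xvdb1
    # Fill it with the old contents:
    dd if=/dev/xvda1 of=/dev/md0 bs=4M conv=fsync
    # ...then fix fstab/grub on md0 and test-boot before touching the old LVs.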

> A good tool to mention at this point is blocksync.py, which can do
> rsync-style block updates between devices (they should be block devices,
> not files) with multiple threads.
> That way you can do incremental updates and limit the downtime you need.
> I *have* used it: stopped services, done one more sync, and xl destroy'd
> the VMs, to good effect.
> I would always go with an "offline" final sync, though, if there's any
> liability involved.
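
The exact invocation differs between the blocksync.py forks floating
around, but it is roughly (host and device names are examples):

    # First run copies everything; later runs only push blocks whose
    # checksums differ, so re-syncs are quick.
    python blocksync.py /dev/users/debian.img root@newhost /dev/vg0/debian.root
    # A final pass after stopping services in the guest keeps downtime short.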

> 2) For 100s or 1000s of VMs, I would write something that scans the
> existing disks and builds a partition table & boot loader:
> - redirect that output into a file (>),
> - append all existing partitions to the file (>>) (you can cp -s or dd to
>   sparsify, if needed),
> - if using GPT, you also need to write a second table to the end.
> If afraid, you could work with a full-sized image that gets all this info,
> use kpartx to make it a visible disk, and then just copy your individual
> parts over (roughly as sketched below). Of course, just concatenating them
> will be a lot faster.
> Finally, your script would need to edit grub and fstab.
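
The cautious "if afraid" variant could look something like this (untested
sketch; the image name and sizes are examples):

    # Sparse full-sized target image with a partition table:
    truncate -s 10G debian.disk.img
    sfdisk debian.disk.img <<'EOF'
    label: dos
    ,9G,L,*
    ,1G,S
    EOF
    # Map the partitions and copy the old contents into them:
    kpartx -av debian.disk.img      # creates /dev/mapper/loop0p1, loop0p2
    dd if=/dev/users/debian.img of=/dev/mapper/loop0p1 bs=4M conv=sparse
    dd if=/dev/users/debian.swapfs.swp of=/dev/mapper/loop0p2 bs=4M
    # ...then install grub into the image and fix /etc/fstab on p1.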

We are looking into this to see what can be done, but for the time being it
appears that mapping the disks to different letters may be the only viable
option. I believe hacky solutions like linear MD devices may not be the best
approach (I have not looked into others yet).

I have been reading about LVM resizes; sadly, I cannot find a solution that
would let me add space at the start of an LV and then build the partition
table there without moving data around. We are trying to avoid downtime and
issues like that.

> Y) I think you can also do something with virt-image if you can take the
> VMs offline one by one for a prolonged period (as long as the copying
> takes).
> Z) There are options with things like nbd or dmsetup where you could do
> step 2) without moving data, at least up to a point (building your own
> linear devices).
> It is possible, but you would be the only person on the planet doing that,
> and you'd probably be proud for a month and then regret it for a decade.
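
Since moving data is exactly what we want to avoid, I imagine option Z)
would go something like this with dmsetup (untested sketch; names and sizes
are examples, and the small LV may get rounded up to the VG extent size):

    # Small LV to carry the MBR/partition table, concatenated in front of
    # the existing FS LV via a device-mapper linear table (sizes are in
    # 512-byte sectors):
    lvcreate -L 1M -n debian.header users
    HDR=$(blockdev --getsz /dev/users/debian.header)
    FS=$(blockdev --getsz /dev/users/debian.img)
    dmsetup create debian-whole <<EOF
    0 $HDR linear /dev/users/debian.header 0
    $HDR $FS linear /dev/users/debian.img 0
    EOF
    # Write a partition table into /dev/mapper/debian-whole whose first
    # partition starts exactly at sector $HDR, then map
    # 'phy:/dev/mapper/debian-whole,xvda,w' in the domU config.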

> Closing words:
> My own migrations were always the least painful when I slapped in iSCSI
> targets and LVM mirrors or even MD. I like migrations to be online, and to
> *add* redundancy while in progress...
> Your example had file-based images. Fragmentation might be a serious
> nightmare if you parallelize your migration.
>
> Myself, I love storage management, and I hope whatever you end up with is
> fun and not horribly tedious.


Thanks again.


If you have other questions, let us know; we are always happy to help.

Regards.

Juan.-
http://ri.mu - Startups start here. Hosting; DNS; monitoring; backups; email; web programming

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-users

 

