
[Xen-users] Re: Snapshotting LVM backed guests from dom0


  • To: Xen-Users List <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: chris <tknchris@xxxxxxxxx>
  • Date: Fri, 23 Apr 2010 13:53:12 -0400
  • Delivery-date: Fri, 23 Apr 2010 10:56:42 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

I think this got missed during the mailing list downtime last
weekend... I can't imagine no one has any input?

- chris

On Sat, Apr 17, 2010 at 2:53 PM, chris <tknchris@xxxxxxxxx> wrote:
> Just looking for some feedback from other people who do this. I know
> it's not a good "backup" method, but "crash-consistent" images have
> been very useful for me in disaster situations, just to get the OS
> running quickly and then restore data from a data backup. My typical
> setup is to put the LV in snapshot mode while the guest is running,
> then dd the data to a backup file on an NFS mount point. What seems to
> be happening is that the VM's performance gets pretty poor while the
> copy is running. My guesses at why this was happening were:
>
> 1.   dom0 having equal weight to the other 4 guests on the box and
> somehow hogging cpu time
> 2.   lack of QoS on the IO side / dom0 hogging IO
> 3.   process priorities in dom0
> 4.   NFS overhead
>
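For reference, the snapshot-and-copy workflow described above can be sketched roughly as follows. The volume group, LV name, snapshot size, and NFS path are all hypothetical placeholders; by default the script only echoes the commands instead of running them:

```shell
#!/bin/sh
# Sketch of the LVM-snapshot backup described above. vg0/guest1, the 5G
# snapshot size, and /mnt/backup are hypothetical names, not the poster's
# actual setup. Commands are echoed by default; set RUN= to execute them.
RUN=${RUN:-echo}

VG=vg0
LV=guest1
SNAP=${LV}-snap
DEST=/mnt/backup/${LV}.img   # NFS-mounted destination file

# 1. Take a copy-on-write snapshot while the guest keeps running
#    (this is what yields the crash-consistent image).
$RUN lvcreate --snapshot --size 5G --name "$SNAP" "/dev/$VG/$LV"

# 2. Stream the frozen snapshot out to the backup file.
$RUN dd if="/dev/$VG/$SNAP" of="$DEST" bs=4M

# 3. Drop the snapshot so its copy-on-write space is released.
$RUN lvremove -f "/dev/$VG/$SNAP"
```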
> For each of these items I tried to adjust things to see if it improved.
>
> 1.   Tried increasing dom0's weight to 4x that of the other VMs.
> 2.   Saw Pasi mention dm-ioband a few times and think this might
> address IO scheduling, but I haven't tried it yet.
> 3.   Tried nice-ing the dd to the lowest priority and qemu-dm to the
> highest.
> 4.   Tried changing the destination to a local disk (to rule out NFS).
>
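Items 1 and 3 above can be combined along these lines on a credit-scheduler host. "Domain-0", the weight value, and the device/file paths are assumptions; commands are echoed by default. One extra knob worth noting is dd's oflag=direct, which keeps the copy out of dom0's page cache:

```shell
#!/bin/sh
# Sketch of attempted mitigations 1 and 3 above. The domain name, weight,
# and paths are assumptions. Commands are echoed by default; set RUN= to
# actually execute them.
RUN=${RUN:-echo}
WEIGHT=1024   # 4x the default credit-scheduler guest weight of 256

# 1. Give dom0 more CPU weight than the guests.
$RUN xm sched-credit -d Domain-0 -w "$WEIGHT"

# 3. Run the copy in the idle I/O class and at the lowest CPU priority,
#    so dom0's elevator services it only when the disks are otherwise
#    idle. oflag=direct also keeps the copy out of dom0's page cache.
$RUN ionice -c3 nice -n 19 \
    dd if=/dev/vg0/guest1-snap of=/mnt/backup/guest1.img bs=4M oflag=direct
```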
> Changing the things above didn't really seem to help, either alone or
> in combination. My setup is Xen 3.2 and Xen 4.0 on dual Nehalem
> processors, 24GB RAM, and a RAID 5+0 of WD RE3 1TB disks. The hardware
> in the boxes is quite good, and there seems to be no noticeable
> difference between the Xen versions. What I'd ideally like is to take
> the backups with as little impact on the running VMs as possible. I
> honestly don't care how long the backups take, but I want to avoid
> just throttling them to a fixed speed, because that seems
> inefficient/hacky. Can anyone share their experiences, both good and
> bad?
>
> Thanks,
> - chris
>

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

