
Re: [Xen-users] Re: Snapshotting LVM backed guests from dom0


  • To: Nick Couchman <Nick.Couchman@xxxxxxxxx>
  • From: chris <tknchris@xxxxxxxxx>
  • Date: Fri, 23 Apr 2010 17:10:40 -0400
  • Cc: Xen-Users List <xen-users@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 23 Apr 2010 14:12:05 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Thanks, everyone, for the tips. I will try experimenting with these over
the weekend and let you know how much they help, if at all.
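
For anyone who finds this in the archives, the procedure I described below
is roughly the following (the VG, LV, and mount point names are just
placeholders for my setup):

  # create a snapshot of the guest's LV while the guest keeps running
  lvcreate --snapshot --size 2G --name vm1-snap /dev/vg0/vm1

  # copy the snapshot to a backup file on the NFS mount
  dd if=/dev/vg0/vm1-snap of=/mnt/backup/vm1.img bs=1M

  # drop the snapshot so it stops tracking changes against the origin LV
  lvremove -f /dev/vg0/vm1-snap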

- chris

On Fri, Apr 23, 2010 at 3:37 PM, Nick Couchman <Nick.Couchman@xxxxxxxxx> wrote:
>> On Sat, Apr 17, 2010 at 2:53 PM, chris <tknchris@xxxxxxxxx> wrote:
>>> Just looking for some feedback from other people who do this. I know
>>> it's not a good "backup" method, but "crash consistent" images have been
>>> very useful for me in disaster situations just to get the OS running
>>> quickly and then restore data from a data backup. My typical setup is to
>>> create a snapshot of the LV while the guest is running and then dd the
>>> snapshot to a backup file on an NFS mount point. What seems to happen is
>>> that the VM's performance gets quite poor while the copy is running. My
>>> guesses at why this was happening were:
>>>
>>> 1.   dom0 having equal weight to the other 4 guests on the box and
>>> somehow hogging CPU time
>>> 2.   lack of QoS on the I/O side / dom0 hogging I/O
>>> 3.   process priorities in dom0
>>> 4.   NFS overhead
>>>
>>> For each of these items I tried to adjust things to see if it improved.
>>>
>>> 1.   Tried increasing dom0's weight to 4x that of the other VMs.
>
> Probably not going to help - if you increase the weight, you'll choke out
> your other domUs, and if you decrease it, the domUs may also be affected,
> because network and disk I/O end up going through dom0 in the end anyway.
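
For reference, the weight change I tried was along these lines (assuming the
credit scheduler and the xm toolstack; 1024 is 4x the default weight of 256):

  # show dom0's current credit-scheduler parameters
  xm sched-credit -d Domain-0

  # raise dom0's weight to 4x the default
  xm sched-credit -d Domain-0 -w 1024
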
>
>>> 2.   Saw Pasi mentioning dm-ioband a few times and think it might
>>> address I/O scheduling, but I haven't tried it yet.
>>> 3.   Tried nicing the dd to the lowest priority and qemu-dm to the highest.
>
> I would expect this to help somewhat, but it may not be the only thing
> needed.  Also, remember that network and disk I/O are still done through
> drivers in dom0, which means pushing qemu-dm to the highest priority really
> won't buy you anything.  I would expect renicing dd to help some, though.
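
Roughly what I have in mind for the dd is below; the ionice part is an
addition I haven't tried yet, and it only has an effect if dom0 is using the
CFQ I/O scheduler:

  # lowest CPU priority, and (under CFQ) the idle I/O class, so the copy
  # only gets disk time when nothing else is asking for it
  ionice -c3 nice -n 19 dd if=/dev/vg0/vm1-snap of=/mnt/backup/vm1.img bs=1M
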
>
>>> 4.   Tried changing the destination to a local disk instead of NFS.
>
> This indicates that the bottleneck is local and not the network.  The next 
> step would be to grab some Linux performance monitoring and debugging tools 
> and figure out where your bottleneck is.   So, things like top, xentop, 
> iostat, vmstat, and sar may be useful in determining what component is 
> hitting its performance limit and needs to be tweaked or worked around.
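
For the record, a few invocations of those tools that look like a reasonable
starting point (the 5-second interval is arbitrary):

  iostat -xk 5     # per-device utilization, await and queue sizes
  vmstat 5         # run queue, blocked processes, memory and swap activity
  xentop -b -d 5   # per-domain CPU, memory and VBD/network counters, batch mode
  sar -d -p 5      # per-device activity with readable device names
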
>
> -Nick
>
>
>
>

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

