
Re: [Xen-users] Xen LVM2 snapshot backup



David Della Vecchia  wrote:
> 
> I'm not sure what you were expecting; that's how snapshots work, and there's 
> always a downside. LVM snapshots are copy-on-write: the first write to any 
> chunk of the origin forces the old contents of that chunk to be copied into 
> the snapshot's COW area first, and because snapshots are sparse, that space 
> also has to be allocated as needed up to the limit. If you create a snap of a 
> 20 GB origin disk and specify -L 1G at creation, it will hold roughly 1 GB 
> worth of changed chunks until it is either dropped or auto-extended, 
> depending on your settings. (Note that lvcreate -s will refuse to create a 
> snapshot without a size; you must pass -L or -l.) Snapshots are effectively 
> branches of the origin volume, in much the same way branches are used in 
> typical source control systems (svn, cvs, etc.).
> 
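Concretely, that lifecycle looks something like this (vg0 and domU-disk are placeholder names, and the sizes are arbitrary):

```shell
# Create a 1G copy-on-write area for a snapshot of the origin LV.
lvcreate -s -n domU-disk-snap -L 1G /dev/vg0/domU-disk

# The Snap% column shows how full the COW area is; if it reaches
# 100% the snapshot is invalidated.
lvs /dev/vg0/domU-disk-snap

# Grow the COW area before it fills up.
lvextend -L +1G /dev/vg0/domU-disk-snap
```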
> On Mon, Nov 28, 2011 at 2:06 PM, Denis J. Cirulis <denis@xxxxxxxxxxxxx> wrote:
> > On Mon, Nov 28, 2011 at 01:30:02PM -0500, Errol Neal wrote:
> > Denis J. Cirulis  wrote:
> > >                 Hi,
> > >
> > > is there a way to speed up LVM performance while a logical volume has
> > > one or more snapshots?
> > >
> > > I have several domUs on LVM volumes; I take a snapshot of each volume
> > > and then back each one up with dd. Write performance on a snapshotted
> > > LV drops badly, in my case from 600 MB/s to 78-80 MB/s.
> > > I tried setting archive = 0 in lvm.conf, with no result.
> > >
> > > Is there a way to speed this up, or are there more interesting live
> > > domU backup solutions out there?
> > >
> > Are you saying your performance drops by that much before you even start 
> > backing up?
> > Why not use ntfsclone/partclone as opposed to dd? That will certainly 
> > improve performance and reduce your exposure.
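Either tool copies only the allocated blocks instead of the raw device. A sketch of both variants (device and output paths below are placeholders, not your actual setup):

```shell
# NTFS guest (Windows domU): save a sparse image of used clusters only.
ntfsclone --save-image -o /backup/domU-disk.ntfsimg /dev/vg0/domU-disk-snap

# ext4 guest: the same idea with partclone.
partclone.ext4 -c -s /dev/vg0/domU-disk-snap -o /backup/domU-disk.img
```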
> 
> For example:
> 
> suse-cloud:~ # vgs
>   VG           #PV #LV #SN Attr   VSize   VFree
>   nova-volumes   1   0   0 wz--n-  83.43G  83.43G
>   test-vg        2   1   0 wz--n- 596.17G 586.17G
> suse-cloud:~ # lvs
>   LV       VG      Attr   LSize  Origin Snap%  Move Log Copy%  Convert
>   test-vol test-vg -wi-a- 10.00G
> suse-cloud:~ # mount /dev/test-vg/test-vol /mnt/test/
> suse-cloud:~ # dd if=/dev/zero of=/mnt/test/file.1g bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 2.38892 s, 439 MB/s
> suse-cloud:~ # lvcreate -s -n test-vol-snap /dev/test-vg/test-vol -L3G
>   Logical volume "test-vol-snap" created
> suse-cloud:~ # lvs
>   LV            VG      Attr   LSize  Origin   Snap%  Move Log Copy%  Convert
>   test-vol      test-vg owi-ao 10.00G
>   test-vol-snap test-vg swi-a-  3.00G test-vol   0.00
> suse-cloud:~ # dd if=/dev/zero of=/mnt/test/file.1-1g bs=1M count=1000
> 1000+0 records in
> 1000+0 records out

David beat me to it. I won't be redundant, but I will say you are artificially 
inducing an issue unless you plan on writing your backups to the same device 
you've snapped. Your reads (both random and sequential) should be largely 
unaffected by the presence of a snap.
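So the whole cycle is: snapshot, copy the snapshot to a *different* device, then drop the snapshot immediately so the origin stops paying the copy-on-write penalty on every write. A rough sketch (volume names, sizes, and the /backup path are assumptions):

```shell
#!/bin/sh
set -e
VG=vg0
LV=domU-disk

# 1. Freeze a point-in-time view of the volume.
lvcreate -s -n "${LV}-snap" -L 2G "/dev/${VG}/${LV}"

# 2. Copy it somewhere that is NOT on the origin's physical disks.
dd if="/dev/${VG}/${LV}-snap" of="/backup/${LV}.img" bs=4M

# 3. Drop the snapshot as soon as the copy finishes.
lvremove -f "/dev/${VG}/${LV}-snap"
```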

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
