
Re: [Xen-users] Xen LVM2 snapshot backup



----------------------------------------------------------------------------------

I'm not sure what you were expecting; that's how snapshots work, and there's always a downside. With classic LVM2 snapshots every write to the origin triggers copy-on-write: the original blocks are copied into the snapshot's exception area before the new data is written, and since snapshots are sparse volumes there is the added hit of allocating that space on demand, up to the snapshot's size limit. If you create a snapshot of a 20 GB origin disk and specify -L 1G at create time, it will effectively hold 1 GB worth of changes until it is either dropped or auto-extended, depending on your settings. I believe that if you do not specify a size it will create a snapshot the same size as the origin volume. Snapshots are effectively branches of the origin volume, in much the same way branches are used in typical source-control systems (svn, cvs, etc.).
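
A minimal sketch of sizing and monitoring such a snapshot (the VG/LV names here are made up, and the autoextend knobs are the standard lvm.conf settings in recent LVM2 releases; check what your version supports):

    # create a snapshot with a 2 GB copy-on-write area
    lvcreate -s -n domu1-snap -L 2G /dev/vg0/domu1

    # watch how full the COW area gets; a snapshot that fills up becomes invalid
    lvs -o lv_name,origin,snap_percent vg0

    # optional auto-extend, set in /etc/lvm/lvm.conf (activation section):
    #   snapshot_autoextend_threshold = 70    # extend once 70% full
    #   snapshot_autoextend_percent = 20      # grow by 20% each time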

On Mon, Nov 28, 2011 at 2:06 PM, Denis J. Cirulis <denis@xxxxxxxxxxxxx> wrote:
On Mon, Nov 28, 2011 at 01:30:02PM -0500, Errol Neal wrote:
> Denis J. Cirulis  wrote:
> >                 Hi,
> >
> > is there a way to speed up LVM performance while a logical volume has one
> > or more snapshots?
> >
> > I have several domUs on LVM volumes; I take a snapshot of each
> > volume and then back each volume up with dd.
> > It seems to me that write performance on a snapshotted LV drops, in my
> > case from 600 MB/s to 78-80 MB/s.
> > I tried setting archive=0 in lvm.conf, with no result.
> >
> > Is there a way to speed it up or maybe there are more interesting live
> > domU backup solutions out there ?
> >
> Are you saying your performance drops by that much before you even start backing up?
> Why not use ntfsclone/partclone as opposed to dd? That will certainly improve performance and reduce your exposure.
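
For reference, a rough sketch of that partclone approach (hypothetical device and file names, assuming the domU volume holds an ext4 filesystem; for an NTFS guest, ntfsclone --save-image would be the equivalent):

    # snapshot the domU volume, then copy only the used blocks
    lvcreate -s -n domu1-snap -L 2G /dev/vg0/domu1
    partclone.ext4 -c -s /dev/vg0/domu1-snap -o /backup/domu1.pcl
    lvremove -f /dev/vg0/domu1-snap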

For example:

suse-cloud:~ # vgs
 VG           #PV #LV #SN Attr   VSize   VFree
 nova-volumes   1   0   0 wz--n-  83.43G  83.43G
 test-vg        2   1   0 wz--n- 596.17G 586.17G
suse-cloud:~ # lvs
 LV       VG      Attr   LSize  Origin Snap%  Move Log Copy%  Convert
 test-vol test-vg -wi-a- 10.00G
suse-cloud:~ # mount /dev/test-vg/test-vol /mnt/test/
suse-cloud:~ # dd if=/dev/zero of=/mnt/test/file.1g bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.38892 s, 439 MB/s
suse-cloud:~ # lvcreate -s -n test-vol-snap /dev/test-vg/test-vol -L3G
 Logical volume "test-vol-snap" created
suse-cloud:~ # lvs
 LV            VG      Attr   LSize  Origin   Snap%  Move Log Copy%  Convert
 test-vol      test-vg owi-ao 10.00G
 test-vol-snap test-vg swi-a-  3.00G test-vol   0.00
suse-cloud:~ # dd if=/dev/zero of=/mnt/test/file.1-1g bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 6.60005 s, 159 MB/s
suse-cloud:~ # lvremove /dev/test-vg/test-vol-snap
Do you really want to remove active logical volume "test-vol-snap"? [y/n]: y
 Logical volume "test-vol-snap" successfully removed
suse-cloud:~ # dd if=/dev/zero of=/mnt/test/file.1-2g bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.43745 s, 305 MB/s
suse-cloud:~ #

These results are from a test system.
With just one snapshot of test-vg/test-vol I see roughly a 3x write
performance drop, and the more snapshots there are, the lower the write
speed on the origin volume.
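
A rough sketch of the per-volume workflow described above (volume names and snapshot size are made up); keeping each snapshot only for the duration of its copy limits how long the origin pays the copy-on-write penalty:

    for lv in domu1 domu2 domu3; do
        lvcreate -s -n "${lv}-snap" -L 2G "/dev/vg0/${lv}"
        dd if="/dev/vg0/${lv}-snap" of="/backup/${lv}.img" bs=1M
        lvremove -f "/dev/vg0/${lv}-snap"   # drop the snapshot as soon as the copy is done
    done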

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


