
Re: [Xen-users] LVM or file storage?

On Sun, Feb 11, 2007 at 07:27:55PM +0100, Boris Senker wrote:
> Load wise, loop doesn't make loads anymore (since not used with LVM; 
> this box has an old Pentium IV 2.8 GHz CPU without HT so loop loads 
> are kind of noticeable at times) and I *think* the CPU usage is hence 
> lower (haven't tested performance yet, but I think LVM storage is 
> faster for Samba operations - opening a folder full of images in 
> thumbnail view from Windows box seems faster than with file image mounts).

The reason loop appears to not generate as much load is that it is not
writing your data out to disk: writes are cached in memory by the loop
driver and only flushed periodically. Needless to say, this is playing
Russian roulette with your data - if you experience an outage on Dom0,
chances are that your guest filesystems will suffer *catastrophic*
data loss. Not even journaling in the guest FS will help you here,
since the journal writes will simply be cached in memory in the loop
driver as well.
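To see that buffering window in action, here is a generic page-cache
illustration (not specific to the loop driver; the file name is made up):

```shell
# Data written to a file lands in the page cache first and only becomes
# durable once the kernel flushes it - exactly the window in which a
# Dom0 outage loses loop-cached guest writes.
dd if=/dev/zero of=buf_test.img bs=1M count=4 2>/dev/null
grep Dirty /proc/meminfo   # dirty pages still pending writeback
sync                       # force writeback; only now is the data on disk
grep Dirty /proc/meminfo   # Dirty count typically drops back down
rm -f buf_test.img
```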

If you want to compare the performance of real block devices vs a
file-backed image, use the blktap driver instead of the loop driver.

e.g. instead of

  disk = [ 'file:/var/lib/xen/images/guest.img,xvda,w' ]

use a path like

  disk = [ 'tap:aio:/var/lib/xen/images/guest.img,xvda,w' ]

(the image path here is just illustrative - keep whatever path you
already use, only change the prefix).
Also, I'd recommend fully allocating the disk space for your file image,
rather than using sparse files - there is significant overhead involved
in extending the sparse files at runtime which can lead to unexpected
performance degradation. Sparse is fine for development/testing, but in 
production you want non-sparse files.
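To illustrate the difference (file names are made up): a fully allocated
image has all of its blocks reserved up front, while a sparse one reports
the same apparent size but occupies almost nothing until the guest
actually writes, forcing block allocation at runtime.

```shell
# Fully allocated: dd copies real zero blocks, so space is reserved up front.
dd if=/dev/zero of=full.img bs=1M count=8 2>/dev/null
# Sparse: seek past the end writes no data blocks; the hole is filled
# lazily at write time, which is where the runtime overhead comes from.
dd if=/dev/zero of=sparse.img bs=1M seek=8 count=0 2>/dev/null
ls -l full.img sparse.img   # both report 8 MiB apparent size
du -k full.img sparse.img   # only full.img actually occupies 8 MiB on disk
rm -f full.img sparse.img
```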

|=- Red Hat, Engineering, Emerging Technologies, Boston.  +1 978 392 2496 -=|
|=-           Perl modules: http://search.cpan.org/~danberr/              -=|
|=-               Projects: http://freshmeat.net/~danielpb/               -=|
|=-  GnuPG: 7D3B9505   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505  -=| 

Xen-users mailing list


