
Re: [Xen-users] Questions on qcow, qcow2 versus LVM



On Thu, December 24, 2009 12:51 pm, Fajar A. Nugraha wrote:
> That ... depends.
> Generally, performance-wise, files will not be as good as block
> devices (partitions, LVM, etc.). That being said, if you correctly
> predict domU resource assignment so that swapping rarely occurs (it
> kills performance anyway) and swap is just a safety net against OOM,
> you probably won't notice the performance difference.
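
For concreteness, the two options being compared would look something
like this in a domU config file (vg0, domu1, and the image path are
illustrative names, not anything from this thread):

    # LV-backed swap: a block device, generally the faster option
    disk = [ 'phy:/dev/vg0/domu1-root,xvda1,w',
             'phy:/dev/vg0/domu1-swap,xvda2,w' ]

    # File-backed swap: easier to manage, slower under heavy swapping
    disk = [ 'phy:/dev/vg0/domu1-root,xvda1,w',
             'file:/var/lib/xen/images/domu1-swap.img,xvda2,w' ]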

Thanks.  I was hoping that the lvremove bug was only evident on the LVs
formatted as swap (my thinking was that perhaps the kernel was keeping
some internal pointer to it once 'mkswap' had been run on it) but it
occurs for me on all LVs.  I could have tolerated a file for swap since
for most domUs it wouldn't and shouldn't get much use.
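
For anyone hitting the same thing, the diagnostic sequence I go through
before resorting to a reboot is roughly this (a sketch only;
vg0/domu1-swap are example names):

    lvdisplay /dev/vg0/domu1-swap      # the "# open" count should be 0
    dmsetup info vg0-domu1--swap       # device-mapper's view of the LV
    lvchange -an /dev/vg0/domu1-swap   # deactivate before removal
    lvremove /dev/vg0/domu1-swap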

> That would work. Sun even sells an iSCSI SAN server based on ZFS,
> called the Sun Unified Storage System.
> Note, however, that in my tests, even on the same server, ZFS + zvol
> performance is lower than LVM's. Add to that iSCSI and network
> overhead. Whether or not it's acceptable depends on your
> requirements, so it's best to try it yourself.
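
(For reference, the zvol-over-COMSTAR setup being described is roughly
the following; tank/domu1 is an example dataset name and this is a
sketch, not a tested recipe:)

    zfs create -V 20G tank/domu1                 # create a 20 GB zvol
    svcadm enable stmf                           # enable the COMSTAR framework
    svcadm enable -r svc:/network/iscsi/target:default
    sbdadm create-lu /dev/zvol/rdsk/tank/domu1   # register the zvol as a logical unit
    stmfadm add-view <GUID-from-sbdadm>          # expose the LU to initiators
    itadm create-target                          # create an iSCSI target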

There is a serious issue with iSCSI performance on OpenSolaris which, if
I understand it properly, is down to the way that ZFS + COMSTAR must
commit every write (i.e. it has to be synchronous) for NFS and iSCSI
clients.  It's a serious hit, and the workarounds don't sound attractive
either.  You wouldn't see this if the dom0 were on OpenSolaris - I'm
testing that now (albeit with SXCE build 129 rather than OpenSolaris
proper).
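
The workaround people seem to recommend most is a dedicated log device
(slog) on fast storage, so the synchronous commits land there instead of
on the main pool - something like this, where tank and c4t1d0 are
example names:

    zpool add tank log c4t1d0   # attach a fast device as a dedicated ZIL
    zpool status tank           # confirm a "logs" vdev now appears

The other commonly suggested workaround, disabling the ZIL outright,
trades away integrity on power loss, which is part of why the options
don't sound attractive.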

> If you don't care about space saving (I seem to recall you mentioned
> snapshots in another thread), you can simply use the "disk" directly
> on the domU as swap. That is, you assign two disks to the domU, one
> of them for the filesystem, the other as swap. Don't label the swap
> disk, don't create partitions, just run mkswap on it directly. In my
> case I assign it directly as a partition (hda1/sda1/xvda1, xvda2,
> etc.), but you can assign it as a disk so it works better with GUI
> tools (virt-install/virt-manager).
>
> Another thing to note if you use LVM snapshots: if you somehow let a
> snapshot fill to 100%, it becomes invalid and you lose its contents.
> That's why I only use snapshots for temporary purposes. It might not
> be a problem if you can guarantee that usage always stays below 100%
> (perhaps with some monitoring/alert system), but IMHO it's not worth
> it.
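
The whole-disk swap setup described above would look roughly like this
(an untested sketch with example names):

    # in the domU config: root and swap as two separate "disks"
    disk = [ 'phy:/dev/vg0/domu1-root,xvda,w',
             'phy:/dev/vg0/domu1-swap,xvdb,w' ]

    # then inside the domU: no label, no partition table
    mkswap /dev/xvdb
    swapon /dev/xvdb
    echo '/dev/xvdb none swap sw 0 0' >> /etc/fstab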

I do care about space saving but would trade a little of it for
flexibility and performance.  The dom0 in question has eight disks on an
Adaptec 5805Z controller, split into a two-disk RAID1 volume for the OS
and a six-disk RAID6 volume for domU storage and VM images.  I don't
want to use individual disks or create scads of partitions for each
domU.  I use snapshots to take backups but not as a means of
thin-provisioning domUs; IMHO LVM snapshots aren't up to that job.
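
The backup procedure is the usual snapshot-and-tar pattern, roughly as
follows (a sketch only; sizes and names are examples) - checking lvs
during long backups is how I keep an eye on the 100%-full problem
mentioned above:

    lvcreate -s -L 2G -n domu1-snap /dev/vg0/domu1-root  # 2 GB COW space
    mkdir -p /mnt/snap && mount -o ro /dev/vg0/domu1-snap /mnt/snap
    lvs vg0                            # watch the Snap% column here
    tar -czf /backup/domu1-$(date +%Y%m%d).tar.gz -C /mnt/snap .
    umount /mnt/snap
    lvremove -f /dev/vg0/domu1-snap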

I don't use any of the GUI tools.  I have a bunch of shell scripts that
provision domUs by creating an LV, formatting it, mounting it, and
untarring a template image into it.  Could this be what is causing the
problem?  Should I switch to some other method?
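
In outline the provisioning scripts do something like this (a simplified
sketch, not the scripts themselves; names and sizes are examples):

    #!/bin/sh
    set -e
    NAME=$1
    lvcreate -L 8G -n ${NAME}-root vg0           # create the LV
    mkfs.ext3 /dev/vg0/${NAME}-root              # format it in dom0
    mkdir -p /mnt/${NAME}
    mount /dev/vg0/${NAME}-root /mnt/${NAME}     # mount it in dom0
    tar -xzf /srv/templates/base.tar.gz -C /mnt/${NAME}  # unpack template
    umount /mnt/${NAME}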

Thanks,

Matt.

