
Re: [Xen-users] Debian - DomU on ZFS

Thanks for your answer.

I have two scenarios; one is a local server for testing purposes using ZFS.

I think I will move forward with something like this:
disk = [ 'iscsi:2011-09.us.example:server,xvda,w' ]
Do you think this is more reasonable?
I think that for the local server this will be much better than using image files, do you agree?
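For context, a disk line with the iscsi: prefix only works if the dom0 has a matching block-iscsi hotplug script under /etc/xen/scripts/ (whether one ships, and its exact target syntax, depends on the distribution and Xen version). A minimal domU config built around that line might look like the sketch below; the domU name, memory size, kernel/initrd paths, and bridge name are placeholders, not taken from this thread:

```
name    = "debian-test"
memory  = 512
kernel  = "/boot/vmlinuz-2.6.32-5-xen-amd64"
ramdisk = "/boot/initrd.img-2.6.32-5-xen-amd64"
vif     = [ 'bridge=xenbr0' ]
# Requires a block-iscsi hotplug script on the dom0 that understands
# the iscsi: prefix and the target string below.
disk    = [ 'iscsi:2011-09.us.example:server,xvda,w' ]
root    = "/dev/xvda ro"
```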

Thank you very much.

On 09/14/2011 01:03 AM, Fajar A. Nugraha wrote:
On Wed, Sep 14, 2011 at 10:50 AM, Net Warrior <netwarrior863@xxxxxxxxx> wrote:
> Hi there.
>
> I just came up with the need to migrate my LVM setup to ZFS.
Is this zfs-fuse, zfsonlinux, or did you switch the dom0 to
opensolaris, or do you have a separate storage server with zfs?

> When using LVM I was able to reference my LV partitions as /dev/VG/LV,
> and in the configuration file I could reference the device as
> phy:/dev/VG/LV. Now with ZFS I've got my disk and the pools, as in
> mypool/storage1, 2, 3 and so on.

> Now my question is:
>
> I did not find any /dev/ reference to point to in the configuration file
> as in Solaris, like /dev/zvol, so, should I create an image file and then
zfs-fuse does not support zvols, and it's not recommended to store VM
images as files (trust me, I tried).

With zfsonlinux you WILL have /dev/zvol/mypool/storage1. That is,
assuming you either use the Ubuntu PPA or the latest source from git to
install zfsonlinux.
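Putting that together, a minimal sketch of backing a domU with a zvol under zfsonlinux (the pool/volume names follow the mypool/storage1 example above; the 10G size and config filename are made-up placeholders):

```sh
# Create a 10 GiB zvol; zfsonlinux exposes it as /dev/zvol/mypool/storage1
zfs create -V 10G mypool/storage1

# Then, in the domU config file (e.g. /etc/xen/debian-test.cfg), reference
# the zvol exactly as you would an LVM LV, via the phy: prefix:
#   disk = [ 'phy:/dev/zvol/mypool/storage1,xvda,w' ]
```

The point is that once the /dev/zvol path exists, nothing in the domU config changes compared to LVM except the device path.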

> Will that method downgrade my I/O performance, or will that be handled by
> the access method I use? iSCSI, SAN storage, disk type, HBA, network
> speed and so on.
Roughly speaking, on the same hardware, using a file image on zfs or a
zvol will make I/O performance drop by 50-75% compared to plain LVM.
Again, this is ROUGHLY based on my past tests. YMMV.

> Has anyone already installed the combination of DomU+ZFS+Debian?
I have a dev system with xen+zvol+RHEL, as well as another one with

Xen-users mailing list
