
Re: [Xen-users] Storage alternatives




IOW: the iSCSI initiator and the RAID (I guess it's RAID1) should be on
Dom0, and the DomU configs should refer to the resultant block devices.
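
As a rough sketch of what that would look like (the target names and device paths below are only placeholders): log in to both targets with open-iscsi on dom0, mirror the two imported disks with mdadm, and point the domU at the md device:

    # on dom0: log in to both iSCSI targets (open-iscsi)
    iscsiadm -m node -T iqn.2008-05.example:store1 -p 192.168.0.1 --login
    iscsiadm -m node -T iqn.2008-05.example:store2 -p 192.168.0.2 --login

    # mirror the two imported disks (actual device names will differ)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # the domU config then refers to the md device, not the raw LUNs
    disk = [ 'phy:/dev/md0,xvda,w' ]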

This is one solution we are discussing at the moment, but I think it would be a lot smarter to put the RAID functionality in a layer between the hard disks and the iSCSI targets, as advised by Nathan.
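
In other words, whichever devices the mirror is built from (local disks, nbd devices, etc.), the iSCSI target would export the already-redundant md device rather than the raw disks. Roughly like this on the storage side (just a sketch, assuming mdadm plus iSCSI Enterprise Target; the names are made up):

    # build the mirror below the target
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # /etc/ietd.conf: export the mirrored device as a single LUN
    Target iqn.2008-05.example:storage.lun0
        Lun 0 Path=/dev/md0,Type=blockio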

Agreed.  You could even potentially move the mirroring down to the
storage nodes (mirrored nbd/etc. devices) and HA the iSCSI target
service itself to reduce dom0's work, although that would depend on you
being comfortable with iSCSI moving around during a storage node
failure, which may be a risk factor.
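
For the "mirrored nbd/etc." variant, DRBD (not mentioned above, just one common way of doing it) would keep a copy of the data on each storage node and let something like heartbeat fail the iSCSI target and its service IP over between them. A minimal resource definition might look roughly like this (hostnames, disks and addresses are placeholders):

    resource r0 {
        protocol C;
        on storage1 {
            device    /dev/drbd0;
            disk      /dev/sda3;
            address   192.168.0.1:7788;
            meta-disk internal;
        }
        on storage2 {
            device    /dev/drbd0;
            disk      /dev/sda3;
            address   192.168.0.2:7788;
            meta-disk internal;
        }
    }

The primary node exports /dev/drbd0 as the iSCSI LUN; on a node failure the secondary is promoted and takes over the target's IP, which is exactly the "iSCSI moving around" that has to be acceptable.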

I think we would have to reboot each domU after a failure in that case, wouldn't we? The goal is to have domUs which are not affected by the failure of one of the storage servers.

If a storage node goes offline in your current configuration for any real length of time, then when it becomes available again, all of the nodes will begin to resync their arrays simultaneously. With a single domU, you'll just consume the vast majority of either your disk I/O or your network I/O. However, if you had a dozen guests, and they all start rebuilding their RAID1s from the same source SAN to the same destination SAN, through the same network link (in and out), at the same time, things are probably going to grind to an absolute halt.
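
To put rough, purely illustrative numbers on that: a dozen guests each resyncing a 20 GB RAID1 member means something like 240 GB crossing the same link, and even if the resync could saturate a gigabit link (~110 MB/s) that is well over half an hour during which normal guest I/O has to compete with it; in practice it takes much longer, because the twelve resyncs also compete with each other.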

This is of course also one reason why I want to change the current setup.

Abstract your disks and iSCSI exports, then use ZFS on two pools; this will
minimize the administration.
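
Presumably that would mean something like two mirrored pools built directly on the exported devices, with a volume per guest, e.g. (a sketch, device names made up):

    zpool create pool1 mirror c1t0d0 c2t0d0
    zpool create pool2 mirror c1t1d0 c2t1d0
    zfs create -V 10G pool1/vm1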

ZFS seems to be very nice, but sadly we are not using Solaris and don't want to use it via FUSE under Linux. Nevertheless, is anyone here using ZFS under Linux who can share his/her experiences?

Regards,

Jan


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

