
Re: [Xen-users] iscsi vs nfs for xen VMs


  • To: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: Freddie Cash <fjwcash@xxxxxxxxx>
  • Date: Wed, 26 Jan 2011 09:11:49 -0800
  • Delivery-date: Wed, 26 Jan 2011 09:12:51 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

On Wed, Jan 26, 2011 at 12:55 AM, Rudi Ahlers <Rudi@xxxxxxxxxxx> wrote:
> Well, that's the problem. We have (had, soon to be returned) a
> so-called "enterprise SAN" with dual everything, but it failed
> miserably during December and we ended up migrating everyone to a
> few older NAS devices just to get the clients' websites up again
> (VPS hosting). So, just because a SAN has dual PSUs, dual
> controllers, dual NICs, dual heads, etc., doesn't mean it's actually
> redundant.
>
> I'm thinking of setting up two independent SANs, or for that matter
> even NAS clusters, and then doing something like RAID1 (mirroring)
> on the client nodes across the iSCSI mounts. But I don't know if
> it's feasible or worth the effort. Has anyone done something like
> this?

Our plan is to use FreeBSD + HAST + ZFS + CARP to create a
redundant/fail-over storage setup, using NFS.  VM hosts will boot off
the network and mount / via NFS, start up libvirtd, pick up their VM
configs, and start the VMs.  The guest OSes will also boot off the
network using NFS, with separate ZFS filesystems for each guest.
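
For the curious, here's roughly what the storage side looks like; a
minimal sketch assuming FreeBSD 8.x-era hastd(8) and carp(4), with
made-up hostnames, addresses, secret, and device names:

    # /etc/hast.conf, identical on both storage nodes
    resource tank {
            on storage1 {
                    local /dev/da0
                    remote 192.168.0.2
            }
            on storage2 {
                    local /dev/da0
                    remote 192.168.0.1
            }
    }

    # shared service IP via CARP; the slave uses a higher advskew
    ifconfig carp0 create
    ifconfig carp0 vhid 1 pass s3cret advskew 0 192.168.0.10/24

    # the pool sits on the replicated HAST device
    zpool create tank /dev/hast/tank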

If the master storage node fails for any reason (network, power,
storage, etc.), CARP/HAST will fail over to the slave node, and
everything carries on as before.  NFS clients will notice the link is
down, retry until the slave node comes up (same IP/hostname), and
carry on.
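
The switch-over can be driven by devd(8) watching the CARP interface
and running a little script; a sketch (resource/pool names made up):

    # /etc/devd.conf snippet on both nodes
    notify 30 {
            match "system" "IFNET";
            match "subsystem" "carp0";
            match "type" "LINK_UP";
            action "/usr/local/sbin/carp-hast-switch master";
    };
    notify 30 {
            match "system" "IFNET";
            match "subsystem" "carp0";
            match "type" "LINK_DOWN";
            action "/usr/local/sbin/carp-hast-switch slave";
    };

    #!/bin/sh
    # /usr/local/sbin/carp-hast-switch
    case "$1" in
    master)
            hastctl role primary tank   # take over HAST replication
            zpool import -f tank        # bring the pool online
            /etc/rc.d/nfsd onestart     # resume serving NFS
            ;;
    slave)
            /etc/rc.d/nfsd onestop
            zpool export tank
            hastctl role secondary tank
            ;;
    esac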

The beauty of using NFS is that backups can be done from the storage
box without touching the VMs (snapshot, then back up from the
snapshot).  And provisioning a new server is as simple as cloning a
ZFS filesystem (takes a few seconds).  There's also no need to worry
about sizing the storage (an NFS export can grow or shrink without
the client caring), and even less to worry about thanks to ZFS's
pooled storage (if there are blocks available in the pool, any
filesystem can use them).  Add in dedupe and compression across the
entire pool ... and storage needs go way down.
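
For example (filesystem and host names are made up):

    # snapshot a guest and back it up, without touching the VM
    zfs snapshot tank/vm/guest1@backup
    zfs send tank/vm/guest1@backup | ssh backupbox zfs recv backup/guest1

    # provision a new guest by cloning a golden template
    zfs clone tank/vm/template@gold tank/vm/guest2

    # pool-wide compression and dedupe
    zfs set compression=on tank
    zfs set dedup=on tank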

It's also a lot easier to configure live migration with NFS than with iSCSI.
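
With the disk images on shared NFS, both hosts already see the same
storage, so it's a one-liner (guest/host names made up):

    xm migrate --live guest1 vmhost2
    # or, via libvirt:
    virsh migrate --live guest1 xen+ssh://vmhost2/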

To increase performance, just add a couple of fast SSDs (one for write
logging, one for read caching) and let ZFS handle it.
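
Something like this (device names are examples):

    zpool add tank log ada1     # SSD as dedicated intent log (ZIL)
    zpool add tank cache ada2   # SSD as L2ARC read cache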

Internally, the storage boxes have multiple CPUs, multiple cores,
multiple PSUs, multiple NICs bonded together, multiple drive
controllers, etc.  And then there are two of them (one physically
across town, connected via fibre).
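
The bonding is plain lagg(4); an rc.conf sketch with example
interface names:

    ifconfig_em0="up"
    ifconfig_em1="up"
    cloned_interfaces="lagg0"
    ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.0.5/24"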

VM hosts are basically throw-away appliances with gobs of CPU, RAM,
and NICs, and no local storage to worry about.  If one fails, just
swap in another and add it to the VM pool.

Can't get much more redundant than that.

If there's anything that we've missed, let me know.  :)

-- 
Freddie Cash
fjwcash@xxxxxxxxx

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

