
Re: [Xen-users] iscsi vs nfs for xen VMs


  • To: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: Marcin Kuk <marcin.kuk@xxxxxxxxx>
  • Date: Wed, 26 Jan 2011 23:59:40 +0100
  • Delivery-date: Wed, 26 Jan 2011 15:09:12 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

2011/1/26 Freddie Cash <fjwcash@xxxxxxxxx>:
> On Wed, Jan 26, 2011 at 12:55 AM, Rudi Ahlers <Rudi@xxxxxxxxxxx> wrote:
>> Well, that's the problem. We have (had, soon to be returned) a
>> so-called "enterprise SAN" with dual everything, but it failed
>> miserably during December and we ended up migrating everyone to a few
>> older NAS devices just to get the clients' websites up again (VPS
>> hosting). So just because a SAN has dual PSUs, dual controllers, dual
>> NICs, dual heads, etc. doesn't mean it's truly redundant.
>>
>> I'm thinking of setting up 2 independent SANs, or for that matter
>> even NAS clusters, and then doing something like RAID1 (mirror) on
>> the client nodes with the iSCSI mounts. But I don't know if it's
>> feasible or worth the effort. Has anyone done something like this?
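
One way to sketch that mirror-on-the-client idea (an illustration only,
not a tested recipe): let the host's software RAID mirror one LUN from
each independent SAN. Assuming a Linux dom0 and that the two iSCSI LUNs
show up as /dev/sdb and /dev/sdc (made-up device names):

# Rough sketch: RAID1 across two iSCSI LUNs, one from each SAN, so the
# guest keeps its disk if either SAN goes away.  Device names made up.
import subprocess

ISCSI_LUNS = ["/dev/sdb", "/dev/sdc"]  # one LUN from each SAN

subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--level=1",
     "--raid-devices=2"] + ISCSI_LUNS,
    check=True,
)
# /dev/md0 can then be handed to the guest as its disk, or put under LVM.
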
>
> Our plan is to use FreeBSD + HAST + ZFS + CARP to create a
> redundant/fail-over storage setup, using NFS.  VM hosts will boot off
> the network and mount / via NFS, start up libvirtd, pick up their VM
> configs, and start the VMs.  The guest OSes will also boot off the
> network using NFS, with separate ZFS filesystems for each guest.
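
That "pick up their VM configs, and start the VMs" step can be a tiny
script against libvirt. A rough sketch with the libvirt Python bindings
-- the "xen:///" URI and the start-everything-defined policy are
assumptions for illustration, not necessarily how Freddie runs it:

# Rough sketch: once libvirtd is up on a freshly booted VM host, start
# every domain that is defined but not yet running.
import libvirt

conn = libvirt.open("xen:///")
try:
    for name in conn.listDefinedDomains():  # defined, currently inactive
        conn.lookupByName(name).create()    # boot the guest
        print("started", name)
finally:
    conn.close()
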
>
> If the master storage node fails for any reason (network, power,
> storage, etc), CARP/HAST will fail-over to the slave node, and
> everything carries on as before.  NFS clients will notice the link is
> down, try again, try again, try again, notice the slave node is up
> (same IP/hostname), and carry on.
>
> The beauty of using NFS is that backups can be done from the storage
> box without touching the VMs (snapshot, backup from snapshot).  And
> provisioning a new server is as simple as cloning a ZFS filesystem
> (takes a few seconds).  There's also no need to worry about sizing the
> storage (NFS can grow/shrink without the client caring); and even less
> to worry about due to the pooled storage setup of ZFS (if there are
> blocks available in the pool, any filesystem can use them).  Add in
> dedupe and compression across the entire pool ... and storage needs go
> way down.
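
For the curious, that provisioning step really is just a snapshot plus a
clone, and compression/dedupe are per-dataset properties. A rough sketch
(dataset names are made up; the zfs subcommands themselves are standard):

# Rough sketch of "clone a ZFS filesystem to provision a new guest".
import subprocess

def zfs(*args):
    subprocess.run(["zfs"] + list(args), check=True)

template = "tank/guests/template"   # made-up dataset names
new_guest = "tank/guests/guest42"

zfs("snapshot", template + "@gold")           # instant, read-only copy
zfs("clone", template + "@gold", new_guest)   # instant, writable clone
zfs("set", "compression=on", new_guest)
zfs("set", "dedup=on", new_guest)             # dedupe wants plenty of RAM
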
>
> It's also a lot easier to configure live-migration using NFS than iSCSI.
>
> To increase performance, just add a couple of fast SSDs (one for write
> logging, one for read caching) and let ZFS handle it.
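
In zpool terms that is one dedicated log device (which helps the
synchronous writes NFS generates) and one cache device (L2ARC). A rough
sketch, with made-up pool and device names:

# Rough sketch: add an SSD as intent log and another as read cache.
import subprocess

POOL = "tank"  # made-up pool name
subprocess.run(["zpool", "add", POOL, "log", "/dev/ada2"], check=True)
subprocess.run(["zpool", "add", POOL, "cache", "/dev/ada3"], check=True)
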
>
> Internally, the storage boxes have multiple CPUs, multiple cores,
> multiple PSUs, multiple NICs bonded together, multiple drive
> controllers etc.  And then there's two of them (one physically across
> town connected via fibre).
>
> VM hosts are basically throw-away appliances with gobs of CPU, RAM,
> and NICs, and no local storage to worry about.  One fails, just swap
> it with another and add it to the VM pool.
>
> Can't get much more redundant than that.
>
> If there's anything that we've missed, let me know.  :)

Yes. NFS with the default AUTH_SYS authentication only passes the first
16 supplementary groups to the server. If a user belongs to more than 16
groups, you are heading for permission trouble.
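
A quick way to spot accounts that will hit this on a client is to count
their supplementary groups. A rough sketch (the 16-group cutoff is the
AUTH_SYS credential limit; Kerberos security flavours avoid it because
the server resolves groups itself):

# Rough sketch: flag users whose supplementary group count exceeds the
# 16 GIDs an AUTH_SYS credential can carry over the wire.
import grp
import pwd

LIMIT = 16  # supplementary GIDs in an AUTH_SYS credential

all_groups = grp.getgrall()
for user in pwd.getpwall():
    gids = {g.gr_gid for g in all_groups
            if user.pw_name in g.gr_mem and g.gr_gid != user.pw_gid}
    if len(gids) > LIMIT:
        print("%s: %d supplementary groups; NFS/AUTH_SYS only sends the "
              "first %d" % (user.pw_name, len(gids), LIMIT))
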

Regards,
Marcin Kuk

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

