
Re: [Xen-users] Xen and iSCSI



Hi,

On Tuesday 31 January 2006 15:06, Michael Mey wrote:
> On Tuesday 31 January 2006 14:41, Markus Hochholdinger wrote:
> > > On Tuesday 31 January 2006 12:25, Michael Mey wrote:
> > > > Has anybody built a system using gnbd that supports several dom0
> > > > systems and migrating domUs?
> > > I had several scenarios for testing purposes running domU migrations
> > > between dom0s. GNBD and iSCSI (iSCSI Enterprise Target and open-iscsi
> > > as initiator) both worked fine for me and my stress tests :)
> > Which one have you decided to use? And why?
> It was for getting to know which technology really works under high domU
> load for live-migration.
> The storage decision depends on what you want to do and what budget you
> have.

I want to consolidate servers with future upgrades in mind. So my points are:
 1. inexpensive
     state-of-the-art computers, but NOT high end (= expensive);
     that means SATA disks, 1 GBit/s Ethernet, ..
 2. failover
     because it is standard hardware there should be (almost) no single point
     of failure: at least two storage servers and two Xen hosts
 3. expandable
     easy to expand. When we use Xen with storage servers we can expand
     storage (hard disks) and CPU power (Xen hosts) separately

So I now think that GNBD should be enough for this purpose.
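
To make that a bit more concrete, here is a rough, untested sketch of how I
imagine a domU config on top of a GNBD device. The names (vm1,
/dev/gnbd/vm1-root, the export name) are only placeholders, and the gnbd
commands in the comments are from memory:

    # /etc/xen/vm1 -- xm domU config files are plain Python syntax
    # beforehand, roughly:
    #   on the storage server:  gnbd_serv
    #                           gnbd_export -e vm1-root -d /dev/sda5
    #   on every dom0:          gnbd_import -i <storage server>
    # so the same /dev/gnbd/vm1-root shows up on all dom0s, which is
    # what should make live migration between them possible
    name   = "vm1"
    memory = 256
    kernel = "/boot/vmlinuz-2.6-xenU"
    disk   = ['phy:/dev/gnbd/vm1-root,sda1,w']
    root   = "/dev/sda1 ro"
    vif    = ['']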


> The first choice, disregarding the costs, would be a SAN. FC works fine,
> but SAN boxes with iSCSI are cheaper, as is the required equipment (you
> don't need expensive FC HBAs, FC switches etc.).

OK, a SAN with FC is faster (but really that much faster?) and also really expensive!


> If you want to implement a solution on your own, go for GNBD because it's
> simple, fast and reliable and can run on common hardware. You really

Yeah, that's the point: "common hardware". I don't want to invest in uncommon
hardware which will be unsupported tomorrow.


> wouldn't want to mess around with NFS. OK, a VBD as nfsroot is stable under
> high I/O load, in contrast to a file image on an NFS server, but there are
> security and performance issues.

Well, making NFS failsafe is not as easy as setting up a RAID1.
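What I have in mind instead is one export per domU from each of the two
storage servers, mirrored with md RAID1 inside the domU. Again only an
untested sketch with placeholder device names, and it assumes the domU kernel
can assemble the RAID at boot:

    # domU config: one gnbd device from each storage server
    disk = ['phy:/dev/gnbd/storage1-vm1,sda1,w',
            'phy:/dev/gnbd/storage2-vm1,sdb1,w']
    root = "/dev/md0 ro"
    # inside the domU, something like
    #   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # mirrors the two, so one storage server may fail without taking
    # the domU down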


> If you don't want a third box for live-migration and load-balancing, the
> next version of DRBD could be interesting (with multiple masters), or you
> could customize Xen for use with the current stable DRBD release.

I don't like the master/slave setup in DRBD, so the next version "could" be
interesting. But I also like to use proven solutions.
I will set up a live environment with at least 4 boxes, and for that I'd like
to use proven techniques. I will also set up a separate development and
working environment with two storage servers and about four Xen hosts, where
I can experiment a little. That way I can roll tested things into the live
environment.


> There's also an interesting commercial product called PeerFS, which works
> nicely for live-migration and isn't that expensive.

Well, I don't like commercial products ;-) I want to be able to fix things
myself if they don't work as I'd expect.


> So, it's up to your equipment, needs and budget what you want to implement
> :)

The need is all and the budget is small ;-)
All jokes aside, I will try to build a high-end solution with cheap hardware,
just like RAID does, but for servers. OK, the hardware will not be that cheap,
but comparing SCSI vs. SATA you can save a lot of money.


-- 
greetings

eMHa


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

