
Re: [Xen-users] blade servers


  • To: John Madden <jmadden@xxxxxxxxxxx>
  • From: Nico Kadel-Garcia <nkadel@xxxxxxxxx>
  • Date: Tue, 31 Jul 2007 23:27:29 +0100
  • Cc: Tomoki Taniguchi <tomoki.taniguchi@xxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 31 Jul 2007 15:25:12 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

John Madden wrote:
> On Wed, 2007-08-01 at 03:44 +0900, Tomoki Taniguchi wrote:
>> so could xen be installed on such hardware?
>
> Yes. The OS can't tell the difference; it looks like a normal machine,
> which is somewhat the point.
>
>> where are the network interfaces located?
>> on the blades or the enclosure?
>
> This is dependent on the hardware vendor, but to the machine it won't
> matter (see above).  IBM xSeries BladeCenters (I can personally very
> strongly recommend these) actually have Cisco switch modules that plug
> into the back and look like normal gig switches.  Internally, you're
> given two ethN devices on the Linux host -- the ethernet cards are on
> the blades themselves.
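For what it's worth, here is roughly what those two ethN devices look like from the Xen side -- a minimal, hypothetical Xen 3.x domU config (Python syntax, as read by "xm create"), assuming dom0's network-bridge script has been run once per physical interface so that eth0 and eth1 each back their own bridge. The bridge names, kernel paths, and volume names below are illustrative only, not taken from any of the setups discussed here:

    # Hypothetical Xen 3.x domU config: one virtual NIC on each physical uplink,
    # e.g. to keep public traffic and storage traffic on separate switch modules.
    name    = "blade-guest1"
    memory  = 1024
    kernel  = "/boot/vmlinuz-2.6-xen"
    ramdisk = "/boot/initrd-2.6-xen.img"
    vif     = [ 'bridge=xenbr0', 'bridge=xenbr1' ]    # illustrative bridge names
    disk    = [ 'phy:/dev/VolGroup00/guest1,xvda,w' ] # illustrative local volume
    root    = "/dev/xvda ro"

Saved as /etc/xen/blade-guest1, "xm create blade-guest1" would bring the guest up with one vif on each of the enclosure's switch modules.
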
From experience, they're typically on a network switch that's part of the enclosure itself. High-end blades, such as the very sweet IBM BladeCenters, have a built-in management console that provides remote KVM access to individual blades. (It's actually VNC-based, which made me laugh like hell when I realized it, because I wrote one of the early SunOS ports of VNC and was already aware of some of its limitations.)

You want to think, hard, about whether the switching configuration on them is what you want if you're doing high-performance computing. The internal switches aren't normally very sophisticated or particularly high-bandwidth, and they can be saturated by heavy traffic, such as running lots and lots of NFS-based operating systems. Xen guests with iSCSI- or NFS-based OS images could compound the problem, since these systems typically don't have much local disk storage, and what they do have is typically on 2.5" drives.
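To make that concrete, here is a minimal, hypothetical sketch of a Xen 3.x domU config with an NFS root -- the kind of guest that puts all of its OS I/O onto the enclosure's internal switch. The server address, export path, and names are illustrative, not from any real deployment:

    # Hypothetical Xen 3.x domU config ("xm create" style, Python syntax).
    # With no local disk, every block of root-filesystem I/O goes over NFS,
    # i.e. through the blade enclosure's internal switch fabric.
    name    = "blade-guest-nfs"
    memory  = 1024
    kernel  = "/boot/vmlinuz-2.6-xen"
    ramdisk = "/boot/initrd-2.6-xen.img"
    vif     = [ 'bridge=xenbr0' ]                   # illustrative bridge name
    nfs_server = "192.168.1.10"                     # illustrative NFS server
    nfs_root   = "/export/guests/blade-guest-nfs"   # illustrative export path
    root       = "/dev/nfs"
    extra      = "ip=dhcp"

Multiply that by a few dozen guests per chassis and the internal switch modules get busy in a hurry.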

More money buys you more features and better testing of the hardware before it arrives on your doorstep. Less money buys you cheaper blades, inferior "motherboards of the week" that fell off some employee's uncle's truck in Taiwan, inferior and out-of-date add-on components, untested serial console features, amazingly stupid BIOS defaults, no remote KVM, untested RAM, fans that work better as paperweights, ducting made out of what acts like tin foil, etc. As with all systems, you can save a lot of money up front and wind up seriously paying for it down the road. (I've seen this happen, up close and personal.)

I've been involved in designing blades and Beowulf clusters, and I wish I'd had Xen for blade use back when I was doing that: it could have saved me a lot of kernel-upgrade pain.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

