[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Xen-users] Fibre Channel SAN + HBA + Xen = how ?

  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: Nathan Flynn <mlist@xxxxxxxx>
  • Date: Fri, 26 Mar 2010 14:42:47 +0000
  • Delivery-date: Fri, 26 Mar 2010 07:44:08 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>


I have been looking around all day and lots of things are being thrown at me, 
such as NPIV and vscsi.

I have a Hitachi DF600F SAN which connects via two switch fabrics to the dual-port 
QLogic HBA on each server.

I currently have 4 Dell 2950s which are under-utilised. I want to buy two new 
R710s, consolidate these 4 servers onto a single physical server running 4 
VMs, and spawn the VMs on the other server in the event of a failure.

At present each of these servers is presented with 2 LUNs (one for mail and 
one for website data).

I am going to P2V the 4 servers (CentOS 5.4) to some medium (probably put each 
on its own LUN on the SAN).

So the situation is I have the following:

2x switch fabrics (presenting 4 paths, 2 on each fabric) -> HBA -> physical server

Per VM (4 VMs needed):
1x LUN (LVM2-partitioned) for website data
1x LUN (LVM2-partitioned) for mail data
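For what it's worth, if dom0 were to run dm-multipath, a multipath.conf fragment 
aliasing the per-VM LUNs might look roughly like this (the WWIDs and aliases are 
placeholders, not from my actual setup):

```
# /etc/multipath.conf fragment on dom0 -- purely illustrative;
# the WWIDs below are placeholders, substitute the real LUN WWIDs
multipaths {
    multipath {
        wwid   360060e80XXXXXXXXXXXXXXXXXXXXXXXX   # vm1 website-data LUN (placeholder)
        alias  vm1-web
    }
    multipath {
        wwid   360060e80YYYYYYYYYYYYYYYYYYYYYYYY   # vm1 mail LUN (placeholder)
        alias  vm1-mail
    }
}
```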

Can anybody tell me the best way to pass these to the CentOS VMs? The options 
I can see are:

1. Pass each block device path into the VM (so 4 paths x 3 LUNs = 12 devices 
per guest) and let the VM do the multipathing - any idea on performance?
2. Let dom0 do the multipathing and pass the combined mpath device through to 
the domU.
3. Pass the individual LVs from the LUNs through to the domU.
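To make options 2 and 3 concrete, the domU config fragments might look something 
like the below (the device names, multipath aliases, and VG/LV names are made up 
for illustration):

```
# Option 2 (illustrative): dom0 runs multipathd and the combined
# device-mapper devices are handed to the guest as whole disks.
# 'vm1-web' / 'vm1-mail' are hypothetical multipath aliases.
disk = [ 'phy:/dev/mapper/vm1-web,xvdb,w',
         'phy:/dev/mapper/vm1-mail,xvdc,w' ]

# Option 3 (illustrative): dom0 does multipathing AND carves LVs
# out of the LUNs, passing individual LVs through.
# 'vg_vm1' / 'web' / 'mail' are hypothetical VG/LV names.
disk = [ 'phy:/dev/vg_vm1/web,xvdb,w',
         'phy:/dev/vg_vm1/mail,xvdc,w' ]
```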

IO performance is key to doing this - which is the whole reason for the FC SAN.