
[Xen-users] SR-IOV vs nPAR


(I already asked this on the Intel wired forum:
http://communities.intel.com/message/185413#185413 but some of these questions
are Xen-specific, which might be why there's no response ;-)

I have doubts about whether SR-IOV will provide the features I need compared to nPAR.
SR-IOV seems more flexible, but I'm not sure whether its performance and feature
set match nPAR's.

The servers will be Dell M620s with either Intel X520 or Broadcom 57810 NICs,
running Linux kernel 3.8, Xen 4.2, and Open vSwitch 1.7.

The tutorials I've found don't mention whether it's possible to run VFs in the
dom0 for iSCSI and the vswitch, or whether it's necessary (for whatever reason)
to run those networks on the physical functions.

I want to do something like this (if I understand SR-IOV correctly):
eth0: PF - not used
eth1: VF - management (active/passive bond)
eth2: VF - iSCSI
eth3: VF - openvswitch (LACP bond)
ethX: VF - passthrough to domU
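For reference, here is a rough (untested) sketch of how I imagine setting this up. I'm assuming the ixgbe driver for the X520; the interface names and PCI addresses are just placeholders:

```shell
# Create 4 VFs on the PF (the sysfs knob is new in kernel 3.8; older
# kernels use the max_vfs module parameter instead):
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# The VFs then show up as extra netdevs in dom0 and, as I understand it,
# could be bonded or added to openvswitch like any other interface.

# For passthrough to a domU, make one VF assignable via xl:
xl pci-assignable-add 0000:04:10.0

# and reference it in the domU config:
#   pci = [ '0000:04:10.0' ]
```

Is that roughly the right approach, or am I missing something?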

I'm quite sure this works with nPAR, but I'm not sure about SR-IOV:

- QoS: SR-IOV seems to allow only rate-limiting of interfaces, while nPAR
guarantees a minimum amount of bandwidth per interface. Is it correct that
SR-IOV cannot provide bandwidth guarantees?
- LACP apparently has problems over VFs. Does anybody have experience with this?
- Can I use a VF as an iSCSI HBA from dom0?
- Is anybody using DCB (data center bridging) with iSCSI in an SR-IOV environment?
- Is there any practical difference between PFs and VFs (speed, latency, etc.)?
Any reason to use, or not to use, the PF?
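On the QoS point: the only knob I've found for SR-IOV is the per-VF transmit rate limit, set on the PF via iproute2 (again, interface names are placeholders):

```shell
# Cap transmit of VF 0 on eth0 at 1000 Mbit/s; a rate of 0 removes the limit.
ip link set eth0 vf 0 rate 1000
```

That is a maximum cap, not a minimum guarantee, which is exactly my concern compared to nPAR.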

Hope someone is able to help with some (or even all) of the above :-)



Xen-users mailing list


