
RE: [Xen-users] 10 GbE PCI passthrough


  • To: Felix Reinel <freinel@xxxxxxx>, Xen Users <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: Matej Zary <matej.zary@xxxxxxxxx>
  • Date: Tue, 25 Jan 2011 23:10:18 +0100
  • Accept-language: en-US
  • Delivery-date: Tue, 25 Jan 2011 14:12:05 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: Acu8rXY3gu5ZE9fIQiKYcqpWWymmrwALC/MR
  • Thread-topic: [Xen-users] 10 GbE PCI passthrough

Hi there,

as far as I know, you can pass through a whole physical device (whether
single- or multi-function) to only one VM. For NICs this problem is solved
by SR-IOV functionality, where multiple VMs can directly use one NIC via
virtual functions (each VM gets its own passed-through virtual function
with its own queues etc.). Your NIC supports SR-IOV as far as I know. You
need SR-IOV support in Xen and in the Dom0 and DomU kernels. SR-IOV
functionality is present in Xen 4. I am not sure about your Dom0 and DomU
kernels - Intel provides drivers for their SR-IOV NICs for Red Hat 5.3, so
one would assume QLogic (NetXen) does something similar.
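Roughly, VF passthrough would look like the sketch below (just an
illustration - I am using Intel's ixgbe driver and its max_vfs parameter
because I don't know the NetXen equivalent, and the VF address 82:10.0 is
made up):

# Dom0: load the physical function driver and create virtual functions
# (the parameter name is driver-specific; Intel's ixgbe uses max_vfs)
modprobe ixgbe max_vfs=4

# Dom0: hide one VF from the host via pciback so it can be assigned
echo -n "0000:82:10.0" > /sys/bus/pci/drivers/pciback/new_slot
echo -n "0000:82:10.0" > /sys/bus/pci/drivers/pciback/bind

# DomU config file: pass the VF (not the physical function) to the guest
pci = [ '82:10.0' ]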

Please pardon me if I got something wrong - I've never owned such HW, I
just went through some Xen Summit presentations once.  :)

Regards 

Matej

________________________________________
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
[xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Felix Reinel 
[freinel@xxxxxxx]
Sent: 25 January 2011 17:30
To: Xen Users
Subject: [Xen-users] 10 GbE PCI passthrough

Dear all,

we are trying to get our virtual machines to perform well with our 10
GbE cards (lspci reports: NetXen Incorporated NX3031 Multifunction
1/10-Gigabit Server Adapter (rev 42)).

While running some benchmarks we noticed that, especially on the
receiving side, the virtualization layer imposes a significant
performance penalty, so we went for PCI passthrough to let the VMs talk
directly to the network card.
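(For reference, the receive path can be exercised e.g. with netperf: run
netserver inside the VM and push a TCP stream at it from another machine;
"vm-ip" below is a placeholder for the VM's address.)

netperf -H vm-ip -t TCP_STREAM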

The PCI device was hidden from the host system via pciback and then
configured in a VM like this:

# PCI passthrough for network card
pci = [ '82:00.0' ]

in the config file under /etc/xen.
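(For reference, one common way to do the hiding is the pciback kernel
boot parameter, assuming pciback is built into the Dom0 kernel; if it is
a module, the same list goes on the module options line instead.)

pciback.hide=(0000:82:00.0)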

The host system is SLES11 SP1, running Xen 4, 64-bit. The VMs on it are
all running Red Hat 5.3 (paravirtualized), 32-bit.

The limitation we noticed is that, done this way, the passthrough can
only be configured for a single VM; booting a second VM with an
identical configuration fails because the device is already in use.
What we would like in the end is to share the PCI device across all
VMs.

Is this a technical limitation, or do you have some advice? If it is
not possible, do you have any other tuning tips?

Best regards,
Felix

--
-----------------------------------------------------------------------
Felix Reinel               |  Web & Systems Administrator
Office: D132 ESO           |
Tel.:   +49-89-32006-171   |  Address:
Fax.:   +49-89-32006-677   |    European Southern Observatory
Mobile: +49-160-2956856    |    Karl-Schwarzschild-Strasse 2
E-Mail: freinel@xxxxxxx    |    D-85748 Garching bei Muenchen, Germany
-----------------------------------------------------------------------
                   http://www.eso.org



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

