
Re: [Xen-users] Xen 4.0.1 PCI passthrough help



Hi,

I can see the device fine (it is hidden from dom0) and it shows up in the
domU fine, but the dom0 freezes as soon as the device is used, for
instance when running:

    ifconfig eth1 192.168.1.1

It seems there is some sort of conflict causing this. I want to try
passing BOTH onboard NICs instead of just one, in case the conflict lies
there. Any other pointers on what might be causing this issue would
be appreciated.
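
For reference, the sort of domU config I mean is roughly the following
(the BDFs below are placeholders, not my actual onboard addresses):

    pci = [ '0000:00:19.0', '0000:00:1a.0' ]

i.e. both onboard ports assigned to the same guest, so neither port of
the pair is left behind in dom0.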

Regards,
Mark

On Mon, Sep 27, 2010 at 09:21:28PM -0400, Jignesh Patel wrote:
> Hello Mark,
> 
> I am not an expert with this, but I found out the following.
> 
> Can you please run the following command and check whether the device is
> available to the PV guest:
> 
> xm pci-list-assignable-devices
> 
> You should be able to see all pciback devices.
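> 
> For example, roughly (just a sketch -- the BDF is only an example, and
> the exact driver name/path depends on your dom0 kernel, pciback vs.
> xen-pciback):
> 
>     # detach the device from its dom0 driver and hand it to pciback
>     echo 0000:0a:00.0 > /sys/bus/pci/devices/0000:0a:00.0/driver/unbind
>     echo 0000:0a:00.0 > /sys/bus/pci/drivers/pciback/new_slot
>     echo 0000:0a:00.0 > /sys/bus/pci/drivers/pciback/bind
> 
>     # it should then show up in:
>     xm pci-list-assignable-devices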
> 
> Also, I noticed that I was not able to get PCI passthrough working
> until I disabled the FireWire device in the BIOS.
> 
> If you get the NIC working, let me know.
> 
> Regards,
> 
> 
> Jignesh
> 
> 
> On Sat, Sep 25, 2010 at 10:25 AM, Mark Adams <mark@xxxxxxxxxxxxxxxxxx> wrote:
> 
> > So much for saying the e1000 works -- I've now tried to test again and
> > it seems to freeze my dom0 entirely. No logs are written and nothing is
> > output to the screen. A manual reboot is the only fix.
> >
> > The freeze only occurs when a machine with a PCI passthrough device is
> > active. An example is trying to add an IP to the passed-through port.
> >
> > Does anyone have this running stable?
> >
> > Regards,
> > Mark
> >
> > On Fri, Aug 06, 2010 at 10:39:47AM +0100, Mark Adams wrote:
> > > Right, so it's definitely something to do with this card (Intel Quad Port
> > > ET, igb driver), because I've successfully passed through the onboard
> > > NICs, which use e1000e.
> > >
> > > Anyone got this card working with passthrough?
> > >
> > > Regards,
> > > Mark
> > >
> > > On Thu, Aug 05, 2010 at 05:31:16PM +0100, Mark Adams wrote:
> > > > If I pass through another NIC on the same card (Intel Corporation
> > > > 82576), I get a different error.
> > > >
> > > > [  586.990658] pcifront pci-0: Rescanning PCI Frontend Bus 0000:00
> > > > [  595.896913] Intel(R) Gigabit Ethernet Network Driver - version 1.3.16-k2
> > > > [  595.896922] Copyright (c) 2007-2009 Intel Corporation.
> > > > [  595.897103] igb 0000:00:00.1: device not available because of BAR 0 [0xfaee0000-0xfaefffff] collisions
> > > > [  595.897111] igb: probe of 0000:00:00.1 failed with error -22
> > > >
> > > > On Thu, Aug 05, 2010 at 05:25:13PM +0100, Mark Adams wrote:
> > > > > OK, so the hang on boot was caused by not putting the four zeros at the
> > > > > start, so the pci line had to read [ 'XXXX:XX:XX.X' ]
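> > > > >
> > > > > (For this card, that means something like the following in the domU
> > > > > config -- the BDF is the one from my logs:
> > > > >
> > > > >     pci = [ '0000:0a:00.0' ]
> > > > >
> > > > > i.e. with the full, domain-prefixed address.)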
> > > > >
> > > > > Still can't access the device in the domU though:
> > > > >
> > > > > Aug  5 11:58:29 ha_deb_testing kernel: [   68.140064] Intel(R) Gigabit Ethernet Network Driver - version 1.0.8-k2
> > > > > Aug  5 11:58:29 ha_deb_testing kernel: [   68.140064] Copyright (c) 2008 Intel Corporation.
> > > > > Aug  5 11:58:29 ha_deb_testing kernel: [   68.140336] igb 0000:00:00.0: enabling device (0000 -> 0003)
> > > > > Aug  5 11:58:29 ha_deb_testing kernel: [   68.149635] igb: probe of 0000:00:00.0 failed with error -2
> > > > >
> > > > > On Thu, Aug 05, 2010 at 04:06:07PM +0100, Mark Adams wrote:
> > > > > > OK, so I've been working on this. The domU won't even boot at all
> > > > > > with the pci part included in the config; it just sits at 'p' on 0.0,
> > > > > > and the xen log streams the following endlessly until I restart xend.
> > > > > >
> > > > > > <--snip
> > > > > > [2010-08-05 16:02:24 3593] DEBUG (XendDomainInfo:2790) _freeDMAmemory (5) Need 57064KiB DMA memory. Asking for 4142836KiB
> > > > > > [2010-08-05 16:02:24 3593] DEBUG (XendDomainInfo:2790) _freeDMAmemory (4) Need 57064KiB DMA memory. Asking for 4144884KiB
> > > > > > [2010-08-05 16:02:24 3593] DEBUG (XendDomainInfo:2790) _freeDMAmemory (3) Need 57064KiB DMA memory. Asking for 4146932KiB
> > > > > > [2010-08-05 16:02:24 3593] DEBUG (XendDomainInfo:2790) _freeDMAmemory (2) Need 57064KiB DMA memory. Asking for 4148980KiB
> > > > > > [2010-08-05 16:02:24 3593] DEBUG (XendDomainInfo:2790) _freeDMAmemory (1) Need 57064KiB DMA memory. Asking for 4151028KiB
> > > > > > [2010-08-05 16:02:24 3593] WARNING (XendDomainInfo:2806) We tried our best to balloon down DMA memory to accomodate your PV guest. We need 57064KiB extra memory.
> > > > > > <--snip
> > > > > >
> > > > > > Then the following is shown (after some info about disks etc.):
> > > > > >
> > > > > > [2010-08-05 16:02:24 3593] INFO (XendDomainInfo:2367) createDevice: pci : {'devs': [{'slot': '0x00', 'domain': '0x0000', 'key': '0a:00.0', 'bus': '0x0a', 'vdevfn': '0x100', 'func': '0x0', 'uuid': 'af2849a8-fd68-a178-b3dd-dd1a7f43147c'}], 'uuid': '432a09e2-bb8d-0c78-821f-df8310261b66'}
> > > > > > [2010-08-05 16:02:24 3593] INFO (pciquirk:92) NO quirks found for PCI device [8086:10c9:8086:0000]
> > > > > > [2010-08-05 16:02:24 3593] DEBUG (pciquirk:132) Permissive mode enabled for PCI device [8086:10c9:8086:0000]
> > > > > > [2010-08-05 16:02:24 3593] DEBUG (pciquirk:141) Unconstrained device: 0000:0a:00.0
> > > > > > [2010-08-05 16:02:24 3593] ERROR (XendDomainInfo:2904) XendDomainInfo.initDomain: exception occurred
> > > > > > Traceback (most recent call last):
> > > > > >   File "/usr/lib/xen-4.0/lib/python/xen/xend/XendDomainInfo.py", line 2896, in _initDomain
> > > > > >     self._createDevices()
> > > > > >   File "/usr/lib/xen-4.0/lib/python/xen/xend/XendDomainInfo.py", line 2374, in _createDevices
> > > > > >     devid = self._createDevice(devclass, config)
> > > > > >   File "/usr/lib/xen-4.0/lib/python/xen/xend/XendDomainInfo.py", line 2336, in _createDevice
> > > > > >     return self.getDeviceController(deviceClass).createDevice(devConfig)
> > > > > >   File "/usr/lib/xen-4.0/lib/python/xen/xend/server/DevController.py", line 67, in createDevice
> > > > > >     self.setupDevice(config)
> > > > > >   File "/usr/lib/xen-4.0/lib/python/xen/xend/server/pciif.py", line 453, in setupDevice
> > > > > >     self.setupOneDevice(d)
> > > > > >   File "/usr/lib/xen-4.0/lib/python/xen/xend/server/pciif.py", line 316, in setupOneDevice
> > > > > >     raise VmError("Failed to assign device to IOMMU (%s)" % pci_str)
> > > > > > VmError: Failed to assign device to IOMMU (0000:0a:00.0)
> > > > > > [2010-08-05 16:02:24 3593] ERROR (XendDomainInfo:483) VM start failed
> > > > > > Traceback (most recent call last):
> > > > > >   File "/usr/lib/xen-4.0/lib/python/xen/xend/XendDomainInfo.py", line 469, in start
> > > > > >     XendTask.log_progress(31, 60, self._initDomain)
> > > > > >   File "/usr/lib/xen-4.0/lib/python/xen/xend/XendTask.py", line 209, in log_progress
> > > > > >     retval = func(*args, **kwds)
> > > > > >   File "/usr/lib/xen-4.0/lib/python/xen/xend/XendDomainInfo.py", line 2907, in _initDomain
> > > > > >     raise exn
> > > > > > VmError: Failed to assign device to IOMMU (0000:0a:00.0)
> > > > > >
> > > > > > Anyone have any idea where I'm going wrong here? I have iommu=soft
> > > > > > and swiotlb=force in my extras.
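> > > > > >
> > > > > > (Those live on the guest kernel command line via the config's extra
> > > > > > line, i.e. roughly:
> > > > > >
> > > > > >     extra = "iommu=soft swiotlb=force"
> > > > > >
> > > > > > in case that placement matters.)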
> > > > > >
> > > > > > Regards,
> > > > > > Mark
> > > > > >
> > > > > > On Thu, Aug 05, 2010 at 01:33:54PM +0100, Mark Adams wrote:
> > > > > > > Hi all,
> > > > > > >
> > > > > > > Debian squeeze 2.6.32-5 with Xen 4.0
> > > > > > >
> > > > > > > I'm having some issues getting PCI passthrough to work for a quad-port
> > > > > > > Gigabit NIC card. I've added the devices and they are not visible in
> > > > > > > dom0; however, when the domU boots up, all I get is the following:
> > > > > > >
> > > > > > > [    0.644428] Intel(R) Gigabit Ethernet Network Driver - version 1.0.8-k2
> > > > > > > [    0.644433] Copyright (c) 2008 Intel Corporation.
> > > > > > > [    0.644838] PCI: Setting latency timer of device 0000:00:00.0 to 64
> > > > > > > [    0.655956] igb: probe of 0000:00:00.0 failed with error -2
> > > > > > >
> > > > > > > lspci shows the following in the domU:
> > > > > > >
> > > > > > > 00:00.0 Ethernet controller: Intel Corporation Device 10c9 (rev 01)
> > > > > > >         Subsystem: Intel Corporation Device 0000
> > > > > > >         Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
> > > > > > >         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
> > > > > > >         Interrupt: pin A routed to IRQ 26
> > > > > > >         Region 0: Memory at fae60000 (32-bit, non-prefetchable) [size=128K]
> > > > > > >         Region 2: I/O ports at d880 [size=32]
> > > > > > >         Region 3: Memory at fae5c000 (32-bit, non-prefetchable) [size=16K]
> > > > > > >         Capabilities: [40] Power Management version 3
> > > > > > >                 Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
> > > > > > >                 Status: D0 PME-Enable- DSel=0 DScale=1 PME-
> > > > > > >         Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ Queue=0/0 Enable-
> > > > > > >                 Address: 0000000000000000  Data: 0000
> > > > > > >                 Masking: 00000000  Pending: 00000000
> > > > > > >         Capabilities: [70] MSI-X: Enable- Mask- TabSize=10
> > > > > > >                 Vector table: BAR=3 offset=00000000
> > > > > > >                 PBA: BAR=3 offset=00002000
> > > > > > >         Capabilities: [a0] Express (v2) Endpoint, MSI 00
> > > > > > >                 DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <4us, L1 <64us
> > > > > > >                         ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
> > > > > > >                 DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
> > > > > > >                         RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
> > > > > > >                         MaxPayload 128 bytes, MaxReadReq 512 bytes
> > > > > > >                 DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
> > > > > > >                 LnkCap: Port #4, Speed 2.5GT/s, Width x4, ASPM L0s L1, Latency L0 <2us, L1 <64us
> > > > > > >                         ClockPM- Suprise- LLActRep- BwNot-
> > > > > > >                 LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
> > > > > > >                         ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
> > > > > > >                 LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
> > > > > > >         Capabilities: [100] Advanced Error Reporting <?>
> > > > > > >         Capabilities: [150] #0e
> > > > > > >         Capabilities: [160] #10
> > > > > > >         Kernel modules: igb
> > > > > > >
> > > > > > >
> > > > > > > Does anyone have any ideas what I might be doing wrong or how I
> > > > > > > can debug this further?
> > > > > > >
> > > > > > > Best Regards,
> > > > > > > Mark
> > > > > > >


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

