
Re: [Xen-devel] Nested PCI bridge support of VT-d


  • To: "Han, Weidong" <weidong.han@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: "Jimmy Jin" <jimmyjin.maillist@xxxxxxxxx>
  • Date: Sat, 31 May 2008 22:15:06 +0800
  • Delivery-date: Sat, 31 May 2008 07:15:30 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hello, Randy,

After replacing the workstation's hard drive with a SATA one, so that
I can avoid the LSI SAS controller behind the same PCIe-to-PCI
bridge, it now works.
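
For anyone who hits the same limitation: a quick way to see which
devices would have to be assigned together is to walk sysfs. Each
entry under /sys/bus/pci/devices resolves to a path that lists every
bridge above the device, so functions behind the same PCIe-to-PCI
bridge end up under the same parent chain. The script below is only
an illustrative sketch (the helper name and grouping logic are mine,
not an existing tool):

import os
from collections import defaultdict

SYSFS = "/sys/bus/pci/devices"

def bridge_chain(bdf):
    # A device symlink resolves to something like
    # /sys/devices/pci0000:00/0000:00:09.0/0000:10:00.3/0000:11:06.0;
    # the middle components are the bridges above the device.
    real = os.path.realpath(os.path.join(SYSFS, bdf))
    parts = [p for p in real.split("/") if ":" in p and "." in p]
    return tuple(parts[:-1])          # drop the device itself

groups = defaultdict(list)
for bdf in sorted(os.listdir(SYSFS)):
    groups[bridge_chain(bdf)].append(bdf)

for chain, devs in sorted(groups.items()):
    if chain and len(devs) > 1:       # devices sharing the same bridges
        print(" -> ".join(chain), ":", ", ".join(devs))

On the xw8600 topology below, the LSI SAS controller (0000:11:06.0)
and the NC100 NIC (0000:11:09.0) would show up in the same group.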

Thanks,
Jimmy Jin

On Thu, May 29, 2008 at 5:18 PM, Han, Weidong <weidong.han@xxxxxxxxx> wrote:
> Jimmy Jin wrote:
>> Hello, Randy,
>>
>> Thank you very much for your reply.
>>
>> I'd just like to double-check my understanding.
>>
>> In my case the lspci -t -v output is like the following:
>>
>> -[0000:00]-+-00.0  Intel Corporation Memory Controller Hub
>>            +-01.0-[0000:80]--
>>            +-03.0-[0000:a0]--
>>            +-05.0-[0000:60]----00.0  nVidia Corporation Quadro NVS 290
>>            +-09.0-[0000:10-40]--+-00.0-[0000:1e-40]--+-00.0-[0000:20]--
>>            |                    |                    \-01.0-[0000:40]--
>>            |                    \-00.3-[0000:11]--+-06.0  LSI Logic / Symbios Logic SAS1068 PCI-X Fusion-MPT SAS
>>            |                                      \-09.0  ADMtek NC100 Network Everywhere Fast Ethernet 10/100
>> ...
>>
>> I'm trying to pass through the ADMtek NC100 NIC to a RHEL 3.7 HVM domU.
>> So according to your explanation, I must also pass the LSI Logic SAS
>> controller through to the same domU, right? Otherwise the problem I
>> encountered with the SAS controller will occur again, right?
>
> Yes, you should assign all the devices under the same PCIe-to-PCI bridge
> to the same domain.
>
> Randy (Weidong)
>
>>
>> If that's correct, it seems the only solution on this workstation is to
>> have a SATA HD instead of SAS. :-(
>>
>> Thanks,
>> Jimmy Jin
>>
>> On Wed, May 28, 2008 at 3:17 PM, Han, Weidong <weidong.han@xxxxxxxxx>
>> wrote:
>>> Hi Jimmy,
>>>
>>> All devices behind a PCIe-to-PCI bridge have to be assigned to the same
>>> domain.
>>>
>>> Supporting nested PCI bridges is somewhat complicated, and even
>>> infeasible in some cases. I think it makes little sense. It would be more
>>> useful to provide an interface that tells users which devices are
>>> assignable with VT-d, and hints them to assign devices correctly.
>>>
>>> Randy (Weidong)
>>>
>>>
>>> Jimmy Jin wrote:
>>>> Hi,
>>>>
>>>> Is there a plan to enable nested PCI bridge support in VT-d?
>>>> Currently, if there is a nested PCI bridge, a message is shown
>>>> saying it's not supported. And if a card in a slot behind a
>>>> nested PCI bridge is passed through, unexpected problems may occur.
>>>>
>>>> I encountered this case when trying to pass through a PCI card to a
>>>> RHEL3 HVM on an HP xw8600 workstation. The (only) PCI slot in the
>>>> xw8600 is behind a nested PCI bridge, according to lspci -t. If I pass
>>>> the card in this PCI slot through to an HVM, the system just hangs, I
>>>> guess because some problem occurs that causes the LSI SCSI controller
>>>> on the same PCI bridge to stop working correctly. On the same system,
>>>> passing through another PCI device (an on-board PCI Express NIC) works
>>>> fine.
>>>>
>>>> Thanks,
>>>> Jimmy Jin
>>>>
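
If replacing the SAS disk is not an option, the conclusion above
implies the guest would have to receive every device behind that
bridge. With the topology from the lspci output, the xm domain
configuration would look roughly like this (a sketch assuming the
usual pci = [...] syntax; dom0 must first release the devices, e.g.
via pciback):

# Assign both devices behind the 0000:10:00.3 PCIe-to-PCI bridge
# to the same HVM guest; splitting them across domains is what
# triggers the hang described above.
pci = ['0000:11:06.0',    # LSI SAS1068 controller
       '0000:11:09.0']    # ADMtek NC100 NIC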

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

