
Re: [Xen-devel] xen/arm: Hiding SMMUs from Dom0 when using ACPI on Xen



Hello,

On 18/05/17 12:59, Manish Jaggi wrote:
On 2/27/2017 11:42 PM, Julien Grall wrote:
On 02/27/2017 04:58 PM, Shanker Donthineni wrote:
Hi Julien,

Hi Shanker,

Please don't drop people from CC. In my case, any e-mail I am not CCed
on skips my inbox and I may not read it for a while.


On 02/27/2017 08:12 AM, Julien Grall wrote:


On 27/02/17 13:23, Vijay Kilari wrote:
Hi Julien,

Hello Vijay,

On Wed, Feb 22, 2017 at 7:40 PM, Julien Grall <julien.grall@xxxxxxx>
wrote:
Hello,

There were a few discussions recently about hiding SMMUs from DOM0 when
using
ACPI. I thought it would be good to have a separate thread for this.

When using ACPI, the SMMUs will be described in the IO Remapping
Table
(IORT). The specification can be found on the ARM website [1].

As a brief summary, the IORT can be used to discover the SMMUs
present on
the platform and, for a given device, to find the IDs used to configure
components such
as the ITS (DeviceID) and the SMMU (StreamID).

Appendix A in the specification gives an example of how the DeviceID and
StreamID can be found. For instance, when a PCI device is both
protected by
an SMMU and MSI-capable, the following translation will happen:
        RID -> StreamID -> DeviceID
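
As a rough illustration of that two-step lookup, here is a minimal C sketch.
The structure and helper names below are simplified assumptions made up for
this mail, not the actual ACPI/IORT or Xen definitions:

/*
 * Illustrative only: a simplified stand-in for an IORT ID mapping entry.
 * Each entry translates a contiguous range of input IDs into a range of
 * output IDs on another node (e.g. an SMMU or an ITS group).
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct id_mapping {
    uint32_t input_base;   /* first input ID covered by this entry */
    uint32_t count;        /* number of IDs in the range           */
    uint32_t output_base;  /* first output ID produced             */
};

/* Translate one ID through a table of mappings; false if no entry matches. */
static bool translate_id(const struct id_mapping *map, size_t nr,
                         uint32_t in, uint32_t *out)
{
    for (size_t i = 0; i < nr; i++) {
        if (in >= map[i].input_base &&
            in < map[i].input_base + map[i].count) {
            *out = map[i].output_base + (in - map[i].input_base);
            return true;
        }
    }
    return false;
}

/* RID -> StreamID (root complex node) -> DeviceID (SMMU node). */
static bool rid_to_deviceid(const struct id_mapping *rc, size_t nr_rc,
                            const struct id_mapping *smmu, size_t nr_smmu,
                            uint32_t rid, uint32_t *deviceid)
{
    uint32_t streamid;

    return translate_id(rc, nr_rc, rid, &streamid) &&
           translate_id(smmu, nr_smmu, streamid, deviceid);
}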

Currently, SMMUs are hidden from DOM0 because they are being used by
Xen and
we don't support stage-1 SMMU. If we pass the IORT as-is, DOM0
will try
to initialize the SMMUs and crash.

I first thought about using a Xen-specific way (STAO) or adding a
flag to the
IORT. But that is not ideal.

So we would have to rewrite the IORT for DOM0. Given that a range of
RIDs can
be mapped to multiple ranges of DeviceIDs, we would have to translate
RIDs one by
one to find the associated DeviceIDs. I think this may end up in
complex code
and a big IORT table.
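
To show why the one-by-one translation could end up with a big table, here is
a sketch (reusing the simplified, made-up structures and helpers from the
sketch above; purely illustrative) that rewrites an RC node by emitting one
single-ID mapping per RID:

/*
 * Illustrative sketch of the "translate RIDs one by one" approach: build a
 * DOM0 table where every RID gets its own single-ID mapping straight to its
 * DeviceID. Correct, but a 16-bit RID space can produce up to 64K entries.
 */
static size_t build_per_rid_mappings(const struct id_mapping *rc, size_t nr_rc,
                                     const struct id_mapping *smmu,
                                     size_t nr_smmu,
                                     struct id_mapping *out, size_t out_max)
{
    size_t n = 0;

    for (size_t i = 0; i < nr_rc; i++) {
        for (uint32_t off = 0; off < rc[i].count && n < out_max; off++) {
            uint32_t rid = rc[i].input_base + off;
            uint32_t deviceid;

            if (!rid_to_deviceid(rc, nr_rc, smmu, nr_smmu, rid, &deviceid))
                continue;

            /* One table entry per RID. */
            out[n].input_base  = rid;
            out[n].count       = 1;
            out[n].output_base = deviceid;
            n++;
        }
    }

    return n;
}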

Why can't we replace the output base of the IORT PCI node with the SMMU's
output base?
I mean, similar to a PCI node without an SMMU, why can't we replace the
output base of the PCI node with
the SMMU's output base?

Because I don't see anything in the spec preventing one RC ID mapping
from producing multiple SMMU ID mappings. So which output base would you
use?


Basically, remove the SMMU nodes and replace the output of the PCIe and named
node ID mappings with ITS nodes.

RID --> StreamID --> DeviceID --> ITS device id  =  RID --> DeviceID -->
ITS device id
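
To make the suggestion concrete, here is a sketch of that rewrite, again using
the simplified, made-up structures from the sketch earlier in the thread. It
only works when each RC output range falls entirely inside a single SMMU input
range:

/*
 * Compose one RC ID mapping with the SMMU's StreamID -> DeviceID table so
 * the RC entry can point straight at the ITS group. Only works when the
 * whole RC output range is covered by one SMMU entry.
 */
static bool compose_rc_mapping(const struct id_mapping *rc_entry,
                               const struct id_mapping *smmu, size_t nr_smmu,
                               struct id_mapping *out)
{
    uint32_t first = rc_entry->output_base;
    uint32_t last  = rc_entry->output_base + rc_entry->count - 1;

    for (size_t i = 0; i < nr_smmu; i++) {
        /* Assumed: the RC output range fits inside one SMMU input range. */
        if (first >= smmu[i].input_base &&
            last < smmu[i].input_base + smmu[i].count) {
            out->input_base  = rc_entry->input_base;
            out->count       = rc_entry->count;
            out->output_base = smmu[i].output_base +
                               (first - smmu[i].input_base);
            return true;
        }
    }

    return false; /* RC range straddles SMMU ranges: no single output base. */
}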

Can you detail it? You seem to assume that one RC ID mapping range
will only produce one ID mapping range. AFAICT, this is not mandated by
the spec.

You are correct that it is not mandated by the spec, but AFAIK there
seems to be no valid use case for that.

Xen has to be compliant with the spec. If the spec allows something, then we should support it unless there is a strong reason not to.

In this case, it is not too difficult to implement the suggestion I wrote a couple of months ago. So why would we try to put ourselves in a corner?


RID ranges should not overlap between ID Array entries.

I believe you misunderstood my point here, so let me give an example. My understanding of the spec is that it is possible to have:

RC A
 // Doesn't use SMMU 0, so just outputs DeviceIDs to ITS GROUP 0
 // Input ID --> Output reference: Output ID
0x0000-0xffff --> ITS GROUP 0 : 0x0000-0xffff

SMMU 0
 // Note that the range of StreamIDs that map to DeviceIDs excludes
 // the NIC 0 DeviceID as it does not generate MSIs
 // Input ID --> Output reference: Output ID
0x0000-0x01ff --> ITS GROUP 0 : 0x10000-0x101ff
0x0200-0xffff --> ITS GROUP 0 : 0x20000-0x207ff

 // SMMU 0 control interrupt is MSI based
 // Input ID --> Output reference: Output ID
N/A --> ITS GROUP 0 : 0x200001
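
To make the point concrete: take a hypothetical RC B behind SMMU 0 (not part
of the example above) whose single ID mapping sends RIDs 0x0000-0x03ff to
StreamIDs 0x0000-0x03ff. Rewriting RC B for DOM0 has to follow SMMU 0's range
boundaries, so the single RC mapping splits into two:

RC B (hypothetical, rewritten for DOM0)
 // Input ID --> Output reference: Output ID
0x0000-0x01ff --> ITS GROUP 0 : 0x10000-0x101ff
0x0200-0x03ff --> ITS GROUP 0 : 0x20000-0x201ff

A single substitution of the RC output base cannot express that split.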

I believe this would be updated in the next IORT spec revision.

Well, Xen should still support the current revision of the IORT even if the next version adds more restrictions.

Cheers,

--
Julien Grall
