
Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document





On 29/12/2016 18:41, Jaggi, Manish wrote:
Hi Julien,

Please configure your e-mail client to properly quote e-mails. The [manish] style of reply is hard to read.

------------------------------------------------------------------------
*From:* Julien Grall <julien.grall@xxxxxxxxxx>
*Sent:* Thursday, December 29, 2016 10:33 PM
*To:* Jaggi, Manish; xen-devel; Stefano Stabellini
*Cc:* Edgar Iglesias (edgar.iglesias@xxxxxxxxxx); Steve Capper; Punit
Agrawal; Wei Chen; Campbell Sean; Shanker Donthineni; Jiandi An; Roger
Pau Monné; alistair.francis@xxxxxxxxxx; Kapoor, Prasun; Nair, Jayachandran
*Subject:* Re: [early RFC] ARM PCI Passthrough design document



On 29/12/2016 14:16, Jaggi, Manish wrote:
Hi Julien,

Hello Manish,


Wouldn't it be better to extend the design proposed by Cavium through
discussion and come up with a design agreeable to all?

As I mentioned in my mail, this design is a completely different
approach (emulation vs PV).
[manish] It would have been better if you had suggested, against the design
I posted, which specific points would change.
Since a design has already been posted, it makes sense to focus on that
and reach a common understanding. Posting a new one is a bit _rude_.

Before saying it is rude, remember there was a gap of more than a year between v4 and v5. I also warned you on IRC that I was working on a new design document over the last couple of months. So please don't act as if you were not aware.


This is a distinct proposal because the choice between
emulation and PV heavily impacts the overall design.

[manish] There are 3 aspects:
a. changes needed in Xen / dom0 for
- registering a PCI host controller driver in Xen
- mapping between SBDF and StreamID (sketched below)
- adding the PCI devices enumerated by dom0 to Xen
- making devices use the SMMU in dom0
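
To make the first two items concrete, here is a minimal sketch, assuming a hypothetical Xen-side interface (struct pci_host_bridge_ops and all field names are illustrative, not an existing Xen API):

/* Hypothetical sketch only -- not an existing Xen interface. */
#include <stdint.h>

/* SBDF packing: segment(16 bits) | bus(8) | device(5) | function(3). */
static inline uint32_t sbdf_pack(uint16_t seg, uint8_t bus,
                                 uint8_t dev, uint8_t fn)
{
    return ((uint32_t)seg << 16) | ((uint32_t)bus << 8) |
           ((uint32_t)(dev & 0x1f) << 3) | (fn & 0x7u);
}

/*
 * A host controller driver registered with Xen would at least provide
 * config-space accessors and a RID -> StreamID translation for the SMMU.
 */
struct pci_host_bridge_ops {
    uint32_t (*read_config)(uint32_t sbdf, uint16_t reg, uint8_t size);
    void     (*write_config)(uint32_t sbdf, uint16_t reg, uint8_t size,
                             uint32_t val);
    uint32_t (*rid_to_streamid)(uint32_t rid);
};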

b.1 How a domU is assigned a PCI device.
b.2 How a domU PCI driver reads the configuration space.
I think PV vs emulation only matters at this point. As of now the
frontend/backend drivers allow reading PCI config space.
Adding an ITS node to the domU device tree will make accesses trap to Xen
for ITS emulation.
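
On the emulation side of b.2, the trap handler has to decode the faulting offset within the host bridge's config window back into bus/device/function/register. A minimal sketch of the standard ECAM decode, shown only to make the trap path concrete:

#include <stdint.h>

/* Standard ECAM layout: bus[27:20], device[19:15], function[14:12],
 * register[11:0] within the config window. */
struct ecam_addr {
    uint8_t  bus, dev, fn;
    uint16_t reg;
};

static struct ecam_addr ecam_decode(uint64_t offset)
{
    struct ecam_addr a = {
        .bus = (offset >> 20) & 0xff,
        .dev = (offset >> 15) & 0x1f,
        .fn  = (offset >> 12) & 0x7,
        .reg = offset & 0xfff,
    };
    return a;
}

The trapped access would then be emulated or forwarded to the real config space, depending on the register.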

c. DT vs ACPI
I haven't seen your design capture how DT and ACPI are supported
together.
A good approach would be to extend draft 5 with ACPI.

Please re-read the section "Information available in the firmware tables".


I didn't see any comments on the one I posted.

Whilst I haven't commented on your design document, I have read your last
version of the design carefully. But even after 5 versions and
nearly 2 years of work it is still DT and ITS focused.
[manish] Two reasons for that:
a) the PCI driver in Linux was evolving, and only after msi-map did we have
a way to map an SBDF to a StreamID

You are wrong here. msi-map only does the mapping between a RID and a DeviceID, not between a RID and a StreamID. Please read the documentation again (Documentation/devicetree/bindings/pci/pci-msi.txt). For RID to StreamID you want to look at the iommu-map property (Documentation/devicetree/bindings/pci/pci-iommu.txt).
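
For reference, both bindings describe the same kind of translation table: a list of (input-base, output-base, length) entries applied to the masked RID. A minimal sketch of that lookup, following the semantics in pci-msi.txt / pci-iommu.txt (the table contents themselves come from the firmware):

#include <stdint.h>
#include <stddef.h>

/* One (rid-base, output-base, length) entry, as used by both msi-map
 * (RID -> DeviceID) and iommu-map (RID -> StreamID). */
struct id_map_entry {
    uint32_t rid_base;
    uint32_t out_base;
    uint32_t length;
};

/* Translate a RID through a map; returns -1 if the RID is not covered. */
static int64_t id_map_translate(const struct id_map_entry *map, size_t n,
                                uint32_t rid, uint32_t mask /* *-map-mask */)
{
    rid &= mask;
    for (size_t i = 0; i < n; i++)
        if (rid >= map[i].rid_base &&
            rid - map[i].rid_base < map[i].length)
            return (int64_t)map[i].out_base + (rid - map[i].rid_base);
    return -1;
}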

[...]

No words about legacy interrupts,
[Manish] At the Xen developer summit 2015 we agreed to keep legacy
interrupts as a secondary item, so that we can get something into Xen for
PCI passthrough.

Even if we don't implement it right now, we need to think about it, as it may modify the overall design. What I want to avoid is having to redo everything in the future just because we knowingly ignored a piece.

no words about ACPI... And as you can see in this
draft, ACPI will have an impact on the overall design.

Some parts of this design document are based on all the discussions we had
over the last year on your design. However, most of the comments have not
been addressed, despite the fact that they have been repeated multiple
times by various reviewers. For example, the bus number has not been added
to PHYSDEVOP_pci_host_bridge_add as requested in one of the first versions
of the design.
[manish] I disagree: since the same PCI node is passed to dom0 and Xen,
and it has a bus-nr property,
there is no need for it.

It is a bit annoying to have to repeat this: PCI passthrough is not only for the DT/ITS use case but also for ACPI, GICv2m, and so on. This hypercall will be part of the stable ABI. Once it is defined, it can never change, so we have to get it right from the beginning.

If you still disagree, please explain why. Ignoring a request is not the right thing to do if you want to see people reviewing your design document.
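
To illustrate the point, here is a hypothetical argument layout for such a hypercall (field names are illustrative, not the proposed ABI). It shows why the bus range needs to be there from the start: once this is part of the stable ABI, the layout can never be extended later.

#include <stdint.h>

/*
 * Hypothetical sketch of a PHYSDEVOP_pci_host_bridge_add argument.
 * With only 'seg', Xen could not tell which bus range a bridge
 * covers; the explicit bus range removes that ambiguity.
 */
struct physdev_pci_host_bridge_add {
    uint16_t seg;        /* PCI segment (domain) number */
    uint8_t  bus_start;  /* first bus number behind this bridge */
    uint8_t  bus_count;  /* number of buses behind this bridge */
    uint64_t cfg_base;   /* physical base of the config (ECAM) window */
};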

Moreover, this hypercall was suggested by Ian, and the requirement was
only to add the segment number so that Xen and dom0 bind to a PCI root
complex.

Posting an altogether new design without commenting on the one posted a
month back might not be the right approach.

Speaking for myself, my bandwidth is limited and I am going to
prioritize review of series where my comments have been addressed.

[manish] We all have limited bandwidth and heavily loaded priorities. Your
comments are duly taken care of in the document, especially the
SBDF-to-StreamID mapping.

No, it is not. You are still mixing DeviceID and StreamID. You added a hypercall without any justification or description of how to use it. I know we discussed this on v4, and I was concerned about quirks on platforms with a wrong RID. I was expecting you to check whether my concern was valid and come up with a justification.

Anyway, you are focusing on one comment, and not looking at the rest of them.

I would have appreciated it if you had sent a draft 6 with your additions
so that a good design could be produced.

I already explained why I sent a separate design document: I had been writing it before you sent the v5.

I first thought about re-using your design doc as a skeleton, but it was not a good fit for the way I wanted to describe the approach. Currently, I don't care about the implementation details. The important bits are the high-level architecture and the justifications.

Cheers,

--
Julien Grall


 

