
Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document



Hi Stefano,

On 02/14/2017 06:20 PM, Stefano Stabellini wrote:
On Tue, 14 Feb 2017, Julien Grall wrote:
Hi Stefano,

On 02/13/2017 07:59 PM, Stefano Stabellini wrote:
On Mon, 13 Feb 2017, Julien Grall wrote:
Hi Stefano,

On 10/02/17 01:01, Stefano Stabellini wrote:
On Fri, 3 Feb 2017, Edgar E. Iglesias wrote:
A possible hack could be to allocate a chunk of DDR dedicated to PCI DMA.
PCI DMA devices could be locked down so they can only access this memory + the MSI
doorbell.
Guests could still screw each other up, but at least it becomes harder to
read/write each other's OS memory directly.
It may not be worth the effort though....
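
[For illustration, a minimal sketch of that idea: carve a fixed window out
of DDR and only ever hand out DMA buffers from it. The window base/size and
all names below are made up, and the real restriction of the devices to the
window would have to come from the platform or Xen.]

#include <stddef.h>
#include <stdint.h>

/* Hypothetical DDR window set aside for PCI DMA; devices would be
 * restricted to this range plus the MSI doorbell. */
#define DMA_POOL_BASE  0x80000000ULL
#define DMA_POOL_SIZE  0x04000000ULL   /* 64MB, made-up size */

static uint64_t dma_pool_next = DMA_POOL_BASE;

/* Trivial bump allocator: every DMA buffer comes from the window,
 * so a device is never handed a pointer into arbitrary OS memory. */
static uint64_t dma_pool_alloc(size_t size, size_t align)
{
    uint64_t addr = (dma_pool_next + align - 1) & ~(uint64_t)(align - 1);

    if (addr + size > DMA_POOL_BASE + DMA_POOL_SIZE)
        return 0;   /* pool exhausted */

    dma_pool_next = addr + size;
    return addr;
}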

Actually, we do have the swiotlb in Dom0, which can be used to bounce
DMA requests over a buffer that has been previously set up to be DMA-safe
using a hypercall. That is how the swiotlb is used on x86. On ARM it is
used to issue cache flushes via hypercall, but it could be adapted to do
both. It would degrade performance, due to the additional memcpy, but it
would work, I believe.
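
[A minimal sketch of the bounce-buffer idea, assuming a pool that was set
up in advance to be DMA-safe; tlb_virt/tlb_phys and the slot handling are
hypothetical names standing in for the real swiotlb machinery.]

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical DMA-safe pool, registered in advance (on Xen, via hypercall). */
extern uint8_t  *tlb_virt;   /* CPU mapping of the pool */
extern uint64_t  tlb_phys;   /* bus/machine address of the pool */

/* Bounce an outgoing buffer: copy it into the DMA-safe pool and give
 * the device the pool's address instead of the original one. This
 * memcpy is the performance cost mentioned above. */
static uint64_t bounce_map_to_device(const void *buf, size_t len,
                                     size_t slot_offset)
{
    memcpy(tlb_virt + slot_offset, buf, len);
    return tlb_phys + slot_offset;
}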

A while ago, GlobalLogic suggested using direct memory mapping for the guest,
to allow the guest to use DMA on platforms that do not have an SMMU.

I believe we can use the same trick on platforms where the SMMU cannot
distinguish between PCI devices.
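
[A sketch of what direct mapping means here; guest_map_page() is a
hypothetical stand-in for whatever toolstack/hypervisor interface installs
a stage-2 (guest-physical -> machine) mapping for the domain.]

#include <stdint.h>

typedef uint64_t gfn_t;   /* guest frame number */
typedef uint64_t mfn_t;   /* machine frame number */

extern int guest_map_page(int domid, gfn_t gfn, mfn_t mfn);

/* Direct (1:1) mapping: each guest frame maps to the machine frame
 * with the same number, so any address the guest programs into a
 * device for DMA is also a valid machine address. */
static int map_direct(int domid, gfn_t start, uint64_t nr_pages)
{
    for (uint64_t i = 0; i < nr_pages; i++) {
        int rc = guest_map_page(domid, start + i, start + i);
        if (rc)
            return rc;
    }
    return 0;
}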

Yes, that would work, but only on platforms with a very limited number
of guests. However, it might still be a very common use-case on a
platform such as the Zynq MPSoC.

Can you explain why you think this could only work with a limited number
of guests?

Because the memory regions would need to be mapped 1:1, right?

Correct. In your case, the DMA buffer would have to be contiguous in memory.

And don't devices often have DMA address limitations below 4GB?

Many platforms have more than 4GB of memory today; I would be surprised if devices still had this 32-bit DMA address limitation. But maybe I am wrong here.

If that is the case, you would still need to keep some memory free below 4GB for the swiotlb.
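
[To make the 4GB constraint concrete, a small sketch of the check that
decides whether a buffer is reachable by a device with a 32-bit DMA mask;
the names are assumptions, not the actual kernel helpers.]

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* A device with a 32-bit DMA mask cannot reach buffers ending above
 * 4GB; those would have to be bounced through memory kept free below
 * that boundary. */
static bool needs_bounce(uint64_t phys, size_t size, uint64_t dma_mask)
{
    return (phys + size - 1) > dma_mask;
}

/* e.g. needs_bounce(buf_phys, len, DMA_BIT_MASK(32)) */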


I can see how it could work well with 1-4 guests, but I don't think it
could work in a typical server environment with many more guests. Or am
I missing something?

I expect all servers to be SBSA-compliant, and AFAICT the SBSA mandates an SMMU for I/O virtualization (see section 8.6 in ARM-DEN-0029 v3.0).

Furthermore, for embedded use cases the cost of using the swiotlb might not be acceptable (it adds an extra copy).

In the server case, I would not bother to properly support platforms with a broken SMMU. For embedded, I think it would be acceptable to use direct mapping.

Cheers,

--
Julien Grall
