
Re: [Xen-devel] PCI Passthrough ARM Design : Draft1





On Tuesday 16 June 2015 09:21 AM, Roger Pau Monné wrote:
> On 16/06/15 at 18:13, Stefano Stabellini wrote:
>> On Thu, 11 Jun 2015, Ian Campbell wrote:
>>> On Thu, 2015-06-11 at 07:25 -0400, Julien Grall wrote:
>>>> Hi Ian,
>>>>
>>>> On 11/06/2015 04:56, Ian Campbell wrote:
>>>>> On Wed, 2015-06-10 at 15:21 -0400, Julien Grall wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 10/06/2015 08:45, Ian Campbell wrote:
>>>>>>>> 4. DomU access / assignment PCI device
>>>>>>>> --------------------------------------
>>>>>>>> When a device is attached to a domU, provision has to be made
>>>>>>>> such that it can access the MMIO space of the device and Xen is
>>>>>>>> able to identify the mapping between guest BDF and system BDF.
>>>>>>>> Two hypercalls are introduced.
>>>>>>> I don't think we want/need new hypercalls here; the same existing
>>>>>>> hypercalls which are used on x86 should be suitable. That's
>>>>>>> XEN_DOMCTL_memory_mapping from the toolstack, I think.
>>>>>> XEN_DOMCTL_memory_mapping is done by QEMU for x86 HVM when the
>>>>>> guest (i.e. hvmloader?) is writing to the PCI BAR.
>>>>> What about for x86 PV? I think it is done by the toolstack there; I
>>>>> don't know what pciback does with accesses to BAR registers.
>>>> XEN_DOMCTL_memory_mapping is only used to map memory in the stage-2
>>>> page table. This is only used for auto-translated guests.
>>>>
>>>> In the case of x86 PV, the page table is managed by the guest. The
>>>> only thing to do is to give the MMIO permission to the guest in order
>>>> to let it use the regions. This is done at boot time by the toolstack.
>>> Ah yes, makes sense.
>>>
>>> Manish, this sort of thing and the constraints etc. should be discussed
>>> in the doc please.
>> I think that the toolstack (libxl) will need to call
>> xc_domain_memory_mapping (XEN_DOMCTL_memory_mapping), in addition to
>> xc_domain_iomem_permission, for auto-translated PV guests on x86 (PVH)
>> and ARM guests.
> I'm not sure about this. AFAICT you are suggesting that the toolstack
> (or the domain builder for Dom0) should set up the MMIO regions on
> behalf of the guest using the XEN_DOMCTL_memory_mapping hypercall.
>
> IMHO the toolstack should not set up the MMIO regions; instead the
> guest should be in charge of setting them in the p2m by using a
> hypercall (or at least that was the plan on x86 PVH).
>
> Roger.
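For reference, the toolstack-side flow Stefano describes above could be sketched roughly as below. Only the frame arithmetic is real code here; the two libxc calls are shown in a comment, and their exact prototypes should be checked against the libxc headers rather than taken from this sketch.

```c
/* Sketch: compute the frame-number arguments a toolstack would pass
 * when granting MMIO access and (for auto-translated guests) adding
 * the stage-2 mapping for a device BAR. */
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* First frame number covering a BAR that starts at 'base'. */
static uint64_t bar_first_frame(uint64_t base)
{
    return base >> PAGE_SHIFT;
}

/* Number of frames needed to cover 'size' bytes starting at 'base',
 * accounting for a base that is not page-aligned. */
static uint64_t bar_nr_frames(uint64_t base, uint64_t size)
{
    uint64_t start = base & ~(PAGE_SIZE - 1);
    uint64_t end   = (base + size + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
    return (end - start) >> PAGE_SHIFT;
}

/*
 * With those in hand, the toolstack would do roughly (signatures to be
 * verified against xenctrl.h):
 *
 *   xc_domain_iomem_permission(xch, domid, first_mfn, nr_mfns, 1);
 *   xc_domain_memory_mapping(xch, domid, first_gfn, first_mfn,
 *                            nr_mfns, DPCI_ADD_MAPPING);
 *
 * with first_gfn == first_mfn if the mapping is 1:1.
 */
```

For a plain x86 PV guest, per Julien's point above, only the permission call would be needed; the memory_mapping call applies to auto-translated guests (PVH, ARM).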
There were a couple of points discussed:

a) There needs to be a hypercall issued by some entity to map the device MMIO space into the domU. What should that entity be?
 i) the toolstack
ii) the domU kernel

b) Should the MMIO mapping be 1:1?

For (a), I have implemented the hypercall in the domU kernel, in the context of the notification received when a device is added to the pci-front bus. That seemed a logical point at which to issue the hypercall. Keep in mind that I am still not aware how this works on x86.
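The domU-kernel approach described here could be sketched as follows. The hypercall wrapper `xen_map_dev_mmio()` is hypothetical, a stand-in for whatever interface is eventually settled on, and `frames_mapped` is instrumentation for the sketch only:

```c
/* Sketch: on a (hypothetical) pci-front "device added" notification,
 * walk the device's BARs and ask Xen to map each MMIO BAR 1:1. */
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Hypothetical hypercall wrapper: map nr_frames frames starting at
 * first_frame, with gfn == mfn (1:1).  Here it only counts frames so
 * the sketch is self-contained. */
static uint64_t frames_mapped;
static int xen_map_dev_mmio(uint64_t first_frame, uint64_t nr_frames)
{
    frames_mapped += nr_frames;
    return 0;
}

struct bar { uint64_t base; uint64_t size; int is_io; };

/* Called from the pci-front device-add path for each new device. */
static int map_device_bars(const struct bar *bars, int nr_bars)
{
    for (int i = 0; i < nr_bars; i++) {
        if (bars[i].size == 0 || bars[i].is_io)
            continue;  /* empty or I/O-port BAR: nothing to map */
        /* BAR bases are size-aligned, so base is page-aligned for any
         * BAR of at least a page. */
        uint64_t first = bars[i].base >> PAGE_SHIFT;
        uint64_t nr = (bars[i].size + (1UL << PAGE_SHIFT) - 1)
                      >> PAGE_SHIFT;
        int rc = xen_map_dev_mmio(first, nr);
        if (rc)
            return rc;
    }
    return 0;
}
```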


For (b), AFAIK the BAR regions are not updated by the PCI device driver running in domU. So once the BARs are set by the firmware or the enumeration logic, they are not changed, certainly not in domU. In that case the mapping is always 1:1.
Should the BAR region of the device be updated to make it not 1:1?


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

