
Re: [Xen-devel] PCI Passthrough ARM Design : Draft1



On Thu, 2015-06-11 at 07:25 -0400, Julien Grall wrote:
> Hi Ian,
> 
> On 11/06/2015 04:56, Ian Campbell wrote:
> > On Wed, 2015-06-10 at 15:21 -0400, Julien Grall wrote:
> >> Hi,
> >>
> >> On 10/06/2015 08:45, Ian Campbell wrote:
> >>>> 4. DomU access / assignment PCI device
> >>>> --------------------------------------
> >>>> When a device is attached to a domU, provision has to be made so that
> >>>> the domU can access the device's MMIO space and Xen can identify the
> >>>> mapping between the guest BDF and the system BDF. Two hypercalls are
> >>>> introduced.
> >>>
> >>> I don't think we want/need new hypercalls here, the same existing
> >>> hypercalls which are used on x86 should be suitable. That's
> >>> XEN_DOMCTL_memory_mapping from the toolstack I think.
> >>
> >> XEN_DOMCTL_memory_mapping is done by QEMU for x86 HVM when the guest
> >> (i.e hvmloader?) is writing in the PCI BAR.
> >
> > What about for x86 PV? I think it is done by the toolstack there, I
> > don't know what pciback does with accesses to BAR registers.
> 
> XEN_DOMCTL_memory_mapping is only used to map memory in the stage-2 page 
> table, so it is only relevant for auto-translated guests.
> 
> In the case of x86 PV, the page tables are managed by the guest. The only 
> thing to do is to grant the guest permission to the MMIO regions so that 
> it can map them itself. This is done at boot time by the toolstack.

Ah yes, makes sense.

Manish, this sort of thing and the constraints etc should be discussed
in the doc please.

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

