
Re: [Xen-devel] PCI Passthrough ARM Design : Draft1



On Thu, 2015-06-25 at 17:29 +0530, Manish Jaggi wrote:
> 
> On Thursday 25 June 2015 02:41 PM, Ian Campbell wrote:
> > On Thu, 2015-06-25 at 13:14 +0530, Manish Jaggi wrote:
> >> On Wednesday 17 June 2015 07:59 PM, Ian Campbell wrote:
> >>> On Wed, 2015-06-17 at 07:14 -0700, Manish Jaggi wrote:
> >>>> On Wednesday 17 June 2015 06:43 AM, Ian Campbell wrote:
> >>>>> On Wed, 2015-06-17 at 13:58 +0100, Stefano Stabellini wrote:
> >>>>>> Yes, pciback is already capable of doing that, see
> >>>>>> drivers/xen/xen-pciback/conf_space.c
> >>>>>>
> >>>>>>> I am not sure if the pci-back driver can query the guest memory map. 
> >>>>>>> Is there an existing hypercall?
> >>>>>> No, that is missing.  I think it would be OK for the virtual BAR to be
> >>>>>> initialized to the same value as the physical BAR.  But I would let the
> >>>>>> guest change the virtual BAR address and map the MMIO region wherever 
> >>>>>> it
> >>>>>> wants in the guest physical address space with
> >>>>>> XENMEM_add_to_physmap_range.
> >>>>> I disagree, given that we've apparently survived for years with x86 PV
> >>>>> guests not being able to write to the BARs I think it would be far
> >>>>> simpler to extend this to ARM and x86 PVH too than to allow guests to
> >>>>> start writing BARs which has various complex questions around it.
> >>>>> All that's needed is for the toolstack to set everything up and write
> >>>>> some new xenstore nodes in the per-device directory with the BAR
> >>>>> address/size.
> >>>>>
> >>>>> Also most guests apparently don't reassign the PCI bus by default, so
> >>>>> using a 1:1 by default and allowing it to be changed would require
> >>>>> modifying the guests to reassign. Easy on Linux, but I don't know about
> >>>>> others and I imagine some OSes (especially simpler/embedded ones) are
> >>>>> assuming the firmware sets up something sane by default.
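
The per-device xenstore nodes suggested above might look something like the
following. This is only an illustrative sketch: the "vbar-*" key names and
the addr,size encoding are made up here, not an agreed interface; only the
backend/pci per-device directory itself already exists in pciback:

```
# Hypothetical nodes written by the toolstack at assignment time
# (key names and value encoding are illustrative only):
backend/pci/1/0/dev-0    = "0000:01:00.0"
backend/pci/1/0/vbar-0-0 = "0xe0100000,0x4000"    # virtual BAR0: addr,size
backend/pci/1/0/vbar-0-2 = "0xe0200000,0x100000"  # virtual BAR2: addr,size
```

pciback would then answer config-space reads of the BARs from these values
instead of the physical ones.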
> >>>> Does the flow below capture all the points?
> >>>> a) When assigning a device to domU, the toolstack creates a node in the
> >>>> per-device directory with the virtual BAR address/size.
> >>>>
> >>>> Option1:
> >>>> b) using some hypercall, the toolstack asks Xen to create the p2m
> >>>> mapping { virtual BAR : physical BAR } for domU
> >> While implementing this, I think the pciback driver in dom0, rather
> >> than the toolstack, could send the hypercall to map the physical BAR
> >> to the virtual BAR.
> >> Thus no xenstore entry would be required for the BARs.
> > pciback doesn't (and shouldn't) have sufficient knowledge of the guest
> > address space layout to determine what the virtual BAR should be. The
> > toolstack is the right place for that decision to be made.
> Yes, the point is that the pciback driver reads the physical BAR regions
> on request from domU. It then sends a hypercall so that Xen maps the
> physical BARs into the stage-2 translation for the domU.
> Xen would use the holes left in the IPA space for MMIO.

I still think it is the toolstack which should do this; that's where
these sorts of layout decisions belong.

> Xen would return the IPA, which pciback would return in its response to domU.
> >> Moreover, a PCI driver would read the BARs only once.
> > You can't assume that though, a driver can do whatever it likes, or the
> > module might be unloaded and reloaded in the guest etc etc.
> >
> > Are you going to send out a second draft based on the discussion so far?
> Yes, I was already working on that. I was travelling this week; 24-hour
> flights and jetlag...
> >
> > Ian.
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxx
> > http://lists.xen.org/xen-devel
> 





 

