
Re: [Xen-devel] PCI Passthrough ARM Design : Draft1



On Wed, Jun 17, 2015 at 03:35:02PM +0100, Stefano Stabellini wrote:
> On Wed, 17 Jun 2015, Ian Campbell wrote:
> > On Wed, 2015-06-17 at 07:14 -0700, Manish Jaggi wrote:
> > > 
> > > On Wednesday 17 June 2015 06:43 AM, Ian Campbell wrote:
> > > > On Wed, 2015-06-17 at 13:58 +0100, Stefano Stabellini wrote:
> > > >> Yes, pciback is already capable of doing that, see
> > > >> drivers/xen/xen-pciback/conf_space.c
> > > >>
> > > >>> I am not sure if the pci-back driver can query the guest memory map.
> > > >>> Is there an existing hypercall?
> > > >> No, that is missing.  I think it would be OK for the virtual BAR to be
> > > >> initialized to the same value as the physical BAR.  But I would let the
> > > >> guest change the virtual BAR address and map the MMIO region wherever 
> > > >> it
> > > >> wants in the guest physical address space with
> > > >> XENMEM_add_to_physmap_range.
> > > > I disagree, given that we've apparently survived for years with x86 PV
> > > > guests not being able to write to the BARs I think it would be far
> > > > simpler to extend this to ARM and x86 PVH too than to allow guests to
> > > > start writing BARs which has various complex questions around it.
> > > > All that's needed is for the toolstack to set everything up and write
> > > > some new xenstore nodes in the per-device directory with the BAR
> > > > address/size.
> > > >
> > > > Also most guests apparently don't reassign the PCI bus by default, so
> > > > using a 1:1 by default and allowing it to be changed would require
> > > > modifying the guests to reassign. Easy on Linux, but I don't know about
> > > > others and I imagine some OSes (especially simpler/embedded ones) are
> > > > assuming the firmware sets up something sane by default.
> > > Does the flow below capture all the points?
> > > a) When assigning a device to domU, toolstack creates a node in per 
> > > device directory with virtual BAR address/size
> > > 
> > > Option1:
> > > b) the toolstack, via some hypercall, asks Xen to create a p2m mapping
> > > { virtual BAR : physical BAR } for domU
> > > c) domU will not update the BARs at any time; if it does, that is a
> > > fault, until we decide how to handle it
> > 
> > As Julien has noted, pciback already deals with this correctly: because
> > sizing a BAR involves a write, it implements a scheme which allows
> > either the hardcoded virtual BAR value or all 1s (needed for size
> > detection) to be written.
> > 
> > > d) when domU queries BAR address from pci-back the virtual BAR address 
> > > is provided.
> > > 
> > > Option2:
> > > b) domU will not update the BARs at any time; if it does, that is a
> > > fault, until we decide how to handle it
> > > c) when domU queries BAR address from pci-back the virtual BAR address 
> > > is provided.
> > > d) domU sends a hypercall to map virtual BARs,
> > > e) xen pci code reads the BAR and maps { virtual BAR : physical BAR } 
> > > for domU
> > > 
> > > Which option is better? I think Ian is for (2) and Stefano maybe for (1).
> > 
> > In fact I'm now (after Julien pointed out the current behaviour of
> > pciback) in favour of (1), although I'm not sure if Stefano is too.
> > 
> > (I was never in favour of (2), FWIW; I previously was in favour of (3),
> > which is like (2) except pciback makes the hypercall to map the virtual
> > BARs to the guest. I'd still favour that over (2), but (1) is now my
> > preference.)
> 
> OK, let's go with (1).

Right, and as the maintainer of pciback that means I don't have to do
anything, right? :-)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
