
Re: [Xen-devel] [Query] Assigning PCI ranges to dom0 and domU



Please use plain text in emails.

On Sat, 9 Aug 2014, manish jaggi wrote:
> On 1 August 2014 19:58, Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx> 
> wrote:
>       On Fri, 1 Aug 2014, Stefano Stabellini wrote:
>       > On Fri, 1 Aug 2014, manish jaggi wrote:
>       > > Hi Stefano,
>       > >
>       > > I am working on accessing PCI nodes in the doms on ARM (Cavium).
>       > > If there is the below device tree node:
>       > >
>       > > pcie1@0x849000000000 {
>       > >         compatible = "cavium,thunder-pcie";
>       > >         device_type = "pci";
>       > >         msi-parent = <&its>;
>       > >         bus-range = <0 255>;
>       > >         #size-cells = <2>;
>       > >         #address-cells = <3>;
>       > >         reg = <0x8490 0x00000000 0 0x40000000>; /* Configuration space */
>       > >         ranges = <0x03000000 0x8310 0x00000000 0x8310 0x00000000 0x00 0x10000000>, /* mem ranges */
>       > >                  <0x03000000 0x8100 0x00000000 0x8100 0x00000000 0x80 0x00000000>;
>       > > };
>       > >
>       > > How do I assign these ranges to dom0 / domU? Is there a
>       > > well-defined API in Xen, or do I have to parse the device tree
>       > > ranges and do a 1:1 mapping using map_mmio_regions?
>       >
>       > Firstly you just need to get PCI up and running in Dom0, and you
>       > can do that by passing this device tree node to Dom0 and remapping
>       > the appropriate memory ranges. See for example:
>       >
>       > xen/arch/arm/platforms/xgene-storm.c:xgene_storm_specific_mapping
> 
> So yes, to reply to your specific question, you need to parse the device
> tree and use map_mmio_regions for Dom0.
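
For illustration, the 1:1 approach could look roughly like the sketch below.
This is untested: the map_mmio_regions() prototype has changed between Xen
versions (older ARM trees take start/end physical addresses, newer ones take
frame counts), and the helper name is made up. The addresses in the usage
comment are taken from the mem ranges of the device tree node quoted above.

    #include <xen/sched.h>   /* struct domain */
    #include <asm/p2m.h>     /* map_mmio_regions() -- location may vary */

    /* Sketch: map one PCI window 1:1 into Dom0's p2m, in the spirit of
     * xgene_storm_specific_mapping(). */
    static int map_one_pci_window(struct domain *d, paddr_t start, paddr_t size)
    {
        paddr_t end = start + size - 1;

        printk("Mapping PCI window %"PRIpaddr"-%"PRIpaddr" to dom%d\n",
               start, end, d->domain_id);
        /* 1:1 mapping: guest physical address == machine address. */
        return map_mmio_regions(d, start, end, start);
    }

    /* e.g. for the first mem range above:
     *   map_one_pci_window(d, 0x831000000000ULL, 0x10000000ULL);
     */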
> 
> For DomU, the toolstack allows the guest to map the MMIO regions of the
> device by calling xc_domain_iomem_permission.
> On ARM, in addition to that, you'll have to add those MMIO regions to the
> guest p2m, otherwise the guest still won't have access to them. (x86 PV
> guests don't actually have a proper p2m the way ARM guests do, so giving
> them permission would be enough.)
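
A rough sketch of that toolstack side, assuming the XEN_DOMCTL_memory_mapping
path (used here via xc_domain_memory_mapping) is actually wired up for ARM in
your tree, and with a 1:1 gfn == mfn layout picked purely for simplicity; the
helper name is illustrative:

    #include <xenctrl.h>

    /* Sketch: let a DomU access a device MMIO region and, on ARM, also map
     * it into the guest p2m. */
    static int assign_mmio_to_domu(xc_interface *xch, uint32_t domid,
                                   unsigned long mfn, unsigned long nr_mfns)
    {
        int rc;

        /* Allow the domain to access these machine frames at all. */
        rc = xc_domain_iomem_permission(xch, domid, mfn, nr_mfns, 1);
        if (rc)
            return rc;

        /* On ARM the region must also appear in the guest p2m; map it 1:1. */
        return xc_domain_memory_mapping(xch, domid, mfn /* gfn */, mfn,
                                        nr_mfns, 1 /* add */);
    }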
> 
> map_mmio_regions for 4G or more takes a lot of time. Is there a way to
> optimize it? AFAIK it would amount to only a few PTE memory writes.
> How is it handled in Xen?

Why 4G or more? Are you actually trying to assign PCI devices that have
one or more MMIO regions of 4G or more?

In any case, Ian recently introduced superpage support in the p2m, so
map_mmio_regions should be much faster now, using 2MB mappings.
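
(For a 4GB window that is the difference between roughly one million 4KB
p2m entries and 2048 2MB entries, so most of the cost was simply the number
of individual entries being written.)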


>       > Once that is done, it is time to look at pciback and pcifront and
>       > try to get them running on ARM.
>       >
>       > I would start by enabling PCI passthrough in the xl toolstack, look at
>       > tools/libxl/libxl_pci.c:libxl__device_pci_add, called by
>       > domcreate_attach_pci. It should be working on ARM following the PV
>       > path (LIBXL_DOMAIN_TYPE_PV).
>       >
>       > After the toolstack parts are in place, you should be able to see a
>       > pci entry in xenstore (xenstore-ls to list everything that is present
>       > in xenstore). That is the basic information needed by pcifront and
>       > pciback to establish a communication channel. Pcifront is
>       > drivers/pci/xen-pcifront.c and pciback is drivers/xen/xen-pciback: you
>       > need to compile and initialize them on ARM. You might have to
>       > implement a few ARM-specific missing pieces, corresponding to the x86
>       > ones in arch/x86/pci/xen.c. They are mostly about MSIs.
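
For reference, once that machinery is in place the xl side comes down to a
pci entry in the guest config, something like (the BDF here is just an
example):

    # domU config fragment: pass the device at 0000:01:10.0 through
    pci = [ '0000:01:10.0' ]

plus marking the device assignable first (xl pci-assignable-add on current
toolstacks); the corresponding pci frontend/backend nodes then show up in
xenstore for pcifront and pciback to use.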
> 