
Re: [PATCH V2 2/3] xen/arm: Add handling of extended regions for Dom0




On 23.09.21 00:05, Stefano Stabellini wrote:

Hi Stefano

On Wed, 22 Sep 2021, Oleksandr wrote:
You will also need to cover "ranges" that will describe the BARs for
the PCI devices.
Good point.
Yes, very good point!


Could you please clarify how to recognize whether a device is a PCI
device as long as PCI support is not merged? Or should we just find any
device nodes with a non-empty "ranges" property and retrieve the
addresses?
Normally any bus can have a ranges property with the aperture and
possible address translations, including /amba (compatible =
"simple-bus"). However, in these cases dt_device_get_address already
takes care of it, see xen/common/device_tree.c:dt_device_get_address.

The PCI bus is special for 2 reasons:
- the ranges property has a different format
- the bus is hot-pluggable

So I think the only one that we need to treat specially is PCI.

As far as I am aware PCI is the only bus (or maybe just the only bus
that we support?) where ranges means the aperture.
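
[Editorial note: for reference, the PCI "ranges" format mentioned here differs from a
simple-bus one in that the child (PCI) address is 3 cells and its first cell carries a
space code (I/O, 32-bit memory, prefetchable). A minimal sketch of the per-entry layout,
assuming the usual #address-cells = <3> / #size-cells = <2> on the PCI side and 2 parent
address cells; the struct below is hypothetical and for illustration only:]

#include <stdint.h>

/*
 * One entry of a PCI host bridge "ranges" property: 7 cells in total.
 * Hypothetical struct for illustration only; real code reads raw
 * big-endian cells one by one.
 */
struct pci_range_entry {
    uint32_t pci_hi;   /* phys.hi: space code (I/O, 32-bit mem, prefetchable), bus/dev/fn */
    uint32_t pci_mid;  /* phys.mid: upper 32 bits of the PCI (child) address */
    uint32_t pci_lo;   /* phys.lo: lower 32 bits of the PCI (child) address */
    uint32_t cpu_hi;   /* upper 32 bits of the CPU (parent) address */
    uint32_t cpu_lo;   /* lower 32 bits of the CPU (parent) address */
    uint32_t size_hi;  /* upper 32 bits of the window size */
    uint32_t size_lo;  /* lower 32 bits of the window size */
};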
Thank you for the clarification. I need to find a device node with a
non-empty "ranges" property (and make sure that its device_type property
is "pci"); after that I need to read the contents of the "ranges"
property and translate it.


OK, I experimented with that and managed to parse the "ranges" property
for the PCI host bridge nodes.

I tested on my setup where the host device tree contains two PCI host bridge
nodes with the following:

pcie@fe000000 {
...
             ranges = <0x1000000 0x0 0x0 0x0 0xfe100000 0x0 0x100000
                       0x2000000 0x0 0xfe200000 0x0 0xfe200000 0x0 0x200000
                       0x2000000 0x0 0x30000000 0x0 0x30000000 0x0 0x8000000
                       0x42000000 0x0 0x38000000 0x0 0x38000000 0x0 0x8000000>;
...
};

pcie@ee800000 {
...
             ranges = <0x1000000 0x0 0x0 0x0 0xee900000 0x0 0x100000
                       0x2000000 0x0 0xeea00000 0x0 0xeea00000 0x0 0x200000
                       0x2000000 0x0 0xc0000000 0x0 0xc0000000 0x0 0x8000000
                       0x42000000 0x0 0xc8000000 0x0 0xc8000000 0x0 0x8000000>;
...
};

So Xen retrieves the *CPU addresses* from the ranges:

(XEN) dev /soc/pcie@fe000000 range_size 7 nr_ranges 4
(XEN) 0: addr=fe100000, size=100000
(XEN) 1: addr=fe200000, size=200000
(XEN) 2: addr=30000000, size=8000000
(XEN) 3: addr=38000000, size=8000000
(XEN) dev /soc/pcie@ee800000 range_size 7 nr_ranges 4
(XEN) 0: addr=ee900000, size=100000
(XEN) 1: addr=eea00000, size=200000
(XEN) 2: addr=c0000000, size=8000000
(XEN) 3: addr=c8000000, size=8000000
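
[Editorial note: as a sanity check, decoding the first entry of pcie@fe000000 by hand
gives the same CPU address and size that Xen prints above. A small self-contained sketch,
outside of Xen, with the cell values copied from the node above:]

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /*
     * First "ranges" entry of pcie@fe000000:
     *   <0x1000000 0x0 0x0  0x0 0xfe100000  0x0 0x100000>
     * i.e. 3 child (PCI) cells, 2 parent (CPU) cells, 2 size cells.
     */
    const uint32_t cells[7] = {
        0x01000000, 0x0, 0x0,   /* PCI address (phys.hi: I/O space) */
        0x0, 0xfe100000,        /* CPU (parent) address */
        0x0, 0x00100000,        /* size */
    };
    uint64_t cpu_addr = ((uint64_t)cells[3] << 32) | cells[4];
    uint64_t size     = ((uint64_t)cells[5] << 32) | cells[6];

    /* Prints: addr=fe100000, size=100000 -- matching the Xen log above */
    printf("addr=%"PRIx64", size=%"PRIx64"\n", cpu_addr, size);
    return 0;
}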

The code below covers the "ranges" property in the context of finding
memory holes (to be squashed with the current patch):

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index d37156a..7d20c10 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -834,6 +834,8 @@ static int __init find_memory_holes(struct meminfo *ext_regions)
      {
          unsigned int naddr;
          u64 addr, size;
+        const __be32 *ranges;
+        u32 len;

          naddr = dt_number_of_address(np);

@@ -857,6 +859,41 @@ static int __init find_memory_holes(struct meminfo *ext_regions)
                  goto out;
              }
          }
+
+        /*
+         * Also look for a non-empty "ranges" property, which would likely
+         * mean that we are dealing with a PCI host bridge device and that
+         * the property here describes the BARs for the PCI devices.
+         */
One thing to be careful about is that a "ranges" property with a valid
value is not only present on PCI buses. It can be present on amba and
other simple-buses too. In that case the format of "ranges" is simpler,
as it doesn't have a "memory type" like PCI.

When you get addresses from reg, bus ranges properties are automatically
handled for you.

All of this to say that a check on "ranges" is not enough, because it
might capture other non-PCI buses that have a different, simpler,
ranges format. You want to check for "ranges" under a device_type =
"pci" node.

ok, will do.




+        ranges = dt_get_property(np, "ranges", &len);
+        if ( ranges && len )
+        {
+            unsigned int range_size, nr_ranges;
+            int na, ns, pna;
+
+            pna = dt_n_addr_cells(np);
+            na = dt_child_n_addr_cells(np);
+            ns = dt_child_n_size_cells(np);
+            range_size = pna + na + ns;
+            nr_ranges = len / sizeof(__be32) / range_size;
+
+            for ( i = 0; i < nr_ranges; i++, ranges += range_size )
+            {
+                /* Skip the child address and get the parent (CPU) address */
+                addr = dt_read_number(ranges + na, pna);
+                size = dt_read_number(ranges + na + pna, ns);
+
+                start = addr & PAGE_MASK;
+                end = PAGE_ALIGN(addr + size);
+                res = rangeset_remove_range(mem_holes, start, end - 1);
+                if ( res )
+                {
+                    printk(XENLOG_ERR "Failed to remove: %#"PRIx64"->%#"PRIx64"\n",
+                           start, end);
+                    goto out;
+                }
+            }
+        }
      }
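
[Editorial note: as agreed above ("ok, will do"), the "ranges" parsing will additionally be
gated on the node being a PCI host bridge. A minimal sketch of what that check might look
like; it is not part of the patch above and assumes Xen's dt_device_type_is_equal() helper
from xen/include/xen/device_tree.h:]

        /*
         * Sketch only: treat "ranges" as a PCI aperture description only
         * when the node really is a PCI host bridge, i.e. device_type = "pci".
         */
        if ( dt_device_type_is_equal(np, "pci") )
        {
            ranges = dt_get_property(np, "ranges", &len);
            if ( ranges && len )
            {
                /* ... parse the PCI "ranges" entries as in the hunk above ... */
            }
        }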

--
Regards,

Oleksandr Tyshchenko




 

