Re: [Xen-devel] [PATCH v2 03/27] ARM: GICv3 ITS: allocate device and collection table
Hi Andre,

On 03/16/2017 11:20 AM, Andre Przywara wrote:
> Each ITS maps a pair of a DeviceID (for instance derived from a PCI
> b/d/f triplet) and an EventID (the MSI payload or interrupt ID) to a
> pair of LPI number and collection ID, which points to the target CPU.
> This mapping is stored in the device and collection tables, which
> software has to provide for the ITS to use.
> Allocate the required memory and hand it to the ITS.
> The maximum number of devices is limited to a compile-time constant
> exposed in Kconfig.
>
> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
> ---
>  docs/misc/xen-command-line.markdown |   8 ++
>  xen/arch/arm/Kconfig                |  14 ++++
>  xen/arch/arm/gic-v3-its.c           | 163 ++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/gic-v3.c               |   3 +
>  xen/include/asm-arm/gic_v3_its.h    |  63 +++++++++++++-
>  5 files changed, 250 insertions(+), 1 deletion(-)
>
> diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
> index 619016d..068d116 100644
> --- a/docs/misc/xen-command-line.markdown
> +++ b/docs/misc/xen-command-line.markdown
> @@ -1158,6 +1158,14 @@ based interrupts. Any higher IRQs will be available for use via PCI
>  MSI.
>
>  ### maxcpus
>  > `= <integer>`
>
> +### max\_its\_device\_bits
> +> `= <integer>`
> +
> +Specifies the maximum number of devices using MSIs on the ARM GICv3 ITS
> +controller to allocate table entries for. Each table entry uses a hardware
> +specific size, typically 8 or 16 bytes.
> +Defaults to 10 bits (to cover at most 1024 devices).

The description is misleading. Even if the platform has fewer than 1024
devices, 10 bits may not be enough if the Device IDs are not contiguous.

However, I don't think this option is useful. A user may not know the
DevIDs in use on the platform and hence will not be able to choose a
sensible value. I still think that we should allocate what the hardware
asks for, and if it is necessary to limit this, it should be done per
platform.

> +
>  ### max\_lpi\_bits
>  > `= <integer>`
>
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 86f7b53..0d50156 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -64,6 +64,20 @@ config MAX_PHYS_LPI_BITS
>           This can be overridden on the command line with the max_lpi_bits
>           parameter.
>
> +config MAX_PHYS_ITS_DEVICE_BITS
> +        depends on HAS_ITS
> +        int "Number of device bits the ITS supports"
> +        range 1 32
> +        default "10"
> +        help
> +          Specifies the maximum number of devices which want to use the ITS.
> +          Xen needs to allocate memory for the whole range very early.
> +          The allocation scheme may be sparse, so a much larger number must
> +          be supported to cover devices with a high bus number or those on
> +          separate bus segments.
> +          This can be overridden on the command line with the
> +          max_its_device_bits parameter.

From what I understand, the default you suggested fits neither the Cavium
nor the Qualcomm platform. It is also hard to see how a distribution will
be able to choose a sensible value here.

> +
>  endmenu
>
>  menu "ARM errata workaround via the alternative framework"
>
> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
> index 4056e5b..9982fe9 100644
> --- a/xen/arch/arm/gic-v3-its.c
> +++ b/xen/arch/arm/gic-v3-its.c

[...]

> +static int its_map_baser(void __iomem *basereg, uint64_t regc,
> +                         unsigned int nr_items)
> +{
> +    uint64_t attr, reg;
> +    unsigned int entry_size = GITS_BASER_ENTRY_SIZE(regc);
> +    unsigned int pagesz = 2, order, table_size;

Please document what 2 means.
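Something along these lines would be self-documenting (a sketch only: the
macro names are made up, but the encoding of the GITS_BASER<n>.Page_Size
field, 0 for 4K, 1 for 16K and 2 for 64K pages, comes from the GICv3
architecture specification):

    /* GITS_BASER<n>.Page_Size field encoding: 0 -> 4K, 1 -> 16K, 2 -> 64K. */
    #define GITS_BASER_PAGE_SIZE_4K     0
    #define GITS_BASER_PAGE_SIZE_16K    1
    #define GITS_BASER_PAGE_SIZE_64K    2

    /* Start with 64K ITS pages; the retry path can then fall back to
     * another size if the ITS does not accept our choice. */
    unsigned int pagesz = GITS_BASER_PAGE_SIZE_64K, order, table_size;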
> +    void *buffer;
> +
> +    attr = GIC_BASER_InnerShareable << GITS_BASER_SHAREABILITY_SHIFT;
> +    attr |= GIC_BASER_CACHE_SameAsInner << GITS_BASER_OUTER_CACHEABILITY_SHIFT;
> +    attr |= GIC_BASER_CACHE_RaWaWb << GITS_BASER_INNER_CACHEABILITY_SHIFT;
> +
> +    /*
> +     * Setup the BASE register with the attributes that we like. Then read
> +     * it back and see what sticks (page size, cacheability and shareability
> +     * attributes), retrying if necessary.
> +     */
> +retry:
> +    table_size = ROUNDUP(nr_items * entry_size, BIT(BASER_PAGE_BITS(pagesz)));
> +    /* The BASE registers support at most 256 pages. */
> +    table_size = min(table_size, 256U << BASER_PAGE_BITS(pagesz));
> +    /* The memory block must be aligned to the requested page size. */
> +    order = max(get_order_from_bytes(table_size), pagesz * 2);
> +
> +    buffer = alloc_xenheap_pages(order, 0);

Why don't you use _zalloc(...)? This would avoid computing the order and
drop a few lines.

> +    if ( !buffer )
> +        return -ENOMEM;
> +
> +    if ( !check_propbaser_phys_addr(buffer, BASER_PAGE_BITS(pagesz)) )

The name of the function does not make sense: you check an address for the
register BASER, not PROPBASER.

> +    {
> +        free_xenheap_pages(buffer, 0);
> +        return -ERANGE;
> +    }
> +    memset(buffer, 0, table_size);

[...]

> +int gicv3_its_init(void)
> +{
> +    struct host_its *hw_its;
> +    int ret;
> +
> +    list_for_each_entry(hw_its, &host_its_list, entry) {

Coding style:

list_for_each_entry(....)
{

> +        ret = gicv3_its_init_single_its(hw_its);
> +        if ( ret )
> +            return ret;
> +    }
> +
> +    return 0;
> +}
> +
>  /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
>  void gicv3_its_dt_init(const struct dt_device_node *node)
>  {
>
> diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
> index ed78363..cc1e219 100644
> --- a/xen/arch/arm/gic-v3.c
> +++ b/xen/arch/arm/gic-v3.c
> @@ -1590,6 +1590,9 @@ static int __init gicv3_init(void)
>      spin_lock(&gicv3.lock);
>
>      gicv3_dist_init();
> +    res = gicv3_its_init();
> +    if ( res )
> +        printk(XENLOG_WARNING "GICv3: ITS: initialization failed: %d\n", res);

I would have expected a panic here, because the ITS subsystem could be
half-initialized and it is not safe to continue. Something like the sketch
after the quoted hunk is what I have in mind.

>      res = gicv3_cpu_init();
>      gicv3_hyp_init();
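A minimal sketch (panic() takes a printf-style format, so the existing
message can be reused as-is):

    res = gicv3_its_init();
    if ( res )
        panic("GICv3: ITS: initialization failed: %d\n", res);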
Cheers,

--
Julien Grall