Re: [Xen-devel] [RFC PATCH 03/24] ARM: GICv3 ITS: allocate device and collection table
Hi,

On 26/10/16 23:57, Stefano Stabellini wrote:
> On Wed, 28 Sep 2016, Andre Przywara wrote:
>> Each ITS maps a pair of a DeviceID (usually the PCI b/d/f triplet) and
>> an EventID (the MSI payload or interrupt ID) to a pair of LPI number
>> and collection ID, which points to the target CPU.
>> This mapping is stored in the device and collection tables, which software
>> has to provide for the ITS to use.
>> Allocate the required memory and hand it the ITS.
>> We limit the number of devices to cover 4 PCI busses for now.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
>> ---
>>  xen/arch/arm/gic-its.c        | 114 ++++++++++++++++++++++++++++++++++++++++++
>>  xen/arch/arm/gic-v3.c         |   5 ++
>>  xen/include/asm-arm/gic-its.h |  49 +++++++++++++++++-
>>  3 files changed, 167 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/gic-its.c b/xen/arch/arm/gic-its.c
>> index b52dff3..40238a2 100644
>> --- a/xen/arch/arm/gic-its.c
>> +++ b/xen/arch/arm/gic-its.c
>> @@ -21,6 +21,7 @@
>>  #include <xen/device_tree.h>
>>  #include <xen/libfdt/libfdt.h>
>>  #include <asm/p2m.h>
>> +#include <asm/io.h>
>>  #include <asm/gic.h>
>>  #include <asm/gic_v3_defs.h>
>>  #include <asm/gic-its.h>
>> @@ -38,6 +39,119 @@ static DEFINE_PER_CPU(void *, pending_table);
>>      min_t(unsigned int, lpi_data.host_lpi_bits, CONFIG_HOST_LPI_BITS)
>>  #define MAX_HOST_LPIS   (BIT(MAX_HOST_LPI_BITS) - 8192)
>>
>> +#define BASER_ATTR_MASK                                          \
>> +        ((0x3UL << GITS_BASER_SHAREABILITY_SHIFT)              | \
>> +         (0x7UL << GITS_BASER_OUTER_CACHEABILITY_SHIFT)        | \
>> +         (0x7UL << GITS_BASER_INNER_CACHEABILITY_SHIFT))
>> +#define BASER_RO_MASK   (GENMASK(52, 48) | GENMASK(58, 56))
>> +
>> +static uint64_t encode_phys_addr(paddr_t addr, int page_bits)
>> +{
>> +    uint64_t ret;
>> +
>> +    if ( page_bits < 16)
>> +        return (uint64_t)addr & GENMASK(47, page_bits);
>> +
>> +    ret = addr & GENMASK(47, 16);
>> +    return ret | (addr & GENMASK(51, 48)) >> (48 - 12);
>> +}
>> +
>> +static int gicv3_map_baser(void __iomem *basereg, uint64_t regc, int nr_items)
>
> Shouldn't this be called its_map_baser?

Yes, the BASER registers are an ITS property.

>> +{
>> +    uint64_t attr;
>> +    int entry_size = (regc >> GITS_BASER_ENTRY_SIZE_SHIFT) & 0x1f;
>
> The spec says "This field is read-only and specifies the number of
> bytes per entry, minus one." Do we need to increment it by 1?

Mmh, looks so. I guess it worked because the number gets dwarfed by the
page size round up below.

>> +    int pagesz;
>> +    int order;
>> +    void *buffer = NULL;
>> +
>> +    attr  = GIC_BASER_InnerShareable << GITS_BASER_SHAREABILITY_SHIFT;
>> +    attr |= GIC_BASER_CACHE_SameAsInner << GITS_BASER_OUTER_CACHEABILITY_SHIFT;
>> +    attr |= GIC_BASER_CACHE_RaWaWb << GITS_BASER_INNER_CACHEABILITY_SHIFT;
>> +
>> +    /*
>> +     * Loop over the page sizes (4K, 16K, 64K) to find out what the host
>> +     * supports.
>> +     */
>
> Is this really the best way to do it? Can't we assume ITS supports 4K,
> given that Xen requires 4K pages at the moment?

The ITS pages are totally independent from the core's MMU page size.
So the spec says: "If the GIC implementation supports only a single,
fixed page size, this field might be RO."
I take it that this means that the only implemented page size could be
64K, for instance. And in fact the KVM ITS emulation advertises exactly
this to a guest.

> Is it actually possible to find hardware that supports 4K but with an
> ITS that only supports 64K or 16K pages? It seems insane to me.
> Otherwise can't we probe the page size somehow?

We can probe by writing and seeing if it sticks - that's what the code
does. Is it really so horrible? I agree it's nasty, but isn't it
basically a loop around the code needed anyway?
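Condensed to its core, the probing is just a write followed by a read-back.
A minimal sketch of that idea (illustration only: its_probe_baser_pagesz()
is an invented helper, and the real patch writes the complete table
descriptor rather than only the page size field):

/*
 * Illustration only: probe which ITS page size the hardware accepts by
 * writing each candidate and reading the register back.  Returns the
 * accepted encoding (0 = 4K, 1 = 16K, 2 = 64K) or -EINVAL.
 */
static int its_probe_baser_pagesz(void __iomem *basereg)
{
    int pagesz;

    for ( pagesz = 0; pagesz < 3; pagesz++ )
    {
        uint64_t reg = (uint64_t)pagesz << GITS_BASER_PAGE_SIZE_SHIFT;

        writeq_relaxed(reg, basereg);
        reg = readq_relaxed(basereg);

        /* An unsupported size does not stick; the field reads back differently. */
        if ( ((reg >> GITS_BASER_PAGE_SIZE_SHIFT) & 0x3) == pagesz )
            return pagesz;
    }

    return -EINVAL;
}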
Yes to the rest of the comments.

Cheers,
Andre.

>> +    for (pagesz = 0; pagesz < 3; pagesz++)
>> +    {
>> +        uint64_t reg;
>> +        int nr_bytes;
>> +
>> +        nr_bytes = ROUNDUP(nr_items * entry_size, BIT(pagesz * 2 + 12));
>> +        order = get_order_from_bytes(nr_bytes);
>> +
>> +        if ( !buffer )
>> +            buffer = alloc_xenheap_pages(order, 0);
>> +        if ( !buffer )
>> +            return -ENOMEM;
>> +
>> +        reg  = attr;
>> +        reg |= (pagesz << GITS_BASER_PAGE_SIZE_SHIFT);
>> +        reg |= nr_bytes >> (pagesz * 2 + 12);
>> +        reg |= regc & BASER_RO_MASK;
>> +        reg |= GITS_BASER_VALID;
>> +        reg |= encode_phys_addr(virt_to_maddr(buffer), pagesz * 2 + 12);
>> +
>> +        writeq_relaxed(reg, basereg);
>> +        regc = readl_relaxed(basereg);
>> +
>> +        /* The host didn't like our attributes, just use what it returned. */
>> +        if ( (regc & BASER_ATTR_MASK) != attr )
>> +            attr = regc & BASER_ATTR_MASK;
>> +
>> +        /* If the host accepted our page size, we are done. */
>> +        if ( (reg & (3UL << GITS_BASER_PAGE_SIZE_SHIFT)) == pagesz )
>> +            return 0;
>> +
>> +        /* Check whether our buffer is aligned to the next page size already. */
>> +        if ( !(virt_to_maddr(buffer) & (BIT(pagesz * 2 + 12 + 2) - 1)) )
>> +        {
>> +            free_xenheap_pages(buffer, order);
>> +            buffer = NULL;
>> +        }
>> +    }
>> +
>> +    if ( buffer )
>> +        free_xenheap_pages(buffer, order);
>> +
>> +    return -EINVAL;
>> +}
>> +
>> +int gicv3_its_init(struct host_its *hw_its)
>> +{
>> +    uint64_t reg;
>> +    int i;
>> +
>> +    hw_its->its_base = ioremap_nocache(hw_its->addr, hw_its->size);
>> +    if ( !hw_its->its_base )
>> +        return -ENOMEM;
>> +
>> +    for (i = 0; i < 8; i++)
>
> Code style. Unfortunately we don't have a script to check, but please
> refer to CODING_STYLE. I'd prefer if every number was #define'ed,
> including `8' (something like GITS_BASER_MAX).
>
>> +    {
>> +        void __iomem *basereg = hw_its->its_base + GITS_BASER0 + i * 8;
>> +        int type;
>> +
>> +        reg = readq_relaxed(basereg);
>> +        type = (reg >> 56) & 0x7;
>
> Please #define 56 and 0x7
>
>> +        switch ( type )
>> +        {
>> +        case GITS_BASER_TYPE_NONE:
>> +            continue;
>> +        case GITS_BASER_TYPE_DEVICE:
>> +            /* TODO: find some better way of limiting the number of devices */
>> +            gicv3_map_baser(basereg, reg, 1024);
>
> A hardcoded max value might be OK, but please #define it.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
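For reference, the cleanups asked for in the review could look roughly like
the sketch below; the names GITS_BASER_NR, GITS_BASER_TYPE_SHIFT,
GITS_BASER_TYPE_MASK, GITS_DEV_TABLE_ENTRIES and its_baser_entry_size() are
illustrative guesses, not identifiers taken from the series:

/* Illustrative names only; the real series may choose different ones. */
#define GITS_BASER_NR           8       /* GITS_BASER0 .. GITS_BASER7 */
#define GITS_BASER_TYPE_SHIFT   56
#define GITS_BASER_TYPE_MASK    0x7
#define GITS_DEV_TABLE_ENTRIES  1024    /* covers 4 PCI busses for now */

/* The BASER entry size field encodes "bytes per entry, minus one". */
static unsigned int its_baser_entry_size(uint64_t regc)
{
    return ((regc >> GITS_BASER_ENTRY_SIZE_SHIFT) & 0x1f) + 1;
}

static unsigned int its_baser_type(uint64_t regc)
{
    return (regc >> GITS_BASER_TYPE_SHIFT) & GITS_BASER_TYPE_MASK;
}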