Re: [Xen-devel] [PATCH v6 10/31] xen/arm: ITS: Introduce gic_is_lpi helper function
On 01/09/15 10:02, Vijay Kilari wrote:
> On Mon, Aug 31, 2015 at 10:19 PM, Julien Grall <julien.grall@xxxxxxxxxx> wrote:
>> Hi Vijay,
>>
>> On 31/08/2015 12:06, vijay.kilari@xxxxxxxxx wrote:
>>>
>>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>>> index 758678d..2199963 100644
>>> --- a/xen/arch/arm/gic.c
>>> +++ b/xen/arch/arm/gic.c
>>> @@ -62,6 +62,15 @@ enum gic_version gic_hw_version(void)
>>>      return gic_hw_ops->info->hw_version;
>>>  }
>>>
>>> +#ifdef HAS_GICV3
>>> +bool_t gic_is_lpi(unsigned int irq)
>>> +{
>>> +    return (irq >= FIRST_GIC_LPI && irq < (1 << gic_hw_ops->info->nr_id_bits));
>>
>> It would make more sense to calculate the number of IDs supported at boot
>> time rather than recalculate it every time this function is called (i.e.
>> very often).
>>
>>> +}
>>> +#else
>>> +bool_t gic_is_lpi(unsigned int irq) { return 0; }
>>> +#endif
>>
>> I thought I'd already said this on a previous version: I would like to
>> avoid seeing any #ifdef HAS_GICV3 in the generic code, including the
>> interrupt framework.
>>
>> In this case, I don't see much benefit in having a specific case for
>> platforms not using GICv3 (i.e. ARM32).
>
> You mean, let gic_is_lpi() be implemented for both ARM64/32 and let this
> function always fail for ARM32?

Yes. You already implement it as always failing, but behind an #ifdef. Although I don't think this is worth doing, as it's more difficult to maintain.

> The other option is to implement a callback to the hw drivers (gicv3 and
> gicv2). But the overhead of the callback should also be considered.

That was the implementation you suggested in v5, and I wasn't in favour of it.

BTW, I suggested creating a field nr_lpis, but you decided to store the number of bits supported instead. Why?

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel