Re: [XEN PATCH v3 5/5] xen/arm: ffa: support notification
On Fri, Apr 26, 2024 at 2:41 PM Bertrand Marquis <Bertrand.Marquis@xxxxxxx> wrote:
>
> Hi Jens,
>
> > On 26 Apr 2024, at 14:32, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
> >
> > Hi Bertrand,
> >
> > On Fri, Apr 26, 2024 at 2:19 PM Bertrand Marquis
> > <Bertrand.Marquis@xxxxxxx> wrote:
> >>
> >> Hi Jens,
> >>
> >>> On 26 Apr 2024, at 14:11, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
> >>>
> >>> Hi Bertrand,
> >>>
> >>> On Fri, Apr 26, 2024 at 11:20 AM Bertrand Marquis
> >>> <Bertrand.Marquis@xxxxxxx> wrote:
> >>>>
> >>>> Hi Jens,
> >>>>
> >>>>> On 26 Apr 2024, at 10:47, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
> >>>>>
> >>>>> [...]
> >>>>> +struct notif_irq_info {
> >>>>> +    unsigned int irq;
> >>>>> +    int ret;
> >>>>> +    struct irqaction *action;
> >>>>> +};
> >>>>> +
> >>>>> +static void notif_irq_enable(void *info)
> >>>>> +{
> >>>>> +    struct notif_irq_info *irq_info = info;
> >>>>> +
> >>>>> +    irq_info->ret = setup_irq(irq_info->irq, 0, irq_info->action);
> >>>>> +    if ( irq_info->ret )
> >>>>> +        printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
> >>>>> +               irq_info->irq, irq_info->ret);
> >>>>> +}
> >>>>> +
> >>>>> +void ffa_notif_init(void)
> >>>>> +{
> >>>>> +    const struct arm_smccc_1_2_regs arg = {
> >>>>> +        .a0 = FFA_FEATURES,
> >>>>> +        .a1 = FFA_FEATURE_SCHEDULE_RECV_INTR,
> >>>>> +    };
> >>>>> +    struct notif_irq_info irq_info = { };
> >>>>> +    struct arm_smccc_1_2_regs resp;
> >>>>> +    unsigned int cpu;
> >>>>> +
> >>>>> +    arm_smccc_1_2_smc(&arg, &resp);
> >>>>> +    if ( resp.a0 != FFA_SUCCESS_32 )
> >>>>> +        return;
> >>>>> +
> >>>>> +    irq_info.irq = resp.a2;
> >>>>> +    if ( irq_info.irq < GIC_SGI_STATIC_MAX || irq_info.irq >= NR_GIC_SGI )
> >>>>> +    {
> >>>>> +        printk(XENLOG_ERR "ffa: notification initialization failed: conflicting SGI %u\n",
> >>>>> +               irq_info.irq);
> >>>>> +        return;
> >>>>> +    }
> >>>>> +
> >>>>> +    /*
> >>>>> +     * SGIs are per-CPU so we must enable the IRQ on each CPU. We use an
> >>>>> +     * IPI to call notif_irq_enable() on each CPU including the current
> >>>>> +     * CPU. The struct irqaction is preallocated since we can't allocate
> >>>>> +     * memory while in interrupt context.
> >>>>> +     */
> >>>>> +    for_each_online_cpu(cpu)
> >>>>> +    {
> >>>>> +        irq_info.action = xmalloc(struct irqaction);
> >>>>
> >>>> You allocate one action per cpu but you have only one action pointer
> >>>> in your structure, which means you will overwrite the previously
> >>>> allocated one and lose track of it.
> >>>>
> >>>> You should have a table of actions in your structure instead, unless
> >>>> one action is enough and can be reused on all cpus, in which case you
> >>>> should move the allocation out of your loop.
> >>>
> >>> That shouldn't be needed because this is done in sequence, only one
> >>> CPU at a time.
> >>
> >> Sorry, I do not understand here.
> >> You have a loop over each online cpu, and on each iteration you assign
> >> irq_info.action a newly allocated struct irqaction, so in practice you
> >> overwrite on cpu 2 the action that was allocated for cpu 1.
> >>
> >> What do you mean by sequence here?
> >
> > My understanding is that for_each_online_cpu(cpu) loops over each cpu,
> > one at a time. The call
> >     on_selected_cpus(cpumask_of(cpu), notif_irq_enable, &irq_info, 1);
> > returns after notif_irq_enable() has returned on the CPU in question,
> > thanks to the "1" (wait) parameter. So once it has returned, &irq_info
> > isn't used by the other CPU any longer and we can assign a new value
> > to irq_info.action.
>
> Right, so you lose track of what was assigned and are not able to free it.
> If that is wanted, then why save it in irq_info.action, since you will
> only have there the one allocated for the last online cpu?

Wouldn't release_irq() free it? An error here is unlikely, but we may be
left with a few installed struct irqaction if it occurs. I can add a more
elaborate error path if it's worth the added complexity.

Thanks,
Jens