Re: [Xen-devel] [PATCH v4 3/5] xen/arm: vgic: Optimize the way to store the target vCPU in the rank
On 23/10/15 11:14, Ian Campbell wrote:
> On Fri, 2015-10-23 at 11:01 +0100, Julien Grall wrote:
>> On 23/10/15 10:34, Ian Campbell wrote:
>>> On Thu, 2015-10-22 at 18:15 +0100, Julien Grall wrote:
>>>> Hi Ian,
>>>>
>>>> On 22/10/15 17:17, Ian Campbell wrote:
>>>>> On Mon, 2015-10-12 at 15:22 +0100, Julien Grall wrote:
>>>>>> [...]
>>>>>>         /* Only migrate the vIRQ if the target vCPU has changed */
>>>>>>         if ( new_target != old_target )
>>>>>>         {
>>>>>> +            unsigned int virq = rank->index * NR_INTERRUPT_PER_RANK + offset;
>>>>>
>>>>> FWIW this was the value of offset before it was shifted + masked, I
>>>>> think. Could you not just save it up top and remember it?
>>>>
>>>> In fact, virq is already correctly set before the loop (see patch #2):
>>>>
>>>>     virq = rank->index * NR_INTERRUPT_PER_RANK + offset;
>>>>
>>>> The variable is incremented in the for loop, so I simply forgot to
>>>> drop this line when I did the split.
>>>>
>>>> Note that it's not possible to use offset directly, because for a
>>>> byte access it will point to the byte modified and not the base
>>>> address of the register.
>>>>
>>>> I could use a mask instead, but I find this solution clearer.
>>>
>>> But per the above, what is actually going to happen is that you drop
>>> this change?
>>
>> As I said, the introduction of virq within this patch is a mistake.
>> Patch #2 already sets virq before the loop:
>
> I thought that was what you said, but then your final line seemed to
> contradict that by implying that you wanted to keep virq here (the
> implication of saying it is clearer to you).

Sorry, I was speaking about using the unmodified offset, i.e. something
like:

    virq = offset & 0x3;

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
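[Editor's note: a minimal standalone sketch of the rank/offset mapping
discussed above. It assumes NR_INTERRUPT_PER_RANK == 32 and the
ITARGETSR one-byte-per-IRQ layout; struct vgic_irq_rank and
rank_to_virq here are illustrative stand-ins, not Xen's actual
definitions.]

    /*
     * Illustrative sketch only (not the Xen source): how a rank-based
     * vGIC model can map an ITARGETSR byte offset to a virtual IRQ
     * number, and why a raw byte-access offset needs masking to
     * recover the register base.
     */
    #include <stdio.h>

    #define NR_INTERRUPT_PER_RANK 32   /* assumed: 32 vIRQs per rank */

    struct vgic_irq_rank {
        unsigned int index;            /* rank number, i.e. virq / 32 */
    };

    /*
     * 'offset' is the vIRQ offset within the rank. For ITARGETSR,
     * one byte describes one IRQ, so the byte offset into the rank's
     * register block equals the IRQ offset within the rank.
     */
    static unsigned int rank_to_virq(const struct vgic_irq_rank *rank,
                                     unsigned int offset)
    {
        return rank->index * NR_INTERRUPT_PER_RANK + offset;
    }

    int main(void)
    {
        struct vgic_irq_rank rank = { .index = 1 };

        /* A byte access hitting the 3rd byte of the 2nd ITARGETSR
         * in the rank points at byte offset 6, not at the register
         * base. */
        unsigned int offset = 6;

        printf("virq               = %u\n", rank_to_virq(&rank, offset)); /* 38 */
        printf("register base off  = %u\n", offset & ~0x3u);              /* 4  */
        printf("byte within reg    = %u\n", offset & 0x3u);               /* 2  */
        return 0;
    }

This also makes the thread's point concrete: offset & ~0x3 recovers the
32-bit register base, while offset & 0x3 isolates the byte (and hence
the single vIRQ) that a byte-wide access actually touched.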