Re: [PATCH v2] xen/arm: implement GICD_I[S/C]ACTIVER reads
On Mon, 30 Mar 2020, Julien Grall wrote:
> Hi Stefano,
>
> On 30/03/2020 17:35, Stefano Stabellini wrote:
> > On Sat, 28 Mar 2020, Julien Grall wrote:
> > > Hi Stefano,
> > >
> > > On 27/03/2020 02:34, Stefano Stabellini wrote:
> > > > This is a simple implementation of GICD_ICACTIVER / GICD_ISACTIVER
> > > > reads. It doesn't take into account the latest state of interrupts
> > > > on other vCPUs. Only the current vCPU is up-to-date. A full
> > > > solution is not possible because it would require synchronization
> > > > among all vCPUs, which would be very expensive in terms of latency.
> > >
> > > Your sentence suggests you have numbers showing that correctly
> > > emulating the registers would be too slow. Mind sharing them?
> >
> > No, I don't have any numbers. Would you prefer a different wording or
> > a better explanation? I also realized there is a typo in there
> > (or/of).
>
> Let me start with this: I think correctness is more important than
> speed. So I would have expected your commit message to contain some
> facts on why synchronization would be slow and why this is a problem.
>
> To give you a concrete example, the implementation of the set/way
> instructions is really slow (it can take a few seconds depending on the
> setup). However, this was fine because not implementing them correctly
> would have had a greater impact on the guest (corruption) and they are
> not used often.
>
> I don't think the performance in our case will be in the same order of
> magnitude. It is most likely to be in the range of milliseconds (if not
> less), which I think is acceptable for emulation (particularly for the
> vGIC) and the current uses.

Writing down on the mailing list some of what we discussed today.

Correctness is not just about compliance with a specification; it is also
about not breaking guests. Introducing latency in the range of
milliseconds, or even hundreds of microseconds, would break any
latency-sensitive workload. We don't have numbers, so we don't know for
certain what effect your suggestion would have. It would be interesting to
get those numbers, and I'll add running the experiments you suggested to
my TODO list, but I'll put it on the back-burner (from a Xilinx
perspective it is low priority as no customers are affected).

> So let's take a step back and look at how we could implement
> ISACTIVER/ICACTIVER correctly.
>
> The only thing we need is a snapshot of the interrupt state at a given
> point. I originally thought it would be necessary to use domain_pause(),
> which is quite heavy, but I think we only need each vCPU to enter Xen
> and sync the state of its LRs.
>
> Roughly the code would look like this (not optimized):
>
>     for_each_vcpu ( d, v )
>     {
>         if ( v->is_running )
>             smp_call_function(do_nothing, v->cpu);
>     }
>
>     /* Read the state */
>
> A few remarks:
>  * The function do_nothing() is basically a NOP.
>  * I am suggesting smp_call_function() rather than
>    smp_send_event_check_cpu() because we need to know when the vCPU has
>    synchronized the LRs. As the function do_nothing() will be called
>    afterwards, we know that the snapshot of the LRs has been taken.
>  * It would be possible to do everything from one vCPU.
>  * We can possibly optimize the SGIs/PPIs case.
>
> This is still not perfect, but I don't think the performance would be
> that bad.
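
To make the discussion more concrete, below is a minimal sketch of the
snapshot approach in the spirit of the pseudo-code above. It leans on
helpers that exist in the tree today (for_each_vcpu(), on_selected_cpus(),
irq_to_pending(), GIC_IRQ_GUEST_ACTIVE); the names do_nothing() and
vgic_snapshot_active(), and the exact integration point, are made up for
illustration, so treat this as an assumption-laden sketch rather than a
tested patch:

    /*
     * SKETCH ONLY -- not a tested patch. do_nothing() and
     * vgic_snapshot_active() are illustrative names.
     */

    static void do_nothing(void *unused)
    {
        /*
         * Intentionally empty: the useful side effect is the trap
         * itself. Entering Xen makes the target pCPU save its LRs, so
         * the software view of the interrupt state becomes current.
         */
    }

    /*
     * Compute one 32-bit GICD_ISACTIVER<n> value, covering IRQs
     * [first_irq, first_irq + 31].
     */
    static uint32_t vgic_snapshot_active(struct domain *d,
                                         unsigned int first_irq)
    {
        struct vcpu *v;
        uint32_t mask = 0;
        unsigned int i;

        /*
         * Kick every running vCPU and wait (wait=1) until each one has
         * entered Xen and synced its LRs.
         */
        for_each_vcpu ( d, v )
        {
            if ( v->is_running && v != current )
                on_selected_cpus(cpumask_of(v->processor), do_nothing,
                                 NULL, 1);
        }

        /*
         * Read the saved state. For simplicity this goes through
         * vcpu[0]; PPIs/SGIs (IRQ < 32) are banked per vCPU and would
         * need the per-vCPU view instead.
         */
        for ( i = 0; i < 32; i++ )
        {
            struct pending_irq *p = irq_to_pending(d->vcpu[0],
                                                   first_irq + i);

            if ( p && test_bit(GIC_IRQ_GUEST_ACTIVE, &p->status) )
                mask |= 1U << i;
        }

        return mask;
    }

Passing wait=1 to on_selected_cpus() is what provides the ordering
guarantee: the call only returns once do_nothing() has run on the target
pCPU, i.e. once that pCPU has trapped into Xen and synced its LRs. Locking
and preemption concerns (for instance, not issuing this with interrupts
disabled or while holding locks the IPI path might need) are deliberately
glossed over here.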