Re: [Xen-devel] [for-4.7] xen/arm: Force broadcast of TLB and instruction cache maintenance instructions
On Mon, 25 Apr 2016, Julien Grall wrote:
> (CC Steve and Andre)
>
> Hi Stefano,
>
> On 25/04/16 11:45, Stefano Stabellini wrote:
> > On Mon, 18 Apr 2016, Julien Grall wrote:
> > > A UP guest usually uses TLB instructions that only flush the local CPU.
> > > The TLB flush won't be broadcast across all the CPUs within the same
> > > inner-shareable domain.
> > >
> > > When the vCPU is migrated between different CPUs, it may be rescheduled
> > > on a previous CPU where the TLB has not been flushed. The TLB may
> > > contain stale entries, which will result in a VA being translated
> > > incorrectly to an IPA, or even cause TLB conflicts.
> > >
> > > To avoid such a situation, always set HCR_EL2.FB, which will force the
> > > broadcast of TLB and instruction cache maintenance instructions.
> > >
> > > Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
> >
> > Well spotted!
> >
> > Julien, I was wondering whether we could avoid HCR_FB by manually
> > doing a flush in ctxt_switch_from or context_switch. I am suggesting
> > this because I have the feeling that enabling HCR_FB would have a
> > negative performance impact.
>
> The performance impact will depend on how much the guest makes use of
> local flush instructions.
>
> When HCR.FB is set, the hardware will broadcast the flush (TLBs,
> instruction cache or branch predictor) to all the CPUs in the same
> inner-shareable domain, i.e. any local flush instruction will be upgraded
> to inner-shareable.
>
> The ARM64 Linux kernel is SMP-aware (there is no possibility to build it
> UP-only), so most of its flush instructions are inner-shareable. The local
> flushes are limited to boot (1 flush per CPU) and to when the ASID of a
> task changes. Therefore the impact of setting HCR.FB for an ARM64 Linux
> guest would be very limited.
>
> The ARM32 Linux kernel can be built either SMP-aware or UP-only. The
> former will make very limited use of those instructions. The latter will
> obviously use only local flush instructions. Therefore, there will be an
> impact when setting HCR.FB for a UP-only kernel guest.
>
> I have looked quickly at FreeBSD (both ARM64 and ARM32). An SMP-aware
> kernel will mostly make use of inner-shareable flush instructions. A
> UP-only kernel will only make use of local flush instructions.
>
> However, nothing prevents an SMP-aware kernel from making more frequent
> use of local flush instructions.
>
> In the case that HCR.FB is not set, Xen would need to:
>  * Flush all the TLBs for the VMID associated with this domain
>  * Invalidate all the entries from the branch predictors (only for AArch32)
>  * Invalidate all the entries from the instruction cache
> Whilst you suggested doing it at every domain context switch, this is only
> necessary when the vCPU migrates between 2 physical CPUs.
>
> In any case, not setting HCR.FB will have a big impact on any SMP-aware
> Linux/FreeBSD kernel, as any context switch (or migration) will nuke the
> TLBs, the instruction cache and the branch predictor. That would be
> extremely bad.

I think we should be able to perform the TLB flushing only for domains that
have only 1 vCPU, which should limit the negative effects of the change.

> The impact of HCR.FB on a UP-only kernel would need to be benchmarked, but
> to be honest, I expect most of the kernels which run in a guest to be
> SMP-aware.
>
> So setting HCR.FB seems to be the best solution. We can revisit it later,
> if we notice a negative performance impact.
>
> Cheers,

I agree that setting HCR.FB is a very simple solution to the problem. It is
hard to argue against that :-)
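As a point of reference, the change being discussed boils down to OR-ing one
more bit into the HCR_EL2 value the hypervisor programs. The sketch below is
purely illustrative: the function names and the other flags are placeholders
rather than Xen identifiers; only the FB bit itself (bit 9 of HCR_EL2) is
what the patch is about.

/*
 * Minimal illustrative sketch, not the actual Xen patch.  HCR_EL2.FB
 * ("Force Broadcast") is bit 9 of HCR_EL2; when set, the guest's local
 * TLB, instruction cache and branch predictor maintenance operations are
 * upgraded by the hardware to the Inner Shareable domain.
 */
#include <stdint.h>

#define HCR_VM  (UINT64_C(1) << 0)  /* enable stage-2 translation        */
#define HCR_FB  (UINT64_C(1) << 9)  /* force broadcast of TLB/IC/BP ops  */

static inline void write_hcr_el2(uint64_t val)
{
    asm volatile("msr hcr_el2, %0; isb" : : "r" (val));
}

/* Hypothetical per-CPU hook run before any guest is entered. */
static void setup_guest_hcr(void)
{
    uint64_t hcr = HCR_VM /* | ...whatever trap bits are already needed */;

    /* The fix under discussion: upgrade guest-local maintenance ops. */
    hcr |= HCR_FB;

    write_hcr_el2(hcr);
}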
It would be nice at some point to write a prototype of the TLB flushing at
vCPU migration and give it a try. For now, could you please summarize your
thoughts on this in the commit message, so that a couple of years down the
line we can still find them?
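In case someone does pick that prototype up later, the rough shape it could
take is sketched below. This is only an illustration under stated
assumptions: the struct fields, the hook name and the single-vCPU check are
invented for the sketch and are not existing Xen code.

/*
 * Assumes it runs at EL2, after VTTBR_EL2 (and therefore the guest's
 * VMID) has been restored for the incoming vCPU, so the TLBI below only
 * affects that guest's entries on this physical CPU.
 */
struct sketch_vcpu {
    unsigned int last_pcpu;   /* pCPU this vCPU last ran on (assumed)    */
    unsigned int max_vcpus;   /* number of vCPUs in the guest (assumed)  */
};

static void flush_guest_tlb_local(void)
{
    asm volatile(
        "dsb  sy          \n"   /* order prior memory/page-table updates  */
        "tlbi vmalls12e1  \n"   /* stage 1+2 entries, current VMID, local  */
        "dsb  sy          \n"
        "ic   iallu       \n"   /* local instruction cache invalidate      */
        /* AArch32 would also need BPIALL for the branch predictor;
         * AArch64 has no generic equivalent. */
        "dsb  sy          \n"
        "isb              \n"
        ::: "memory");
}

static void flush_if_migrated(struct sketch_vcpu *v, unsigned int this_pcpu)
{
    /* Per the discussion above: only 1-vCPU guests issue purely local
     * flushes, so restrict the extra work to them. */
    if ( v->max_vcpus != 1 )
        return;

    if ( v->last_pcpu != this_pcpu )
        flush_guest_tlb_local();

    v->last_pcpu = this_pcpu;
}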
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel