Re: [Xen-devel] [PATCH] x86: use alternatives for FS/GS base accesses
On 26/09/18 07:43, Jan Beulich wrote:
>>>> On 25.09.18 at 18:52, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 29/08/18 17:03, Jan Beulich wrote:
>>> Eliminates a couple of branches in particular from the context
>>> switch path.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>
>> I've already expressed a disinclination to this patch, because it
>> looks like a micro-optimisation which won't actually affect
>> measurable performance.  (And as said before, I could be wrong, but
>> I don't think I am...)
>
> IIRC you had indicated you first of all don't like the mix of some
> constructs using alternatives and some not.

Correct.  Consistency (one way or the other) is better overall here.

> Eliminating conditional branches is always a Good Thing (tm), it
> seems to me.

By this reasoning, we should compile Xen with the movfuscator, which
really will get rid of every branch.  Doing so would be utter nonsense,
ergo this claim is false.

> And that's not just for performance (inside Xen we can't assume at
> all that any code path, even the context switch one, is hot enough to
> have any BTB entries allocated),

This is a valid argument for why the proposed change might plausibly be
an improvement.  It is by no means a guarantee that making the change
will result in improved performance.

> but also for ease of looking at the assembly, should there be a need
> to do so.

On the contrary: using alternatives actively obfuscates the logic in
the disassembly.  It is almost impossible to distinguish the individual
fragments, and you rejected my suggestion of rectifying this by putting
symbols into the .altinstructions section.

It also results in harder-to-read C, and poorer code generation in the
surrounding code, as the compiler has to cope with the union of the
entry/exit requirements of all the blocks.  So no - this claim is also
false.

> Overall I think we ought to make much heavier use of alternatives
> patching, so I view this only as a first step towards that.
> Otherwise, btw, why did you not object to e.g.
> clac() / stac() using alternatives patching?  As with so many other
> things, I very much think we should settle on a fundamental approach,
> and then write all code consistently.  If we followed what you say,
> we'd have to limit patching to cases where conditionals can't
> (reasonably) express what we want.

I never said that we shouldn't patch conditionals.  There is a cost to
every use of alternatives, and each decision to use them needs to be
justified by its merits outweighing that cost.  I'm not currently
convinced of the merit/cost trade-off in this case.

>> Have you done some perf analysis since you last posted it?
>
> I don't view this as a worthwhile use of my time, to be honest.  Even
> a non-measurable improvement is an improvement.  I'd understand your
> objection if there was a fair reason to be afraid of worse
> performance as a result of this change.

So you're submitting a performance patch (which you admit might have no
measurable improvement) based on logic which I've called into question,
and furthermore you expect me to ack it based on your untested opinion
that "it's an improvement"?  Do you think that repeating myself is a
worthwhile use of my time?

I'm afraid that I'm going to be very blunt now.

What matters, performance-wise, is net performance in common workloads,
and avoiding catastrophic corner cases.  This is a macro problem, not a
micro problem, and in my opinion you are demonstrating repeated poor
judgement in this regard.  In particular, it is simply not true that
improving the micro-performance of a block necessarily increases the
overall performance.  To cover some examples so far this year...

This patch still hasn't addressed the concerns about sh[lr]d, and the
resulting competition for execution resources on AMD Fam15h/16h
systems.

"x86: enable interrupts earlier with XPTI disabled" was objected to by
me on the basis of the increased complexity of following the code,
rather than any performance consideration.
A contributory factor was that I couldn't see any reason why it would
make any performance difference.  When Juergen eventually measured it,
the results said the performance was worse.  (It might be interesting
to work out why it was worse overall, because it's definitely not
obvious, but I suspect we all have more important work to do.)

"x86/xsave: prefer eager clearing of state over eager restoring" is
basic statistics.  In this case, worrying about the theoretical
long-term trend is having a material performance impact (in Intel's
case, 8%) on current users, and I do intend to make Xen fully eager
(benefiting all hardware) when I've confirmed what I suspect to be true
on the AMD side of things.  When all the major OSes and hypervisors are
fully eager, and when most hardware you can buy today is specifically
optimised for this configuration, Xen being different hurts only
ourselves.

"x86: use PDEP/PEXT for maddr/direct-map-offset conversion when
available" neglects the cache bloat of having 255 copies of the stub,
and the pipeline stall from mixing legacy and VEX-encoded SSE
instructions.  Both of these (irrespective of other aspects) have a
very real chance of making the overall performance worse rather than
better.

All of these are very real potential problems, which may or may not be
an issue in practice.  You're certainly not going to know without
testing your patch, so no - I'm not going to simply accept patches on
your blind assertion that they are better in one way or another - I'd
be failing in my responsibility as a maintainer if I were to do so.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel