Re: [PATCH v3 00/13] xen: drop hypercall function tables
On 08.03.22 14:42, Jan Beulich wrote:
> On 08.03.2022 13:56, Juergen Gross wrote:
>> On 08.03.22 13:50, Jan Beulich wrote:
>>> On 08.03.2022 09:39, Juergen Gross wrote:
>>>> On 08.03.22 09:34, Jan Beulich wrote:
>>>>> On 08.12.2021 16:55, Juergen Gross wrote:
>>>>>> In order to avoid indirect function calls on the hypercall path as
>>>>>> much as possible this series is removing the hypercall function
>>>>>> tables and is replacing the hypercall handler calls via the
>>>>>> function array by automatically generated call macros.
>>>>>>
>>>>>> Another by-product of generating the call macros is the automatic
>>>>>> generation of the hypercall handler prototypes from the same data
>>>>>> base which is used to generate the macros. This has the additional
>>>>>> advantage of using type-safe calls of the handlers and of ensuring
>>>>>> that related handlers (e.g. PV and HVM ones) share the same
>>>>>> prototypes.
>>>>>>
>>>>>> A very brief performance test (parallel build of the Xen
>>>>>> hypervisor in a 6 vcpu guest) showed a very slim improvement (less
>>>>>> than 1%) of the performance with the patches applied. The test was
>>>>>> performed using a PV and a PVH guest.
>>>>>>
>>>>>> Changes in V2:
>>>>>> - new patches 6, 14, 15
>>>>>> - patch 7: support hypercall priorities for faster code
>>>>>> - comments addressed
>>>>>>
>>>>>> Changes in V3:
>>>>>> - patches 1 and 4 removed as already applied
>>>>>> - comments addressed
>>>>>>
>>>>>> Juergen Gross (13):
>>>>>>   xen: move do_vcpu_op() to arch specific code
>>>>>>   xen: harmonize return types of hypercall handlers
>>>>>>   xen: don't include asm/hypercall.h from C sources
>>>>>>   xen: include compat/platform.h from hypercall.h
>>>>>>   xen: generate hypercall interface related code
>>>>>>   xen: use generated prototypes for hypercall handlers
>>>>>>   x86/pv-shim: don't modify hypercall table
>>>>>>   xen/x86: don't use hypercall table for calling compat hypercalls
>>>>>>   xen/x86: call hypercall handlers via generated macro
>>>>>>   xen/arm: call hypercall handlers via generated macro
>>>>>>   xen/x86: add hypercall performance counters for hvm, correct pv
>>>>>>   xen: drop calls_to_multicall performance counter
>>>>>>   tools/xenperf: update hypercall names
>>>>>
>>>>> As it's pretty certain now that parts of this which didn't go in
>>>>> yet will need re-basing, I'm going to drop this from my
>>>>> waiting-to-be-acked folder, expecting a v4 instead.
>>>>
>>>> Yes, I was planning to spin that up soon.
>>>>
>>>> The main remaining question is whether we want to switch the return
>>>> type of all hypercalls (or at least the ones common to all archs)
>>>> not requiring to return 64-bit values to "int", as Julien requested.
>>>
>>> After walking through the earlier discussion (Jürgen - thanks for the
>>> link) I'm inclined to say that if Arm wants their return values
>>> limited to 32 bits (with exceptions where needed), so be it. But on
>>> x86 I'd rather not see us change this aspect. Of course I'd much
>>> prefer if architectures didn't diverge in this regard, yet then again
>>> Arm has already diverged in avoiding the compat layer (in this case I
>>> view the divergence as helpful, though, as it avoids unnecessary
>>> headache).
>>
>> How to handle this in common code then? Have a hypercall_ret_t type
>> (exact naming TBD) which is defined as long on x86 and int on Arm? Or
>> use long in the handlers and check the value on the Arm side to be a
>> valid 32-bit signed int (this would be cumbersome for the exceptions,
>> though)?
>
> I was thinking along the lines of hypercall_ret_t, yes, but the
> compiler wouldn't be helping with spotting truncation issues (we can't
> reasonably enable the respective warnings, as they would trigger all
> over the place). If we were to go that route, we'd rely on an initial
> audit and subsequent patch review to spot issues. Therefore, cumbersome
> or not, the checking approach may be the more viable one.
> Then again Julien may have a better plan in mind; I'd anyway expect
> him to supply details on how he thinks such a transition could be done
> safely, as he was the one to request limiting to 32 bits.

In order to have some progress I could just leave the Arm side alone in
my series. It could be added later if a solution has been agreed on.

What do you think?


Juergen
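For readers skimming the cover letter above, a rough, self-contained sketch
of the transformation it describes: an indirect call through a hypercall
function table replaced by a generated, switch-based dispatch macro that
calls each handler directly with its own prototype. All identifiers below
(call_handlers_pv, do_xen_version, do_memory_op, the __HYPERVISOR_* values)
are placeholders chosen for illustration, not the code actually generated by
the series.

#include <stdio.h>

#define __HYPERVISOR_memory_op   12
#define __HYPERVISOR_xen_version 17

/* Example handlers, each with its own, type-safe prototype. */
static long do_xen_version(unsigned int cmd)
{
    return 0x040000 + cmd; /* dummy value */
}

static long do_memory_op(unsigned int cmd, void *arg)
{
    (void)arg;
    return cmd; /* dummy value */
}

/*
 * Old style (sketched): all handlers forced through one common pointer
 * type and called indirectly, e.g.
 *     ret = hypercall_table[num](a1, a2);
 *
 * New style: a (normally auto-generated) macro dispatches via a switch,
 * so each handler is called directly and with matching argument types.
 */
#define call_handlers_pv(num, ret, a1, a2)                          \
    do {                                                            \
        switch ( num )                                              \
        {                                                           \
        case __HYPERVISOR_memory_op:                                \
            (ret) = do_memory_op((unsigned int)(a1), (void *)(a2)); \
            break;                                                  \
        case __HYPERVISOR_xen_version:                              \
            (ret) = do_xen_version((unsigned int)(a1));             \
            break;                                                  \
        default:                                                    \
            (ret) = -38; /* -ENOSYS */                              \
            break;                                                  \
        }                                                           \
    } while ( 0 )

int main(void)
{
    unsigned long a1 = 0, a2 = 0;
    long ret;

    call_handlers_pv(__HYPERVISOR_xen_version, ret, a1, a2);
    printf("xen_version handler returned %ld\n", ret);
    return 0;
}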
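On the return-type question, the two options weighed in the exchange above
might look roughly as follows. hypercall_ret_t is the name floated in the
thread; the CONFIG_ARM guard, hypercall_ret_fits_32bit() and the use of
<limits.h> are assumptions of this sketch only.

#include <limits.h>
#include <stdbool.h>

/* Option 1: per-arch return type for the common hypercall handlers. */
#ifdef CONFIG_ARM
typedef int hypercall_ret_t;  /* Arm: 32-bit returns (with exceptions) */
#else
typedef long hypercall_ret_t; /* x86: keep full-width return values */
#endif

/*
 * Option 2: keep "long" in the handlers and verify on the Arm side that
 * the value fits into a signed 32-bit integer; handlers legitimately
 * returning 64-bit values would need to be special-cased.
 */
static inline bool hypercall_ret_fits_32bit(long ret)
{
    return ret >= INT_MIN && ret <= INT_MAX;
}

As noted above, option 1 gives no compiler help against truncation, so
either way an initial audit plus review, or explicit checks as in option 2,
would have to catch offenders.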