
Re: [PATCH v2 3/7] x86: shrink struct arch_{vcpu,domain} when !HVM


  • To: Jan Beulich <jbeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 7 Aug 2020 18:14:05 +0100
  • Cc: Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Fri, 07 Aug 2020 17:14:21 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 07/08/2020 12:33, Jan Beulich wrote:
> While this won't affect overall memory overhead (struct vcpu as well as
> struct domain get allocated as single pages) nor code size (the offsets
> into the base structures are too large to be representable as signed 8-
> bit displacements), it'll allow the tail of struct pv_{domain,vcpu} to
> share a cache line with subsequent struct arch_{domain,vcpu} fields.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> RFC: There is a risk associated with this: If we still have code
>      somewhere accessing the HVM parts of the structures without a prior
>      type check of the guest, this is going to end up worse than the so
>      far not uncommon case of the access simply going to space unused by
>      PV. We may therefore want to consider whether to further restrict
>      when this conversion to union gets done.
>      And of course there's also the risk of future compilers complaining
>      about this abuse of unions. But this is limited to code that's dead
>      in !HVM configs, so it is only an apparent problem.
>
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -99,7 +99,13 @@ struct hvm_pi_ops {
>  
>  #define MAX_NR_IOREQ_SERVERS 8
>  
> -struct hvm_domain {
> +typedef
> +#ifdef CONFIG_HVM
> +struct
> +#else
> +union
> +#endif
> +hvm_domain {
>      /* Guest page range used for non-default ioreq servers */
>      struct {
>          unsigned long base;
> @@ -203,7 +209,7 @@ struct hvm_domain {
>  #ifdef CONFIG_MEM_SHARING
>      struct mem_sharing_domain mem_sharing;
>  #endif
> -};
> +} hvm_domain_t;
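
To make the risk flagged in the RFC remark concrete, here is a minimal,
self-contained sketch. This is not Xen code; the member names are
invented purely for illustration (only is_hvm_domain() is a real Xen
predicate):

#include <assert.h>

struct pv_domain { unsigned long pv_state; };     /* illustrative */

union hvm_domain {             /* the !HVM layout from the patch */
    unsigned long param;       /* invented members: as union members */
    unsigned long ioreq_gfn;   /* they all collapse onto offset 0 */
};

struct arch_domain {
    union {                    /* the existing pv/hvm overlay */
        struct pv_domain pv;
        union hvm_domain hvm;
    };
};

int main(void)
{
    struct arch_domain d = { .pv = { .pv_state = 1 } };

    /* An HVM-side access without a prior is_hvm_domain() check ... */
    d.hvm.ioreq_gfn = 0;

    /* ... now clobbers live PV state.  Before the patch, hvm_domain's
     * members sat at distinct offsets, so such a stray access would
     * often land in space unused by PV. */
    assert(d.pv.pv_state == 0);
    return 0;
}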

Honestly, I'd say no to this patch even if it resulted in a 100x speedup,
because I am totally lost for words about this construct, and the effect
it has on the comprehensibility of our code.

Seeing as improved cache locality appears to be the sole benefit,
marginal as it is, you can achieve that in a better way by rearranging
struct domain/vcpu to put the pv/hvm union at the end.
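
As a sketch of that rearrangement (simplified; the field names are
invented rather than the real struct arch_domain contents), keeping
hvm_domain a plain struct while still getting the locality benefit:

#include <stdio.h>
#include <stddef.h>

struct pv_domain  { unsigned long pv_fields[8]; };   /* illustrative */
struct hvm_domain { unsigned long hvm_fields[64]; }; /* illustrative */

struct arch_domain {
    /* Hot, guest-type-independent fields first, so they pack into the
     * leading cache lines whichever union member is live. */
    unsigned long flags;         /* invented */
    unsigned long paging_mode;   /* invented */

    /* The large, type-specific part goes last. */
    union {
        struct pv_domain pv;
        struct hvm_domain hvm;
    };
};

int main(void)
{
    printf("pv/hvm union starts at offset %zu\n",
           offsetof(struct arch_domain, pv));
    return 0;
}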

~Andrew
