
Re: [Xen-devel] [PATCH v2 2/2] x86/spec-ctrl: add support for modifying SSBD VIA LS_CFG MSR



On Fri, Aug 17, 2018 at 12:59:28AM -0600, Jan Beulich wrote:
> >>> On 16.08.18 at 22:02, <brian.woods@xxxxxxx> wrote:
> > On Wed, Aug 15, 2018 at 10:00:48AM -0600, Jan Beulich wrote:
> >> >>> On 09.08.18 at 21:42, <brian.woods@xxxxxxx> wrote:
> >> > --- a/xen/arch/x86/spec_ctrl.c
> >> > +++ b/xen/arch/x86/spec_ctrl.c
> >> 
> >> First of all - I'm not convinced some of the AMD specific code here
> >> wouldn't better live in cpu/amd.c.
> > 
> > Well, by that logic, a lot of the other logic could go in cpu/intel.c.
> > It has to do with SSBD, and once AMD's processors support it via the
> > SPEC_CTRL MSR, the SSBD support will get merged together in
> > spec_ctrl.c.  If that's the case, it makes sense to have all the SSBD
> > code together, IMO.
> 
> Maybe, though I'd say retpoline_safe(), should_use_eager_fpu(),
> l1tf_calculations(), and xpti_init_default() are all written in a way
> that they could be extended to other CPU vendors should it
> become known that they're also affected. I don't think we really
> know for sure whether VIA and/or Shanghai are affected. Otoh
> the code you add is not just AMD-specific, but even model-specific
> within AMD's palette.
> 
> I'd be curious to know whether Andrew has an opinion here.

Since most of the spec_ctrl code is his, his opinion here would carry
the most weight.

> >> > @@ -50,7 +51,16 @@ bool __initdata bsp_delay_spec_ctrl;
> >> >  uint8_t __read_mostly default_xen_spec_ctrl;
> >> >  uint8_t __read_mostly default_spec_ctrl_flags;
> >> >  
> >> > +/* for SSBD support for AMD via LS_CFG */
> >> > +#define SSBD_AMD_MAX_SOCKET 2
> >> > +struct ssbd_amd_ls_cfg_smt_status {
> >> > +    spinlock_t lock;
> >> > +    uint32_t mask;
> >> > +} __attribute__ ((aligned (64)));
> >> 
> >> Where's this literal 64 coming from? Do you perhaps mean
> >> SMP_CACHE_BYTES? And if this is really needed (as said before,
> >> I think the array would better be dynamically allocated than having
> >> compile time determined fixed size), then please don't open-code
> >> __aligned().
> > 
> > It's the cache coherency line size.  SMP_CACHE_BYTES is 128 bytes
> > IIRC.  I was trying to save space: the struct is so small that
> > aligning to 128 bytes instead of 64 would double its size.  That can
> > be changed though.
> 
> If SMP_CACHE_BYTES doesn't fit your need, the literal number used
> needs either an explaining comment or a suitably named #define.

I'll just use SMP_CACHE_BYTES and not worry about the extra space.
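
Something like the below, then (untested, but matching what the patch
already declares, just using __aligned() and SMP_CACHE_BYTES as
suggested):

    /* for SSBD support for AMD via LS_CFG */
    #define SSBD_AMD_MAX_SOCKET 2
    struct ssbd_amd_ls_cfg_smt_status {
        spinlock_t lock;
        uint32_t mask;
    } __aligned(SMP_CACHE_BYTES);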

> >> > +bool __read_mostly ssbd_amd_smt_en = false;
> >> > +bool __read_mostly default_xen_ssbd_amd_ls_cfg_en = false;
> >> >  uint64_t __read_mostly ssbd_amd_ls_cfg_mask = 0ull;
> >> > +struct ssbd_amd_ls_cfg_smt_status *ssbd_amd_smt_status[SSBD_AMD_MAX_SOCKET] = {NULL};
> >> 
> >> Several further pointless initializers.
> > 
> > ssbd_amd_ls_cfg_mask -> needs to be initialized, due to how it gets set
> > ssbd_amd_ls_cfg_smt_status -> needs to be initialized, unless xalloc
> >                               sets it to NULL on failure to alloc
> > default_xen_ssbd_amd_ls_cfg_en -> needs to be initialized or an else
> >                                   added to an if statement
> > ssbd_amd_smt_en -> like the above
> > 
> > If you want default_xen_ssbd_amd_ls_cfg_en and ssbd_amd_smt_en not to
> > be initialized, I can add code to do that.
> 
> I don't understand: Add code? The initializers here are all pointless
> because the values you supply are what the variables would be
> initialized with anyway if you omitted the initializers. That's what
> the C standard specifies.

Leaving values which determine the behavior of the hypervisor to the
compiler's implementation of the standard's implicit zero-initialization
seems like a possible source of subtle bugs if the compiler doesn't get
something just right.  I'd much rather have an explicit initializer, or
have the value set in the code before use.

If you strongly feel that they shouldn't be initialized or set in code,
I'll change them though.
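
FWIW, if the explicit initializers do get dropped, the zero values are
still guaranteed by the language (C99 6.7.8p10: objects with static
storage duration are implicitly zero-initialized), e.g.:

    bool __read_mostly ssbd_amd_smt_en;                /* implicitly false */
    bool __read_mostly default_xen_ssbd_amd_ls_cfg_en; /* implicitly false */
    uint64_t __read_mostly ssbd_amd_ls_cfg_mask;       /* implicitly 0 */
    struct ssbd_amd_ls_cfg_smt_status
        *ssbd_amd_smt_status[SSBD_AMD_MAX_SOCKET];     /* implicitly all NULL */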

> >> > +static int __init ssbd_amd_ls_cfg_init(void)
> >> > +{
> >> > +    uint32_t cores_per_socket, threads_per_core;
> >> > +    const struct cpuinfo_x86  *c =  &boot_cpu_data;
> >> > +    uint32_t core, socket;
> >> > +
> >> > +    if ( !ssbd_amd_ls_cfg_mask ||
> >> > +         !boot_cpu_has(X86_FEATURE_SSBD_AMD_LS_CFG) )
> >> > +        goto ssbd_amd_ls_cfg_init_fail;
> >> 
> >> Why not simply "return"?
> > 
> > Because it forces all the failed returns down one code path.  I can
> > change it if you wish.
> 
> If any cleanup is to be done, using "goto" for this purpose is
> generally accepted (although personally I dislike "goto" in
> general). Here, however, nothing has been done yet and the
> cleanup path is non-trivial. It took me extra time to verify
> that nothing bad would happen from going that path despite
> nothing having been done before. It is far more clear to the
> reader imo if there is just "return" here.

Well, it's just a difference of opinion at this point.  I'll change it.
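
So the top of the function would simply read (nothing has been set up
at that point, so there is nothing to unwind):

    if ( !ssbd_amd_ls_cfg_mask ||
         !boot_cpu_has(X86_FEATURE_SSBD_AMD_LS_CFG) )
        return 1;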

> >> > +    default_xen_ssbd_amd_ls_cfg_en = false;
> >> > +
> >> > +    dprintk(XENLOG_ERR, "SSBD AMD LS CFG: disabling SSBD due to errors\n");
> >> > +
> >> > +    return 1;
> >> 
> >> If the function returns 0 and 1 only, it looks like you've meant to
> >> use bool, false, and true respectively.
> > 
> > Because it's more of an error code than a boolean logic value.  I can
> > switch it over to bool if you want.
> 
> Error code style returns would please use (negative) errno-style
> numbers. But if the function really just means to say "success"
> or "failure", without particular error codes, then boolean would
> imo still be preferable.
> 
> Jan
> 

Sounds fair to change it to bool.  I'll do that.
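
Roughly like this, then (just a sketch; the setup body and its failure
cleanup are elided):

    static bool __init ssbd_amd_ls_cfg_init(void)
    {
        if ( !ssbd_amd_ls_cfg_mask ||
             !boot_cpu_has(X86_FEATURE_SSBD_AMD_LS_CFG) )
            return false;

        /*
         * ... allocate and set up the per-socket SMT status array,
         * taking the existing cleanup path and returning false on
         * allocation failure ...
         */

        return true;
    }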

-- 
Brian Woods
