Re: [Xen-devel] [PATCH v3 2/2] x86/spec-ctrl: add support for modifying SSBD via LS_CFG MSR
>>> On 27.08.18 at 18:55, <brian.woods@xxxxxxx> wrote:
> Adds support for modifying the LS_CFG MSR to enable SSBD on supporting
> AMD CPUs. There needs to be locking logic for family 17h with SMT
> enabled since both threads share the same MSR. Otherwise, a core just
> needs to write to the LS_CFG MSR. For more information see:
> https://developer.amd.com/wp-content/resources/124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
>
> Signed-off-by: Brian Woods <brian.woods@xxxxxxx>
> ---
>  xen/arch/x86/cpu/amd.c            |  13 +--
>  xen/arch/x86/smpboot.c            |   3 +
>  xen/arch/x86/spec_ctrl.c          | 172 +++++++++++++++++++++++++++++++++++++-
>  xen/include/asm-x86/cpufeatures.h |   1 +
>  xen/include/asm-x86/spec_ctrl.h   |   2 +
>  5 files changed, 181 insertions(+), 10 deletions(-)
>
> diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
> index 6e65ae7427..e96f14268e 100644
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -601,8 +601,8 @@ static void init_amd(struct cpuinfo_x86 *c)
>          }
>
>          /*
> -         * If the user has explicitly chosen to disable Memory Disambiguation
> -         * to mitigiate Speculative Store Bypass, poke the appropriate MSR.
> +         * Poke the LS_CFG MSR to see if the mitigation for Speculative
> +         * Store Bypass is available.
>          */
>          if (!ssbd_amd_ls_cfg_mask) {
>                  int bit = -1;
> @@ -615,14 +615,9 @@ static void init_amd(struct cpuinfo_x86 *c)
>
>                  if (bit >= 0)
>                          ssbd_amd_ls_cfg_mask = 1ull << bit;
> -        }
>
> -        if (ssbd_amd_ls_cfg_mask && !rdmsr_safe(MSR_AMD64_LS_CFG, value)) {
> -                ssbd_amd_ls_cfg_av = true;
> -                if (opt_ssbd) {
> -                        value |= ssbd_amd_ls_cfg_mask;
> -                        wrmsr_safe(MSR_AMD64_LS_CFG, value);
> -                }
> +                if (ssbd_amd_ls_cfg_mask && !rdmsr_safe(MSR_AMD64_LS_CFG, value))
> +                        ssbd_amd_ls_cfg_av = true;
>          }
>
>          /* MFENCE stops RDTSC speculation */
> diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
> index 7e76cc3d68..b103b46dee 100644
> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -376,6 +376,9 @@ void start_secondary(void *unused)
>      if ( boot_cpu_has(X86_FEATURE_IBRSB) )
>          wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
>
> +    if ( default_xen_ssbd_amd_ls_cfg_en )
> +        ssbd_amd_ls_cfg_set(true);
> +
>      if ( xen_guest )
>          hypervisor_ap_setup();
>
> diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
> index b32c12c6c0..89cc444f56 100644
> --- a/xen/arch/x86/spec_ctrl.c
> +++ b/xen/arch/x86/spec_ctrl.c
> @@ -20,6 +20,7 @@
>  #include <xen/init.h>
>  #include <xen/lib.h>
>  #include <xen/warning.h>
> +#include <xen/spinlock.h>
>
>  #include <asm/microcode.h>
>  #include <asm/msr.h>
> @@ -58,8 +59,17 @@ paddr_t __read_mostly l1tf_addr_mask, __read_mostly l1tf_safe_maddr;
>  static bool __initdata cpu_has_bug_l1tf;
>  static unsigned int __initdata l1d_maxphysaddr;
>
> +/* for SSBD support for AMD via LS_CFG */
> +#define SSBD_AMD_MAX_SOCKET 4

Oh, went up from 2 to 4? But still not a dynamic upper bound?

> +struct ssbd_amd_ls_cfg_smt_status {
> +    spinlock_t lock;
> +    uint32_t mask;
> +} __attribute__ ((aligned (64)));

Ehem. See the respective comment on patch 1. To put it bluntly, I'm not
willing to look at patches where prior comments were not addressed, the
more that you had verbally agreed to use SMP_CACHE_BYTES here.

Jan
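
[Editorial note: a minimal sketch of the shape of change being asked for in
the review -- SMP_CACHE_BYTES alignment instead of a hard-coded 64, and LS_CFG
writes serialised between family 17h SMT siblings that share the MSR. Only
ssbd_amd_ls_cfg_mask, MSR_AMD64_LS_CFG and the struct layout come from the
quoted patch; the helper function, its name and its exact logic are
illustrative assumptions, not the code that was eventually committed.]

/*
 * Illustrative sketch only -- not the patch as posted or committed.
 */
#include <xen/types.h>
#include <xen/cache.h>      /* SMP_CACHE_BYTES */
#include <xen/spinlock.h>
#include <asm/msr.h>        /* rdmsrl()/wrmsrl(), MSR_AMD64_LS_CFG */

extern uint64_t ssbd_amd_ls_cfg_mask;   /* SSBD bit, from the quoted patch */

struct ssbd_amd_ls_cfg_smt_status {
    spinlock_t lock;   /* initialise with spin_lock_init(); guards mask + MSR */
    uint32_t mask;     /* one bit per sibling thread that wants SSBD enabled */
} __aligned(SMP_CACHE_BYTES);  /* cache-line align to avoid false sharing */

/*
 * Hypothetical per-core update: both threads of a family 17h core share
 * MSR_AMD64_LS_CFG, so the read-modify-write is done under the lock to
 * keep the siblings from losing each other's state.
 */
static void ssbd_amd_ls_cfg_update_core(struct ssbd_amd_ls_cfg_smt_status *s,
                                        unsigned int thread, bool enable)
{
    uint64_t val;

    spin_lock(&s->lock);

    if ( enable )
        s->mask |= 1u << thread;
    else
        s->mask &= ~(1u << thread);

    rdmsrl(MSR_AMD64_LS_CFG, val);
    if ( s->mask )                      /* some sibling still wants SSBD */
        val |= ssbd_amd_ls_cfg_mask;
    else                                /* no sibling wants SSBD any more */
        val &= ~ssbd_amd_ls_cfg_mask;
    wrmsrl(MSR_AMD64_LS_CFG, val);

    spin_unlock(&s->lock);
}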