
Re: [PATCH v2] x86/SVM: restrict hardware SSBD update upon guest VIRT_SPEC_CTRL write


  • To: Jan Beulich <jbeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 1 Mar 2023 20:55:41 +0000
  • Cc: Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Wed, 01 Mar 2023 20:55:56 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 15/12/2022 8:20 am, Jan Beulich wrote:
> core_set_legacy_ssbd() counts the number of times SSBD is being enabled
> via LS_CFG on a core. This assumes that calls there only occur if the
> state actually changes. While svm_ctxt_switch_{to,from}() conform to
> this, guest_wrmsr() doesn't: It also calls the function when the bit
> doesn't actually change. Make core_set_legacy_ssbd() track per-thread
> enabled state by converting the "count" field to a bitmap, thus allowing
> to skip redundant enable/disable requests, constraining
> amd_setup_legacy_ssbd() accordingly.
>
> Fixes: b2030e6730a2 ("amd/virt_ssbd: set SSBD at vCPU context switch")
> Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> This wants properly testing on affected hardware. From Andrew's
> description it's also not clear whether this really is addressing that
> problem, or yet another one in this same area.
> ---
> v2: Change core_set_legacy_ssbd() itself rather than the problematic
>     caller.

I think this patch will fix one of the errors.  I've lost count of how
many other issues I've found now that I've looked at the code in detail
for the first time.
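
For reference, here is a minimal standalone sketch (not Xen code:
chip_write() and the plain thread index are illustrative stand-ins for
set_legacy_ssbd() and the real APIC ID handling) of the failure mode
described above, and of why the per-thread bitmap sidesteps it:

/*
 * The old per-core refcount only stays balanced if every caller strictly
 * alternates enable/disable.  A redundant enable from guest_wrmsr()
 * leaves the count too high, so a later genuine disable no longer clears
 * the chip setting.  One bit per thread makes redundant requests
 * idempotent.
 */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the LS_CFG MSR write done by set_legacy_ssbd(). */
static void chip_write(bool enable)
{
    printf("LS_CFG SSBD %s\n", enable ? "set" : "clear");
}

/* Old scheme: per-core refcount, assumes strictly paired calls. */
static unsigned int count;
static void set_by_count(bool enable)
{
    count += enable ? 1 : -1;
    if (enable ? count == 1 : !count)
        chip_write(enable);
}

/* New scheme: one bit per thread; repeated requests are no-ops. */
static unsigned long enabled;
static void set_by_bitmap(unsigned int thread, bool enable)
{
    unsigned long bit = 1UL << thread;

    if (!enable)
        enabled &= ~bit;
    if (!enabled)               /* no thread wants SSBD (any more / yet) */
        chip_write(enable);
    if (enable)
        enabled |= bit;
}

int main(void)
{
    /* One thread: a redundant enable, then a single disable. */
    set_by_count(true);         /* prints "set" */
    set_by_count(true);         /* count stuck at 2 for one thread... */
    set_by_count(false);        /* ...so this never clears the chip */

    set_by_bitmap(0, true);     /* prints "set" */
    set_by_bitmap(0, true);     /* bit already set: no further write */
    set_by_bitmap(0, false);    /* bitmap empty again: prints "clear" */
    return 0;
}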

However...

> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -744,7 +744,7 @@ void amd_init_ssbd(const struct cpuinfo_
>  
>  static struct ssbd_ls_cfg {
>      spinlock_t lock;
> -    unsigned int count;
> +    unsigned long enabled;
>  } __cacheline_aligned *ssbd_ls_cfg;
>  static unsigned int __ro_after_init ssbd_max_cores;
>  #define AMD_FAM17H_MAX_SOCKETS 2
> @@ -757,6 +757,11 @@ bool __init amd_setup_legacy_ssbd(void)
>           boot_cpu_data.x86_num_siblings <= 1 || opt_ssbd)
>               return true;
>  
> +     if (boot_cpu_data.x86_num_siblings > BITS_PER_LONG ||
> +         (boot_cpu_data.x86_num_siblings &
> +          (boot_cpu_data.x86_num_siblings - 1)))
> +             return false;

... this is nonsense.  There is no such thing as a Zen1 uarch with more
than 2 threads per core, and attempting to cope with it (as opposed to
rejecting it outright) makes ...

> +
>       /*
>        * One could be forgiven for thinking that c->x86_max_cores is the
>        * correct value to use here.
> @@ -800,10 +805,12 @@ static void core_set_legacy_ssbd(bool en
>                             c->cpu_core_id];
>  
>       spin_lock_irqsave(&status->lock, flags);
> -     status->count += enable ? 1 : -1;
> -     ASSERT(status->count <= c->x86_num_siblings);
> -     if (enable ? status->count == 1 : !status->count)
> +     if (!enable)
> +             __clear_bit(c->apicid & (c->x86_num_siblings - 1),
> +                         &status->enabled);
> +     if (!status->enabled)
>               BUG_ON(!set_legacy_ssbd(c, enable));
> +     if (enable)
> +             __set_bit(c->apicid & (c->x86_num_siblings - 1),
> +                       &status->enabled);

... this mess even worse.
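
(For reference, not Xen code: a tiny sketch of the thread-index
derivation that hunk relies on.  The mask only yields a distinct index
per sibling when x86_num_siblings is a power of two, and on Zen1 the
sibling count is only ever 1 or 2, so the new setup-time check can never
trip in practice.)

#include <stdio.h>

/* Low bits of the APIC ID identify the thread within the core. */
static unsigned int thread_idx(unsigned int apicid, unsigned int siblings)
{
    return apicid & (siblings - 1);     /* assumes siblings is a power of two */
}

int main(void)
{
    /* SMT on (2 siblings): APIC IDs 6 and 7 are threads 0 and 1 of one core. */
    printf("%u %u\n", thread_idx(6, 2), thread_idx(7, 2));
    /* SMT off (1 sibling): every thread maps to index 0. */
    printf("%u\n", thread_idx(9, 1));
    return 0;
}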

I am rewriting the legacy SSBD code so that it is fit for purpose.

~Andrew
