Re: [PATCH for-4.17 v2.1 2/3] amd/virt_ssbd: set SSBD at vCPU context switch


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 3 Nov 2022 09:09:41 +0100
  • Cc: Henry.Wang@xxxxxxx, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 03 Nov 2022 08:10:02 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 02.11.2022 18:38, Roger Pau Monné wrote:
> On Wed, Nov 02, 2022 at 12:49:17PM +0100, Jan Beulich wrote:
>> On 29.10.2022 15:12, Roger Pau Monne wrote:
>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>> @@ -973,6 +973,16 @@ static void cf_check svm_ctxt_switch_from(struct vcpu *v)
>>>  
>>>      /* Resume use of ISTs now that the host TR is reinstated. */
>>>      enable_each_ist(idt_tables[cpu]);
>>> +
>>> +    /*
>>> +     * Clear previous guest selection of SSBD if set.  Note that SPEC_CTRL.SSBD
>>> +     * is already cleared by svm_vmexit_spec_ctrl.
>>> +     */
>>> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
>>> +    {
>>> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
>>> +        amd_set_ssbd(false);
>>> +    }
>>>  }
>>
>> Aren't you potentially turning off SSBD here just to ...
>>
>>> @@ -1000,6 +1010,13 @@ static void cf_check svm_ctxt_switch_to(struct vcpu *v)
>>>  
>>>      if ( cpu_has_msr_tsc_aux )
>>>          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
>>> +
>>> +    /* Load SSBD if set by the guest. */
>>> +    if ( v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD )
>>> +    {
>>> +        ASSERT(v->domain->arch.cpuid->extd.virt_ssbd);
>>> +        amd_set_ssbd(true);
>>> +    }
>>>  }
>>
>> ... turn it on here again? IOW wouldn't switching better be isolated to
>> just svm_ctxt_switch_to(), doing nothing if already in the intended mode?
> 
> What if we switch from a HVM vCPU into a PV one?  AFAICT then
> svm_ctxt_switch_to() won't get called and we would be running the PV
> guest with the previous HVM domain SSBD selection.

Would that be a problem? Or in other words: What is the intended behavior
for PV? PV domains can control SSBD via SPEC_CTRL (only), so all we need
to guarantee is that we respect their choice there.
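
To illustrate what I mean (only a sketch, not even compile tested; the
per-CPU ssbd_active flag is made up here purely for illustration and
would need keeping in sync wherever else the core's SSBD setting can
change, including whatever we decide for the PV case above):

    /* Illustrative only: tracks the last state passed to amd_set_ssbd(). */
    static DEFINE_PER_CPU(bool, ssbd_active);

    static void cf_check svm_ctxt_switch_to(struct vcpu *v)
    {
        bool want = v->arch.msrs->virt_spec_ctrl.raw & SPEC_CTRL_SSBD;

        /* Only touch the hardware control when the state actually changes. */
        if ( want != this_cpu(ssbd_active) )
        {
            amd_set_ssbd(want);
            this_cpu(ssbd_active) = want;
        }

        /* ... rest of the existing function ... */
    }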

>>> --- a/xen/arch/x86/msr.c
>>> +++ b/xen/arch/x86/msr.c
>>> @@ -697,7 +697,15 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>>                  msrs->spec_ctrl.raw &= ~SPEC_CTRL_SSBD;
>>>          }
>>>          else
>>> +        {
>>>              msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
>>> +            if ( v == curr )
>>> +                /*
>>> +                 * Propagate the value to hardware, as it won't be context
>>> +                 * switched on vmentry.
>>> +                 */
>>
>> I have to admit that I find "on vmentry" in the comment misleading: Reading
>> it I first thought you're still alluding to the old model. Plus I also find
>> the combination of "context switched" and "on vmentry" problematic, as we
>> generally mean something else when we say "context switch".
> 
> I had a hard time wording this, because of the Xen/guest vs vCPU
> context switches.
> 
> What about:
> 
> "Propagate the value to hardware, as it won't we set on guest resume
> path."

Sounds better, thanks (with s/we/be/).

>>> +                goto set_reg;
>>
>> It's not clear why you want to use hvm_set_reg() in the first place - the
>> comment says "propagate to hardware", which would mean wrmsrl() in the
>> usual case. Here it would mean a direct call to amd_set_ssbd() imo. That
>> would then also be in line with all other "v == curr" conditionals, none
>> of which apply to any "goto set_reg". ..._set_reg(), aiui, is meant only
>> for use in cases where vCPU state needs updating such that proper state
>> would be loaded later (e.g. during VM entry).
> 
> I thought it was better to hide those vendor-specific calls in the
> already existing vendor hooks (set_reg).  I don't mind calling
> amd_set_ssbd() directly here if that's preferred; it just seemed like
> a bit of a layering violation when we have vendor-specific hooks in
> place.

Well, Andrew of course should correct me if I'm wrong, but my understanding
of the get/set-reg interface is as described. On which grounds I don't see
any layering violation here - doing the call right here is merely a more
involved flavor of wrmsrl().
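
Something along these lines is all I'm after (sketch only, adjusting
the hunk quoted above and using the comment wording we settled on;
untested):

        else
        {
            msrs->virt_spec_ctrl.raw = val & SPEC_CTRL_SSBD;
            if ( v == curr )
                /*
                 * Propagate the value to hardware, as it won't be set on
                 * the guest resume path.
                 */
                amd_set_ssbd(val & SPEC_CTRL_SSBD);
        }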

Jan