
Re: [PATCH HVM v2 1/1] hvm: refactor set param


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Norbert Manthey <nmanthey@xxxxxxxxx>
  • Date: Mon, 8 Feb 2021 20:47:23 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 08 Feb 2021 19:49:47 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 2/8/21 3:21 PM, Jan Beulich wrote:
> On 05.02.2021 21:39, Norbert Manthey wrote:
>> To prevent leaking HVM params via L1TF and similar issues on a
>> hyperthread pair, let's load values of domains as late as possible.
>>
>> Furthermore, speculative barriers are re-arranged to make sure we do not
>> allow guests running on co-located VCPUs to leak hvm parameter values of
>> other domains.
>>
>> This is part of the speculative hardening effort.
>>
>> Signed-off-by: Norbert Manthey <nmanthey@xxxxxxxxx>
>> Reported-by: Hongyan Xia <hongyxia@xxxxxxxxxxxx>
> Did you lose Ian's release-ack, or did you drop it for a specific
> reason?
That happened by accident, as did not chaining this v2 to the former
v1. I'll add it back in the next revision.
>
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -4060,7 +4060,7 @@ static int hvm_allow_set_param(struct domain *d,
>>                                 uint32_t index,
>>                                 uint64_t new_value)
>>  {
>> -    uint64_t value = d->arch.hvm.params[index];
>> +    uint64_t value;
>>      int rc;
>>
>>      rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
>> @@ -4108,6 +4108,13 @@ static int hvm_allow_set_param(struct domain *d,
>>      if ( rc )
>>          return rc;
>>
>> +    if ( index >= HVM_NR_PARAMS )
>> +        return -EINVAL;
>> +
>> +    /* Make sure we evaluate permissions before loading data of domains. */
>> +    block_speculation();
>> +
>> +    value = d->arch.hvm.params[index];
>>      switch ( index )
>>      {
>>      /* The following parameters should only be changed once. */
> I don't see the need for the heavier block_speculation() here;
> afaict array_access_nospec() should do fine. The switch() in
> context above as well as the switch() further down in the
> function don't have any speculation susceptible code.
The reason to block speculation here, instead of just using the hardened
index access, is to prevent any speculative load of another domain's data
in the first place.
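To illustrate the difference being discussed (a sketch only, not part of
the patch; both helpers are from xen/include/xen/nospec.h):

    /* Hardened index access: only clamps the index under speculation.
     * The load itself may still happen speculatively, but cannot go
     * out of bounds. */
    value = array_access_nospec(d->arch.hvm.params, index);

    /* This patch: stall speculation entirely before touching the
     * domain's data, so the load cannot happen speculatively at all. */
    block_speculation();
    value = d->arch.hvm.params[index];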
>
> Furthermore the first switch() doesn't use "value" at all, so
> you could move the access even further down. This may have the
> downside of adding latency, so may not be worth it, but in
> this case at least the description should say half a word,
> especially seeing it say "as late as possible" right now.
Agreed, I can either move this further down or adapt the wording. The
initial intention was only to move the load below the first possible
speculation blocker, so let me adapt the wording.
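For reference, moving the load further down would look roughly like this
(a sketch only, abbreviated from hvm_allow_set_param()):

    if ( index >= HVM_NR_PARAMS )
        return -EINVAL;

    /* Make sure we evaluate permissions before loading data of domains. */
    block_speculation();

    /* ... first switch ( index ), which does not read "value" ... */

    value = d->arch.hvm.params[index];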
>
>> @@ -4141,6 +4148,9 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
>>      if ( rc )
>>          return rc;
>>
>> +    /* Make sure we evaluate permissions before loading data of domains. */
>> +    block_speculation();
>> +
>>      switch ( index )
>>      {
>>      case HVM_PARAM_CALLBACK_IRQ:
> Like you do for the "get" path I think this similarly renders
> pointless the use in hvmop_set_param() (and - see below - the
> same consideration wrt is_hvm_domain() applies).
Can you please be more specific about why this is pointless? I understand
that the is_hvm_domain() check comes with a barrier, which can be relied
upon instead of adding another one. However, I did not find such a barrier
here, between the 'if ( rc )' just above and the next access based on the
value of 'index'. At least the access behind the switch statement cannot
easily be optimized and replaced with a constant value.
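For context, the barrier Jan refers to is the one is_hvm_domain() carries
via evaluate_nospec(). A simplified sketch of the idea (not the exact
implementation in xen/include/xen/nospec.h):

    /* Evaluate a condition and stall speculation on both outcomes, so
     * code guarded by the condition cannot be reached speculatively
     * with a mispredicted result. */
    #define evaluate_nospec(cond)                          \
        ((cond) ? ({ block_speculation(); true; })         \
                : ({ block_speculation(); false; }))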
>
>> @@ -4388,6 +4398,10 @@ int hvm_get_param(struct domain *d, uint32_t index, uint64_t *value)
>>      if ( rc )
>>          return rc;
>>
>> +    /* Make sure the index bound check in hvm_get_param is respected, as well as
>> +       the above domain permissions. */
>> +    block_speculation();
> Nit: Please fix comment style here.
Will do.
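Presumably along the lines of the usual Xen multi-line comment style:

    /*
     * Make sure the index bound check in hvm_get_param() is respected,
     * as well as the above domain permissions.
     */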
>
>> @@ -4428,9 +4442,6 @@ static int hvmop_get_param(
>>      if ( a.index >= HVM_NR_PARAMS )
>>          return -EINVAL;
>>
>> -    /* Make sure the above bound check is not bypassed during speculation. */
>> -    block_speculation();
>> -
>>      d = rcu_lock_domain_by_any_id(a.domid);
>>      if ( d == NULL )
>>          return -ESRCH;
> This one really was pointless anyway, as is_hvm_domain() (used
> down from here) already contains a suitable barrier.

Yes, agreed.

Best,
Norbert

>
> Jan



