
Re: [Xen-devel] [PATCH 09/13] x86/sysctl: Implement XEN_SYSCTL_get_cpumsr_policy


  • To: Jan Beulich <JBeulich@xxxxxxxx>, Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 4 Jul 2018 18:57:16 +0100
  • Cc: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>, Daniel de Graaf <dgdegra@xxxxxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Delivery-date: Wed, 04 Jul 2018 17:57:39 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 04/07/18 10:43, Jan Beulich wrote:
> Oh, here we go - the title doesn't suggest this is about CPUID as well.
>
>>>> On 03.07.18 at 22:55, <andrew.cooper3@xxxxxxxxxx> wrote:
>> Extend the xen-cpuid utility to be able to dump the system policies.  An
>> example output is:
>>
>>     Xen reports there are maximum 113 leaves and 3 MSRs
>>     Raw policy: 93 leaves, 3 MSRs
>>      CPUID:
>>       leaf     subleaf  -> eax      ebx      ecx      edx
>>       00000000:ffffffff -> 0000000d:756e6547:6c65746e:49656e69
> I'd like to suggest suppressing the :ffffffff when there are no sub-leaves.

This is a developer tool, rather than a user tool, and it is dumping raw
data.

If there were an easy way of expressing "uint32_t or blank" in a
printf() format string then yes, but I'm not aware of one.
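The nearest workaround I can see is pre-formatting the sub-leaf into a
string, e.g. (a sketch only; the sentinel name and helper are
illustrative, not from the series):

    #include <stdint.h>
    #include <stdio.h>

    #define NO_SUBLEAF 0xffffffffu  /* illustrative sentinel */

    /* Print the sub-leaf only when the leaf actually has sub-leaves,
     * leaving the field blank otherwise. */
    static void print_leaf(uint32_t leaf, uint32_t subleaf,
                           uint32_t eax, uint32_t ebx,
                           uint32_t ecx, uint32_t edx)
    {
        char sub[9] = "";

        if ( subleaf != NO_SUBLEAF )
            snprintf(sub, sizeof(sub), "%08x", subleaf);

        printf("  %08x:%-8s -> %08x:%08x:%08x:%08x\n",
               leaf, sub, eax, ebx, ecx, edx);
    }

which is more hassle than it is worth for a raw-data dump.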

>
>> @@ -377,7 +458,7 @@ int main(int argc, char **argv)
>>                  if ( i == nr_features )
>>                      break;
>>  
>> -                if ( *ptr == ':' )
>> +                if ( *ptr == ':' || *ptr == '-' )
> None of the other changes to this file give any hint why a dash needs
> recognizing here all of a sudden. Is this a stray / leftover change?

Hmm - at a guess, that is a XenServer compatibility improvement, but it
can probably be pulled out into a separate change.

Xapi represents feature bitmaps with dash delimiters rather than colon
delimiters, and this alters the parsing to accept both forms.
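For reference, the same bitmap in the two forms (values are illustrative):

    xen-cpuid:  1fc9cbf5:f6f83203:2991cbf5
    Xapi:       1fc9cbf5-f6f83203-2991cbf5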

>
>> --- a/xen/arch/x86/sysctl.c
>> +++ b/xen/arch/x86/sysctl.c
>> @@ -32,22 +32,32 @@
>>  #include <asm/cpuid.h>
>>  
>>  const struct policy_group system_policies[] = {
>> -    {
>> +    [ XEN_SYSCTL_cpumsr_policy_raw ] = {
> Aha - this clarifies a question I had on the earlier patch. But it would
> be nice if that other patch were also self-contained, so that readers
> can understand the intentions.

One thing I could do is introduce the XEN_SYSCTL_cpumsr_policy_* defines
in the previous patch?  I don't want to merge the two patches, as that
would be too many moving parts to review at once.

> And with this I now
> wonder whether the pointers in struct policy_group shouldn't all
> be const qualified.

Unfortunately that doesn't work with the logic to create a policy_group
for an individual domain during audit.
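To illustrate (a sketch; the msr field name and the domain fields are my
recollection of the series rather than verbatim):

    struct policy_group {
        struct cpuid_policy *cp;
        struct msr_policy *mp;
    };

    /* The audit path assembles a group which aliases the domain's own,
     * mutable, policies, so const-qualified pointers would require
     * casting away constness: */
    struct policy_group group = {
        .cp = d->arch.cpuid,
        .mp = d->arch.msr,
    };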

>
>> @@ -318,6 +328,74 @@ long arch_do_sysctl(
>>          break;
>>      }
>>  
>> +    case XEN_SYSCTL_get_cpumsr_policy:
>> +    {
>> +        const struct policy_group *group;
>> +
>> +        /* Bad policy index? */
>> +        if ( sysctl->u.cpumsr_policy.index >= ARRAY_SIZE(system_policies) )
>> +        {
>> +            ret = -EINVAL;
>> +            break;
>> +        }
>> +        group = &system_policies[sysctl->u.cpumsr_policy.index];
> Isn't this introducing at least half of a Spectre v1 gadget?

Nope :(

It's both halves of the Spectre gadget, once you account for the
dereference when calling x86_*_copy_to_buffer() slightly further down.

I suppose we want to port Linux's array_index_nospec() logic so we can
protect the clearly-visible gadgets.
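i.e. something like (a sketch borrowing Linux's array_index_nospec();
Xen has no equivalent helper at the time of writing):

    /* Clamp the index under speculation before using it to index the
     * array, closing the first half of the gadget. */
    unsigned int index = array_index_nospec(sysctl->u.cpumsr_policy.index,
                                            ARRAY_SIZE(system_policies));

    group = &system_policies[index];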

>
>> +        /* Request for maximum number of leaves/MSRs? */
>> +        if ( guest_handle_is_null(sysctl->u.cpumsr_policy.cpuid_policy) )
>> +        {
>> +            sysctl->u.cpumsr_policy.nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
>> +            if ( __copy_field_to_guest(u_sysctl, sysctl,
>> +                                       u.cpumsr_policy.nr_leaves) )
>> +            {
>> +                ret = -EFAULT;
>> +                break;
>> +            }
>> +        }
>> +        if ( guest_handle_is_null(sysctl->u.cpumsr_policy.msr_policy) )
>> +        {
>> +            sysctl->u.cpumsr_policy.nr_msrs = MSR_MAX_SERIALISED_ENTRIES;
>> +            if ( __copy_field_to_guest(u_sysctl, sysctl,
>> +                                       u.cpumsr_policy.nr_msrs) )
>> +            {
>> +                ret = -EFAULT;
>> +                break;
>> +            }
>> +        }
>> +
>> +        /* Serialise the information the caller wants. */
>> +        if ( !guest_handle_is_null(sysctl->u.cpumsr_policy.cpuid_policy) )
>> +        {
>> +            if ( (ret = x86_cpuid_copy_to_buffer(
>> +                      group->cp,
>> +                      sysctl->u.cpumsr_policy.cpuid_policy,
>> +                      &sysctl->u.cpumsr_policy.nr_leaves)) )
>> +                break;
> Coming back to an earlier question, I realize the null handle logic
> above is supposed to allow sizing the buffers, but I think it would
> be better to allow single invocations to generally work, making a
> second invocation necessary just as a fallback. IOW I think the
> code here wants to return to the caller the required number of
> slots in case of -ENOBUFS. And it should then also ...

I don't agree.  Whatever happens, the toolstack has to make a hypercall
(once) requesting the size of the buffers, and there is no plausible
usage where the toolstack would start with one buffer size and
dynamically resize it based on -ENOBUFS.

The independent handling (e.g. fetching only the CPUID or only the MSR
data) is of more use to the toolstack than having Xen try to stumble on
in the face of a hard error.
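For clarity, the intended two-call pattern is (a sketch; the xc_*
wrapper names, entry type names and error handling are hypothetical):

    uint32_t nr_leaves, nr_msrs;
    xen_cpuid_leaf_t *leaves;
    xen_msr_entry_t *msrs;

    /* Call 1: NULL buffers, so Xen reports the maximum sizes. */
    xc_get_cpumsr_policy_size(xch, &nr_leaves, &nr_msrs);

    leaves = calloc(nr_leaves, sizeof(*leaves));
    msrs = calloc(nr_msrs, sizeof(*msrs));

    /* Call 2: fetch the policy.  nr_* are updated to the number of
     * entries actually written. */
    xc_get_cpumsr_policy(xch, XEN_SYSCTL_cpumsr_policy_host,
                         &nr_leaves, leaves, &nr_msrs, msrs);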

>
>> --- a/xen/include/public/sysctl.h
>> +++ b/xen/include/public/sysctl.h
>> @@ -1063,6 +1063,43 @@ struct xen_sysctl_set_parameter {
>>      uint16_t pad[3];                        /* IN: MUST be zero. */
>>  };
>>  
>> +#if defined(__i386__) || defined(__x86_64__)
>> +/*
>> + * XEN_SYSCTL_get_cpumsr_policy (x86 specific)
> Perhaps express the "x86 specific" also in the opcode name? And make it
> more obvious that this is about CPUID and MSRs at the same time? E.g.
> XEN_SYSCTL_x86_get_cpuid_msr_policy?
>
> I'm sure you have reasons to munge it all into a single operation.

(Answering in reverse order)

The get operations don't strictly need to be a single operation.  The
set operation specifically must be a single operation, and the getters
have an interface to match.

As for naming, cpumsr_policy wasn't chosen by me, but I can't think of
anything better.  The code is currently consistent and, while I'm open
to a rename, it would impact a large portion of the series.

One concern I have is what happens if we end up with a new block of
information.  I was hoping for a generic name, but simply "policy" on
its own is too generic.  cpumsr is, I believe, a contraction of
cpuid_msr, to avoid excessive code volume.

Suggestions welcome.

>
>> + * Return information about CPUID and MSR policies available on this host.
>> + *  -       Raw: The real H/W values.
>> + *  -      Host: The values Xen is using, (after command line overrides, etc).
>> + *  -     Max_*: Maximum set of features a PV or HVM guest can use.  Includes
>> + *               experimental features outside of security support.
>> + *  - Default_*: Default set of features a PV or HVM guest can use.  This is
>> + *               the security supported set.
>> + */
>> +struct xen_sysctl_cpumsr_policy {
>> +#define XEN_SYSCTL_cpumsr_policy_raw          0
>> +#define XEN_SYSCTL_cpumsr_policy_host         1
>> +#define XEN_SYSCTL_cpumsr_policy_pv_max       2
>> +#define XEN_SYSCTL_cpumsr_policy_hvm_max      3
>> +#define XEN_SYSCTL_cpumsr_policy_pv_default   4
>> +#define XEN_SYSCTL_cpumsr_policy_hvm_default  5
>> +    uint32_t index;       /* IN: Which policy to query? */
>> +    uint32_t nr_leaves;   /* IN/OUT: Number of leaves in/written to
>> +                           * 'cpuid_policy', or the maximum number of leaves
>> +                           * if any of the guest handles is NULL.
>> +                           * NB. All policies come from the same space,
>> +                           * so have the same maximum length. */
>> +    uint32_t nr_msrs;     /* IN/OUT: Number of MSRs in/written to
>> +                           * 'msr_domain_policy', or the maximum number of
>> +                           * MSRs if any of the guest handles is NULL.
>> +                           * NB. All policies come from the same space,
>> +                           * so have the same maximum length. */
>> +    XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_policy; /* OUT: */
> Explicit padding (checked to be zero in the handler) above here
> please.

Why?  SYSCTLs are unstable and we don't perform similar checks for other
subops.
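For reference, my understanding of what's being asked for, as a sketch:

    uint32_t _pad;        /* IN: MUST be zero. */
    XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_policy; /* OUT: */

with the handler rejecting a nonzero value:

    if ( sysctl->u.cpumsr_policy._pad )
    {
        ret = -EINVAL;
        break;
    }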

~Andrew


 

