
Re: [PATCH v2 11/17] x86/CPUID: adjust extended leaves out of range clearing


  • To: Jan Beulich <jbeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Thu, 15 Apr 2021 13:48:22 +0100
  • Cc: George Dunlap <george.dunlap@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 15 Apr 2021 12:48:42 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 23/11/2020 14:32, Jan Beulich wrote:
> A maximum extended leaf input value with the high half different from
> 0x8000 should not be considered valid - all leaves should be cleared in
> this case.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> v2: Integrate into series.
>
> --- a/tools/tests/cpu-policy/test-cpu-policy.c
> +++ b/tools/tests/cpu-policy/test-cpu-policy.c
> @@ -516,11 +516,22 @@ static void test_cpuid_out_of_range_clea
>              },
>          },
>          {
> +            .name = "no extd",
> +            .nr_markers = 0,
> +            .p = {
> +                /* Clears all markers. */
> +                .extd.max_leaf = 0,
> +
> +                .extd.vendor_ebx = 0xc2,
> +                .extd.raw_fms = 0xc2,
> +            },
> +        },
> +        {
>              .name = "extd",
>              .nr_markers = 1,
>              .p = {
>                  /* Retains marker in leaf 0.  Clears others. */
> -                .extd.max_leaf = 0,
> +                .extd.max_leaf = 0x80000000,
>                  .extd.vendor_ebx = 0xc2,
>  
>                  .extd.raw_fms = 0xc2,
> --- a/xen/lib/x86/cpuid.c
> +++ b/xen/lib/x86/cpuid.c
> @@ -232,7 +232,9 @@ void x86_cpuid_policy_clear_out_of_range
>                      ARRAY_SIZE(p->xstate.raw) - 1);
>      }
>  
> -    zero_leaves(p->extd.raw, (p->extd.max_leaf & 0xffff) + 1,
> +    zero_leaves(p->extd.raw,
> +                ((p->extd.max_leaf >> 16) == 0x8000
> +                 ? (p->extd.max_leaf & 0xffff) + 1 : 0),
>                  ARRAY_SIZE(p->extd.raw) - 1);
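
For illustration, a minimal standalone sketch of the index computation the hunk above introduces; the helper name is made up for this sketch, and in the real code the result feeds straight into zero_leaves():

    #include <stdint.h>
    #include <stdio.h>

    /*
     * First extended-leaf index to start zeroing from.  With the change
     * above, a max_leaf whose high half is not 0x8000 yields 0, i.e. all
     * extended leaves (including leaf 0x80000000 itself) get cleared.
     */
    static unsigned int first_leaf_to_zero(uint32_t max_leaf)
    {
        return (max_leaf >> 16) == 0x8000 ? (max_leaf & 0xffff) + 1 : 0;
    }

    int main(void)
    {
        printf("%u\n", first_leaf_to_zero(0x80000008)); /* 9: keep leaves 0..8 */
        printf("%u\n", first_leaf_to_zero(0));          /* 0: clear everything */
        return 0;
    }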

Honestly, this is unnecessary complexity and overhead, and the logic is
already hard enough to follow.

There won't be an extd.max_leaf with the high half != 0x8000 in real
policies, because of how we fill them.  Nor ought there to be, given the
intended meaning of this part of the union.

I think we should simply forbid this case, rather than taking on extra
complexity to cope with it.  Approximately all VMs will have 0x80000008 as
a minimum, and I don't think catering for pre-64-bit Intel CPUs is worth
our effort either.
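
A hypothetical sketch of that alternative, i.e. rejecting such a max_leaf up front so the clearing logic can stay simple; the function name and placement are illustrative only, not actual Xen code:

    #include <assert.h>
    #include <stdint.h>

    /*
     * Illustrative only: treat an extended max_leaf whose high half is not
     * 0x8000 as a malformed policy when the policy is built, so the
     * out-of-range clearing can keep the simpler
     * (p->extd.max_leaf & 0xffff) + 1 form.
     */
    static void check_extd_max_leaf(uint32_t max_leaf)
    {
        assert((max_leaf >> 16) == 0x8000);
    }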

~Andrew
