
Re: [PATCH 2/3] acpi/processor: sanitize _PDC buffer bits when running as Xen dom0


  • To: Jason Andryuk <jandryuk@xxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 16 Jun 2023 16:39:34 +0200
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, jgross@xxxxxxxx, stable@xxxxxxxxxxxxxxx, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, Thomas Gleixner <tglx@xxxxxxxxxxxxx>, Ingo Molnar <mingo@xxxxxxxxxx>, Borislav Petkov <bp@xxxxxxxxx>, Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>, x86@xxxxxxxxxx, "H. Peter Anvin" <hpa@xxxxxxxxx>, "Rafael J. Wysocki" <rafael@xxxxxxxxxx>, Len Brown <lenb@xxxxxxxxxx>, linux-acpi@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx
  • Delivery-date: Fri, 16 Jun 2023 14:40:13 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Jun 14, 2023 at 03:57:11PM -0400, Jason Andryuk wrote:
> Hi, Roger,
> 
> On Mon, Nov 21, 2022 at 10:04 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> >
> > On Mon, Nov 21, 2022 at 03:10:36PM +0100, Jan Beulich wrote:
> > > On 21.11.2022 11:21, Roger Pau Monne wrote:
> > > > --- a/drivers/acpi/processor_pdc.c
> > > > +++ b/drivers/acpi/processor_pdc.c
> > > > @@ -137,6 +137,14 @@ acpi_processor_eval_pdc(acpi_handle handle, struct acpi_object_list *pdc_in)
> > > >             buffer[2] &= ~(ACPI_PDC_C_C2C3_FFH | ACPI_PDC_C_C1_FFH);
> > > >
> > > >     }
> > > > +   if (xen_initial_domain())
> > > > +           /*
> > > > +            * When Linux is running as Xen dom0 it's the hypervisor the
> > > > +            * entity in charge of the processor power management, and so
> > > > +            * Xen needs to check the OS capabilities reported in the _PDC
> > > > +            * buffer matches what the hypervisor driver supports.
> > > > +            */
> > > > +           xen_sanitize_pdc((uint32_t *)pdc_in->pointer->buffer.pointer);
> > > >     status = acpi_evaluate_object(handle, "_PDC", pdc_in, NULL);
> > >
> > > Again looking at our old XenoLinux forward port we had this inside the
> > > earlier if(), as an _alternative_ to the &= (I don't think it's valid
> > > to apply both the kernel's and Xen's adjustments). That would also let
> > > you use "buffer" rather than re-calculating it via yet another (risky
> > > from an abstract pov) cast.
> >
> > Hm, I've wondered about this and decided it wasn't worth
> > short-circuiting the boot_option_idle_override conditional, because
> > ACPI_PDC_C_C2C3_FFH and ACPI_PDC_C_C1_FFH will be set anyway by Xen in
> > arch_acpi_set_pdc_bits() as part of ACPI_PDC_C_CAPABILITY_SMP.
> >
> > I could re-use some of the code in there, but didn't want to make it
> > more difficult to read just for the benefit of reusing buffer.
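
For context, the relevant capability mask (include/acpi/pdc_intel.h,
quoted here from memory and with the unrelated bits elided) already
contains both FFH flags, which is why Xen ends up enabling them
regardless of any dom0-side masking:

    #define ACPI_PDC_C_CAPABILITY_SMP  (/* ...C1/SMP coordination bits... */ \
                                        ACPI_PDC_C_C1_FFH | \
                                        ACPI_PDC_C_C2C3_FFH)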
> >
> > > It was the very nature of requiring Xen-specific conditionals which I
> > > understand was the reason why so far no attempt was made to get this
> > > (incl the corresponding logic for patch 1) into any upstream kernel.
> >
> > Yes, well, it's all kind of ugly.  Hence my suggestion to simply avoid
> > doing any ACPI Processor object handling in Linux with the native code
> > and handle it all in a Xen-specific driver.  That requires the Xen
> > driver being able to fetch more data itself from the ACPI Processor
> > methods, but it also unties it from the dependency on the data being
> > filled by the generic code, and from the 'tricks' it plays to fool the
> > generic code into thinking certain processors are online.
> 
> Are you working on this patch anymore?

Not really.  I didn't get any feedback from the maintainers (apart from
Jan's comments, which I do value), and I wasn't aware of this causing
issues or being required by any other work, so I kind of dropped it (I
have plenty of other stuff to work on).

> My Xen HWP patches need a
> Linux patch like this one to set bit 12 in the PDC.  I had an affected
> user test with this patch and it worked, serving as an equivalent of
> Linux commit a21211672c9a ("ACPI / processor: Request native thermal
> interrupt handling via _OSC").
> 
> Another idea is to use Linux's arch_acpi_set_pdc_bits() to make the
> hypercall to Xen.  It occurs earlier:
> acpi_processor_set_pdc()
>     acpi_processor_alloc_pdc()
>         acpi_set_pdc_bits()
>             arch_acpi_set_pdc_bits()
>     acpi_processor_eval_pdc()
> 
> So the IDLE_NOMWAIT masking in acpi_processor_eval_pdc() would still
> apply.  arch_acpi_set_pdc_bits() is provided the buffer, so it's a
> little cleaner in that respect.

I see.  My reasoning for placing the Xen filtering in
acpi_processor_eval_pdc() was to ensure that Linux makes no further
modifications to the buffer after the call that sanitizes it
(XENPF_set_processor_pminfo).
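
For reference, the sanitize call is essentially a thin wrapper around
that platform hypercall.  Roughly (field names quoted from memory, so
treat this as an illustration of the flow rather than the exact code in
the series):

    static void xen_sanitize_pdc(uint32_t *buf)
    {
        struct xen_platform_op op = {
            .cmd = XENPF_set_processor_pminfo,
            .u.set_pminfo.type = XEN_PM_PDC,
        };

        /* Hand the capability buffer to Xen, which filters it in place. */
        set_xen_guest_handle(op.u.set_pminfo.u.pdc, buf);

        HYPERVISOR_platform_op(&op);
    }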

I think that if the filtering done by Xen is moved to
arch_acpi_set_pdc_bits(), we would then need to disable the evaluation
of boot_option_idle_override in acpi_processor_eval_pdc(), as we don't
want dom0 choices affecting the selection of _PDC features done by
Xen?
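
To make the ordering concern concrete, a rough sketch of that
alternative (purely illustrative, reusing the xen_sanitize_pdc() helper
from this series) would look something like:

    /* arch/x86/include/asm/acpi.h -- hypothetical placement. */
    static inline void arch_acpi_set_pdc_bits(u32 *buf)
    {
        buf[2] |= ACPI_PDC_C_CAPABILITY_SMP;
        /* ...existing EST/T_FFH/MWAIT handling elided... */

        if (xen_initial_domain())
            xen_sanitize_pdc(buf);
    }

    /*
     * acpi_processor_eval_pdc() runs later, so its IDLE_NOMWAIT masking
     * would still modify the buffer after Xen has sanitized it -- which
     * is exactly the dom0-overriding-Xen situation I'd like to avoid.
     */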

In any case, feel free to pick this patch and re-submit upstream if
you want.

Thanks, Roger.
