
Re: [PATCH 06/13] cpufreq: Export HWP parameters to userspace



On 03.05.2021 21:28, Jason Andryuk wrote:
> --- a/xen/arch/x86/acpi/cpufreq/hwp.c
> +++ b/xen/arch/x86/acpi/cpufreq/hwp.c
> @@ -523,6 +523,30 @@ static const struct cpufreq_driver __initconstrel hwp_cpufreq_driver =
>      .update = hwp_cpufreq_update,
>  };
>  
> +int get_hwp_para(struct cpufreq_policy *policy, struct xen_hwp_para *hwp_para)
> +{
> +    unsigned int cpu = policy->cpu;
> +    struct hwp_drv_data *data = hwp_drv_data[cpu];

const, perhaps also for the first function parameter, and in
general (throughout the series) whenever an item is not meant to
be modified.

> +    if ( data == NULL )
> +        return -EINVAL;
> +
> +    hwp_para->hw_feature        =
> +        feature_hwp_activity_window ? XEN_SYSCTL_HWP_FEAT_ACT_WINDOW  : 0 |
> +        feature_hwp_energy_perf     ? XEN_SYSCTL_HWP_FEAT_ENERGY_PERF : 0;

This needs parentheses added, as | has higher precedence than ?:.

> --- a/xen/drivers/acpi/pmstat.c
> +++ b/xen/drivers/acpi/pmstat.c
> @@ -290,6 +290,12 @@ static int get_cpufreq_para(struct xen_sysctl_pm_op *op)
>              &op->u.get_para.u.ondemand.sampling_rate,
>              &op->u.get_para.u.ondemand.up_threshold);
>      }
> +
> +    if ( !strncasecmp(op->u.get_para.scaling_governor,
> +                      "hwp-internal", CPUFREQ_NAME_LEN) )
> +    {
> +        ret = get_hwp_para(policy, &op->u.get_para.u.hwp_para);
> +    }
>      op->u.get_para.turbo_enabled = cpufreq_get_turbo_status(op->cpuid);

Nit: Unnecessary braces again, and with the leading blank line
you also want a trailing one. (As an aside, I'm also not overly happy
to see the call keyed to the governor name. Is there really no other
indication that HWP is in use?)

> --- a/xen/include/acpi/cpufreq/cpufreq.h
> +++ b/xen/include/acpi/cpufreq/cpufreq.h
> @@ -246,4 +246,7 @@ int write_userspace_scaling_setspeed(unsigned int cpu, unsigned int freq);
>  void cpufreq_dbs_timer_suspend(void);
>  void cpufreq_dbs_timer_resume(void);
>  
> +/********************** hwp hypercall helper *************************/
> +int get_hwp_para(struct cpufreq_policy *policy, struct xen_hwp_para *hwp_para);

While I can see that the excessive number of stars matches what
we have elsewhere in the header, I still wonder if you need to go
this far for a single declaration. If you want to stick to this,
then to match the rest of the file you want to follow the comment
by a blank line.

> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -35,7 +35,7 @@
>  #include "domctl.h"
>  #include "physdev.h"
>  
> -#define XEN_SYSCTL_INTERFACE_VERSION 0x00000013
> +#define XEN_SYSCTL_INTERFACE_VERSION 0x00000014

As long as the size of struct xen_get_cpufreq_para doesn't change
(which from the description I infer it doesn't), I don't think you
need to bump the interface version - your change is a pure addition
to the interface.

> @@ -301,6 +301,23 @@ struct xen_ondemand {
>      uint32_t up_threshold;
>  };
>  
> +struct xen_hwp_para {
> +    uint16_t activity_window; /* 7bit mantissa and 3bit exponent */

If you go this far with commenting, you should also make the further
aspects clear: which bits these are, and that the exponent uses 10
as its base (in most other cases one would expect base 2).

> +#define XEN_SYSCTL_HWP_FEAT_ENERGY_PERF (1 << 0) /* energy_perf range 0-255 if
> +                                                    1. Otherwise 0-15 */
> +#define XEN_SYSCTL_HWP_FEAT_ACT_WINDOW  (1 << 1) /* activity_window supported
> +                                                    if 1 */
> +#define XEN_SYSCTL_HWP_FEAT_ACT_WINDOW  (1 << 1) /* activity_window supported
> +                                                    if 1 */

Style: Comment formatting. You may want to move the comments onto
separate lines ahead of what they comment.

> +    uint8_t hw_feature; /* bit flags for features */
> +    uint8_t hw_lowest;
> +    uint8_t hw_most_efficient;
> +    uint8_t hw_guaranteed;
> +    uint8_t hw_highest;

Any particular reason for the recurring hw_ prefixes?

Jan



 

