Re: [PATCH v4 05/15] pmstat&xenpm: Re-arrage for cpufreq union
On Thu, Jun 15, 2023 at 10:38 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>
> On 14.06.2023 20:02, Jason Andryuk wrote:
> > Move some code around now that common xen_sysctl_pm_op get_para fields
> > are together. In particular, the scaling governor information like
> > scaling_available_governors is inside the union, so it is not always
> > available.
> >
> > With that, gov_num may be 0, so bounce buffer handling needs
> > to be modified.
> >
> > scaling_governor won't be filled for hwp, so this will simplify the
> > change when it is introduced.
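
The split being described can be pictured roughly as follows. This is only an
illustrative sketch pieced together from the field names visible in the hunks
below; it is not the actual interface definition (that lives in Xen's public
sysctl headers), and the struct name, the scalar types and the CPUFREQ_NAME_LEN
value are assumptions made here for illustration.

#include <stdint.h>

/* Illustrative layout only -- not the real xen_sysctl_pm_op definition. */
#define CPUFREQ_NAME_LEN 16              /* assumed name-length limit */

struct get_para_sketch {
    /* Common fields, valid for every cpufreq driver. */
    uint32_t cpuinfo_cur_freq;
    uint32_t cpuinfo_max_freq;
    uint32_t cpuinfo_min_freq;
    uint32_t turbo_enabled;
    char     scaling_driver[CPUFREQ_NAME_LEN];

    union {
        /* Governor-based drivers only; not filled for hwp. */
        struct {
            uint32_t scaling_cur_freq;
            uint32_t scaling_max_freq;
            uint32_t scaling_min_freq;
            char     scaling_governor[CPUFREQ_NAME_LEN];
            union {
                struct {
                    uint32_t sampling_rate;
                    uint32_t up_threshold;
                } ondemand;
            } u;
        } s;
        /* An hwp/cppc member is added later in the series. */
    } u;
};
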
>
> While I think this suitably describes the tool stack side changes, ...
>
> > --- a/xen/drivers/acpi/pmstat.c
> > +++ b/xen/drivers/acpi/pmstat.c
> > @@ -239,11 +239,24 @@ static int get_cpufreq_para(struct xen_sysctl_pm_op *op)
> > if ( ret )
> > return ret;
> >
> > + op->u.get_para.cpuinfo_cur_freq =
> > + cpufreq_driver.get ? cpufreq_driver.get(op->cpuid) : policy->cur;
> > + op->u.get_para.cpuinfo_max_freq = policy->cpuinfo.max_freq;
> > + op->u.get_para.cpuinfo_min_freq = policy->cpuinfo.min_freq;
> > + op->u.get_para.turbo_enabled = cpufreq_get_turbo_status(op->cpuid);
> > +
> > + if ( cpufreq_driver.name[0] )
> > + strlcpy(op->u.get_para.scaling_driver,
> > + cpufreq_driver.name, CPUFREQ_NAME_LEN);
> > + else
> > + strlcpy(op->u.get_para.scaling_driver, "Unknown", CPUFREQ_NAME_LEN);
> > +
> > if ( !(scaling_available_governors =
> > xzalloc_array(char, gov_num * CPUFREQ_NAME_LEN)) )
> > return -ENOMEM;
> > - if ( (ret = read_scaling_available_governors(scaling_available_governors,
> > - gov_num * CPUFREQ_NAME_LEN * sizeof(char))) )
> > + if ( (ret = read_scaling_available_governors(
> > + scaling_available_governors,
> > + gov_num * CPUFREQ_NAME_LEN * sizeof(char))) )
> > {
> > xfree(scaling_available_governors);
> > return ret;
> > @@ -254,26 +267,16 @@ static int get_cpufreq_para(struct xen_sysctl_pm_op *op)
> > if ( ret )
> > return ret;
> >
> > - op->u.get_para.cpuinfo_cur_freq =
> > - cpufreq_driver.get ? cpufreq_driver.get(op->cpuid) : policy->cur;
> > - op->u.get_para.cpuinfo_max_freq = policy->cpuinfo.max_freq;
> > - op->u.get_para.cpuinfo_min_freq = policy->cpuinfo.min_freq;
> > -
> > op->u.get_para.u.s.scaling_cur_freq = policy->cur;
> > op->u.get_para.u.s.scaling_max_freq = policy->max;
> > op->u.get_para.u.s.scaling_min_freq = policy->min;
> >
> > - if ( cpufreq_driver.name[0] )
> > - strlcpy(op->u.get_para.scaling_driver,
> > - cpufreq_driver.name, CPUFREQ_NAME_LEN);
> > - else
> > - strlcpy(op->u.get_para.scaling_driver, "Unknown", CPUFREQ_NAME_LEN);
> > -
> > if ( policy->governor->name[0] )
> > strlcpy(op->u.get_para.u.s.scaling_governor,
> > policy->governor->name, CPUFREQ_NAME_LEN);
> > else
> > - strlcpy(op->u.get_para.u.s.scaling_governor, "Unknown", CPUFREQ_NAME_LEN);
> > + strlcpy(op->u.get_para.u.s.scaling_governor, "Unknown",
> > + CPUFREQ_NAME_LEN);
> >
> > /* governor specific para */
> > if ( !strncasecmp(op->u.get_para.u.s.scaling_governor,
> > @@ -291,7 +294,6 @@ static int get_cpufreq_para(struct xen_sysctl_pm_op *op)
> > &op->u.get_para.u.s.u.ondemand.sampling_rate,
> > &op->u.get_para.u.s.u.ondemand.up_threshold);
> > }
> > - op->u.get_para.turbo_enabled = cpufreq_get_turbo_status(op->cpuid);
> >
> > return ret;
> > }
>
> ... all I see on the hypervisor side is re-ordering of steps and re-formatting
> of over-long lines. It's not clear to me why what you do is necessary for your
> purpose.
The purpose was to move the accesses to the nested struct and union
"op->u.get_para.u.s.u" to the end of the function, and the accesses to
the common fields (e.g. op->u.get_para.turbo_enabled) earlier. This
simplifies the changes in "cpufreq: Export HWP parameters to userspace
as CPPC", where the governor-specific fields gain a level of
indentation, and that needed some re-formatting. Some re-formatting
also slipped in while I rebased the changes - sorry about that.
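
Concretely, after this patch get_cpufreq_para() ends up shaped roughly as
below. This is an abbreviated sketch pieced together from the hunks quoted
above, not the full function: the scaling_available_governors bounce-buffer
handling, the error paths and the ondemand-specific branch are elided, the
signature is simplified (policy is passed in directly here), and the _sketch
suffix only marks it as illustrative.

static int get_cpufreq_para_sketch(struct xen_sysctl_pm_op *op,
                                   struct cpufreq_policy *policy)
{
    /* 1. Driver-independent fields first. */
    op->u.get_para.cpuinfo_cur_freq =
        cpufreq_driver.get ? cpufreq_driver.get(op->cpuid) : policy->cur;
    op->u.get_para.cpuinfo_max_freq = policy->cpuinfo.max_freq;
    op->u.get_para.cpuinfo_min_freq = policy->cpuinfo.min_freq;
    op->u.get_para.turbo_enabled = cpufreq_get_turbo_status(op->cpuid);

    if ( cpufreq_driver.name[0] )
        strlcpy(op->u.get_para.scaling_driver,
                cpufreq_driver.name, CPUFREQ_NAME_LEN);
    else
        strlcpy(op->u.get_para.scaling_driver, "Unknown", CPUFREQ_NAME_LEN);

    /* ... scaling_available_governors copy-out elided ... */

    /* 2. Governor-specific union members grouped at the end, so the HWP
     *    patch can treat this tail as a single block. */
    op->u.get_para.u.s.scaling_cur_freq = policy->cur;
    op->u.get_para.u.s.scaling_max_freq = policy->max;
    op->u.get_para.u.s.scaling_min_freq = policy->min;

    if ( policy->governor->name[0] )
        strlcpy(op->u.get_para.u.s.scaling_governor,
                policy->governor->name, CPUFREQ_NAME_LEN);
    else
        strlcpy(op->u.get_para.u.s.scaling_governor, "Unknown",
                CPUFREQ_NAME_LEN);

    /* ... ondemand sampling_rate / up_threshold handling elided ... */

    return 0;
}
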
Regards,
Jason