
Re: [Xen-devel] [PATCH v4 10/11] x86/intel_pstate: support the use of intel_pstate in pmstat.c



On 09/09/2015 16:17,  Jan Beulich wrote:
>>> On 10.09.15 at 07:35, <wei.w.wang@xxxxxxxxx> wrote:
> On 09/09/2015 23:55,  Jan Beulich wrote:
>>>> On 09.09.15 at 17:16, <wei.w.wang@xxxxxxxxx> wrote:
>> On 09/09/2015 21:12,  Jan Beulich wrote:
>>>>> On 09.09.15 at 14:56, <wei.w.wang@xxxxxxxxx> wrote:
>>> Can you please explain more why it doesn't scale? 
>>> From my point of view, any other future value representation can be 
>>> passed from the producer to the related consumer through this method.
>> 
>>> Did you read all of my earlier replies? I already said there "Just
>>> consider what happens to the code when we end up gaining a few
>>> more drivers providing percentage values, and perhaps another one
>>> providing a third variant of output representation."
>> 
>> Yes, I have read that. I am not sure if I got your point, but my
>> meaning was when we add new drivers to the code, e.g. an xx_pstate
>> driver, we can still have the name, "xx_pstate", assigned to
>> "p_cpufreq->scaling_driver" to distinguish one from another. If the
>> driver uses a different variant of output representation, which
>> cannot be held by "uint32_t scaling_max_perf" (it needs "uint64_t",
>> for example), then that driver developer needs to add a new field
>> here, like "uint64_t scaling_max_perf_xx".
>> What is the scaling problem?
> 
>>      if (strcmp() == 0 ||
>>          strcmp() == 0 ||
>>          strcmp() == 0) {
>>      ...
>>      } else if (strcmp() == 0) {
>>      ...
>>      } else {
>>      ...
>>      }
> 
>> is just ugly, and gets all the uglier the more strcmp()s get added.
>> Have a boolean or enumeration indicating what kind of data there is,
>> and the above changes to
> 
>>      switch (kind) {
>>      case absolute: ...
>>      case percentage: ...
>>      }
> 
> Ok. I will replace the default "scaling_driver[CPUFREQ_NAME_LEN]" with
> an enum type, like the following ...
> - char scaling_driver[CPUFREQ_NAME_LEN];
> + enum scaling_driver_flag scaling_driver;
> ...
> 
> We cannot keep both of the above there, because there is a 128-byte
> size limit. Then somewhere, we need to translate the
> character-represented scaling_driver to our new enum-represented
> scaling_driver. For example, in pmstat.c, the following:
> 
> if ( cpufreq_driver->name[0] )
>     strlcpy(op->u.get_para.scaling_driver,
>             cpufreq_driver->name, CPUFREQ_NAME_LEN);
> else
>     strlcpy(op->u.get_para.scaling_driver, "Unknown",
>             CPUFREQ_NAME_LEN);
> 
> needs to be changed to:
> if ( strncmp(cpufreq_driver->name, "intel_pstate", CPUFREQ_NAME_LEN) == 0 )
>     op->u.get_para.scaling_driver = INTEL_PSTATE;
> else if ( strncmp(cpufreq_driver->name, "acpi_cpufreq", CPUFREQ_NAME_LEN) == 0 )
>     op->u.get_para.scaling_driver = ACPI_CPUFREQ;
> ...
> 
> Seems we still cannot get rid of these strncmp()s. Is this acceptable,
> or should we change "struct cpufreq_driver" to use an enum-represented
> driver name as well, or do you have a better suggestion?

> The one I explained before: Express the data representation type in an enum, 
> not the driver kind. But even if we went with the 
> above, the strcmp() ugliness would at least be limited to the producer, not 
> enforced onto any number of consumers.
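
If I understand this correctly, you mean something along the lines of the
rough sketch below, where the producer tags the kind of value and every
consumer only switches on that kind (the field and enumerator names here
are made up, just for illustration):

/* public interface: what the perf values in the struct represent */
enum perf_value_kind {
    PERF_VALUE_ABSOLUTE = 0,   /* frequencies, as today */
    PERF_VALUE_PERCENTAGE,     /* percentages, as with intel_pstate */
};

/* producer (pmstat.c): set once, according to the driver in use */
op->u.get_para.value_kind = PERF_VALUE_PERCENTAGE;

/* consumer (e.g. xenpm): no strcmp() on driver names */
switch ( p_cpufreq->value_kind )
{
case PERF_VALUE_PERCENTAGE:
    /* print scaling_max_perf etc. as percentages */
    break;
case PERF_VALUE_ABSOLUTE:
default:
    /* print them as frequencies */
    break;
}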

No. I think in our previous discussion there was no problem with "the
data representation type": any future data representation, as long as it
fits in "uint32_t", can use "uint32_t scaling_max_perf" to hold that
value. Your concern was that the following doesn't scale well:

+    if (!strncmp(p_cpufreq->scaling_driver,
+                  "intel_pstate", CPUFREQ_NAME_LEN) )

So we are trying to change the driver name to be an enum.
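
A rough sketch of what I have in mind is below; the enumerator names and
the exact placement are tentative, just to illustrate the idea:

enum scaling_driver_flag {
    SCALING_DRIVER_UNKNOWN = 0,
    SCALING_DRIVER_ACPI_CPUFREQ,
    SCALING_DRIVER_INTEL_PSTATE,
};

/* pmstat.c (producer): translate the driver name string once */
if ( !strncmp(cpufreq_driver->name, "intel_pstate", CPUFREQ_NAME_LEN) )
    op->u.get_para.scaling_driver = SCALING_DRIVER_INTEL_PSTATE;
else if ( !strncmp(cpufreq_driver->name, "acpi_cpufreq", CPUFREQ_NAME_LEN) )
    op->u.get_para.scaling_driver = SCALING_DRIVER_ACPI_CPUFREQ;
else
    op->u.get_para.scaling_driver = SCALING_DRIVER_UNKNOWN;

/* consumers (e.g. xenpm): switch on the enum, no strcmp() any more */
switch ( p_cpufreq->scaling_driver )
{
case SCALING_DRIVER_INTEL_PSTATE:
    /* scaling_max_perf etc. hold percentage values */
    break;
default:
    /* scaling_max_perf etc. hold frequencies */
    break;
}

This way the remaining strncmp()s are at least confined to pmstat.c,
rather than repeated in every consumer.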

Best,
Wei


