
[Xen-devel] RE: [PATCH 2] X86: cpufreq get_cur_val adjustment



Jan Beulich wrote:
>>>> "Liu, Jinsong" <jinsong.liu@xxxxxxxxx> 11.02.09 09:46 >>>
>> X86: cpufreq get_cur_val adjustment
>> 
>> c/s 19149 updated the cpufreq get_cur_val logic to avoid a cross-processor
>> call, which is a good improvement.
>> However, to avoid a NULL drv_data pointer, this patch adjusts some of that
>> logic, keeping the advantage of c/s 19149 while still avoiding the NULL
>> drv_data dereference.
> 
> Are you saying that there are cases where
> cpufreq_cpu_policy[cpu]->cpu != cpu? 
> 
> And shouldn't drv_data[] be initialized for all known CPUs (possibly
> set to the same value for several of them)? 
> 
> The patch you submitted would only be needed if the answer is 'yes'
> to the first question, and 'no' to the second (and even then I would
> think fixing drv_data[] initialization would be better than the patch
> presented here).   
> 
> Jan

You have eagle eyes :)

For the 1st question, the answer is yes.
    - in cpufreq_cpu_policy[cpu], cpu is the processor number used as an index;
    - in cpufreq_cpu_policy[cpu]->cpu, the latter cpu is the 'main cpu' of the 
coordination domain, so the two can differ (see the illustration below);
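
A minimal illustration of that distinction (the CPU numbers and domain layout 
below are invented purely for this example):

    /*
     * Example only: one _PSD coordination domain covering CPUs 2 and 3,
     * whose 'main cpu' is CPU 2.
     *
     *   cpufreq_cpu_policy[2]->cpu == 2    index == ->cpu
     *   cpufreq_cpu_policy[3]->cpu == 2    index != ->cpu
     *
     * So code that assumes cpufreq_cpu_policy[cpu]->cpu == cpu would be
     * wrong for CPU 3 here.
     */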

For the 2nd question, the answer is no.
    - this is inherited from native Linux; although it seems a little strange, 
native Linux really does work this way;
    - in a _PSD domain there is a 'main cpu', and only drv_data[main_cpu] is 
non-NULL;
    - I agree with you that logically drv_data[] could be a per-domain data 
structure rather than the current per-cpu one; however, native Linux has no 
per-domain level structure, and considering the different coordination types, a 
per-domain structure would be quite complex. I think the current drv_data is 
fine: it works and is compatible with the latest native Linux (i.e. 2.6.26.5). 
A sketch of the resulting NULL-safe lookup follows below.
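
To make the lookup pattern concrete, here is a minimal sketch, not the 
submitted patch; apart from cpufreq_cpu_policy, drv_data and policy->cpu, which 
appear in this thread, every name in it (including the helper at the end) is an 
assumption:

    static uint32_t get_cur_val_sketch(unsigned int cpu)
    {
        struct cpufreq_policy *policy = cpufreq_cpu_policy[cpu];

        /* No policy registered for this CPU yet: nothing to read. */
        if (policy == NULL)
            return 0;

        /*
         * drv_data[] is only populated for the 'main cpu' of the _PSD
         * coordination domain, so index it with policy->cpu rather than
         * with the CPU we were asked about.
         */
        if (drv_data[policy->cpu] == NULL)
            return 0;

        /* Read the current frequency on the domain's main cpu. */
        return read_cur_freq_on(policy->cpu);  /* hypothetical helper */
    }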

Thanks,
Jinsong



 

