
Re: [Xen-devel] [PATCH v9 10/25] x86: refactor psr: L3 CAT: set value: assemble features value array.



>>> On 28.03.17 at 11:11, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
> On 17-03-28 02:36:05, Jan Beulich wrote:
>> >>> On 28.03.17 at 10:05, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
>> > On 17-03-28 11:12:43, Yi Sun wrote:
>> >> On 17-03-27 04:17:28, Jan Beulich wrote:
>> >> > >>> On 16.03.17 at 12:08, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
>> >> > > --- a/xen/arch/x86/psr.c
>> >> > > +++ b/xen/arch/x86/psr.c
>> > [...]
>> > 
>> >> > >  static int gather_val_array(uint32_t val[],
>> >> > > @@ -589,7 +672,34 @@ static int gather_val_array(uint32_t val[],
>> >> > >                              const struct psr_socket_info *info,
>> >> > >                              unsigned int old_cos)
>> >> > >  {
>> >> > > -    return -EINVAL;
>> >> > > +    const struct feat_node *feat;
>> >> > > +    unsigned int i;
>> >> > > +
>> >> > > +    if ( !val )
>> >> > > +        return -EINVAL;
>> >> > > +
>> >> > > +    /* Get all features current values according to old_cos. */
>> >> > > +    for ( i = 0; i < PSR_SOCKET_MAX_FEAT; i++ )
>> >> > > +    {
>> >> > > +        if ( !info->features[i] )
>> >> > > +            continue;
>> >> > > +
>> >> > > +        feat = info->features[i];
>> >> > > +
>> >> > > +        if ( old_cos > feat->ops.get_cos_max(feat) )
>> >> > > +            old_cos = 0;
>> >> > > +
>> >> > > +        /* value getting order is same as feature array */
>> >> > > +        feat->ops.get_old_val(val, feat, old_cos);
>> >> > > +
>> >> > > +        array_len -= feat->cos_num;
>> >> > 
>> >> > So this I should really have asked about on a much earlier patch,
>> >> > but I've recognized the oddity only now: Why is cos_num
>> >> > per-feature-node instead of per-feature? This should really be a
>> >> > field in struct feat_ops (albeit the name "ops" then will be slightly
>> >> > misleading, but I think that's tolerable if you can't think of a better
>> >> > name).
>> >> > 
>> >> Ok, I see what you mean. How about 'feat_props'? Both the operations
>> >> and the variables are properties of the feature.
>> >> 
>> > One more thing here. If we move 'cos_max' into 'feat_ops', we cannot
>> > declare 'feat_ops' as const, because we have to assign a value to
>> > 'cos_max' in cat_init_feature().
>> 
>> I don't see a problem with this. It's only the static variable which
>> can't be const then anymore. The pointer used everywhere else
>> easily can be, afaict.
>> 
> Because I want to assign l3_cat_props to feat->props before executing
> cat_init_feature(). The code sequence is below. Then, in cat_init_feature(),
> I can use 'feat' rather than 'l3_cat_props', which is feature specific.
> 
> static void cat_init_feature(...)
> {
> ......
>     feat->info.cbm_len = (regs->a & CAT_CBM_LEN_MASK) + 1;
>     feat->props->cos_max = min(opt_cos_max, regs->d & CAT_COS_MAX_MASK);
> ......
> }
> 
> static struct feat_props l3_cat_props = {
>     .cos_num = 1,
> };
> 
> static void psr_cpu_init(void)
> {
> ......
>         feat->props = &l3_cat_props;
>         cat_init_feature(&regs, feat, info, PSR_SOCKET_L3_CAT);
> ......
> }

Why not invert the sequence and pass the props pointer explicitly, e.g.

static void psr_cpu_init(void)
{
......
        cat_init_feature(&regs, &l3_cat_props, feat, info, PSR_SOCKET_L3_CAT);
        feat->props = &l3_cat_props;
......
}
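
To make the constness point concrete, here is a minimal sketch (the
field set and the literal value are placeholders, not the actual
patch): only the static instance needs to stay non-const, while every
other reference can go through a const-qualified pointer.

struct feat_props {
    unsigned int cos_num;
    unsigned int cos_max;               /* filled in at init time */
};

struct feat_node {
    const struct feat_props *props;     /* const pointer everywhere else */
};

static struct feat_props l3_cat_props = {   /* non-const only here */
    .cos_num = 1,
};

static void cat_init_feature(struct feat_props *props, struct feat_node *feat)
{
    props->cos_max = 15;    /* placeholder for the CPUID-derived value */
    feat->props = props;    /* later accesses are read-only via the pointer */
}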

> Then, back to the origin of this: I think the feature node is the feature
> itself, and everything in it is feature specific. Is it necessary to move
> the values into a sub-structure, 'feat_props'? If we don't do that, we can
> keep 'feat_ops' handling only the callback functions.

I'm not sure I understand what you're trying to tell me. I can only
repeat what I've said before: the number of feature specific
callbacks should be reduced to the minimum necessary - the more
generic the code, the less code there is to maintain overall.
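
As a rough sketch of the direction I mean (hypothetical, not the
actual patch: it reuses the identifiers from the hunk quoted above,
adds an array_len parameter, and assumes the callbacks have moved
into feat_props): with cos_num and cos_max available as plain data,
only the genuinely feature specific step stays behind a callback.

static int gather_val_array(uint32_t val[], unsigned int array_len,
                            const struct psr_socket_info *info,
                            unsigned int old_cos)
{
    unsigned int i;

    if ( !val )
        return -EINVAL;

    for ( i = 0; i < PSR_SOCKET_MAX_FEAT; i++ )
    {
        const struct feat_node *feat = info->features[i];
        unsigned int cos = old_cos;

        if ( !feat )
            continue;

        if ( array_len < feat->props->cos_num )
            return -ENOSPC;

        /* Generic clamp: cos_max is plain data now, no callback needed. */
        if ( cos > feat->props->cos_max )
            cos = 0;

        /* Only the truly feature specific step remains a callback. */
        feat->props->get_old_val(val, feat, cos);

        /* Generic bookkeeping, driven by plain data as well. */
        val += feat->props->cos_num;
        array_len -= feat->props->cos_num;
    }

    return 0;
}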

Jan

