
Re: [Xen-devel] [PATCH v9 05/25] x86: refactor psr: L3 CAT: implement CPU init and free flow.



On 17-03-27 00:34:29, Jan Beulich wrote:
> >>> On 27.03.17 at 06:41, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
> > On 17-03-24 10:52:34, Jan Beulich wrote:
> >> >>> On 16.03.17 at 12:07, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
> >> > @@ -46,6 +50,9 @@
> >> >   */
> >> >  #define MAX_COS_REG_CNT  128
> >> >  
> >> > +/* CAT features use 1 COS register in one access. */
> >> > +#define CAT_COS_NUM      1
> >> 
> >> With it being stored into the feature node now I don't see why you
> >> need this constant anymore. And indeed it's being used exactly
> >> once.
> >> 
> > I remember somebody suggested that I define a macro rather than use a bare
> > constant. As it is only used once, I will remove this and 'CDP_COS_NUM' in a
> > later patch.
> 
> It may well have been me, back when this was used in multiple places.
> 
Ok, I got it. Will remove such macros.

> >> > +/*
> >> > + * Use this function to check if any allocation feature has been enabled
> >> > + * in cmdline.
> >> > + */
> >> > +static bool psr_alloc_feat_enabled(void)
> >> > +{
> >> > +    return ((!socket_info) ? false : true );
> >> 
> >> Stray parentheses (all of them actually) and blank. Even more, why
> >> not simply
> >> 
> >>     return socket_info;
> >> 
> >> ?
> >> 
> > How about 'return !!socket_info'?
> 
> And what would the !! be good for? Back when we were still using
> bool_t that would have been a requirement (the code wouldn't
> even have built without it, afaict), but now that we use bool I don't
> see the point (other than cluttering the code). In fact I consider the
> presence of the function questionable as a whole, unless later
> patches add to it.
> 
Per Wei's suggestion, I added this function so that readers can clearly understand
the meaning of the code. Previously, the code simply checked 'if ( !socket_info )'.

Per my test, 'return socket_info' causes a warning when the function's return type
is 'bool'.
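
For reference, a minimal standalone sketch of the helper with a plain NULL check,
which sidesteps both the '!!' idiom and any pointer-to-bool conversion warning.
'socket_info' and the function name are taken from this series; the includes and
the empty struct declaration are only there to make the sketch self-contained:

    #include <stdbool.h>
    #include <stddef.h>

    struct psr_socket_info;                      /* defined elsewhere in the series */
    static struct psr_socket_info *socket_info;

    /* Check if any allocation feature has been enabled via cmdline. */
    static bool psr_alloc_feat_enabled(void)
    {
        return socket_info != NULL;
    }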

> >> > +                             struct feat_node *feat,
> >> > +                             struct psr_socket_info *info,
> >> > +                             enum psr_feat_type type)
> >> > +{
> >> > +    unsigned int socket, i;
> >> > +    struct psr_cat_hw_info cat = { };
> >> > +    uint64_t val;
> >> > +
> >> > +    /* No valid value so do not enable feature. */
> >> > +    if ( !regs.a || !regs.d )
> >> > +        return;
> >> > +
> >> > +    cat.cbm_len = (regs.a & CAT_CBM_LEN_MASK) + 1;
> >> > +    cat.cos_max = min(opt_cos_max, regs.d & CAT_COS_MAX_MASK);
> >> > +
> >> > +    /* cos=0 is reserved as default cbm(all bits within cbm_len are 1). */
> >> > +    feat->cos_reg_val[0] = cat_default_val(cat.cbm_len);
> >> > +    /*
> >> > +     * To handle cpu offline and then online case, we need read MSRs back to
> >> > +     * save values into cos_reg_val array.
> >> > +     */
> >> > +    for ( i = 1; i <= cat.cos_max; i++ )
> >> > +    {
> >> > +        rdmsrl(MSR_IA32_PSR_L3_MASK(i), val);
> >> > +        feat->cos_reg_val[i] = (uint32_t)val;
> >> > +    }
> >> 
> >> You mention this in the changes done, but I don't understand why
> >> you do this. What meaning to these values have to you? If you
> >> want hardware and cached values to match up, the much more
> >> conventional way of enforcing this would be to write the values
> >> you actually want (normally all zero).
> >> 
> > When all cpus on a socket are offline, free_feature() is called to free the
> > features' resources, so the values saved in cos_reg_val[] are lost. When the
> > socket comes online again, the features are allocated again and the
> > cos_reg_val[] members are all initialized to 0; in the old code, only
> > cos_reg_val[0] is initialized to the default value in this function.
> > 
> > But the domain is still alive, so its cos id on the socket is kept. Per my
> > test, the corresponding MSR values are kept too. To make the cos_reg_val[]
> > values match the HW and not mislead the user, we should read the valid HW
> > values back into cos_reg_val[].
> 
> Okay, I understand the background, but I don't view this solution
> as viable: Once the last core on a socket goes offline, all
> references to it should be cleaned up. After all what will be
> brought back online may be a different physical CPU altogether;
> you can't assume MSR values to have survived even if it is the
> same CPU which comes back online, as it may have undergone
> a reset cycle, or BIOS/SMM may have played with the MSRs.
> That's even a possibility for a single core coming back online, so
> you have to reload MSRs explicitly anyway if implicit reloading
> (i.e. once vCPU-s get scheduled onto it) doesn't suffice.
> 
So, you think the MSR values may not be valid after such a process, and that
reloading (writing the MSRs back to their default values) is needed. If so, I
would like to do more in 'free_feature()':
1. Iterate over all domains working on the offline socket and change
   'd->arch.psr_cos_ids[socket]' to COS 0, i.e. restore it to its initial
   state.
2. Restore 'socket_info[socket].cos_ref[]' to all 0.

These steps would restore the socket's info completely to its initial state; a
rough sketch follows below.

What do you think? Thanks!
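
A rough sketch (not the actual patch) of that extra cleanup. 'socket_info',
'cos_ref' and 'psr_cos_ids' are names from this series; the struct layout below
is simplified for the sketch, and the exact locking and iteration details are
assumptions to be checked against the final code:

    #include <xen/sched.h>
    #include <xen/string.h>

    #define MAX_COS_REG_CNT  128

    struct psr_socket_info {
        unsigned int cos_ref[MAX_COS_REG_CNT];   /* simplified for the sketch */
    };

    static struct psr_socket_info *socket_info;

    /* Extra cleanup that free_feature() could do when a socket goes offline. */
    static void free_feature_extra_cleanup(unsigned int socket)
    {
        struct psr_socket_info *info = socket_info + socket;
        struct domain *d;

        /* 1. Point every domain's COS id on this socket back at COS 0. */
        rcu_read_lock(&domlist_read_lock);
        for_each_domain ( d )
        {
            if ( d->arch.psr_cos_ids )
                d->arch.psr_cos_ids[socket] = 0;
        }
        rcu_read_unlock(&domlist_read_lock);

        /* 2. Drop the cached reference counts so the socket is back to its
         *    initial state. */
        memset(info->cos_ref, 0, sizeof(info->cos_ref));
    }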

> >> > +/* L3 CAT ops */
> >> > +static const struct feat_ops l3_cat_ops = {
> >> > +};
> >> 
> >> Leaving an already declared function pointer as NULL? Please don't.
> >> 
> > Ok, will consider to move it and below code into later patch.
> >     feat->ops = l3_cat_ops;
> 
> I don't mind the empty structure instance above, as long as the
> structure doesn't have any function pointer members yet (data
> members are almost always fine).
> 
To explain how the data structures fit together, I declared '(*get_cos_max)' in
'struct feat_ops' in patch 3. So, do you mind if I remove that declaration and
just keep an empty 'struct feat_ops' in patch 3, so that we can keep the current
code in this patch?
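
For illustration, a minimal sketch of how the later patch might then introduce
the hook together with its implementation (the get_cos_max signature is assumed
from patch 3, and 'struct feat_node' is simplified here to keep the sketch
self-contained):

    /* Later patch: first hook appears together with its L3 CAT instance. */
    struct feat_node {
        unsigned int cos_max;                    /* simplified for the sketch */
    };

    struct feat_ops {
        /* Get the maximum COS number the feature supports. */
        unsigned int (*get_cos_max)(const struct feat_node *feat);
    };

    static unsigned int l3_cat_get_cos_max(const struct feat_node *feat)
    {
        return feat->cos_max;
    }

    static const struct feat_ops l3_cat_ops = {
        .get_cos_max = l3_cat_get_cos_max,
    };

That way patch 3 only carries the empty 'struct feat_ops', and no declared
function pointer is ever left NULL in an ops instance.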

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

