
Re: [Xen-devel] [PATCH v8 13/24] x86: refactor psr: implement CPU init and free flow for CDP.



On 17-03-09 07:53:16, Jan Beulich wrote:
> >>> On 15.02.17 at 09:49, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
> > --- a/xen/arch/x86/psr.c
> > +++ b/xen/arch/x86/psr.c
> > @@ -97,6 +97,7 @@ struct psr_cat_hw_info {
> >  struct feat_hw_info {
> >      union {
> >          struct psr_cat_hw_info l3_cat_info;
> > +        struct psr_cat_hw_info l3_cdp_info;
> 
> Two union members of the same type? What's the union good for
> then? (I've peeked ahead, and L2 CAT adds yet another one. A
> strong sign that you've gone too far with what needs to be per-
> feature vs what can be common.)
> 
I have corrected this. L3 CAT/CDP and L2 CAT will use a common HW info
structure in the next version.
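
For illustration, a minimal sketch of how that consolidation might look,
assuming the usual CAT properties (cbm_len/cos_max) are all the per-feature
info needed; the names here are hypothetical, not the exact code that will
be posted:

    /* Common HW info shared by L3 CAT, L3 CDP and L2 CAT (sketch only). */
    struct feat_hw_info {
        unsigned int cbm_len;   /* capacity bitmask length */
        unsigned int cos_max;   /* maximum COS ID the feature supports */
    };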

> > @@ -195,6 +196,22 @@ struct feat_node {
> >      struct list_head list;
> >  };
> >  
> > +/*
> > + * get_data - get DATA COS register value from input COS ID.
> > + * @feat:        the feature list entry.
> > + * @cos:         the COS ID.
> > + */
> > +#define get_cdp_data(feat, cos)                  \
> > +            ( feat->cos_reg_val[cos * 2] )
> 
> Stray blanks inside parentheses. Macro parameters need to be
> parenthesized.
> 
It has been corrected in the next version per Roger's comment.
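
For reference, a sketch of the corrected form with the parameters
parenthesized and the stray blanks dropped (get_cdp_code is assumed to be
the companion macro for the CODE register, following the even/odd layout
the hunk implies):

    /* DATA at even, CODE at odd indices of cos_reg_val[] (sketch). */
    #define get_cdp_data(feat, cos) ((feat)->cos_reg_val[(cos) * 2])
    #define get_cdp_code(feat, cos) ((feat)->cos_reg_val[(cos) * 2 + 1])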

[...]
> > @@ -1267,6 +1351,14 @@ static int psr_cpu_prepare(void)
> >           (feat_l3_cat = xzalloc(struct feat_node)) == NULL )
> >          return -ENOMEM;
> >  
> > +    if ( feat_l3_cdp == NULL &&
> > +         (feat_l3_cdp = xzalloc(struct feat_node)) == NULL )
> > +    {
> > +        xfree(feat_l3_cat);
> > +        feat_l3_cat = NULL;
> 
> Why the freeing? We've decided to allow for one node to be kept,
> so no reason to free it on the error path here.
> 
Ok, will correct this.
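
A minimal sketch of the resulting allocation flow, assuming the function
contains nothing beyond these allocations (an earlier allocation is simply
kept across a failed attempt, as suggested):

    static int psr_cpu_prepare(void)
    {
        /* Keep any node allocated on a previous call; only allocate
         * what is still missing, and leave it in place on failure. */
        if ( feat_l3_cat == NULL &&
             (feat_l3_cat = xzalloc(struct feat_node)) == NULL )
            return -ENOMEM;

        if ( feat_l3_cdp == NULL &&
             (feat_l3_cdp = xzalloc(struct feat_node)) == NULL )
            return -ENOMEM;

        return 0;
    }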

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel