Re: [Xen-devel] [PATCH v4 04/24] x86: refactor psr: implement CPU init and free flow.
>>> On 14.12.16 at 05:07, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
> @@ -141,11 +144,79 @@ struct psr_assoc {
>
>  struct psr_cmt *__read_mostly psr_cmt;
>
> +static struct psr_socket_info *__read_mostly socket_info;
> +
>  static unsigned int opt_psr;
>  static unsigned int __initdata opt_rmid_max = 255;
> +static unsigned int __read_mostly opt_cos_max = MAX_COS_REG_CNT;
>  static uint64_t rmid_mask;
>  static DEFINE_PER_CPU(struct psr_assoc, psr_assoc);
>
> +/* Declare feature list entry. */
> +static struct feat_node *feat_l3_cat;

Hmm, if you indeed (again) need such a helper object, then please make
the comment actually say so. As it is, the comment is mostly
meaningless.

> +/* Common functions. */
> +static void free_feature(struct psr_socket_info *info)
> +{
> +    struct feat_node *feat_tmp;
> +
> +    if ( !info )
> +        return;
> +
> +    list_for_each_entry(feat_tmp, &info->feat_list, list)
> +    {
> +        clear_bit(feat_tmp->feature, &info->feat_mask);
> +        list_del(&feat_tmp->list);
> +        xfree(feat_tmp);
> +    }

This requires list_for_each_entry_safe() to be used, to avoid a
use-after-free issue (or alternatively a while(!list_empty()) loop).
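Something along these lines would do (just a sketch, re-using the names
from the hunk above; the _safe variant latches the next element before
the loop body may free the current one):

static void free_feature(struct psr_socket_info *info)
{
    struct feat_node *feat, *next;

    if ( !info )
        return;

    /* "next" is fetched before "feat" gets freed below. */
    list_for_each_entry_safe(feat, next, &info->feat_list, list)
    {
        clear_bit(feat->feature, &info->feat_mask);
        list_del(&feat->list);
        xfree(feat);
    }
}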
> +    /* Free feature which are not added into feat_list. */
> +    if ( feat_l3_cat )
> +    {
> +        xfree(feat_l3_cat);
> +        feat_l3_cat = NULL;
> +    }

Why don't you leave this around, avoiding the need for an allocation
the next time a CPU comes online? Also note that xfree() deals fine
with a NULL input, so conditionals like this are pointless.

> +/* L3 CAT callback functions implementation. */
> +static void l3_cat_init_feature(unsigned int eax, unsigned int ebx,
> +                                unsigned int ecx, unsigned int edx,

This is rather unfortunate naming: How does the reader of this code
know what these values represent, without having to first go look in
the caller?

> +                                struct feat_node *feat,
> +                                struct psr_socket_info *info)
> +{
> +    struct psr_cat_hw_info l3_cat;
> +    unsigned int socket;
> +
> +    /* No valid value so do not enable feature. */
> +    if ( !eax || !edx )
> +        return;
> +
> +    l3_cat.cbm_len = (eax & CAT_CBM_LEN_MASK) + 1;
> +    l3_cat.cos_max = min(opt_cos_max, edx & CAT_COS_MAX_MASK);
> +
> +    /* cos=0 is reserved as default cbm(all ones). */
> +    feat->cos_reg_val[0] = (1ull << l3_cat.cbm_len) - 1;

Considering how cbm_len gets calculated a few lines up, I can't see how
this can end up being all ones (as the comment says). At most this can
be 0xffffffff (as a 64-bit value) afaics.

> +    feat->feature = PSR_SOCKET_L3_CAT;
> +    __set_bit(PSR_SOCKET_L3_CAT, &info->feat_mask);
> +
> +    feat->info.l3_cat_info = l3_cat;
> +
> +    info->nr_feat++;
> +
> +    /* Add this feature into list. */
> +    list_add_tail(&feat->list, &info->feat_list);
> +
> +    socket = cpu_to_socket(smp_processor_id());
> +    printk(XENLOG_INFO "L3 CAT: enabled on socket %u, cos_max:%u, cbm_len:%u\n",
> +           socket, feat->info.l3_cat_info.cos_max,
> +           feat->info.l3_cat_info.cbm_len);

I don't think we want such printed for every socket, at least not by
default. Please, if you want to keep it, make it dependent upon e.g.
opt_cpu_info.

> +}
> +
> +struct feat_ops l3_cat_ops = {

static const

> @@ -340,18 +414,113 @@ void psr_domain_free(struct domain *d)
>      psr_free_rmid(d);
>  }
>
> -static int psr_cpu_prepare(unsigned int cpu)
> +static int cpu_prepare_work(unsigned int cpu)
>  {
> +    if ( !socket_info )
> +        return 0;
> +
> +    /* Malloc memory for the global feature head here. */
> +    if ( feat_l3_cat == NULL &&
> +         (feat_l3_cat = xzalloc(struct feat_node)) == NULL )
> +        return -ENOMEM;
> +
>      return 0;
>  }
>
> +static void cpu_init_work(void)
> +{
> +    unsigned int eax, ebx, ecx, edx;
> +    struct psr_socket_info *info;
> +    unsigned int socket;
> +    unsigned int cpu = smp_processor_id();
> +    const struct cpuinfo_x86 *c = cpu_data + cpu;

Please use current_cpu_data instead of open coding it.

> +    struct feat_node *feat_tmp;

Looking at the uses, I don't think this is temporary in any way - why
not just "feat"?

> +    if ( !cpu_has(c, X86_FEATURE_PQE) || c->cpuid_level < PSR_CPUID_LEVEL_CAT )
> +        return;

Instead of such a double check, please consider clearing the PQE
feature bit when the maximum CPUID level is too low (which shouldn't
happen anyway).

> +    socket = cpu_to_socket(cpu);
> +    info = socket_info + socket;
> +    if ( info->feat_mask )
> +        return;
> +
> +    spin_lock_init(&info->ref_lock);
> +
> +    cpuid_count(PSR_CPUID_LEVEL_CAT, 0, &eax, &ebx, &ecx, &edx);
> +    if ( ebx & PSR_RESOURCE_TYPE_L3 )
> +    {
> +        cpuid_count(PSR_CPUID_LEVEL_CAT, 1, &eax, &ebx, &ecx, &edx);
> +
> +        feat_tmp = feat_l3_cat;
> +        feat_l3_cat = NULL;
> +        feat_tmp->ops = l3_cat_ops;
> +
> +        feat_tmp->ops.init_feature(eax, ebx, ecx, edx, feat_tmp, info);

What's the point of the indirect call here, when you know the function
is l3_cat_init_feature()?

> +static void cpu_fini_work(unsigned int cpu)
> +{
> +    unsigned int socket = cpu_to_socket(cpu);
> +
> +    if ( !socket_cpumask[socket] || cpumask_empty(socket_cpumask[socket]) )
> +    {
> +        struct psr_socket_info *info = socket_info + socket;
> +
> +        free_feature(info);

Pointless local variable "info", unless later patches add further uses.

> +static void __init init_psr(void)
> +{
> +    unsigned int i;
> +
> +    if ( opt_cos_max < 1 )
> +    {
> +        printk(XENLOG_INFO "CAT: disabled, cos_max is too small\n");
> +        return;
> +    }
> +
> +    socket_info = xzalloc_array(struct psr_socket_info, nr_sockets);
> +
> +    if ( !socket_info )
> +    {
> +        printk(XENLOG_INFO "Fail to alloc socket_info!\n");
> +        return;
> +    }
> +
> +    for ( i = 0; i < nr_sockets; i++ )
> +        INIT_LIST_HEAD(&socket_info[i].feat_list);

Please decide for one central place where to do such initialization:
This and spin_lock_init() really should live together (and I think
better there, not here).
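I.e. (again just a sketch, keeping your existing feat_mask guard) the
relevant part of cpu_init_work() might then read:

    socket = cpu_to_socket(cpu);
    info = socket_info + socket;
    if ( info->feat_mask )
        return;

    /* One-time per-socket initialization, kept in a single place. */
    spin_lock_init(&info->ref_lock);
    INIT_LIST_HEAD(&info->feat_list);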
> +static int psr_cpu_prepare(unsigned int cpu)
> +{
> +    return cpu_prepare_work(cpu);
> +}

What is this wrapper good for?
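I.e. the helper could simply be folded into its only caller (sketch,
body taken unchanged from cpu_prepare_work() above):

static int psr_cpu_prepare(unsigned int cpu)
{
    if ( !socket_info )
        return 0;

    /* Allocate the global feature node ahead of its first use. */
    if ( feat_l3_cat == NULL &&
         (feat_l3_cat = xzalloc(struct feat_node)) == NULL )
        return -ENOMEM;

    return 0;
}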
Jan