Re: [Xen-devel] [PATCH v7 for Xen 4.7 3/4] libxl: enable per-VCPU parameter settings for RTDS scheduler
On Wed, Mar 16, 2016 at 11:47:50AM -0500, Chong Li wrote:
> Add libxl_vcpu_sched_params_get/set and sched_rtds_vcpu_get/set
> functions to support per-VCPU settings.
>
> Signed-off-by: Chong Li <chong.li@xxxxxxxxx>
> Signed-off-by: Meng Xu <mengxu@xxxxxxxxxxxxx>
> Signed-off-by: Sisu Xi <xisisu@xxxxxxxxx>
>

Good work fixing all issues.

Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>

And I have some nit-picking below.

> +
> +/* Set the RTDS scheduling parameters of vcpu(s) */
> +static int sched_rtds_vcpus_params_set(libxl__gc *gc, uint32_t domid,
> +                                       const libxl_vcpu_sched_params *scinfo)
> +{
> +    int r, rc;
> +    int i;
> +    uint16_t max_vcpuid;
> +    xc_dominfo_t info;
> +    struct xen_domctl_schedparam_vcpu *vcpus;
> +
> +    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
> +    if (r < 0) {
> +        LOGE(ERROR, "getting domain info");
> +        rc = ERROR_FAIL;
> +        goto out;
> +    }
> +    max_vcpuid = info.max_vcpu_id;
> +
> +    if (scinfo->num_vcpus <= 0) {
> +        rc = ERROR_INVAL;
> +        goto out;
> +    } else {
> +        for (i = 0; i < scinfo->num_vcpus; i++) {
> +            if (scinfo->vcpus[i].vcpuid < 0 ||
> +                scinfo->vcpus[i].vcpuid > max_vcpuid) {
> +                LOG(ERROR, "VCPU index is out of range, "
> +                           "valid values are within range from 0 to %d",
> +                           max_vcpuid);
> +                rc = ERROR_INVAL;
> +                goto out;
> +            }
> +            rc = sched_rtds_validate_params(gc, scinfo->vcpus[i].period,
> +                                            scinfo->vcpus[i].budget);
> +            if (rc) {
> +                rc = ERROR_INVAL;
> +                goto out;
> +            }
> +        }
> +        GCNEW_ARRAY(vcpus, scinfo->num_vcpus);
> +        for (i = 0; i < scinfo->num_vcpus; i++) {
> +            vcpus[i].vcpuid = scinfo->vcpus[i].vcpuid;
> +            vcpus[i].u.rtds.period = scinfo->vcpus[i].period;
> +            vcpus[i].u.rtds.budget = scinfo->vcpus[i].budget;
> +        }
> +    }
> +

You could have written this hunk like this:

    if (scinfo->num_vcpus <= 0) {
        rc = ERROR_INVAL;
        goto out;
    }

    for (i = 0; i < scinfo->num_vcpus; i++) {
        if (scinfo->vcpus[i].vcpuid < 0 ||
            scinfo->vcpus[i].vcpuid > max_vcpuid) {
            LOG(ERROR, "VCPU index is out of range, "
                       "valid values are within range from 0 to %d",
                       max_vcpuid);
            rc = ERROR_INVAL;
            goto out;
        }
        rc = sched_rtds_validate_params(gc, scinfo->vcpus[i].period,
                                        scinfo->vcpus[i].budget);
        if (rc) {
            rc = ERROR_INVAL;
            goto out;
        }
    }

    GCNEW_ARRAY(vcpus, scinfo->num_vcpus);
    for (i = 0; i < scinfo->num_vcpus; i++) {
        vcpus[i].vcpuid = scinfo->vcpus[i].vcpuid;
        vcpus[i].u.rtds.period = scinfo->vcpus[i].period;
        vcpus[i].u.rtds.budget = scinfo->vcpus[i].budget;
    }

But your original code is still OK. This is just FYI. No need to resend
just because of this.

> +    r = xc_sched_rtds_vcpu_set(CTX->xch, domid,
> +                               vcpus, scinfo->num_vcpus);
> +    if (r != 0) {
> +        LOGE(ERROR, "setting vcpu sched rtds");
> +        rc = ERROR_FAIL;
> +        goto out;
> +    }
> +    rc = 0;
> +out:
> +    return rc;
> +}
> +
> +/* Set the RTDS scheduling parameters of all vcpus of a domain */
> +static int sched_rtds_vcpus_params_set_all(libxl__gc *gc, uint32_t domid,
> +                                           const libxl_vcpu_sched_params *scinfo)
> +{
> +    int r, rc;
> +    int i;
> +    uint16_t max_vcpuid;
> +    xc_dominfo_t info;
> +    struct xen_domctl_schedparam_vcpu *vcpus;
> +    uint32_t num_vcpus;
> +
> +    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
> +    if (r < 0) {
> +        LOGE(ERROR, "getting domain info");
> +        rc = ERROR_FAIL;
> +        goto out;
> +    }
> +    max_vcpuid = info.max_vcpu_id;
> +
> +    if (scinfo->num_vcpus != 1) {
> +        rc = ERROR_INVAL;
> +        goto out;
> +    } else {
> +        if (sched_rtds_validate_params(gc, scinfo->vcpus[0].period,
> +                                       scinfo->vcpus[0].budget)) {
> +            rc = ERROR_INVAL;
> +            goto out;
> +        }
> +        num_vcpus = max_vcpuid + 1;
> +        GCNEW_ARRAY(vcpus, num_vcpus);
> +        for (i = 0; i < num_vcpus; i++) {
> +            vcpus[i].vcpuid = i;
> +            vcpus[i].u.rtds.period = scinfo->vcpus[0].period;
> +            vcpus[i].u.rtds.budget = scinfo->vcpus[0].budget;
> +        }
> +    }

Same here, the else branch can be simplified. Again, it's just FYI. No
need to resend.
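Concretely, the else branch in sched_rtds_vcpus_params_set_all could be
flattened the same way as the first hunk, since the if arm already exits
via goto out (again just a sketch for reference, not a request to resend):

```c
    if (scinfo->num_vcpus != 1) {
        rc = ERROR_INVAL;
        goto out;
    }

    /* From here on num_vcpus == 1, so no else needed. */
    if (sched_rtds_validate_params(gc, scinfo->vcpus[0].period,
                                   scinfo->vcpus[0].budget)) {
        rc = ERROR_INVAL;
        goto out;
    }

    /* Apply the single set of parameters to every vcpu of the domain. */
    num_vcpus = max_vcpuid + 1;
    GCNEW_ARRAY(vcpus, num_vcpus);
    for (i = 0; i < num_vcpus; i++) {
        vcpus[i].vcpuid = i;
        vcpus[i].u.rtds.period = scinfo->vcpus[0].period;
        vcpus[i].u.rtds.budget = scinfo->vcpus[0].budget;
    }
```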