Re: [Xen-devel] [PATCH v2 for Xen 4.6 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
On Mon, May 25, 2015 at 07:05:52PM -0500, Chong Li wrote:
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -841,6 +841,11 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>          copyback = 1;
>          break;
>
> +    case XEN_DOMCTL_scheduler_vcpu_op:
> +        ret = sched_adjust_vcpu(d, &op->u.scheduler_vcpu_op);
> +        copyback = 1;

I didn't see any fields you need to copy back here ('vcpus' is already
copied back in rt_vcpu_cntl()).

> +{
> +    struct rt_private *prv = rt_priv(ops);
> +    struct rt_dom * const sdom = rt_dom(d);
> +    struct rt_vcpu *svc;
> +    struct list_head *iter;
> +    unsigned long flags;
> +    int rc = 0;
> +    xen_domctl_sched_rtds_params_t local_sched;
> +    unsigned int vcpuid;
> +    unsigned int i;

'vcpuid' is only used in the 'get' path and 'i' only in the 'set' path;
perhaps merge the two variables?

> +
> +    switch ( op->cmd )
> +    {
> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> +        spin_lock_irqsave(&prv->lock, flags);
> +        list_for_each( iter, &sdom->vcpu )
> +        {
> +            svc = list_entry(iter, struct rt_vcpu, sdom_elem);
> +            vcpuid = svc->vcpu->vcpu_id;
> +
> +            local_sched.budget = svc->budget / MICROSECS(1);
> +            local_sched.period = svc->period / MICROSECS(1);
> +            if ( copy_to_guest_offset(op->u.rtds.vcpus, vcpuid,
> +                                      &local_sched, 1) )
> +            {
> +                spin_unlock_irqrestore(&prv->lock, flags);
> +                return  -EFAULT;

^ Double spaces.

> +            }
> +            hypercall_preempt_check();

The check itself does nothing for preemption; you need to return -ERESTART
or call hypercall_create_continuation() to make the preemption actually
happen.

> +        }
> +        spin_unlock_irqrestore(&prv->lock, flags);
> +        break;

'nr_vcpus' is not actually used anywhere up to this point, but on the xc
side you do pass it in.

Regards,
Chao
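
For illustration, a minimal sketch of how the 'get' loop could be made
preemptible along the lines Chao suggests. It assumes a hypothetical
op->u.rtds.index field for recording progress across continuations; the
other names (rt_vcpu, sdom_elem, MICROSECS(), copy_to_guest_offset(),
hypercall_preempt_check(), nr_vcpus) are taken from the quoted diff and the
review. Returning -ERESTART relies on the domctl path turning it into a
continuation via hypercall_create_continuation(), and on 'op' being copied
back to the guest so the recorded index survives the restart.

    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
        spin_lock_irqsave(&prv->lock, flags);
        list_for_each( iter, &sdom->vcpu )
        {
            svc = list_entry(iter, struct rt_vcpu, sdom_elem);
            vcpuid = svc->vcpu->vcpu_id;

            /* Skip vcpus already copied out by a previous continuation
             * ('index' is a hypothetical progress field). */
            if ( vcpuid < op->u.rtds.index )
                continue;

            /* One possible use of the otherwise-unused nr_vcpus field:
             * skip vcpus the guest did not provide array space for. */
            if ( vcpuid >= op->u.rtds.nr_vcpus )
                continue;

            local_sched.budget = svc->budget / MICROSECS(1);
            local_sched.period = svc->period / MICROSECS(1);
            if ( copy_to_guest_offset(op->u.rtds.vcpus, vcpuid,
                                      &local_sched, 1) )
            {
                spin_unlock_irqrestore(&prv->lock, flags);
                return -EFAULT;
            }

            /* Record where to resume and ask for a hypercall restart
             * instead of merely checking whether preemption is due. */
            if ( hypercall_preempt_check() )
            {
                op->u.rtds.index = vcpuid + 1;
                spin_unlock_irqrestore(&prv->lock, flags);
                return -ERESTART;
            }
        }
        spin_unlock_irqrestore(&prv->lock, flags);
        break;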