
Re: [Xen-devel] [PATCH v2 for Xen 4.6 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler



On Tue, 2015-06-02 at 17:28 +0100, George Dunlap wrote:
> On Tue, May 26, 2015 at 1:05 AM, Chong Li <lichong659@xxxxxxxxx> wrote:
> > Add two hypercalls (XEN_DOMCTL_SCHEDOP_getvcpuinfo/putvcpuinfo) to get/set
> > a domain's per-VCPU parameters. The hypercalls are handled by the newly
> > added hook (.adjust_vcpu) in the scheduler interface.
> >
> > Add a new data structure (struct xen_domctl_scheduler_vcpu_op) for
> > transferring data between the toolstack and the hypervisor.
> >
> > Signed-off-by: Chong Li <chong.li@xxxxxxxxx>
> > Signed-off-by: Meng Xu <mengxu@xxxxxxxxxxxxx>
> > Signed-off-by: Sisu Xi <xisisu@xxxxxxxxx>
> 
> One comment that I didn't see addressed already...
> 

> > +    switch ( op->cmd )
> > +    {
> > +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> > +        spin_lock_irqsave(&prv->lock, flags);
> > +        list_for_each( iter, &sdom->vcpu )
> > +        {
> > +            svc = list_entry(iter, struct rt_vcpu, sdom_elem);
> > +            vcpuid = svc->vcpu->vcpu_id;
> > +
> > +            local_sched.budget = svc->budget / MICROSECS(1);
> > +            local_sched.period = svc->period / MICROSECS(1);
> > +            if ( copy_to_guest_offset(op->u.rtds.vcpus, vcpuid,
> > +                    &local_sched, 1) )
> > +            {
> > +                spin_unlock_irqrestore(&prv->lock, flags);
> > +                return  -EFAULT;
> > +            }
> > +            hypercall_preempt_check();
> > +        }
> > +        spin_unlock_irqrestore(&prv->lock, flags);
> > +        break;
> > +    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
> > +        spin_lock_irqsave(&prv->lock, flags);
> > +        for( i = 0; i < op->u.rtds.nr_vcpus; i++ )
> > +        {
> > +            if ( copy_from_guest_offset(&local_sched,
> > +                    op->u.rtds.vcpus, i, 1) )
> > +            {
> > +                spin_unlock_irqrestore(&prv->lock, flags);
> > +                return -EFAULT;
> > +            }
> > +            if ( local_sched.period <= 0 || local_sched.budget <= 0 )
> > +            {
> > +                spin_unlock_irqrestore(&prv->lock, flags);
> > +                return -EINVAL;
> > +            }
> > +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
> > +            svc->period = MICROSECS(local_sched.period);
> > +            svc->budget = MICROSECS(local_sched.budget);
> > +            hypercall_preempt_check();
> > +        }
> 
> It looks like the interface here is asymmetric.  That is, on
> "putvcpuinfo", you assume there is an array of rtds.nr_vcpus entries
> and read the vcpu_id from each one.  In "getvcpuinfo", you assume that
> the array length equals the number of vcpus in the guest, and you just
> copy all the vcpu data into it.
> 
> I think it would make more sense for it to work the same both ways --
> so either always pass in and out an array covering all vcpus of the
> guest, or have an array that can specify a subset of vcpus in and out.
> 
> Thoughts?
> 
It would indeed. And in fact, just FTR, I did say pretty much the same
in <1432904488.5077.30.camel@xxxxxxxxxx> :-D
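
Just to sketch what the "same both ways" shape could look like, here is a
standalone C toy (every struct, name, and function below is made up purely
for illustration -- it is not the actual Xen ABI nor the patch's code): the
caller names each vcpu explicitly in both directions, so get and put take
the same array layout and can cover an arbitrary subset of vcpus.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-vcpu parameter record: vcpuid is set by the caller
 * in BOTH directions, so get and put share one array shape. */
struct vcpu_sched_params {
    uint32_t vcpuid;
    uint64_t period;   /* microseconds */
    uint64_t budget;   /* microseconds */
};

/* Toy "hypervisor-side" state standing in for the real scheduler data. */
#define NR_VCPUS 4
static uint64_t vc_period[NR_VCPUS];
static uint64_t vc_budget[NR_VCPUS];

/* Symmetric get: fill in period/budget for each vcpuid the caller listed. */
static int get_vcpu_params(struct vcpu_sched_params *arr, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (arr[i].vcpuid >= NR_VCPUS)
            return -1;                         /* invalid vcpu */
        arr[i].period = vc_period[arr[i].vcpuid];
        arr[i].budget = vc_budget[arr[i].vcpuid];
    }
    return 0;
}

/* Symmetric put: apply period/budget for each vcpuid the caller listed,
 * validating each entry before touching the state. */
static int put_vcpu_params(const struct vcpu_sched_params *arr, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (arr[i].vcpuid >= NR_VCPUS ||
            arr[i].period == 0 || arr[i].budget == 0)
            return -1;                         /* invalid vcpu or params */
        vc_period[arr[i].vcpuid] = arr[i].period;
        vc_budget[arr[i].vcpuid] = arr[i].budget;
    }
    return 0;
}
```

With this shape, a caller that only cares about vcpus 1 and 3 passes a
two-entry array to either operation, and the hypervisor never has to guess
the array length from the guest's vcpu count.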

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
