
Re: [Xen-devel] [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler



On Wed, Mar 9, 2016 at 10:38 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>> On 09.03.16 at 17:10, <dario.faggioli@xxxxxxxxxx> wrote:
>> On Tue, 2016-03-08 at 19:09 +0000, Wei Liu wrote:
>>> On Sun, Mar 06, 2016 at 11:55:55AM -0600, Chong Li wrote:
>>>
>>> > +            spin_lock_irqsave(&prv->lock, flags);
>>> > +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
>>> > +            svc->period = period;
>>> > +            svc->budget = budget;
>>> > +            spin_unlock_irqrestore(&prv->lock, flags);
>>> > +
>>> And this locking pattern seems sub-optimal. You might be able to move
>>> the lock and unlock outside the while loop?
>>>
>> Yes, unless I'm missing something, that looks possible to me, and would
>> save a lot of acquire/release ping pong on the lock.
>
> Well, there are guest memory accesses (which may fault) in the
> loop body. While this may work right now, I don't think doing so
> is a good idea.

So should I keep my design as it is here?
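
For concreteness, the loop shape under discussion is roughly the
following (just a sketch; the structure and field names around the
quoted lines are my assumptions from the patch context, not the v6
code verbatim):

    for ( index = 0; index < nr_vcpus; index++ )
    {
        struct xen_domctl_schedparam_vcpu local_sched;

        /*
         * Guest memory access: this can fault, so it must not run
         * inside an IRQ-disabled spinlock region.
         */
        if ( copy_from_guest_offset(&local_sched, op->u.v.vcpus,
                                    index, 1) )
            return -EFAULT;

        /* ... validate local_sched.vcpuid, period and budget ... */

        spin_lock_irqsave(&prv->lock, flags);
        svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
        svc->period = period;
        svc->budget = budget;
        spin_unlock_irqrestore(&prv->lock, flags);
    }

Hoisting the lock/unlock outside the loop would save the
acquire/release ping-pong Dario mentions, but the spinlock would then
be held (with IRQs off) across the copy_from_guest* calls, which is
the problem Jan points out.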

Dario, Jan and Wei,

I have almost finished a new version, but since this part is critical
to the whole patch, let me summarize the feedback here. Please correct
me if my understanding is wrong.

1) We don't need a guest_handle_is_null() check, because a null handle
can be legitimate in some special cases, and a bad handle is caught by
the copy_from(to)_guest* functions anyway (first sketch after this list).

2) In domctl.h, add a comment explaining nr_vcpus, because it is used
as both an IN and an OUT parameter (second sketch after this list).

3) Use printk(XENLOG_G_WARNING ...) here, because of its rate-limiting
behaviour (third sketch after this list).

4) Should I still keep the spin_lock inside the loop body, as in the
sketch near the top of this mail?
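
For 1), my understanding in code form (a sketch; that a null handle
may legitimately be passed, e.g. when the caller only wants nr_vcpus
back, is my reading of the "special cases"):

    /*
     * No explicit guest_handle_is_null() check: a null handle can be
     * legitimate, and a genuinely bad handle is caught here anyway,
     * since copy_from_guest* returns the number of bytes NOT copied.
     */
    if ( copy_from_guest_offset(&local_sched, op->u.v.vcpus, index, 1) )
        return -EFAULT;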
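
For 2), the domctl.h comment would read roughly like this (exact
wording open for discussion):

    /*
     * IN:  number of elements in the 'vcpus' array.
     * OUT: number of elements that have actually been processed.
     */
    uint32_t nr_vcpus;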
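
For 3), the warning would then look like this (the message text is
only illustrative):

    /*
     * XENLOG_G_* (guest) log levels are rate-limited by default, so a
     * misbehaving guest cannot flood the console through this path.
     */
    printk(XENLOG_G_WARNING "d%d: invalid RTDS parameters for vcpu %u\n",
           d->domain_id, local_sched.vcpuid);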

Chong



-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St. Louis
