
Re: [Xen-devel] [PATCH 09/16] xen: sched: close potential races when switching scheduler to CPUs



On Tue, 2016-04-05 at 19:37 +0200, Dario Faggioli wrote:
> On Thu, 2016-03-24 at 12:14 +0000, George Dunlap wrote:
> > On 18/03/16 19:05, Dario Faggioli wrote:
> > So I think there should be no problem with:
> > 1. Grabbing the pcpu schedule lock in schedule_cpu_switch()
> > 2. Grabbing prv->lock in csched2_switch_sched()
> > 3. Setting the per_cpu schedule lock as the very last thing in
> > csched2_switch_sched()
> > 4. Releasing the (old) pcpu schedule lock in schedule_cpu_switch().
> > 
> > What do you think?
> > 
> I think it should work. We'll be doing the scheduler lock manipulation
> protected by the old and "wrong" per-cpu/runq lock (wrong in the sense
> that it belongs to another scheduler), plus the correct global private
> lock. It may look like the ordering between the two locks is the wrong
> one for Credit2, but it is not, precisely because the per-runq lock
> being taken is the other scheduler's one.
> 
> Tricky, but everything is in here! :-/
> 
I've done it as you suggest above.

The new .switch_sched hook is still there, and it still looks the same.
But I do indeed like the final shape of the code better, and it appears
to be working ok.

Have a look. ;-)
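
For reference, the resulting flow is roughly the following (just a
sketch, with simplified names and arguments, not the actual patch):

 /* schedule_cpu_switch(), sketch: */
 old_lock = pcpu_schedule_lock_irq(cpu);    /* 1) take the old scheduler's pcpu lock */
 ...
 new_ops->switch_sched(new_ops, cpu, ppriv, vpriv);
 ...
 pcpu_schedule_unlock_irq(old_lock, cpu);   /* 4) release the old pcpu lock */

 /* csched2_switch_sched(), sketch: */
 spin_lock(&prv->lock);                     /* 2) take Credit2's private lock */
 /* ... set up the per-cpu and private data ... */
 per_cpu(schedule_data, cpu).schedule_lock  /* 3) re-point the pcpu lock,     */
     = &prv->rqd[rqi].lock;                 /*    as the very last thing      */
 spin_unlock(&prv->lock);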

> > As an aside -- it seems to me that as soon as we change the scheduler
> > lock, there's a risk that something else may come along and try to
> > grab it / access the data.  Does that mean we really ought to use
> > memory barriers to make sure that the lock is written only after all
> > changes to the scheduler data have been appropriately made?
> > 
> Yes, looking at this code in isolation, I think you're right: barriers
> would be necessary. I still think this is actually safe, because it's
> serialized elsewhere but, thinking more about it, I may as well add
> both barriers (and a comment).
> 
And I've added smp_mb()-s too.
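
I.e., something like this (again, just a sketch; field names from
memory):

 /* in the ->switch_sched hook, after all the scheduler data is set up: */
 smp_mb();
 per_cpu(schedule_data, cpu).schedule_lock = &prv->rqd[rqi].lock;

so that the new lock pointer can only be seen after all the writes to
the scheduler data are visible as well.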

> > > This also means that, in Credit2 and RTDS, we can get rid
> > > of the code that was doing the scheduler lock remapping
> > > in csched2_free_pdata() and rt_free_pdata(), and of their
> > > triggering ASSERT-s.
> > Right -- so to put it a different way, *all* schedulers must now set
> > the locking scheme they wish to use, even if they want to use the
> > default per-cpu locks.
> > 
> Exactly.
> 
> > 
> > I think that means we have to do that for arinc653 too,
> > right?
> > 
> Mmm... right, I'll have a look at that.
> 
And, finally, I did have a look at this too, and I actually don't think
ARINC needs any of this.

In fact, ARINC takes the idea of "doing its own locking" much further
than the other schedulers we have. It has its own lock, and it uses it
in such a way that it doesn't even care what {v,p}cpu_schedule_lock()
and friends point to.

As an example, check a653sched_do_schedule(). It's called from
schedule(), right after taking the runqueue lock,
with pcpu_schedule_lock_irq(), and yet it does this:

 spin_lock_irqsave(&sched_priv->lock, flags);

So I'd actually better _not_ add anything to this series that re-maps
sd->schedule_lock to point to its sched_priv->lock, or we'd deadlock!
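
I.e., if we did remap it, the path through schedule() would become
something like this (sketch):

 lock = pcpu_schedule_lock_irq(cpu);   /* would now take sched_priv->lock... */
 ...
 /* ...and then a653sched_do_schedule(), called under that lock, does:  */
 spin_lock_irqsave(&sched_priv->lock, flags);  /* takes it again: deadlock */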

I'm not sure the design behind all this is the best possible one, but
that's a different issue, to be dealt with in another series, at
another time. :-)

In any case, I've added Robert and Josh to Cc.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

