
Re: [Xen-devel] [PATCH v8]xen: sched: convert RTDS from time to event driven model

On Mon, 2016-03-14 at 12:03 -0400, Meng Xu wrote:
> On Mon, Mar 14, 2016 at 11:38 AM, Meng Xu <mengxu@xxxxxxxxxxxxx>
> wrote:
> > I'm ok that we keep using spin_lock_irqsave() for now. But maybe
> > later, it will be a better idea to explore if spin_lock_irq() can
> > replace all spin_lock_irqsave() in the RTDS scheduler?
> I rethought the choice of replacing spin_lock_irqsave() with
> spin_lock_irq().
> If, in the future, we introduce new locks, there may be situations
> where we want to take two locks in the same function. In that
> case, we won't be able to use spin_lock_irq() and will have to use
> spin_lock_irqsave(). If we can mix spin_lock_irq() with
> spin_lock_irqsave() in different functions for the same lock, which I
> think we can (right?), we should be fine. Otherwise, we will have to
> keep using spin_lock_irqsave().
Mixing per se is not a problem, it's how you mix...

If you call spin_unlock_irq() within a critical section protected by
either spin_lock_irq() or spin_lock_irqsave(), that is not a good mix!

If, on the other hand, you call _irqsave() inside a critical section
protected by either _irq() or _irqsave(), that is exactly what should
be done (it's the very purpose of _irqsave()!).

In fact, in case of nesting, most of the time the inner lock can be
taken with just spin_lock(). Look, for instance, at csched2_dump_pcpu().

With more locks (which I agree is something we want for RTDS), the
biggest issue is going to be getting the actual nesting right, rather
than the various _irq* variants. :-)

<<This happens because I choose it to happen!>> (Raistlin Majere)
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

