
Re: [Xen-devel] [Help] Trigger Watchdog when adding an IPI in vcpu_wake

On Sat, 2016-09-17 at 03:30 +0000, Wei Yang wrote:
> Dario,
Hey, hi again, and sorry for the delay in getting back to this particular
part of the conversation.

> Just got the chance to look into this. This is interesting and I am
> trying to understand the problem you want to solve first.

> vcpu_wake() is a _special_ case which has to turn off IRQs, because
> SCHED_OP(wake, v) will not only enqueue the vcpu but also tickle a
> pcpu to pick up the queued vcpu. And the tickling part needs IRQs to
> be turned off.
_Almost_ correct. However, the problem is more that vcpu_wake() can
happen in response to an IRQ, and when you grab a spinlock in IRQ
context, you need to disable IRQs.

There is a good explanation of why, here:

> I don't get the this part. Why we have to turn off the irq for
> tickling pcpu?
We don't. The point here is that, in Xen, we want spinlocks to either
_always_ be taken with IRQs disabled or _always_ be taken with IRQs
enabled. Well, since we now take the scheduler's runqueue lock in
vcpu_wake() (and, as said, that must be done with IRQs disabled), this
implies that the scheduler's runqueue lock needs to always be taken
with IRQs disabled, even when leaving them enabled would not actually
be an issue, to stay consistent with the locking rule.

So, if we manage to avoid taking the scheduler's runqueue lock in vcpu_wake(),
we will manage to put ourselves in the opposite situation, wrt the locking
consistency rule, i.e., we can _always_ take the scheduler's runqueue lock with
IRQs enabled.

This is, broadly speaking, the problem we wanted to solve, and how we thought 
about solving it.

> And does enabling IRQs in the scheduling handlers benefit
> performance? Or what is the motivation behind this?
As I said about your modification, here too it is not super-easy
to tell in advance whether, and if so, by how much, we'll see a boost
in performance.

In this case, the idea is that, in general, keeping IRQs disabled is
bad. It is bad for concurrency, and it is bad for latency, both in
scheduling and when it comes to reacting to external events. And it has
been shown by profiling that the scheduler is one of the components that
keeps IRQs disabled for long chunks of time.

I honestly did expect things to improve a bit, especially on certain
workloads, but of course, as usual, benchmarks will tell. :-)

> Didn't look into your code yet. 
Even if/when you look at it, bear in mind it's still only a stub.

> From the description from Andrew, I come up with
> several questions:
> 1. which per-cpu list you want to queue the vcpu? v->processor?
ISTR having thought, and even given a try, to both v->processor (where
v is the waking vcpu) and the current cpu (where for 'current cpu' I
meant the cpu where the wake-up is occurring).

I think I decided to use the current cpu (mostly for the same reasons
why I don't think there is much advantage in what you are trying to do).

> 2. SCHEDULE_WAKE_SOFTIRQ would do runqueue insert and tickle?
It would call what is now vcpu_wake(). So, yes, for most schedulers that
means inserting and tickling, but it's really scheduler specific.

> 3. purpose is so that schedule softirq could run with irq enabled
Purpose is that scheduling functions can be executed with IRQs enabled,
yes.

> 4. the schedule softirq would do the jobs currently in vcpu_wake()?
Err... not sure what you mean (but maybe I've already answered this at a
previous point).
> Took the per-cpu branch as an example, I see several commits. The top
> 6 are
> related ones, right?
The top 4 (I told you it's a stub :-P).

<<This happens because I choose it to happen!>> (Raistlin Majere)
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

