Re: [Xen-devel] [PATCH 3/7] xen: rework locking for dump of scheduler info (debug-key r)
On 03/17/2015 10:54 AM, Jan Beulich wrote:
>>>> On 16.03.15 at 18:05, <dario.faggioli@xxxxxxxxxx> wrote:
>> so that it is taken care of by the various schedulers, rather
>> than happening in schedule.c. In fact, it is the schedulers
>> that know best which locks are necessary for the specific
>> dumping operations.
>>
>> While there, fix a few style issues (indentation, trailing
>> whitespace, parentheses and blank line after var declarations).
>>
>> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
>> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
>> Cc: Meng Xu <xumengpanda@xxxxxxxxx>
>> Cc: Jan Beulich <JBeulich@xxxxxxxx>
>> Cc: Keir Fraser <keir@xxxxxxx>
>> ---
>>  xen/common/sched_credit.c  | 42 ++++++++++++++++++++++++++++++++++++++++--
>>  xen/common/sched_credit2.c | 40 ++++++++++++++++++++++++++++++++--------
>>  xen/common/sched_rt.c      |  7 +++++--
>>  xen/common/schedule.c      |  5 ++---
>>  4 files changed, 79 insertions(+), 15 deletions(-)
>
> Is it really correct that sched_sedf.c doesn't need any change here?
>
>> --- a/xen/common/sched_credit.c
>> +++ b/xen/common/sched_credit.c
>> @@ -26,6 +26,23 @@
>>
>>
>>  /*
>> + * Locking:
>> + *  - Scheduler-lock (a.k.a. runqueue lock):
>> + *   + is per-runqueue, and there is one runqueue per-cpu;
>> + *   + serializes all runqueue manipulation operations;
>> + *  - Private data lock (a.k.a. private scheduler lock):
>> + *   + serializes accesses to the scheduler global state (weight,
>> + *     credit, balance_credit, etc);
>> + *   + serializes updates to the domains' scheduling parameters.
>> + *
>> + * Ordering is "private lock always comes first":
>> + *  + if we need both locks, we must acquire the private
>> + *    scheduler lock first;
>> + *  + if we already own a runqueue lock, we must never acquire
>> + *    the private scheduler lock.
>> + */
>
> And this is Credit1 specific?

Credit1 and credit2 have slightly different lock and data layouts, and
thus a slightly different locking discipline.  This looks like it was
copied from the credit2 description and then modified for credit1.

> Regardless of that, even if that's just reflecting current state, isn't
> acquiring a private lock around a generic lock backwards?

As the description says, the private lock tends to be global, whereas
the scheduler lock tends to be per-cpu.  There are lots of operations
where you need to iterate over cpus (or the vcpus running on them).
(For instance, the dumping routines that Dario modifies in this patch.)
Grabbing the private lock once and then grabbing the scheduler locks
sequentially is obviously the right thing to do here.

> Finally, as said in different contexts earlier, I think unconditionally
> acquiring locks in dumping routines isn't the best practice. At least
> in non-debug builds I think these should be try-locks only, skipping
> the dumping when a lock is busy.

You mean so that we don't block the console if there turns out to be a
deadlock?

That makes some sense; but on a busy system that would mean a
non-negligible chance that any given keystroke would be missing
information about some cpu or other, which would be pretty frustrating
for someone trying to figure out the state of their system.

Would it make sense to have a version of spin_trylock for use in this
kind of situation that waits and retries a reasonable number of times
before giving up?

 -George
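
To illustrate the "private lock first, runqueue locks one at a time"
ordering George describes above, here is a minimal sketch of a dump
routine.  The struct sched_private type, the per-cpu runq_lock, and the
dump step are hypothetical stand-ins (the real code goes through
per_cpu(schedule_data, cpu).schedule_lock); spin_lock_irqsave() and
friends, DEFINE_PER_CPU(), and for_each_online_cpu() are the standard
Xen primitives:

    #include <xen/cpumask.h>
    #include <xen/percpu.h>
    #include <xen/spinlock.h>

    /* Hypothetical private scheduler state; only the lock matters here. */
    struct sched_private {
        spinlock_t lock;             /* global private scheduler lock */
    };

    /* Hypothetical per-cpu runqueue locks, assumed to have been set up
     * elsewhere with spin_lock_init(). */
    static DEFINE_PER_CPU(spinlock_t, runq_lock);

    static void dump_all_cpus(struct sched_private *prv)
    {
        unsigned long flags;
        unsigned int cpu;

        /* Private lock always comes first... */
        spin_lock_irqsave(&prv->lock, flags);

        /* ...then take each per-cpu runqueue lock in turn. */
        for_each_online_cpu ( cpu )
        {
            spin_lock(&per_cpu(runq_lock, cpu));
            /* print this cpu's runqueue state here */
            spin_unlock(&per_cpu(runq_lock, cpu));
        }

        spin_unlock_irqrestore(&prv->lock, flags);
    }

Since only one runqueue lock is ever held at a time, and always inside
the private lock, any other path that respects the same "private lock
first" rule cannot deadlock against this loop.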
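
And for the closing question, a rough sketch of what such a retrying
trylock could look like; the wrapper name and retry budget are made up,
while spin_trylock() and cpu_relax() are existing Xen primitives:

    #include <xen/spinlock.h>

    /* Arbitrary retry budget; purely illustrative. */
    #define DUMP_LOCK_TRIES 1000

    /*
     * Hypothetical "patient" trylock: poll the lock a bounded number of
     * times, then give up so a wedged lock cannot hang the console.
     * Returns 1 if the lock was taken, 0 if the caller should skip.
     */
    static int spin_trylock_retry(spinlock_t *lock)
    {
        unsigned int i;

        for ( i = 0; i < DUMP_LOCK_TRIES; i++ )
        {
            if ( spin_trylock(lock) )
                return 1;
            cpu_relax();    /* back off briefly before the next attempt */
        }

        return 0;
    }

A dump routine could then print a "lock busy, skipping" marker for the
affected cpu instead of stalling, which would address Jan's deadlock
concern while keeping gaps in the output rare on a merely busy system.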