[Xen-devel] [Fwd: [PATCH v2 0/5] Improving dumping of scheduler related info]
Forgot to Cc people in the cover letter of the series... Sorry!

-------- Forwarded Message --------
From: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
To: Xen-devel <xen-devel@xxxxxxxxxxxxx>
Subject: [PATCH v2 0/5] Improving dumping of scheduler related info
Date: Tue, 17 Mar 2015 16:32:41 +0100
Mailer: StGit/0.17.1-dirty
Message-Id: <20150317152615.9867.48676.stgit@xxxxxxxxxxxxxx>

Take 2. Some of the patches have been checked in already, so here is what
remains:

 - fix a bug in the RTDS scheduler (patch 1);
 - improve how the whole process of dumping scheduling info is serialized,
   by moving all locking code into the specific schedulers (patch 2);
 - print more useful scheduling related information (patches 3, 4 and 5).

Git branch here:

  git://xenbits.xen.org/people/dariof/xen.git rel/sched/dump-v2
  http://xenbits.xen.org/gitweb/?p=people/dariof/xen.git;a=shortlog;h=refs/heads/rel/sched/dump-v2

I think I have addressed all the comments raised on v1. More details in the
changelogs of the individual patches.

Thanks and Regards,
Dario
---

Dario Faggioli (5):
      xen: sched_rt: avoid ASSERT()ing on runq dump if there are no domains
      xen: rework locking for dump of scheduler info (debug-key r)
      xen: print online pCPUs and free pCPUs when dumping
      xen: sched_credit2: more info when dumping
      xen: sched_rt: print useful affinity info when dumping

 xen/common/cpupool.c       |   12 +++++++++
 xen/common/sched_credit.c  |   42 ++++++++++++++++++++++++++++++-
 xen/common/sched_credit2.c |   53 +++++++++++++++++++++++++++++++++-------
 xen/common/sched_rt.c      |   59 ++++++++++++++++++++++++++++++++------------
 xen/common/sched_sedf.c    |   16 ++++++++++++
 xen/common/schedule.c      |    5 +---

 6 files changed, 157 insertions(+), 30 deletions(-)

Attachment: signature.asc

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel