[Xen-devel] [OSSTEST PATCH 18/29] Planner: Report unprocessed planning clients

With recent changes, a queue daemon client may not be given an
opportunity to report itself in the plan, which leaves the plan
incomplete.

(For resource-plan.html, this happens because the planning run was
restarted to try to allocate new resources quickly; for
resource-projection.html, because the client is an old one that does
not support feature-noalloc.)

When this happens, indicate it explicitly in the plan:

* Invent a new entry, Unprocessed, in data-*.pl for this information
  (see the sketch after this list).
* Display the first 50 such entries in ms-planner show-html.
* Provide a new ms-planner invocation `unprocessed' to record one.
* Note unprocessed when we skip a client due to !feature-noalloc.
* Note unprocessed for remaining queue when we restart planning.
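
As a sketch of the resulting data (the Info strings below are made up
for illustration; the code stores whatever info string it is handed),
the new entry in data-*.pl looks like:

    # Plan hash as written to data-*.pl; each unprocessed client is
    # recorded as a hash with a single Info key.
    Unprocessed => [
        { Info => 'queue client example-client-1' },
        { Info => 'queue client example-client-2' },
    ],

Each entry is recorded with an invocation along the lines of
`./ms-planner -wplan unprocessed "queue client example-client-1"'
(again with an illustrative info string).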

For now this algorithm is, rather unfortunately, O(n^2) when
draining the planning queue, because each `ms-planner unprocessed'
invocation adds only one entry but needs to read and write the whole
plan.  This will be fixed shortly.
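
One possible shape for that fix (purely illustrative; not necessarily
what the follow-up patch actually does) would be to let a single
`ms-planner unprocessed' invocation record several entries, so the
plan is read and written once per batch rather than once per client:

    # Hypothetical batched variant, reusing ms-planner's existing
    # helpers: one plan read/write covers any number of entries,
    # making a full queue drain O(n) rather than O(n^2).
    sub cmd_unprocessed () {
        die unless @ARGV >= 1;
        get_current_plan();
        push @{ $plan->{Unprocessed} }, map { +{ Info => $_ } } @ARGV;
        check_write_new_plan();
    }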

Signed-off-by: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
---
v2: New patch
---
 ms-planner     |   27 +++++++++++++++++++++++++++
 ms-queuedaemon |   10 ++++++++++
 2 files changed, 37 insertions(+)

diff --git a/ms-planner b/ms-planner
index 053f330..495c8ff 100755
--- a/ms-planner
+++ b/ms-planner
@@ -238,6 +238,7 @@ sub cmd_reset () {
 
     $plan->{Start}= time;
     $plan->{Events}= { };
+    $plan->{Unprocessed}= [ ];
 
     my %magictask;
     foreach my $taskrefkey (qw(preparing shared)) {
@@ -731,12 +732,38 @@ sub cmd_show_html () {
        printf "</tr>\n";
     }
     printf "</table>\n";
+    printf "<p>\n";
+    my $unprocessed = $plan->{Unprocessed};
+    if (@$unprocessed) {
+           printf "%d tasks not processed and therefore not shown:\n",
+               scalar @$unprocessed;
+           my $shown = 0;
+           printf "<ul>\n";
+           foreach my $unprocessed (@$unprocessed) {
+               if (++$shown > 50) {
+                   printf "<li>...</li>\n";
+                   last;
+               }
+               printf "<li>%s</li>\n", encode_entities($unprocessed->{Info});
+           }
+           printf "</ul>\n";
+           printf "<p>\n";
+    }
     printf "Report generated %s.\n",
         strftime("%Y-%b-%d %a %H:%M:%S", localtime $now);
     die $! if STDOUT->error;
     die $! unless STDOUT->flush;
 }
 
+sub cmd_unprocessed () {
+    die unless @ARGV==1;
+    my ($baseinfo) = @ARGV;
+
+    get_current_plan();
+    push @{ $plan->{Unprocessed} }, { Info => $baseinfo };
+    check_write_new_plan();
+}
+
 die unless @ARGV;
 die if $ARGV[0] =~ m/^-/;
 my $subcmd= shift @ARGV;
diff --git a/ms-queuedaemon b/ms-queuedaemon
index 72e22d0..fba28a1 100755
--- a/ms-queuedaemon
+++ b/ms-queuedaemon
@@ -482,12 +482,21 @@ proc restarter-restart-now {} {
        log-event "restarter-restart-now projection-running"
     }
 
+    foreach skip [set plan/queue_running] {
+       for-chan $skip {
+           chan-note-unprocessed plan $skip
+       }
+    }
     report-plan plan plan
 
     unset plan/queue_running
     runneeded-ensure-will 2
 }
 
+proc chan-note-unprocessed {w chan} {
+    exec ./ms-planner -w$w unprocessed [chan-plan-info $chan {}]
+}
+
 proc notify-to-think {w thinking} {
     for-chan $thinking {
        set noalloc [chan-get-info $thinking {$info(feature-noalloc)} {}]
@@ -496,6 +505,7 @@ proc notify-to-think {w thinking} {
            projection.1 { puts-chan $thinking "!OK think noalloc" }
            projection.* {
                # oh well, can't include it in the projection; too bad
+               chan-note-unprocessed $w $thinking
                queuerun-step-done $w "!feature-noalloc"
            }
        }
-- 
1.7.10.4

