[Xen-changelog] [xen-unstable] rcupdate: Make rcu_barrier() more paranoia-proof



# HG changeset patch
# User Keir Fraser <keir@xxxxxxx>
# Date 1295023131 0
# Node ID 75b6287626ee0b852d725543568001e99b13be5b
# Parent  3ce532e56efda5557664bc3d6edff285317d5ff0
rcupdate: Make rcu_barrier() more paranoia-proof

I'm not sure my original barrier function is correct. It may allow a
CPU to exit the barrier loop, with no local work to do, while RCU work
is still pending on other CPUs, needing one or more quiescent periods
to flush through.

Although rcu_pending() may handle this, it is easiest to follow
Linux's example and simply call_rcu() a callback function on every
CPU. Once the callback has executed on every CPU, we know that all
previously-queued RCU work has completed, and we can exit the barrier.

Signed-off-by: Keir Fraser <keir@xxxxxxx>
---
 xen/common/rcupdate.c |   31 +++++++++++++++++++++++++------
 1 files changed, 25 insertions(+), 6 deletions(-)
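
A rough userspace analogy of the counting scheme described above (this
is not Xen code: the pthread harness, NCPUS, and all names here are
invented purely for illustration). Each worker bumps a shared atomic
counter once its own prior work is done, then spins until the counter
accounts for every worker:

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

#define NCPUS 4                     /* hypothetical worker count */

static atomic_int cpu_count;        /* plays the role of data.cpu_count */

static void *barrier_action(void *unused)
{
    (void)unused;

    /* ... any previously-queued local work would run here ... */

    /* Stands in for rcu_barrier_callback() firing on this "CPU". */
    atomic_fetch_add(&cpu_count, 1);

    /* Spin until every worker's "callback" has run; compare the
     * cpus_weight(cpu_online_map) test in the patch below. */
    while ( atomic_load(&cpu_count) != NCPUS )
        sched_yield();              /* cf. process_pending_softirqs() */

    return NULL;
}

int main(void)
{
    pthread_t t[NCPUS];

    for ( int i = 0; i < NCPUS; i++ )
        pthread_create(&t[i], NULL, barrier_action, NULL);
    for ( int i = 0; i < NCPUS; i++ )
        pthread_join(t[i], NULL);

    printf("all %d workers passed the barrier\n", atomic_load(&cpu_count));
    return 0;
}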

diff -r 3ce532e56efd -r 75b6287626ee xen/common/rcupdate.c
--- a/xen/common/rcupdate.c     Fri Jan 14 15:47:01 2011 +0000
+++ b/xen/common/rcupdate.c     Fri Jan 14 16:38:51 2011 +0000
@@ -61,16 +61,34 @@ static int qlowmark = 100;
 static int qlowmark = 100;
 static int rsinterval = 1000;
 
-static int rcu_barrier_action(void *unused)
-{
-    unsigned int cpu = smp_processor_id();
+struct rcu_barrier_data {
+    struct rcu_head head;
+    atomic_t *cpu_count;
+};
+
+static void rcu_barrier_callback(struct rcu_head *head)
+{
+    struct rcu_barrier_data *data = container_of(
+        head, struct rcu_barrier_data, head);
+    atomic_inc(data->cpu_count);
+}
+
+static int rcu_barrier_action(void *_cpu_count)
+{
+    struct rcu_barrier_data data = { .cpu_count = _cpu_count };
 
     ASSERT(!local_irq_is_enabled());
     local_irq_enable();
 
-    while ( rcu_needs_cpu(cpu) )
+    /*
+     * When callback is executed, all previously-queued RCU work on this CPU
+     * is completed. When all CPUs have executed their callback, data.cpu_count
+     * will have been incremented to include every online CPU.
+     */
+    call_rcu(&data.head, rcu_barrier_callback);
+
+    while ( atomic_read(data.cpu_count) != cpus_weight(cpu_online_map) )
     {
-        rcu_check_callbacks(cpu);
         process_pending_softirqs();
         cpu_relax();
     }
@@ -82,7 +100,8 @@ static int rcu_barrier_action(void *unus
 
 int rcu_barrier(void)
 {
-    return stop_machine_run(rcu_barrier_action, NULL, NR_CPUS);
+    atomic_t cpu_count = ATOMIC_INIT(0);
+    return stop_machine_run(rcu_barrier_action, &cpu_count, NR_CPUS);
 }
 
 static void force_quiescent_state(struct rcu_data *rdp,
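
For anyone unfamiliar with the container_of() idiom relied on by
rcu_barrier_callback() above: call_rcu() hands the callback nothing but
a pointer to the embedded rcu_head, and container_of() subtracts that
member's offset to recover the enclosing structure; here, that is the
stack-allocated rcu_barrier_data carrying the cpu_count pointer. A
minimal self-contained sketch (the rcu_head stub and the plain int
counter are stand-ins for the real Xen types):

#include <stddef.h>
#include <stdio.h>

/* Illustrative stand-in for the kernel's rcu_head. */
struct rcu_head {
    struct rcu_head *next;
};

struct rcu_barrier_data {
    struct rcu_head head;
    int *cpu_count;                 /* atomic_t * in the real code */
};

/* container_of() as commonly defined: step back from a member pointer
 * to the address of the structure that contains it. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

int main(void)
{
    int count = 0;
    struct rcu_barrier_data data = { .cpu_count = &count };

    /* The callback sees only &data.head, as call_rcu() would pass it,
     * yet can still reach the surrounding rcu_barrier_data. */
    struct rcu_head *head = &data.head;
    struct rcu_barrier_data *d =
        container_of(head, struct rcu_barrier_data, head);

    (*d->cpu_count)++;  /* what atomic_inc() does, minus the atomicity */
    printf("recovered ok: %d, count = %d\n", d == &data, count);
    return 0;
}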
