
[Xen-changelog] [linux-2.6.18-xen] netfront accel: Fix request_module/modprobe deadlock



# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1204118058 0
# Node ID c48f543650603e4ba7227925c6b60f2ea6778242
# Parent  43de9d7c3c63adaac7e334621f763c94acbbc178
netfront accel: Fix request_module/modprobe deadlock

There would seem to be a potential deadlock in the netfront accelerator
plugin support.  When the configured accelerator changes in xenstore,
netfront tries to load the new plugin using request_module().  It does
this from a workqueue work item.  request_module() invokes modprobe,
which in some circumstances (I'm not sure exactly which - I've not
managed to reproduce it myself) seems to flush the workqueue, and so it
deadlocks.  This patch fixes the problem by giving the accel watch work
item its own workqueue, so that modprobe can successfully flush the
system-wide one.

Signed-off-by: Kieran Mansley <kmansley@xxxxxxxxxxxxxx>
---
 drivers/xen/netfront/accel.c |   12 ++++++++++--
 1 files changed, 10 insertions(+), 2 deletions(-)

diff -r 43de9d7c3c63 -r c48f54365060 drivers/xen/netfront/accel.c
--- a/drivers/xen/netfront/accel.c      Tue Feb 26 17:59:18 2008 +0000
+++ b/drivers/xen/netfront/accel.c      Wed Feb 27 13:14:18 2008 +0000
@@ -60,6 +60,9 @@ static struct list_head accelerators_lis
 /* Lock to protect access to accelerators_list */
 static spinlock_t accelerators_lock;
 
+/* Workqueue to process acceleration configuration changes */
+struct workqueue_struct *accel_watch_workqueue;
+
 /* Mutex to prevent concurrent loads and suspends, etc. */
 DEFINE_MUTEX(accelerator_mutex);
 
@@ -67,12 +70,17 @@ void netif_init_accel(void)
 {
        INIT_LIST_HEAD(&accelerators_list);
        spin_lock_init(&accelerators_lock);
+
+       accel_watch_workqueue = create_workqueue("accel_watch");
 }
 
 void netif_exit_accel(void)
 {
        struct netfront_accelerator *accelerator, *tmp;
        unsigned long flags;
+
+       flush_workqueue(accel_watch_workqueue);
+       destroy_workqueue(accel_watch_workqueue);
 
        spin_lock_irqsave(&accelerators_lock, flags);
 
@@ -156,7 +164,7 @@ static void accel_watch_changed(struct x
        struct netfront_accel_vif_state *vif_state = 
                container_of(watch, struct netfront_accel_vif_state,
                             accel_watch);
-       schedule_work(&vif_state->accel_work);
+       queue_work(accel_watch_workqueue, &vif_state->accel_work);
 }
 
 
@@ -191,7 +199,7 @@ void netfront_accelerator_remove_watch(s
                kfree(vif_state->accel_watch.node);
                vif_state->accel_watch.node = NULL;
 
-               flush_scheduled_work();
+               flush_workqueue(accel_watch_workqueue);
 
                /* Clean up any state left from watch */
                if (vif_state->accel_frontend != NULL) {
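The deadlock the commit message describes can be sketched with a userspace analogue: single-threaded pools standing in for kernel workqueues, and a "flush" implemented as waiting on a marker item. All names here are illustrative, not from the patch; this is a Python sketch of the pattern, not kernel code.

```python
from concurrent.futures import ThreadPoolExecutor

# Two single-worker pools, standing in for the system-wide workqueue
# and the dedicated "accel_watch" workqueue added by the patch.
system_wq = ThreadPoolExecutor(max_workers=1)
accel_wq = ThreadPoolExecutor(max_workers=1)

def load_plugin():
    # Stand-in for request_module()/modprobe: it "flushes" the
    # system-wide queue by queueing a marker and waiting for it.
    marker = system_wq.submit(lambda: "flushed")
    return marker.result()

# Before the fix: submitting load_plugin() to system_wq itself would
# deadlock - the single worker would wait on a marker queued behind
# the very item it is running.  After the fix, the watch work runs on
# its own queue, so the flush of system_wq can complete:
future = accel_wq.submit(load_plugin)
print(future.result())
```

Running the work item on a separate queue is exactly what the `queue_work(accel_watch_workqueue, ...)` change does, at the cost of creating and tearing down one extra workqueue in `netif_init_accel()`/`netif_exit_accel()`.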

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog