
[Xen-changelog] [linux-2.6.18-xen] blktap2: eliminate race from deferred work queue handling


  • To: xen-changelog@xxxxxxxxxxxxxxxxxxx
  • From: Xen patchbot-linux-2.6.18-xen <patchbot@xxxxxxx>
  • Date: Mon, 22 Aug 2016 06:33:02 +0000
  • Delivery-date: Mon, 22 Aug 2016 06:33:12 +0000
  • List-id: "Change log for Mercurial (receive only)" <xen-changelog.lists.xen.org>

# HG changeset patch
# User Jan Beulich <jbeulich@xxxxxxxx>
# Date 1471847043 -7200
#      Mon Aug 22 08:24:03 2016 +0200
# Node ID 7ba8deb4c1be7d9353a88f2af0f23bf789d3f5d9
# Parent  d8c979f0e04c5f5fa64001d4a2fa630117d5c76c
blktap2: eliminate race from deferred work queue handling

Clearing BLKTAP_DEFERRED together with splicing the queue onto a local
list is premature: blktap_defer() must not re-queue a tap before it has
been removed from the list it is currently on (no matter whether that is
the global or the local one), or else list corruption can occur.
Effectively the lock protects not only the global list (as its name
suggests) but also the list entries of all taps.
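
For reference, this is how blktap_run_deferred() reads with the change
below applied. The function header and the local declarations are not
part of the hunk and are inferred from context, so treat this as a
sketch rather than the verbatim result:

void
blktap_run_deferred(void)
{
        LIST_HEAD(queue);          /* local list the global queue is spliced onto */
        struct blktap *tap;
        unsigned long flags;

        /* Splice the global queue, but leave BLKTAP_DEFERRED set and the
         * entries linked: a concurrent blktap_defer() must keep seeing the
         * tap as queued until it is taken off the (now local) list. */
        spin_lock_irqsave(&deferred_work_lock, flags);
        list_splice_init(&deferred_work_queue, &queue);
        spin_unlock_irqrestore(&deferred_work_lock, flags);

        while (!list_empty(&queue)) {
                tap = list_entry(queue.next, struct blktap, deferred_queue);
                /* Clear the bit and unlink under the same lock that
                 * blktap_defer() takes, so a re-queue cannot slip in
                 * between the two steps. */
                spin_lock_irqsave(&deferred_work_lock, flags);
                clear_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
                list_del_init(&tap->deferred_queue);
                spin_unlock_irqrestore(&deferred_work_lock, flags);
                blktap_device_restart(tap);
        }
}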

Effectively the BLKTAP_DEFERRED bit becomes pointless now: with the
changed locking we could just as well check list_empty() in
blktap_defer() (which would of course require
blktap_control_initialize_tap() to initialize the entry).
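
Purely as a hypothetical sketch (not part of this patch), that
alternative blktap_defer() could look like the following, assuming
blktap_control_initialize_tap() does an INIT_LIST_HEAD() on
deferred_queue:

void
blktap_defer(struct blktap *tap)
{
        unsigned long flags;

        spin_lock_irqsave(&deferred_work_lock, flags);
        /* An empty (initialized) entry means the tap is not queued; this
         * relies on blktap_run_deferred() using list_del_init() and on
         * blktap_control_initialize_tap() initializing deferred_queue. */
        if (list_empty(&tap->deferred_queue))
                list_add_tail(&tap->deferred_queue, &deferred_work_queue);
        spin_unlock_irqrestore(&deferred_work_lock, flags);
}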

While at it, convert a redundant test_bit()/set_bit() pair into a
single test_and_set_bit().

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---


diff -r d8c979f0e04c -r 7ba8deb4c1be drivers/xen/blktap2/wait_queue.c
--- a/drivers/xen/blktap2/wait_queue.c  Mon Aug 22 08:22:45 2016 +0200
+++ b/drivers/xen/blktap2/wait_queue.c  Mon Aug 22 08:24:03 2016 +0200
@@ -15,13 +15,14 @@ blktap_run_deferred(void)
 
        spin_lock_irqsave(&deferred_work_lock, flags);
        list_splice_init(&deferred_work_queue, &queue);
-       list_for_each_entry(tap, &queue, deferred_queue)
-               clear_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
        spin_unlock_irqrestore(&deferred_work_lock, flags);
 
        while (!list_empty(&queue)) {
                tap = list_entry(queue.next, struct blktap, deferred_queue);
+               spin_lock_irqsave(&deferred_work_lock, flags);
+               clear_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
                list_del_init(&tap->deferred_queue);
+               spin_unlock_irqrestore(&deferred_work_lock, flags);
                blktap_device_restart(tap);
        }
 }
@@ -32,9 +33,7 @@ blktap_defer(struct blktap *tap)
        unsigned long flags;
 
        spin_lock_irqsave(&deferred_work_lock, flags);
-       if (!test_bit(BLKTAP_DEFERRED, &tap->dev_inuse)) {
-               set_bit(BLKTAP_DEFERRED, &tap->dev_inuse);
+       if (!test_and_set_bit(BLKTAP_DEFERRED, &tap->dev_inuse))
                list_add_tail(&tap->deferred_queue, &deferred_work_queue);
-       }
        spin_unlock_irqrestore(&deferred_work_lock, flags);
 }
