[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Xen-devel] xen-blkfront: Don't send closing notification to backend in blkfront_closing()



When we run the block attach/detach test below, umount hangs and the
guest cannot shut down:

1. Start a guest with the latest kernel.
2. Attach a new disk with xm block-attach in dom0.
3. Mount the new disk in the guest.
4. Detach the disk with xm block-detach in dom0.
5. Umount the partition/disk in the guest; the command hangs. From this
   point on, any IO request to the partition/disk will hang.
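The steps above correspond roughly to the following dom0/guest commands
(the domain name, device names and image path are illustrative):

```shell
# dom0: attach a new disk image to the guest as xvdb
xm block-attach guest1 file:/path/to/disk.img xvdb w

# guest: mount the new disk
mount /dev/xvdb1 /mnt/test

# dom0: detach the disk while it is still mounted in the guest
xm block-detach guest1 xvdb

# guest: without the patch below, this umount hangs
umount /mnt/test
```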

Looking at the code, we found that when the xm block-detach command sets
the backend state to Closing, it triggers the blkback_changed() ->
blkfront_closing() call path. At this point the disk is still open in the
guest, so the frontend refuses to close; but blkfront_closing() still
notifies the backend that the frontend state has switched to Closing. When
the backend receives this event, it disconnects from the real device, after
which any IO request gets stuck, including the ones issued when umount
tries to release the disk.
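The handshake described above can be modeled by the following user-space
sketch (not kernel code; all names here are illustrative stand-ins for the
real xen-blkfront/xen-blkback machinery). It contrasts the pre-patch path,
where switching state also notifies the backend, with the patched path,
where the frontend only records Closing locally:

```c
#include <assert.h>

enum xenbus_state { CONNECTED, CLOSING, CLOSED };

struct backend  { enum xenbus_state state; int device_connected; };
struct frontend { enum xenbus_state state; int bd_openers; };

/* Stand-in for xenbus_switch_state(): writing the new frontend state to
 * xenstore fires a watch in the backend; on Closing, the backend tears
 * down its connection to the real device. */
static void switch_state_and_notify(struct frontend *fe, struct backend *be,
                                    enum xenbus_state s)
{
    fe->state = s;
    if (s == CLOSING)
        be->device_connected = 0;
}

/* Pre-patch behaviour: notify the backend even while the disk is open. */
static int buggy_path(void)
{
    struct backend  be = { CONNECTED, 1 };
    struct frontend fe = { CONNECTED, 1 };   /* bd_openers > 0 */
    switch_state_and_notify(&fe, &be, CLOSING);
    return be.device_connected;              /* 0: all further IO hangs */
}

/* Patched behaviour: record Closing locally, leave the backend alone. */
static int patched_path(void)
{
    struct backend  be = { CONNECTED, 1 };
    struct frontend fe = { CONNECTED, 1 };
    fe.state = CLOSING;              /* xbdev->state = XenbusStateClosing */
    return be.device_connected;      /* 1: umount can still complete */
}
```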

Per our tests, the patch below fixes this issue.

Signed-off-by: Joe Jin <joe.jin@xxxxxxxxxx>
Signed-off-by: Annie Li <annie.li@xxxxxxxxxx>
---
 xen-blkfront.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index b536a9c..f6d8ac2 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1088,7 +1088,7 @@ blkfront_closing(struct blkfront_info *info)
        if (bdev->bd_openers) {
                xenbus_dev_error(xbdev, -EBUSY,
                                 "Device in use; refusing to close");
-               xenbus_switch_state(xbdev, XenbusStateClosing);
+               xbdev->state = XenbusStateClosing;
        } else {
                xlvbd_release_gendisk(info);
                xenbus_frontend_closed(xbdev);

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
