Re: [Xen-devel] [PATCH]: Fix xm block-detach
Tue, 02 Dec 2008 17:24:19 +0100, Chris Lalancette wrote:

>Chris Lalancette wrote:
>> Masaki Kanno wrote:
>>> Hi Chris,
>>>
>>> I could not reproduce the problem using the latest xen-unstable.
>>>
>>> I also found a problem with block tap devices introduced by c/s 18562,
>>> and I fixed that problem in c/s 18843. But that problem occurred with
>>> xm shutdown, xm destroy, etc., not with xm block-detach.
>>>
>>> Could you try xm block-detach using the latest xen-unstable?
>>>
>>> Best regards,
>>> Kan
>>
>> OK, interesting. I'll give it a shot, but it's going to take a little
>> while since I have to build from scratch. I'll report when I'm done.
>
>Ah, now I see. Testing it on xen-unstable does, indeed, show xm block-detach
>working as expected. There were some changes made in the meantime that
>actually make it work. That means the first hunk of my changes to
>DevController.py isn't required. However, I think the other two hunks are
>actually "correct", even though we don't see the xm block-detach bug in
>current xen-unstable. That is, they move the device section from
>/vm/UUID/device/tap to /vm/UUID/device/vbd, which seems more right to me.

Hi Chris,

I have tried the other two hunks of your changes on the latest xen-unstable
and found two issues. Could you look at the attached file?

1. Information from xm list
   When I ran xm list on an active domain, both a "vbd" and a "tap" device
   were shown, with the same uuid and the same uname.

2. Double wait in xend
   I checked xend.log after starting a domain. Xend waited for both "vbd"
   and "tap" via waitForDevices().

If the other two hunks of your changes are correct, I think there is still
something missing from them.
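For what it's worth, issue 1 can be checked from a script by counting the
device classes in the SXP that xm itself reports; with the two hunks applied,
a single tap-backed disk shows up under both "tap" and "vbd". The following
is only an illustrative Python 2-era sketch, not part of the patch, and the
domain name is a placeholder rather than anything from the attached log:

    import re
    import subprocess

    def count_disk_classes(domain):
        # "xm list --long" prints the domain's SXP configuration, which
        # contains one (device (vbd ...)) or (device (tap ...)) block per
        # virtual disk.
        sxp = subprocess.Popen(['xm', 'list', '--long', domain],
                               stdout=subprocess.PIPE).communicate()[0]
        return {'vbd': len(re.findall(r'\(device\s+\(vbd\b', sxp)),
                'tap': len(re.findall(r'\(device\s+\(tap\b', sxp))}

    # One tap disk counted under both classes would show the duplication
    # described in issue 1 above.
    print(count_disk_classes('example-domU'))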
Best regards,
Kan

Attachment: result_with_your_patch.txt

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel