
Re: [Xen-devel] [PATCH] xen-blkfront: emit KOBJ_OFFLINE uevent when detaching device



On Tue, Jul 04, 2017 at 05:59:27PM +0100, Roger Pau Monné wrote:
> On Tue, Jul 04, 2017 at 01:48:32PM +0200, Vincent Legout wrote:
> > Devices are not unmounted inside a domU after a xl block-detach.
> > 
> > After xl block-detach, blkfront_closing() is called with state ==
> > XenbusStateConnected, it detects that the device is still in use and
> > only switches state to XenbusStateClosing. blkfront_closing() is called
> > a second time but returns immediately because state ==
> > XenbusStateClosing. Thus the device keeps being mounted inside the domU.
> > 
> > To fix this, emit a KOBJ_OFFLINE uevent even if the device has users.
> > 
> > With this patch, inside domU, udev has:
> > 
> > KERNEL[16994.526789] offline  /devices/vbd-51728/block/xvdb (block)
> > KERNEL[16994.796197] remove   /devices/virtual/bdi/202:16 (bdi)
> > KERNEL[16994.797167] remove   /devices/vbd-51728/block/xvdb (block)
> > UDEV  [16994.798035] remove   /devices/virtual/bdi/202:16 (bdi)
> > UDEV  [16994.809429] offline  /devices/vbd-51728/block/xvdb (block)
> > UDEV  [16994.842365] remove   /devices/vbd-51728/block/xvdb (block)
> > KERNEL[16995.461991] remove   /devices/vbd-51728 (xen)
> > UDEV  [16995.462549] remove   /devices/vbd-51728 (xen)
> 
> I'm not an expert on udev, but aren't those messages duplicated? You
> seem to get one message from udev and another one from the kernel.

I'm not either, but this seems to be the expected behavior: the KERNEL
lines are the raw kernel uevents, while the UDEV lines are the events
udevd re-emits once it has processed its rules, so each event shows up
twice in the monitor output. At least that's what I get on a few
different setups.

> > While without the patch, it had:
> > 
> > KERNEL[30.862764] remove   /devices/vbd-51728 (xen)
> > UDEV  [30.867838] remove   /devices/vbd-51728 (xen)
> > 
> > Signed-off-by: Pascal Bouchareine <pascal@xxxxxxxxx>
> > Signed-off-by: Fatih Acar <fatih.acar@xxxxxxxxx>
> > Signed-off-by: Vincent Legout <vincent.legout@xxxxxxxxx>
> >
> >  drivers/block/xen-blkfront.c | 6 ++++--
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> > index 39459631667c..da0b0444ee1f 100644
> > --- a/drivers/block/xen-blkfront.c
> > +++ b/drivers/block/xen-blkfront.c
> > @@ -2185,8 +2185,10 @@ static void blkfront_closing(struct blkfront_info *info)
> >     mutex_lock(&bdev->bd_mutex);
> >  
> >     if (bdev->bd_openers) {
> > -           xenbus_dev_error(xbdev, -EBUSY,
> > -                            "Device in use; refusing to close");
> > +           dev_warn(disk_to_dev(info->gd),
> > +                    "detaching %s with pending users\n",
> > +                    xbdev->nodename);
> > +           kobject_uevent(&disk_to_dev(info->gd)->kobj, KOBJ_OFFLINE);
> 
> What happens if you simply remove the xenbus_dev_error but don't add
> the kobject_uevent?

I just tested: if I only remove the xenbus_dev_error() call without
adding the kobject_uevent(), I get the same behavior as before
(i.e. no unmount inside the domU).

> I'm asking because I don't see any other block device calling
> kobject_uevent directly, and I'm sure this should be pretty similar
> to what virtio or USB do when a block device is hot-unplugged.

I don't know if this is the right thing to do, but a call to
kobject_uevent_env was added in xen-blkfront a few months ago:

 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=89515d0255c918e08aa4085956c79bf17615fda5
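
For reference, the pattern there is roughly the following (a sketch from
memory, not a verbatim quote of that commit, so the details may differ):

    char *envp[] = { "RESIZE=1", NULL };

    /* Let udev know that the disk capacity changed; RESIZE=1 ends up in
     * the uevent environment so rules can match on it specifically. */
    kobject_uevent_env(&disk_to_dev(info->gd)->kobj, KOBJ_CHANGE, envp);

So there is at least some precedent in xen-blkfront for signalling udev
directly from the frontend.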

> For example blk_unregister_queue already contains a call to trigger a
> kobject_uevent.

Without the patch, blkif_release and xlvbd_release_gendisk are never
called, and no call to blk_unregister_queue is made.

blkif_release expects the device to be unused, and calling
xlvbd_release_gendisk directly instead of emitting the uevent seems to
block in del_gendisk, which calls invalidate_partition and then
fsync_bdev.
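
For clarity, with the patch applied the bd_openers branch of
blkfront_closing() ends up looking roughly like this (paraphrased, with
comments added; not a verbatim copy of the file):

    if (bdev->bd_openers) {
        /* The disk is still open (e.g. mounted) inside the domU, so it
         * cannot be torn down here.  Warn, tell udev the device is going
         * away so userspace gets a chance to unmount it, and only switch
         * the xenbus state to Closing. */
        dev_warn(disk_to_dev(info->gd),
                 "detaching %s with pending users\n",
                 xbdev->nodename);
        kobject_uevent(&disk_to_dev(info->gd)->kobj, KOBJ_OFFLINE);
        xenbus_switch_state(xbdev, XenbusStateClosing);
    } else {
        /* No users left: release the gendisk and complete the close. */
        xlvbd_release_gendisk(info);
        xenbus_frontend_closed(xbdev);
    }

The actual teardown still happens later through the normal release path,
once the last opener inside the domU is gone.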


Vincent


 

