Re: [PATCH v4 04/20] virtio-scsi: stop using aio_disable_external() during unplug



On Wed, May 03, 2023 at 01:40:49PM +0200, Kevin Wolf wrote:
> Am 02.05.2023 um 22:02 hat Stefan Hajnoczi geschrieben:
> > On Tue, May 02, 2023 at 03:19:52PM +0200, Kevin Wolf wrote:
> > > Am 01.05.2023 um 17:09 hat Stefan Hajnoczi geschrieben:
> > > > On Fri, Apr 28, 2023 at 04:22:55PM +0200, Kevin Wolf wrote:
> > > > > Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > > > > > This patch is part of an effort to remove the aio_disable_external()
> > > > > > API because it does not fit in a multi-queue block layer world where
> > > > > > many AioContexts may be submitting requests to the same disk.
> > > > > > 
> > > > > > The SCSI emulation code is already in good shape to stop using
> > > > > > aio_disable_external(). It was only used by commit 9c5aad84da1c
> > > > > > ("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching 
> > > > > > scsi
> > > > > > disk") to ensure that virtio_scsi_hotunplug() works while the guest
> > > > > > driver is submitting I/O.
> > > > > > 
> > > > > > Ensure virtio_scsi_hotunplug() is safe as follows:
> > > > > > 
> > > > > > 1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
> > > > > >    device_set_realized() calls qatomic_set(&dev->realized, false) so
> > > > > >    that future scsi_device_get() calls return NULL because they
> > > > > >    exclude SCSIDevices with realized=false.
> > > > > > 
> > > > > >    That means virtio-scsi will reject new I/O requests to this
> > > > > >    SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
> > > > > >    virtio_scsi_hotunplug() is still executing. We are protected
> > > > > >    against new requests!
> > > > > > 
> > > > > > 2. Add a call to scsi_device_purge_requests() from scsi_unrealize()
> > > > > >    so that in-flight requests are cancelled synchronously. This
> > > > > >    ensures that no in-flight requests remain once
> > > > > >    qdev_simple_device_unplug_cb() returns.
> > > > > > 
> > > > > > Thanks to these two conditions we don't need aio_disable_external()
> > > > > > anymore.
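
A minimal sketch of these two conditions, with made-up names rather than
the real QEMU helpers (purely illustrative): request lookup fails once
realized has been cleared, and unrealize then synchronously drains
whatever is still in flight.

  #include <stdbool.h>
  #include <stdio.h>

  /* Hypothetical device state; not the real SCSIDevice. */
  typedef struct {
      bool realized;
      int in_flight;
  } Dev;

  /* Condition 1: new requests see realized == false and are rejected
   * (virtio-scsi then answers VIRTIO_SCSI_S_BAD_TARGET). */
  static Dev *dev_get(Dev *d)
  {
      return d->realized ? d : NULL;
  }

  /* Condition 2: unrealize cancels in-flight requests synchronously,
   * so nothing is left pending once the unplug callback returns. */
  static void dev_unrealize(Dev *d)
  {
      d->realized = false;
      while (d->in_flight > 0) {
          d->in_flight--;   /* stands in for cancelling one request */
      }
  }

  int main(void)
  {
      Dev d = { .realized = true, .in_flight = 2 };
      dev_unrealize(&d);
      printf("lookup=%p in_flight=%d\n", (void *)dev_get(&d), d.in_flight);
      return 0;
  }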
> > > > > > 
> > > > > > Cc: Zhengui Li <lizhengui@xxxxxxxxxx>
> > > > > > Reviewed-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> > > > > > Reviewed-by: Daniil Tatianin <d-tatianin@xxxxxxxxxxxxxx>
> > > > > > Signed-off-by: Stefan Hajnoczi <stefanha@xxxxxxxxxx>
> > > > > 
> > > > > qemu-iotests 040 starts failing for me after this patch, with what
> > > > > looks like a use-after-free error of some kind.
> > > > > 
> > > > > (gdb) bt
> > > > > #0  0x000055b6e3e1f31c in job_type (job=0xe3e3e3e3e3e3e3e3) at ../job.c:238
> > > > > #1  0x000055b6e3e1cee5 in is_block_job (job=0xe3e3e3e3e3e3e3e3) at ../blockjob.c:41
> > > > > #2  0x000055b6e3e1ce7d in block_job_next_locked (bjob=0x55b6e72b7570) at ../blockjob.c:54
> > > > > #3  0x000055b6e3df6370 in blockdev_mark_auto_del (blk=0x55b6e74af0a0) at ../blockdev.c:157
> > > > > #4  0x000055b6e393e23b in scsi_qdev_unrealize (qdev=0x55b6e7c04d40) at ../hw/scsi/scsi-bus.c:303
> > > > > #5  0x000055b6e3db0d0e in device_set_realized (obj=0x55b6e7c04d40, value=false, errp=0x55b6e497c918 <error_abort>) at ../hw/core/qdev.c:599
> > > > > #6  0x000055b6e3dba36e in property_set_bool (obj=0x55b6e7c04d40, v=0x55b6e7d7f290, name=0x55b6e41bd6d8 "realized", opaque=0x55b6e7246d20, errp=0x55b6e497c918 <error_abort>)
> > > > >     at ../qom/object.c:2285
> > > > > #7  0x000055b6e3db7e65 in object_property_set (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", v=0x55b6e7d7f290, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1420
> > > > > #8  0x000055b6e3dbd84a in object_property_set_qobject (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=0x55b6e74c1890, errp=0x55b6e497c918 <error_abort>)
> > > > >     at ../qom/qom-qobject.c:28
> > > > > #9  0x000055b6e3db8570 in object_property_set_bool (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=false, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1489
> > > > > #10 0x000055b6e3daf2b5 in qdev_unrealize (dev=0x55b6e7c04d40) at ../hw/core/qdev.c:306
> > > > > #11 0x000055b6e3db509d in qdev_simple_device_unplug_cb (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/qdev-hotplug.c:72
> > > > > #12 0x000055b6e3c520f9 in virtio_scsi_hotunplug (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/scsi/virtio-scsi.c:1065
> > > > > #13 0x000055b6e3db4dec in hotplug_handler_unplug (plug_handler=0x55b6e81c3630, plugged_dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/hotplug.c:56
> > > > > #14 0x000055b6e3a28f84 in qdev_unplug (dev=0x55b6e7c04d40, errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:935
> > > > > #15 0x000055b6e3a290fa in qmp_device_del (id=0x55b6e74c1760 "scsi0", errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:955
> > > > > #16 0x000055b6e3fb0a5f in qmp_marshal_device_del (args=0x7f61cc005eb0, ret=0x7f61d5a8ae38, errp=0x7f61d5a8ae40) at qapi/qapi-commands-qdev.c:114
> > > > > #17 0x000055b6e3fd52e1 in do_qmp_dispatch_bh (opaque=0x7f61d5a8ae08) at ../qapi/qmp-dispatch.c:128
> > > > > #18 0x000055b6e4007b9e in aio_bh_call (bh=0x55b6e7dea730) at ../util/async.c:155
> > > > > #19 0x000055b6e4007d2e in aio_bh_poll (ctx=0x55b6e72447c0) at ../util/async.c:184
> > > > > #20 0x000055b6e3fe3b45 in aio_dispatch (ctx=0x55b6e72447c0) at ../util/aio-posix.c:421
> > > > > #21 0x000055b6e4009544 in aio_ctx_dispatch (source=0x55b6e72447c0, callback=0x0, user_data=0x0) at ../util/async.c:326
> > > > > #22 0x00007f61ddc14c7f in g_main_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:3454
> > > > > #23 g_main_context_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:4172
> > > > > #24 0x000055b6e400a7e8 in glib_pollfds_poll () at ../util/main-loop.c:290
> > > > > #25 0x000055b6e400a0c2 in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:313
> > > > > #26 0x000055b6e4009fa2 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:592
> > > > > #27 0x000055b6e3a3047b in qemu_main_loop () at ../softmmu/runstate.c:731
> > > > > #28 0x000055b6e3dab27d in qemu_default_main () at ../softmmu/main.c:37
> > > > > #29 0x000055b6e3dab2b8 in main (argc=24, argv=0x7ffec55196a8) at ../softmmu/main.c:48
> > > > > (gdb) p jobs
> > > > > $4 = {lh_first = 0x0}
> > > > 
> > > > I wasn't able to reproduce this with gcc 13.1.1 or clang 16.0.1:
> > > > 
> > > >   $ tests/qemu-iotests/check -qcow2 040
> > > > 
> > > > Any suggestions on how to reproduce the issue?
> > > 
> > > It happens consistently for me with the same command line, both with gcc
> > > and clang.
> > > 
> > > gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
> > > clang version 15.0.7 (Fedora 15.0.7-2.fc37)
> > > 
> > > Maybe there is a semantic merge conflict? I have applied the series on
> > > top of master (05d50ba2d4) and my block branch (88f81f7bc8).
> > 
> > I can't find 88f81f7bc8 but rebased on repo.or.cz/qemu/kevin.git block
> > (4514dac7f2e9) and the test passes here.
> > 
> > I rebased on qemu.git/master (05d50ba2d4) and it also passes.
> > 
> > Please let me know if the following tree (a0ff680a72f6) works on your
> > machine:
> > https://gitlab.com/stefanha/qemu/-/tree/remove-aio_disable_external
> 
> Fails in the same way.
> 
> So I tried to debug this myself now. The problem is that iterating the
> jobs in blockdev_mark_auto_del() is incorrect: job_cancel_locked()
> frees the job and then block_job_next_locked() is a use after free.
> 
> It also drops job_mutex temporarily and polls, so even switching to a
> *_FOREACH_SAFE style loop won't fix this. I guess we have to restart
> the whole search from the start after a job_cancel_locked() because the
> list might look very different after the call.
> 
> Now, of course, how this is related to your patch and why it doesn't
> trigger before it, is still less than clear. What I found out is that
> adding the scsi_device_purge_requests() call is enough to crash it.
> Maybe it's related to the blk_drain() inside of it, so that the job now
> finishes earlier during the unplug, or something like that.
> 
> Anyway, changing blockdev_mark_auto_del() fixes it. I'll send a patch.
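
For what it's worth, the approach described above (restarting the walk
from the head of the job list after every cancellation, because
job_cancel_locked() can drop the lock, poll, and free jobs behind the
iterator) looks roughly like the sketch below. The types and helpers are
simplified stand-ins, not the real QEMU job API:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  /* Simplified stand-ins for the real job list and its locking. */
  typedef struct Job {
      struct Job *next;
      void *owner;          /* e.g. the BlockBackend being unplugged */
      bool cancelled;
  } Job;

  static Job *job_head;

  /* In QEMU this can drop the job lock, poll the main loop and free
   * arbitrary jobs, so the caller must not keep iterating afterwards. */
  static void cancel_job(Job *job)
  {
      job->cancelled = true;
  }

  static void cancel_jobs_for(void *owner)
  {
  retry:
      for (Job *job = job_head; job; job = job->next) {
          if (job->owner == owner && !job->cancelled) {
              cancel_job(job);
              /* The list may have changed arbitrarily while the lock was
               * dropped; even a _SAFE macro would have cached a possibly
               * stale next pointer, so restart from the head instead. */
              goto retry;
          }
      }
  }

  int main(void)
  {
      int owner;
      Job a = { .owner = &owner };
      Job b = { .next = &a, .owner = &owner };
      job_head = &b;
      cancel_jobs_for(&owner);
      printf("a cancelled=%d b cancelled=%d\n", a.cancelled, b.cancelled);
      return 0;
  }

Restarting is quadratic in the worst case, but only a handful of jobs can
be attached to a single device at unplug time, so that shouldn't matter
here.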

Thanks, and sorry for taking up your time debugging this!

Stefan
