Re: [PATCH v4 17/20] virtio-blk: implement BlockDevOps->drained_begin()
On Thu, May 04, 2023 at 11:13:42PM +0200, Kevin Wolf wrote:
> Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > Detach ioeventfds during drained sections to stop I/O submission from
> > the guest. virtio-blk is no longer reliant on aio_disable_external()
> > after this patch. This will allow us to remove the
> > aio_disable_external() API once all other code that relies on it is
> > converted.
> >
> > Take extra care to avoid attaching/detaching ioeventfds if the data
> > plane is started/stopped during a drained section. This should be rare,
> > but maybe the mirror block job can trigger it.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@xxxxxxxxxx>
> > ---
> >  hw/block/dataplane/virtio-blk.c | 17 +++++++++------
> >  hw/block/virtio-blk.c           | 38 ++++++++++++++++++++++++++++++++-
> >  2 files changed, 48 insertions(+), 7 deletions(-)
> >
> > diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> > index bd7cc6e76b..d77fc6028c 100644
> > --- a/hw/block/dataplane/virtio-blk.c
> > +++ b/hw/block/dataplane/virtio-blk.c
> > @@ -245,13 +245,15 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
> >      }
> >
> >      /* Get this show started by hooking up our callbacks */
> > -    aio_context_acquire(s->ctx);
> > -    for (i = 0; i < nvqs; i++) {
> > -        VirtQueue *vq = virtio_get_queue(s->vdev, i);
> > +    if (!blk_in_drain(s->conf->conf.blk)) {
> > +        aio_context_acquire(s->ctx);
> > +        for (i = 0; i < nvqs; i++) {
> > +            VirtQueue *vq = virtio_get_queue(s->vdev, i);
> >
> > -        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
> > +            virtio_queue_aio_attach_host_notifier(vq, s->ctx);
> > +        }
> > +        aio_context_release(s->ctx);
> >      }
> > -    aio_context_release(s->ctx);
> >
> >      return 0;
> >
> >  fail_aio_context:
> > @@ -317,7 +319,10 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
> >      trace_virtio_blk_data_plane_stop(s);
> >
> >      aio_context_acquire(s->ctx);
> > -    aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
> > +
> > +    if (!blk_in_drain(s->conf->conf.blk)) {
> > +        aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
> > +    }
>
> So here we actually get a semantic change: What you described as the
> second part in the previous patch, processing the virtqueue one last
> time, isn't done any more if the device is drained.
>
> If it's okay to just skip this during drain, why do we need to do it
> outside of drain?

I forgot to answer why we need to process requests one last time outside
drain.

This approach comes from how vhost uses ioeventfd. When switching from
vhost back to QEMU emulation, there's a chance that a final virtqueue
kick snuck in while ioeventfd was being disabled.

This is not the case with dataplane nowadays (it may have been in the
past). The only dataplane state transitions are device reset and
'stop'/'cont'. Neither of these requires QEMU to process new requests
while stopping dataplane.

My confidence is not 100%, but still pretty high, that the
virtio_queue_host_notifier_read() call could be dropped from dataplane
code. Since I'm not 100% sure, I didn't attempt that.

Stefan