
Re: [PATCH v2 2/4] block: Avoid processing BDS twice in bdrv_set_aio_context_ignore()



On Thu, Dec 17, 2020 at 02:06:02PM +0100, Kevin Wolf wrote:
> Am 17.12.2020 um 13:50 hat Vladimir Sementsov-Ogievskiy geschrieben:
> > 17.12.2020 13:58, Kevin Wolf wrote:
> > > Am 17.12.2020 um 10:37 hat Sergio Lopez geschrieben:
> > > > On Wed, Dec 16, 2020 at 07:31:02PM +0100, Kevin Wolf wrote:
> > > > > Am 16.12.2020 um 15:55 hat Sergio Lopez geschrieben:
> > > > > > On Wed, Dec 16, 2020 at 01:35:14PM +0100, Kevin Wolf wrote:
> > > > > > > Anyway, trying to reconstruct the block graph with BdrvChild
> > > > > > > pointers annotated at the edges:
> > > > > > > 
> > > > > > > BlockBackend
> > > > > > >        |
> > > > > > >        v
> > > > > > >    backup-top ------------------------+
> > > > > > >        |   |                          |
> > > > > > >        |   +-----------------------+  |
> > > > > > >        |            0x5655068b8510 |  | 0x565505e3c450
> > > > > > >        |                           |  |
> > > > > > >        | 0x565505e42090            |  |
> > > > > > >        v                           |  |
> > > > > > >      qcow2 ---------------------+  |  |
> > > > > > >        |                        |  |  |
> > > > > > >        | 0x565505e52060         |  |  | ??? [1]
> > > > > > >        |                        |  |  |  |
> > > > > > >        v         0x5655066a34d0 |  |  |  | 0x565505fc7aa0
> > > > > > >      file                       v  v  v  v
> > > > > > >                               qcow2 (backing)
> > > > > > >                                      |
> > > > > > >                                      | 0x565505e41d20
> > > > > > >                                      v
> > > > > > >                                    file
> > > > > > > 
> > > > > > > [1] This seems to be a BdrvChild with a non-BDS parent. Probably a
> > > > > > >      BdrvChild directly owned by the backup job.
> > > > > > > 
> > > > > > > > So it seems this is happening:
> > > > > > > > 
> > > > > > > > backup-top (5e48030) <---------| (5)
> > > > > > > >     |    |                      |
> > > > > > > >     |    | (6) ------------> qcow2 (5fbf660)
> > > > > > > >     |                           ^    |
> > > > > > > >     |                       (3) |    | (4)
> > > > > > > >     |-> (1) qcow2 (5e5d420) -----    |-> file (6bc0c00)
> > > > > > > >     |
> > > > > > > >     |-> (2) file (5e52060)
> > > > > > > > 
> > > > > > > > backup-top (5e48030), the BDS that was passed as argument in the
> > > > > > > > first bdrv_set_aio_context_ignore() call, is re-entered when qcow2
> > > > > > > > (5fbf660) is processing its parents, and the latter is also
> > > > > > > > re-entered when the first one starts processing its children again.
> > > > > > > 
> > > > > > > Yes, but look at the BdrvChild pointers: it is through different
> > > > > > > edges that we come back to the same node. No BdrvChild is used twice.
> > > > > > > 
> > > > > > > If backup-top had added all of its children to the ignore list
> > > > > > > before calling into the overlay qcow2, the backing qcow2 wouldn't
> > > > > > > eventually have called back into backup-top.
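For illustration, a minimal sketch of the two-pass idea described above,
written against the children loop of bdrv_set_aio_context_ignore(); this is
a sketch of the suggestion, not the patch under review:

<---- begin ---->
    GSList *children_to_process = NULL;
    BdrvChild *child;

    /* First pass: put every child on the ignore list up front, so a node
     * reached through some other edge cannot recurse back into bs through
     * one of these children. */
    QLIST_FOREACH(child, &bs->children, next) {
        if (g_slist_find(*ignore, child)) {
            continue;
        }
        *ignore = g_slist_prepend(*ignore, child);
        children_to_process = g_slist_prepend(children_to_process, child);
    }

    /* Second pass: recurse only into the children added above. */
    for (GSList *entry = children_to_process; entry; entry = entry->next) {
        child = entry->data;
        bdrv_set_aio_context_ignore(child->bs, new_context, ignore);
    }
    g_slist_free(children_to_process);
<---- end ---->

Kevin's point below about parents suggests applying the same two-pass
pattern to the bs->parents loop as well.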
> > > > > > 
> > > > > > I've tested a patch that first adds every child to the ignore list,
> > > > > > and then processes those that weren't there before, as you suggested
> > > > > > in a previous email. With that, the offending qcow2 is not re-entered,
> > > > > > so we avoid the crash, but backup-top is still entered twice:
> > > > > 
> > > > > I think we also need to add every parent to the ignore list before
> > > > > calling callbacks, though it doesn't look like this is the problem
> > > > > you're currently seeing.
> > > > 
> > > > I agree.
> > > > 
> > > > > > bs=0x560db0e3b030 (backup-top) enter
> > > > > > bs=0x560db0e3b030 (backup-top) processing children
> > > > > > bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e2f450 (child->bs=0x560db0fb2660)
> > > > > > bs=0x560db0fb2660 (qcow2) enter
> > > > > > bs=0x560db0fb2660 (qcow2) processing children
> > > > > > bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db0e34d20 (child->bs=0x560db1bb3c00)
> > > > > > bs=0x560db1bb3c00 (file) enter
> > > > > > bs=0x560db1bb3c00 (file) processing children
> > > > > > bs=0x560db1bb3c00 (file) processing parents
> > > > > > bs=0x560db1bb3c00 (file) processing itself
> > > > > > bs=0x560db0fb2660 (qcow2) calling bsaci child=0x560db16964d0 (child->bs=0x560db0e50420)
> > > > > > bs=0x560db0e50420 (qcow2) enter
> > > > > > bs=0x560db0e50420 (qcow2) processing children
> > > > > > bs=0x560db0e50420 (qcow2) calling bsaci child=0x560db0e34ea0 (child->bs=0x560db0e45060)
> > > > > > bs=0x560db0e45060 (file) enter
> > > > > > bs=0x560db0e45060 (file) processing children
> > > > > > bs=0x560db0e45060 (file) processing parents
> > > > > > bs=0x560db0e45060 (file) processing itself
> > > > > > bs=0x560db0e50420 (qcow2) processing parents
> > > > > > bs=0x560db0e50420 (qcow2) processing itself
> > > > > > bs=0x560db0fb2660 (qcow2) processing parents
> > > > > > bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1672860
> > > > > > bs=0x560db0fb2660 (qcow2) calling set_aio_ctx child=0x560db1b14a20
> > > > > > bs=0x560db0e3b030 (backup-top) enter
> > > > > > bs=0x560db0e3b030 (backup-top) processing children
> > > > > > bs=0x560db0e3b030 (backup-top) processing parents
> > > > > > bs=0x560db0e3b030 (backup-top) calling set_aio_ctx child=0x560db0e332d0
> > > > > > bs=0x560db0e3b030 (backup-top) processing itself
> > > > > > bs=0x560db0fb2660 (qcow2) processing itself
> > > > > > bs=0x560db0e3b030 (backup-top) calling bsaci child=0x560db0e35090 (child->bs=0x560db0e50420)
> > > > > > bs=0x560db0e50420 (qcow2) enter
> > > > > > bs=0x560db0e3b030 (backup-top) processing parents
> > > > > > bs=0x560db0e3b030 (backup-top) processing itself
> > > > > > 
> > > > > > I see that "blk_do_set_aio_context()" passes "blk->root" to
> > > > > > "bdrv_child_try_set_aio_context()" so it's already in the ignore list,
> > > > > > so I'm not sure what's happening here. Is backup-top referenced from
> > > > > > two different BdrvChild objects, or is "blk->root" not pointing to
> > > > > > backup-top's BDS?
> > > > > 
> > > > > The second time that backup-top is entered, it is not as the BDS of
> > > > > blk->root, but as the parent node of the overlay qcow2. Which is
> > > > > interesting, because last time it was still the backing qcow2, so the
> > > > > change did have _some_ effect.
> > > > > 
> > > > > The part that I don't understand is why you still get the line with
> > > > > child=0x560db1b14a20, because when you add all children to the ignore
> > > > > list first, that should have been put into the ignore list as one of the
> > > > > first things in the whole process (when backup-top was first entered).
> > > > > 
> > > > > Is 0x560db1b14a20 a BdrvChild that has backup-top as its opaque value,
> > > > > but isn't actually present in backup-top's bs->children?
> > > > 
> > > > Exactly, that line corresponds to this chunk of code:
> > > > 
> > > > <---- begin ---->
> > > >      QLIST_FOREACH(child, &bs->parents, next_parent) {
> > > >          if (g_slist_find(*ignore, child)) {
> > > >              continue;
> > > >          }
> > > >          assert(child->klass->set_aio_ctx);
> > > >          *ignore = g_slist_prepend(*ignore, child);
> > > >          /* Debug instrumentation added for the trace above: */
> > > >          fprintf(stderr, "bs=%p (%s) calling set_aio_ctx child=%p\n",
> > > >                  bs, bs->drv->format_name, child);
> > > >          child->klass->set_aio_ctx(child, new_context, ignore);
> > > >      }
> > > > <---- end ---->
> > > > 
> > > > Do you think it's safe to re-enter backup-top, or should we look for a
> > > > way to avoid this?
> > > 
> > > I think it should be avoided, but I don't understand why putting all
> > > children of backup-top into the ignore list doesn't already avoid it. If
> > > backup-top is in the parents list of qcow2, then qcow2 should be in the
> > > children list of backup-top and therefore the BdrvChild should already
> > > be in the ignore list.
> > > 
> > > The only way I can explain this is that backup-top and qcow2 have
> > > different ideas about which BdrvChild objects exist that connect them.
> > > Or that the graph changes between both places, but I don't see how that
> > > could happen in bdrv_set_aio_context_ignore().
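One way to test that hypothesis, as a rough debugging sketch: walk
bs->parents and assert that each BDS parent also has that exact BdrvChild
in its own bs->children. This assumes that for child_of_bds parents the
BdrvChild's opaque field points at the parent BDS; it is a debugging aid
only, not a proposed fix:

<---- begin ---->
    BdrvChild *c, *c2;

    QLIST_FOREACH(c, &bs->parents, next_parent) {
        if (c->klass != &child_of_bds) {
            /* Parent is a BlockBackend or a job, not a BDS; skip it. */
            continue;
        }
        BlockDriverState *parent_bs = c->opaque;
        bool found = false;

        QLIST_FOREACH(c2, &parent_bs->children, next) {
            if (c2 == c) {
                found = true;
                break;
            }
        }
        /* If this fires, the two nodes disagree about the BdrvChild
         * objects that connect them. */
        assert(found);
    }
<---- end ---->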
> > > 
> > 
> > bdrv_set_aio_context_ignore() does bdrv_drained_begin(). As I reported
> > recently, nothing prevents a job from finishing and modifying the graph
> > during another drained section. That may be the case here.
> 
> Good point, this might be the same bug then.
> 
> If everything worked correctly, a job completion could only happen on
> the outer bdrv_set_aio_context_ignore(). But after that, we are already
> in a drain section, so the job should be quiesced and a second drain
> shouldn't cause any additional graph changes.
> 
> I would have to go back to the other discussion, but I think it was
> related to block jobs that are already in the completion process and
> keep moving forward even though they are supposed to be quiesced.
> 
> If I remember correctly, actually pausing them at this point looked
> difficult. Maybe what we should then do is to let .drained_poll return
> true until they have actually fully completed?
> 
> Ah, but was this something that would deadlock because the job
> completion callbacks use drain sections themselves?
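A rough sketch of that .drained_poll idea, based on child_job_drained_poll()
in blockjob.c; the completion-state test below (job->deferred_to_main_loop,
job_is_completed()) is an assumption about where "fully completed" would be
detected, and the deadlock question above would still need an answer:

<---- begin ---->
static bool child_job_drained_poll(BdrvChild *c)
{
    BlockJob *bjob = c->opaque;
    Job *job = &bjob->job;

    /* Sketch: once a job has deferred its completion to the main loop,
     * keep reporting activity until it is fully completed, so a drained
     * section also waits out the completion callbacks instead of letting
     * them modify the graph mid-drain. */
    if (job->deferred_to_main_loop && !job_is_completed(job)) {
        return true;
    }

    /* ... existing logic: report whether the job driver still has
     * pending activity ... */
    return job->busy;
}
<---- end ---->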
> 
> > If backup-top is involved, I suppose the graph modification happens in
> > backup_clean(), when we remove the filter. Who is calling
> > set_aio_context in this issue? I mean, what is the backtrace of
> > bdrv_set_aio_context_ignore()?
> 
> Sergio, can you provide the backtrace, and also test whether the theory
> about a job completing in the middle of the process is what you actually
> hit?

No, I'm sure the job is not finishing in the middle of the
set_aio_context chain, which is started by
virtio_blk_data_plane_[start|stop], which in turn is triggered by a
guest reboot.

This is a stack trace that reaches the point at which backup-top is
entered a second time:

#0  0x0000560c3e173bbd in child_job_set_aio_ctx (c=<optimized out>, ctx=0x560c40c45630, ignore=0x7f6d4eeb6f40) at ../blockjob.c:159
#1  0x0000560c3e1aefc6 in bdrv_set_aio_context_ignore (bs=0x560c40dc3660, new_context=0x560c40c45630, ignore=0x7f6d4eeb6f40) at ../block.c:6509
#2  0x0000560c3e1aee8a in bdrv_set_aio_context_ignore (bs=bs@entry=0x560c40c4c030, new_context=new_context@entry=0x560c40c45630, ignore=ignore@entry=0x7f6d4eeb6f40) at ../block.c:6487
#3  0x0000560c3e1af503 in bdrv_child_try_set_aio_context (bs=bs@entry=0x560c40c4c030, ctx=ctx@entry=0x560c40c45630, ignore_child=<optimized out>, errp=errp@entry=0x7f6d4eeb6fc8) at ../block.c:6619
#4  0x0000560c3e1e561a in blk_do_set_aio_context (blk=0x560c41ca4610, new_context=0x560c40c45630, update_root_node=update_root_node@entry=true, errp=errp@entry=0x7f6d4eeb6fc8) at ../block/block-backend.c:2027
#5  0x0000560c3e1e740d in blk_set_aio_context (blk=<optimized out>, new_context=<optimized out>, errp=errp@entry=0x7f6d4eeb6fc8) at ../block/block-backend.c:2048
#6  0x0000560c3e10de78 in virtio_blk_data_plane_start (vdev=<optimized out>) at ../hw/block/dataplane/virtio-blk.c:220
#7  0x0000560c3de691d2 in virtio_bus_start_ioeventfd (bus=bus@entry=0x560c41ca1e98) at ../hw/virtio/virtio-bus.c:222
#8  0x0000560c3de4f907 in virtio_pci_start_ioeventfd (proxy=0x560c41c99d90) at ../hw/virtio/virtio-pci.c:1261
#9  0x0000560c3de4f907 in virtio_pci_common_write (opaque=0x560c41c99d90, addr=<optimized out>, val=<optimized out>, size=<optimized out>) at ../hw/virtio/virtio-pci.c:1261
#10 0x0000560c3e145d81 in memory_region_write_accessor (mr=0x560c41c9a770, addr=20, value=<optimized out>, size=1, shift=<optimized out>, mask=<optimized out>, attrs=...) at ../softmmu/memory.c:491
#11 0x0000560c3e1447de in access_with_adjusted_size (addr=addr@entry=20, value=value@entry=0x7f6d4eeb71a8, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x560c3e145c80 <memory_region_write_accessor>, mr=0x560c41c9a770, attrs=...) at ../softmmu/memory.c:552
#12 0x0000560c3e148052 in memory_region_dispatch_write (mr=mr@entry=0x560c41c9a770, addr=20, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...) at ../softmmu/memory.c:1501
#13 0x0000560c3e06b5b7 in flatview_write_continue (fv=fv@entry=0x7f6d400ed3e0, addr=addr@entry=4261429268, attrs=..., ptr=ptr@entry=0x7f6d71dad028, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x560c41c9a770) at /home/BZs/1900326/qemu/include/qemu/host-utils.h:164
#14 0x0000560c3e06b7d6 in flatview_write (fv=0x7f6d400ed3e0, addr=addr@entry=4261429268, attrs=attrs@entry=..., buf=buf@entry=0x7f6d71dad028, len=len@entry=1) at ../softmmu/physmem.c:2799
#15 0x0000560c3e06e330 in address_space_write (as=0x560c3ec0a920 <address_space_memory>, addr=4261429268, attrs=..., buf=buf@entry=0x7f6d71dad028, len=1) at ../softmmu/physmem.c:2891
#16 0x0000560c3e06e3ba in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=..., attrs@entry=..., buf=buf@entry=0x7f6d71dad028, len=<optimized out>, is_write=<optimized out>) at ../softmmu/physmem.c:2901
#17 0x0000560c3e10021a in kvm_cpu_exec (cpu=cpu@entry=0x560c40d7e0d0) at ../accel/kvm/kvm-all.c:2541
#18 0x0000560c3e1445e5 in kvm_vcpu_thread_fn (arg=arg@entry=0x560c40d7e0d0) at ../accel/kvm/kvm-cpus.c:49
#19 0x0000560c3e2c798a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
#20 0x00007f6d6ba8614a in start_thread () at /lib64/libpthread.so.0
#21 0x00007f6d6b7b8763 in clone () at /lib64/libc.so.6

Thanks,
Sergio.
