
Re: [QEMU PATCH v4 07/13] softmmu/memory: enable automatic deallocation of memory regions



On Thu, Aug 31, 2023 at 06:10:08PM +0800, Akihiko Odaki wrote:
> On 2023/08/31 18:32, Huang Rui wrote:
> > From: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>
> > 
> > When a memory region has a different life-cycle from that of its parent,
> > it can be automatically released, once it has been unparented and all of
> > its references have gone away, via the object's free callback.
> > 
> > However, currently, references to the memory region are held by its owner
> > without first incrementing the memory region object's reference count.
> > As a result, the automatic deallocation of the object, not taking into
> > account those references, results in use-after-free memory corruption.
> > 
> > This patch increases the reference count of the memory region object on
> > each memory_region_ref() and decreases it on each memory_region_unref().
> > 
> > Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>
> > Signed-off-by: Huang Rui <ray.huang@xxxxxxx>
> > ---
> > 
> > New patch
> > 
> >   softmmu/memory.c | 19 +++++++++++++++++--
> >   1 file changed, 17 insertions(+), 2 deletions(-)
> > 
> > diff --git a/softmmu/memory.c b/softmmu/memory.c
> > index 7d9494ce70..0fdd5eebf9 100644
> > --- a/softmmu/memory.c
> > +++ b/softmmu/memory.c
> > @@ -1797,6 +1797,15 @@ Object *memory_region_owner(MemoryRegion *mr)
> >   
> >   void memory_region_ref(MemoryRegion *mr)
> >   {
> > +    if (!mr) {
> > +        return;
> > +    }
> > +
> > +    /* Obtain a reference to prevent the memory region object
> > +     * from being released under our feet.
> > +     */
> > +    object_ref(OBJECT(mr));
> > +
> >       /* MMIO callbacks most likely will access data that belongs
> >        * to the owner, hence the need to ref/unref the owner whenever
> >        * the memory region is in use.
> > @@ -1807,16 +1816,22 @@ void memory_region_ref(MemoryRegion *mr)
> >        * Memory regions without an owner are supposed to never go away;
> >        * we do not ref/unref them because it slows down DMA sensibly.
> >        */
> 
> The collapsed comment says:
>  > The memory region is a child of its owner.  As long as the
>  > owner doesn't call unparent itself on the memory region,
>  > ref-ing the owner will also keep the memory region alive.
>  > Memory regions without an owner are supposed to never go away;
>  > we do not ref/unref them because it slows down DMA sensibly.
> 
> It contradicts with this patch.

The reason we modify it is to address a use-after-free in the original
code. Please see below: the memory region is freed in object_finalize()
once it has been unparented and its reference count drops to 0, so when we
free (unref) the simple resource the region is destroyed while still in
use, and the VM crashes.

In virgl_cmd_resource_map_blob():
    memory_region_init_ram_device_ptr(res->region, OBJECT(g), NULL, size, data);
    OBJECT(res->region)->free = g_free;
    memory_region_add_subregion(&b->hostmem, mblob.offset, res->region);
    memory_region_set_enabled(res->region, true);

In virtio_gpu_virgl_resource_unmap():
    memory_region_set_enabled(res->region, false);
    memory_region_del_subregion(&b->hostmem, res->region);
    object_unparent(OBJECT(res->region));
    res->region = NULL;

I spent a bit more time trying to understand your point. Would you like me
to update the corresponding comments, or do you have a concern about the
change itself?

Thanks,
Ray

> 
> > -    if (mr && mr->owner) {
> > +    if (mr->owner) {
> >           object_ref(mr->owner);
> >       }
> >   }
> >   
> >   void memory_region_unref(MemoryRegion *mr)
> >   {
> > -    if (mr && mr->owner) {
> > +    if (!mr) {
> > +        return;
> > +    }
> > +
> > +    if (mr->owner) {
> >           object_unref(mr->owner);
> >       }
> > +
> > +    object_unref(OBJECT(mr));
> >   }
> >   
> >   uint64_t memory_region_size(MemoryRegion *mr)
