
Re: [QEMU PATCH v5 07/13] softmmu/memory: enable automatic deallocation of memory regions



On 20/9/23 01:18, Akihiko Odaki wrote:
On 2023/09/19 23:21, Xenia Ragiadakou wrote:
On 19/9/23 13:44, Akihiko Odaki wrote:
On 2023/09/19 19:28, Xenia Ragiadakou wrote:
On 15/9/23 18:11, Akihiko Odaki wrote:
On 2023/09/15 20:11, Huang Rui wrote:
From: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>

When a memory region has a different life-cycle from that of its parent, it can be released automatically, via the object's free callback, once it has been unparented and all of its references have gone away.

However, references to the memory region are currently held by its owner without first incrementing the memory region object's reference count. As a result, the automatic deallocation of the object does not take those references into account and leads to use-after-free memory corruption.
This patch increases the reference count of an owned memory region object on each memory_region_ref() and decreases it on each memory_region_unref().
Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>
Signed-off-by: Huang Rui <ray.huang@xxxxxxx>
---

V4 -> V5:
     - ref/unref only owned memory regions (Akihiko)

  softmmu/memory.c | 5 +++++
  1 file changed, 5 insertions(+)

diff --git a/softmmu/memory.c b/softmmu/memory.c
index 7d9494ce70..15e1699750 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -1800,6 +1800,9 @@ void memory_region_ref(MemoryRegion *mr)
      /* MMIO callbacks most likely will access data that belongs
       * to the owner, hence the need to ref/unref the owner whenever
       * the memory region is in use.
+     * Likewise, the owner keeps references to the memory region,
+     * hence the need to ref/unref the memory region object to prevent
+     * its automatic deallocation while still referenced by its owner.
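(The rest of the hunk is clipped in the quoting above. Judging from the commit message, the resulting ref/unref pair presumably ends up roughly as in the sketch below; this is a sketch, not the literal patch.)

void memory_region_ref(MemoryRegion *mr)
{
    if (mr && mr->owner) {
        object_ref(mr->owner);
        /* Also pin the region object itself, so that a dynamically
         * allocated region cannot be finalized while the memory
         * subsystem (e.g. a flatview range) still holds it. */
        object_ref(OBJECT(mr));
    }
}

void memory_region_unref(MemoryRegion *mr)
{
    if (mr && mr->owner) {
        object_unref(OBJECT(mr));
        object_unref(mr->owner);
    }
}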
The comment added by this hunk does not make sense. Traditionally no such automatic deallocation happens, so the owner has always been required to free the memory region when it gets finalized.
"[QEMU PATCH v5 09/13] virtio-gpu: Handle resource blob commands" 
introduces a different kind of memory region, which can be freed 
anytime before the device gets finalized. Even in this case, the 
owner removes the reference to the memory owner by doing 
res->region = NULL;
Hi Akihiko,

You are right, the word "owner" is not correct. The issue observed was due to the references kept in flatview ranges, and to the fact that flatview_destroy() is asynchronous and was called after the memory region's destruction.
If I replace the word "owner" with "memory subsystem" in the commit message and drop the comment, would that be OK with you? Or do you want to suggest something else?
This will extend the lifetime of the memory region, but the 
underlying memory is still synchronously freed. Can you show that 
the flatview range will not be used to read the freed memory?
Yes, the intention of this patch is to delay finalization of the mr object until all memory_region_unref() calls on this mr have taken place.
What do you mean by "the underlying memory is still synchronously freed"?
A pointer is passed to memory_region_init_ram_ptr() via the ptr parameter when the memory region is initialized, and the memory region keeps that pointer.
In virtio_gpu_virgl_resource_unmap(), the memory behind that pointer is unmapped with virgl_renderer_resource_unmap(), which leaves the pointer kept by the memory region dangling even though this patch extends the lifetime of the memory region. Can you show that the dangling pointer the memory region holds will never be used?
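To spell the sequence out (a sketch; the virglrenderer call signatures are written from memory and the helpers are hypothetical):

/* Assumed headers: qemu/osdep.h, exec/memory.h, hw/virtio/virtio-gpu.h,
 * virglrenderer.h. */
static void blob_map_example(VirtIOGPUBase *b, MemoryRegion *mr,
                             uint32_t res_handle, uint64_t size)
{
    void *data = NULL;
    uint64_t map_size = 0;

    /* Map the blob and hand the host pointer to the memory core;
     * the MemoryRegion keeps this pointer for the rest of its life. */
    virgl_renderer_resource_map(res_handle, &data, &map_size);
    memory_region_init_ram_ptr(mr, OBJECT(b), "virgl-blob", size, data);
}

static void blob_unmap_example(uint32_t res_handle)
{
    /* Invalidates the host mapping immediately.  The MemoryRegion
     * (and any flatview range still holding a reference to it) may
     * outlive this call, so its ram pointer is now dangling. */
    virgl_renderer_resource_unmap(res_handle);
}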
I see your point, but I think it is not directly related to this patch. IMHO, it is related to the implementation of virtio_gpu_virgl_resource_unmap(). Maybe the unmapping should be done in the free callback, for instance as sketched below. However, I would expect the pointer of a disabled memory region not to be used, though I am not sure.
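Something along these lines, for example (just a sketch; the wrapper struct and names are made up, and OBJECT(mr)->free is the generic Object free hook):

/* Hypothetical wrapper so the host mapping is torn down only when the
 * MemoryRegion object itself is finalized and freed. */
typedef struct VirglBlobRegion {
    MemoryRegion mr;        /* first member, so OBJECT(&vbr->mr) == vbr */
    uint32_t res_handle;
} VirglBlobRegion;

static void virgl_blob_region_free(void *obj)
{
    VirglBlobRegion *vbr = obj;

    /* Runs only after the last object_unref()/memory_region_unref(),
     * i.e. when no flatview range can still reference the region, so
     * unmapping here cannot leave a dangling pointer behind. */
    virgl_renderer_resource_unmap(vbr->res_handle);
    g_free(vbr);
}

static MemoryRegion *virgl_blob_region_new(VirtIOGPUBase *b,
                                           uint32_t res_handle,
                                           uint64_t size, void *data)
{
    VirglBlobRegion *vbr = g_new0(VirglBlobRegion, 1);

    vbr->res_handle = res_handle;
    memory_region_init_ram_ptr(&vbr->mr, OBJECT(b), "virgl-blob", size, data);
    /* Defer freeing (and now also unmapping) to the object's free callback. */
    OBJECT(&vbr->mr)->free = virgl_blob_region_free;
    return &vbr->mr;
}

That way the unmap would naturally wait for the last unref introduced by this patch, instead of racing with the asynchronous flatview teardown.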




 

