
Re: [Xen-devel] [RFC Patch v3 09/18] block-remus: fix memory leak



On 09/25/2014 07:14 PM, Shriram Rajagopalan wrote:
> On Sep 25, 2014 1:22 AM, "Wen Congyang" <wency@xxxxxxxxxxxxxx> wrote:
>>
>> On 09/25/2014 03:37 AM, Shriram Rajagopalan wrote:
>>> On Sep 5, 2014 5:15 AM, "Wen Congyang" <wency@xxxxxxxxxxxxxx> wrote:
>>>>
>>>> Fix the following two memory leaks:
>>>> 1. If s->ramdisk.prev is not NULL, we merge the write requests in
>>>>    s->ramdisk.h into s->ramdisk.prev, and then destroy s->ramdisk.h.
>>>>    But we forget to free the hash values when destroying s->ramdisk.h.
>>>
>>> I am responding from my phone, so I don't have the code in hand to
>>> validate your claim. I think the merge function reuses the value block
>>> (write request) instead of doing a memcpy, in which case this patch
>>> would be freeing a buffer that is still queued for write. Is that right?
>>
>> No, if we have already cached the sector in s->ramdisk.prev, we just
>> memcpy into the cached copy. Otherwise, we allocate memory and memcpy
>> into it. Why don't we reuse the buffer? Because the buffer may be on
>> the stack...
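[Editor's note: a minimal sketch of the copy-on-cache point above. The struct and function names (cache_entry, cache_write) are illustrative stand-ins, not the actual block-remus code, which keys a hashtable by sector.]

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical cache entry: the ramdisk cache owns a heap copy of the
 * write data rather than the caller's buffer. */
struct cache_entry {
        uint64_t sector;
        char *data;      /* heap-allocated copy, owned by the cache */
};

/* Cache a write by copying, never by retaining buf itself: buf may
 * live on the caller's stack, so keeping the pointer would leave the
 * cache referring to dead memory after the caller returns. */
static int cache_write(struct cache_entry *e, uint64_t sector,
                       const char *buf, size_t len)
{
        char *copy = malloc(len);
        if (!copy)
                return -1;
        memcpy(copy, buf, len);
        free(e->data);   /* drop any previous cached copy */
        e->sector = sector;
        e->data = copy;
        return 0;
}
```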
>>
>>>
>>>> 2. When a write request is finished, replicated_write_callback() is
>>>>    called. We forget to free the buffer in this function.
>>>
>>> Wasn't that done explicitly in the write request completion function,
>>> where a free(buf) was introduced? (Vague recollection... I am not sure
>>> if I pushed that fix upstream either :( )
>>
>> Sorry, I forgot to update the commit message. Bug 2 has been fixed...
>>
>>>
>>> Either way, if you have a working Remus setup, can you confirm that
>>> this patch does not cause a double-free error? (Just this patch, no
>>> other Remus-related fixes.)
>>
>> Hmm, I haven't tested it with the newest Xen, because only drbd is
>> supported now.
>>
> 
> Since this is solely blktap2 related, any older version of Xen w/ Remus
> would do.

We found this bug in an older version of Xen. We ran pgbench to test the
performance, and it triggered an OOM because tapdisk used too much memory...

I haven't tested it with the newest Xen, but I think it's OK according to
the code.
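[Editor's note: for context on the one-line change below, blktap2's hashtable_destroy() takes a free_values flag; passing 0 releases the table while leaving the heap-allocated values (the copied write data) unreachable. A minimal sketch of that contract, with ht_sketch and ht_destroy_sketch as simplified stand-ins for the real blktap2 types; the return value is added here only to make the leak visible.]

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified model of the blktap2 hashtable: just a flat array of
 * values, enough to show the free_values contract. */
struct ht_sketch {
        void **values;
        size_t count;
};

/* Mirrors the hashtable_destroy(h, free_values) contract: the table's
 * own storage is always released, but the stored values are freed only
 * when free_values is non-zero. Passing 0 while the values are heap
 * copies (as in ramdisk_start_flush) leaks every one of them. */
static size_t ht_destroy_sketch(struct ht_sketch *h, int free_values)
{
        size_t freed = 0;

        if (free_values) {
                for (size_t i = 0; i < h->count; i++) {
                        free(h->values[i]);
                        freed++;
                }
        }
        free(h->values);
        free(h);
        return freed;
}
```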

> 
>> Thanks
>> Wen Congyang
>>
>>>
>>>>
>>>> Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxx>
>>>> Signed-off-by: Jiang Yunhong <yunhong.jiang@xxxxxxxxx>
>>>> Signed-off-by: Wen Congyang <wency@xxxxxxxxxxxxxx>
>>>> Cc: Shriram Rajagopalan <rshriram@xxxxxxxxx>
>>>> ---
>>>>  tools/blktap2/drivers/block-remus.c | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/tools/blktap2/drivers/block-remus.c
>>> b/tools/blktap2/drivers/block-remus.c
>>>> index 079588d..4ce9dbe 100644
>>>> --- a/tools/blktap2/drivers/block-remus.c
>>>> +++ b/tools/blktap2/drivers/block-remus.c
>>>> @@ -602,7 +602,7 @@ static int ramdisk_start_flush(td_driver_t *driver)
>>>>                 }
>>>>                 free(sectors);
>>>>
>>>> -               hashtable_destroy (s->ramdisk.h, 0);
>>>> +               hashtable_destroy (s->ramdisk.h, 1);
>>>>         } else
>>>>                 s->ramdisk.prev = s->ramdisk.h;
>>>>
>>>> --
>>>> 1.9.3
>>>>
>>>
>>
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

