Re: [Xen-devel] [PATCH V5 3/8] xen/common: Introduce _xrealloc function
- To: Jan Beulich <jbeulich@xxxxxxxx>
- From: Oleksandr <olekstysh@xxxxxxxxx>
- Date: Tue, 24 Sep 2019 20:14:36 +0300
- Cc: sstabellini@xxxxxxxxxx, Wei Liu <wl@xxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, julien.grall@xxxxxxx, Paul Durrant <paul.durrant@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, volodymyr_babchuk@xxxxxxxx
- Delivery-date: Tue, 24 Sep 2019 17:14:43 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
Hi, Jan

On 24.09.19 18:51, Jan Beulich wrote:
> On 24.09.2019 17:30, Oleksandr Tyshchenko wrote:
>> Changes V4 -> V5:
>>     - avoid possible truncation with allocations of 4GiB or above
>>     - introduce helper functions add(strip)_padding to avoid
>>       duplicating the code
>>     - omit the unnecessary casts, change u32 to uint32_t
>>       when moving the code
>>     - use _xzalloc instead of _xmalloc to get the tail
>>       portion zeroed
> I'm sorry, but no, this is wasteful: You write the initialized
> portion of the block twice this way, when once is fully
> sufficient (but see below).
Indeed, that is not efficient.
>> --- a/xen/common/xmalloc_tlsf.c
>> +++ b/xen/common/xmalloc_tlsf.c
>> @@ -554,10 +554,40 @@ static void tlsf_init(void)
>>  #define ZERO_BLOCK_PTR ((void *)-1L)
>>  #endif
>> +static void *strip_padding(void *p)
>> +{
>> +    struct bhdr *b = p - BHDR_OVERHEAD;
>> +
>> +    if ( b->size & FREE_BLOCK )
>> +    {
>> +        p -= b->size & ~FREE_BLOCK;
>> +        b = p - BHDR_OVERHEAD;
>> +        ASSERT(!(b->size & FREE_BLOCK));
>> +    }
>> +
>> +    return p;
>> +}
>> +
>> +static void *add_padding(void *p, unsigned long align)
>> +{
>> +    uint32_t pad;
> A fixed width type is inappropriate here anyway - unsigned int would
> suffice. Judging from the incoming parameters, unsigned long would
> be more future proof; alternatively the "align" parameter could be
> "unsigned int", since we don't allow such huge allocations anyway
> (but I agree that adjusting this doesn't really belong in the patch
> here).

Will change to unsigned int.
>> @@ -598,10 +621,70 @@ void *_xzalloc(unsigned long size, unsigned long align)
>>      return p ? memset(p, 0, size) : p;
>>  }
>> -void xfree(void *p)
>> +void *_xrealloc(void *ptr, unsigned long size, unsigned long align)
>>  {
>> -    struct bhdr *b;
>> +    unsigned long curr_size, tmp_size;
>> +    void *p;
>> +
>> +    if ( !size )
>> +    {
>> +        xfree(ptr);
>> +        return ZERO_BLOCK_PTR;
>> +    }
>> +
>> +    if ( ptr == NULL || ptr == ZERO_BLOCK_PTR )
>> +        return _xmalloc(size, align);
> This is inconsistent - you use _xzalloc() further down (and too
> aggressively imo, as said).

Indeed. I missed that.

> Here use of that function would then
> be indicated. However, ...
>> +    ASSERT((align & (align - 1)) == 0);
>> +    if ( align < MEM_ALIGN )
>> +        align = MEM_ALIGN;
>> +
>> +    tmp_size = size + align - MEM_ALIGN;
>> +
>> +    if ( tmp_size < PAGE_SIZE )
>> +        tmp_size = (tmp_size < MIN_BLOCK_SIZE) ? MIN_BLOCK_SIZE :
>> +            ROUNDUP_SIZE(tmp_size);
>> +
>> +    if ( !((unsigned long)ptr & (PAGE_SIZE - 1)) )
>> +    {
>> +        curr_size = (unsigned long)PFN_ORDER(virt_to_page(ptr)) << PAGE_SHIFT;
>> +
>> +        if ( size <= curr_size && ((unsigned long)ptr & (align - 1)) == 0 )
>> +            return ptr;
> ... I only now realize that in a case like this one you can't really
> zero-fill the tail portion that's extending beyond the original
> request. Hence I think the just-in-case zero filling would better be
> dropped again (and the _xmalloc() use above is fine).

Got it.

> Note that with the fix done here you don't need tmp_size anymore
> outside of ...
>> +    }
>> +    else
>> +    {
> ... this block. Please move its calculation (and declaration) here.

Agree. Will move.
>> +        struct bhdr *b;
>> +
>> +        /* Strip alignment padding. */
>> +        p = strip_padding(ptr);
>> +
>> +        b = p - BHDR_OVERHEAD;
>> +        curr_size = b->size & BLOCK_SIZE_MASK;
>> +
>> +        if ( tmp_size <= curr_size )
>> +        {
>> +            /* Add alignment padding. */
>> +            p = add_padding(p, align);
>> +
>> +            ASSERT(((unsigned long)p & (align - 1)) == 0);
> Since another rev is going to be needed anyway - here and above
> please prefer ! over == 0.

ok, will do.
--
Regards,
Oleksandr Tyshchenko
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel