[Xen-changelog] [xen stable-4.3] xmalloc: handle correctly page allocation when align > size
commit a82fc4f48e535cd828452fb52e71bdc6dc6e071c
Author:     Julien Grall <julien.grall@xxxxxxxxxx>
AuthorDate: Fri Mar 14 17:41:00 2014 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Mar 14 17:41:00 2014 +0100

    xmalloc: handle correctly page allocation when align > size

    When align is greater than size, we need to derive the order from
    align during multiple-page allocation. I guess that was the goal of
    commit fb034f42 "xmalloc: make close-to-PAGE_SIZE allocations more
    efficient".

    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Acked-by: Keir Fraser <keir@xxxxxxx>
    master commit: ac2cba2901779f66bbfab298faa15c956e91393a
    master date: 2014-03-10 14:40:50 +0100
---
 xen/common/xmalloc_tlsf.c |    5 ++---
 1 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/xen/common/xmalloc_tlsf.c b/xen/common/xmalloc_tlsf.c
index d3bdfa7..a5769c9 100644
--- a/xen/common/xmalloc_tlsf.c
+++ b/xen/common/xmalloc_tlsf.c
@@ -527,11 +527,10 @@ static void xmalloc_pool_put(void *p)
 
 static void *xmalloc_whole_pages(unsigned long size, unsigned long align)
 {
-    unsigned int i, order = get_order_from_bytes(size);
+    unsigned int i, order;
     void *res, *p;
 
-    if ( align > size )
-        get_order_from_bytes(align);
+    order = get_order_from_bytes(max(align, size));
 
     res = alloc_xenheap_pages(order, 0);
     if ( res == NULL )
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.3

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog