
Re: [Xen-devel] [PATCH for 4.7] xen: Replace alloc_vcpu_guest_context() with vmalloc()



On 28/08/15 14:07, Andrew Cooper wrote:
> On 28/08/15 13:57, Julien Grall wrote:
>> On 28/08/15 13:56, Andrew Cooper wrote:
>>> On 28/08/15 13:41, Julien Grall wrote:
>>>> Hi Andrew,
>>>>
>>>> On 21/08/15 18:51, Andrew Cooper wrote:
>>>>> This essentially reverts c/s 2037f2adb "x86: introduce
>>>>> alloc_vcpu_guest_context()", including the newer arm bits, but achieves
>>>>> the same end goal by using the newer vmalloc() infrastructure.
>>>> I would keep alloc_vcpu_guest_context() and replace its contents with
>>>> vmalloc(...). It would avoid open-coding the allocation of the vCPU
>>>> context in different places.
>>> alloc_vcpu_guest_context() only existed because x86 used to need to do
>>> something quite cumbersome.  This is no longer the case, given vmalloc()
>>> as a more general solution.
>>>
>>> Retaining alloc_vcpu_guest_context() as just a thin wrapper, identical on
>>> all architectures, is a bad idea, as it calls into a separate translation
>>> unit, which cannot be optimised.
>> Unless you introduce a static inline helper in the header. It would
>> avoid open-coding vmalloc and make future usage of it easier.
> 
> Hiding the type allocated makes the code harder to read, not easier.
> 
> We don't special case other plain allocations like this, so I still
> don't see a compelling reason to break the norm here.

Let me explain it in a different way: allocation is usually done with
xmalloc, but here you are using vmalloc. Why did you use vmalloc rather
than xmalloc? AFAICT there is no improvement on ARM.

If we open-code the allocation, someone could decide to use xmalloc,
which is the common allocator. So what would be the drawback of using
xmalloc vs vmalloc?
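
As an illustration of the two shapes under discussion (a minimal sketch,
not taken from the patch; vmalloc()/vfree() are per xen/vmap.h and
xmalloc() per xen/xmalloc.h, while the call site and error handling here
are assumptions):

    /* Open-coded allocation, as Andrew's patch does it: */
    struct vcpu_guest_context *ctxt = vmalloc(sizeof(*ctxt));

    if ( ctxt == NULL )
        return -ENOMEM;
    /* ... use ctxt ... */
    vfree(ctxt);

    /* Thin helper in a header, as suggested above (hypothetical): */
    static inline struct vcpu_guest_context *alloc_vcpu_guest_context(void)
    {
        return vmalloc(sizeof(struct vcpu_guest_context));
    }

One practical difference between the two allocators: xmalloc() of an
object larger than PAGE_SIZE needs physically contiguous memory, while
vmalloc() maps individually allocated pages, so it is more robust against
fragmentation. IIRC struct vcpu_guest_context is larger than a page on
x86, which would explain preferring vmalloc() there even if ARM sees no
benefit.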

Regards,

-- 
Julien Grall
