
Re: [Xen-devel] [PATCH Remus v1 1/8] tools/libxc: adjust the memory allocation for migration

On 07/05/15 14:42, Hongyang Yang wrote:
> On 05/07/2015 05:48 PM, Andrew Cooper wrote:
>> On 07/05/15 07:37, Yang Hongyang wrote:
>>> Move the memory allocation before the concrete live/nolive save
>>> in order to avoid the free/alloc memory loop when using Remus.
>>> Signed-off-by: Yang Hongyang <yanghy@xxxxxxxxxxxxxx>
>>> ---
>>>   tools/libxc/xc_sr_save.c | 53 +++++++++++++++++++-----------------------------
>>>   1 file changed, 21 insertions(+), 32 deletions(-)
>>> diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
>>> index 5d9c267..7fed668 100644
>>> --- a/tools/libxc/xc_sr_save.c
>>> +++ b/tools/libxc/xc_sr_save.c
>>> @@ -3,6 +3,8 @@
>>>   #include "xc_sr_common.h"
>>> +DECLARE_HYPERCALL_BUFFER(unsigned long, to_send);
>> This unfortunately causes an issue when there are concurrent calls to
>> xc_domain_save() in the same process.  While that is a highly
>> ill-advised thing to do, I did try to avoid breaking it.
>> Please move this declaration into the ctx.save union.
> I know the best way is to put this into the ctx.save union, but I
> haven't found a way to do it: the DECLARE_HYPERCALL_BUFFER macro cannot
> be used there.  Should I just define an unsigned long pointer in the
> ctx.save union, and use some other macro (which one?) in setup()?

Urgh yes - the hypercall buffer infrastructure is very obscure, and I
never remember how to use it.  I don't think there is a way to do this
in the current infrastructure.

I think you are going to have to manually split
DECLARE_HYPERCALL_BUFFER() between the ctx declaration and a new setup()
function.  Leave a comment by both halves, as it will be rather peculiar.

(Fundamentally, the DECLARE in the name is wrong, and contrary to all
other styles.  It should instead be INIT, to match similar constructs in
Xen and Linux.)


Xen-devel mailing list
