
Re: [Xen-devel] [PATCH] libxl: extract and save affinity maps from hypervisor



On Tue, 2018-10-16 at 17:27 +0100, Ian Jackson wrote:
> Wei Liu writes ("[PATCH] libxl: extract and save affinity maps from
> hypervisor"):
> > This is required to retain affinity setting across save/restore and
> > migration.
> 
> Does the hypervisor or libxl invent a default affinity map at some
> point ?  If so that default calculation needs to be re-done for the
> new host.  ISTR some code to do NUMA affinity stuff by
> default.  CCing
> Dario who will hopefully save me digging into the code to answer that
> question :-).
>
I'll try, but no promises. :-/

So, when we create the domain, if there is no hard or soft affinity
configured by the user, we run automatic placement, which then ends up
(if successful) in the guest having a soft-affinity.

(We're in libxl/libxl_dom.c:libxl__build_pre(), BTW.)

If we save it/send it to the destination, I think it means that when
resuming the guest, we'll see that it already has a soft-affinity and
go with it, skipping automatic placement.

Is that good? Is that what we always want?

I would say 'no', in the sense that, if affinity wasn't originally
specified, I'd argue that what we want is for libxl to figure it out
automatically at resume time, or on the new host, as it did when
creating the domain for the first time.

If, on the other hand, an affinity was actually explicitly specified,
then I guess it makes sense to at least try to set it when resuming.

So the point is: at that time (i.e., during resume), are we able to
tell whether the domain has an affinity because the user explicitly
specified it? Off the top of my head, I don't think we can, but I'd
better have another look at the code tomorrow...
:-/

> > diff --git a/tools/libxl/libxl_domain.c
> > b/tools/libxl/libxl_domain.c
> > index 3377bba..24fac9b 100644
> > --- a/tools/libxl/libxl_domain.c
> > +++ b/tools/libxl/libxl_domain.c
> > 
> > +        /* Affinity maps */
> > +
> > +#define REALLOC_AFFINITY_MAP(n)                                           \
> > +        for (i = 0; i < b_info->num_vcpu_ ## n ## _affinity; i++) {       \
> > +            libxl_bitmap *m = &b_info->vcpu_ ## n ## _affinity[i];        \
> > +            libxl_bitmap_dispose(m);                                      \
> > +            libxl_bitmap_init(m);                                         \
> > +            libxl_cpu_bitmap_alloc(CTX, m, 0);                            \
> > +        }                                                                 \
> > +        b_info->vcpu_ ## n ## _affinity =                                 \
> > +            libxl__realloc(NOGC, b_info->vcpu_ ## n ## _affinity,         \
> > +                    max_vcpus * sizeof(b_info->vcpu_ ## n ## _affinity[0])); \
> > +        for (i = b_info->num_vcpu_ ## n ## _affinity; i < max_vcpus; i++) { \
> > +            libxl_bitmap *m = &b_info->vcpu_ ## n ## _affinity[i];        \
> > +            libxl_bitmap_init(m);                                         \
> > +            libxl_cpu_bitmap_alloc(CTX, m, 0);                            \
> > +        }                                                                 \
> > +        b_info->num_vcpu_ ## n ## _affinity = max_vcpus;
> > +
> > +        REALLOC_AFFINITY_MAP(hard);
> > +        REALLOC_AFFINITY_MAP(soft);
> 
> I don't think this needs to be a macro, does it ?  I mean, there's
> just the one member _affinity.  You could make this into a function.
> 
FWIW, I'm not sure I'm getting your point, but there's both
vcpu_soft_affinity and vcpu_hard_affinity.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
