
Re: [Xen-devel] [PATCH v10 5/9] libxl: vnuma types declaration

On Wed, 2014-09-03 at 23:48 -0400, Elena Ufimtseva wrote:
> > What happens if a guest has 33 vcpus?
> >
> > Or maybe the output of this is a single vcpu number?
> >
> >> +    ("vdistance",       Array(uint32, "num_vdistance")),
> >
> > nodes to distances? Can num_vdistance ever differ from vnodes?
> num_vdistance is the square of vnodes.
> >
> > I thought distances were between two nodes, in which case doesn't this
> > need to be multidimensional?
> yes, but the parser just fills it in row-major order, treating it as
> if it were a multidimensional array.

I suppose this is a workaround for the lack of multidimensional arrays
in the IDL?

If this is to be done then it needs a comment I think. In particular the
stride length needs to be clearly defined.
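To illustrate the point about stride, a row-major flattening of the distance matrix would be indexed like this (Python sketch; the values and the `distance` helper are illustrative, not part of libxl):

```python
# Sketch: a vnodes x vnodes distance matrix flattened row-major into a
# single array, as the parser fills it. Values are illustrative only.
vnodes = 2
# Conceptually: d(0,0)=10, d(0,1)=20, d(1,0)=20, d(1,1)=10
vdistance = [10, 20, 20, 10]

def distance(i, j):
    # The stride is vnodes: element (i, j) lives at index i * vnodes + j.
    return vdistance[i * vnodes + j]

assert distance(0, 1) == 20
assert distance(1, 1) == 10
```

This is exactly the convention (stride = vnodes) that would need to be spelled out in a comment next to the field.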

> > You appear to unconditionally overwrite whatever the users might have
> > put here.
> >
> >> +    ("vnuma_autoplacement",  libxl_defbool),
> >
> > Why/how does this differ from the existing numa placement option?
> Basically, it means: try to build the vnode-to-pnode mapping
> automatically if automatic vnuma placement is enabled, or else try to
> use the user-defined one.
> But I am thinking maybe you are right, and there is no need for
> vnuma_autoplacement at all.
> Just remove it, and if numa placement is in automatic mode, use it;
> if not, try to use the user-supplied vnode to pnode mapping.
> Now I think that is what Dario was talking about for some time )
> Ok, now going back to these types.
>   ("vnodes",          uint32),
>   ("vmemranges",      uint32),
>   ("vnuma_mem",       Array(uint64, "num_vnuma_mem")),
>   ("vnuma_vcpumap",   Array(uint32, "num_vnuma_vcpumap")),
>   ("vdistance",       Array(uint32, "num_vdistance")),
>   ("vnuma_vnodemap",  Array(uint32, "num_vnuma_vnodemap")),
> vnodes -  number of vnuma nodes;
> vmemranges - number of vmemranges (not used yet, but support was added
> in Xen); Not sure how to deal with this. Should I just remove it,
> since it's not used by the toolstack yet?

I think that would be best, otherwise we risk tying ourselves to an API
which might not actually work when we come to try and use it.

If you need the concept internally while building you could add such
things to libxl__domain_build_state instead.

> vnuma_mem - vnode memory sizes (conflicts now with vmemranges);
> vnuma_vcpumap - map from vcpus to vnuma nodes (length is the number of
> vcpus; each element is a vnuma node number);
> vdistance - distances, num_vdistance = vnodes * vnodes;
> vnuma_vnodemap - mapping of vnuma nodes to physical NUMA nodes;
> And vnuma_autoplacement probably not needed here at all.
> About vmemranges. They will be needed for vnuma nodes that have
> multiple memory ranges.
> I added it to the type definition to indicate support when calling the
> hypercall. As of now, a PV domain has one memory range per node.
> There is a libxl__build_vnuma_ranges function which, for a PV domain,
> basically converts vnuma node memory sizes into ranges.
> But for a PV domain there is only one range per vnode.
> Maybe there is a better way of doing this; maybe even remove
> vmemranges, which are confusing at this point?

That would help, I think.
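(For the single-range-per-vnode PV case, the conversion you describe amounts to a running sum over the per-node sizes. A Python sketch, purely illustrative; the real logic is the C function libxl__build_vnuma_ranges:)

```python
# Sketch: turn per-vnode memory sizes into one contiguous [start, end)
# range per vnode, as a PV domain with a single range per node needs.
# Illustrative only; the real implementation lives in C in libxl.
def sizes_to_ranges(vnuma_mem):
    ranges = []
    start = 0
    for size in vnuma_mem:
        ranges.append((start, start + size))
        start += size
    return ranges

# Two vnodes of 512MB and 1024MB (sizes in KB):
assert sizes_to_ranges([524288, 1048576]) == [(0, 524288), (524288, 1572864)]
```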

Then I would take the rest and wrap them in a libxl_vnode_info struct:

libxl_vnode_info = Struct("libxl_vnode_info", [
    ("mem", MemKB), # Size of this node's memory
    ("distances", Array(...)), # Distances from this node to the others
                               # (single-dimensional array)
    ("pnode", TYPE), # which pnode this vnode is associated with
    ])

Then in the (b|c)_config:
     ("vnuma_nodes", Array(libxl_vnode_info...))

This avoids the issue of having to have various num_* which are expected
to always be identical. Except for vnuma_nodes[N].num_distances, but I
think that is unavoidable.
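To make the regrouping concrete, here is a Python sketch of how the current flat arrays would map onto per-node records shaped like the proposed libxl_vnode_info (field names are illustrative Python, not the actual IDL types):

```python
# Sketch: regroup the existing flat arrays into one record per vnode,
# mirroring the proposed libxl_vnode_info layout. Illustrative only.
def group_vnuma(vnodes, vnuma_mem, vdistance, vnuma_vnodemap):
    # The num_* invariants that the flat layout leaves implicit:
    assert len(vnuma_mem) == vnodes
    assert len(vdistance) == vnodes * vnodes
    assert len(vnuma_vnodemap) == vnodes
    return [
        {
            "mem": vnuma_mem[i],
            # row i of the flattened row-major distance matrix
            "distances": vdistance[i * vnodes:(i + 1) * vnodes],
            "pnode": vnuma_vnodemap[i],
        }
        for i in range(vnodes)
    ]

nodes = group_vnuma(2, [524288, 524288], [10, 20, 20, 10], [0, 1])
assert nodes[1]["distances"] == [20, 10]
assert nodes[1]["pnode"] == 1
```

After regrouping, only the per-node distances array keeps a length field, which is the unavoidable num_distances mentioned above.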

The only thing I wasn't sure of was how to fit the vcpumap into this.
AIUI this is a map from vcpu number to vnode number (i.e. for each vcpu
it tells you which vnuma node it is part of). I presume it isn't
possible/desirable to have a vcpu be part of multiple vnuma nodes.

One possibility would be to have a libxl_bitmap/cpumap as part of the
libxl_vnode_info (containing the set of vcpus which are part of this
node), but that would be prone to allowing a vcpu to be in multiple
nodes.
Another possibility would be an array in (b|c)_config as you have now,
which would be annoying because its num_foo really ought to be the
->vcpus count and the IDL doesn't really allow that.

I'm not sure what is the lesser of the two evils here.
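Whichever layout is chosen, the per-node cpumap option would need a consistency check that the cpumaps partition the vcpus. A Python sketch of that check, with cpumaps modelled as sets (in libxl they would be libxl_bitmap; the function name is hypothetical):

```python
# Sketch: verify that per-node vcpu maps form a partition, i.e. every
# vcpu appears in exactly one vnuma node's cpumap. Cpumaps are modelled
# as Python sets here; in libxl they would be libxl_bitmap.
def check_vcpu_partition(node_cpumaps, num_vcpus):
    seen = set()
    for cpumap in node_cpumaps:
        if seen & cpumap:
            return False  # some vcpu is in more than one vnuma node
        seen |= cpumap
    return seen == set(range(num_vcpus))  # every vcpu assigned once

assert check_vcpu_partition([{0, 1}, {2, 3}], 4)
assert not check_vcpu_partition([{0, 1}, {1, 2}], 4)  # vcpu 1 twice
assert not check_vcpu_partition([{0, 1}], 4)          # vcpus 2,3 unassigned
```

The flat vcpu-to-vnode array avoids this check by construction, since each array slot can hold only one node number; that is one point in its favour.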


Xen-devel mailing list