
Re: [Xen-devel] [PATCH V5 03/32] xl / libxl: push VCPU affinity pinning down to libxl



On Fri, May 16, 2014 at 10:57:59AM +0100, Ian Campbell wrote:
> On Thu, 2014-05-15 at 18:06 +0100, Wei Liu wrote:
> > On Thu, May 15, 2014 at 05:45:19PM +0100, Ian Campbell wrote:
> > > On Tue, 2014-05-13 at 22:53 +0100, Wei Liu wrote:
> > > 
> > > > diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> > > > index 0dfafe7..7b0901c 100644
> > > > --- a/tools/libxl/libxl_types.idl
> > > > +++ b/tools/libxl/libxl_types.idl
> > > > @@ -305,6 +305,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
> > > >      ("avail_vcpus",     libxl_bitmap),
> > > >      ("cpumap",          libxl_bitmap),
> > > >      ("nodemap",         libxl_bitmap),
> > > > +    ("vcpu_affinity",   libxl_key_value_list),
> > > > 
> > > Is a key value list really the best way to represent this? At first
> > > glance it seems like an array would be more suitable?
> > > 
> > > I've glanced through the rest on the assumption you have a convincing
> > > reason why it should be a kvp list.
> > > 
> > 
> > How can you effectively skip pinning a VCPU if it's an array? I can have
> > [ '0': '1', '3': '3' ] in a KVL, but I'm not able to represent that in an
> > array [ '1', ?, ?, '3' ].
> 
> Isn't there an explicit value for any?
> 

Yes, but then that's not very efficient if you only want to pin a few
VCPUs, say when you have 128 VCPUs but only want to pin a handful of them.
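
(To make the difference concrete, a sketch only, with hypothetical values;
the real libxl_key_value_list is built by libxl's IDL-generated code, shown
here as a plain NULL-terminated string array:)

    #include <string.h>

    /* Key-value list: only the VCPUs that are actually pinned appear,
     * laid out key, value, key, value, ..., NULL sentinel. */
    static const char *affinity_kvl[] = {
        "0", "1",   /* VCPU 0 -> PCPU 1 */
        "3", "3",   /* VCPU 3 -> PCPU 3 */
        NULL        /* sentinel */
    };

    /* A dense array needs one slot per VCPU, with an explicit "any"
     * marker (here -1) for every unpinned VCPU: 128 slots for a
     * 128-VCPU guest even when only two of them are pinned. */
    static void dense_array_example(void)
    {
        int vcpu_to_pcpu[128];

        memset(vcpu_to_pcpu, -1, sizeof(vcpu_to_pcpu));
        vcpu_to_pcpu[0] = 1;
        vcpu_to_pcpu[3] = 3;
    }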

> > > > diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> > > > index 661999c..b818815 100644
> > > > --- a/tools/libxl/libxl_dom.c
> > > > +++ b/tools/libxl/libxl_dom.c
> > > > @@ -263,6 +263,54 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> > > >      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
> > > >      libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus, &info->cpumap);
> > > >  
> > > > +    /* If we have vcpu affinity list, pin vcpu to pcpu. */
> > > > +    if (d_config->b_info.vcpu_affinity) {
> > > > +        int i;
> > > > +        libxl_bitmap vcpu_cpumap;
> > > > +        int *vcpu_to_pcpu, sz = sizeof(int) * d_config->b_info.max_vcpus;
> > > > +
> > > > +        vcpu_to_pcpu = libxl__zalloc(gc, sz);
> > > 
> > > In theory this could be a stack allocation, either with alloca or just
> > >   int vcpu_to_pcpu[sz];
> > > 
> > 
> > I would rather avoid dynamic-size stack allocation, because
> > 
> > "The alloca() function returns a pointer to the beginning of the
> > allocated space.  If  the  allocation  causes  stack  overflow,
> > program behavior is undefined."
> 
> I think that's true regardless of how the stack is overflowed, be it an
> allocation, an oversized array or a deep call chain. But I don't think
> we are talking about quantities of data which are unreasonable to put on
> the stack, even with thousands of guest CPUs?
> 

Fair enough. If you're not worried about that then I'm fine with it too.
;-)
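
(For reference, a standalone sketch of the two allocation strategies under
discussion; libxl__zalloc() is replaced by calloc()/free() so the fragment
stands on its own, and the sizes are hypothetical:)

    #include <stdlib.h>
    #include <string.h>

    /* The patch's approach: heap allocation (gc-tracked libxl__zalloc()
     * in the real code, plain calloc()/free() here). */
    static void heap_version(int max_vcpus)
    {
        int *vcpu_to_pcpu = calloc(max_vcpus, sizeof(int));

        if (!vcpu_to_pcpu)
            return;
        memset(vcpu_to_pcpu, -1, max_vcpus * sizeof(int));
        /* ... fill in and use vcpu_to_pcpu ... */
        free(vcpu_to_pcpu);
    }

    /* Ian's suggestion: a variable-length array on the stack.  Even a
     * few thousand VCPUs is only a few KiB, but a very large value
     * would overflow the stack, which is undefined behaviour. */
    static void stack_version(int max_vcpus)
    {
        int vcpu_to_pcpu[max_vcpus];

        memset(vcpu_to_pcpu, -1, sizeof(vcpu_to_pcpu));
        /* ... fill in and use vcpu_to_pcpu ... */
    }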

> > > > +        memset(vcpu_to_pcpu, -1, sz);
> > > > +
> > > > +        for (i = 0; i < d_config->b_info.max_vcpus; i++) {
> > > > +            libxl_key_value_list kvl = d_config->b_info.vcpu_affinity;
> > > > +            const char *key, *val;
> > > > +            int k, v;
> > > > +
> > > > +            key = kvl[i * 2];
> > > 
> > > Need to bounds check kvl here. I think you might be better off iterating
> > > over the kvl and validating the k against max_vcpus.
> > > 
> > 
> > The next line is the bounds check.
> 
> Then it is too late: you've already run off the end of kvl. (I'm talking
> about the bounds of i*2, not the bounds of the resulting key, BTW.)
> 

We haven't run off the end. The last element of a KV list is a NULL
sentinel.

   i      0     1         N
   KVL  K0 V0 K1 V1 ... KN VN S <-- sentinel, NULL
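
(A standalone sketch of the iteration Ian suggests: walk the list itself,
stop at the NULL sentinel, and validate each key against max_vcpus. The
libxl types are reduced to a plain string array and the helper name is
hypothetical:)

    #include <stdlib.h>

    static int parse_affinity(const char **kvl, int max_vcpus,
                              int *vcpu_to_pcpu)
    {
        int i;

        for (i = 0; i < max_vcpus; i++)
            vcpu_to_pcpu[i] = -1;               /* -1 == not pinned */

        /* kvl[i] is a key, kvl[i + 1] its value; stop on the sentinel. */
        for (i = 0; kvl[i] != NULL; i += 2) {
            int k, v;

            if (kvl[i + 1] == NULL)             /* odd-length list */
                return -1;
            k = atoi(kvl[i]);
            v = atoi(kvl[i + 1]);
            if (k < 0 || k >= max_vcpus || v < 0)
                return -1;                      /* key out of range */
            vcpu_to_pcpu[k] = v;
        }
        return 0;
    }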

Wei.



