Re: [Xen-devel] [PATCH for-4.6 v2 2/3] xl: error out if vNUMA specifies more vcpus than pcpus



On Sun, Aug 16, 2015 at 09:51:53AM +0100, Ian Campbell wrote:
> On Fri, 2015-08-14 at 00:38 +0100, Wei Liu wrote:
> > > In fact, if b_info->max_vcpus is 0, the elements of vcpu_parsed are
> > > sized against the host pcpus, and we risk calling libxl_bitmap_set() for
> > > vcpus beyond that limit, while parsing the "vcpus" subsection of the
> > > vnode specification (which happens _before_ this check).
> > > 
> > > Or am I missing something?
> > > 
> > 
> > That's fine because that function has no effect when you try to set a
> > bit beyond its size.
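
For reference, the behaviour being relied on here is that setting a bit
past the bitmap's allocated size is simply ignored. A minimal standalone
sketch of that shape (toy types and names, not the actual libxl code):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for libxl_bitmap: 'size' is the map length in
 * bytes. */
struct toy_bitmap {
    uint32_t size;
    uint8_t *map;
};

/* Mirrors the behaviour described above: a bit beyond the allocated
 * size is silently dropped rather than written out of bounds. */
static void toy_bitmap_set(struct toy_bitmap *bm, int bit)
{
    if (bit >= (int)bm->size * 8)
        return;                          /* out of range: no effect */
    bm->map[bit / 8] |= 1 << (bit & 7);
}

int main(void)
{
    /* Bitmap sized against, say, 8 host pcpus (one byte). */
    struct toy_bitmap bm = { .size = 1, .map = calloc(1, 1) };

    toy_bitmap_set(&bm, 3);     /* in range: bit gets set */
    toy_bitmap_set(&bm, 42);    /* beyond size: silently ignored */

    printf("map[0] = 0x%02x\n", bm.map[0]);   /* prints 0x08 */
    free(bm.map);
    return 0;
}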
> 
> This sort of subtlety is worthy of a comment in either the code or the
> commit message.
> 
> > > Assuming I'm not, it seems to me that a solution could be to check for
> > > this situation _inside_ the 'else if (!strcmp("vcpus", option))'. In
> > > fact, if "maxvcpus" has not been specified, as soon as the end of one of
> > > the ranges --as returned by parse_range()-- is beyond host_cpus, we know
> > > we'd be going past the limit of the corresponding element of
> > > vcpu_parsed, and we can error out.
> 
> FWIW that's what I was expecting to happen...
> 
> > > It'll most likely be a bit uglier than this patch, but probably still
> > > less complex than v1. :-)
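
As a rough illustration of that alternative (parse_range() and the other
names below form a standalone sketch loosely modelled on the xl parsing
code, not the real xl sources):

#include <stdio.h>
#include <stdlib.h>

/* Standalone stand-in for xl's range parser: accepts "N" or "A-B".
 * Returns 0 on success, non-zero on a malformed range. */
static int parse_range(const char *str, unsigned long *a, unsigned long *b)
{
    char *endptr;

    *a = *b = strtoul(str, &endptr, 10);
    if (endptr == str)
        return 1;
    if (*endptr == '-') {
        *b = strtoul(endptr + 1, &endptr, 10);
        if (*endptr != '\0' || *b < *a)
            return 1;
    } else if (*endptr != '\0') {
        return 1;
    }
    return 0;
}

/* Sketch of the suggested check: when "maxvcpus" was not given
 * (max_vcpus == 0) the bitmap is sized against host_cpus, so error out
 * as soon as a range end goes past that limit. */
static int check_vcpus_ranges(char **ranges, int nr_ranges,
                              unsigned long max_vcpus,
                              unsigned long host_cpus)
{
    unsigned long a, b;
    int i;

    for (i = 0; i < nr_ranges; i++) {
        if (parse_range(ranges[i], &a, &b)) {
            fprintf(stderr, "invalid vcpu range: %s\n", ranges[i]);
            return 1;
        }
        if (!max_vcpus && b >= host_cpus) {
            fprintf(stderr, "vcpu %lu is beyond the number of host pcpus (%lu)\n",
                    b, host_cpus);
            return 1;
        }
    }
    return 0;
}

int main(void)
{
    char *ranges[] = { "0-3", "6" };

    /* With 4 host pcpus and no maxvcpus, "6" should be rejected. */
    return check_vcpus_ranges(ranges, 2, 0, 4);
}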
> 
> > That doesn't make any difference in terms of functionality. I would
> > rather leave the parsing bit as it is and deal with the fallout
> > separately. That would make the code cleaner IMHO.
> 
> ... It's a bit wasteful to keep parsing after things have already
> failed, but I suppose that's not too bad and we can live with it.
> 

I will add comments and resend.

Wei.

> Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
