
Re: [PATCH v2 05/17] xen/cpupool: switch cpupool list to normal list interface



On Tue, 2020-12-01 at 10:18 +0100, Jürgen Groß wrote:
> On 01.12.20 10:12, Jan Beulich wrote:
> > What guarantees that you managed to find an unused ID, other
> > than at current CPU speeds it taking too long to create 4
> > billion pools? Since you're doing this under lock, wouldn't
> > it help anyway to have a global helper variable pointing at
> > the lowest pool followed by an unused ID?
> 
> An admin doing that would be quite crazy and wouldn't deserve better.
> 
> For being usable a cpupool needs to have a cpu assigned to it. And I
> don't think we are coming even close to 4 billion supported cpus. :-)
> 
> Yes, it would be possible to create 4 billion empty cpupools, but for
> what purpose? There are simpler ways to make the system unusable with
> dom0 root access.
> 
Yes, I agree. I don't think it's worth going to too much effort to try
to deal with that.

What I'd do is:
 - add a comment here, briefly explaining exactly this, i.e., that we 
   haven't forgotten to deal with this case and that it's on purpose. 
   Actually, it can be either a comment here or a mention in the 
   changelog; I'm fine either way
 - if we're concerned about someone doing:
     for i=1...N { xl cpupool-create foo bar }
   with N ending up being some giant number, e.g., by mistake, I don't 
   think it's unreasonable to come up with a high enough (but 
   certainly not in the billions!) MAX_CPUPOOLS, and stop creating new 
   pools when we reach that limit (see the sketch below).
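
Just to illustrate what I mean, here's a minimal standalone sketch (not
actual Xen code; MAX_CPUPOOLS, pool_in_use and cpupool_alloc_id are all
made-up names): with a cap in place, the "find a free ID" scan is
bounded, and a runaway creation loop fails gracefully instead of
consuming IDs forever.

/*
 * Standalone sketch only, not real Xen code: refuse cpupool creation
 * once an arbitrary MAX_CPUPOOLS cap is reached, so the search for a
 * free ID stays bounded.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_CPUPOOLS 1024u              /* arbitrary, far below 4 billion */

static bool pool_in_use[MAX_CPUPOOLS];  /* stand-in for the real pool list */

/* Pick the lowest free ID, or fail once the cap is reached. */
static int cpupool_alloc_id(unsigned int *id)
{
    for ( unsigned int i = 0; i < MAX_CPUPOOLS; i++ )
    {
        if ( !pool_in_use[i] )
        {
            pool_in_use[i] = true;
            *id = i;
            return 0;
        }
    }

    return -ENOSPC;                     /* cap reached: creation refused */
}

int main(void)
{
    unsigned int id;

    /* Simulate the "for i=1...N { xl cpupool-create }" mistake. */
    for ( unsigned int n = 0; ; n++ )
    {
        if ( cpupool_alloc_id(&id) )
        {
            printf("creation refused after %u pools\n", n);
            break;
        }
    }

    return 0;
}

The exact value of the cap doesn't matter much; the point is just that
creation stops with a clean error long before anything overflows.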

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)
