
Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support vcpus, add vcpu to cpu map



* Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> [2005-04-14 11:58]:
> > > "xm pincpu mydom 1 2,4-6" which would allow VCPU 1 of mydom 
> > to run on 
> > > CPUs 2, 4, 5 and 6 but no others. -1 would still mean "run anywhere". 
> > > Having this functionality is really important before we can 
> > implement 
> > > any kind of CPU load balancer.
> > 
> > Interesting idea.  I don't see anything in the schedulers 
> > that would take advantage of that sort of definition.  AFAIK, 
> > exec_domains are never migrated unless told to do so via 
> > pincpu.  Does the new scheduler do this?  Or is this more of 
> > setting up the rules that the load balancer would query to 
> > find out where it can migrate vcpus?
> 
> I see having this as a prerequisite for any fancy new scheduler (or as
> a first step, a CPU load balancer). Without it, I think it'll be
> scheduling anarchy :-)

OK.  Makes sense; that sounds like a separate patch.  I was thinking a
u32 bitmap, but that doesn't give us the -1, run-anywhere case.  Maybe
an EDF_USEPINMAP flag plus a u32 bitmap: if EDF_USEPINMAP is set, the
balancer/scheduler looks at the bitmap to see which cpus the vcpu can
run on; if it is not set, the vcpu can run anywhere.
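
Roughly what I have in mind, as an untested sketch (the struct, flag bit
and helper names below are placeholders, not the real exec_domain layout):

/* Untested sketch -- placeholder names, not the real exec_domain fields. */
#include <stdint.h>

#define EDF_USEPINMAP  (1UL << 5)        /* honour cpumap when scheduling */

struct vcpu_pin {
    unsigned long flags;                 /* EDF_* bits                    */
    uint32_t      cpumap;                /* bit n set => may run on cpu n */
};

/* May this vcpu run on physical cpu 'cpu'? */
static inline int vcpu_runnable_on(const struct vcpu_pin *p, unsigned int cpu)
{
    if ( !(p->flags & EDF_USEPINMAP) )
        return 1;                        /* -1 / run-anywhere case        */
    return (p->cpumap >> cpu) & 1;
}

/* "xm pincpu mydom 1 2,4-6" would then set EDF_USEPINMAP and the bits
 * for cpus 2 and 4-6 in vcpu 1's cpumap.                                 */

That way -1 just means clearing EDF_USEPINMAP rather than needing a
sentinel value in the map itself.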

> > > Secondly, I think it would be really good if we could have some 
> > > hierarchy in CPU names. Imagine a 4 socket system with dual 
> > core hyper 
> > > threaded CPUs. It would be nice to be able to specify the 
> > 3rd socket, 
> > > 1st core, 2nd hyperthread as CPU "2.0.1".
> > > 
> > > When we're on a system without one of the levels of hierarchy, we 
> > > just leave it off. E.g. a current SMP Xeon box would be "x.y". This 
> > > would be much less confusing than the current scalar representation.
> > 
> > I like the idea of being able to specify "where" the vcpu 
> > runs more explicitly than 'cpu 0', which does not give any 
> > indication of physical cpu characteristics.  We would 
> > probably need to still provide a simple mapping, but allow 
> > the pincpu interface to support a more specific target as 
> > well as the more generic.
> > 
> > 2-way hyperthreaded box:
> > CPU     SOCKET.CORE.THREAD
> > 0       0.0.0
> > 1       0.0.1
> > 2       1.0.0
> > 3       1.0.1
> > 
> > That look sane?
> 
> Yep, that's what I'm thinking. I think it's probably worth squeezing out
> unused levels of hierarchy, e.g. just having SOCKET.THREAD in the above

OK.  I'll see how the implementation looks when I'm done.  It sounds
nice though.

> example. Keeping it pretty generic makes sense too. E.g. imagine a big
> ccNUMA system with a 'node' level above that of the actual CPU socket.

Sure, I'll look at the Linux cpu groups stuff and the Linux topology
code to see if there is anything like this there.
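
For the name formatting itself, I'm picturing something along these lines
(untested sketch; the level widths and location struct are made up for the
example, not taken from any existing topology code):

/* Untested sketch: print a hierarchical cpu name such as "2.0.1",
 * squeezing out any level (node/socket/core/thread) that has only one
 * entry on this box, so a plain HT Xeon box comes out as "x.y".        */
#include <stdio.h>

#define NR_LEVELS 4                      /* node, socket, core, thread   */

struct cpu_location {
    int level[NR_LEVELS];                /* index of this cpu per level  */
};

static void print_cpu_name(const struct cpu_location *loc,
                           const int width[NR_LEVELS]) /* entries/level  */
{
    int i, printed = 0;

    for ( i = 0; i < NR_LEVELS; i++ )
    {
        if ( width[i] <= 1 )
            continue;                    /* squeeze out unused levels    */
        printf(printed ? ".%d" : "%d", loc->level[i]);
        printed = 1;
    }
    if ( !printed )
        printf("0");                     /* uniprocessor box             */
    printf("\n");
}

For the 2-way hyperthreaded box in the table above the widths would be
{1,2,1,2}, so cpu 3 prints as "1.1"; the 4-socket dual-core HT example
would give "2.0.1" for socket 2, core 0, thread 1.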

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

