Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the scheduler know about node-affinity
On Thu, 2012-12-20 at 07:44 +0100, Juergen Gross wrote:
> Am 19.12.2012 20:07, schrieb Dario Faggioli:
> > [...]
> >
> > This change modifies the VCPU load balancing algorithm (for the
> > credit scheduler only), introducing a two-step logic.
> > During the first step, we use the node-affinity mask. The aim is
> > to give precedence to the CPUs where it is known to be preferable
> > for the domain to run. If that fails to find a valid PCPU, the
> > node-affinity is simply ignored and, in the second step, we fall
> > back to using cpu-affinity only.
> >
> > Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> > ---
> > Changes from v1:
> >  * CPU masks variables moved off the stack, as requested during
> >    review. As per the comments in the code, having them in the private
> >    (per-scheduler instance) struct could have been enough, but it would be
> >    racy (again, see comments). For that reason, use a global bunch of
> >    them (via per_cpu());
>
> Wouldn't it be better to put the mask in the scheduler private per-pcpu area?
> This could be applied to several other instances of cpu masks on the stack,
> too.

Yes, as I tried to explain, if it's per-cpu it should be fine, since credit
has one runq per CPU, and hence the runq lock is enough for serialization.

BTW, can you be a little more specific about where you're suggesting to put
it? I'm sorry, but I'm not sure I understood what you mean by "the scheduler
private per-pcpu area"... Do you perhaps mean making it a member of
`struct csched_pcpu'?

Thanks and Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel