
[Xen-devel] Questions / Comments about hard / soft affinity in Credit 2

  • To: xen-devel@xxxxxxxxxxxxx
  • From: Justin Weaver <jtweaver@xxxxxxxxxx>
  • Date: Mon, 9 Dec 2013 22:31:33 -1000
  • Delivery-date: Tue, 10 Dec 2013 09:24:48 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>


On Sat, Nov 30, 2013 at 10:18 PM, Dario Faggioli
<dario.faggioli@xxxxxxxxxx> wrote:
> I'll have to re-look at the details of credit2 about load balance and
> migration between CPUs/runqueues but it looks like we need to have
> something allowing us to honour pinning/affinity _within_ the same
> runqueue, anyway, don't we? I mean, even if you implement per-L2
> runqueues, that would still span more than one CPU, and the user may
> well want to pin a vCPU to only one (or in general a subset) of them.

Yes, I agree. Just looking for some feedback before I attempt a patch.
Some of the functions I think need updating for hard/soft affinity...

runq_candidate needs to be updated. It decides which vcpu from the run
queue to run next on a given pcpu. Currently it only takes credit into
account. Considering hard affinity should be simple enough. For soft
affinity, what if it first scanned the run queue in credit order,
considering only vcpus that prefer to run on the given processor and
have at least a certain amount of credit, and, if none is found, then
scanned the whole run queue considering only hard affinity and credit?
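To make the two-pass idea concrete, here is a rough standalone sketch in C. The struct and function names (toy_vcpu, pick_candidate) and the credit threshold parameter are illustrative assumptions, not Xen's actual interfaces; affinities are modeled as plain pcpu bitmasks:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative model only -- not Xen's real data structures. */
struct toy_vcpu {
    int credit;
    uint64_t hard_affinity;  /* bitmask of pcpus the vcpu may run on */
    uint64_t soft_affinity;  /* bitmask of pcpus the vcpu prefers */
};

/*
 * Pick the next vcpu for pcpu `cpu` from `runq` (assumed sorted by
 * decreasing credit).  Pass 1: soft-affine vcpus with at least
 * `credit_min` credit.  Pass 2: fall back to any hard-affine vcpu.
 */
static struct toy_vcpu *pick_candidate(struct toy_vcpu *runq, size_t n,
                                       unsigned int cpu, int credit_min)
{
    uint64_t mask = 1ULL << cpu;
    size_t i;

    for ( i = 0; i < n; i++ )
        if ( (runq[i].soft_affinity & mask) && runq[i].credit >= credit_min )
            return &runq[i];

    for ( i = 0; i < n; i++ )
        if ( runq[i].hard_affinity & mask )
            return &runq[i];

    return NULL;  /* nothing in the queue may run on this pcpu */
}
```

With only one queue scan per pass this stays O(n), and the fallback pass guarantees a soft-affinity miss never starves a vcpu that is hard-affine to the pcpu.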

runq_assign assumes that the run queue associated with vcpu->processor
is OK for vcpu to run on. Once affinity is considered, I'm not sure
that assumption holds. I probably need to dig further into schedule.c
to see where vcpu->processor gets assigned initially. Anyway, with
only one run queue this doesn't matter for now.
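If that assumption ever stops holding, the check itself is cheap: the run queue is valid for the vcpu only if its hard affinity intersects the set of pcpus the queue covers. A minimal sketch (the function name and bitmask representation are assumptions for illustration):

```c
#include <stdint.h>

/*
 * Return nonzero iff a vcpu with hard affinity `hard_affinity` may be
 * placed on a run queue whose pcpus are given by `runq_cpus`.
 * Both arguments are pcpu bitmasks (illustrative, not Xen's cpumask_t).
 */
static int runq_ok_for_vcpu(uint64_t hard_affinity, uint64_t runq_cpus)
{
    return (hard_affinity & runq_cpus) != 0;
}
```

Something like this could be asserted (or handled by picking another queue) in runq_assign before relying on vcpu->processor.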

choose_cpu / migrate will need to be updated, but currently migrate
never gets called because there's only one run queue.

Please let me know what you think.


Xen-devel mailing list