
Re: [Xen-devel] [PATCH v2 09/48] xen/sched: move some per-vcpu items to struct sched_unit



On 04.09.19 16:16, Jan Beulich wrote:
> On 09.08.2019 16:57, Juergen Gross wrote:
>> V2:
>> - move affinity_broken back to struct vcpu (Jan Beulich)
>
> But this alone won't work: now a 2nd vCPU in a unit will clobber
> what a 1st one may have set as an affinity override. I don't
> think you can get away without a per-vCPU CPU mask, or a
> combination of per-vCPU and per-unit state flags.

See patch 24: it adds a helper sched_check_affinity_broken() for that
purpose, iterating over the unit's vcpus, checking their affinity_broken
flags, and returning true if any vcpu has its flag set.
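For illustration, such a helper could look like the sketch below. The struct layouts and the linked-list walk are simplified stand-ins for the real Xen code (which iterates with a for_each_sched_unit_vcpu() style construct), so field names here are assumptions, not the actual tree:

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins; the real Xen structs carry far more state. */
struct vcpu {
    bool affinity_broken;
    struct vcpu *next_in_list;   /* next vcpu of the same unit (sketch) */
};

struct sched_unit {
    struct vcpu *vcpu_list;      /* first vcpu belonging to this unit */
    unsigned int unit_id;
};

/*
 * Return true if any vcpu of the unit has its affinity_broken flag
 * set.  The list here is simply NULL-terminated; in Xen the loop
 * would stop when leaving the unit's vcpus.
 */
static bool sched_check_affinity_broken(const struct sched_unit *unit)
{
    const struct vcpu *v;

    for ( v = unit->vcpu_list; v != NULL; v = v->next_in_list )
        if ( v->affinity_broken )
            return true;

    return false;
}
```

This keeps the per-vCPU flag authoritative while giving callers a per-unit view, which is the split the review converged on.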


>> @@ -971,26 +986,29 @@ static int cpu_disable_scheduler_check(unsigned int cpu)
>>   void sched_set_affinity(
>>       struct vcpu *v, const cpumask_t *hard, const cpumask_t *soft)
>>   {
>> -    sched_adjust_affinity(dom_scheduler(v->domain), v->sched_unit, hard, soft);
>> +    struct sched_unit *unit = v->sched_unit;
>> +
>> +    sched_adjust_affinity(dom_scheduler(unit->domain), unit, hard, soft);
>
> In a situation like this I think it would be better to use
> v->domain (I don't think you mean to remove struct vcpu's field).
> v has just been de-referenced, so v->domain being in cache is
> more likely than unit->domain, and there's then also no data
> dependency of the second load on the first one.

Okay.
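The agreed form can be sketched as below. The types here are hypothetical cut-downs for illustration only, and dom_scheduler() plus the cpumask arguments are omitted; the point is solely which pointer the domain is loaded through:

```c
#include <stddef.h>

/* Hypothetical cut-down types; not the real Xen definitions. */
struct domain { int domain_id; };
struct sched_unit { struct domain *domain; };
struct vcpu {
    struct domain *domain;          /* field kept on struct vcpu */
    struct sched_unit *sched_unit;
};

/* Stub recording which domain pointer the caller passed. */
static struct domain *adjusted_for;

static void sched_adjust_affinity(struct domain *d, struct sched_unit *unit)
{
    (void)unit;
    adjusted_for = d;
}

static void sched_set_affinity(struct vcpu *v)
{
    struct sched_unit *unit = v->sched_unit;

    /*
     * Use v->domain rather than unit->domain: v was just dereferenced,
     * so v->domain is likely already in cache, and this load does not
     * depend on the load of v->sched_unit completing first.
     */
    sched_adjust_affinity(v->domain, unit);
}
```

Both expressions name the same domain, so this is purely a locality and dependency-chain consideration, not a behavioural change.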


Juergen


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
