
Re: [Xen-devel] [PATCH v3 24/47] xen: switch from for_each_vcpu() to for_each_sched_unit()



On 24.09.19 14:31, Jan Beulich wrote:
On 24.09.2019 14:13, Jürgen Groß wrote:
On 20.09.19 17:05, Jan Beulich wrote:
On 14.09.2019 10:52, Juergen Gross wrote:
@@ -896,18 +929,22 @@ void restore_vcpu_affinity(struct domain *d)
                       cpupool_domain_cpumask(d));
           if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
           {
-            if ( v->affinity_broken )
+            if ( sched_check_affinity_broken(unit) )
               {
-                sched_set_affinity(v, unit->cpu_hard_affinity_saved, NULL);
-                v->affinity_broken = 0;
+                /* Affinity settings of one vcpu are for the complete unit. */
+                sched_set_affinity(unit->vcpu_list,
+                                   unit->cpu_hard_affinity_saved, NULL);

Yet despite the comment the function gets passed a struct vcpu *,
and this doesn't look to change by the end of the series. Is there
a reason for this?

Yes. sched_set_affinity() is used from outside of schedule.c (by
dom0_build.c).
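
(For illustration, a rough sketch of the shape being discussed, not the exact
code of the series; the field names are assumed from the hunk above. The setter
keeps its historic vcpu-based interface so dom0_build.c can keep calling it,
but the affinity it writes lives in the vcpu's sched_unit and therefore applies
to every vcpu of that unit.)

void sched_set_affinity(struct vcpu *v, const cpumask_t *hard,
                        const cpumask_t *soft)
{
    struct sched_unit *unit = v->sched_unit;   /* affinity is per unit */

    if ( hard )
        cpumask_copy(unit->cpu_hard_affinity, hard);
    if ( soft )
        cpumask_copy(unit->cpu_soft_affinity, soft);
}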

How about changing the call there then, rather than having confusing
code here?

I'm not sure that would be better.

What about dropping dom0_setup_vcpu() by calling vcpu_create() instead
and doing the pinning via a call to a new function in schedule.c after
all vcpus have been created? In fact we could even add a common function
creating all vcpus but vcpu[0], doing the pinning, and updating the node
affinity. This would probably want to be part of patch 20.
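
A rough sketch of what such a helper could look like (the name
sched_setup_dom0_vcpus, the exact interface, and the omitted error handling
are all assumptions for illustration, not the final implementation):

void __init sched_setup_dom0_vcpus(struct domain *d)
{
    unsigned int i;
    struct vcpu *v;

    /* vcpu[0] already exists; create the remaining vcpus. */
    for ( i = 1; i < d->max_vcpus; i++ )
        vcpu_create(d, i);

    /* Pin each vcpu to its own pcpu if dom0_vcpus_pin was specified. */
    if ( opt_dom0_vcpus_pin )
        for_each_vcpu ( d, v )
            sched_set_affinity(v, cpumask_of(v->vcpu_id), NULL);

    domain_update_node_affinity(d);
}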


Juergen


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

