
Re: [PATCH 1/3] xen/sched: introduce cpupool_update_node_affinity()



On 03.08.22 10:30, Jan Beulich wrote:
On 03.08.2022 10:01, Juergen Gross wrote:
On 03.08.22 09:50, Jan Beulich wrote:
On 02.08.2022 15:27, Juergen Gross wrote:
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1790,28 +1790,14 @@ int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
     return ret;
 }
 
-void domain_update_node_affinity(struct domain *d)
+void domain_update_node_affinity_noalloc(struct domain *d,
+                                         const cpumask_t *online,
+                                         struct affinity_masks *affinity)
 {
-    cpumask_var_t dom_cpumask, dom_cpumask_soft;
     cpumask_t *dom_affinity;
-    const cpumask_t *online;
     struct sched_unit *unit;
     unsigned int cpu;
 
-    /* Do we have vcpus already? If not, no need to update node-affinity. */
-    if ( !d->vcpu || !d->vcpu[0] )
-        return;
-
-    if ( !zalloc_cpumask_var(&dom_cpumask) )
-        return;
-    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
-    {
-        free_cpumask_var(dom_cpumask);
-        return;
-    }

Instead of splitting the function, did you consider using
cond_zalloc_cpumask_var() here, thus allowing (but not requiring)
callers to pre-allocate the masks? Imo that would be quite a bit less
code churn.
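
For illustration, a rough sketch of how that idea might look, assuming
domain_update_node_affinity() grew an optional (possibly pre-allocated)
struct affinity_masks parameter - this is a sketch of the suggestion, not
code from the posted patch:

    /*
     * Sketch only: allocate the masks on first use via
     * cond_zalloc_cpumask_var(); a caller passing pre-allocated masks
     * only gets them cleared, not re-allocated.
     */
    void domain_update_node_affinity(struct domain *d,
                                     struct affinity_masks *affinity)
    {
        struct affinity_masks local = {};
        struct affinity_masks *masks = affinity ?: &local;

        /* Do we have vcpus already? If not, no need to update node-affinity. */
        if ( !d->vcpu || !d->vcpu[0] )
            return;

        if ( !cond_zalloc_cpumask_var(&masks->hard) ||
             !cond_zalloc_cpumask_var(&masks->soft) )
            goto out;

        /* ... existing node-affinity computation, using masks->hard/soft ... */

     out:
        if ( !affinity )
        {
            free_cpumask_var(masks->soft);
            free_cpumask_var(masks->hard);
        }
    }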

This would require changing all callers of domain_update_node_affinity()
to add the additional mask parameter. The struct affinity_masks, which is
currently local to common/sched, would then need to be made globally
visible.

I'm not sure this is a good idea.

Hmm, I see there are quite a few callers (so there would be code churn
elsewhere). But I don't think the struct would need to be made globally
visible - the majority of callers could simply pass NULL, making the
function use a local instance of the struct instead. Personally I think
that would still be neater than a _noalloc-suffixed variant of the
function (especially in this case, where the name is already quite long).

Hmm, true.

I could even rename the real function to domain_update_node_aff() and
add an inline domain_update_node_affinity() wrapper passing NULL for the
new parameter.

But I guess this is then up to you / the scheduler maintainers.
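
A minimal sketch of that naming idea (assuming struct affinity_masks is
declared in a scheduler-private header; the exact placement is an
assumption here):

    /* Sketch only: existing callers stay unchanged via an inline wrapper. */
    void domain_update_node_aff(struct domain *d, struct affinity_masks *affinity);

    static inline void domain_update_node_affinity(struct domain *d)
    {
        domain_update_node_aff(d, NULL);
    }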

--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -410,6 +410,48 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
     return ret;
 }
 
+/* Update affinities of all domains in a cpupool. */
+static int cpupool_alloc_affin_masks(struct affinity_masks *masks)
+{
+    if ( !alloc_cpumask_var(&masks->hard) )
+        return -ENOMEM;
+    if ( alloc_cpumask_var(&masks->soft) )
+        return 0;
+
+    free_cpumask_var(masks->hard);
+    return -ENOMEM;
+}

Wouldn't this be a nice general helper function, also usable from
outside of this CU?

I considered that, but wasn't sure it would really be helpful. The only
other potential user would be domain_update_node_affinity(), which would
require using the zalloc variant of the allocation in the helper (not
that this would be a major problem, though).

I was actually thinking the other way around - the clearing of the masks
might better be moved into what is domain_update_node_affinity_noalloc()
in this version of the patch, so the helper could continue to use the
non-clearing allocations.

I guess with cond_zalloc_cpumask_var() this would come for free.
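
To illustrate why the clearing comes for free: cond_zalloc_cpumask_var()
allocates a zeroed mask when none is there yet and merely clears an
already allocated one, so a cpupool-side caller could allocate the masks
once (with the non-clearing helper above) and reuse them for every
domain. A rough sketch of such a caller - the loop and locking details
are assumptions of this sketch, not quoted from the patch:

    /* Sketch only: update the node affinity of all domains in a cpupool. */
    static void cpupool_update_node_affinity(const struct cpupool *c)
    {
        struct affinity_masks masks;
        struct domain *d;

        if ( cpupool_alloc_affin_masks(&masks) )
            return;

        rcu_read_lock(&domlist_read_lock);

        for_each_domain_in_cpupool(d, c)
            domain_update_node_aff(d, &masks);

        rcu_read_unlock(&domlist_read_lock);

        free_cpumask_var(masks.soft);
        free_cpumask_var(masks.hard);
    }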


Juergen
