Re: [Xen-devel] [PATCH v3 36/47] xen/sched: carve out freeing sched_unit memory into dedicated function
On 14.09.2019 10:52, Juergen Gross wrote:
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -351,26 +351,10 @@ static void sched_spin_unlock_double(spinlock_t *lock1,
>                                       spinlock_t *lock2,
>      spin_unlock_irqrestore(lock1, flags);
>  }
>  
> -static void sched_free_unit(struct sched_unit *unit, struct vcpu *v)
> +static void sched_free_unit_mem(struct sched_unit *unit)
>  {
>      struct sched_unit *prev_unit;
>      struct domain *d = unit->domain;
> -    struct vcpu *vunit;
> -    unsigned int cnt = 0;
> -
> -    /* Don't count to be released vcpu, might be not in vcpu list yet. */
> -    for_each_sched_unit_vcpu ( unit, vunit )
> -        if ( vunit != v )
> -            cnt++;
> -
> -    v->sched_unit = NULL;
> -    unit->runstate_cnt[v->runstate.state]--;
> -
> -    if ( cnt )
> -        return;
> -
> -    if ( unit->vcpu_list == v )
> -        unit->vcpu_list = v->next_in_list;
>  
>      if ( d->sched_unit_list == unit )
>          d->sched_unit_list = unit->next_in_list;
> @@ -393,6 +377,26 @@ static void sched_free_unit(struct sched_unit *unit,
>                            struct vcpu *v)
>      xfree(unit);
>  }
>  
> +static void sched_free_unit(struct sched_unit *unit, struct vcpu *v)
> +{
> +    struct vcpu *vunit;
> +    unsigned int cnt = 0;
> +
> +    /* Don't count to be released vcpu, might be not in vcpu list yet. */
> +    for_each_sched_unit_vcpu ( unit, vunit )
> +        if ( vunit != v )
> +            cnt++;
> +
> +    v->sched_unit = NULL;
> +    unit->runstate_cnt[v->runstate.state]--;
> +
> +    if ( unit->vcpu_list == v )
> +        unit->vcpu_list = v->next_in_list;
> +
> +    if ( !cnt )
> +        sched_free_unit_mem(unit);
> +}

The entire sched_free_unit() is new code (starting from patch 3) - why
don't you arrange for the split right away, instead of moving code
around here?

Jan
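For readers following the thread: the question above is only about patch ordering, not about the split itself. The shape the patch ends up with is that sched_free_unit() does the per-vcpu bookkeeping and only delegates to sched_free_unit_mem() once the last vcpu has been detached from the unit. A minimal standalone sketch of that pattern follows; it is plain C that compiles on its own, and all names, types and fields are illustrative stand-ins, not Xen's actual data structures.

/*
 * Standalone model (not Xen code) of the "free the membership, then
 * maybe the memory" split: unit_free() drops one vcpu from the unit and
 * calls unit_free_mem() only when no vcpus remain; unit_free_mem() alone
 * releases the unit itself.
 */
#include <stdlib.h>

struct vcpu {
    struct vcpu *next_in_list;
    struct unit *unit;
};

struct unit {
    struct vcpu *vcpu_list;   /* singly linked list of member vcpus */
    unsigned int nr_vcpus;    /* simplified stand-in for runstate_cnt[] */
};

/* Rough analogue of sched_free_unit_mem(): release the unit itself.
 * In Xen this also unlinks the unit from the domain's unit list. */
static void unit_free_mem(struct unit *unit)
{
    free(unit);
}

/* Rough analogue of sched_free_unit(): detach one vcpu, free the unit
 * once it has become empty. */
static void unit_free(struct unit *unit, struct vcpu *v)
{
    struct vcpu **pp;

    v->unit = NULL;

    /* Unlink v; it may not have been inserted yet, so search defensively. */
    for ( pp = &unit->vcpu_list; *pp; pp = &(*pp)->next_in_list )
        if ( *pp == v )
        {
            *pp = v->next_in_list;
            unit->nr_vcpus--;
            break;
        }

    if ( unit->nr_vcpus == 0 )
        unit_free_mem(unit);
}

int main(void)
{
    struct unit *u = calloc(1, sizeof(*u));
    struct vcpu *v0 = calloc(1, sizeof(*v0));
    struct vcpu *v1 = calloc(1, sizeof(*v1));

    v0->unit = v1->unit = u;
    v0->next_in_list = v1;
    u->vcpu_list = v0;
    u->nr_vcpus = 2;

    unit_free(u, v1);          /* unit survives: v0 is still a member */
    unit_free(u, v0);          /* last vcpu gone: unit memory is freed */

    free(v0);
    free(v1);
    return 0;
}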