Re: [Xen-devel] [PATCH 2/2] xen: merge temporary vcpu pinning scenarios
On 23.07.2019 11:20, Juergen Gross wrote:
> Today there are three scenarios which are pinning vcpus temporarily to
> a single physical cpu:
>
> - NMI/MCE injection into PV domains
> - wait_event() handling
> - vcpu_pin_override() handling
>
> Each of those cases are handled independently today using their own
> temporary cpumask to save the old affinity settings.

And what guarantees that no two of them will be in use at the same
time? You don't even mention ...

> The three cases can be combined as the two latter cases will only pin
> a vcpu to the physical cpu it is already running on, while
> vcpu_pin_override() is allowed to fail.

... the NMI/#MC injection case here (despite the use of "the two" and
"while" giving the impression). Or (looking at the actual code) did you
mean "former" instead of "latter"? But if so - is that true?
v->processor gets latched into st->processor before raising the
softirq, but can't the vCPU be moved elsewhere by the time the softirq
handler actually gains control? If that's not possible (and if it's not
obvious why, and as you can see it's not obvious to me), then I think a
code comment wants to be added there.

Independent of that, introducing new failure cases for
vcpu_pin_override() isn't really nice. Then again, any two
racing/conflicting pinning attempts can't result in anything good.

Nevertheless - nice idea, so a few minor comments on the code as well,
in the hope that my point above can be addressed.

> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -1106,47 +1106,58 @@ void watchdog_domain_destroy(struct domain *d)
>          kill_timer(&d->watchdog_timer[i]);
>  }
>  
> -int vcpu_pin_override(struct vcpu *v, int cpu)
> +int vcpu_set_tmp_affinity(struct vcpu *v, int cpu, uint8_t reason)

I'd find it pretty nice if at this occasion the type of "cpu" was
changed to "unsigned int", using UINT_MAX or NR_CPUS instead of -1 for
the "remove override" case.

I'd also prefer if you didn't use "tmp" as an infix here. Make it
"temporary", "transient", or some such. Perhaps even omit "set", the
more that the function may also clear it.

> @@ -182,30 +178,24 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
>  static void __finish_wait(struct waitqueue_vcpu *wqv)
>  {
>      wqv->esp = NULL;
> -    (void)vcpu_set_hard_affinity(current, &wqv->saved_affinity);
> +    vcpu_set_tmp_affinity(current, -1, VCPU_AFFINITY_WAIT);
>  }
>  
>  void check_wakeup_from_wait(void)
>  {
> -    struct waitqueue_vcpu *wqv = current->waitqueue_vcpu;
> +    struct vcpu *curr = current;
> +    struct waitqueue_vcpu *wqv = curr->waitqueue_vcpu;
>  
>      ASSERT(list_empty(&wqv->list));
>  
>      if ( likely(wqv->esp == NULL) )
>          return;
>  
> -    /* Check if we woke up on the wrong CPU. */
> -    if ( unlikely(smp_processor_id() != wqv->wakeup_cpu) )
> +    /* Check if we are still pinned. */
> +    if ( unlikely(!(curr->affinity_broken & VCPU_AFFINITY_WAIT)) )
>      {
> -        /* Re-set VCPU affinity and re-enter the scheduler. */
> -        struct vcpu *curr = current;
> -        cpumask_copy(&wqv->saved_affinity, curr->cpu_hard_affinity);
> -        if ( vcpu_set_hard_affinity(curr, cpumask_of(wqv->wakeup_cpu)) )
> -        {
> -            gdprintk(XENLOG_ERR, "Unable to set vcpu affinity\n");
> -            domain_crash(current->domain);
> -        }
> -        wait(); /* takes us back into the scheduler */
> +        gdprintk(XENLOG_ERR, "vcpu affinity lost\n");
> +        domain_crash(current->domain);
>      }

Please use curr in favor of current.

Jan
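
[For concreteness, a minimal sketch of the interface shape the review
suggests above: "cpu" as unsigned int with NR_CPUS (rather than -1) as
the "remove override" sentinel, and a name without the "tmp"/"set"
infixes. The function name and the two helpers are hypothetical
placeholders for illustration; this is not code from the patch or from
the Xen tree.]

    /*
     * Hypothetical reworking of the patch's vcpu_set_tmp_affinity()
     * along the lines suggested in the review.  Any cpu value at or
     * beyond NR_CPUS (so NR_CPUS or UINT_MAX both work) requests
     * removal of the temporary affinity override for this reason.
     */
    int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu,
                                uint8_t reason)
    {
        if ( cpu >= NR_CPUS )
            /* Remove this reason's override, if one is in place. */
            return affinity_override_clear(v, reason);  /* hypothetical */

        /* Pin the vcpu to the single given physical cpu. */
        return affinity_override_set(v, cpu, reason);   /* hypothetical */
    }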