
Re: [Xen-devel] [PATCH v2 1/2] xen: sched: reorganize cpu_disable_scheduler()



On Mon, 2015-07-20 at 13:41 +0200, Juergen Gross wrote:
> On 07/17/2015 03:35 PM, Dario Faggioli wrote:

> > @@ -644,25 +673,66 @@ int cpu_disable_scheduler(unsigned int cpu)
> >                   cpumask_setall(v->cpu_hard_affinity);
> >               }
> >
> > -            if ( v->processor == cpu )
> > +            if ( v->processor != cpu )
> >               {
> > -                set_bit(_VPF_migrating, &v->pause_flags);
> > +                /* This vcpu is not on cpu, so we can move on. */
> >                   vcpu_schedule_unlock_irqrestore(lock, flags, v);
> > -                vcpu_sleep_nosync(v);
> > -                vcpu_migrate(v);
> > +                continue;
> > +            }
> > +
> > +            /* If it is on cpu, we must send it away. */
> > +            if ( unlikely(system_state == SYS_STATE_suspend) )
> > +            {
> > +                /*
> > +                 * If we are doing a shutdown/suspend, it is not necessary to
> > +                 * ask the scheduler to chime in. In fact:
> > +                 *  * there is no reason for it: the end result we are after
> > +                 *    is just 'all the vcpus on the boot pcpu, and no vcpu
> > +                 *    anywhere else', so let's just go for it;
> > +                 *  * it's wrong, for cpupools with only non-boot pcpus, as
> > +                 *    the scheduler would always fail to send the vcpus away
> > +                 *    from the last online (non boot) pcpu!
> > +                 *
> > +                 * Therefore, in the shutdown/suspend case, we just pick up
> > +                 * one (still) online pcpu. Note that, at this stage, all
> > +                 * domains (including dom0) have been paused already, so we
> > +                 * do not expect any vcpu activity at all.
> > +                 */
> > +                cpumask_andnot(&online_affinity, &cpu_online_map,
> > +                               cpumask_of(cpu));
> > +                BUG_ON(cpumask_empty(&online_affinity));
> > +                /*
> > +                 * As boot cpu is, usually, pcpu #0, using cpumask_first()
> > +                 * will make us converge quicker.
> > +                 */
> > +                new_cpu = cpumask_first(&online_affinity);
> > +                vcpu_move_nosched(v, new_cpu);
> 
> Shouldn't there be a vcpu_schedule_unlock_irqrestore() ?
> 
I'm sure I put one there; I was certain it was there the last time I
inspected the patch before hitting send.

But I see that it's not there now, so I must have messed up when
formatting the patch, or something like that. :-(

It's really weird: I forgot it during development, the system was
hanging, and then I added it, which is why I'm sure I did have it in
place... but perhaps I fat-fingered some stgit command that made it
disappear.

In any case, sorry for this. I will re-test (just to be sure) and
re-send (and this time I'll triple-check!!).
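
For reference, with the unlock restored, the suspend branch should look
something like this (just a sketch based on the hunk quoted above; the
identifiers are the ones from the quoted patch, and the exact placement
reflects my reading of it, not the re-sent version):

    if ( unlikely(system_state == SYS_STATE_suspend) )
    {
        /* Pick one (still) online pcpu; all domains are already
         * paused at this stage, so no vcpu activity is expected. */
        cpumask_andnot(&online_affinity, &cpu_online_map, cpumask_of(cpu));
        BUG_ON(cpumask_empty(&online_affinity));

        /* Boot cpu is usually pcpu #0, so cpumask_first() converges
         * quicker. */
        new_cpu = cpumask_first(&online_affinity);
        vcpu_move_nosched(v, new_cpu);

        /* The call that went missing: drop the schedule lock taken
         * earlier in the loop before moving on to the next vcpu. */
        vcpu_schedule_unlock_irqrestore(lock, flags, v);
    }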

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



 

