Re: [Xen-devel] [PATCH] xen: idle_loop: either deal with tasklets or go idle
>>> On 16.06.17 at 12:44, <dario.faggioli@xxxxxxxxxx> wrote:
> On Fri, 2017-06-16 at 02:54 -0600, Jan Beulich wrote:
>> > > > On 14.06.17 at 18:53, <dario.faggioli@xxxxxxxxxx> wrote:
>> >
>> > --- a/xen/arch/x86/domain.c
>> > +++ b/xen/arch/x86/domain.c
>> > @@ -112,12 +112,18 @@ static void play_dead(void)
>> >
>> > static void idle_loop(void)
>> > {
>> > + unsigned int cpu = smp_processor_id();
>> > +
>> > for ( ; ; )
>> > {
>> > - if ( cpu_is_offline(smp_processor_id()) )
>> > + if ( cpu_is_offline(cpu) )
>> > play_dead();
>> > - (*pm_idle)();
>> > - do_tasklet();
>> > +
>> > + /* Are we here for running vcpu context tasklets, or for idling? */
>> > + if ( unlikely(tasklet_work_to_do(cpu)) )
>>
>> I'm not really sure about the "unlikely()" here.
>>
> It's basically already there, without this patch, at the very beginning
> of do_tasklet():
>
> if ( likely(*work_to_do != (TASKLET_enqueued|TASKLET_scheduled)) )
> return;
>
> Which is right the check that I moved in tasklet_work_to_do(), and as
> you can see, it has the likely.
>
> So, I fundamentally kept it for consistency with the old code. I
> actually think it does make sense, but I don't have a very strong
> opinion about it.
Okay then.
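For reference, the check being discussed boils down to a both-bits-set test. A standalone sketch (the per-CPU indirection is flattened into a plain argument, the helper name and bit values are illustrative, and this is not Xen's actual code):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the Xen flag definitions. */
#define TASKLET_enqueued  (1ul << 0)
#define TASKLET_scheduled (1ul << 1)

/* The predicate moved into tasklet_work_to_do(): work is pending only
 * when a tasklet is both enqueued *and* the scheduler has already run
 * to schedule it; any other combination means "go idle" instead. */
static bool tasklet_work_pending(unsigned long work)
{
    return work == (TASKLET_enqueued | TASKLET_scheduled);
}
```

Since idling is the common case in the idle loop, this predicate is usually false, which is what the unlikely() annotation expresses.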
>> > unsigned long *work_to_do = &per_cpu(tasklet_work_to_do, cpu);
>> > struct list_head *list = &per_cpu(tasklet_list, cpu);
>> >
>> > - /*
>> > - * Work must be enqueued *and* scheduled. Otherwise there is no work to
>> > - * do, and/or scheduler needs to run to update idle vcpu priority.
>> > - */
>> > - if ( likely(*work_to_do != (TASKLET_enqueued|TASKLET_scheduled)) )
>> > - return;
>>
>> Perhaps it also wouldn't hurt to convert this to an ASSERT() too.
>>
> Nope, I can't. It's a best-effort check: *work_to_do (which is
> per_cpu(tasklet_work_to_do, cpu)) can change underneath us, so the
> assert might fail.
How so? TASKLET_enqueued can only be cleared by do_tasklet()
itself, and I don't think nesting invocations of the function can or
should occur. TASKLET_scheduled is only being cleared when
schedule() observes that bit set without TASKLET_enqueued also
set. IOW there may be races in setting of the bits, but since we
expect the caller to have done this check already, I think an
ASSERT() would be quite fine here.
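For what it's worth, the suggested shape would look roughly like this (a sketch under the assumption that every caller performs the tasklet_work_to_do() check first; the function name and the queued-count argument are illustrative, and the standard assert() stands in for Xen's ASSERT()):

```c
#include <assert.h>

#define TASKLET_enqueued  (1ul << 0)
#define TASKLET_scheduled (1ul << 1)

/* If callers are guaranteed to have checked tasklet_work_to_do()
 * before entering, the early return inside do_tasklet() can become a
 * hard invariant: both bits must be set on entry. This sketch also
 * clears TASKLET_enqueued once the (simulated) list is drained. */
static unsigned long do_tasklet_sketch(unsigned long work, unsigned int queued)
{
    assert(work == (TASKLET_enqueued | TASKLET_scheduled));

    if ( queued <= 1 )              /* last tasklet on the list */
        work &= ~TASKLET_enqueued;

    return work;
}
```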
>> > --- a/xen/include/xen/tasklet.h
>> > +++ b/xen/include/xen/tasklet.h
>> > @@ -40,9 +40,19 @@ DECLARE_PER_CPU(unsigned long, tasklet_work_to_do);
>> > #define TASKLET_enqueued (1ul << _TASKLET_enqueued)
>> > #define TASKLET_scheduled (1ul << _TASKLET_scheduled)
>> >
>> > +static inline bool tasklet_work_to_do(unsigned int cpu)
>> > +{
>> > + /*
>> > + * Work must be enqueued *and* scheduled. Otherwise there is no work to
>> > + * do, and/or scheduler needs to run to update idle vcpu priority.
>> > + */
>> > + return per_cpu(tasklet_work_to_do, cpu) == (TASKLET_enqueued|TASKLET_scheduled);
>> > +}
>>
>> Wouldn't cpu_is_haltable() then also better use this new function?
>>
> Mmm... Perhaps. It's certainly less code churn.
>
> ARM code would then contain two invocations of cpu_is_haltable() (the
> first happens with IRQ enabled, so a second one with IRQ disabled is
> necessary). But that is *exactly* the same thing we do on x86 (they're
> just in different functions in that case).
>
> So, I reworked the patch according to these suggestions, and you can
> look at it below.
I'm confused: You've added further uses of cpu_is_haltable()
where the cheaper tasklet_work_to_do() would do. That's sort
of the opposite of what I've suggested. Of course that suggestion
of mine was more than a 1:1 replacement - the implied question was
whether cpu_is_haltable() simply checking tasklet_work_to_do to
be non-zero isn't too lax (and wouldn't better be
!tasklet_work_to_do()).
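To make the distinction concrete, here is a hedged sketch of the two predicates being contrasted (names are illustrative, the per-CPU access is flattened into a plain argument, and this is not Xen's actual cpu_is_haltable() code). The two checks disagree exactly when only one of the two bits is set:

```c
#include <assert.h>
#include <stdbool.h>

#define TASKLET_enqueued  (1ul << 0)
#define TASKLET_scheduled (1ul << 1)

/* The existing behaviour: refuse to halt if *any* tasklet bit is set. */
static bool haltable_nonzero_check(unsigned long work)
{
    return work == 0;
}

/* The alternative raised above: halt unless work is both enqueued and
 * scheduled, i.e. the equivalent of !tasklet_work_to_do(). */
static bool haltable_strict_check(unsigned long work)
{
    return work != (TASKLET_enqueued | TASKLET_scheduled);
}
```

With only TASKLET_scheduled set, for instance, the first predicate keeps the CPU awake while the second would let it halt, which is precisely the case the question hinges on.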
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel