
[Xen-devel] Re: [PATCH] x86/hpet: add a lock when cpu clear cpumask in hpet_broadcast_exit();


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: David Wang <DavidWang@xxxxxxxxxxx>
  • Date: Tue, 17 Apr 2018 05:44:43 +0000
  • Accept-language: zh-CN, en-US
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Fiona Li (BJ-RD)" <FionaLi@xxxxxxxxxxx>
  • Delivery-date: Tue, 17 Apr 2018 05:55:54 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHT1XjIK6SR18mEDU+Owe0+mqz/5aQEcgN0
  • Thread-topic: [PATCH] x86/hpet: add a lock when cpu clear cpumask in hpet_broadcast_exit();


Hi Jan,
        Thank you for pointing out the problem. I will revise that.
My answers are inline below.

From: Jan Beulich <JBeulich@xxxxxxxx>
Sent: April 16, 2018 19:48
To: David Wang
Cc: xen-devel; Fiona Li(BJ-RD)
Subject: Re: [PATCH] x86/hpet: add a lock when cpu clear cpumask in
hpet_broadcast_exit();
    
>>> On 16.04.18 at 10:00, <Davidwang@xxxxxxxxxxx> wrote:
> Via hpet_get_channel(), CPUs sometimes share an in-use channel. A
> core therefore shouldn't clear its bit in the channel's cpumask while
> another core is reading the mask. If core zero and core one share a
> channel, the cpumask is 0x3. If core zero clears its bit between core
> one's calls to cpumask_empty() and cpumask_first(), cpumask_first()
> returns nr_cpu_ids, which trips ASSERT(cpu < nr_cpu_ids).

I can see your point, but that's in hpet_detach_channel() afaics,
which your description doesn't mention at all. And the assertion
would - afaict - happen through hpet_detach_channel() ->
set_channel_irq_affinity() -> cpumask_of() (as of e8bf5addc9).
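
To spell the window out (a sketch of the pattern at issue, not the
exact source; names as in xen/arch/x86/hpet.c):

    /*
     * CPU1 runs hpet_detach_channel() while CPU0, in
     * hpet_broadcast_exit(), clears its bit in ch->cpumask without
     * holding ch->lock.
     */
    if ( !cpumask_empty(ch->cpumask) )         /* CPU1: still sees CPU0's bit */
    {
        /* CPU0: cpumask_clear_cpu(cpu, ch->cpumask) lands here */
        ch->cpu = cpumask_first(ch->cpumask);  /* CPU1: now gets nr_cpu_ids */
        set_channel_irq_affinity(ch, ch->cpu); /* cpumask_of() then hits
                                                  ASSERT(cpu < nr_cpu_ids) */
    }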

Please realize that it helps review quite a bit if you write concise
descriptions for your changes.

[DavidWang]: OK, I will revise it. Thanks.
 

> --- a/xen/arch/x86/hpet.c
> +++ b/xen/arch/x86/hpet.c
> @@ -740,7 +740,9 @@ void hpet_broadcast_exit(void)
>      if ( !reprogram_timer(deadline) )
>          raise_softirq(TIMER_SOFTIRQ);
>  
> +    spin_lock_irq(&ch->lock);
>      cpumask_clear_cpu(cpu, ch->cpumask);
> +    spin_unlock_irq(&ch->lock);

Rather than this, how about eliminating the cpumask_empty() call
in favor of just the cpumask_first() one in hpet_detach_channel()
(with a local variable storing the intermediate result)? Or if acquiring
the lock can't be avoided here, you would perhaps be better off not
drop it before calling hpet_detach_channel() (which has only this
single call site and hence would be straightforward to adjust).
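
Something like this sketch of the first variant (assuming the current
shape of hpet_detach_channel(); "first" is a local name introduced
here only for illustration):

    /*
     * Read the mask once, so the "empty?" and "first" decisions cannot
     * be torn apart by a concurrent cpumask_clear_cpu().
     */
    unsigned int first = cpumask_first(ch->cpumask);

    if ( first >= nr_cpu_ids )
    {
        /* Mask is empty: release the channel as before. */
    }
    else
    {
        ch->cpu = first;
        set_channel_irq_affinity(ch, first);
    }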

[DavidWang]: Experiments confirmed that a local variable storing the
intermediate result solves the problem. On the one hand, a local
variable is more efficient than adding a lock; on the other, it makes
the code less clear to read. In fact, hpet_detach_channel() already
acquires ch->lock. Can we move that acquisition earlier, to before the
cpumask_clear_cpu() call in hpet_broadcast_exit(), and release the lock
inside hpet_detach_channel()? Concretely, I have in mind something like
the sketch below.
Looking forward to your suggestion.
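
(A sketch only, not a tested patch; it assumes hpet_detach_channel()
would be changed to expect ch->lock held on entry and to drop it on
every path:)

    /* In hpet_broadcast_exit(): */
    spin_lock_irq(&ch->lock);
    cpumask_clear_cpu(cpu, ch->cpumask);

    if ( !(ch->flags & HPET_EVT_LEGACY) )
        hpet_detach_channel(cpu, ch);   /* enters locked, unlocks itself */
    else
        spin_unlock_irq(&ch->lock);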
  



    
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

