
Re: [Xen-devel] [PATCH] x86: adjust placement of pause insn in _raw_spin_lock()


  • To: Jan Beulich <jbeulich@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Date: Fri, 08 Aug 2008 15:02:55 +0100
  • Cc:
  • Delivery-date: Fri, 08 Aug 2008 07:03:31 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acj5WG/ormbFmmVLEd2xugAX8io7RQABwR9E
  • Thread-topic: [Xen-devel] [PATCH] x86: adjust placement of pause insn in _raw_spin_lock()

Ah, I suppose it reduces lock acquisition latency slightly. I'll apply it.

 -- Keir

On 8/8/08 14:12, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:

> Why?
> 
>  -- Keir
> 
> On 8/8/08 13:49, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
> 
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
>> 
>> Index: 2008-08-06/xen/include/asm-x86/spinlock.h
>> ===================================================================
>> --- 2008-08-06.orig/xen/include/asm-x86/spinlock.h 2007-09-10 09:59:37.000000000 +0200
>> +++ 2008-08-06/xen/include/asm-x86/spinlock.h 2008-08-07 12:36:13.000000000 +0200
>> @@ -23,8 +23,8 @@ static inline void _raw_spin_lock(spinlo
>>          "1:  lock; decb %0         \n"
>>          "    js 2f                 \n"
>>          ".section .text.lock,\"ax\"\n"
>> -        "2:  cmpb $0,%0            \n"
>> -        "    rep; nop              \n"
>> +        "2:  rep; nop              \n"
>> +        "    cmpb $0,%0            \n"
>>          "    jle 2b                \n"
>>          "    jmp 1b                \n"
>>          ".previous"
>> 
>> 
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
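
For readers skimming the patch above: the change swaps the order of the pause (rep; nop) and the lock-byte test in the slow path, so the test and the backward branch sit back to back. Below is a minimal C-style sketch of the two spin-wait orderings, assuming a byte lock that is positive when free and zero or negative when held; cpu_relax(), spin_wait_before() and spin_wait_after() are illustrative names for this sketch, not the Xen source itself.

#include <stdbool.h>

/* cpu_relax() stands in for the "rep; nop" (pause) instruction. */
static inline void cpu_relax(void)
{
    __asm__ __volatile__("rep; nop" ::: "memory");
}

/* Old ordering: test the lock byte, then pause, then branch on the earlier
 * test.  Even the iteration that finally sees the lock free pays for one
 * more pause before falling through to retry the acquisition. */
static void spin_wait_before(const volatile signed char *lock)
{
    bool still_held;
    do {
        still_held = (*lock <= 0);  /* cmpb $0,%0 */
        cpu_relax();                /* rep; nop   */
    } while (still_held);           /* jle 2b     */
}

/* New ordering: pause first, then test and branch back to back, so the
 * retry happens immediately once the lock is observed free. */
static void spin_wait_after(const volatile signed char *lock)
{
    do {
        cpu_relax();                /* rep; nop   */
    } while (*lock <= 0);           /* cmpb / jle */
}

With the original ordering, the iteration that finally observes the lock free still executes one pause before leaving the wait loop; with the new ordering the check is the last thing done before jumping back to the locked decrement, which is where the small latency win Keir mentions comes from.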
