
Re: [Xen-devel] [PATCH v2 15/32] powerpc: define __smp_xxx



On Wed, Jan 06, 2016 at 09:51:52AM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
> [snip]
> > > > > Another thing is that smp_lwsync() may have a third user (other than
> > > > > smp_load_acquire() and smp_store_release()):
> > > > > 
> > > > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > > > 
> > > > > I'm OK to change my patch accordingly, but do we really want
> > > > > smp_lwsync() to get involved in this cleanup? If I understand you
> > > > > correctly, this cleanup focuses on external APIs like smp_{r,w,}mb(),
> > > > > while smp_lwsync() is internal to PPC.
> > > > > 
> > > > > Regards,
> > > > > Boqun
> > > > 
> > > > I think you missed the leading ___ :)
> > > > 
> > > 
> > > What I meant here was that smp_lwsync() was originally internal to PPC, but
> > > never mind ;-)
> > > 
> > > > smp_store_release is external and it needs __smp_lwsync as
> > > > defined here.
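For reference, the store-release side has the same shape as the __smp_load_acquire()
hunk quoted near the end of this mail; a rough sketch of the PPC definition, assuming
it mirrors the acquire side:

#define __smp_store_release(p, v)					\
do {									\
	compiletime_assert_atomic_type(*p);				\
	__smp_lwsync();	/* order prior accesses before the store */	\
	WRITE_ONCE(*p, v);						\
} while (0)
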
> > > > 
> > > > I can duplicate some code and have smp_lwsync *not* call __smp_lwsync
> > > 
> > > You mean bringing smp_lwsync() back? Because I haven't seen you define it
> > > in asm-generic/barrier.h in the previous patches, and you just delete it in
> > > this patch.
> > > 
> > > > but why do this? Still, if you prefer it this way,
> > > > please let me know.
> > > > 
> > > 
> > > I think deleting smp_lwsync() is fine, though I need to change the atomic
> > > variants patches on PPC because of it ;-/
> > > 
> > > Regards,
> > > Boqun
> > 
> > Sorry, I don't understand - why do you have to do anything?
> > I changed all users of smp_lwsync so they
> > use __smp_lwsync on SMP and barrier() on !SMP.
> > 
> > This is exactly the current behaviour; I also tested that the
> > generated code does not change at all.
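The SMP/!SMP split described above is the usual asm-generic/barrier.h layering;
a minimal sketch of the pattern (illustrative, not the exact hunk from the series):

#ifdef CONFIG_SMP
#define smp_mb()	__smp_mb()	/* arch-supplied barrier */
#define smp_rmb()	__smp_rmb()
#define smp_wmb()	__smp_wmb()
#else
#define smp_mb()	barrier()	/* compiler barrier is enough on !SMP */
#define smp_rmb()	barrier()
#define smp_wmb()	barrier()
#endif

smp_store_release()/smp_load_acquire(), and hence the former smp_lwsync() callers,
follow the same scheme.
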
> > 
> > Is there a patch in your tree that conflicts with this?
> > 
> 
> Because in a patchset which implements the atomic relaxed/acquire/release
> variants on PPC I use smp_lwsync(), which gives it another user;
> please see this mail:
> 
> http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> 
> in the definition of PPC's __atomic_op_release().
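For context, the __atomic_op_release() definition in the linked patch looks roughly
like this (a sketch; the exact form there may differ):

#define __atomic_op_release(op, args...)				\
({									\
	smp_lwsync();	/* release barrier before the relaxed op */	\
	op##_relaxed(args);						\
})
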
> 
> 
> But I think removing smp_lwsync() is a good idea, and actually I think we
> can go further and remove __smp_lwsync() as well, letting __smp_load_acquire()
> and __smp_store_release() call __lwsync() directly, but that is another thing.
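Calling __lwsync() directly would look roughly like this on the acquire side
(purely an illustration of the suggestion, not what the posted series does):

#define __smp_load_acquire(p)						\
({									\
	typeof(*p) ___p1 = READ_ONCE(*p);				\
	compiletime_assert_atomic_type(*p);				\
	__lwsync();	/* emit lwsync directly, no wrapper */		\
	___p1;								\
})
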
> 
> Anyway, I will modify my patch.
> 
> Regards,
> Boqun


Thanks!
Could you send an ack then please?

> > 
> > > > > >  	WRITE_ONCE(*p, v);					\
> > > > > >  } while (0)
> > > > > >  
> > > > > > -#define smp_load_acquire(p)					\
> > > > > > +#define __smp_load_acquire(p)					\
> > > > > >  ({								\
> > > > > >  	typeof(*p) ___p1 = READ_ONCE(*p);			\
> > > > > >  	compiletime_assert_atomic_type(*p);			\
> > > > > > -	smp_lwsync();						\
> > > > > > +	__smp_lwsync();						\
> > > > > >  	___p1;							\
> > > > > >  })
> > > > > >  
> > > > > > -- 
> > > > > > MST
> > > > > > 
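As a usage note, the acquire/release pair in the hunk above is what makes the usual
message-passing pattern work; a minimal kernel-style illustration (not part of the patch):

static int data;
static int flag;

void producer(void)
{
	data = 42;			/* payload is written first */
	smp_store_release(&flag, 1);	/* lwsync, then the store, on PPC */
}

void consumer(void)
{
	if (smp_load_acquire(&flag))	/* the load, then lwsync, on PPC */
		BUG_ON(data != 42);	/* guaranteed to see the payload */
}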
