Re: [Xen-devel] [v3,11/41] mips: reuse asm-generic/barrier.h
On Wed, Jan 13, 2016 at 12:58:22PM -0800, Leonid Yegoshin wrote:
> On 01/13/2016 12:48 PM, Peter Zijlstra wrote:
> >On Wed, Jan 13, 2016 at 11:02:35AM -0800, Leonid Yegoshin wrote:
> >
> >>I ask HW team about it but I have a question - has it any relationship with
> >>replacing MIPS SYNC with lightweight SYNCs (SYNC_WMB etc)?
> >
> >Of course. If you cannot explain the semantics of the primitives you
> >introduce, how can we judge the patch.
>
> You missed a point - it is a question about replacement of SYNC with
> lightweight primitives. It is NOT a question about multithread system
> behavior without any SYNC. The answer on a latest Will's question lies in
> different area.

The reason we (Peter and I) care about this isn't because we enjoy being
obstructive. It's because there is a whole load of core (i.e. portable)
kernel code that is written to the *kernel* memory model. For example,
the scheduler, RCU, mutex implementations, perf, drivers, you name it.

Consequently, it's important that the architecture back-ends implement
these portable primitives (e.g. smp_mb()) in a way that satisfies the
kernel memory model, so that core code doesn't need to worry about the
underlying architecture for synchronisation purposes.

You could turn around and say "but if MIPS gets it wrong, then that's
MIPS's problem", but actually, not having a general understanding of the
ordering guarantees provided by each architecture makes it very
difficult for us to extend the kernel memory model in such a way that it
can be implemented efficiently across the board *and* relied upon by
core code.

The virtio patch at the start of the thread doesn't particularly concern
me. It's the other patches you linked to, which implement
acquire/release, that have me worried.

Will
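[To make the concern above concrete, here is a minimal sketch, not from
the thread itself, of the message-passing idiom that portable kernel
code writes against the kernel memory model. The producer/consumer
functions and the "data"/"flag" variables are hypothetical; the
primitives smp_store_release() and smp_load_acquire() are the real
kernel ones. If an architecture implements these primitives with
ordering weaker than the kernel memory model requires, the BUG_ON()
below can fire even though the core code is correct.]

  /*
   * Illustrative only: message passing via kernel acquire/release
   * primitives. "data" and "flag" are hypothetical shared variables.
   */
  static int data;
  static int flag;

  /* Runs on CPU 0. */
  static void producer(void)
  {
          data = 42;                      /* plain store */
          smp_store_release(&flag, 1);    /* publish: the store to data
                                             must be visible before flag
                                             is observed as 1 */
  }

  /* Runs on CPU 1. */
  static void consumer(void)
  {
          if (smp_load_acquire(&flag))    /* if we observe flag == 1 ... */
                  BUG_ON(data != 42);     /* ... we must also observe
                                             data == 42; must never fire */
  }

[Whether a lightweight MIPS SYNC variant is strong enough to back such
primitives is exactly the question being debated above.]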