
Re: [Xen-devel] Re: [Xen-changelog] [xen-unstable] xen: Split domain_flags into discrete first-class fields in the



On Thu, 2007-04-05 at 17:59 +0100, Keir Fraser wrote:
> On 5/4/07 16:56, "Keir Fraser" <keir@xxxxxxxxxxxxx> wrote:
> 
> > On 5/4/07 16:44, "Hollis Blanchard" <hollisb@xxxxxxxxxx> wrote:
> > 
> >> This is an interface problem: using the interface in a way that works on
> >> x86 fails on other architectures. PLEASE let's redefine the interface to
> >> prevent this from happening. In this case, that means replacing the
> >> xchg() macro with
> >>         static inline atomic_t xchg(atomic_t *ptr, atomic_t val)
> >> and changing the type of 'is_dying'.
> > 
> > Just need to define bool_t appropriately. What do you need: a long?
> 
> Does PowerPC support atomic byte loads and stores by the way (i.e.,
> concurrent loads and stores to adjacent bytes by different processors do not
> conflict with each other)?

Yes, there are single-byte load and store instructions.
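
To spell out the scenario being asked about (a sketch in C; only
is_dying comes from the actual structure, the neighbouring field is
made up for illustration):

        struct domain {
            bool_t is_dying;     /* byte 0 */
            bool_t is_shutdown;  /* byte 1, same word */
        };

        /* CPU0: d->is_dying = 1;     CPU1: d->is_shutdown = 1;
         *
         * With byte-granular stores these never conflict.  On an
         * architecture that emulated a byte store with a word-sized
         * read-modify-write, CPU1 could overwrite CPU0's store. */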

> In which case it might be worth keeping bool_t
> and defining atomic_bool_t or atomic_rmw_bool_t for bools that need to be
> atomically read-modified-written. That would avoid bloating critical
> structures for the few bools that need atomic r-m-w semantics.

If that's your preference.
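
To make that concrete, something like the following is what such a
type might look like (a sketch only; the helper names, and the
assumption of a Linux-style atomic_xchg() underneath, are
illustrative):

        /* A bool that needs atomic read-modify-write semantics.  It
         * is backed by a full atomic_t so every architecture can
         * exchange it safely, while plain bool_t fields stay one
         * byte. */
        typedef struct {
            atomic_t flag;
        } atomic_rmw_bool_t;

        static inline int atomic_bool_read(atomic_rmw_bool_t *b)
        {
            return atomic_read(&b->flag);
        }

        /* Atomically set the flag and return its old value. */
        static inline int atomic_bool_test_and_set(atomic_rmw_bool_t *b)
        {
            return atomic_xchg(&b->flag, 1);
        }

is_dying would then become an atomic_rmw_bool_t, and an accidental
direct assignment (d->is_dying = 1) no longer compiles.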

However, as long as xchg() accepts all pointer types, this problem will
recur. We've had the same problem with the set_bit() interface in the
past, and I see Xen's x86 code still uses a void* as the pointer
argument there. x86 Linux doesn't use void* for its bitops, for exactly
this reason.
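
For comparison, here is roughly what a typed exchange looks like and
why it catches the bug at compile time (again a sketch: the body is
illustrative, and a real implementation would use the architecture's
exchange primitive -- lwarx/stwcx. on PowerPC, a locked xchg on x86):

        typedef struct { volatile long counter; } atomic_t;

        /* Because the parameter is atomic_t * rather than void *,
         * passing a one-byte bool_t field is a compile error on
         * every architecture -- instead of code that happens to
         * work on x86 (which has xchgb) and cannot be implemented
         * correctly for a single byte on PowerPC. */
        static inline long xchg_atomic(atomic_t *ptr, long val)
        {
            long old = ptr->counter;   /* illustrative, not atomic */
            ptr->counter = val;
            return old;
        }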

These are not difficult changes to make, and they solve real long-term
maintenance problems. I'm sure that if x86 had this issue, an
arch-neutral API would have been in place from day one.

-- 
Hollis Blanchard
IBM Linux Technology Center


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

