[Xen-devel] mce: Fix for another race condition
Hi,

these are actually two patches which both fix the same race condition in the mce code. The problem is that these lines (in mctelem_reserve)

    newhead = oldhead->mcte_next;
    if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {

are racy. After you read the newhead pointer, another flow (a thread or a recursive invocation) can change the whole list and yet leave the head with the same value. So oldhead is still equal to *freelp, but the new head you install may point to any element, even one that is already in use (a sketch of this interleaving is below).

The base idea of both patches is to keep mcte_state in a separate field and set it with cmpxchg, to make sure we never pick up an already allocated element.

The first patch avoids following mcte_next through a shared head by detaching the list entirely (setting the head to NULL), falling back to a slow_reserve which scans the whole array looking for an element in the FREE state (see the second sketch below). This is surely safe and easy, but if the list is mostly allocated you end up scanning the entire array every time.

The second patch (which still needs some cleanup) uses array indexes instead of pointers, so that the head and the next reference can each be packed with extra data and updated atomically. The head is packed with a counter which is incremented on every update, so the list cannot be changed while the head keeps the same value (it works like a list version). The state is packed with the next index (which replaces mcte_next while the state is FREE) to allow an atomic read of state+next; the third sketch below shows the versioned head. To handle both thread safety and reentrancy, mctelem_reserve got a bit more complicated and the updates are not so straightforward.
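To make the window concrete, here is a minimal sketch of the pop loop around the two quoted lines; only those two lines are verbatim, the struct is trimmed down and the rest of mctelem_reserve (hold, state transition, cookie handling) is left out.

struct mctelem_ent {
    struct mctelem_ent *mcte_next;
    /* flags, cookie, payload ... */
};

static struct mctelem_ent *racy_pop(struct mctelem_ent **freelp)
{
    struct mctelem_ent *oldhead, *newhead;

    for (;;) {
        if ((oldhead = *freelp) == NULL)
            return NULL;

        /*
         * ABA window: between this read and the cmpxchg below, another
         * flow (another CPU, or a reentrant invocation on this one) can
         * pop oldhead, reserve or free other entries, and push oldhead
         * back.  *freelp equals oldhead again, so the cmpxchg succeeds,
         * but newhead was read from the old list and may point at an
         * entry that is now in use.
         */
        newhead = oldhead->mcte_next;
        if (cmpxchgptr(freelp, oldhead, newhead) == oldhead)
            return oldhead;
    }
}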
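For the first approach, this is only a rough sketch of the slow_reserve fallback, assuming Xen's cmpxchg() and an illustrative MCTE_NENT / mctctl_elems array (those names are not from the patch); the fast path that detaches the whole list with an atomic exchange of the head is not shown.

#include <xen/types.h>
#include <asm/system.h>

#define MCTE_NENT 32    /* illustrative size */

enum { MCTE_FREE = 0, MCTE_RESERVED = 1 };

static struct mctelem_ent {
    struct mctelem_ent *mcte_next;
    uint32_t mcte_state;        /* kept separate from the existing flags */
    /* payload ... */
} mctctl_elems[MCTE_NENT];

static struct mctelem_ent *slow_reserve(void)
{
    unsigned int i;

    /*
     * Full scan of the static array: an entry is only handed out if the
     * cmpxchg on its state succeeds, so it can never be reserved twice.
     * Cheap while most entries are free, but once the list is mostly
     * allocated every reservation pays for a scan of the whole array.
     */
    for (i = 0; i < MCTE_NENT; i++) {
        struct mctelem_ent *tep = &mctctl_elems[i];

        if (tep->mcte_state == MCTE_FREE &&
            cmpxchg(&tep->mcte_state, MCTE_FREE, MCTE_RESERVED) == MCTE_FREE)
            return tep;
    }

    return NULL;
}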
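For the second approach, here is a sketch of the versioned head only, under assumed details: the 16/16-bit packing, the names mcte_freehead / mcte_state_next / MCTE_IDX_NONE and the MCTE_NENT size are made up for illustration, and the patch's claim of the entry itself by cmpxchg on its packed state+next word is omitted.

#include <xen/types.h>
#include <asm/system.h>

#define MCTE_NENT     32        /* illustrative size */
#define MCTE_IDX_NONE 0xffffu   /* "no entry" index */
#define MCTE_ST_FREE  0u

static struct mctelem_ent {
    uint32_t mcte_state_next;   /* state in bits 31..16, next index in 15..0 */
    /* payload ... */
} mctctl_elems[MCTE_NENT];

/* bits 31..16: generation ("list version"), bits 15..0: first free index */
static uint32_t mcte_freehead = MCTE_IDX_NONE;  /* list initialisation elided */

static struct mctelem_ent *reserve_sketch(void)
{
    uint32_t old, new, idx, state_next;

    for (;;) {
        old = mcte_freehead;
        idx = old & 0xffffu;
        if (idx == MCTE_IDX_NONE)
            return NULL;                /* free list is empty */

        /* state and next live in one word, so this single read is consistent */
        state_next = mctctl_elems[idx].mcte_state_next;
        if ((state_next >> 16) != MCTE_ST_FREE)
            continue;                   /* stale head, re-read it */

        /* new head = incremented generation + next index of the old head */
        new = (((old >> 16) + 1) << 16) | (state_next & 0xffffu);

        /*
         * Even if another flow drained and rebuilt the list so that the
         * head index is the same, the generation differs and the cmpxchg
         * fails: the "same head, different list" case is rejected.
         */
        if (cmpxchg(&mcte_freehead, old, new) == old)
            return &mctctl_elems[idx];
    }
}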
Now, the question is: should I just send the first patch and accept the cost of the full scan in that corner case, or should I try to put the second patch into shape?

Frediano

Attachment: mce_fix2.patch
Attachment: mce_fix2_v2.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel