[Xen-changelog] [linux-2.6.18-xen] linux/evtchn: Add memory barriers to evtchn ring accesses.
# HG changeset patch
# User Keir Fraser &lt;keir.fraser@xxxxxxxxxx&gt;
# Date 1216724382 -3600
# Node ID 8a3dc4fdb4785447398e983c22241c38f128663b
# Parent  905f275ed4d8a7bca99dba273e9bc838f605e8b9
linux/evtchn: Add memory barriers to evtchn ring accesses.

Xenstore infrequently hangs on IA64. xenstored itself is still alive,
but the xenstore-XXX commands get no response. After tracking this
down, I found that evtchn_read() infrequently returns a wrong evtchn
port number, so evtchn_write() never unmasks the right port.

Signed-off-by: Kouya Shimura &lt;kouya@xxxxxxxxxxxxxx&gt;

Yes, updates of ring_prod and ring_cons are separately protected by
different locks/mutexes, but the data communication between producer
and consumer is lock-free. Barriers are needed.

Acked-by: Keir Fraser &lt;keir.fraser@xxxxxxxxxx&gt;
---
 drivers/xen/evtchn/evtchn.c |    2 ++
 1 files changed, 2 insertions(+)

diff -r 905f275ed4d8 -r 8a3dc4fdb478 drivers/xen/evtchn/evtchn.c
--- a/drivers/xen/evtchn/evtchn.c	Mon Jul 21 09:51:36 2008 +0100
+++ b/drivers/xen/evtchn/evtchn.c	Tue Jul 22 11:59:42 2008 +0100
@@ -84,6 +84,7 @@ void evtchn_device_upcall(int port)
 	if ((u = port_user[port]) != NULL) {
 		if ((u->ring_prod - u->ring_cons) < EVTCHN_RING_SIZE) {
 			u->ring[EVTCHN_RING_MASK(u->ring_prod)] = port;
+			wmb(); /* Ensure ring contents visible */
 			if (u->ring_cons == u->ring_prod++) {
 				wake_up_interruptible(&u->evtchn_wait);
 				kill_fasync(&u->evtchn_async_queue,
@@ -180,6 +181,7 @@ static ssize_t evtchn_read(struct file *
 	}
 
 	rc = -EFAULT;
+	rmb(); /* Ensure that we see the port before we copy it. */
 	if (copy_to_user(buf, &u->ring[EVTCHN_RING_MASK(c)], bytes1) ||
 	    ((bytes2 != 0) &&
 	     copy_to_user(&buf[bytes1], &u->ring[0], bytes2)))
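For readers outside the kernel tree, here is a minimal userspace sketch
of the single-producer/single-consumer ring pattern the patch fixes.
The names (ring, prod, cons, ring_put, ring_get, RING_SIZE, RING_MASK)
mirror the evtchn code but are hypothetical, and GCC's __atomic
builtins stand in for the kernel's wmb()/rmb(); this illustrates the
barrier pairing, it is not the driver's actual code.

    /* Hypothetical SPSC ring sketch; not the evtchn driver itself. */
    #include <stdint.h>

    #define RING_SIZE 256                  /* must be a power of two */
    #define RING_MASK(i) ((i) & (RING_SIZE - 1))

    static uint16_t ring[RING_SIZE];
    static unsigned int prod, cons;        /* free-running indices */

    /* Producer side: corresponds to evtchn_device_upcall(). */
    int ring_put(uint16_t port)
    {
    	unsigned int p = __atomic_load_n(&prod, __ATOMIC_RELAXED);
    	unsigned int c = __atomic_load_n(&cons, __ATOMIC_RELAXED);

    	if (p - c >= RING_SIZE)
    		return -1;             /* ring full */
    	ring[RING_MASK(p)] = port;
    	/* wmb(): publish the slot contents before the new index. */
    	__atomic_thread_fence(__ATOMIC_RELEASE);
    	__atomic_store_n(&prod, p + 1, __ATOMIC_RELAXED);
    	return 0;
    }

    /* Consumer side: corresponds to evtchn_read(). */
    int ring_get(uint16_t *port)
    {
    	unsigned int c = __atomic_load_n(&cons, __ATOMIC_RELAXED);
    	unsigned int p = __atomic_load_n(&prod, __ATOMIC_RELAXED);

    	if (c == p)
    		return -1;             /* ring empty */
    	/* rmb(): read prod before the slot it published. */
    	__atomic_thread_fence(__ATOMIC_ACQUIRE);
    	*port = ring[RING_MASK(c)];
    	__atomic_store_n(&cons, c + 1, __ATOMIC_RELAXED);
    	return 0;
    }

The pairing is what matters: the producer's release fence orders the
slot write before the index update, and the consumer's acquire fence
orders the index read before the slot read. Without both, a weakly
ordered machine such as IA64 can see the new prod index but a stale
slot, so the consumer reads a wrong port number and the corresponding
unmask never targets the right port, which is exactly the hang
described above.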