Re: [Xen-devel] Question about sharing spinlock_t among VMs in Xen
On Mon, Jun 13, 2016 at 5:17 PM, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 06/13/2016 04:46 PM, Meng Xu wrote:
>> On Mon, Jun 13, 2016 at 2:28 PM, Boris Ostrovsky
>> <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> On 06/13/2016 01:43 PM, Meng Xu wrote:
>>>> Hi,
>>>>
>>>> I have a quick question about using the Linux spin_lock() in a Xen
>>>> environment to protect a host-wide shared (memory) resource among
>>>> VMs.
>>>>
>>>> *** The question is as follows ***
>>>> Suppose I have two Linux VMs sharing the same spinlock_t lock
>>>> (through shared memory) on the same host, with one process in
>>>> each VM. Each process uses the Linux function spin_lock(&lock) [1]
>>>> to grab and release the lock.
>>>> Will these two processes in the two VMs race on the shared lock?
>>> You can't do this: depending on which Linux version you use, you will
>>> find that the kernel uses ticketlocks or qspinlocks, which keep track
>>> of who is holding the lock (and that information is internal to the VM).
>> Yes, we are using ticketlocks, and we expose this information to the
>> other VM as well. That's why I'm guessing some data can be corrupted
>> by races among VMs that share the same ticketlock state.
>
> I don't think a race is the problem. It's that when the holder
> releases a lock it wants to kick the waiter. It can't kick a CPU in
> another guest, and it most likely kicks a CPU from its own guest
> (because that's where it assumes the waiter is).

I see. Thank you very much, Boris, for the explanation! I really
appreciate it. :-)

Best Regards,

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/
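
[Editor's note: to make the point above concrete, here is a minimal ticket-lock
sketch in C. It is illustrative only and is not the Linux spinlock_t
implementation; the struct layout and function names are invented. The ticket
counters themselves are just shared memory and would stay consistent across the
two VMs; what breaks cross-VM use is the per-guest "kick" on the
paravirtualized slow path, noted in the comments.]

/* Minimal ticket-lock sketch -- illustrative only, not Linux's spinlock_t. */
#include <stdatomic.h>

struct ticket_lock {
    atomic_uint next;   /* next ticket number to hand out */
    atomic_uint owner;  /* ticket number currently being served */
};

static void ticket_lock(struct ticket_lock *lk)
{
    unsigned int ticket = atomic_fetch_add(&lk->next, 1);

    /* Spin until our ticket comes up.  A paravirtualized kernel would
     * instead block the vCPU here and wait to be "kicked" on unlock. */
    while (atomic_load(&lk->owner) != ticket)
        ;
}

static void ticket_unlock(struct ticket_lock *lk)
{
    atomic_fetch_add(&lk->owner, 1);
    /* A paravirtualized Linux kernel would now kick the vCPU it believes
     * holds the next ticket.  It can only look up and kick vCPUs of its
     * own guest, so a waiter in the other VM is never woken. */
}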