
Re: [Xen-devel] Elaboration of "Question about sharing spinlock_t among VMs in Xen"



>>>>>> *** The question is as follows ***
>>>>>> Suppose I have two Linux VMs sharing the same spinlock_t lock (through
>>>>>> shared memory) on the same host. Suppose we have one process in
>>>>>> each VM. Each process uses the Linux function spin_lock(&lock) [1] to
>>>>>> grab & release the lock.
>>>>>> Will these two processes in the two VMs race on the shared lock?
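
For context, the setup being described is roughly the following. This is
only a sketch of the idea, assuming the page is already shared between the
guests (e.g. via grant tables) and mapped into each kernel; the names
shared_region, shr and bump_counter are made up for illustration:

  #include <linux/spinlock.h>

  /* Layout of the shared page as seen by both guests.  One side must
   * call spin_lock_init() on the lock exactly once before use. */
  struct shared_region {
          spinlock_t lock;        /* lock word lives in shared memory */
          unsigned long counter;  /* example of data protected by it  */
  };

  static struct shared_region *shr;  /* virtual address of the mapped page */

  static void bump_counter(void)
  {
          spin_lock(&shr->lock);   /* both guests spin on the same word */
          shr->counter++;
          spin_unlock(&shr->lock);
  }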
>>>>
>>>>> You can't do this: depending on which Linux version you use, you will
>>>>> find that the kernel uses ticketlocks or queued spinlocks (qspinlocks),
>>>>> which keep track of who is holding the lock (obviously this information
>>>>> is internal to the VM). On top of this, on Xen we use pvlocks, which
>>>>> add another (internal) control layer.
>>>>
>>>> I wanted to see if this can be done with the correct combination of
>>>> versions and parameters. We are using 4.1.0 for all domains, which
>>>> still has the CONFIG_PARAVIRT_SPINLOCKS option. I've recompiled the
>>>> guests with this option set to n, and have also added the boot
>
> Just a paranoid question: what exactly does the .config line look like?
> It should _not_ be
>
> CONFIG_PARAVIRT_SPINLOCKS=n
>
> but rather:
>
> # CONFIG_PARAVIRT_SPINLOCKS is not set

Yes, it is not set. Good to cover all bases. Below is the config
grepped for "SPIN":

CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_RWSEM_SPIN_ON_OWNER=y
CONFIG_LOCK_SPIN_ON_OWNER=y
# CONFIG_PARAVIRT_SPINLOCKS is not set
# CONFIG_SPINLOCK_DEV is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_SPINLOCK_DEVICE is not set

>>>> parameter xen_nopvspin to both domains and dom0 for good measure. A
>>>> basic ticketlock holds all the information needed to order requests
>>>> inside the struct itself, and I believe this is the version I'm using.
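
For reference, my mental model of a bare ticket lock is roughly the
following (an illustrative userspace sketch using gcc atomic builtins,
not the kernel's actual code):

  /* Each locker atomically takes a ticket from 'tail' and then waits
   * until 'head' reaches its ticket number; unlock advances 'head'. */
  typedef struct {
          unsigned short head;   /* ticket currently being served */
          unsigned short tail;   /* next ticket to hand out       */
  } ticketlock_t;

  static void ticket_lock(ticketlock_t *l)
  {
          unsigned short me = __atomic_fetch_add(&l->tail, 1, __ATOMIC_ACQUIRE);
          while (__atomic_load_n(&l->head, __ATOMIC_ACQUIRE) != me)
                  ;  /* spin until it is our turn */
  }

  static void ticket_unlock(ticketlock_t *l)
  {
          __atomic_fetch_add(&l->head, 1, __ATOMIC_RELEASE);
  }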
>>>
>>> Hm, weird, because from arch/x86/include/asm/spinlock_types.h:
>>>
>>> #ifdef CONFIG_PARAVIRT_SPINLOCKS
>>> #define __TICKET_LOCK_INC       2
>>> #define TICKET_SLOWPATH_FLAG    ((__ticket_t)1)
>>> #else
>>> #define __TICKET_LOCK_INC       1
>>> #define TICKET_SLOWPATH_FLAG    ((__ticket_t)0)
>>> #endif
>>>
>>>
>>> Which means that one of your guests is adding '2' while the other is
>>> adding '1'. Or one of them is setting the 'slowpath' flag,
>>> which means that paravirt spinlocks are enabled.
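
To make that concrete (this is my reading of the 4.1 ticketlock code, so
worth double-checking): with __TICKET_LOCK_INC == 2 the tickets advance
in steps of two and bit 0 is reserved for TICKET_SLOWPATH_FLAG, whereas
with __TICKET_LOCK_INC == 1 the tickets advance in steps of one and bit 0
is part of the ticket value. If two guests built differently share the
same lock word, one of them will produce odd head/tail values that the
other can misread as the slowpath flag, and the two will not agree on
whose turn it is even though neither is doing anything wrong locally.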
>>
>> Interesting. I went back to check on one of my guests, and both the
>> .config from the source tree I used and the one in /boot/ for the
>> current build have it "not set", which shows as unchecked in make
>> menuconfig, where I disabled the option. So this domain appears to
>> be correctly configured. The thing is, the other domain is literally a
>> copy of this one, so either both are wrong or neither is.
>
> One other thing you should be aware of: as soon as one of your guests
> has only one vcpu, it will drop the "lock" prefixes on updates of the
> lock word. So there is a chance of races simply because one or both
> guests think no other cpu can access the lock word concurrently.

Now that is an interesting point! I am indeed using 1 vcpu for each
domain right now. Does the kernel automatically drop the lock prefixes
if it detects one vcpu at boot, or is this decided at compile time?
Shouldn't setting SMP to y, regardless of core/vcpu count, keep the SMP
spinlock implementation? I definitely did not think about this -- it was
compiled with one vcpu, so if this is done at compile time it could be
the issue. I doubt it's done at boot, but if so I would presume there is
a way to disable this?
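
To be clear about what I understand "dropping the lock prefixes" to mean,
here is a rough illustration (hand-written inline asm, not kernel code;
fetch_add_locked and fetch_add_up are made-up names): without the prefix,
the read-modify-write on the lock word is no longer atomic with respect
to another cpu (here, the other guest) touching the same memory.

  /* What an SMP kernel executes for a ticket fetch-and-add ... */
  static inline unsigned short fetch_add_locked(unsigned short *p, unsigned short v)
  {
          asm volatile("lock; xaddw %0, %1"
                       : "+r" (v), "+m" (*p) : : "memory", "cc");
          return v;
  }

  /* ... versus what effectively runs once the lock prefix is dropped. */
  static inline unsigned short fetch_add_up(unsigned short *p, unsigned short v)
  {
          asm volatile("xaddw %0, %1"
                       : "+r" (v), "+m" (*p) : : "memory", "cc");
          return v;
  }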

Below is the config file grepped for "SMP".
CONFIG_X86_64_SMP=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_SMP=y
# CONFIG_X86_VSMP is not set
# CONFIG_MAXSMP is not set
CONFIG_PM_SLEEP_SMP=y

See anything problematic? It seems PV spinlocks are not set and SMP is
enabled... or is something else required to keep the spinlocks from
being stripped down? I'm also not sure whether any of the SPIN config
items that are set could interfere with this. If this is done at boot,
a pointer toward preventing it would be appreciated!

Regards,
Dagaen Golomb
Ph.D Student, University of Pennsylvania

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

