
Re: [Xen-devel] [PATCH]: Fix deadlock in mm_pin



Keir Fraser wrote:
> On 20/11/08 10:31, "Chris Lalancette" <clalance@xxxxxxxxxx> wrote:
> 
>> it applies to the 2.6.18 tree as well; the deadlock scenario is below.
>>
>> "After running an arbitrary workload involving network traffic for some time
>> (1-2 days), a xen guest running the 2.6.9-67 x86_64 xenU kernel locks up with
>> both vcpu's spinning at 100%.
>>
>> The problem is due to a race between the scheduler and network interrupts.  
>> On
>> one vcpu, the scheduler takes the runqueue spinlock of the other vcpu to
>> schedule a process, and attempts to lock mm_unpinned_lock.  On the other 
>> vcpu,
>> another process is holding mm_unpinned_lock (because it is starting or
>> exiting), and is interrupted by a network interrupt.  The network interrupt
>> handler attempts to wake up the same process that the first vcpu is trying to
>> schedule, and will try to get the runqueue spinlock that the first vcpu is
>> already holding."
> 
> I don't believe that mm_unpinned_lock can ever be taken while a runqueue
> lock is already held in 2.6.18. If you can provide a call chain then I'll
> consider the patch -- but I think you'd still be screwed by the
> mm->page_table_lock (also acquired in mm_pin() code, also not IRQ safe, but
> less easy for you to go convert all the users of that lock).
> 
> You might have some backporting from 2.6.18 to do...

Arg.  I think I see what you mean.  In c/s 10343, mm_pin is moved from switch_mm
into activate_mm, which I *think* means that it is no longer called with the
runqueue lock held.  Indeed, the comment on that c/s says it removes a deadlock,
which may be the one the RHEL-4 kernel is running into.  OK, thanks for the
feedback, I'll look at backporting that code.
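For the archives, the interleaving in the quoted report boils down to a
lock-order inversion with an interrupt in the middle.  A minimal sketch of
the two call chains follows; apart from mm_pin(), mm_unpinned_lock, and
try_to_wake_up(), the function and field names here are paraphrased
placeholders, not lifted from the 2.6.9 or 2.6.18 sources:

/*
 * Illustrative sketch of the reported deadlock -- paraphrased call
 * chains, not actual kernel source.  Neither lock is taken with
 * interrupts disabled, which is the whole problem.
 */
static DEFINE_SPINLOCK(mm_unpinned_lock);

/* vcpu0: scheduler path -- runqueue lock first, then the pin lock. */
static void sched_path(struct rq *rq, struct mm_struct *next_mm)
{
	spin_lock(&rq->lock);		/* lock A */
	mm_pin(next_mm);		/* takes mm_unpinned_lock: lock B */
	spin_unlock(&rq->lock);
}

/* vcpu1: a starting/exiting task touches the unpinned list. */
static void unpinned_list_path(struct mm_struct *mm)
{
	spin_lock(&mm_unpinned_lock);	/* lock B */
	/*
	 * A network IRQ fires here.  Its handler wakes the very task
	 * vcpu0 is scheduling, so try_to_wake_up() spins on rq->lock
	 * (lock A).  vcpu0 is spinning on B, the IRQ on vcpu1 spins
	 * on A: an A->B vs B->(irq)->A cycle, both vcpus at 100%.
	 */
	list_del(&mm->context.unpinned);	/* field name reconstructed */
	spin_unlock(&mm_unpinned_lock);
}

Note that just making mm_unpinned_lock IRQ-safe (the original patch) only
breaks one edge of the cycle; as Keir points out, mm->page_table_lock is
taken in the same mm_pin() path and is equally not IRQ-safe, so it is the
nesting itself that has to go.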
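And the shape of the c/s 10343 change, as I read its description (a sketch
of the idea, not the literal diff -- the PG_pinned test and the exact
signatures are my reconstruction):

/*
 * Sketch of the c/s 10343 idea, not the literal changeset.
 *
 * Before: context_switch() -> switch_mm() pinned an unpinned mm, i.e.
 * mm_pin() ran with the runqueue lock held.  After: pinning moves to
 * activate_mm(), which is reached from process context (e.g. exec)
 * with no runqueue lock, so mm_unpinned_lock never nests inside
 * rq->lock.
 */
static inline void activate_mm(struct mm_struct *prev,
			       struct mm_struct *next)
{
	/* PG_pinned marks an already-pinned page-table tree. */
	if (!test_bit(PG_pinned, &virt_to_page(next->pgd)->flags))
		mm_pin(next);		/* no runqueue lock held here */
	switch_mm(prev, next, NULL);	/* switch_mm() itself no longer pins */
}

The remaining backporting work is then making sure every path that can
hand a never-activated mm to switch_mm() pins it first.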

Chris Lalancette

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

