
Re: [Xen-devel] Race condition with scheduler runqueues



On 26/02/13 12:08, George Dunlap wrote:
> On 02/19/2013 11:47 AM, Andrew Cooper wrote:
>> On 19/02/13 09:28, Jan Beulich wrote:
>>>>>> On 18.02.13 at 19:11, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> Hello,
>>>>
>>>> Our testing has discovered a crash (pagefault at 0x0000000000000008)
>>>> which I have tracked down to bad __runq_remove() in csched_vcpu_sleep()
>>>> in sched_credit.c (because a static function of the same name also
>>>> exists in sched_credit2.c, which confused matters to start with)
>>>>
>>>> The test case was a loop of localhost migrate of a 1vcpu HVM win8
>>>> domain.  The test case itself has passed many times in the past on the
>>>> same Xen codebase (Xen-4.1.3), indicating that it is very rare.  There
>>>> does not appear to be any relevant changes between the version of Xen in
>>>> the test and xen-4.1-testing.
>>>>
>>>> The failure itself is because of a XEN_DOMCTL_scheduler_op (trace below)
>>>> from dom0, targeting the VCPU of the migrating domain.
>>>>
>>>> (XEN) Xen call trace:
>>>> (XEN)       [<ffff82c480116a14>] csched_vcpu_sleep+0x44/0x70
>>>> (XEN)      0[<ffff82c480120117>] vcpu_sleep_nosync+0xe7/0x3b0
>>>> (XEN)     12[<ffff82c4801203e9>] vcpu_sleep_sync+0x9/0x50
>>>> (XEN)     14[<ffff82c48011fd4c>] sched_adjust+0xac/0x230
>>>> (XEN)     24[<ffff82c480102bc1>] do_domctl+0x731/0x1130
>>>> (XEN)     64[<ffff82c4802013c4>] compat_hypercall+0x74/0x80
>>>>
>>>> The relevant part of csched_vcpu_sleep() is
>>>>
>>>>      else if ( __vcpu_on_runq(svc) )
>>>>          __runq_remove(svc);
>>>>
>>>> which disassembles to
>>>>
>>>> ffff82c480116a01:       49 8b 10                mov    (%r8),%rdx
>>>> ffff82c480116a04:       4c 39 c2                cmp    %r8,%rdx
>>>> ffff82c480116a07:       75 07                   jne    ffff82c480116a10 <csched_vcpu_sleep+0x40>
>>>> ffff82c480116a09:       f3 c3                   repz retq
>>>> ffff82c480116a0b:       0f 1f 44 00 00          nopl   0x0(%rax,%rax,1)
>>>> ffff82c480116a10:       49 8b 40 08             mov    0x8(%r8),%rax
>>>> ffff82c480116a14:       48 89 42 08             mov    %rax,0x8(%rdx) # <- Pagefault here
>>>> ffff82c480116a18:       48 89 10                mov    %rdx,(%rax)
>>>> ffff82c480116a1b:       4d 89 40 08             mov    %r8,0x8(%r8)
>>>> ffff82c480116a1f:       4d 89 00                mov    %r8,(%r8)
>>>>
>>>> The relevant crash registers from the pagefault are:
>>>> rax: 0000000000000000
>>>> rdx: 0000000000000000
>>>>   r8: ffff83080c89ed90
>>>>
>>>> If I am reading the code correctly, this means that runq->next is NULL,
>>>> so we fail list_empty() and erroneously pass __vcpu_on_runq().  We then
>>>> fail with a fault when trying to update runq->prev, which is also NULL.
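[For reference, the list helpers involved reduce to roughly the following -
a simplified sketch in the style of Xen's Linux-derived list.h, not verbatim
source - which shows why a zeroed runq_elem both passes the on-runq check
and then faults at offset 0x8:]

    struct list_head {
        struct list_head *next, *prev;
    };

    /* Only ->next is examined, so a zeroed element (next == NULL) looks
     * "non-empty" and __vcpu_on_runq() returns true. */
    static inline int list_empty(const struct list_head *head)
    {
        return head->next == head;
    }

    /* With next == prev == NULL, the first store writes to NULL->prev,
     * i.e. address 0x8 - matching the mov %rax,0x8(%rdx) fault above. */
    static inline void list_del(struct list_head *entry)
    {
        entry->next->prev = entry->prev;
        entry->prev->next = entry->next;
    }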
>>>>
>>>> The only place I can spot in the code where the runq->{next,prev} could
>>>> conceivably be NULL is in csched_alloc_vdata() between the memset() and
>>>> INIT_LIST_HEAD().  This is logically sensible in combination with the
>>>> localhost migrate loop, and I can't immediately see anything to prevent
>>>> this race happening.
>>> But that doesn't make sense: csched_alloc_vdata() doesn't store
>>> svc into vc->sched_priv; that's being done by the generic
>>> scheduler code once the actor returns.
>>
>> D'oh yes - I overlooked that.
>>
>>>
>>> So I'd rather suspect a stale pointer being used, which is easily
>>> possible when racing with sched_move_domain() (as opposed to
>>> schedule_cpu_switch(), where the new pointer gets stored
>>> _before_ de-allocating the old one).
>>
>>>
>>> However, sched_move_domain() (as well as schedule_cpu_switch())
>>> get called only from CPU pools code, and I would guess CPU pools
>>> aren't involved here, and you don't in parallel soft offline/online
>>> pCPU-s (as I'm sure you otherwise would have mentioned it).
>>>
>>> But wait - libxl__domain_make() _unconditionally_ calls
>>> xc_cpupool_movedomain(), as does XendDomainInfo's
>>> _constructDomain(). The reason for this escapes me - Jürgen? Yet
>>> I'd expect the pool ID matching check to short cut the resulting
>>> sysctl, i.e. sched_move_domain() ought to not be reached anyway
>>> (worth verifying of course).
>>>
>>> The race there nevertheless ought to be fixed.
>>>
>>> Jan
>>
>> Our toolstack hooks directly into libxc and is completely ignorant of
>> cpupools.  Looking at the crash more closely, it might be a race elsewhere.
>>
>> Another dom0 vcpu is in an HVMOP_track_dirty_vram hypercall, and the
>> associated Xen stack trace is
>>
>> [ffff82c4801777b2] time_calibration_std_rendezvous+0xb2/0xc0
>>   ffff82c480172d12  __smp_call_function_interrupt+0x62/0xb0
>>   ffff82c48017339e  smp_call_function_interrupt+0x4e/0x90
>>   ffff82c48014a65a  call_function_interrupt+0x2a/0x30
>>   ffff82c4801223b2  _spin_lock+0x12/0x20
>>   ffff82c4801734ab  flush_area_mask+0xcb/0x1c0
>>   ffff82c4801c862a  paging_log_dirty_range+0x3a/0x780
>>   ffff82c480121ea8  cpumask_raise_softirq+0x78/0x80
>>   ffff82c480117ed3  csched_vcpu_wake+0x193/0x420
>>   ffff82c48014a5fa  event_check_interrupt+0x2a/0x30
>>   ffff82c4801f21c7  hap_track_dirty_vram+0x137/0x1c0
>>   ffff82c4801ad3fd  do_hvm_op+0x16dd/0x1f50
>>   ffff82c480106251  evtchn_send+0xa1/0x160
>>   ffff82c480106d36  do_event_channel_op+0x876/0xfd0
>>   ffff82c4801f9027  compat_update_descriptor+0x27/0x30
>>   ffff82c4801354f8  compat_multicall+0x198/0x380
>>   ffff82c48014a5fa  event_check_interrupt+0x2a/0x30
>>   ffff82c4802013c4  compat_hypercall+0x74/0x80
>>
>> The hap_track_dirty_vram() and paging_log_dirty_range() frames are part of
>> the same logical call trace, but it appears that we took an
>> event_check_interrupt() in the middle and called schedule() off the bottom
>> of it, which in turn called csched_vcpu_wake().
>>
>> I am currently trying to work out whether a race between
>> csched_vcpu_{sleep,wake}() could result in the observed crash, but it
>> certainly looks like a smoking gun.
>
> Any more progress on this one?
>
> In theory all of those should be made mutually exclusive by holding the 
> lock of the runqueue on which the VCPU is running.
>
> Any chance there's a race with the assignment of the vcpu -- that is, a 
> race in vcpu_schedule_lock() such that someone ends up grabbing the 
> wrong lock?
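[For reference, the lock/re-check pattern in vcpu_schedule_lock() is roughly
the following (paraphrased; in the real 4.1 source the lock is reached via a
pointer in schedule_data, but the idea is the same).  The "wrong lock" case
would be v->processor changing at just the wrong moment, so a caller ends up
serialising on a runqueue lock that no longer covers the vcpu:]

    static inline void vcpu_schedule_lock(struct vcpu *v)
    {
        unsigned int cpu;

        for ( ; ; )
        {
            cpu = v->processor;
            spin_lock(&per_cpu(schedule_data, cpu).schedule_lock);
            /* The vcpu may have been reassigned while we were waiting
             * for the lock; if so, drop it and retry. */
            if ( likely(v->processor == cpu) )
                break;
            spin_unlock(&per_cpu(schedule_data, cpu).schedule_lock);
        }
    }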
>
> I think that in theory once you call INIT_LIST_HEAD, none of those 
> pointers should ever be set to zero; if one ever were it might get 
> passed around a bit before actually being followed.  Any chance there's 
> something uninitialized somewhere?
>
> And of course, if all else fails, there's good old-fashioned memory 
> clobbering as a possibility...
>
>   -George

No more progress I am afraid - other, more easily reproducible bugs came up.

I did identify another ticket in our system which had been misclassified, and
which appears to be the same or a similar race condition:

(XEN) Xen BUG at sched_credit.c:204
(XEN) ----[ Xen-4.1.3  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    11
(XEN) RIP:    e008:[<ffff82c480117bbb>] csched_vcpu_wake+0x3cb/0x3f0
(XEN) RFLAGS: 0000000000010003   CONTEXT: hypervisor
(XEN) rax: ffff830994b37438   rbx: ffff830994b36000   rcx: ffff830836ad1d60
(XEN) rdx: ffff830994b36000   rsi: 0000000000000000   rdi: ffff82c4802cb7c0
(XEN) rbp: ffff830a93343f20   rsp: ffff830836a97d80   r8:  ffff830a93343f20
(XEN) r9:  000000000000000b   r10: 000000000000000b   r11: ffff82c4802ba0e0
(XEN) r12: ffff830994b37500   r13: 000001b166257eba   r14: ffff82c4802cb7c0
(XEN) r15: ffff82c4802ba0e0   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 0000000a92721000   cr2: 0000000008ee7dc8
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff830836a97d80:
(XEN)    ffff8300be6bf7d0 00000000fed000f0 ffff82c48014a130 0000000000000000
(XEN)    ffff830836a97f18 ffff830836a98010 ffff82c480149dda ffff830994b36000
(XEN)    ffff8300be6aa000 ffff830994b37500 000001b166257eba ffff82c4802cb7c0
(XEN)    ffff82c4802ba0e0 ffff82c4801206e0 0000000000000000 0000000000000286
(XEN)    000001b1320009f1 0000000000000003 0000000000000001 0000000000000000
(XEN)    ffff8300be6aa000 ffff8300be6ab790 ffff82c4801b7620 ffff8304501a7010
(XEN)    ffff8304501a7010 ffff82c48014f13d ffff8300be6ab790 ffff8300be6ab790
(XEN)    000001b166257de4 ffff82c4801b7642 ffff830836a9e100 ffff82c48012388c
(XEN)    ffff8300be6ab7d0 ffff830836a9e100 ffff8304501a7018 ffff82c480123d35
(XEN)    ffff82c4801af486 000000000000000b ffffffffffffffff ffff830836a97f18
(XEN)    ffff82c4802a8500 ffff82c4802ac500 000001b16625431d ffff82c4801216b5
(XEN)    ffff830836a97f18 ffff8300bf2f0000 ffff8300be74c000 ffff82c4802ba0e0
(XEN)    ffff830836a9e060 ffff82c48014fdc5 000000000000000b 000000000000008d
(XEN)    fffffffffffffffe ffffff007b411228 ffffff007b411200 fffffffeba4c1b90
(XEN)    0000000000000000 ffffffff8044b8c0 ffffff007b583e10 ffffff007b583e88
(XEN)    fffffffeba4c1d10 00000000000003e8 00000000000f4240 0000000000000000
(XEN)    0000000000000001 0000000000000000 0000006d00000000 ffffffff807371c6
(XEN)    0000000000000000 0000000000000246 fffffffeba4c1b90 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    000000000000000b ffff8300bf2f0000 00000043b67d2880 0000000000000000
(XEN) Xen call trace:
(XEN)       [<ffff82c480117bbb>] csched_vcpu_wake+0x3cb/0x3f0
(XEN)      2[<ffff82c48014a130>] smp_apic_timer_interrupt+0x50/0x90
(XEN)      6[<ffff82c480149dda>] apic_timer_interrupt+0x2a/0x30
(XEN)     13[<ffff82c4801206e0>] vcpu_wake+0x180/0x600
(XEN)     22[<ffff82c4801b7620>] pt_timer_fn+0x0/0x30
(XEN)     25[<ffff82c48014f13d>] vcpu_kick+0x1d/0x80
(XEN)     29[<ffff82c4801b7642>] pt_timer_fn+0x22/0x30
(XEN)     31[<ffff82c48012388c>] execute_timer+0x4c/0x70
(XEN)     35[<ffff82c480123d35>] timer_softirq_action+0x85/0x220
(XEN)     36[<ffff82c4801af486>] hvm_vcpu_has_pending_irq+0x76/0xd0
(XEN)     43[<ffff82c4801216b5>] __do_softirq+0x65/0x90
(XEN)     49[<ffff82c48014fdc5>] idle_loop+0x25/0x50
(XEN)   
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 11:
(XEN) Xen BUG at sched_credit.c:204
(XEN) ****************************************

The bug in question is BUG_ON( cpu != svc->vcpu->processor );
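From memory, that check sits in __runq_insert(), which csched_vcpu_wake()
calls with a cpu value sampled from vc->processor on entry - roughly
(paraphrased, not verbatim sched_credit.c):

    static inline void
    __runq_insert(unsigned int cpu, struct csched_vcpu *svc)
    {
        BUG_ON( __vcpu_on_runq(svc) );
        BUG_ON( cpu != svc->vcpu->processor );   /* sched_credit.c:204 */
        /* ... insert svc into RUNQ(cpu) in priority order ... */
    }

If that is right, then for it to fire vc->processor must have changed between
the sample at the top of csched_vcpu_wake() and the insert, i.e. the vcpu
moved pCPU while the wake path believed it held the correct runqueue lock.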

This is suspicious as, like before, there is an LAPIC-triggered interrupt
in the middle of a scheduling codepath.  Again, it has been seen only
once through multiple cycles of testing.

If you have any ideas then fantastic, but I have not forgotten about it
- just had other stuff come up.  I will get back to it at some point!

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
