
Re: [Xen-devel] Xen 4.6.1 crash with altp2m enabled by default



>>> On 02.08.16 at 13:45, <Kevin.Mayer@xxxxxxxx> wrote:
> (XEN) ----[ Xen-4.6.1  x86_64  debug=y  Not tainted ]----
> (XEN) CPU:    6
> (XEN) RIP:    e008:[<ffff82d0801fd23a>] vmx_vmenter_helper+0x27e/0x30a
> (XEN) RFLAGS: 0000000000010003   CONTEXT: hypervisor
> (XEN) rax: 000000008005003b   rbx: ffff8300e72fc000   rcx: 0000000000000000
> (XEN) rdx: 0000000000006c00   rsi: ffff830617fd7fc0   rdi: ffff8300e6fc0000
> (XEN) rbp: ffff830617fd7c40   rsp: ffff830617fd7c30   r8:  0000000000000000
> (XEN) r9:  ffff830be8dc9310   r10: 0000000000000000   r11: 00003475e9cf85d0
> (XEN) r12: 0000000000000006   r13: ffff830c14ee1000   r14: ffff8300e6fc0000
> (XEN) r15: ffff830617fd0000   cr0: 000000008005003b   cr4: 00000000000026e0
> (XEN) cr3: 00000001bd665000   cr2: 0000000004510000
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> (XEN) Xen stack trace from rsp=ffff830617fd7c30:
> (XEN)    ffff830617fd7c40 ffff8300e72fc000 ffff830617fd7ca0 ffff82d080174f91
> (XEN)    ffff830617fd7f18 ffff830be8dc9000 0000000000000286 ffff830617fd7c90
> (XEN)    0000000000000206 0000000000000246 0000000000000001 ffff830617e91250
> (XEN)    ffff8300e72fc000 ffff830be8dc9000 ffff830617fd7cc0 ffff82d080178c19
> (XEN)    0000000000bdeeae ffff8300e72fc000 ffff830617fd7cd0 ffff82d080178c3e
> (XEN)    ffff830617fd7d20 ffff82d080179740 ffff8300e6fc2000 ffff830c17e38e80
> (XEN)    ffff830617e91250 ffff820080000000 0000000000000002 ffff830617e91250
> (XEN)    ffff830617e91240 ffff830be8dc9000 ffff830617fd7d70 ffff82d080196152
> (XEN)    ffff830617fd7d50 ffff82d0801f7c6b ffff8300e6fc2000 ffff830617e91250
> (XEN)    ffff8300e6fc2000 ffff830617e91250 ffff830617e91240 ffff830be8dc9000
> (XEN)    ffff830617fd7d80 ffff82d080244a62 ffff830617fd7db0 ffff82d0801d3fe2
> (XEN)    ffff8300e6fc2000 0000000000000000 ffff830617e91f28 ffff830617e91000
> (XEN)    ffff830617fd7dd0 ffff82d080175c2c ffff8300e6fc2000 ffff8300e6fc2000
> (XEN)    ffff830617fd7e00 ffff82d080105dd4 ffff830c17e38040 0000000000000000
> (XEN)    0000000000000000 ffff830617fd0000 ffff830617fd7e30 ffff82d0801215fd
> (XEN)    ffff8300e6fc0000 ffff82d080329280 ffff82d080328f80 fffffffffffffffd
> (XEN)    ffff830617fd7e60 ffff82d08012caf8 0000000000000006 ffff830c17e3bc60
> (XEN)    0000000000000002 ffff830c17e3bbe0 ffff830617fd7e70 ffff82d08012cb3b
> (XEN)    ffff830617fd7ef0 ffff82d0801c23a8 ffff8300e72fc000 ffffffffffffffff
> (XEN)    ffff82d0801f3200 ffff830617fd7f08 ffff82d080329280 0000000000000000
> (XEN) Xen call trace:
> (XEN)    [<ffff82d0801fd23a>] vmx_vmenter_helper+0x27e/0x30a
> (XEN)    [<ffff82d080174f91>] __context_switch+0xdb/0x3b5
> (XEN)    [<ffff82d080178c19>] __sync_local_execstate+0x5e/0x7a
> (XEN)    [<ffff82d080178c3e>] sync_local_execstate+0x9/0xb
> (XEN)    [<ffff82d080179740>] map_domain_page+0xa0/0x5d4
> (XEN)    [<ffff82d080196152>] destroy_perdomain_mapping+0x8f/0x1e8
> (XEN)    [<ffff82d080244a62>] free_compat_arg_xlat+0x26/0x28
> (XEN)    [<ffff82d0801d3fe2>] hvm_vcpu_destroy+0x73/0xb0
> (XEN)    [<ffff82d080175c2c>] vcpu_destroy+0x5d/0x72
> (XEN)    [<ffff82d080105dd4>] complete_domain_destroy+0x49/0x192
> (XEN)    [<ffff82d0801215fd>] rcu_process_callbacks+0x19a/0x1fb
> (XEN)    [<ffff82d08012caf8>] __do_softirq+0x82/0x8d
> (XEN)    [<ffff82d08012cb3b>] process_pending_softirqs+0x38/0x3a
> (XEN)    [<ffff82d0801c23a8>] mwait_idle+0x10c/0x315
> (XEN)    [<ffff82d080174825>] idle_loop+0x51/0x6b

With as deep a stack as this, execution can't validly end up in
vmx_vmenter_helper: that function is called only when the stack is
almost empty. Nor would its caller be the context switch code. Hence
your problem starts quite a bit earlier - perhaps memory corruption?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
