Re: [Xen-devel] error in xen/arch/x86/mm.c:get_page during migration
>>> On 21.02.13 at 18:31, Olaf Hering <olaf@xxxxxxxxx> wrote:
> It did not happen with xl.

But with the same guest and Dom0 kernel, and the same hypervisor?

> Here is the output while doing xm migrate:
>
> (XEN) HVM2 restore: VMCE_VCPU 0
> (XEN) HVM2 restore: VMCE_VCPU 1
> (XEN) HVM2 restore: TSC_ADJUST 0
> (XEN) HVM2 restore: TSC_ADJUST 1
> (XEN) mm.c:1983:d0 Error pfn 4112c5: rd=ffff83036ffef000,
> od=0000000000000000, caf=180000000000000, taf=7400000000000001

I didn't even notice yesterday that this is apparently after the restore
has already started, which makes me curious whether the domain referenced
by rd= is the old one or the new one (answering that would require
printing the domain ID; honestly, I never understood what use printing
the domain pointer is).

I'm also confused by the domain pointer always being the same; I would
expect it to at least toggle between two values, and probably even to
differ between every instance of the guest. But you don't have a stubdom
configured for the guest either, according to the config you sent
earlier...

> (XEN) Xen call trace:
> (XEN)    [<ffff82c4c0170fb2>] get_page+0xfb/0x151
> (XEN)    [<ffff82c4c01e1d87>] get_page_from_gfn_p2m+0x17e/0x284
> (XEN)    [<ffff82c4c01098ae>] __get_paged_frame+0x5d/0x170
> (XEN)    [<ffff82c4c0109e55>] __acquire_grant_for_copy+0x494/0x6ae
> (XEN)    [<ffff82c4c010bef0>] gnttab_copy+0x53b/0x843
> (XEN)    [<ffff82c4c010e3b8>] do_grant_table_op+0x11c5/0x1b82
> (XEN)    [<ffff82c4c011502f>] do_multicall+0x227/0x444
> (XEN)    [<ffff82c4c0227f0b>] syscall_enter+0xeb/0x145

The only user of grant copies is netback, so I would suppose that the
failed transmit (in whichever direction) is simply being retried, which
prevents the error from ever becoming user-visible.

Jan
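For readers unfamiliar with the message being discussed: it comes from
get_page() in xen/arch/x86/mm.c, and the rd=/od= values are raw
struct domain pointers. A minimal sketch of the kind of change Jan
alludes to, printing the owning domain's ID alongside the pointer,
might look roughly like the following; the exact message format,
helper names, and surrounding code differ between Xen versions, so
this is an illustration rather than the actual in-tree code:

    /* Sketch only: report the domain ID (d<N>) in addition to the raw
     * struct domain pointer when get_page() fails.  Helper and field
     * names (gdprintk, page_to_mfn, page_get_owner, count_info,
     * type_info) are those commonly found in xen/arch/x86/mm.c, but
     * the real log statement varies between Xen versions. */
    gdprintk(XENLOG_INFO,
             "Error pfn %lx: rd=d%d (%p), od=%p, caf=%08lx, taf=%" PRtype_info "\n",
             page_to_mfn(page), domain->domain_id, domain,
             page_get_owner(page), page->count_info,
             page->u.inuse.type_info);

Printing the ID would make it immediately clear whether rd= refers to
the source or the destination domain of the migration, which is exactly
the question raised above.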
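To make the last point concrete: a grant copy is requested by a kernel
(here, netback in Dom0) through the GNTTABOP_copy hypercall, and a
failure inside the hypervisor, such as the get_page() error above, only
surfaces to the caller as a non-okay status that it is free to retry.
Below is a hedged sketch of such a caller, not netback's actual code,
using the public grant-table interface; copy_from_guest_page() and its
parameters are made up for illustration:

    #include <linux/errno.h>
    #include <xen/interface/xen.h>          /* DOMID_SELF */
    #include <xen/interface/grant_table.h>  /* struct gnttab_copy, GNTTABOP_copy */
    #include <asm/xen/hypercall.h>          /* HYPERVISOR_grant_table_op() */

    /* Hypothetical helper: copy 'len' bytes from a page the guest has
     * granted us (gref) into one of our own frames (local_gmfn). */
    static int copy_from_guest_page(domid_t guest, grant_ref_t gref,
                                    unsigned long local_gmfn,
                                    unsigned int len)
    {
        struct gnttab_copy op = {
            .source.u.ref  = gref,          /* guest's grant reference */
            .source.domid  = guest,
            .source.offset = 0,
            .dest.u.gmfn   = local_gmfn,    /* our own frame, addressed by gmfn */
            .dest.domid    = DOMID_SELF,
            .dest.offset   = 0,
            .len           = len,
            .flags         = GNTCOPY_source_gref,
        };

        HYPERVISOR_grant_table_op(GNTTABOP_copy, &op, 1);

        /* If the hypervisor failed the copy (e.g. get_page() on the
         * guest frame failed mid-migration), the caller only sees a
         * non-okay status and can simply requeue the work. */
        return op.status == GNTST_okay ? 0 : -EAGAIN;
    }

Because the failure is absorbed by this retry at the caller, the only
trace of the failed get_page() is the hypervisor log line quoted above.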