Re: [Xen-devel] [PATCH v2 11/12] xenctx: Dump registers via hvm info if available.
On 11/07/13 03:38, Jan Beulich wrote:
On 06.11.13 at 21:08, Don Slutz <dslutz@xxxxxxxxxxx> wrote:
@@ -931,7 +1174,44 @@ static void dump_ctx(int vcpu, guest_word_t mem_addr, guest_word_t stk_addr)
 #endif
     } else {
         if (!stk_addr) {
-            print_ctx(&ctx);
+#if defined(__i386__) || defined(__x86_64__)
+            if (xenctx.dominfo.hvm && ctxt_word_size == 8) {
+                if (guest_word_size == 4) {
+                    if ((((uint32_t)ctx.x64.user_regs.eip) != cpuctx.rip) ||
+                        (((uint32_t)ctx.x64.user_regs.esp) != cpuctx.rsp) ||
+                        (((uint32_t)ctx.x64.ctrlreg[3]) != cpuctx.cr3)) {
+                        fprintf(stderr, "Regs mismatch ip=%llx vs %llx sp=%llx vs %llx cr3=%llx vs %llx\n",
+                                (long long)((uint32_t)ctx.x64.user_regs.eip),
+                                (long long)cpuctx.rip,
+                                (long long)((uint32_t)ctx.x64.user_regs.esp),
+                                (long long)cpuctx.rsp,
+                                (long long)((uint32_t)ctx.x64.ctrlreg[3]),
+                                (long long)cpuctx.cr3);
+                        fprintf(stdout, "=============Regs mismatch start=============\n");
+                        print_ctx(&ctx);
+                        fprintf(stdout, "=============Regs mismatch end=============\n");
+                    }
+                } else {
+                    if ((ctx.x64.user_regs.eip != cpuctx.rip) ||
+                        (ctx.x64.user_regs.esp != cpuctx.rsp) ||
+                        (ctx.x64.ctrlreg[3] != cpuctx.cr3)) {
+                        fprintf(stderr, "Regs mismatch ip=%llx vs %llx sp=%llx vs %llx cr3=%llx vs %llx\n",
+                                (long long)ctx.x64.user_regs.eip,
+                                (long long)cpuctx.rip,
+                                (long long)ctx.x64.user_regs.esp,
+                                (long long)cpuctx.rsp,
+                                (long long)ctx.x64.ctrlreg[3],
+                                (long long)cpuctx.cr3);
+                        fprintf(stdout, "=============Regs mismatch start=============\n");
+                        print_ctx(&ctx);
+                        fprintf(stdout, "=============Regs mismatch end=============\n");
+                    }
+                }
+                print_cpuctx(&cpuctx);
+            }
+            else
+#endif
+                print_ctx(&ctx);
Apart from Andrew's comments, which I agree with - most of the
additions above clearly don't belong here: This is not a diagnostic
utility.
Fine with me, I will drop this part. I added it during the time (~2010-2011) when I
was looking at a DomU that had crashed. What I remember about this DomU was that
vCPU 1 was offline, vCPU 0 was the cause of the crash, and vCPUs 2 & 3 were in the
code to go offline. The routine xc_domain_hvm_getcontext_partial() was returning
vCPU 2's data when asked for vCPU 1 (via instance == 1). This is the key reason I
did the code in patch #12 (basically a way to call
xc_domain_hvm_getcontext_partial() and xc_vcpu_getcontext() with different args).
At the time I was not working on the full xen code, just changing xenctx to help me
out as needed. Looking at the code now, I do not understand how this could happen.
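
For reference, the cross-check the dropped hunk was doing can be sketched standalone
roughly like this (a minimal sketch against the libxc interfaces, not the actual
patch #12 code; error handling and the 32-bit-guest truncation are omitted, and the
assumption that the HVM save-record instance equals the vCPU id is exactly the thing
patch #12 lets you vary):

#include <stdio.h>
#include <xenctrl.h>
#include <xen/hvm/save.h>

/* Fetch one vCPU's state via both interfaces and report any mismatch in
 * rip/rsp/cr3, i.e. the situation described above where the HVM save
 * record for instance == 1 did not match vCPU 1's vcpu_guest_context. */
static void compare_vcpu_state(xc_interface *xch, uint32_t domid, int vcpu)
{
    struct hvm_hw_cpu hvmctx;
    vcpu_guest_context_any_t ctx;

    /* Per-vCPU HVM save record (instance assumed to equal the vCPU id). */
    if ( xc_domain_hvm_getcontext_partial(xch, domid, HVM_SAVE_CODE(CPU),
                                          vcpu, &hvmctx, sizeof(hvmctx)) < 0 )
        return;

    /* The hypervisor's vcpu_guest_context view of the same vCPU. */
    if ( xc_vcpu_getcontext(xch, domid, vcpu, &ctx) < 0 )
        return;

    if ( ctx.x64.user_regs.rip != hvmctx.rip ||
         ctx.x64.user_regs.rsp != hvmctx.rsp ||
         ctx.x64.ctrlreg[3]    != hvmctx.cr3 )
        fprintf(stderr,
                "vCPU %d mismatch: rip %llx vs %llx sp %llx vs %llx cr3 %llx vs %llx\n",
                vcpu,
                (unsigned long long)ctx.x64.user_regs.rip,
                (unsigned long long)hvmctx.rip,
                (unsigned long long)ctx.x64.user_regs.rsp,
                (unsigned long long)hvmctx.rsp,
                (unsigned long long)ctx.x64.ctrlreg[3],
                (unsigned long long)hvmctx.cr3);
}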
-Don Slutz
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel