[Xen-devel] Re: [Crash-utility] xencrash fixes for xen-3.3.0
Hi,

It is a good question. I checked i386: __per_cpu_data_end - __per_cpu_start
is smaller than 4K, but PERCPU_SHIFT is 13 (it is common to both x86_32 and
x86_64). Oops. I will consider it more.

Thanks,
Itsuro Oda

On Tue, 07 Oct 2008 09:39:05 -0400
Dave Anderson <anderson@xxxxxxxxxx> wrote:

> Itsuro ODA wrote:
> > Hi,
> >
> > This patch is for the xen hypervisor analysis function of the
> > crash command, to apply to xen-3.3.0 (the newest version of xen).
> >
> > * PERCPU_SHIFT becomes 13 (from 12) in xen-3.3.0.
> >   This value is calculated from "__per_cpu_start" and "__per_cpu_data_end".
> > * "jiffies" does not exist in xen-3.3.0.
> >   It was used to show the uptime. I found there is no alternative
> >   (i.e. the xen hypervisor does not keep an uptime).
> >   So if "jiffies" does not exist, "--:--:--" is shown as UPTIME in
> >   the sys command.
> >   (Would it be better to eliminate the whole UPTIME line?)
> > --- example ---
> > crash> sys
> >       KERNEL: xen-syms
> >     DUMPFILE: vmcore
> >         CPUS: 4
> >      DOMAINS: 5
> >       UPTIME: --:--:--
> >      MACHINE: Intel(R) Core(TM)2 Quad CPU Q9450 @ 2.66GHz (2660 Mhz)
> >       MEMORY: 2 GB
> > ----------------
> >
> > This patch is for crash-4.0-7.2.
> >
> > Thanks,
> > Itsuro Oda
>
> The patch looks OK. But just for sanity's sake, is it guaranteed that
> the per_cpu data section will be greater than 4K on both architectures?
> Or could there be some combination of xen CONFIG options that could
> reduce the i386 per_cpu data section contents to less than 4K even though
> PERCPU_SHIFT is 13?
>
> Dave
>
> > ---
> > --- xen_hyper_defs.h.org	2008-10-06 13:45:39.000000000 +0900
> > +++ xen_hyper_defs.h	2008-10-06 13:44:44.000000000 +0900
> > @@ -134,9 +134,8 @@
> >  #endif
> >  
> >  #if defined(X86) || defined(X86_64)
> > -#define XEN_HYPER_PERCPU_SHIFT 12
> >  #define xen_hyper_per_cpu(var, cpu) \
> > -	((ulong)(var) + (((ulong)(cpu))<<XEN_HYPER_PERCPU_SHIFT))
> > +	((ulong)(var) + (((ulong)(cpu))<<xht->percpu_shift))
> >  #elif defined(IA64)
> >  #define xen_hyper_per_cpu(var, cpu) \
> >  	((xht->flags & XEN_HYPER_SMP) ? \
> > @@ -404,6 +403,7 @@
> >  	ulong *cpumask;
> >  	uint *cpu_idxs;
> >  	ulong *__per_cpu_offset;
> > +	int percpu_shift;
> >  };
> >  
> >  struct xen_hyper_dumpinfo_context {
> > --- xen_hyper.c.org	2008-10-06 13:41:14.000000000 +0900
> > +++ xen_hyper.c	2008-10-06 14:15:03.000000000 +0900
> > @@ -71,6 +71,8 @@
> >  #endif
> >  
> >  #if defined(X86) || defined(X86_64)
> > +	xht->percpu_shift = (symbol_value("__per_cpu_data_end") -
> > +		symbol_value("__per_cpu_start") > 4096) ? 13: 12;
> >  	member_offset = MEMBER_OFFSET("cpuinfo_x86", "x86_model_id");
> >  	buf = GETBUF(XEN_HYPER_SIZE(cpuinfo_x86));
> >  	if (xen_hyper_test_pcpu_id(XEN_HYPER_CRASHING_CPU())) {
> > @@ -1746,9 +1748,11 @@
> >  		tmp2 = (ulong)jiffies_64;
> >  		jiffies_64 = (ulonglong)(tmp2 - tmp1);
> >  		}
> > -	} else {
> > +	} else if (symbol_exists("jiffies")) {
> >  		get_symbol_data("jiffies", sizeof(long), &jiffies);
> >  		jiffies_64 = (ulonglong)jiffies;
> > +	} else {
> > +		jiffies_64 = 0;	/* hypervisor does not have uptime */
> >  	}
> >  
> >  	return jiffies_64;
> > --- xen_hyper_command.c.org	2008-10-07 08:05:37.000000000 +0900
> > +++ xen_hyper_command.c	2008-10-07 08:24:29.000000000 +0900
> > @@ -1022,7 +1022,8 @@
> >  		(buf1, "%d\n", XEN_HYPER_NR_DOMAINS()));
> >  	/* !!!Display a date here if it can be found. */
> >  	XEN_HYPER_PRI(fp, len, "UPTIME: ", buf1, flag,
> > -		(buf1, "%s\n", convert_time(xen_hyper_get_uptime_hyper(), buf2)));
> > +		(buf1, "%s\n", (xen_hyper_get_uptime_hyper() ?
> > +		convert_time(xen_hyper_get_uptime_hyper(), buf2) : "--:--:--")));
> >  	/* !!!Display a version here if it can be found. */
> >  	XEN_HYPER_PRI_CONST(fp, len, "MACHINE: ", flag);
> >  	if (strlen(uts->machine)) {
> > ---
>
> --
> Crash-utility mailing list
> Crash-utility@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/crash-utility

--
Itsuro ODA <oda@xxxxxxxxxxxxx>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel