
[Xen-devel] Re: [patch 13/26] Xen-paravirt_ops: Consistently wrap paravirt ops callsites to make them patchable



On Tue, Mar 20, 2007 at 10:25:20AM -0600, Eric W. Biederman wrote:
> What I recall observing is call traces that made no sense.  Not just
> extra noise in the stack trace but things like seeing a function that
> has exactly one path to it, and not seeing all of the functions on
> that path in the call trace.

That's tail call/sibling call optimization. No unwinder can untangle that
because the return address is lost.  But it's also quite an important
optimization.
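
To make that concrete, here is a minimal user-space illustration (mine, not
from this thread). Compile with gcc -O2, break in leaf() and run "bt" in gdb:
middle() is missing from the backtrace even though it is the only path to
leaf(), because gcc compiled the tail call as a plain jmp that reuses
middle()'s frame.

    #include <stdio.h>

    /* noinline keeps gcc from inlining these tiny functions entirely,
     * so the sibling-call optimization is what you actually observe. */
    __attribute__((noinline)) static int leaf(int x)
    {
            return x * 2;           /* break here; middle() won't appear */
    }

    __attribute__((noinline)) static int middle(int x)
    {
            return leaf(x + 1);     /* tail position: becomes "jmp leaf" */
    }

    int main(void)
    {
            printf("%d\n", middle(20));
            return 0;
    }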

> >
> > In 2.4 it was often very reasonable to just sort out the false positives,
> > but with sometimes 20-30+ level deep call chains in 2.6 with many
> > callbacks that just gets far too tenuous.
> 
> Hmm.  I haven't seen those traces, but I wonder if the size of those
> stack traces indicates potential stack overflow problems.

Most functions have quite small frames, so a 20-30 level deep chain is still
not a problem.
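
(Back-of-the-envelope, with assumed numbers rather than measured ones: even
at an average of 150 bytes per frame, a 30-deep chain uses 30 * 150 = 4500
bytes, comfortably inside the 8KB kernel stacks common on x86-64 at the
time. Overflows usually come from a single function with a large on-stack
buffer, not from call depth alone.)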

> Do you also validate the unwind data?

There are many sanity checks in the unwind code and it will fall back
to the old unwinder when it gets stuck.
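
The pattern is roughly the following (a self-contained toy of mine, with a
fake stack and hypothetical unwind_step()/stack_scan() helpers standing in
for the real DWARF unwinder and the old conservative scanner):

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct frame { uintptr_t ip, sp; };

    /* fake "stack" for the demo; the last entry is deliberately
     * corrupt (sp moves downwards) to trigger the fallback */
    static const struct frame demo_stack[] = {
            { 0xc0101000, 0x1000 },
            { 0xc0102000, 0x1040 },
            { 0xc0103000, 0x1080 },
            { 0xc0104000, 0x0010 },   /* bogus frame */
    };

    /* hypothetical: one precise (DWARF-driven) unwind step */
    static bool unwind_step(size_t *i, struct frame *f)
    {
            if (*i >= sizeof(demo_stack) / sizeof(demo_stack[0]))
                    return false;
            *f = demo_stack[(*i)++];
            return true;
    }

    /* hypothetical: the old unwinder -- conservative stack scan */
    static void stack_scan(void)
    {
            puts("  ...unwind data inconsistent, falling back to old unwinder");
    }

    int main(void)
    {
            size_t i = 0;
            struct frame f, prev = { 0, 0 };

            while (unwind_step(&i, &f)) {
                    /* sanity check in the spirit described above:
                     * the stack pointer must move strictly upward */
                    if (f.sp <= prev.sp) {
                            stack_scan();
                            return 0;
                    }
                    printf("  [<%#lx>]\n", (unsigned long)f.ip);
                    prev = f;
            }
            return 0;
    }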

> 
> > Although in future it would be good if people did some more analysis of
> > the root causes of failures before letting the paranoia take over and
> > reverting patches.
> >
> > We see a good example here of what I call the JFS/ACPI effect: code gets
> > merged too early with some visible problems. It gets a bad name and
> > afterwards people never look objectively at it again and just trust
> > their prejudices.
> 
> I don't know.  The impression I got was that the root cause analysis
> stopped when it was observed that the code was unsuitable for solving
> the problem.

No, Jan and I fixed all reported bugs as far as I know.

-Andi


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

