Re: [Xen-devel] Re: [PATCH 2 of 4] xentrace: replace ASSERT with printk in __trace_var
It's not like we have dodgy third-party drivers to worry about. Just fix
the code -- *which is in the tree!* -- that is making the bad calls! It was
correct to make this an assert/bug_on in the first place.

 -- Keir

On 15/07/2011 17:31, "George Dunlap" <george.dunlap@xxxxxxxxxx> wrote:

> This seems likely to spam the console if there is a trace which violates
> this; and this may happen in production environments if the loglevel is
> increased. I think putting in something to warn just once would be a
> better idea.
>
>  -George
>
> On Fri, 2011-07-15 at 16:38 +0100, Olaf Hering wrote:
>> # HG changeset patch
>> # User Olaf Hering <olaf@xxxxxxxxx>
>> # Date 1310741871 -7200
>> # Node ID e0ff4eea0432e0af3210e090a47414a0126e9904
>> # Parent  d0dcdddf5285eba0605a95dfda79b794803fa733
>> xentrace: replace ASSERT with printk in __trace_var
>>
>> If trace_var gets called with large extra_data, do not crash the hypervisor.
>> Instead print a warning and truncate the buffer.
>>
>> Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>
>>
>> diff -r d0dcdddf5285 -r e0ff4eea0432 xen/common/trace.c
>> --- a/xen/common/trace.c
>> +++ b/xen/common/trace.c
>> @@ -683,7 +683,10 @@ void __trace_var(u32 event, bool_t cycle
>>      if ( (extra % sizeof(u32)) != 0 )
>>          extra_word++;
>>
>> -    ASSERT(extra_word <= TRACE_EXTRA_MAX);
>> +    if ( unlikely(extra_word > TRACE_EXTRA_MAX) )
>> +        printk(XENLOG_WARNING "xentrace: event %x extra_data %u too large.\n",
>> +               event, extra);
>> +
>>      extra_word = min_t(int, extra_word, TRACE_EXTRA_MAX);
>>
>>      /* Round size up to nearest word */

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
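For reference, George's "warn just once" suggestion could look something like
the sketch below. This is only an illustration against the same hunk, not part
of the posted patch; the static flag and the exact message wording are
assumptions, not code from the thread.

    if ( unlikely(extra_word > TRACE_EXTRA_MAX) )
    {
        /* Sketch of a warn-once variant: report the first oversized
         * record only, so an increased loglevel cannot flood the console.
         * The unsynchronised flag may, at worst, let a second CPU print
         * one extra line during the race window. */
        static bool_t warned;

        if ( !warned )
        {
            warned = 1;
            printk(XENLOG_WARNING
                   "xentrace: event %x extra_data %u too large, truncating\n",
                   event, extra);
        }
    }

    extra_word = min_t(int, extra_word, TRACE_EXTRA_MAX);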