[PATCH for-4.18 1/5] xenalyze: Only accumulate data from one vmexit without a handler
If a vmentry/exit arc unexpectedly doesn't have a handler, we throw an
error and then log the information under HVM event 0; thus those
particular traces within the vmexit reason will have stats gathered,
and will show up with "(no handler)".  This is useful in the event
that there are unusual paths through the hypervisor which don't have
trace points.

However, if more than one exit reason lacks a handler then, although
we detect and warn that this is happening, we subsequently continue to
stuff data about all such exits into that one entry, even though we
only show it in one place.

Instead, refactor things to only allow a single exit reason to be
accumulated into any given event.  Also add a comment explaining
what's going on, and how to fix it.

Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxx>
---
Release justification: This is a bug fix; a minor one, but also in a
non-critical part of the code.

CC: Anthony Perard <anthony.perard@xxxxxxxxx>
CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CC: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>
---
 tools/xentrace/xenalyze.c | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index 4cb67062c9..1359e096f9 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -4652,6 +4652,19 @@ void hvm_generic_postprocess(struct hvm_data *h)
                     ? "[clipped]"
                     : h->exit_reason_name[h->exit_reason]);
             warned[h->exit_reason]=1;
+
+            /*
+             * NB that we don't return here; the result will be that `evt`
+             * will be "0", and there will be a "(no handler)" entry for the
+             * given VMEXIT.
+             *
+             * This does mean that if two different exits have no HVM
+             * handlers, that only the first one will accumulate data;
+             * if accumulating a separate "(no handler)" data for more
+             * than one exit reason is needed, we'll have to make Yet
+             * Another Array.  But for now, since we try to avoid
+             * it happening, just tolerate it.
+             */
         }
     }
 
@@ -4664,18 +4677,19 @@ void hvm_generic_postprocess(struct hvm_data *h)
     }
 
     if(opt.summary_info) {
-        update_cycles(&h->summary.generic[evt],
-                      h->arc_cycles);
-
         /* NB that h->exit_reason may be 0, so we offset by 1 */
         if ( registered[evt] )
         {
             static unsigned warned[HVM_EXIT_REASON_MAX] = { 0 };
-            if ( registered[evt] != h->exit_reason+1 && !warned[h->exit_reason])
+            if ( registered[evt] != h->exit_reason+1 )
             {
-                fprintf(warn, "%s: HVM evt %lx in %x and %x!\n",
-                        __func__, evt, registered[evt]-1, h->exit_reason);
-                warned[h->exit_reason]=1;
+                if ( !warned[h->exit_reason] )
+                {
+                    fprintf(warn, "%s: HVM evt %lx in %x and %x!\n",
+                            __func__, evt, registered[evt]-1, h->exit_reason);
+                    warned[h->exit_reason]=1;
+                }
+                return;
             }
         }
         else
@@ -4686,7 +4700,11 @@ void hvm_generic_postprocess(struct hvm_data *h)
                     __func__, ret);
             registered[evt]=h->exit_reason+1;
         }
-        /* HLT checked at hvm_vmexit_close() */
+
+        update_cycles(&h->summary.generic[evt],
+                      h->arc_cycles);
+
+        /* HLT checked at hvm_vmexit_close() */
     }
 }
-- 
2.25.1
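
For readers unfamiliar with xenalyze's summary machinery, here is a
minimal, standalone sketch of the accumulation guard this patch
introduces.  The array names (registered[], warned[]) and the bounds
macros mirror xenalyze.c, but the accumulate() driver, the simplified
parameter types, and the cycles_total[] array are hypothetical
stand-ins for illustration, not the real xenalyze data structures.

#include <stdio.h>

#define HVM_EVENT_HANDLER_MAX  64
#define HVM_EXIT_REASON_MAX    128

/* Cycles accumulated per HVM event; stands in for h->summary.generic[]. */
static unsigned long long cycles_total[HVM_EVENT_HANDLER_MAX];

/*
 * registered[evt] records which exit reason first claimed event `evt`.
 * Exit reason 0 is valid, so the stored value is offset by 1; 0 means
 * "not yet claimed".
 */
static unsigned registered[HVM_EVENT_HANDLER_MAX];

static void accumulate(unsigned long evt, unsigned exit_reason,
                       unsigned long long arc_cycles)
{
    static unsigned warned[HVM_EXIT_REASON_MAX];

    if ( registered[evt] )
    {
        if ( registered[evt] != exit_reason + 1 )
        {
            /*
             * A second exit reason mapped onto the same event: warn
             * once per exit reason, and crucially do NOT accumulate
             * its cycles (this is the bug fix).
             */
            if ( !warned[exit_reason] )
            {
                fprintf(stderr, "HVM evt %lx in %x and %x!\n",
                        evt, registered[evt] - 1, exit_reason);
                warned[exit_reason] = 1;
            }
            return;
        }
    }
    else
        registered[evt] = exit_reason + 1;   /* First claimant wins */

    cycles_total[evt] += arc_cycles;
}

int main(void)
{
    accumulate(0, 30, 100);   /* exit 30 claims event 0; counted    */
    accumulate(0, 30, 50);    /* same exit again; counted           */
    accumulate(0, 44, 75);    /* different exit: warned and dropped */

    printf("event 0 total: %llu cycles\n", cycles_total[0]);  /* 150 */
    return 0;
}

Note the +1 offset when recording the claimant: exit reason 0 is a
valid VMEXIT code, so registered[evt] == 0 is reserved to mean "no
exit reason has claimed this event yet", the same convention the real
code documents with "NB that h->exit_reason may be 0, so we offset
by 1".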