Re: [Xen-devel] [PATCH v3 2/2] x86/Intel: virtualize support for cpuid faulting
On 10/24/2016 12:18 AM, Kyle Huey wrote:
>
> The anomalies we see appear to be related to, or at least triggerable
> by, the performance monitoring interrupt. The following program runs
> a loop of roughly 2^25 conditional branches. It takes one argument,
> the number of conditional branches to program the PMI to trigger on.
> The default is 50,000, and if you run the program with that it'll
> produce the same value every time. If you drop it to 5000 or so
> you'll probably see occasional off-by-one discrepancies. If you drop
> it to 500 the performance counter values fluctuate wildly.

Yes, it does change, but I also see the difference on bare metal
(although not as big as it is in an HVM guest):

ostr@workbase> ./pmu 500
Period is 500
Counted 5950003 conditional branches
ostr@workbase> ./pmu 500
Period is 500
Counted 5850003 conditional branches
ostr@workbase> ./pmu 500
Period is 500
Counted 7530107 conditional branches
ostr@workbase>

> I'm not yet sure if this is specifically related to the PMI, or if it
> can be caused by any interrupt and it's only how frequently the
> interrupts occur that matters.

I have never used the file interface to performance counters, but what
are we reporting here (in read_counter()) --- the total number of events,
or the number of events since the last sample? It is also curious to me
that the counter is non-zero after PERF_EVENT_IOC_RESET (but again, I
don't have any experience with these interfaces).

Also, exclude_guest doesn't appear to make any difference. I don't know
whether there are any bits in Intel counters that allow you to
distinguish guest from host (unlike AMD, where there is a bit for that).

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel