Hi all,
I have finally got my little PV guest running. My critical error was not setting __XEN_INTERFACE_VERSION__. After tearing the code to pieces and putting it back together a few times, I decided to debug my Makefile by comparing it to Mini-OS's.
I am now starting to do some simple timings and the numbers are so bad I'm wondering what could be wrong with my test.
As mentioned previously, I have created a CPU pool with one CPU and one domU. I am using the standard credit scheduler, and I set timer_slop=0 on the Xen command line.
I initialise the console, traps, events, and TSC clock in my PV guest and then start a periodic operation running. I then calculate latency as (clock time - deadline) and period as (clock time - previous deadline). At the moment I'm just displaying min/max values. I also increment a tick count on every cycle.
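
For reference, the measurement in my VIRQ_TIMER handler looks roughly like this. This is a sketch rather than my exact code: monotonic_clock() and the handler signature follow the Mini-OS style I've been copying, and PERIOD_NS is my 1ms period:

#include <stdint.h>

#define PERIOD_NS 1000000ULL          /* my 1ms period */

static uint64_t deadline;             /* absolute expiry I programmed (ns) */
static int64_t lat_min = INT64_MAX, lat_max = INT64_MIN;
static int64_t per_min = INT64_MAX, per_max = INT64_MIN;
static uint64_t ticks;

/* Runs on each VIRQ_TIMER event; monotonic_clock() returns ns as in Mini-OS. */
static void timer_handler(evtchn_port_t port, struct pt_regs *regs, void *data)
{
    uint64_t now = monotonic_clock();
    int64_t lat = (int64_t)(now - deadline);                /* vs this deadline */
    int64_t per = (int64_t)(now - (deadline - PERIOD_NS));  /* vs previous deadline */

    if (lat < lat_min) lat_min = lat;
    if (lat > lat_max) lat_max = lat;
    if (per < per_min) per_min = per;
    if (per > per_max) per_max = per;
    ticks++;

    deadline += PERIOD_NS;            /* advance; the next expiry is re-armed elsewhere */
}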
Looking at the statistics I see the expected number of ticks per second; however, I get latencies in the range [-1ms, +25us] and periods in the range [3us, 1.02ms]. The upper bounds look OK, but the lower bounds are all over the place.
This makes me think that I'm not the only thing using VIRQ_TIMER. I seem to remember that the hypervisor raises it nominally every 10ms via a default per-VCPU periodic timer.
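
If that's the case, I guess I could stop the hypervisor's default periodic timer so that the only VIRQ_TIMER events left are my own singleshot expiries. A sketch of what I have in mind, assuming I'm reading xen/include/public/vcpu.h correctly (VCPUOP_stop_periodic_timer and VCPUOP_set_singleshot_timer):

#include <xen/vcpu.h>   /* struct vcpu_set_singleshot_timer, VCPUOP_* */

/* Stop the default periodic timer on VCPU 0, so VIRQ_TIMER should only
 * fire for timers I arm myself. */
static int quiesce_default_timer(void)
{
    return HYPERVISOR_vcpu_op(VCPUOP_stop_periodic_timer, 0, NULL);
}

/* Arm the next expiry at an absolute system time in ns. */
static int arm_singleshot(uint64_t abs_ns)
{
    struct vcpu_set_singleshot_timer ss = {
        .timeout_abs_ns = abs_ns,
        .flags = 0,
    };
    return HYPERVISOR_vcpu_op(VCPUOP_set_singleshot_timer, 0, &ss);
}

Does that sound right, or does the periodic timer keep running regardless?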
Is there any way to identify the origin of a timer event?
Can anyone suggest other things to try?
Regards.