
[Xen-devel] timer_mode/hpet proposals and documentation


  • To: "Xen-Devel (E-mail)" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx>
  • Date: Thu, 31 Jan 2008 14:32:27 -0700
  • Delivery-date: Thu, 31 Jan 2008 13:33:32 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AchkUMUjN8IAF84fSaibQQjqqR+0Kg==

I've been googling for documentation on timer_mode and haven't found
any.  I'd like to write some but will need some help explaining the
subtleties between the different modes.

But first I'd like to suggest some slightly different semantics and
a related idea:

1) Change the definition of timer_mode==0 to be:
   Unspecified.  Xen and/or management tools may use other settings and/or
   heuristics to change timer_mode to a more appropriate value.  Otherwise,
   timer_mode==0 and timer_mode==4 will be equivalent.
2) Add a new timer_mode==4 which takes over the current meaning of
   timer_mode==0 (delay_for_missed_ticks) and may not be changed by Xen
   and/or management tools.
3) Add a new hvm platform variable "vhpet" which defaults to zero.  If set
   to one, the virtual hpet will be enabled, else it will be disabled.
   (A sample config snippet follows this list.)
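
For example, with these changes an HVM guest config file might contain
the following (sketch only -- timer_mode==4 and vhpet are proposed here
and don't exist in the tree yet):

  # HVM guest config (proposed settings)
  builder = "hvm"
  timer_mode = 4   # pin delay_for_missed_ticks; tools must not override it
  vhpet = 1        # opt in to the virtual hpet (proposed default is 0)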

For 1) and 2), timer_mode is relatively new so few shipping Xen
implementations should be dependent on it (especially on timer_mode==0).
It would be nice to plan for automatic mechanisms to work even on
existing VM config files.

For 3), hpet seems to be the default virtual clocksource for guests but
appears to be less accurate than pit.  Since hpet hardware is more
accurate than pit hardware, this is counterintuitive.

If these are reasonable, I will spin some patches.

Here's a first crack at some documentation for timer_mode:

===========
For fully virtualized guests, the platform variable "timer_mode" can
be set to the following values:
0 Unspecified.  Xen and/or management tools may use other settings
   and/or heuristics to change timer_mode to a more appropriate value.
   Otherwise, this value is the same as "delay_for_missed_ticks".
1 no_delay_for_missed_ticks
2 no_missed_ticks_pending
3 one_missed_tick_pending
4 delay_for_missed_ticks
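
For example, to request no_missed_ticks_pending, add this line to the
guest's config file (the other values work the same way):

  timer_mode=2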

Modern operating systems have direct access to hardware clock/timer
mechanisms and generally keep time by counting interrupts.  Rarely,
delivery of a timer interrupt -- or "tick" -- to an OS may get
delayed.  If a tick is delivered when the OS isn't ready, for example
if it is currently processing a previous tick, the guest may fail
to see one or more interrupts, resulting in "missed" ticks.  Different
OS's deal with this problem in different ways and the problem occurs
more frequently in a virtual environment, especially when resources
are overloaded.  As a result, Xen has to support multiple mechanisms
for delivery of missed ticks to a guest.  (Note that no virtual time
algorithm is perfect and it is recommended that all guests be
configured to periodically synchronize with an external time source
(e.g. via NTP) to eliminate any remaining small error.)
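
To make the four policies concrete, here is a toy Python sketch.  It is
illustrative only -- this is not Xen code, deliver() is a made-up
helper, and the handling of no_missed_ticks_pending in particular is a
simplification:

  TICK_MS = 10  # a 100Hz guest tick

  def deliver(policy, missed, wallclock_ms):
      """Return (ticks delivered on wakeup, guest time in ms)."""
      if policy == "delay_for_missed_ticks":
          # Deliver every missed tick; guest time advances stepwise,
          # lagging wallclock until the backlog drains.
          return missed, wallclock_ms - missed * TICK_MS
      if policy == "no_delay_for_missed_ticks":
          # Deliver every missed tick, but guest time tracks wallclock.
          return missed, wallclock_ms
      if policy == "one_missed_tick_pending":
          # Collapse the backlog into a single late tick.
          return 1, wallclock_ms
      if policy == "no_missed_ticks_pending":
          # Hold nothing pending; the guest is trusted to resync itself.
          return 0, wallclock_ms
      raise ValueError(policy)

  # A guest descheduled until t=40ms misses the ticks at 10, 20 and 30ms.
  for p in ("delay_for_missed_ticks", "no_delay_for_missed_ticks",
            "one_missed_tick_pending", "no_missed_ticks_pending"):
      ticks, gtime = deliver(p, missed=3, wallclock_ms=40)
      print(f"{p:27s} ticks={ticks} guest_time={gtime}ms")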

"Delay for missed ticks" (4) is used for guests which do not correct
for missed ticks, such as most older Linux OS's.

"No delay for missed ticks" (1) is [...???] and is used for Windows
guests.

"No missed ticks pending" is used for guests which are resilient to
missed ticks such as newer Linux 64-bit OS's.  Under most circumstances
these guests correct themselves for missed ticks so Xen doesn't have to.

"One missed tick pending" is [...????]
==========

Feedback and assistance welcome!

Thanks,
Dan

===================================
If Xen could save time in a bottle / then clocks wouldn't virtually skew /
It would save every tick / for VMs that aren't quick /
and Xen then would send them anew
(with apologies to the late great Jim Croce)