
[Xen-devel] change hvm defaults for timer_mode and hpet?



Due to recent changes in timer handling (specifically building
hpet emulation on top of Xen system time and ensuring it is
monotonic), I wonder if it now makes sense to:

1) change hvm default for hpet to 1 (was 0)
2) change hvm timer_mode default from 0 to 2
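
For concreteness, these are the knobs in question as they
appear in an hvm guest config file.  This is a minimal sketch
in xm config syntax; the builder line is only illustrative
context, and the mode name in the comment reflects my reading
of the hvm timer mode parameters:

# Illustrative hvm guest config fragment (xm syntax).  Today these
# must be set explicitly to get the behavior proposed as default.
builder = 'hvm'
hpet = 1          # expose the (now monotonic) virtual hpet to the guest
timer_mode = 2    # "no missed ticks pending", instead of mode 0's
                  # "delay for missed ticks"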

I encouraged adding the hvm hpet parameter and defaulting
it to 0 because the virtual hpet was not reliable and many
guests/versions will use the hpet by default if it is
available.  That reliability problem should now be fixed.

Timer_mode==0 is necessary for guests that do not have
a monotonic platform timer; each processor in a multi-VCPU
guest must keep time using only PIT-generated ticks, and
lost ticks mean lost time, so Xen squirrels away any ticks
that occur while a VCPU is asleep and delivers all of them
when the VCPU awakens.  Thus time moves forward
independently on each VCPU, leading to potentially
unavoidable "Time went backwards" problems.   With the
virtual hpet working properly, timer_mode==0 should rarely
be necessary, though I think we should leave it around for
pre-hpet-capable guests.
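
To make that policy concrete, here is a minimal standalone C
sketch (not Xen source; vcpu_sim, pending_ticks, etc. are
made-up names for illustration) of the timer_mode==0
"squirrel away and replay" behavior described above:

/* Minimal standalone sketch (NOT Xen source) of the timer_mode==0
 * "delay for missed ticks" policy.  Each simulated VCPU accumulates
 * ticks while asleep and replays all of them on wakeup, so guest
 * time advances independently per VCPU. */
#include <stdio.h>

struct vcpu_sim {                 /* hypothetical per-VCPU state */
    int id;
    int asleep;                   /* 1 while descheduled */
    unsigned long pending_ticks;  /* ticks squirreled away while asleep */
    unsigned long guest_time;     /* guest-visible tick count */
};

/* A periodic (PIT-style) tick arrives for this VCPU. */
static void tick(struct vcpu_sim *v)
{
    if (v->asleep)
        v->pending_ticks++;       /* don't lose it: save for later */
    else
        v->guest_time++;          /* deliver immediately */
}

/* VCPU is scheduled again: replay every missed tick at once. */
static void wake(struct vcpu_sim *v)
{
    v->guest_time += v->pending_ticks;
    v->pending_ticks = 0;
    v->asleep = 0;
}

int main(void)
{
    struct vcpu_sim v0 = { 0, 0, 0, 0 }, v1 = { 1, 1, 0, 0 };

    /* Five ticks arrive; v1 is asleep the whole time. */
    for (int i = 0; i < 5; i++) { tick(&v0); tick(&v1); }
    wake(&v1);   /* v1 catches up in one burst */

    /* Both end at 5 ticks, but v1's time jumped forward in a lump;
     * with per-VCPU replay like this, one VCPU can briefly observe
     * time "behind" another, hence "Time went backwards". */
    printf("v%d time=%lu  v%d time=%lu\n",
           v0.id, v0.guest_time, v1.id, v1.guest_time);
    return 0;
}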

Although both of these parameters are easily specified in the
hvm config file, both are obscure and difficult to explain.
It would be nice if the defaults were "right" most of the time
and the parameters only needed to be specified/explained for
corner cases.

Comments?  It would be nice to get this all settled (and fully
tested) prior to 3.3.

Thanks,
Dan


===================================
Thanks... for the memory
I really could use more / My throughput's on the floor
The balloon is flat / My swap disk's fat / I've OOM's in store
Overcommitted so much
(with apologies to the late great Bob Hope)