
Re: [Xen-users] Time/clock issues with Xen 3.0.3?


  • To: tim.post@xxxxxxxxxxxxxxx,"marek cervenka" <cervajs@xxxxxxxxxx>
  • From: "Chad Oleary" <oleary.chad@xxxxxxxxx>
  • Date: Wed, 29 Nov 2006 10:58:56 +0000
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 29 Nov 2006 02:11:57 -0800
  • Domainkey-signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:content-transfer-encoding:reply-to:references:in-reply-to:sensitivity:importance:to:cc:subject:from:date:content-type:mime-version; b=RAzIwgz7K7Edg9gxXqzguRAr0dI6FU36WywCUtizWJo3u0MxnFzphkDNjz3+N5kEPImF2t/OEd/ec2CNSIdsl0KFHHFKuncPrtpfMqrizKS0t2zParpFiitjqLF8qZvWU1saN1gNTd5JZHXIVzotDtuVP7cWpyRDzUvZo2KO6YQ=
  • Importance: Normal
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Sensitivity: Normal

ntp is there to correct drift; it was not designed to be the clock itself. If the
clock doesn't keep proper time, do you add more clocks to keep it correct? No, you
fix the clock.

ntp has its place, but it doesn't work well with a broken clock. Throwing ntp
at this problem is inappropriate.


  

-----Original Message-----
From: Tim Post <tim.post@xxxxxxxxxxxxxxx>
Date: Wed, 29 Nov 2006 17:19:53 
To: marek cervenka <cervajs@xxxxxxxxxx>
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Time/clock issues with Xen 3.0.3?

On Tue, 2006-11-28 at 15:28 +0100, marek cervenka wrote:
> 
> i (and others too) do not want to run ntpd in every domU. this is a waste
> of resources

You could try rdate on boot, which would sync well. I'm not sure,
however, how far the clock will drift thereafter (in the domU).

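(A minimal sketch of that boot-time sync idea; the hostname "timeserver" is a
placeholder, not something from this thread:)

```shell
# One-shot clock sync at domU boot, e.g. from /etc/rc.local.
# "timeserver" is a placeholder hostname -- substitute a real NTP/time host.
rdate -s timeserver || ntpdate -b timeserver
```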
ntpd consumes almost no resources afaik; it barely malloc()s (the only
strings it has to deal with are hostnames and time data), and I believe
it rides in the dentry cache for the most part once started. It's really
not much of a resource loss at all.

There was a fork of ntp/ntpd developed for SMM (embedded) systems that
used file handles rather than cache for almost everything, so that the
native 2.4 kernel on those systems would relinquish the memory associated
with the file handles instantly. However, this also assumes a CF drive
(no I/O penalty from having it sync every few seconds).

It came about because some of the cheaper ULV boards have horrible
clocks and 4-16 MB of RAM, so cache is precious real estate in that
setting.

I can't recall the name of it, but I *think* it's available as a meta
package in Debian; check the debian-embedded list if you're interested in
finding it. It was made for kiosks and such years ago. Even if it's orphaned
and unmaintained you should be able to dig it up, if ntp resource use
is that much of a concern.

I know that for all intents and purposes most of us code and install as
defensively on dom0 as we would on a small-memory or embedded
system, but we're really not talking about that big of a loss here :)

Best,
-Tim

> 
> > # ntpq -p
> >     remote           refid      st t when poll reach   delay   offset  
> > jitter
> > ==============================================================================
> > LOCAL(0)        LOCAL(0)        10 l   12   64  377    0.000    0.000   
> > 0.001
> > *rkdvmso1.dvm.kl 132.199.176.97   2 u  987 1024  377    0.207    0.035   
> > 0.742
> > +rksapas01.dvm.k 192.168.0.61     3 u  997 1024  377    0.120    0.020   
> > 0.772
> > +rksapas02.dvm.k 192.168.0.61     3 u  992 1024  377    0.101    0.049   
> > 0.769
> > +rksapas03.dvm.k 192.168.0.62     3 u  995 1024  377    0.282    0.019   
> > 4.211
> > +rksapas04.dvm.k 192.168.0.62     3 u  997 1024  377    0.274    0.001   
> > 0.775
> > +rksapas05.dvm.k 192.168.0.63     3 u   78 1024  377    0.281   -0.093   
> > 0.800
> > +rksapas06.dvm.k 192.168.0.63     3 u  999 1024  377    0.230   -0.213   
> > 0.825
> > rksapas07.dvm.k .INIT.          16 u    - 1024    0    0.000    0.000 
> > 4000.00
> > +rksapas08.dvm.k 192.168.0.41     4 u   78 1024  377    0.243   -0.168   
> > 0.796
> >
> > Ulrich
> >
> >
> >>
> >> for an example how to reproduce this problem, see below...
> >>
> >>
> >> right now I use a cron job on dom0 which re-sets the dom0 clock
> >> via date -s `date` (ntpdate doesn't work here).
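(Such a cron entry might look like the following sketch; the file name and the
10-minute interval are guesses, not details from the original mail:)

```shell
# /etc/cron.d/xen-clock -- hypothetical file; interval is a guess.
# Re-set dom0's clock to its own current value so the Xen wallclock
# update is pushed through to the domUs (the "date -s" trick above).
*/10 * * * *  root  date -s "$(date)" >/dev/null 2>&1
```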
> >>
> >>
> >> On Nov 25, Tim Post wrote:
> >>
> >>> What are the values of /proc/sys/xen/independent_wallclock
> >>> and /proc/sys/xen/permitted_clock_jitter respectively?
> >>
> >>    os2 koenig > cat /proc/sys/xen/independent_wallclock
> >>    0
> >>
> >>    os2 koenig > cat /proc/sys/xen/permitted_clock_jitter
> >>    10000000
> >>
> >> maybe I should just set /proc/sys/xen/independent_wallclock to 1
> >> and run ntpd on all domUs ?
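(That toggle would be a sketch like this, using the proc path quoted above;
assumed to be run inside each domU:)

```shell
# Let this domU keep its own wallclock instead of tracking dom0's,
# then discipline it with ntpd as usual. Run inside the domU.
echo 1 > /proc/sys/xen/independent_wallclock
```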
> >>
> >>
> >> now, here is how I was able to reproduce the domU clock problem caused
> >> by ntp clock drift in dom0, using the SUSE 10.1 Xen stuff.  in this
> >> example Xen, dom0 and domU are all SUSE 10.1 and use the SUSE
> >> xen-kernel, but on my real Xen server I run many different
> >> distributions and kernels -- all behave alike...
> >>
> >>
> >> step 1: perfect clock sync with drift==0 :
> >>
> >> run ntpd on dom0 with the following config file /etc/ntp.conf which uses
> >> only the dom0 system clock as "source", so it doesn't adjust the clock
> >> ever.
> >>
> >> ----------------------------- /etc/ntp.conf 
> >> -----------------------------------
> >> restrict default noquery notrust nomodify
> >> restrict 127.0.0.1
> >> restrict 192.168.8.0 mask 255.255.255.0
> >> server 127.127.1.1
> >> driftfile /var/lib/ntp/drift/ntp.drift
> >> logfile /var/log/ntp
> >> -------------------------------------------------------------------------------
> >>
> >> before starting ntpd, make sure the clock drift is set to zero with
> >>
> >>    echo 0 > /var/lib/ntp/drift/ntp.drift
> >>
> >> now start ntpd, start domU (don't run ntpd in domU) and check the domU
> >> clock drift with
> >>
> >>    ntpdate -d dom0
> >>
> >> that's how it should always work (in theory ;).  but in the real world
> >> the actual drift of a PC clock is not zero.  pretty often the clock
> >> shows a frequency error of 100 ppm or more (which is 8.64 secs per day!).
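(That 8.64-seconds-per-day figure is easy to verify with a one-liner:)

```shell
# 100 ppm is a fractional frequency error of 100e-6;
# over one day (86400 s) that accumulates to 8.64 s.
awk 'BEGIN { ppm = 100; printf "%.2f s/day\n", ppm * 1e-6 * 86400 }'
```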
> >>
> >>
> >> now, let's add some drift to dom0:
> >>
> >>    /etc/init.d/ntpd stop
> >>    echo 100 > /var/lib/ntp/drift/ntp.drift
> >>    /etc/init.d/ntpd start
> >>
> >>
> >> now you check the domU clock by running ntpdate on domU:
> >>
> >>    ntpdate -d dom0 ; sleep 60 ; ntpdate -d dom0
> >>
> >> and there will be a domU clock drift, relative to dom0 or any other
> >> ntp server, of ~6 msec per minute == 100 ppm.  qed.
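(The per-minute figure checks out the same way; the two offsets below are
hypothetical values standing in for the pair of ntpdate readings:)

```shell
# Two hypothetical ntpdate offsets (seconds), measured 60 s apart:
# 6 ms gained per minute corresponds to 100 ppm of drift.
o1=0.000; o2=0.006
awk -v a="$o1" -v b="$o2" 'BEGIN { printf "%.0f ppm\n", (b - a) / 60 * 1e6 }'
```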
> >>
> >>
> >> hope this helps to track and fix this clock problem!
> >>
> >> Harald Koenig
> >> --
> >> "I hope to die                                      ___       _____
> >> before I *have* to use Microsoft Word.",           0--,|    /OOOOOOO\
> >> Donald E. Knuth, 02-Oct-2001 in Tuebingen.        <_/  /  /OOOOOOOOOOO\
> >>                                                     \  \/OOOOOOOOOOOOOOO\
> >>                                                       \ 
> >> OOOOOOOOOOOOOOOOO|//
> >> Harald Koenig                                          \/\/\/\/\/\/\/\/\/
> >> science+computing ag                                    //  /     \\  \
> >> koenig@xxxxxxxxxxxxxxxxxxxx                            ^^^^^       ^^^^^
> >>
> >> _______________________________________________
> >> Xen-users mailing list
> >> Xen-users@xxxxxxxxxxxxxxxxxxx
> >> http://lists.xensource.com/xen-users
> >
> >
> >
> >
> 
> ---------------------------------------
> Marek Cervenka
> =======================================
> 
> 


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

