[gpsd-dev] How to tell a "good" NTP server


From: Hal Murray
Subject: [gpsd-dev] How to tell a "good" NTP server
Date: Tue, 22 Oct 2013 03:18:30 -0700

Here is the long story.

First, some NTP background.

After the typical client-server exchange of a pair of NTP packets, the client 
has four timestamps:
  The time the request left the client.
  The time the request arrived at the server.
  The time the response left the server.
  The time the response arrived at the client.
Note that two of those timestamps use the client's clock and two use the 
server's clock.

ntpd assumes the network delays are symmetric and that the server clock is 
the truth.  From that you can calculate the offset of the client's clock.

On the other hand, if you know (or assume) that both clocks are accurate, you 
can compute the network propagation delays in both directions.
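In code form, the standard arithmetic is something like this (a minimal 
sketch; the names t1..t4 are mine, not anything from ntpd):

# t1 = request left the client          (client clock)
# t2 = request arrived at the server    (server clock)
# t3 = response left the server         (server clock)
# t4 = response arrived at the client   (client clock)

def offset_and_delay(t1, t2, t3, t4):
    # Assumes symmetric network delays and a trustworthy server clock.
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # how far the client's clock is off
    delay = (t4 - t1) - (t3 - t2)            # round trip time spent on the network
    return offset, delay

def one_way_delays(t1, t2, t3, t4):
    # Assumes instead that both clocks are accurate.
    return t2 - t1, t4 - t3                  # outbound delay, return delay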

-------

There are several sources of error.
  1) The accuracy of the server clock.
  2) Network routing asymmetries.
  3) Network queuing delays.

There is not much you can do about server accuracy except pick trustworthy 
servers, use several of them, and make sure they all agree.

You can minimize routing asymmetries by picking servers close to you.

You can minimize queuing delays by picking times when things are not busy 
and/or servers that are close to you so there is less opportunity for queues.

You can monitor the round trip times with the delay column from ntpq -p.  If 
you assume the network routing delay doesn't change very often and you watch 
long enough, the lowest delay you see is the network routing delay.  Anything 
over that is queuing delay or a broken network link.  The broken links will 
be stable for minutes/hours.  The queuing delays will have lots of jitter.
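If you want to automate that watching, here is a rough sketch.  It is not 
part of ntpd, and it assumes the usual "ntpq -pn" billboard layout with 
delay as the 8th column:

import subprocess, time

# Poll "ntpq -pn" and remember the lowest delay seen per peer.  Anything
# above that floor is presumably queuing delay (or a routing change).
min_delay = {}

def sample():
    out = subprocess.run(["ntpq", "-pn"], capture_output=True, text=True).stdout
    for line in out.splitlines()[2:]:          # skip the two header lines
        fields = line.split()
        if len(fields) < 10:
            continue
        peer = fields[0].lstrip("*+#-x.o")     # drop the tally code, if any
        delay = float(fields[7])               # round trip delay in milliseconds
        min_delay[peer] = min(min_delay.get(peer, delay), delay)
        print("%s  delay %.3f ms  floor %.3f ms  excess %.3f ms"
              % (peer, delay, min_delay[peer], delay - min_delay[peer]))

while True:
    sample()
    time.sleep(64)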

Stratum 1 servers are likely to have better accuracy, but on the other hand, 
since they are popular, they may be busy and suffer from queuing delays.

----------

If I wanted to calibrate a local clock, I think getting within 10 ms would 
be reasonable.  2 ms would be hard.  YMMV and such.

----------

Various numbers:

I have a 384K SDSL line.  I've seen queuing delays over 3.5 seconds.  (I 
think that mostly happens when a web browser opens a zillion connections to 
download graphics.)  It's in the ballpark of 500 ms if I'm "just" downloading 
a huge file.

I'm in Silicon Valley, California.  I have a GPS receiver that I trust.  NIST 
has several stratum 1 systems in the Denver/Boulder area: several at NIST 
(Boulder), two at University of Colorado (Boulder), one at WWV (Fort 
Collins).  That's ~1000 miles from here.  Round-trip time is in the ballpark 
of 80 ms.  From here, normally, the UofCo server is off by a ms or two.  The 
NIST server is off by a few ms in the other direction.  The WWV server is off 
by a few more ms.  I assume those offsets are all due to asymmetric routing.

About 10 hours ago, there was a routing change.  I assume some backbone link 
died or recovered.  The round trip times dropped by about 15 ms.  The UofCo 
server is now off by 10 ms.  The NIST server is off by 10 ms in the other 
direction.

I've seen similar jumps by 20 ms while watching a server in Texas.
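For what it's worth, the arithmetic behind that: with the symmetric-delay 
assumption, half of any path asymmetry leaks straight into the computed 
offset.  A quick illustration (the numbers here are made up, not 
measurements):

def offset_error_ms(outbound_ms, return_ms):
    # Error in the computed offset when the two one-way delays differ.
    return (outbound_ms - return_ms) / 2.0

# A routing change that adds 15 ms to just one direction moves the apparent
# offset by 7.5 ms, while the round trip delay changes by the full 15 ms.
print(offset_error_ms(47.5, 32.5))   # 7.5
print(offset_error_ms(32.5, 32.5))   # 0.0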


-- 
These are my opinions.  I hate spam.





