What Time Stream do You Follow?
What time it might be – for whatever meaning of ‘a moment’ you have – can get very difficult to pin down… Especially so when a non-linear, non-consistent viewpoint smears your timing into a big ball of wibbly-wobbly timey-wimey… stuff (to misquote the Tenth Doctor)…
We already have some long established time standards. We also have the *nix epoch (‘unix time’) that is completely consistent regardless of ‘externalities’ (see the Epoch & Unix Timestamp Converter, Epoch clock, or Dan’s Epoch Unix Time Stamp Converter for what time it is on Linux 😛 ). Buy the UnixTime T-shirt! And now… Google presents what I consider to be an ugly ‘work-around’…
The Google ‘Smeared Time’ Leap Second Work-Around
The Google Cloud Platform Blog has just hit the news announcing Google’s own special version of time specially for their virtual world:
Google says the new service is for “anyone who needs to keep local clocks in sync with VM instances running on Google Compute Engine, to match the time used by Google APIs…” …
… “No commonly used operating system is able to handle a minute with 61 seconds,” Google says, so “Instead of adding a single extra second to the end of the day, we’ll run the clocks 0.0014% slower across the ten hours before and ten hours after the leap second, and ‘smear’ the extra second across these twenty hours. For timekeeping purposes, December 31 will seem like any other day.”
But not if you use an NTP server that uses another method [of] leap second handling. On the page for its time service, Google says “We recommend that you don’t configure Google Public NTP together with non-leap-smearing NTP servers.”…
So… In Google’s virtual world there is no leap second discontinuity, because there are no leap seconds. Instead, Google’s time slips by slightly more slowly (or more quickly, for a skipped second) across 20 hours of ‘real’ time. This means that to a leap-second-aware system (as viewed from outside of ‘Google-time’), the special leap-period-smeared ‘Google time’ creeps to being 0.5s slow on the run-up to the leap second, then races through at twice normal speed during the leap second itself to end up 0.5s ahead of time… Worse still, those systems running on ‘Google-time’ should not even accept anything carrying a leap-second (23:59:60) timestamp… The leap second is in effect forever lost… What could possibly go wrong?!(TM)
The Google announcement for this is: “Making every (leap) second count with our new public NTP servers”. Note also their earlier blog posting: “Time, technology and leaping seconds”
As for the supposed “No commonly used operating system is able to handle a minute with 61 seconds” silliness. What ‘commonly used operating system’ might that be?… I’ll leave it to these good comments to clear that up:
The “smearing” approach is probably the most sensible method in the vast majority of cases.
You mean for those code monkeys who don’t know/don’t care and don’t test?
First point – your software should not crash if time is stepped anyway, what happens then if a machine is off-network for a while and then adjusted to the correct time (manually or by NTP)?
Second point – if you depend on precise time then do it properly! This is not a new issue; it has been documented and implemented in sane systems since the late 1970s. And for those who really need continuous time-scales (e.g. for computing time differences that are correct in any absolute sense) we already have TAI, or even simpler GPS time.
What makes this all very frustrating is that there are already perfectly good solutions to the leap second problem.
If OS developers wrote their OSes to use International Atomic Time instead of UTC as their base timescale, the OS would never need to deal with a leap second.
And there’s perfectly good libraries for converting TAI to, e.g. UTC that already handle leap seconds, can do accurate time calculations, etc. One such example is the SOFA library from the IAU.
Like everything else it cannot predict leap seconds, but an OS is already well placed to receive library updates as part of its regular maintenance. Why not this one too? And if every developer used TAI instead of UTC to represent time values then all their calculations would always be correct, with conversion to UTC for display being the only thing that’d be wrong in the absence of updates.
All sane OS already handle the leap second properly, except when some code monkey changes it and does not test it, and NTP has this built-in (it announces the leap 1 day in advance so the kernel can step as needed without an NTP packet at the precise change point).
No, this is simply a sop to shitty coders who do not understand the basics of precise time-keeping that have been this way for 40 odd years…
Unix/Linux has ALWAYS handled a minute with 61 seconds in it
The ‘struct tm’ that holds time in year, minutes, seconds, etc. has allowed tm_sec to go from 0-60 (instead of 0-59) for this very reason since before I first touched Unix 25 years ago, and presumably from day one for Linux. So Android and iOS should be perfectly fine. As for Windows, who the hell knows?
Now some applications may not be coded properly to expect that extra second and get a tm->tm_sec == 60, but this is hardly the fault of the OS, it is the fault of the application!
The Google (‘smeared time’) time servers are on:
And they must not be mixed with any other (non-leap-smearing) time servers…
Time in the Real World
Note that we have the long established ntp and the publicly available pool of time servers coordinated by pool.ntp.org. For the USA, there are the NIST Internet Time Servers publicly available. Note that the time standards for the world are: UTC, GPS, LORAN and TAI.
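For reference, pointing an ordinary ntpd client at the public pool takes only a few lines of configuration – an illustrative minimal `/etc/ntp.conf` (the `pool` directive and `iburst` option are standard ntpd configuration):

```conf
# /etc/ntp.conf -- minimal client configuration using the public pool
pool 0.pool.ntp.org iburst
pool 1.pool.ntp.org iburst
pool 2.pool.ntp.org iburst
```

Crucially, all the servers chosen this way handle leap seconds the same (non-smearing) way, avoiding the mixing problem Google warns about.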
Interestingly, there is a project to replace the venerable, well-proven NTPD with a new system under development called “ntimed”. A brief summary is given in Ntimed: an NTPD replacement. Notable points include:
- One of the main focuses is security… Small code and small attack surface;
- Leap-smearing (stretching a leap second over a day, to limit the impact, like Google does);
- [Some?] Air-traffic control systems are not leap-second aware. The next leap second will happen … during the day in Tokyo… [Without problem 01/07/2015]
- “Green Computing”: the ideal of ntimed is to be as green as possible [minimize resource utilization and hence minimize the electrical power cost].
At the moment, ntimed is very much a work in progress. See PHK: Time, NTP, PTP etc. Sort of a running log.
Our last leap second caused a little bit of news for some systems that failed: “June 30th is a second too long”. The National Geographic article “World Will Gain a Leap Second on Tuesday: Here’s Why” nicely describes why we have leap seconds and gives the surrounding story. Note that some systems “just ignore the leap second”, which presumably gives rise to the allure of using smeared time…
Wikipedia gives a summary of computer system time.
- If you run Virtual Machines in the Google ‘Cloud’, or if you must interface into that world in any way that is time critical, then perhaps use the smeared ‘Google time’;
- Otherwise for everything in the Real World, use real time with properly time aware applications!
As a pause for thought: should we next use time-smearing to ‘do away’ with all the “Leaplings” of leap years?…
Whatever next?! 😉