Last time, we began an in-depth look at how time is measured. This post continues our journey. If any of you are students of Albert Einstein and his theory of relativity, this might seem familiar.
How does a computer know what time it is?
Inside the guts of every computer is a clock (called a Real Time Clock, or RTC) with a battery attached, very similar to a wristwatch. Inside that clock is a quartz crystal. When an electric current is applied to it, the quartz vibrates at a precise frequency dictated by the size and shape of the crystal, and the clock circuitry counts those vibrations to track the passage of time. So, provided the quartz crystal gets the right electric current, a computer will keep track of time indefinitely.
Unfortunately, for a number of reasons (including, recently, a fluctuation in the supply of electricity in parts of Europe), the current the quartz crystal receives is not perfect. This can cause the RTC to drift, meaning a computer may lose or gain time. Additionally, the Earth does not rotate a perfect 24 hours a day, so we have to keep computer clocks synchronised against a central clock, updating them over the network.
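To get a feel for how quickly imperfect conditions turn into drift: a typical RTC crystal runs at 32,768 Hz, and a manufacturing tolerance of around 20 parts per million (a common figure, used here purely as an assumption) adds up faster than you might expect:

```python
# Rough illustration: how a small frequency error in a 32,768 Hz
# RTC crystal accumulates into clock drift over a day.
CRYSTAL_HZ = 32_768        # typical watch/RTC crystal frequency
TOLERANCE_PPM = 20         # assumed manufacturing tolerance (parts per million)

seconds_per_day = 24 * 60 * 60
# A 20 ppm error means the clock gains or loses 20 microseconds
# for every second of real time.
drift_per_day = seconds_per_day * TOLERANCE_PPM / 1_000_000
print(f"Worst-case drift: ~{drift_per_day:.2f} s/day "
      f"(~{drift_per_day * 30:.0f} s/month)")
```

At 20 ppm that works out to roughly 1.7 seconds per day, which is why an unsynchronised computer can be noticeably wrong within weeks.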
This central clock could be a time server on our local network (making use of the Network Time Protocol, or NTP) or one of several public NTP servers close to us. Why does the server have to be close to us? Because there can be as much as 200ms of latency if the server we’re synchronizing with is on the other side of the planet. An NTP server in our country, and preferably our city, can offer a fairly low-latency clock sync.
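Latency hurts less than you might think, because NTP is designed to estimate and subtract the network delay. It records four timestamps per exchange and derives both the round-trip delay and the clock offset from them. A rough sketch of that arithmetic, with made-up timestamp values:

```python
# Sketch of NTP's clock-offset calculation from its four timestamps
# (all values in seconds; the numbers are invented for illustration):
#   t0 = client send time, t1 = server receive time,
#   t2 = server send time, t3 = client receive time.
t0, t1, t2, t3 = 100.000, 100.150, 100.151, 100.200

# Round-trip delay, excluding the time spent inside the server.
delay = (t3 - t0) - (t2 - t1)
# Offset of the client clock relative to the server, assuming the
# network path is symmetric (NTP's key assumption).
offset = ((t1 - t0) + (t2 - t3)) / 2
print(delay, offset)   # roughly 0.199 s of delay, 0.0505 s of offset
```

The symmetry assumption is the catch: if the outbound and return paths have very different latencies, the offset estimate absorbs half of that difference as error, which is another reason a nearby server gives better results.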
In ideal circumstances, NTP provides a time accurate to between 1ms and 100ms of UTC, depending on prevailing network conditions, with most Internet-connected computers being accurate to within 10ms.
NTP relies on UDP (User Datagram Protocol), which is very fast and is the same protocol DNS makes use of. Speed matters. The reason DNS is mentioned here is that DNS is the system that translates domain names into IP addresses, and most NTP servers are listed by their domain name.
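As a rough sketch of what a synchronization request looks like on the wire, here is a minimal SNTP client in Python. The 48-byte packet layout follows RFC 4330; pool.ntp.org is a real public NTP pool, and actually calling the function requires network access:

```python
import socket
import struct

NTP_SERVER = "pool.ntp.org"     # public NTP pool (requires network access)
NTP_TO_UNIX = 2_208_988_800     # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = NTP_SERVER) -> float:
    """Minimal SNTP client: one UDP packet out, one back (RFC 4330)."""
    # First byte 0x1B = leap indicator 0, version 3, mode 3 (client);
    # the remaining 47 bytes of the request are zeroed.
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, 123))   # NTP listens on UDP port 123
        data, _ = sock.recvfrom(48)
    # The server's transmit timestamp lives in bytes 40-47:
    # 32-bit seconds + 32-bit fraction, big-endian, counted from 1900.
    seconds, fraction = struct.unpack("!II", data[40:48])
    return seconds - NTP_TO_UNIX + fraction / 2**32

# print(sntp_time())  # Unix timestamp according to the NTP server
```

Real NTP daemons do considerably more (multiple samples, delay compensation, gradual slewing); this only shows why UDP suits the job: a single small datagram each way, no connection handshake to pay for.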
So for an initial synchronization of our computer’s clock, it can take as much as 200ms just to reach an NTP server. If any amount of drift has taken place (which is entirely possible), any data that relies on the computer’s local time could be persisted out of order when the clock synchronises itself.
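A toy simulation of that failure mode, assuming a local clock that runs 2 seconds fast until a sync steps it back:

```python
# Toy illustration of how a clock step during sync can reorder timestamps.
# We simulate a local clock that runs 2 s fast until NTP corrects it.
events = []
real_time = 100.0        # "true" time, in seconds
clock_offset = 2.0       # local clock is 2 s ahead due to drift

events.append(("order placed", real_time + clock_offset))   # stamped 102.0

real_time += 1.0
clock_offset = 0.0       # NTP steps the clock back to true time

events.append(("order shipped", real_time + clock_offset))  # stamped 101.0

# The second event really happened later, but its timestamp is earlier:
assert events[0][1] > events[1][1]
```

This is why production NTP clients prefer to slew (gradually speed up or slow down) the clock rather than step it, and why monotonic clocks exist for measuring durations.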
Local date and time
As we discovered earlier, the planet we live on is approximately 40,000 km in circumference. That’s pretty big, and because the speed of light is finite (and further reduced by the medium it moves through), information transfer has a physical speed limit.
We also learned that network conditions prevent us from knowing at any given moment what the actual time is, because by the time information is measured and then processed by a computer (or our brain), it is already at least a few milliseconds old.
Over short distances and in most cases, this difference is negligible, but in high-performance and/or distributed systems requiring great accuracy, the speed of light can have a noticeable impact on what time an event occurred, especially over long distances.
For this reason, most people are satisfied with knowing the local time an event took place, relative to the observer (i.e., close to where our brains, or the computer, are situated).
However, local time accuracy is at the whim of geography, politics, and distance from the nearest NTP server. Daylight Saving Time causing clocks to jump backwards or forwards by an hour is the biggest challenge, but certainly not the only one.
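To illustrate the Daylight Saving Time problem: when clocks "fall back", the same local wall time occurs twice. Here is a sketch using Python's standard datetime module, with hand-coded offsets standing in for a real zone that moves from UTC-4 to UTC-5 (like US Eastern):

```python
from datetime import datetime, timedelta, timezone

# Hand-coded offsets as a stand-in for a real time zone database:
EDT = timezone(timedelta(hours=-4))   # daylight time
EST = timezone(timedelta(hours=-5))   # standard time

first = datetime(2023, 11, 5, 1, 30, tzinfo=EDT)   # 01:30, before fall-back
second = datetime(2023, 11, 5, 1, 30, tzinfo=EST)  # 01:30 again, an hour later

# Same wall-clock reading, two different real instants:
print(second - first)                    # 1:00:00
print(first.astimezone(timezone.utc))    # 2023-11-05 05:30:00+00:00
print(second.astimezone(timezone.utc))   # 2023-11-05 06:30:00+00:00
```

A timestamp of "2023-11-05 01:30" stored without an offset is simply ambiguous: there is no way to recover which of the two instants it meant.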
Therefore, seasoned database and software designers will always recommend using UTC (Coordinated Universal Time) when persisting date and time data to eliminate many of the complexities introduced by local time.
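A minimal sketch of that recommendation in Python: capture a timezone-aware UTC timestamp, persist it as a string, and confirm nothing is lost on the round trip:

```python
from datetime import datetime, timezone

# Recommended pattern: capture and persist UTC; convert to local time
# only at display time.
utc_now = datetime.now(timezone.utc)    # timezone-aware, never naive
stored = utc_now.isoformat()            # round-trippable string with offset

# Later, when reading the value back:
parsed = datetime.fromisoformat(stored)
assert parsed == utc_now                # no information lost
assert parsed.tzinfo is not None        # the offset travels with the value
```

The same principle applies in the database itself: store UTC (or an offset-aware type), and let the presentation layer worry about the reader's time zone.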
Coordinated Universal Time (UTC) and time dilation
NTP uses UTC. That is to say, our computer knows what time zone it is in according to our regional settings, but it synchronises with a UTC clock and then adjusts its hour based on the local time zone.
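The adjustment the operating system performs can be sketched like this, using a fixed +02:00 offset as a stand-in for a configured regional setting:

```python
from datetime import datetime, timedelta, timezone

# The synchronized clock is UTC; the displayed hour is UTC plus the
# configured zone offset (a fixed +02:00 here, as an assumption).
utc_instant = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
local_zone = timezone(timedelta(hours=2))

local_instant = utc_instant.astimezone(local_zone)
print(local_instant.hour)            # 14
assert local_instant == utc_instant  # same instant, different label
```

The final assertion is the key insight: local time is just a different label on the same underlying UTC instant.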
Universal Time is not universal. It is limited to the surface of the planet Earth. Even the International Space Station, and all the satellites orbiting the Earth (especially GPS satellites, which sit in medium Earth orbit, not geostationary orbit), have to allow for time dilation caused by gravitational and velocity effects.
For the space station, time moves slower than on the planet surface by roughly 1ms per month, because the time dilation from the ISS’s high orbital speed outweighs the effect of the slightly weaker gravity at its altitude. On GPS satellites, by contrast, time moves faster than on the planet surface: at their much higher altitude, the weaker gravitational field speeds their clocks up by more than their orbital velocity slows them down.
Leap seconds and leap years
All of that is just more complication on top of the fact that the planet isn’t in a consistent rotation around the sun or its own axis. This is where leap years and leap seconds come from.
To keep the clocks on (and around) earth in sync with the planet’s position and rotation, we add or remove leap seconds from time to time.
The leap year comes from the fact that the Earth takes approximately 365.2422 days to orbit the sun (the Gregorian calendar approximates this as 365.2425 days). Every four years we add an extra day, except in century years that are not divisible by 400. So the year 2000 was a leap year, but 1900 was not, and 2100 will not be either.
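That rule translates directly into code; a minimal sketch in Python:

```python
def is_leap_year(year: int) -> bool:
    """Gregorian leap-year rule: divisible by 4, except century
    years, which must also be divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert is_leap_year(2000)        # century divisible by 400
assert not is_leap_year(1900)    # century not divisible by 400
assert not is_leap_year(2100)    # same again
assert is_leap_year(2024)        # ordinary multiple of 4
```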
The leap second has to do with the length of an Earth day, which fluctuates, so leap seconds happen at irregular intervals. What it means, however, is that UTC never tracks the planet exactly: the difference between UTC and the mean solar time at 0° longitude (known as UT1) is allowed to grow to as much as 0.9 seconds before a leap second is inserted to pull them back together.
Now that we have a better understanding of how time is measured, the next posts in this series about how SQL Server and Azure SQL Database make use of date and time functions will make more sense. You can find me on Twitter at @bornsql. Stay tuned!