Milliseconds 10, ticks 3

Since the year dot, SQL Server’s native Datetime type has given us a resolution of 300 ticks per second. With the introduction of Datetime2 and DatetimeOffset in 2008, we now get up to 100ns resolution. For those who like the finer things in life, this appears to be cause for rejoicing.
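
If you want to see the difference in the flesh, a quick comparison does the trick. This snippet is just my own illustration, and the literal is arbitrary; the point is that datetime rounds to 1/300-second increments (.000, .003, .007) while datetime2(7) keeps all seven fractional digits.

declare @t datetime2(7) = '2014-01-01 12:00:00.0045678'

select
      cast(@t as datetime) as AsDatetime    -- rounded to the nearest 1/300-second tick
    , @t                   as AsDatetime2   -- all seven fractional digits preserved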

[Image: ruler, by frankieleon, CC Attribution 2.0]

Fun observation: if 100ns corresponded to 1 inch, 3.3ms translates to just over half a mile. That’s a pretty big difference in accuracy. Or should that be precision? Hmmmm, here’s a can of worms.

See, it’s all very well to persist data points at really fine grain, but you first need the ability to measure what you’re persisting. Unless your data acquisition comes from scientific instruments, chances are that the times stored in your database come from getdate(), sysdatetimeoffset(), or their equivalents in a source application. And unless you have given a lot of thought to where your time measurements come from, you may be inadvertently subscribing to some of the falsehoods programmers believe about time. Okay, if none of those, maybe these?

It is said that there is no time like the present, though one could reasonably argue that there are at least three times like the present in SQL Server. Here is a little script that will do nothing other than collect getdate(), getutcdate() and sysdatetimeoffset() as fast as it can for ten seconds and then report how many distinct points in time were caught in the net.

create table #Samples
(
      [GetDate]           datetime not null
    , [GetUTCdate]        datetime not null
    , [SysDatetimeOffset] datetimeoffset not null
)

declare @StopAt datetime = dateadd(s, 10, getutcdate())

while getutcdate() < @StopAt
begin
    insert #Samples
    select
          getdate()
        , getutcdate()
        , sysdatetimeoffset()
end

select distinct
      [GetDate]
    , [GetUTCdate]
    , [SysDatetimeOffset]
from #Samples
order by 3,1
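
To actually get the “how many” number rather than eyeballing the distinct rows, you could follow that up with a quick summary along these lines (my suggestion, assuming #Samples is still around in your session):

select
      count(*)                            as TotalSamples
    , count(distinct [GetDate])           as DistinctGetDate
    , count(distinct [GetUTCdate])        as DistinctGetUTCdate
    , count(distinct [SysDatetimeOffset]) as DistinctSysDatetimeOffset
from #Samples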

I have no idea what the results on your server will be. Go on, try. My worst-case observations, with worst case being more likely on a physical server than a virtualised one:

  • sysdatetimeoffset() has variable resolution, which could be as coarse as 15ms (two and a half miles!). However, it is extremely unlikely that it would emit timestamps closer than 1ms apart. This is consistent with the documented fact that it uses the GetSystemTimeAsFileTime Windows API call. (A query to measure these gaps from the #Samples data follows this list.)
  • getdate() and getutcdate() don’t necessarily move between 3ms ticks at the same time, occasionally making wild jumps of up to 15ms. When they jump, they do so at separate times, converging after the second one makes the leap.
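
As a sanity check on those claims, here is one way of measuring the gaps yourself; this is just a follow-up query I’m suggesting, assuming the temp table from the earlier script is still in scope and that you’re on SQL Server 2012 or later for lag().

;with DistinctTimes as
(
    select distinct [SysDatetimeOffset]
    from #Samples
)
select
      [SysDatetimeOffset]
    , datediff(ms, lag([SysDatetimeOffset]) over (order by [SysDatetimeOffset])
                 , [SysDatetimeOffset]) as MsSincePrevious
from DistinctTimes
order by [SysDatetimeOffset]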

The secret to oven-fresh ticks

What I want to focus on today is something that has bugged me for ages: given that we can observe cases where Windows only hands out fresh timestamps every 15ms, how come getdate() still manages to give us steaming hot 3ms ticks? This issue, along with some other juicy time-related ones, is touched on by Robert Dorr in a great blog post, although the getdate() nugget is restricted to a very brief bullet point mention. Having now done my own spelunking, here is the expanded version as it stands in SQL Server 2014.

Both getdate() and getutcdate() are serviced by the function sqlmin!utgettime, which takes a parameter indicating whether it should return a local or a UTC datetime. Broadly speaking, this function gives us a recently cached Windows time, offset by a quick-and-dirty measurement of how many milliseconds have passed since that time snapshot was taken. If more than a second has passed since the last cache refresh, a new refresh is triggered. In slightly more detail (a rough T-SQL model follows the list):

  1. Retrieve the current tally of global ticks and translate into a millisecond count
  2. Retrieve the cached Windows time from a structure which includes the above “millisecond count” as it stood when that timestamp was retrieved
  3. If that cached time is older than a second, get a fresh Windows time using the GetLocalTime or GetSystemTime API call, and save it with the current millisecond count as its timestamp of when the snapshot was taken
  4. Return the cached time offset by the number of milliseconds that have elapsed (timestamp delta) since that snapshot was taken
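
To make the shape of that loop more concrete, here is a rough T-SQL model of the four steps. To be clear, this is my sketch and emphatically not the actual sqlmin!utgettime code: the “tick source” is faked with a datetime2 anchor plus datediff(), and getutcdate() stands in for the GetSystemTime API call.

declare @TickAnchor  datetime2(7) = sysutcdatetime()   -- stand-in for the global tick source
      , @CachedUTC   datetime     = getutcdate()       -- step 2: the cached "Windows time"...
      , @CachedAtMs  int          = 0                  -- ...and the millisecond count at which it was taken
      , @NowMs       int
      , @i           int          = 0

while @i < 5
begin
    -- step 1: current millisecond tally from the (stand-in) tick source
    set @NowMs = datediff(ms, @TickAnchor, sysutcdatetime())

    -- step 3: refresh the cached time if the snapshot is more than a second old
    if @NowMs - @CachedAtMs > 1000
    begin
        set @CachedUTC  = getutcdate()
        set @CachedAtMs = @NowMs
    end

    -- step 4: return the cached time, offset by the milliseconds elapsed since the snapshot
    select dateadd(ms, @NowMs - @CachedAtMs, @CachedUTC) as EmulatedGetUTCDate

    waitfor delay '00:00:00.400'
    set @i += 1
end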

Two permutations come into play here. Firstly, UTC and local snapshots are cached in separate variables and independently updated, i.e. a request for local time doesn’t trigger a refresh of the cached UTC time. This has the interesting side effect that calling getutcdate() less often than once a second will never have a millisecond delta to apply, and it will return the same output as calling GetSystemTime directly. On the other hand, calling both getdate() and getutcdate() more often than once a second will have them both refreshing their cached times once per second, though probably not at the same time. I ascribe my worst-case observation above to this: the wobbles showed up at different times for local and UTC, with the cause of the wobbliness being either bad calibration of the intra-second ticks or uncertainty about how fresh the last acquired Windows time was.

The second permutation involves the source of these intra-second ticks. As described in Bob Dorr’s post, the QueryPerformanceCounter – itself coming from one of multiple potential sources! – is used if it is known that an invariant time stamp counter (processor RDTSC instruction) is available. Otherwise the interrupt timer is used, accessed through a simple memory read of the SharedUserData structure which is mapped at the same user-mode address in all processes. See here for background on the latter, including interesting detail on the issues of keeping synchronisation lightweight for a multi-word structure.

Ticks of the trade

Another facet of the utgettime function is that it isn’t just used from T-SQL. If you love it, put a breakpoint on it, and you’ll also see it being used to timestamp transaction start and end times in log record generation. This applies both to user transactions and to ones belonging to system tasks like the ghost cleanup task; I suspect it will also show up in various other tasks that log timestamps against database metadata. The frequency at which it could be needed in a high-transaction-volume system theoretically puts a concurrency slant on the way it was written, although I have been unable to find experimental evidence that sysdatetimeoffset() costs more than getdate().
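
If you want to see those log-record timestamps for yourself, the undocumented fn_dblog function exposes them. The snippet below is only a suggestion for poking around on a test database (fn_dblog is unsupported, and dbo.TickDemo is just a throwaway table of my own invention, used to generate a tiny transaction):

create table dbo.TickDemo (i int)
insert dbo.TickDemo values (1)

select top (10)
      [Current LSN]
    , Operation
    , [Transaction ID]
    , [Begin Time]
    , [End Time]
from fn_dblog(null, null)
where Operation in ('LOP_BEGIN_XACT', 'LOP_COMMIT_XACT')
order by [Current LSN] desc

drop table dbo.TickDemo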

Much of this is academic, and only of interest to especially curious folks. One very practical issue is that it weakens the case for using the datetime2 or datetimeoffset data types with more than three fractional digits of precision, because in the absence of source data that truly uses the extra precision (which you’re unlikely to get from either Windows or SQL Server) you’re only wasting storage space in the misguided belief that you are buying accuracy. The picture might change if SQL Server were to start using GetSystemTimePreciseAsFileTime, but this doesn’t yet seem to feature as of the 2016 CTPs.
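
For what it’s worth, the storage cost is easy to demonstrate; the byte counts in the comments below reflect the documented datetime2 storage sizes, and datalength() should confirm them on your instance.

select
      datalength(cast(sysdatetime() as datetime2(3))) as BytesAtThreeDigits   -- 7 bytes
    , datalength(cast(sysdatetime() as datetime2(7))) as BytesAtSevenDigits   -- 8 bytes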

Another practical implication is that it can make it hard to choose whether to populate a datetime2 column from getdate() or from sysdatetime(). With getdate(), you will get your 3ms resolution, although the output might be jittery. With sysdatetime(), you may be stuck with 15ms resolution, but then again you may actually achieve 1ms resolution. I’d cautiously say that sysdatetime() should be the winner on a virtual server due to the apparent standard practice of virtual hosts getting a 1000Hz timer interrupt – see the VMware document referred to by Bob Dorr for more details and virtualisation gotchas.
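
If in doubt, it costs nothing to look at the two candidates side by side; the temp table below is just my throwaway demo, with one column fed from each source.

create table #Choice
(
      FromGetDate     datetime2(7) not null
    , FromSysDatetime datetime2(7) not null
)

insert #Choice values (getdate(), sysdatetime())

select FromGetDate, FromSysDatetime
from #Choice

drop table #Choice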

And there you have it: more than you wanted to know about the anatomy of getdate’s tick. Stay tuned for some more date and time goodness in a future blog post.
