The DMV Diaries: sys.dm_os_workers

Following hot on the heels of sys.dm_os_threads, today we look into the worker objects built on top of them. This is intended to be supplementary to the official documentation on sys.dm_os_workers, so I’ll comment on that where appropriate, rather than repeating it. My reference version is 2016 SP1.

Basic plumbing of related objects

worker_address is of course simply the address of the Worker class instance we’re looking at. It is bound to the SOS_Scheduler living at scheduler_address and (once out of the SystemThreadDispatcher) to the SystemThread class instance at thread_address.

Once bound to an SOS_Task in the owning scheduler’s WorkDispatcher, it will expose that object’s address in task_address.

If we’re running in fiber mode, fiber_address points to the fiber data containing the user-mode thread state.

Now this isn’t intended to be about memory, but memory issues tend to touch everything we look at anyway, so a short diversion is in order.

We saw in my previous post on sys.dm_os_threads that each thread gets an associated MiniSOSThreadResources object which already contains a worker. Beyond that initial thread bootstrapping though, the factory method Worker::CreateWorker() is called to create useful Workers. One of the first things that function does is to allocate the memory (2816 bytes) in which to construct the Worker instance. This memory is provided by a memory object which is specially created for the occasion, and the pointer to the memory object is stored within the Worker; this is what gets exposed as memory_object_address.

What’s interesting in the memory hierarchy is that this is a memory object which both “belongs” to the Worker and is its parent. It is possible for other objects and functions to milk it for further “worker-local” allocations, although that would be an exploration for another day.

State and status

The state enum is a familiar one; here are its integer values:

0 - Init
1 - Running
2 - Runnable
3 - Suspended

As described in The DMV diaries: Task, Request and Session State, the value exposed in the DMV lacks the layered semantics of e.g. task state. Once we go beyond Init, we simply see how the SQLOS scheduler views the worker when in nonpreemptive mode. If the worker is running, it owns the scheduler. If runnable it is owned by the scheduler’s runnable queue. And if suspended, the scheduler doesn’t have any interest in its movement, unless the worker is waiting on IO or a timer.

status is an interesting one, and is the source of a bunch of the following is_xxx flags, which break out its individual bits.

Here is the 0-based bit mapping of the flags exposed in the DMV, plus a handful of others I know to be in use but aren’t exposed here.

bit  2 - is_preemptive
bit  3 - is_fiber
bit  4 - is_sick
bit  5 - is_in_cc_exception
bit  6 - is_fatal_exception
bit  7 - is_inside_catch
bit  8 - also involved in exception state
bit 11 - used in scheduling
bit 12 - lazy preemptive (in conjunction with 2)
bit 13 - is_in_polling_io_completion_routine
bit 19 - set in SOS_Task::DoomThread() 
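
To make the bit positions concrete, here is how the exposed flags could be derived from the status member. This is a minimal sketch: the enum names are mine, and only the bit positions come from the table above.

#include <cstdint>

enum WorkerStatusFlags : uint32_t {
    kIsPreemptive            = 1u << 2,
    kIsFiber                 = 1u << 3,
    kIsSick                  = 1u << 4,
    kIsInCcException         = 1u << 5,
    kIsFatalException        = 1u << 6,
    kIsInsideCatch           = 1u << 7,
    kIsInPollingIoCompletion = 1u << 13
};

// e.g. the DMV's is_sick column is a one-bit projection of status:
inline bool IsSick(uint32_t status) { return (status & kIsSick) != 0; }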

There is a second bitmask member in the Worker class, containing flags like “do not suspend”, “is in external code”, “is in exception backout” etc. For whatever reason, the DMV authors didn’t expose any of these flags.

I/O, exception and affinity metrics

pending_io_count is a straightforward member of the Worker instance, as is pending_io_byte_count. And pending_io_byte_average is simply a convenience column derived from the other two, saving you from having to special-case around potential division by zero.
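
In other words, the derived column boils down to arithmetic like this – a sketch of the calculation, not the actual DMV code:

#include <cstdint>

uint64_t PendingIoByteAverage(uint64_t pendingIoByteCount, uint64_t pendingIoCount) {
    return pendingIoCount == 0 ? 0 : pendingIoByteCount / pendingIoCount;
}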

It is possible for exception_num to be either a 16-bit or a 32-bit integer; which of the two applies is determined by another flag elsewhere in the worker. exception_severity lives within the worker, but additional information like exception state is found in a separate ExceptionInfo struct, which is what exception_address points to.

affinity comes straight from a Worker member, whereas processor_group is derived from an embedded NodeAffinity instance.

Timestamps

Time to talk about time again. Within SQLOS, the vast majority of “Now()” timestamps are stored as integers sourced from one of two domains. Which domain gets used is determined at service startup and – to the best of my knowledge – will not change until shutdown. The two options are:

  1. The QueryPerformanceCounter (QPC), used if an invariant timestamp is available. This is the more predictable and finely grained option, and I’d assume that it applies on most serious systems.
  2. As fallback, timer interrupt ticks can be used. These are very easy and cheap to retrieve from the KUSER_SHARED_DATA structure, but resolution is at the mercy of outside forces, and can be as bad as 15.6ms.

I have previously touched on the two options when discussing the source of getdate() in Milliseconds 10, ticks 3.

So to get to the point, all the below columns expose normalised values, derived either by applying the instance-specific SOS_PublicGlobals::sm_QueryPerformanceFrequencyBase (QPC) or a simple constant factor of 10,000 (interrupt ticks) to the underlying properties:

  • worker_created_ms_ticks
  • task_bound_ms_ticks
  • wait_started_ms_ticks
  • wait_resumed_ms_ticks
  • start_quantum
  • quantum_used
  • max_quantum
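
As a sketch of that normalisation – assuming the QPC frequency base acts as a ticks-per-millisecond divisor, and that interrupt ticks are 100ns units (hence the constant 10,000):

#include <cstdint>

uint64_t ToMsTicks(uint64_t rawTicks, bool usingQpc, uint64_t qpcTicksPerMs) {
    // qpcTicksPerMs stands in for SOS_PublicGlobals::sm_QueryPerformanceFrequencyBase
    return usingQpc ? rawTicks / qpcTicksPerMs
                    : rawTicks / 10000;    // 100ns interrupt ticks -> milliseconds
}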

Here are a few notes to pad out the official documentation.

wait_started_ms_ticks is set in SOS_Task::PreWait(), i.e. just before actually suspending, and again cleared in SOS_Task::PostWait(). For more about the choreography of suspending, see here.

wait_resumed_ms_ticks is set in SOS_Scheduler::PrepareWorkerForResume(), itself called by the mysteriously named but highly popular SOS_Scheduler::ResumeNoCuzz().

start_quantum is set for the Resuming and InstantResuming case within SOS_Scheduler::TaskTransition(), called by SOS_Scheduler::Switch() as the worker is woken up after a wait.

max_quantum is a high-water mark for the longest time the worker has spent on a single quantum, and quantum_used is the total time the worker has spent in the Running state.

Definitely the most interesting one of the bunch is end_quantum. This is a calculated field, and is simply start_quantum plus the scheduler’s quantum length, which is currently always 4ms.

What makes it interesting is that this calculation is redundant with the quantum target actually stored as a property within the Worker. This has been touched on recently by Paul Randal in a great thought-provoking blog post when he mentioned that the quantum end is stored in RDTSC ticks.

My best guess is that RDTSC came to the fore here for the sake of fine grain and very low cost. Even in the face of clock speed changes, having a bit of variation in the quantum end is probably no big deal, compared with the risk of having to use interrupt ticks with a dodgy or completely unusable accuracy. And on the cost front, the classic “is it time to yield yet?” check is really cheap to express when it’s just a case of pulling up the current TSC and comparing it with the precalculated finish line.
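
Cheap indeed. Here is a sketch of what that finish-line comparison amounts to; the parameter name is my invention:

#include <intrin.h>

bool QuantumExpired(unsigned long long quantumEndTsc) {
    return __rdtsc() >= quantumEndTsc;    // read the TSC, compare against the target
}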

Anyhow, when calculating end_quantum for the DMV, we get the additional conversion joy of leaning on SOS_PublicGlobals::sm_CpuTicksPerMillisecond, because the quantum length (at scheduler level) is expressed in CPU ticks.

Odds and ends

last_wait_type, tasks_processed_count and signal_worker_address are fairly straightforward.

context_switch_count is incremented within SOS_Scheduler::TaskTransition() in three of its cases:

  1. Suspending, the normal case where a worker is about to be switched out.
  2. InstantResuming, where the quantum was exhausted but the runnable queue is empty, so the worker gets another quantum without any switch actually taking place.
  3. SwitchToNonPreemptive, where a preemptively scheduled worker rejoins the cooperative ecosystem.

The return_code property surfaces the mechanism by which results of asynchronous calls propagate back to the calling worker. Whoever makes the waiting worker runnable again (typically upon return of a wait function) writes the result of the wait into this member while the worker remains asleep. After getting back to the runnable queue, and eventually being picked as the next worker to run, SOS_Scheduler::Switch() reads this value and returns it as the return value to the awakened worker. This may propagate through a few layers of function calls, but ultimately it will reach a function that knows what to do with it, e.g. turn a timeout result into an exception.

boost_count is an oddity for dragging a rather useful explanation of priority boosting into the official documentation. Here is my best effort at making sense of this mechanism, whereby a thread waiting on a synchronisation object gets bumped to the head of the runnable queue 1000 times in a row, but then goes to the back once before becoming eligible for a boost again.
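
That bookkeeping amounts to a capped counter; a minimal sketch with invented names:

#include <cstdint>

// Returns true if the worker should go to the head of the runnable queue.
bool ApplyBoost(uint32_t& boostCount) {
    if (boostCount < 1000) {
        ++boostCount;
        return true;     // boosted: head of the queue
    }
    boostCount = 0;      // every 1001st time: back of the queue
    return false;
}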

The idea of applying a priority boost is a familiar one from OS scheduling. This serves to avoid the priority inversion that can be caused in the below scenario:

  1. Many threads wait for a resource
  2. The low priority thread eventually gets its turn, is assigned ownership and made runnable, while others (including high-priority ones) are waiting in line behind it
  3. Because it has low priority to the scheduler, it doesn’t get scheduled
  4. Now the higher priority threads don’t get to run either, because they are waiting on something that won’t get to run and pass the baton to them

While the boosting mechanism does apply as described, we are glossing over the fact that each scheduler’s “runnable queue” may actually consist of a lot of different queues. The detail has changed between 2014 and 2016, but the omitted bit still boils down to this: Upon becoming runnable, a worker gets a place at the head or the tail of its assigned queue, but this mechanism doesn’t affect what queue it goes into. In the face of workload groups, it might still be possible to craft priority inversion.

DMV data source and iteration mechanism

Good news. This one is a lot simpler and more obvious than sys.dm_os_threads, although with a nice twist that made me rethink the object ownership hierarchy.

There isn’t a single global list of all Workers, so we start at the root of all things, the singleton NodeManager, to iterate over all SOS_Nodes. Now on a per-node basis, we indirectly find a way to iterate over Workers associated with that node.

It turns out that while workers are associated with schedulers, the relationship works one-way, and schedulers don’t keep lists of their workers, apart from the suspend queues that live within the scheduler (timer lists, IO lists, runnable queues). However, taking one step up in the hierarchy, we do find such a list in the SchedulerManager.

This makes sense when you consider that a worker starts its life being suspended in a SystemThreadDispatcher, which itself lives in a SchedulerManager with a 1:1 relationship between them. The linked list item (right at the start of the Worker object) which enlists it into the SystemThreadDispatcher is the suspend queue entry, and this is the one which moves between different suspend queues, or which belongs to no list at times when the worker is running. There is however a second linked list entry sixteen bytes into the Worker; this one’s list head is in the SchedulerManager.

Each node contains separate SchedulerManagers for normal and hidden schedulers, so the full iteration pattern for the global worker iterator goes like this:

  1. Start with the NodeManager
  2. Iterate over all the SOS_Nodes
  3. Per SOS_Node, first iterate over the workers belonging to its “regular” SchedulerManager
  4. When done with these, now iterate over workers belonging to the “hidden” SchedulerManager
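
In pseudo-C++, the traversal amounts to this; a sketch where the method names are my own:

for (SOS_Node* node : NodeManager::Instance().Nodes()) {
    for (Worker* w : node->RegularSchedulerManager()->Workers())
        EmitDmvRow(w);    // one sys.dm_os_workers row per Worker
    for (Worker* w : node->HiddenSchedulerManager()->Workers())
        EmitDmvRow(w);
}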

Here is an outline of the involved objects.

The global worker iterator

The mechanism of engaging with a retrieved Worker is less clunky than is the case for dm_os_threads, which requires the target objects to be cloned while locked. Workers have their lifetimes controlled by reference counting, so upon finding the next worker, the reference count on the worker is increased before releasing the list’s spinlock. This avoids the worker getting destroyed while we are querying it. Upon moving to the next worker, the reference count on the previous one is decremented, and – in keeping with the reference-counting contract – if this was the last reference, the worker is then destroyed and its memory deallocated.
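
A sketch of that pin-while-iterating pattern, with invented names standing in for the real ones:

Worker* NextWorker(WorkerList& list, Worker* previous) {
    list.spinlock.Acquire();
    Worker* next = list.EntryAfter(previous);
    if (next) next->AddRef();            // pin: can’t be destroyed while we query it
    list.spinlock.Release();
    if (previous && previous->Release() == 0)
        DestroyWorker(previous);         // we held the last reference
    return next;
}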

Well, there you have it for workers. Who can tell where we’ll go next?

In the footsteps of a cooperative wait

In the last two posts, I gradually climbed up the stack trace of a typical SQLOS thread. Here is such a stack trace from thread creation to context switch, omitting the actual meat of SQL Server so we can focus on the scheduling aspect.

Today I’m looking at the upper part of that map, where the thread was deep in thought, but stepped into the open manhole of a latch wait, which – as is typical for most waits – involves an EventInternal as underlying mechanism.

Slipping into SwitchContext()

The SQLOS scheduler exists in the cracks between user tasks. As we’re well aware, in order for scheduling to happen at all, it is necessary for tasks to run scheduler-friendly code every now and again. In practice this means either calling methods which have the side effect of checking your quantum mileage and yielding if needed, or explicitly yielding yourself when the guilt gets too much.

Now from the viewpoint of the user task, the experience of yielding is no different than the experience of calling any long-running CPU-intensive function: You call a function and it eventually returns. The real difference is that the CPU burned between the call and its return was spent on one or more other threads, while the current thread went lifeless for a bit. But you don’t know that, because you were asleep at the time!

Anyway, for perfectly valid reasons, in the example an EventInternal‘s Wait() method decided to go to sleep, or viewed from a different angle, to put its caller to sleep. We know how that story ends. Ultimately the Wait() call will return, but before then, the thread will snake its way into a cooperative context switch involving SignalObjectAndWait().

The recipe

The EventInternal’s Wait() function is one of a handful of blocking functions that ferry threads into the cooperative context switch – or alternatively you can view it as ferrying the CPU across workers. In SQL Server 2017, you’ll start seeing WaitableBase::Wait(), but this is mostly refactoring, or possibly even un-inlining of existing code phrasing which only now shows up in public symbols.

Getting into a context switch and back out again – i.e. eventually having the context switched back to the current thread – in the polite context of a task, includes a sequence of three function calls within Wait():

  1. SOS_Task::PreWait() – this sets up wait accounting and publishes the wait_info XEvent.
  2. SOS_Scheduler::SuspendNonPreemptive() – this sets up timeout logic and does a final check for task abort before calling SwitchContext(). The result of SwitchContext() (which is ultimately the result of its call to Switch()) will be passed back up to Wait() as the wait result.
  3. SOS_Task::PostWait() – this performs the actual wait accounting and clears the waiting status of the task.

These are outlined below:

EventInternal Wait sequence
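
Reduced to pseudo-C++, the choreography looks like this; a sketch, not the real signatures:

WaitResult Wait_sketch(SOS_Task* task, SOS_Scheduler* scheduler) {
    task->PreWait();                                    // wait accounting, wait_info XEvent
    WaitResult r = scheduler->SuspendNonPreemptive();   // -> SwitchContext() -> Switch()
    task->PostWait();                                   // finalise accounting, clear wait status
    return r;                                           // the wait result surfaced by Switch()
}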

The elusive SwitchContext() and its uncle Switch()

Okay, I was guilty of a white, or perhaps red, lie by including a call to SwitchContext() in that first diagram. Unless you have a breakpoint on that function, you probably will never see it in a stack trace. This is because it makes a tail call to Switch(), meaning the compiler transfers control to its child Switch() through a jmp instruction rather than a call, thus erasing and reusing the SwitchContext stack frame. Goto is neither dead nor particularly harmful once you’re into assembly language.
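
To illustrate the shape of it with a hypothetical pair of functions: when the call sits in tail position, an optimising compiler can lower it into a jmp, so Switch() returns directly to SwitchContext()’s caller:

int Switch(Worker* outgoing, Worker* incoming);    // may block in SignalObjectAndWait()

int SwitchContext(Worker* outgoing, Worker* incoming) {
    // ... scheduler chores elided ...
    return Switch(outgoing, incoming);   // tail position: compiled as jmp, frame reused
}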

But anyway, there is a nice delineation between the two. Switch() is the lowest-level SQLOS function where a thread may enter and block before eventually returning, and this is where the call to SignalObjectAndWait() happens. As input parameters, it receives a pointer to the current Worker and the one that must be switched to. This includes special handling for the case where the two are the same, e.g. if the worker graciously yielded due to quantum exhaustion, but was the only viable worker on the runnable queue, so got rescheduled immediately. In this case (“InstantResuming” in TaskTransition parlance) there is no mucking about with SignalObjectAndWait, and the function simply returns as soon as it can.

Otherwise, the outgoing worker is tidied up with a TaskTransition call of type “Suspending”, and the long-awaited SignalObjectAndWait ceremony is performed. Next thing it knows, SignalObjectAndWait returns because time passed, other workers ran, and Lady Luck – or if you’re lucky, Father Fairness – chose it as the next worker eligible for a quantum. At this point we get a “Resuming” TaskTransition, and the return value inscribed into the worker by its waker-upper, back when it was put back onto the runnable queue, becomes the return value of Switch() and hence SwitchContext().

However, as a last-ditch guard against spurious wakeup, should the SignalObjectAndWait call return without the prescribed sacrament of the ambient SystemThread having its LastSignalledBy set by another, we cry foul and go to sleep again using a simple WaitForSingleObject(). As of 2016, there is now even an accompanying premature_systemthread_wakeup XEvent to herald the outrage.

Working backwards then, what does SwitchContext() do? Simple. This is where all the scheduler chores (e.g. checking timer and I/O lists) happen, and crucially, where the next incoming worker is chosen from the runnable queue. Its special case is finding an empty runnable queue, at which point the scheduler’s idle worker is scheduled, which may eventually go to sleep through WaitForSingleObject() on the scheduler’s own idle event. At this point the whole scheduler would be asleep, and it will require another scheduler to wake it up by signalling that idle event.

My, how the runnable queue has grown up. You may have gathered from Bob Dorr’s 2016 updated scheduling algorithms post that the PercentileQueue used in 2012 and 2014 got replaced with something else. What we have in 2016 (and AFAIK in 2017) is the continued use of said PercentileQueue for system schedulers, but the new GroupWorkerQueue for normal ones. This is a thing of beauty, containing a linked list per resource group per scheduler, i.e. partitioned in such a way that little in the way of locking is needed. I would like to highlight that its use doesn’t affect the awarded quantum target, which remains at 4ms, but only which worker gets chosen. One day I might have enough meat to write a whole post about it…

Final thoughts

This touches upon something I scratched at in The Myth of the Waiter List, which can always do with being aired again.

For the most part, a wait list, aka suspend queue, is something that belongs to a synchronisation resource like a latch or a reader-writer lock. Apart from the timer and I/O lists, and the work dispatcher, suspend queues have nothing to do with schedulers: the synchronisation objects that own those suspend queues will move a worker from them to the relevant scheduler’s runnable queue when its time is up. The scheduler really only cares about the runnable queue, and will not waste tears or time worrying about workers suspended on synchronisation objects.

It should be clear from the context, but I have completely ignored fiber mode today. A fiber switch is a different beast entirely, and you can read more about it in The Joy of Fiber Mode.

Yes, there is some repetition from earlier posts, but I hope that covering the same ground multiple times in different ways works as well for you as it does for me. Any questions or observations? I’m all ears.

Where do SQL Server tasks come from?

In my previous post I discussed the unsung early years of a SQLOS thread. Now it’s all very well knowing that threads extend themselves with SystemThreads, don Worker outfits, and execute SOS_Tasks, but I keep glossing over where tasks come from.

Gloss no more.

SOS_Task::Param – the seed of a work request

This nested struct only shows up by name in one spot on stack traces, when we see the below call as the crossover point between thread setup boilerplate and the meat of the task:

sqldk!SOS_Task::Param::Execute

This is a simple piece of abstraction, calling a previously specified function pointer with a previously specified parameter value. Those two values are the core data members of the SOS_Task::Param. In itself, such a pair is similar to the pair one would pass to a thread creation call like CreateRemoteThreadEx(). However, among the other data members we find SQLOS-specific things like a pointer to a resource group, the XEvent version of task identity and – if applicable – a parent task. These do pad out the picture a bit.

Now the SOS_Task::Param is a comparatively small structure, but it contains the core of what a task is about. Without it, an SOS_Task is a bag of runtime state without a defined mission, and isn’t “executable”. It’s just a typed block of memory: 984 bytes in the case of SQL Server 2016 SP1, of which the Param takes up 68 bytes.
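
Pulling the description together, the Param might look broadly like this. This is conjecture: only the member roles come from the text above; the names and types are mine.

struct SOS_Task_Param_sketch {
    void (*entryPoint)(void*);    // the function pointer to execute
    void* parameter;              // the value passed to it
    ResourceGroup* group;         // Resource Governor resource group
    uint64_t xeventTaskId;        // task identity as surfaced to XEvents
    SOS_Task* parentTask;         // set when spawned by another task, else null
};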

Ultimately, getting a task run translates into this: Specify what you want done by phrasing it as an SOS_Task::Param, and get “someone” to create and enqueue a task based on it to a WorkDispatcher.

Which came first, the task or the task?

I have lately been spending some delightful hours unpicking the SQL Server boot process. For now, let’s stick with the simple observation that some tasks are started as part of booting. One example would be the (or nowadays “a”) logwriter task, whose early call stack looks like this:

sqlmin!SQLServerLogMgr::LogWriter
sqldk!SOS_Task::Param::Execute
sqldk!SOS_Scheduler::RunTask
sqldk!SOS_Scheduler::ProcessTasks
sqldk!SchedulerManager::WorkerEntryPoint
sqldk!SystemThread::RunWorker
sqldk!SystemThreadDispatcher::ProcessWorker
sqldk!SchedulerManager::ThreadEntryPoint
KERNEL32!BaseThreadInitThunk
ntdll!RtlUserThreadStart

Everything up to RunTask() is the thread starting up, getting married to a worker, and dequeuing a task from the WorkDispatcher. Apart from it being enqueued by the boot process, and the task never really completing, there is nothing special about this core piece of system infrastructure. It is just another task, one defined through an SOS_Task::Param with sqlmin!SQLServerLogMgr::LogWriter as its function pointer.

Here is one that is actually a bit more special:

sqldk!SOS_Node::ListenOnIOCompletionPort
sqldk!SOS_Task::Param::Execute
sqldk!SOS_Scheduler::RunTask
sqldk!SOS_Scheduler::ProcessTasks
sqldk!SchedulerManager::WorkerEntryPoint
sqldk!SystemThread::RunWorker
sqldk!SystemThreadDispatcher::ProcessWorker
sqldk!SchedulerManager::ThreadEntryPoint
KERNEL32!BaseThreadInitThunk
ntdll!RtlUserThreadStart

This is of course the network listener that makes it possible for us to talk to SQL Server at all. It is a bit unusual in using preemptive OS scheduling rather than the cooperative SQLOS flavour, but this only affects its waiting behaviour. What makes it very special is that this is the queen bee: it lays the eggs from which other tasks hatch.

The SQL nursery

The above system tasks wear their purpose on their sleeves, because the function pointer in the SOS_Task::Param is, well, to the point. The tasks that run user queries are more abstract, because the I/O completion port listener can’t be bothered with understanding much beyond the rudiments of reading network packets – it certainly can’t be mucking about with fluent TDS skills, SQL parsing and compilation, or permission checks.

So what it does is to enqueue a task that speaks TDS well, pointing it to a bunch of bytes, and sending it on its way. Here is an example of such a dispatch, which shows the “input” side of the WorkDispatcher:

sqldk!WorkDispatcher::EnqueueTask
sqldk!SOS_Node::EnqueueTaskDirectInternal
sqllang!CNetConnection::EnqueuePacket
sqllang!TDSSNIClient::ReadHandler
sqllang!SNIReadDone
sqldk!SOS_Node::ListenOnIOCompletionPort
...

At this point, the Resource Group classifier function will have been run, and the target SOS_Node will have been chosen. Its EnqueueTaskDirectInternal() method picks a suitable SOS_Scheduler in the node, instantiates an SOS_Task based on the work specification in the SOS_Task::Param, and the WorkDispatcher within that scheduler is then asked to accept the task for immediate or eventual dispatching. Side note: it looks as if SOS_Scheduler::EnqueueTask() is inlined within EnqueueTaskDirectInternal, so we can imagine it being in the call stack before WorkDispatcher::EnqueueTask().

In simple terms, the network listener derives a resource group and chooses a node, the node chooses a scheduler, and the scheduler delegates the enqueue to its WorkDispatcher component.
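
As pseudo-C++, with EnqueueTask() shown un-inlined and the helper names invented, the chain reads:

void EnqueueTaskDirectInternal_sketch(SOS_Node* node, SOS_Task::Param& param) {
    SOS_Scheduler* scheduler = node->ChooseScheduler();    // pick a scheduler in the node
    SOS_Task* task = NewTaskFrom(param);                   // instantiate from the work spec
    scheduler->GetWorkDispatcher()->EnqueueTask(task);     // immediate or eventual dispatch
}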

Ideally an idle worker will have been found and assigned to the task. If so, that worker is now put on its scheduler’s runnable queue by the network listener thread, taking it out of the WorkDispatcher. If no worker is available, the task is still enqueued, but in the WorkDispatcher’s wallflower list of workerless tasks, giving us our beloved THREADPOOL wait.

Let’s however assume that we have a worker. Over on the other side, its underlying thread is still sleeping, but tied to a worker which is now on its scheduler’s runnable queue. It will eventually be woken up to start its assigned task in the normal course of its scheduler’s context switching operations. And what screenplay was thrown over the wall for it to read?

sqllang!CSQLSource::Execute
sqllang!process_request
sqllang!process_commands_internal
sqllang!process_messages
sqldk!SOS_Task::Param::Execute
...

Here we get an extra level of abstraction. The instantiated task runs the TDS parser, which creates and invokes specialised classes based on the contents of the TDS packet. In this case we got a SQL batch request, so a CSQLSource was instantiated to encapsulate the text of the request and set to work on it, taking us down the road of SQL parsing, compilation, optimisation and execution.

Once parallelism comes into play, such a task could instantiate yet more tasks, but that is a story for another day.

Conclusion

From a high enough view, this is quite elegant and simple. A handful of tasks are created to get SQL Server in motion; some of these have the ability to create other tasks, and the machine just chugs on executing all of them.

To tie this up with the previous post, here is the complete lifecycle of a SQLOS thread:

The SQLOS thread lifecycle

If you haven’t lately read Remus Rusanu’s excellent Understanding how SQL Server executes a query, do pay it a visit. This post is intended to complement its opening part, although I’m placing more emphasis on the underlying classes and methods recognisable from stack traces.

Since we’ve now covered the thread lifecycle from birth into its working life, next up I’ll go back and look at what a wait feels like from the worker angle.

The early life of a SQLOS thread

So I have checked off that bucket list item of speaking at a SQLSaturday. In the process of getting my act together, I learned a thing or two about the undocumented youth of SQLOS threads, between birth and entering the workplace. And you didn’t, which seems unfair.

We normally see stack traces while looking at the top of the stack, typically during a wait, at which point the thread is wearing full worker garb, and has been executing a task for a while. Let’s today reflect on those happy times when our thread was in diapers.

Conception and birth

Threads are born because the system as a whole decides they are cute and it wants more of them. The decision is made in the SystemThreadDispatcher, which is a component of a SchedulerManager, itself a component of an SOS_Node, aka CPU node.

We can simplify this: Threads are born into nodes.

Now a thread isn’t created at the moment that it is needed, and it isn’t legally able to perform work right from birth. The idea is to have a reasonable number of grown-up threads in the population, ready to be put to work at short notice. Creation is just the first step.

Thread creation is done through a CreateRemoteThreadEx() call, within the function SystemThreadDispatcher::CreateNewSysThreadIfRequired(), which is invoked as a side task by another thread when it leaves the pool of unemployed threads.

The function pointer passed in as thread entry point is SchedulerManager::ThreadEntryPoint(), and the parameter that will be passed to that entry point is a pointer to the target node’s SchedulerManager. In other words, when the function runs, it will be a completely normal instance method call on that SchedulerManager, parameterless except for the This pointer. And since the SchedulerManager knows what node it belongs to, our newborn thread will instinctively be able to crawl into the arms of the maternal SOS_Node.
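
The creation call would then look something along these lines. A sketch: CreateRemoteThreadEx() is the documented Windows API, but the entry point name and variables are mine.

HANDLE hThread = CreateRemoteThreadEx(
    GetCurrentProcess(),            // a “remote” thread in our own process
    nullptr, 0,                     // default security attributes and stack size
    (LPTHREAD_START_ROUTINE)SchedulerManager_ThreadEntryPoint,
    pSchedulerManager,              // becomes the This pointer inside the entry point
    0, nullptr, nullptr);           // no special flags, attribute list, or thread id out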

But I am getting ahead of myself here. Before even running that entry point function, the thread creation callback registered during SQLOS boot (SystemThread::DllMainCallback()) is invoked by the OS runtime in the context of the new thread. And that gives us a SystemThread associated with the thread, meaning it has – among other things – the Windows event that will let it participate in SQLOS context switching.

So the very first thing our newborn thread, cosily wrapped up in a SystemThread, does is to enlist itself in the parent SOS_Node – and by “enlist” I literally mean adding itself to a linked list. Strictly speaking, it enlists the SystemThread, which is now SQLOS’s proxy to the thread: whenever we want to refer to a thread, we do so through a pointer to its SystemThread. Looking at it from one direction, the SystemThread contains a handle to the thread. From the other direction, any running code can find the ambient SystemThread through a thread-local storage lookup.

As it stands, the thread can’t do much useful in polite company yet, other than suspend itself. SystemThread::Suspend() is the most rudimentary of scheduling functions, just calling WaitForSingleObject() on the thread’s personal Event.

When a thread loves a Worker

ThreadEntryPoint now calls SystemThreadDispatcher::ProcessWorker() on the SOS_Node’s SystemThreadDispatcher, i.e. the one within the current SchedulerManager.

The SystemThreadDispatcher shows itself to be a dating agency, keeping separate linked lists of unattached SystemThreads and idle Workers, and pairing them off according to supply and demand.

From the viewpoint of the thread running it, ProcessWorker() means “find me an unattached Worker so we can change the world together”. If there isn’t a spare Worker at this moment though, the thread goes to sleep through the aforementioned SystemThread::Suspend() call, only to be woken up when a fresh young Worker arrives on the dating scene. This moment is celebrated by ProcessWorker() moving on to call SystemThread::RunWorker().
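
A sketch of that dating-agency logic with invented names; the real thing naturally guards these lists with proper synchronisation:

Worker* ProcessWorker_sketch(SystemThread* thread) {
    Worker* worker;
    while ((worker = idleWorkers.TryPop()) == nullptr) {
        unattachedThreads.Push(thread);
        thread->Suspend();       // WaitForSingleObject() on the thread’s personal event
    }
    return worker;               // next stop: SystemThread::RunWorker()
}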

Pairing the two up includes the SystemThread swearing a vow of loyalty to the Worker’s associated SOS_Scheduler. Up to this point, the thread was “in the SystemThreadDispatcher” and associated with an SOS_Node, but not a specific scheduler. From here onwards, the SystemThread and Worker are fully part of the family of workers for that scheduler.

We now move on to SchedulerManager::WorkerEntryPoint() which initialises the Worker, e.g. setting timestamps and the first quantum target, before invoking the first SOS_Scheduler method, ProcessTasks().

Interesting aside regarding waits: The suspension of a thread within the SystemThreadDispatcher isn’t a measured wait, because waiting is measured at the level of workers and schedulers, neither of which have yet entered the picture.

Your task, should you choose to accept it…

Moving into the family home of the Worker, the first stop within ProcessTasks() is a courtesy call on that scheduler’s WorkDispatcher. If the SystemThreadDispatcher was a dating agency for Workers and SystemThreads, the WorkDispatcher is an employment agency for those couples, pairing them up with jobs in the form of SOS_Tasks.

Entering the WorkDispatcher initially, the pair generally wouldn’t find a pending task. At this point they (though the pair is now just viewed as a Worker by the scheduler) are put to sleep through a full-fledged scheduler method, SOS_Scheduler::SuspendNonPreemptive(). This means that the Worker ends up on a suspend queue, specifically the WorkDispatcher’s list of idle workers.

When a task is lobbed over the wall into the scheduler from elsewhere, the WorkDispatcher will assign it to an idle Worker, and the worker is made runnable. In due course it will be chosen as the next worker to run, continuing with the ProcessTasks() call to run the specific function specified through the task: this is SOS_Scheduler::RunTask() into SOS_Task::Param::Execute().

The task gets executed, through all the joys and heartaches of taskhood, and if parallelism is involved, child tasks may even be spawned. Ultimately though, the task will be done, and the pair return to the WorkDispatcher’s idle list, blocked in SOS_Scheduler::ProcessTasks() but ready for the next challenge.

You want pictures? Sure.

The relationship between SOS_Node, SOS_Scheduler and their dispatching components

(For the sake of honesty, I should note that a node actually has separate SchedulerManagers for normal and hidden schedulers.)

Up next

This takes care of how tasks, workers, and threads interact – at least in thread mode, which is the only mode we probably care about. In the next blog post I will look into how tasks actually get instantiated.

Speaking at SQL Saturday Manchester

I was surprised to find it within me to submit a session abstract to SQL Saturday 645 in Manchester. Not to mention delighted when my session on SQLOS scheduling got chosen.

This will be my first time speaking at a SQL Saturday, so I’m pretty excited about the experience. The plan is to cover some fundamentals, revisit a few things I have blogged about, and add bits that have probably never been covered anywhere before.

What can possibly go wrong if I get up on stage? Here endeth the SQL section of this blog post.

This can possibly go wrong when I get up on stage

In a previous life, I spent 7.5 years as a cruise ship musician, first playing in a lounge band, then moving to the show band. I ended up as music director, leading the show band from the piano, and vaguely responsible for supervising and scheduling all music around the ship. Fortunately the latter mostly involved letting people do what they’re good at, and I wasn’t in a position of any responsibility during the below incident.

Back in the mists of early 2001, I worked on a ship that was doing a series of musical theme cruises. One particular cruise was really juicy: the star attraction was a Frank Sinatra impersonator, and our normal 7-piece band got expanded to a 20-piece big band for the occasion. The act was great, the stage was satisfyingly full, and those classic arrangements were a joy to play.

The full show had run early in the week, and regular entertainment went on as usual, except that the star singer was doing a short segment in a variety show on the last night: a magician to open, the singer in the prime spot, and then a short production number from the dance cast.

In line with dinner, which was split between two seatings, shows were done twice each night, at 20:30 and then again at 22:15. Except for the last night, when the second seating had an early 18:45 show before their meal.

Somehow it never occurred to anyone to explicitly mention the flipped showtimes to the headliner. No big deal when we didn’t see his face before curtain up, because he still had twenty minutes to get backstage. But around this time we started asking whether he knew about the early show. Frantic phone calls were made, and towards the end of the magician’s act someone managed to get him on the phone, just after he had stepped out of the shower.

With one minute to go, and the hope that he can get dressed and ready in three more, the cruise director shouted at the band to play something – anything – while we figure out what to do. The cast was roused from their backstage lazing and told they might have to go on in two.

It was around this point where things completely fell apart. Like any production deployment, stage management works on reasonable certainty and a pre-arranged set of cues. Improvisation rarely has a happy ending.

Keeping a token brave face, the cruise director did a hand-over to the band, who figured we may as well play “A-train” and see where it goes. And the curtain opened up as a half-naked dancer frantically ran across the stage, trailing bits of costume.

The singer turned up a minute after the production cast finished their bit, culminating with the stage being littered with the disgorged content of streamer cannons and unfit for further use. The cruise director, having completely lost the erstwhile brave face, had by then walked on to tell the audience that this was indeed it, and could they please swallow their disappointment as they fill in their feedback forms rating the week’s experience.

The curtain resolutely stayed down. Muffled swearing could be heard backstage. Fingers pointed impotently.

The good news

None of this will happen at SQL Saturday Manchester. I have learned about the dangers of confetti cannons, I will turn up on time, and I probably won’t be in drag.

But if you are interested in the gubbins of context switching, what a SystemThreadDispatcher really does for a living, and you don’t live too far away, do drop by.

Scheduler stories: OLEDB, the external wait that isn’t preemptive

So I’ve been having this ongoing discussion with Lonny Niederstadt (b | t) recently, where he tries to make sense of how CheckDB performs, and I fixate on a possibly irrelevant detail. In other words, he talks over my head, and I talk under his feet. Seems fair to me.

Today is the day to air my funny bits, and with some luck Lonny will continue to take us to interesting places in his own explorations.

Update: Aaaand Lonny has delivered a headful!

The thing about external waits

I’m going to go out on a limb and suggest that none of us are very clear about what external waits are. Code external to SQL Server? External to the CPU? Code that was dropped by aliens?

Previously I dug into preemptive waits in SQLOS, and to be honest, I equated “preemptive” with “external”. For the most part the two go hand in hand after all.

To recap, a preemptive wait isn’t necessarily a wait at all. What happens is that a worker needs to run some code that can’t be trusted to play by cooperative scheduling rules. And rather than put the SQLOS scheduler (and all its sibling workers) at the mercy of that code, the worker detaches itself from the scheduler and cedes control to a sibling runnable worker.

The time period from this moment until the worker returns to cooperative scheduling is labeled as a preemptive wait. During this time, one would hope that the thread did indeed sleep a bit, because it would be directly competing for CPU with its sibling through the mediation of the operating system scheduler. In other words, the time ascribed to a preemptive wait covers an unknown combination of working and sleeping.

In that blog post, I also covered the possibility of external waits getting nested: during the time where we’re executing code but counting each passing tick as external wait A, it is possible to declare another external wait B, and temporarily double count the same slice of time against both. Even more confusingly, we could temporarily dip back into cooperative scheduling while the external “wait” continues.

First, a fundamental premise. A wait type is just a label – a task name someone decided to fill in on a notional time card. Different pieces of code can use the same label for different things, or if we’re lucky, a given wait type is used in one place only, and its presence pinpoints the exact function that was running.

Today I’m only talking about the OLEDB wait as it manifests in CheckDB, although a similar story may pertain elsewhere. In case you didn’t read the title, OLEDB is an external wait type that isn’t preemptive. But what does this mean?

It means that some of what I said about preemptive waits still applies, most significantly the idea that code can actively be running while advertising itself as being in a wait. This can be seen as a slight slant on what the wait type measures: we are using an existing mechanism as a profiling tool that surfaces how much time is spent in a certain chunk of code.

However, in this case the worker doesn’t go preemptive, i.e. it doesn’t yield the scheduler to a sibling while it does the work advertised as a wait.

Here it really gets weird. This worker is cooperatively scheduled, and has a conscience that tells it not to hog the scheduler. Every so often, it will offer to yield, and if the scheduler accepts the offer the CheckDB worker will properly go to sleep, with this sleep time labeled as SOS_SCHEDULER_YIELD.

But at the same time, the clock is still ticking on the OLEDB wait!

This is an entirely new twist on double counting. We claim to be waiting, but are working, except that sometimes we do stop working, counting a bit of time against both OLEDB and SOS_SCHEDULER_YIELD.

I am not saying that this happens all the time, but this is the way that a certain branch of code is written, at least in SQL Server 2014. The OLEDB “wait” is declared within CQScanRmtScanNew::GetRowHelper(), and during this “wait” we get a call to CUtRowset::GetNextRows(), which itself calls CTRowsetInstance::FGetNextRow(). However, within GetNextRows(), after every 16 invocations a courtesy call is made to YieldAndCheckForAbort(), which may yield the scheduler with an SOS_SCHEDULER_YIELD wait.
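
The shape of that loop, as I read it – a sketch where only the function names and the every-16 cadence come from the paragraph above:

while (moreRowsNeeded) {          // inside the OLEDB “wait” declared by the scan
    if (++callCount % 16 == 0)
        YieldAndCheckForAbort();  // may sleep: SOS_SCHEDULER_YIELD accrues here,
                                  // while the OLEDB clock keeps ticking too
    FGetNextRow();                // the actual work
}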

To visualise broadly:

Visualisation of time spent and time measured

Bonus material: The little preemptive wait that wasn’t

The OLEDB wait is neatly encapsulated in an instance of the CAutoOledbWait class, which in turn contains an SOS_ExternalAutoWait, the same object used by preemptive waits.

Now if we look into SOS_ExternalAutoWait, it comes with three constructors. One gives us a bland UNKNOWN/MISCELLANEOUS wait, presumably on the historic premise that folks didn’t always want to bother categorising their wait activity. Another is fully parameterised, and can emit any supplied wait type. But the third one really catches one’s eye: it’s wired to emit PREEMPTIVE_OS_GETPROCADDRESS.

Clearly PREEMPTIVE_OS_GETPROCADDRESS must serve as a convenient “smoke break” wait type for certain callers, and I’d find it hard to believe that so many people really call GetProcAddress(). So on the premise that nothing in this external wait guarantees it is used preemptively, I am inclined to think that:

  1. When you see this particular wait, you have to read it as MISCELLANEOUS
  2. It ain’t necessarily preemptive

See you next time!

Unsung SQLOS: The SOS_UnfairMutexPair behind CMEMTHREAD waits

As with the droppings of the Questing Beast, we recognise synchronisation code paths by their emissions. But even when not leaving telltale fewmets, these creatures wander among us unseen, unsung, until shutdown doth us part.

The place of the SOS_UnfairMutexPair

Bob Dorr has done a great job of illustrating why thread-safe SQLOS memory objects need serialised access in How It Works: CMemThread and Debugging Them. This has been followed up by the “It just runs faster” coverage of dynamic memory partitioning.

Today I’ll be examining the SOS_UnfairMutexPair, the synchronisation object behind memory allocations. While I’m going to describe synchronisation as a standalone subject, it’s useful to think of it as the CMEMTHREAD wait; to the best of my knowledge nothing other than memory allocations currently uses this class.

For context, I have previously described a bunch of other synchronisation classes in earlier posts.

One picture which emerges from all these synchronisation flavours is that a developer can choose between busy waiting (eagerly burning CPU), politely going to sleep (thread rescheduling), or a blend between the two. As we’ll see the SOS_UnfairMutexPair goes for a peculiar combination, which was clearly tuned to work with the grain of SQLOS scheduling.

The shape of the object

With the exception of the central atomic lock member, everything comes in pairs here. The class models a pair of waitable objects, each having an associated pair of members to note which scheduler and task currently owns it:

Layout of the SOS_UnfairMutexPair (2016 incarnation)

Although it exposes semantics to acquire either both mutexes or just the second, in its memory allocation guise we always grab the pair, and it effectively acts as a single mutex. I’ll therefore restrict my description by only describing half of the pair.

Acquisition: broad outline

The focal point of the mutex’s state – in fact one might say the mutex itself – is the single Spinlock bit within the 32-bit lock member. Anybody who finds it zero, and manages to set it to one atomically, becomes the owner.

Additionally, if you express an interest in acquiring the lock, you need to increment the WaiterCount, whether or not you managed to achieve ownership at the first try. Finally, to release the lock, atomically set the spinlock to zero and decrement the WaiterCount; if the resultant WaiterCount is nonzero, wake up all waiters.
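
Jumping ahead to the compare-and-swap constants discussed below, the 32-bit lock member appears to pack each mutex of the pair into a 16-bit half, something like this (the field naming is mine):

// bit 31: Spinlock bit, mutex 2    bits 16-30: WaiterCount, mutex 2
// bit 15: Spinlock bit, mutex 1    bits  0-14: WaiterCount, mutex 1
constexpr uint32_t kBothLockBits  = 0x80008000;   // both Spinlock bits
constexpr uint32_t kOneWaiterEach = 0x00010001;   // +1 to both WaiterCounts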

Now one hallmark of a light-footed synchronisation object is that it is very cheap to acquire in the non-contended case, and this class checks that box. If not owned, taking ownership (the method SOS_UnfairMutexPair::AcquirePair()) requires just a handful of instructions, and no looping. The synchronisation doesn’t get in the way until it is needed.

However, if the lock is currently owned, we enter a more complicated world within the SOS_UnfairMutexPair::LongWait() method. This broadly gives us a four-step loop:

  1. If the lock isn’t taken at this moment, grab it and simultaneously increment WaiterCount, then exit triumphantly, holding aloft the prize.
  2. Fall back on only incrementing WaiterCount for now, if this is the first time around the loop and the increment has therefore not been done yet.
  3. Now wait on the EventInternal, i.e. yield the thread to the scheduler.
  4. Upon being woken up by the outgoing owner releasing the lock as described above, try again to acquire the lock. Repeat the whole loop.
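
In pseudo-C++, the loop reads roughly like this – a sketch of the four steps, with the spinning described next slotting in before the event wait:

WaitResult LongWait_sketch(UnfairMutexLock& lock, EventInternal& event) {
    bool counted = false;
    for (;;) {
        if (lock.TryGrab(/* alsoCount = */ !counted))    // step 1
            return Owned;
        if (!counted)
            counted = lock.IncrementWaiterCount();       // step 2: first pass only
        event.Wait();                                    // step 3: yield to the scheduler
        // step 4: woken by the outgoing owner; go around again
    }
}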

The unfairness derives from the fact that there is no “first come, first served” rule, in other words the wait list isn’t a queue. This is not a very British class at all, but as we’ll see, there is a system within the chaos.

The finicky detail

Before giving up and waiting on the event, there is a bit of aggressive spinning on the spinlock. As is standard with spinlocks, spinning burns CPU on the optimistic premise that it wouldn’t have to do it for long, so it’s worth a go. However, the number of spin iterations is limited. Here is a slight simplification of the algorithm:

  • If the scheduler owning the lock is the ambient scheduler, restrict to a single spin.
  • Else, give it a thousand tries before sleeping.
  • Each spin involves first trying to grab both the lock and (if not yet done) incrementing WaiterCount. If that doesn’t work, just try and increment the WaiterCount.

This being of course the bit where the class knows a thing or two about SQLOS scheduling: If I am currently running, then no other worker on my scheduler can be running. But if another worker on my scheduler currently holds the lock, it can’t possibly wake up and progress towards releasing it unless *I* go to sleep. Mind you, this is already an edge case, because we’d hope that the owner of this kind of lock wouldn’t go to sleep holding it.

To see how scheduling awareness comes into play, I’m going to walk through a scenario involving contention on such a mutex. If some of the scheduling detail makes you frown, you may want to read Scheduler stories: The myth of the waiter list.

A chronicle of contention

In this toy scenario, we have two schedulers with two active workers each. Three of the four workers will at some point during their execution try and acquire the same mutex, and one of them will try twice. Time flows from left to right, and the numbered callouts are narrated below. A red horizontal bracket represents the period where a worker owns the mutex, which may be a while after the acquisition attempt started.

The mutex acquisition tango
  1. A1 wants to acquire the mutex and, finding it uncontended, gets it straight away.
  2. B2 tries to acquire it, but since it is held by A1, it gives up after a bit of optimistic spinning, going to sleep. This gives B1 a turn on the scheduler.
  3. A1 releases the mutex, and finding that there is a waiter, signals it. This moves B2 off the mutex’s waiter list and onto scheduler B’s runnable queue, so it will be considered eligible for running at the next scheduler yield point.
  4. B1 wants the mutex, and since it isn’t taken, grabs it. Even though B2 has been waiting for a while, it wasn’t running, and it’s just tough luck that B1 gets it first.
  5. A1 wants the mutex again, but now B1 is holding it, so A1 goes to sleep, yielding to A2.
  6. B1 releases the mutex and signals the waiter A1 – note that B2 isn’t waiting on the resource anymore, but is just in a signal wait.
  7. B1 reaches the end of its quantum and politely yields the scheduler. B2 is picked as the next worker to run, and upon waking, the first thing it does is to try and grab that mutex. It succeeds.
  8. A2 reaches a yield point, and now A1 can get scheduled, starting its quantum by trying to acquire the mutex. However, B2 is still holding it, and after some angry spinning, A1 is forced to go to sleep again, yielding back to A2.
  9. B2 releases the mutex and signals the waiting A1, who will hopefully have better luck acquiring it when it wakes up again.

While this may come across as a bit complex, remember that an acquisition attempt (whether immediately successful or not) may also involve spinning on the lock bit. And this spinning manifests as “useful” work which doesn’t show up in spinlock statistics; the only thing that gets exposed is the CMEMTHREAD waiting between the moment a worker gives up and goes to sleep and the moment it is woken up. This may be followed by another bout of unsuccessful and unmeasured spinning.

All in all though, you can see that this unfair acquisition pattern keeps the protected object busy doling out its resource: in this case, a memory object providing blocks of memory. In an alternative universe, the mutex class may well have decided on its next owner at the moment that the previous owner releases it. However, this means that the allocator won’t do useful work until the chosen worker has woken up; in the meantime, the unlucky ones on less busy schedulers may have missed an opportunity to get woken up and do a successful acquire/release cycle. So while the behaviour may look unfair from the viewpoint of the longest waiter, it can turn out better for overall system throughput.

Of course, partitioning memory objects reduced the possibility of even having contention. But the fact remains: while any resources whatsoever are shared, we need to consider how they behave in contended scenarios.

Compare-and-swap trivia

Assuming we want the whole pair, as these memory allocations do, there are four atomic operations performed against the lock member:

  • Increment the waiter count: add 0x00010001
  • Increment the waiter count and grab the locks: add 0x80018001
  • Just grab the locks (after the waiter count was previously incremented): add 0x80008000
  • Release the locks and decrement the waiter count: deduct 0x80018001

For the first three, the usual multi-step pattern comes into play:

  1. Retrieve the current value of the lock member
  2. Add the desired value to it, or abandon the operation, e.g. if we find the lock bit set and we’re not planning to spin
  3. Perform the atomic compare-and-swap (lock cmpxchg instruction) to replace the current value with the new one as long as the current value has not changed since the retrieval in step 1
  4. Repeat if not successful
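
Written out with std::atomic standing in for the interlocked intrinsics, the pattern is this (a sketch):

#include <atomic>
#include <cstdint>

bool TryAtomicAdd(std::atomic<uint32_t>& lock, uint32_t delta, bool bailIfLocked) {
    uint32_t current = lock.load();                        // step 1: retrieve
    for (;;) {
        if (bailIfLocked && (current & 0x80008000))        // step 2: abandon?
            return false;
        if (lock.compare_exchange_weak(current, current + delta))
            return true;                                   // step 3: CAS succeeded
        // step 4: current was reloaded by the failed CAS; try again
    }
}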

The release is simpler, since we know that the lock bits are set (we own it!) and there is no conditional logic. Here the operation is simple interlocked arithmetic, but two’s complement messes with your mind a bit: the arithmetic operation is the addition of 0x7ffe7fff. Not a typo: that fourth digit is an “e”!

This all comes down to thinking of the lock bit as a sign bit we need to overflow in order to set to 1. The higher one overflows out of the 32-bit space, but the lower one overflows into the lowest bit of the first count. To demonstrate, we expect 0x80018001 to turn to zero after applying this operation:

    8001 8001
  + 7ffe 7fff
    ---------
(1) 0000 0000
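
In code, the release then collapses into a single interlocked add (a sketch):

void ReleasePair(std::atomic<uint32_t>& lock) {
    // adding 0x7ffe7fff == subtracting 0x80018001 modulo 2^32:
    // clears both Spinlock bits and decrements both WaiterCounts
    lock.fetch_add(0x7ffe7fff);
}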

Final thoughts

So you thought we’ve reached the end of scheduling and bit twiddling? This may turn out to be a perfect opportunity to revisit waits, and to start exploring those memory objects themselves.

I’d like to thank Brian Gianforcaro (b | t) for feedback in helping me confirm some observations.

Context in perspective 4: On the street where you live

Although today’s post is loosely related to the esoterica of multiple virtual inheritance I unearthed in part 3, it is far more straightforward and goes no further than single inheritance. Spoiler alert: people come up with clever stuff.

Part 3 was a riff on the theme of finding the address of an object, given the address of an interface (vftable) within that object. This was made possible by a combination of member offsets known at compile time and offsets explicitly stored within the object. Now we move on to discover how the address of one object can be derived from the address of another, purely by convention and without relying on magic numbers known only to the compiler or stored in runtime structures. Sounds interesting?

I have often walked down this deck before
But the water always stayed beneath my neck before
It’s the first time I’ve had to scuba dive
When I’m down on the deck where you live.

Come dive with me

In Part 2, the last example was a function which optionally called commondelete as object disposal after the class destructor:

sqllang!CSecContextToken::`vector deleting destructor':
=======================================================
     mov   qword ptr [rsp+8],rbx
     push  rdi
     sub   rsp,20h
     mov   ebx,edx
     mov   rdi,rcx
     call  sqllang!CSecContextToken::~CSecContextToken
     test  bl,1
+-+- je    LabelX
| |
| |  { if low bit of original edx was set  
| ->     mov   rcx,rdi
|        call  sqllang!commondelete
|    }
|
LabelX:
+--> mov   rax,rdi
     mov   rbx,qword ptr [rsp+30h]
     add   rsp,20h
     pop   rdi
     ret

So what does commondelete() do? Well, quite simply it is a global function that releases a previously allocated block of memory starting at the supplied address. Prior to the construction of an object, we need to allocate a chunk of memory to store it in, and after its destruction, this memory must be given back to the pool of available memory. Forgetting to do the latter leads to memory leaks, not to mention getting grounded for a week.

Notionally we can imagine a global portfolio of active memory allocations, each chunk uniquely identified by its starting address. When we want memory, we ask the global memory manager to lend us some from the unused pool, and when we’re done with it, we hand it back to that memory manager, who carefully locks its internal structures during such operations, because we should only access mutable global state in a single-threaded manner, and…. Oops. No, no, double no. That is not how SQL Server does things, right?

Okay, we know that there are actually a variety of memory allocators out there. If nothing else, this avoids the single bottleneck problem. But now the question becomes one of knowing which allocator to return a chunk of memory to after we’re done with it.

One possible solution would be for the object which constructs a child object to store its own reference to the allocator, and this parent object is then the only object who is allowed to destroy its child object. However, this yields a system where all objects have to exist in a strict parent-child hierarchy, which is very restrictive.

As it turns out, many interesting SQL Server objects control their lifetime through reference counting. If object A wants to interact with object B, it first announces that interest, which increases the reference count on B. This serves as a signal that B can’t be disposed, i.e. the pointer held by A remains valid. Once done, A releases the reference to B, which decreases B’s reference count again. Ultimately, B can only be destroyed after its reference count has reached zero. One upshot of this is that objects can live independent lives, without forever being tied to the apron strings of a parent object.
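As a minimal sketch of the pattern (my own toy code, not SQL Server’s actual classes), reference counting boils down to something like this:

#include <atomic>

// Toy intrusive reference count; the names are mine, not SQL Server's.
class RefCounted
{
    std::atomic<long> m_refCount{1};   // the creator holds the first reference

public:
    void AddRef()
    {
        m_refCount.fetch_add(1, std::memory_order_relaxed);
    }

    void Release()
    {
        // Whoever drops the count to zero does the honours.
        if (m_refCount.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete this;               // ultimately lands in commondelete()
    }

protected:
    virtual ~RefCounted() = default;   // destruction only via Release()
};

If object A wants a stake in object B, it calls B->AddRef() up front and B->Release() when done, and B quietly self-destructs after the last interested party has walked away.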

Given that, the problem becomes this: an object playing in this ecosystem must carry around a reference to its memory allocator. Doing it explicitly is conceptually simple. However, it would be onerous, because this pointer would take up an eight-byte chunk of memory within the object itself, which can be a significant tax when you have a large number of small objects. Perhaps more significantly, it raises the complexity bar to playing along, because now all objects must be born with a perfectly developed plan for their own eventual death, and they must implement standardised semantics for postmortem disposal of their bodies. That’s harsh, not to mention bug-prone.

Far better then would be a way for an outside agent to look at an object and know exactly where to press the Recycle button, although the location of that button is neither global nor stored within the target object.

Pleasant little kingdom

Imagine a perfectly rational urban design; American grid layout taken to the extreme. Say you live at #1137 Main Street and want to find the nearest dentist, dentistry being of particular importance in this kingdom. Well, the rule is simple: every house number divisible by 100 contains a dentist, and to find your local one you just replace the last two digits of your house number with zeros.

That is the relationship that objects have with their memory allocators under SQLOS, except that we are dealing with 8192-byte pages. Take the address of any object in memory, round that address down to the next lower page boundary, and you will find a page header that leads you directly or indirectly to the memory object (allocator) that owns the object’s memory; this object will happily receive the notification that it can put your chunk of memory back into the bank.
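Because 8192 is a power of two, the house-number trick (replace the trailing digits with zeros, so #1137 becomes #1100) costs just one bitwise AND. A quick sketch of the arithmetic, with an example address chosen to match what follows:

#include <cstdint>

// Round an arbitrary address down to the start of its 8192-byte
// (0x2000) page by clearing the low thirteen bits.
inline uintptr_t PageStart(uintptr_t addr)
{
    return addr & ~uintptr_t(0x1FFF);
}

// e.g. PageStart(0x40130040) == 0x40130000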

I’m skipping some detail around conditionals and an indirect path, but the following is the guts of how commondelete() takes an address and crafts a call to the owning memory object, deallocating the target object and recycling the corpse:

  mov   rdx, rcx
  and   rcx, 0FFFFFFFFFFFFE000h

  ; now rdx contains the address we want to free
  ; and rcx is the start of the containing page 

  mov   rcx, qword ptr [rcx+8]
  ; now rcx contains the 2nd qword on the page.
  ; This is the address of the memory object,
  ; aka pmo (pointer to memory object)

  mov   rax, qword ptr [rcx]
  ; 1st qword of object = pointer to vftable
  ; Now rax points to the vftable

  call   qword ptr [rax+28h]
  ; vftable is an array of (8-byte) function
  ; pointers. This calls the 6th one. Param 1
  ; (rcx) is the pmo. Param 2 (rdx) is the
  ; address we want to free.

See, assembler isn’t really that bad once you put comments in.
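And if you’d rather read it in C++, below is my reconstruction of that fast path. The struct and its slot names are inventions; only the page mask and the offsets 0x08 and 0x28 come from the disassembly:

#include <cstdint>

// Stand-in for the real memory object interface: five slots we don't
// care about here, followed by the one commondelete() actually calls.
struct MemObjSketch
{
    virtual void Slot0() = 0;            // vftable offset 0x00
    virtual void Slot1() = 0;            // 0x08
    virtual void Slot2() = 0;            // 0x10
    virtual void Slot3() = 0;            // 0x18
    virtual void Slot4() = 0;            // 0x20
    virtual void Free(void* p) = 0;      // 0x28: the Recycle button
};

void commondelete_sketch(void* p)
{
    // and rcx, 0FFFFFFFFFFFFE000h: round down to the 8KB page
    uintptr_t page = reinterpret_cast<uintptr_t>(p) & ~uintptr_t(0x1FFF);

    // mov rcx, qword ptr [rcx+8]: second qword on the page is the pmo
    auto pmo = *reinterpret_cast<MemObjSketch**>(page + 8);

    // mov rax, [rcx] / call [rax+28h]: the compiler emits exactly this
    // pair for a virtual call through the sixth vftable slot
    pmo->Free(p);
}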

Show me

Let’s find ourselves some SQLOS objects. SOS_Schedulers are sitting ducks, because you can easily find their addresses from a DMV query (the scheduler_address column of sys.dm_os_schedulers hands them over), and they don’t suddenly fly away. And of course I have explored them before. Below are a few candidates from a sad little VM:

[Screenshot: scheduler_address values returned by sys.dm_os_schedulers]

I’m getting scheduled in the morning

Got myself a few candidate addresses, so it’s time to break into song… I mean, the debugger. I’m going to dump a section of memory in 8-byte chunks, and to make it more interesting, I’ll do so with the dps command, which assumes all the memory contents are pointers and will call out any that it recognises from public symbols. Never mind that this regimented 8-byte view may be at odds with the underlying data’s shape – it’s just a convenient way to slice it.

Because I know where I’m heading, I’m rounding the starting address down to the start of that 8KB page:

0:031> dps 40130000
00000000`40130000 00010002`00000000
00000000`40130008 00000000`4001e040
00000000`40130010 00000001`00000008
00000000`40130018 00000000`00000000
00000000`40130020 00000000`40188000
00000000`40130028 00000000`00000000
00000000`40130030 00000000`00000003
00000000`40130038 00000000`000010a0
00000000`40130040 00007ffb`edbe0ff8 sqldk!ThreadScheduler::`vftable'
00000000`40130048 00000000`40120048
00000000`40130050 00000000`40080170
00000000`40130058 00000000`00000003
00000000`40130060 00000000`40080170
00000000`40130068 00000000`399d3858
00000000`40130070 00000000`400204f8
00000000`40130078 00000014`00000000

Firstly, we have a beautiful piece of confirmation that the object starting at 0040 is indeed a scheduler, because its very first quadword contains a pointer to the ThreadScheduler vftable. But we knew that already 🙂

What I’d like to know is what the second quadword on the page, at 0008, is pointing to. Interesting: the address stored there also ends in 0040…

Same thing then, let’s dump a bit of that, starting from the top of that page:

0:057> dps 4001e000
00000000`4001e000  00010002`00000000
00000000`4001e008  00000000`4001e040
00000000`4001e010  00000045`00000001
00000000`4001e018  00000000`00000000
00000000`4001e020  00000000`3a6e8000
00000000`4001e028  00000000`40188000
00000000`4001e030  00000000`00000001
00000000`4001e038  00000000`000001b0
00000000`4001e040  00007ffb`edbdc640 sqldk!CMemThread::`vftable'
00000000`4001e048  00000000`00000001
00000000`4001e050  00000000`00002000
00000000`4001e058  00000000`00000004
00000000`4001e060  00000000`40022200
00000000`4001e068  00000000`00000010
00000000`4001e070  00000000`40000280
00000000`4001e078  00000000`4001e040

Aha! On a freaking platter. The vftable symbol name confirms that the scheduler’s parent memory object is in fact a CMemThread<CMemObj>. Not only that, but if you look at the second slot, it seems that a memory object is its own owner!

But speaking of the vftable, since we’re now familiar with these creatures as simple arrays of function pointers, let’s dump a bit of it. The full thing is rather long; this is just to give you the idea:

0:057> dps 7ffb`edbdc640
00007ffb`edbdc640  00007ffb`edadc140 sqldk!IMemObj::QueryInterface
00007ffb`edbdc648  00007ffb`eda75f00 sqldk!IMemObj::AddRef
00007ffb`edbdc650  00007ffb`eda84d50 sqldk!IMemObj::Release
00007ffb`edbdc658  00007ffb`edb0ee40 sqldk!CMemThread::Alloc
00007ffb`edbdc660  00007ffb`edb0f0b0 sqldk!CMemThread::Realloc
00007ffb`edbdc668  00007ffb`eda75410 sqldk!CMemThread::Free
00007ffb`edbdc670  00007ffb`edb0f1d0 sqldk!CMemThread::GetSize
00007ffb`edbdc678  00007ffb`edb0f440 sqldk!CMemThread::DidAlloc

Recall that commondelete() calls the sixth entry, located 0x28 bytes from the start. Unsurprisingly, this is the Free() function. And look, the first three tell us that COM is spoken here.
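For what it’s worth, the symbol names let us guess at the shape of the interface. The sketch below is reconstructed purely from the vftable dump; the signatures are placeholders, and the real declaration will differ in detail:

#include <cstddef>

// Hypothetical reconstruction of the memory object interface,
// slot by slot, as implied by the dumped vftable.
struct IMemObjSketch
{
    virtual long   QueryInterface(const void* riid, void** ppv) = 0;  // 0x00
    virtual long   AddRef() = 0;                                      // 0x08
    virtual long   Release() = 0;                                     // 0x10
    virtual void*  Alloc(size_t cb) = 0;                              // 0x18
    virtual void*  Realloc(void* p, size_t cb) = 0;                   // 0x20
    virtual void   Free(void* p) = 0;                                 // 0x28
    virtual size_t GetSize(void* p) = 0;                              // 0x30
    virtual int    DidAlloc(void* p) = 0;                             // 0x38
};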

Takeaway

This is not the time to get into the details of how memory objects actually work, but we sure got a nice little glimpse of what SQLOS does with the memory space. Also, the charming side of vftables was laid bare.

Although the assembler view might look daunting, we saw how a virtual method, in this case Free(), is called, using nothing more than the address of the vftable and the knowledge that the object residing there is (or is derived from) the base type exposing that method.

But let’s not lose sight of the “Context in perspective” goal. Here the context is the address of a single object, and the perspective is seeing it framed by convention within a memory page. By its placement in memory, the object gains the all-important implicit attribute of an owning memory object.

Further reading

Bob Dorr’s classic How it works: CMEMTHREAD and debugging them is definitely going to feature when I visit memory objects again.

Indirection indigestion, virtual function calls and SQLOS

One of Slava Oks’s classic posts from the 2005 era is A new platform layer in SQL Server 2005 to exploit new hardware capabilities and their trends. I have occasionally revisited it as a kind of SQLOS manifesto, and some things which at first I found mystifying have become clearer over the years.

In many ways, it seems that the more recent SQLOSv2/SQLPAL work is a simple case of continuing with a project that has been organically evolving since the SQL Server 7 User Mode Scheduler, and rooted in classic Stonebraker: just how far can we assume control of core OS functions within an RDBMS process?

Scheduler stories: The myth of the waiter list

‘Tis the season to be controversial, so let’s take a stroll down memory lane to Ken Henderson’s classic Inside the SQL Server 2000 User Mode Scheduler:

The waiter list maintains a list of workers waiting on a resource. When a UMS worker requests a resource owned by another worker, it puts itself on the waiter list for the resource and enters an infinite wait state for its associated event object. When the worker that owns the resource is ready to release it, it is responsible for scanning the list of workers waiting on the resource and moving them to the runnable list, as appropriate. And when it hits a yield point, it is responsible for setting the event of the first worker on the runnable list so that the worker can run. This means that when a worker frees up a resource, it may well undertake the entirety of the task of moving those workers that were waiting on the resource from the waiter list to the runnable list and signaling one of them to run.


The lists behind the legend

I have not gone as far as opening up my rusty copy of SQL Server 2000 to see how Ken’s description fits in there, but I am now pretty certain that the above quote has transmuted over the years into a common misunderstanding about SQLOS scheduling mechanics.

Now nothing Ken said is untrue or particularly out of date. It is just that we often hear “the waiter list” (by implication handling resource waits) described as an attribute of a scheduler, which is not the case.

Let’s revisit when the scheduler code runs, and what it does:

  • A worker will yield, either because it needs to wait for a resource, or because it is eaten up with guilt over reaching the end of its allotted quantum.
  • The act of yielding means that scheduler code (methods on the SOS_Scheduler class) gets invoked.
  • After a bit of housekeeping for the common good of all workers sharing the scheduler, control is transferred back to a worker to do its thing – this may even be the same worker who originally yielded.
  • The housekeeping consists of checking for aborted tasks, processing pending I/Os, and checking for I/O completions and timer list timeouts.

The single most important list that a scheduler owns is the collection of runnable workers, that is, the subset of workers belonging to this scheduler who are not waiting for anything other than CPU. This has variously been described as a list and a queue; I shall be using the term “runnable queue” by convention, but be aware that it is a data structure that has changed over the years and isn’t a simple queue.

A scheduler has one piece of “creative” interaction with this runnable queue, and it comes with only two degrees of freedom:

  • When a context switch is requested by an outgoing worker owning the scheduler, the scheduler code has to choose which one of potentially multiple workers is going to be its next owner.
  • The incoming worker gets given a quantum expiry date, by which time it is expected to yield.

Core scheduler code running during context switching only dequeues runnable workers, and at such moments a given scheduler only looks at its own runnable queue. In contrast, code running all over the place, including in the context of workers belonging to other schedulers, may enqueue workers on to the runnable queue.
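To make the asymmetry concrete, here is a toy model, entirely my own; as noted above, the real runnable queue is a more sophisticated structure than a mutex-guarded deque:

#include <deque>
#include <mutex>

class Worker;

class ToyRunnableQueue
{
    std::mutex          m_lock;
    std::deque<Worker*> m_workers;

public:
    // Anyone, running anywhere, may make a worker runnable...
    void Enqueue(Worker* w)
    {
        std::lock_guard<std::mutex> guard(m_lock);
        m_workers.push_back(w);
    }

    // ...but only the owning scheduler, during a context switch,
    // ever takes a worker off the queue.
    Worker* DequeueNextOwner()
    {
        std::lock_guard<std::mutex> guard(m_lock);
        if (m_workers.empty())
            return nullptr;
        Worker* next = m_workers.front();
        m_workers.pop_front();
        return next;
    }
};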

Time for a simple diagram:

[Diagram: each waitable resource keeps its own wait list of workers; the scheduler owns only the runnable queue]

Someone to watch over me

What I’m trying to get across here is that each instance of a waitable resource has its own wait list, and the scheduler has no interest in this, because a scheduler only acts upon its runnable queue. Seen from a different angle, once a worker is waiting on a resource, its scheduler doesn’t care, because it can’t and won’t manage the waiting logic of something like a latch. This splits the responsibilities neatly in two:

  • The synchronisation class guarding a resource (which inevitably will be built upon an EventInternal) stands watch over all the workers queueing up to have a ride on that resource. The act of granting access to a worker involves moving the worker off the wait list and on to the runnable queue of that worker’s scheduler, and this is achieved by the synchronisation class (see the sketch after this list).
  • The scheduler, in turn, doesn’t decide who is runnable, but it does get to pick which of the runnable workers (however they reached that state) runs next.
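Here is that division of labour as a toy sketch, mine alone: CurrentWorker(), MakeRunnable() and SuspendCurrentWorker() are assumed helpers, and the real EventInternal-based classes deal with races this toy cheerfully ignores:

#include <deque>

class Worker;
Worker* CurrentWorker();          // assumed: the worker running right now
void MakeRunnable(Worker* w);     // assumed: enqueue on w's scheduler's
                                  // runnable queue
void SuspendCurrentWorker();      // assumed: yield until signalled

// The wait list lives with the resource, not with any scheduler.
class ToyResource
{
    bool                m_held = false;
    std::deque<Worker*> m_waitList;

public:
    void Acquire()
    {
        while (m_held)
        {
            m_waitList.push_back(CurrentWorker());  // join the queue
            SuspendCurrentWorker();                 // scheduler forgets us
        }
        m_held = true;
    }

    void Release()
    {
        m_held = false;
        if (!m_waitList.empty())
        {
            // The releasing worker, whichever scheduler it belongs to,
            // moves the first waiter on to that waiter's runnable queue.
            Worker* next = m_waitList.front();
            m_waitList.pop_front();
            MakeRunnable(next);
        }
    }
};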

The I/O and timer lists

There are however two cases where the scheduler decides to make a worker runnable in the normal course of events. One is when a worker was waiting on I/O to complete, where periodic scheduler housekeeping is the mechanism by which SQLOS takes note of the I/O completion. At this point some workers who were on the I/O list may find themselves moved to the runnable queue just before the next worker is picked to be granted ownership of the scheduler – the lucky winner might be one of these workers, or it may be someone else who has been runnable for a while.

The second, and actually more interesting case, is the timer list. In its simplest use case, this is where you will find workers executing T-SQL WAITFOR statements. The list is neatly ordered by timer expiry date, and at each invocation of the scheduler context-switch housekeeping, workers whose timer expiry dates have now passed will be moved to the runnable queue.
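In toy form (again my own sketch, nothing like the real WorkerTimerRequest machinery), that housekeeping step could look like this:

#include <cstdint>
#include <map>

class Worker;
void MakeRunnable(Worker* w);     // assumed helper, as before

// Toy timer list, kept ordered by expiry so that housekeeping only
// ever needs to look at the front.
class ToyTimerList
{
    std::multimap<uint64_t, Worker*> m_byExpiry;

public:
    void Add(uint64_t deadline, Worker* w)
    {
        m_byExpiry.emplace(deadline, w);
    }

    // Invoked during scheduler context-switch housekeeping.
    void WakeExpired(uint64_t now)
    {
        while (!m_byExpiry.empty() && m_byExpiry.begin()->first <= now)
        {
            Worker* w = m_byExpiry.begin()->second;
            m_byExpiry.erase(m_byExpiry.begin());
            MakeRunnable(w);      // for a WAITFOR, business as usual
        }
    }
};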

What makes a timer list particularly interesting though, is when it implements a resource wait timeout, for instance a lock timeout. In this scenario we actually have a worker waiting on two things simultaneously: a resource and a timer. If the resource is acquired before the timer expires, all is good: the worker goes on to the runnable queue, and upon being woken up it finds a thumbs-up as the return value of its resource acquisition call.

However, should the timer expire before the resource has been acquired, the scheduler will actually venture forth and take the worker off that waiter list before making it runnable and setting an error return value as a wake-up call. Think of it as every teenager’s worst nightmare: you’re not home by curfew, so Mom comes to your dodgy party to drag your sorry ass home. And then you wake up with a hangover and a note stuck to your forehead reading “No cake for you”.

Whither next?

I tried to keep this comparatively high-level, but might take a nice little detour into the WorkerTimerRequest some day if time permits.

There you have it. Be home on time and have a thread-safe festive season.