Dispatch Queues
Dispatch queues are a convenient and easy-to-use mechanism for implementing concurrency, timers and signal monitoring in your application. They allow you to focus on writing the code that should be executed and on the high-level design of how concurrency should be structured in your app, rather than having to think about how to implement concurrency from scratch and safely.
A dispatch queue provides a FIFO work queue, lets you schedule one-shot and repeating timers, and can monitor signals on your behalf. The dispatch queue automatically acquires virtual processors as needed and releases them when they are no longer needed.
There are two kinds of dispatch queues: serial and concurrent. A serial queue is a dispatch queue which runs all work items, timers and signal monitors in a serial fashion. This means that at most one of these items is executing at any given time; the dispatcher will never run them in parallel. A concurrent queue, on the other hand, allows multiple items to run at the same time. How many items can run at the same time can be controlled with a dispatch queue API.
An application may set up a main dispatch queue. This is a dispatch queue that takes over the main virtual processor of a process and manages it for the rest of the process's lifetime. You have to explicitly kick off the main dispatch queue if your application needs one.
You must create a dispatch queue before you can use any of the other dispatch queue APIs. You do this by filling out a dispatch_attr_t structure, which you then pass to the dispatch_create() function to create the dispatch queue. The dispatch attribute structure allows you to configure the minimum and maximum number of virtual processors the queue should manage. The queue will acquire and maintain the minimum number of virtual processors and it will acquire at most the maximum number. However, it is free to relinquish any and all virtual processors above the minimum concurrency number when they are not currently needed.
A dispatch queue belongs to a quality of service class and it has a priority. The quality of service class controls the general dispatching and scheduling behavior of the dispatch queue. The priority of a dispatch queue is with respect to its quality of service class. A higher priority will allow a queue to react faster to an incoming event compared to another queue of the same class which features a lower priority.
The following example code shows a simple way to create a new serial dispatch queue:
```c
dispatch_attr_t attr = DISPATCH_ATTR_INIT_SERIAL_INTERACTIVE;
dispatch_t dq = dispatch_create(&attr);
```
It is generally a good idea to initialize the dispatch attributes for your new queue with the help of the DISPATCH_ATTR_INIT_SERIAL_INTERACTIVE macro since this macro takes care of setting up all fields of the attribute structure with reasonable defaults. You can then adjust fields of the attribute structure as needed.
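As a sketch of that pattern, the snippet below tweaks the attribute structure between initialization and queue creation. Note that the field names used here (minConcurrency and maxConcurrency) are assumptions for illustration; consult the dispatch_attr_t declaration for the actual field names.

```c
// Start from reasonable defaults, then adjust individual fields.
dispatch_attr_t attr = DISPATCH_ATTR_INIT_SERIAL_INTERACTIVE;

// Hypothetical field names: the attribute structure is documented to hold
// the minimum and maximum number of virtual processors, but the exact
// field names may differ in your SDK headers.
attr.minConcurrency = 1;
attr.maxConcurrency = 1;

dispatch_t dq = dispatch_create(&attr);
```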
If instead of a serial dispatch queue, you want to create a concurrent dispatch queue with a certain amount of concurrency then you would use the following code instead:
```c
dispatch_attr_t attr = DISPATCH_ATTR_INIT_CONCURRENT_UTILITY(4);
dispatch_t dq = dispatch_create(&attr);
```
This sample code creates a concurrent dispatch queue with a maximum concurrency of 4 and a minimum concurrency of 1. This means that the queue will always hold on to at least one virtual processor and it will acquire and use up to 4 virtual processors when needed.
There are many different ways to schedule work on a dispatch queue. You can either create a dispatch item object and schedule it for execution or you can use one of the convenience APIs to schedule a simple function for execution later. Dispatch queues maintain a FIFO work item queue and they try to execute newly scheduled work as soon as possible.
Note that dispatch queues treat dispatch items as delayed function invocation objects. What this means is that a queue will never copy a dispatch item and that it will never use it more than once. It also means that the dispatch queue becomes the owner of a dispatch item as soon as it is scheduled for execution and it remains the owner of the item until it has finished execution.
A dispatch item is represented by the dispatch_item data structure. You create an application specific dispatch item by embedding this data structure as the first field in your own data structure. The dispatch item base data structure stores a pointer to the function that should be executed and a pointer to an optional 'retire' function. The dispatch queue executes the retire function after the dispatch item function has finished execution in order to free the resources associated with the dispatch item.
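The following sketch shows how such an application-specific dispatch item might look. The func and retireFunc field names follow the signal monitor example later in this document; the payload field and the my_item name are illustrative.

```c
// An application-specific dispatch item: the dispatch_item base structure
// must be embedded as the first field so the queue can treat a pointer to
// my_item as a pointer to a dispatch_item.
struct my_item {
    struct dispatch_item    base;       // must be the first field
    int                     payload;    // application-specific state
};

static void my_item_func(struct my_item* _Nonnull item)
{
    // The queue invokes this with a pointer to the whole my_item structure
    printf("payload = %d\n", item->payload);
}

struct my_item* item = calloc(1, sizeof(struct my_item));
item->base.func = (dispatch_item_func_t)my_item_func;
item->base.retireFunc = (dispatch_retire_func_t)free;  // frees the whole item
item->payload = 42;
```

Because the queue owns the item from the moment it is scheduled until the retire function has run, the item must be heap-allocated and must not be touched by the application while it is pending.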
The following sample code shows how to easily schedule the execution of a function using the asynchronous function convenience API:
```c
static void OnAsync(void* _Nonnull ign)
{
    ...
}

dispatch_async(dq, (dispatch_async_func_t)OnAsync, NULL);
```
The dispatch queue dq will execute the function OnAsync asynchronously as soon as possible. Here 'executing asynchronously' means that the dispatch_async function call will potentially return before the OnAsync function has had a chance to run.
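Because the caller may return before the function runs, any context passed to dispatch_async must stay valid until the function has executed. A common pattern, sketched below under that assumption, is to heap-allocate the context and let the asynchronous function free it:

```c
// Illustrative context structure; the 'greeting' name is hypothetical.
struct greeting {
    char text[32];
};

static void OnGreet(void* _Nonnull ctx)
{
    struct greeting* g = ctx;

    puts(g->text);
    free(g);    // the asynchronous function owns the context and frees it
}

struct greeting* g = malloc(sizeof(struct greeting));
strcpy(g->text, "hello");
dispatch_async(dq, (dispatch_async_func_t)OnGreet, g);
// Do not touch 'g' here: OnGreet may already have run and freed it.
```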
Sometimes you want to invoke a function on a dispatch queue and wait for the function call to complete. You can achieve this with the following kind of code:
```c
static void OnSync(void* _Nonnull ign)
{
    ...
}

dispatch_sync(dq, (dispatch_async_func_t)OnSync, NULL);
```
The dispatch_sync function schedules the execution of the OnSync function and it blocks its caller until the OnSync function has finished executing.
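Because dispatch_sync blocks until the function has finished, the context can safely live on the caller's stack and can be used to return a result. The structure and function names below are illustrative:

```c
// Context lives on the caller's stack; dispatch_sync guarantees it stays
// valid for the whole execution of the function.
struct sum_ctx {
    int a, b;
    int result;
};

static void OnSum(void* _Nonnull ctx)
{
    struct sum_ctx* s = ctx;

    s->result = s->a + s->b;
}

struct sum_ctx ctx = { .a = 2, .b = 3 };
dispatch_sync(dq, (dispatch_async_func_t)OnSum, &ctx);
// ctx.result is valid here because dispatch_sync has returned
```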
Dispatch queues support the scheduling of one-shot and repeating timers. You simply specify a function for your timer and the necessary timeout values. Timeout values can be specified in relative or absolute time units. Relative time units means that the timer should run 'in D seconds' while absolute time units means that the timer should run 'at time T'. The dispatch queue will try to execute a timer as closely to the specified timeout time as possible. Note however that a timer may execute slightly later than requested.
The following code shows how to schedule a one-shot and a repeating timer:
```c
static void OnTimer(void* _Nonnull ign)
{
    ...
}

// Runs 'OnTimer' starting 250ms in the future and then every 250ms thereafter
struct timespec DELAY_250MS;
timespec_from_ms(&DELAY_250MS, 250);
dispatch_repeating(dq, 0, &DELAY_250MS, &DELAY_250MS, (dispatch_async_func_t)OnTimer, NULL);

// One-shot timer which will fire in 5 seconds
struct timespec DELAY_5S;
timespec_from_sec(&DELAY_5S, 5);
dispatch_after(dq, 0, &DELAY_5S, (dispatch_async_func_t)OnTimer, NULL);
```
You would use the TIMER_ABSTIME flag to specify an absolute timeout value.
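The sketch below reuses the OnTimer function from the example above to schedule against an absolute deadline. The use of clock_gettime with CLOCK_MONOTONIC to obtain the current time is an assumption about the platform; substitute whatever clock API your system pairs with TIMER_ABSTIME.

```c
// Compute an absolute deadline of "now + 5 seconds" and schedule against it.
struct timespec deadline;
clock_gettime(CLOCK_MONOTONIC, &deadline);  // assumed platform clock API
deadline.tv_sec += 5;

dispatch_after(dq, TIMER_ABSTIME, &deadline, (dispatch_async_func_t)OnTimer, NULL);
```

Absolute timeouts are useful when you want a timer to fire at a fixed point in time regardless of how long the setup code before the dispatch_after call takes to run.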
A dispatch queue allows you to easily monitor a signal. You schedule a signal monitor with a signal handler function and the dispatch queue will then automatically monitor the signal and invoke your signal handler when the signal is sent to the dispatch queue.
Every dispatch queue manages a virtual processor group. Each virtual processor group in a process is identified by a virtual processor group id. You can use the dispatch_signal_target function to get the virtual processor group id of a dispatch queue. Pass this id to the system component from which you want to receive a signal when an interesting state change occurs.
You use the dispatch_alloc_signal API to allocate a signal, or to reserve a specific signal, for use in the context of a dispatch queue. You use the dispatch_free_signal API to inform the dispatch queue that the signal is no longer needed.
The following sample code shows how to allocate a signal and how to set up a signal monitor for this signal:
```c
static void OnSignal(void* _Nonnull ign)
{
    ...
}

int signo = dispatch_alloc_signal(dq, 0);
dispatch_item_t item = calloc(1, sizeof(struct dispatch_item));
item->func = (dispatch_item_func_t)OnSignal;
item->retireFunc = (dispatch_retire_func_t)free;
dispatch_signal_monitor(dq, signo, item);
```
The signal monitor will stay in effect until it is cancelled. You can send a signal to a dispatch queue by taking advantage of the dispatch_send_signal convenience function.
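Continuing the example above, a sender could trigger the monitor like this. The exact signature of dispatch_send_signal (queue plus signal number) is an assumption, as is the pairing of dispatch_free_signal with the same arguments:

```c
// Deliver the allocated signal to the queue; the monitor's OnSignal
// function will be invoked on the queue in response.
dispatch_send_signal(dq, signo);

// Later, once the signal is no longer needed, give it back to the queue.
dispatch_free_signal(dq, signo);
```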
Work items and one-shot timers stay active until they have finished execution; repeating timers and signal monitors stay active until they are explicitly cancelled. You can cancel any kind of item by calling the dispatch_cancel_item function with the work item, or the dispatch_cancel function with the item function. You can also cancel an item from inside the item function by calling the dispatch_cancel_current_item function. This is especially useful for repeating timers and signal monitors.
Note that canceling is cooperative: the cancel function will unschedule a scheduled item, but it will not unblock the item function if the item function is currently sitting in a blocking system call.
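A repeating timer can cancel itself once it has done its job. The sketch below assumes dispatch_cancel_current_item takes no arguments, since it acts on the item currently executing:

```c
// A repeating timer function that stops itself after ten invocations.
static void OnTick(void* _Nonnull ign)
{
    static int count = 0;

    if (++count == 10) {
        // Cancels the repeating timer from inside its own timer function;
        // the queue will not schedule it again after this invocation.
        dispatch_cancel_current_item();
    }
}
```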
You can get a handle to the main dispatch queue of an application at any time by calling the dispatch_main_queue function. Use this function to get the main dispatch queue so that you can schedule work items, timers and signal monitors on it. Once at least one work item, timer or signal monitor has been scheduled, you should invoke the dispatch_run_main_queue function from the main virtual processor of the application to kick off the main dispatch queue. Note that this function will never return - you have to call exit from one of the scheduled functions to terminate the application.
The following sample code shows how to get a main dispatch queue going:
```c
static void game_loop(void* ctx)
{
    ...
}

int main(int argc, char *argv[])
{
    // Run the game loop roughly 60 times per second
    struct timespec game_loop_delay;
    timespec_from_ms(&game_loop_delay, 16);

    dispatch_repeating(dispatch_main_queue(), 0, &TIMESPEC_ZERO, &game_loop_delay, (dispatch_async_func_t)game_loop, NULL);
    dispatch_run_main_queue();
    /* NOT REACHED */
    return 0;
}
```
The main function first schedules a repeating timer for a game loop on the main dispatch queue. Next it invokes the dispatch_run_main_queue function to start the dispatch queue running. This function will never return.
Terminating and freeing a dispatch queue is a multi-step process. The first step is to invoke the dispatch_terminate function. This function initiates the termination process on a dispatch queue and it stops the dispatch queue from accepting new work. Note however that already scheduled work and currently executing work is allowed to finish its execution. You use the dispatch_await_termination function next to wait for the dispatch queue to be finished with its termination work. This function blocks its caller until all still executing work items are done and the queue has reached a resting state.
Finally you call the dispatch_destroy function to free all resources occupied by the dispatch queue. Note that you can not terminate and destroy a main dispatch queue. Once created, a main dispatch queue continues to live as long as its hosting process continues to live.
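The three steps described above can be sketched as the following teardown sequence, assuming each function takes the queue as its sole argument:

```c
// 1. Stop accepting new work; already scheduled and currently executing
//    work is still allowed to finish.
dispatch_terminate(dq);

// 2. Block the caller until all in-flight work has drained and the queue
//    has reached a resting state.
dispatch_await_termination(dq);

// 3. Free all resources occupied by the queue. 'dq' must not be used
//    after this call.
dispatch_destroy(dq);
```

Skipping the await step and destroying the queue while work is still executing would free resources out from under running items, so the three calls should always be made in this order.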