I am Boris Dobretsov, and this is the fifth part of the series Understanding Parallel Programming: A Guide for Beginners.
If you haven’t read the first four parts, have a look at Understanding Parallel Programming: A Guide for Beginners, Understanding Parallel Programming: A Guide for Beginners, Part II, Understanding Threads to Better Manage Threading in iOS, Understanding Parallel Programming: Thread Management for Beginners.
In our previous lesson, we explored key concepts like threads, processes, event loops, asynchronous programming, and multithreading. These fundamentals are enough to start splitting code into threads. However, managing threads directly with the Thread class is cumbersome: you need to track the number of threads, divide work across them, add tasks, synchronise data, and monitor thread completion – all without standard tools to streamline the process.
To address these challenges, Apple introduced Grand Central Dispatch (GCD). This library encapsulates thread management entirely, offering a simple interface based on queues and tasks instead.
The GCD Approach: Queues and Tasks
GCD operates on the principle of organising blocks of code (tasks) into queues. These tasks can run in parallel or sequentially, depending on the type of queue. The execution speed of queues varies. Tasks enqueued in the main queue always run on the main thread, while others are distributed across available threads by GCD.
Here’s an example of executing a block of code on a different thread using GCD:
DispatchQueue.global().async {
    for _ in 0..<10 {
        print("😈")
    }
}
Here, we fetch a global queue, call its async method, and pass a closure that executes on that queue. At first glance, this might seem similar to Thread. However, as we'll see, GCD offers much more.
It’s crucial to understand that a queue is not equivalent to a thread. Tasks in the same queue may execute on different threads, and tasks from different queues might share the same thread.
Understanding GCD Queues
GCD provides six standard queues in every iOS app: one main queue for UI-related tasks and five additional global queues with different priority levels.
DispatchQueue.main
The main queue is serial and used for all UI operations. Any function or closure that touches the UI must be enqueued here. It has the highest priority of all the queues.

DispatchQueue.global(qos: .userInteractive)
This queue handles tasks requiring immediate user interaction, such as animations. These tasks should execute quickly (e.g., recalculating something as a user moves their finger across the screen). Its priority is high, but lower than the main queue's.

DispatchQueue.global(qos: .userInitiated)
Designed for tasks initiated by the user but not tied to immediate interaction, such as loading data or running calculations needed for the next step in a process. These tasks might take a few seconds and have a slightly lower priority.

DispatchQueue.global(qos: .utility)
Ideal for tasks like downloading files or cleaning up a database, where immediate feedback isn't necessary. Tasks here may take seconds to minutes and run at moderate priority.

DispatchQueue.global(qos: .background)
This queue is for background work that isn't time-sensitive, such as backups or syncing data. These tasks have the lowest priority and run only when the system isn't busy with higher-priority work.

DispatchQueue.global(qos: .default)
The default queue's priority sits between userInitiated and utility. If you call DispatchQueue.global() without specifying a QoS, this is the queue you get.
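To see the four global QoS levels side by side, here is a minimal sketch (a command-line script, not UI code) that dispatches one task to each queue and uses a DispatchGroup to wait for them all:

```swift
import Dispatch

let group = DispatchGroup()

// Dispatch one task to each global QoS queue.
for qos: DispatchQoS.QoSClass in [.userInteractive, .userInitiated, .utility, .background] {
    DispatchQueue.global(qos: qos).async(group: group) {
        print("running at \(qos)")
    }
}

// Block the current thread until every task has finished.
// Fine in a script; avoid calling wait() on the main thread of a UI app.
group.wait()
```

Because the queues differ only in priority, all four tasks complete; the order in which they print depends on how the system schedules them.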
Comparing Queue Priorities with Examples
Let’s see these queues in action:
DispatchQueue.global(qos: .userInteractive).async {
    for _ in 0..<10 { print("😇") }
}

DispatchQueue.main.async {
    for _ in 0..<10 { print("😈") }
}
In this example, the two tasks' output interleaves almost evenly, because system resources are sufficient to run both at once (the exact interleaving varies between runs):
😇
😈
😇
😈
...
Now, let’s add two more queues with different priorities:
DispatchQueue.global(qos: .userInteractive).async {
    for _ in 0..<10 { print("😇") }
}

DispatchQueue.main.async {
    for _ in 0..<10 { print("😈") }
}

DispatchQueue.global(qos: .userInitiated).async {
    for _ in 0..<10 { print("👻") }
}

DispatchQueue.global(qos: .utility).async {
    for _ in 0..<10 { print("👽") }
}
Output (exact order may vary):
😇
😈
👻
👽
😇
😈
...
👽
Here, tasks from higher-priority queues dominate, while lower-priority tasks execute later.
Serial vs. Concurrent Queues
- Serial Queues execute tasks one at a time, in the order they are enqueued.
- Concurrent Queues can execute multiple tasks simultaneously if resources are available.
Example of serial execution:
let serialQueue = DispatchQueue(label: "mySerialQueue")
serialQueue.async { print("A") }
serialQueue.async { print("B") }
serialQueue.async { print("C") }
Output:
A
B
C
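Because a serial queue runs one task at a time, it also doubles as a simple synchronisation tool: state that is only ever touched from that queue can never be modified by two threads at once. A minimal sketch (the queue label is arbitrary):

```swift
import Dispatch

let counterQueue = DispatchQueue(label: "counter")
var counter = 0

// All mutations go through the serial queue, so increments never overlap.
for _ in 0..<100 {
    counterQueue.async { counter += 1 }
}

// sync enqueues a task and waits for it; since the queue is serial and FIFO,
// all 100 increments have finished by the time this closure runs.
counterQueue.sync { print(counter) } // prints 100
```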
Example of concurrent execution:
let concurrentQueue = DispatchQueue(label: "myConcurrentQueue", attributes: .concurrent)
concurrentQueue.async { print("A") }
concurrentQueue.async { print("B") }
concurrentQueue.async { print("C") }
Output (may vary):
B
A
C
Sync vs. Async
The sync and async methods control how a task executes relative to the calling code: async returns immediately and lets the caller continue, while sync blocks the caller until the task finishes:
DispatchQueue.global().async {
    print("😈")
}

print("😇")
Output:
😇
😈
Here, async returns immediately, so the main thread prints 😇 without waiting for the background task (the exact order can vary, since the two run in parallel). Compare this with sync:
DispatchQueue.global().sync {
    print("😈")
}

print("😇")
Output:
😈
😇
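With sync, the calling thread waits for the closure to finish, which is why 😈 is always printed first. A useful consequence: sync forwards the closure's return value to the caller, making it convenient for reading a result computed on another queue. A small sketch (queue label and values are arbitrary):

```swift
import Dispatch

let worker = DispatchQueue(label: "worker")

// The caller blocks until the closure finishes, then receives its result.
let answer = worker.sync { 6 * 7 }
print(answer) // prints 42
```

One caution: never call sync on the queue you are already running on – a serial queue would end up waiting for itself forever. We'll look at deadlocks like this in the next part.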
Creating Custom Queues
Beyond the built-in queues, you can create your own:
let myQueue = DispatchQueue(label: "myQueue")

myQueue.async {
    for _ in 0..<10 { print("😈") }
}
Custom queues are serial by default but can be configured for concurrency:
let myConcurrentQueue = DispatchQueue(
    label: "myQueue",
    qos: .utility,
    attributes: .concurrent
)
Custom queues allow better organization of complex multithreaded code.
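As a usage sketch, here is a hypothetical image-processing queue (the reverse-DNS label is a common Apple convention, not a requirement), combined with a DispatchGroup to detect when all the work is done:

```swift
import Dispatch

// Hypothetical example: the label and the "work" are placeholders.
let imageQueue = DispatchQueue(
    label: "com.example.app.imageProcessing",
    qos: .utility,
    attributes: .concurrent
)

let group = DispatchGroup()
for i in 0..<3 {
    imageQueue.async(group: group) {
        print("processing image \(i)") // order may vary: the queue is concurrent
    }
}

// Wait until all three tasks complete (avoid blocking the main thread in a UI app).
group.wait()
print("done")
```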
Conclusion
Today we explored how GCD simplifies task management in iOS, enabling efficient multithreading with queues and tasks. Understanding the difference between global and custom queues, serial and concurrent execution, and synchronous versus asynchronous calls helps you optimise the app’s performance.
Next time, we’ll tackle synchronisation challenges like race conditions and deadlocks, uncovering best practices to ensure smooth, error-free multithreaded operations. Stay tuned!