
7.1. Synchronization Primitives

Timeline of major CSF topics with Multicore and Threads highlighted

“It’s not an idea until you write it down.”

Ivan Sutherland

The resurgence of interest in multithreading in the late 1990s made synchronization a core topic once again. When multiple threads share an address space, they must coordinate so that their memory accesses do not conflict. Edsger Dijkstra's THE operating system introduced semaphores as a software construct in 1968, and the POSIX.1c-1995 standard later defined a portable interface for them. In addition, higher-level primitives have made synchronization more manageable as a key infrastructure component for concurrent software.


Chapter Objectives


In this chapter, we will address the following instructional objectives:

  • We will justify the need for synchronization primitives as solutions to race conditions and timing constraints.
  • We will compare and contrast primitive synchronization mechanisms and what problems they solve.
  • We will examine code using the POSIX thread library’s synchronization primitive implementations.
  • We will identify the conditions that lead to deadlock, a difficult failure condition that can arise in synchronized software.

Concurrent programs introduce a new class of software bugs called race conditions. Race conditions arise whenever the timing of multiple threads of execution determines the outcome. The problem is that the scheduling of threads is beyond the control of both the programmer who wrote the code and the user running the program.

The scheduling is determined by the operating system kernel, a special program that manages the system’s resources. In modern general-purpose computer systems, there may be hundreds or thousands of threads that must take turns getting access to a small number of CPU processing cores. The kernel makes the decision of which thread to run based on what else the machine is doing, how long each thread has already run, and many other factors. The programmer cannot predict or control any of these factors; in fact, the exact combination of factors is likely to be unique every time the program runs.

In many cases, the nondeterministic nature of thread scheduling is not a problem. If one thread is responsible for factoring 5,182,397,724,980 into a product of primes while another is computing the billionth digit of Pi, it probably doesn’t matter which calculation finishes first. However, if these two results are going to be combined in some way, the program must guarantee that both of them have completed before attempting this third step. That is, these steps must be synchronized.
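As a minimal sketch of this kind of synchronization, the example below creates two worker threads and uses pthread_join to wait for both before combining their results. The worker functions and the values they produce are placeholders for illustration, not real computations.

```c
#include <pthread.h>
#include <stdio.h>

/* Hypothetical workers; each stores a placeholder result through its argument. */
static void *factor_worker(void *arg)   { *(long *)arg = 42; return NULL; }
static void *pi_digit_worker(void *arg) { *(long *)arg = 7;  return NULL; }

int main(void)
{
    pthread_t t1, t2;
    long factor_result = 0, pi_result = 0;

    pthread_create(&t1, NULL, factor_worker, &factor_result);
    pthread_create(&t2, NULL, pi_digit_worker, &pi_result);

    /* Joining both threads guarantees both results exist before combining them. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("combined result: %ld\n", factor_result + pi_result);
    return 0;
}
```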

There are several forms of synchronization that various programs may require. Some common examples include the following:

  • Multiple threads may try to modify a shared data structure, but each write must be performed one at a time to ensure the final result is correct.
  • The system may need to place a limit on the number of simultaneous accesses to a shared resource to avoid delays. For instance, a web server may limit the number of incoming network requests to make sure it does not consume too much memory, as sketched in the example after this list.
  • Particular events may need to be performed in a specific order, such as when the output produced by one thread is required as input to another thread.
  • The system may need to make sure that a minimum number of calculations have been completed before proceeding to take some sort of action. For example, some programs run multiple different algorithms that should produce the same result; the system may require that at least three algorithms agree before having confidence that the result is correct.
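The second form above, limiting simultaneous access, can be sketched with a POSIX counting semaphore. In the hypothetical example below, at most MAX_ACTIVE threads execute the body of handle_request at a time; the handler, the cap of four, and the request count are illustrative assumptions rather than code from any real server.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define MAX_ACTIVE 4     /* assumed cap on simultaneous request handlers */

static sem_t slots;      /* counting semaphore initialized to MAX_ACTIVE */

/* Hypothetical request handler: at most MAX_ACTIVE threads run the body at once. */
static void *handle_request(void *arg)
{
    sem_wait(&slots);    /* block if MAX_ACTIVE handlers are already active */
    printf("handling request %ld\n", (long)arg);
    sem_post(&slots);    /* release the slot for another waiting thread */
    return NULL;
}

int main(void)
{
    pthread_t threads[16];
    sem_init(&slots, 0, MAX_ACTIVE);

    for (long i = 0; i < 16; i++)
        pthread_create(&threads[i], NULL, handle_request, (void *)i);
    for (int i = 0; i < 16; i++)
        pthread_join(threads[i], NULL);

    sem_destroy(&slots);
    return 0;
}
```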

The synchronization primitives described in this chapter can achieve all of these goals. Synchronization primitives are variable types that have one critical aspect: operations that manipulate the primitives are guaranteed to be atomic. This feature contrasts with standard variables that lack this guarantee. For instance, consider the simple line of C code x++ that increments an int variable. This line requires three separate instructions to load the variable into a register, increment the register, then store the result back into memory. In between these instructions, the kernel might interrupt the execution and switch to another thread. In contrast, a synchronization primitive would use special-purpose hardware techniques to guarantee that this kind of multi-step operation happens in a single step that cannot be interrupted by the kernel.
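A minimal sketch of the problem and one common remedy: two threads repeatedly increment a shared int, and a pthread_mutex_t ensures each load/increment/store sequence finishes before the other thread's begins. If the lock and unlock calls are removed, the interleaved updates typically lose increments and the final count falls short of 200,000.

```c
#include <pthread.h>
#include <stdio.h>

static int x = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread increments the shared counter many times. Without the mutex,
   the load/increment/store steps from the two threads can interleave and
   lose updates; holding the lock makes each increment effectively atomic. */
static void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        x++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("x = %d\n", x);   /* reliably 200000 with the lock; often less without it */
    return 0;
}
```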

Careless misuse of synchronization primitives can cause a variety of problems. For one thing, some synchronization primitives impose a performance penalty by turning off all parallel execution in a system. This shutdown requires CPU cores to temporarily save copies of their data and stop running; they may also need to perform pending writes back to multiple levels of cache. Even worse, synchronization primitives can lead to deadlock, a condition in which two (or more) threads are simultaneously waiting on each other. Finally, some synchronization algorithms contain subtle flaws that can be easily overlooked.
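As a minimal sketch of deadlock (not drawn from any particular program), the two threads below acquire the same pair of mutexes in opposite orders. If each thread grabs its first lock before the other grabs its second, both wait forever. Acquiring locks in one consistent order everywhere avoids this trap.

```c
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Thread one acquires lock_a, then lock_b. */
static void *thread_one(void *arg)
{
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);   /* may block forever if thread_two holds lock_b */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

/* Thread two acquires the same locks in the opposite order. */
static void *thread_two(void *arg)
{
    pthread_mutex_lock(&lock_b);
    pthread_mutex_lock(&lock_a);   /* may block forever if thread_one holds lock_a */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread_one, NULL);
    pthread_create(&t2, NULL, thread_two, NULL);
    /* With an unlucky interleaving, these joins never return: a deadlock. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```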

The goal for this chapter is to examine the common synchronization primitives that are widely supported, particularly those in the POSIX thread library. The focus here is on the basic intent of each primitive and some principles for avoiding deadlock and significant performance penalties. In Synchronization Problems, we will explore how to combine primitives to solve more complex problems.
