
3.2. IPC Models

There are multiple ways that different forms of IPC can be classified. The first and most common distinction is to separate techniques that adhere to a message passing model from those that follow a shared memory model. In message passing, whenever a piece of data is to be exchanged, the process will invoke the kernel with a request to send the information to the target process. That is, the user-mode process will copy the data into a buffer or other data structure, then issue a system call to request that the data be transferred. Once the kernel is invoked, it will first copy the transferred data into its own memory. At some later point, the target process will also issue a system call to retrieve the data. In short, message passing techniques require that the kernel become involved in every distinct exchange of data.

Shared memory techniques work fundamentally differently than message passing. In shared memory, the processes initially set up a region of their virtual memory to use for the IPC. Once the region is established within the process, the process issues a system call to request that the kernel make the region shared. Other processes would do the same. After the initial system call to set up the shared memory, the processes can read from and write to the region just as they would access non-shared data on their own heaps. That is, a process could write to the region by dereferencing a pointer to it. This data then appears within the context of the other process automatically. There is no explicit system call required to read the new data.

3.2.1. Advantages and Disadvantages

Message passing and shared memory both have advantages and disadvantages relative to each other. One key dimension for comparing their performance is the amount of time overhead required. In message passing, every piece of data exchanged requires two system calls: one to send the data and one to receive it. In addition, the transferred data must be copied twice: once into kernel memory and once into the receiving process. For a single message, this time penalty is insignificant and is unlikely to affect the process’s performance. However, if the number of messages passed is extremely large, the cumulative effect of this penalty may be significant.

In contrast, shared memory techniques only require a one-time performance penalty during the set-up phase. Once the memory has been shared, there is no additional penalty, regardless of the amount of data transferred. The trade-off is that the penalty for setting up shared memory is significant. The kernel must perform several slow operations to link the shared region to all of the processes’ virtual memory spaces.

Overall, if the two processes will be exchanging a lot of data back and forth repeatedly, shared memory performs very well. While the work to set up the shared memory is expensive, this one-time cost is amortized across the many exchanges that follow. However, if processes only need to exchange a single message of a few bytes, shared memory would perform very poorly. Message passing techniques impose significantly smaller overhead for a one-time data exchange.

Shared memory also has another disadvantage that message passing avoids, which is the problem of synchronization. If both processes try to write to the shared memory region at the same time, the result would be unpredictable and could lead to errors in one or both processes. Consequently, the accesses must be synchronized, meaning that their timing is carefully controlled. Since all message passing exchanges go through the kernel, which serializes them, such synchronization techniques are not necessary.

As an example of the synchronization problem, consider a scenario in which two processes are keeping track of money in your bank account. One process writes a record for a $100 purchase that you made, while another records a $100 deposit. If both are recorded correctly, your new balance should be the same as when the processes started. However, if the account record is in shared memory and the writes are not properly synchronized, the results may not be correct. Instead of your final balance being exactly what you started with, you may end up with $100 more or less than your original balance. Either you or your bank would be very unhappy with this result. Synchronization is a complex topic and we discuss it in its own chapter.

3.2.2. An IPC Taxonomy

There are several different IPC techniques that can be used for a variety of different purposes. In The Linux Programming Interface [Kerrisk2010], Kerrisk provides a taxonomy of UNIX IPC that is useful for classifying these techniques along a number of different dimensions. These dimensions include which model is used (shared memory or message passing), the primary intended purpose (data exchange or synchronization), and the granularity of the communication (byte stream or structured messages). We could also add a distinction between techniques that can work across a network and those that are local to a single machine. Table 3.1 shows this taxonomy.

Before summarizing the taxonomy, we note that the term shared memory has two meanings in IPC that can cause some confusion. As we described above, shared memory is a general model that refers to the fact that the IPC occurs in a region of memory that is co-present within the multiple processes. Shared memory also refers to a specific technique for creating these regions; this technique uses library functions that begin with shm, such as shmat() or shm_open(). There are other shared memory techniques that rely on other functions. As an example, memory-mapped files are a form of shared memory that also copies all data into persistently stored files; data shared with shm functions only exists in memory. To avoid this collision of terminology, we will use shm() when referring to the specific techniques while reserving the term shared memory for the more general model.

Technique           Model            Purpose          Granularity  Network
pipe/FIFO           message passing  data exchange    byte stream  local
socket              message passing  data exchange    either       either
message queue       message passing  data exchange    structured   local
shm()               shared memory    data exchange    none         local
memory-mapped file  shared memory    data exchange    none         local
signal              message passing  synchronization  none         local
semaphore           message passing  synchronization  none         local

Table 3.1: Classifying standard IPC techniques based on several characteristics

Although signals are covered in another chapter, we include them in this taxonomy because they can be interpreted as a limited form of IPC. Signals carry no data and are received asynchronously by the destination process. As such, the information they convey is very limited; they simply alert another process that a particular pre-defined type of event has occurred. Even so, this can be interpreted as a form of IPC.

Sockets are a flexible form of message passing IPC that can be used in multiple ways. The most common way that sockets are used is to send data across a network connection. As such, we will not consider sockets in this chapter and will explore their uses when discussing networks.

