Interprocess communication

The term interprocess communication (IPC) refers in computer science to different methods of information exchange between the processes of a system. With shared memory, communication takes place by several processes accessing a common data store, for example common areas of main memory. With a message queue, on the other hand, messages are sent by one process to a message queue, from where they can be picked up by another process.

Communication between processes should take place in a well-structured manner. To avoid race conditions, the critical sections in which shared memory is accessed must be protected against (quasi-)simultaneous access. This can be implemented using various mechanisms such as semaphores or monitors. However, if each process in a set of processes is waiting for an event that only another process in the set can trigger, a deadlock situation arises.

Classic problems of interprocess communication are the producer-consumer problem, the philosopher problem and the reader-writer problem.

Basics

A program is a sequence of instructions (consisting of declarations and statements) that complies with the rules of a certain programming language in order to process or solve certain functions, tasks or problems with the aid of a computer. Every program normally runs in an (operating system) process that must be assigned to a processor in order to execute. A process provides the runtime environment for a program on a computer system and is a dynamic sequence of actions with corresponding state changes, in other words the instantiation of a program. The entire state information of a running program is also referred to as a process. If different processes are activated alternately at short intervals, an impression of simultaneity arises, even though at any given moment only one process is being executed (or at least the number of processes is significantly higher than the number of available CPUs).

In multitasking systems, most processes run concurrently and hardly know anything about each other. Memory protection prevents one process from accessing the memory segments of another process (which would endanger the stability of the system). A prerequisite for this is a memory management unit (MMU for short). Nevertheless, there are use cases in which processes have to communicate with other processes. Some of them are:

  • Several processes must share specific data.
  • The processes are interdependent and have to wait for each other.
  • Data must be passed on from one process to another.
  • The use of system resources must be coordinated.
Three processes access a shared resource at the same time.

In this context, communication basically means the exchange of information according to certain rules, so it should take place in a well-structured manner. In data communication and computer networks, such a set of rules is summarized under the term protocol.

Examples:

  • Printer spooler: When a process wants to print a file, it enters the file name in a special spooler directory. Another process, the printer daemon, periodically checks whether there are any files to print. If there are, it prints them and then removes their names from the directory.
  • Pipe: By entering a "|" character in the Unix shell, the output of a first process is passed on as input to a second process.

Two processes are said to be concurrent if there is at least one time interval in which both have started but neither has completed. On a single-processor system, concurrent processes can only be processed sequentially in sections, while on a multi-processor system data-independent sections can be executed simultaneously.

A resource must be determinate in the sense that programs running under the control of the operating system each deliver identical results regardless of the environment in which they run, i.e. independently of which other programs run alongside them. The actual processes, on the other hand, are not determinate, since it must be possible to react to events regardless of the order in which they occur. Such events are, for example, resource requests, runtime errors and asynchronous interrupts generated by input and output devices.

Techniques

Memory-based communication

Shared memory

With shared memory, the operating system provides a mechanism through which several processes can access a shared data store. To make this possible, a process must first create a shared data store. Then this data store must be made known to all processes that are also to access it: this is done by mapping the memory area into the address space of the corresponding processes (see also memory management). The processes must also be told how they may access the memory area (read/write). Once all of this has been done, the data store can be used like an ordinary storage area.

Shared memory is the fastest form of interprocess communication, since a memory area is shared by two or more processes and no copying between server and clients is necessary. For proper communication, however, it must be ensured that the individual processes are synchronized with one another. Monitors or semaphores are often used as the mechanism for process synchronization.

With memory management by means of paging , shared memory can be implemented by using shared pages. Several page table entries of different processes refer to shared memory areas (page frames) in the physical main memory.
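
As a hedged illustration of these steps (create the segment, map it into the address space, access it like ordinary memory), the following minimal sketch uses the POSIX calls shm_open, ftruncate and mmap; the segment name "/ipc_demo" and the single shared counter are illustrative assumptions, and error handling is omitted.

// Minimal sketch: parent and child share one int via POSIX shared memory.
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
  const char *name = "/ipc_demo";                   // illustrative segment name
  int fd = shm_open(name, O_CREAT | O_RDWR, 0600);  // create the shared data store
  ftruncate(fd, sizeof(int));                       // give it a size
  int *counter = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);           // map it into the address space
  *counter = 0;

  if (fork() == 0) {        // the child inherits the mapping and writes to it
    *counter = 42;
    return 0;
  }
  wait(NULL);               // the parent reads after the child has exited
  printf("child wrote %d\n", *counter);
  munmap(counter, sizeof(int));
  shm_unlink(name);         // remove the segment
  return 0;
}

In this sketch, synchronization is provided trivially by wait(); as noted above, truly concurrent access would additionally require semaphores or another synchronization mechanism.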

Communication via files

A simple form of communication is the exchange of files: a process writes data to a file that another process can read from (or also write to). This variant also requires synchronization. The operating system often offers a locking mechanism that locks entire files, but classic mechanisms such as semaphores are also possible.
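
The sketch below shows, under the assumption of a Linux/BSD system, how such a file lock might look with the advisory lock flock(); the file name shared_data.txt is a made-up example.

// Sketch: serialize access to a shared file with an advisory lock.
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void) {
  int fd = open("shared_data.txt", O_CREAT | O_WRONLY | O_APPEND, 0600);
  flock(fd, LOCK_EX);                             // exclusive lock: enter the critical section
  dprintf(fd, "record from pid %d\n", getpid());  // write while holding the lock
  flock(fd, LOCK_UN);                             // release the lock again
  close(fd);
  return 0;
}

Because the lock is advisory, it only protects against other processes that also use flock() on the same file.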

Message-based communication

Message Queue

Communication via a message queue: with enter, a new value (3) is added to the queue; with leave, the element that has been stored the longest (37) is removed (FIFO principle).

With this mechanism, messages are sent by a process to a message queue, which is managed in the form of a linked list. From there the message can be picked up by another process. Each message queue has a unique identifier, which means that it can also be identified by another process. Normally the first-in-first-out (FIFO) principle is used. The messages can also be given a priority: when a process fetches a message with priority N, it receives the first message with priority N in the queue, even if that message was, for example, inserted last.

Queues are often used, for example, to transfer data between asynchronous processes in distributed systems , i.e. when data has to be buffered before further processing . Access is via APIs anchored in the operating system . The size of the queue is limited by the operating system.
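
As an illustration of such an API, the following sketch uses the POSIX message queue interface (mq_open, mq_send, mq_receive). The queue name "/ipc_demo_queue" and the attribute values are assumptions; on older Linux systems the program is linked with -lrt.

// Sketch: send and receive one message via a POSIX message queue.
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/types.h>

int main(void) {
  struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };  // queue size is limited
  mqd_t q = mq_open("/ipc_demo_queue", O_CREAT | O_RDWR, 0600, &attr);

  mq_send(q, "hello", 6, 1);                  // enqueue 6 bytes with priority 1

  char buf[64];
  unsigned prio;
  ssize_t n = mq_receive(q, buf, sizeof(buf), &prio);  // dequeue: highest priority, oldest first
  printf("got %zd bytes: %s (priority %u)\n", n, buf, prio);

  mq_close(q);
  mq_unlink("/ipc_demo_queue");               // remove the queue
  return 0;
}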

(Nameless) pipes

Communication between two processes via a pipe

The pipe (or pipeline, from English pipeline) is one of the most important applications of queues. It is a data stream between two processes through a buffer based on the FIFO principle. In simplified terms, this means that the output of one program is used as input for another program. It is the oldest and first technique of interprocess communication; it was invented in 1973 by Douglas McIlroy for the Unix operating system.

A pipe always has the following two basic properties:

  • In a pipe, data can only flow in one direction: a process can in practice only ever write to a pipe or read from it. If a full-duplex connection is desired instead of a half-duplex connection, two pipes are needed between the processes.
  • Pipes can only be set up by processes that have a common ancestor. A pipe is usually established by a parent process creating the pipe and then creating, with fork(), a child process that inherits it. Then nothing stands in the way of communication between parent and child process (see the sketch after this list).
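
The following minimal sketch illustrates this typical setup under the assumptions just stated: the parent creates the pipe with pipe(), forks a child, and then reads what the child writes.

// Sketch: a parent creates a pipe, forks, and receives one line from its child.
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
  int fd[2];
  pipe(fd);                          // fd[0] is the read end, fd[1] the write end

  if (fork() == 0) {                 // the child inherits both ends
    close(fd[0]);                    // the child only writes
    const char *msg = "hello from the child\n";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    return 0;
  }
  close(fd[1]);                      // the parent only reads (one direction only)
  char buf[128];
  ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
  buf[n] = '\0';
  printf("parent received: %s", buf);
  close(fd[0]);
  wait(NULL);
  return 0;
}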

In Unix-like operating systems, programs are connected via the pipe symbol "|". For example,

who | sort

sends the output of who to the sort program, which then outputs an alphabetically sorted list of the logged-in users.

Named Pipes (FIFO Pipes)

Unnamed pipes can only be used between processes that are related to each other. In contrast, named pipes (also known as FIFO pipes) can also be used for communication between processes that are not related to each other. They are given names, just like files. Any process that knows the name of a named pipe can use this name to establish the connection to the pipe and thus to other processes. Access rights must also be assigned for named pipes, i.e. it must be determined who may write to them and who may read from them.
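
As a small, hedged illustration, the writer side of such a named pipe could look as follows; the path /tmp/ipc_demo_fifo is a made-up example, and any unrelated process (for instance `cat /tmp/ipc_demo_fifo`) could act as the reader.

// Sketch: writer side of a named pipe (FIFO).
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
  const char *path = "/tmp/ipc_demo_fifo";
  mkfifo(path, 0600);                  // create the FIFO special file with access rights
  int fd = open(path, O_WRONLY);       // blocks until some reader opens the FIFO
  const char *msg = "hello via FIFO\n";
  write(fd, msg, strlen(msg));
  close(fd);
  return 0;
}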

Sockets

A socket is the end point of a communication path, which programs address via a file descriptor, similar to a file. In contrast to pipes, a socket can be in contact with several others at the same time and can also be set up by already running programs. The socket API was originally introduced in 1984 for BSD Unix for communication between different computers over a network, but there is also the possibility of communication between processes on the same computer. This is determined by the scope (domain): the local domain enables communication between different processes on one computer, while the internet domain enables information to be exchanged between processes on different computers using the TCP/IP protocol family. Like the TCP/IP protocol family, BSD sockets have now been implemented in most major operating systems outside the Unix family as well.
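
For the local domain, a hedged minimal sketch: socketpair() creates two already-connected Unix domain sockets, over which a parent and a child can exchange data in both directions (unlike a pipe).

// Sketch: a connected pair of local (Unix domain) sockets.
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
  int sv[2];
  socketpair(AF_UNIX, SOCK_STREAM, 0, sv);   // sv[0] and sv[1] are connected endpoints

  if (fork() == 0) {                         // the child talks over sv[1]
    close(sv[0]);
    write(sv[1], "ping", 5);
    char buf[8];
    read(sv[1], buf, sizeof(buf));
    printf("child got: %s\n", buf);
    return 0;
  }
  close(sv[1]);                              // the parent talks over sv[0]
  char buf[8];
  read(sv[0], buf, sizeof(buf));
  printf("parent got: %s\n", buf);
  write(sv[0], "pong", 5);
  wait(NULL);
  return 0;
}

Communication across machines in the internet domain would instead use socket(), bind(), listen(), connect() and accept() with the TCP/IP protocol family.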

Race Conditions

Independent (disjoint) processes can run "side by side" as desired, since they do not write to common data. They may well access certain data jointly, as long as that data is not changed at runtime. When concurrent processes do hold common data that is changed, however, we speak of overlapping processes. Problems that can arise are illustrated by the following example, in which the variable a is used simultaneously by process 1 and process 2:

a := 10

Process 1                    Process 2

(1.1) a := a*a               (2.1) a := a/2
(1.2) print(a)               (2.2) print(a)

The results that Process 1 and Process 2 print depend on the order in which their instructions are executed. For example, in the instruction sequence 1.1-1.2-2.1-2.2, process 1 delivers the result 100 and process 2 the result 50. In the instruction sequence 2.1-1.1-1.2-2.2, however, both processes output the result 25. The different results therefore depend on the relative speeds of the overlapping processes.

A situation in which the result of a process unexpectedly depends on the sequence or timing of other events is called a race condition. The term comes from the idea of two signals racing each other to influence the system's response first.

Race conditions are a common cause of hard-to-find bugs. The difficulty is that they occur only rarely, under particular conditions; a changed memory model or different runtime behavior of the system can make the error "invisible" again.

Andrew S. Tanenbaum gives the following example of a race condition:

When a process issues a print job, it enters the file name of the file to be printed in a spooler directory. The printer daemon regularly checks whether there are any files to print; if so, it prints them and then removes their names from the directory. The entries are numbered 0, 1, 2, … and there are two variables: out points to the next file to be printed and in to the next free entry in the directory. Suppose entries 0 to 3 are empty (already printed) and entries 4 to 6 are occupied (with the names of files that are to be printed). Two processes, A and B, now decide more or less simultaneously that they want to queue another file for printing.
The following can now happen: process A reads in and saves its value, 7, in its local variable next_free_slot. A timer interrupt occurs at this moment, and the CPU decides that process A has run long enough and switches to process B. Process B reads in and, just like A, receives a 7, which it also saves in its local variable next_free_slot. At this moment, both processes believe that the next available entry is 7.
If process B now continues, it stores the name of its file in entry 7 and updates in to 8. When process A gets its turn again, next_free_slot still points to entry 7, so it overwrites the entry that process B had just created; then process A in turn sets in to 8. Process B therefore never receives a printout.
A control flow graph of two processes. The red node represents a critical section.

If a process only carries out internal calculations, this is not a problem. But sometimes it also needs to access shared storage or files, or perform other critical operations, and this can lead to exactly such race conditions. The parts of the program in which shared storage is accessed are referred to as the critical section or critical region. In contrast, non-critical sections are those in which data shared by several processes is not accessed. Critical sections in processes must be clearly delimited from the non-critical sections.

Andrew S. Tanenbaum names four conditions that must be met in order to avoid race conditions:

  • No two processes may be in their critical sections at the same time; mutual exclusion of the processes is therefore required.
  • No assumptions may be made about the speed and number of CPUs.
  • No process that runs outside its critical section is allowed to block other processes.
  • No process should have to wait forever to enter its critical section.

Synchronization

Active waiting

Active waiting (busy waiting) refers to continuously checking a variable until a certain value appears. Based on this principle, a spinlock is used to protect a shared resource from competing processes. While one process is in a critical section, another process that also wants to enter it has to wait "actively", which, however, wastes a lot of CPU time.
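
A minimal sketch of such a spinlock, assuming C11 atomics (the names spin_lock and spin_unlock are illustrative):

// Sketch: busy waiting with a spinlock built from a C11 atomic_flag.
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock(void) {
  // test-and-set in one indivisible operation; spin while the flag was already set
  while (atomic_flag_test_and_set(&lock)) {
    ;   // active waiting: burns CPU time until the lock becomes free
  }
}

void spin_unlock(void) {
  atomic_flag_clear(&lock);   // release the lock
}

The same idea appears below in hardware form as the TSL instruction.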

Lock variables and strict alternation

To prevent several processes from entering a critical section at the same time, a lock variable can be introduced that can only take the values 0 or 1. If a process wants to enter the critical section, it first queries the lock variable. If it is 1, the lock is set and the process must wait until another process has left the critical section. If it is 0, no process is in the critical section; the process sets the lock variable to 1 and enters the critical section. When exiting, it sets the variable back to 0 so that other processes can enter. On entry, a process therefore needs two operations: one to test the lock variable and one to set it.

However, the following error scenario exists: the testing and setting of the lock variable can be interrupted by a CPU scheduling decision. A process A sees that the lock is 0, but before it can set it to 1, it is interrupted. In the meantime, a process B sets the lock to 1. When it is process A's turn again, it also sets the lock to 1, and two processes are in the critical section at the same time.

This can be remedied by strict alternation with an integer variable turn. If turn is set to 1, process 1 may enter the critical section. When it leaves the critical section, it sets turn to 2, which allows process 2 to enter. At the end of its critical section, process 2 in turn sets turn back to 1.

However, this "alternation" is not a good approach if one of the processes is much slower than the other: one process then remains stuck in its waiting loop until the other process releases the critical section again.
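
A sketch of this strict alternation between the two processes, keeping the numbering used above (the function names are illustrative, and the sketch deliberately ignores memory-ordering details):

// Sketch: strict alternation between process 1 and process 2.
// turn indicates which process may enter its critical section next.
volatile int turn = 1;                // shared between the two processes

void enter_critical(int process) {    // process is 1 or 2
  while (turn != process) {
    ;                                 // busy waiting until it is this process's turn
  }
}

void leave_critical(int process) {
  turn = 3 - process;                 // hand the turn over to the other process
}

As described above, a fast process calling enter_critical() is blocked until the slower process has passed through its critical section and handed the turn back.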

Mutual exclusion with hardware support

Hardware support can be used to prevent a process from losing the CPU between reading and setting a lock variable (and thereby two processes inadvertently entering a critical section).

Some processor architectures provide the TSL instruction (Test and Set Lock), which reads a memory word into a register and writes a new value to it in a single indivisible operation. With the instruction

TSL RX LOCK

the content of the memory word LOCK is read into the register RX, and a value not equal to zero is then stored at the memory address of LOCK. As long as the CPU is executing the TSL instruction, no other process can access the memory word; the memory bus remains locked.

A shared variable LOCK can therefore be used to coordinate access to a critical section. When a process wants to enter, it copies the lock variable LOCK into the register RX and overwrites LOCK with a value other than 0 (with the TSL instruction, in a single indivisible operation). It then compares the old value, now in RX, with 0. If it is not 0, the lock was already set, and the program enters the waiting loop. When the value becomes 0, the calling process can enter the critical section. When a process leaves the critical section, it sets the value of LOCK back to 0 with an ordinary MOVE instruction. See the following assembler code:

enter_critical_section:
   TSL RX LOCK                  // copy the lock variable LOCK into register RX and overwrite LOCK with a value not equal to 0
   CMP RX, #0                   // was the lock variable 0?
   JNE enter_critical_section   // if the value is not 0, the lock was already set -> loop back
   RET                          // otherwise return and enter the critical section

leave_critical_section:
   MOVE LOCK, #0                // store 0 in the lock variable LOCK
   RET                          // return

An alternative to TSL is the XCHG instruction, which atomically exchanges the contents of two memory locations, for example a register and a memory word.

Peterson's algorithm

The Peterson algorithm was formulated by Gary L. Peterson in 1981 and offers a solution to the mutual exclusion problem. Before entering a critical section, each process calls enter_section(int process) with its own process number, 0 or 1, as a parameter. Entry into the critical section is thereby delayed until it is safe. As soon as a process leaves the critical section, it calls leave_section(int process) with its own process number as a parameter. Another process can then enter the critical section if it wishes.

#define FALSE 0
#define TRUE 1
#define N 2 // number of processes

int turn; // indicates whose turn it is
int interested[N]; // initialize all values with 0 (FALSE)

void enter_section(int process)
{
  int other; // variable for the number of the other process
  other = 1 - process; // the other process

  interested[process] = TRUE; // show interest
  turn = other; // set flag

  while (interested[other] == TRUE && turn == other) ; // empty statement (active waiting)
}

void leave_section(int process) // process that leaves the critical section
{
  interested[process] = FALSE; // indicates the exit from the critical section
}

For example, if process 0 calls enter_section(0), it shows its interest by setting interested[0] to TRUE, and it sets the variable turn to 1 (the number of the other process). If process 1 is not interested (interested[1] == FALSE), the function returns immediately and the critical section can be entered. If process 1 now calls enter_section(1) a little later, it has to wait as long as interested[0] == TRUE holds. Only when process 0 calls leave_section(0) to leave the critical region is interested[0] set to FALSE, and process 1 can enter the critical section.

In the event that both processes call enter_section(int process) almost simultaneously and each stores the number of the other process in turn, the value saved last counts, i.e. the first stored result is overwritten and lost. Since both processes have indicated their interest, both interested[0] == TRUE and interested[1] == TRUE hold. If, for example, process 1 was the last to write to turn, then turn == 0. Under this condition, process 1 cannot enter the critical section; it has to wait in the waiting loop until the other process sets interested[0] to FALSE.

Passive waiting

In passive waiting, a process that wants to enter a critical section is "put to sleep" (for example, by being placed in a queue) for as long as it has to wait. It remains in this state until it is allowed to enter the critical section, whereupon it is woken up by another process.

Semaphore

The concept of the semaphore was developed in 1965 by Edsger W. Dijkstra and is based on locking mechanisms. When a semaphore is created, its counter is initialized with an integer value that indicates how many processes can use a resource at the same time. The semaphore also maintains a queue for the processes that have to wait at the entrance to the critical section. When a semaphore is used for mutual exclusion, the semaphore counter is simply initialized with 1. Two operations are available for entering and exiting the critical section:

  • The down operation down(s) is called when entering the critical section. It first checks whether the value of the semaphore counter is greater than 0. If so, the value of the semaphore counter is reduced by 1 and the process may proceed. If the semaphore counter is currently 0, the arriving process is denied entry and is placed in the queue.
  • The up operation up(s) is called when the critical section is exited. The semaphore counter is increased by 1 again, which means that another process can enter the critical section.

The use of semaphores is illustrated by an analogy: Assume a library has 10 study rooms, each of which can only be used by one student. A receptionist "semaphore" now ensures that the "resource" study room is correctly occupied. To do this, he keeps a counter that keeps track of the number of free study rooms (and is initialized with 10). Every time a student enters a study room, the counter is reduced by one. If the counter is 0, all study rooms are occupied and the arriving students must be queued. The waiting students are allowed to sleep in the meantime. Only when a student leaves a study room is the first student in the queue "woken up" and admitted to the study room.

When implementing semaphores, however, it must be noted that the operations down (s) and up (s) themselves have critical sections. To avoid race conditions, for example the TSL or the XCHG command can be used to protect the critical sections of down (s) and up (s) . This does not completely avoid active waiting, but the operations of the semaphore are very short.
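
In the POSIX semaphore API, down and up correspond to sem_wait and sem_post. A hedged sketch of mutual exclusion with a counter initialized to 1 (the function names are illustrative):

// Sketch: mutual exclusion with a POSIX semaphore initialized to 1.
#include <semaphore.h>

sem_t s;

void init(void) {
  sem_init(&s, 0, 1);   // second argument 0: shared between the threads of one process
}

void critical_work(void) {
  sem_wait(&s);         // down(s): decrement the counter, or block if it is 0
  /* ... critical section: access the shared resource ... */
  sem_post(&s);         // up(s): increment the counter and wake a waiting process, if any
}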

Mutex

The term mutex is not defined uniformly. According to Peter Mandl, it is a binary semaphore, i.e. a semaphore with the maximum value N = 1. In practice, a mutex has a variable that can only take two states, namely locked and unlocked, so theoretically only one bit is required to represent it. The two procedures mutex_lock and mutex_unlock are available for operating on a mutex.

Some authors do point out differences between mutexes and binary semaphores. The main difference is said to be that a mutex belongs to the calling process (or thread), while a semaphore has no owner. For example, if the semaphore counter of a semaphore is 0 (locked), another process (or thread) can increment it, even if it is not the process that originally locked it.
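
With POSIX threads, this locking discipline could look roughly as follows; in contrast to the semaphore sketch above, the mutex must be unlocked by the same thread that locked it.

// Sketch: protecting a critical section with a pthread mutex.
#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void critical_work(void) {
  pthread_mutex_lock(&m);     // mutex_lock: blocks while another thread holds the lock
  /* ... critical section ... */
  pthread_mutex_unlock(&m);   // mutex_unlock: called by the owning thread
}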

Monitor

The implementation of mutual exclusion using synchronization primitives such as semaphores is quite error-prone. That is why C. A. R. Hoare in 1974 and Per Brinch Hansen in 1975 developed a synchronization construct at a higher level of abstraction, the monitor. A monitor is a module (an abstract data type, a class) in which the data shared by processes and the procedures (or methods) for accessing them are combined into a single unit. Access procedures with critical sections on the data are specially marked as monitor operations. Access procedures without critical sections can also be offered by the module.

The functionality of monitors is equivalent to that of semaphores. Monitors are, however, easier for the programmer to keep track of, "since shared data and access controls are centrally located in one place and not, as in the case of semaphores, distributed over several processes in the program code". The basic characteristics of a monitor are:

  • Critical sections that work on the same data are procedures of a monitor.
  • A process enters a monitor by calling a procedure on the monitor.
  • Only one process can use the monitor at a time. Any other process that enters the monitor will be suspended and will have to wait for the monitor to become available.

A monitor is often viewed as a space in which only one actor (process) can be accommodated. If other actors want to enter the monitor room, they have to wait until space has become free in the monitor room.

When a process calls a monitor procedure, the first instructions in the procedure usually check whether another process is currently active in the monitor. If no other process is using the monitor, it may enter; otherwise the calling process is "put to sleep" until the other process has left the monitor. The mutual exclusion of monitor entries is implemented by the compiler, for example using a mutex or a binary semaphore, so the programmer does not need to know how the compiler handles it. This makes it much less likely that something will go wrong. It is sufficient to bundle all critical sections into monitor procedures; the programmer only needs to know that two processes must not execute their critical sections at the same time.
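
C has no built-in monitors (languages such as Java do), but the idea can be roughly approximated by bundling the shared data, a mutex and a condition variable into one module whose "monitor procedures" lock on entry and unlock on exit. The following sketch makes that assumption explicit; all names are illustrative.

// Rough approximation of a monitor in C: shared data plus access procedures,
// with entry/exit handled by a mutex and waiting handled by a condition variable.
#include <pthread.h>

static int shared_value = 0;
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  value_changed = PTHREAD_COND_INITIALIZER;

void monitor_deposit(int v) {              // monitor operation: write access
  pthread_mutex_lock(&monitor_lock);       // only one process/thread inside at a time
  shared_value = v;
  pthread_cond_signal(&value_changed);     // wake one waiting thread
  pthread_mutex_unlock(&monitor_lock);
}

int monitor_wait_for_value(void) {         // monitor operation: blocking read
  pthread_mutex_lock(&monitor_lock);
  while (shared_value == 0)
    pthread_cond_wait(&value_changed, &monitor_lock);  // releases the lock while sleeping
  int v = shared_value;
  pthread_mutex_unlock(&monitor_lock);
  return v;
}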

Deadlocks

A set of processes is in a deadlock state when each process in the set is waiting for an event that only another process in the set can trigger. None of the processes will ever wake up, because each is waiting for an event that can only be triggered by another process which is in turn waiting for an event from it.

Andrew S. Tanenbaum gives the following example: two processes both want to scan a document and then burn it onto a CD. Process A occupies the scanner first. Process B proceeds somewhat differently and first reserves the CD burner. Now process A also tries to reserve the CD burner, but this fails because process B has not yet released it. Unfortunately, however, process A does not want to release the scanner as long as the CD burner is not available. The two processes thus block each other.

There are also examples of deadlocks from everyday life: if cars arrive at an intersection from all four sides at the same time and the right-of-way rule "right before left" applies, theoretically all four cars have to wait (endlessly), because each car waits for the car waiting on its right to go first.

Classic problems

Producer-consumer problem

The producer-consumer problem (also known as the bounded-buffer problem) addresses access control to a data structure by element-producing (writing) and element-consuming (reading) processes. Two processes can share a fixed-size buffer: one of the processes, the producer, puts information into the buffer, the other, the consumer, takes it out. The access regulation is intended to prevent the consumer from accessing the data structure (buffer) when it contains no elements, and the producer from accessing the data structure when the capacity has already been exhausted.

One possible solution would be to keep track of the number of messages in the buffer with a variable count; the maximum number of messages the buffer can hold is N. If count is equal to N, the producer is put to sleep; otherwise it can add a message, and count is increased by one. If count is equal to 0, the consumer is put to sleep; otherwise it can remove a message, and count is reduced by one. Each of the processes also checks whether the other should be woken up and, if so, does so.

However, this implementation can result in race conditions, for example as follows: the buffer is empty, and the moment the consumer checks count, the process is interrupted by a CPU scheduling decision. Then the producer is started, puts a message in the buffer, and increases count by one. Convinced that count was 0 before and that the consumer is therefore asleep, it triggers a wakeup to wake the consumer. The wake-up signal is lost, however, because the consumer is not yet asleep. The next time the consumer runs, it assumes that the value of count is 0, because it had read that value before, and goes to sleep. The producer will eventually fill the buffer completely and then go to sleep as well. If they haven't died, they are still sleeping.

The problem of lost wake-up calls can be solved, for example, by using semaphores.
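
A sketch of such a semaphore solution (the classic scheme with the counting semaphores empty and full plus a mutex; the buffer size and all names are illustrative):

// Sketch: producer-consumer with a bounded buffer of N slots, using POSIX semaphores.
#include <semaphore.h>

#define N 100                    // buffer capacity

int buffer[N];
int in_pos = 0, out_pos = 0;

sem_t empty;                     // counts free slots, initialized to N
sem_t full;                      // counts filled slots, initialized to 0
sem_t mutex;                     // protects the buffer itself, initialized to 1

void init(void) {
  sem_init(&empty, 0, N);
  sem_init(&full, 0, 0);
  sem_init(&mutex, 0, 1);
}

void producer(int item) {
  sem_wait(&empty);              // sleep here if the buffer is full
  sem_wait(&mutex);
  buffer[in_pos] = item;
  in_pos = (in_pos + 1) % N;
  sem_post(&mutex);
  sem_post(&full);               // signal: one more item available (no wakeup can be lost)
}

int consumer(void) {
  sem_wait(&full);               // sleep here if the buffer is empty
  sem_wait(&mutex);
  int item = buffer[out_pos];
  out_pos = (out_pos + 1) % N;
  sem_post(&mutex);
  sem_post(&empty);              // signal: one more free slot
  return item;
}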

Philosopher problem

Structure of the Philosophers Problem

The philosopher problem is a synchronization problem that Edsger W. Dijkstra published and solved in 1965. It is useful to illustrate processes that compete for exclusive access to a limited number of resources, such as input / output devices.

In this problem, exactly two resources (here: forks) must be acquired before the work can be done. Five philosophers (five processes/threads) sit around a round table. Each philosopher has a plate of spaghetti in front of him, and between every two plates lies one fork. A philosopher needs exactly two forks to eat. A philosopher spends his time either thinking or eating. When he gets hungry, he reaches for his left and his right fork (in any order). If he manages to grab both forks, he eats for a while and then puts the forks back down. At most two philosophers can therefore eat at the same time. The main question is: can you write a program for the philosophers that works and never gets stuck?

A simple solution is to implement the forks as semaphores and initialize them with 1. However, since the two forks cannot be picked up at the same time, there is a risk of deadlock if philosophers are interrupted between picking up the two forks: if all philosophers pick up their left fork almost simultaneously, no one will be able to pick up his right fork, and the philosophers wait forever for the right fork.

One possible solution is to use a status variable stat that tracks, for each philosopher, one of three states: thinking (0), hungry (1) and eating (2). A philosopher may only change from the hungry state (stat == 1) to the eating state (stat == 2) if neither of his neighbors is eating. An array of semaphores with one semaphore per philosopher also makes it possible to block a philosopher whose required forks are in use.
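
A sketch of this scheme, following the classic presentation in Tanenbaum (the helper names take_forks, put_forks and test are illustrative):

// Sketch: dining philosophers with a state array and one semaphore per philosopher.
#include <semaphore.h>

#define N        5
#define LEFT(i)  (((i) + N - 1) % N)
#define RIGHT(i) (((i) + 1) % N)
#define THINKING 0
#define HUNGRY   1
#define EATING   2

int   stat[N];        // state of each philosopher
sem_t mutex;          // protects stat[], initialized to 1
sem_t s[N];           // one semaphore per philosopher, each initialized to 0

static void test(int i) {        // may philosopher i start eating?
  if (stat[i] == HUNGRY && stat[LEFT(i)] != EATING && stat[RIGHT(i)] != EATING) {
    stat[i] = EATING;
    sem_post(&s[i]);             // unblock philosopher i
  }
}

void take_forks(int i) {
  sem_wait(&mutex);
  stat[i] = HUNGRY;
  test(i);                       // try to acquire both forks
  sem_post(&mutex);
  sem_wait(&s[i]);               // block if the forks are not available
}

void put_forks(int i) {
  sem_wait(&mutex);
  stat[i] = THINKING;
  test(LEFT(i));                 // a neighbor may now be able to eat
  test(RIGHT(i));
  sem_post(&mutex);
}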

Reader-writer problem

The reader-writer problem involves two types of processes (or threads), the readers and the writers. The starting point is the situation that many readers and writers want to enter a common critical section. The following conditions must be observed:

  • A reader and a writer must never be in the critical section at the same time.
  • Any number of readers can stay in the critical section at the same time.
  • Several writers must never be in the critical section at the same time.

Andrew S. Tanenbaum gives an example of an airline's reservation system in which there are many competing processes that require read and write access. It is acceptable for several processes to read the database at the same time, but when a process updates (writes) the database, no other process is allowed to access the database, not even for reading.

A solution is offered with the help of a semaphore db: when a reader gets access to the database, it executes a down on the semaphore db; when it is done, it executes an up. As long as there is at least one active reader, no writer can enter the critical section. A writer who wants to enter is therefore blocked as long as there are readers in the critical section. However, if new readers keep entering the critical section, a writer may never gain access: it starves. A possible solution to this problem is to place incoming readers behind waiting writers; in this way the writer only has to wait for the readers that were there before it.
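
A sketch of this approach (Tanenbaum's scheme with a reader counter: only the first reader locks db, only the last one releases it; the names are illustrative):

// Sketch: readers-writers with semaphores; writers get exclusive access.
#include <semaphore.h>

sem_t mutex;          // protects read_count, initialized to 1
sem_t db;             // controls access to the database, initialized to 1
int   read_count = 0; // number of readers currently in the critical section

void reader(void) {
  sem_wait(&mutex);
  if (++read_count == 1)        // the first reader locks out the writers
    sem_wait(&db);
  sem_post(&mutex);

  /* ... read the database ... */

  sem_wait(&mutex);
  if (--read_count == 0)        // the last reader lets writers in again
    sem_post(&db);
  sem_post(&mutex);
}

void writer(void) {
  sem_wait(&db);                // exclusive access, excluding readers and other writers
  /* ... update the database ... */
  sem_post(&db);
}

As noted above, this variant favors readers; a writer can starve if readers keep arriving.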

Literature

  • Erich Ehses, Lutz Köhler, Petra Riemer, Horst Stenzel, Frank Victor: System programming in UNIX / Linux. Basic operating system concepts and practice-oriented applications. Vieweg + Teubner: Wiesbaden, 2012.
  • Peter Mandl: Basic course operating systems. Architectures, resource management, synchronization, process communication, virtualization. 4th edition, Springer Vieweg: Wiesbaden, 2014.
  • Peter Pacheco: An Introduction to Parallel Programming. Elsevier: Amsterdam et al., 2011.
  • Abraham Silberschatz, Peter B. Galvin, Greg Gagne: Operating System Concepts. Eighth Edition, John Wiley & Sons: Hoboken (New Jersey), 2008.
  • Andrew S. Tanenbaum: Modern Operating Systems. 3rd, updated edition. Pearson Studium: Munich et al., 2009, ISBN 978-3-8273-7342-7.
  • Jürgen Wolf, Klaus-Jürgen Wolf: Linux-Unix programming. The comprehensive manual. 4th edition, Rheinwerk Verlag: Bonn, 2016. (older, also cited edition: Jürgen Wolf: Linux-UNIX-Programming. The comprehensive manual. 3rd, updated and expanded edition, Rheinwerk: Bonn, 2009.)

Individual evidence

  1. ISO/IEC 2382-1:1993 defines "computer program": "A syntactic unit that conforms to the rules of a particular programming language and that is composed of declarations and statements or instructions needed to solve a certain function, task, or problem." Until 2001, DIN 44300 ("Information processing terms") defined it identically.
  2. ^ Mandl: Basic course operating systems. 4th ed., 2014, p. 78.
  3. Tanenbaum: Modern Operating Systems. 3rd ed., 2009, pp. 124-125.
  4. ^ Jürgen Wolf: Linux-UNIX programming. The comprehensive manual. 3rd, updated and expanded edition, Galileo Computing: Bonn, 2009, p. 275.
  5. ^ Mandl: Basic course operating systems. 4th ed., 2014, p. 196.
  6. Tanenbaum: Modern Operating Systems. 3rd edition 2009, p. 162.
  7. ^ Wolfgang K. Giloi: Computer architecture. 2nd edition, Springer-Verlag: Berlin, Heidelberg, 1993, p. 48.
  8. Lutz Richter: Operating Systems. 2nd edition, BG Teubner: Stuttgart, 1985, p. 41.
  9. Jürgen Wolf, Klaus-Jürgen Wolf: Linux-Unix-Programming. The comprehensive manual. 4th edition, Rheinwerk Verlag: Bonn, 2016, pp. 354, 424-427; also Mandl: Basic course operating systems. 2014, p. 198.
  10. ^ Mandl: Basic course operating systems. 2014, p. 344.
  11. ^ Mandl: Basic course operating systems. 2014, p. 199.
  12. Jürgen Wolf, Klaus-Jürgen Wolf: Linux-Unix-Programming. The comprehensive manual. 4th edition, Rheinwerk Verlag: Bonn, 2016, pp. 354, 411.
  13. Jürgen Wolf, Klaus-Jürgen Wolf: Linux-Unix-Programming. The comprehensive manual. 4th edition, Rheinwerk Verlag: Bonn, 2016, p. 359.
  14. ^ Daniel J. Barrett: Linux. short and good. 2nd ed., O'Reilly: Beijing, u. a., 2012 (German translation by Kathrin Lichtenberg).
  15. Jürgen Wolf, Klaus-Jürgen Wolf: Linux-Unix-Programming. The comprehensive manual. 4th edition, Rheinwerk Verlag: Bonn, 2016, p. 351.
  16. Konrad Heuer, Reinhard Sippel: UNIX system administration. Linux, Solaris, AIX, FreeBSD, Tru64-UNIX. Springer: Berlin, Heidelberg, 2004, pp. 106-107.
  17. a b Lutz Richter: Operating Systems. 2nd edition, BG Teubner: Stuttgart, 1985, p. 42.
  18. Tanenbaum: Modern Operating Systems. 3rd ed., 2009, pp. 162-163.
  19. a b Tanenbaum: Modern operating systems. 3rd edition, 2009, p. 163.
  20. Tanenbaum: Modern Operating Systems. P. 166.
  21. Tanenbaum: Modern Operating Systems. Pp. 165-166; Mandl: Basic course operating systems. P. 160.
  22. Tanenbaum: Modern Operating Systems. Pp. 166-167.
  23. Tanenbaum: Modern Operating Systems. 2014, pp. 168–170.
  24. Tanenbaum: Modern Operating Systems. 2009, p. 170.
  25. Tanenbaum: Modern Operating Systems. Pp. 167-168.
  26. ^ Edsger W. Dijkstra: Cooperating sequential processes (EWD-123). EW Dijkstra Archive. Center for American History, University of Texas at Austin, September 1965. ( Original )
  27. ^ Mandl: Basic course operating systems. 2014, pp. 162–163, also Tanenbaum: Modern Operating Systems. 2009, p. 173.
  28. Tanenbaum: Modern Operating Systems. 2009, p. 174.
  29. ^ Mandl: Basic course operating systems. 2014, p. 164; so also Jürgen Quade, Michael Mächtel: Modern real-time systems compact. An introduction to embedded Linux. dpunkt.verlag: Heidelberg, 2012, pp. 100–105.
  30. Tanenbaum: Modern Operating Systems. 2009, p. 176.
  31. ^ Peter Pacheco: An Introduction to Parallel Programming. Elsevier: Amsterdam, u. a., 2011, p. 174; Richard H. Carver, Kuo-Chung Tai: Modern Multithreading. Implementing, Testing, and Debugging Multithreaded Java and C ++ Pthreads Win32 Programs. Wiley Interscience: Hoboken, 2006, pp. 91-92.
  32. a b Cornelia Heinisch, Frank Müller-Hofmann, Joachim Goll: Java as the first programming language. From beginner to professional. 6th edition, Vieweg + Teubner: Wiesbaden, 2011, p. 764.
  33. Tanenbaum: Modern Operating Systems. 2009, 182.
  34. Tanenbaum: Modern Operating Systems. 2009, p. 516.
  35. Tanenbaum: Modern Operating Systems. 2009, p. 512.
  36. Dietrich Boles: Learning parallel programming with ease using the Java hamster model. Programming with Java threads. Vieweg + Teubner: Wiesbaden, 2008, p. 226.
  37. a b Tanenbaum: Modern operating systems. 2009, p. 171.
  38. Tanenbaum: Modern Operating Systems. 2009, p. 172.
  39. Tanenbaum: Modern Operating Systems. 2009, pp. 212-213; Mandl: Basic course operating systems. 2014, p. 166.
  40. Tanenbaum: Modern Operating Systems. 2009, pp. 213-215.
  41. Tanenbaum: Modern Operating Systems. 2009, p. 215.
  42. Tanenbaum: Modern Operating Systems. 2009, pp. 215-217.