Concurrency

Figure: The dining philosophers problem is a case study in theoretical computer science.

Concurrency, sometimes also called parallelism, is in computer science the property of a system to execute several computations, instructions, or commands simultaneously. These can range from completely independent instructions to jointly working on one task, and they may also interact with one another (e.g. to exchange intermediate results).

The operations may be executed only apparently simultaneously (quasi-parallel), for example via so-called preemptive multitasking on a single processor, or genuinely simultaneously, for example on multiple processor cores.
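A minimal Go sketch (not from the original article; names and values are made up for illustration) shows the first case: confined to a single processor, the two goroutines below only appear to run simultaneously, interleaved by the scheduler.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// Confine the Go scheduler to one processor: the goroutines
	// below now run only apparently simultaneously, interleaved.
	runtime.GOMAXPROCS(1)

	var wg sync.WaitGroup
	for _, name := range []string{"A", "B"} {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			for i := 0; i < 3; i++ {
				fmt.Println(name, i)
				runtime.Gosched() // yield the processor to the other goroutine
			}
		}(name)
	}
	wg.Wait()
	// With runtime.GOMAXPROCS(runtime.NumCPU()) the same two
	// goroutines could actually execute simultaneously on separate cores.
}
```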

In approaches with real simultaneity, the boundary to a parallel computer in the proper sense is fluid.

Concurrent processes can be described and analyzed by, among other formalisms, Petri nets or parallel random access machines.

Narrower sense of the term

A more precise distinction is sometimes drawn between "concurrent processing" (English: concurrency) and "parallel processing" (English: parallelism): concurrent processing is then understood primarily as a concept for modeling real-world events, one that is useful even when only a single CPU core is available; parallel processing, in this understanding of the term, focuses on genuinely simultaneous computation on multiple cores, mostly of a single problem.
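The distinction can be observed with a small, deliberately simplified Go experiment (illustrative; busyWork is a made-up CPU-bound task): both runs below are concurrent by structure, but only the second is parallel.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// busyWork is a made-up CPU-bound task.
func busyWork() {
	sum := 0
	for i := 0; i < 50_000_000; i++ {
		sum += i
	}
	_ = sum
}

// runTasks starts n concurrent copies of busyWork and
// returns the elapsed wall-clock time.
func runTasks(n int) time.Duration {
	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			busyWork()
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	runtime.GOMAXPROCS(1) // concurrent, but confined to one core: no speedup
	fmt.Println("1 core: ", runTasks(4))

	runtime.GOMAXPROCS(runtime.NumCPU()) // concurrent and parallel
	fmt.Println("N cores:", runTasks(4))
}
```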

Parallelizability and serializability

If process steps depend on one another in such a way that they must be carried out one after the other in a fixed order, so that executing them simultaneously is impossible, they cannot be parallelized. If processes must take place genuinely simultaneously, so that there is no possibility at all of executing them one after the other, they cannot be serialized.

The parallelizability of two or more sections or iterations of a process, or of events, is the assessment of whether they can be executed or computed concurrently without leading to a deviating result.

"Actions can be carried out concurrently if neither needs the other's result."

- Wolfgang Schröder-Preikschat: Lecture "System Programming: Process Synchronization: Non-sequentiality"

The program sections must therefore not be causally dependent on one another. In other words, several transactions or processes/threads can be parallelized exactly when their parallel, interleaved, or reversed execution leads to the same result as sequential execution.
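As a sketch of this criterion (with made-up values), the two computations below need nothing from each other and may run concurrently in any interleaving, while the final step is causally dependent on both and must wait:

```go
package main

import "fmt"

func square(x int) int { return x * x }

func main() {
	a := make(chan int)
	b := make(chan int)

	// Neither computation needs the other's result, so the two
	// goroutines may be executed concurrently or in any order.
	go func() { a <- square(3) }()
	go func() { b <- square(4) }()

	// This step depends causally on both results and therefore
	// cannot run before them.
	fmt.Println(<-a + <-b) // 25
}
```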

If the program sections are (partially) dependent on one another, they must be synchronized with respect to these dependencies, which enforces sequential execution (with respect to this dependency).
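In Go, for example, such a synchronization point can be expressed with a channel; in the following sketch (hypothetical producer and consumer), only the one dependency is sequenced, and everything else remains unordered:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	ready := make(chan int)
	var wg sync.WaitGroup
	wg.Add(2)

	go func() { // producer (hypothetical)
		defer wg.Done()
		// ... work that needs no synchronization may happen here ...
		ready <- 42 // hand over the intermediate result
	}()

	go func() { // consumer (hypothetical)
		defer wg.Done()
		v := <-ready // the dependency: this receive sequences the handover
		fmt.Println("got", v)
	}()

	wg.Wait()
}
```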

"Many practical problems have solution methods that can be implemented directly as a parallel algorithm. In this case the problem is said to have natural parallelism."

- B. Monien, J. Schulze, S. Grothklags: "Parallel Algorithms"
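An example of such natural parallelism (an illustrative sketch, not taken from the cited script) is an element-wise operation on an array, where every iteration is independent of all others:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	in := []int{1, 2, 3, 4, 5, 6, 7, 8}
	out := make([]int, len(in))

	// Every iteration touches only its own index, so the loop
	// is naturally parallel: one goroutine per element.
	var wg sync.WaitGroup
	for i, v := range in {
		wg.Add(1)
		go func(i, v int) {
			defer wg.Done()
			out[i] = v * v
		}(i, v)
	}
	wg.Wait()
	fmt.Println(out) // [1 4 9 16 25 36 49 64]
}
```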

Techniques for parallelization include:

  • binary tree method,
  • list ranking,
  • pointer doubling,
  • parallel prefix (see the sketch following this list),
  • symmetry breaking.
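As a sketch of one technique from this list, the following implements a simple parallel prefix sum with the doubling scheme; spawning one goroutine per element and step is for illustration only, as real implementations batch the work:

```go
package main

import (
	"fmt"
	"sync"
)

// prefixSum computes the inclusive prefix sums of xs with the
// doubling scheme of the parallel-prefix technique: in step d each
// element adds its neighbor d positions to the left, so about
// log2(n) parallel steps suffice instead of one sequential pass.
func prefixSum(xs []int) []int {
	cur := append([]int(nil), xs...)
	for d := 1; d < len(cur); d *= 2 {
		next := make([]int, len(cur))
		var wg sync.WaitGroup
		for i := range cur {
			wg.Add(1)
			go func(i int) { // all positions of one step run concurrently
				defer wg.Done()
				next[i] = cur[i]
				if i >= d {
					next[i] += cur[i-d]
				}
			}(i)
		}
		wg.Wait()
		cur = next
	}
	return cur
}

func main() {
	fmt.Println(prefixSum([]int{3, 1, 4, 1, 5, 9, 2, 6}))
	// Output: [3 4 8 9 14 23 25 31]
}
```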

References

  1. Rob Pike: "Concurrency Is Not Parallelism", YouTube, October 20, 2013.
  2. Wolfgang Schröder-Preikschat: "System Programming: Process Synchronization: Non-sequentiality", Chair of Computer Science 4, November 6, 2013.
  3. B. Monien, J. Schulze, S. Grothklags: "Parallel Algorithms", lecture notes, University of Paderborn, 2001 (with extensive bibliography).
  4. Special lecture "Parallel Algorithms", University of Potsdam, Didactics of Computer Science.