Amdahl's law

Amdahl's law (named after Gene Amdahl, who presented it in 1967) is a model in computer science concerning the speedup of programs through parallel execution. According to Amdahl, the gain in speed is limited above all by the sequential part of the problem, since its execution time cannot be reduced by parallelization.

Description

A program can never be executed completely in parallel, because some parts, such as process initialization or memory management, run only once on one processor, or because the sequence of operations depends on earlier results. The program run is therefore broken down into sections that run either completely sequentially or completely in parallel, and each kind is combined into one group. Let $P$ be the portion of the runtime taken by the parallel sections of a program; then $(1 - P)$ is the sequential portion, and when executing on one core the total runtime (normalized to 1) results from the sum:

$$1 = (1 - P) + P$$

Notes on formula symbols

Symbol (per Amdahl) | Symbol (per DIN 1304 / ISO 80000) | Designated quantity
1                   | T                                 | Total runtime
1 − P               | –                                 | Runtime of a serial program section
P                   | –                                 | Runtime of a parallel program section
N                   | –                                 | Number of processors that are (or can be) used by the parallel program sections
o(N)                | –                                 | Runtime for synchronization overhead
S                   | –                                 | Speedup factor

Suppose a program takes 20 hours on a computer with one CPU, and one hour of this is executed sequentially (for example, initialization routines or memory allocation). The remaining 19 hours make up 95% of the total effort and can be distributed over any number of processors. Even with an infinite number of processors, the total computing time cannot fall below 1 hour, so the maximum achievable acceleration (speedup) is a factor of 20.
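
Written out with the example's numbers (a restatement, not a formula from Amdahl's paper), the runtime on $N$ processors and the resulting limit are:

$$T(N) = 1\,\mathrm{h} + \frac{19\,\mathrm{h}}{N}, \qquad S_{\max} = \frac{T(1)}{\lim_{N \to \infty} T(N)} = \frac{20\,\mathrm{h}}{1\,\mathrm{h}} = 20$$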

Let $P$ be the parallel portion of a program and $N$ the number of processors that can be used for the calculation. Then $(1 - P)$ is the sequential portion and $\frac{P}{N}$ the accelerated parallel portion. The total time, and thus the total acceleration, is made up of the sum of the individual parts, so for the speedup $S$:

$$S \leq \frac{1}{(1 - P) + \frac{P}{N}}$$
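
As a minimal sketch (function and variable names chosen here for illustration, not taken from the article), the bound can be evaluated numerically; with $P = 0.95$ as in the example above, the values approach but never exceed 20:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Upper bound on the speedup for parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Parallel fraction from the 20-hour example: 19 of 20 hours are parallelizable.
P = 0.95
for n in (1, 2, 4, 16, 256, 65536):
    print(f"N = {n:6d}: speedup <= {amdahl_speedup(P, n):.2f}")
# The printed bounds approach, but never reach, 1 / (1 - P) = 20.
```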

It can be observed that with an increasing number of processors, the acceleration depends more and more on the sequential part of the algorithm and on the communication between processors. Amdahl argued that parallelization incurs additional costs such as communication and synchronization between the processors. The inequality is therefore extended by a synchronization overhead $o(N)$, which accounts for these costs and which grows as $N$ grows:

$$S \leq \frac{1}{(1 - P) + o(N) + \frac{P}{N}}$$

As a result of introducing the new parameter, the curve no longer converges towards $\frac{1}{1 - P}$, but reaches a maximum and decreases again beyond it. This effect is also observed in practice: with a sufficiently large number of processors, the effort to distribute the problem, synchronize, and collect the results exceeds the computing effort that the additional cores take off.
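
One way to see this maximum is to plug an assumed cost model into the extended bound; the linear model $o(N) = c \cdot N$ and the constants below are illustrative choices, not values from the article:

```python
def speedup_with_overhead(p: float, n: int, c: float) -> float:
    """Amdahl bound extended by an assumed linear synchronization cost o(N) = c * N."""
    return 1.0 / ((1.0 - p) + c * n + p / n)

P = 0.95   # parallel fraction from the example above
C = 1e-4   # illustrative per-processor synchronization cost (an assumption)
best_n = max(range(1, 10_001), key=lambda n: speedup_with_overhead(P, n, C))
print(f"maximum speedup {speedup_with_overhead(P, best_n, C):.2f} at N = {best_n}")
# Beyond best_n the overhead term c * N dominates and the speedup declines again.
```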

Criticism

Amdahl's law is one of the most pessimistic estimates of the theoretical speedup, as it does not take some positive factors into account. According to Amdahl, the greatest possible acceleration is linear: ten times the number of cores yields at most ten times the computing power. However, every CPU also integrates a cache, so ten times the cores also means ten times the amount of fast memory. In the best case, the entire working set can then be kept in the caches instead of in the slower main memory, which enables a super-linear speedup, i.e. an acceleration beyond what the additional pure computing power would allow. However, this could be attributed to a flawed comparison, since the integrated cache is considered part of the processor module; a comparison that held the cache size fixed would not produce the super-linear speedup. Amdahl's reasoning about the maximum speedup remains valid in both cases.

Furthermore, Amdahl assumes in his calculation that the problem size remains unchanged; the goal is therefore to compute a fixed problem in the shortest possible time. If, instead, a larger problem is to be computed in the same time, the conditions are more favorable, since the parallel portion can then become very large, depending on the problem. The longer a computation takes, the less significant the time for initialization and final synchronization becomes.
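
This fixed-time view is the one formalized by Gustafson's law, mentioned under Miscellaneous below. As a brief sketch of that standard form (not part of the original article text), with $P$ taken as the parallel fraction of the runtime measured on the parallel machine, the scaled speedup grows linearly in $N$:

$$S_{\mathrm{scaled}}(N) = (1 - P) + P \cdot N$$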

Miscellaneous

Amdahl's law is also important in business administration, particularly in operations research with regard to resource allocation.

Gustafson's law is closely related but takes into account some aspects that Amdahl's law omits; Moore's law analyzes the speedup of computer chips in general.

Literature

  • Gene Amdahl: Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities. In: AFIPS Conference Proceedings. Vol. 30, 1967, pp. 483–485 (berkeley.edu [PDF]).
  • Thomas Rauber, Gudula Rünger: Parallel and Distributed Programming. Springer, Berlin/Heidelberg et al., 2000, ISBN 3-540-66009-7.
  • Günther Bengel, Christian Baun, Marcel Kunze et al.: Master's Course in Parallel and Distributed Systems. 1st edition, Vieweg + Teubner, Wiesbaden, 2008, ISBN 978-3-8348-0394-8.

Web links

Commons: Amdahl's Law - collection of images, videos and audio files