Multithreading


In computer science, multithreading (also called concurrency or multi-threaded execution) refers to the simultaneous (or quasi-simultaneous) processing of several threads (strands of execution) within a single process or task (an application program).

Unlike multitasking, in which several mutually independent programs are executed quasi-simultaneously in isolation from one another, the threads of an application program are not isolated from one another and can therefore cause errors through so-called race conditions, which must be avoided by means of synchronization.
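
The following is a minimal sketch (an illustrative example, not taken from the article) of such a race condition and its avoidance through synchronization, using standard C++ threads: two threads increment a shared counter, and only the mutex makes the result deterministic.

```cpp
// Two threads increment a shared counter. Without the mutex, the
// read-modify-write in `counter += 1` can interleave and updates are lost
// (a race condition); with the lock held, the result is always 200000.
#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;          // shared state, visible to both threads
std::mutex counter_mutex; // synchronization object protecting `counter`

void increment_many() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex); // remove to observe the race
        counter += 1;
    }
}

int main() {
    std::thread t1(increment_many);
    std::thread t2(increment_many);
    t1.join();
    t2.join();
    std::cout << "counter = " << counter << '\n'; // 200000 with synchronization
}
```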

Both multithreading and multitasking can be carried out on just one processor core or combined with multiprocessing (true parallel execution on several processor cores).

Software

Since multi-core processors have become increasingly widespread, purely software-based multithreading has been on the decline. As a rule, only one processor core is involved here: from the point of view of the hardware (and, in general, of the operating system) there is only a single branch of program execution. The apparent simultaneity is generated internally in the software, i.e. within the application program and controlled only by it, in reality through skilful sequencing (thread prioritization) and a logical abstraction of this multiplexing. Individual threads of a process/task can thereby react very quickly to time-critical events while other threads carry out lengthy calculations. For example, video editing software can compute a scene "in the background" (thread 1) while reacting to user input "in the foreground" (thread 2).
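
The video-editing scenario could look roughly like the following hypothetical sketch, in which render_scene and the input-handling loop are purely illustrative stand-ins: one thread performs a lengthy computation in the background while the foreground loop keeps reacting.

```cpp
// One thread renders a scene "in the background" while the main thread stays
// responsive to (simulated) user input "in the foreground".
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> render_done{false};

void render_scene() {                       // background thread (thread 1)
    std::this_thread::sleep_for(std::chrono::seconds(2)); // stands in for a long computation
    render_done = true;
}

int main() {
    std::thread background(render_scene);
    while (!render_done) {                  // foreground loop (thread 2) keeps reacting
        std::cout << "handling user input while rendering...\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
    background.join();
    std::cout << "scene rendered\n";
}
```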

Without further hardware support, multithreading slightly reduces overall system performance because of the overhead incurred by context switches; for operations that are intended to run independently of one another, however, the application can achieve considerable performance gains.

A distinction is also made as to whether multithreading is implemented only in the application program or whether it is supported by the operating system. If it is implemented purely on the application side, the programmer has full control over which thread receives which resources and when. However, since the entire program appears to the operating system as only a single execution sequence, it freezes completely as soon as it requests resources or services that are not currently available; this is the primary problem when user input on a graphical user interface and internal data processing have to be handled at the same time.

If multithreading is supported by the operating system, the operating system knows which parts of an application are not blocked, so the application can (at least partially) continue to run.
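
A sketch of this behaviour, assuming kernel-level threads as created by std::thread: one thread blocks inside the operating system while waiting for a line on standard input, yet the worker loop keeps being scheduled; with purely application-side threading, the whole program would freeze at the blocking call.

```cpp
// The input thread blocks in a system call, but the operating system continues
// to schedule the main thread, so the program as a whole keeps working.
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

int main() {
    std::thread input_thread([] {
        std::string line;
        std::getline(std::cin, line);       // blocking system call
        std::cout << "got input: " << line << '\n';
    });

    for (int i = 0; i < 5; ++i) {           // this work proceeds while the other thread blocks
        std::cout << "background work step " << i << '\n';
        std::this_thread::sleep_for(std::chrono::milliseconds(300));
    }

    input_thread.join();                    // waits until a line has actually been entered
}
```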

Hardware

Hardware-based multithreading differs from software-based multithreading in that the processor hardware supports the apparent or truly simultaneous processing of multiple threads. The software must make use of this capability: it has to be divided into threads accordingly and create/register these threads with the operating system.

Modern operating systems automatically distribute the load of an application that uses multithreading across all available processor cores; even at the unchanged maximum clock rate of a single core, this improves the responsiveness of the operating system and increases the performance of each individual application.
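
As an illustration (an assumed example, not a prescribed API of any operating system), a workload can be split into one chunk per hardware thread reported by std::thread::hardware_concurrency(); the operating system then places these threads on the available cores by itself.

```cpp
// Summing a large array is split into one chunk per hardware thread; the
// operating system distributes the worker threads across the processor cores.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<long long> data(10'000'000, 1);
    std::vector<long long> partial(n_threads, 0); // one partial sum per worker
    std::vector<std::thread> workers;

    const std::size_t chunk = data.size() / n_threads;
    for (std::size_t t = 0; t < n_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = (t + 1 == n_threads) ? data.size() : begin + chunk;
        workers.emplace_back([&, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
        });
    }
    for (auto& w : workers) w.join();

    std::cout << "sum = " << std::accumulate(partial.begin(), partial.end(), 0LL) << '\n';
}
```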

In symmetric multiprocessor systems, multithreading allows the computing work of a process/task to be distributed across several of the system's processors.

A precise distinction must be made between hardware multithreading and multiprocessing. With multithreading, the threads run in the same task context, which is much easier for the hardware to implement; for example, no second MMU/page table is required. With true multiprocessing, each execution sequence is a separate process; however, the distribution of threads onto physically available processor cores can also be classified as "truly concurrent".
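
The shared task context can be illustrated as follows (a hedged sketch, assuming nothing beyond standard C++ threads): both threads of the same process see the same address space, so data written by one is directly visible to the other without copying, whereas separate processes would need explicit inter-process communication such as pipes or shared-memory segments.

```cpp
// Both threads run inside the same process and therefore share the same
// address space (the same page table); no copying or IPC is needed.
#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::vector<int> shared_buffer;         // lives once in the shared address space

    std::thread producer([&] {
        for (int i = 0; i < 5; ++i) shared_buffer.push_back(i * i);
    });
    producer.join();                        // simple synchronization: wait before reading

    std::thread consumer([&] {              // sees the very same memory, no copy involved
        for (int v : shared_buffer) std::cout << v << ' ';
        std::cout << '\n';
    });
    consumer.join();
}
```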

With hardware-based multithreading, there is only marginal overhead during a context switch, provided that the number of software threads demanding computing time does not exceed the number of hardware-supported threads, so that each thread can always keep "its" CPU resources (in particular its register set).

In addition (both with hardware-side multithreading and with multitasking), processor functional units that are currently not being used by one computation can be used by another thread (commonly known as hyper-threading). By avoiding pipeline hazards, unproductive clock cycles are reduced and the overall performance per CPU or processor core increases.
