From Wikipedia, the free encyclopedia

The term multitasking [ˌmʌltiˈtɑːskɪŋ] (English) or multi-process operation refers to the ability of an operating system to execute several tasks (quasi-)concurrently. As a rule, the processor also provides supporting hardware structures for this. The various processes are activated alternately at such short intervals that an impression of simultaneity arises. Multitasking is therefore a variant of time-division multiplexing. If a computer has several CPU cores, so that it can actually execute several tasks at the same time, one speaks of multiprocessing. Modern computers use both methods in combination.

Purpose of multitasking

Multitasking is useful for meeting a variety of requirements, above all for optimizing processor utilization and for distributing resources in a balanced or priority-based way, depending on the objective.

The basic idea behind "optimizing the load" is that most of the computing time in an average computer cannot be used, because the machine must often wait for relatively slow external events (for example the user's next keystroke). If only one process were running (say, a word processor waiting for input), this waiting time would be lost entirely (see "busy waiting"). With multitasking, however, the waiting time of one process can be used by other processes.

If, on the other hand, a computer or its computing time is largely exhausted, for example by individual compute-intensive processes, several users or processes can still each receive a share of computing time instead of having to wait for another process to finish. This is particularly beneficial for interactivity.

Since the system can take the priorities of the various processes into account, an appropriate weighting is possible depending on the objective. For example, a server can give preference to the services it is meant to provide while assigning only low priority to direct user interaction. A desktop PC, conversely, will primarily favor input and output from and to the user and defer background processes in their favor.

Precursor and evolution

Multiprogramming and TSR programs

The forerunner of multitasking is multiprogramming, whose aim was higher CPU utilization, in contrast to the sequential execution of tasks in batch processing. With multiprogramming, a program's context is switched when it accesses a peripheral device, since such access inevitably entails waiting. Initial approaches are based on a concept by Christopher Strachey from 1959. Such concepts could only be put into practice, however, once sufficiently powerful hardware was available whose interrupt control made it possible to develop TSR programs.

General procedure

The technical process for multitasking is basically always the same.

An important basic requirement for multitasking is that a process interrupted in favor of another need not "know" anything about that other process (or possibly several others). This is usually achieved by giving each process its own so-called process context, which describes its state. A process only ever changes its own process context, never that of another process.

Usually the entire process context (the state of the process) is saved when it is interrupted, e.g. on the stack. It remains stored until the process in question is to be given computing time again. Immediately before the process becomes active again, its saved state is reloaded, so that to the process it appears as if it had never been interrupted, regardless of whether, how many, and what kind of processes were executed in the meantime. This switch between individual processes is called a task switch.

In this way, a process can find its defined environment again when it continues to run after the interruption, even if other processes have been executed in the meantime.
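The save-and-restore cycle described above can be illustrated with a small sketch. This is a deliberately simplified model, not a kernel implementation: the "process context" here is just a name, a program counter, and a register dictionary, and a "task switch" is nothing more than ceasing to work with one context and resuming another later.

```python
from dataclasses import dataclass, field

# Hypothetical, highly simplified "process context": in a real kernel this
# would hold CPU register values, the program counter, stack pointer, etc.
@dataclass
class ProcessContext:
    name: str
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def run_for_a_while(ctx: ProcessContext, steps: int) -> None:
    # Simulate the process doing work: it only ever changes its OWN context.
    for _ in range(steps):
        ctx.program_counter += 1
        ctx.registers["acc"] = ctx.registers.get("acc", 0) + ctx.program_counter

# Two independent contexts; a "task switch" simply means: stop touching one
# context, work with the other, and later resume the first one unchanged.
a = ProcessContext("A")
b = ProcessContext("B")

run_for_a_while(a, 3)   # A runs ...
run_for_a_while(b, 5)   # ... A is "asleep" while B runs in the meantime
run_for_a_while(a, 2)   # A resumes exactly where it left off

print(a.program_counter)  # 5 -- A never noticed that B ran in between
```

Because each process writes only to its own context, the interleaving is invisible to the processes themselves, which is precisely the point of the task-switch mechanism.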

In cooperative multitasking, changing tasks is very similar to calling procedures or functions in procedural programming .

Cooperative multitasking

In cooperative multitasking, multitasking is implemented by a central process-management component in the system kernel: a simple, so-called scheduler. The scheduler saves the process context of the task that has just been interrupted, selects the next process that is to receive computing time, sets up its process context, and then hands the processor over to this new process. The scheduler can keep lists of differently prioritized tasks and call low-priority tasks correspondingly seldom. The computing time a task has already consumed can also be taken into account. As a rule, internal operating-system tasks are completed before a new task receives the processor. It is up to each process to decide when to return control to the kernel; as a rule, at least every service request to the operating system is associated with a task switch.
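The cooperative scheme can be sketched with Python generators, where each `yield` corresponds to a task voluntarily returning control to the scheduler. This is an illustrative model, not how a real kernel is written; the generator's saved execution state plays the role of the process context.

```python
from collections import deque

def task(name, steps):
    # A cooperative task: it does a bit of work, then VOLUNTARILY
    # yields control back to the scheduler.
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # cooperative point: control returns to the scheduler

def scheduler(tasks):
    # Simple round robin over the ready tasks; next() resumes a task
    # exactly where its last yield left off.
    ready = deque(tasks)
    while ready:
        current = ready.popleft()
        try:
            next(current)          # run the task until its next yield
            ready.append(current)  # still runnable: back into the queue
        except StopIteration:
            pass                   # task finished; drop it

scheduler([task("A", 2), task("B", 3)])
```

The weakness discussed below is visible in the model: if a task never reaches a `yield` (an infinite loop without a cooperative point), the scheduler never regains control.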

The advantage of this method is that many system functions (e.g. output) do not have to be reentrant and therefore do not have to be synchronized, which is a considerable simplification for the manufacturer. (Interrupt routines, however, always have to solve this problem.) Like the TSR concept, this form of multitasking has the disadvantage that programs that do not cooperate bring the rest of the system to a standstill. Reasons for such program behavior can be:

  • errors contained in the program;
  • a malfunctioning of the system intended by the programmer;
  • deliberate (and advantageous) "sole control" by the program, for example for a time-critical application such as a computer game.

The concept was used, for example, in 16-bit Windows (up to version 3.x) and in the classic Mac OS (up to version 9).

This form of multitasking is basically fast and resource-saving, and technically relatively easy to implement. It was never a viable option for multi-user systems, however, and was accordingly never used under Unix, for example, since one user could block all others with an uncooperative program. The newer Windows operating systems of the NT line and Mac OS X likewise do not use this technique, which is now considered obsolete. They sometimes support similar functionality to a limited extent, for example as user-mode threads, but in any case only embedded in preemptive multitasking.

Preemptive multitasking

The basis of the method used as standard today is preemptive multitasking:

The currently running process is interrupted after a certain processing time (its "time slice") by a hardware timer that generates an interrupt. The interrupt service routine (part of the operating system) suspends the process and saves its process context; the process is "put to sleep". Control then passes to the scheduler: the operating system has regained control. Unless an operating-system-specific action takes precedence, the scheduler now selects a process that is ready to run (possibly the same one that was just interrupted), restores its process context, restarts the hardware timer, and hands the processor over to that process.

Usually each process is assigned an "absolute" time slice (all time slices have the same fixed duration, typically a few milliseconds); alternatively, it is assigned a certain percentage of a defined time unit (e.g. depending on its priority), which it may use at most (so the length of the time slice is determined anew each time). If the process needs an operating-system function before its time slice has expired, it is stopped immediately and marked "not ready to run" until the operating system has provided the requested service. Only processes marked "ready to run" receive allocations of processor time.
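The fixed-time-slice variant can be simulated in a few lines. In this sketch (workloads and the slice length are invented for illustration), the "timer interrupt" is modeled by the scheduler simply cutting each process off after its slice and requeuing it:

```python
from collections import deque

TIME_SLICE = 3  # hypothetical fixed slice length, in abstract work units

def run_round_robin(workloads):
    # workloads: mapping of process name -> total work units needed
    ready = deque(workloads.items())
    schedule = []  # record of (process, units run) per dispatch
    while ready:
        name, remaining = ready.popleft()
        ran = min(TIME_SLICE, remaining)  # "timer interrupt" ends the slice
        schedule.append((name, ran))
        if remaining - ran > 0:
            ready.append((name, remaining - ran))  # preempted, requeued
    return schedule

# A needs 7 units, B needs 4: they are interleaved in 3-unit slices.
print(run_round_robin({"A": 7, "B": 4}))
```

Unlike in the cooperative model, a process here has no say in when it loses the CPU; preemption is enforced from outside after each slice.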

Choosing the next process

A popular implementation of preemptive multitasking uses a priority queue combined with the round-robin scheduling strategy. There is also processor allocation based purely on task priority, especially in real-time systems such as MicroC/OS-II. This plays only a subordinate role for multitasking, however, since preemptive multitasking describes the kernel's (or processor's) control over the processes.
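A minimal sketch of the priority-queue idea (the priorities and process names are invented for illustration): the scheduler always picks the ready process with the highest priority, and a running counter breaks ties so that equal-priority processes take turns in round-robin fashion.

```python
import heapq
from itertools import count

class ReadyQueue:
    # Toy ready queue: lower number = higher priority; the counter breaks
    # ties, so equal-priority processes are served in arrival order.
    def __init__(self):
        self._heap = []
        self._order = count()

    def push(self, priority, name):
        heapq.heappush(self._heap, (priority, next(self._order), name))

    def pop_next(self):
        priority, _, name = heapq.heappop(self._heap)
        return name

rq = ReadyQueue()
rq.push(1, "kernel_worker")   # high priority
rq.push(5, "background_job")  # low priority
rq.push(1, "ui_thread")       # same high priority, queued later

print(rq.pop_next())  # kernel_worker
print(rq.pop_next())  # ui_thread
print(rq.pop_next())  # background_job
```

In a real scheduler, a preempted but still runnable process would be pushed back into the queue after its slice, so equal-priority processes alternate.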

Necessary hardware support

On the hardware side, in contrast to the cooperative variant (see TSR programs as a forerunner), preemptive multitasking requires an interrupt generator (usually a timer) for the processor, since the system otherwise has no way of taking control of the processor away from processes. The timer sends a signal (interrupt) to the CPU regularly, or after a set time has elapsed, which causes it to interrupt the currently running task and execute the operating system's interrupt service routine.

Memory protection

Modern operating systems also employ memory protection, which prevents different processes in memory from influencing or even overwriting one another. In the PC, this protective function is performed by the memory management unit (MMU), which enables virtualization of the main memory as well as various privilege levels (rings) or modes (kernel mode versus user mode), and thus allows the operating system to strictly encapsulate the various parallel processes within the computer. In PCs, the MMU was first used in computers with Intel's i286 processors. This technique is not strictly necessary for multitasking in the narrower sense, however.


The first widespread computer systems to master preemptive multitasking were the Sinclair QL (1984) and the Commodore Amiga (1985) in the home sector (on the Amiga without memory protection or privilege levels, and thus "leverable"), as well as mainframe systems operated under Unix. Windows offered partially preemptive multitasking for the first time in the 3.x versions, but only for DOS programs and only when they were run on a system with an i386-compatible processor, since hardware virtualization is possible in such cases. Modern operating systems that fully support preemptive multitasking include Windows NT (and all its successors), QNX, BeOS, and all Unix-based systems such as Linux, HP-UX, Solaris, Mac OS X, and many more.

Differentiation from time sharing

In addition, a distinction must be made between time slicing and time sharing; the latter allows several users, or their processes (e.g. on database servers or mainframes with terminal access), to share the available computing time automatically and proportionally. Whereas in multitasking several processes of a single user can share the computing time, in time sharing the time of one process is distributed among several users.

Preemptible multitasking

A special form of preemptive multitasking is the lesser-known preemptible multitasking, which was first implemented in the OS/2 operating system. Many of the operating system's own kernel routines are run as threads under the scheduler; application processes can therefore receive time slices even while an operating-system action is actually in progress (with the exception of atomic OS operations). The concept enables faster response times. With version 2.6 it also found its way into the Linux kernel.

Literature


  • C. Strachey: Time Sharing in Large Fast Computers. Proceedings of the International Conference on Information Processing. UNESCO, 1959.
  • Klaus D. Thies: Real-Time Multitasking: The Kernel. An Elementary Introduction to Parallel System Programming. Shaker-Verlag, Aachen, ISBN 978-3832248673.

Web links

Wiktionary: Multitasking  - explanations of meanings, word origins, synonyms, translations

References

  1. From the article Timesharing, in: Meyers Taschenlexikon in 24 volumes (BI Taschenbuchverlag); Meyers explicitly mentions time slices in multi-user operation (the MS Encarta World Atlas likewise refers to multi-user operation under "time sharing").
  2. Preemptible Kernel Linux 2.6, accessed September 14, 2012.
  3. Information on (Memento from August 15, 2009 in the Internet Archive).