Process scheduler

From Wikipedia, the free encyclopedia

A process scheduler (scheduler = control program; from schedule, "time plan") is arbitration logic that regulates the temporal execution of several processes in operating systems. Process schedulers can be roughly divided into interrupting (preemptive) and non-interrupting (non-preemptive, also called cooperative). A non-preemptive scheduler lets a process, once it has been allocated the CPU, run until it releases the CPU on its own or until it blocks. A preemptive scheduler allocates the CPU only for a certain period of time from the start and then withdraws it from the process. A further distinction is made between "work-conserving" and "non-work-conserving" strategies: a scheduling strategy is work-conserving if it never leaves the CPU idle while at least one process is ready to run. Different classes of systems place different requirements on the scheduler:

  1. Batch processing systems
  2. Interactive systems
  3. Real-time systems

In batch processing systems, scheduling is very simple: incoming jobs are placed in a queue, and each time a job has been processed, the next one is taken from the queue (queue manager).
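The queue manager described above can be sketched in a few lines (a minimal illustration; the job names are invented):

```python
from collections import deque

# Hypothetical job queue for a batch system: jobs are appended as they
# arrive and are always dispatched from the front, in arrival order.
jobs = deque(["payroll", "backup", "report"])

completed = []
while jobs:
    job = jobs.popleft()   # next job, strictly in arrival order
    completed.append(job)  # stand-in for actually running the job

print(completed)  # arrival order is preserved: ['payroll', 'backup', 'report']
```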

Interactive systems have different requirements: the user values short response times. For example, when typing in a text editor, the entered text should appear immediately.

A real-time system must guarantee that a process completes a task within a specified period of time. With hard real-time requirements this is guaranteed in 100% of all cases, while with soft requirements the time limit may be exceeded in a small percentage of cases.

Typical desktop PCs are interactive systems on which processes can occasionally run as so-called background processes with lower priority.


The CPU should be allocated to the processes in the best possible way, whereby (depending on the type of system) different goals can be pursued:


  • Fairness: No process should have to wait a disproportionately long time while another is preferred.
  • Balance: The CPU should be allocated to the processes in such a way that other resources, such as the hard disk, the network interface, etc., are also kept busy.
  • Compliance with the system rules

Batch processing systems

  • CPU utilization: The CPU should be fully utilized at all times. We don't want the CPU to idle just because a process is waiting for data from the hard drive.
  • Throughput: The number of completed tasks per unit of time should be maximized. This gives a similar strategy to utilization, but looks more at the actual result.
  • Short turnaround time: The time that elapses from the arrival of a process until its completion should be minimized.
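The effect of processing order on turnaround time can be illustrated with a small calculation (the burst times are invented, and all jobs are assumed to arrive at time 0):

```python
def mean_turnaround_fcfs(run_times):
    """Mean turnaround time when jobs run to completion in the given order.

    With all arrivals at t=0, a job's turnaround time equals its finish time.
    """
    t, total = 0, 0
    for r in run_times:
        t += r        # this job finishes at time t
        total += t    # turnaround = finish time - arrival time (0 here)
    return total / len(run_times)

# One long job ahead of two short ones vs. shortest-first:
print(mean_turnaround_fcfs([24, 3, 3]))  # 27.0
print(mean_turnaround_fcfs([3, 3, 24]))  # 13.0
```

Running the short jobs first cuts the mean turnaround time by more than half, which is the intuition behind the Shortest-Job-Next strategy described below.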

Interactive systems (dialogue systems)

  • Fast response times: The waiting times of the user (or other systems) should be minimized. Processes that require interaction with the user should therefore be preferred to those that can take place in the background.
  • Proportionality: The response times of various processes should meet the expectations of the user. If the user considers an action (such as closing an application) to be simple, it should also be executed quickly.

Real-time systems

  • Meet deadlines
  • Predictability: Important for multimedia applications (otherwise there is a risk of deterioration in the sound quality, for example) or safety-critical applications such as control devices for airbags, cruise control in motor vehicles, or autopilots in aircraft.


The biggest problem for the scheduler is that the resources each process will require are not known in advance. Optimal planning is therefore generally not possible; instead, the scheduler must react dynamically to changing demands. Depending on the scheduler, different allocation strategies can be used, among others:

  • First In - First Out (FIFO), First-Come First-Served (FCFS): All processes are served in the order in which they arrive. The CPU is only reassigned when the current process starts to wait or has finished executing. This strategy achieves good utilization of the CPU, but not of resources with long request latencies such as input/output or mass storage. It is also poorly suited for multi-user systems, since individual users can be locked out for long periods (namely by long-running processes of other users).
  • Shortest-Job-Next (SJN), Shortest Job First (SJF), Shortest Processing Time (SPT): Another method that is not suitable for multi-user systems. It can be used in cases where the required computing time of individual tasks can be predicted well from empirical values. One disadvantage is that large processes may never be allocated the CPU if shorter jobs keep arriving ahead of them. If processes can be interrupted, so that a process switch occurs when a newly arriving process has a shorter execution time than the one currently running, this is called Shortest Remaining Time (SRT) or Shortest Remaining Processing Time (SRPT).
  • Earliest Due Date (EDD): With this strategy, the processes with the earliest deadline are executed first. Static deadlines and the simultaneous arrival of independent tasks are prerequisites. This non-preemptive method is optimal for minimizing the maximum lateness. If processes can be interrupted, this is called deadline scheduling or Earliest Deadline First (EDF). This strategy is found mainly in real-time systems, since it makes it possible to guarantee a defined response time for certain processes.
  • Priority scheduling : This strategy assigns a priority to each process. Processing then takes place in the order of the priorities.
    • Rate Monotonic Scheduling (RMS): The priority is calculated from the period length (processes with shorter periods have higher priority).
    • Deadline Monotonic Scheduling (DMS): The priority is calculated from the relative deadline (processes with shorter relative deadlines have higher priority).
    • Several processes can also be given the same priority; they are then executed in the order in which they arrive, or alternate under a subordinate time-slice scheme within the same priority level (e.g. multilevel feedback queue scheduling or shortest elapsed time (SET)).
    • The priorities can also be dynamic, with the priority of a process increasing over time so that even low-priority processes can be processed at some point and are not constantly displaced by higher-priority processes.
  • Round robin, time-slice scheduling: The CPU is assigned to a process for a specific (short) period of time, after which the process is put back at the end of the queue. If the individual time slices differ in size, this is called Weighted Round Robin (WRR).
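The Earliest Deadline First idea can be sketched as follows. This is a minimal illustration with invented task names and deadlines; preemption and newly arriving tasks are ignored:

```python
import heapq

def edf_order(tasks):
    """Return task names in Earliest-Deadline-First execution order.

    tasks: list of (absolute_deadline, name) pairs, all ready to run.
    """
    heap = list(tasks)
    heapq.heapify(heap)  # min-heap keyed on the deadline
    order = []
    while heap:
        _deadline, name = heapq.heappop(heap)
        order.append(name)  # run the task whose deadline is closest
    return order

# Hypothetical ready tasks with absolute deadlines:
print(edf_order([(50, "telemetry"), (20, "airbag"), (35, "display")]))
# ['airbag', 'display', 'telemetry']
```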
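The dynamic-priority (aging) idea can be sketched as a toy model: every process that has to wait gains priority, so no process starves. Process names and the boost value are invented; a real scheduler would typically also decay the priority of the process that just ran:

```python
def pick_next(processes, age_boost=1):
    """Pick the highest-priority process, then age all waiting ones.

    processes: dict mapping name -> effective priority (higher wins).
    """
    chosen = max(processes, key=processes.get)
    for name in processes:
        if name != chosen:
            processes[name] += age_boost  # waiting processes gain priority
    return chosen

procs = {"editor": 5, "indexer": 1}
schedule = [pick_next(procs) for _ in range(6)]
print(schedule)
# ['editor', 'editor', 'editor', 'editor', 'editor', 'indexer']
```

Despite starting with a much lower priority, the indexer is eventually selected once its aged priority overtakes the editor's.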
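A round-robin time slice can be simulated in a few lines (burst times and the quantum are invented; context-switch overhead is ignored):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round robin; return (name, finish_time) in completion order."""
    ready = deque(burst_times.items())  # (name, remaining time), arrival order
    t, finished = 0, []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        t += run
        remaining -= run
        if remaining:
            ready.append((name, remaining))  # not done: back of the queue
        else:
            finished.append((name, t))
    return finished

# Hypothetical CPU bursts with a quantum of 4 time units:
print(round_robin({"A": 10, "B": 5, "C": 2}, 4))
# [('C', 10), ('B', 15), ('A', 17)]
```

Note how the short job C finishes early even though it arrived last, without the scheduler needing to know any burst time in advance.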
