Memory management

from Wikipedia, the free encyclopedia

Memory management is the part of an operating system that manages (parts of) the storage hierarchy. In particular, it should enable efficient and convenient access to the physical memory (main memory) of a computer; in this context one also speaks of main memory management. The management of the lowest levels of the memory hierarchy, such as cache memory, is usually carried out by the hardware.

One of the key functions of an operating system in memory management is usually to provide a virtual (logical) address space for every process. This can be thought of as a set of memory addresses that a process can access. This address space is decoupled from the physical memory of the computer; it can be either larger or smaller than it. The virtual (logical) addresses do not go directly to the memory bus, but to the memory management unit (MMU), which maps the virtual addresses to physical addresses. The operating system normally loads part of the address space into main memory, while another part remains on the hard disk. If necessary, program parts are shifted back and forth between the two memories.

Different techniques are used to manage memory, depending on how the computer is used. In multi-user/multiprogramming operating systems, virtual memory management is mostly used today, with various optimizations of the demand paging approach.

Storage hierarchy

Different storage technologies are normally used in a computer system. The shorter the access times of a type of memory, the more expensive it is and the scarcer it will be. A compromise between speed, cost, and persistence is therefore sought by organizing memory as a hierarchy and using clever memory management to exploit the advantages of the various components while circumventing their disadvantages. There is thus a memory with several levels that are at different "distances" from the processor:

  • The internal registers of the CPU form the top layer of the memory hierarchy. CPU registers are part of the CPU and therefore hardly cause any access delays. Their storage capacity is typically 32 × 32 bits on a 32-bit processor and 64 × 64 bits on a 64-bit processor. CPU registers are controlled by the software.
  • The cache memory is a relatively small, high-speed buffer memory that is directly accessible from the processor and holds copies of the main memory areas that are currently most frequently used. Since programs often work for a relatively long time on a narrowly limited amount of data and code, the cache is very efficient despite its small size. Today's CPUs often integrate two or more cache levels, which makes access even faster. Cache memories are usually controlled by the hardware.
Main memory in the form of an IC on an SDRAM module
  • The working memory or main memory (English main memory or RAM = random access memory ) is the workhorse of the memory system. It contains the programs or program parts to be executed and the data required for them. All requests from the processor that cannot be answered from the cache are forwarded to the main memory. The most common RAMs are "volatile", which means that the stored data is lost when the power supply is switched off.
  • Mass storage media can permanently store large amounts of data. Magnetic hard drives are around 100 times cheaper per bit than RAM and usually have 100 times more capacity. However, random access takes about 100,000 times longer. In addition, data on the disk cannot be processed directly by the processor, but must first be loaded into the main memory. The disk storage can be used as a supplement or expansion of the main storage and is therefore generally referred to as background storage or secondary storage .
  • In addition, there are removable storage media (e.g. DVD , CD , USB stick , floppy disks , magnetic tape ).

The memory manager of the operating system manages this memory hierarchy. It keeps track of which memory areas are currently being used, allocates memory to processes when they need it and then releases it again.

The so-called locality property can be exploited at all levels of the memory hierarchy. Temporal locality means that address areas which have just been accessed are likely to be used again in the near future; spatial locality means that after an access, the next access is likely to target an address in the immediate vicinity. Over time, memory addresses that are very close to one another are thus addressed again and again. This can be exploited by bringing neighboring address areas to the next hierarchy level whenever memory is accessed.
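The effect of spatial locality can be sketched with a toy model (not taken from any real system; block size and prefetch distance are illustrative assumptions): on a miss, the neighboring block is fetched along with the missed one, so a sequential access pattern hits the faster level most of the time.

```python
# Toy cache that exploits spatial locality by also fetching the neighboring
# block of a missed address ("prefetching"). Parameters are illustrative.

class PrefetchingCache:
    def __init__(self, block_size=4, prefetch=1):
        self.block_size = block_size   # addresses per block
        self.prefetch = prefetch       # extra neighboring blocks fetched on a miss
        self.blocks = set()            # block numbers currently cached
        self.misses = 0

    def access(self, address):
        block = address // self.block_size
        if block not in self.blocks:
            self.misses += 1
            # spatial locality: bring in the missed block and its neighbors
            for b in range(block, block + self.prefetch + 1):
                self.blocks.add(b)
        return block

cache = PrefetchingCache()
for addr in range(16):    # sequential accesses: strong spatial locality
    cache.access(addr)
print(cache.misses)       # → 2 (instead of up to 16 without any caching)
```

With prefetching, 16 sequential accesses cause only two fetches from the slower level; a random access pattern would not benefit in the same way.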

Direct memory management

Block diagram of a processor chip card

The simplest method is to use no memory abstraction at all. Every program simply sees the physical memory: a set of addresses from 0 up to a certain maximum, each address corresponding to a cell containing a certain number of bits.

Direct addressing of physical memory is no longer used today on mainframes, minicomputers, or PCs. In contrast, many embedded systems and smart cards still have no memory abstraction: "Devices such as radios, washing machines, and microwaves are now full of software (in ROM), and in most cases the software addresses absolute memory. This works here because all programs are known in advance and users are not allowed to simply run their own software on their toaster."

Memory management with monoprogramming

In monoprogramming, main memory is allocated only to the currently active program and the operating system. Only one process runs at a time; this process has exclusive access to physical memory and can address it directly. Memory management in these computers is very simple and consists of making the requested address accessible via the data bus.

Even in this simplest storage model, several organizational options are still possible. In this way, the memory can be divided into different parts, for example:

  • The operating system is at the bottom of the memory in RAM , the user program above. This model was previously used in mainframes and minicomputers, but is rarely used today.
  • The operating system is at the top of memory, in ROM. This model is used in some PDAs and embedded systems.
  • The device drivers are at the top, in ROM, and the rest of the system below, in RAM. This model was used by early PCs (e.g. under MS-DOS).

With the help of swapping, however, several programs can be run quasi-simultaneously even without memory abstraction: the operating system stores the entire contents of main memory on a hard disk, then fetches the next program from the hard disk and executes it in main memory.

Memory management with fixed partitions

An IBM System 360/20 in the Deutsches Museum, Munich

Even with direct memory management, several programs can be run at the same time (multiprogramming mode) by dividing main memory into fixed parts, so-called partitions. Exactly one program runs in each partition. This technique was used, for example, in the IBM OS/360 operating system.

With this technique, however, the user programs had to be prevented from conflicting with one another and with the operating system itself. One program could, for example, overwrite a value at a memory address where another program had previously stored data, causing both programs to crash. Therefore each memory block was given a protection key, which was also held in a special CPU register. If a process tried to access memory whose protection key differed from its own PSW key (program status word key), a trap to the operating system was triggered.
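The protection-key check can be illustrated with a small, hypothetical sketch (the block size, the key values, and the use of a MemoryError as the "trap" are illustrative assumptions, not the actual OS/360 mechanism):

```python
# Hypothetical sketch of a protection-key check: every memory block carries
# a key, and an access whose PSW key differs raises a "trap".

BLOCK_SIZE = 2048
block_keys = {0: 0, 1: 7, 2: 7, 3: 5}   # block number -> protection key

def check_access(address, psw_key):
    block = address // BLOCK_SIZE
    if block_keys[block] != psw_key:
        # the trap: control passes to the operating system
        raise MemoryError(f"protection violation in block {block}")
    return True

check_access(3000, 7)       # block 1 carries key 7: access allowed
try:
    check_access(7000, 7)   # block 3 carries key 5: trap
except MemoryError as e:
    print(e)                # → protection violation in block 3
```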


Swapping

In practice, the total amount of memory required by all running processes is often much larger than the actual physical memory, so not all processes can be kept in memory all the time. A simple strategy for counteracting memory overload is so-called swapping, which usually refers to moving data between main memory and hard disk. A process is loaded completely into main memory, allowed to run for a certain time, and then transferred back to the hard disk. Since hard disk space can also be used to service memory requests from programs, requests that exceed the available RAM can then be served via swap space.

The main difference to fixed partitions is that the number of processes, the size of their memory areas, and their location in memory can change. A process is loaded wherever there is space. Processes can also grow dynamically, since programs, for example, reserve memory dynamically on a heap; this raises the question of how much memory to reserve for a process. When a process becomes larger than its reserved area, a problem arises. If there is a memory gap adjacent to the growing process, it can simply "grow into" it; otherwise the growing process must either be moved into a gap large enough for it, or processes must be swapped out to create a suitable gap.

Swapping can leave unused gaps in main memory (fragmentation). Such gaps can be combined by moving all processes as far down in memory as possible, which merges all gaps into a single large gap in the upper part of memory. This technique is called memory compaction. It is, however, very time-consuming, which is why it is usually not used.
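The compaction step can be sketched as follows (a simplified model in which each process in memory is given as a (start, length) pair and the cost of copying is ignored):

```python
# Sketch of memory compaction: all processes are moved down as far as
# possible, merging the gaps into one large gap at the top of memory.

def compact(segments, memory_size):
    """segments: list of (start, length) pairs for the processes in memory.
    Returns the relocated segments plus the single remaining gap."""
    next_free = 0
    relocated = []
    for start, length in sorted(segments):      # lowest process first
        relocated.append((next_free, length))   # copy it down to next_free
        next_free += length
    gap = (next_free, memory_size - next_free)
    return relocated, gap

procs = [(10, 5), (30, 8), (50, 2)]   # fragmented layout with three gaps
print(compact(procs, 64))
# → ([(0, 5), (5, 8), (13, 2)], (15, 49))
```

Every byte below the final gap may have to be copied, which is why real systems rarely perform compaction.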

Memory management with a bitmap: the main memory is divided into allocation units of a fixed size. Each unit corresponds to one bit in the bitmap, which records whether the allocation unit is occupied (= 1) or free (= 0).

If memory is allocated dynamically, the operating system must manage it. There are two different techniques for doing this: bitmaps and linked lists. In memory management with bitmaps, the entire memory is divided into allocation units of a fixed size, and a bitmap records for each unit whether it is occupied (= 1) or free (= 0). In a linked list (segment list) of memory segments, each entry defines a gap (L) or a process (P) and contains the start address, the length, and a pointer to the next entry. The list is sorted by start address, which makes it relatively easy to update when a process terminates.
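The bitmap technique can be sketched in a few lines (a minimal model: allocating a process of k units means searching the bitmap for a run of k zero bits):

```python
# Minimal sketch of memory management with a bitmap: memory is divided into
# fixed-size allocation units; 1 = occupied, 0 = free.

def allocate(bitmap, k):
    """Find k consecutive free units, mark them occupied, and return the
    index of the first one, or None if no such run exists."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                for j in range(run_start, run_start + k):
                    bitmap[j] = 1
                return run_start
        else:
            run_len = 0
    return None

def free(bitmap, start, k):
    """Release k units starting at index start."""
    for j in range(start, start + k):
        bitmap[j] = 0

memory = [1, 1, 0, 0, 0, 1, 0, 0]   # 8 allocation units, some in use
print(allocate(memory, 3))           # → 2 (units 2..4 were free)
print(allocate(memory, 3))           # → None (only a run of 2 remains)
```

The search for a run of zero bits is the main drawback of bitmaps: it takes time proportional to the size of memory, which is one reason linked segment lists are attractive.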

In contrast to swapping, with virtual memory programs can run even if only part of them is in main memory; programs larger than the available memory can thus also be executed.

Video: swapping complete processes out and back in

Virtual memory management

The concept of virtual memory management was first described by John Fotheringham in 1961. It allows the program code, the data, and the stack to be larger than the actually available main memory; with real memory management, by contrast, a process can be at most as large as main memory.

Virtual address space

The memory addresses generated by a program are called virtual addresses and form the virtual address space. The virtual addresses do not go directly to the memory bus, but to the memory management unit (MMU), which maps them to physical addresses.

Schematic process for converting a virtual into a physical address

The basic idea behind virtual memory management is that a program's address space is broken down into units that are mapped to physical memory, whereby not all of them need to be in physical memory at once. If the program accesses a part of the address space that is currently in physical memory, the hardware can quickly perform the necessary mapping. If the program accesses a part that is not in physical memory, however, the operating system is alerted to fetch the missing piece and re-execute the failed instruction.

Virtual and physical storage

Often a system has much more virtual memory than physical memory. Specifically, the installed RAM determines the size of the physical memory, while the virtual address space depends on the instruction-set architecture. With a 32-bit processor, a maximum of 2^32 bytes (4 GB) of memory can be addressed; a 64-bit system can address 2^64 bytes (16 exabytes), even if, for example, only 512 MB of RAM is actually installed.
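The figures above can be checked directly: the virtual address space of an n-bit architecture spans 2^n bytes, independent of the RAM actually installed.

```python
# Address-space sizes follow from the width of the virtual address alone.

GiB = 2**30
print(2**32 // GiB)      # → 4   (32-bit: 4 GiB of addressable memory)
print(2**64 // 2**60)    # → 16  (64-bit: 16 EiB, with 1 EiB = 2**60 bytes)
```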

The concept of virtual memory management works particularly well in systems with multiprogramming when individual parts of several programs are in memory at the same time: While a program is waiting for parts to be read in, the CPU can be assigned to another process.

To organize the virtual memory there is on the one hand the segment-oriented approach in which the memory is divided into units of different sizes, and on the other hand the page-oriented approach in which all memory units are of the same length.


Paging

Paging with pages, page table, and page frames

Paging (from English page, "memory page") refers to memory management by page addressing. With paging, the virtual address space is divided into equal-sized pieces called pages. The physical address space is subdivided in the same way; the corresponding units in physical memory are called page frames (or tiles). Pages and page frames are usually the same size. In real systems there are page sizes between 512 bytes and 4 MB; some architectures also support pages of 1 GB. Pages of 4 KB are typical.
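Because pages have a power-of-two size, a virtual address splits cleanly into a page number (upper bits) and an offset within the page (lower bits). A minimal sketch, assuming 4 KiB pages:

```python
# With a page size of 4 KiB (2**12 bytes), the lower 12 bits of a virtual
# address are the offset within the page; the remaining upper bits are the
# page number.

PAGE_SIZE = 4096
OFFSET_BITS = 12

def split(virtual_address):
    page = virtual_address >> OFFSET_BITS        # equivalent to // PAGE_SIZE
    offset = virtual_address & (PAGE_SIZE - 1)   # equivalent to % PAGE_SIZE
    return page, offset

print(split(8196))   # → (2, 4): byte 4 on page 2
```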

A page table containing the mapping between virtual and physical addresses is used to convert virtual addresses to physical ones. If a program addresses a virtual address that is not assigned to a physical address, a page fault is triggered: a synchronous program interruption (trap) that passes control to the operating system. The operating system then selects a rarely used page frame, writes its contents back to the hard disk, loads the requested page into the freed page frame, changes the mapping table, and executes the interrupted instruction again.
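This translate-or-fault cycle can be modeled in a short sketch. Note the simplifications: the "disk" is omitted entirely, and eviction uses plain FIFO order rather than the "rarely used" heuristics of real systems.

```python
# Toy model of address translation with a page table: an unmapped page
# triggers a page fault, whereupon a frame is freed (FIFO eviction, a
# deliberately crude stand-in for real replacement policies) and the
# table is updated before the access completes.

from collections import OrderedDict

PAGE_SIZE = 4096

class PagedMemory:
    def __init__(self, num_frames):
        self.free_frames = list(range(num_frames))
        self.page_table = OrderedDict()   # virtual page -> frame, in load order
        self.page_faults = 0

    def translate(self, virtual_address):
        page, offset = divmod(virtual_address, PAGE_SIZE)
        if page not in self.page_table:   # page fault (trap)
            self.page_faults += 1
            if not self.free_frames:      # evict the oldest page (FIFO)
                _, frame = self.page_table.popitem(last=False)
                self.free_frames.append(frame)
            # a real OS would now read the page's contents in from disk
            self.page_table[page] = self.free_frames.pop()
        return self.page_table[page] * PAGE_SIZE + offset

mem = PagedMemory(num_frames=2)
mem.translate(0)          # page 0: fault, loaded
mem.translate(5000)       # page 1: fault, loaded
mem.translate(100)        # page 0: hit
mem.translate(9000)       # page 2: fault, evicts page 0
print(mem.page_faults)    # → 3
```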

The paging fetch strategy commonly used today is called demand paging, since a page is fetched only on request, i.e. when the data is actually needed. With so-called prepaging, on the other hand, pages that have not yet been requested can also be fetched into main memory, for example if the locality of programs can be assessed well (see locality property). Combinations of demand paging and prepaging are also used in practice: when a requested page is fetched, neighboring or other pages can be loaded into main memory according to a specific algorithm.


Segmentation

Another memory management mechanism is the division of main memory into segments with completely independent address spaces. A segment forms a logical unit: it could, for example, contain a procedure, an array, a stack, or a set of variables, but usually not a mixture of different types. Segments are usually larger than pages, can be (and usually are) of different sizes, and the size of a segment can change during execution.

The main advantage of segmentation is that segment boundaries match natural program and data boundaries. Program segments can be recompiled and exchanged independently of other segments. Data segments whose length changes at runtime (stacks, queues) can use the available storage space efficiently through segmentation. On the other hand, managing variable-sized memory segments requires complex algorithms for main memory allocation in order to avoid strong fragmentation, i.e. a splintering of the storage space into unused areas (fragments) between used areas. Another problem with segmentation is high redundancy in large segments: often only a fraction of the stored segment is actually required by the process.

A further benefit of segmentation is that program sharing is supported fairly well: a program only needs to be written reentrant and loaded once as a segment. Several users can then access it, each with their own data areas.

Paged segments

Combining segmentation and paging unites the advantages of both methods; the combination of segment and page addressing is also called segment-page addressing. Each segment is in turn divided into pages of equal size, and an address consists of the segment number, the page number, and the relative byte address within the page.
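The three-part address form can be sketched as follows (the bit widths of the fields are illustrative assumptions, not those of any particular architecture):

```python
# Sketch of segment-page addressing: a virtual address is interpreted as a
# segment number, a page number within the segment, and a byte offset
# within the page. Field widths here are illustrative.

SEG_BITS, PAGE_BITS, OFFSET_BITS = 4, 8, 12   # 24-bit address, 4 KiB pages

def decode(address):
    offset = address & ((1 << OFFSET_BITS) - 1)
    page = (address >> OFFSET_BITS) & ((1 << PAGE_BITS) - 1)
    segment = address >> (OFFSET_BITS + PAGE_BITS)
    return segment, page, offset

print(decode(0x2_03_004))   # → (2, 3, 4): segment 2, page 3, byte 4
```

Translation then proceeds in two steps: the segment number selects a per-segment page table, and the page number is looked up there as in ordinary paging.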

Literature


  • Albert Achilles: Operating Systems. A compact introduction to Linux. Springer: Berlin, Heidelberg, 2006.
  • Uwe Baumgarten, Hans-Jürgen Siegert: Operating systems. An introduction. 6th, revised, updated and expanded edition, Oldenbourg Verlag: Munich, Vienna, 2007.
  • Peter Mandl: Basic course operating systems. Architectures, resource management, synchronization, process communication, virtualization. 4th edition, Springer Vieweg: Wiesbaden, 2014.
  • Andrew S. Tanenbaum: Modern Operating Systems. 3rd, updated edition, Pearson Studium: Munich et al., 2009, ISBN 978-3-8273-7342-7.
