Memory management unit

The term memory management unit (MMU; German: Speicherverwaltungseinheit) denotes a hardware component of a computer that manages access to main memory.

Tasks

It converts the virtual addresses of each individual process into physical addresses of main memory. It thereby enables the separation of process memory from main memory, which allows the following concepts:

  • Swapping out memory that is currently not needed
  • Delayed provision of requested but not yet used memory
  • Isolation of processes from one another and from the operating system
  • Sharing of individual pages between processes ( shared memory )
  • Non-sharing of pages between threads of a process ( thread-local storage )
  • Mapping files into memory ( memory-mapped files ); see the sketch after this list
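
A minimal sketch of the last point, assuming a POSIX system (mmap() and the related calls are standard POSIX; the file name "example.txt" is only a placeholder). The kernel merely sets up page-table entries for the mapped range, and the file contents are then loaded page by page through page faults on first access:

  /* Map a file into the virtual address space (POSIX mmap sketch). */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("example.txt", O_RDONLY);
      if (fd < 0) { perror("open"); return 1; }

      struct stat st;
      if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

      /* No file data is copied here; the kernel only creates the mapping. */
      char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
      if (data == MAP_FAILED) { perror("mmap"); return 1; }

      /* The first access to each page raises a page fault, whereupon the
         kernel reads the corresponding part of the file into memory.     */
      if (st.st_size > 0)
          printf("first byte: %c\n", data[0]);

      munmap(data, st.st_size);
      close(fd);
      return 0;
  }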

The MMU also handles memory protection tasks. Individual memory areas can be blocked for the execution of code or for further writing. A distinction is made here between the isolation of

  • Programs from one another ("horizontal separation"): programs cannot access the memory of other programs (for example in the event of an error).
  • Programs from the operating system ("vertical hierarchy"): the functioning of the operating system must not be endangered by (faulty) application programs. This makes safe operation under multitasking much easier, since the hardware prevents an error in one process from leading to direct access to data of another process or of the operating system. In addition, the MMU can present each process with an initially unfragmented, exclusive memory space. The sketch after this list shows how such protection can be requested from user space.
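
A minimal sketch, assuming a POSIX-like system (Linux or BSD): mprotect() changes the protection that the MMU enforces for a page. The page is first written while writable and then switched to read-only; any later write would be blocked by the MMU and delivered to the process as SIGSEGV:

  /* Sketch: write-protect a page via mprotect() (POSIX). */
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void) {
      long page = sysconf(_SC_PAGESIZE);
      char *p = mmap(NULL, (size_t)page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED) { perror("mmap"); return 1; }

      strcpy(p, "hello");                 /* allowed: page is writable    */

      if (mprotect(p, (size_t)page, PROT_READ) < 0) { perror("mprotect"); return 1; }
      printf("still readable: %s\n", p);

      /* p[0] = 'H';   <- would now fault: the MMU refuses the write and
                          the kernel sends SIGSEGV to the process.        */

      munmap(p, (size_t)page);
      return 0;
  }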

Use

Block diagram of a Skylake CPU

MMUs were originally designed as external, optional companion chips for microprocessors. A typical example is the 16-bit Motorola MC68010 CPU, whose MMU was housed in the external MC68451 chip. Other manufacturers such as Zilog and Intel integrated the MMU directly into the processor (Z8001, Intel 80286). With the advent of caches in CPUs, the MMU was moved into the CPU as well.

This is strictly necessary because the MMU must sit between the CPU core and the cache(s): the caches work with physical addresses so that they do not have to be flushed on every thread or task switch. Multi-core systems with a shared cache require the MMU to sit before this shared cache.

              +---MMU-Instruction----L1I-Cache---+
 CPU core-----+                                  +---L2-Cache-------L3-Cache---Main memory
              +---MMU-Data-----------L1D-Cache---+

              +---MMU-Instruction----L1I-Cache---+
 CPU core 1---+                                  +---L2-Cache---+
              +---MMU-Data-----------L1D-Cache---+              |    shared
                                                                +---L3-Cache---Main memory
              +---MMU-Instruction----L1I-Cache---+              |
 CPU core 2---+                                  +---L2-Cache---+
              +---MMU-Data-----------L1D-Cache---+

In computers with a Harvard architecture there are two MMUs per core: one for the instruction memory and one for the data memory of each core. MMUs used to be a "luxury feature"; nowadays they are standard even in CPUs in the price range of around US$1 (e.g. the BCM2835).

Working principle

Schematic of the translation of a virtual address into a physical address

Every read or write access requested by an instruction is first checked for validity and, if valid, translated into a physical address by the memory management unit. Self-reloading MMUs have a special cache, the Translation Lookaside Buffer (TLB), which holds the most recent address translations and thereby reduces repeated accesses to the translation table. In addition, the MMU contains special fast registers (for example for base addresses and offsets) so that the address calculation can be carried out as efficiently as possible. The possible kinds of address translation are distinguished by the type of page table used.
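
The following toy model illustrates this sequence of steps in C; it does not describe any real hardware. It assumes 32-bit virtual addresses, 4 KiB pages, a single-level page table and a tiny direct-mapped TLB, and all names and sizes are chosen only for illustration:

  /* Toy model of the MMU's validity check and address translation. */
  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SHIFT 12u                       /* 4 KiB pages              */
  #define PAGE_SIZE  (1u << PAGE_SHIFT)
  #define NUM_PAGES  (1u << (32 - PAGE_SHIFT))
  #define TLB_SIZE   16u

  typedef struct { uint32_t frame; int valid, writable; } pte_t;
  typedef struct { uint32_t page, frame; int valid, writable; } tlb_entry_t;

  static pte_t       page_table[NUM_PAGES];    /* filled in by the "OS"    */
  static tlb_entry_t tlb[TLB_SIZE];

  /* Check the access and translate it; returns 0 and sets *paddr on
     success, -1 on a page fault or a protection violation.             */
  static int translate(uint32_t vaddr, int is_write, uint32_t *paddr) {
      uint32_t page   = vaddr >> PAGE_SHIFT;   /* virtual page number      */
      uint32_t offset = vaddr & (PAGE_SIZE - 1);

      tlb_entry_t *t = &tlb[page % TLB_SIZE];  /* 1. look in the TLB       */
      if (t->valid && t->page == page) {
          if (is_write && !t->writable) return -1;
          *paddr = (t->frame << PAGE_SHIFT) | offset;
          return 0;
      }
      pte_t *pte = &page_table[page];          /* 2. walk the page table   */
      if (!pte->valid)                return -1;  /* page fault            */
      if (is_write && !pte->writable) return -1;  /* protection fault      */

      t->page = page; t->frame = pte->frame;      /* refill the TLB entry  */
      t->writable = pte->writable; t->valid = 1;
      *paddr = (pte->frame << PAGE_SHIFT) | offset;
      return 0;
  }

  int main(void) {
      /* The "operating system" maps virtual page 5 to physical frame 42. */
      page_table[5] = (pte_t){ .frame = 42, .valid = 1, .writable = 0 };

      uint32_t paddr;
      if (translate(5 * PAGE_SIZE + 0x123, 0, &paddr) == 0)
          printf("virtual 0x%x -> physical 0x%x\n", 5 * PAGE_SIZE + 0x123, paddr);
      if (translate(6 * PAGE_SIZE, 0, &paddr) != 0)
          printf("page fault: virtual page 6 is not mapped\n");
      return 0;
  }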

Originally there were two methods of address translation: by segments (segmented MMU) and by pages (paged MMU). With segment-based addressing, logical memory areas of variable size are mapped to physical memory areas of the same size. Since this method does not fit well with the virtual memory management of modern operating systems, it is hardly used any more. Page-based address translation normally uses fixed block sizes and is the common method today; with fixed block sizes, the mechanism for translating logical addresses into physical addresses is therefore also known as paging.

For processes with a very large address space and a fixed block size, a very large number of table entries would be required in the MMU. For this reason, some operating systems, provided a suitable MMU is available, can map parts of the address space with page entries that use much larger block sizes. A physical address does not have to be assigned to a logical address at all times; if such an address is accessed, a page fault (page miss) occurs, whereupon the operating system can load the data from an external storage medium. This process is transparent to the application; one speaks here of "memory virtualization".
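
A rough arithmetic illustration of this point, under the assumption of a 48-bit virtual address space (the numbers are plain arithmetic, not data about any particular CPU): covering the whole space with 4 KiB pages needs 512 times as many table entries as covering it with 2 MiB pages.

  /* Number of page-table entries needed to cover a 48-bit address space. */
  #include <stdio.h>

  int main(void) {
      unsigned long long space = 1ULL << 48;         /* 256 TiB            */
      unsigned long long small = 4ULL << 10;         /* 4 KiB pages        */
      unsigned long long large = 2ULL << 20;         /* 2 MiB pages        */

      printf("4 KiB pages: %llu entries\n", space / small);  /* ~6.9e10   */
      printf("2 MiB pages: %llu entries\n", space / large);  /* ~1.3e8    */
      return 0;
  }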

Literature

  • Andrew S. Tanenbaum: Modern Operating Systems. 2nd, revised edition. Pearson Studium, 2003, ISBN 3-8273-7019-1.
  • Eduard Glatz: Operating Systems - Basics, Concepts, System Programming. 2nd, updated and revised edition. dpunkt Verlag, 2010, ISBN 978-3-89864-678-9.
