Shared memory

from Wikipedia, the free encyclopedia

In computing, shared memory refers to different technologies depending on the context:

Shared memory in interprocess communication (IPC)

Here, two or more processes share a certain region of main memory (RAM). For all processes involved, this shared memory region lies in their address space and can be read and written with ordinary memory access operations. This is usually implemented via the paging mechanism: the page table entries of the participating processes refer to the same physical page frames, so the same memory pages back the region in each process. Most modern operating systems offer mechanisms for sharing memory.
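As a minimal sketch of such an IPC mechanism, Python's standard library exposes named shared memory segments (multiprocessing.shared_memory, Python 3.8+); the segment name and sizes below are arbitrary choices for illustration:

```python
# Two processes attach to the same OS-backed shared memory segment and
# communicate through it with ordinary memory reads and writes.
from multiprocessing import Process, shared_memory

def writer(name):
    # Attach to the existing segment by name and write into it.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = 42          # ordinary memory write, visible to all attachers
    shm.close()

# Create a 16-byte shared segment; the OS maps the same pages into
# the address space of every process that attaches to it.
shm = shared_memory.SharedMemory(create=True, size=16)
p = Process(target=writer, args=(shm.name,))
p.start()
p.join()
value = shm.buf[0]           # reads the value written by the child process
shm.close()
shm.unlink()                 # remove the segment once no longer needed
print(value)                 # 42
```

On POSIX systems the same idea underlies `shm_open`/`mmap`; the Python API is merely a convenient wrapper around such OS facilities.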

Shared memory in multiprocessor systems


In MIMD architectures, a distinction is made between tightly coupled and loosely coupled systems; multiprocessor systems belong to the class of tightly coupled systems. In tightly coupled multiprocessor systems, the various processors share a common memory (shared memory). Compared to loosely coupled MIMD architectures, this has the following advantages:

  • the processors all have the same view of the data and can therefore communicate with one another easily
  • the shared memory is accessed very quickly

For these reasons, a tightly coupled MIMD system is usually easier to program than a loosely coupled one. However, the shared memory can quickly become a bottleneck when there are many processors, since (with a shared memory bus) only one processor can access the memory at a time. To counteract this, caches are usually used, i.e. each processor keeps copies of values it has read in its own private memory and only needs to update them when it or another processor has changed them. To keep these caches coherent as efficiently as possible, techniques such as bus snooping and write-through caches are used.
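The ease of programming claimed above can be illustrated with threads, which are the software face of a tightly coupled system: all threads see the same memory, so they communicate simply by reading and writing shared variables. This is a minimal sketch (the counter and thread counts are arbitrary); a lock serializes the updates, much as a shared bus serializes memory accesses:

```python
# Threads communicate through shared memory: a plain global variable.
import threading

counter = 0                      # lives in memory shared by all threads
lock = threading.Lock()

def worker():
    global counter
    for _ in range(10_000):
        with lock:               # avoid lost updates on the shared location
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 40000
```

Without the lock, concurrent read-modify-write sequences could interleave and lose updates; the same hazard motivates the cache coherence machinery (bus snooping, write-through) in hardware.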

Connection-oriented multiprocessor systems

Even with the above-mentioned techniques, the bus-oriented multiprocessor systems described cannot be scaled well (i.e. extended with further processors), since each added processor increases the number of accesses to the bus, whose capacity is eventually exhausted. For this reason, the concept of connection-oriented multiprocessor systems was developed: a processor's memory access does not block the entire memory, but only part of it. This is achieved with technologies such as crossbar switches or omega networks. However, these technologies are expensive, which is why in practice loosely coupled MIMD architectures such as computer clusters are used instead of (tightly coupled) connection-oriented multiprocessor systems to increase computing power.
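To make the omega network concrete, here is a small sketch of its standard destination-tag routing for 8 inputs (3 stages of 2x2 switches); the function name and return format are illustrative choices, not from the source. Each stage performs a perfect shuffle (left rotation of the line address), then the switch exits on its upper or lower port according to one bit of the destination address, most significant bit first:

```python
# Destination-tag routing through an omega network with n inputs (n a power of 2).
def omega_route(src, dst, n=8):
    stages = n.bit_length() - 1                  # log2(n) switching stages
    path = []
    addr = src
    for i in range(stages):
        # Perfect shuffle: rotate the address bits left by one position.
        addr = ((addr << 1) | (addr >> (stages - 1))) & (n - 1)
        # Routing bit: bit i of the destination, most significant first.
        bit = (dst >> (stages - 1 - i)) & 1
        addr = (addr & ~1) | bit                 # exit the 2x2 switch on port `bit`
        path.append(addr)
    return path                                  # line reached after each stage

print(omega_route(0b010, 0b110))                 # [5, 3, 6]
```

After the last stage the address consists entirely of destination bits, so every source reaches its destination; the cost is that distinct source-destination pairs may contend for the same switch, which is why crossbars (which are contention-free but need n x n switching points) are even more expensive.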

Shared memory technology in graphics cards

Some graphics card manufacturers offer graphics cards with "shared memory technology". This is not the IPC mechanism described above, but an approach in which the graphics hardware, typically an integrated graphics processor, shares the computer's main memory. On the one hand, this can slow down both the graphics hardware and the CPU, because the memory bus can become a bottleneck. On the other hand, such graphics hardware can usually be sold more cheaply because it requires no memory of its own. The technology is mainly used in notebooks, where it has further advantages: saving on additional graphics memory chips improves energy efficiency and, as a rule, gives notebooks a longer battery life. In addition, almost all shared memory implementations, including Intel's GMA models, use the main memory flexibly: 256 MB can be addressed, but normally only a fraction (e.g. 16 MB) is actually used. AMD calls its shared memory approach UMA and emphasizes techniques that mitigate the memory throughput problem.

Shared memory technology is increasingly being used in business notebooks, ultra-portable notebooks (subnotebooks) and inexpensive notebooks. Current systems address the main memory via dual-channel memory controllers, which increases the bandwidth. This is intended to dampen the bottleneck problem so that both the CPU and the graphics processor can access the memory quickly.

Further developments

The terms TurboCache (Nvidia) and HyperMemory (ATI) are marketing terms for graphics card technologies that use shared memory. They combine the shared memory concept with a small amount of the graphics card's own memory (kept small for cost reasons), so that, in addition to a large shared memory, fast local memory is also available.