Cache algorithm

From Wikipedia, the free encyclopedia

A cache algorithm is an algorithm for controlling a cache, intended to optimize memory accesses between a CPU and main memory and to prevent inconsistency problems. Caches in the broader sense are also used in software, where the same cache algorithms apply accordingly.

A distinction is made between the cache write policy (write rule) and the cache replacement policy (replacement rule). However, the term cache algorithm usually refers only to the replacement policy.

When considering these algorithms, a distinction is made between a cache hit (the requested data is in the cache) and a cache miss (the requested data is not in the cache). Depending on whether the access is a read or a write, these situations are called read hit/write hit and read miss/write miss.
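The four cases can be sketched with a small helper; this is purely illustrative (the cache is modeled as a set of block addresses, and the function name `classify` is invented for this example):

```python
# Illustrative sketch: classify an access into one of the four cases
# (read/write hit/miss). The cache is modeled simply as a set of
# block addresses that are currently present.

def classify(cache, address, is_write):
    hit = address in cache
    kind = "write" if is_write else "read"
    return f"{kind} {'hit' if hit else 'miss'}"

cache = {0x1000, 0x2000}
print(classify(cache, 0x1000, is_write=False))  # read hit
print(classify(cache, 0x3000, is_write=True))   # write miss
```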

Cache write policy

The following methods are usually used in computer architectures with a single processor; however, inconsistencies can still occur here during I/O operations. The cache block (also called the cache line) is the smallest administrative unit within the cache.

[Figure: various cache placement policies (not described here)]

Write-through

With write-through, the data in the cache and in the memory behind it (hereinafter, main memory) are kept identical. This is achieved by writing the data both to the cache and to main memory in the event of a write hit. In the event of a write miss, the write-miss policy determines whether the data is loaded into the cache in addition to main memory.
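The write-through behavior can be sketched as follows. This is an illustrative Python model, not a real hardware interface; the class and method names are invented, and on a write miss this version writes around the cache (one possible write-miss policy, described further below):

```python
# Illustrative write-through sketch. Main memory is written on every
# write, so cache and memory never disagree. Real caches operate on
# fixed-size blocks in hardware; this model uses single addresses.

class WriteThroughCache:
    def __init__(self):
        self.lines = {}    # address -> data currently cached
        self.memory = {}   # backing main memory

    def write(self, addr, data):
        if addr in self.lines:        # write hit: update the cache too
            self.lines[addr] = data
        self.memory[addr] = data      # main memory is always written

    def read(self, addr):
        if addr not in self.lines:    # read miss: load from main memory
            self.lines[addr] = self.memory.get(addr)
        return self.lines[addr]
```

Because every write goes to main memory, no explicit synchronization step is ever needed; the price is that each write pays the main-memory latency.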

Write-back

With write-back, main memory is not written immediately, but only when the corresponding cache block has to be replaced. This happens on a read miss, or on a write miss with write allocate (see write-miss policy).

To record which data match main memory and which do not, each cache line carries a dirty bit. If the dirty bit is set, the data in the cache has not yet been written to main memory, i.e. main memory and cache are inconsistent.

Synchronization with main memory is done by individually writing back cache lines marked with the dirty bit. Alternatively, this is done with a flush, in which the entire cache is written to main memory.
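The write-back mechanism with a dirty bit and flush can be sketched like this (again an illustrative model with invented names; eviction here simply follows insertion order, not a real replacement policy):

```python
# Illustrative write-back sketch. A write only sets the dirty bit;
# main memory is updated when a dirty line is evicted or on flush().

class WriteBackCache:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.lines = {}    # address -> (data, dirty)
        self.memory = {}   # backing main memory

    def _evict_if_full(self):
        if len(self.lines) >= self.capacity:
            addr, (data, dirty) = next(iter(self.lines.items()))
            if dirty:                      # write back only dirty lines
                self.memory[addr] = data
            del self.lines[addr]

    def write(self, addr, data):
        if addr not in self.lines:
            self._evict_if_full()          # write allocate on a miss
        self.lines[addr] = (data, True)    # set the dirty bit

    def flush(self):
        for addr, (data, dirty) in self.lines.items():
            if dirty:
                self.memory[addr] = data   # synchronize with memory
        self.lines = {a: (d, False) for a, (d, _) in self.lines.items()}
```

Repeated writes to the same line cost nothing in main-memory traffic until the line is evicted or flushed, which is why write-back is faster than write-through.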

Write-back is technically more demanding, but also faster, than write-through.

Write-miss policy

When a write miss occurs, the write-miss policy determines how the write is handled.

Write allocate

With write allocate, the data to be written is placed directly in the cache. It is also written to main memory; depending on the write policy (write-back or write-through), this happens immediately or only when the cache line is evicted.

No-write allocate (write around)

The data is written only to main memory, without allocating a cache line.
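The two write-miss policies can be contrasted in a small sketch (illustrative; a write-through backing store is assumed so that main memory is always written, and the function name is invented for this example):

```python
# Illustrative contrast of the two write-miss policies. With
# write_allocate=True a missing block is brought into the cache;
# with write_allocate=False (write around) the cache stays untouched.
# Write-through is assumed, so main memory is written either way.

def write(cache, memory, addr, data, write_allocate):
    if addr in cache or write_allocate:
        cache[addr] = data    # write allocate: block enters the cache
    memory[addr] = data       # main memory is always written

cache, memory = {}, {}
write(cache, memory, 0x10, "x", write_allocate=False)  # write around
write(cache, memory, 0x20, "y", write_allocate=True)   # write allocate
```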

Cache replacement policy

During both reads and writes, it frequently happens that a cache line has to be replaced because the cache is full. In this case, the cache replacement policy, also called the cache algorithm, must decide which cache blocks are evicted and which are kept.

Multiprocessor systems

In multiprocessor systems, each processor usually has its own cache and uses it to access a central, shared memory. To prevent problems caused by inconsistencies between the caches and main memory, a cache algorithm must then ensure cache coherence.