Efficiency (computer science)
The efficiency of an algorithm is its economy in the resources it uses to solve a given problem, above all computing time and storage space. Efficient algorithms solve the corresponding problem quickly, but they are usually harder to understand because they often rest on sophisticated ideas.
Run-time and storage efficiency are not properties of the algorithm alone; they are also influenced by the specific implementation in a given programming language, the underlying hardware, and the input data. Machine-independent factors are therefore used to assess efficiency:
- Run time, measured by the number of elementary calculation steps required
- Memory requirement, measured by the variables used
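As a rough illustration of the memory side, the following sketch compares two representations of the same data in Python. The specific numbers depend on interpreter and platform; the point is only that the representation, not just the data, determines the memory requirement.

```python
import sys
import array

# The same 1000 integers, stored two ways:
numbers = list(range(1000))          # a list of separate Python int objects
packed = array.array('i', numbers)   # one contiguous buffer of machine ints

# Total footprint of the list includes every element object it references.
list_bytes = sys.getsizeof(numbers) + sum(sys.getsizeof(n) for n in numbers)
array_bytes = sys.getsizeof(packed)

# The compact representation uses considerably less memory.
assert array_bytes < list_bytes
```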
Whether an algorithm counts as efficient depends above all on the perspective from which it is analyzed and on what is known about the complexity of the problem it solves.
In terms of efficiency, a distinction is made between:
- Worst case - the worst possible behavior
- Average case - the average behavior
- Best case - the best possible behavior
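The three cases can be made concrete with linear search, counting comparisons as the cost measure (a minimal sketch; the comparison counter is added purely for illustration):

```python
def linear_search(items, target):
    """Return (index, comparisons) for target, or (-1, comparisons) if absent."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = list(range(100))

# Best case: the target is the first element -> 1 comparison.
assert linear_search(data, 0) == (0, 1)
# Worst case: the target is last (or absent) -> n comparisons.
assert linear_search(data, 99) == (99, 100)
# Average case over all positions: about n/2 comparisons.
avg = sum(linear_search(data, t)[1] for t in data) / len(data)
assert avg == 50.5
```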
Since the number of cycles required for elementary operations varies between computer architectures, but usually only by a constant factor, and since run time and space requirements for small inputs are usually insignificant, Landau (Big-O) notation is used. It expresses run-time behavior and space requirements for large inputs while neglecting constant prefactors. This, however, says nothing about the suitability of an algorithm in practice: the notation describes very large inputs, whereas the inputs that actually occur are often comparatively small.
The term efficient algorithm is used rather vaguely in theoretical computer science. It usually means an algorithm whose run time is polynomial in the size of the input, but such algorithms can be practically useless if, for example, the exponent of the associated polynomial is too large. There are even linear-time algorithms that are practically useless because their constant prefactor is too large. Thus, although an algorithm A may in theory be far more efficient than an algorithm B, in practice only B is used, because the inputs to be processed are not large enough for A to realize its advantages.
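This crossover effect can be sketched with hypothetical cost models (the constants below are illustrative, not measured): an asymptotically better algorithm A with a large prefactor loses to a quadratic algorithm B until the input is large enough.

```python
# Hypothetical cost models for two algorithms solving the same problem.
def cost_a(n):
    return 1000 * n      # O(n), but with a large constant prefactor

def cost_b(n):
    return n * n         # O(n^2), prefactor 1

# For small inputs the asymptotically "worse" algorithm B is cheaper:
assert cost_b(100) < cost_a(100)     # 10,000 steps vs. 100,000 steps

# Only beyond the crossover point does A pay off:
crossover = next(n for n in range(1, 10_000) if cost_a(n) < cost_b(n))
assert crossover == 1001             # 1000*n < n*n first holds at n = 1001
```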
The amount of memory used by a data structure must always be assessed in relation to how frequently it occurs during a program's run time. The effort of optimization usually pays off only if many objects of the type in question are created in main memory or stored permanently. In that case, reducing memory consumption can both lower costs and increase system throughput.
A reduction in the memory consumption of objects can worsen the execution time of certain accesses to the underlying data structure, namely when partial information is packed into basic types so that it can be stored compactly. The efficiency of the encoding operations introduced then plays a role. The relative frequency of particular accesses in a typical program run must be taken into account to be optimal overall. Operations on data fall into two categories: reading and writing. These correspond to decoding and encoding of the partial information, respectively, and must therefore be considered separately. The model of the abstract data type adds a further aspect: the internal representation can, where possible, be processed directly, without conversion into its components.
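A minimal sketch of packing partial information into a basic type: two 16-bit fields are combined into one integer. Every read then requires decoding (shifts and masks) and every write requires encoding, which is exactly the time cost paid for the compact storage.

```python
MASK16 = 0xFFFF  # 16 bits per field

def encode(x: int, y: int) -> int:
    """Pack two 16-bit values into one 32-bit integer (the write path)."""
    assert 0 <= x <= MASK16 and 0 <= y <= MASK16
    return (x << 16) | y

def decode(packed: int) -> tuple:
    """Unpack the two fields again (the read path)."""
    return (packed >> 16) & MASK16, packed & MASK16

p = encode(500, 1200)
assert decode(p) == (500, 1200)
```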
The aim is a balanced solution that does not pay for gains on one side with efficiency losses on the other. Effort and complexity must be justified by the benefit achieved; when in doubt, a clear implementation is preferable to a tricky one. This means considering not only the immediate resource needs of an individual algorithm at run time, but also the efficiency of the system as a whole, depending on the requirements.
Another goal is to avoid redundancy when storing object states. It should also be noted that redundancy reduces maintainability. However, deliberately applied redundancy can significantly speed up access to data in certain applications.
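A small sketch of deliberate redundancy (the `Cart` class and its fields are illustrative): a running total that could always be recomputed from the items is stored alongside them, making reads constant-time at the price of keeping the copy consistent on every write.

```python
class Cart:
    """Illustrative example: a cached aggregate as deliberate redundancy."""

    def __init__(self):
        self.items = []
        self.total = 0          # redundant: derivable from self.items

    def add(self, price: int):
        self.items.append(price)
        self.total += price     # every write must maintain the redundancy

cart = Cart()
cart.add(3)
cart.add(7)
# The redundant copy stays consistent with the authoritative data.
assert cart.total == sum(cart.items) == 10
```

If a write path forgets to update `total`, the redundancy silently becomes wrong; this maintenance burden is the reduced maintainability the text refers to.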