Computing power

Computing power (also called data processing power or performance) is a measure of the performance of calculating machines, data processing systems (DP systems for short) and information technology systems (IT systems for short). As a rule, the focus is on the processing speed (the number of calculations per unit of time), colloquially simply called speed, of the machine components used (e.g. the main processor and the graphics processor) and on the speed of individual applications (e.g. simulation calculations or the processing of large databases). More rarely, the computing power of entire IT systems is considered, such as mainframes or networks of them in so-called supercomputers.

In addition to performance in this sense, the physical power of an IT system, i.e. the amount of work done or energy expended per unit of time, can also be considered.

Further delimitation

In the sense of “how fast”, performance describes how long the IT system needs to complete a processing job. Such a job can be the online response to a mouse click or the execution of a large background job. “How fast” can also mean how many jobs the IT system can process per unit of time; this is the job throughput.

In addition to this description, the term is also used for other properties such as functionality, energy efficiency or reliability.

Component and system performance

Component performance

Component performance describes the performance of an individual component of a data processing system, for example the processor, the main memory or the network connection. The user community of a component is the equipment that surrounds it: the environment of a processor generates machine instructions, the environment of a storage system generates memory accesses, and the nodes of a computer network generate data transport jobs. The performance of components is described in terms of performance parameters such as the distribution of the job completion time, job throughput rates or the average response time. Where necessary, such quantities are differentiated by job type, for example write and read jobs in storage systems. Component performance is then the set of all these performance quantities. To judge whether a component performs satisfactorily, an evaluation must be carried out that compares the determined performance values with the values required by the user community (i.e. the machine environment of the component) for the selected performance parameters.

System performance

System performance refers to the performance of a complete data processing system, which may consist of many components. Such components can be individual software components (application software and operating system), computers (for example file servers), computer networks (for example computer clusters) or special devices (for example switches). System performance also depends on how efficiently algorithms are implemented and how hardware and software are configured.

The user community can consist of human users (“normal” users, administrators, ...) as well as of other IT systems. For example, the Wikipedia website is used both by human users and by other IT systems such as search engines. The performance of such a system is described, just as with component performance, by performance parameters such as the response time, its distribution, the average response time, throughput rates and the like. Expediently, these quantities are broken down by the various job types that occur in the job stream generated by the entire user community. System performance is then the set of all measured and predicted performance quantities.

Attempts have been made repeatedly to calculate system performance from the known component performances. Experience shows that the interrelationships are generally too complex for this to work. Reliable system performance values can only be obtained from measurements or, with restrictions, from predictions.

Parameters and performance criteria

There are a large number of parameters that are used to evaluate performance. In many cases these metrics say little about the actual performance of the system.

  • Instructions per cycle (IPC)
The average number of instructions executed divided by the number of clock cycles required to execute the program. IPC can also be expressed as the reciprocal of CPI (cycles per instruction):
IPC = number of instructions / number of clock cycles
A high IPC value means that the architecture works efficiently. However, the value alone says nothing about the actual speed (effectiveness); a worked example follows after this list.
  • Instructions per second
A component performance parameter that was widely used in the past to characterize the performance of a processor is the (average) number of machine instructions executed per unit of time, often stated in MIPS (million instructions per second). However, how many machine instructions must be executed to carry out a data processing operation desired by the user depends on the computer architecture (in particular on the machine instruction set).
  • Floating point operations per second (FLOPS)
The performance quantity floating point operations per second is used in particular for supercomputers, since these operations play an important role in high-performance computing. However, a comparison is only possible if the benchmark method with which the value was determined is known.
  • Data transfer rate
The data transfer rate indicates the total amount of data (user data and control data) that can be transferred per unit of time.
  • Data throughput
The data throughput indicates the amount of user data that can be transmitted per unit of time.
  • Response time
The response time is the time between sending a request and receiving the associated response. A related parameter is the ratio of processing time to response time.
  • Frames per second
Frames per second is used as a performance indicator for graphics cards: the number of images output per second.
  • Clock frequency
The clock frequency (processor speed) is neither a system performance quantity nor a component performance quantity, but concerns a layer even further inside. Factors such as the processor architecture, the number of processor cores, the speed of the internal buses and the memory sizes (cache and main memory) also have a significant impact on performance. A comparison based on the clock rate alone is therefore misleading.
  • Latency
Latency is another expression for the response time (job processing time after an interrupt), with the secondary condition that a certain upper value is not exceeded: it is the guaranteed maximum response time. It is one of many variants of system performance quantities. Expressed in terms of the ISO standard, the latency is the time class limit (ISO: time class limit) of time class no. 1 (ISO: time class 1) of the timeliness requirement (ISO: timeliness function) of the task type (ISO: task type) “response to an interrupt”. This timeliness requirement has only a single time class.
  • Access time
In the case of a storage medium, the access time is the time between the arrival of a write or read command and the start of the corresponding operation.
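
The following minimal Python sketch shows how the throughput-style parameters above are computed from raw counts; all input values (instruction count, cycle count, floating point operations, runtime) are invented for illustration.

# Worked example: IPC, CPI, MIPS and FLOPS from (invented) raw counts.
instructions = 8_000_000_000        # machine instructions executed
cycles = 5_000_000_000              # clock cycles needed for them
float_ops = 2_000_000_000           # floating point operations among them
runtime_s = 2.0                     # elapsed time in seconds

ipc = instructions / cycles         # instructions per cycle
cpi = 1 / ipc                       # cycles per instruction (reciprocal)
mips = instructions / runtime_s / 1e6   # million instructions per second
flops = float_ops / runtime_s           # floating point operations per second
clock_hz = cycles / runtime_s           # implied clock frequency

print(f"IPC = {ipc:.2f}, CPI = {cpi:.2f}")
print(f"{mips:.0f} MIPS, {flops / 1e9:.1f} GFLOPS at {clock_hz / 1e9:.1f} GHz")

Note how the same IPC of 1.6 would yield twice the MIPS value at twice the clock frequency: efficiency (IPC) and speed (instruction throughput) are distinct quantities.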

Measurement versus prediction

Data processing performance is described with performance parameters. There are the following ways to determine numerical values of such quantities:

Measurement

Measurement is the experimental determination of performance values with the actually built data processing system. The job stream fed to the system can be generated by real users (real load) or by a user simulator (simulative load). The simulative load can be an individual load, usually as part of a load test, or a standardized load for a benchmark comparison.

A distinction is made between software measurement and hardware measurement.

  • Hardware measurement
Here, measuring sensors are attached directly to the object under measurement and transmit the corresponding data. This type of measurement does not affect the processes on the object computer.
  • Software measurement
A measuring program is installed on the object computer and transmits the required information via a standard interface. Since the measuring programs work independently of the hardware, only minimal knowledge of the object computer is required and the measuring programs can run on almost all computers. However, the program flow on the object computer is changed and additional resources are used, so the dynamic behavior of the object computer is distorted (a minimal sketch follows below).
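
As a sketch of the software measurement idea, the following hypothetical Python probe times a single job on the object computer itself; the job function is an invented stand-in, and the probe itself consumes resources, which is exactly the distortion described above.

import time

def measure(job, *args):
    # Software probe: runs on the object computer itself and therefore
    # consumes resources and slightly distorts the behavior it observes.
    start = time.perf_counter()
    result = job(*args)
    elapsed = time.perf_counter() - start
    print(f"{job.__name__}: response time {elapsed * 1000:.2f} ms")
    return result

def sample_job(n):
    # Invented stand-in for a real processing job.
    return sum(i * i for i in range(n))

measure(sample_job, 1_000_000)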

Prediction

Prediction is a procedure in which the numerical values of performance quantities are determined using mathematical-analytical methods or simulation methods. In contrast to the measurement methods, a real system does not have to be present for a predictive performance evaluation.

In the analytical approach, the IT system and its users are represented by a mathematical model and the performance values are determined purely by calculation. In the simulation approach, both the IT system and its users are simulated and the numerical values of the performance quantities are determined from the simulated events. With analytical methods as well as with simulation methods, the results are generally only approximate or estimated values. In contrast to values from measurements on a real IT system, they have the character of a prediction or forecast.

  • Description using graph theory

Modeling the system as a graph is particularly useful in communication technology. The components are represented as nodes; connections between the components are represented as edges. Each edge has a maximum capacity, which must not be exceeded, and a current flow. The resulting network can be assessed by determining the greatest possible flow between two components. If this is done pairwise for all nodes, slow components can be identified.
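
As a minimal sketch of this kind of analysis, the following Python code uses the third-party networkx library on an invented four-component network; the edge capacities stand for the maximum data rates of the connections.

import networkx as nx  # third-party graph library

# Invented network: components as nodes, connection capacities in Mbit/s.
G = nx.DiGraph()
G.add_edge("client", "switch", capacity=1000)
G.add_edge("switch", "server", capacity=100)
G.add_edge("switch", "storage", capacity=400)
G.add_edge("server", "storage", capacity=100)

# Greatest possible flow between two components:
flow_value, _ = nx.maximum_flow(G, "client", "storage")
print(f"maximum flow client -> storage: {flow_value} Mbit/s")

Here the maximum flow of 500 Mbit/s is far below the 1000 Mbit/s capacity of the client link; repeating the computation for all pairs of nodes identifies the saturated switch-to-storage and server-to-storage edges as the bottlenecks.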

  • Description using queueing theory

A theoretical description of the traffic is based on a queue of jobs that is processed by a service station. The jobs arrive at the queue with a mean arrival rate λ and leave the service station with a mean service rate μ. The traffic intensity is described by the quotient ρ = λ/μ. The system only works properly as long as ρ < 1; otherwise the queue overflows.
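
For the simplest case of such a queueing model (an M/M/1 queue: Poisson arrivals, exponentially distributed service times, one service station), the usual closed-form results can be evaluated directly; the rates below are invented.

# M/M/1 queue: closed-form performance values (rates are invented).
lam = 8.0   # mean arrival rate λ (jobs per second)
mu = 10.0   # mean service rate μ (jobs per second)

rho = lam / mu                # traffic intensity ρ = λ/μ
assert rho < 1, "unstable: jobs arrive faster than they are served"

n_mean = rho / (1 - rho)      # mean number of jobs in the system
t_resp = 1 / (mu - lam)       # mean response time (waiting + service)

print(f"utilization ρ = {rho:.2f}")
print(f"mean number of jobs in the system = {n_mean:.2f}")
print(f"mean response time = {t_resp * 1000:.0f} ms")

Note how sharply the response time grows as ρ approaches 1: with λ = 9.9 instead of 8.0, the mean response time would already be 10 s instead of 0.5 s.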

Different systems can be represented with this model. For a computer without pipelining and with only one processor, a negative exponential distribution is used as the service time distribution. Pipeline processors with k stages are modeled with the Erlang-k distribution, and a hyperexponential distribution is used for multiprocessor systems.
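
To illustrate these modeling choices, the following sketch draws random service times from the first two distributions; an Erlang-k variate is generated as the sum of k independent exponential variates (each stage at rate k·μ, so the overall mean stays 1/μ). The rate is invented.

import random

def exponential_service(mu):
    # Single non-pipelined processor: exponential service time.
    return random.expovariate(mu)

def erlang_service(k, mu):
    # k-stage pipeline: sum of k exponential stages at rate k*mu.
    return sum(random.expovariate(k * mu) for _ in range(k))

mu = 10.0  # invented service rate (jobs per second)
print(f"single processor: {exponential_service(mu) * 1000:.1f} ms")
print(f"5-stage pipeline: {erlang_service(5, mu) * 1000:.1f} ms")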

Another approach to comparing processor performance analytically uses command mixes and core programs. However, these methods are relatively complex and are rarely used today.

  • Command mixes
The instruction execution times of the various instruction types are weighted by the expected relative frequency of their occurrence and summed to give the mean instruction execution time (see the sketch after this list).
  • Core programs
Core programs are solutions to typical, well-delimited tasks that are written for the computer to be evaluated. However, they are not executed: the aim is to determine the execution time from the individual instruction execution times.
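
A minimal sketch of the command mix calculation; the instruction types, their execution times and their relative frequencies are all invented for illustration.

# Command mix: mean instruction execution time as a weighted sum.
# Instruction types, times (ns) and frequencies are invented examples.
mix = {
    "load/store":  (2.0, 0.35),   # (execution time in ns, relative frequency)
    "integer ALU": (1.0, 0.40),
    "branch":      (1.5, 0.15),
    "float":       (4.0, 0.10),
}

assert abs(sum(f for _, f in mix.values()) - 1.0) < 1e-9  # frequencies sum to 1

mean_ns = sum(t * f for t, f in mix.values())
print(f"mean instruction execution time: {mean_ns:.3f} ns")
print(f"implied instruction rate: {1e3 / mean_ns:.0f} MIPS")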

Evaluation of IT performance

Performance values (regardless of whether they are determined by measurement or by prediction methods) are numerical values of physical quantities. They are important and interesting, but in themselves they make no statement on the crucial question of whether the IT system fulfils the (performance) needs of its user community. These needs must therefore be defined numerically.

These required values must then be compared with the performance values delivered by the IT system. The result of this comparison is the statement as to whether the data processing system satisfies the user needs. This is the rating, and it leads to the results ultimately required by the users: “insufficient”, “sufficient”, “exceeded” etc. The scale of such a final statement could also be graded more finely.
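
As a sketch of such a rating, the measured values are compared parameter by parameter with the values required by the user community; all figures below are invented.

# Rating sketch: compare measured values with required values (all invented).
required = {"mean response time (s)": 2.0, "throughput (jobs/s)": 50.0}
measured = {"mean response time (s)": 1.4, "throughput (jobs/s)": 55.0}

for name, req in required.items():
    got = measured[name]
    # Times must stay below the requirement, throughput must stay above it.
    ok = got <= req if "time" in name else got >= req
    verdict = "sufficient" if ok else "insufficient"
    print(f"{name}: required {req}, measured {got} -> {verdict}")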

The following facts should also be pointed out:

  • The values of the performance parameters of a system under consideration are concrete numbers. They would only change if the system itself were changed (for example by replacing hardware components such as the processor or a memory unit, or software components such as the operating system version and/or application software with a different software efficiency).
  • The evaluation results, in contrast, depend on the user community referred to in the evaluation. For example, the evaluation of a given data processing system can be very good for user community A, while the same system is unsatisfactory for user community B.

Benchmark and load test

While the aim of a load test is to provide evidence of whether the expected load can be processed in the required time, the aim of a benchmark is to determine a key figure that allows different systems to be compared.

The load test can be designed so that real users generate the job stream (real load). The job stream can also be generated by a simulator that imitates the entire user community in detail (simulative load). A benchmark always uses a standardized simulative load so that the results can be compared.

To obtain precise results, measurement software must be used that logs the job stream exactly and carries out the evaluation (determination of the performance parameters) after the end of the test; a sketch follows below.
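
A sketch of such measurement software for a simulative load: a few simulated users submit jobs in parallel, every job completion is logged, and throughput and mean response time are evaluated after the end of the test. The job function and all parameters are invented.

import time
import threading

log = []                      # (start, elapsed) per completed job
log_lock = threading.Lock()

def job():
    # Invented stand-in for a job of the system under test.
    sum(i * i for i in range(200_000))

def simulated_user(n_jobs):
    for _ in range(n_jobs):
        start = time.perf_counter()
        job()
        elapsed = time.perf_counter() - start
        with log_lock:
            log.append((start, elapsed))

# Simulative load: four users submitting ten jobs each.
users = [threading.Thread(target=simulated_user, args=(10,)) for _ in range(4)]
t0 = time.perf_counter()
for u in users:
    u.start()
for u in users:
    u.join()
duration = time.perf_counter() - t0

# Evaluation after the end of the test.
throughput = len(log) / duration
mean_resp = sum(e for _, e in log) / len(log)
print(f"throughput: {throughput:.1f} jobs/s, "
      f"mean response time: {mean_resp * 1000:.1f} ms")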

Over time, an enormous number of (computer) benchmarks have been developed and described, both at the scientific level and by industry and business. Almost all of these benchmarks use different principles and performance parameters, so that measurement results are generally not comparable. Many of these benchmarks were only up to date for a short time and have since disappeared.

Performance databases

Specialist journals and magazines regularly publish rankings of the performance of computer systems or components. These are determined by means of key figures or benchmarks.

A well-known database, the TOP500, lists the 500 most powerful supercomputers in the world. The LINPACK benchmark is used for this.

Standards

The standards DIN 66273 “Measurement and evaluation of the performance of data processing systems” and ISO 14756 “Measurement and evaluation of the performance and software efficiency of data processing systems” make complete proposals for the benchmark methodology and the performance parameters. The ISO standard adopted and extended the principles of DIN 66273, so measurements carried out according to DIN 66273 are also compliant with ISO 14756. The ISO standard broadens the field of application beyond the measurement and evaluation of IT performance to the measurement of the (runtime) efficiency of system and/or application software. The DIN standard standardizes the benchmark methodology but, because benchmarks are short-lived, does not define specific benchmarks; the ISO standard, however, also contains examples of complete benchmarks.

Application Response Measurement (ARM) is a standard of the Open Group and is used to measure the performance of transactions from the user's perspective.

Variable performance

As a rule, the performance of an information technology system is constant. However, for reasons of downward compatibility or to save energy, it can be advisable to reduce it. On PCs from the 8086 to Pentium era, the turbo button ensured downward compatibility with the IBM standard. This was usually achieved by reducing the clock rate, but also by switching off the level 1 cache or reducing the clock frequency of the front-side bus.

In modern notebooks, on the other hand, technologies such as PowerNow!, Cool'n'Quiet or Intel SpeedStep reduce the performance when it is not needed in order to conserve the scarce energy reserves of the battery. This can be done by reducing the clock rate or the core voltage, or by switching off individual processors in multiprocessor systems.

Literature

  • D. Ferrari: Computer Systems Performance Evaluation, Prentice-Hall Inc., Englewood Cliffs, New Jersey 1978, ISBN 0-13-165126-9
  • D. Ferrari, G. Serazzi, A. Zeigner: Measurement and Tuning of Computer Systems, Prentice-Hall Inc., Englewood Cliffs, New Jersey 1983, ISBN 0-13-568519-2
  • R. Jain: The Art of Computer Systems Performance Analysis , John Wiley, New York 1991, ISBN 0-471-50336-3
  • G. Bolch: Performance evaluation of computing systems by means of analytical queue models , guidelines and monographs of computer science, BG Teubner, Stuttgart 1989, ISBN 3-519-02279-6
  • H. Langendörfer: Performance analysis of computing systems (measurement, modeling, simulation) , C. Hanser, Munich / Vienna 1992, ISBN 3-446-15646-1
  • AO Allen: Introduction to Computer Performance Analysis with Mathematica , AP Professional, Harcourt Brace & Company Publishers, Boston 1994, ISBN 0-12-051070-7
  • W. Dirlewanger: Measurement and evaluation of IT performance based on the DIN 66273 standard , Hüthig, Heidelberg 1994, ISBN 3-7785-2147-0
  • M. Haas, W. Zorn: Methodical performance analysis of computing systems , R. Oldenbourg, Munich / Vienna 1995, ISBN 3-486-20779-2
  • C. Jones: Applied Software Measurement, Assuring Productivity and Quality. McGraw-Hill, New York 1996, 2nd ed., ISBN 0-07-032826-9
  • W. Dirlewanger: Measurement and Rating of Computer Systems Performance and of Software Efficiency – An Introduction to the ISO/IEC 14756 Method and a Guide to its Applications, Kassel University Press, Kassel 2006, www.upress.uni-kassel.de, ISBN 3-89958-233-0 and ISBN 978-3-89958-233-8
  • John L. Hennessy , David A. Patterson : Computer Architecture: Analysis, Design, Implementation, Evaluation. Vieweg, Braunschweig 1994, ISBN 3-528-05173-6
  • Andrew S. Tanenbaum , James Goodman: Computer Architecture. 4th edition, Pearson Studium, Munich 2001, ISBN 3-8273-7016-7
  • Niklas Schlimm, Mirko Novakovic, Robert Spielmann, Tobias Knierim: Performance analysis and optimization in software development. Computer Science Spectrum April 30, 2007, PDF
  • Theo Ungerer, Uwe Brinkschulte: Microcontrollers and microprocessors. Springer, 2010, ISBN 9783540468011 , online
  • Thomas Rauber, Gudula Rünger: Parallel and distributed programming. Springer, 2000, ISBN 9783540660095 online
  • Tobias Häberlein: Technical IT. Springer, 2011, ISBN 9783834813725 online
  • Paul J. Fortier, Howard Edgar Michel: Computer systems performance evaluation and prediction. Digital Press, 2003, ISBN 9781555582609 online

References

  1. Computing power – page at ITWissen.info; as of July 21, 2012, accessed on July 21, 2012
  2. Peter Stahlknecht, Ulrich Hasenkamp: Introduction to Business Information Systems. Springer textbook series, Verlag Springer, 2005, ISBN 9783540011835, page 31 online
  3. With the new DIN standard, IT performance can be measured (Memento from January 28, 2007 in the Internet Archive) – article in Computerwoche, April 10, 1992, accessed on July 21, 2012
  4. Uwe Brinkschulte, Theo Ungerer: Microcontrollers and microprocessors. Verlag Springer, 2010, ISBN 9783642053979, page 13 online
  5. Willi Albers: Concise dictionary of economics. Vandenhoeck & Ruprecht, 1980, ISBN 9783525102572, page 100 online
  6. Dietmar Moeller: Computer structures. Springer, 2002, ISBN 9783540676386, page 231 online
  7. http://www.spec.org/spec/glossary/#benchmark
  8. http://www.top500.org