Software reliability

Software reliability is defined as the "probability that a computer program will function properly in a specified environment within a specified time". It is thus one of the objective, measurable or assessable criteria of software quality, and one of the software metrics. Metrics for software reliability are in principle based on the frequency of failures relative to the number of test cases executed; statistical analyses are carried out on the basis of extensive testing. There are, however, also techniques based on static analysis of the program or of a model of it.
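
As a simple illustration of this metric (not a formula taken from the article's sources): if f failures are observed over n executed test cases, a naive point estimate of the reliability is

  \hat{R} = 1 - f/n

More elaborate reliability growth models instead treat the failure intensity as a function of execution time rather than as a constant.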

Test cases

Relevant test cases depend on the focus and granularity of the software under test (SUT):

  • (Sub)system test: Black-box methods derive typical test cases from the requirements specification, without looking at the design or the implementation; boundary values and implausible values are of particular importance. In addition, stress tests can be carried out with test cases designed around data volumes and speed.
  • Component or integration test: The test cases determined here aim to exercise all interfaces between components. With model-based testing, test cases for all interfaces and subsystems can be derived systematically from the model.
  • Unit test: White-box methods analyze the implementation of a unit and derive test cases targeting extreme values, individual functions, and high branch or even path coverage (see the sketch below this list).
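
A minimal sketch in Python of the white-box idea above (the unit and all test names are hypothetical): boundary values, extreme values and an implausible input are chosen so that every branch of the unit is executed.

  import unittest

  def clamp(value: int, low: int, high: int) -> int:
      """Unit under test: restrict value to the interval [low, high]."""
      if low > high:
          raise ValueError("empty interval")  # implausible input
      if value < low:
          return low    # branch: below the interval
      if value > high:
          return high   # branch: above the interval
      return value      # branch: inside the interval

  class ClampTest(unittest.TestCase):
      def test_boundary_values(self):
          # Boundary values: exactly at the interval limits.
          self.assertEqual(clamp(0, 0, 10), 0)
          self.assertEqual(clamp(10, 0, 10), 10)

      def test_extreme_values(self):
          # Extreme values exercise both clipping branches.
          self.assertEqual(clamp(-999, 0, 10), 0)
          self.assertEqual(clamp(999, 0, 10), 10)

      def test_implausible_interval(self):
          # Implausible input: the acceptance criterion is a defined error.
          with self.assertRaises(ValueError):
              clamp(5, 10, 0)

  if __name__ == "__main__":
      unittest.main()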

In addition to the test case itself, special attention must be paid to its acceptance criterion. Acceptance must in any case be related back to the corresponding specification, since otherwise systematic inconsistencies between test cases and the software specification can arise.

Regression and repetition

Statistical statements for a metric require a large number of distinct test cases and the repeated execution of regression tests. When test cases are repeated, the environmental conditions should be varied systematically and independently of the test case: under exactly identical conditions, a deterministic program will always produce the identical result. With sufficient system complexity, however, this determinism is a theoretical idealization, and repetition under independently varied environments yields statistically meaningful results.
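
A minimal sketch of such repeated execution (run_regression_suite and all constants are hypothetical): each repetition derives its environment conditions from an independent random seed, and the accumulated failure frequency feeds the reliability estimate.

  import random

  CASES_PER_RUN = 500  # hypothetical size of the regression suite

  def run_regression_suite(seed: int) -> int:
      """Stand-in for the real suite: simulates executing CASES_PER_RUN
      test cases under environment conditions derived from seed, and
      returns the number of failed cases."""
      rng = random.Random(seed)
      return sum(rng.random() < 0.002 for _ in range(CASES_PER_RUN))

  def estimate_reliability(runs: int) -> float:
      failures = 0
      for _ in range(runs):
          # The environment varies independently of the test cases.
          failures += run_regression_suite(seed=random.getrandbits(32))
      return 1.0 - failures / (runs * CASES_PER_RUN)

  if __name__ == "__main__":
      print(f"Estimated reliability: {estimate_reliability(runs=100):.4f}")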

Test automation

In order to carry out this large number of tests at all in practice, the tester needs test automation at the level of the system and its items, down through each refinement to the individual software units.

Reliability tests at the system level are only meaningful if the items of the next level of static refinement (subsystems, components, units) have already been tested for reliability. To this end, developers have to provide the testers with test environments, test drivers and test cases separately for each level. In this respect, reliability is a more demanding goal than mere freedom from hazards ("safety").
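
A minimal sketch of this level ordering (all suite names hypothetical): the system-level reliability run is only attempted once the lower levels of refinement have passed.

  def run_unit_suites() -> bool:
      """Stand-in: execute all unit-level test drivers."""
      return True

  def run_component_suites() -> bool:
      """Stand-in: execute all component/integration test drivers."""
      return True

  def run_system_reliability_suite() -> bool:
      """Stand-in: execute the system-level reliability tests."""
      return True

  def run_all_levels() -> bool:
      # Reliability results at the system level are only meaningful
      # once the lower levels have been tested for reliability.
      for lower_level in (run_unit_suites, run_component_suites):
          if not lower_level():
              return False
      return run_system_reliability_suite()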

Notes on the design process

The design process should provide for the possibility of refining the requirements specification and other specifications on the "constructive side" against the acceptance criteria of existing test cases, since in practice apparent "errors" often turn out to result merely from imprecise or inconsistent requirements. For this reason, formulating test cases and acceptance criteria will regularly demand that the corresponding specification be made more precise.

Test as part of the integration

Technically, automated tests can be incorporated step by step into the software integration. In the integration scripts ("build", "makefile"), a test protocol is set up as a "target" that is derived from the invocation of the test generator, the intermediate artifacts under test ("OBJ", "lib", "OCX", "DLL", "JAR"), and the test cases.
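
A minimal sketch in Python of such a "target" (the testgen command and all paths are hypothetical): the test protocol is produced from the test generator, the intermediate artifacts under test, and the test cases.

  import subprocess
  from pathlib import Path

  ARTIFACTS = sorted(Path("build").glob("*.dll")) + sorted(Path("build").glob("*.jar"))
  CASES = Path("tests/cases")
  PROTOCOL = Path("build/test_protocol.txt")  # the "target" of this step

  def make_test_protocol() -> None:
      with PROTOCOL.open("w") as protocol:
          for artifact in ARTIFACTS:
              # "testgen" stands in for the project's test generator.
              result = subprocess.run(
                  ["testgen", "--artifact", str(artifact), "--cases", str(CASES)]
              )
              status = "PASS" if result.returncode == 0 else "FAIL"
              protocol.write(f"{artifact.name}: {status}\n")

  if __name__ == "__main__":
      make_test_protocol()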

For reasons of computing time, however, the regression tests at the system level should either be split off or parallelized; the product (system) is then technically available while the regression tests are still running.
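
A minimal sketch of this parallelization, assuming a hypothetical run_case that executes one system-level regression case and reports success:

  from concurrent.futures import ProcessPoolExecutor

  def run_case(case: str) -> bool:
      """Stand-in: execute one system-level regression test case."""
      return True

  def count_failures_in_parallel(cases: list[str]) -> int:
      # The long-running cases are spread over all CPU cores, so the
      # product can be handed over while the tests are still finishing.
      with ProcessPoolExecutor() as pool:
          return sum(not ok for ok in pool.map(run_case, cases))

  if __name__ == "__main__":
      print(count_failures_in_parallel([f"case_{i}" for i in range(100)]))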
