Dynamic software test procedure

from Wikipedia, the free encyclopedia

Dynamic software test procedures are specific test methods used to detect errors in software by means of software tests. In particular, they are intended to reveal program errors that occur as a function of dynamic runtime parameters, such as varying input values, the runtime environment, or user interaction.


While static methods analyze the software to be tested without executing it (at compile time), dynamic methods require the software to be executable (at runtime). The basic principle of dynamic procedures is the controlled execution of the software under test with systematically chosen input data (test cases). For each test case, the expected output data are specified in addition to the input data. The output data produced by the test run are compared with the expected data; any discrepancy indicates an error.

The main task of the individual procedures is to determine suitable test cases for testing the software.

The dynamic procedures can be categorized as follows:

Specification-oriented procedures

Specification-oriented or black-box procedures (formerly called function-oriented test methods) are used to determine test cases that check the extent to which the test item (also called test object or test specimen) fulfills the specified requirements (black box). This is also referred to as testing "against the specification". Depending on the test item and test type, different specifications apply: in the module test, the item is tested against the module specification; in the interface test, against the interface specification; and in the acceptance test, against the technical requirements, such as those laid down in a requirements document.

Equivalence class formation

When forming equivalence classes, the possible values of the inputs (or also of the outputs) are divided into classes for which it can be assumed that errors occurring when processing one value of a class also occur for all other representatives of that class. Conversely, if one representative of a class is processed correctly, it is assumed that inputs of all other elements of the class do not lead to errors either. In this respect, the values of a class can be regarded as equivalent to one another.

The test cases are created on the basis of the equivalence classes. To test the valid equivalence classes, the test data are drawn from as many valid equivalence classes as possible. To test the invalid equivalence classes, a test value from one invalid equivalence class is combined with only valid test data from the other equivalence classes.

The formation of equivalence classes has advantages and disadvantages. Its advantages are that it

  1. forms the basis for boundary value analysis, and
  2. is a suitable method for deriving representative test cases from specifications.

Its disadvantage is that only individual inputs are considered; relationships or interactions between values are not covered.


The specification of an online banking system requires that only amounts between €0.01 and €500.00 may be entered. One can then assume that a transfer of €123.45 will be accepted and executed correctly if a test has shown that a transfer of €123.44 is executed correctly. In general, it can be assumed that all amounts from €0.01 to €500.00 are processed correctly if this is the case for any one amount from this range. It is therefore sufficient to test a single representative of the range in order to track down a possible error.

The same argument applies to negative values and to values greater than €500.00. For the tests it should therefore be sufficient to form three equivalence classes (one valid and two invalid):

  • Values from €0.01 up to and including €500.00 (valid)
  • Values less than or equal to zero (invalid)
  • Values greater than €500.00 (invalid)
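The three equivalence classes can be sketched as a small classification function; one representative per class then suffices as a test input. The function and class names below are illustrative, not part of the specification.

```python
from decimal import Decimal

def amount_class(amount):
    """Classify a transfer amount into one of the three equivalence
    classes from the example (class names are illustrative)."""
    if amount <= Decimal("0.00"):
        return "invalid_low"       # values <= 0
    if amount > Decimal("500.00"):
        return "invalid_high"      # values > 500.00
    return "valid"                 # 0.01 .. 500.00 inclusive

# Under the equivalence assumption, one representative per class is enough:
assert amount_class(Decimal("123.45")) == "valid"
assert amount_class(Decimal("-5.00")) == "invalid_low"
assert amount_class(Decimal("600.00")) == "invalid_high"
```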

Every transfer in the online banking system must be authorized by entering a TAN. Analogously to the amount field, four equivalence classes can be formed for the TAN entry:

  • Entering a correct TAN
  • Entering an incorrect TAN
  • Entering a TAN that is too short
  • Entering a TAN that is too long

The following test cases are defined on the basis of these equivalence classes:

  • Enter a valid amount (e.g. €123.45) and confirm with a correct TAN (executed, since everything is correct)
  • Enter a valid amount (e.g. €123.45) and confirm with an incorrect TAN (rejected because of the wrong TAN)
  • Enter an invalid amount (e.g. €600.00) and confirm with a correct TAN (rejected because of the invalid amount)
  • Enter a valid amount (e.g. €123.45) and confirm with a TAN that is too short (rejected because the TAN is too short)
  • Enter a valid amount (e.g. €123.45) and confirm with a TAN that is too long (rejected because the TAN is too long)
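The five test cases above can be run against a hypothetical implementation of the transfer dialog. The function name, the six-digit TAN format, and the rejection messages are assumptions for illustration only.

```python
from decimal import Decimal

CORRECT_TAN = "123456"  # assumed six-digit TAN, for illustration

def submit_transfer(amount, tan):
    """Hypothetical system under test: the transfer is executed only
    for a valid amount combined with the correct TAN."""
    if not (Decimal("0.01") <= amount <= Decimal("500.00")):
        return "rejected: invalid amount"
    if len(tan) != len(CORRECT_TAN):
        return "rejected: TAN has wrong length"
    if tan != CORRECT_TAN:
        return "rejected: wrong TAN"
    return "executed"

# The five derived test cases:
assert submit_transfer(Decimal("123.45"), "123456") == "executed"
assert submit_transfer(Decimal("123.45"), "654321") == "rejected: wrong TAN"
assert submit_transfer(Decimal("600.00"), "123456") == "rejected: invalid amount"
assert submit_transfer(Decimal("123.45"), "123") == "rejected: TAN has wrong length"
assert submit_transfer(Decimal("123.45"), "1234567") == "rejected: TAN has wrong length"
```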

Equivalence class formation cannot take dependencies between different input values into account. For example, when checking an entered address it is not sufficient to verify (e.g. against a database) that the place name, street name and postcode each belong to the class of valid values; they must also match one another.

Boundary value analysis

Boundary value analysis is a special case of equivalence class analysis. It arose from the observation that errors occur particularly frequently at the "edges" of the equivalence classes. Therefore, not arbitrary representatives are tested, but so-called boundary values. In the example, these would be the values

  • €0.00 (invalid entry)
  • €0.01 (valid entry)
  • €500.00 (valid entry)
  • €500.01 (invalid entry)
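The four boundary values can be checked directly against a minimal sketch of the validity rule (the function name is an assumption):

```python
from decimal import Decimal

def is_valid_amount(amount):
    # Valid range from the specification: €0.01 to €500.00 inclusive.
    return Decimal("0.01") <= amount <= Decimal("500.00")

# One test on each side of each class edge:
assert is_valid_amount(Decimal("0.00")) is False
assert is_valid_amount(Decimal("0.01")) is True
assert is_valid_amount(Decimal("500.00")) is True
assert is_valid_amount(Decimal("500.01")) is False
```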

Pairwise method

The pairwise method reduces the number of tests of combinations of several input values by not testing all possible combinations. Instead, each input value of one field is tested in a pair with each input value of every other field. This radically reduces the number of tests required, but of course also means that errors which only occur with very specific combinations of more than two fields may not be discovered.

State-based test methods

State-based test methods are based on state machines, which today are often represented as UML state diagrams.

The description of a state machine usually does not cover error cases. These must be specified additionally by stating, for each combination {initial state; event} (including the combinations not specified in the machine), the subsequent state and the triggered actions. All of these combinations can then be tested. In addition to technical applications, state-based methods can also be used for testing graphical user interfaces and classes that are defined by state machines.

The test cases determined in this way are complete if all of the following criteria are met:

  • All state transitions are traversed.
  • All events that are supposed to cause state transitions are tested.
  • All events that are not allowed to cause state transitions are tested.
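The completeness criteria can be sketched against a small illustrative state machine (a door lock; the states, events, and the "stay in state" error handling are assumptions):

```python
# Transition table {(state, event): next_state}. Combinations not listed
# are error cases; here they are specified as "remain in the current
# state", which is an assumption for this sketch.
TRANSITIONS = {
    ("locked", "unlock"): "unlocked",
    ("unlocked", "lock"): "locked",
    ("unlocked", "open"): "open",
    ("open", "close"): "unlocked",
}
STATES = {"locked", "unlocked", "open"}
EVENTS = {"unlock", "lock", "open", "close"}

def step(state, event):
    return TRANSITIONS.get((state, event), state)  # error case: no change

# Criterion 1 and 2: every specified transition is traversed and checked.
for (state, event), target in TRANSITIONS.items():
    assert step(state, event) == target

# Criterion 3: every event that must not cause a transition is tested too.
for state in STATES:
    for event in EVENTS:
        if (state, event) not in TRANSITIONS:
            assert step(state, event) == state
```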

Cause and effect analysis

This method identifies the causes and effects of each partial specification. A cause is a single input condition or an equivalence class of input conditions; an effect is an output condition or a system transformation.

The partial specification is converted into a Boolean graph that connects causes and effects through logical links. The graph is then supplemented with dependencies due to syntactic constraints or environmental conditions. The following types of dependencies between causes are distinguished:

  1. Exclusive dependency: the presence of one cause excludes other causes.
  2. Inclusive dependency: at least one of several causes is present (e.g. a working, switched-on traffic light always shows green, yellow, red, or red and yellow together).
  3. One-and-only-one dependency: exactly one of several causes is present (e.g. male or female).
  4. Requires dependency: the presence of one cause is a prerequisite for the presence of another (e.g. age > 21 requires age > 18).
  5. Masking dependency: one cause prevents another cause from occurring.

The resulting graph is transformed into a decision table. Rules are applied that generate n + 1 test cases from a combination of n causes (instead of 2^n, as a complete decision table would require). Each test case corresponds to a column in the decision table.
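A minimal decision table for the banking example can make this concrete. With n = 2 causes ("amount is valid", "TAN is correct"), n + 1 = 3 columns suffice instead of the 2² = 4 of a complete table; the reduction rule applied here (vary one false cause at a time) is an illustrative simplification.

```python
# Causes: c1 = "amount is valid", c2 = "TAN is correct".
# Effect: the transfer is executed only if both causes hold.
decision_table = [
    # (c1,    c2,    expected effect)
    (True,  True,  "execute"),
    (False, True,  "reject"),
    (True,  False, "reject"),
]  # n = 2 causes -> n + 1 = 3 test cases instead of 2**2 = 4

def effect(c1, c2):
    """Sketch of the logical link in the cause-effect graph (AND)."""
    return "execute" if c1 and c2 else "reject"

for c1, c2, expected in decision_table:
    assert effect(c1, c2) == expected
```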

Structure-oriented procedures

Structure-oriented or white-box procedures determine test cases based on the software's source code (white-box test).

Software modules contain data that is processed and control structures that control the processing of the data. Accordingly, a distinction is made between tests that are based on the control flow and tests that are based on data access.

Control-flow-oriented tests refer to the logical expressions of the implementation. Data-flow-oriented criteria focus on the data flow of the implementation; more precisely, on the way values are bound to variables and how these bindings affect the execution of the implementation.

Control flow oriented tests

The control flow-oriented test procedures are based on the control flow graph of the program. A distinction is made between the following types:

  • Statement coverage
  • Branch coverage (edge coverage)
  • Condition coverage
  • Path coverage
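The difference between the first two criteria can be shown on a tiny illustrative function: one test can execute every statement without exercising both outcomes of the condition.

```python
def clamp(x, limit=10):
    """Illustrative function with an if that has no else branch."""
    if x > limit:
        x = limit
    return x

# x = 15 executes every statement, so statement coverage is reached ...
assert clamp(15) == 10
# ... but only the True outcome of the condition is taken. Branch
# coverage additionally requires a test for the False outcome:
assert clamp(5) == 5
```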

Data flow oriented tests

Data flow-oriented test methods are based on the data flow, i.e. access to variables. They are particularly suitable for object-oriented systems.

There are different criteria for the data flow-oriented tests, which are described below.

All-defs criterion: For each definition of a variable (all defs), a computation or condition is tested; for each node and each variable, a definition-clear path to a use must be exercised. The error detection rate of this criterion is around 24%.

All-p-uses criterion: "p-uses" (predicate uses) are uses of a variable to form truth values within a predicate. The error detection rate of this criterion is around 34%; in particular, control-flow errors are detected.

All-c-uses criterion: "c-uses" (computation uses) are uses of a variable in the computation of values within an expression. This criterion reveals around 48% of errors; in particular, it identifies computation errors.
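The terms can be made concrete on a small illustrative function, with the definitions and uses marked in comments:

```python
def classify(x):
    threshold = 10               # def of `threshold`
    if x > threshold:            # p-use: `x` and `threshold` in a predicate
        result = x - threshold   # def of `result`, c-uses of `x`/`threshold`
    else:
        result = 0               # def of `result`
    return result                # c-use of `result`

# Both definition-clear paths from the defs to the uses are exercised:
assert classify(15) == 5
assert classify(5) == 0
```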

Because of their complexity, the techniques for data-flow-oriented test case determination can only be applied with tool support. Since hardly any such tools exist, these techniques currently have little practical relevance.

Diversifying test methods

The group of diversifying tests comprises test techniques that test different versions of a software against each other. The test results are not compared with the specification; instead, the results of the different versions are compared with one another. A test case is considered passed if the outputs of the different versions are identical. In contrast to the function- and structure-oriented test methods, no completeness criterion is specified. The necessary test data are obtained using one of the other techniques, randomly, or by recording a user session.

Diversifying tests avoid the often time-consuming assessment of test results against the specification. This naturally carries the risk that an error producing the same result in all compared versions will go unrecognized. Because only a simple comparison is needed, such tests can be automated very well.

Back-to-back test

In the back-to-back test, the versions to be tested against each other are produced by n-version programming, i.e. the programming of different versions of a software according to the same specification. The independence of the programming teams is a basic requirement.

This method is very expensive and is only justified if the safety requirements are correspondingly high.
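A minimal sketch of the comparison step, assuming two independently written versions of the same specification ("sort a list of integers in ascending order"); the two implementations stand in for the independently developed versions:

```python
import random

def version_a(xs):
    """Version 1: built-in sort."""
    return sorted(xs)

def version_b(xs):
    """Version 2: naive insertion sort, written independently."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

# Back-to-back test: random inputs, outputs compared version against
# version rather than against the specification.
rng = random.Random(0)
for _ in range(100):
    data = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
    assert version_a(data) == version_b(data)
```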

Mutation test

The mutation test is not a test technique in the narrower sense, but a test of the effectiveness of other test methods and of the test cases used. The different versions are created by artificially inserting typical errors. It is then checked whether the test method used finds these artificial errors with the existing test data. If an error is not found, the test data are extended with corresponding test cases. The technique is based on the assumption that an experienced programmer usually makes only "typical" errors.
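A minimal sketch of the idea: a typical off-by-one error is seeded into a copy of a function, and the existing test data are checked for whether they "kill" the mutant (distinguish it from the original). Function names and test data are illustrative.

```python
def is_adult(age):
    """Original implementation under test."""
    return age >= 18

def is_adult_mutant(age):
    """Mutant with a seeded typical error: >= replaced by >."""
    return age > 18

def kills(original, mutant, tests):
    """A mutant is 'killed' if any test input distinguishes it."""
    return any(original(t) != mutant(t) for t in tests)

# The existing test data miss the boundary, so the mutant survives ...
assert not kills(is_adult, is_adult_mutant, [10, 30])
# ... which shows the test data must be extended with the boundary case:
assert kills(is_adult, is_adult_mutant, [10, 30, 18])
```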

Regression test

Due to the ambiguous use of the term, a separate article is dedicated to the regression test .

