Black box test
A black box test is a method of software testing in which test cases are developed from the specification or requirements. This means that tests are designed without knowledge of the internal workings or implementation of the system under test. The program is treated as a black box: only its externally visible behavior is examined. In contrast, white box tests are designed with knowledge of the implemented algorithm.
The aim is to check a software system's compliance with its specification. On the basis of formal or informal specifications, test cases are developed to ensure that the required range of functions is provided. The system under test is viewed as a whole; only its external behavior is used to evaluate the test results.
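As a minimal illustration, consider a hypothetical leap-year function (the name `is_leap_year` and its specification are assumed here for the example). The tester derives cases purely from the stated specification and never inspects the function body:

```python
# Black box test sketch: the tester knows only the specification of
# is_leap_year (a hypothetical function), not its implementation.
# Specification: a year is a leap year if it is divisible by 4,
# except for century years, which must be divisible by 400.

def is_leap_year(year: int) -> bool:
    # Implementation details are irrelevant to the tester; the function
    # is treated as a black box.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Test cases derived solely from the specification:
assert is_leap_year(2024) is True    # divisible by 4
assert is_leap_year(2023) is False   # not divisible by 4
assert is_leap_year(1900) is False   # century year, not divisible by 400
assert is_leap_year(2000) is True    # century year, divisible by 400
```

The same four test cases would remain valid if the implementation were replaced entirely, which is exactly the point of the black box approach.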
Deriving test cases from an informal specification is comparatively time-consuming and, depending on the precision of the specification, may not be possible at all. A complete black box test is therefore often just as uneconomical as a complete white box test.
A successful black box test is also no guarantee that the software is free of errors, since specifications created in the early phases of software design do not cover later detailed design and implementation decisions.
Black box tests prevent programmers from developing tests "around their own mistakes" and thus overlooking gaps in the implementation. A developer with knowledge of a system's inner workings could inadvertently omit certain cases from the tests, or interpret them differently from the specification because of additional assumptions that lie outside the specification. Another useful property is that black box tests can also help check the specification for completeness, since an incomplete specification often raises questions during test development.
Because the test developers must not have any knowledge of the internal workings of the system under test, black box tests require a separate team to develop the tests. In many companies, dedicated test departments are responsible for this.
Comparison with white box tests
Black box tests are used to uncover deviations from the specification, but they are hardly suitable for locating errors in specific components, let alone identifying the error-causing component itself; the latter requires white box tests. It should also be noted that two errors in two components can cancel each other out, yielding a temporarily correct overall system.
Compared to white box tests, black box tests are considerably more complex to carry out because they require a larger organizational infrastructure (a separate team).
The advantages of black box tests over white box tests:
- better verification of the overall system
- testing of meaningful properties, given a suitable specification
- portability of systematically created test sequences to platform-independent implementations
The disadvantages of black box tests compared to white box tests:
- greater organizational effort
- additional functions introduced during implementation are tested only by chance
- test sequences based on an inadequate specification are useless
It should also be mentioned that the distinction between black box and white box testing depends in part on perspective. Testing a subcomponent is a white box test from the perspective of the overall system, since from the outside there is no knowledge of the system's structure and thus of its subcomponents. From the perspective of the subcomponent itself, the same test can in turn be viewed as a black box test if it is developed and carried out without knowledge of the subcomponent's internals.
Selection of test cases
In almost all applications, the number of test cases in a systematically created test sequence based on a suitable specification is too high for practical use. The following methods, for example, can systematically reduce this number:
- limit values and special values,
- equivalence class method, classification tree method,
- (simplified) decision tables,
- condition-related tests,
- use case tests,
- cause-effect analysis,
- fuzzing, to find robustness and security problems,
- risk analysis, or prioritization by the importance of the desired results (important vs. unimportant functions).
In contrast, the reduction can also be carried out intuitively (error guessing). This method should generally be avoided, however, since unconscious assumptions are always involved that could prove harmful when the application is later used. There are, however, other testing schools that are very successful with this approach; representatives include James Bach with Rapid Testing and Cem Kaner with Exploratory Testing (ad hoc testing). These test types can be assigned to experience-based or unsystematic techniques, which also include weak-point-oriented testing.
Another approach is to test all functions according to the frequency with which they will later be used.
Often testing is restricted to intensive testing of those functions in which the probability of errors is high (complex algorithms, parts with insufficient specification, parts written by inexperienced programmers, and so on). Intensive tests can be performed with fuzzing tools, as they allow extensive automation of robustness and vulnerability tests. The results of these tests are information about data packets that can compromise the SUT (system under test). Vulnerability tests can be carried out, for example, with vulnerability scanners or fuzzers.
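The fuzzing idea can be sketched as follows. The parser below is a deliberately flawed, hypothetical SUT invented for this example; the fuzzer feeds it random byte strings and records every input that raises an unhandled exception:

```python
import random

# Minimal fuzzing sketch: feed randomly generated byte strings to the
# system under test (SUT) and record inputs that crash it. The SUT here
# is a hypothetical packet parser with a deliberate robustness defect.

def parse_packet(data: bytes) -> int:
    # Hypothetical SUT: the first byte declares the payload length.
    length = data[0]              # crashes on empty input (IndexError)
    return len(data[1 : 1 + length])

def fuzz(sut, runs: int = 1000, seed: int = 0) -> list:
    rng = random.Random(seed)     # fixed seed makes the run reproducible
    crashing_inputs = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            sut(data)
        except Exception:
            crashing_inputs.append(data)   # candidate for a bug report
    return crashing_inputs

crashes = fuzz(parse_packet)
# For this SUT, exactly the empty inputs trigger the IndexError:
assert crashes and all(len(d) == 0 for d in crashes)
```

Real fuzzing tools add coverage feedback, input mutation, and crash deduplication on top of this basic generate-and-observe loop.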
Risk-based (extent-of-damage-oriented) testing limits itself to thoroughly testing those functions in which errors can have particularly serious consequences. Examples are data corruption such as the destruction of a large file, and systems where life (medicine, motor vehicles, machine control) or death (defense) is at stake. Functions are ranked by priority classes (1, 2, 3, ...) and tested in that order.
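The ranking step can be sketched directly; the test-case names and risk classes below are invented for the example, with class 1 denoting the most severe failure consequences:

```python
# Risk-based ordering sketch: each test case carries a hypothetical
# risk class (1 = most severe consequences) and is executed in that order.

test_cases = [
    {"name": "export report",       "risk_class": 3},
    {"name": "dose calculation",    "risk_class": 1},  # safety-critical
    {"name": "save patient record", "risk_class": 2},
]

ordered = sorted(test_cases, key=lambda tc: tc["risk_class"])
assert [tc["name"] for tc in ordered] == [
    "dose calculation", "save patient record", "export report"
]
```

Under time pressure, the test run can then be cut off after the high-priority classes while still covering the functions whose failure would be most damaging.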