Test automation


Test automation refers to the automation of testing activities. This covers both software testing and the automated testing of hardware (the hardware test).

Motivation

In software development, it is particularly important to know a fixed, well-defined status of the software, for example:

  • Is the current, new software version better than the old version?

Automated tests that check, after a change has been implemented, whether other functions are affected in undesirable ways are called regression tests. They make software measurable in terms of its quality and reveal possible side effects of changes directly and clearly. They serve as direct feedback for developers and testers, who may not be able to oversee the entire software system at once, and help to detect side effects and consequential errors.
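A minimal sketch of such an automated regression test in Python, using the standard unittest module; the function discount_price and its expected behavior are hypothetical and only illustrate the principle:

    import unittest

    def discount_price(price, percent):
        """Hypothetical business function under test."""
        return round(price * (1 - percent / 100), 2)

    class DiscountRegressionTest(unittest.TestCase):
        def test_existing_behaviour_unchanged(self):
            # Established behaviour that must not break after future changes.
            self.assertEqual(discount_price(100.0, 10), 90.0)

        def test_fix_for_reported_bug(self):
            # Test case written for a reported bug: 0 % discount must not alter the price.
            self.assertEqual(discount_price(59.99, 0), 59.99)

    if __name__ == "__main__":
        unittest.main()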

Test automation therefore provides a metric: the number of successful test cases per test run. This metric can answer the following questions:

  • When is a new requirement completely fulfilled by software?
  • When is a program bug fixed?
  • When is the developer's work finished?
  • Who is responsible for what at what point in time?
  • What is the quality of a new software version (see development stage (software) )?
  • Is the new software version qualitatively better than the previous version?
  • Does a bug that has been fixed or a new requirement affect existing software (change in the behavior of the software)?
  • Has it been ensured that real operation with the new software is successful and secure?
  • Which changes are genuinely new functionality and which are bug fixes, and can this be traced?
  • Can the delivery date of the software still be met if it is not possible to assess the current quality of the software?

For the example question "When is a program bug fixed?", the answer in this case is:

"Exactly when all existing test cases and also the test cases written for the program error have been successfully completed."

Only continuous testing provides this feedback, and continuous testing is only feasible through automation.

Another advantage of test automation is the acceleration of the development process. Where build, installation and testing are carried out manually one after the other in software projects without automation, these three steps can be triggered automatically in sequence in fully automated projects (i.e. when the build and installation can be automated in addition to the tests), for example in a nightly run. Depending on the scope of the project, this process can be started in the evening and the test results are available the next morning.
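A minimal sketch of such a nightly run as a Python script, assuming hypothetical build, install and test commands; in practice this sequence is usually handled by a build or continuous integration server:

    import subprocess
    import sys

    # Hypothetical commands for the three steps; real projects would use their
    # own build system, installer and test runner here.
    STEPS = [
        ("build",   ["make", "all"]),
        ("install", ["make", "install"]),
        ("test",    ["python", "-m", "pytest", "tests/"]),
    ]

    def nightly_run():
        for name, cmd in STEPS:
            print(f"Starting step: {name}")
            result = subprocess.run(cmd)
            if result.returncode != 0:
                print(f"Step '{name}' failed; aborting nightly run.")
                sys.exit(result.returncode)
        print("Nightly run finished; results are available in the morning.")

    if __name__ == "__main__":
        nightly_run()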

Activities that can be automated

In principle, the following activities can be automated:

  • Test case creation
    • Test data creation
    • Test scripting
  • Test execution
  • Test evaluation
  • Test documentation
  • Test administration

Test case creation

Depending on the format used to describe a test case, test case creation can be automated by transforming high-level descriptions (test specifications) into this format. Languages of different levels of abstraction are used for test specification: simple table-like notations for test data and function calls, scripting languages (e.g. Tcl, Perl, Python), imperative languages (e.g. C, TTCN-3), object-oriented approaches (JUnit), declarative and logical formalisms, as well as model-based approaches (e.g. TPT). The goal is a comprehensive and, where possible, fully automatic translation of artifacts from an abstract language level far from the machine into artifacts at a language level close to the machine. Another approach is to generate test cases dynamically from declared business objects. If a test specification is not available in an executable form but in a non-executable language (e.g. UML, an Excel table, or similar), it can be translated automatically into executable test cases using suitable tools.
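As an illustration, the following Python sketch translates a simple table-like test specification into executable test cases using pytest parametrization; the table contents and the function add are hypothetical:

    import pytest

    # Hypothetical table-like test specification: inputs and expected results,
    # e.g. exported from a spreadsheet.
    TEST_TABLE = [
        # (operand_a, operand_b, expected_sum)
        (1, 2, 3),
        (0, 0, 0),
        (-5, 5, 0),
    ]

    def add(a, b):
        """Hypothetical function under test."""
        return a + b

    @pytest.mark.parametrize("a, b, expected", TEST_TABLE)
    def test_add_from_specification(a, b, expected):
        # Each table row becomes one executable test case.
        assert add(a, b) == expected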

Test data creation and test scripting

Since the number of possible input values and execution sequences in a program is often very large, input data and sequences must be selected when generating test cases from test specifications, according to the test coverage to be achieved. The software's data model can often be used to create test data, while in model-based testing behavioral models of the software are used to create test scripts. Script-free solutions are also available on the commercial market.
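A minimal sketch of combinatorial test data generation from a simple data model in Python; the field names and value classes are hypothetical, and full combinatorial coverage is assumed as the coverage target:

    from itertools import product

    # Hypothetical data model: each field with its relevant value classes
    # (equivalence classes / boundary values).
    DATA_MODEL = {
        "customer_type": ["private", "business"],
        "payment":       ["invoice", "credit_card"],
        "amount":        [0, 1, 9999],
    }

    def generate_test_data(model):
        """Generate all field combinations for full combinatorial coverage."""
        keys = list(model)
        for values in product(*(model[k] for k in keys)):
            yield dict(zip(keys, values))

    if __name__ == "__main__":
        for record in generate_test_data(DATA_MODEL):
            print(record)   # 2 * 2 * 3 = 12 generated test data records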

Test execution

Today, tests are largely carried out using fully automated test tools. Depending on the target system, unit test tools, test systems for graphical user interfaces, load test systems, hardware-in-the-loop test benches or other tools are used.
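A minimal sketch of fully automated test execution using Python's unittest runner; the test directory name tests is an assumption:

    import unittest

    def run_all_tests(start_dir="tests"):
        # Discover all test modules below the given directory and execute them.
        suite = unittest.defaultTestLoader.discover(start_dir)
        runner = unittest.TextTestRunner(verbosity=2)
        result = runner.run(suite)
        return result.wasSuccessful()

    if __name__ == "__main__":
        success = run_all_tests()
        raise SystemExit(0 if success else 1)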

Test evaluation

To evaluate a test, the actual test result must be compared with the expected value. In the simplest case, only a table comparison is required; however, if the target behavior is defined by logical constraints or involves highly complex calculations, the so-called oracle problem can arise. If two software versions or two test cycles, and thus two sets of test results, are compared against the target result, trend statements and quality statistics can be derived.
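A minimal sketch of automated test evaluation in Python, comparing actual results from two hypothetical test runs against expected values and deriving a simple pass-rate trend:

    def evaluate(results, expectations):
        """Compare actual results with expected values; return passed and total counts."""
        passed = sum(1 for test_id, actual in results.items()
                     if actual == expectations.get(test_id))
        return passed, len(expectations)

    # Hypothetical expected values and results of two test runs.
    expectations = {"t1": 3, "t2": 0, "t3": 42}
    run_old = {"t1": 3, "t2": 1, "t3": 42}
    run_new = {"t1": 3, "t2": 0, "t3": 42}

    for name, run in [("old version", run_old), ("new version", run_new)]:
        passed, total = evaluate(run, expectations)
        print(f"{name}: {passed}/{total} test cases passed")  # simple trend statement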

Test documentation

In test documentation, a traceable and understandable test report is generated from the test results obtained. Document generators and templating tools can be used for this.
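A minimal sketch of report generation from test results using Python's string.Template; the report layout and the numbers are hypothetical:

    from string import Template
    from datetime import date

    REPORT_TEMPLATE = Template(
        "Test report ($day)\n"
        "==================\n"
        "Passed: $passed of $total test cases\n"
    )

    def generate_report(passed, total):
        # Fill the template with the evaluation result.
        return REPORT_TEMPLATE.substitute(day=date.today().isoformat(),
                                          passed=passed, total=total)

    if __name__ == "__main__":
        print(generate_report(passed=41, total=42))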

Test administration

Test administration is responsible for the management and versioning of test suites as well as for providing an adequate user environment. In addition to standard tools (e.g. CVS, Eclipse), there are a number of specialized tools tailored to the needs of software testing.

Universal architecture for test automation

Universal test system architecture

Various tools exist to automate the activities listed above. Each of them focuses on solving specific tasks, and they differ in operating philosophy, syntax and semantics. As a result, it is often difficult to choose the right tools for a particular set of activities or to use the tools correctly. The universal test system architecture structures and classifies the automatable activities into abstract, solution-neutral tool functionality. To this end, it defines five functional levels: test management, test execution and evaluation, test bed control, test object stimulation and observation, and test object environment. The test system architecture supports the integration of existing test tools and components into test systems and thus provides a universal basis for test automation.
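As an illustration only, the five functional levels could be expressed as abstract interfaces, for example in the following Python sketch; the class and method names are assumptions and not part of the published architecture:

    from abc import ABC, abstractmethod

    # Hypothetical interfaces for the five functional levels; names and methods
    # are illustrative assumptions only.
    class TestManagement(ABC):
        @abstractmethod
        def schedule(self, test_suite): ...

    class TestExecutionAndEvaluation(ABC):
        @abstractmethod
        def run_and_evaluate(self, test_case): ...

    class TestBedControl(ABC):
        @abstractmethod
        def configure(self, setup): ...

    class TestObjectStimulationAndObservation(ABC):
        @abstractmethod
        def stimulate(self, signal): ...
        @abstractmethod
        def observe(self): ...

    class TestObjectEnvironment(ABC):
        @abstractmethod
        def provide(self): ...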



Literature

  • Dmitry Korotkiy: Universal test system architecture in mechatronics. Sierke Verlag, Göttingen 2010, ISBN 978-3-86844-238-0.
  • Thomas Bucsics, Manfred Baumgartner, Richard Seidl, Stefan Gwihs: Basic knowledge of test automation - concepts, methods and techniques. 2nd, updated and revised edition. dpunkt.verlag, 2015, ISBN 978-3-86490-194-2.