Pattern recognition

Pattern recognition is the ability to recognize regularities, repetitions, similarities, or rules in a set of data. This capability of higher cognitive systems is studied for human perception by cognitive sciences such as perceptual psychology, and for machines by computer science.

Typical examples of its countless areas of application are speech recognition, text recognition and face recognition, tasks that human perception performs continuously and apparently effortlessly. However, the elementary ability to classify is also the cornerstone of concept formation, abstraction and (inductive) thinking, and thus ultimately of intelligence, so that pattern recognition is also of central importance for more general fields such as artificial intelligence or data mining.

Pattern recognition in humans

This ability brings order into the initially chaotic flow of sensory perception. By far the best studied is pattern recognition in visual perception. Its most important task is the identification (and subsequent classification) of objects in the outside world (see object recognition).

In perceptual psychology, a distinction is made between two main approaches to explaining pattern recognition: "template theories" and "feature theories". Template theories assume that perceived objects are compared with objects already stored in long-term memory, while feature theories are based on the assumption that perceived objects are analyzed and identified by means of their components. Two of the most comprehensive feature theories are the computational theory of David Marr and the theory of geometric elements ("geons") of Irving Biederman.

Pattern recognition in computer science

Computer science studies processes that automatically sort measured signals into categories. The central point is the recognition of patterns, that is, of the features that are common to all things in one category and that distinguish them from the contents of other categories. Pattern recognition processes enable computers, robots and other machines to handle the less precise signals of a natural environment instead of requiring precise inputs.

The first systematic research approaches to pattern recognition emerged in the mid-1950s with the desire to sort mail deliveries by machine instead of by hand. Over time, the three current groups of pattern recognition processes emerged: syntactic, statistical and structural pattern recognition. The use of support vector machines and artificial neural networks was regarded as a breakthrough in the late 1980s. Although many of today's standard procedures were discovered very early, they only became suitable for everyday use after considerable methodological refinements and the general increase in performance of commercially available computers.

Approaches

Today a distinction is made between three basic approaches to pattern recognition: syntactic, statistical and structural pattern recognition. Although they are based on different ideas, closer inspection reveals commonalities that go so far that a method from one group can be transferred into a method of another group without significant effort. Of the three approaches, syntactic pattern recognition is the oldest, statistical pattern recognition the most widely used, and structural pattern recognition the most promising for the future.

Syntactic

The aim of syntactic pattern recognition is to describe objects by sequences of symbols in such a way that objects of the same category have the same descriptions. If you want to separate apples from bananas, you could introduce symbols for red (R) and yellow (G) as well as for elongated (L) and spherical (K); all apples would then be described by the symbol sequence RK and all bananas by the word GL. The problem of pattern recognition then presents itself as the search for a formal grammar, i.e. for a set of symbols and rules for combining them. Since an unambiguous assignment between feature and symbol is usually not easily possible, probabilistic methods are used here. For example, colors come in innumerable shades, yet a sharp distinction must be made between red and yellow. With complex problems, the actual difficulty is thereby merely postponed rather than solved, which is why this approach receives little attention and is used only for very clearly delimited tasks.
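
The apple/banana example can be made concrete with a short sketch. The following Python snippet is an illustration only; the attribute names, the symbol table and the trivial one-word "grammar" are assumptions, not part of any standard method. It maps measured attributes to the symbols R, G, L and K from the text and classifies by matching the resulting symbol sequence:

```python
def to_symbols(color, shape):
    """Map measured attributes to the symbols used in the text (R, G, L, K)."""
    color_symbol = {"red": "R", "yellow": "G"}[color]
    shape_symbol = {"elongated": "L", "spherical": "K"}[shape]
    return color_symbol + shape_symbol

# A trivial "grammar": each category accepts exactly one symbol sequence.
grammar = {"RK": "apple", "GL": "banana"}

def classify(color, shape):
    word = to_symbols(color, shape)
    return grammar.get(word, "unknown")

print(classify("red", "spherical"))     # apple
print(classify("yellow", "elongated"))  # banana
```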

Statistical

Most of today's standard methods fall into this area, in particular the support vector machines and artificial neural networks mentioned above. The aim here is to determine, for an object, the probability that it belongs to one category or another, and ultimately to assign it to the category with the highest probability. Instead of evaluating features according to ready-made rules, they are simply measured as numerical values and collected in a so-called feature vector. A mathematical function then uniquely assigns a category to every conceivable feature vector. The great strength of these methods is that they can be applied to almost all subject areas and that no deeper knowledge of the underlying interrelationships is required.
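
As a minimal sketch of this idea, the following Python snippet collects two measured features in a feature vector and assigns an object to the category with the highest probability. The toy data and the choice of a simple Gaussian class model are assumptions for illustration, not a reference implementation:

```python
import numpy as np

# Toy training sample: feature vectors [diameter in cm, length in cm] with labels.
train_x = np.array([[8.0, 7.5], [7.5, 8.2], [3.0, 18.0], [2.5, 19.5]])
train_y = np.array(["apple", "apple", "banana", "banana"])

# Estimate a simple Gaussian model (mean, spread, prior) for each category.
classes = np.unique(train_y)
stats = {}
for c in classes:
    xc = train_x[train_y == c]
    stats[c] = (xc.mean(axis=0), xc.std(axis=0) + 1e-6, len(xc) / len(train_x))

def log_posterior(x, c):
    mean, std, prior = stats[c]
    log_likelihood = -0.5 * np.sum(((x - mean) / std) ** 2 + np.log(2 * np.pi * std ** 2))
    return np.log(prior) + log_likelihood

def classify(x):
    """Assign the feature vector to the category with the highest probability."""
    return max(classes, key=lambda c: log_posterior(np.asarray(x, dtype=float), c))

print(classify([7.8, 8.0]))   # apple
print(classify([2.8, 18.5]))  # banana
```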

Structural

Structural pattern recognition combines various syntactic and/or statistical processes into a single new process. A typical example is face recognition, in which different classification methods are used for different parts of the face such as the eyes and nose, each of which only states whether the part being searched for is present or not. Superordinate structural procedures such as Bayesian networks combine these individual results and use them to calculate the overall result, the category membership. The basic feature recognition is left to general statistical processes, while higher-level inference processes bring in special knowledge about the subject matter. Structural procedures are used particularly for very complex problems such as computer-aided detection, i.e. computer-assisted medical diagnosis.
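
A rough sketch of this idea in Python might look as follows. The part detectors are assumed to already exist and only report present/not present; a full Bayesian network is replaced here by a simple independence-based combination of assumed detector reliabilities:

```python
# Assumed reliabilities: P(detector fires | face) and P(detector fires | no face).
likelihoods = {
    "eyes":  (0.90, 0.20),
    "nose":  (0.80, 0.30),
    "mouth": (0.85, 0.25),
}
prior_face = 0.5  # assumed prior probability that a face is present

def p_face(detections):
    """Combine the yes/no outputs of the part classifiers into P(face | detections)."""
    num = prior_face
    den = 1 - prior_face
    for part, present in detections.items():
        p_hit_face, p_hit_noface = likelihoods[part]
        num *= p_hit_face if present else (1 - p_hit_face)
        den *= p_hit_noface if present else (1 - p_hit_noface)
    return num / (num + den)

print(round(p_face({"eyes": True, "nose": True, "mouth": False}), 2))  # about 0.71
```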

Sub-steps of pattern recognition

A pattern recognition process can be broken down into several sub-steps, starting with acquisition and ending with a determined classification. During acquisition, data or signals are recorded and digitized using sensors. From the mostly analog signals, patterns are obtained that can be represented mathematically as vectors, so-called feature vectors, and matrices. Preprocessing takes place to reduce the data and improve their quality. During subsequent feature extraction, the patterns are transformed into a feature space. The dimension of the feature space, in which the patterns are now represented as points, is restricted to the essential features during feature reduction. The final key step is classification by a classifier, which assigns the features to different classes. The classification method can be based on a learning process with the aid of a training sample.
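
These sub-steps can be pictured as a chain of functions. The following Python skeleton only illustrates the flow; every function body is a placeholder assumption, not a prescribed implementation:

```python
import numpy as np

def acquire():
    """Acquisition: read and digitize a signal from a sensor (here: random data)."""
    return np.random.default_rng(0).normal(size=256)

def preprocess(signal):
    """Preprocessing: reduce noise and normalize to a uniform value range."""
    smoothed = np.convolve(signal, np.ones(5) / 5, mode="same")
    return (smoothed - smoothed.mean()) / (smoothed.std() + 1e-12)

def extract_features(signal):
    """Feature extraction: map the pattern into a feature space (here: its spectrum)."""
    return np.abs(np.fft.rfft(signal))

def reduce_features(features, k=8):
    """Feature reduction: keep only the k largest features as the essential ones."""
    return np.sort(features)[-k:]

def classify(feature_vector):
    """Classification: assign the feature vector to a class (placeholder rule)."""
    return "class A" if feature_vector.mean() > 1.0 else "class B"

print(classify(reduce_features(extract_features(preprocess(acquire())))))
```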

Schematic structure of a pattern recognition system

Acquisition

See also: signal processing, measurement, digitization and measurement technology

Preprocessing

In order to recognize patterns better, preprocessing usually takes place first. The removal or reduction of unwanted or irrelevant signal components does not yet lead to a reduction in the data to be processed; that only happens during feature extraction. Possible preprocessing methods include signal averaging, application of a threshold value, and normalization. The desired results of preprocessing are the reduction of noise and the mapping to a uniform range of values.
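
A minimal Python sketch of these preprocessing steps, applied to a synthetic signal; the window length and the threshold are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.3 * rng.normal(size=200)

# Signal averaging (moving average) to reduce noise
averaged = np.convolve(signal, np.ones(7) / 7, mode="same")

# Thresholding: suppress components below a threshold value
thresholded = np.where(np.abs(averaged) > 0.2, averaged, 0.0)

# Normalization: map the values to the uniform range [0, 1]
normalized = (thresholded - thresholded.min()) / (thresholded.max() - thresholded.min())
```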

Feature extraction

After the pattern has been improved by preprocessing, various features can be obtained from its signal. As a rule, this is done empirically, using procedures gained through intuition and experience, since there are few purely analytical procedures (e.g. automatic feature synthesis). Which features are essential depends on the respective application. Features can consist of symbols or symbol chains, or they can be obtained from different scale levels using statistical methods. Among the numerical methods, a distinction is made between methods in the original domain and methods in the spectral domain.

Using transformations such as the discrete Fourier transform (DFT) and the discrete cosine transform (DCT), the original signal values can be brought into a more manageable feature space. The boundaries between methods of feature extraction and feature reduction are fluid. Since it is desirable to obtain as few features as possible, but ones that are all the more meaningful, relationships such as the covariance and the correlation coefficient between several features can be taken into account. Features can be decorrelated with the Karhunen-Loève transformation (principal axis transformation).
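
As an illustration, the following Python sketch computes spectral features via the DFT and then decorrelates them with a Karhunen-Loève (principal axis) transformation; the random patterns only stand in for real preprocessed data:

```python
import numpy as np

rng = np.random.default_rng(2)
patterns = rng.normal(size=(50, 64))            # 50 patterns with 64 samples each

# Spectral features: magnitudes of the discrete Fourier transform of each pattern
spectral_features = np.abs(np.fft.rfft(patterns, axis=1))

# Karhunen-Loève transformation: project onto the eigenvectors (principal axes)
# of the covariance matrix, which decorrelates the features.
centered = spectral_features - spectral_features.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]           # sort axes by decreasing variance
decorrelated = centered @ eigenvectors[:, order]
```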

Feature reduction

In order to reduce the features to those essential for classification, it is checked which features are relevant for separating the classes and which can be omitted. Methods of feature reduction are analysis of variance, in which it is checked whether one or more features have discriminative power, and discriminant analysis, in which the smallest possible number of separating non-elementary features is formed by combining elementary features.
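
A simple variance-based check of this kind can be sketched as follows; the toy data, the separability score and the threshold are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(40, 5))             # 40 patterns, 5 features
y = np.array([0] * 20 + [1] * 20)
x[y == 1, 0] += 3.0                      # only feature 0 actually separates the classes

def separability(feature, labels):
    """Ratio of between-class variance to within-class variance for one feature."""
    classes = np.unique(labels)
    means = np.array([feature[labels == c].mean() for c in classes])
    within = np.mean([feature[labels == c].var() for c in classes])
    return means.var() / (within + 1e-12)

scores = np.array([separability(x[:, j], y) for j in range(x.shape[1])])
selected = np.where(scores > 1.0)[0]
print(selected)                          # expected: [0]
```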

Classification

The last and essential step of pattern recognition is the classification of the features into classes. Various classification methods exist for this purpose.
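
As one example of such a method, the following Python sketch implements a k-nearest-neighbour classifier learned from a small labelled sample; the data are made up:

```python
import numpy as np

train_x = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8]])
train_y = np.array(["A", "A", "B", "B"])

def knn_classify(x, k=3):
    """Assign x to the class most common among its k nearest training samples."""
    distances = np.linalg.norm(train_x - np.asarray(x, dtype=float), axis=1)
    nearest = train_y[np.argsort(distances)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

print(knn_classify([1.1, 0.9]))  # A
print(knn_classify([4.9, 5.1]))  # B
```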

Living beings mostly use neural networks to recognize patterns in the signals of their senses. This approach is analyzed and imitated in bionics. Neuroinformatics has shown that artificial neural networks are capable of learning and recognizing complex patterns even without a prior rule abstraction of the kind described above.
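
A minimal sketch of this idea is a single perceptron that learns a separating rule from examples alone, without any explicit rule abstraction; the data, learning rate and number of passes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)        # two linearly separable classes

weights = np.zeros(2)
bias = 0.0
for _ in range(20):                      # a few passes over the training sample
    for xi, yi in zip(x, y):
        prediction = int(weights @ xi + bias > 0)
        error = yi - prediction
        weights += 0.1 * error * xi      # perceptron learning rule
        bias += 0.1 * error

print(int(weights @ np.array([4.0, 4.0]) + bias > 0))  # typically 1 (second class)
print(int(weights @ np.array([0.0, 0.0]) + bias > 0))  # typically 0 (first class)
```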

Following the classification of the pattern, an attempt can be made to interpret it; this is the subject of pattern analysis. In image processing, the classification of images can be followed by so-called image recognition, i.e. the mere identification of objects in an image without interpreting or analyzing the relationships between these objects.

Literature

  • Richard O. Duda, Peter E. Hart, David G. Stork: Pattern Classification. Wiley, New York 2001, ISBN 0-471-05669-3.
  • J. Schuermann: Pattern Classification - A Unified View of Statistical and Neural Approaches. Wiley, New York 1996, ISBN 0-471-13534-8.
  • K. Fukunaga: Statistical Pattern Recognition. Academic Press, New York 1991, ISBN 0-12-269851-7.
  • M. Eysenck, M. Keane: Cognitive Psychology. Psychology Press, Hove 2000.
  • H. Niemann: Classification of Patterns. Springer, Berlin 1983, ISBN 3-540-12642-2.
  • Christopher M. Bishop: Pattern Recognition and Machine Learning. Springer, Berlin 2006, ISBN 0-387-31073-8.
  • Monique Pavel: Fundamentals of Pattern Recognition. 2nd edition. Dekker, New York 1993, ISBN 0-824-78883-4.

References

  1. E. Bruce Goldstein: Perceptual Psychology. Spektrum Akademischer Verlag, Heidelberg 2002, ISBN 3-8274-1083-5.