Sensor data fusion

from Wikipedia, the free encyclopedia


Overview of sensor data fusion

The combination of the output data of several sensors is generally referred to as sensor data fusion. The goal is almost always to obtain information of better quality. What "better quality" means depends on the application: for example, the data from two radar systems can be combined (merged) to cover a larger detection area. Another application is the fusion of camera and radar data in order to classify objects and increase the detection performance of the sensor system.

Objectives and potentials of sensor data fusion

When selecting sensors for an application, the main criteria besides cost are the completeness, consistency, accuracy and reliability of the recorded data. Using several sensors, together with a clever combination of their output data, makes achieving these goals at least potentially more likely than using a single sensor:

  • The reliability of the overall system is usually increased by using several sensors. On the one hand, the failure of a sensor is easier to detect; on the other hand, it is possible to compensate for the failure. A failed sensor then does not necessarily mean a complete failure of the entire system.
  • When using several sensors - especially if they work according to different measuring principles - the probability of detection increases. Phenomena are then recognized by the overall system even if individual sensors are limited in their perception or "blind" due to environmental conditions.
  • An important goal when linking data from multiple sensors is to increase accuracy. The prerequisite is that the measurement errors of the sensors are uncorrelated and either follow known probability density functions (e.g. a normal distribution) or can be identified and quantified by the system in some other way. Kalman filters are often used here, and the accuracy finally achieved can be determined according to the rules of error propagation.
  • The fields of vision of sensors are usually limited; the use of several sensors increases the field of vision of the overall system accordingly.
  • The resolution of ambiguities is easier when using multiple sensors.
  • Additional sensors often provide additional information and thus expand the knowledge of the overall system.
  • Multiple sensors that detect the same phenomenon in the same field of view effectively increase the measuring rate.
  • Sensor data fusion can also be used to reduce costs. In this case, several sensors that are cheaper overall replace one particularly expensive sensor.
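The accuracy gain from fusing uncorrelated measurements can be sketched with inverse-variance weighting, a minimal form of combining two "competing" measurements of the same quantity; the sensor values and variances below are purely illustrative:

```python
def fuse(x1, var1, x2, var2):
    """Combine two independent measurements of the same quantity
    by inverse-variance weighting."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    x_fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    # By the rules of error propagation, the fused variance is
    # always smaller than the smaller of the two input variances.
    var_fused = 1.0 / (w1 + w2)
    return x_fused, var_fused

# Hypothetical example: two range sensors measuring the same distance.
x, var = fuse(10.2, 0.04, 9.8, 0.09)
```

The more accurate sensor (smaller variance) automatically receives the larger weight, so the fused estimate lies closer to its reading.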

Disadvantages and problems of sensor data fusion

In addition to the advantages mentioned, there are also problems that the use of multiple sensors and the linking of their output data can bring:

  • Higher data rates put a strain on the communication systems. The complexity of communication increases, as does the time required for transmission and processing.
  • The integration of the sensors (for example in a vehicle) becomes more difficult as their number increases. This is due to the space requirement, the limited number of installation locations that are favorable for the measurement, and the necessary communication and supply facilities.
  • Unless several inexpensive sensors replace a single, particularly expensive sensor, the use of multiple sensors increases costs.
  • Since the measurements of the individual sensors typically take place at different times and the sampling intervals often differ as well, the data need to be synchronized. This leads to additional effort in software and hardware (e.g. real-time capable bus systems).
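The synchronization problem can be illustrated with a small sketch: the samples of one sensor are interpolated onto the timestamps of another before fusion. The sensor types, sampling times and values below are hypothetical:

```python
def interpolate(t, times, values):
    """Linearly interpolate a sample at time t from a sorted time series."""
    for i in range(len(times) - 1):
        t0, t1 = times[i], times[i + 1]
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (1 - a) * values[i] + a * values[i + 1]
    raise ValueError("t lies outside the measured interval")

# A radar sampled at 10 Hz; a camera frame at t = 0.04 s needs a
# matching radar value, so we interpolate between 0.0 s and 0.1 s.
radar_times = [0.0, 0.1, 0.2]
radar_ranges = [5.0, 5.2, 5.1]
range_at_frame = interpolate(0.04, radar_times, radar_ranges)
```

Real systems additionally have to account for measurement latency and clock drift between sensors, which is why real-time capable bus systems with synchronized timestamps are used.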

Distinguishing criteria

Approaches to sensor data fusion can be differentiated according to different criteria:


Functionality

Brooks and Iyengar (1997) distinguish four types of sensor data fusion with regard to their function:

  • A complementary fusion aims to increase the completeness of the data. For this purpose, independent sensors observe different viewing areas and phenomena or measure at different times.
  • In a competitive fusion, sensors simultaneously capture the same field of vision and provide data of the same type. The (often weighted) combination of such "competing" data can increase the accuracy of the overall system.
  • Real sensors often cannot provide the desired information on their own; the required information is obtained only by combining the various output data. Such a fusion is called a cooperative fusion.
  • A special case is the independent fusion. Strictly speaking, this is not true sensor data fusion, because the data from the different sensors are not linked with one another, but merely processed in a common system.

In real systems, mixed forms or a combination of different fusion types are typically used (sometimes also referred to as hybrid fusion ).


Levels of sensor data fusion

Hall & Llinas (1997) distinguish three levels of sensor data fusion:

  • In data fusion, the raw sensor data are merged before any further signal processing steps. Example: noise suppression with the help of beamforming.
  • In feature fusion, characteristic features are extracted before merging, and the combined feature vectors are then processed further. Example: audiovisual speech recognition, where acoustic and visual feature vectors (speech sounds and lip movements) are combined in order to achieve acceptable recognition rates even in noisy surroundings or over disturbed channels.
  • In decision fusion, the merging takes place only after all signal processing and pattern recognition steps have been carried out.
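As a minimal sketch of decision fusion, the final class labels of independent sensor channels can be merged by a simple majority vote; the channel names and class labels below are illustrative:

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse final per-channel decisions by choosing the most frequent label."""
    return Counter(decisions).most_common(1)[0][0]

# Each channel (e.g. camera, radar, lidar) has already run its full
# signal processing and pattern recognition chain independently;
# only the resulting labels are combined.
fused = majority_vote(["pedestrian", "pedestrian", "vehicle"])
```

In practice the vote is often weighted by each channel's confidence, which moves the scheme toward the weighted combination used in competitive fusion.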


Fusion architecture

According to Klein (1999), fusion approaches can be distinguished according to the following types of architecture:

  • With sensor level fusion, modules upstream of the actual fusion module process the sensor data and then forward them to the fusion module. An example of such preprocessing is feature extraction. Because of the preprocessing steps, the amount of data to be merged is usually reduced in such an architecture. The fusion of the data then takes place at the "feature fusion" or "decision fusion" level.
  • If only minimally preprocessed sensor data are used as input to the fusion module, one speaks of central level fusion. Further signal processing steps take place only after the fusion, so the level of fusion is typically "data fusion". Compared with the raw sensor data, the amount of data to be merged is hardly reduced here.
  • Mixed forms use elements of both central level fusion and sensor level fusion: preprocessed data and data taken directly from the sensors are merged in parallel. The term hybrid fusion is often used here as well, which invites confusion with the hybrid fusion mentioned under functionality.


Further distinguishing criteria

The literature describes further fusion types, which are only briefly listed here:

  • Sensor networks, both static and dynamic
  • Homogeneous and heterogeneous arrangements, i.e. the use of sensors of the same or different types
  • Algorithm fusions

Tools

The linking of the sensor data usually takes place within computers or control units. There are many algorithms and mathematical processes for merging sensor data from different sources, for example the Kalman filter mentioned above.
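As a minimal sketch of one such process, a one-dimensional Kalman filter fuses a model prediction with each new measurement; the noise values and measurements below are purely illustrative:

```python
def kalman_step(x, p, z, q, r):
    """One predict/update cycle of a 1-D Kalman filter for a
    constant-state model.
    x, p: current state estimate and its variance
    z:    new measurement
    q, r: process noise and measurement noise variances"""
    # Predict: the state is assumed constant, but its uncertainty
    # grows by the process noise.
    p_pred = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x + k * (z - x)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Hypothetical noisy measurements of a quantity near 1.0.
x, p = 0.0, 1.0  # uninformed initial estimate with large variance
for z in [1.2, 0.9, 1.1, 1.0]:
    x, p = kalman_step(x, p, z, q=0.01, r=0.25)
```

With each measurement the estimate's variance p shrinks, illustrating the accuracy gain described under "Objectives and potentials".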

Application examples

Sensor data fusion is used in various technical areas. One area of application is driver assistance systems. Here, data from cameras are validated, for example, against the position information from radar sensors in order to detect objects and obstacles reliably. In some systems, the field of vision of the long-range radar is extended with short-range radars in order to obtain a larger field of vision and to offer additional functions.

The odometry of the European train control system ETCS uses sensor data fusion to compensate for the specific weaknesses of individual sensor types for safe distance measurement.

General literature

  • D. L. Hall, J. Llinas: An introduction to multisensor data fusion. In: Proceedings of the IEEE. Vol. 85, 1997, OCLC 926654310, pp. 6–23.
  • Richard R. Brooks, Sundararaja S. Iyengar: Multi-Sensor Fusion: Fundamentals and Applications with Software. Prentice Hall PTR, 1997, ISBN 0-13-901653-8.
  • Lawrence A. Klein: Sensor Data Fusion. Artech House, 1999, ISBN 0-8194-3231-8.
  • Yaakov Bar-Shalom (Ed.): Multitarget-Multisensor Tracking: Applications and Advances. Volume I, Artech House, 1989, ISBN 0-89006-377-X.
