Multimodal interaction

[Image: Wii Sports]

In computer science, multimodal interaction refers to forms of interaction between people and computers in which several modalities are used for input and output. The development of multimodal applications is a research area of human-computer interaction. Various modalities, i.e. types of input such as speech, gestures, and touchscreens, but also keyboard and mouse, can be used for the interaction. Several modalities can also be used for output: in addition to graphical displays, speech output, sounds, or haptic feedback are possible.
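
As a brief illustration of this idea, the following sketch (in Python, with hypothetical names that are not tied to any real framework) shows an application that accepts the same command through different input modalities and renders its response on several output channels at once.

    from dataclasses import dataclass

    @dataclass
    class InputEvent:
        modality: str   # e.g. "speech", "touch", "keyboard"
        payload: str    # recognized command, e.g. "help"

    class MultimodalApp:
        def __init__(self):
            # Output modalities available in addition to the graphical display.
            self.outputs = {
                "display": lambda text: print(f"[screen]  {text}"),
                "speech":  lambda text: print(f"[spoken]  {text}"),
                "haptic":  lambda text: print(f"[vibrate] {text}"),
            }

        def handle(self, event: InputEvent) -> None:
            # The same command may arrive via any input modality ...
            if event.payload == "help":
                # ... and the answer can be rendered on several output modalities.
                self.respond(f"Help requested via {event.modality}",
                             channels=("display", "speech"))

        def respond(self, message: str, channels=("display",)) -> None:
            for channel in channels:
                self.outputs[channel](message)

    app = MultimodalApp()
    app.handle(InputEvent(modality="speech", payload="help"))
    app.handle(InputEvent(modality="keyboard", payload="help"))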

The aim of developing applications that support multimodal interaction is to make the interaction between humans and computers more natural than with unimodal applications. Often the interaction between people and their real environment serves as the model. An early example of a multimodal application was developed by Richard Bolt. His system allowed the user to select virtual objects with pointing gestures and to issue commands through voice input.
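
The sketch below illustrates, in highly simplified form, the kind of fusion such a system performs: a spoken command containing deictic words like "that" and "there" is combined with pointing gestures recorded at roughly the same time. The names and timing rules are illustrative assumptions and do not reproduce Bolt's original implementation.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class PointingGesture:
        timestamp: float      # seconds since the start of the interaction
        target: tuple         # screen coordinates the user pointed at

    def nearest_gesture(gestures: List[PointingGesture],
                        t: float, window: float = 1.0) -> Optional[PointingGesture]:
        # Pick the pointing gesture closest in time to the spoken word,
        # provided it falls within a small temporal window.
        candidates = [g for g in gestures if abs(g.timestamp - t) <= window]
        return min(candidates, key=lambda g: abs(g.timestamp - t), default=None)

    def fuse(utterance: List[tuple], gestures: List[PointingGesture]):
        # utterance: list of (word, timestamp) pairs from the speech recognizer.
        resolved = []
        for word, t in utterance:
            if word in ("that", "there", "this", "here"):
                gesture = nearest_gesture(gestures, t)
                resolved.append(gesture.target if gesture else word)
            else:
                resolved.append(word)
        return resolved

    # "Put that there", with pointing gestures shortly after "that" and "there".
    words = [("put", 0.0), ("that", 0.4), ("there", 1.2)]
    points = [PointingGesture(0.5, (120, 80)), PointingGesture(1.3, (400, 300))]
    print(fuse(words, points))   # ['put', (120, 80), (400, 300)]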

Literature

  • Richard A. Bolt: “Put-that-there”: Voice and gesture at the graphics interface. In: Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’80), 1980.
  • Sharon Oviatt: Ten myths of multimodal interaction. In: Communications of the ACM, Vol. 42, No. 11, 1999.