Natural user interface

Natural user interfaces (NUI), also called "reality-based user interfaces", enable the user to interact directly with the user interface by swiping, tapping, touching, gestures or speech. Natural user interfaces such as touchscreens are touch-sensitive and react to finger and hand movements; in this context one speaks of gesture-based operation.

With the development of touchscreens, the established operating patterns of graphical user interfaces (GUI) have changed significantly. Whereas artificial input devices such as a keyboard or mouse used to be necessary for interaction, a touch of the finger is now sufficient. Many smartphones and tablets, but also ticket machines, ATMs and other devices use this direct form of operation.

Since touching and manipulating virtual objects works in much the same way as with real objects, it is easy for users to transfer everyday actions to the digital system. Acting in the real, everyday environment lets users draw parallels to the virtual objects and carry familiar ways of acting over to them; existing knowledge structures are activated and prior knowledge is applied. The move away from input devices such as the mouse and towards multi-touch brings the real and the virtual world closer together: objects are no longer manipulated by issuing commands to the computer, but are taken into the user's own hands. This approach is called 'Reality-Based Interaction' (RBI) and serves as the basis for the design of multi-touch applications.

Various predefined interaction options, so-called 'patterns', such as scaling, moving and rotating images or scrolling through information, allow the user to interact with the device and its software directly via the interface.
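
To illustrate such a pattern, the following minimal sketch shows how a two-finger pinch gesture can be reduced to comparing the vector between the two touch points at the start of the gesture with the current one. The function name and inputs are assumptions made for this example, not part of any particular framework.

```python
import math

# Minimal sketch of a two-finger "pinch" pattern: scale factor and rotation
# angle follow from comparing the vector between the two touch points at the
# start of the gesture with the current vector between them.

def pinch_transform(start_points, current_points):
    """start_points / current_points: ((x1, y1), (x2, y2)) touch positions."""
    (sx1, sy1), (sx2, sy2) = start_points
    (cx1, cy1), (cx2, cy2) = current_points

    start_vec = (sx2 - sx1, sy2 - sy1)
    current_vec = (cx2 - cx1, cy2 - cy1)

    # Ratio of finger distances gives the scale factor for the object.
    scale = math.hypot(*current_vec) / math.hypot(*start_vec)
    # Change in the angle of the connecting line gives the rotation.
    rotation = (math.atan2(current_vec[1], current_vec[0])
                - math.atan2(start_vec[1], start_vec[0]))
    return scale, math.degrees(rotation)

# Fingers move apart and the hand turns slightly:
print(pinch_transform(((0, 0), (100, 0)), ((0, 0), (130, 30))))
```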

NUIs enable people to handle interactions in a much more natural way and extend the previously limited, artificial handling of technical interfaces.

Development

The first attempts to develop touch-sensitive input devices date back to the 1940s: between 1945 and 1948 the Canadian scientist Hugh Le Caine developed the first voltage-controlled synthesizer, which had touch-sensitive keys with which, for example, the timbre and frequency of the synthesizer could be set.

From the mid-1960s to 1971, various touchscreen technologies were developed by, among others, IBM and the University of Illinois, for example 'PLATO IV', a touchscreen terminal from 1972 that worked with a forerunner of today's optical infrared technology.

In 1982 Nimish Mehta developed the first multi-touch system at the University of Toronto. The so-called 'Flexible Machine Interface' allowed the user to draw simple graphics by pressing on the screen with a finger.

In 1990 the 'Sensor Cube' was developed in cooperation with NASA as the successor to Carnegie Mellon University's 'Sensor Frame' of 1985. Its optical system is able to recognize the angle of the finger relative to the touchscreen.

The 'Digital Desk', developed in 1991 by Pierre Wellner at Rank Xerox EuroPARC, was the first to use interactions, such as the scaling of objects, that are carried out with two fingers.

In 1994 the first mobile phone with a touchscreen appeared on the market. The 'Simon', developed by IBM and BellSouth, can be considered an early precursor of the iPhone and other smartphones of today.

In 1995 the Input Research Group of the University of Toronto presented a 'Tangible Interface' that differentiates between different objects and recognizes their position and rotation on a display. Real physical objects can thus be used to move and manipulate graphical objects on the display.

The 'DiamondTouch' touchscreen, developed by Mitsubishi Electric Research Laboratories in 2001, recognizes simultaneous screen touches by several people and can distinguish their location and pressure.

At the TED Conference 2006 in Monterey, California, Jeff Han presented a multi-touch screen with various functions such as moving and rotating objects, changing colors depending on the pressure of the finger, zooming and sorting images and many other applications.

In 2007 Apple presented the best-known example of a multi-touch device to date, the iPhone. Via its multi-touch display the user can write e-mails and text messages and navigate through the appointment calendar, music and pictures.

In the same year Microsoft presented the interactive multi-touch table MS Surface. The user can interact with digital content and objects on the table surface by means of hand movements and touches.

In 2009 Microsoft launched Windows 7, an operating system whose integrated "Windows Touch" function enables multi-finger input.

Technology

Regardless of the technology used to register a point of contact ("touch event"), all systems are based on three components: sensors, comparators and actuators. Sensors register changes in the system and, through their sensitivity and range, determine the usable interactions of a multi-touch screen. The task of the comparators is to carry out a state comparison: the state of the system after the interaction is compared with the state before the interaction, and the comparator decides what effect the interaction has. This result is passed on to the actuators and carried out as an actual action. Comparators and actuators are implemented in software; a simplified sketch of this pipeline follows the list of sensor technologies below. The sensor technologies can be divided into the following categories:

Resistive technology: A resistive touchscreen works with two foils, each of which has a conductive coating. Under pressure the two foils touch each other and electrical contact is established. The coordinates of the contact point can be determined from the voltage drop within the resistive matrix.

Capacitive technology: Capacitive touchscreens use a pane of glass with a conductive metallic coating, over which a conductive polyester film is attached. Touching the surface changes the electric field; the current drawn off is proportional to the distance between the points where the voltage is applied and the contact point, so the coordinates of the contact point can be calculated.

Surface wave technology: The touch events are detected using ultrasound and acoustic pulse detection.

Optical systems: The touch events are detected using infrared light and cameras.
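
The following is a minimal, illustrative sketch of the sensor/comparator/actuator pipeline described above, assuming a hypothetical sensor callback that returns the currently detected touch points; the class and function names are inventions for this example, not part of any specific multi-touch product.

```python
# Illustrative sketch of the sensor -> comparator -> actuator pipeline
# described above. All names are assumptions made for this example and
# do not belong to any specific multi-touch product.

class Comparator:
    """Compares the system state before and after an interaction."""

    def __init__(self):
        self.previous_points = []

    def compare(self, current_points):
        appeared = [p for p in current_points if p not in self.previous_points]
        disappeared = [p for p in self.previous_points if p not in current_points]
        self.previous_points = list(current_points)
        return appeared, disappeared


class Actuator:
    """Carries out the effect the comparator decided on (here it only prints)."""

    def apply(self, appeared, disappeared):
        for point in appeared:
            print(f"touch down at {point}")
        for point in disappeared:
            print(f"touch up at {point}")


def run_frame(read_sensor, comparator, actuator):
    """One pass of the pipeline: read the sensor, compare states, act."""
    current_points = read_sensor()              # sensor: registers touch events
    appeared, disappeared = comparator.compare(current_points)  # state comparison
    actuator.apply(appeared, disappeared)       # effect is carried out as an action


# Example usage with a fake sensor that reports one fixed touch point.
comparator, actuator = Comparator(), Actuator()
run_frame(lambda: [(120, 80)], comparator, actuator)  # prints "touch down at (120, 80)"
run_frame(lambda: [], comparator, actuator)           # prints "touch up at (120, 80)"
```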

Current examples of natural user interfaces

Apple iOS

iOS is an operating system used on the Apple products iPhone, iPad and iPod touch, all of which are multi-touch devices. The first to appear was the iPhone. Touches on the screen are registered by a capacitive system, enabling scrolling through picture and music collections with Cover Flow or through the user's own address book, as well as zooming in on maps and websites. In addition to an on-screen keyboard that adapts to the application in use and thus saves space on the multi-touch screen, the iPhone has a motion sensor that detects the orientation of the phone and aligns the screen content accordingly. Further sensors register the ambient light conditions and adjust the display brightness accordingly.

Microsoft PixelSense

The multi-touch table Microsoft PixelSense (formerly Surface) recognizes gestures as well as objects that touch its surface and can interact with them. With the simultaneous detection of more than 52 points of contact and its 360-degree user interface, PixelSense can also be used by larger groups. PixelSense consists of a PC, an optical system, a projector and a sturdy tabletop. The projector, aimed at the Plexiglas surface from below, projects the image, while infrared LEDs illuminate the surface evenly. Five infrared cameras register touches on the screen by detecting the light reflected at the surface.
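
How an optical system of this kind can turn a camera image into touch points may be illustrated by a simple blob-detection sketch: pixels brighter than a threshold (reflected infrared light) are grouped into connected regions whose centroids are reported as touch points. This is a generic illustration under those assumptions, not Microsoft's actual implementation.

```python
# Generic illustration (not Microsoft's implementation): turning one infrared
# camera frame into touch points. Pixels brighter than a threshold are grouped
# into connected regions; each region's centroid is reported as a touch point.

def find_touch_points(frame, threshold=200):
    """frame: 2D list of brightness values (0-255). Returns (x, y) centroids."""
    rows, cols = len(frame), len(frame[0])
    visited = [[False] * cols for _ in range(rows)]
    touch_points = []

    for y in range(rows):
        for x in range(cols):
            if frame[y][x] >= threshold and not visited[y][x]:
                # Flood fill collects all bright pixels of this blob.
                stack, blob = [(y, x)], []
                visited[y][x] = True
                while stack:
                    py, px = stack.pop()
                    blob.append((py, px))
                    for ny, nx in ((py + 1, px), (py - 1, px), (py, px + 1), (py, px - 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not visited[ny][nx]
                                and frame[ny][nx] >= threshold):
                            visited[ny][nx] = True
                            stack.append((ny, nx))
                # The centroid of the blob is one candidate touch point.
                touch_points.append((sum(p[1] for p in blob) / len(blob),
                                     sum(p[0] for p in blob) / len(blob)))
    return touch_points
```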

Microsoft Kinect

The Kinect was originally developed as a controller for the Xbox 360. It has a PrimeSense depth sensor, a 3D microphone and a color camera, and thus enables operation via voice and gesture control. An SDK and drivers are available for Windows as well as for Linux and Mac.

Research

In recent years there has been an increasing number of scientific studies on interaction with multi-touch screens and on improving the existing technology. Researchers such as Hrvoje Benko of Columbia University and Microsoft Research or Tomer Moscovich of Brown University work on topics such as the precise selection and recognition of objects and points of contact on touchscreens.

A recent study by Stuttgart Media University and User Interface Design GmbH confirms the intuitive nature of gesture-based operation. Its results show that interaction with multi-touch devices such as PixelSense is easy to understand for young and older users alike and hardly causes any problems.

Literature

  • Rainer Dorau: Emotional interaction design: gestures and facial expressions of interactive systems, Springer 2011, ISBN 978-3-642-03100-7
