Image synthesis

3D scene rendered by various methods

Image synthesis or rendering (from English "to render": to depict or reproduce something) describes the generation of an image from raw data in computer graphics. Raw data can be geometric descriptions in 2D or 3D space (also called a scene), HTML, SVG, etc.

A scene is a virtual spatial model that defines objects and their material properties, light sources, as well as the position and viewing direction of an observer.

Computer programs for rendering images are called renderers. A distinction is made, for example, between the rendering engine of a computer game, an HTML renderer, and so on.

The following tasks usually have to be solved when rendering:

  • the determination of the objects visible from the virtual observer ( occlusion calculation )
  • the simulation of the appearance of surfaces, influenced by their material properties ( shading )
  • the calculation of the light distribution within the scene, which is expressed, among other things, by the indirect lighting between bodies.

In addition, the generation of computer animation requires some further techniques. An important area of application is the interactive synthesis of images in real time, which usually relies on hardware acceleration. In realistic image synthesis, on the other hand, the emphasis is on high image quality or physical correctness, while the required computing time plays a subordinate role.

A rendered image with reflection and depth of field effects

Real-time rendering

With real-time rendering, a series of images is quickly calculated and the underlying scene is changed interactively by the user. The calculation is done quickly enough that the image sequence is perceived as a dynamic process. Interactive use is possible from a frame rate of around 6 fps; at 15 fps one can speak of real time with certainty. On modern computers, real-time rendering is supported by hardware acceleration using graphics cards . With a few exceptions, graphics hardware only supports points, lines and triangles as basic graphic objects .

Graphics pipeline

With real-time rendering, the graphics pipeline describes the path from the scene to the finished image. It is a model concept that can vary depending on the system. The graphics pipeline is often implemented in parts similar to processor pipelines , in which calculations are carried out in parallel. A graphics pipeline can be broken down into three major steps: application, geometry, and rasterization.

The application step applies any changes to the scene made by the user as part of the interaction and passes them on to the next stage of the pipeline. In addition, techniques such as collision detection, animation, morphing, and acceleration methods based on spatial subdivision schemes are used here.

Representation of a scene with a virtual observer and view volume (light gray), delimited by the two clipping planes. During projection the view volume is transformed into a cube, so that closer objects appear larger than more distant ones.

The geometry step performs a large part of the operations on the vertices, the corner points of the basic objects. It can be divided into various sub-steps that successively carry out transformations into different coordinate systems. To simplify the perspective projection, almost all geometric operations in the geometry step work with homogeneous coordinates: points are defined by four coordinates and transformations by 4 × 4 matrices.
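The following minimal sketch (in Python with NumPy; the function names are illustrative and not part of any graphics API) shows how a 3D vertex is extended to homogeneous coordinates, transformed by a 4 × 4 matrix, and projected back by dividing by the fourth component:

```python
import numpy as np

def translation_matrix(tx, ty, tz):
    """4 x 4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def transform_point(matrix, point3d):
    """Apply a 4 x 4 transformation to a 3D point via homogeneous coordinates."""
    p = np.append(point3d, 1.0)   # (x, y, z) -> (x, y, z, 1)
    q = matrix @ p                # 4 x 4 matrix times 4-vector
    return q[:3] / q[3]           # divide by w to return to 3D

vertex = np.array([1.0, 2.0, 3.0])
print(transform_point(translation_matrix(5.0, 0.0, 0.0), vertex))  # [6. 2. 3.]
```

The advantage of this representation is that translations, rotations and perspective projections can all be expressed as matrix multiplications and concatenated into a single matrix.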

First, all basic objects of the scene are transformed so that the virtual observer looks along the z (depth) axis. If the scene contains light sources, a color is calculated for each vertex based on the material properties of the corresponding triangle. The volume of the scene visible to the observer is a truncated pyramid (frustum). In the next step, this frustum is transformed into a cube, which corresponds to a central projection. Basic objects that are partially or completely outside the visible volume are clipped or removed using clipping and culling techniques. Finally, a transformation is applied that moves the vertex coordinates to the desired drawing area on the screen. The z-coordinates are retained because they are needed for the later occlusion calculation.
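One common form of this projection, given here as used by OpenGL (shown only as an illustration; other conventions differ in signs and depth range), is the matrix

$$
P = \begin{pmatrix}
\frac{1}{a \tan(\theta/2)} & 0 & 0 & 0 \\
0 & \frac{1}{\tan(\theta/2)} & 0 & 0 \\
0 & 0 & \frac{f+n}{n-f} & \frac{2fn}{n-f} \\
0 & 0 & -1 & 0
\end{pmatrix},
$$

where $\theta$ is the vertical field of view, $a$ the aspect ratio and $n$, $f$ the distances of the near and far clipping planes. After the division by the fourth homogeneous coordinate, the frustum is mapped to the cube $[-1, 1]^3$.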

In the rasterization step, all remaining projected basic objects are rasterized by coloring the pixels belonging to them. Since only the visible parts of overlapping triangles are to be displayed, a Z-buffer is used, which handles the occlusion calculation.
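The principle of the Z-buffer can be sketched as follows (a minimal illustration in Python; a real rasterizer would additionally interpolate depth and color across each triangle):

```python
import numpy as np

WIDTH, HEIGHT = 640, 480
frame_buffer = np.zeros((HEIGHT, WIDTH, 3))      # RGB color per pixel
z_buffer = np.full((HEIGHT, WIDTH), np.inf)      # depth per pixel, initially "infinitely far"

def write_fragment(x, y, depth, color):
    """Store a rasterized fragment only if it is closer than what is already there."""
    if depth < z_buffer[y, x]:                   # depth test
        z_buffer[y, x] = depth
        frame_buffer[y, x] = color
```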

Graphics APIs

Graphics APIs are usually used to control graphics pipelines; they abstract from the graphics hardware and relieve the programmer of many tasks. The OpenGL standard, originally introduced by Silicon Graphics, has contributed significantly to the development of real-time rendering. The latest innovations of OpenGL and Microsoft's DirectX are mainly used in modern computer games. Besides DirectX and OpenGL there were other approaches, such as Glide, which however could not establish themselves. OpenGL is very important in the professional field, whereas DirectX is heavily optimized for game development. DirectX is proprietary software that is only available on Windows; it is not an open standard.

Historical techniques

See also: History of computer graphics

The first interactive technique for occlusion calculation was published in 1969 by Schumacker and others. Schumacker's algorithm was used for flight simulation for the US armed forces, an application in which massive investments were always made in graphics hardware.

In the early days of computer games with interactive 3D graphics, all the computationally intensive graphics operations were still carried out by the computer's main processor, so only very simple and restricted rendering methods could be used. The first-person shooter Wolfenstein 3D (1992), for example, used ray casting for the occlusion calculation, which could only represent a fixed wall height and rooms adjoining each other at right angles. Doom combined ray casting with two-dimensional binary space partitioning to increase efficiency and render more complex scenes.

Shading and direct lighting

Shading generally refers to the calculation of the color of surfaces from their material properties and the light arriving directly from the light sources. Shading is used in both real-time rendering and realistic rendering. The indirect lighting by other surfaces is initially not taken into account. A special case are non-photorealistic shading techniques (non-photorealistic rendering), in which, for example, distortions are created for aesthetic reasons, such as cel shading for comic-like images.

Light sources and shadows

Different, often physically incorrect types of light sources are common in modeling. Directional lights send parallel beams of light in a specific direction without attenuation, point light sources emit light in all directions, and spot lights emit light only within a cone-shaped area. In reality, light sources have a certain area and the light intensity decreases quadratically with distance. This is taken into account in realistic image synthesis, while in real-time rendering mostly only simple light sources are used.
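For a point light of intensity $I$ at distance $d$ from a surface point, with $\theta$ the angle between the light direction and the surface normal, the received irradiance falls off quadratically:

$$
E = \frac{I}{d^{2}} \max(0, \cos\theta).
$$

Directional lights omit the $1/d^{2}$ attenuation, which is one of the physically incorrect simplifications mentioned above.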

Shadows are an important element of computer graphics because they give the user information about the placement of objects in space. Because light sources have a certain extent, real shadows appear more or less blurred. This is taken into account in realistic rendering processes.

Local lighting models

Light reflection on a Lambertian (ideally diffuse), a less rough (shiny) and a smooth (reflective) surface

Local lighting models describe the behavior of light on surfaces. When a light particle hits a body, it is either reflected, absorbed or, except in the case of metals, refracted into the interior of the body. Incoming light is mirrored only on very smooth surfaces; in the case of non-metallic bodies, the relative proportion of reflected and refracted light is described by Fresnel's formulas.

Microscopic unevenness means that the light is not mirrored but instead reflected, with a certain probability, in a different direction. The probability distribution that describes this behavior for a material is called the bidirectional reflectance distribution function (BRDF). Local lighting models are mostly parameterizable BRDFs. Ideally diffuse surfaces can be simulated, for example, with Lambert's law and shiny surfaces with the Phong lighting model. Real-time rendering often uses a combination of a diffuse, a glossy and a constant factor. Further, physically more plausible models have been developed for realistic image synthesis.
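A minimal sketch of such a combination (Lambert diffuse term, Phong specular term and constant ambient term; written in Python, all names are illustrative):

```python
import numpy as np

def shade(normal, light_dir, view_dir, diffuse, specular, shininess, ambient):
    """Constant + Lambert (diffuse) + Phong (glossy) shading at one surface point.

    All direction vectors are assumed to be normalized and to point away from the surface.
    """
    n_dot_l = max(0.0, np.dot(normal, light_dir))                    # Lambert's cosine law
    reflect = 2.0 * np.dot(normal, light_dir) * normal - light_dir   # mirror direction of the light
    r_dot_v = max(0.0, np.dot(reflect, view_dir))
    return (ambient
            + diffuse * n_dot_l                                      # ideally diffuse term
            + specular * (r_dot_v ** shininess))                     # glossy Phong highlight
```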

The BRDF assumes that the light arriving at one point on the surface also exits exactly there. In reality, non-metallic bodies scatter light in their interior, resulting in a softer appearance. The simulation of this volume scattering is particularly important for realistic image synthesis.

Interpolated shading

A body (more precisely a polyhedron), rendered on the left with flat shading and on the right with Gouraud shading so that it appears as a sphere (or an ellipsoid of revolution). The body is described as a wire-frame model, since only the surface is relevant, not the volume. A true sphere could be described with far fewer parameters (center and radius); instead, surfaces are described by many vertices connected by straight edges. Some models prescribe triangles, others quadrilaterals. Calculations are applied to these vertices. This approach is used not only in image synthesis but also, for example, in numerical simulations (FEM).

In real-time rendering, there are three common ways to calculate the lighting of a triangle. With flat shading, the color is calculated for one triangle and the entire triangle is filled with this color. This makes the facets that make up the model clearly visible. Gouraud shading, supported by most graphics cards, instead determines the color at each corner of a triangle; the rasterizer interpolates between these color values, and the result is a softer appearance than with flat shading. With Phong shading, a normal vector is stored with each vertex. The rasterizer interpolates between these normals, and the local lighting model is evaluated per pixel using the interpolated normal. This procedure avoids some display problems of Gouraud shading.
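The difference between Gouraud and Phong shading lies only in which per-vertex attribute is interpolated across the triangle, as the following sketch illustrates (Python; the vertex values are made-up example data, and shade() stands for a local lighting model such as the one sketched above):

```python
import numpy as np

def interpolate(a0, a1, a2, bary):
    """Barycentric interpolation of a per-vertex attribute inside a triangle."""
    w0, w1, w2 = bary
    return w0 * a0 + w1 * a1 + w2 * a2

# Per-vertex data of one triangle (illustrative values).
colors  = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
normals = [np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0]), np.array([1.0, 0.0, 1.0])]
bary = (0.2, 0.3, 0.5)   # barycentric coordinates of the pixel being rasterized

# Gouraud shading: colors were computed once per vertex, then simply interpolated.
gouraud_color = interpolate(*colors, bary)

# Phong shading: the normal is interpolated and the lighting model is evaluated per pixel.
normal = interpolate(*normals, bary)
normal = normal / np.linalg.norm(normal)
# ... evaluate the local lighting model, e.g. shade(normal, ...), with this normal.
```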

Mapping techniques

Balls with different procedural textures
Bump mapping for simulating surface unevenness, in the middle the bump map used

Normally, local lighting models are applied uniformly to an entire object. Mapping techniques are used to simulate surface details due to color or structure variations. The material or geometry properties are varied at every point on the surface using a function or raster graphic. Many mapping techniques are also supported by graphics hardware. In addition to the procedures listed below, many other mapping techniques have been developed.

  • Texture mapping is the oldest mapping technique and is used to map a two-dimensional image (texture) onto a surface, i.e. to "paste" it onto the surface. In addition to raster graphics, procedural textures are also used, in which the color at a point is determined by a mathematical function. Various filter methods are possible when determining a color value. Mip mapping, in which the texture is available in several image resolutions for reasons of efficiency, is common on graphics hardware. A small sampling sketch follows this list.
  • Bump mapping is used to simulate surface unevenness. The actual normal vectors on the surface are perturbed by a bump map. However, this does not affect the geometry of an object.
  • Displacement mapping is also used to simulate surface unevenness, but in contrast to bump mapping, the surface geometry is actually changed. Since there are usually not enough vertices available for this, additional surface points are inserted that are shifted according to a height field .
  • Environment mapping or reflection mapping is used to simulate mirroring effects during real-time rendering. For this purpose, a ray is sent from the viewer to the reflecting object and reflected there. In contrast to ray tracing (see below), the intersection of the reflected ray with the closest surface is not calculated; instead, the color value is looked up in a precalculated image of the scene based on the direction of the ray.
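The following sketch contrasts a procedural texture with a lookup in a raster texture (Python; nearest-neighbor sampling only, whereas real hardware additionally filters, e.g. bilinearly or via mip maps):

```python
import numpy as np

def checker_texture(u, v, checks=8):
    """Procedural texture: the color is a pure function of the texture coordinates (u, v)."""
    c = (int(u * checks) + int(v * checks)) % 2
    return np.array([c, c, c], dtype=float)      # black/white checkerboard

def sample_nearest(image, u, v):
    """Nearest-neighbor lookup in a raster texture; u and v are expected in [0, 1)."""
    h, w = image.shape[:2]
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return image[y, x]
```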

Realistic rendering and global illumination

How realistic a rendered image looks depends largely on the extent to which the distribution of light within the scene has been calculated. While shading only computes the direct lighting, indirect lighting involves the reflection of light between objects. This makes effects possible such as a room that is lit only through a narrow gap. The light path notation is used to specify the simulation of lighting with respect to the capabilities of a rendering algorithm. If all types of light reflection are taken into account, one speaks of global illumination. It must be taken into account for a realistic result and is not possible, or only possible to a very limited extent, with real-time methods.

Mathematically, global illumination is described by the rendering equation, which uses radiometric quantities to indicate how much light reaches a surface point from another surface point after a reflection. The rendering equation can be computed with ray tracing, and for special cases also with radiosity. In addition to these two major techniques for realistic image synthesis, variants of the REYES system are used, especially in film production.
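In its usual hemispherical form the rendering equation reads

$$
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, \cos\theta_i \,\mathrm{d}\omega_i ,
$$

where $L_o$ is the radiance leaving the surface point $x$ in direction $\omega_o$, $L_e$ the emitted radiance, $f_r$ the BRDF, $L_i$ the radiance arriving from direction $\omega_i$, and $\theta_i$ the angle between $\omega_i$ and the surface normal.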

Ray tracing

Image calculated with ray tracing. Light reflections and refractions are comparatively easy to simulate with ray tracing.

Ray tracing is primarily an algorithm for the occlusion calculation, based on the perspective emission of rays from the observer. Each ray is tested against all basic objects for an intersection and, if necessary, the distance to these objects is calculated. The visible object is the one with the closest distance. In extended forms, ray tracing can also simulate light reflections and refractions.
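A minimal sketch of this visibility test, here with spheres as the only basic object (Python; the helper names are illustrative):

```python
import numpy as np

def intersect_sphere(origin, direction, center, radius):
    """Distance t to the nearest intersection of a ray with a sphere, or None.

    The ray direction is assumed to be normalized.
    """
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                          # ray misses the sphere
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0.0 else None            # only intersections in front of the origin count

def closest_hit(origin, direction, spheres):
    """Test the ray against all objects and keep the one with the smallest distance."""
    best = None
    for center, radius, color in spheres:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, color)
    return best                              # None if nothing is hit
```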

In order to calculate the global illumination using ray tracing, the "light intensity" arriving at each pixel must be determined using the rendering equation. This is done using a Monte Carlo simulation, in which many rays of light are randomly emitted at the surfaces. Such ray tracing techniques are called Monte Carlo ray tracing; the simplest of these methods is path tracing. These algorithms are comparatively time-consuming, but for scenes with complicated lighting conditions and different materials they are the only option. If implemented appropriately, they also provide unbiased images. This means that the image noise is the only deviation from the correct, fully converged solution. Photon mapping is used to accelerate the calculation of the light distribution using ray tracing, but can lead to visible image errors (artifacts).
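The basic idea is to estimate the integral in the rendering equation by random sampling:

$$
L_o(x, \omega_o) \approx L_e(x, \omega_o) + \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(x, \omega_k, \omega_o)\, L_i(x, \omega_k)\, \cos\theta_k}{p(\omega_k)} ,
$$

where the directions $\omega_k$ are drawn at random with probability density $p$. Path tracing typically uses a single sample per reflection and instead traces many paths per pixel; averaging them reduces the remaining image noise.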

Radiosity

A scene rendered with radiosity with directly and indirectly illuminated, ideally diffuse surfaces

In its basic form, the radiosity algorithm can only be used on ideally diffuse surfaces and is based on subdividing the surfaces into small patches. Under these prerequisites, a linear system of equations for the patches can be set up from the rendering equation and solved numerically; radiosity is thus one of the finite element methods. Radiosity can be extended to arbitrary materials, but the precision is limited by the number of patches and the resulting memory requirements. One advantage over ray tracing is that the light distribution is calculated independently of the viewpoint and the occlusion calculation is not part of the actual radiosity algorithm. This makes radiosity particularly suitable for rendering static or moderately animated scenes in real time, provided that a time-consuming precomputation is justifiable.
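For ideally diffuse patches the discretized system has the form

$$
B_i = E_i + \rho_i \sum_{j} F_{ij} B_j ,
$$

where $B_i$ is the radiosity of patch $i$, $E_i$ its emission, $\rho_i$ its diffuse reflectance and $F_{ij}$ the form factor describing the geometric coupling between patches $i$ and $j$. Written as a matrix equation this is a linear system in the unknown radiosities $B_i$, which is solved numerically.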

Volume graphics

Image of a skull rendered using volume graphics

With volume graphics , the objects to be rendered are not described as surfaces, but as spatial data sets in the form of voxel grids . Voxel grids contain values ​​arranged in a grid that describe the "density" of an object. This form of data representation is particularly suitable for objects that do not have clear outlines, such as clouds. Special techniques are required to render voxel grids. Since numerous imaging processes generate voxel data, volume graphics are also important for medicine.
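One common technique for this is volume ray marching, sketched below in Python (a heavily simplified, grayscale front-to-back compositing loop; the grid layout and parameters are illustrative assumptions):

```python
import numpy as np

def ray_march(voxels, origin, direction, step=0.5, max_steps=512):
    """March a ray through a voxel grid and accumulate opacity front to back.

    `voxels` is a 3D array of densities; the grid is assumed to be axis-aligned
    with one voxel per unit of world space, and `direction` to be normalized.
    """
    intensity, alpha = 0.0, 0.0
    pos = origin.astype(float)
    for _ in range(max_steps):
        idx = tuple(int(c) for c in pos)
        if any(i < 0 or i >= s for i, s in zip(idx, voxels.shape)):
            break                               # the ray has left the volume
        density = voxels[idx]
        a = 1.0 - np.exp(-density * step)       # opacity of this ray segment
        intensity += (1.0 - alpha) * a * density
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                        # early ray termination
            break
        pos = pos + direction * step
    return intensity
```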

Literature

  • Tomas Akenine-Möller, Eric Haines: Real-Time Rendering. AK Peters, Natick, Mass. 2002, ISBN 1-56881-182-9
  • Philip Dutré et al.: Advanced Global Illumination. AK Peters, Natick, Mass. 2003, ISBN 1-56881-177-2
  • Andrew Glassner: Principles of Digital Image Synthesis. Morgan Kaufmann, London 1995, ISBN 1-55860-276-3
  • Matt Pharr, Greg Humphreys: Physically Based Rendering. From Theory to Implementation. Morgan Kaufmann, London 2004, ISBN 0-12-553180-X
  • Ian Stephenson: Production Rendering: Design and Implementation. Springer, London 2005, ISBN 1-85233-821-0
  • Alan Watt: 3D Computer Graphics. Addison-Wesley, Harlow 2000, ISBN 0-201-39855-9

Web links

Commons: 3D computer graphics – collection of images, videos and audio files

Individual evidence

  1. Tomas Akenine-Möller, Eric Haines: Real-Time Rendering, p. 1
  2. Tomas Akenine-Möller, Eric Haines: Real-Time Rendering, p. 7
  3. Tomas Akenine-Möller, Eric Haines: Real-Time Rendering, p. 11
  4. Ivan Sutherland et al.: A Characterization of Ten Hidden-Surface Algorithms. ACM Computing Surveys (CSUR) 6, 1 (March 1974): 1–55, here p. 23, ISSN 0360-0300
  5. R. A. Schumacker et al.: Study for Applying Computer-Generated Images to Visual Simulation. AFHRL-TR-69-14. US Air Force Human Resources Laboratory, 1969