Texture mapping

[Image: texture mapping of the faces of a cube]

The term texture mapping describes a process in 3D computer graphics. It is used to equip the surfaces of three-dimensional surface models with two-dimensional images ("textures") and with surface properties. Textures make computer-generated images appear more detailed and realistic without the underlying model itself having to be refined.

Texture coordinates

In addition to its position in xyz space, each vertex of a 3D object can be assigned a texture coordinate in uvw space. The texture coordinates (also called uv or uvw coordinates) define how a texture (a bitmap or a mathematical texture) is mapped onto a polygon. If a two-dimensional bitmap texture is used, as is common in computer games, only the u and v coordinates are required to determine which part of the image is mapped onto the polygon. For mathematical textures such as 3D noise, or for volumetric textures, the w coordinate is often required as well.

The uv coordinate (0,0) corresponds to the lower left corner of the texture and the uv coordinate (1,1) to the upper right corner. UV values greater than 1 or less than 0 are possible and lead to repetition effects at the edges of the texture. How these are handled can usually be configured; two possibilities are repetition (wrapping) and mirroring. A texture can thus be tiled across a polygon by defining texture coordinates beyond this range.
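The following is a minimal Python sketch of the two addressing modes just mentioned, for texture coordinates outside [0, 1]; the function names are illustrative and not taken from any particular graphics API.

    def wrap_repeat(t: float) -> float:
        """Tile the texture: 1.3 -> 0.3, -0.25 -> 0.75."""
        return t % 1.0

    def wrap_mirror(t: float) -> float:
        """Mirror the texture at every integer boundary: 1.3 -> 0.7."""
        t = abs(t) % 2.0
        return 2.0 - t if t > 1.0 else t

    for t in (-0.25, 0.3, 1.3, 2.6):
        print(t, wrap_repeat(t), wrap_mirror(t))

In Python, the % operator already returns a non-negative result for negative inputs, which is exactly the behavior wrapping requires.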

It is also possible to assign multiple texture coordinates to a vertex; one then speaks of multiple mapping channels. In this way, several images or image sections can be displayed superimposed on a polygon.

In 3D models with many polygons, a single texture is often used for the whole model, so that each vertex has only one set of texture coordinates (rather than different texture coordinates for the different polygons sharing that vertex). This format is particularly favorable for hardware-accelerated 3D graphics and also for the designer of the 3D model.

In the simplest variant of texture mapping, the texture coordinates are linearly interpolated along the edges of the polygon, which has already been transformed from 3D into 2D screen space. They are then linearly interpolated along each screen line (or column) from edge to edge, and each pixel takes on the color value of the texel (image point in the texture) belonging to the interpolated (u, v) coordinates.
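A minimal Python sketch of this affine (not yet perspective-corrected) scanline interpolation follows; the texture layout as a 2D list of grayscale values and the nearest-texel lookup are illustrative assumptions, and u, v are assumed to lie in [0, 1].

    def affine_span(x0, u0, v0, x1, u1, v1, texture):
        """Interpolate (u, v) linearly from the left edge (x0) to
        the right edge (x1) of one screen line."""
        h, w = len(texture), len(texture[0])
        for x in range(x0, x1 + 1):
            t = (x - x0) / max(x1 - x0, 1)
            u = u0 + (u1 - u0) * t
            v = v0 + (v1 - v0) * t
            # point sampling: pick the nearest texel
            tx = min(int(u * w), w - 1)
            ty = min(int(v * h), h - 1)
            yield x, texture[ty][tx]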

Perspective correction

For polygons that extend far in the viewing direction, the method described above leads to visually unsatisfactory results, because the texture coordinates are interpolated only after the projection. It therefore ignores the fact that a line segment in the more distant part of the projected polygon corresponds to a longer stretch of the original polygon in 3D space than a segment of equal screen length in the nearer part of the polygon. As a result, the assignment of texture coordinates to points in three-dimensional space changes when the perspective changes.

To solve this problem, instead of the texture coordinates u and v, the values u/z and v/z as well as 1/z are linearly interpolated, where z is the coordinate in 3D space along the viewing direction (z or 1/z must therefore be stored for each projected vertex of the polygon). To obtain the texture coordinates of a pixel, divisions must then be carried out:

u = (u/z) / (1/z)

v = (v/z) / (1/z)

Because divisions are relatively slow operations, they are usually not carried out for every pixel; instead, u and v are computed in this way only for a few pixels that are evenly distributed over the polygon. For all other pixels, the values of u and v are interpolated between those of these pixels. In this way, the disruptive effects can be greatly reduced without expending too much computing power.
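The following Python sketch shows the exact per-pixel version of this scheme for one screen line, corresponding directly to the formulas above; the amortization of the divisions over several pixels is omitted for clarity.

    def perspective_span(x0, u0, v0, z0, x1, u1, v1, z1):
        """Interpolate u/z, v/z and 1/z linearly across a screen
        line and recover (u, v) by division for each pixel."""
        uz0, vz0, iz0 = u0 / z0, v0 / z0, 1.0 / z0
        uz1, vz1, iz1 = u1 / z1, v1 / z1, 1.0 / z1
        n = max(x1 - x0, 1)
        for x in range(x0, x1 + 1):
            t = (x - x0) / n
            uz = uz0 + (uz1 - uz0) * t
            vz = vz0 + (vz1 - vz0) * t
            iz = iz0 + (iz1 - iz0) * t
            # u = (u/z) / (1/z),  v = (v/z) / (1/z)
            yield x, uz / iz, vz / iz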

Texture interpolation

For simplicity, the methods described so far assume that each pixel can be assigned exactly one texel. However, if both pixels and texels are viewed as points without extent, this is generally not the case. Rather, the texture coordinates of a pixel usually lie between several texels. It must therefore be decided how the color value for the pixel is obtained from the color values of the surrounding texels: a suitable scaling method is required.

The simplest and fastest scaling method is to select the closest texel. This procedure is called nearest neighbor or point sampling. In the more complex bilinear filtering, the color value sought is interpolated from the four surrounding texels according to their distance. Even more complex filters, such as the Gaussian filter, include additional texels in the calculation or weight the distances differently. Since unsuitable interpolation methods lead to undesired aliasing, for example moiré effects, a compromise must be found between speed and artifact formation.
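A minimal Python sketch of bilinear filtering follows; a grayscale texture stored as a 2D list and the half-texel offset convention are illustrative assumptions.

    import math

    def bilinear(texture, u, v):
        h, w = len(texture), len(texture[0])
        # position in texel space, clamped to the valid range
        x = min(max(u * w - 0.5, 0.0), w - 1.0)
        y = min(max(v * h - 0.5, 0.0), h - 1.0)
        x0, y0 = int(math.floor(x)), int(math.floor(y))
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0
        # blend horizontally, then vertically
        top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
        bot = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
        return top * (1 - fy) + bot * fy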

MIP mapping

These techniques are sufficient as long as the grid spacing of the pixels is smaller than that of the texels, i.e. at most one texel corresponds to any pixel. If, however, the grid spacing of the pixels is greater than that of the texels, one pixel covers a whole area of the texture. Although it is not difficult to form the color value as the mean of all covered texels, this is very expensive - many arithmetic operations have to be carried out for a single pixel - and therefore not practical.

Instead, MIP maps are used. In addition to the original texture, these contain copies of the texture at decreasing sizes, so-called levels of detail (LOD). From these, the highest level of detail is selected that restores the usual situation of "pixel smaller than texel", and it is processed like the original texture. In addition to the previous interpolation methods, there is the option of a further linear interpolation between two successive levels of detail; combined with bilinear filtering, this yields trilinear filtering. Using MIP maps together with point sampling already greatly reduces aliasing effects; together with more complex filters and interpolation between the levels of detail, they can be reduced to a minimum.
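The following Python sketch shows one common way to select the MIP level and blend two adjacent levels (trilinear filtering). The bilinear() helper from the previous sketch is assumed; footprint, the texel-space extent of one pixel, is an illustrative input that a real renderer would derive from the texture-coordinate gradients.

    import math

    def trilinear(mip_levels, u, v, footprint):
        # mip_levels[0] is the full-resolution texture, each further
        # level half the size; the LOD is the log2 of how many texels
        # one pixel covers
        lod = max(math.log2(max(footprint, 1.0)), 0.0)
        lo = min(int(lod), len(mip_levels) - 1)
        hi = min(lo + 1, len(mip_levels) - 1)
        f = lod - lo
        # bilinear sample in two adjacent levels, then blend linearly
        c0 = bilinear(mip_levels[lo], u, v)
        c1 = bilinear(mip_levels[hi], u, v)
        return c0 * (1 - f) + c1 * f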

Anisotropic filtering

The methods described above treat pixels and texels as points, that is, as zero-dimensional objects without extent. Instead, they can also be viewed as small squares. It must then be taken into account that a pixel projected onto the texture does not cover a square area there, but an area stretched in one direction whenever the polygon being viewed extends in the viewing direction. If this different extent in different directions (anisotropy) of the pixel in texture space is taken into account during filtering, one speaks of anisotropic filtering.
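A minimal Python sketch of this idea follows: several filtered samples are taken along the major axis of the pixel's stretched footprint in texture space and averaged. The per-pixel footprint axis (major_du, major_dv) and the fixed number of taps are illustrative assumptions, and the bilinear() helper from the earlier sketch is assumed.

    def anisotropic(texture, u, v, major_du, major_dv, taps=4):
        """Average several filtered samples distributed along the
        major axis of the pixel footprint."""
        total = 0.0
        for i in range(taps):
            # sample positions symmetric around (u, v)
            t = (i + 0.5) / taps - 0.5
            total += bilinear(texture, u + t * major_du, v + t * major_dv)
        return total / taps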

Special procedures

There are several methods of making a surface covered by a texture appear three-dimensional:

  • With bump mapping , the lighting calculation is performed with a normal vector that is varied over the surface (see the sketch after this list).
  • With displacement mapping , additional polygons are created with the information from the texture.
  • With environment mapping , the texture is used to simulate a reflection.
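To make the bump mapping item concrete, here is a minimal Python sketch of the underlying idea: the surface normal is perturbed per pixel by gradients read from a texture before a simple diffuse lighting calculation. The names and the gradient-based perturbation are illustrative assumptions, not a specific renderer's method.

    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def bump_lambert(normal, du, dv, light_dir):
        """Tilt the normal by height-map gradients (du, dv) read from
        the bump texture, then evaluate diffuse (Lambert) lighting."""
        n = normalize((normal[0] - du, normal[1] - dv, normal[2]))
        l = normalize(light_dir)
        return max(n[0]*l[0] + n[1]*l[1] + n[2]*l[2], 0.0)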

Web links

Commons: Textures - collection of images, videos, and audio files