Z-buffer


Z-buffering (also known as depth buffering or the depth-buffer method) is a computer graphics technique for hidden-surface determination, that is, for identifying the three-dimensional surfaces in a scene that are visible from the observer's point of view. Using depth information stored in a so-called Z-buffer, the method determines pixel by pixel which elements of a scene have to be drawn and which are occluded. Practically all graphics cards today implement Z-buffering directly in hardware. Edwin Catmull is considered the inventor of the Z-buffer method; however, Wolfgang Straßer described the principle in a different context at about the same time. The most important alternative to Z-buffering is the ray tracing algorithm.

Functionality

The principle of Z-buffering is very simple. In addition to the visible part of the image memory (the frame buffer), which contains the current color values, there is a second memory, the Z-buffer, which contains the depth of the visible object at each pixel. Alternatively, the pixel values in the frame buffer can be extended by a depth value. At the beginning, the entries in the Z-buffer are set to a value that represents an infinite distance (the far-plane distance), and the frame buffer is initialized with the background color. Each polygon is then rasterized. Only if the currently rasterized point of the polygon is closer to the viewer than the point whose distance is stored in the Z-buffer are the values in the Z-buffer and in the frame buffer replaced by the distance and the color of the current polygon, respectively.

Principle of the Z buffer using the example of two intersecting polygons

The order in which the polygons are rasterized is in principle arbitrary. Not only polygons but also arbitrary other graphics primitives can be rendered with the help of the Z-buffer.

The bit depth of the values in the Z-buffer has a great influence on the quality of the rendered image. If two objects are very close together, artifacts easily arise with a Z-buffer of 8 bits per pixel. Z-buffers with a depth of 16, 24 or 32 bits produce fewer artifacts.
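The effect of the bit depth can be illustrated with a small calculation: an n-bit integer Z-buffer can only distinguish 2^n depth values, so two surfaces whose depths fall into the same quantization step become indistinguishable. The following sketch is only an illustration (the helper name quantizeDepth is made up, and real Z-buffers quantize the post-projection depth rather than a plain linear value):

    #include <cstdio>

    // Quantize a normalized depth value in [0, 1] to an n-bit integer,
    // the way an integer Z-buffer with n bits per pixel would store it.
    unsigned quantizeDepth(double depth, int bits) {
        const unsigned maxValue = (1u << bits) - 1u;
        return static_cast<unsigned>(depth * maxValue + 0.5);
    }

    int main() {
        // Two surfaces that are very close together in depth.
        const double zA = 0.5000;
        const double zB = 0.5009;
        const int depths[] = {8, 16, 24};

        for (int bits : depths) {
            unsigned qA = quantizeDepth(zA, bits);
            unsigned qB = quantizeDepth(zB, bits);
            std::printf("%2d bits: %u vs %u -> %s\n", bits, qA, qB,
                        qA == qB ? "indistinguishable (artifacts likely)"
                                 : "distinguishable");
        }
        return 0;
    }

With 8 bits both surfaces quantize to the same value, while 16 or 24 bits still separate them.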

On current graphics cards, the Z-buffer takes up a significant part of the available memory and memory bandwidth, and various methods are used to reduce its influence on performance. One of them is lossless compression of the depth data, since compressing and decompressing the data is cheaper than increasing the memory bandwidth of a card. Another method saves clear operations on the Z-buffer: the depth information is written to the Z-buffer with alternating signs. One frame is stored with a positive sign, the next with a negative sign; only after that does the buffer have to be cleared. A further optimization is pre-sorting the primitives: if the closer primitives are rendered first, it can later be decided directly for the more distant ones which of their pixels are covered by foreground objects and need not be rendered, which saves texturing and pixel-shader work.
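The clear-saving trick can be sketched as follows. This is only one possible way to realize the idea and not taken from a particular implementation (the class and method names are invented): depths normalized to [0, 1] are stored as non-positive values with a less-than test in one frame and as non-negative values with a greater-than test in the next, so a fragment of the current frame always wins against values left over from the previous frame, and the buffer only has to be cleared every other frame.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Sketch of a Z-buffer that halves the number of clear operations by
    // alternating the sign of the stored depth values and the comparison
    // direction. Depths are assumed to be normalized to [0, 1].
    class AlternatingZBuffer {
    public:
        AlternatingZBuffer(int width, int height)
            : width_(width), values_(static_cast<std::size_t>(width) * height, 0.0f) {}

        // Called once per frame. The buffer is cleared only before every
        // second frame; in between, the sign trick guarantees that stale
        // values from the previous frame lose every comparison.
        void beginFrame() {
            evenFrame_ = !evenFrame_;
            if (evenFrame_)
                std::fill(values_.begin(), values_.end(), 0.0f);
        }

        // Returns true if the fragment at (x, y) with normalized depth z
        // is visible, and records it in the buffer if so.
        bool testAndWrite(int x, int y, float z) {
            float& stored = values_[static_cast<std::size_t>(y) * width_ + x];
            // Even frames store z - 1 (<= 0) and pass on "less than";
            // odd frames store 1 - z (>= 0) and pass on "greater than".
            // A value written in the previous frame lies on the other side
            // of zero and therefore always loses.
            const float encoded = evenFrame_ ? (z - 1.0f) : (1.0f - z);
            const bool visible  = evenFrame_ ? (encoded < stored) : (encoded > stored);
            if (visible) stored = encoded;
            return visible;
        }

    private:
        int width_;
        std::vector<float> values_;
        bool evenFrame_ = false;  // flipped by the first beginFrame() call
    };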

Algorithm

Pseudocode
Initialization:

          initialize zBuffer with ∞
          initialize FrameBuffer with the background color

Start:

          For each pixel (x, y) of each polygon
              determine the color c and the z-coordinate z
              IF z < zBuffer(x,y)
                   zBuffer(x,y) = z
                   FrameBuffer(x,y) = c
              FI
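
Translated into executable code, the algorithm could look like the following minimal sketch (all names such as Renderer, plot and drawRect are made up for the illustration; the rasterization of real polygons is reduced to axis-aligned rectangles at constant depth to keep the example short):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <limits>
    #include <vector>

    // Minimal software Z-buffer following the pseudocode above.
    struct Renderer {
        int width, height;
        std::vector<float>         zBuffer;      // depth per pixel
        std::vector<std::uint32_t> frameBuffer;  // color per pixel

        Renderer(int w, int h, std::uint32_t background)
            : width(w), height(h),
              zBuffer(static_cast<std::size_t>(w) * h,
                      std::numeric_limits<float>::infinity()),
              frameBuffer(static_cast<std::size_t>(w) * h, background) {}

        // Core of the algorithm: write the fragment only if it is closer
        // than what the Z-buffer already contains at this pixel.
        void plot(int x, int y, float z, std::uint32_t color) {
            std::size_t i = static_cast<std::size_t>(y) * width + x;
            if (z < zBuffer[i]) {
                zBuffer[i]     = z;
                frameBuffer[i] = color;
            }
        }

        // "Rasterize" a rectangle at constant depth.
        void drawRect(int x0, int y0, int x1, int y1, float z, std::uint32_t color) {
            for (int y = y0; y < y1; ++y)
                for (int x = x0; x < x1; ++x)
                    plot(x, y, z, color);
        }
    };

    int main() {
        Renderer r(8, 4, /*background=*/0);
        r.drawRect(0, 0, 6, 4, /*z=*/5.0f, /*color=*/1);  // far rectangle
        r.drawRect(3, 0, 8, 4, /*z=*/2.0f, /*color=*/2);  // near rectangle
        // The drawing order does not matter: where the rectangles overlap,
        // the nearer one wins the depth test.
        for (int y = 0; y < 4; ++y) {
            for (int x = 0; x < 8; ++x)
                std::printf("%u", static_cast<unsigned>(
                    r.frameBuffer[static_cast<std::size_t>(y) * 8 + x]));
            std::printf("\n");
        }
        return 0;
    }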

Encoding of the depth information

A computer-generated image (above) and the content of the associated Z-buffer (below)
Z-fighting between two polygons

The range of depth values in camera space that is to be rendered is often defined by the near and far values of $z$. After a perspective transformation, the new value of $z$, referred to here as $z'$, is calculated as follows:

$$z' = \frac{\mathrm{far} + \mathrm{near}}{\mathrm{far} - \mathrm{near}} + \frac{1}{z} \left( \frac{-2 \cdot \mathrm{far} \cdot \mathrm{near}}{\mathrm{far} - \mathrm{near}} \right)$$

Here $z$ is the original value in camera space. The abbreviations $f = \mathrm{far}$ and $n = \mathrm{near}$ are sometimes used as well.

The resulting values of $z'$ are normalized to the range between −1 and 1, where the near plane is mapped to −1 and the far plane to 1. Values outside this range belong to points that lie outside the viewing volume and should not be rendered.

When implementing a Z-buffer, the $z'$ values of a polygon's vertices are interpolated linearly across its surface, and the resulting values, including the intermediate ones, are stored in the Z-buffer. The values of $z'$ are spaced much more densely near the near plane and much more sparsely toward the far plane, which gives higher accuracy close to the camera position. The closer the near plane is placed to the camera, the lower the precision in the far range. A common cause of unwanted artifacts in distant objects is therefore a near plane that was set too close to the camera. These artifacts, known as Z-fighting (depth fighting), occur in particular when two coplanar surfaces lie very close to one another, for example a wall and a poster attached to it. Which of the two polygons then appears in the foreground is essentially random and can change even with small changes of the camera position. To remedy this, the programmer must take explicit measures, for example by artificially offsetting the Z values of the poster or by using a so-called stencil buffer.
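The uneven distribution of $z'$ can be made visible by evaluating the formula above for camera-space depths spaced evenly between near and far. The following sketch is purely illustrative (the function name is made up); it shows that most of the [−1, 1] range is consumed by depths close to the near plane:

    #include <cstdio>

    // z' after the perspective transformation described above,
    // mapping z = near to -1 and z = far to +1.
    double perspectiveDepth(double z, double near, double far) {
        return (far + near) / (far - near)
             + (1.0 / z) * (-2.0 * far * near / (far - near));
    }

    int main() {
        const double near = 1.0, far = 100.0;
        // Evenly spaced camera-space depths ...
        for (double z = near; z <= far; z += 11.0) {
            // ... map to z' values that crowd together toward the far plane.
            std::printf("z = %6.1f  ->  z' = %+.4f\n", z, perspectiveDepth(z, near, far));
        }
        return 0;
    }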

Since the distance values are not stored uniformly in the Z-buffer, nearby objects are represented better than distant ones because their values are stored more precisely. This effect is generally desirable, but it can lead to obvious artifacts when objects move further away from the camera. A variant of Z-buffering with more evenly distributed distance values is so-called W-buffering. To implement a W-buffer, the unchanged values of $z$ (or of $w$) are stored in the buffer, generally as floating-point numbers. These values cannot be interpolated linearly across a polygon; instead they must be inverted, interpolated and then inverted again. In contrast to $z'$, the resulting values are evenly distributed between near and far. Whether a Z-buffer or a W-buffer leads to better images depends on the respective application.
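The invert-interpolate-invert step can be sketched as follows (a minimal illustration with made-up names; t is the usual interpolation parameter between two vertices in screen space):

    #include <cstdio>

    // Perspective-correct interpolation of a depth value between two
    // vertices: the reciprocals are interpolated linearly in screen space
    // and the result is inverted again (invert, interpolate, invert).
    double interpolateDepth(double w0, double w1, double t) {
        double inverse = (1.0 - t) * (1.0 / w0) + t * (1.0 / w1);
        return 1.0 / inverse;
    }

    int main() {
        // Halfway across the primitive on screen (t = 0.5) the correct depth
        // is not the arithmetic mean of 2 and 8 but their harmonic mean, 3.2.
        std::printf("%.2f\n", interpolateDepth(2.0, 8.0, 0.5));
        return 0;
    }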

Advantages and disadvantages

  • Advantages
    • easy implementation (both in software and directly in hardware)
    • no pre-sorting of the input necessary
    • very fast
    • simple parallelization possible (e.g. subdivision into smaller quadrants)
    • no random access to the scene required
  • Disadvantages
    • every polygon of the input is rendered
    • the running time increases linearly with the input size
    • no relationships between the objects are exploited

See also

  • Culling, for determining the visibility of objects/surfaces

Literature

  • Tomas Akenine-Möller, Eric Haines: Real-Time Rendering. AK Peters, Natick, Mass. 2002, ISBN 1-56881-182-9
  • James D. Foley et al.: Computer Graphics: Principles and Practice. Addison-Wesley, Reading 1995, ISBN 0-201-84840-6
  • David F. Rogers: Procedural Elements for Computer Graphics. WCB/McGraw-Hill, Boston 1998, ISBN 0-07-053548-5
  • Alan Watt: 3D Computer Graphics. Addison-Wesley, Harlow 2000, ISBN 0-201-39855-9

References

  1. Hans-Joachim Bungartz et al.: Introduction to Computer Graphics: Fundamentals, Geometric Modeling, Algorithms, p. 128. Vieweg, Braunschweig 2002, ISBN 3-528-16769-6
  2. Michael Bender, Manfred Brill: Computer graphics: an application-oriented textbook, p. 67. Hanser, Munich 2006, ISBN 3-446-40434-1
  3. Edwin Catmull: A Subdivision Algorithm for Computer Display of Curved Surfaces. Dissertation, Report UTEC-CSc-74-133, Computer Science Department, University of Utah, Salt Lake City 1974
  4. Wolfgang Straßer: Fast curve and surface display on graphic display devices. Dissertation, TU Berlin 1974
  5. W. K. Giloi: Computer Graphics Pioneers: the Giloi's School of Computer Graphics - Starting Computer Graphics in Germany. ACM SIGGRAPH Computer Graphics 35, 4 (Nov. 2001): 12-13, ISSN 0097-8930