Light field


The light field is a function that describes the amount of light traveling in every direction through every point in three-dimensional space.

History

Michael Faraday, in his 1846 lecture Thoughts on Ray Vibrations, was the first to propose that light can be interpreted as a field, similar to the magnetic field on which Faraday had been working for several years. The term light field was coined by Andrey Gershun in a publication on the radiometric properties of light in three-dimensional space. In the more recent history of computer graphics, the term has acquired a slightly different meaning.

The plenoptic function

The radiance L along a ray can be thought of as the amount of light traveling along all possible straight lines through a tube whose size is determined by its solid angle and its cross-sectional area.

In geometric optics, light is described by light rays (the concept of geometric optics rests, among other things, on the assumptions of incoherent radiation and of object sizes well above the wavelength of light). The measure of the amount of light transported along these rays is the radiance. The radiance is denoted by L and is measured in watts (W) per steradian (sr) per square meter (m²). The steradian measures the solid angle, and the square meter measures the cross-sectional area.

Parameterization of a ray in three-dimensional space by position (x, y, z) and orientation (θ, φ).

The distribution of radiance along light rays in a region of three-dimensional space illuminated by time-invariant light sources is called the plenoptic function. The plenoptic function is an idealized function used in image processing and computer graphics to describe the image seen from any position, at any angle, at any point in time. It is not used in practice, but it is helpful for understanding various other concepts in image processing and computer graphics. Since straight rays can be described by a position in three spatial coordinates (x, y and z) and two angles (θ and φ), the plenoptic function is five-dimensional. (Wavelength, polarization angle and time can be treated as additional variables, yielding a higher-dimensional function.)
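Written out with the symbols used above, the plenoptic function can be stated as follows (a notational sketch; the function name P is an assumption, since the article itself does not fix one):

```latex
% 5-D plenoptic function: radiance L along the ray through
% position (x, y, z) with direction (theta, phi)
L = P(x, y, z, \theta, \phi)

% extension with wavelength lambda, polarization angle p and time t
L = P(x, y, z, \theta, \phi, \lambda, p, t)
```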

The summation of the radiance vectors emitted by two light sources produces a resultant vector with the magnitude and direction shown (reproduced from Gershun's publication).

Like Adelson (1991), Gershun (1936) defined the light field at every point in space as a five-dimensional function. However, he treated it as an infinitely large collection of vectors, one for each direction incident on the point, with lengths proportional to the radiance. Integrating these vectors over the entire sphere of possible directions at a point yields a single scalar value, the irradiance. The diagram reproduced from Gershun's publication shows this calculation for the case of two light sources. In computer graphics, this vector-valued function on three-dimensional space is also referred to as the irradiance vector field. The direction of the vector at a point in the light field can be interpreted as the normal of the surface orientation for which the local irradiance at that point is maximal.
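In formulas, this construction can be sketched as follows (a sketch in standard radiometric notation, not Gershun's original symbols; the name \vec{D} for the light vector is an assumption):

```latex
% Light (irradiance) vector at a point x: the radiance-weighted
% unit direction vectors, integrated over the sphere S^2 of
% directions omega
\vec{D}(\mathbf{x}) = \int_{S^2} L(\mathbf{x}, \vec{\omega})\, \vec{\omega} \;\mathrm{d}\omega

% Irradiance of a surface through x with unit normal n; it is
% maximal when n is aligned with D(x)
E(\mathbf{x}, \vec{n}) = \int_{S^2} L(\mathbf{x}, \vec{\omega})\, \max(0, \vec{\omega} \cdot \vec{n}) \;\mathrm{d}\omega
```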

The 4-D light field

If a scene contains a concave object (e.g. the inside of a cupped hand), then light leaving one point on the object travels only a short distance before being blocked by another point on the object. No practical measuring device could determine the plenoptic function in such a scene.

The radiance along a ray remains constant unless the ray is blocked by an object in its path.

If, on the other hand, the scene is restricted to convex objects, then the plenoptic function can be measured quite easily (even with a digital camera). Moreover, in this case the function contains redundant information, because the radiance does not change along a ray. More precisely, the redundant information amounts to exactly one dimension, so a four-dimensional function suffices. Parry Moon called this the photic field in 1981, while researchers in computer graphics refer to it as the 4-D light field or Lumigraph. Formally, the 4-D light field is defined as the radiance along light rays in empty space.
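This redundancy can be stated compactly (a notational sketch): parameterize a ray by an origin o and a unit direction d; in empty space the radiance is then the same at every offset s along the ray:

```latex
% Radiance invariance along a ray in empty space
L(\mathbf{o} + s\,\mathbf{d},\; \mathbf{d}) = L(\mathbf{o},\; \mathbf{d})
\quad\text{for all } s,
\qquad\text{equivalently}\qquad
\frac{\mathrm{d}}{\mathrm{d}s}\, L(\mathbf{o} + s\,\mathbf{d},\; \mathbf{d}) = 0
```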

The set of rays in a light field can be parameterized in various ways, a few of which are shown below. The most commonly used parameterization is the two-plane form (picture below, right illustration). This form cannot represent all rays: for example, it cannot represent rays parallel to the two planes if the planes are parallel to each other. Its advantage, however, is that it relates closely to the analytical geometry of perspective imaging. A simple way to picture the two-plane form of the light field is as a collection of perspective images of the st-plane (and of everything that lies astride or beyond it), each recorded from a different observer position on the uv-plane. A light field parameterized in this way is sometimes called a light slab.

Some alternative ways of parameterizing the 4-D light field as the flow of light through empty space. Left: points on a plane or curved surface, together with directions leaving each point. Middle: pairs of points on the surface of a sphere. Right: pairs of points on two planes (in general position).

Note that light slab does not mean that the 4-D light field is equivalent to two 2-D planes of information (the latter is only two-dimensional). For example, the pair of points (0,0) in the st-plane and (1,1) in the uv-plane corresponds to one ray in space; but many other rays also pass through (0,0) in the st-plane or through (1,1) in the uv-plane. The pair of points describes only this one ray, not all of the others. A minimal sketch of this mapping is given below.
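As an illustration of the two-plane parameterization, the following minimal Python sketch (the coordinate conventions and plane positions are assumptions chosen for this example) intersects a ray with two parallel planes, z = 0 as the uv-plane and z = 1 as the st-plane, to obtain its (u, v, s, t) coordinates:

```python
import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Map a ray to two-plane light field coordinates.

    The uv-plane is z = z_uv, the st-plane is z = z_st. Rays parallel
    to the planes (direction[2] == 0) cannot be represented.
    """
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    if d[2] == 0.0:
        raise ValueError("ray is parallel to the planes")
    # ray parameter values where the ray crosses each plane
    t_uv = (z_uv - o[2]) / d[2]
    t_st = (z_st - o[2]) / d[2]
    u, v = (o + t_uv * d)[:2]
    s, t = (o + t_st * d)[:2]
    return u, v, s, t

# example: the ray through (0,0) on the uv-plane and (1,1) on the
# st-plane, as in the text above
print(ray_to_uvst(origin=(0, 0, 0), direction=(1, 1, 1)))
```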

Analogy in acoustics

The equivalent of the 4-D light field in acoustics is the sound field or wave field, as used in wave field synthesis. It corresponds to the Kirchhoff-Helmholtz integral, which states that, in a space free of obstacles, the sound field over time is given by the sound pressure on a plane. This is 2-D information at any point in time and, including time, a 3-D field.

That the sound field has two dimensions (compared with the four-dimensional light field) is because light travels in rays (0-D at a point in space at an instant in time, 1-D as a function of time), whereas by Huygens' principle a sound wavefront can be modeled as spherical waves (2-D at a point in space at an instant in time, 3-D as a function of time). Light travels in a single direction (2-D information), while sound spreads in all directions.

Generation

Light fields are fundamental representations for light. They can be generated in various ways, for example by computer programs or suitable recording techniques.

In computer graphics, light fields are typically created either by rendering a 3-D model or by photographing a real scene. In both cases, images must be captured from a large number of viewpoints to create a light field. Depending on the chosen parameterization, these viewpoints lie on a line, a plane, a sphere or another geometry; unstructured collections of viewpoints are also possible.

Devices for capturing light fields photographically include a moving hand-held camera, an automatically moved camera, cameras mounted on an arc (as in the bullet-time effect known from the film The Matrix), a matrix-shaped array of cameras, and hand-held cameras, microscopes or other optical systems with a microlens array placed in the optical path (see also plenoptic camera). Some public archives of light field recordings are listed below.

The largest known light field data set (Michelangelo's statue of Night) contains 24,000 images of 1.3 megapixels each. The number of images required depends on the application. If a light field of an opaque object is to be rendered (see the Applications section below) so that the object can be viewed from all sides, then its back must be photographed as well. If the object is to be viewed from a short distance and lies astride the st-plane, then images must be taken from closely spaced positions in the uv-plane (with the two-plane parameterization described above), i.e. with a high sampling density.

The number and placement of the images in a light field, together with the resolution of the individual images, is called the sampling of the 4-D light field. Light field sampling has been studied by many researchers; the analysis references below provide an entry point into this topic. Also of interest are the effects of occlusion, of lighting and reflection, and applications to plenoptic cameras and 3-D displays.

Applications

Computational imaging refers to all imaging methods that involve a computer. Many of these methods work in the visible wavelength spectrum, and many of them produce light fields. As a result, listing all applications of light fields would require surveying every use of computational imaging in art, science, engineering and medicine.

A downward-facing light source (F-F′) induces a light field whose irradiance vectors curve outwards. Using his calculations, Gershun was able to determine the irradiance falling on a point on a surface (reproduced from Gershun's publication).

Selected applications from the field of computer graphics are listed below:

  • Lighting engineering. Gershun's reason for studying the light field was to derive (in closed form, where possible) the illumination pattern that appears on surfaces due to light sources of various positions and shapes. An example is shown in the adjacent figure. More recent studies have continued this line of work (see the applications references below).
  • Light field rendering. By extracting 2-D slices from a 4-D light field, new views of a scene can be generated. Depending on the parameterization of the light field, these views can be perspective, orthographic, crossed-slits, multi-perspective or of some other projection type. Light field rendering is a form of image-based modeling and rendering. A minimal sketch of slice extraction appears after this list.
  • Synthetic aperture photography. By integrating a suitable 4-D subset of a light field, one can approximate the view that a camera with a finite aperture would record. Such a view has a limited depth of field. By shearing or warping the light field before this integration, it is possible to focus on fronto-parallel or inclined planes. If a light field is recorded with a hand-held digital camera, the focus of the captured images can be adjusted afterwards (see also plenoptic camera); a minimal refocusing sketch appears after this list.
  • 3-D displays. If a light field is presented with a technique that maps each sample to the corresponding ray in space, an autostereoscopic effect arises, similar to viewing the real scene. Non-digital techniques for this include integral photography, parallax panoramagrams and holography. Digital techniques include placing lens arrays over high-resolution displays, or projecting the imagery onto a lens array with an array of projectors. If the latter is combined with an array of video cameras, time-varying light fields can be recorded and displayed; together, the two form a 3-D television system. The generation and pre-distortion of synthetic images for holographic stereograms is one of the earliest examples of computer-generated light fields and motivated the later work of Levoy and Hanrahan.
  • Glare reduction. Multiple light scattering and reflections within a lens cause glare, which reduces image contrast. Glare has previously been analyzed in 2-D image space; however, it is natural to treat glare as a phenomenon of the 4-D ray space. By statistically analyzing the ray space inside a camera, glare artifacts can be classified and masked out. In ray space, glare behaves like high-frequency noise that can be reduced by outlier filtering. Such filtering can be performed by capturing the light field inside the camera, although this reduces the spatial resolution of the image. Uniform and non-uniform sampling of the light rays can reduce glare without significantly degrading image resolution.
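To make the slice-extraction idea from the light field rendering item concrete, here is a minimal Python sketch (the array layout lf[iu, iv, s, t] and the bilinear blending across the uv-plane are illustrative assumptions, not code from the cited systems). It produces a view for a virtual camera position lying between the stored viewpoints on the uv-plane:

```python
import numpy as np

def render_view(lf, u, v):
    """Render a view from a continuous camera position (u, v) on the
    uv-plane by bilinearly blending the four nearest stored images.

    lf -- 4-D light field lf[iu, iv, s, t]: one image of the st-plane
          per integer grid position (iu, iv) on the uv-plane
    """
    nu, nv = lf.shape[:2]
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, nu - 1), min(v0 + 1, nv - 1)
    fu, fv = u - u0, v - v0
    # blend whole st-images; a full renderer would also interpolate
    # in (s, t) (quadrilinear interpolation)
    return ((1 - fu) * (1 - fv) * lf[u0, v0]
            + fu * (1 - fv) * lf[u1, v0]
            + (1 - fu) * fv * lf[u0, v1]
            + fu * fv * lf[u1, v1])

# usage: a random 5x5 grid of 32x32 views, rendered from (2.3, 1.7)
lf = np.random.rand(5, 5, 32, 32)
view = render_view(lf, 2.3, 1.7)
```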
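The synthetic aperture item can likewise be sketched in a few lines (again an illustrative shift-and-add sketch under the same assumed array layout; np.roll stands in for proper resampling at the image borders, not the method of the cited papers). Shifting each view in proportion to its offset from the aperture center and averaging corresponds to shearing the light field before integration; the shear parameter alpha selects the focal plane:

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add synthetic aperture refocusing.

    lf    -- 4-D light field lf[iu, iv, s, t] on an integer uv grid
    alpha -- shear parameter selecting the synthetic focal plane
             (alpha = 0 focuses on the st-plane itself)
    """
    nu, nv, ns, nt = lf.shape
    cu, cv = (nu - 1) / 2.0, (nv - 1) / 2.0
    out = np.zeros((ns, nt))
    for iu in range(nu):
        for iv in range(nv):
            # shift each view in proportion to its offset from the
            # aperture center, then accumulate
            ds = int(round(alpha * (iu - cu)))
            dt = int(round(alpha * (iv - cv)))
            out += np.roll(lf[iu, iv], shift=(ds, dt), axis=(0, 1))
    return out / (nu * nv)

# usage: refocus the same light field at two different depths
lf = np.random.rand(5, 5, 32, 32)
near, far = refocus(lf, alpha=1.0), refocus(lf, alpha=-1.0)
```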

Sources

  1. Marc Levoy, Jonathan Shade: "A light field of Michelangelo's statue of Night", Stanford Computer Graphics Laboratory, May 25, 1999. Retrieved July 15, 2019.

Theory

  1. Faraday, M., "Thoughts on Ray Vibrations", Philosophical Magazine, Series 3, Vol. XXVIII, No. 188, May 1846.
  2. Gershun, A. (1936). "The Light Field", Moscow, 1936. Translated by P. Moon and G. Timoshenko in Journal of Mathematics and Physics , Vol. XVIII, MIT, 1939, pp. 51-151.
  3. Adelson, EH, Bergen, JR (1991). "The plenoptic function and the elements of early vision" , In Computation Models of Visual Processing , M. Landy and JA Movshon, eds., MIT Press, Cambridge, 1991, pp. 3-20.
  4. Gershun, A. (1936). "The Light Field", Moscow, 1936. Translated by P. Moon and G. Timoshenko in Journal of Mathematics and Physics, Vol. XVIII, MIT, 1939, Fig. 17.
  5. Arvo, J. (1994). "The Irradiance Jacobian for Partially Occluded Polyhedral Sources" , Proc. ACM SIGGRAPH , ACM Press, pp. 335-342.
  6. Moon, P., Spencer, DE (1981). The Photic Field , MIT Press.
  7. Levoy, M., Hanrahan, P. (1996). "Light Field Rendering", Proc. ACM SIGGRAPH, ACM Press, pp. 31-42.
  8. Gortler, S.J., Grzeszczuk, R., Szeliski, R., Cohen, M. (1996). "The Lumigraph", Proc. ACM SIGGRAPH, ACM Press, pp. 43-54.
  9. Gershun, A. (1936). "The Light Field", Moscow, 1936. Translated by P. Moon and G. Timoshenko in Journal of Mathematics and Physics, Vol. XVIII, MIT, 1939, Fig. 24.

Analysis

  1. Chai, J.-X., Tong, X., Chan, S.-C., Shum, H. (2000). "Plenoptic Sampling" , Proc. ACM SIGGRAPH , ACM Press, pp. 307-318.
  2. Durand, F., Holzschuch, N., Soler, C., Chan, E., Sillion, F.X. (2005). "A Frequency Analysis of Light Transport", Proc. ACM SIGGRAPH, ACM Press, pp. 1115-1126.
  3. Ramamoorthi, R., Mahajan, D., Belhumeur, P. (2006). "A First Order Analysis of Lighting, Shading, and Shadows" (Memento of August 28, 2006 in the Internet Archive), ACM TOG.
  4. Ng, R. (2005). "Fourier Slice Photography", Proc. ACM SIGGRAPH, ACM Press, pp. 735-744.
  5. Zwicker, M., Matusik, W., Durand, F., Pfister, H. (2006). "Antialiasing for Automultiscopic 3D Displays" (Memento of May 1, 2007 in the Internet Archive), Eurographics Symposium on Rendering, 2006.
  6. Halle, M. (1994). "Holographic stereograms as discrete imaging systems" (PDF; 992 kB), in: SPIE Proc. Vol. 2176: Practical Holography VIII, S.A. Benton, ed., pp. 73-84.

Equipment

  1. Levoy, M. (2002). Stanford Spherical Gantry.
  2. Kanade, T., Saito, H., Vedula, S. (1998). "The 3D Room: Digitizing Time-Varying 3D Events by Synchronized Multiple Video Streams" , Tech report CMU-RI-TR-98-34, December 1998.
  3. Yang, J.C., Everett, M., Buehler, C., McMillan, L. (2002). "A real-time distributed light field camera", Proc. Eurographics Rendering Workshop 2002.
  4. Wilburn, B., Joshi, N., Vaish, V., Talvala, E., Antunez, E., Barth, A., Adams, A., Levoy, M., Horowitz, M. (2005). "High Performance Imaging Using Large Camera Arrays" , ACM Transactions on Graphics (Proc. SIGGRAPH), Vol. 24, no. 3, pp. 765-776.
  5. Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., Hanrahan, P. (2005). "Light Field Photography with a Hand-Held Plenoptic Camera", Stanford Tech Report CTSR 2005-02, April 2005.
  6. Georgiev, T., Zheng, C., Nayar, S., Curless, B., Salesin, D., Intwala, C. (2006). "Spatio-angular Resolution Trade-offs in Integral Photography" (PDF; 593 kB), Proc. EGSR 2006 .
  7. Levoy, M., Ng, R., Adams, A., Footer, M., Horowitz, M. (2006). "Light field microscopy" , ACM Transactions on Graphics (Proc. SIGGRAPH), Vol. 25, No. 3.

Light field archives

Applications

  1. Buehler, C., Bosse, M., McMillan, L., Gortler, S., Cohen, M. (2001). "Unstructured Lumigraph rendering" , Proc. ACM SIGGRAPH , ACM Press.
  2. Ashdown, I. (1993). "Near-Field Photometry: A New Approach" , Journal of the Illuminating Engineering Society , Vol. 22, No. 1, Winter, 1993, pp. 163-180.
  3. Zomet, A., Feldman, D., Peleg, S., Weinshall, D. (2003). "Mosaicing new views: the crossed-slits projection" , IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) , Vol. 25, no. 6, June 2003, pp. 741-754.
  4. Rademacher, P., Bishop, G. (1998). "Multiple-Center-of-Projection Images", Proc. ACM SIGGRAPH, ACM Press.
  5. Isaksen, A., McMillan, L., Gortler, SJ (2000). "Dynamically Reparameterized Light Fields" , Proc. ACM SIGGRAPH , ACM Press, pp. 297-306.
  6. Vaish, V., Garg, G., Talvala, E., Antunez, E., Wilburn, B., Horowitz, M., Levoy, M. (2005). "Synthetic Aperture Focusing using a Shear-Warp Factorization of the Viewing Transform", Proc. Workshop on Advanced 3D Imaging for Safety and Security, in conjunction with CVPR 2005.
  7. Javidi, B., Okano, F., eds. (2002). Three-Dimensional Television, Video and Display Technologies , Springer-Verlag.
  8. Matusik, W., Pfister, H. (2004). "3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes", Proc. ACM SIGGRAPH, ACM Press.
  9. Halle, M., Benton, S., Klug, M., Underkoffler, J. (1991). "The UltraGram: a generalized holographic stereogram" (PDF; 320 kB), SPIE Vol. 1461, Practical Holography V, S.A. Benton, ed., pp. 142-155.
  10. Talvala, E.V., Adams, A., Horowitz, M., Levoy, M. (2007). "Veiling glare in high dynamic range imaging", Proc. ACM SIGGRAPH.
  11. Raskar, R., Agrawal, A., Wilson, C., Veeraraghavan, A. (2008). "Glare Aware Photography: 4D Ray Sampling for Reducing Glare Effects of Camera Lenses", Proc. ACM SIGGRAPH.