Active pixel sensor

from Wikipedia, the free encyclopedia

An active pixel sensor (APS) is a semiconductor detector for measuring light that is manufactured in CMOS technology and is therefore often referred to as a CMOS sensor. In contrast to the passive pixel sensor, which is also manufactured in CMOS technology, each picture element (pixel) contains an amplifier circuit for signal readout.

Using CMOS technology makes it possible to integrate further functions into the sensor chip, such as exposure control, contrast correction or analog-to-digital conversion.

CMOS sensors can be found in smartphones and some digital cameras, while the competing technology, the CCD sensor, is built into video cameras and other digital cameras.

One of the first CMOS APS chips, developed by NASA

Working principle

Schematic diagram of a single pixel in an AP sensor

The simplest implementation of an integrating APS picture element consists of a reverse-biased photodiode as the photosensitive element and three n-channel MOSFETs (field-effect transistors). At the beginning of the exposure measurement, the voltage across the photodiode is set to a defined initial value by means of the reset transistor, here the supply voltage reduced by the threshold voltage of that transistor. During the subsequent brightness measurement, the junction capacitance of the photodiode is discharged by the photocurrent. The voltage across the photodiode therefore drops in proportion to the irradiance and the exposure time. After the exposure time has expired, this voltage value is read out and passed to analog post-processing or directly to an analog-to-digital converter. For this purpose, each picture element has an amplifier transistor which, by means of the selection transistor, is switched row by row onto a readout line shared by all picture elements of a column.
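The integration behavior described above can be sketched in a few lines of code. This is a simplified model, not a circuit simulation; the supply voltage, threshold voltage and junction capacitance below are illustrative assumptions, not values from any datasheet.

```python
# Sketch of the integrating 3T pixel described above.
# All component values are assumed for illustration only.

V_DD = 3.3        # supply voltage in volts (assumed)
V_TH = 0.7        # threshold voltage of the reset transistor (assumed)
C_PD = 5e-15      # junction capacitance of the photodiode in farads (assumed)

def pixel_voltage(photocurrent_a, exposure_s):
    """Voltage on the photodiode node after integration.

    Reset sets the node to V_DD - V_TH; the photocurrent then
    discharges the junction capacitance linearly over time.
    """
    v_reset = V_DD - V_TH
    dv = photocurrent_a * exposure_s / C_PD   # Q = I * t, dV = Q / C
    return max(v_reset - dv, 0.0)             # node cannot drop below 0 V (saturation)

# The voltage drop is proportional to irradiance (photocurrent) and exposure time:
dark = pixel_voltage(0.0, 0.01)     # no light: node stays at the reset level
lit  = pixel_voltage(50e-15, 0.01)  # 50 fA photocurrent discharges the node
print(dark, lit)
```

Doubling either the photocurrent or the exposure time doubles the voltage drop, which is exactly the proportionality the text describes.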

Compared to CCD sensors, this design has the advantage that the electronics can read out the voltage signal of each individual pixel directly without having to shift charges across the chip, which gives a significantly lower tendency to blooming. The disadvantage is that a lot of electronics, itself not light-sensitive, sits between the light-sensitive photodiodes, which originally led to a lower light sensitivity than CCD technology at the same chip area. Since the integration density necessary to compete with CCDs had not yet been achieved, this technology remained insignificant in the 1970s and 1980s.

History

Active pixel sensors were invented by Eric Fossum at the Jet Propulsion Laboratory in the 1990s. For this work he received the Queen Elizabeth Prize for Engineering in 2017.

Due to the readout electronics, which were initially difficult to shrink, the fill factor, i.e. the proportion of the light-sensitive area in the total area of a pixel, was only about 30 percent. The charge yield, and thus the achievable signal strength, was correspondingly low, which resulted in a poor signal-to-noise ratio and manifested itself as strong image noise and poor light sensitivity. These disadvantages were only reduced later, through intensive further miniaturization of CMOS technology and through the use of microlenses above each picture element, which direct almost all of the incident light onto its light-sensitive part.
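The effect of the fill factor on signal and noise can be illustrated with a small back-of-the-envelope calculation. The photon flux, pixel area, quantum efficiency and exposure values below are arbitrary assumptions chosen only to show the relationship; the model considers photon shot noise only.

```python
import math

def collected_electrons(photon_flux, pixel_area_um2, fill_factor, qe, exposure_s):
    """Mean number of photoelectrons collected by one pixel.

    photon_flux: photons per square micrometer per second (assumed input).
    Only the light-sensitive fraction (fill factor) of the pixel contributes;
    a microlens effectively raises this fraction toward 1.
    """
    return photon_flux * pixel_area_um2 * fill_factor * qe * exposure_s

def photon_shot_snr(electrons):
    """Shot-noise-limited SNR: signal N over noise sqrt(N) equals sqrt(N)."""
    return math.sqrt(electrons)

# Early APS with ~30 % fill factor vs. the same pixel with a microlens (~90 %):
e_early = collected_electrons(1000, 25, 0.30, 0.5, 0.01)
e_lens  = collected_electrons(1000, 25, 0.90, 0.5, 0.01)
print(photon_shot_snr(e_early), photon_shot_snr(e_lens))
```

Tripling the effective fill factor triples the collected charge but improves the shot-noise-limited SNR only by a factor of sqrt(3), which is why the low early fill factors showed up as visibly stronger image noise.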

Application areas

AP sensors are used as image sensors in digital cameras and video cameras, including various digital single-lens reflex cameras. In cell phones with camera functions, they are practically the only type of sensor used.

CCD sensors are currently used almost exclusively in camcorders, but in 2005 Sony released the HDR-HC1, a high-definition camcorder that uses an AP sensor. AP sensors are also used in many industrial cameras. In 2004, the Munich company ARRI released the D-20, a video camera that uses an AP sensor with an image resolution of 2880 × 1620 pixels. Its size corresponds to the active image area of 35 mm film, which allows the use of common film camera lenses and is intended to match the depth of field of the images to that of film. Sometimes a separate CMOS sensor is installed for each primary color (a so-called 3MOS sensor), so that greater color saturation is achieved even at low brightness.

A special form of CMOS image sensors are photodiode arrays, which are effectively n × 1 CMOS image sensors. As a rule, they are only used in embedded applications, i.e. applications in which the image is not viewed or evaluated by people. Examples are barcode scanners and angle sensors.

Color image sensors

To record a color image, at least three wavelength ranges of the light have to be recorded separately, usually assigned to the primary colors red, green and blue. With a single sensor, this is often done by a color filter mosaic superimposed on the pixels, such as the Bayer pattern. In contrast to CCD sensors, CMOS sensors can also perform this color separation within the same pixel by stacking three photodiodes on top of each other, which are reached by different colors because the penetration depth of light into silicon depends on its wavelength. Such sensors are used commercially in digital cameras by Sigma under the name Foveon X3. An alternative design, the transverse field detector, is being researched.
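The sampling behavior of a Bayer color filter mosaic can be sketched briefly. This is a minimal illustration of one common Bayer layout (a repeating 2×2 tile of G, R over B, G); real sensors and the subsequent demosaicing step are considerably more involved.

```python
# Minimal sketch of Bayer mosaic sampling: each pixel records only one
# primary color, in the repeating 2x2 pattern
#   G R
#   B G
# (one common layout; half of all samples are green).

def bayer_channel(row, col):
    """Return which primary color the pixel at (row, col) records."""
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

def mosaic(rgb_image):
    """Reduce a full-color image (nested lists of (r, g, b) tuples) to the
    single per-pixel samples a Bayer sensor would actually deliver."""
    idx = {"R": 0, "G": 1, "B": 2}
    return [[px[idx[bayer_channel(r, c)]] for c, px in enumerate(row)]
            for r, row in enumerate(rgb_image)]

# A uniform 2x2 patch: the sensor keeps one channel per position.
patch = [[(10, 20, 30), (10, 20, 30)],
         [(10, 20, 30), (10, 20, 30)]]
print(mosaic(patch))  # [[20, 10], [30, 20]]
```

The missing two channels at every position must later be interpolated from neighboring pixels (demosaicing), which is one reason per-pixel sensitivity differences translate into color noise, as noted in the disadvantages below.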

Differences to CCD sensors

Initially, it was hoped that production would become inexpensive at larger production volumes, on the assumption that the sensors could be manufactured without retrofitting on production lines designed for high quantities, resulting in lower production costs per chip. This has not been confirmed (as of 2001). However, chips with active pixel sensors often integrate parts of the peripheral circuitry, such as the analog-to-digital converter, clock generation, timing sequencer and voltage level adjustment, which allows more compact and overall cheaper systems.

A fundamental advantage of APS lies in the amplifier present in each pixel, so that one amplifier does not have to serve many pixels as with CCDs. As a result, at a given pixel rate, each amplifier can be operated with a lower bandwidth and thus lower self-noise. In 2013, AP sensors achieved an input noise of 1–2 photons at readout rates of over four hundred megapixels per second, with the sensors comprising 4–10 megapixels and a quantum efficiency of over 70%. Where a single performance aspect dominates, however, CCDs can be advantageous: EMCCDs are used to detect very few photons with very little noise; CCDs can be manufactured with quantum efficiencies close to one hundred percent in a limited spectral range, and their low dark current results in low image noise at very long exposure times.
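The bandwidth argument above can be made concrete with a rough calculation. The 400 megapixel-per-second figure comes from the text; the number of parallel column amplifiers is an assumption for illustration, and white (thermal) noise is taken to grow with the square root of bandwidth.

```python
import math

PIXEL_RATE = 400e6   # pixels per second for the whole sensor (from the text)
COLUMNS = 4000       # assumed number of parallel column amplifiers

def amp_bandwidth(pixel_rate, n_amps):
    """Readout bandwidth each amplifier must handle when the
    pixel stream is shared among n_amps parallel amplifiers."""
    return pixel_rate / n_amps

def relative_noise(bandwidth):
    """White-noise amplitude grows with the square root of bandwidth."""
    return math.sqrt(bandwidth)

bw_ccd = amp_bandwidth(PIXEL_RATE, 1)        # single output amplifier (CCD-style)
bw_aps = amp_bandwidth(PIXEL_RATE, COLUMNS)  # column-parallel APS readout
print(relative_noise(bw_ccd) / relative_noise(bw_aps))  # ~63x noise advantage
```

With 4000 parallel amplifiers, each one needs only 1/4000 of the bandwidth, so its noise floor drops by a factor of sqrt(4000), roughly 63, compared with a single amplifier running at the full pixel rate.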

CMOS image sensors often have a higher sensitivity in the NIR range (near infrared, the short-wavelength part of the infrared spectrum) than CCD sensors. For many CMOS sensors, the maximum sensitivity lies in the NIR range (> 650 nm), while CCD sensors have their maximum in the visible range (green light, around 550 nm).

The following list of the advantages and disadvantages of CMOS sensors compared to CCD sensors relates to general statements about standard components. Specialized sensors can have significantly different properties in both technologies.

Advantages of CMOS sensors:

  • Lower power consumption
  • Smaller (device) size, thanks to the integration of the evaluation logic on the same chip (system on a chip)
  • Some processing steps can be carried out directly in the pixel amplifier, e.g. logarithmization for the HDRC sensor (high dynamic range CMOS)
  • Because each pixel converts its charge into a voltage and is processed separately:
    • Very high frame rates compared to a CCD of the same size (quick preview, video function)
    • More flexible readout through direct addressing (binning, multiple readout, simultaneous readout of several pixels)
    • Very limited blooming effect

Disadvantages:

  • The separate conversion of the charge into a voltage for each pixel and the integration of the evaluation logic lead to:
    • greater sensitivity differences between the pixels (non-uniformity) due to manufacturing tolerances, which leads to greater color noise in Bayer sensors, and
    • a lower fill factor (ratio of the light-sensitive area to the total pixel area), resulting in overall poorer light sensitivity.
