IDS peak 2.8.0 / uEye+ firmware 3.33

Image sensors are colorblind by nature. They only detect brightness, integrated over their spectral sensitivity and the spectrum of the incident light; the color information is lost.

The most common method to bring color into image sensors is to apply a color filter array to the pixels. A well-established pixel array is the Bayer RGB pattern, named after its inventor Bryce Bayer. Each pixel is covered by either a green, a blue or a red filter. Two out of every four pixels are green (G), one pixel is red (R) and one is blue (B). The doubled proportion of pixels with a green filter reflects the functioning of the human eye, whose spectral sensitivity is strongest for green light.

Fig. 11: Bayer RGB filter pattern

RAW Bayer pixel format

A sensor with a Bayer pattern provides an image in RAW Bayer pixel format.

In the typical RAW Bayer format, the first pixel of the first row is a red pixel, but image transformations such as mirroring or rotation can change the sequence of pixel colors (depending on the sensor). The color sequence of the pixel filters is called the PixelColorFilter; its name starts with "Bayer" followed by the two alternating colors of the first pixel row (row 0).

PixelColorFilter | Even rows (0, 2, ...) | Odd rows (1, 3, ...)
BayerRG          | Red Green             | Green Blue
BayerGR          | Green Red             | Blue Green
BayerBG          | Blue Green            | Green Red
BayerGB          | Green Blue            | Red Green
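For illustration, the mapping from a PixelColorFilter name to the color of an individual pixel can be expressed in a few lines of Python. This is only a sketch based on the table above; the function and dictionary names are made up and are not part of the IDS peak API.

```python
# Illustrative sketch (not part of the IDS peak API): derive the color of a
# sensor pixel at (row, col) from the PixelColorFilter name, assuming the
# layouts listed in the table above.

PATTERNS = {
    # pattern name: (colors of even rows, colors of odd rows)
    "BayerRG": (("R", "G"), ("G", "B")),
    "BayerGR": (("G", "R"), ("B", "G")),
    "BayerBG": (("B", "G"), ("G", "R")),
    "BayerGB": (("G", "B"), ("R", "G")),
}

def pixel_color(pattern: str, row: int, col: int) -> str:
    """Return 'R', 'G' or 'B' for the pixel at (row, col)."""
    even_row, odd_row = PATTERNS[pattern]
    colors = even_row if row % 2 == 0 else odd_row
    return colors[col % 2]

# Example: the first pixel (row 0, column 0) of a BayerRG sensor is red.
assert pixel_color("BayerRG", 0, 0) == "R"
assert pixel_color("BayerRG", 1, 1) == "B"
```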

The RAW Bayer pixel format contains only one color value for each pixel. To obtain a full-color image, various algorithms are applied that calculate red, green and blue values for every pixel. This interpolation of the individual pixel values is called debayering.

Debayering

Debayering is also known as Bayer conversion or demosaicing.

During debayering, the missing color information is calculated from the surrounding pixels. The resulting image has a color pixel format that consists of three planes, or "channels", per pixel (red, green and blue). The amount of data is tripled.

There are various algorithms to generate the missing color information. Typically, fast algorithms result in less accurate color reproduction and show more artifacts, while slower algorithms achieve higher-quality color reproduction.
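As an illustration of a simple, fast approach, the following Python sketch performs bilinear debayering of a BayerRG image using NumPy and SciPy. It is not the algorithm used by IDS peak or the camera firmware; it only shows how the missing color values can be interpolated from neighboring pixels.

```python
# Illustrative bilinear debayering sketch for an 8-bit BayerRG image.
# This is a minimal example, not the algorithm used by IDS peak.
import numpy as np
from scipy.ndimage import convolve

def debayer_bilinear(raw: np.ndarray) -> np.ndarray:
    """raw: 2-D BayerRG array (uint8); returns an (H, W, 3) RGB image."""
    raw = raw.astype(np.float32)
    rows, cols = np.indices(raw.shape)

    # Pixel locations of each color filter (BayerRG: red at even row/even
    # column, blue at odd row/odd column, green elsewhere).
    red_mask   = (rows % 2 == 0) & (cols % 2 == 0)
    blue_mask  = (rows % 2 == 1) & (cols % 2 == 1)
    green_mask = ~(red_mask | blue_mask)

    # Bilinear kernels: average the nearest neighbors of the same color.
    k_green = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], np.float32) / 4
    k_rb    = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32) / 4

    r = convolve(raw * red_mask,   k_rb,    mode="mirror")
    g = convolve(raw * green_mask, k_green, mode="mirror")
    b = convolve(raw * blue_mask,  k_rb,    mode="mirror")
    return np.clip(np.dstack((r, g, b)), 0, 255).astype(np.uint8)
```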

Some cameras can perform debayering in the camera itself. This reduces the load on the host computer's CPU, but increases the amount of data that must be transmitted and thus the required transmission bandwidth.
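A rough calculation with hypothetical example figures illustrates this trade-off: 8-bit RAW Bayer data needs one byte per pixel, while debayered RGB8 data needs three.

```python
# Rough bandwidth estimate (example figures only): transmitting RAW Bayer
# (one 8-bit value per pixel) versus in-camera debayered RGB8 (three values
# per pixel) triples the required transmission bandwidth.
width, height, fps = 1920, 1080, 30              # example resolution and frame rate
raw_mb_per_s = width * height * 1 * fps / 1e6    # 8-bit RAW Bayer
rgb_mb_per_s = width * height * 3 * fps / 1e6    # debayered RGB8

print(f"RAW Bayer: {raw_mb_per_s:.0f} MB/s, RGB8: {rgb_mb_per_s:.0f} MB/s")
# -> RAW Bayer: 62 MB/s, RGB8: 187 MB/s
```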

What else should you know about color sensors?

Adding a color filter array to an image sensor affects its native resolution. Each pixel can only belong to one of the channels red, green or blue, so some of the sensor's potential to resolve detail is traded for the ability to detect color. How much native resolution is lost in a particular scenario depends on the properties of the scene and the debayering method used. As a rule of thumb, the drop in resolution is roughly 50 %.

Debayering may cause image artifacts such as edge effects ("aliasing") or false colors at object edges.

The color filter array absorbs part of the incident light, so each pixel behind it detects less brightness. This results in a lower sensitivity compared to a monochrome sensor. For applications where color information provides no additional benefit, we recommend using a monochrome sensor.

Color sensors and IR cut-off filter

Unlike the human eye, color sensors are also sensitive in the near-infrared region. Most IDS color cameras therefore contain an additional IR cut-off filter (HQ filter) that removes these unwanted parts of the spectrum. This way, the colors appear more natural and vivid than without the filter.
