Before trying to solve a computer vision problem through software, it’s worth asking: are you using the right camera? While this article focuses on cameras, the broader topic of image acquisition is covered [here].
Even when mono and color cameras are priced similarly, a monochrome camera may still be the better choice for certain applications. Here’s why.
How Color Cameras Work
Each camera pixel measures only light intensity; to capture color, most cameras place a Bayer filter over the sensor. The filter repeats a 2×2 grid: two green, one red, and one blue pixel.
Contrary to what you might expect, a color camera does not record full RGB at each pixel. Each pixel captures only the wavelength band passed by its filter, and the camera estimates the two missing colors by interpolating neighboring pixels. For example, the green value at a blue pixel is the average of the four surrounding green pixels.
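As a minimal sketch of this interpolation, here is the green-at-a-blue-pixel case from above. The 4×4 raw values and the RGGB layout are made up for illustration:

```python
import numpy as np

# Hypothetical 4x4 raw sensor readout with an RGGB Bayer layout:
# rows 0 and 2 alternate R,G; rows 1 and 3 alternate G,B.
raw = np.array([
    [120,  90, 118,  92],   # R G R G
    [ 88,  60,  86,  62],   # G B G B
    [122,  94, 121,  93],   # R G R G
    [ 89,  61,  87,  63],   # G B G B
], dtype=float)

# Estimate the missing green value at the blue pixel (1, 1):
# its four horizontal/vertical neighbours all sit under green filters.
g_estimate = (raw[0, 1] + raw[2, 1] + raw[1, 0] + raw[1, 2]) / 4
print(g_estimate)  # → 89.5
```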
Simulating Bayer Patterns
To illustrate this, we can simulate a Bayer pattern on a color image:
- Simulated Bayer image: the raw sensor image before demosaicking.
- Zoomed Bayer pattern: helps visualize individual pixel positions.


The image above shows the single-channel simulation as it would appear on a camera sensor before demosaicking, which is explained in the next section. Since individual pixels are hard to distinguish at full size, a zoomed crop of the roundabout is shown below.
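The simulation itself can be sketched in a few lines. This assumes an RGGB layout and simply keeps, at each position, only the channel that position's color filter would pass:

```python
import numpy as np

def simulate_bayer(rgb):
    """Collapse an H x W x 3 RGB image into a single-channel Bayer mosaic.

    Assumes an RGGB layout: each pixel keeps only the channel that its
    position's colour filter would pass; the other two are discarded.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even row, even col
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green at even row, odd col
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green at odd row, even col
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue at odd row, odd col
    return mosaic

# Tiny demo: a uniform mid-grey image stays uniform, because all three
# channels of grey carry the same value.
grey = np.full((2, 2, 3), 128, dtype=np.uint8)
print(simulate_bayer(grey))
```

Running this on a real photo (loaded as a numpy array) produces the mottled single-channel image shown above.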

Debayering (Demosaicking)
To convert the raw Bayer image into a standard RGB image, a process called debayering or demosaicking is performed. The simplest method estimates each missing color by averaging the same-colored pixels in the 8-neighborhood. For example, at a blue pixel the missing red value comes from the four diagonal red neighbors, and the missing green value from the four horizontal and vertical green neighbors.
- Result of demosaicking: at first glance, the image looks fine.
- Closer inspection: borders are weaker, slightly shifted, and harder to detect than in a monochrome image.


At first glance there is no difference between the original and interpolated images, but a closer look at the borders makes it obvious.




Borders on the interpolated image are weaker than on the original one, which means that they are harder to detect, and their positions have shifted slightly.
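The neighborhood-averaging scheme described above can be sketched as follows. This is a naive bilinear-style demosaic assuming an RGGB layout with even image dimensions, not a production-quality algorithm; real cameras use more sophisticated methods:

```python
import numpy as np

def demosaic_bilinear(mosaic):
    """Naive demosaic of an RGGB Bayer mosaic (H and W even).

    Each colour plane keeps its genuinely sampled pixels and fills the
    gaps by averaging whatever same-colour neighbours fall inside a
    3x3 window -- exactly the simple averaging described above.
    """
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    # Boolean masks marking where each colour was actually sampled.
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red sites
    masks[0::2, 1::2, 1] = True   # green sites
    masks[1::2, 0::2, 1] = True   # green sites
    masks[1::2, 1::2, 2] = True   # blue sites
    padded = np.pad(mosaic, 1)
    for c in range(3):
        mask_pad = np.pad(masks[:, :, c], 1)
        acc = np.zeros((h, w))
        cnt = np.zeros((h, w))
        # Sum same-colour samples over each pixel's 3x3 neighbourhood.
        for dy in range(3):
            for dx in range(3):
                acc += padded[dy:dy + h, dx:dx + w] * mask_pad[dy:dy + h, dx:dx + w]
                cnt += mask_pad[dy:dy + h, dx:dx + w]
        plane = acc / cnt
        # Keep the measured values where this colour was sampled.
        plane[masks[:, :, c]] = mosaic[masks[:, :, c]]
        rgb[:, :, c] = plane
    return rgb
```

Feeding a sharp grayscale edge through `simulate_bayer` and then this function reproduces the softened, slightly shifted borders discussed above.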
Conclusion
Converting a Bayer-filtered color image to grayscale effectively reduces resolution, because each pixel carries only one genuinely measured color sample; the rest is interpolated.
- High-precision applications: Use a monochrome camera to avoid interpolation artifacts.
- Non-pixel-critical applications: A color camera is fine.
- Avoid pseudo-gray cameras: Some color cameras advertise a gray mode but still contain a Bayer filter, so their grayscale output is interpolated rather than truly monochrome.
Choosing the right camera upfront can save countless hours of software fixes later.
For further questions, contact us at info@subpixel.hr