Before trying to solve the problem in software, we should first reconsider whether we are using the right hardware. This article focuses exclusively on the camera; for a more general article, take a look here.
Considering that some manufacturers charge identical prices for mono and color cameras, does it still make sense to even consider buying a mono camera for your application? If you are just looking for a quick answer: yes. As for why, keep on reading.
Let us first explain how the camera acquires color information at each pixel. Each pixel measures only light intensity, so to recover the color information, cameras usually have a Bayer filter integrated over the sensor. The Bayer filter has a repeating 2×2 pattern consisting of two green cells, one red and one blue.
We would naturally expect to receive full color information (RGB) for each pixel, but each individual cell passes only the wavelengths of its corresponding color (R, G or B), losing quite a bit of information about the rest of the spectrum. To obtain the full color of each pixel, the two missing values are averaged from the closest neighbors. For example, to read the green value at the position of a blue pixel, we average the values of the four surrounding green pixels. You can find a more detailed description of this interpolation below.
Now let’s see how this plays out in the real world. Instead of photographing one scene with both a color and a mono camera, we will take a color image and simulate a Bayer pattern.
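The simulation can be sketched in a few lines of NumPy. This is a minimal illustration, assuming an RGGB layout with red in the top-left corner (the actual cell order varies between sensors); the function names are our own, not from any particular library:

```python
import numpy as np

def simulate_bayer(rgb):
    """Simulate an RGGB Bayer mosaic from an H x W x 3 RGB image.

    Returns a single-channel image where each pixel keeps only the
    value its Bayer cell would actually have measured.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue at odd rows, odd cols
    return mosaic

# Tiny 2x2 example: R=10, G=20, B=30 everywhere
rgb = np.stack([np.full((2, 2), 10),
                np.full((2, 2), 20),
                np.full((2, 2), 30)], axis=-1).astype(np.uint8)
print(simulate_bayer(rgb))  # [[10 20]
                            #  [20 30]]
```

Two thirds of the measured values in the resulting single-channel image are green or blue; this is exactly the information loss the rest of the article is about.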
The image above shows a one-layered RGB simulation, as it would look on a camera sensor without demosaicking, which is explained in the next chapter. Since individual pixels are quite difficult to distinguish, a zoomed-in part showing a roundabout is included below.
At this point, we have the image as it would come from a single-chip color camera if we did not perform any conversion or interpolation on it. To see the picture as an end user would, a process called debayering or demosaicking must be performed.
The simplest method is to go through every pixel, look at its 8-neighborhood (the eight closest pixels), and average the neighboring values for the missing colors. For example, every blue pixel is missing its green and red values, so we take the average of its four green and four red neighbors, respectively:
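For a blue pixel in an RGGB mosaic, the four green neighbors sit above, below, left and right, and the four red neighbors sit on the diagonals. A minimal sketch of this averaging (helper names are ours; border handling is deliberately omitted):

```python
import numpy as np

# A tiny RGGB mosaic (3x3); the center pixel (1, 1) is blue:
#   R G R
#   G B G
#   R G R
mosaic = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]], dtype=float)

def green_at_blue(m, r, c):
    """Estimate green at a blue pixel by averaging the four
    edge-adjacent green neighbors (assumes (r, c) is not on the border)."""
    return (m[r - 1, c] + m[r + 1, c] + m[r, c - 1] + m[r, c + 1]) / 4

def red_at_blue(m, r, c):
    """Estimate red at a blue pixel by averaging the four
    diagonal red neighbors."""
    return (m[r - 1, c - 1] + m[r - 1, c + 1] +
            m[r + 1, c - 1] + m[r + 1, c + 1]) / 4

print(green_at_blue(mosaic, 1, 1))  # (2 + 4 + 6 + 8) / 4 = 5.0
print(red_at_blue(mosaic, 1, 1))    # (1 + 3 + 7 + 9) / 4 = 5.0
```

Real demosaicking implementations use more sophisticated, edge-aware interpolation, but this bilinear averaging is the baseline the article describes.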
At first glance, there is no difference between the original and the interpolated image, but if we take a closer look at the borders, the difference becomes obvious.
Borders on the interpolated image are weaker than on the original one, which means that they are harder to detect, and their positions have shifted slightly.
To summarize, converting a color RGGB image to a grayscale one means working from partial information (the Bayer filter) and trying to achieve the same result as a mono camera would, but with effectively reduced resolution.
If the application requires high-precision measurement and a color camera offers no advantage, the monochrome one is the better choice. On the other hand, if you are not working at the pixel or subpixel level, feel free to use the color camera. If you do have high-precision requirements and opt for a monochrome camera, take care not to buy a pseudo-gray camera that contains a Bayer filter.
If you have any further questions, feel free to contact us at firstname.lastname@example.org