Hello and welcome to part 3 of this series (Let's Talk Cameras 3). In this part, we continue discussing the basics of cameras.

We have already covered a lot in the first two parts of this series. If you are a beginner, I would advise you to read them first. You can find them here and here. You can also find out how to read camera specs here.
Before we begin, let us define the following key terms:

Definitions

Filter: A filter is a very small piece of colored film that covers a single pixel on an image sensor.

CFA: A Colour Filter Array is the mosaic pattern of colored filters laid over the pixels of an image sensor.

Demosaicing (De-mo-saic-ing): This is the process that reconstructs a full-color image from the primary colors on a CFA image sensor.

RGB: Stands for Red, Green, Blue. These are the primary colors of light from which other colors are derived. Examples are purple, pink, yellow, brown, etc.
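If you like to tinker, here is a tiny Python sketch of that idea, using made-up 8-bit RGB values to show how other colors are just mixes of the three primaries:

```python
# A minimal sketch (8-bit RGB values assumed) showing how other
# colours are simply mixes of the three primaries.
colours = {
    "red":     (255, 0,   0),
    "green":   (0,   255, 0),
    "blue":    (0,   0,   255),
    "yellow":  (255, 255, 0),    # red + green
    "magenta": (255, 0,   255),  # red + blue
    "cyan":    (0,   255, 255),  # green + blue
}

for name, (r, g, b) in colours.items():
    print(f"{name:8s} -> R={r:3d} G={g:3d} B={b:3d}")
```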

So with those out of the way, let’s talk cameras 3.

What are color filters and why are they used?

As we already know, for a camera to take pictures, light reflected from the object or scene in question has to enter the camera. It is then captured by the pixels (photosites) on the image sensor. These pixels convert the light into an electrical signal which is sent to the Image Signal Processor (ISP) for processing and interpretation.

These pixels only catch light. They can tell how bright or dim the light is, but they cannot see color. This is why early cameras were black and white.

The solution used in smartphones today is simply to place a colored filter over each individual pixel. When different colors of filters are used together to cover an image sensor, we refer to the arrangement as a color filter array (CFA).

The filters are placed on top of individual pixels and each one only allows a particular type of light to enter its pixel, e.g. a red filter would only allow red light to enter the pixel, while a green filter would only allow green light to reach the pixel. This is how the filters help cameras to see color.

How do the filters work?

These filters do not "know" which color is red, blue, or green. They simply separate light by wavelength. Light with a wavelength of around 700nm (red) would not pass through a blue or green filter. Those filters would block it, but it would easily pass through a red filter. Green light (around 530nm) and blue light (around 470nm) likewise only pass through their respective filters and would not pass through a red filter.
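To make that concrete, here is a small Python sketch. The wavelength bands are rough, illustrative ranges I have assumed, not the exact cut-offs of any real filter:

```python
# A rough sketch of how a colour filter passes or blocks light purely by
# wavelength. The bands below are illustrative, not real spec values.
FILTER_BANDS = {
    "blue":  (450, 495),   # nm
    "green": (495, 570),
    "red":   (620, 750),
}

def passes(filter_name: str, wavelength_nm: float) -> bool:
    """Return True if light of this wavelength gets through the filter."""
    low, high = FILTER_BANDS[filter_name]
    return low <= wavelength_nm <= high

print(passes("red", 700))    # True  - red light passes a red filter
print(passes("blue", 700))   # False - red light is blocked by a blue filter
print(passes("green", 530))  # True  - green light passes a green filter
```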

Types (or modifications) of filters

There are many variants of filters on the market today, both for smartphone cameras and for professional cameras. These include the original Bayer filter (RGB), RGBE (red, green, blue, and emerald), RYYB (red, yellow, yellow, and blue), CYYM (cyan, yellow, yellow, and magenta), RGBW (red, green, blue, and white), etc. The most commonly used filter in smartphones today is the Quad Bayer filter (RGBG).

Quad Bayer filter

The Quad Bayer filter was originally developed by Sony, while similar filters like the Tetracell and the 4-cell are produced by Samsung and OmniVision respectively. It is similar to a Bayer filter, but in each 4x4 tile it uses 4 red, 8 green, and 4 blue filters, arranged as 2x2 blocks of the same colour.

It uses double the number of green filters compared to the other two colours. Why? Because it is trying to mimic the human eye, which is most sensitive to green light.
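If it helps to picture it, here is a small Python sketch of one 4x4 Quad Bayer tile, assuming it is simply the classic Bayer pattern with each colour repeated as a 2x2 block:

```python
# A minimal sketch of one 4x4 Quad Bayer tile: each colour from the
# classic Bayer pattern (R G / G B) is repeated as a 2x2 block.
TILE = [
    ["R", "R", "G", "G"],
    ["R", "R", "G", "G"],
    ["G", "G", "B", "B"],
    ["G", "G", "B", "B"],
]

counts = {}
for row in TILE:
    for colour in row:
        counts[colour] = counts.get(colour, 0) + 1

print(counts)  # {'R': 4, 'G': 8, 'B': 4} - twice as many green filters
```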

RYYB filters (Credits: Wikipedia)

How cameras see color

By now, some of you may be wondering, if camera sensors only ‘see’ in RGB, how then do they see other secondary colors?

This is mostly done by the Image Signal Processor (ISP) through a process called demosaicing. Demosaicing can be defined as a digital image process used to reconstruct a full-color image from a CFA sensor.

Red, Green, and Blue photos are combined to produce a composite image. (Credits: Wikipedia)

Interpolation

When a camera sensor captures an image, each pixel only captures light in one of the RGB colors. When this image gets to the ISP, the firmware (or software) in the ISP calculates the true color value of each pixel by taking the values of the colors in the surrounding eight pixels and then calculating the average. It does this for every single pixel in the image and then combines everything to get a sample picture.

The eight pixels around the circled blue one have their colors analyzed against the blue pixel to produce its true color (Credits: Wikipedia)

In simpler terms, the ISP combines all three primary colors in the image to get a final multicolor end product, just as we do when mixing colors on a palette. The process of comparing and combining these color pixels is called interpolation, while the overall process is called demosaicing.
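Here is a toy Python sketch of that averaging idea. It is a deliberately simplified version, not the actual algorithm any real ISP uses, and the raw values and the RGGB-style layout are made up for illustration:

```python
import numpy as np

# Toy interpolation sketch: a missing colour at a pixel is estimated as
# the average of the neighbouring pixels that actually measured that colour.
# 'raw' holds the single brightness value each photosite recorded and
# 'cfa' says which colour filter sat on top of it (values are invented).
raw = np.array([
    [110,  60, 120,  58],
    [ 55,  40,  57,  42],
    [115,  59, 125,  61],
    [ 54,  41,  56,  43],
], dtype=float)

cfa = np.array([
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
])

def estimate(y, x, colour):
    """Estimate one colour at (y, x) from the pixel itself or its 8 neighbours."""
    if cfa[y, x] == colour:
        return raw[y, x]                  # this pixel measured the colour directly
    h, w = raw.shape
    values = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if (dy or dx) and 0 <= ny < h and 0 <= nx < w and cfa[ny, nx] == colour:
                values.append(raw[ny, nx])
    return sum(values) / len(values)      # average of the matching neighbours

# Full RGB guess for the blue pixel at row 1, column 1:
print([round(estimate(1, 1, c), 1) for c in ("R", "G", "B")])  # ~[117.5, 57.8, 40.0]
```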

RAW

The picture at this stage is called a RAW image, or we say the picture is in a RAW format (strictly speaking, a RAW file stores the sensor data before demosaicing and other processing are applied). A RAW image may look bland or dull because it is not the end product. This RAW photo now undergoes further post-processing and colour correction to get the final image, which is usually in a JPG format.

Post-processing

Different companies post-process RAW images differently. Some lean towards warmer colors while others prefer cooler colors; some prefer truer colors while others prefer high saturation.
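To illustrate, here is a small Python sketch with two invented "tuning profiles". The gain and saturation numbers are made up for the example and do not represent any manufacturer's real tuning:

```python
import numpy as np

# Two made-up tuning profiles showing why the same RAW data can come out
# warmer or cooler. The numbers are invented purely for illustration.
PROFILES = {
    "warm_high_saturation": {"gains": (1.15, 1.00, 0.90), "saturation": 1.25},
    "cool_true_to_life":    {"gains": (0.95, 1.00, 1.10), "saturation": 1.00},
}

def develop(rgb, profile):
    """Apply per-channel gains, then push colours away from grey for saturation."""
    gains = np.array(PROFILES[profile]["gains"])
    sat = PROFILES[profile]["saturation"]
    out = rgb * gains                              # white-balance style gains
    grey = out.mean(axis=-1, keepdims=True)        # per-pixel brightness
    out = grey + (out - grey) * sat                # scale distance from grey
    return np.clip(out, 0, 255).astype(np.uint8)

raw_pixel = np.array([[120.0, 130.0, 125.0]])      # one "RAW" pixel
print(develop(raw_pixel, "warm_high_saturation"))  # redder, punchier
print(develop(raw_pixel, "cool_true_to_life"))     # bluer, more neutral
```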

Below are two photos taken by two different smartphones:

Pic A, shot on a Tecno Spark 2, leans towards warmer colors, while Pic B, shot on the Samsung S9, leans towards cooler colors.

A (TECNO SPARK 2)
B (SAMSUNG S9)

Pixel binning

Small pixels inside image sensors do okay in bright daylight, especially if the resolution is high enough, but they do poorly in low light or at night. This is because light is the most important ingredient in a photo, and small pixels simply cannot catch enough of it in dim scenes, so the shots come out dark and noisy.

A good way to combat this is to make the pixels bigger, but remember that smartphone camera sensors are small. Some companies also like to keep the pixels small so that they can fit more of them onto a slightly larger sensor, and the bigger megapixel counts are good for marketing too.

Four pixels are joined together to act as one in a Quad Bayer sensor (Credits: Gadgetstop)

So the other way to solve this small-pixel problem is to combine several small pixels to act as one. The most common method in use is joining 4 pixels together to act as one super pixel. This way the small pixels can catch more light and give better shots, but there is a catch: pixel binning reduces the effective resolution of the camera to ¼ of its original resolution.

In pixel binning, a 64MP camera may have 64 million pixels on the image sensor, but it will output (64 ÷ 4 =) 16MP photos. A 48MP sensor can be binned to produce 12MP shots. This binning is handled by the companies that produce these camera sensors rather than by the phone makers themselves.
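Here is a minimal Python sketch of the idea, assuming the four pixel values are simply averaged (real sensors may combine the signals differently):

```python
import numpy as np

# A minimal sketch of 2x2 pixel binning: four neighbouring pixel values are
# combined (averaged here) into one bigger "super pixel", so an 8x8 crop of
# the sensor becomes a 4x4 output, i.e. 1/4 of the pixel count.
sensor = np.random.randint(0, 256, size=(8, 8)).astype(float)  # fake 8x8 readout

binned = sensor.reshape(4, 2, 4, 2).mean(axis=(1, 3))  # average each 2x2 block

print(sensor.shape, "->", binned.shape)  # (8, 8) -> (4, 4)
print(64 // 4, "MP photos from a 64 MP sensor after 4-in-1 binning")
```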

The end

This brings me to the end of Let's Talk Cameras 3. Hit me up with your questions and comment below on what you have learned. Thank you.

Please leave a comment if you have any difficulty and remember to: