LET’S TALK CAMERAS 3

Hello and welcome to part 3 of this series (Let’s Talk Cameras 3). Here we continue discussing the basics of cameras.

There are two earlier parts in this series in which we have already covered a lot. If you are a beginner, I would advise you to go read them first. You can find them here and here. You can also find out how to read camera specs here.
Before we begin, let us define the following key terms:

Definitions

Filter: A filter is a very small piece of coloured film that covers a pixel in a sensor.

CFA: A Colour Filter Array is the mosaic of coloured filters placed over the pixels of an image sensor, arranged in a particular repeating pattern.

Demosaicing (De-mo-saic-ing): This is the process that reconstructs a full colour image from the primary colours on a CFA image sensor.

RGB: stands for Red, Green and Blue. These are the primary colours of light from which other colours are derived, for example purple, pink, yellow, brown etc.

So with those out of the way, let’s talk cameras 3.

What are colour filters and why are they used?

As we already know, for a camera to take pictures, light that is reflected from the object or scene in question has to enter the camera. It is then captured by the pixels (photosites) on the image sensor. These pixels convert the light into an electrical signal which is sent to the Image Signal Processor (ISP) for processing and interpretation.

These pixels only catch light. They can tell how bright or dim the light is, but they cannot tell its colour. This is why early cameras could only shoot in black and white.

The solution used in smartphones today is simply to place a coloured filter over each individual pixel. When different colours of filters are used together to cover an image sensor, we refer to the arrangement as a colour filter array (CFA).

The filters sit on top of individual pixels and only allow a particular colour of light to enter, e.g. a red filter would only allow red light to reach the pixel, while a green filter would only allow green light through. This is what makes it possible for cameras to see colour.

How do the filters work?

These filters do not actually know which colour is red, blue or green. They simply separate light by its wavelength. Light with a wavelength of about 700nm (red) would not pass through a blue or green filter, those filters block it, but it would easily pass through a red filter. Likewise, green light (about 530nm) and blue light (about 470nm) only pass through their respective filters and are blocked by the others.
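To make the wavelength idea a bit more concrete, here is a tiny Python sketch. It is purely illustrative, real filters have smooth response curves rather than hard cut-offs, and the wavelength bands used here are just rough assumptions. It treats each filter as a band of wavelengths and checks whether a given light ray would get through:

```python
# A toy model of colour filters: each filter passes only a band of
# wavelengths (in nanometres). Real filters have smooth response curves;
# these hard cut-offs are just for illustration.

FILTER_BANDS = {
    "blue":  (450, 495),   # blue light (~470 nm) falls in this band
    "green": (495, 570),   # green light (~530 nm) falls in this band
    "red":   (620, 750),   # red light (~700 nm) falls in this band
}

def passes(filter_name: str, wavelength_nm: float) -> bool:
    """Return True if light of this wavelength gets through the filter."""
    low, high = FILTER_BANDS[filter_name]
    return low <= wavelength_nm <= high

# Red light (700 nm) only makes it through the red filter:
print(passes("red", 700))    # True
print(passes("green", 700))  # False
print(passes("blue", 700))   # False
```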

Types (or modifications) of filters

There are many variants of filters on the market today for smartphone cameras as well as professional cameras. These include the original Bayer filter (RGGB), RGBE (red, green, blue and emerald), RYYB (red, yellow, yellow and blue), CYYM (cyan, yellow, yellow and magenta), RGBW (red, green, blue and white) etc. The most commonly used filter for smartphones is the Quad Bayer filter.

Quad Bayer filter

The Quad Bayer filter was originally developed by Sony, while similar filters like the Tetracell and the 4-Cell are produced by Samsung and OmniVision respectively. It is similar to a Bayer filter, but each repeating 4x4 tile uses 4x blue, 4x red and 8x green filters, arranged in 2x2 blocks of the same colour.

It uses double the number of green filters compared to the other two colours. Why? This is because it is trying to mimic the human eye, which is more sensitive to green light.
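If it helps to picture the layout, below is a rough sketch of one repeating 4x4 tile of a Quad Bayer pattern, written out in Python with the numpy library. The exact orientation of the tile is just an assumption for illustration; counting the entries gives the 4 red, 4 blue and 8 green filters mentioned above.

```python
import numpy as np

# One 4x4 tile of a Quad Bayer CFA, built from 2x2 blocks of the same
# colour (the classic Bayer layout, with each cell duplicated into a 2x2 block).
# "R", "G", "B" mark which colour filter sits over each pixel.
quad_bayer_tile = np.array([
    ["R", "R", "G", "G"],
    ["R", "R", "G", "G"],
    ["G", "G", "B", "B"],
    ["G", "G", "B", "B"],
])

# Count the filters in one tile: 4 red, 8 green, 4 blue.
colours, counts = np.unique(quad_bayer_tile, return_counts=True)
for colour, count in zip(colours, counts):
    print(colour, count)   # B 4, G 8, R 4

# A full sensor just repeats this tile across its width and height.
sensor_cfa = np.tile(quad_bayer_tile, (2, 2))  # an 8x8 patch, as an example
```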

RYYB filters (Credits: Wikipedia)

How cameras see colour

By now, some of you may be wondering, if camera sensors only ‘see’ in RGB, how then do they see other secondary colours?

This is mostly done by the Image Signal Processor (ISP) through a process called demosaicing. Demosaicing can be defined as a digital image process used to reconstruct a full colour image from a CFA sensor.

Red, Green and Blue photos are combined to produce a composite image. (Credits: Wikipedia)

Interpolation

When a camera sensor captures an image, each pixel only records one colour value (red, green or blue). When this image gets to the ISP, the firmware (or software) in the ISP estimates the true colour of each pixel by taking the colour values of the surrounding eight pixels and calculating the average. It does this for every single pixel in the image and then combines everything to get a full-colour picture.

The eight pixels around the circled blue one have their colours analysed against the blue pixel to produce its true colour (Credits: Wikipedia)

In simpler terms, the ISP combines all three primary colours in the image to get a final multicolour end product, just as we do when mixing colours on a palette. The process of comparing and combining neighbouring colour pixels is called interpolation, while the overall action is called demosaicing.
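As a rough illustration of this neighbour averaging, here is a toy Python sketch, not any phone's real demosaicing algorithm, that fills in the missing colours of each pixel by averaging the surrounding pixels that did record that colour:

```python
import numpy as np

def simple_demosaic(raw: np.ndarray, cfa: np.ndarray) -> np.ndarray:
    """Toy neighbour-averaging demosaic.

    raw: 2-D array of sensor readings (one number per pixel).
    cfa: 2-D array of the same shape with "R", "G" or "B" per pixel.
    Returns an H x W x 3 full-colour image.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=float)
    channels = {"R": 0, "G": 1, "B": 2}

    for y in range(h):
        for x in range(w):
            for colour, c in channels.items():
                if cfa[y, x] == colour:
                    # This pixel measured this colour directly.
                    rgb[y, x, c] = raw[y, x]
                else:
                    # Average the neighbouring pixels that did measure it.
                    ys = slice(max(y - 1, 0), min(y + 2, h))
                    xs = slice(max(x - 1, 0), min(x + 2, w))
                    mask = cfa[ys, xs] == colour
                    rgb[y, x, c] = raw[ys, xs][mask].mean()
    return rgb

# Tiny 4x4 Bayer example (RGGB pattern):
cfa = np.array([["R", "G", "R", "G"],
                ["G", "B", "G", "B"],
                ["R", "G", "R", "G"],
                ["G", "B", "G", "B"]])
raw = np.random.rand(4, 4)               # stand-in for real sensor readings
print(simple_demosaic(raw, cfa).shape)   # (4, 4, 3)
```

Real demosaicing algorithms are far more sophisticated (they weight neighbours, follow edges and so on), but the idea of filling in missing colours from neighbouring pixels is the same.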

RAW

This minimally processed picture is called a RAW image, or we say the picture is in a RAW format (strictly speaking, a RAW file stores the sensor data from before demosaicing is applied). A RAW image may look bland or dull because it is not the end product. This RAW photo now undergoes further post-processing and colour correction to get the final image, which is usually in a JPG format.

Post processing

Different companies post-process RAW images differently. Some lean towards warmer colours while others prefer cooler colours; some prefer truer colours while others prefer high saturation.
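To give a feel for what "warmer" and "cooler" tuning means in practice, here is a minimal Python sketch, not any manufacturer's actual pipeline, that tints a photo by scaling its red and blue channels; boosting red warms the image, boosting blue cools it. The gain values are arbitrary examples.

```python
import numpy as np

def apply_tint(rgb: np.ndarray, red_gain: float, blue_gain: float) -> np.ndarray:
    """Scale the red and blue channels of an H x W x 3 image (values 0-1)."""
    out = rgb.astype(float).copy()
    out[..., 0] *= red_gain   # channel 0 = red
    out[..., 2] *= blue_gain  # channel 2 = blue
    return np.clip(out, 0.0, 1.0)

image = np.random.rand(4, 4, 3)   # stand-in for a demosaiced photo
warm = apply_tint(image, red_gain=1.10, blue_gain=0.95)  # leans warmer
cool = apply_tint(image, red_gain=0.95, blue_gain=1.10)  # leans cooler
```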

Below are two photos taken by two different smartphones:

Pic A, shot on a Tecno Spark 2, leans towards warmer colours, while Pic B, shot on the Samsung S9, leans towards cooler colours.

A (TECNO SPARK 2)
B (SAMSUNG S9)

PIXEL BINNING

Small pixels inside image sensors do okay in bright daylight, especially if the resolution is high enough, but they do poorly in low light or at night. This is because each tiny pixel catches very little light, and light is the most important ingredient in a photo: if there isn’t enough light, you won’t get good photos.

A good way to combat this is to make the pixels bigger. But remember that smartphone camera sensors are small, and some companies also like to keep the pixels small so that they can fit more of them onto a slightly larger sensor; high megapixel counts are good for marketing too.

Four pixels are joined together to act as one in a Quad Bayer sensor (Credits: Gadgetstop)

So the other way to solve this small-pixel problem is to combine several small pixels to act as one. The most common method in use is joining 4 pixels together to act as one super pixel. This way the small pixels can collectively catch more light and produce better shots, but there is a catch: pixel binning reduces the effective resolution of the camera to ¼ of its original resolution.

In pixel binning, a 64MP camera may actually have 64 million pixels on the image sensor, but it will output (64 ÷ 4 =) 16MP photos. A 48MP sensor can be binned to produce 12MP shots. This binning is actually implemented by the companies that produce these camera sensors, not the phone makers themselves.
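Here is a rough sketch of the 4-in-1 binning arithmetic, assuming the raw frame is just a 2-D array of brightness values: every 2x2 block is averaged into one super pixel, which is how a 64MP frame comes out as 16MP. (Real sensors may sum the collected charge in hardware rather than averaging values in software.)

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Average every 2x2 block of pixels into one super pixel.

    A frame of H x W pixels comes out as (H/2) x (W/2), i.e. 1/4 the
    resolution, which is how a 64MP sensor ends up producing 16MP photos.
    """
    h, w = raw.shape
    blocks = raw.reshape(h // 2, 2, w // 2, 2)  # group rows and columns in pairs
    return blocks.mean(axis=(1, 3))             # average each 2x2 block

frame = np.random.rand(8, 8)             # stand-in for an 8x8-pixel raw frame
binned = bin_2x2(frame)
print(frame.shape, "->", binned.shape)   # (8, 8) -> (4, 4)
```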

The end

This brings me to the end of Let’s Talk Cameras 3. Hit me up with your questions and comment below with what you have learnt. Thank you.
