Hello and welcome to part 3 of this series (Let’s Talk Cameras 3). In this instalment, we continue discussing the basics of cameras.
The first two parts of this series already covered a lot of ground. If you are a beginner, I would advise you to read them first. You can find them here and here. You can also find out how to read camera specs here.
Before we begin, let us define the following key terms:
Definitions
Filter: A filter is a very small piece of colored film that covers a pixel in a sensor.
CFA: A Colour Filter Array is a mosaic of colored filters laid over the pixels of an image sensor. Different CFA types use different combinations of colors.
Demosaicing (De-mo-saic-ing): This is the process that reconstructs a full-color image from the primary colors on a CFA image sensor.
RGB: stands for Red, Green, Blue. These are the primary colors of light, from which other colors (such as purple, pink, yellow, and brown) are derived.
So with those out of the way, let’s talk cameras 3.
What are color filters and why are they used?
As we already know, for a camera to take pictures, light reflected from the object or scene in question has to enter the camera. It is then captured by the pixels (photosites) on the image sensor. These pixels convert the light into an electrical signal, which is sent to the Image Signal Processor (ISP) for processing and interpretation.
These pixels only catch light. They can tell how bright or dim the light is, but they cannot see color. This is why early cameras were black and white.
The solution used in smartphones today is simply to place a colored filter over each individual pixel. When filters of different colors are used together to cover an image sensor, we refer to the arrangement as a color filter array (CFA).
The filters are placed on top of individual pixels, and each one only allows a particular type of light to enter its pixel, e.g. a red filter would only allow red light to enter the pixel, while a green filter would only allow green light to reach the pixel. This is what helps cameras see color.
How do the filters work?
These filters do not ‘know’ which color is red, blue, or green; they can only separate light by its wavelength. Light with a wavelength of about 700nm (red) would not pass through a blue or green filter, which absorb it, but it would easily pass through a red filter. Likewise, green light (about 530nm) and blue light (about 470nm) would only pass through their respective filters and would not pass through a red one.
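To make the idea concrete, here is a minimal sketch in Python of a filter as a wavelength gate. The pass bands below are simplified illustrative ranges, not the exact transmission curves of real dye filters, and the function name is my own invention for illustration.

```python
# A toy model of a color filter as a wavelength gate.
# NOTE: these pass bands are simplified illustrative ranges,
# not the real transmission curves of actual dye filters.
FILTER_BANDS_NM = {
    "red":   (620, 750),
    "green": (495, 570),
    "blue":  (450, 495),
}

def passes_filter(wavelength_nm: float, filter_color: str) -> bool:
    """Return True if light of this wavelength makes it through the filter."""
    low, high = FILTER_BANDS_NM[filter_color]
    return low <= wavelength_nm <= high

# Red light (~700nm) only makes it through the red filter:
for color in FILTER_BANDS_NM:
    print(color, passes_filter(700, color))  # red: True; green, blue: False
```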
Types (or modifications) of filters
There are many variants of filters on the market today, for smartphone cameras as well as professional cameras. These include the original Bayer filter (RGB), RGBE (red, green, blue, and emerald), RYYB (red, yellow, yellow, and blue), CYYM (cyan, yellow, yellow, and magenta), RGBW (red, green, blue, and white), etc. The most commonly used filter in smartphones is the Quad Bayer filter (RGBG).
Quad Bayer filter
The Quad Bayer filter was originally developed by Sony, while similar filters like the Tetracell and the 4-cell are produced by Samsung and Omnivision respectively. It is similar to a Bayer filter, but each 4×4 tile uses 4 blue, 4 red, and 8 green filters, arranged in 2×2 blocks of the same color.
It uses double the number of green filters compared to the other two colors. Why? Because it is trying to mimic the human eye, which is most sensitive to green light.
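If you want to visualise the layout, here is a small sketch (using NumPy, with names of my own choosing) that builds one 4×4 Quad Bayer tile by expanding each cell of the classic Bayer tile into a 2×2 block of the same color:

```python
import numpy as np

# One classic Bayer tile: green/red on the top row, blue/green below.
bayer_2x2 = np.array([["G", "R"],
                      ["B", "G"]])

# Quad Bayer expands each cell into a 2x2 block of the same color,
# giving 8 green, 4 red, and 4 blue filters in every 4x4 tile.
quad_bayer_4x4 = bayer_2x2.repeat(2, axis=0).repeat(2, axis=1)
print(quad_bayer_4x4)
# [['G' 'G' 'R' 'R']
#  ['G' 'G' 'R' 'R']
#  ['B' 'B' 'G' 'G']
#  ['B' 'B' 'G' 'G']]

# Repeating this tile across the whole sensor gives the full CFA:
sensor_cfa = np.tile(quad_bayer_4x4, (2, 2))  # a tiny 8x8 "sensor"
```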
How cameras see color
By now, some of you may be wondering: if camera sensors only ‘see’ in RGB, how do they see all the other colors?
This is mostly done by the Image Signal Processor (ISP) through a process called demosaicing. Demosaicing can be defined as a digital image process used to reconstruct a full-color image from a CFA sensor.
Interpolation
When a camera sensor captures an image, it only captures light in RGB. When this image gets to the ISP, the firmware (or software) calculates the true color value of each pixel by taking the color values of the surrounding eight pixels and averaging them. It does this for every single pixel in the image and then combines everything to get a sample picture.
In simpler terms, the ISP combines all three primary colors in the image to get a final multicolor end product, just as we do when mixing colors on a palette. The process of comparing and combining these color pixels is called interpolation, while the overall action is called demosaicing.
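Here is a minimal sketch of that neighbor-averaging idea in Python. Real ISPs use much smarter, edge-aware interpolation; this naive version (with a made-up function name, assuming a classic Bayer CFA) simply averages whatever same-colored neighbors it finds in the 3×3 window around each pixel:

```python
import numpy as np

def naive_demosaic(raw: np.ndarray, cfa: np.ndarray) -> np.ndarray:
    """Reconstruct an RGB image from single-channel raw data and its CFA.

    Each missing color channel at a pixel is estimated by averaging the
    raw values of surrounding pixels whose filter has that color.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=float)
    channels = {"R": 0, "G": 1, "B": 2}
    for y in range(h):
        for x in range(w):
            for color, c in channels.items():
                if cfa[y, x] == color:
                    rgb[y, x, c] = raw[y, x]  # measured directly
                else:
                    # Average matching neighbors in the 3x3 window.
                    vals = [raw[j, i]
                            for j in range(max(0, y - 1), min(h, y + 2))
                            for i in range(max(0, x - 1), min(w, x + 2))
                            if cfa[j, i] == color]
                    rgb[y, x, c] = sum(vals) / len(vals) if vals else 0.0
    return rgb

# Tiny 4x4 Bayer CFA with made-up raw readings:
cfa = np.tile(np.array([["G", "R"], ["B", "G"]]), (2, 2))
raw = np.random.default_rng(0).uniform(0, 255, (4, 4))
print(naive_demosaic(raw, cfa).shape)  # (4, 4, 3)
```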
RAW
The picture produced at this stage can be saved as a RAW image, or a picture in a RAW format. (Strictly speaking, a RAW file stores the sensor data with little or no processing applied.) A RAW photo may look bland or dull because it is not the end product; it undergoes further post-processing and colour correction to get the final image, which is usually in a JPG format.
Post-processing
Different companies post-process RAW images differently. Some lean towards warmer colors while others prefer cooler ones; some aim for true-to-life colors while others favor high saturation.
Below are two photos taken by two different smartphones:
Pic A, shot on a Tecno Spark 2, leans towards warmer colors, while Pic B, shot on the Samsung S9, leans towards cooler colors.
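As a rough illustration of how such tuning can work, here is a toy white-balance-style tweak in Python (the function and the gain values are my own invention, not any manufacturer’s actual pipeline) that pushes an RGB image warmer or cooler by scaling the red and blue channels in opposite directions:

```python
import numpy as np

def apply_tone(rgb: np.ndarray, warmth: float) -> np.ndarray:
    """Nudge an 8-bit RGB image warmer (warmth > 1) or cooler (warmth < 1)
    by boosting one end of the red/blue balance and cutting the other."""
    out = rgb.astype(float)
    out[..., 0] *= warmth   # red channel
    out[..., 2] /= warmth   # blue channel
    return np.clip(out, 0, 255).astype(np.uint8)

# warm = apply_tone(image, 1.1)  # Pic A style: warmer
# cool = apply_tone(image, 0.9)  # Pic B style: cooler
```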
Pixel binning
Small pixels inside image sensors do okay in bright daylight, especially if the resolution is high enough, but they do poorly in low light or at night. This is because light is the most important ingredient in a photo, and small pixels simply cannot gather enough of it in a dark scene.
A good way to combat this is to make the pixels bigger. But remember that smartphone camera sensors are small, and some companies prefer to keep the pixels small so they can fit more of them onto a slightly larger sensor (high megapixel counts are also good for marketing).
So the other way to solve the small-pixel problem is to combine several small pixels to act as one. The most common method in use is joining 4 pixels together to act as one super pixel. This way, the small pixels can catch more light and give better shots, but there is a catch: pixel binning reduces the effective resolution of the camera to ¼ of its native resolution.
In pixel binning, a 64MP camera may have 64 million pixels on the image sensor, but it will output 16MP photos (64 ÷ 4 = 16). A 48MP sensor can be binned to produce 12MP shots. The binning itself is built into the sensors by the companies that produce them, not added by the phone makers themselves.
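Here is a minimal sketch of 2×2 binning in Python. I average the four values for simplicity; real sensors often sum the collected charge in hardware instead, but the resolution math is the same: four pixels in, one super pixel out.

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Combine each 2x2 group of pixels into one 'super pixel'.

    Averaging is used here for simplicity; real sensors often sum the
    charge in hardware. Either way, a sensor with 4x the pixels outputs
    an image with 1/4 the resolution: e.g. 64MP in, 16MP out.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A scaled-down example: an 8x8 "sensor" bins down to a 4x4 output.
raw = np.arange(64, dtype=float).reshape(8, 8)
print(bin_2x2(raw).shape)  # (4, 4)
```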
The end
This brings me to the end of Let’s Talk Cameras 3. Hit me up with your questions, and comment below on what you have learned. Thank you.
Please leave a comment if you have any difficulty and remember to:
- Subscribe to our YouTube channel
- Follow on Facebook
- Join our Telegram community
- Participate on Reddit
- Find us on Quora
- Follow us on Instagram
- Check out the Clubhouse as well
It’s always a delight coming back to read your posts.
PS: What manual settings would you recommend for an Umidigi A3S smartphone?
I’d simply love to improve my picture-taking skills 🙂
That would depend on the conditions of the day. The settings for daylight photography are different from those used at night.
Okay….
What will be the best setting for daylight photography?
Shalom.
I would go with an ISO of 100 and a shutter speed of 1/250s for daylight pics.
For night pics, I would go with a shutter speed of around 8s and an ISO of 3200.
Leave everything else on auto settings.