LET’S TALK CAMERAS 4
Welcome to the final part of this series, Let’s talk cameras 4. If you have not read parts one, two, and three, I would strongly suggest that you do so.
Prelude
We started by defining smartphone cameras. We then talked about the parts of the camera and how they helped the camera capture light and convert it into the images that we see. Finally, we looked at pixels, megapixels, filters, and demosaicing.
When you take a photo, the electronic shutter activates the pixels on the sensor, making them sensitive to light. Light from the scene travels through the lenses and hits the pixels on the sensor.
Each pixel captures only the colour allowed through by the filter on top of it. The sensor hands this mosaic of single-colour values, the RAW image, to the ISP, which performs demosaicing. The camera software then goes to work on the result to produce the final JPEG.
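To make the demosaicing step concrete, here is a toy sketch of bilinear demosaicing in Python. It assumes an RGGB Bayer layout and is purely an illustration; real ISPs use far more sophisticated algorithms:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Toy bilinear demosaicing of an RGGB Bayer mosaic.

    `raw` is a 2-D array in which each pixel recorded only one
    colour: R at (even, even), B at (odd, odd), G elsewhere.
    Each output channel is the average of the nearby pixels
    that actually sampled that colour.
    """
    h, w = raw.shape
    r_mask = np.zeros((h, w), dtype=bool)
    b_mask = np.zeros((h, w), dtype=bool)
    r_mask[0::2, 0::2] = True
    b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            for c, mask in enumerate((r_mask, g_mask, b_mask)):
                # Neighbours (3x3 window) that sampled channel c.
                vals = [raw[j, i]
                        for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))
                        if mask[j, i]]
                rgb[y, x, c] = sum(vals) / len(vals)
    return rgb
```

On a perfectly flat grey scene, every interpolated channel comes out equal to the recorded value, which is a quick sanity check that the averaging is doing what we claim.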
Companies take different approaches to producing the end photo. Some favor warmer colors while others go for cooler colors. Some favor a higher saturation while others prefer to keep the colors as natural as possible. In the end, it all comes down to what you want as a consumer.
Let’s talk cameras 4
In this post, let’s talk cameras 4, we’re going to be looking at image stabilization and computational photography.
IMAGE STABILIZATION
When taking a photo, it is important to hold the camera as still as possible. That is not always possible, and camera shake leads to blurred (and ruined) photos. Manufacturers have therefore developed ways to counteract shake. There are generally three methods in use today to keep images stable even while the camera is shaking:
i. Optical image stabilization (OIS):
Optical image stabilization is a form of hardware stabilization in which the lens moves in the direction opposite to the camera shake, so it holds roughly the same position relative to the scene even as you move. In OIS, the lens is suspended (or floating) on small motors; gyroscopes sense movement, and the motors adjust the lens accordingly on the vertical or horizontal plane (X and Y axes). There are several types of OIS, e.g. 2-axis, 3-axis, 5-axis, etc.
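The idea behind OIS can be sketched as a simple feedback loop: whatever shake the gyroscope measures, the actuator pushes the lens the opposite way. This is a deliberately idealized sketch with made-up numbers; real OIS controllers have to deal with physics, latency, and actuator limits:

```python
def ois_correct(shake_offsets, gain=1.0):
    """Idealised OIS loop along one axis.

    `shake_offsets` is a hypothetical sequence of gyro-measured
    displacements. The actuator shifts the lens by the opposite
    amount (scaled by `gain`); what remains is the residual image
    motion the correction failed to cancel.
    """
    residual = []
    for s in shake_offsets:
        lens_shift = -gain * s      # motor moves lens against the shake
        residual.append(s + lens_shift)
    return residual

shake = [5.0, -3.0, 8.0, -6.0, 2.0]   # made-up micrometre offsets
print(ois_correct(shake))             # perfect gain: residual is all zero
print(ois_correct(shake, gain=0.9))   # imperfect gain: some shake remains
```

The `gain` parameter is a stand-in for how faithfully the real actuator tracks the measured motion; no physical system hits exactly 1.0.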
ii. Electronic image stabilization (EIS):
EIS is also sometimes known as digital image stabilization (DIS). EIS is software-based: the camera's software uses the device's motion sensors (accelerometer and gyroscope) to detect small movements. It then takes multiple shots in quick succession, aligns and combines them, and crops away the shifting borders to get the sharpest possible image. For video, the software shifts each frame within the captured area to compensate for shakes.
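Here is a minimal sketch of the EIS idea for video frames: estimate the shift between two frames, then shift and crop the new frame to cancel it. It is a brute-force toy on tiny greyscale arrays, not what any phone actually ships:

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    """Brute-force motion estimate: try every small (dy, dx) and keep
    the one that best aligns `curr` with `prev` (minimum mean absolute
    difference over the overlapping central region)."""
    h, w = prev.shape
    m = max_shift
    best, best_err = (0, 0), np.inf
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            a = prev[m:h - m, m:w - m]
            b = curr[m + dy:h - m + dy, m + dx:w - m + dx]
            err = np.abs(a - b).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def stabilise(prev, curr, max_shift=3):
    """Shift `curr` back by the estimated motion and crop the border —
    the cropping is why EIS footage has a slightly narrower view."""
    dy, dx = estimate_shift(prev, curr, max_shift)
    m = max_shift
    h, w = curr.shape
    return curr[m + dy:h - m + dy, m + dx:w - m + dx]
```

Feeding it a frame and a shifted copy of the same frame recovers the shift exactly, which is the whole trick: once the motion is known, compensating for it is just a crop.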
iii. Hybrid image stabilization (HIS):
HIS is a combination of both OIS and EIS on the same device to get the sharpest possible images.
OIS produces better results than EIS, but it is also more expensive, and a physical blow to the phone may compromise the OIS system built into the camera module. EIS is not as accurate as OIS, but it is much cheaper and can be improved through software updates. HIS is the best of both worlds, but it is very expensive.
Alternate Image Stabilization
If your phone has none of these, you could get a tripod or gimbal to avoid blur in your photos. It is important to note that OIS, EIS, and HIS are only good for small shakes and jerks; they fail if you move too much or too fast.
COMPUTATIONAL PHOTOGRAPHY
Every camera has two basic operational parts, the hardware and the software (or firmware).
All of the processing required to capture a photo is controlled by software. Everything from reading the captured light off the sensor to demosaicing the RAW data is software-controlled, and even the conversion of RAW images to JPEG photos is carried out by software.
Software is so important in smartphone cameras these days that bad software can ruin a great camera.
The Redmi Note 8 has a quad camera setup (Credits: GSMARENA)
Most smartphones these days come with multi-lens systems. This is done to replicate some of the photographic effects of DSLR (Digital Single-Lens Reflex) cameras.
A lot of phones have depth sensors for portrait photography (bokeh shots), telephoto cameras for high-resolution zoom, ultrawide cameras for wider-angle shots, etc.
These have helped a lot of people take stunning photographs that would give professional photographers a run for their money.
One company decided to buck the trend. Instead of going with a multi-lens system, Google decided to use only one lens. Was this a handicap? No. Google Pixel phones achieved almost perfect portrait shots, hi-res zoom, and Night Sight without a dedicated night sensor. How did they manage all this without the extra lenses? Enter computational photography.
Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes. This simply means that the device uses software to achieve different types of photography instead of using hardware (e.g. extra lenses).
Computational photography uses machine learning and Artificial Intelligence (AI) algorithms. These automatically adjust specifics like HDR (high dynamic range), ISO (light sensitivity), shutter speed, and focus, so that even as an amateur you can simply point, shoot, and get a great photo.
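One classic computational-photography trick, burst merging, is easy to sketch: take several noisy low-light shots in quick succession and average them, which cuts the noise roughly by the square root of the number of shots. The simulation below uses a made-up flat grey scene and synthetic sensor noise purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical scene: a flat grey wall with true brightness 0.5.
true_scene = np.full((64, 64), 0.5)

def capture(scene, noise=0.1):
    """One noisy low-light exposure: the scene plus random sensor noise."""
    return scene + rng.normal(0.0, noise, scene.shape)

# A single shot is noisy; averaging a burst of 16 shots reduces the
# noise by roughly a factor of 4 — the core trick behind burst-based
# night modes.
single = capture(true_scene)
burst = np.mean([capture(true_scene) for _ in range(16)], axis=0)

err_single = np.abs(single - true_scene).mean()
err_burst = np.abs(burst - true_scene).mean()
print(f"single-shot error: {err_single:.4f}")
print(f"16-shot burst error: {err_burst:.4f}")
```

Real night modes also have to align the frames (handshake moves the scene between shots, as in the EIS section above) and handle moving subjects, which is where the heavy machine-learning work comes in.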
Google Camera – GCam
Google (with Google Camera, or GCam) and Apple (with Deep Fusion) are the two main leaders in the field of computational photography.
That said, it is only right to say Google is the leader at this point. Until recently, it used only one camera to achieve many types of stunning photographs, relying purely on software for the effects, and even "faking" them when need be.
Google’s camera software has been so successful that it is the most sought-after camera port today. It has been modded and ported, both officially and unofficially, to a lot of phones, provided those phones have the hardware to support it.
GCam can instantly upgrade the performance of your native camera by a country mile. This is especially notable on a midrange device. It is a must-have for all smartphone photography enthusiasts.
A GCam Mod (BSG v6.0) on a Redmi Note 7 showing various modes
With that, we have come to the end of this post, Let’s talk cameras 4, and of our main introductory series on cameras.
Please leave a comment if you have any difficulty and remember to:
- Subscribe to our YouTube channel
- Follow on Facebook
- Follow on WhatsApp
- Join our Telegram community
- Participate on Reddit
- Find us on Quora