We started off by defining smartphone cameras, then discussed the parts of the camera and how they help it capture light and convert it into the images we see. We also covered pixels, megapixels, filters and demosaicing.
When you take a photo, the electronic shutter turns on the pixels in the sensor, and light travels through the lenses and hits the pixels. Each pixel captures the light corresponding to the colour filter on top of it. The data then undergoes demosaicing, and the resulting RAW image is handed over to the ISP, which allows the camera software to work on it to produce the final JPEG. Different companies take different approaches to producing the end photo. Some favour warmer colours while others go for cooler colours; some favour higher saturation while others prefer to keep the colours as natural as possible. In the end, it comes down to what you want as a consumer.
When taking a photo, it is important to stay as still as possible. This is not always possible, and camera shake leads to blurred (and ruined) photos. This led manufacturers to find ways to counteract camera shake and mitigate its effects on a photo. There are generally three methods in use today to keep images stable even while the camera is shaking:
i. Optical image stabilization (OIS):
Optical image stabilization is a form of hardware stabilization in which the lens moves in the opposite direction of the camera shake so that it maintains roughly the same position even as you move. In OIS, the lens is suspended (or floating) on small motors, and gyroscopes sense movement and adjust the lens accordingly on the vertical or horizontal plane (X and Y axes). There are several types of OIS, e.g. 2-axis, 3-axis and 5-axis.
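To make the idea concrete, here is a purely illustrative sketch of the OIS feedback loop: the gyroscope reports how fast the phone is rotating, and the lens is nudged the opposite way. The function name, units and gain here are all hypothetical; a real OIS controller runs in dedicated hardware and firmware, not application code.

```python
def ois_correction(gyro_rate_x, gyro_rate_y, dt, gain=1.0):
    """Given angular shake rates from the gyroscope, return the lens
    shift (in arbitrary actuator units) that counters the motion."""
    # Integrate the angular rate over the sample interval to get the
    # shake during this tick, then move the lens the opposite way
    # on the X/Y plane.
    shift_x = -gain * gyro_rate_x * dt
    shift_y = -gain * gyro_rate_y * dt
    return shift_x, shift_y

# A rightward shake produces a leftward lens correction, and vice versa.
print(ois_correction(2.0, -1.0, 0.5))  # (-1.0, 0.5)
```

The essential point is the sign flip: the lens moves against the shake so the projected image stays put on the sensor.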
ii. Electronic image stabilization (EIS):
EIS is also sometimes known as Digital image stabilization (DIS). It is software-based image stabilization in which the camera software uses the device's accelerometer to detect small movements. It then takes multiple shots in quick succession, combines the images, and does some cropping to get the sharpest possible result. For videos, the software shifts the picture within the frame to compensate for shakes.
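The core EIS trick for video, shifting a crop window against the detected shake, can be sketched in a few lines. Frames here are plain 2D lists and the shake offsets are assumed to come from the motion sensors; this is a toy illustration, not a real implementation, which would work on full sensor data with sub-pixel warping.

```python
def stabilize(frame, shake_dx, shake_dy, margin):
    """Crop a window from `frame`, offset by the detected shake, so the
    subject stays in the same place from frame to frame. The `margin`
    of spare pixels around the crop is what EIS sacrifices."""
    h, w = len(frame), len(frame[0])
    out_w, out_h = w - 2 * margin, h - 2 * margin
    # Slide the crop window by the shake, clamped so it stays in frame.
    x0 = max(0, min(2 * margin, margin + shake_dx))
    y0 = max(0, min(2 * margin, margin + shake_dy))
    return [row[x0:x0 + out_w] for row in frame[y0:y0 + out_h]]

# A 4x4 frame whose pixel values encode their row/column position.
frame = [[r * 10 + c for c in range(4)] for r in range(4)]
print(stabilize(frame, 0, 0, 1))  # [[11, 12], [21, 22]]
print(stabilize(frame, 1, 0, 1))  # [[12, 13], [22, 23]]
```

This is also why EIS footage has a slightly narrower field of view than the sensor allows: the stabilization margin is cropped away.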
iii. Hybrid image stabilization (HIS):
HIS is a combination of both OIS and EIS on the same device to get the sharpest possible images.
OIS produces better results than EIS, but it is also more expensive, and a physical blow to the phone may compromise the OIS system built into the camera module. EIS is not as accurate as OIS, but it is much cheaper and can be improved through software updates. HIS is the best of both worlds, but it is also expensive. If your phone has none of these, you could get a tripod or gimbal to avoid blur in your photos. It is important to note that OIS, EIS and HIS are only good for small shakes and jerks; they fail if you move too much and too fast.
Every camera has two basic operational parts: the hardware and the software (or firmware). Once the camera has captured light and the pixels have transduced (converted) it into electrical signals, the demosaicing algorithm that carries out the colour interpolation and produces the RAW image is a form of software. The process of converting RAW images to JPEG photos is also carried out by software. Software is very important in smartphone cameras these days, as bad software can ruin a great camera.
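As a taste of what that demosaicing software does, here is a hedged sketch of the simplest approach, bilinear interpolation: a pixel sitting under a red or blue filter never measured green, so its missing green value is estimated by averaging its four green neighbours. Real ISPs use far more sophisticated, edge-aware algorithms; this is only the textbook baseline.

```python
def green_at(mosaic, r, c):
    """Estimate the green value at a red or blue site of a Bayer mosaic
    by averaging the four orthogonal neighbours (interior pixels only).
    In an RGGB pattern, those four neighbours are all green sites."""
    return (mosaic[r - 1][c] + mosaic[r + 1][c] +
            mosaic[r][c - 1] + mosaic[r][c + 1]) / 4

# Toy 3x3 patch: the centre site measured red (99); its orthogonal
# neighbours measured green.
mosaic = [[0, 10, 0],
          [20, 99, 30],
          [0, 40, 0]]
print(green_at(mosaic, 1, 1))  # 25.0
```

The same averaging idea, applied per channel across the whole mosaic, is what turns one-colour-per-pixel sensor data into a full-colour RAW image.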
The Redmi Note 8 has a quad camera set up (Credits: GSMARENA)
Most smartphones these days come with multi-lens systems in order to achieve some DSLR (Digital Single-Lens Reflex) camera photographic effects. A lot of cameras have depth sensors for portrait photography (bokeh shots), telephoto cameras (for high-resolution zoom), ultrawide sensors (for wider-angle shots), etc. These have let a lot of people take stunning photographs that would give professional photographers a run for their money.
The Google Pixel 3 uses only a single lens to achieve stunning photographs (Credits: GSMARENA)
One company decided to buck the trend: instead of going with a multi-lens system, it decided to use only one lens, and yet achieved almost perfect portrait shots, hi-res zoom and Night Sight without a dedicated night sensor. How did they manage this without the extra lenses? Well, enter Computational Photography.
Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes. This simply means that the device uses software to achieve different types of photography instead of relying on hardware (e.g. extra lenses).
Computational photography uses machine learning and Artificial Intelligence (AI) algorithms to adjust specifics like HDR (high dynamic range), ISO (light sensitivity), shutter speed and focus so that even as an amateur, you can simply point, shoot and get a great photo.
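One way to see "software instead of hardware" in action: many computational pipelines shoot a burst of frames and merge them, which reduces noise without any extra lens or larger sensor. The sketch below only averages frames that are assumed to be already aligned; real systems such as Google's HDR+ also align, weight and tone-map each frame, so treat this as a toy illustration of the principle.

```python
def merge_burst(frames):
    """Average a burst of aligned frames pixel by pixel. Random sensor
    noise partially cancels across frames, while the real scene detail
    is the same in every frame and so is reinforced."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

# Three noisy 1-pixel "frames" of the same scene average out to the
# underlying value.
print(merge_burst([[[10]], [[20]], [[30]]]))  # [[20.0]]
```

This single idea, trading many cheap exposures for one good one, underpins night modes, HDR merging and much of smartphone zoom.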
Google (Google Camera, or GCam) and Apple (Deep Fusion) are the two main players in the field of computational photography, although Google is the leader at this point. Until recently, Google used only one camera to achieve different types of stunning photographs. It simply uses software to achieve these effects, and sometimes "fakes" them if need be.
Google’s camera software has been so successful that it is the most sought-after camera port today. It has been modded and ported, both officially and unofficially, to a lot of phones that have the hardware to support it. GCam instantly upgrades the performance of your native camera by a country mile, especially if you use a midrange device. It is a must-have for all smartphone photography enthusiasts.
A GCam Mod (BSG v6.0) on a Redmi Note 7 showing various modes
With that, we have come to the end of our main introductory series on cameras. What are your thoughts? If there are any questions, I will try to reply as soon as is humanly possible. Thank you.