How AI scene optimization works on your phone’s camera

If you have a flagship camera phone, you are probably wondering how it takes such amazing photos. Phones brighten up shadows, blur backgrounds for DSLR-like portraits, boost colors, and generally add spice to images in ways that traditional cameras don’t.

Much of this comes down to AI scene optimization, which you’ll find to some extent on most smartphones today – and at its best on the best smartphones around.

Phones with cameras are smart; after all, they are minicomputers. The chip that powers your phone delivers processing power that we couldn’t have dreamed of decades ago.

Smartphone makers combine this intelligence with the little camera module on the back of your phone to essentially give it artificial eyes. With these eyes, your phone can understand what you’re shooting, figure out what to optimize, and prepare your final Instagram photo without the need for hours of retouching.

More impressively, in some scenes a smartphone camera can outperform a DSLR costing over $1,000, thanks to all that intelligence and processing power crammed into its chipset and its smart HDR effects.

In short, it’s a story of how adversity – tiny smartphone sensors – spurred the next evolution in computational photography.

What is AI Scene Optimization?

The clue is in the name. AI Scene Optimization is the process by which your phone optimizes a photo you take based on the scene you capture.

To walk through what happens when you take a photo: when you point your camera phone at a subject, light passes through the lens and falls on the sensor. That light is processed into an image by the Image Signal Processor (ISP), which is usually part of your smartphone’s chipset, whether it’s a Qualcomm Snapdragon 888 or an Apple A11 Bionic.

ISPs are not new. They’ve been used in digital cameras for years, doing the same basic things they now do on smartphones.

Their tasks include generating the preview image, reducing noise, and adjusting exposure, white balance, and so on. What sets today’s smartphones apart is how smart these ISPs are.
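To make those ISP stages concrete, here is a toy sketch in Python of a classic processing chain – white balance, noise reduction, auto exposure – run on a tiny one-dimensional "sensor readout". Real ISPs work on raw Bayer data in dedicated hardware; the function names and values here are purely illustrative.

```python
# A toy sketch of classic ISP stages on a tiny grayscale "readout"
# (a list of 0-255 pixel values). Illustrative only, not a real ISP.

def white_balance(pixels, gain):
    """Scale every pixel by a gain factor, clipping to the 0-255 range."""
    return [min(255, int(p * gain)) for p in pixels]

def reduce_noise(pixels):
    """Naive noise reduction: average each pixel with its neighbours."""
    out = []
    for i in range(len(pixels)):
        neighbours = pixels[max(0, i - 1):i + 2]
        out.append(sum(neighbours) // len(neighbours))
    return out

def auto_expose(pixels, target_mean=128):
    """Nudge overall brightness toward a target mean level."""
    mean = sum(pixels) / len(pixels)
    gain = target_mean / mean if mean else 1.0
    return [min(255, int(p * gain)) for p in pixels]

readout = [40, 42, 200, 44, 41, 39]   # raw-ish sensor values
image = auto_expose(reduce_noise(white_balance(readout, 1.1)))
print(image)
```

The order matters: noise is smoothed before exposure is lifted, so the gain doesn’t amplify grain – the same reason real ISPs denoise early in the pipeline.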

For starters, with improved computing capabilities, camera phone ISPs are better able to understand what you are shooting. On an old DSLR or compact digital camera, you might have had to dial in a scene mode manually; now smartphones can do it on their own. That’s just scratching the surface, though.

Samsung Galaxy S21

(Image credit: Samsung)

With today’s AI scene optimization, a camera phone can understand multiple elements of your image and adjust the processing of each in very specific ways.

For example, if you take a picture of a person in front of a grassy field with a blue sky in the frame, your phone’s ISP might brighten their face, since that is probably the subject, boost the greens of the grass to make it look richer, enhance the blues of the sky separately, and, depending on your phone, may even soften the background slightly to draw focus to your subject.
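That per-region tuning can be sketched in a few lines, assuming a scene classifier has already labeled each pixel as "face", "grass", or "sky" – the labels, pixel values, and adjustment amounts below are invented for illustration, not taken from any real phone’s ISP.

```python
# A minimal sketch of per-region scene optimization. Pixels are (r, g, b)
# tuples; labels come from a hypothetical segmentation step. All
# adjustment values are illustrative.

ADJUSTMENTS = {
    "face":  lambda r, g, b: (min(255, r + 20), min(255, g + 20), min(255, b + 20)),  # brighten subject
    "grass": lambda r, g, b: (r, min(255, int(g * 1.3)), b),                          # richer greens
    "sky":   lambda r, g, b: (r, g, min(255, int(b * 1.2))),                          # deeper blues
}

def optimize_scene(pixels, labels):
    """Apply a label-specific adjustment to every pixel."""
    return [ADJUSTMENTS.get(label, lambda r, g, b: (r, g, b))(*px)
            for px, label in zip(pixels, labels)]

pixels = [(180, 150, 140), (60, 120, 50), (90, 130, 200)]
labels = ["face", "grass", "sky"]
print(optimize_scene(pixels, labels))
# → [(200, 170, 160), (60, 156, 50), (90, 130, 240)]
```

The key idea is that each region gets its own recipe: a global saturation boost would turn skin tones orange, but segmenting first lets the greens and blues get pushed without touching the face.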

Many smartphones today go even further. Sony’s high-end smartphones, for example, have pet eye tracking, while Huawei phones have automatic night-time detection that works so well on its flagships it can brighten a virtually pitch-black scene, turning night into day.

iPhones have this technology too, alongside Portrait mode, which gives you fine control over background blur and lighting effects when capturing a photo of a person, while Google’s Pixel phones are renowned for their astrophotography mode.

While not strictly AI scene detection, when turned on it looks at the night sky and intelligently assesses how long to keep the digital shutter open so that it can capture stars and even galaxies.
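One common trick behind night and astrophotography modes is frame stacking: rather than one long exposure, the phone captures many short frames and averages them, which cancels out random sensor noise. The sketch below simulates that with made-up pixel values; real pipelines also align frames to compensate for hand shake and the sky’s rotation, which is omitted here.

```python
# A rough sketch of the frame-stacking idea behind night modes.
# Frames are lists of 0-255 pixel values; all numbers are illustrative.

import random

random.seed(0)  # reproducible "sensor noise" for the demo

def capture_frame(scene, noise=20):
    """Simulate one short, noisy exposure of a dim scene."""
    return [max(0, min(255, p + random.randint(-noise, noise))) for p in scene]

def stack_frames(frames):
    """Average corresponding pixels across all frames."""
    n = len(frames)
    return [sum(px) // n for px in zip(*frames)]

scene = [5, 5, 180, 5]                       # mostly dark sky, one bright star
frames = [capture_frame(scene) for _ in range(64)]
stacked = stack_frames(frames)
print(stacked)                               # close to the true scene values
```

Averaging 64 frames shrinks random noise by a factor of 8 (the square root of the frame count), which is why these modes ask you to hold still for several seconds.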

So surely smartphones must pale in comparison to DSLRs when it comes to AI photography? In fact, they don’t. Interestingly, camera makers are now learning a thing or two about photo processing from camera phone engineers.

iPhone 13 Pro Max

(Image credit: TechRadar)

Small sensors, smart solutions

Think back 10 years and Nokia was releasing smartphones with big, powerful xenon flashes, like the Nokia 808 PureView.

It had to beef up the hardware on its camera phones because photos taken on most phones at night looked like an excruciating, grainy mess. Even a phone like the 808 PureView didn’t cut the mustard in some really tough scenes.

The reason mobile cameras have been so constrained is that phones have to fit in our palms and pockets, and therefore have to be small. Smartphones must also pack in many other components: screens, speakers, batteries, antennas, and so on.

Tiny sensors paired with tiny lenses have a hard time taking in much light. More light translates into a better picture, and that’s where the problem lies: small sensors mean limited light capture, which means poor image quality.

These limitations forced phone makers to stop trying to solve the problem with better but expensive, oversized, battery-draining hardware, and instead turn to software.

One of the first times this really made headlines was when Google released the Pixel, a phone without optical image stabilization but with electronic stabilization so good it outperformed much of the competition.

The Pixel and Pixel 2 then showcased incredible photo processing that turned photos from meh to wow before your eyes by recognizing the scene.

This led brands like Huawei to introduce neural processing units into their chipsets, with the Mate 9 showcasing AI scene detection. The feature is now found on phones from other smartphone makers too, with Samsung phones detecting around 32 scenes.

Huawei Mate 40 Pro

(Image credit: Huawei)

What can’t AI scene detection do?

And that’s how smart people solving a photographic challenge made something impossible to overcome with hardware alone – high-quality smartphone photos in all lighting conditions – possible using software and AI scene detection.

The next frontier is AI scene detection in video. While already available to some extent, the ultra-smart night photo capabilities used by Apple, Google (Night Sight), and Huawei haven’t quite made the leap to crisp nighttime video.

Video runs at 24 frames per second or more, so it demands another level of processing power entirely.
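Some quick arithmetic shows why this is hard: at 24fps, the ISP gets well under 50 milliseconds per frame, while a night-mode photo can take seconds to capture and process (the three-second figure below is illustrative).

```python
# The per-frame processing budget for 24fps video, versus an
# illustrative multi-second night-mode photo capture.

fps = 24
budget_ms = 1000 / fps
print(f"{budget_ms:.1f} ms per frame")   # ~41.7 ms

night_photo_ms = 3000   # illustrative: a 3-second night-mode capture
frames_worth = night_photo_ms / budget_ms
print(f"a 3 s night shot spans {frames_worth:.0f} video frames of budget")
```

In other words, a technique that happily spends three seconds on one photo would need to be roughly 72 times faster before it could run on every frame of video.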

As chipsets become more powerful, AI development reaches new heights, and smartphone sensors get better at capturing light despite their small size, AI scene detection looks set to keep changing the face of photography for everyone.
