MOVING BEYOND MEGAPIXELS: MORE IS NOT ALWAYS BETTER!

2013-05-16

New advancements are ensuring that it’s only a matter of time until your smartphone’s camera is just as good as a point-and-shoot.

Smartphone cameras have come a long way—moving from convenient methods for sharing mediocre snapshots to near pro-quality image-capture tools in the right hands. Although the old benchmark of resolution seems to have topped out, innovation is still accelerating in many other areas of mobile camera technology.

“Packing more, but smaller, pixels into the same size sensor increases noise.”

BIGGER, BETTER PIXELS

After years of racing toward higher megapixel counts, camera vendors have finally realized that more is not always better. Packing more, but smaller, pixels into the same size sensor increases noise because smaller pixels capture fewer photons in a given time period. Tiny pixels also run closer to the diffraction limits of optics—particularly the inexpensive kind found in phones—so the added resolution gain isn’t really all it’s cracked up to be. In some high-resolution cameras, a 50 percent increase in pixel resolution only equates to an effective resolution boost of around 10 percent.
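The shot-noise argument above can be put in rough numbers: the photons a pixel collects scale with its area, and photon shot noise grows as the square root of the count, so signal-to-noise ratio scales as the square root of pixel area. A minimal sketch, where the photon flux value is an arbitrary illustrative constant rather than a real sensor spec:

```python
import math

def shot_noise_snr(pixel_area_um2, photon_flux=1000.0):
    """Signal-to-noise ratio under photon shot noise alone.

    Photons collected scale with pixel area; shot noise is the
    square root of the photon count, so SNR = sqrt(photons).
    photon_flux (photons per square micron per exposure) is an
    assumed constant for illustration only.
    """
    photons = photon_flux * pixel_area_um2
    return photons / math.sqrt(photons)  # equals sqrt(photons)

# A typical 13MP phone pixel (~1.1 micron pitch) vs. an
# UltraPixel-class pixel (~2.0 micron pitch):
small = shot_noise_snr(1.1 ** 2)
large = shot_noise_snr(2.0 ** 2)
print(round(large / small, 2))  # -> 1.82
```

By this crude measure, the larger pixel starts with roughly a 1.8x SNR advantage before any image processing is applied.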

HTC has led the way in the retro effort to go back to fewer, larger pixels. Its 4MP UltraPixel cameras feature pixels with three times the surface area of those in competing 13MP cameras. In a somewhat odd move, Nokia has also swerved from offering the über-resolution 41MP Nokia 808 PureView to trumpeting the “good enough” 8.7MP resolution of its new flagship, the Lumia 920, which has amazing low-light performance thanks to a combination of the high fill factor of its back-illuminated sensor, optical image stabilization, a Zeiss “low-light optimized” f/2 lens, and lots of fancy noise reduction and image processing performed immediately after capture.

FASTER, LESS-EXPENSIVE FOCUSING

Autofocus has been a major source of irritation for both smartphone and point-and-shoot camera users—and because it’s never fast enough to capture quickly moving action, it has helped keep D-SLR makers in business. Smartphone makers are moving to change that.

DigitalOptics Corporation (DOC) has created an autofocus system based on microelectromechanical systems (MEMS) technology that uses an electrostatic charge to move the lens element and adjust focus. This lets camera modules (and thus smartphones) be slimmer, and DOC also claims its system reduces lens tilt during autofocus, which in turn reduces image artifacts such as vignetting. DOC plans to sell a 5.1mm-tall, 8MP camera module with this technology to Chinese smartphone makers, but it’s on the expensive side at $25 per module.

Startup LensVector, meanwhile, hopes to address the lack of autofocus in lower-end smartphones with a low-cost element that realigns liquid crystal molecules to change the refractive index of different areas of the lens, effectively shifting the focus.
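Whatever physically moves the focus, whether MEMS, a voice coil, or liquid crystal, the control loop around it is typically contrast-detect autofocus: step through candidate lens positions, score the sharpness of each frame, and keep the best. A toy sketch, where the capture callback and lens model are invented stand-ins for real hardware:

```python
def sharpness(pixels):
    """Focus metric: sum of squared differences between neighboring
    pixels. An in-focus frame has more high-frequency detail, so
    this score peaks at the correct lens position."""
    return sum((a - b) ** 2 for a, b in zip(pixels, pixels[1:]))

def autofocus(capture_at, positions):
    """Contrast-detect autofocus: step the lens through candidate
    positions, score a capture at each, and keep the sharpest.
    capture_at is a hypothetical callback standing in for driving
    the lens and reading out a frame."""
    return max(positions, key=lambda p: sharpness(capture_at(p)))

# Toy lens model: the edge in the scene gets flatter (blurrier)
# the farther the lens is from its ideal position, 3.
def fake_capture(pos):
    amplitude = 100 // (abs(pos - 3) + 1)
    return [0, 0, amplitude, amplitude, amplitude]

print(autofocus(fake_capture, range(7)))  # -> 3
```

The real engineering challenge, and the reason MEMS matters, is making each lens step fast and repeatable, since the loop above has to run many times per focus lock.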

“The relatively small photo sites in camera phone sensors restrict their dynamic range.”

HDR: POST-PROCESS YOUR IMAGES BEFORE YOU TAKE THEM

The relatively small photo sites in camera phone sensors restrict their dynamic range. As a result, photos that are backlit or combine sun and shade can either lack detail or look completely burned out. High-dynamic range (HDR) photography combines two or more images with different exposures to try to take the “best of both” images and create a single image that more accurately reflects how the original scene looked.
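The “best of both” merge can be sketched as a per-pixel weighted blend that favors whichever exposure is closer to a usable midtone. This is a toy, single-channel version of exposure fusion, not any particular vendor’s HDR pipeline:

```python
def hdr_merge(under, over, mid=127.5):
    """Merge an underexposed and an overexposed image (one channel,
    values 0-255) by weighting each pixel by how close it is to a
    well-exposed midtone. Shadows are pulled from the bright frame,
    highlights from the dark frame."""
    merged = []
    for u, o in zip(under, over):
        wu = 1.0 - abs(u - mid) / mid  # weight: 1 at midtone, 0 at clip
        wo = 1.0 - abs(o - mid) / mid
        total = wu + wo
        if total == 0:                 # both pixels fully clipped
            merged.append((u + o) / 2)
        else:
            merged.append((wu * u + wo * o) / total)
    return merged

dark   = [10, 120, 30]    # underexposed frame: highlights preserved
bright = [90, 255, 200]   # overexposed frame: shadows preserved
print([round(p) for p in hdr_merge(dark, bright)])  # -> [82, 120, 140]
```

Note how the blown-out 255 in the bright frame gets zero weight, so the merged pixel keeps the dark frame’s detail there.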

For many years, HDR could only be done after the fact, with processing software on a computer. But Apple’s introduction of in-phone HDR with the iPhone 4 changed all that, and has ushered in a number of new phones whose integrated, intelligent image processing makes both HDR still images and full-time HDR video possible.

Until now, this capability has had to be custom-coded by each phone vendor, relying on the image signal processor (ISP) chip to do the work. Nvidia is smashing through that limit with its new Chimera architecture, which will be available starting with its upcoming Tegra 4 family of processors.

By unleashing the horsepower of the GPU during image capture, features formerly found only on high-end cameras will become available on smartphones: real-time object tracking, real-time panoramas, and best-shot selection will quickly become reality.

Other vendors are putting together systems with many of these capabilities, but what makes Chimera unique is its open interface. This lets other companies write plug-ins that access the low-level data straight off the sensor and tap the computing power of the ISP, GPU, and CPU. Although it remains to be seen whether Google and Microsoft will let these programming interfaces shine through in stock Android or Windows RT, there will certainly be an opening for custom camera applications integrated with homebrew ROM versions, and Chimera is open enough to support that kind of advanced functionality.

WHAT WILL THE ULTIMATE SMARTPHONE CAMERA LOOK LIKE?

Putting together all these innovations will take a few years, but their convergence is inevitable. Combining a Lytro-like light field sensor with a high-powered architecture like Chimera will make amazing photo effects and post-processing possible in real time, in the phone. MIT’s Camera Culture team, along with startups Pelican, Heptagon, and Rebellion, are all working on the light field sensor component, as Apple and HTC are also expected to be. Pelican in particular made waves recently with its low-key demo of after-the-fact refocusing at this year’s Mobile World Congress, done in conjunction with Qualcomm’s new Snapdragon 800 processor. After four years, Pelican finally appears ready to start announcing products, stressing how thin its light field–based sensors are and how they enable depth-related processing after the fact.
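After-the-fact refocusing of the kind Pelican demonstrated rests on a simple idea: an array camera records the scene from several slightly offset viewpoints, and shifting each view in proportion to its baseline before averaging brings one depth plane into alignment (sharp) while everything at other depths smears out (blur). A 1-D toy sketch under invented baselines and disparities, not Pelican’s actual algorithm:

```python
def refocus(views, baselines, disparity):
    """Shift-and-add refocusing for an array camera (1-D toy).

    Each sub-aperture view is shifted by (baseline * disparity)
    before averaging. Scene points whose true disparity matches
    `disparity` line up across views and stay sharp; points at
    other depths land in different places and blur out.
    """
    width = len(views[0])
    out = []
    for i in range(width):
        samples = []
        for view, b in zip(views, baselines):
            j = i + int(round(b * disparity))
            if 0 <= j < width:
                samples.append(view[j])
        out.append(sum(samples) / len(samples))
    return out

# Three views of one bright point; its position shifts by one pixel
# per unit of camera baseline (i.e., disparity = 1 at this depth).
views = [
    [0, 0, 100, 0, 0, 0, 0],   # camera at baseline -1
    [0, 0, 0, 100, 0, 0, 0],   # camera at baseline  0
    [0, 0, 0, 0, 100, 0, 0],   # camera at baseline +1
]
print(refocus(views, [-1, 0, 1], 1)[3])  # right depth chosen -> 100.0
print(refocus(views, [-1, 0, 1], 0)[3])  # wrong depth: the point blurs
```

Because the choice of `disparity` happens entirely in software, the “focus” decision can be deferred until long after the shutter fires.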

Google doesn’t want to be left out. Hiring computational photography guru Marc Levoy to work on its mobile photography architecture is just one indication of how serious it is. To quote Google’s senior vice president Vic Gundotra, “We are committed to making Nexus phones insanely great cameras. Just you wait and see.”

Sensor architecture will also continue to advance, with stacked sensors enabling greater on-chip innovation. Expect zero-lag global shutters (which read out the entire frame at once, eliminating motion artifacts) to become commonplace. Real zooms will soon start to be available. Add-on lenses will also increase in functionality, providing true wide-angle and telephoto capabilities. Rumors for the Nexus 5 even include the possibility of a camera module co-branded with Nikon. The only question will be whether anyone will still need a point-and-shoot once these innovations come to smartphones.
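The appeal of a zero-lag global shutter is easiest to see against today’s rolling shutters, which read sensor rows one after another: anything moving during readout lands at a different position in each row, skewing straight lines. A toy simulation, with an invented motion model purely for illustration:

```python
def rolling_shutter(object_x_at, rows, row_readout_time):
    """Simulate shutter readout: row r is sampled at time
    r * row_readout_time, so a moving object is captured at a
    different moment (and position) in every row. Setting the
    per-row readout time to 0 models a global shutter, which
    reads the whole frame at once."""
    return [object_x_at(r * row_readout_time) for r in range(rows)]

# Invented motion model: object moves right 2 px per time unit from x=10.
def object_x(t):
    return 10 + 2 * t

print(rolling_shutter(object_x, 4, 1))  # rolling: [10, 12, 14, 16] (skewed)
print(rolling_shutter(object_x, 4, 0))  # global:  [10, 10, 10, 10] (straight)
```

A vertical pole photographed from a moving car turns into the first, slanted column; a global shutter produces the second.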
