2014-05-20

Refocusing: Could Software Solve the Depth of Field Problem on Small Sensors?



This was shot on a 4-megapixel cellphone camera.

Cameras get better. Every generation, features are added. Every two or three generations, sensors improve dramatically. Resolution, noise levels, dynamic range, and color fidelity have reached a point where many photographers feel they don't need a large sensor to get image quality that's "good enough" for their purposes. But there's one area where photography has refused to budge: depth of field. Unfortunately, physics is stubborn: the wider your field of view and the farther your subject, the bigger your sensor or the brighter your lens needs to be if you want any sort of shallow depth of field.
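To put rough numbers on that claim, here's a back-of-the-envelope sketch using the standard thin-lens depth of field approximation, with the circle of confusion taken as the sensor diagonal divided by 1500 (conventions vary). The focal lengths and sensor sizes below are illustrative, not measurements of any particular camera.

```python
# Back-of-the-envelope DoF comparison, thin-lens approximation.
# Circle of confusion taken as (sensor diagonal / 1500); conventions vary.

def total_dof_m(focal_mm, f_number, subject_m, sensor_diag_mm):
    c = sensor_diag_mm / 1500.0          # circle of confusion, mm
    f, s = focal_mm, subject_m * 1000.0  # work in millimeters
    H = f * f / (f_number * c) + f       # hyperfocal distance, mm
    near = s * (H - f) / (H + s - 2 * f)
    if s >= H:
        return float("inf")              # far limit is at infinity
    far = s * (H - f) / (H - s)
    return (far - near) / 1000.0         # total depth of field, meters

# Same framing (~28mm equivalent), same f/2, subject at 2 m:
print(total_dof_m(28.0, 2.0, 2.0, 43.3))  # full frame: ~0.6 m in focus
print(total_dof_m(4.0, 2.0, 2.0, 6.0))    # small phone sensor: hundreds of meters
```

Run it and the problem is plain: at the same framing and f-stop, the full-frame camera isolates a band barely half a meter deep, while the phone renders essentially everything sharp.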

But maybe it doesn't have to be this way. After all, virtually every recent compact camera and mirrorless system incorporates some type of software correction to compensate for the physical limitations of its optics: chromatic aberration, distortion, vignetting, and so on. Perhaps a related technology could be used not just to correct flaws, but to actually enhance a sensor and lens combination.
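As an aside, the kind of correction I mean can be surprisingly simple at its core. Here's a toy Python sketch of radial (barrel/pincushion) correction via a remap; the coefficients k1 and k2 are invented for illustration, whereas real cameras ship calibrated per-lens profiles and use proper interpolation.

```python
# Toy radial-distortion correction (Brown-Conrady model, radial terms only).
# k1 and k2 are invented for illustration; real cameras use calibrated
# per-lens profiles, and proper resampling would interpolate.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.png").convert("RGB"))
h, w = img.shape[:2]
k1, k2 = -0.12, 0.02   # hypothetical distortion coefficients

# Normalized coordinates centered on the optical axis.
yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
x = (xx - w / 2) / (w / 2)
y = (yy - h / 2) / (h / 2)
r2 = x * x + y * y
scale = 1 + k1 * r2 + k2 * r2 * r2

# For each output pixel, sample the distorted source image (nearest neighbor).
src_x = np.clip(x * scale * (w / 2) + w / 2, 0, w - 1).astype(int)
src_y = np.clip(y * scale * (h / 2) + h / 2, 0, h - 1).astype(int)
Image.fromarray(img[src_y, src_x]).save("corrected.png")
```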

Of course, adding bokeh in post isn't a new idea. Before I bought my first real camera, I was using Photoshop plugins to imitate a shallow depth of field look on my cellphone pictures. Sometimes the effect would turn out surprisingly realistic; indeed, I still occasionally add a little pseudo-bokeh when I feel my kit didn't give me enough. But this method is far too time-consuming and cumbersome for any image with elements at varying depths. Instagram and similar tools, on the other hand, offer tilt-shift and related effects that imitate shallow depth of field quickly, but the results almost always look incredibly fake.

The solution to this artificiality could lie in any of several methods that measure depth within an image and then map that information to different blur values. Arguably the most exciting of these is Lytro's light-field technology, which drops you into a sci-fi movie and lets you refocus an image after it has been taken. By capturing angular information about incoming light in addition to the usual color and intensity, light-field cameras can perform a number of cool tricks. You can create 3D photos from a single capture and lens, design focus animations that direct the eye to different points of interest, interact with photos, and, yes, make the depth of field appear shallower than it actually was.
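Whatever the capture method, the "map depth to blur" rendering step can be sketched in a few lines. Here's a crude Python version, assuming you already have a per-pixel depth map from one of these techniques; the filenames, the chosen focal plane, and the layered-blur shortcut are all placeholders for what a real renderer (which scatters per-pixel discs) would do.

```python
# Crude depth-guided synthetic bokeh: blur each pixel in proportion to
# its distance from a chosen focal plane. "image.png" and its matching
# 8-bit "depth.png" are hypothetical inputs.
import numpy as np
from PIL import Image, ImageFilter

image = Image.open("image.png").convert("RGB")
depth = np.asarray(Image.open("depth.png").convert("L"), dtype=np.float32)

focus_depth = 128.0   # depth value to keep sharp (the user's focal plane)
max_radius = 8        # strongest blur, in pixels

# Pre-render a small stack of progressively blurred copies of the image.
stack = [np.asarray(image.filter(ImageFilter.GaussianBlur(r)))
         for r in range(max_radius + 1)]

# Per pixel, pick the copy whose blur radius matches that pixel's
# distance from the focal plane.
radius = np.abs(depth - focus_depth) / 255.0 * max_radius
index = np.clip(np.round(radius), 0, max_radius).astype(np.uint8)

out = np.zeros_like(stack[0])
for r in range(max_radius + 1):
    mask = index == r
    out[mask] = stack[r][mask]

Image.fromarray(out).save("refocused.png")
```

Even this naive layered approach makes the key point: once depth is known per pixel, "how much bokeh" becomes an adjustable parameter rather than a property baked in at capture time.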

This is one of my favorite photos; it had a much wider depth of field before my processing.

While the original Lytro camera was more of a commercial proof-of-concept than a practical device, the technology has already started to have a huge influence. There have been rumors of Apple wanting to incorporate Lytro technology into a future device, and every other flagship Android and Windows phone now features its own technique for refocusing an image after capture and adding artificial bokeh. The new Google Camera app does it by asking you to move the camera so it can capture parallax information; Samsung does it with what essentially amounts to focus bracketing; and the HTC One M8 actually features a secondary rear camera for the sole purpose of capturing depth information.
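For the curious, the dual-camera idea boils down to stereo disparity: the closer an object is, the more it shifts between the two views. Here's a minimal sketch using OpenCV's block matcher, assuming rectified, same-exposure grayscale captures and hypothetical filenames; I'm not claiming this is HTC's actual pipeline.

```python
# Depth from a stereo pair via block matching, sketched with OpenCV.
# Assumes rectified, same-exposure captures; "left.png"/"right.png"
# are hypothetical filenames, not anything from HTC's pipeline.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Larger disparity means a closer subject; numDisparities must be a multiple of 16.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)   # fixed-point: 16x the pixel disparity

# Normalize to 8 bits so the result can be saved and viewed as a rough depth map.
depth8 = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth.png", depth8)
```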

In practice, none of these methods is terribly reliable. HTC's approach makes practical sense, but its out-of-focus transitions are too harsh and its bokeh too exaggerated. Google's effect, on the other hand, generally looks more realistic and thankfully lets you control the amount of added bokeh (subtlety works best), but it's too easily confounded by any motion within the frame. In both cases, additional processing time is needed to render the effect (much more so with the Google option). And still, I find myself using them. Once you get used to what works and what doesn't, it's a surprisingly convenient way to get natural-looking 'bokehlicious' images out of a cellphone camera.

Besides, it's important to remember that we're talking about first-generation technology. Right now it's being pushed by cellphone manufacturers for average-to-mediocre cellphone cameras; it's aimed primarily at the Instagram selfie crowd. Imagine, then, the potential if a serious camera maker were to take hold of the technology and run with it. Perhaps Lytro could do it with its upcoming Illum: it already uses clever light-field tricks to fit an insane 30-250mm F2 lens onto a 1-inch sensor, in a body the size of a mirrorless zoom kit.

Take a minute to think about it, and endless possibilities come to mind. In my ideal world, this recorded depth information would become standardized. You could then use any third-party application like Lightroom to adjust depth of field the way you adjust white balance or exposure. Until then, I can imagine a specialized Bokeh or Portrait mode, sitting alongside the HDR and Panorama spots on a mode dial, to apply these effects. In fact, I'd say it's just a matter of time until a company like Samsung starts featuring this tech in its more serious cameras. And if Apple were to introduce such a feature on its phones, you could virtually guarantee it would pop up everywhere else.

There are a couple of flaws here, but for the most part the effect is subtle enough not to arouse suspicion.

Playing around with the rudimentary lens blur effect in the Google Camera app has gotten me to use my phone's camera more than I have in years. And while almost all the cellphone photos in this article are heavily processed, the truth is I don't think most unsuspecting viewers would give it a second thought if I told them these had been shot with a DSLR. Of course, the resolution is low and the flaws are readily visible if you look closely (although they're easily fixable in LR), but for photos I took on my phone and spent 30 seconds editing, I'm happy. Heck, I can't even get this sort of shallow depth of field at wide angles with anything in my professional kit anyway.

Perhaps it's in bad taste to consider the lack of shallow depth of field on small sensors a "problem" in the first place. After all, every system has its limitations; learning to compromise among the different facets of photography is part of the beauty of this art. You shouldn't need shallow depth of field to create a good picture. But at the same time, like HDR or stitched panoramas, post-production bokeh could add real value by negating some of a camera's limitations.

Like they say, the best camera is the one you have with you. And if that camera can fit in your pocket but still give you shallow depth of field, even better, right?
