Computational Photography

Remnant snowfield at Thompson Pass near Valdez.

Photography as a craft has always been married to technology, and for photographers who want to practice that craft as an art form, that marriage has presented many challenges throughout its history. I believe no other art form has been transformed as much as photography. With each transformation has come rigorous debate and, at times, self-doubt among photography’s practitioners. Are photographs art compared to paintings? Is color photography a legitimate fine art medium compared to black and white? Are 35 mm (single-lens reflex) cameras for serious photographers? Is digital photography real photography compared to film photography? Fortunately, the answer to each question has come back in the affirmative, but the questions keep getting harder, arriving more often, and leaving less time to ponder them. If the transition to digital photography twenty years ago was a sea change, we are now on the cusp of a miraculous technological change (some might say wizardry) called computational photography.

In a recently published TechCrunch article, “The future of photography is code,” Devin Coldewey writes that “the future of photography is computational, not optical.” Coldewey lays out a convincing case that camera technology is reaching the limits of sensor design and optics, especially within the small confines of the cameras most people use today: smartphones. Hence, Apple, Google, and Samsung are investing millions in software improvements to their smartphone cameras. These improvements go beyond the in-camera computations, like panoramas, that photographers have come to expect. They include the ability in Apple’s latest iPhones to adjust bokeh (i.e., the blurring of the background by changing depth of field), accomplished with two lenses of different focal lengths. Taken to the extreme is Light’s (light.co) folded-optics L16 camera, which uses ten lenses to create a 3D depth map. The ten images generate a single 52 MPix composite whose aperture can be adjusted from f/2 to f/15 after the fact (and focal length from 28–150 mm), all in a form factor slightly larger than an iPhone. Another improvement is Portrait Lighting, introduced in the iPhone X last year, which simulates studio lighting. Both Apple’s and Light’s cameras perform these advanced photographic feats instantaneously with the press of the shutter button and a lot of computational might. While most photographers will rejoice at having this capability at their fingertips, there is another application of computational photography that I believe presents an alarming trend, especially for professional creatives.
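
To make the idea concrete, here is a minimal sketch (in Python with NumPy and OpenCV, using hypothetical photo.jpg and depth.png files) of how a depth map can be used to simulate a wider aperture after capture: pixels far from a chosen focal plane receive progressively stronger blur. This is only an illustration of the principle, not Apple’s or Light’s actual pipeline.

```python
import cv2
import numpy as np

# Hypothetical inputs: a photo and its per-pixel depth map (0.0 = near, 1.0 = far).
image = cv2.imread("photo.jpg").astype(np.float32)
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

focus_depth = 0.3                 # depth plane the user chooses to keep sharp
kernel_sizes = [1, 5, 9, 15, 21]  # larger kernels = wider simulated aperture

# Blur strength grows with a pixel's distance from the focal plane,
# a crude stand-in for the lens's circle of confusion.
distance = np.abs(depth - focus_depth)
level = ((distance / (distance.max() + 1e-6)) * (len(kernel_sizes) - 1)).astype(int)

# Pre-blur the whole frame at each strength, then pick per pixel.
stack = [cv2.GaussianBlur(image, (k, k), 0) for k in kernel_sizes]
result = np.zeros_like(image)
for i, blurred in enumerate(stack):
    result[level == i] = blurred[level == i]

cv2.imwrite("synthetic_bokeh.jpg", result.astype(np.uint8))
```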

Even before digital photography became mainstream, the launch of Photoshop in 1990 heralded the age of computational photography with photo retouching. At the time, Photoshop was sold for use with scanners that digitized film. Its use as an image processor is now so pervasive that it has become a verb in our vernacular; to manipulate a photo is to Photoshop it. The myriad tools for selective image adjustments (dodging and burning, luminosity masking, exposure blending) and for compositing (high dynamic range, panoramas, focus stacking) have expanded photographers’ capabilities. All require significant computation and microprocessor power. With the introduction of content-aware fill in Photoshop CS5 in 2010, Adobe tapped into the power of artificial intelligence (AI), machine learning, and neural networks. The ability to remove unwanted objects and replace them with a background appropriate to the scene, as if they were never there, at times seems magical. Nevertheless, Adobe’s efforts in applying AI and computational photography (part of their Adobe Sensei technology) are just the beginning. At the Adobe Max 2018 conference, Adobe previewed MovingStills, which adds realistic, almost 3D motion to still photos. Their website on Adobe Sensei promises to turn over the tedium of culling images to AI. Using criteria like sharpness, depth of field, and even composition, the AI algorithms will find the “best” photos for you!
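
As a toy illustration of what culling by sharpness might involve under the hood, the sketch below (Python with OpenCV; the folder name and threshold are made up) ranks a directory of images by variance of the Laplacian, a common blur metric. Adobe Sensei’s actual criteria and models are, of course, far more sophisticated and not public.

```python
import cv2
from pathlib import Path

def sharpness_score(path):
    """Variance of the Laplacian: higher values generally mean a sharper frame."""
    gray = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Hypothetical shoot folder and keep/review threshold.
shoot = Path("shoot_folder")
threshold = 100.0

# Rank frames from sharpest to softest and flag the soft ones for review.
scored = sorted(((sharpness_score(p), p) for p in shoot.glob("*.jpg")), reverse=True)
for score, p in scored:
    print(f"{p.name}: {score:6.1f}  {'keep' if score > threshold else 'review'}")
```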

Adobe is not alone in using AI to power photographic software. Google’s AI Photo Editor also performs automatic culling. Perhaps most concerning is AI-powered post-processing to improve the look of an image. Skylum’s Accent filter applies AI through a single slider to automatically adjust an image. I’ve used it on occasion, and the results are impressive: contrast is enhanced, skies are darkened, and the foreground is lightened in a realistic, non-HDR way. Photolemur 3.0, an AI-powered editor, “…makes all your images great automatically with the help of Artificial Intelligence” and “…makes your photos look pro without expensive gear.” Topaz’s A.I. Gigapixel offers “intelligent resizing” up to 600% with AI upsampling. Will resized smartphone photos be sufficient for museum-quality prints? And there are many others, all tapping into this burgeoning technology.
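
None of these vendors publish their models, but a crude, non-AI approximation of a one-slider “accent” adjustment can be sketched with classical tools: boost local contrast in the luminance channel and let a single strength parameter blend the result with the original. The function name and file names below are made up, and this is purely illustrative of the kind of global adjustment being automated, not Skylum’s or Photolemur’s method.

```python
import cv2

def accent(image_bgr, strength=0.5):
    """Very rough one-slider enhancement: local-contrast boost on luminance,
    blended with the original by `strength` (0 = untouched, 1 = full effect)."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)

    # CLAHE lifts shadows and adds local contrast without blowing out skies.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    return cv2.addWeighted(image_bgr, 1.0 - strength, enhanced, strength, 0)

# Hypothetical usage on a single frame.
img = cv2.imread("raw_export.jpg")
cv2.imwrite("accented.jpg", accent(img, strength=0.6))
```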

As someone who embraced technology as an engineer for twenty-five years, and who still does as an avowed gear geek, I am excited about the potential of this new field. However, as a working photographer who sees how AI will narrow the gap between skilled and unskilled photographers, I have concerns. Does doing something with the assistance of AI make you a better post-processor? A better photographer? Will AI-generated imagery become the new look or trend in photography? Will all photographs begin to look the same? As AI does more of the thinking for us, will it turn us into dumb participants? It seems to me that AI is moving creativity away from photographers and toward programmers. Will this stifle creativity if photographers have a one-button solution to great-looking images? A counterargument is that photography is all about light and composition, and the decision making still resides with the photographer: where, when, and with what lens to take a photograph. But good light can be faked as well; existing software can already add crepuscular rays (a.k.a. God beams) or rainbows. With reality fading away in so many facets of our lives, the line between imagery created by a human and imagery created by a computer is getting thinner by the minute. We have to ask ourselves: at what point does photography stop being an art?

2 thoughts on “Computational Photography”

  1. Bob Waldrop

    This is a terrific thought piece. Informative and provocative. Thanks immensely for putting so much thought, experience and wisdom into a single blog. Bravo.

  2. JULES

    Interesting! In some way the modern photographer is like a cave artist, leaving a message on a limestone canvas that today creates wonder for us to interpret. Now AI offers a similar challenge by giving us an image of what we thought we saw.
