A simple snapshot reveals how computational photography can surprise and alarm us
Last month, Tessa Coates, a British actress and comedian who had recently gotten engaged, posted a seemingly straightforward snapshot on Instagram: trying on wedding dresses, she stood in white, looking at herself in a double mirror. As most brides and those who have been through the wedding process know, such a mirror gives two views of a bride. And since the photograph was shot from behind Coates with her iPhone, you could also see much of Coates and the dress from the back, giving you a third view.
However, although the photo appears to show nothing more than a bride viewing herself in a prospective wedding dress, upon closer inspection something looks off. Study Coates’ arms and hands and you soon realize that all three views are positioned differently from one another. In other words, neither mirror is reflecting the pose we see from the back in the photograph. According to the story Coates tells on Instagram, when she looked at the photo (which she said wasn’t Photoshopped), she had a full panic attack. She even went back to ask the dress shop owner if the mirrors were capturing and displaying video. (They weren’t.)
She became more anxious and alarmed after showing the photo to friends and discussing it with them that day. She then brought it to a nearby Apple Store, where she showed it to a number of customers and employees. On Instagram, she joked that perhaps she really had magically crossed over and lived “in the mirror realm now,” or that she was “on the second layer of ‘Inception.’”
But had Coates, in fact, followed Alice through the looking glass?
What kind of photo did this bride-to-be have on her iPhone?
At that Apple Store, she finally met with an Apple Genius named Roger, who was able to provide her with answers. Roger said, “First of all, an iPhone is not a camera, but a computer.” When it takes a photo, he said, it captures a series, or burst, of images—even when it’s not shooting a panorama, a Live Photo, or burst mode. “It takes a series of images from right to left.” In this case, Roger said, at the exact moment the capture crossed behind her back, “You raised your arms, and [your iPhone] made a different image on the other side.”
Coates then said Roger explained that in that split second the camera made an “AI decision” and stitched the two photos together. Coates said Roger also mentioned that Apple was beta testing this new feature on its phones (a feature presently found only on Google phones). Lastly, Coates said Roger noted it was a one-in-a-million chance that the stitch would land right at the moment she raised her arms.
On Instagram and elsewhere, many theories have been posted about how or why this happened, including from some who are skeptical of the entire story. But one tech reviewer named Faruk, who has a YouTube channel called iPhonedo, explains why he believes the image wasn’t Photoshopped and is, in fact, a panorama.
What is computational photography?
Although it’s not clear exactly what mode Coates’ phone (an iPhone 12) was in when the image was captured, what is clear is that it’s a computational photograph, since it was captured with an iPhone.
But what exactly is computational photography? As the name implies, computational photography uses computer processing power to harness data from an image sensor (or multiple image sensors) in various ways to enhance traditional photography techniques.
But it’s also used to develop new photographic techniques and forms. For instance, film photographers could produce panoramas, but doing so was labor intensive and difficult, even for composites of just a few images. Because iPhones are not only digital but also powered by computer algorithms, they can quickly stitch scores of images together to create a panoramic photo.
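To give a rough sense of what “stitching” means in practice, here is a minimal Python sketch using NumPy. The `stitch_pair` helper is invented for this example, and it assumes the two image strips are already perfectly aligned; a real panorama pipeline also has to align, warp, and color-match the frames before blending them.

```python
import numpy as np

def stitch_pair(left, right, overlap):
    """Blend two horizontally overlapping, pre-aligned image strips.

    left, right: 2-D arrays (grayscale strips).
    overlap: number of columns the two strips share.
    """
    # Linear ramp: weight shifts from the left strip to the right strip
    # across the shared columns, hiding the seam.
    alpha = np.linspace(1.0, 0.0, overlap)
    blended = left[:, -overlap:] * alpha + right[:, :overlap] * (1 - alpha)
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

# Two 4x6 strips that share their last/first 2 columns.
a = np.full((4, 6), 100.0)
b = np.full((4, 6), 200.0)
pano = stitch_pair(a, b, overlap=2)
print(pano.shape)  # (4, 10)
```

The cross-fade is the simplest possible seam treatment; when the two frames disagree (say, because the subject moved her arms between them), this is exactly where artifacts like the one in Coates’ photo can appear.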
But iPhones and other smartphones use computational photography not just for panos; they also use it to produce high-dynamic-range (HDR) images, photos shot in portrait mode, and other innovative digital genres that combine computing power and photo optics. All of these are examples of computational photographs.
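As an illustration of the HDR idea (a textbook-style toy, not Apple’s actual algorithm, which is proprietary), here is a Python sketch that merges differently exposed frames by normalizing each for its exposure time and averaging. The function name `fuse_exposures` and the sample values are invented for the example.

```python
import numpy as np

def fuse_exposures(frames, exposures):
    """Naive HDR merge: average frames after normalizing by exposure time.

    frames: list of 2-D arrays with linear pixel values.
    exposures: matching list of exposure times in seconds.
    """
    # Scale each frame into common radiance units, then average, so
    # shadow detail from long exposures and highlight detail from short
    # exposures both contribute to the result.
    radiance = [f / t for f, t in zip(frames, exposures)]
    return np.mean(radiance, axis=0)

dark = np.full((2, 2), 10.0)     # short exposure, 1/100 s
bright = np.full((2, 2), 100.0)  # long exposure, 1/10 s
hdr = fuse_exposures([dark, bright], [0.01, 0.1])
print(hdr[0, 0])  # (1000 + 1000) / 2 = 1000.0
```

Real HDR pipelines add alignment, ghost removal for moving subjects, and tone mapping on top of a merge like this, which is why multiple frames of a moving scene can again produce surprises.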
How to take good panoramas on your Apple iPhone
We can’t be sure exactly what type of beta mode Coates’ phone was in (if it was, in fact, in a beta or testing mode). However, some comments on the internet suggest that the image might have been created in panorama mode, which allows you to capture extremely wide photos of particular scenes, landscapes, or cityscapes.
On iPhones, you can set your camera to this mode by opening the Camera app and swiping left until you find “PANO.” Here are a few tips to get better results with your panoramas:
Check the direction you’re panning: If you want to create a horizontal panorama, hold your iPhone vertically and then pan either right-to-left or left-to-right. Make sure the arrow points in the direction you’re panning. To create a vertical panorama, hold your iPhone horizontally and pan upward or downward.
Avoid moving subjects: When shooting panoramas, it’s best not to capture subjects that move, since you’ll often end up with odd aberrations or distorted subjects.
Practice, practice, practice: Before you create your final panorama, take some time to shoot a few test panoramas. For starters, this will help you figure out whether you’re panning too slowly or too quickly. It will also help you work out other compositional issues, such as whether the lighting is right or whether any subjects are moving.
Should we be worried about computational photography?
Although some aspects of Tessa Coates’ story sound somewhat comical (she is, after all, an actress and a comedian), other elements were alarming. That’s partly because, although computational photography appears similar to traditional photography, in many ways it’s very different. And we don’t always know whether we can recognize those differences.
Take the portrait mode on an iPhone. When you use this computational photography mode to capture a person’s face, it will most often focus accurately on the person’s head and blur only the background, replicating the bokeh found in traditional film photography. However, unlike with a film camera, you can’t use the feature on still-life subjects. For example, when I tried to shoot an evergreen branch set against a background of similarly colored green trees, the mode severely cropped my photo, creating an artificial-looking branch. Of course, the iPhone’s portrait mode was never meant to capture an evergreen branch outdoors against a background of green trees. But with an analog film SLR camera, I could apply the same rules of shallow depth of field to my outdoor evergreen photo that I could when shooting a portrait.
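For intuition about what a portrait mode is doing under the hood (a guess at the general technique, not Apple’s implementation), here’s a toy Python sketch: given an image and a mask marking the subject, keep the subject sharp and blur everything else. The function name `fake_bokeh` and the tiny test image are invented; when the mask is wrong, as it apparently was with my evergreen branch, the “subject” gets mangled.

```python
import numpy as np

def fake_bokeh(image, subject_mask, k=3):
    """Toy portrait mode: keep masked subject pixels sharp and replace
    everything else with a k-wide horizontal box blur."""
    pad = k // 2
    padded = np.pad(image, ((0, 0), (pad, pad)), mode="edge")
    # Box blur computed as the mean of k shifted copies of the image.
    blurred = np.stack(
        [padded[:, i:i + image.shape[1]] for i in range(k)]
    ).mean(axis=0)
    return np.where(subject_mask, image, blurred)

# A bright "subject" pixel on a dark background, with a mask that
# (wrongly) marks only the leftmost column as the subject.
img = np.zeros((1, 5)); img[0, 2] = 9.0
mask = np.zeros((1, 5), dtype=bool); mask[0, 0] = True
out = fake_bokeh(img, mask)
print(out)  # the unmasked bright pixel gets smeared: [[0. 3. 3. 3. 0.]]
```

The point of the toy: the whole effect rides on the subject mask, which the phone estimates with depth sensing and machine learning, and a bad estimate produces exactly the cropped, artificial look described above.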
What’s important to understand is that by using computational photography, tech companies (Apple, the makers of Android phones, and others) have dramatically altered some of the essential rules, principles, and techniques of photography. My concern is not that they’re changing these rules, but that many of those changes will remain in the dark, hidden from us—which is what happened when Coates’ iPhone captured a composite, panorama-like photograph of her wearing a wedding dress.