
When AI Can Geolocate a Photo, Is There Any Hope for Vacation Privacy?

While this view looks innocuous, when AI can easily identify a scene and pinpoint its location, it doesn't matter that a photo contains no personal information: a stalker can make use of the location alone.

In April, ChatGPT users stumbled across the AI's unnervingly accurate ability to identify where just about any photo was taken. At first blush, the feature seems harmless, but a recent story on Vox points out that it makes it very easy to stalk someone from nothing more than their social media photos.

It's not a new concern, but it is one worth raising again. When PetaPixel originally reported on the feature, TechCrunch asked OpenAI about these privacy concerns.

“OpenAI o3 and o4-mini bring visual reasoning to ChatGPT, making it more helpful in areas like accessibility, research, or identifying locations in emergency response. We’ve worked to train our models to refuse requests for private or sensitive information, added safeguards intended to prohibit the model from identifying private individuals in images, and actively monitor for and take action against abuse of our usage policies on privacy.”

That doesn't directly address the problem, however. As Vox writer Kelsey Piper explains, a stalker could get everything they need from a photo without it containing any "private or sensitive information." When Piper shared a photo of a beach showing nothing but sand, waves, and a cloudy sky, OpenAI's o3 was able to correctly guess the location.

[Image: a large metal tree sculpture with illuminated branches stands in the center of a cobblestone courtyard surrounded by historic buildings at night.]
ChatGPT was able to guess the location of this photo to within a 12-minute walk of its actual location. Not perfect, but close enough. | Photo by Jaron Schneider

“To my merely human eye, this image doesn’t look like it contains enough information to guess where my family is staying for vacation. It’s a beach! With sand! And waves! How could you possibly narrow it down further than that?” Piper asks.

“But surfing hobbyists tell me there’s far more information in this image than I thought. The pattern of the waves, the sky, the slope, and the sand are all information, and in this case, sufficient information to venture a correct guess about where my family went for vacation.”

Previously, unless you were afraid of being tracked by Rainbolt, sharing innocuous photos of a vacation spot on social media felt pretty safe. Piper was likely right to assume that the vast majority of people wouldn't be able to figure out where she was from a single ocean-facing photo. But AI can, and the ease with which anyone can access ChatGPT means that no photo is necessarily safe anymore.

[Image: a view from a high vantage point of snow-covered buildings, trees, and fields stretching into the distance, with mountains under a partly cloudy sky at dawn or dusk.]
ChatGPT was bang-on correct when it guessed the location of this photo, which was taken on the slopes of Mount Bandai, near Lake Inawashiro in Fukushima Prefecture, Japan. | Photo by Jaron Schneider

Some will argue that personal data has not been personal for some time now. Google and Meta have been ravenous for personal data and collect it constantly. But the difference here, as Piper points out, is that Google and Meta use that data to sell ads. OpenAI’s product is far more, well, open.

“While Google has incentives not to have a major privacy-related incident — users would be angry with them, regulators would investigate them, and they have a lot of business to lose — the AI companies proliferating today like OpenAI or DeepSeek are much less kept in line by public opinion,” Piper writes.

There have been multiple recorded cases of streamers, influencers, and celebrities having issues with stalkers — and that was before the ubiquity of AI tools.

Unfortunately, there isn't an easy solution to these concerns. Outside of more direct regulation of AI companies and the types of tools they are allowed to add to their platforms, holding them legally responsible for a user's malicious behavior will be difficult. Piper points out that New York is considering a law that would regulate AI systems when they take actions that would be a crime if taken by a human, but it remains to be seen whether such a law could pass or how it would be enforced. It would also need to be adopted in more than just one state to have any meaningful effect.

For now, if you’re concerned about privacy, it’s best not to share any photos of your current location online.


Image credits: Photos by Jaron Schneider



