NC governor candidate cries AI fabrication as defense for racist porn forum posts
On Thursday, CNN broke news of inflammatory comments made by Mark Robinson, the Republican nominee for governor of North Carolina, on a pornography website’s message board over a decade ago. After the allegations emerged, Robinson played on what we call “deep doubt,” denying that the comments were his words and claiming they were manufactured by AI.
“Look, I’m not going to get into the minutia about how somebody manufactured these salacious tabloid lies, but I can tell you this: There’s been over one million dollars spent on me through AI by a billionaire’s son who’s bound and determined to destroy me,” Robinson told CNN reporter Andrew Kaczynski in a televised interview. “The things that people can do with the Internet now is incredible. But what I can tell you is this: Again, these are not my words. This is simply tabloid trash being used as a distraction from the substantive issues that the people of this state are facing.”
The CNN investigation found that Robinson, currently serving as North Carolina’s lieutenant governor, posted under the username “minisoldr” on a website called “Nude Africa” between 2008 and 2012. CNN identified Robinson as the user by matching biographical details, a shared email address, and profile photos. The posts included Robinson referring to himself as a “black NAZI!” and expressing support for reinstating slavery, among other inflammatory remarks.
Considering the trail of evidence CNN pieced together and the fact that the comments were reportedly posted long before the current AI boom, Robinson’s claim of an AI-generated attack is very unlikely to be true.
Here’s Mark Robinson on @CNN claiming he’s being framed, insisting that all the bizarre things he wrote on “Nude Africa” were generated by AI.
AI is powerful, but it can’t rewrite internet history from 15 years ago. It’s the same nonsense we saw this summer from Laura Loomer… pic.twitter.com/fcgfXXi1ub
— Mike Nellis (@MikeNellis) September 19, 2024
Mike Nellis, a former senior adviser to Kamala Harris, called out Robinson’s claim in a post on X, saying, “It’s already hard enough for people to figure out the truth, without MAGA politicians and conspiracy theorists blaming AI for anything that makes them look bad. I’m not sure what the solution is yet, but this is a huge problem for the future of democracy.”
Another case of the “deep doubt era” in action
As previously covered on Ars Technica, the emergence of generative AI models has given liars a new excuse to dismiss potentially harmful or incriminating evidence, since AI can fabricate realistic deepfakes on demand. This new “deep doubt” era of suspicion has already given rise to at least two claims by former US President Donald Trump that certain credible photos had been AI-generated.
Researchers Danielle K. Citron and Robert Chesney first formalized a key subset of the deep doubt concept, which they called the “liar’s dividend,” in a 2019 research paper. In the paper, the authors observed that “deepfakes make it easier for liars to avoid accountability for things that are in fact true,” and they warned that realistic deepfakes may eventually erode democratic discourse.
It’s important to note that the deep doubt era sets the stage for these types of claims, but it doesn’t automatically make them believable. In Mark Robinson’s case, a few members of his own party already think he should withdraw from the race, but the NC GOP has so far defended him, and Robinson has no plans to drop out as of this morning.
While Robinson is sticking to his guns, history has shown that lies tend to spiral. When someone in a prominent position denies overwhelming evidence against them, they often end up constructing a fictional universe in which they are correct, piling on new claims to justify the original lie. The result is that perceptions of reality split into parallel narratives, one for that person’s supporters and another for everyone else, deepening division over time.
Already, others are buying into Robinson’s deep-doubt narrative. Another CNN report quotes North Carolina Republican Rep. Greg Murphy, who also raised doubts about the authenticity of the porn forum comments. “What I read was very concerning, but given the degree of electronic manipulation that can happen these days with AI, with everything else, who the hell knows what’s true and what’s not,” he said.
As AI systems grow more capable in the coming years, they could enable “context attacks” that use generative models to automatically produce an endless flow of supportive fiction, manufacturing evidence to justify lies on demand. If so, we may be in for wild times ahead.