Using ChatGPT to make fake social media posts backfires on bad actors
Rather than radically altering the threat landscape, tools like ChatGPT are mostly used by threat actors to take shortcuts or cut costs, OpenAI suggested, such as generating bios and social media posts to scale spam networks that might previously have “required a large team of trolls, with all the costs and leak risks associated with such an endeavor.” And the more these operations rely on AI, the company suggested, the easier they become to take down. As an example, OpenAI cited an election interference case this summer that was quickly “silenced” because the threat actors over-relied on OpenAI tools.
“This operation’s reliance on AI… made it unusually vulnerable to our disruption,” OpenAI said. “Because it leveraged AI at so many links in the killchain, our takedown broke many links in the chain at once. After we disrupted this activity in early June, the social media accounts that we had identified as being part of this operation stopped posting” throughout the critical election periods.
OpenAI can’t stop AI threats on its own
So far, OpenAI said, there is no evidence that its tools are “leading to meaningful breakthroughs” in threat actors’ “ability to create substantially new malware or build viral audiences.”
While some of the deceptive campaigns managed to engage real people online, heightening the risks, OpenAI said their impact was limited. For the most part, its tools “only offered limited, incremental capabilities that are already achievable with publicly available, non-AI powered tools.”
As threat actors’ use of AI continues to evolve, OpenAI promised to remain transparent about how its tools are used to amplify and aid deceptive campaigns online. But the company’s report stressed that collaboration will be necessary to build “robust, multi-layered defenses against state-linked cyber actors and covert influence operations that may attempt to use our models in furtherance of deceptive campaigns on social media and other Internet platforms.”
Appropriate threat detection across the Internet “can also allow AI companies to identify previously unreported connections between apparently different sets of threat activity,” OpenAI suggested.
“The unique insights that AI companies have into threat actors can help to strengthen the defenses of the broader information ecosystem, but cannot replace them. It is essential to see continued robust investment in detection and investigation capabilities across the Internet,” OpenAI said.
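To make the idea of “identifying previously unreported connections” concrete, here is a minimal, hypothetical sketch of the kind of indicator pivoting threat analysts describe: grouping apparently separate account clusters by shared infrastructure such as reused IP addresses or contact emails. The data model, field names, and values below are illustrative assumptions, not anything disclosed in OpenAI’s report.

```python
# Hypothetical sketch: linking apparently separate threat clusters by
# shared indicators. Cluster names, fields, and values are invented;
# IPs are from the RFC 5737 documentation ranges.
from collections import defaultdict

campaigns = {
    "cluster_a": {"ips": {"203.0.113.7"}, "emails": {"ops@example.net"}},
    "cluster_b": {"ips": {"198.51.100.2"}, "emails": {"ops@example.net"}},
    "cluster_c": {"ips": {"203.0.113.7"}, "emails": {"other@example.org"}},
}

# Index each indicator back to every cluster that used it.
index = defaultdict(set)
for name, indicators in campaigns.items():
    for values in indicators.values():
        for value in values:
            index[value].add(name)

# Any indicator shared by two or more clusters suggests a connection.
for indicator, clusters in index.items():
    if len(clusters) > 1:
        print(f"{indicator} links {sorted(clusters)}")
```

Run against this toy data, the script reports that the shared email links cluster_a to cluster_b and the shared IP links cluster_a to cluster_c, collapsing three “different” sets of activity into one.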
As one example of potential AI progress disrupting cyber threats, OpenAI suggested that, “as our models become more advanced, we expect we will also be able to use ChatGPT to reverse engineer and analyze the malicious attachments sent to employees” in phishing campaigns like SweetSpecter’s.
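OpenAI did not say how such analysis would work in practice. One plausible workflow, sketched below purely as an assumption, is to extract printable strings from a suspicious attachment and ask a model to flag indicators such as embedded URLs or shell commands; the model name and prompt are illustrative, and this uses the public OpenAI Python SDK rather than any method described in the report.

```python
# Hypothetical triage sketch: ask a model to flag suspicious indicators
# in strings extracted from an email attachment. Workflow, prompt, and
# model name are assumptions for illustration only.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def extract_strings(path: str, min_len: int = 6) -> list[bytes]:
    """Pull printable ASCII runs from a binary, like the `strings` tool."""
    data = open(path, "rb").read()
    return re.findall(rb"[ -~]{%d,}" % min_len, data)[:200]  # cap the sample


def triage_attachment(path: str) -> str:
    sample = b"\n".join(extract_strings(path)).decode("ascii", "replace")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute as appropriate
        messages=[
            {
                "role": "system",
                "content": "You assist with malware triage. Given strings "
                           "extracted from an email attachment, list "
                           "suspicious indicators such as URLs, shell "
                           "commands, or packer artifacts.",
            },
            {"role": "user", "content": sample},
        ],
    )
    return response.choices[0].message.content


print(triage_attachment("suspicious_invoice.xls"))  # hypothetical file
```

Note that this only inspects static strings; it never executes the attachment, which keeps the triage step safe to run on untrusted files.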
OpenAI did not respond to Ars’ request for comment.