‘Evolution, not revolution’: OpenAI tracks growing misuse of AI by scammers and state actors

OpenAI mostly found that threat actors used ChatGPT to improve their existing tactics, rather than creating new ones.
Bad actors are increasingly weaving artificial intelligence into their existing operations, according to OpenAI’s latest threat report released Tuesday. While they are not inventing new tricks, the company’s investigators found they are becoming more efficient, blending multiple AI tools to plan their schemes.

Since it began publicly reporting on misuse in early 2024, OpenAI says it has disrupted more than 40 malicious networks that violated its usage policies. Tuesday’s report shows that many were using ChatGPT alongside other AI models, such as Anthropic’s Claude or DeepSeek, to streamline cyberattacks, phishing campaigns, and scam messaging, giving investigators a window into evolving tactics.


“It's new tools to do the same old job. For example, you can think of a Chinese covert influence operation generating social media posts rather than writing them manually. Or a scam network generating cold call SMS messages,” Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, told reporters in a call ahead of the report's release. “A year ago, we described that as evolution, not revolution, and that assessment still holds true, but that said in the world of AI, evolution moves fast.”

The report laid out some of the attempts OpenAI intercepted, including Chinese-language accounts using ChatGPT to adjust a phishing automation that would ultimately run on DeepSeek; another cluster of Chinese-language accounts exploring malicious code and automation targeting Taiwan’s semiconductor industry, U.S. academia and think tanks, and organizations critical of the Chinese Communist Party; and scam operations that appear to be based in Cambodia, Myanmar, and Nigeria using the platform to create fake investment websites and generate personas posing as financial advisors.


“Across every case that we disrupted … what we saw was incremental efficiency gains, not new capabilities,” Michael Flossman, who leads OpenAI’s threat intelligence engineering team, told reporters. “When these adversaries’ interactions crossed into the clearly malicious territory, we found that our safeguards worked as intended, and our models refused to respond. I think it's a reminder that AI can help bad actors at the margins, even if it doesn't net them something fundamentally new.”

Investigators found that some operators are adapting to public cues associated with AI-generated text, explicitly asking ChatGPT to remove em dashes from its output to erase signs that the content was machine-written. Officials also found that ChatGPT is now used to identify scams three times more often than it is used to create them.