ChatGPT in Influence Campaigns: Global AI Challenges
If there’s one thing that’s become clear in the world of artificial intelligence, it’s that advanced language models like ChatGPT are as powerful as they are vulnerable to misuse. As of June 5, 2025, OpenAI has once again shone a spotlight on how foreign actors—especially those linked to China, Russia, and Iran—are leveraging AI to fuel sophisticated influence campaigns, manipulate public opinion, and even conduct surveillance on a global scale[1][5]. While the technology behind generative AI continues to dazzle with its creative potential, the dark underbelly of these advancements is increasingly impossible to ignore.
Let’s face it: the idea that AI could supercharge disinformation campaigns has moved from the realm of science fiction to daily reality. OpenAI’s latest threat reports, released just days ago, detail how its tools have been co-opted by state-backed entities to generate everything from inflammatory social media posts to fake news articles and internal performance reviews for the teams running these operations[2][3][5]. The scale, speed, and subtlety of these campaigns are unprecedented—so much so that even seasoned cybersecurity experts are raising alarms.
The Rise of AI-Powered Influence Campaigns
Historical Context: A Brief Timeline
The use of AI in influence operations isn’t new. Researchers from cybersecurity firm Mandiant reported as far back as 2019 that AI-generated content was being deployed in politically motivated online campaigns[1]. But the pace and sophistication have accelerated dramatically. In 2023, these operations became more frequent and harder to detect. By 2024, OpenAI publicly disclosed its efforts to disrupt five state-affiliated operations across China, Iran, and North Korea, whose operators had used its tools for everything from debugging code and generating scripts to creating phishing campaign content[1].
Fast forward to 2025, and the problem has only deepened. OpenAI’s most recent reports, published in June 2025, reveal that at least ten covert influence operations have been dismantled in just the past few months. Four of these were likely backed by the Chinese government, with others linked to Russia, Iran, and North Korea[3][5].
How Are These Campaigns Being Run?
The tactics are as varied as they are concerning. One Chinese-linked group, dubbed “Sneer Review,” used ChatGPT to churn out short, biting comments on TikTok, X (formerly Twitter), Reddit, and Facebook[3][5]. The topics ranged from U.S. politics to a Taiwanese video game that allows players to “fight” the Chinese Communist Party. The group even used AI to generate replies to their own posts, creating the illusion of organic conversation. In some cases, the same actors posed as journalists and geopolitical analysts, using AI to craft biographies and analyze correspondence addressed to a U.S. Senator[5].
Another operation, labeled “Sponsored Discontent,” generated English-language comments attacking Chinese dissidents, as well as Spanish-language news articles critical of the U.S. These articles were published on Latin American news sites, sometimes as sponsored content[2]. Interestingly, this marked the first time a Chinese influence operation was found translating long-form articles into Spanish and targeting Latin American audiences[2].
The Technology Behind the Mischief
AI tools like ChatGPT make these campaigns possible by automating content creation at an unprecedented scale. A single operator can generate hundreds of posts, comments, or articles in minutes, each tailored to specific audiences and platforms. The AI can mimic human writing styles, translate content into multiple languages, and even analyze data to refine messaging strategies[5].
OpenAI’s Ben Nimmo, principal investigator on the company’s intelligence and investigations team, put it bluntly: “What we’re seeing from China is a growing range of covert operations using a growing range of tactics. Some of them combined elements of influence operations, social engineering, and surveillance.”[1][3]
Real-World Impact and Examples
The real-world consequences are already being felt. In one case, a Chinese-linked group used OpenAI’s models to create marketing materials and internal documents, effectively streamlining their propaganda efforts[3]. Another group developed surveillance tools designed to monitor social media for anti-China sentiment and protests, reporting back to Chinese authorities[5].
On platforms like X and Facebook, AI-generated posts and comments have been used to both support and decry hot-button issues, creating confusion and amplifying political divisions[1]. The goal? To stir up misleading political discourse and manipulate public opinion, both at home and abroad.
The Response: Detection, Disruption, and Dilemmas
OpenAI’s Countermeasures
OpenAI has not been idle. In the past three months, the company has banned accounts associated with these operations and disrupted their activities[3][5]. The company’s threat intelligence team uses a combination of machine learning models, human analysts, and cross-platform data sharing to identify and neutralize malicious actors.
But the challenge is immense. As Nimmo noted, “Without our view of their use of AI, we would not have been able to make the connection between the tweets and web articles.”[2] In other words, the very technology that enables these campaigns is also the key to detecting them.
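What does that detection work look like in practice? OpenAI has not published the internals of its pipeline, but one widely used signal is easy to illustrate: clusters of accounts posting near-duplicate text within a short window, much like the “Sneer Review” accounts replying to their own posts. The Python sketch below is a minimal, hypothetical illustration of that single signal; the data shape, function names, and thresholds are assumptions, not OpenAI’s actual tooling.

```python
# Minimal sketch: flag pairs of accounts that post near-duplicate text
# within a short time window -- one signal a coordinated-behavior
# detector might weigh. Data shapes and thresholds are hypothetical,
# not OpenAI's actual pipeline.
from dataclasses import dataclass
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


@dataclass
class Post:
    account: str
    text: str
    timestamp: float  # Unix seconds


def suspicious_account_pairs(posts: list[Post],
                             sim_threshold: float = 0.8,
                             window_seconds: float = 3600.0) -> list[tuple[str, str]]:
    """Return account pairs whose posts are near-duplicates published
    within `window_seconds` of each other."""
    vectors = TfidfVectorizer().fit_transform(p.text for p in posts)
    sims = cosine_similarity(vectors)

    flagged = set()
    for i, j in combinations(range(len(posts)), 2):
        different_accounts = posts[i].account != posts[j].account
        close_in_time = abs(posts[i].timestamp - posts[j].timestamp) <= window_seconds
        if different_accounts and close_in_time and sims[i, j] >= sim_threshold:
            flagged.add(tuple(sorted((posts[i].account, posts[j].account))))
    return sorted(flagged)


if __name__ == "__main__":
    demo = [
        Post("acct_a", "This game insults our country and should be banned.", 0.0),
        Post("acct_b", "This game insults our country and must be banned.", 600.0),
        Post("acct_c", "Looking forward to the weekend hike!", 900.0),
    ]
    print(suspicious_account_pairs(demo))  # expect: [('acct_a', 'acct_b')]
```

Real systems layer many such signals (posting cadence, shared infrastructure, account age) and still depend on the human analysts mentioned above to make the final call.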
The Broader Industry Response
Other tech companies are also stepping up. Platforms like Meta (Facebook, Instagram), X, and TikTok have introduced new policies and detection tools to flag and remove AI-generated content used for influence operations. However, the arms race between creators and detectors is ongoing, and the bad actors are constantly evolving their tactics.
Ethical and Policy Considerations
The misuse of AI for influence campaigns raises profound ethical questions. Should AI companies be held responsible for how their tools are used? What role should governments play in regulating AI-powered disinformation? These questions are at the heart of current debates in Washington, Brussels, and beyond.
From my perspective, as someone who’s followed AI for years, the line between innovation and exploitation has never been blurrier. The same technology that can help us write poetry, debug code, or analyze data can also be weaponized to undermine democracy and sow discord.
Comparing AI Influence Campaigns: Tactics, Targets, and Tools
| Campaign Name | Country of Origin | Platforms Targeted | Tactics Used | Notable Details |
|---|---|---|---|---|
| Sneer Review | China | TikTok, X, Reddit, Facebook | Short comments, fake dialogues, performance reviews | Targeted U.S. politics, Taiwanese game |
| Sponsored Discontent | China | X, Latin American news sites | English/Spanish articles, sponsored content | First use of long-form Spanish articles |
| Unnamed (Iran) | Iran | Multiple | Social media posts, phishing scripts | Debugging code, content for phishing |
| Unnamed (Russia) | Russia | Multiple | Social media posts, surveillance | Social engineering, surveillance |
Future Implications and Forward-Looking Insights
The trajectory is clear: AI-powered influence campaigns are here to stay, and they’re only going to get more sophisticated. As generative AI models become more advanced, the ability to create convincing fake content—text, images, even video—will only improve. This poses a significant challenge for democracies, media organizations, and tech companies alike.
But it’s not all doom and gloom. The same AI that powers these campaigns can also be used to detect and counter them. Advances in machine learning and natural language processing are enabling faster, more accurate identification of AI-generated content. The key will be collaboration—between tech companies, governments, and civil society—to stay one step ahead of the bad actors.
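One well-known family of techniques scores how statistically “surprising” a passage is to a language model: machine-generated text often has lower perplexity than typical human writing, though the signal is noisy and easy to defeat with light editing. The sketch below, which assumes the open-source Hugging Face transformers library, PyTorch, and the small GPT-2 model, shows how such a perplexity score might be computed; the threshold is illustrative only, not a validated detector.

```python
# Minimal sketch: perplexity under a small language model as one (weak)
# signal for machine-generated text. Assumes the Hugging Face
# `transformers` and `torch` packages; the threshold below is purely
# illustrative, not a validated detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))


def looks_machine_generated(text: str, threshold: float = 40.0) -> bool:
    """Heuristic flag: unusually low perplexity is *weak* evidence that
    the text came from a language model. Never sufficient on its own."""
    return perplexity(text) < threshold


if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog."
    print(f"perplexity={perplexity(sample):.1f}",
          "flagged" if looks_machine_generated(sample) else "not flagged")
```

On its own, a heuristic like this produces plenty of false positives and false negatives, which is exactly why the collaboration described above matters as much as the tooling.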
Interestingly enough, some experts believe that the rise of AI-generated disinformation could actually spur greater investment in media literacy and critical thinking education. After all, if people are better equipped to spot fake content, the impact of these campaigns will be blunted.
Conclusion
As of June 5, 2025, the use of ChatGPT and other generative AI tools by foreign propagandists is not just a hypothetical threat—it’s a daily reality. OpenAI’s latest reports reveal a growing wave of covert influence operations, with tactics ranging from social media manipulation to surveillance and social engineering. The challenge for the tech industry, policymakers, and society at large is to harness the power of AI for good, while guarding against its misuse.
But here’s the thing: the story is far from over. The arms race between AI creators and detectors is just beginning, and the stakes couldn’t be higher. As the technology evolves, so too must our strategies for detecting, disrupting, and ultimately defeating AI-powered disinformation campaigns.