China Uses ChatGPT for AI Propaganda
Imagine a world where the very tools designed to foster creativity and collaboration are hijacked to sow confusion, manipulate minds, and shape political narratives. That’s not a dystopian novel; it’s the reality unfolding today in the digital battlegrounds of artificial intelligence. As of June 6, 2025, OpenAI has pulled back the curtain on a series of sprawling, covert operations: Chinese state-linked groups have been using ChatGPT to spread propaganda, manipulate social media engagement, and target journalists and politicians in coordinated, AI-powered influence campaigns[1][3][4].
The revelation, detailed in OpenAI’s latest threat intelligence report, marks a significant escalation in the digital arms race. Gone are the days when misinformation campaigns relied solely on human trolls and clumsy botnets. Now, generative AI like ChatGPT is the weapon of choice for those seeking to influence public opinion at scale—and the results are as subtle as they are alarming[2][3][5].
The Anatomy of a Modern Influence Operation
OpenAI’s investigation uncovered at least ten covert influence operations abusing its generative AI tools. Four of these campaigns, the company believes, were likely orchestrated by the Chinese government. These weren’t just random acts of mischief; they were highly organized, multifaceted campaigns designed to sway public opinion, conduct surveillance, and even target specific individuals in politics and media[4][1][3].
The tactics were varied. State-linked actors used ChatGPT to generate social media posts, comments, and even internal documents and marketing materials. The content ranged from praising the Chinese Communist Party to decrying its critics, and from stirring controversy over US politics to amplifying narratives around Taiwan, including posts about a Taiwanese video game in which players fight the Chinese Communist Party[3][4]. The goal? To muddy the waters, create confusion, and drive engagement, both at home and abroad.
Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team, put it bluntly: “What we’re seeing from China is a growing range of covert operations using a growing range of tactics.” Nimmo noted that these operations “targeted many different countries and topics, even including a strategy game,” and combined elements of influence operations, social engineering, and surveillance across multiple platforms[3][4].
The Tools and Tactics: How AI Is Weaponized
The use of generative AI in influence campaigns is not new, but its sophistication is increasing. In 2023, cybersecurity firm Mandiant found that AI-generated content had been used in politically motivated online influence campaigns since 2019. By 2024, OpenAI had already disrupted five state-affiliated operations across China, Iran, and North Korea. The latest findings, however, indicate a worrying trend: these campaigns are getting more ambitious, more targeted, and harder to detect[3][4].
ChatGPT was used to produce not just text but also code and scripts for more complex operations, from debugging malware to generating content for phishing campaigns. The AI-generated posts appeared on mainstream platforms like TikTok, Facebook, Reddit, and X (formerly Twitter), blending seamlessly with legitimate user content[3][4]. The sheer volume and variety of posts, some supporting and others attacking hot-button issues, made it difficult for platforms and users alike to distinguish fact from fiction.
OpenAI has responded by banning accounts associated with these operations and disrupting their activities. In the last three months alone, the company has dismantled multiple campaigns, but the cat-and-mouse game continues. As Nimmo noted, “We’re seeing a growing range of tactics, and the actors are evolving quickly.”[4]
The Targets: Journalists, Politicians, and the Public
Who was in the crosshairs? The campaigns targeted not just the general public but also journalists and politicians, aiming to shape narratives at the highest levels. By flooding social media with AI-generated content, these operations sought to influence public opinion, discredit critics, and even gather intelligence through fake personas[1][3][4].
The use of AI for surveillance is particularly troubling. By creating believable fake profiles, state-linked actors could monitor discussions, identify key influencers, and even attempt to recruit unwitting participants into their campaigns. This blend of influence, social engineering, and surveillance represents a new frontier in digital espionage—one where the line between information and manipulation is increasingly blurred[3][4].
Historical Context: From Troll Farms to AI-Powered Propaganda
To understand the significance of these developments, it’s worth looking back at how influence operations have evolved. A decade ago, the term “troll farm” entered the lexicon, referring to groups of people paid to post inflammatory content online. These operations were effective but limited by human resources and the need for manual labor.
With the advent of generative AI, the game changed. Now, a single operator can generate thousands of posts, comments, and even entire personas in minutes. The content is more convincing, the scale is unprecedented, and the cost is a fraction of what it once was. As someone who’s followed AI for years, I’m struck by how quickly the landscape has shifted—and how unprepared many institutions remain for this new reality[3][4].
Current Developments and Industry Response
OpenAI’s announcement is the latest in a series of efforts to combat AI-powered misinformation. The company has invested heavily in threat intelligence, using advanced detection methods to identify and disrupt malicious actors. But the challenge is immense. As Nimmo explained, the actors are “evolving quickly,” and the tactics are becoming more sophisticated[4][2].
Other tech giants are also stepping up. Platforms like Facebook, TikTok, and X have introduced new policies and detection tools to identify and remove AI-generated content. However, the sheer volume of posts and the subtlety of the manipulation make it a constant game of whack-a-mole.
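To make that detection challenge concrete, here is a minimal, illustrative sketch of one kind of signal platforms can look for: near-identical posts appearing across multiple distinct accounts, a crude marker of coordinated posting. The account IDs, sample posts, and thresholds below are hypothetical, and real defenses layer many signals (account metadata, posting times, network structure) on top of anything this simple.

```python
# Toy sketch: flag near-duplicate posts shared across several distinct accounts,
# one crude signal of coordinated posting. Data and thresholds are hypothetical.
from itertools import combinations


def token_set(text: str) -> frozenset:
    """Lowercase word-set representation of a post."""
    return frozenset(text.lower().split())


def jaccard(a: frozenset, b: frozenset) -> float:
    """Jaccard similarity between two token sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


def flag_coordination(posts, threshold=0.8, min_accounts=3):
    """Return pairs of highly similar posts written by different accounts.

    posts: list of (account_id, text) tuples.
    Only reports anything if at least `min_accounts` distinct accounts are involved.
    """
    prepared = [(acct, token_set(text), text) for acct, text in posts]
    flagged = []
    for (a_acct, a_set, a_text), (b_acct, b_set, _) in combinations(prepared, 2):
        if a_acct != b_acct and jaccard(a_set, b_set) >= threshold:
            flagged.append((a_acct, b_acct, a_text))
    # Count distinct accounts involved in near-duplicate pairs.
    accounts = {acct for pair in flagged for acct in pair[:2]}
    return flagged if len(accounts) >= min_accounts else []


if __name__ == "__main__":
    sample = [
        ("acct_1", "The new policy is a disaster and everyone knows it"),
        ("acct_2", "the new policy is a disaster and everyone knows it!"),
        ("acct_3", "The new policy is a total disaster and everyone knows it"),
        ("acct_9", "Looking forward to the weekend hike"),
    ]
    for a, b, text in flag_coordination(sample):
        print(f"possible coordination: {a} <-> {b}: {text!r}")
```

Even a toy heuristic like this shows why the problem is a game of whack-a-mole: raise the threshold and paraphrased AI-generated posts slip through; lower it and ordinary users quoting the same headline get flagged.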
Interestingly enough, OpenAI’s report also highlights the use of AI in more traditional cyber operations, such as debugging code and generating scripts for phishing campaigns. This underscores the versatility of generative AI—and the need for a multi-pronged defense[3][4].
The Global Impact and Multinational Response
The implications of these campaigns are global. OpenAI’s findings suggest that Chinese state-linked operations targeted many different countries, not just the US. The goal appears to be not just to influence domestic audiences but to shape international perceptions and sow discord abroad[3][4].
Other nations are also involved. OpenAI has disrupted operations it believes originated in Russia, Iran, and North Korea. But the scale and sophistication of the Chinese campaigns stand out, reflecting Beijing’s growing ambitions in the digital sphere[3][4].
The international community is taking notice. Governments, tech companies, and civil society groups are calling for stronger regulations, better detection tools, and greater transparency. But progress is slow, and the incentives for bad actors remain strong.
Future Implications: The Arms Race Heats Up
Looking ahead, the use of AI in influence operations is only likely to increase. As generative AI becomes more powerful and accessible, the barriers to entry for malicious actors will continue to fall. The result? More sophisticated, more targeted, and more damaging campaigns.
The challenge for defenders is to stay one step ahead. This means investing in better detection tools, fostering international cooperation, and educating the public about the risks of AI-generated content. It also means rethinking the way we approach digital literacy and media consumption.
As someone who’s been in the trenches of AI reporting, I’m both excited and concerned. Excited by the potential of these technologies to transform society for the better. Concerned by the ease with which they can be turned to darker purposes.
Comparing Influence Operations: Then and Now
| Feature | Traditional Troll Farms (2010s) | AI-Powered Influence Ops (2020s) |
|---|---|---|
| Scale | Limited by human labor | Virtually unlimited |
| Cost | High (payroll, infrastructure) | Low (automation, AI tools) |
| Content Quality | Often crude, repetitive | Highly convincing, diverse |
| Detection Difficulty | Moderate | Very high |
| Target Scope | Mostly domestic | Global |
| Tactics | Manual posting, basic bots | AI-generated personas, scripts |
The Human Cost and Ethical Quandaries
Let’s face it: the real victims here are ordinary people trying to make sense of the world. When AI-generated content floods social media, it becomes harder to trust anything online. Journalists and politicians, already under siege from misinformation, face new challenges in distinguishing genuine engagement from orchestrated campaigns.
The ethical questions are profound. Should tech companies be held accountable for the misuse of their tools? How do we balance innovation with responsibility? And what role should governments play in regulating AI-powered influence operations?
These are not easy questions, but they are urgent. As we grapple with the implications of AI, we must also reckon with its darker side.
The Road Ahead: Solutions and Challenges
There are no silver bullets, but there are steps we can take. Tech companies must continue to invest in detection and disruption. Governments need to collaborate on international norms and regulations. And all of us—as citizens, consumers, and creators—must become more discerning about the content we encounter online.
OpenAI’s latest report is a wake-up call. The digital landscape is changing, and the stakes are higher than ever. By shining a light on these covert operations, we can begin to fight back—not just with technology, but with vigilance, cooperation, and a renewed commitment to truth.