Why OpenAI Can't Delete ChatGPT Chats Now
Imagine waking up to find your private conversations—deliberately deleted—suddenly pulled back from the digital void. That’s the reality for ChatGPT users right now, thanks to a blockbuster legal battle between OpenAI and The New York Times. As of June 6, 2025, OpenAI can’t simply wipe your chat history, even if you’ve asked for it. The reason? A federal judge’s order, stemming from a lawsuit that could reshape not only AI policy but the very boundaries of privacy and intellectual property in the digital age[1][3][4].
Let’s dig into what’s happening, why it matters, and what it could mean for the future of generative AI.
The Legal Showdown: OpenAI vs. The New York Times
In late 2023, The New York Times sued OpenAI and Microsoft, alleging that their AI systems—including ChatGPT—had unlawfully scraped and reproduced vast amounts of copyrighted material. The Times argued that OpenAI’s technology encouraged users to plagiarize its content, undermining the value of original journalism and creative work[3]. The case was allowed to proceed, with a federal judge agreeing that there was enough evidence to consider whether OpenAI had, in fact, induced infringement on a massive scale.
Fast forward to May 2025, and Judge Ona T. Wang issued a bombshell order: OpenAI must preserve and segregate all ChatGPT output logs—even those that users have requested to be deleted—to ensure that The New York Times (and other publishers) can accurately track alleged copyright violations[3][4]. The judge acknowledged the privacy concerns but justified the decision by noting the “significant” volume of deleted conversations, which could obscure evidence of wrongdoing.
OpenAI, for its part, is not taking this lying down. The company has appealed the ruling, arguing in public statements and FAQs that the order “compromises our users’ privacy” and “sets a bad precedent.” CEO Sam Altman took to X (formerly Twitter) to voice his concerns, writing that the decision “abandons long-standing privacy norms and weakens privacy protections”[3]. Interestingly, the order does not affect ChatGPT Enterprise or ChatGPT Edu customers, whose data remains subject to different privacy agreements[3].
Why This Matters: Privacy, Copyright, and the Future of AI
At stake is a fundamental question: Who owns the data generated by AI, and who controls its use? For users, the court order means that your deleted chats—perhaps containing sensitive personal information, business secrets, or even creative ideas—could be stored indefinitely as part of a legal discovery process. For publishers like The New York Times, it’s about protecting their intellectual property and ensuring they’re compensated for their work.
This case is part of a broader wave of litigation against AI companies. Google, Anthropic, and others are also facing lawsuits from creators who claim their content was used without permission to train AI models. The tech companies argue that their use falls under “fair use” copyright law, which allows limited use of copyrighted material for purposes like research and education[3]. Creators, on the other hand, contend that AI companies are profiting from stolen content, threatening their livelihoods.
The Data Dilemma: Balancing Privacy and Evidence
The judge’s order raises thorny questions about data retention and user rights. Normally, when you delete a ChatGPT conversation, OpenAI removes it from your account and, presumably, its servers. Now, those conversations must be preserved—segregated, but not deleted—potentially for years, until the lawsuit is resolved[3][4]. The judge even asked OpenAI if there was a way to anonymize the data to address privacy concerns, but the company has not publicly detailed any such measures.
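To make the anonymization question concrete, here is a minimal sketch of one common approach: pseudonymizing preserved logs by replacing the user identifier with a keyed hash, so records can still be grouped per user without revealing who that user is. This is purely illustrative—nothing here reflects OpenAI’s actual systems, and every field and function name is invented for the example.

```python
import hashlib
import hmac

# Hypothetical sketch of log pseudonymization before legal preservation.
# In practice the key would be stored and rotated separately from the logs.
SECRET_KEY = b"example-key-kept-outside-the-log-store"

def pseudonymize(record: dict) -> dict:
    """Swap the raw user ID for a keyed hash.

    The hash is stable (same user -> same pseudonym), so preserved logs
    can still be linked per user, but it cannot be reversed without the key.
    """
    digest = hmac.new(SECRET_KEY,
                      record["user_id"].encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return {
        "user_hash": digest,                   # pseudonym replaces the identifier
        "timestamp": record["timestamp"],
        "output_text": record["output_text"],  # the disputed content must be kept
    }

log = {"user_id": "u-12345",
       "timestamp": "2025-06-06T12:00:00Z",
       "output_text": "..."}
preserved = pseudonymize(log)
```

The tension the sketch exposes is the legal one: the disputed output text is exactly what a court wants preserved verbatim, so pseudonymizing the identifier protects *who* asked, but not *what* was said.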
This is not just a technical issue; it’s a human one. Users trust OpenAI with their thoughts, questions, and sometimes deeply personal information. The company’s ability to honor deletion requests is a cornerstone of that trust. With that trust now in question, some users may think twice before sharing sensitive information with AI systems.
Real-World Implications: From Journalism to Everyday Users
The impact of this case extends far beyond the courtroom. For journalists and publishers, it’s a fight for survival. The New York Times and others argue that AI-generated summaries, articles, and even direct reproductions of their work are undermining their business models. If OpenAI is forced to compensate publishers, it could set a precedent for the entire industry.
For everyday users, the implications are more personal. If your deleted chats are preserved, what does that mean for your privacy? Could they be subpoenaed in future lawsuits? And what happens if there’s a data breach? These are questions that don’t have easy answers, but they’re becoming increasingly urgent as AI becomes more integrated into our daily lives.
Historical Context: The Evolution of AI and Copyright
This isn’t the first time that technology has outpaced the law. In the early days of the internet, courts struggled to adapt copyright law to new forms of digital content. The rise of search engines, social media, and streaming services all led to landmark legal battles. Now, generative AI is the latest frontier.
Historically, courts have often sided with technology companies, citing the importance of innovation and the public good. But the scale and sophistication of today’s AI models—capable of generating text, images, and even code that rivals human output—have forced a reckoning. The outcome of the OpenAI case could set a new standard for how AI companies operate, both in the U.S. and around the world.
Current Developments and Industry Reactions
As of June 6, 2025, the legal battle is ongoing. OpenAI’s appeal is pending, and the court has not yet ruled on the merits of the copyright claims. Meanwhile, other publishers and creators are watching closely, ready to join the fray if the precedent is favorable.
Industry experts are divided. Some argue that the court’s order is necessary to protect intellectual property and ensure accountability. Others worry that it could stifle innovation and make it harder for AI companies to operate. One thing is clear: the stakes are high, and the outcome will have far-reaching consequences.
Future Implications: What’s Next for AI and Privacy?
Looking ahead, this case could prompt a wave of new legislation and regulation. Lawmakers in the U.S. and Europe are already debating how to govern AI, balancing the need for innovation with the protection of individual rights and creative works. The OpenAI case could accelerate those efforts, leading to stricter rules on data retention, copyright, and user privacy.
For AI companies, the challenge will be to navigate this new landscape while maintaining user trust. That might mean investing in better privacy protections, developing new ways to anonymize data, or even rethinking how they train their models. For users, it’s a reminder to be mindful of what you share with AI—and to advocate for your rights in an increasingly digital world.
Comparing AI Data Practices: OpenAI, Google, and Others
Here’s a quick look at how some of the biggest players in AI handle data retention and privacy, especially in light of recent legal developments:
| Company | Data Retention Policy (User Chats) | Impact of Recent Lawsuits | Privacy Protections |
|---|---|---|---|
| OpenAI | Must preserve all logs (court order) | Copyright lawsuit ongoing | Appealing, citing privacy concerns[3] |
| Google | Varies by product | Facing similar lawsuits | Strong encryption, controls |
| Anthropic | Not publicly detailed | Facing lawsuits | Privacy-focused messaging |
| Microsoft | Varies by product | Named in lawsuit | Enterprise-grade security |
As you can see, OpenAI is now in a unique position, forced to retain data it would otherwise delete. Other companies may soon face similar demands as the legal landscape evolves.
Voices from the Industry
Let’s hear from the experts and those directly involved:
- Sam Altman, CEO of OpenAI: “This fundamentally conflicts with the privacy commitments we have made to our users. It abandons long-standing privacy norms and weakens privacy protections.”[3]
- Judge Ona T. Wang: Ordered OpenAI to preserve and segregate all ChatGPT output logs, citing the “significant” volume of deleted conversations as a justification.[3][4]
- The New York Times: Argues that OpenAI’s technology has induced users to plagiarize its materials, and seeks to track alleged copyright violations.[3]
Personal Perspective: Trust in the Age of AI
As someone who’s followed AI for years, I’m struck by how quickly the conversation has shifted. What started as a debate about the capabilities of large language models has become a full-blown legal and ethical showdown. The trust between users and AI companies is being tested like never before.
Let’s face it: we all want the benefits of AI—smarter assistants, better search, and creative tools that push the boundaries of what’s possible. But we also want to know that our data is safe, and that our rights are respected. The OpenAI case is a reminder that, as AI becomes more powerful, so too must our safeguards.
Conclusion: A Pivotal Moment for AI
The battle between OpenAI and The New York Times is more than just a lawsuit—it’s a defining moment for the future of AI. The outcome will shape how data is collected, used, and protected in the age of generative AI. It will influence how creators are compensated, how users’ privacy is safeguarded, and how innovation is balanced with accountability.
For now, OpenAI can’t delete your ChatGPT chats, even if you ask. Whether that’s a temporary hiccup or a permanent shift remains to be seen. But one thing is certain: the rules of the game are changing, and everyone—users, creators, and tech companies alike—will need to adapt.