AI Pioneer Bengio Reveals $30M Nonprofit for AI Safety
Let’s face it: artificial intelligence is everywhere. It’s in our phones, our cars, our workplaces—even our refrigerators are getting smarter. But as AI systems grow more powerful, so does the urgency to ensure they don’t outpace our ability to control them. Enter Yoshua Bengio, the world’s most cited computer scientist and a pioneer in deep learning. On June 3, 2025, Bengio made waves by launching a $30 million nonprofit, LawZero, dedicated to reimagining AI safety from the ground up[1].
This isn’t just another research lab or think tank. LawZero represents a radical departure from the approach taken by tech giants like OpenAI and Google, who are racing to develop artificial general intelligence (AGI)—systems that can perform nearly any task a human can. While these companies tout AGI’s potential to solve climate change or cure diseases, Bengio is sounding the alarm: unchecked AI agency could be catastrophic. “If we get an AI that gives us the cure for cancer, but also maybe another version of that AI goes rogue and generates wave after wave of bio-weapons that kill billions of people, then I don’t think it’s worth it,” he recently told TIME[1]. It’s a stark warning, but one that’s increasingly echoed by policymakers and researchers worldwide.
The Current Landscape: AI’s Promise and Peril
AI’s capabilities are advancing at a breakneck pace. Hundreds of billions of dollars are pouring into AI research and development every year, with companies like OpenAI, Google DeepMind, and Meta pushing the boundaries of what’s possible[3]. The dream: machines that can think, learn, and act autonomously—virtual employees, digital assistants, and even autonomous researchers. The reality: as these systems become more autonomous, the risks of misuse, unintended consequences, and loss of human control grow.
Recent evaluations of OpenAI’s o1 model raised its risk rating from “low” to “medium” for certain hazards, one step below the level many experts consider unacceptable[3]. National security agencies are increasingly concerned that the scientific knowledge embedded in these systems could be weaponized by malicious actors. “We still don’t know how to make sure they won’t turn against us,” Bengio notes[3].
The Birth of LawZero: A Safer Path Forward
So, what’s different about LawZero? Bengio’s nonprofit is built around the concept of “Scientist AI”—systems designed to explain the world from observations, rather than take actions within it[1][4]. Unlike agentic AI, which can plan, act, and pursue goals autonomously, Scientist AI focuses on modeling, reasoning, and explaining data. It’s a fundamentally different architecture, built for transparency and safety.
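To make the distinction concrete, here is a minimal Python sketch of the two interfaces. Everything in it (the class names, the `probability` method, the stub logic) is illustrative shorthand for the idea, not LawZero’s actual design or code.

```python
from dataclasses import dataclass

@dataclass
class ScientistAI:
    """Answers questions about the world; it never acts on it."""

    def probability(self, statement: str, evidence: list[str]) -> float:
        # A real system would return a calibrated probability that
        # `statement` is true given `evidence`; stubbed out here.
        return 0.5  # placeholder: maximal uncertainty without a model

@dataclass
class AgenticAI:
    """Plans and executes actions in pursuit of a goal."""
    goal: str

    def act(self, observation: str) -> str:
        # Chooses a real-world action; this is exactly the capability
        # Scientist AI deliberately omits.
        return f"take some action that advances: {self.goal}"

# The safety argument in one line: ScientistAI exposes no act() method,
# so there is no code path by which it can pursue goals in the world.
```

The point is architectural rather than behavioral: safety comes from what the system cannot do, not from instructions about what it should not do.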
LawZero’s approach is rooted in the precautionary principle: if we don’t fully understand the risks, we should err on the side of caution. The organization’s mission is to develop AI that is “safe by design,” with explicit mechanisms for uncertainty modeling and robust guardrails against overconfident predictions[4]. The goal? To accelerate scientific progress—including AI safety research—without the existential risks posed by agentic systems.
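What might “robust guardrails against overconfident predictions” look like in practice? One standard technique is to query an ensemble of independently trained models and abstain when they disagree. The sketch below assumes that approach; the hard-coded numbers stand in for real model outputs.

```python
import statistics

def ensemble_predict(member_probs: list[float], max_spread: float = 0.2):
    """Average an ensemble's probabilities, abstaining (returning None)
    when the members' spread (population std dev) exceeds max_spread."""
    if statistics.pstdev(member_probs) > max_spread:
        return None  # too uncertain: refuse to commit to an answer
    return statistics.mean(member_probs)

# Agreement yields a usable answer; disagreement yields explicit abstention.
print(ensemble_predict([0.81, 0.78, 0.84]))  # ~0.81
print(ensemble_predict([0.10, 0.55, 0.95]))  # None (abstain)
```

An abstention is itself a safety feature: a system that can say “I don’t know” is harder to push into confidently wrong, and potentially dangerous, territory.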
Historical Context: From Deep Learning to Deep Concern
Bengio’s journey is emblematic of the broader AI community’s evolving priorities. In the early 2010s, the focus was on breakthroughs in deep learning, which powered everything from speech recognition to image classification. Bengio, along with Geoffrey Hinton and Yann LeCun, is widely credited with driving these advances. But as the technology matured, so did the recognition of its dual-use potential.
In 2023, Bengio and other leading AI researchers, including OpenAI CEO Sam Altman, signed a statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”[1]. This marked a turning point in the AI safety movement, galvanizing policymakers and researchers to take action.
Global Momentum: The International AI Safety Initiative
The launch of LawZero comes at a pivotal moment. In January 2025, the UK government published the world’s first comprehensive International AI Safety Report, synthesizing current literature on the risks and capabilities of advanced AI systems[2]. Commissioned by the UK and overseen by Bengio as Chair, the report was developed in collaboration with 30 nations and an expert advisory panel[2].
The report’s findings underscore the need for international cooperation and evidence-based policymaking. “We should allocate far more resources towards advancing AI safety,” Bengio emphasized in a recent TED Talk[3]. The report also highlights the growing consensus that AI safety is not just a technical challenge, but a global governance issue.
Scientist AI vs. Agentic AI: A Side-by-Side Comparison
To better understand LawZero’s approach, let’s compare Scientist AI with the agentic AI systems being developed by major tech companies.
| Feature | Scientist AI (LawZero) | Agentic AI (OpenAI, Google, etc.) |
|---|---|---|
| Primary Function | Model, explain, and reason | Plan, act, and pursue goals |
| Autonomy | None (non-agentic) | High (full agency) |
| Safety Mechanism | Built-in uncertainty modeling, transparency | Guardrails, but agency remains |
| Risk Profile | Low (predictive, non-agentic) | High (potential for misuse, loss of control) |
| Use Case | Scientific research, safety research | Virtual employees, digital assistants |
| Design Philosophy | Safe by design, precautionary | Utility-driven, rapid deployment |
This table illustrates the fundamental differences between the two approaches. While agentic AI offers powerful capabilities, it also introduces significant risks—risks that LawZero seeks to mitigate through its focus on non-agentic, explainable AI[4].
Real-World Applications and Impacts
LawZero’s approach isn’t just theoretical. Scientist AI could revolutionize scientific research by assisting humans in modeling complex systems, generating hypotheses, and interpreting data. Imagine a system that helps researchers understand climate change, design new medicines, or unravel the mysteries of the human brain—without ever taking an autonomous action that could have unintended consequences.
Moreover, Scientist AI could serve as a guardrail against the deployment of risky agentic systems. By providing a safer alternative, LawZero aims to shift the trajectory of AI development toward more responsible innovation[4].
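Here is a rough sketch of how that guardrail idea could work: a non-agentic predictive model estimates the probability that an agent’s proposed action causes harm, and the action is blocked above a risk threshold. The `harm_probability` function and the threshold value are hypothetical placeholders, not a real LawZero API.

```python
HARM_THRESHOLD = 0.01  # illustrative risk tolerance, not a real policy

def harm_probability(action: str) -> float:
    # Stand-in for a Scientist-AI-style predictive model that scores
    # the chance a proposed action leads to harm. Toy heuristic only.
    return 0.9 if "synthesize" in action else 0.001

def guarded_execute(action: str) -> str:
    """Run the agent's proposed action only if the predicted harm
    probability stays under the threshold."""
    if harm_probability(action) > HARM_THRESHOLD:
        return f"BLOCKED: {action!r} exceeds the risk threshold"
    return f"executed: {action!r}"

print(guarded_execute("summarize today's lab results"))
print(guarded_execute("synthesize novel pathogen candidates"))
```

Note the division of labor: the model doing the vetting never chooses actions itself; it only scores them.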
Different Perspectives: The Debate Over AI Safety
Not everyone agrees with Bengio’s approach. Some argue that the risks of agentic AI are overstated and that the benefits, such as solving global challenges like climate change and disease, outweigh the potential downsides. Executives like Google DeepMind CEO Demis Hassabis point to AGI’s transformative potential as a reason to push forward[1].
But Bengio and his allies counter that the stakes are simply too high to gamble. “We need to be humble about what we don’t know,” he says[3]. The debate is likely to intensify as AI systems become more capable and more autonomous.
The Future of AI Safety: What’s Next?
Looking ahead, LawZero’s launch signals a new chapter in the AI safety movement. With $30 million in funding and the backing of leading researchers, the nonprofit is well-positioned to drive meaningful change. But challenges remain. Building safe, explainable AI systems is a complex technical problem, and international coordination will be essential to ensure that safety standards are upheld across borders.
As someone who’s followed AI for years, I’m struck by how far we’ve come—and how much further we have to go. The launch of LawZero is a bold step, but it’s just the beginning. The real test will be whether the broader AI community, policymakers, and the public can come together to prioritize safety over speed.
Conclusion
Yoshua Bengio’s LawZero is more than a research initiative—it’s a call to action. By rethinking AI safety from first principles and championing a non-agentic, explainable approach, LawZero offers a safer path forward in an era of unprecedented technological change. As the world grapples with the dual promise and peril of AI, initiatives like LawZero remind us that innovation and responsibility must go hand in hand.