New York's Pioneering AI Safety Law Targets Risks

New York's legislature has passed the RAISE Act, groundbreaking AI safety legislation addressing risks from the frontier models built by the largest tech companies.

New York just made history. On June 13, 2025, the state legislature passed the RAISE Act, short for the Responsible AI Safety and Education Act, a first-of-its-kind bill specifically designed to regulate the safety of advanced AI systems developed by the world's largest tech companies[1][3][4]. This isn't just another bureaucratic hurdle for Silicon Valley. It's a bold attempt to address the real and growing risks posed by next-generation AI, the kind that powers everything from chatbots to potentially dangerous automated systems. With AI's rapid advancement, New York is stepping in before it's too late.

Why Now? The Context Behind the RAISE Act

Let’s face it—AI isn’t just the future anymore. It’s happening now, and it’s changing everything. Over the past five years, companies like OpenAI, Google, Anthropic, DeepSeek, and Meta have pushed the boundaries of what’s possible, training massive models on computing clusters that cost hundreds of millions of dollars[1][3][4]. These so-called “frontier models” are capable of generating human-like text, designing molecules, and even writing code. But with great power comes great risk—something that’s been top of mind for policymakers, researchers, and even the public.

The RAISE Act didn’t come out of nowhere. It’s the result of mounting concerns over AI’s potential for misuse—think deepfakes, automated disinformation, and even AI-assisted cybercrime or bioweapons design[4]. Until now, most regulations have been light-touch, relying on voluntary commitments from tech companies. But as the stakes have risen, so has the urgency for real oversight.

Inside the RAISE Act: What’s Actually in the Bill?

So, what does the RAISE Act actually require? Here’s a breakdown:

  • Targeted Regulation: The law focuses on the biggest players: companies that have spent more than $100 million on computing resources to train their AI models and that make those models available to New York residents[3][4]. (A simplified version of this scope test is sketched in code just after this list.)
  • Transparency and Reporting: Major AI developers must publish detailed safety and security assessments, including potential misuse scenarios, technical vulnerabilities, and incidents involving unsafe behavior or data breaches[1][3].
  • Incident Disclosure: If a dangerous AI model is stolen or starts behaving in a risky way, companies are required to report the incident to state authorities within 72 hours[3][4].
  • Penalties for Noncompliance: The New York Attorney General can impose civil penalties of up to $10 million for a first violation and up to $30 million for subsequent violations[1][3][4].
  • Balancing Innovation and Safety: The law is designed to shield smaller startups from undue regulation, focusing only on the largest, most impactful models[1].
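
To make the scope and penalty rules concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the names (FrontierModel, is_covered, max_penalty_usd), the boolean scope test, and the flat penalty caps are simplifications, not the bill's actual legal language.

```python
from dataclasses import dataclass

# Illustrative only: a toy model of the RAISE Act's scope and penalty
# rules as summarized above. Names and thresholds are simplified
# assumptions, not text from the bill.

COMPUTE_THRESHOLD_USD = 100_000_000   # >$100M in training compute triggers coverage
FIRST_VIOLATION_CAP_USD = 10_000_000
REPEAT_VIOLATION_CAP_USD = 30_000_000

@dataclass
class FrontierModel:
    developer: str
    training_compute_cost_usd: int
    available_to_ny_residents: bool

def is_covered(model: FrontierModel) -> bool:
    """Rough scope test: big training spend AND deployed to New York."""
    return (
        model.training_compute_cost_usd > COMPUTE_THRESHOLD_USD
        and model.available_to_ny_residents
    )

def max_penalty_usd(prior_violations: int) -> int:
    """Penalty cap rises after the first violation."""
    return FIRST_VIOLATION_CAP_USD if prior_violations == 0 else REPEAT_VIOLATION_CAP_USD

if __name__ == "__main__":
    model = FrontierModel("ExampleLab", 250_000_000, True)
    print(is_covered(model))     # True: over threshold and available in NY
    print(max_penalty_usd(0))    # 10000000
    print(max_penalty_usd(2))    # 30000000
```

The point is the shape of the rule: coverage turns on training spend plus availability to New Yorkers, and penalty exposure escalates for repeat violations.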

Who’s Affected? A Look at the Players

The RAISE Act is squarely aimed at the global giants of AI: OpenAI (ChatGPT, GPT-4), Google (Gemini), Anthropic (Claude), Meta (Llama), and international heavyweights like DeepSeek and Alibaba[3][4]. These companies are at the forefront of AI research and development, and their models are already being used by millions, if not billions, of people worldwide.

Here’s a quick comparison of how the RAISE Act might impact these major AI labs:

| Company   | Key AI Model(s)  | Estimated Training Cost (USD) | RAISE Act Applicable? |
|-----------|------------------|-------------------------------|-----------------------|
| OpenAI    | GPT-4, GPT-4o    | $100M+                        | Yes                   |
| Google    | Gemini           | $100M+                        | Yes                   |
| Anthropic | Claude           | $100M+                        | Yes                   |
| Meta      | Llama 3, Llama 4 | $100M+                        | Yes                   |
| DeepSeek  | DeepSeek-R1      | $100M+                        | Yes                   |
| Alibaba   | Qwen             | $100M+                        | Yes                   |

Industry Reaction: Praise, Pushback, and Everything in Between

Not surprisingly, the reaction has been mixed. On one hand, advocates for AI safety and responsible development are cheering. “Would we let automakers sell a car with no brakes? Of course not. So why would we let developers release incredibly powerful AI tools without basic safeguards in place?” said one New York lawmaker[4]. It’s a compelling analogy—and one that’s resonated with many.

On the other hand, industry groups like BSA (the Software Alliance) have raised strong concerns. They argue that the new law creates an “extensive and unworkable third-party audit regime” and could lead to “fragmented enforcement through private lawsuits”[2]. Some worry that New York’s approach could set a precedent for other states, creating a patchwork of regulations that stifles innovation and increases costs.

But New York officials aren’t backing down. They’re betting that the state’s economic clout—home to Wall Street, major media, and a huge tech workforce—will force compliance, no matter how much the industry grumbles[1].

Real-World Impact: What This Means for AI Developers and Users

For the average person, the RAISE Act might not change much—at least not right away. But for large AI developers, it’s a wake-up call. They’ll need to invest more in safety testing, incident response, and transparency. And if something goes wrong—say, a model is hacked or starts behaving unpredictably—they’ll have to own up to it, fast.
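
What does owning up to it fast look like operationally? Here is a hypothetical sketch of the kind of record a compliance team might keep for each incident. The field names and the 72-hour window check are assumptions for illustration, not a schema from the RAISE Act itself.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical internal compliance record for tracking safety incidents.
# Field names and the 72-hour check are illustrative assumptions,
# not requirements copied from the bill's text.

DISCLOSURE_WINDOW = timedelta(hours=72)

@dataclass
class SafetyIncident:
    model_name: str
    description: str            # e.g., "model weights exfiltrated"
    discovered_at: datetime
    reported_at: datetime | None = None

    def disclosure_deadline(self) -> datetime:
        return self.discovered_at + DISCLOSURE_WINDOW

    def reported_on_time(self) -> bool:
        return (
            self.reported_at is not None
            and self.reported_at <= self.disclosure_deadline()
        )

if __name__ == "__main__":
    incident = SafetyIncident(
        model_name="example-model",
        description="model weights exfiltrated",
        discovered_at=datetime(2025, 7, 1, 9, 0),
        reported_at=datetime(2025, 7, 2, 12, 0),
    )
    print(incident.reported_on_time())  # True: reported within 72 hours
```

The design choice worth noting: the clock starts at discovery, so the record has to capture when a problem was found, not just when it was fixed.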

For smaller startups and open-source projects, the law is designed to avoid unnecessary red tape. Only the biggest, most resource-intensive models are in scope, which means most innovators can keep doing what they do best—building and experimenting[1].

Historical Context: How We Got Here

A decade ago, AI was mostly the domain of academia and a handful of big tech companies. Fast forward to today, and AI is everywhere—from your phone to your fridge. As the technology has become more powerful, so have the risks. High-profile incidents, like deepfakes used in political campaigns or AI-generated misinformation, have made headlines and fueled public anxiety.

Governments around the world have been scrambling to keep up. The EU's AI Act, formally adopted in 2024, was a landmark, but it's broader and more general than New York's new bill. The RAISE Act is among the first measures in the U.S. to specifically target "frontier models": the most advanced and potentially risky AI systems[1][3].

Future Implications: What’s Next for AI Regulation?

The RAISE Act is just the beginning. If signed by Governor Kathy Hochul—which seems likely, given the broad support in the legislature—it could set a new standard for AI safety in the U.S. and beyond[3]. Other states are watching closely, and federal lawmakers may take note as well.

The big question: will this law actually make AI safer? Only time will tell. But one thing's for sure: the era of pure self-regulation is ending. As someone who's followed AI for years, I think this could be a turning point, not just for New York, but for the entire tech industry.

Different Perspectives: Balancing Safety and Innovation

Not everyone agrees on how to regulate AI. Some argue that strict rules will slow down progress and push innovation overseas. Others believe that without clear guardrails, we’re risking disaster. The RAISE Act tries to strike a balance: it’s tough on the biggest players, but leaves room for smaller innovators to thrive[1][2].

Interestingly enough, this debate isn’t new. Every major technology—from cars to the internet—has gone through similar growing pains. The difference now is the speed and scale of AI’s impact.

Real-World Applications and Impacts

The RAISE Act could have ripple effects far beyond New York. If other states or countries follow suit, we could see a new era of AI governance—one where safety and transparency are baked into the development process from day one. For consumers, that could mean more trustworthy AI tools and fewer nasty surprises.

For companies, it means more work—but also a chance to build trust and demonstrate leadership. And for policymakers, it’s an opportunity to show that they can keep up with the pace of technology.

Conclusion: A New Chapter for AI Safety

The RAISE Act is a bold step forward—one that could redefine how we think about AI safety and regulation. It’s not perfect, and there will be challenges along the way. But by focusing on the biggest risks and the most powerful models, New York is setting a new standard for responsible AI development.

As AI continues to evolve, so must our approach to governing it. The RAISE Act is proof that, when it comes to technology this powerful, we can’t afford to wait and see what happens. We have to act now—before it’s too late.
