Agentic AI in Cybersecurity: Emerging Concerns

Agentic AI in cybersecurity offers new autonomy and speed, but introduces risks. Discover how to navigate these challenges.

If you’ve been paying attention to the AI landscape in 2025, you’ve probably heard the term “agentic AI” thrown around, sometimes with admiration, but increasingly with a hint of dread. While these systems promise to supercharge everything from cybersecurity to customer service, they also open a Pandora’s box of risks that are only now coming into sharp focus. The recent flurry of headlines and expert warnings paints a picture of an industry racing ahead, sometimes recklessly, with technology that is as powerful as it is unpredictable. As someone who has tracked AI trends for years, I’d argue that agentic AI is not just a new tool but a new kind of teammate: one that can make decisions, act autonomously, and, yes, get hacked or manipulated in ways we’re only beginning to understand[2][3].

So, what exactly is agentic AI, and why is it suddenly at the center of so much cybersecurity hand-wringing? Let’s break it down.

What Is Agentic AI and Why Does It Matter?

Agentic AI refers to artificial intelligence systems that operate with a high degree of autonomy. Unlike traditional AI, which follows strict scripts or requires constant human input, agentic AI can choose its own models, pass data between systems, and make decisions—sometimes without any human oversight at all[2]. This shift from “tool” to “teammate” is transforming industries, especially cybersecurity, where speed and adaptability are everything[3][4].
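
To make that “tool to teammate” shift concrete, here is a minimal sketch of the perceive-reason-act loop agentic systems run. The class and method names are illustrative assumptions, not any real framework’s API:

```python
# A minimal sketch of an agentic loop: the agent perceives its environment,
# reasons about what to do, and acts, without a human in the loop.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Observation:
    source: str
    payload: dict

class SecurityAgent:
    def perceive(self) -> Observation:
        # Stand-in for pulling from sensors, logs, or data feeds.
        return Observation(source="auth-log", payload={"failed_logins": 42})

    def reason(self, obs: Observation) -> str:
        # Stand-in for an LLM or policy model choosing the next step.
        if obs.payload.get("failed_logins", 0) > 20:
            return "block_source_ip"
        return "no_action"

    def act(self, decision: str) -> None:
        # Stand-in for calling firewalls, ticketing systems, etc. via APIs.
        print(f"executing: {decision}")

agent = SecurityAgent()
agent.act(agent.reason(agent.perceive()))
```

Every step in that loop is a place where an attacker can intervene, which is exactly what the risk frameworks below map out.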

But here’s the catch: with great autonomy comes great vulnerability. As these agents become more embedded in critical workflows—think automated threat detection, incident response, and system configuration—they also become prime targets for cyberattacks. And the risks aren’t just theoretical. Recent reports and conference discussions, like those at Infosecurity Europe 2025, have highlighted a growing list of concerns, from data poisoning and prompt injection to model manipulation and API-based exploits[2][3].

The Four Layers of Agentic AI Cybersecurity Risk

A report published on June 7, 2025 lays out a clear framework for understanding these risks. Researchers have identified four layers of agentic AI infrastructure that are particularly vulnerable:

  1. Perception Layer

    • Description: This is where the agent observes the environment using cameras, sensors, and data feeds.
    • Risks: Data poisoning is a top concern. If bad actors tamper with the data the agent relies on, even small changes can throw off its learning process and decision-making. Imagine a security camera feed being subtly altered to hide an intruder, except it’s not just one camera but potentially thousands, all feeding into an AI agent[1]. (See the sanity-check sketch after this list.)
    • Real-world Example: In a recent incident, a manufacturing plant’s agentic AI system was tricked into ignoring critical safety alerts after its sensor data was manipulated.
  2. Reasoning Module

    • Description: This is the agent’s “brain,” where it makes decisions based on the data it receives.
    • Risks: Vulnerabilities here can lead to incorrect or dangerous decisions. If an adversary exploits weaknesses in the model or its supporting infrastructure, they can manipulate the agent’s reasoning. Poor cyber hygiene—like failing to patch known vulnerabilities—only makes things worse[1].
    • Real-world Example: A financial services firm found that its agentic AI had misclassified transactions after its reasoning module was compromised, leading to costly errors.
  3. Action Module

    • Description: This is where the agent translates decisions into real-world actions.
    • Risks: Hackers can inject malicious prompts or hijack commands, forcing the agent to perform unauthorized functions. This could mean anything from shutting down critical systems to leaking sensitive data[1].
    • Real-world Example: In one high-profile case, a retail chain’s agentic AI was manipulated to grant excessive discounts to fraudulent customers.
  4. API Layer

    • Description: Agentic AI systems are typically API-driven, connecting to other agents and systems via a web of APIs.
    • Risks: As the use of agentic AI grows, so does the API attack surface. Every new agent spawns more APIs, and every new API is a potential entry point for attackers. This isn’t just about new AI-specific threats—it’s also about all the old API vulnerabilities, like SQL injection, that now apply to agentic AI infrastructure[3].
    • Real-world Example: A cloud provider’s agentic AI was compromised through an API vulnerability, allowing attackers to access sensitive customer data.
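
The perception-layer risk lends itself to a concrete defense: validate readings before the agent trusts them. Below is a minimal sketch, assuming hypothetical sensor bounds and a three-sigma drift threshold, that quarantines suspect readings instead of feeding them to the agent:

```python
# A minimal perception-layer defense: validate incoming sensor readings
# against expected ranges and flag sudden drift before the agent learns
# from or acts on them. Bounds and thresholds are assumptions.

from statistics import mean, stdev

EXPECTED_RANGE = (10.0, 90.0)   # physically plausible bounds, assumed
DRIFT_SIGMAS = 3.0              # distance from recent history that counts as suspect

def is_suspect(reading: float, history: list[float]) -> bool:
    lo, hi = EXPECTED_RANGE
    if not (lo <= reading <= hi):
        return True                          # outside plausible range
    if len(history) >= 10:
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(reading - mu) > DRIFT_SIGMAS * sigma:
            return True                      # abrupt drift from recent behavior
    return False

history = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 50.1, 49.7, 50.0, 50.4]
for reading in (50.2, 73.5, 95.0):
    if is_suspect(reading, history):
        print(f"quarantine reading {reading}: possible poisoning")
    else:
        history.append(reading)
```

A check this simple won’t stop a patient adversary who poisons data gradually, but it raises the cost of the crude manipulations described above.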

Let’s face it—many organizations are still playing catch-up when it comes to securing their agentic AI deployments. According to research by consulting firm EY, just 31% of organizations say their AI implementation is fully mature. Even more worrying, AI governance is lagging behind innovation, leaving many companies exposed to risks they don’t fully understand[2].

The rapid adoption of agentic AI is also driving an exponential increase in API usage. Every agent connects to multiple systems, and every connection is a potential weak spot. As Erez Tadmor, Field CTO at Tufin, puts it: “We now have to worry about new AI attacks (which are really API attacks) and all of the existing API attacks that apply to this Agentic AI landscape”[3].
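
Tadmor’s point about classic API attacks is easy to demonstrate. The sketch below uses Python’s built-in sqlite3 module and a made-up alerts table to show why any agent that assembles SQL from text it received (a prompt, an API payload) should use parameterized queries:

```python
# The "old classics" still apply: if an agent builds SQL from text it
# received, string concatenation invites SQL injection. Table and field
# names here are made up for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alerts (id INTEGER, severity TEXT)")
conn.execute("INSERT INTO alerts VALUES (1, 'high')")

user_supplied = "high' OR '1'='1"  # attacker-controlled input

# Vulnerable: the input would be spliced directly into the query string.
# rows = conn.execute(f"SELECT * FROM alerts WHERE severity = '{user_supplied}'")

# Safer: parameterized query; the driver treats the input as data, not SQL.
rows = conn.execute(
    "SELECT * FROM alerts WHERE severity = ?", (user_supplied,)
).fetchall()
print(rows)  # [] -- the injection string matches nothing
```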

How Agentic AI Is Changing Cybersecurity—For Better and Worse

On the positive side, agentic AI is reshaping cybersecurity by acting more like a teammate than a tool. These systems can understand intent, interpret context, and take goal-driven actions, which is proving critical in high-pressure environments where speed and accuracy are essential[3][4]. In 2025, we’re seeing these agents embedded directly into security workflows, reducing response times and removing human bottlenecks.

But there’s a flip side. The same autonomy that makes agentic AI so powerful also makes it a prime target for bad actors. Prompt injection, for example, is a newer class of attack that has emerged as agents become more interactive. And let’s not forget the old classics, like SQL injection, which are just as dangerous in this new landscape[3].
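
One commonly discussed mitigation for prompt injection is to keep the model’s free-text output away from the controls: the agent may only trigger actions from a fixed allowlist, with validated arguments. Here is a minimal sketch, with hypothetical action names:

```python
# Prompt-injection mitigation sketch: never let model output trigger
# actions directly. Only allowlisted actions may execute, and arguments
# get validated first. Action names are illustrative assumptions.

ALLOWED_ACTIONS = {"open_ticket", "quarantine_host", "notify_analyst"}

def execute(action: str, arg: str) -> str:
    if action not in ALLOWED_ACTIONS:
        return f"refused: '{action}' is not an allowed action"
    # Argument validation would go here (e.g., arg must be a known host ID).
    return f"executed {action}({arg})"

# Suppose injected text manipulated the model into emitting this:
model_output = ("delete_all_logs", "server-7")
print(execute(*model_output))   # refused

model_output = ("quarantine_host", "server-7")
print(execute(*model_output))   # executed
```

The allowlist doesn’t stop the injection itself, but it bounds the blast radius: a manipulated agent can only misuse the actions it was explicitly granted.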

Real-World Applications and Impacts

Agentic AI is already being used in a wide range of applications, from IT operations to customer service and beyond. In IT, for example, agentic AI can automate code writing and system configuration, freeing up human teams to focus on more complex tasks[2]. In cybersecurity, agents can monitor networks, detect anomalies, and even respond to threats in real time[3][4].
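
As a toy illustration of that real-time anomaly detection, the sketch below compares the current request rate to a rolling baseline and alerts on large spikes; the window size and spike factor are assumptions, not tuned values:

```python
# Toy anomaly detector: alert when the current request rate dwarfs a
# rolling baseline. Window size and spike factor are illustrative.

from collections import deque

class RateMonitor:
    def __init__(self, window: int = 60, spike_factor: float = 3.0):
        self.samples: deque[int] = deque(maxlen=window)
        self.spike_factor = spike_factor

    def observe(self, requests_per_sec: int) -> bool:
        baseline = sum(self.samples) / len(self.samples) if self.samples else None
        self.samples.append(requests_per_sec)
        # Alert when the new sample far exceeds the rolling average.
        return baseline is not None and requests_per_sec > self.spike_factor * baseline

monitor = RateMonitor()
for rate in [100, 110, 95, 105, 420]:   # last value simulates a traffic spike
    if monitor.observe(rate):
        print(f"anomaly: {rate} req/s vs recent baseline")
```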

But these benefits come with risks. Organizations that deploy agentic AI without proper safeguards are essentially rolling out a new attack surface that hackers are eager to exploit. And as these systems become more interconnected, the potential for cascading failures increases.

Comparison: Agentic AI vs. Traditional AI

Here’s a quick comparison to highlight the key differences and risks:

| Feature | Agentic AI | Traditional AI |
|---|---|---|
| Autonomy | High: can make decisions and act alone | Low: requires human oversight |
| Speed | Very fast: adapts and learns quickly | Slower: follows scripts |
| Risk profile | High: many new attack surfaces | Lower: fewer attack surfaces |
| API usage | Extensive: many connections | Limited: fewer connections |
| Security governance | Often immature | More mature |

Expert Perspectives and Industry Response

The consensus among experts is clear: agentic AI is here to stay, and its risks are real. “Agentic AI systems are subject to all the same risks as other AI, but the problems can be magnified because of their autonomy and speed,” says one expert at Infosecurity Europe 2025[2]. Companies like Tufin and Swimlane are already working on new tools and best practices to help organizations secure their agentic AI deployments, but the industry still has a long way to go[3][4].

By the way, it’s not just about technology—it’s also about people and processes. Organizations need to invest in training, governance, and collaboration to keep up with the evolving threat landscape.

Future Implications and What’s Next

Looking ahead, the stakes are only going to get higher. As agentic AI becomes more widespread, the potential for large-scale disruptions grows. Imagine a scenario where a hacked agentic AI system shuts down a hospital’s critical systems or manipulates financial markets. These aren’t science fiction scenarios—they’re real possibilities that security teams need to prepare for[1][2][3].

At the same time, the industry is responding. New standards, frameworks, and tools are emerging to help organizations manage the risks of agentic AI. But progress is uneven, and many companies are still struggling to keep up.

Conclusion

Agentic AI is a double-edged sword. It offers unprecedented speed, flexibility, and intelligence, but it also introduces a host of new cybersecurity risks that are just now coming into focus. As organizations race to adopt these systems, they must also invest in robust security measures, governance, and training to stay ahead of the threats. The future of AI is agentic—let’s make sure it’s also secure.
