Agentic AI Can Increase Speed, Scale, and Effectiveness of Exploits
for Threat Actors

Pondurance
April 6, 2026

Over the past few years, organizations have been adopting generative artificial intelligence (AI) tools that focus on creating prompt-based content such as text, images, and code. Now, agentic AI is taking center stage with its autonomous abilities to accomplish more complex goals with little or no human oversight. Agentic AI is helping organizations work more productively and efficiently in their business pursuits. However, on the flip side, agentic AI is also helping threat actors work at increased speed, scale, and effectiveness in their malicious exploits. 


Today, a threat actor's average breakout time (the window between gaining initial access and beginning lateral movement toward an objective) has dropped to just 29 minutes, a 65% increase in speed over 2024, according to CrowdStrike's 2026 Global Threat Report. Threat actors still rely heavily on unpatched vulnerabilities, stolen credentials, and security misconfigurations to penetrate a target's environment, but agentic AI has made those techniques far faster, broader in scale, and more effective. 


In addition, nation-state and criminal threat actors have increased their AI-enabled attacks by 89% year over year, and less sophisticated threat actors are using agentic AI — with its lower barrier to entry in terms of cost and required knowledge — to launch more advanced attacks than they could before. All in all, agentic AI has set the stage for 2026 to be a busy year for cybersecurity.


Increased speed, scale, and effectiveness

Use of agentic AI by threat actors is not yet widespread. So far, Microsoft has not observed any large-scale use of it, but CrowdStrike reports that more than 90 organizations have had a threat actor hijack the organizations' own legitimate AI tools to execute malicious commands and steal data. In the incidents where threat actors have used agentic AI, the results have been striking in their speed, scale, and effectiveness.


The first documented case of a threat actor successfully using autonomous agentic AI for a cyberattack happened in mid-September 2025, when a Chinese state-sponsored threat actor group used Anthropic's Claude Code agentic AI tool to conduct a sophisticated espionage campaign. The group targeted roughly 30 organizations globally and succeeded in a small number of those attempts, with little human intervention. The agentic AI tool selected the targets, conducted the exploits and intrusions, exfiltrated the data, and gained persistence. In all, the threat actor used AI to perform 80%-90% of the attack. In its policy report, Anthropic claimed, "the AI made thousands of requests, often multiple per second—an attack speed that would have been, for human hackers, simply impossible to match."


As the first case shows, the speed of cyberattacks using agentic AI is machine-fast. Agentic AI helps threat actors move faster, make quicker decisions, and rapidly adapt to unexpected changes within an environment to achieve their malicious goals in a shortened time frame. 



Traditionally, threat actors have spent considerable time researching one or a few targets, searching for vulnerabilities, making decisions once inside the network, and exfiltrating data. Now, with agentic AI, the scale of an exploit can be much larger because a threat actor can run reconnaissance and make initial access attempts on dozens, thousands, or even millions of targets at once — and the speed with which they do it allows them to make multiple attempts, further increasing the volume of attacks.


Agentic AI performs tasks that would otherwise require an entire team of threat actors. Now, an effective exploit can be carried out by only a small number of threat actors — or a single one — in a reduced time frame. These can be complex tasks, such as reconnaissance, privilege escalation, or lateral movement, performed across a global attack surface. Agentic AI also allows threat actors to readily adjust their exploits as attack conditions change. Rather than committing to a set plan and failing when the plan fails, they can stay flexible and adaptive, switching tactics as they go, from emails to text messages to job board alerts. Overall, according to Unit 42's Global Incident Response Report 2026, the use of agentic AI can improve the rate of success at every stage of an attack.


Lower barrier to entry

Threat actors, particularly nation-state actors and established ransomware groups, have typically had the money and depth of knowledge required to perform sophisticated cyberattacks. But agentic AI has lowered the barrier to entry for conducting exploits. Now, new threat actors with less money and technical know-how are entering the cyber arena. 


  • Less money needed. For a relatively low cost, inexperienced threat actors can use agentic AI to carry out a cyberattack. The lower cost also allows them to experiment with agentic AI at every stage of an exploit. A small group of threat actors — or a solo actor — can profile targets, steal credentials, analyze stolen datasets, and even develop ransomware in a short span.

  • In-depth experience not required. Threat actors don't need the same level of experience that was once required for a cyberattack because agentic AI can do much of the work for them — at breakneck speed. Threat actors working with autonomous agentic AI can make mistakes during an attack, then quickly learn from those mistakes, make adjustments, and try the exploit again and again until they get it right. Even while threat actors sleep, agentic AI can keep working 24/7 to achieve the set goals for a successful exploit. 


Conclusion

Agentic AI is indeed helping organizations work more productively and efficiently, but it's also helping threat actors at all levels of sophistication work at increased speed, scale, and effectiveness. Increased use of agentic AI by threat actors will likely mean a busy year ahead for cybercriminals and cyber defenders alike. 


To combat threat actors' use of agentic AI, Pondurance has launched Kanati, the industry's first agentic security operations center designed for autonomous operations within a managed detection and response service. Kanati replaces alert-driven, error-prone workflows with a coordinated system of AI agents that operates continuously throughout the full threat life cycle. To learn more about Pondurance's Kanati, read our blog post.
