The Industry’s First Agentic SOC for Autonomous MDR is Here
The Industry’s First Agentic SOC for Autonomous MDR is Here

Agentic AI in the Hands of Threat Actors Has Created an Imbalance in Cybersecurity
Organizations are taking artificial intelligence (AI) to the next level by rapidly adopting agentic AI. As much as 73% of organizations are using or developing agentic AI in cybersecurity, up from 59% the prior year, according to Cyber Security Tribe's Annual State of the Industry Report 2026. Agentic AI's autonomous capability allows organizations to accomplish complex goals with little or no human oversight, increasing productivity and efficiency.
That sounds positive for businesses, and it is, but threat actors are also adopting agentic AI — and weaponizing the technology against organizations. Over 98% of cybersecurity leaders reported that AI, including both generative AI and agentic AI, has been used in cyberattacks against their organizations, according to Osterman Research Group. While threat actors can prompt generative AI to conduct activity such as write email content for phishing scams, create fake images, and generate code for malware, agentic AI is the looming threat.
Autonomous agentic AI can help threat actors exploit victims with greater speed, scale, and efficiency than ever seen before. Human cyber defenders are having a hard time keeping pace, creating an imbalance in the cybersecurity equation that, at present, seems to tilt in favor of the threat actors.
Advantages for threat actors
Threat actors know that cyberattacks are a numbers game, and if they try to infiltrate victims' networks enough times, the odds are with them that some human will finally make a mistake and let them in. With agentic AI, the odds get even better because threat actors can move at machine speed as they relentlessly exploit organizations on a large scale. Of course, speed and scale, and even innovation, increase the odds that the bad guys will succeed.
Threat actors are still using many traditional tactics, such as phishing campaigns, credential harvesting, and ransomware attacks, when targeting victims, but AI has accelerated the pace of the attacks. An attack that used to take days or weeks to complete now only takes hours or minutes. Unit 42's Global Incident Response Report 2026 found that threat actors need only 72 minutes to go from initial access to data exfiltration, which is four times faster than 2025.
In addition, threat actors using agentic AI are exploiting newly disclosed vulnerabilities at high speed. The Cybersecurity Infrastructure Security Agency requires federal agencies to patch critical vulnerabilities within 15 days of initial detection. However, Unit 42 reports that threat actors are scanning for vulnerabilities within 15 minutes of a Common Vulnerabilities and Exposures announcement to identify and exploit the flaw — before defenders even have time to patch.
But speed isn't the only thing driving these threat actors to use agentic AI. The scale of attacks is far greater. Threat actors are using agentic AI to perform reconnaissance and attempt initial access on hundreds, thousands, or millions of targets at once, in different industries and countries. The technology has the ability to learn from its mistakes, switch tactics as needed, and keep trying to exploit targets, which further increases the number of attacks. Because agentic AI doesn't sleep, threat actors can continue conducting attacks 24/7.
In addition, agentic AI and generative AI tools themselves have become vectors of compromise, increasing the attack surface for threat actors. Agentic AI also is allowing threat actors to innovate and adapt with new types of cyberattacks and traditional tactics with new twists. A few examples include:
Prompt injection attack. These cyberattacks allow threat actors to conceal malicious prompts in legitimate data, emails, and websites to trick generative AI into taking specific actions such as leaking personally identifiable information or forwarding private documents.
Polymorphic malware. Using agentic AI, threat actors can create malicious software, including viruses, worms, trojans, or ransomware, that can constantly change or morph its code to evade signature-based detection methods.
LLM jacking. In this attack, threat actors steal cloud credentials to large language model (LLM) services and sell them for malicious use.
Disadvantages for defenders
Threat actors using agentic AI are indeed moving at high speed, and most defenders are struggling to keep up, making cybersecurity a reactive rather than proactive endeavor. After all, traditional tools and procedures, such as perimeter defense, reactive monitoring, and approval processes, were developed for attacks that took time to unfold. But now, cyberattacks happen in just hours or minutes, and often, traditional methods are no longer enough to defend against them. When it takes just one human error to allow a threat actor into the environment, the odds are stacked against the defenders.
There are a few particular areas of cybersecurity, including manual workflows, alert fatigue, and fragmented visibility, where defenders are lagging behind threat actors who use agentic AI. Defenders need to take a close look at these slow-moving areas of defense and contemplate how changes could improve their capabilities.
Manual workflows. Cybersecurity teams often work in silos, each focusing on its own specific role within an environment. But if teams don't work together in a big picture way, it can slow down response time. The security operations center (SOC), vulnerability management, incident response, and all other teams must work in lockstep to streamline the process.Most organizations today have a playbook and/or incident response plan to follow when a cyberattack occurs, but such plans were written for the days when attacks moved slower. Organizations are still working from these outdated plans, which may include obsolete training, policies, and procedures that don't take into account the speed and scale of attacks in the age of agentic AI.
Defenders also are slowed down because they must incorporate organizational guardrails and governance frameworks in their workflows. They have to comply with federal and state regulatory requirements or face harsh penalties and legal action, whereas threat actors break the rules and don't look back.
Alert fatigue. Most SOC teams have too many alerts, and a large percentage of those alerts are false positives, leading defenders to experience alert fatigue. The average organization generates 4,330 security alerts per day, and SOC analysts only investigate 37% of them, according to the Ponemon Institute's 2026 State of SecOps Report. That means, on average, 63% of the alerts generated each day are not being investigated. There are not enough hours in the day for a team to attend to every alert or even prioritize such a large number of alerts. The situation becomes increasingly difficult as teams experience ongoing challenges in hiring.
Fragmented visibility. Once, threat actors entered networks from traditional perimeter entry points and moved in predictable patterns, dwelling in the network for an extended time frame. Today, threat actors most often gain initial access to networks by logging in with stolen credentials, then moving laterally across the network at machine speed. As evidence of the increase in this tactic, credential theft increased by 50% in the second half of 2025 compared to the first half, according to Recorded Future's 2025 Identity Threat Landscape Report. Unfortunately, traditional monitoring tools have a difficult time recognizing lateral movement, and threat actors often go undetected as they move across networks, endpoints, and the cloud.
Conclusion
Agentic AI is helping organizations accomplish complex goals with little human intervention, but it's also helping threat actors improve their speed, scale, and efficiency during cyberattacks. As threat actors move at machine speed, defenders are having a hard time keeping up, creating an imbalance in the cybersecurity fight.
To fight agentic AI with agentic AI, Pondurance has launched Kanati, the industry’s first agentic AI SOC designed for autonomous operations within a managed detection and response service. Kanati replaces alert-driven, error-prone workflows with a coordinated system of AI agents that operate continuously throughout the full threat life cycle. To learn more about Pondurance's Kanati, visit pondurance.com or email kanati@pondurance.com to learn more or request a demo.


.png)


.jpg)