
The Industry’s First Agentic SOC for Autonomous MDR is Here


How Agentic AI Detects Sleeper Attacks to Reduce Breach Risk

Doug Howard
April 13, 2026

Threat actors use a diverse set of tools and methods to launch successful campaigns, such as ransomware or distributed denial-of-service (DDoS) attacks. One tactic is stealth: using low-noise signals over time to slip past an organization’s security defenses and gain access to critical systems and data.


The Mandiant M-Trends 2026 Report notes that threat actors are using low-impact techniques—such as malvertising or fake browser updates—to gain a foothold. Because these initial signals appear to be low-impact malware, or in some cases even normal user operations, organizations focused only on high-impact methods often miss them until it’s too late.


This Q&A with Doug Howard, CEO at Pondurance, is the second in our series on agentic AI in security operations. He explains how agentic AI detects “sleeper attacks,” helping midsized organizations contain breach risks that traditional security tools may overlook.


Q: What are “sleeper” attacks, and how can they potentially evade an organization’s security?

Doug: Sleeper threats—often called “low-and-slow” attacks—are intrusions where threat actors deliberately move quietly and gradually to avoid detection. Unlike “smash-and-grab” attacks that generate obvious signals, these attackers minimize noise by spacing out activity and often using legitimate system tools (“living off the land”) to blend in. Sleeper attacks are frequently confused with advanced persistent threats (APTs), so a useful distinction is this: a sleeper attack is a tactic, while an APT is the adversary running the playbook.


Rather than triggering a single clear alert, their activity appears as small, isolated anomalies—nothing that immediately signals an active attack. This allows them to remain undetected for extended periods, with dwell times historically reaching months.


Moving slowly also gives attackers strategic advantages. They can observe systems, gather intelligence, and understand backup processes. In some cases, they wait long enough to compromise backups—overwriting them with corrupted or encrypted data—so recovery becomes difficult or impossible.


In essence, sleeper attacks prioritize persistence and stealth, enabling attackers to quietly expand their foothold and fully compromise an organization over time.


Q: How do dwell time and persistence relate to sleeper attacks? 

Doug: Dwell time is how long an attacker remains in an environment—from initial access to discovery or completion. In low-and-slow scenarios, this can last weeks or months, giving threat actors time to quietly expand access, observe systems, and prepare for larger objectives.


Persistence enables that extended presence. It refers to how attackers maintain access even if systems reboot or connections drop. Common methods include remote access tools that automatically reconnect, as well as “beacons”—small pieces of code that periodically check in with attacker-controlled infrastructure to receive and execute instructions.


These beacons often operate in parallel across multiple machines, creating a distributed foothold. Each one generates minimal, routine-looking activity, but together they enable coordinated actions like lateral movement, where attackers spread gradually from system to system. Over time, attackers can map the environment, identify critical assets, and even interfere with backup processes.
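A beacon’s defining trait in telemetry is timing regularity. As a minimal sketch of that idea (the function name, scoring heuristic, and timestamps below are all illustrative assumptions, not any vendor’s implementation), a coefficient-of-variation check over the gaps between outbound connections can separate clockwork check-ins from bursty human activity:

```python
from statistics import mean, pstdev

def beacon_score(timestamps):
    """Score how beacon-like a series of outbound connection times
    is: regular intervals (low jitter) score near 1.0."""
    if len(timestamps) < 3:
        return 0.0
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg == 0:
        return 0.0
    # Coefficient of variation: near 0 for clockwork check-ins.
    cv = pstdev(gaps) / avg
    return max(0.0, 1.0 - cv)

# A beacon checking in roughly every hour, with small jitter
regular = [0, 3602, 7195, 10810, 14395]
# A human browsing pattern: bursty, irregular gaps
bursty = [0, 12, 30, 4000, 4005, 9999]
```

In this toy data, `beacon_score(regular)` comes out near 1.0 while `beacon_score(bursty)` drops to 0. Real attackers add jitter precisely to defeat checks like this one host at a time, which is why correlating across many machines and long time spans matters.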


Backup interference is especially risky for midsized organizations, which often rely on basic backups that are not versioned or immutable, largely due to infrastructure and cost constraints. 


Without clean, restorable data, organizations are vulnerable to attackers who can use long dwell times to overwrite backups and cloud storage with compromised or encrypted versions of data. By the time the attack becomes visible, recovery options may be severely limited—making persistence a strategy for maximizing impact, not just staying hidden.


Q: Why do traditional tools often fail to detect sleeper threats? 

Doug: These tools are not designed to see the bigger picture over time. Sleeper attacks generate small, low-noise signals spread across weeks or months—far below the threshold most tools flag as suspicious.


A key limitation is that many tools operate at the micro level. EDR and XDR solutions, for example, focus on individual devices. A single machine with a beacon calling out to a website may appear completely benign. Only when viewed across dozens or hundreds of devices does a pattern emerge. However, most traditional tools don’t correlate activity at that scale.
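The fleet-level view can be sketched with a toy correlation: pool (host, destination) events from many endpoints and flag uncommon destinations that a surprising number of hosts share. (The function, threshold, and domain names here are hypothetical illustrations, not a real product API.)

```python
from collections import defaultdict

def fleet_anomalies(events, common_dests, min_hosts=8):
    """events: (host, destination) pairs pooled from many endpoints.
    common_dests: destinations known to be broadly popular (CDNs,
    SaaS). Flag uncommon destinations shared by many hosts: each
    sighting looks benign alone; the fleet-wide overlap is the signal."""
    hosts = defaultdict(set)
    for host, dest in events:
        if dest not in common_dests:
            hosts[dest].add(host)
    return {d: len(h) for d, h in hosts.items() if len(h) >= min_hosts}

common = {"cdn.example.net", "mail.example.com"}
# 40 workstations hitting a CDN (normal), 12 quietly sharing an
# unpopular destination (the distributed-beacon pattern)
events = [(f"ws-{i:02d}", "cdn.example.net") for i in range(40)]
events += [(f"ws-{i:02d}", "rare-c2.example.org") for i in range(12)]
events += [("ws-01", "mail.example.com")]
flagged = fleet_anomalies(events, common)
```

Here `flagged` contains only `rare-c2.example.org` with its 12 hosts: from any single endpoint’s vantage point the traffic is unremarkable, and the pattern only appears at fleet scale.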


They also rely on limited time windows. Firewalls or endpoint tools may analyze only recent activity—such as the last 30 days—which isn’t enough to detect behaviors unfolding over months. The sample size is simply too small to identify meaningful patterns.
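To see why sample size matters, consider a synthetic series (all numbers illustrative) with a drift of +0.05 events per day hidden in noise of plus or minus 3: a 30-day slice contains only about 1.5 extra events, well inside the noise band, while a 180-day view lets a least-squares fit recover the trend.

```python
import random
from statistics import mean

def trend_slope(series):
    """Least-squares slope of a count series (change per day)."""
    n = len(series)
    mx = (n - 1) / 2
    my = mean(series)
    cov = sum((x - mx) * (y - my) for x, y in enumerate(series))
    var = sum((x - mx) ** 2 for x in range(n))
    return cov / var

random.seed(7)
# 180 days of ~20 events/day with a slow +0.05/day drift buried in
# noise of +/-3; the drift adds only ~1.5 events per 30-day window.
series = [20 + 0.05 * d + random.uniform(-3, 3) for d in range(180)]

slope_30 = trend_slope(series[-30:])  # unreliable: noise dominates
slope_180 = trend_slope(series)       # recovers roughly +0.05/day
```

The short-window estimate swings with the noise, while the long-window estimate lands close to the true drift: the same limitation a tool faces when it only retains 30 days of activity.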


Finally, traditional tools are largely reactive. They analyze what has already happened but lack the predictive capability to connect early signals into a confirmed threat—allowing sleeper attacks to progress undetected.


Q: How is agentic AI able to detect sleeper threats? 

Doug: Agentic AI can do what humans—and even many cybersecurity controls—struggle to do at scale: continuously analyze massive volumes of historical and real-time data and connect small, seemingly unrelated signals into meaningful patterns.


Like recommendation engines, agentic AI analyzes telemetry data—user activity, system behavior, network traffic—over time to establish what is “normal.” From there, it identifies subtle deviations that persist, even if those deviations are small and spread out across months.


For example, a slight but consistent increase in PowerShell usage or web traffic may not trigger an alert on its own. But when viewed in the context of long-term patterns, it becomes a meaningful anomaly.
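That baseline-and-deviation idea can be sketched in a few lines, assuming daily PowerShell invocation counts as the telemetry (the function, thresholds, and counts are hypothetical, chosen only to illustrate the concept):

```python
from statistics import mean, pstdev

def sustained_drift(daily_counts, baseline_days=30, z_thresh=2.0, run=7):
    """Flag a slight-but-consistent rise: score each new day against
    a baseline learned from a known-clean period, and report only
    when the deviation persists. One spike is noise; a streak of
    `run` elevated days is a pattern."""
    base = daily_counts[:baseline_days]
    mu, sigma = mean(base), pstdev(base) or 1.0
    streak = 0
    for day, count in enumerate(daily_counts[baseline_days:], baseline_days):
        streak = streak + 1 if (count - mu) / sigma > z_thresh else 0
        if streak >= run:
            return day  # index of the day the drift is confirmed
    return None

# 60 quiet days of ~20 invocations, then a subtle step up to ~27:
# no single day is dramatic, but the streak is.
counts = [20, 21, 19, 20, 22, 20] * 10 + [26, 27, 26, 28, 26, 27, 26, 27]
```

On this data, `sustained_drift(counts)` fires on the seventh consecutive elevated day (index 66), while the noisy-but-normal days before the step never trip the threshold.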

Agentic AI then applies predictive analysis to assess whether these anomalies represent real threat activity. Trained on large-scale historical data, agentic AI can compare patterns against known attack behaviors and identify alignment with low-and-slow tactics.


Unlike human analysts, agentic AI processes this data continuously and in parallel—across multiple environments—without fatigue. This enables earlier pattern detection and faster identification of sleeper threats before they escalate into full-scale attacks.


Detect and Defeat Sleeper Attacks with Kanati™

Kanati from Pondurance is the first agentic AI security operations center (SOC) designed for autonomous operations in a next-generation managed detection and response (MDR) service. Kanati has been proven to deliver:

  • 100% alert coverage

  • 90% faster threat analysis

  • 80% fewer false positives

Visit pondurance.com or email kanati@pondurance.com to learn more or request a demo.
