AI Agents Are Amplifying Hard-to-Detect Attacks

Last year, Coalition warned that AI was helping threat actors enhance their campaigns. While attribution is challenging, we’ve seen firsthand that phishing emails are getting smarter and attacks now unfold faster than humans can respond.
In September 2025, Anthropic published a report on the first documented large-scale cyberattack executed primarily by artificial intelligence (AI). In its investigation, Anthropic found that a nation-state-linked group leveraged Claude Code, an AI-powered command-line tool, to target roughly 30 organizations across various sectors.
Anthropic’s findings show that AI is no longer just assisting threat actors; it is driving attacks. An estimated 80-90% of tactical operations during the campaign were executed entirely autonomously.
The new challenge for defenders? With minimal human intervention, attackers can now amplify already hard-to-detect “living off the land” (LOTL) techniques — a strategy that abuses legitimate tools within an operating system — by using AI agents to avoid detection and execute faster.
A perfect storm: LOTL x AI
In 2024, 79% of detections observed by CrowdStrike were malware-free (compared to 40% in 2019). Threat actors increasingly favor native tools, such as PowerShell and remote desktop protocol (RDP), over malware to execute attacks.
Traditional cybersecurity solutions more readily flag the introduction of external malicious files than judge whether a pre-installed tool is being used suspiciously. For this reason, attackers use the same administrative and remote management tools as security teams to camouflage their malicious actions.
LOTL attacks have always been difficult to detect, but now they’re being radically amplified by AI agents that can dynamically generate PowerShell commands, orchestrate reconnaissance, pivot laterally, escalate privileges, and exfiltrate data, all while blending into seemingly normal system activity.
For example, Anthropic found that attackers integrated Claude Code with the open-standard Model Context Protocol (MCP), which connects AI systems with the data sources they need to complete tasks. But without the right protections in place, MCP enables AI agents to execute code and interact with resources as if they were a legitimate developer or administrator.
The nation-state-linked group observed by Anthropic was able to present malicious tasks to Claude AI disguised as routine technical requests, and each step, from vulnerability scanning to lateral movement, appeared legitimate when evaluated in isolation.
Researchers discovered that IT ticketing services could be similarly abused, dubbing it a “Living off AI” attack. They reported that attackers could submit a malicious support ticket with a prompt injection and ultimately exfiltrate data with an unchecked connection to MCP.
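As a defensive illustration (not drawn from the report), the check below sketches how inbound ticket text might be screened for common prompt-injection markers before it ever reaches an AI agent. The pattern list and function name are hypothetical; a real deployment would pair a deny-list like this with model-based classifiers.

```python
import re

# Hypothetical deny-list of phrases commonly seen in prompt-injection attempts.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard your system prompt",
    r"you are now (an|a) ",
    r"exfiltrate",
]

def looks_like_injection(ticket_text: str) -> bool:
    """Return True if the ticket body matches a known injection pattern."""
    lowered = ticket_text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

# Tickets that trip the screen get routed to a human instead of the agent.
```

Simple pattern matching will never catch every injection, but it illustrates the principle: untrusted text should be treated as potentially adversarial input, not as trusted instructions.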
The challenge for defenders
An exposed blind spot
State-sponsored cyber group, Volt Typhoon, compromised the IT environments of multiple critical infrastructure organizations in the continental and non-continental United States. Through the exclusive use of LOTL techniques, Volt Typhoon maintained undetected access to several victims for at least five years. By using built-in tools, attackers were able to blend in and avoid endpoint detection and response (EDR) products.
The problem: Traditional defensive architecture is designed to catch suspicious code or unusual activity, not determine the intent behind a user’s actions.
For example, thousands of PowerShell commands can occur daily for legitimate purposes. But if PowerShell were used to activate BitLocker across all devices (a Windows service that protects data by encrypting drives), the intent behind the action is probably malicious. EDR would likely flag the activity, but not immediately block it. An analyst would need to review the alert, among hundreds of others, and determine for themselves whether the behavior is malicious.
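A minimal sketch of this kind of intent-aware rule might look like the following: flag any command line that references drive-encryption tooling, rather than flagging PowerShell itself. The token list is a hypothetical example, not a production detection rule.

```python
# Hypothetical rule: PowerShell is benign by default, but invocations that
# touch drive-encryption tooling warrant review.
SUSPICIOUS_TOKENS = {
    "manage-bde",        # BitLocker command-line management tool
    "enable-bitlocker",  # PowerShell cmdlet for enabling BitLocker
}

def flag_command(command_line: str) -> bool:
    """Return True if the command line references encryption tooling."""
    lowered = command_line.lower()
    return any(token in lowered for token in SUSPICIOUS_TOKENS)
```

The point is not the specific tokens but the shape of the rule: the tool is legitimate, so detection has to key on what the tool is being asked to do.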
A race to detect
In nearly one in five cases, attackers, increasingly aided by automation and toolkits, exfiltrated data within the first hour of compromise. In the AI-driven campaign investigated by Anthropic, AI made thousands of requests (often multiple per second), a pace that would have been impossible for humans to match.
The problem: We’ve seen threat actors move laterally in 47 seconds. Given the speed of automated threats, defenders need the ability to make quick, decisive judgments about intent and act immediately. But now that attacks can unfold too quickly for humans to follow, how can anyone keep up?
Stay ahead of AI-driven attacks
As always, the risk landscape is rapidly transforming before our eyes.
Threat actors will continue to try to exploit blind spots through LOTL techniques, but the use of AI agents will only make it harder for defenders to catch attackers in time. However, businesses can outpace attackers by adopting security best practices and deploying automated tools.
Apply least-privilege access
The principle of least privilege minimizes the risk of escalation and unauthorized access. By granting employees only the permissions required to perform their specific job functions, businesses can significantly reduce their attack surface.
This same approach can also be applied to MCP servers and AI agents. Rather than providing broad access to AI agents, businesses should scope what AI agents need to do (and access) by implementing permission controls.
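In practice, scoping an agent's access comes down to deny-by-default authorization. The sketch below shows one way to express that for a hypothetical agent; the agent name, resources, and actions are invented for illustration, and real MCP deployments would enforce this at the server or gateway layer.

```python
# Hypothetical allow-list: each agent is granted explicit (resource, action)
# pairs and nothing else. Here a support agent may only read two resources.
AGENT_PERMISSIONS = {
    "support-bot": {("tickets", "read"), ("knowledge-base", "read")},
}

def is_allowed(agent: str, resource: str, action: str) -> bool:
    """Deny by default; permit only explicitly granted pairs."""
    return (resource, action) in AGENT_PERMISSIONS.get(agent, set())
```

The key design choice is the default: an unknown agent, resource, or action falls through to a denial, so forgetting to grant a permission fails safe rather than open.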
Log management
Logging records activity within an application or computer system. Everything from operating systems to endpoint devices documents events (like logins) as text records.
These logs can provide helpful context to determine whether activity is an administrator completing a predictable task or something potentially malicious. Knowing intent matters, especially in LOTL attacks, and logs provide a comprehensive view of user behavior to help pinpoint actions that stand out as unusual or malicious.
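One simple way logs support this kind of judgment is baselining: build up what "normal" looks like for each user, then surface deviations for review. The sketch below does this for login events; the event shape is a simplified assumption, not any particular log format.

```python
from collections import defaultdict

def build_baseline(events):
    """Build a per-user baseline of hosts seen in historical login logs.

    events: iterable of (user, host) tuples extracted from past log records.
    """
    baseline = defaultdict(set)
    for user, host in events:
        baseline[user].add(host)
    return baseline

def is_unusual(baseline, user, host):
    """A login from a host never seen for this user stands out for review."""
    return host not in baseline.get(user, set())
```

A real system would baseline many more dimensions (time of day, process lineage, command arguments), but the principle is the same: logs turn "is this tool being used suspiciously?" into a question with historical context behind it.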
Automated threat detection & response
Most security teams lack the bandwidth to sort through thousands of alerts every day. This can result in overlooked activity that turns into a full-blown attack. Alternatively, an overwhelmed security analyst can misclassify a potential LOTL technique as genuine user behavior.
Defenders need superior software and algorithms to win. Wirespeed™ by Coalition has a median time to verdict (MTTV) of 1801 milliseconds (under 2 seconds!) and remembers user behavior to detect anomalous intent — fast. Automated threat detection and response can cut through the noise of alerts and stop threats in seconds.






