Defending Against Superhuman Intelligence with Alex Stamos

Modern cyber risk is starting to feel like a science fiction novel come to life. Attackers are using deepfakes to flawlessly impersonate real people, AI agents are executing entire attack cycles, and AI is discovering elusive software bugs that the smartest minds in cybersecurity overlooked for years.
So, if that’s the present, what will we be up against in five years?
“Anyone who says they know exactly what the future of security looks like is trying to sell you something,” Alex Stamos, Chief Security Officer at Corridor, said to a full room at Activate 2026, our annual broker conference. “But we do know that AI is completely revolutionizing the practice of cybersecurity, both defense and offense.”
Drawing on past experience as Chief Security Officer at Facebook and Chief Trust Officer at SentinelOne, Stamos provided vital insights into emerging threats, AI’s impact on the modern workplace, and strategies for mitigating long-term risk — including a few predictions for what comes next.
AI is already revolutionizing attacks
Stamos said that before worrying about the future, we need to truly understand what AI is capable of now. Current AI models already provide attackers with “superhuman” capabilities, challenging long-standing beliefs about cybersecurity and defense.
Vulnerability discovery
In November 2025, Anthropic released Claude Opus 4.5, a highly capable AI model designed for complex coding and agentic workflows. It ushered in a new era of cybersecurity: AI can now far exceed human ability to catch vulnerabilities.
Since then, the internet has exploded with people discovering bugs across the web using AI. Notably, a researcher at Anthropic, Nicholas Carlini, reported a vulnerability discovered by Claude.
“This is a bug in the Linux kernel that is older than a bunch of my coworkers,” said Stamos. “It’s from 2003, it’s very hard to find, and a human being never found it, despite the fact that some of the best bug hunters across the world have looked.”
Opus not only found the bug, but was also able to provide an exploit.
Hacking humans
With the global rise of remote work, AI deepfakes have evolved alongside it. For example, the North Korean government has directed operatives to seek jobs at Western companies for financial gain, according to Stamos.
Under the guise of a convincing stolen identity, scammers use deepfake technology to apply for employment opportunities, trick interviewers, get hired, and profit, usually by stealing Bitcoin. Tech companies, especially in the cryptocurrency industry, are popular victims. To make matters worse, deepfakes are getting even harder to detect.
Stamos played two videos side by side, showing the evolution of face replacement technology. A year ago, there were tell-tale signs of fraudulent behavior, such as lagging mouth movement or a clear glitch when the interviewer asked the scammer to look down. Now, the latest face and voice replacement technology shows no visible red flags.
Autonomous kill-chain
Attackers aren’t just using AI to find vulnerabilities. The entire “kill-chain” — exploitation, lateral movement, and data exfiltration — can be automated with AI models.
Free, open-weight models, like DeepSeek or Qwen, provide attackers with endless opportunities to execute attacks with fewer guardrails than models from OpenAI or Anthropic. Threat actors can download “attack toolkits” built on open-weight models that have been modified to strip out safety controls.
“You can take out all the safety parameters with a technique called abliteration, which is like brain surgery on an open-weight model,” said Stamos. “You pick out ‘parts of its brain’ repeatedly until it does exactly what you want, ‘like sure boss, I’ll go hack anything.’”
The corporate impact
Attackers aren’t the only ones evolving. Cybersecurity teams are currently undergoing a massive structural overhaul, re-engineering their traditional workflows to incorporate AI.
Tier 1 SOC analysts are increasingly being replaced by automation; security engineers are leveraging tools like Claude Code to ship tools in record time; and threat intelligence teams are using LLMs to distill massive datasets into actionable insights. Even product security design teams have adopted an "attacker mindset," using AI to hunt for vulnerabilities before they can be exploited.
But Stamos notes a sweeping trend in this move towards efficiency: “We’re automating all the entry-level jobs that bring people into security in the first place.”
Beyond the pipeline problem, AI is a double-edged sword for security teams. While it streamlines internal workflows, it simultaneously expands the attack surface. As businesses rapidly introduce new AI tools for productivity, they also introduce brand-new risks. The playing field is still tilted in favor of threat actors.
“There is an asymmetric benefit to attackers from AI because they are now so good at finding bugs,” said Stamos. “That’s just the reality of the chaos unleashed with AI in the security field.”

“There is an asymmetric benefit to attackers from AI because they are now so good at finding bugs." — Alex Stamos
Defense in the age of AI
Despite not knowing exactly what the future holds, Stamos has practical advice for both large enterprises (with in-house security teams) and small businesses (without them) to prepare for the AI-enhanced threats of today and tomorrow.
Large businesses
Defense needs to shift right: Most businesses haven’t yet experienced a zero-day — a vulnerability without an available patch. But as AI enables attackers to discover long-overlooked vulnerabilities and exploit them quickly, businesses across all industries are at risk.
“You have to assume that your system can be breached and that you can survive it. Focus on resilience, detection, and incident response,” said Stamos.
Stop making new bugs: With superhuman intelligence now hunting for bugs, human beings probably can’t write “safe code” any longer, warns Stamos. He suggests that businesses embrace AI to refactor new code and fix old bugs.
Truly dependable systems require old-school isolation: Businesses that cannot risk downtime, such as critical infrastructure providers and banks, should turn to “old-school” solutions for maximum security against AI-enhanced threats. This means air-gapped backups, where backups are either stored in a separate physical location or isolated in the cloud.
“You have to assume that your system can be breached and that you can survive it. Focus on resilience, detection, and incident response." — Alex Stamos
Small businesses
Throw away unnecessary data: “You can’t lose what you don’t have,” said Stamos. Many small businesses aren’t fully aware of all the data they hold, which puts them at increased risk during a ransomware attack and gives attackers leverage.
Embrace simplicity: Successful companies can be run on Chromebooks and the cloud. Most employees do the majority of their work in the web browser anyway. If businesses can embrace simplicity, they can become almost unhackable, suggests Stamos.
Find a trusted security vendor: If a small business cannot do all of its work in the cloud, it should find an MSSP or other security partner it can trust.
“You’ll need a helping hand that can do all of this work for you,” said Stamos.
What’s next?
Our future depends on how AI’s superhuman capabilities evolve. In the optimistic view, there’s a finite pool of vulnerabilities that humans missed but AI can find. That pool is now being drained and will eventually run dry.
“As new models come out, the next few years will be a steep hill but it will eventually steady out,” said Stamos. “It will be terrible for a while, but we will go back to how it was a few years ago, just bad.”
The darker possibility? As these superhuman capabilities grow, they will invent new classes of vulnerabilities that we don’t even know exist today, exploiting the lack of formal rigor in software engineering and human-written code. Each new model would uncover new attack surfaces and novel vulnerability classes, creating an endless uphill battle.
While we don’t know which outcome is more likely, Stamos made some predictions for what could come in the near-future:
Machine-to-machine conflict
Cybersecurity will involve humans supervising machine-to-machine conflict between defensive and offensive agents working too fast for humans to intervene. Responding to an attack in ten minutes will not be fast enough, so humans will need to trust AI to decide and react for them.
Global AI race
Many believe open Chinese models are less than a year behind frontier models from labs like Anthropic and OpenAI. Stamos warned that if that’s true, we don’t have much time. We have to find and patch the flaws current cutting-edge models can discover before attackers can exploit them at scale.
The past becomes the ‘good ol’ days’
We have no historical precedent for thousands of threat actor groups having this kind of capability to execute attacks. But all hope is not lost.
“Things can still change,” said Stamos, “if the optimistic view is true, and we get together as defenders to start fixing things, and fast.”
Time to focus on the now
We don’t know where AI will be a year from now, let alone five. Experts like Alex Stamos are already laying the groundwork for a safer, more resilient internet, but businesses need to “future-proof” against evolving threats now.
Enter Automated Detection and Response (ADR), the evolutionary next step in endpoint security. By automating traditional managed detection and response solutions, we can provide 24/7 protection without the human bottleneck and close the speed gap between AI-accelerated attackers and defenders.