Security News


Maybe AI won't kill us after all: a more balanced take on AI and cybersecurity

The doom headlines write themselves.
AI creates infinite zero-days. AI will automate hacking at scale. AI will make every developer's code a security liability. Your organisation is exposed. You should be scared.
And sure, some of that is true. We've covered it here before.
But lately a quieter conversation has been happening: a growing number of credible voices are suggesting that AI might actually move the needle in the right direction for cybersecurity over the medium to long term. Not instead of the risks, but alongside them. And that's worth talking about.

The Infinite Bug Problem, and the Math
Let's start with Firefox.
Last week, Mozilla released Firefox 150, which patched 271 vulnerabilities, many of them identified with the help of Anthropic's Claude Mythos. That's a remarkable number. Industry coverage predictably focused on the headline figure.
But here's the thing worth sitting with: how many vulnerabilities existed in Firefox before Mythos ran its analysis? The answer, of course, is that we never knew. Software of that scale and complexity has always had a long tail of undiscovered bugs.
So yes, 271 were fixed. But the real story isn't 271. It's "infinity minus 271". There are still unknown vulnerabilities in every major piece of software you run today. And the uncomfortable truth is that this was always the case; we just had no systematic way to surface them at scale.
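If you want the tongue-in-cheek back-of-envelope version: treat the pool of undiscovered bugs as effectively unbounded, and subtracting a finite patch count doesn't change the total. Python's float arithmetic makes the same point (the 271 is the patch count from the coverage above; the infinity is, of course, an assumption):

    >>> undiscovered = float("inf")   # the long tail of unknown bugs, treated as unbounded
    >>> undiscovered - 271            # subtract the patched ones
    inf

The real win isn't the subtraction; it's that the discovery rate finally went up.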
Mythos didn't create that problem. It started, however slowly, to fix it. That's different.

And as Firefox CTO Bobby Holley noted, AI tools can now cover the full space of vulnerability-inducing bugs, including categories that were previously reachable only through manual human analysis. That's not a threat story. That's a capabilities story for defenders.

The Rise of Ephemeral Software
There's another angle that doesn't get enough attention: the nature of software itself may be changing.
As AI-assisted development matures, we're beginning to see the early signs of what you could call ephemeral software: code that is uniquely generated for a specific customer, a specific purpose, at a specific moment. Hyper-customised, disposable, not widely shared.
This matters for security because a significant part of the attacker playbook depends on scale. Find a vulnerability in a widely deployed library, and you potentially own thousands of systems. But if the software stack running your application is unique to you, generated fresh, tailored entirely to your context, the economics of exploitation shift. A bug that works everywhere becomes increasingly unlikely to exist.
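To make those economics concrete, here's a minimal sketch. The model and every number in it are illustrative assumptions, not data: an attacker's payoff scales with the number of systems a single exploit reaches, while the cost of finding the bug is paid once.

    # Illustrative toy model of exploit economics; all figures are assumptions.
    def expected_payoff(deployments, value_per_compromise, discovery_cost):
        return deployments * value_per_compromise - discovery_cost

    # A bug in a widely shared library versus the same effort against bespoke code.
    shared = expected_payoff(deployments=100_000, value_per_compromise=50, discovery_cost=250_000)
    bespoke = expected_payoff(deployments=1, value_per_compromise=50, discovery_cost=250_000)
    print(shared, bespoke)  # 4750000 -249950

Same bug-hunting effort, wildly different return. That's the shift ephemeral software introduces.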
This doesn't eliminate risk. Unique code can still be badly written code. But it challenges the assumption that attackers will always have a target-rich environment of standardised, widely-shared vulnerabilities to exploit.

The Other Side of the Coin
None of this means we should relax.
AI-generated code ("vibe code", as it's become known) carries its own risks. Developers are increasingly shipping code they don't fully understand, generated by models that don't always produce secure outputs by default. The speed of development is outpacing the discipline of review.
That's a real problem. A developer who can't reason about the code they're deploying is a developer who can't spot when something is fundamentally wrong with it. And insecure-by-default AI code, accepted without scrutiny, is a path to a new category of systemic risk, one that's harder to audit because no single human ever fully understood the system to begin with.
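For a sense of what "insecure by default" can look like, here's a generic illustration (my own, not taken from any specific model's output): the first function is the kind of pattern code assistants still produce when not prompted otherwise; the second is what a reviewer should insist on.

    # Illustrative only: a classic insecure-by-default pattern, not from any specific model.
    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # String interpolation invites SQL injection:
        # username = "x' OR '1'='1" returns every row in the table.
        return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Parameterized queries keep data separate from query structure.
        return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()

A developer who can't explain the difference is exactly the developer the previous paragraph is worried about.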
We will see breaches that trace directly back to unchecked AI-generated code. That's not speculation.

A Voice Worth Listening To
At the SANS AI Cybersecurity Summit last week in Arlington, one of the more grounded talks came from Jacob Klein, Head of Threat Intelligence at Anthropic. His team's job is exactly what it sounds like: monitoring how people are actually using and abusing Claude in the real world, tracking misuse cases, identifying patterns, and translating those findings into safeguards.
Klein shared that medium capability uplift in AI-assisted cyberattacks (they call it AI-Enablement) moved from 12% last summer to 48% by February 2026, a significant jump. But he treated that not as a reason to panic, but as a reason to get very focused on where defenders can still change the outcome.
That framing matters. It's the kind of disciplined optimism that's often missing from the public conversation.
Klein presented a brief history of how Claude has been adopted by malicious actors, covered in detail by SiliconAngle: from a lone actor building unsophisticated ransomware in spring 2025, to a Russian extortion operation two months later, to a Chinese state-sponsored group using Claude for system reconnaissance, penetration testing at scale, and lateral movement by September 2025. Sobering stuff.
Klein concluded with three takeaways:

  • 1) Assume your adversary has an AI in the loop.
  • 2) Detection has to move towards behaviour chains, not single TTPs (a sketch of what that might look like follows this list).
  • 3) Labs (security teams) have a visibility advantage, and they intend to keep sharing.
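On the second point, here's one way "behaviour chains" could look in practice. This is my illustration, not Klein's method, and the event fields and technique labels are assumptions: instead of alerting on any single technique, the detector waits for an ordered sequence from the same host inside a time window.

    # Illustrative sketch: flag an ordered chain of techniques from one host
    # within a time window, rather than alerting on any single TTP.
    from datetime import timedelta

    CHAIN = ["discovery", "credential_access", "lateral_movement"]  # assumed labels

    def chain_detected(events, window=timedelta(hours=1)):
        """events: (timestamp, host, technique) tuples, sorted by timestamp."""
        progress = {}  # host -> (index of next expected step, time chain started)
        for ts, host, technique in events:
            idx, start = progress.get(host, (0, ts))
            if idx > 0 and ts - start > window:
                idx, start = 0, ts  # chain went stale; start over
            if technique == CHAIN[idx]:
                idx, start = idx + 1, (start if idx else ts)
                if idx == len(CHAIN):
                    return host  # full chain observed inside the window
            progress[host] = (idx, start)
        return None

A lone "discovery" event is noise. The full sequence inside an hour is a story.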
And yet his broader message, as I understood it, was that the road ahead may be bumpy but the destination is a safer IT environment, and that AI, net-net, gets us closer to that destination than further from it.
I agree with that assessment.

One Thing That Did Make Me Think
There was an interesting undercurrent to Klein's talk that he didn't state explicitly, but that's worth drawing out.
His team's ability to detect Claude misuse depends on visibility. They can see how the model is being used. They can track patterns. They can identify when something looks like an emerging threat campaign.
That's reassuring from a security perspective. But it also means that everything you send to an AI model (your prompts, your context, your queries) is, to some degree, observable by the people operating that model.
Klein didn't frame it as a privacy concern. He framed it as a safety mechanism, which is fair. But the implication is the same: assume that what you send to any AI system is neither private nor guaranteed to disappear, and may be reviewed by humans at some point, whether for safety, model improvement, or some other purpose.
This is not a reason to stop using these tools. It's a reason to think carefully about what you put into them.
Don't share confidential data. Don't share client information. Don't assume the conversation ends when you close the tab.

The Long View
The security industry has always had a complicated relationship with fear. Fear sells products, drives budget conversations, and gets boardroom attention. It's a legitimate tool.
But fear as a default lens distorts the picture.
AI is creating new risks. It is also, in parallel, beginning to systematically address vulnerabilities at a scale and speed that was simply not possible before. The Firefox story is an early data point. It will not be the last.
The road is bumpy. But the direction, if we're disciplined about how we use these tools and honest about their limitations, is towards a more defensible software ecosystem.
That's not naive optimism. That's the evidence so far.
