
Patching Faster Than Your Shadow? AI and the New Vulnerability Race

A few days ago, I wrote that there was no need to panic about AI and cybersecurity.
I still believe that.
But "do not panic" does not mean "do nothing". It certainly does not mean that organisations can keep doing exactly the same thing, at exactly the same speed, with exactly the same assumptions.
And one of those assumptions may be quietly dying in front of us: the idea that organisations have weeks to fix serious vulnerabilities before attackers can realistically exploit them at scale.

The Old Patching Rhythm
For many organisations, vulnerability management has historically been built around a fairly comfortable rhythm: scan, prioritise, assign, test, patch, report, chase, escalate, and eventually close.
That process was never perfect, but it was familiar. Critical vulnerabilities often had remediation SLAs measured in weeks. High-severity vulnerabilities were sometimes given months. Only the truly urgent cases, the ones where exploitation was confirmed or an incident was already unfolding, were usually treated in days.
That was already risky. But in many environments it was considered manageable, because there was still friction on the attacker side. Turning a vulnerability into a reliable exploit required skill, time, testing, infrastructure and, in many cases, some luck.
AI changes that equation.
Not because AI is magic. Not because it can autonomously hack the planet while wearing a black hoodie. But because it can make competent attackers much faster.

The Three-Day Warning
Reuters recently reported (here) that US officials are considering reducing the deadline for fixing actively exploited vulnerabilities in government systems from roughly two or three weeks to just three days. The reason is simple: AI is compressing the exploitation timeline.
That matters.
And it matters because this is not about every theoretical software bug sitting somewhere in a backlog. This is about known exploited vulnerabilities: the kind already listed, already discussed, already analysed, already patched by a vendor, already added to exploit frameworks, or already being abused somewhere in the world.
In other words, not "maybe dangerous one day".
Dangerous now.
If attackers can use AI to move from CVE to proof-of-concept faster, defenders cannot continue to treat exploited vulnerabilities as normal operational tickets with a comfortable multi-week SLA.
A three-week remediation window for an exploited internet-facing vulnerability may soon feel like leaving the front door open and saying: "We'll get to it after the next change advisory board/Security Committee."

AI Does Not Need to Replace the Hacker
The uncomfortable part is that AI does not need to fully replace the attacker to change the risk landscape. It simply needs to help the attacker move faster.
Give a skilled hacker a CVE, a patch diff, a public advisory, some technical breadcrumbs, and an AI assistant that can help reason through the vulnerable code path, generate test cases, explain exploitation logic, or draft proof-of-concept code, and the distance between "vulnerability disclosed" and "working exploit" becomes much shorter.
There is still a human in the loop.
But the human now has a very fast assistant.
That is the real issue. Not science fiction. Not robot hackers. Just acceleration.
And acceleration is enough.

The SLA Problem
Most companies already have vulnerability remediation SLAs. On paper, at least.
The problem is that many of those SLAs were designed for a slower world. They often assume that the organisation has time to identify the asset, confirm ownership, assess exposure, plan a change, test the patch, wait for a maintenance window, deploy, validate, and then close the ticket.
That may still be reasonable for many vulnerabilities. Not every CVE is equal. Not every vulnerability is exploitable in your environment. Not every advisory deserves an emergency response.
But known exploited vulnerabilities are different. Internet-facing vulnerabilities are different. Vulnerabilities affecting identity systems, remote access, perimeter devices, collaboration platforms or widely deployed infrastructure are different.
For those, timelines measured in weeks may no longer be credible.
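To make the tiering concrete, here is a minimal sketch of what such a differentiated SLA policy could look like in code. The specific day counts and the `Vulnerability` record are illustrative assumptions, not a recommendation; the point is that "known exploited" and "internet-facing" drive the deadline, not severity alone.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    """Minimal, hypothetical vulnerability record for illustration."""
    cve_id: str
    known_exploited: bool   # e.g. appears in a KEV-style catalog
    internet_facing: bool   # the affected asset is exposed to the internet
    severity: str           # "critical", "high", "medium", "low"

def remediation_deadline_days(v: Vulnerability) -> int:
    """Return a remediation SLA in days. The tiers are examples only."""
    if v.known_exploited and v.internet_facing:
        return 3    # the "three-day" case: exploited and reachable
    if v.known_exploited:
        return 7    # exploited, but not directly exposed
    if v.severity == "critical":
        return 14
    if v.severity == "high":
        return 30
    return 90       # everything else keeps a routine window

# An exploited, internet-facing flaw gets the tightest window.
urgent = Vulnerability("CVE-0000-0001", True, True, "critical")
print(remediation_deadline_days(urgent))  # 3
```

In practice the inputs would come from a vulnerability scanner, a KEV-style feed and an asset inventory rather than being typed by hand, but the decision logic stays this simple: exposure and exploitation status come first.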
The challenge, of course, is that reducing the SLA is easy to write in a policy and very difficult to execute in the real world. Patching is not just pressing a button. There are legacy systems, fragile applications, business owners who fear downtime, vendors who respond slowly, change windows, testing constraints, unsupported platforms, conflicting priorities, and environments where nobody is completely sure what talks to what.
So yes, this will be hard.
But hard is not the same as optional.

More People Will Not Be Enough
The answer is not simply to throw more people at the problem.
More human resources can help, up to a point. But if AI increases the number of vulnerabilities found, the number of exploit attempts, and the speed at which attackers operationalise them, then organisations cannot respond with spreadsheet-driven vulnerability management and a few exhausted engineers chasing tickets.
That does not scale.
Organisations will need better automation, better asset visibility, better exposure management, faster prioritisation, and tighter integration between vulnerability scanners, threat intelligence, CMDBs, EDR/XDR, ticketing and change processes.
In some cases, SOAR-style automation will be needed to trigger containment, compensating controls or emergency workflows without waiting for the next weekly governance meeting.
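As a rough sketch of that integration, triage can be reduced to cross-referencing a scanner finding against a KEV-style catalog and an asset inventory, and routing the result to the right workflow. The catalog, asset list, and action labels below are hypothetical placeholders, and a real SOAR playbook would do far more than return a string.

```python
# Illustrative data sources; in reality these would be live feeds and a CMDB.
KEV_CATALOG = {"CVE-0000-0001"}            # known exploited CVEs (hypothetical)
EXPOSED_ASSETS = {"vpn-gw-01", "mail-01"}  # internet-facing assets (hypothetical)

def triage(finding: dict) -> str:
    """Map one scanner finding to a workflow label."""
    exploited = finding["cve"] in KEV_CATALOG
    exposed = finding["asset"] in EXPOSED_ASSETS
    if exploited and exposed:
        # This is where SOAR-style automation would fire without waiting
        # for a meeting: block at the WAF/IPS, isolate, alert the owner,
        # and open an emergency change.
        return "emergency-patch-and-contain"
    if exploited:
        return "expedited-patch"
    return "routine-ticket"

print(triage({"cve": "CVE-0000-0001", "asset": "vpn-gw-01"}))
```

The value is not in the three `if` statements; it is in having the data joined automatically so that the decision happens in seconds instead of at the next weekly review.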
And yes, defenders will probably need AI too.
Not because AI is a magic shield, but because the workload is becoming too fast and too wide for purely manual processes. AI can help summarise advisories, map affected assets, support triage, detect exploit patterns, assist with testing, and help teams understand whether a vulnerability is theoretical, reachable, exposed or already being targeted.
Used properly, AI can help defenders regain some of the speed that attackers are gaining.

Do Not Forget the Controls Around the Patch
There is another important point: patching is not the only defensive control.
It is the cleanest fix, when it is available and safe to deploy. But while an organisation is racing to patch, other controls matter enormously.
An EDR or XDR platform may detect and stop post-exploitation behaviour. An IPS or WAF may block an exploit attempt before it reaches the vulnerable component. Network segmentation may prevent a compromise from becoming a domain-wide disaster. Privilege management may limit what the attacker can do after entry. Attack surface management may identify systems that should never have been exposed in the first place. Good backups may turn a catastrophic incident into a painful but manageable recovery exercise.
This is not an argument against patching. It is an argument against pretending that patching exists in isolation.
If you cannot patch immediately, you still need to reduce the risk immediately.
Block. Isolate. Monitor. Restrict. Detect. Hunt. Compensate.
Waiting quietly for the next maintenance window is not a defensive strategy.

Faster Than Your Shadow?
You may not always be able to patch faster than your shadow.
Lucky Luke could.
Most companies cannot.
But if you cannot always draw first, you can still make sure there are snipers on the rooftops ready to neutralise the threat before it reaches the saloon.
That is the practical mindset shift. Not panic. Not fatalism. Not the lazy conclusion that AI means security is over.
The answer is adaptation: shorter remediation timelines for exploited vulnerabilities, more automation, better prioritisation, stronger compensating controls, real testing of emergency patching processes, clear ownership, less tolerance for unknown assets, and less comfort with "we will fix it next month".
Because doing nothing is also a decision.
And in the AI era, it may become a very expensive one.
