A conversation recurs every time a new technology makes headlines and is hailed as transformative: "We need to adopt (insert technology name here). We need to move fast. Security, please stop being a blocker."
Of course, at the moment that technology is AI.
A few weeks ago I wrote about why organisations are rushing into AI and the risks that come with it. That post dealt with the pace of adoption and the absence of governance frameworks. This one is about something different: the internal cultural dynamic that the rush creates, and one specific damaging idea that keeps surfacing inside organisations under pressure.
The idea that being security-aware (or conscious) is the same as being risk averse. And that being risk averse is the enemy of innovation.
It is not. And the confusion between the two is causing real harm.
Being risk averse and being risk aware are not the same thing. Not even close.
Risk averse means avoiding risk at almost any cost, even when that risk is acceptable and the reward is real. It means saying no reflexively, building walls around everything, and treating change as inherently threatening.
Risk aware means understanding the risk clearly, quantifying it where possible, deciding consciously how much of it you are willing to accept, and putting controls in place for the rest. It means saying yes, but not blindly.
A good security professional is not trying to stop innovation. He or she is trying to make sure that innovation does not create a catastrophic liability that wipes out the gains it was supposed to deliver.
Those are very different objectives. And confusing them does real damage to organisations that need both their innovators and their risk managers working together, not against each other.
If your organisation adopts the next cutting-edge technology next week instead of next month, what competitive advantage does that four-week difference actually deliver?
In most cases, the honest answer is: not much.
Now flip the question. If you rush that new technology adoption next week without proper security controls, governance, or risk assessment, what additional exposure are you taking on?
Potentially quite a lot.
Depending on your sector, your data, your regulatory obligations, and the specific new tools you are deploying, you could be looking at data leakage, regulatory breaches, IP exposure, vendor lock-in with unclear contractual protections, or in the case of AI, agentic AI systems making consequential decisions without appropriate oversight.
The asymmetry here matters. The upside of rushing is marginal. The downside of rushing carelessly is potentially significant. That is not a risk averse position. That is basic arithmetic.
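That arithmetic can be made concrete with a back-of-the-envelope expected-value sketch. Every figure below is a made-up assumption for illustration, not a number from this post; the point is only the shape of the asymmetry, a modest likely gain set against a large unlikely loss.

```python
def expected_value(gain, p_gain, loss, p_loss):
    """Expected outcome of a decision with one upside and one downside."""
    return gain * p_gain - loss * p_loss

# Hypothetical: shipping four weeks early yields a modest first-mover gain,
# and only if the head start actually converts into revenue.
upside_of_rushing = expected_value(gain=50_000, p_gain=0.5, loss=0, p_loss=0)

# Hypothetical: skipping the security review leaves a small chance of a
# breach or regulatory penalty that dwarfs the gain.
downside_of_rushing = expected_value(gain=0, p_gain=0,
                                     loss=2_000_000, p_loss=0.05)

# Net expected value of rushing under these (invented) assumptions.
net = upside_of_rushing + downside_of_rushing
```

With these invented inputs the expected downside is several times the expected upside, so the net is negative; the exact numbers will differ for every organisation, but a negative net like this is what "basic arithmetic" refers to above.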
I have been doing this for quite a while, and I have seen cyber security labelled a blocker to progress far too many times. Whilst in some cases the label was deserved, most of the time it wasn't.
When organisations rushed into cloud adoption, security teams raised questions about shared responsibility, data sovereignty, and access control. They were told they were slowing things down. Some of the organisations that ignored those questions spent considerably more cleaning up breaches and misconfigurations than they would have spent on a proper security review.
When BYOD hit, employees connected personal devices to corporate networks before anyone had thought through network segmentation or data separation. Security was the friction. The cleanup was not cheap.
When SaaS exploded, entire departments subscribed to third-party platforms and uploaded sensitive data before legal, security, or procurement had seen the contract. Shadow IT became a serious and persistent problem that many organisations are still managing today.
In every case, the narrative was the same: security is slowing us down. In every case, the organisations that listened to security early fared better than those that dealt with the consequences later.
AI is not an exception to this pattern. It is the next chapter of it. The difference is that AI is more capable, more deeply integrated into decision-making, and moving faster than any of its predecessors. Which means the consequences of the same mistake are, this time, proportionally larger.
I will acknowledge the obvious: cyber security is my profession and after three decades, it is also my instinct.
So yes, I am biased.
But that bias also means I have seen both sides clearly. I genuinely find AI exciting. Not as a threat to my profession, but as a genuinely transformative capability that will change the way security itself is done. I am interested in it, I work with it, and I think organisations that engage with it thoughtfully will gain real competitive advantage.
The key word is thoughtfully. Not reluctantly. Not slowly for the sake of slowness. Thoughtfully.
There is a version of AI adoption that is fast, ambitious, and secure. It requires that security is a participant in the conversation from the start, not an afterthought bolted on after the product is already in production.
If you are running AI adoption projects, leading innovation initiatives, or presenting to leadership on AI strategy, I have one small recommendation.
Add a bullet point about cyber security to every AI presentation you give. Even if it is brief. Even if it is a footnote at the bottom of slide twelve. Even if all it says is: "security implications are being assessed as part of this programme."
That single line does several things at once:
- It signals that your team is thinking about risk.
- It invites the right conversations early.
- It protects you if something goes wrong later.
- And, most importantly, it normalises the idea that adopting a new technology and considering security are not opposing forces; they are part of the same responsible practice.
You need a mindset. One that says: we are going to move with ambition, and we are going to do it with our eyes open.
Your business survival does not usually depend on adopting a new technology next week rather than next month.
It may very well depend on adopting it without creating vulnerabilities that take years to remediate, regulatory penalties you didn't budget for, or a breach that destroys the trust you spent decades building with your customers.
Not risk averse. Risk aware.
There is a world of difference.
Make sure the people pushing you to accelerate understand which one they are actually asking you to abandon.
(*) Granted, AI is not just another "flavour of the month"; it is more disruptive than most new technologies and concepts of recent years. But the points above still stand, in my view.

Risk Averse Vs Risk Aware: a difference that matters!