The Human Factor: Cybersecurity’s Blind Spot

Why it persists – and how to address it (Part 1 of a three-part series)
Authors: Professor Jamie Douglas, Andy Andrews and Matt O’Sullivan

Across this series, one argument runs consistently: the defining variable in cybersecurity isn’t technology – it’s human behaviour. Organisations keep investing in tools while structuring work around how people *should* behave, not how they *actually do*. These blog posts explore why that gap exists and what it takes to close it.

Part One: “Cybersecurity Isn’t a Technical Problem, It’s a Human One” establishes the foundation. Despite record investment in tools and platforms, breach rates remain stubbornly unchanged because technology can detect and block, but it cannot stop someone from clicking a link under pressure, approving an MFA request reflexively, or bypassing a control to meet a deadline. These aren’t irrational acts; they’re predictable human responses to how work is designed. Until organisations measure and develop behaviour alongside technical controls, investment will keep outpacing impact.

Part Two: “From Weakest Link to First Line of Defence: Reframing People in Cybersecurity” challenges the dominant response to that problem. Labelling people “the weakest link” is not just inaccurate, it is strategically counterproductive. People are the only part of any system capable of handling ambiguity, adapting to novel threats, and making judgement calls when signals are incomplete. The real lever is capability: the behavioural and cognitive competencies – risk management, adaptability, emotional intelligence, analytical thinking – that determine how individuals actually perform under real-world conditions. Most organisations don’t define, recruit for, or develop these systematically.

Part Three: “From Insight to Action: Building Behaviour-Led Cyber Resilience” turns insight into a practical model: define observable behaviours by role, build capability through contextual development rather than generic training, design environments where secure behaviour is the path of least resistance, and measure the leading indicators that reveal *why* risk occurs – not just when. Together, these four layers shift cybersecurity from a compliance function to an organisational capability.

The conclusion is the same throughout: attackers already exploit human behaviour deliberately. Defenders need to be equally deliberate in developing it.

Part One: Cybersecurity Isn’t a Technical Problem, It’s a Human One

“We’ve invested heavily in cyber tools – why isn’t it working?”

It’s a question echoing through boardrooms and security teams everywhere.

Budgets are growing. Tools are smarter. AI is watching every endpoint. And yet, breaches continue – often alarmingly similar to those from five years ago.

This gap between investment and outcome hides a deeper truth: technology creates the appearance of control, not actual resilience.

If cybersecurity were purely a technical challenge, we would have solved it by now.

Why Investment Creates a False Sense of Security

Organisations have never spent more on cybersecurity, yet the numbers barely move.

The Cyber Security Breaches Survey 2025 reports that 43% of UK businesses and 30% of UK charities suffered a cybersecurity breach or attack in the 12 months prior to the survey. Among larger organisations, the rates rise to 67% for mid-sized and 74% for large enterprises – virtually unchanged year-on-year.¹

Meanwhile, investment keeps climbing. Zero trust, AI-powered monitoring, automated patching – the tools are everywhere. On paper, this should mean progress.

In practice, it has created an illusion.

Technology can detect, block and warn, but it can’t stop someone from clicking a link, trusting a message, or responding under pressure.

Phishing remains the number one threat, with 85% of businesses that experienced a cybersecurity incident reporting a phishing attack.¹

These attacks don’t break encryption; they exploit routine human behaviour.

The real problem isn’t weak systems. It’s that systems are not designed for how people actually work.

Where Technology Meets Human Behaviour

Cybersecurity doesn’t live inside a firewall; it emerges from the interaction between people, processes, and technology.

Controls only work when people use them as designers expect. When reality diverges – due to deadlines, distractions, or misplaced trust – gaps open up and attackers step in.

Most breaches aren’t the result of irrational behaviour. They stem from normal human responses under pressure²:

  • Ignoring alerts to focus on delivery
  • Approving MFA requests reflexively
  • Following processes mechanically
  • Overriding controls to meet a deadline

These aren’t one-off mistakes; they’re predictable patterns.

Human factors create risk, but they can also reduce it. When behaviour aligns with context and awareness, the same staff who might have been exploited become the first line of defence.

How Everyday Behaviour Shapes Risk

How risk appears

Risk often surfaces because work is designed for efficiency rather than security. People take shortcuts to keep things moving:

  • Security steps get skipped to meet targets
  • MFA prompts are approved automatically
  • Threat alerts are ignored when the volume is overwhelming

Over time, constant vigilance leads to fatigue, and fatigue leads to mistakes.

How risk is reduced

Consider a salesperson who pauses before clicking a client’s link, verifies the request by phone, recognises the manipulation, and stops an attack in its tracks.

The toolset is the same; only the behaviour changes the outcome.

Cyber success and failure both stem from how people interpret and act, not how systems perform.

Why Knowledge and Tools Alone Don’t Change Behaviour

Organisations typically respond to incidents in two ways:

  1. Buy more technology
  2. Deliver more training

But neither response reliably changes behaviour on its own.

Awareness isn’t behaviour

Knowing what to do doesn’t guarantee it gets done. Behaviour is shaped by:

  • Time pressure
  • Incentives and performance metrics
  • Organisational culture
  • Leadership example

A well-trained employee can still click a malicious link if urgency or authority outweighs caution.

The training gap

According to the Cyber Security Breaches Survey 2025, only 19% of UK businesses and 21% of UK charities provided cybersecurity training in the 12 months prior to the survey.¹ Where training exists, it often feels irrelevant, technical, or compliance-driven rather than part of everyday decision-making: a Fortra/Ipsos study found that 52% of employees across France, the UK, Canada, Australia and the US felt cybersecurity was not part of their role.³

Effective training connects security to context and consequence. It develops judgement under pressure, not just knowledge of policy.

That means focusing on cognitive and behavioural competencies, not slide decks about passwords.

Leadership Signals and Ownership

Leadership behaviour sets the tone.

72% of UK businesses rank cybersecurity as a high priority, but only 27% have a board member directly responsible for it, down from 38% in 2021.¹ When accountability is unclear, so is alignment.

Employees take cues from leaders more than from policies. If executives bypass controls for convenience, the rest of the organisation follows their example.

Conversely, when leaders question risk trade-offs, support cautious decisions, and model secure habits, those signals cascade.

Cybersecurity starts in the boardroom long before it’s tested in operations.

The Measurement Problem

Most organisations still rely on technical metrics: patches applied, endpoints monitored and incidents detected. Useful, but incomplete.

If risk emerges from the interaction between people and systems, then we need behavioural indicators too:

  • How quickly are unusual requests challenged?
  • How do teams react under time pressure?
  • How often are procedures adapted or ignored in practice?

Attackers measure human behaviour precisely. Defenders rarely do.
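To make that concrete, behavioural indicators like these can be tracked with very little machinery. The sketch below – a hypothetical example with made-up data, field names and thresholds, not a reference to any particular tool – turns phishing-simulation logs into two leading indicators: how many people challenged the unusual request, and how quickly the typical reporter did so.

```python
from statistics import median

# Hypothetical phishing-simulation log: each employee's response to a
# simulated lure, with seconds from delivery to action (None if ignored).
events = [
    {"user": "a.khan",  "action": "reported", "seconds": 140},
    {"user": "b.lopez", "action": "clicked",  "seconds": 35},
    {"user": "c.smith", "action": "reported", "seconds": 420},
    {"user": "d.jones", "action": "ignored",  "seconds": None},
    {"user": "e.patel", "action": "reported", "seconds": 95},
]

def behavioural_indicators(events):
    """Compute two leading indicators from raw simulation events:
    the share of staff who challenged (reported) the unusual request,
    and the median time it took a reporter to do so."""
    reported = [e["seconds"] for e in events if e["action"] == "reported"]
    report_rate = len(reported) / len(events)
    median_time = median(reported) if reported else None
    return {"report_rate": report_rate, "median_seconds_to_report": median_time}

print(behavioural_indicators(events))
# → {'report_rate': 0.6, 'median_seconds_to_report': 140}
```

Tracked over successive simulations, a rising report rate and falling time-to-report show *why* risk is shrinking – behaviour is changing – rather than merely counting incidents after the fact.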

The Questions That Actually Build Resilience

Moving beyond tools means changing the questions asked at the leadership level:

  • If cyber risk is socio-technical, why measure only technical controls?
  • If attackers exploit behaviour, how are we deliberately developing the behaviour of our staff?
  • What would resilience look like if systems were designed for real human behaviour, not idealised compliance?

Executive boards are moving in this direction, but implementation still lags behind intent.

Conclusion: Closing the Human Gap

Cybersecurity is often framed as a contest between attackers and technology.

In truth, it’s a system of human judgement, communication and trust that is supported, not solved, by technology.

Breaches persist not because tools are weak, but because organisations are built around assumptions about how people should behave, rather than how they do.

Until cybersecurity strategies start with that reality – human behaviour as the defining variable – investment will keep outpacing impact.

Want to see how a competency-based approach can help you bridge this gap by defining, developing, and measuring human capabilities?

Book a chat and a demo with Lexonis today.

Interested in learning more?

See for yourself how the latest skills framework from the Chartered Institute of Information Security (CIISec) translates into real role design, skills assessment and planning inside the Lexonis TalentScape platform. Join our live demo:

Tuesday 5th May – CIISec Skills and Jobs Framework in Action

References

¹ Ipsos in the UK (2025), Cyber Security Breaches Survey 2025, UK Government, Department for Science, Innovation & Technology.

² Help Net Security (2019), Human Error Still the Leading Cause of Data Breaches.

³ Fortra / Ipsos (2023), From Data Protection to Cyber Culture, Terranova Security.
