Cybersecurity has always evolved alongside technology, but the rise of artificial intelligence (AI) is accelerating that evolution at a pace that outstrips workforce planning in most organisations.
Security teams are already feeling the shift. AI tools now detect anomalies in network traffic, automatically triage alerts, analyse malware patterns, and surface threats in seconds that previously required hours of human investigation. For overstretched security operations centres (SOCs), this is a welcome development.
Yet beneath the surface lies a more complex challenge. AI is not just changing how cybersecurity work is done. It is fundamentally reshaping what cybersecurity roles look like.
And for many organisations, job design hasn’t yet caught up.
The End of the ‘Alert Factory’

Enterprise environments can generate over 10,000 security alerts daily, and around 90% of SOCs are overwhelmed by backlogs and false positives, with more than 80% of analysts reporting they feel constantly behind, according to Osterman Research 1.
For years, many cybersecurity teams operated like alert factories. Analysts spent their days reviewing thousands of alerts – repetitive work, but critical to operations.
AI is now automating much of that triage. Machine learning models can rapidly filter false positives, correlate signals across systems, and prioritise incidents based on risk.
The result? Entry-level security roles are changing.
Instead of spending most of their time reviewing alerts, analysts increasingly focus on higher-value actions. The role is becoming less about volume and more about judgement. Activities now include:
- Investigating complex incidents
- Validating AI-generated findings
- Performing proactive threat hunting
- Analysing adversary behaviour
In other words, the nature of the work is shifting from monitoring to investigation and interpretation. For organisations, this means that traditional role definitions may no longer reflect the real capabilities required to operate effectively.
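To make the shift concrete, here is a minimal, purely illustrative sketch of AI-assisted triage: alerts below a model-confidence threshold are suppressed as likely false positives, and the remainder are queued by risk. The field names, scores, and thresholds are assumptions for illustration, not any particular platform's API.

```python
# Illustrative alert-triage sketch: suppress likely false positives,
# then rank what remains by model confidence weighted by asset
# criticality. All field names and thresholds are assumed values.

FALSE_POSITIVE_THRESHOLD = 0.2  # below this model score, auto-close

def triage(alerts):
    """Return alerts worth human investigation, highest risk first."""
    kept = [a for a in alerts if a["model_score"] >= FALSE_POSITIVE_THRESHOLD]
    # Risk = model confidence weighted by how critical the asset is.
    return sorted(
        kept,
        key=lambda a: a["model_score"] * a["asset_criticality"],
        reverse=True,
    )

alerts = [
    {"id": "A1", "model_score": 0.05, "asset_criticality": 5},  # likely noise
    {"id": "A2", "model_score": 0.9,  "asset_criticality": 5},  # urgent
    {"id": "A3", "model_score": 0.6,  "asset_criticality": 2},
]
queue = triage(alerts)
print([a["id"] for a in queue])  # noise dropped, highest risk first
```

The analyst's time then goes into the short, ranked queue rather than the raw alert stream.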
From Reactive Defence to Proactive Security
AI is also enabling security teams to move away from purely reactive defence. Modern detection platforms use behavioural analytics and predictive modelling to identify suspicious activity patterns before a breach occurs. Rather than waiting for alerts, analysts can actively search for signs of adversary activity.
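As a simple sketch of the behavioural idea (not any vendor's actual model): compare a user's activity today against their own historical baseline and flag sharp statistical deviations, rather than waiting for a signature match. The data and threshold below are assumed for illustration.

```python
# Hedged behavioural-analytics sketch: flag activity that deviates
# sharply from a user's own baseline, using a simple z-score test.
from statistics import mean, stdev

def is_anomalous(baseline, today, z_threshold=3.0):
    """True if today's count sits > z_threshold std devs above baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return today != mu
    return (today - mu) / sigma > z_threshold

# 30 days of typical daily login counts for one user (assumed data).
history = [4, 5, 6, 5, 4, 6, 5, 4, 5, 6, 4, 5, 5, 6, 4,
           5, 6, 5, 4, 5, 6, 4, 5, 5, 6, 4, 5, 6, 5, 4]
print(is_anomalous(history, 48))  # a sudden spike is worth hunting
print(is_anomalous(history, 6))   # normal behaviour passes quietly
```

Real platforms use far richer models, but the principle is the same: the baseline is the user's own behaviour, so the hunt can start before any known-bad indicator appears.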
This shift is redefining key cybersecurity roles. Threat analysts, for example, increasingly need skills in:
- Data analysis
- Behavioural pattern recognition
- Adversary tactics and techniques
- Intelligence interpretation
Similarly, incident response specialists now work alongside automated response systems that can isolate devices, block malicious traffic, or revoke credentials instantly. The human role becomes one of oversight, judgement, and escalation, rather than manual execution.
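That oversight-and-escalation split can be sketched in a few lines. This is a hypothetical human-in-the-loop policy, not a real product's logic: high-confidence containment actions run automatically, and everything else goes to an analyst. Action names and the confidence cut-off are assumptions.

```python
# Hypothetical human-in-the-loop response policy: automate only
# high-confidence, pre-approved containment actions; escalate the rest.

AUTO_THRESHOLD = 0.95
APPROVED_ACTIONS = {"isolate_host", "block_ip", "revoke_credentials"}

def respond(incident):
    """Return the action taken: automatic containment or escalation."""
    if (incident["confidence"] >= AUTO_THRESHOLD
            and incident["action"] in APPROVED_ACTIONS):
        return f"auto:{incident['action']}"
    return "escalate_to_analyst"

print(respond({"action": "isolate_host", "confidence": 0.99}))  # automated
print(respond({"action": "isolate_host", "confidence": 0.70}))  # human review
```

The design choice is deliberate: automation handles the unambiguous cases at machine speed, while ambiguous or novel incidents still reach human judgement.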
New Security Risks in the Age of AI
According to IBM’s Cost of a Data Breach Report, organisations using AI and automation in security operations contained breaches 108 days faster and saved an average of $2.22 million more than those without AI-driven defences. 2
While AI strengthens defence, it also creates new attack vectors. Cybercriminals are already using AI to:
- Generate highly convincing phishing campaigns
- Automate vulnerability discovery
- Develop adaptive malware that evades detection
- Create deepfake-based social engineering attacks
This evolving threat landscape requires entirely new capabilities within security teams. New roles are beginning to emerge as organisations recognise that protecting AI systems is becoming a critical security priority:
- AI security specialists/engineers
- Adversarial AI researchers
- AI governance professionals
These roles are still taking shape across the industry – job titles and responsibilities vary widely – but their appearance on the landscape signals that securing AI systems is fast becoming a discipline in its own right.
This is new ground for most organisations. Securing traditional IT infrastructure is one challenge. Securing machine learning models, training data, and algorithmic decision systems is another entirely.
The Skills Gap Behind the Technology

The global cybersecurity workforce gap reached nearly 4.8 million unfilled roles in 2024 – a 19% increase year on year – against a total workforce of just 5.5 million, according to the ISC2 2024 Cybersecurity Workforce Study. 3
Here’s the critical point: AI adoption in cybersecurity is advancing faster than the workforce’s capabilities. Tools are evolving rapidly, but many organisations still define roles using outdated job descriptions that were written before AI-driven security platforms existed.
This creates several problems:
- Teams struggle to recruit for poorly defined roles.
- Training programmes target the wrong skills.
- Career pathways become unclear for security professionals whose responsibilities are evolving.
In short, organisations risk deploying powerful AI tools without ensuring their people have the skills to use them effectively. The challenge isn’t just adopting AI; it’s redesigning cybersecurity roles for an AI-enabled world.
Turning Role Redesign into a Structured Process
Redesigning roles doesn’t need to be guesswork. This is where structured skills frameworks and modern skills intelligence platforms become essential. Frameworks such as the CIISec Skills Framework and SFIA (Skills Framework for the Information Age) provide organisations with a clear, standardised way to define cybersecurity capability.
CIISec offers specialist depth across cyber disciplines, from threat intelligence and governance to incident response and digital forensics, defining what effective performance looks like at different levels of expertise.
SFIA complements this by providing a broader framework for digital and technology skills across seven levels of responsibility, enabling organisations to align cybersecurity roles with wider IT and business capability models.
Together, these frameworks give organisations a shared vocabulary and a clear benchmark for the skills required in modern cyber roles. But frameworks alone aren’t enough. Organisations also need the ability to quickly and accurately map those skills to their workforce.
From Framework to Action with Lexonis
This is where Lexonis TalentScape comes in. By combining recognised skills frameworks such as CIISec and SFIA with AI-driven skills intelligence, the Lexonis platform enables organisations to move from theory to implementation quickly.
Instead of manually redesigning job descriptions and capability models, organisations can:
- Generate AI-assisted cybersecurity job profiles aligned to CIISec and SFIA
- Assess the current skills of their teams against those role definitions
This alone can save weeks of manual effort, and organisations can then go on to:
- Identify emerging capability gaps as AI reshapes security work
- Design targeted development pathways for evolving roles
For organisations operating in regulated environments, Lexonis TalentScape also supports compliance reporting – providing the audit trail and evidence needed to demonstrate that cybersecurity capability meets regulatory and governance requirements. Find out more about how Lexonis solutions support compliance.
In practical terms, this means organisations can redesign cybersecurity roles in weeks rather than months, ensuring their teams remain aligned with the realities of modern threat environments.
In an era where both attackers and defenders are using AI, the organisations that succeed will not simply deploy better technology. They will build teams whose roles, skills, and capabilities evolve just as quickly as the threats they face. And that must begin with clear visibility of your skills landscape.
If you want to explore how CIISec, SFIA, and Lexonis TalentScape can help you redesign cybersecurity roles for the age of AI, speak to the Lexonis team about mapping your cybersecurity roles to CIISec and SFIA.
Contact Lexonis.
1 Source: Osterman Research Report, cited in MSSP Alert
2 Source: IBM Cost of a Data Breach, cited in Practical DevSecOps
3 Source: ISC2 2024 Cybersecurity Workforce Study
Interested in learning more?
See for yourself how CIISec’s latest framework translates into real role design, skills assessment and planning inside the Lexonis TalentScape platform – Join our live demo:
Tuesday 5th May – CIISec Skills and Jobs Framework in Action