Making sense of AI’s role in cyber security

Over the past year, the buzz around artificial intelligence (AI) has reached new heights, with businesses being inundated with AI solutions and executives eager to harness its transformative potential to drive innovation and growth. 

Ellie Hurst, commercial director at Advent IM, points out that procurement teams are adding AI clauses and chief information security officers (CISOs) are under pressure to “do something with AI”.

According to Hurst, this creates fertile ground for marketing: more webinars, more whitepapers, bolder claims and a fresh wave of “we can automate your security operations centre (SOC)” pitches.

She says there is also fear, uncertainty and doubt (FUD) around AI-powered cyber attacks. While attackers do use automation, and increasingly AI, she says this fear is being used to rush IT buyers into purchasing tools that have not yet been proven to reduce risk in corporate IT environments.

Where AI makes sense for cyber security

Hurst urges IT security chiefs being presented with the so-called latest and greatest AI enhancements to security tools to assess whether a specific product’s AI features are mature enough to help their organisation, without introducing new risk. “Some AI features genuinely save analyst time or improve detection. Others are little more than chatbots bolted onto dashboards,” she says. 

According to Richard Watson-Bruhn, a cyber security expert at PA Consulting, cyber security tools offering AI accelerators to help IT security teams reduce the time spent on repetitive workloads tend to be provided as add-ons distributed as software as a service (SaaS).

Another category of AI cyber security tool caters to buyers looking for a product that meets the requirements of enterprise AI. Watson-Bruhn says this type of tool is a good choice when IT decision-makers require trusted outputs and verifiable sources that can be produced entirely inside the corporate network.

“Use enterprise AI when the work spans multiple teams, touches sensitive data, or your policies need it to run the same way every time,” he adds.

With AI-powered or AI-enhanced cyber security tools now seemingly everywhere, Aditya K Sood, vice-president of security engineering and AI strategy at Aryaka, says the challenge for CISOs is not just assessing whether AI belongs in IT security, but identifying the features that truly deliver value when AI is sold as part of a cyber security product. Sood urges CISOs and IT buyers to distinguish adequate AI security from marketing hype.

Sood points out that AI in cyber security is not a new phenomenon. Machine learning (ML) has powered spam filters, anomaly detection, user behaviour analysis and fraud detection systems for over a decade. But what is new, in his view, is the arrival of large language models (LLMs) and more accessible AI tooling that cyber security software providers are rapidly layering onto existing products.
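To make Sood’s point concrete, much of that ML-era detection amounts to straightforward statistics. The sketch below is a hypothetical Python illustration, not code from any named product: it flags outlier hosts using a median-based anomaly score over event counts.

```python
import statistics

def flag_anomalies(event_counts, threshold=3.5):
    """Flag hosts whose event count is a strong outlier, using a
    modified z-score built on the median absolute deviation (MAD)."""
    values = list(event_counts.values())
    median = statistics.median(values)
    mad = statistics.median(abs(v - median) for v in values)
    if mad == 0:  # all hosts behave identically; nothing stands out
        return []
    return [host for host, v in event_counts.items()
            if 0.6745 * abs(v - median) / mad > threshold]

# Example: daily authentication events per host (invented figures)
counts = {"web-01": 42, "web-02": 39, "db-01": 41, "jump-01": 310}
print(flag_anomalies(counts))  # ['jump-01'] stands out for investigation
```

Techniques of this sort have been routine in security tooling for years; the novelty Sood describes lies in the interface layered on top, not the underlying detection.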

“This shift has changed how security teams interact with data – summaries instead of raw logs, conversational interfaces instead of query languages, and automated recommendations instead of static dashboards,” he says.

While this can be genuinely helpful, Sood believes it also creates an illusion of intelligence, even though the underlying security fundamentals may not have changed. “The mistake many organisations make is assuming that more AI automatically equals better security. It doesn’t,” he warns.

In Sood’s experience, one lesson keeps resurfacing: sound IT security architecture beats features.

“An AI bolted onto a weak security foundation won’t save you,” he says. “If identity is broken, data governance is unclear, or network visibility is fragmented, AI simply operates on bad inputs and produces unreliable outputs.”

Sood urges CISOs and IT buyers to remember that AI does not replace the fundamentals of good cyber security; it amplifies them.

Building on a corporate IT security foundation

Advent IM’s Hurst recommends that IT buyers begin by looking at the outcomes they want to achieve and at threat models, rather than focusing on features of a particular product. “Anchor decisions to your top risks,” she says. These may include identity abuse, ransomware, data exfiltration, third-party exposure, operational technology and critical national infrastructure constraints.

Hurst suggests IT security leaders work out what controls they require to help their organisation mitigate these risks and limit exposure. Most organisations have a small number of recurring pain points, such as alert overload, slow investigations of cyber security incidents, vulnerability backlogs, logging gaps, identity sprawl, poor visibility of internet-exposed assets or supplier connections they do not fully understand. 

“Don’t buy an ‘AI cyber tool’ because it sounds clever. Buy something because it fixes a real problem you already have,” she says.
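As an example of fixing a real, existing problem, the alert overload Hurst mentions can often be eased with very simple tooling before any AI is involved. This is a minimal sketch, assuming alerts arrive as dictionaries with “rule” and “host” fields, an illustrative format rather than any particular product’s schema:

```python
from collections import Counter

def dedupe_alerts(alerts):
    """Collapse repeated (rule, host) pairs into one line item with a count,
    so analysts triage distinct issues rather than raw alert volume."""
    grouped = Counter((a["rule"], a["host"]) for a in alerts)
    return [{"rule": rule, "host": host, "count": n}
            for (rule, host), n in grouped.most_common()]

alerts = [
    {"rule": "failed_login", "host": "web-01"},
    {"rule": "failed_login", "host": "web-01"},
    {"rule": "port_scan", "host": "db-01"},
]
print(dedupe_alerts(alerts))
# [{'rule': 'failed_login', 'host': 'web-01', 'count': 2},
#  {'rule': 'port_scan', 'host': 'db-01', 'count': 1}]
```

If a dozen lines of deduplication removes most of the noise, that is a useful baseline against which to judge what an AI feature adds.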

Rather than being won over by a slick demo of AI-powered capabilities, Hurst recommends that IT decision-makers let the weaknesses they have identified in their organisation’s cyber security strategy determine which functionality would actually be useful.

AI agents in IT security operations

Analyst firm Gartner predicts that 70% of large SOCs will pilot AI agents to augment operations by 2028, but only 15% will achieve measurable improvements without structured evaluations.

According to Gartner vice-president analyst Craig Lawson, the potential of AI agents to transform security operations and ease workloads is real, but only if approached with rigour and evaluated through an outcome-driven lens.

As Lawson pointed out in a recent Computer Weekly article, AI agents can automate high-volume tasks, which reduces manual workloads and frees up IT security analysts to focus on complex investigations and strategic priorities. These agents drive greater consistency across processes, bridging skills gaps so even less experienced team members can handle more complex tasks based on the tribal knowledge AI SOC agents have captured.
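One way to picture that consistency is a playbook-driven triage step: rules captured from senior analysts are applied identically to every alert, and anything the playbook does not cover is escalated to a human. The playbook contents below are invented for illustration:

```python
# Hypothetical playbook distilled from senior analysts' "tribal knowledge"
PLAYBOOK = {
    "impossible_travel": "lock_account",
    "known_bad_ip": "block_ip",
    "repeated_failed_logins": "raise_ticket",
}

def triage(alert):
    """Apply the playbook uniformly; escalate anything it does not cover."""
    action = PLAYBOOK.get(alert["type"], "escalate_to_analyst")
    return {"alert_type": alert["type"], "action": action}

print(triage({"type": "known_bad_ip"}))       # consistent, automated response
print(triage({"type": "novel_persistence"}))  # unknown pattern goes to a human
```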

However, he feels the idea that AI SOC agents can fully replace human expertise in security operations is a myth. “Today’s reality is one of collaboration – AI agents are emerging as powerful facilitators, not autonomous replacements. The future of security operations will be shaped by how well organisations blend AI-driven augmentation with skilled human judgement.”

Numerous barriers are holding back the deployment of AI agents for IT security. Gartner predicts that 45% of SOCs will re-evaluate their build-versus-buy decisions for AI detection technology by 2027, with an emphasis on enhancing analyst capabilities.

Lawson notes that pricing models may be tied to usage or require “bring your own AI” arrangements, and certain features could be capped or restricted as operational demand grows.

In addition, he says poor interoperability with existing tools or workflow inefficiencies can create new silos within security operations or require costly re-architecture.

Priorities for tool selection

From these discussions, it is clear that IT buyers need to be cautious when approached by cyber security companies selling AI functionality and should avoid technology lock-ins or a single point of failure.

Hurst recommends that IT decision-makers should make sure they have an exit plan. “Ensure you can extract your data, avoid proprietary black boxes and revert to previous processes without a six-month rescue project,” she says. 

Gartner’s Lawson advises IT buyers to give a high priority to seamless integration with the organisation’s existing SOC technology stack when assessing cyber security products offering agentic AI capabilities. “Every investment should be tied to measurable outcomes, such as improvements in mean time to repair (MTTR), mean time to contain (MTTC), reduction in false positives or analyst workload,” he says.
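Those metrics are simple to compute once incident timestamps are recorded consistently. A minimal sketch, assuming each incident records when it was detected, contained and resolved:

```python
from datetime import datetime, timedelta

def mean_delta(incidents, start_key, end_key):
    """Average elapsed time between two recorded timestamps."""
    deltas = [i[end_key] - i[start_key] for i in incidents]
    return sum(deltas, timedelta()) / len(deltas)

incidents = [  # invented records for illustration
    {"detected": datetime(2026, 2, 1, 9, 0),
     "contained": datetime(2026, 2, 1, 10, 30),
     "resolved": datetime(2026, 2, 1, 14, 0)},
    {"detected": datetime(2026, 2, 3, 11, 0),
     "contained": datetime(2026, 2, 3, 11, 45),
     "resolved": datetime(2026, 2, 3, 16, 0)},
]
print("MTTC:", mean_delta(incidents, "detected", "contained"))  # 1:07:30
print("MTTR:", mean_delta(incidents, "detected", "resolved"))   # 5:00:00
```

Comparing the same figures before and after an AI feature is switched on gives the outcome-driven evidence Lawson describes.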

A sound cyber security strategy should not rely on unproven shortcuts. IT leaders weighing up the new AI functionality appearing in cyber security tools should ensure suppliers can prove its value.

Hurst urges organisations to make sure they control the data these AI tools use, and recommends that the decisions the tools make be explainable. “Put humans in the loop until trust is earned,” she says. Above all, AI-powered cyber security tools should fit with how IT security is already run.
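A human-in-the-loop gate of the kind Hurst describes can be as simple as a confidence threshold combined with a requirement for a stated rationale. The threshold below is an assumed starting value, not a recommended standard, to be adjusted as trust is earned:

```python
CONFIDENCE_FLOOR = 0.9  # assumed starting point; tighten or relax over time

def route(recommendation):
    """Auto-apply only confident, explained actions; queue the rest for review."""
    confident = recommendation["confidence"] >= CONFIDENCE_FLOOR
    explained = bool(recommendation.get("rationale"))
    return "auto_apply" if confident and explained else "human_review"

print(route({"confidence": 0.95, "rationale": "matches known C2 indicator"}))
print(route({"confidence": 0.95}))  # no rationale, so a human still decides
```

Gates like this keep decisions explainable while the organisation gathers evidence that a tool’s judgements can be trusted.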
