Identity and access management (IAM) is no longer a back-office security control. In an AI-driven world, it is fast becoming the control plane for how organisations operate, compete and manage risk.
The rapid adoption of generative AI (GenAI), autonomous agents and machine-driven workflows is fundamentally reshaping the identity landscape. What we are seeing is not an incremental evolution of IAM but the emergence of an entirely new identity stack: one that must account for humans, machines and, increasingly, AI agents acting with autonomy and speed.
This shift is exposing a critical gap. Traditional IAM architectures were built around relatively static identities (employees, partners and customers) with predictable access patterns. AI breaks that model. Identities are now dynamic, ephemeral and often non-human, with agents being created, modified and retired in real time.
That has immediate security implications. Gartner predicts that by 2028, 25% of organisational breaches will be traced back to AI agent abuse, underscoring how quickly this risk surface is expanding.
The rise of AI agents as first-class identities
One of the most significant changes in the new identity stack is the elevation of AI agents to first-class identities. These are not simply service accounts or bots in the traditional sense. They can act independently, make decisions and interact across systems with varying levels of privilege.
This creates a new category of identity risk. In many environments today, highly privileged AI agents can be indirectly controlled by users with far lower levels of access. The result is a widening gap between who is authorised and what is actually executed: a fundamental breakdown of least-privilege principles.
At the same time, these identities are highly transient. An AI agent's role may exist for only seconds or minutes, with the permissions it needs shifting continuously based on context. This makes traditional identity governance approaches, including periodic reviews, static roles and policy-based controls, increasingly ineffective.
Organisations are, in effect, trying to secure a moving target with tools designed for a fixed perimeter.
From identity management to identity intelligence
To address this, IAM must evolve from identity management to identity intelligence.
This means embedding AI not just into user experience, but into the core of identity security, enabling real-time detection, adaptive access control and continuous verification. Identity decisions can no longer rely solely on predefined rules; they must be context-aware, risk-based and responsive to rapidly changing behaviours.
For example, detecting anomalous behaviour from an AI agent requires understanding not just who or what the agent is, but what it is trying to achieve, how its behaviour is changing, and whether that aligns with expected intent. This is a fundamentally different problem from traditional authentication and authorisation.
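The shape of that problem can be illustrated with a deliberately simplified sketch: score an agent's recent behaviour against its declared intent rather than its identity alone. A production system would use behavioural models, not a set lookup; the function and the `docs:`/`users:` action names below are hypothetical.

```python
# Illustrative sketch of intent-aware anomaly detection for an AI agent.
# The question is not "is this identity authenticated?" but "does this
# behaviour still match the purpose the agent was created for?"
from collections import Counter


def anomaly_score(declared_intent: set[str], recent_actions: list[str]) -> float:
    """Fraction of recent actions falling outside the agent's declared intent."""
    if not recent_actions:
        return 0.0
    counts = Counter(recent_actions)
    off_intent = sum(n for action, n in counts.items()
                     if action not in declared_intent)
    return off_intent / len(recent_actions)


# An agent declared for report generation suddenly touching user-admin APIs:
intent = {"docs:read", "docs:summarise"}
actions = ["docs:read", "docs:summarise", "users:list", "users:delete"]
score = anomaly_score(intent, actions)
assert score == 0.5  # half of its behaviour no longer matches its stated purpose
```

A risk-based control would feed a score like this into the access decision itself, stepping up verification or revoking the agent's credential as the score rises.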
It also introduces new challenges around explainability, audit and compliance. As AI systems make or influence access decisions, organisations must be able to trace actions back to both human intent and machine execution, a requirement that many current IAM systems are not designed to support.
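One way to picture that traceability requirement is an audit record in which every machine action carries a reference back to the human request that authorised it. The schema below is an assumption for illustration, not a standard.

```python
# Illustrative sketch: pairing human intent with machine execution so an
# audit can answer "who asked for this, and what did the agent actually do?"
# Field names (request_id, on_behalf_of, etc.) are hypothetical.
import json
import time
from dataclasses import asdict, dataclass


@dataclass(frozen=True)
class HumanIntent:
    request_id: str
    user: str
    purpose: str          # what the human asked for, in auditable terms


@dataclass(frozen=True)
class AgentAction:
    agent_id: str
    action: str
    on_behalf_of: str     # request_id of the originating HumanIntent
    timestamp: float


intent = HumanIntent("req-42", "alice", "export Q3 sales summary")
action = AgentAction("report-agent", "crm:export", intent.request_id, time.time())

# The audit entry links execution back to intent explicitly:
record = {"intent": asdict(intent), "execution": asdict(action)}
assert record["execution"]["on_behalf_of"] == record["intent"]["request_id"]
log_line = json.dumps(record)  # one append-only audit-log entry
```

Many current IAM systems log the agent's action or the user's session, but not the link between them; it is that link that explainability and compliance depend on.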
The hidden risk in the AI identity layer
What makes this shift particularly challenging is that many organisations are already deploying AI at scale without fully addressing these identity risks.
In practice, AI adoption is often outpacing governance. Security teams are being asked to retrofit controls onto systems that were not designed with AI identities in mind. This creates blind spots across the identity layer, from data leakage through AI interactions, to model manipulation and privilege escalation.
The dual challenge for IAM leaders is clear: they must both protect AI systems and use AI to improve identity security. Gartner highlights that IAM solutions now need to operate in a dual mode, securing AI while also leveraging it to enhance detection, response and operational efficiency.
This is not simply a technical adjustment. It requires a rethinking of strategy, skills and operating models.
Why a “battle plan” is needed now
Organisations that treat AI as an add-on to existing IAM capabilities risk falling behind. The scale and speed of change demand a more deliberate, structured response.
A clear “battle plan” for IAM in the age of AI starts with transformation, not transition. This means rethinking identity strategy from the ground up, aligning roadmaps, retraining teams and prioritising AI-centric security risks as core business issues, not niche concerns.
It also requires difficult trade-offs. Resources must shift away from only maintaining legacy capabilities towards building AI-ready identity platforms. In some cases, this will mean partnering or acquiring to accelerate capability development and close critical gaps.
Crucially, time to market matters. As AI adoption accelerates, organisations that can rapidly operationalise identity controls for AI agents will gain a significant advantage, not just in security, but in trust.
Defining the next era of digital trust
The emergence of the new identity stack is ultimately about trust.
Every AI-driven interaction, whether it is a recommendation, a transaction or an automated decision, depends on confidence in the identity behind it. If organisations cannot govern AI identities effectively, that trust erodes quickly.
This is why IAM is moving from a supporting function to a mission-critical foundation. The organisations that succeed will be those that recognise identity as central to their AI strategy, not peripheral to it.
The next phase of IAM will not be defined by incremental improvements in authentication or access management. It will be defined by how well organisations can govern high-speed, autonomous and often opaque identities at scale.
Those that get this right will help shape the future of AI trust. Those that do not may find that the weakest point in their AI strategy is not the model, but the identity layer underpinning it.
Gartner analysts will further explore how organisations can secure and govern AI-driven identities, agents and access at scale at the Gartner Security & Risk Management Summit in London, from 22–24 September 2026.
Ted Ernst is senior director analyst at Gartner
