Supply-chain attacks will intensify (and force “proof-based” cyber governance) - Stuart Sharp, VP of Product Strategy
The next wave of supply-chain breaches will go beyond exploiting software dependencies and weaponize the trust layer between organizations and their vendors and partners. Off-the-shelf toolkits, some of them state-sponsored, are lowering the barrier to entry for third-party compromises. As a result, regulators are hard-coding “continuous verification” into frameworks such as NIS2, DORA, and the EU Cyber Resilience Act. Enterprises will need to demonstrate, not just declare, that least-privilege delegation and continuous monitoring extend beyond their own perimeter. Static compliance checklists will give way to living audit trails that record who granted what access, when, and why.
This gradual regulatory tightening will intensify in 2026 and create a new control plane between enterprises and vendors: an identity-centric layer that enforces just-in-time privileges, records delegation/custody chains, and supports instant evidence production for inspections. What that means in practice is that identity teams will be judged on provable reduction of third-party exposure rather than on policy documentation.
“Trust but verify” becomes “prove and continuously enforce.”
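To make the idea of a living audit trail concrete, the sketch below shows one shape such a delegation record could take. It is a minimal, hypothetical Python example; the field names (granted_by, scope, expires_at, and so on) and the time-boxed grant are assumptions for illustration, not a prescribed schema or any vendor’s API.

```python
# Hypothetical sketch of a just-in-time delegation record for a vendor identity.
# Field names are illustrative only; they do not reflect a specific product or standard.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import uuid


@dataclass
class DelegationGrant:
    granted_by: str                    # accountable human who approved the access
    granted_to: str                    # vendor or partner identity receiving it
    scope: str                         # the specific privilege being delegated
    reason: str                        # business justification, kept for inspections
    expires_at: datetime               # just-in-time: every grant is time-boxed
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    grant_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def is_active(self) -> bool:
        """An expired grant confers no access but stays in the trail as evidence."""
        return datetime.now(timezone.utc) < self.expires_at


# Example: a four-hour grant that answers "who, what, when, and why" by construction.
grant = DelegationGrant(
    granted_by="jane.doe@example.com",
    granted_to="support-bot@vendor.example",
    scope="prod-db:read-only",
    reason="Ticket 1234: diagnose replication lag",
    expires_at=datetime.now(timezone.utc) + timedelta(hours=4),
)
```

A trail built from records like this can be produced on demand for an inspection, which is exactly the kind of instant evidence production the new control plane will be judged on.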
We’ll see the first major “AI interface → privilege escalation” breach - Alan Radford, Global Strategist
The AI goldrush has begun. AI assistants are graduating into AI agents that go beyond informing to take action, becoming operational gatekeepers authorized to execute commands, orchestrate workflows, and even change production configurations without human interaction. As the rush to invest in AI continues, many organizations will run before they can walk, outpacing the capabilities of their current security programs. Undeniably, we are on the verge of witnessing the first high-profile breach to emerge from this new attack surface, where over-privileged AI will be exploited. Prompt-injection and response-time attacks will target the workflow between user and agent, triggering unauthorized workflows, revealing or manipulating sensitive data, making unintended configuration changes, and over-escalating privileged permissions and roles. When this occurs, it will expose how fragile current privilege boundaries have become wherever AI is authorized to act autonomously on human requests.
The direct consequence is a shift in defensive priorities. Businesses will have to move beyond securing the human identities that use AI to securing the machine and agent identities that live inside AI systems themselves. Guardrails will evolve to log not just what the AI did, but who or what process triggered it, with a fully traceable chain of custody. Organizations will need to demonstrate provable accountability for agentic actions, binding every automated decision to a distinct human owner. In other words, “AI governance” is no longer an abstract compliance term; it has become an essential frontline security control.
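As a rough illustration of what such a traceable chain of custody could look like, the sketch below logs an agent action together with the event that triggered it and a named human owner. The structure, identifiers, and function name are assumptions made for the example, not a real framework’s API.

```python
# Illustrative only: a chain-of-custody entry that binds an agent's action to the
# trigger that caused it and to an accountable human owner.
import json
from datetime import datetime, timezone


def record_agent_action(agent_id: str, triggered_by: str, owner: str,
                        action: str, target: str) -> str:
    """Return a JSON audit entry answering: what acted, what triggered it, who owns it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,          # machine identity inside the AI system
        "triggered_by": triggered_by,  # user prompt, upstream agent, or scheduled job
        "accountable_owner": owner,    # human who answers for this automation
        "action": action,
        "target": target,
    }
    return json.dumps(entry)


print(record_agent_action(
    agent_id="agent://ops-assistant-07",
    triggered_by="prompt from alice@example.com",
    owner="alice@example.com",
    action="restart-service",
    target="payments-api (staging)",
))
```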
AI vs AI: an autonomous “immune system” for identity? - Alan Radford, Global Strategist
We are already in an AI-accelerated arms race between attackers and defenders. Threat actors are using AI for vulnerability reconnaissance and for generating tailored exploits. Ransomware, for example, is evolving into an AI-augmented form: malware already exists that uses large language models to generate code (e.g. ‘PromptLock’), and threat actors are starting to use AI chatbots for extortion negotiations. What was once a manual chain of events is becoming an adaptive, self-optimizing system. For defenders, this means static, signature-based controls are no longer sufficient. We must architect for continuous detection, dynamic response, and a forensic chain of custody.
On the defensive side, AI systems are correlating data, surfacing anomalies, and reacting to threats at scale. On the offensive side, malicious models and tools are emerging that can continuously probe environments, test weaknesses, and adapt their behavior to evade detection.
In short, AI is reducing the cost of attack while increasing the cost of defense. Threat actors benefit from reduced skill requirements and faster, automated campaign orchestration. Defenders, meanwhile, must absorb the opposite pressures, investing in more advanced identity-centric security controls and specialized expertise to counter increasingly intelligent automated threats. This pressure will intensify on both sides throughout 2026 and beyond. Traditional, static implementations of zero trust will have to evolve into more dynamic, self-adjusting ‘immune systems’ for identity security. Permissions, connectors, and even micro-code modules will be created, elevated, and dismantled on demand, producing audit and rollback challenges that businesses have never faced before.
The key differentiator here will be explainability. Rather than letting defensive AI simply get on with things, organizations will need a complete, immutable audit trail that can trace why a defensive AI took or reversed an action and, critically, identify the accountable human owner. AI cannot be taken to court. In 2026, visibility, accountability, and transparency will become key performance indicators for AI automation. The most successful enterprises will insist on AI systems that can justify every privilege change as clearly as a human analyst would. In other words, we need to “know our AI.”
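One plausible, purely illustrative way to make such an audit trail tamper-evident is to hash-chain each entry so that any retroactive rewrite is detectable. The class and field names below are assumptions for the sketch; a real deployment would also need signing and secure storage.

```python
# Sketch of a tamper-evident (hash-chained) audit trail for defensive AI actions.
import hashlib
import json


class ChainedAuditTrail:
    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, actor: str, owner: str, change: str, justification: str) -> None:
        entry = {
            "actor": actor,                  # defensive AI component that acted
            "accountable_owner": owner,      # human who answers for the action
            "change": change,                # e.g. a privilege created or revoked
            "justification": justification,  # why the AI acted, in plain language
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any retroactive edit breaks the hashes."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A trail like this lets an auditor confirm that the record of what the defensive AI did, and why, has not been quietly rewritten after the fact.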
The NHI crisis will continue… and demand kill switches - Robert Kraczek, Global Strategist
Bots, service accounts, and machine agents already outnumber employees by orders of magnitude (as many as 50:1 on the average enterprise network), and the gap keeps widening. Each carries entitlements, credentials, and dependencies that seldom die when their creator leaves or their purpose ends. This isn’t a new problem, but in 2026 the focus will shift from discovery to full-scale governance: mapping ownership chains, establishing kill switches, and enforcing lifecycle controls at the policy level rather than per account. As enterprises replace human staff with task-specific agents, accountability will need to flow back to the human or system that created each NHI. We expect identity systems to move toward short-lived NHIs rather than the free-for-all approach that has led to the current situation: NHIs will be spun up for a defined job, execute their role, and then self-terminate or be decommissioned. That promises lower standing risk, but it also demands airtight traceability. Without a provable trail between the NHI, its creator, and its actions, organizations will find themselves unable to answer the most basic questions in an investigation: who (or what) spawned this NHI? Is it still needed? Why is it still active?
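A minimal sketch of what “spun up for a defined job, then self-terminating” could look like follows; the class, the time-to-live mechanism, and the kill switch are illustrative assumptions rather than a reference implementation.

```python
# Illustrative sketch of an ephemeral non-human identity (NHI) with a built-in
# expiry and kill switch, traceable to the human or system that spawned it.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import uuid


@dataclass
class EphemeralNHI:
    created_by: str        # human or system accountable for this identity
    purpose: str           # the defined job it exists to perform
    ttl: timedelta         # lifetime granted at creation
    nhi_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    killed: bool = False

    def is_active(self) -> bool:
        """Active only while unkilled and inside its time-to-live window."""
        return not self.killed and datetime.now(timezone.utc) < self.created_at + self.ttl

    def kill(self) -> None:
        """Kill switch: immediate decommissioning regardless of remaining TTL."""
        self.killed = True


# Spawned for one job, with a two-hour lifetime and a named owner.
etl_agent = EphemeralNHI(
    created_by="data-platform-team@example.com",
    purpose="nightly export of billing events",
    ttl=timedelta(hours=2),
)
```

Because the owner and purpose are captured at creation, the investigative questions above (who spawned it, is it still needed, why is it still active) have answers by default.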
Data access governance will make its grand return as AI’s safety net - Robert Kraczek, Global Strategist
AI adoption is resurrecting an old discipline and making it a top priority once again: understanding who, or what, is interacting with data. As generative and agentic systems crawl corporate repositories, forgotten data stores are being rediscovered by machines faster than by humans. In 2026, the ability to enforce least privilege at the data layer will define resilience. Enterprises will switch their focus back to file-share and data-access governance, tying entitlements directly to individual identities rather than to entire departments or to user devices (which can change hands over time). The more AI integrates into decision-making, the higher the cost of data sprawl and accidental over-permissioning. Verification at every access request, human or machine, will be non-negotiable. Zero-trust principles will extend down to every data transaction, closing the blind spots that AI-driven reconnaissance is known to exploit.
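Illustratively, least privilege at the data layer reduces to a per-request check that ties the entitlement to the requesting identity itself, human or machine. The entitlement map, identities, and function below are hypothetical, shown only to make the deny-by-default pattern concrete.

```python
# Hypothetical sketch: verify every data access request against entitlements that
# are bound to individual identities rather than to departments or devices.
ENTITLEMENTS = {
    "alice@example.com":      {("finance-share/q3-forecast.xlsx", "read")},
    "agent://report-builder": {("finance-share/q3-forecast.xlsx", "read")},
    "bob@example.com":        {("hr-share/salaries.csv", "read"),
                               ("hr-share/salaries.csv", "write")},
}


def check_access(identity: str, resource: str, action: str) -> bool:
    """Deny by default; allow only if this identity holds this exact entitlement."""
    return (resource, action) in ENTITLEMENTS.get(identity, set())


assert check_access("alice@example.com", "finance-share/q3-forecast.xlsx", "read")
assert not check_access("agent://report-builder", "hr-share/salaries.csv", "read")
```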
“Bring your own ID” will become policy rather than privilege - Stuart Sharp, VP of Product Strategy
By 2026, the rollout of EU digital identity wallets under eIDAS 2.0 will make government-grade credentials usable across public and private services. Users will gain the right to authenticate their age or identity with external, verified IDs, forcing organizations to accept factors they don’t directly issue or control. On the plus side, this will raise interoperability and assurance standards across industries, but it will also expose how many enterprises still treat basic authentication as the full extent of security. We’ll see a clear line emerge between authenticating users and authorizing them for access, because even when a digital wallet verifies someone’s identity beyond doubt, the company’s own least-privilege and task-specific access policies must still apply. Identity systems will increasingly enforce their internal governance controls on top of trusted external user verification services.
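That distinction can be sketched as two separate steps: accept the external wallet’s verification for authentication, then apply internal policy for authorization. Everything below (the function names, the mocked wallet check, the policy table) is an assumption made for illustration, not how any wallet or IAM product actually exposes this.

```python
# Illustrative separation of authentication (external, e.g. an eIDAS 2.0 wallet)
# from authorization (internal least-privilege policy). All names are hypothetical.

INTERNAL_POLICY = {
    # verified subject -> tasks this organization allows them to perform
    "natural-person:ES/1234": {"open-account", "view-own-records"},
}


def wallet_verifies(presentation: dict) -> str | None:
    """Stand-in for verifying a wallet presentation; returns the verified subject."""
    # In reality this would validate the credential's signatures and trust chain.
    return presentation.get("subject") if presentation.get("valid") else None


def authorize(presentation: dict, task: str) -> bool:
    subject = wallet_verifies(presentation)              # step 1: external authentication
    if subject is None:
        return False
    return task in INTERNAL_POLICY.get(subject, set())   # step 2: internal authorization


# A verified identity still has to pass the organization's own policy.
claim = {"subject": "natural-person:ES/1234", "valid": True}
print(authorize(claim, "open-account"))   # True
print(authorize(claim, "approve-loans"))  # False: authentication alone is not enough
```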
Stealth model poisoning may emerge as a silent AI threat - Nicolas Fort, Director of Product Management
As organizations feed their own data into LLMs in pursuit of efficiency and automation, attackers will learn to “distort” them quietly rather than disable them outright. Poisoning the underlying training data or incremental updates can bias outputs in ways that look legitimate but drive faulty business logic. In 2026, a manipulated model won’t need to overtly leak data to cause damage; it can simply “fiddle the numbers” to cause quiet but potentially devastating harm, like a trading algorithm that miscalculates risk or a compliance filter that misclassifies content. Detecting this level of tampering will require deeper transparency and tracking, binding every model change, prompt, and retraining event to authenticated identities. In many ways, this will be the year that AI assurance joins forces with IAM as a board-level issue. Figuring out who accessed a model is one thing, but finding out if and how they’ve influenced its behavior is another challenge entirely.
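One illustrative way to bind every model change to an authenticated identity is to record the hash of each training artifact alongside the identity that submitted it. The sketch below is a hypothetical example, not an established AI-assurance standard or any particular platform’s API.

```python
# Hypothetical sketch: a provenance log that ties each retraining or fine-tuning
# event to the authenticated identity that submitted it and a hash of the data used.
import hashlib
from datetime import datetime, timezone


def record_training_event(log: list, submitted_by: str, model: str,
                          training_data: bytes, description: str) -> None:
    """Append a provenance entry; the data hash makes later tampering detectable."""
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "submitted_by": submitted_by,   # authenticated human or pipeline identity
        "model": model,
        "data_sha256": hashlib.sha256(training_data).hexdigest(),
        "description": description,
    })


provenance: list = []
record_training_event(
    provenance,
    submitted_by="ml-pipeline@example.com",
    model="risk-scoring-v7",
    training_data=b"...incremental update batch...",
    description="Weekly fine-tune on Q4 transaction data",
)
```

With a log like this, the question of who influenced a model’s behavior, and with what data, at least has a paper trail to start from.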
“Cyber resilience” becomes the new “cyber recovery” - Stuart Sharp, VP of Product Strategy
Recent large-scale outages have shown that too many “resilience” plans are basically just glorified paperwork, and governments are responding. In 2024, the UK urged every CEO to verify their organization’s operational continuity against cyberattacks. In 2026, a huge gap will emerge between companies that can keep functioning in the midst of a cyber incident and those that can’t. True resilience will mean tested isolation plans, verified access control over backups, and the ability to re-establish trust when identities and directories are compromised. In other words, the resilience to stay operational will become more important than the ability to bounce back and recover. Backup and recovery will still be vital, but if they are the only recourse left in 2026, it’s already too late. Trust in services will be a premium currency – for users and for regulators, especially in critical industries like finance, healthcare, and energy.
The basics strike back - Andrew Hartnett, CTO
It may sound counter-intuitive, but after years of chasing the “next big threat,” 2026 will likely mark a return to fundamentals. Many breaches still succeed because organizations neglect simple controls such as multi-factor authentication (MFA), patch management, change discipline, and clear role definitions. As AI adds layers of complexity, these bedrock principles will actually become more valuable and relevant. After all, zero-trust ambitions would collapse in an instant without solid IAM hygiene underneath them. Next year, industry conversations will swing back from hype to housekeeping. Regulators, insurers, and boards will ask for evidence of the basics before funding more advanced AI defenses. In the end, sophisticated automation is only as strong as the basic processes it is built on.