Agentic Identity: The Missing Layer in Enterprise AI Architecture

This is the fourth article in Arion Research's "Future Enterprise" series, exploring how AI agents are restructuring enterprise technology. The series examines the architectural layers, competitive dynamics, and strategic decisions that will define the next era of enterprise software.

Here is a question that should keep every enterprise technology leader up at night: when an AI agent negotiates a contract term on behalf of your company, who signed it?

Not which human approved the workflow. Not which system the agent ran on. Who, or what, is the verifiable, auditable, legally attributable entity that committed your organization to that term? Right now, in almost every enterprise deploying AI agents, the honest answer is: nobody knows.

This is not a theoretical concern. Enterprises are deploying agents that process invoices, modify vendor contracts, adjust pricing, and communicate with customers. Each of these actions carries legal and financial consequences. Yet the identity infrastructure that governs these agents (the mechanisms for establishing who an agent is, what it is authorized to do, who it acts on behalf of, and how its actions can be traced) is either absent or borrowed from frameworks designed for a different era.

In the first article in this series, I outlined the Future Enterprise architecture and identified Governance as one of the critical vertical services that spans the entire stack. In the third article, I explored the trade-offs between native and external agents, and flagged trust and accountability as one of the unresolved challenges of the hybrid model. Agentic identity sits at the intersection of both. It is the architectural prerequisite that makes governance enforceable and multi-agent collaboration trustworthy. And it is the layer that most vendors are neglecting.

Why Human Identity Frameworks Do Not Work for Agents

Enterprise identity management has evolved significantly over the past two decades. SAML gave us federated single sign-on. OAuth 2.0 and OpenID Connect gave us delegated authorization and portable user identity. Zero Trust architectures moved us from perimeter-based security to continuous verification. These are mature, battle-tested frameworks. And none of them were designed for what AI agents actually do.

The problem is not that existing identity standards are bad. It is that they rest on assumptions that do not hold in an agentic world.

Assumption 1: The principal is human. OAuth and OIDC are built around human users who authenticate interactively (entering credentials, approving consent screens, responding to MFA challenges). Agents do not authenticate this way. They operate continuously, spawn sub-agents, and act at machine speed without a human in the loop for each action. Treating an agent as just another "service account" misses the point. Service accounts have static permissions. Agents make dynamic decisions about what to do next based on context that changes in real time.

Assumption 2: Sessions are bounded. Human sessions have clear start and end points. You log in, you work, you log out. Agents may run for hours, days, or indefinitely. They may spawn child agents that inherit some permissions but not others. They may delegate tasks to other agents in other systems. The concept of a "session" does not map cleanly to agentic workflows, and the security mechanisms built around session management (token expiration, refresh cycles, idle timeouts) need to be rethought.

Assumption 3: Authorization is relatively static. In traditional IAM, a user gets a role, and that role grants a set of permissions. Those permissions change infrequently (when the user changes jobs, gets a promotion, or leaves the company). Agent authorization needs to be dynamic. An agent processing a routine invoice may have standard approval authority. The same agent encountering an invoice that exceeds a threshold, or that comes from a flagged vendor, or that triggers a regulatory reporting requirement, needs different authorization in real time. Static roles do not capture this.

Assumption 4: Trust is bilateral. Traditional identity is a two-party problem: a user proves their identity to a system. Agentic identity is a multi-party problem. An agent acts on behalf of a human, who is part of an organization, using a model from one vendor, running on infrastructure from another vendor, accessing data from a third vendor, and potentially collaborating with agents from entirely different organizations. The chain of trust is longer and more complex than anything current identity frameworks were designed to handle.

The OpenID Foundation acknowledged this gap directly in their October 2025 whitepaper on identity management for agentic AI, noting that traditional IAM frameworks "presume predictable application behavior and a single authenticated principal" and are insufficient for agents that make autonomous decisions. ISACA put it more bluntly, calling it a "looming authorization crisis" for agentic AI.

The Four Dimensions of Agentic Identity

A complete agentic identity framework needs to address four distinct dimensions. Most current approaches address one or two. None yet address all four.

Authentication: Proving Who the Agent Is

Authentication for agents is not just about credentials. It is about provenance. When an agent presents itself to a system, that system needs to verify not just "this agent has a valid token" but a richer set of claims: which organization deployed this agent, which model powers it, what version of the agent logic is running, and what platform it is executing on.

Think of it as a chain of attestation. The agent identity needs to be cryptographically bound to its deployment context. SPIFFE (Secure Production Identity Framework for Everyone), originally designed for workload identity in cloud-native environments, offers a useful starting point. SPIFFE assigns verifiable identities to workloads based on what they are and where they run, rather than relying on network location or static secrets. Extending this model to agents (where identity is tied to the agent's provenance, capabilities, and deployment context) is a natural evolution.
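To make the idea concrete, here is a minimal sketch of what a SPIFFE-style agent identity might carry. The `spiffe://` URI format is real; the additional provenance fields (organization, model, agent version, platform) and all the example values are illustrative assumptions, not part of the SPIFFE specification.

```python
from dataclasses import dataclass

# Hypothetical sketch: a SPIFFE-style identity extended with agent
# provenance claims. The spiffe:// URI names the workload; the extra
# fields bind the agent to its deployment context. Field names and
# values are illustrative only.

@dataclass(frozen=True)
class AgentIdentity:
    spiffe_id: str          # e.g. spiffe://acme.example/agent/procurement/v3
    organization: str       # who deployed the agent
    model: str              # which model powers it
    agent_version: str      # what version of the agent logic is running
    platform: str           # what platform it is executing on

    def trust_domain(self) -> str:
        # The SPIFFE trust domain is the host portion of the spiffe:// URI
        return self.spiffe_id.split("//", 1)[1].split("/", 1)[0]

identity = AgentIdentity(
    spiffe_id="spiffe://acme.example/agent/procurement/v3",
    organization="Acme Corp",
    model="vendor-model-2025-10",
    agent_version="3.4.1",
    platform="k8s-prod-us-east",
)
```

A relying party that can verify each of these claims, rather than just a bearer token, gets the chain of attestation the paragraph above describes.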

Microsoft's Entra Agent ID, announced at RSAC 2026, takes a step in this direction by giving each agent a unique identity in the Entra directory. But it is scoped to the Microsoft ecosystem. The broader challenge is establishing authentication that works across vendors, across platforms, and across organizational boundaries.

Authorization: Defining What the Agent Can Do

Authorization for agents needs to go beyond static role-based access control. It requires what I would describe as contextual, dynamic authorization: permissions that adapt based on what the agent is doing, what it is encountering, and what risks are present.

Consider an agent managing procurement. In its normal operating mode, it can approve purchase orders up to $10,000, from approved vendors, for standard categories. But context changes: the agent encounters a purchase order for $50,000 from a new vendor in a restricted country. The authorization decision is no longer just "does the agent have the procurement role?" It involves the transaction amount, the vendor risk profile, the regulatory jurisdiction, and possibly the agent's track record of decisions in similar situations.
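The procurement example can be sketched as a context-aware policy check. The thresholds, field names, and the escalate-to-human fallback are assumptions for illustration, not any specific product's API.

```python
# Illustrative sketch of contextual authorization for the procurement
# example. The role check is necessary but not sufficient; the rest of
# the decision depends on transaction context. Thresholds and fields
# are assumed values for the sketch.

def authorize_purchase(agent_role: str, amount: float,
                       vendor_approved: bool,
                       jurisdiction_restricted: bool) -> str:
    if agent_role != "procurement":
        return "deny"
    if amount <= 10_000 and vendor_approved and not jurisdiction_restricted:
        return "allow"          # normal operating mode
    return "escalate"           # context pushes the decision to a human

assert authorize_purchase("procurement", 2_500, True, False) == "allow"
assert authorize_purchase("procurement", 50_000, False, True) == "escalate"
```

The point of the sketch is the shape of the decision: the same agent, with the same role, gets a different answer when the context changes.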

This is not role-based access control. It is closer to what the Cloud Security Alliance calls "relationship-based access": ARIA (Agent Relationship-based Identity and Authorization), where authorization decisions incorporate the full context of the agent's relationships, delegations, and current operational state. The OpenID Connect for Agents (OIDC-A) proposal goes further, introducing delegation chain validation so that when Agent B acts on a task delegated by Agent A, the authorization system can verify the entire chain of delegation back to the human or policy that originated it.
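Delegation chain validation in the spirit of the OIDC-A proposal can be sketched as follows. Each link records who delegated to whom; the chain must be contiguous and must originate with a human or policy principal. Cryptographic verification of each link is omitted for brevity, and the `human:`/`policy:`/`agent:` prefixes are an assumed naming convention, not part of the proposal.

```python
# Minimal sketch of delegation-chain validation: the chain is a list of
# (delegator, delegatee) links. A real implementation would also verify
# a signature on each link; this sketch checks only structure.

def valid_delegation_chain(chain: list[tuple[str, str]]) -> bool:
    if not chain:
        return False
    root_delegator = chain[0][0]
    if not root_delegator.startswith(("human:", "policy:")):
        return False                     # must originate with a human or policy
    for (_, delegatee), (next_delegator, _) in zip(chain, chain[1:]):
        if delegatee != next_delegator:  # each delegatee is the next delegator
            return False
    return True

chain = [("human:j.doe", "agent:A"), ("agent:A", "agent:B")]
assert valid_delegation_chain(chain)
assert not valid_delegation_chain([("agent:A", "agent:B")])
```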

Accountability: Establishing Legal Binding

This is the dimension that matters most to the C-suite and the general counsel's office, and it is the one where the infrastructure is weakest. When an agent takes an action that has legal or financial consequences (approving a transaction, modifying a contract term, committing to a delivery date, sending a communication to a customer), there needs to be an unambiguous, legally defensible record of who is accountable.

The legal landscape is moving fast. California's AB 316, effective January 2026, explicitly precludes using an AI system's autonomous operation as a defense to liability. The EU's Product Liability Directive, to be implemented by December 2026, classifies AI software as a "product" subject to strict liability. Courts and regulators are not going to wait for the technology industry to figure out agentic identity. They are going to assign liability, and the organizations that cannot demonstrate clear accountability chains will bear the burden.

Accountability requires more than logging. It requires binding: a verifiable link between the agent's action, the authorization that permitted it, the delegation chain that led to it, and the human or organizational policy that is ultimately responsible. Think of it as the digital equivalent of a signature with a notarized chain of custody. The technology for this exists in pieces (digital signatures, verifiable credentials, distributed ledgers), but nobody has assembled it into a coherent framework for agentic accountability.
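One way to picture binding, as opposed to mere logging, is a sealed action record that links the action, the authorization decision, and the delegation chain. This sketch uses an HMAC to keep the example self-contained; a production system would use asymmetric signatures tied to the agent's verifiable identity, and every field name here is an illustrative assumption.

```python
import hashlib
import hmac
import json

# Sketch of accountability binding: one record ties together the action,
# the authorization that permitted it, and the delegation chain that led
# to it, then seals the record. HMAC stands in for a real signature.

ORG_KEY = b"demo-only-secret"  # illustrative; never hard-code real keys

def bind_action(action: dict, authorization: dict,
                delegation_chain: list) -> dict:
    record = {
        "action": action,
        "authorization": authorization,
        "delegation_chain": delegation_chain,
    }
    canonical = json.dumps(record, sort_keys=True).encode()
    record["binding"] = hmac.new(ORG_KEY, canonical, hashlib.sha256).hexdigest()
    return record

rec = bind_action(
    {"type": "approve_po", "amount": 9500},
    {"decision": "allow", "policy": "procurement-v2"},
    [("human:j.doe", "agent:procurement")],
)
```

An auditor can recompute the seal from the record's contents; if any element of the chain was altered after the fact, the binding no longer verifies.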

Provenance: Tracing the Full History

Provenance answers the question: how did we get here? It is the complete, tamper-evident record of an agent's actions, decisions, and reasoning chain. Not just what the agent did, but why it did it, what data it used, what alternatives it considered, and how confident it was in its decision.

Provenance is critical for three reasons. First, it is a regulatory requirement in many industries (financial services, healthcare, and government all require audit trails that demonstrate how decisions were made). Second, it is an operational necessity for debugging and improvement: when an agent makes a bad decision, you need to trace back through its reasoning to understand what went wrong. Third, it is a trust mechanism; when agents from different organizations collaborate, provenance lets each party verify that the other's agents acted within agreed parameters.

The challenge is scale. An enterprise running thousands of agents, each making hundreds of decisions per day, generates an enormous provenance footprint. The governance infrastructure needs to capture this at the granularity required for compliance without creating a performance bottleneck or a storage problem that makes the data practically useless.
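The tamper-evident property described above is commonly achieved with a hash chain: each entry hashes the previous entry's hash together with its own content, so altering any past entry breaks every hash after it. This is a minimal sketch with assumed entry fields, not a complete provenance schema.

```python
import hashlib
import json

# Sketch of a tamper-evident provenance log as a hash chain. Entry
# contents (decision, confidence) are illustrative placeholders.

GENESIS = "0" * 64

def append_entry(log: list[dict], entry: dict) -> None:
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps({"prev": prev_hash, "entry": entry}, sort_keys=True)
    log.append({"entry": entry, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_log(log: list[dict]) -> bool:
    prev_hash = GENESIS
    for row in log:
        payload = json.dumps({"prev": prev_hash, "entry": row["entry"]},
                             sort_keys=True)
        if (row["prev"] != prev_hash or
                hashlib.sha256(payload.encode()).hexdigest() != row["hash"]):
            return False
        prev_hash = row["hash"]
    return True

log: list[dict] = []
append_entry(log, {"decision": "approve_invoice", "confidence": 0.92})
append_entry(log, {"decision": "flag_vendor", "confidence": 0.41})
assert verify_log(log)
log[0]["entry"]["confidence"] = 0.99   # tamper with history
assert not verify_log(log)
```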

The DNS Analogy: Why Agent Identity Needs Its Own Naming Infrastructure

One of the more interesting proposals emerging in the standards community is the Agent Name Service (ANS), currently under discussion as a potential IETF standard. ANS would map agent identities to verified capabilities, cryptographic keys, and endpoints, similar to how DNS maps human-readable domain names to IP addresses.

The analogy is instructive. Before DNS, the internet worked, but it was fragile and hard to scale. Every system needed to maintain its own mapping of names to addresses. DNS created a shared, hierarchical naming infrastructure that made the internet usable at scale. Agentic identity faces a similar problem: right now, every platform maintains its own agent registry, and there is no shared way to look up an agent’s identity, capabilities, and trust credentials across platforms.

ANS would not solve the full identity problem; it addresses discovery and naming, not authorization or accountability. But it would provide a critical piece of infrastructure that the other layers can build on. Without something like ANS, cross-platform agent collaboration requires point-to-point trust relationships that do not scale beyond a handful of partners.

The Cross-Organizational Forcing Function

If agentic identity were only an intra-enterprise problem, it would be important but manageable. Enterprises can impose internal standards, mandate specific identity providers, and enforce governance policies across their own agent deployments. Messy, but solvable.

Cross-organizational agent collaboration changes the equation entirely.

Consider what is already emerging: supply chain scenarios where a buyer's procurement agent communicates with a supplier's fulfillment agent. Financial services where a client's portfolio agent negotiates terms with a counterparty's trading agent. Healthcare where a provider's clinical agent shares information with a payer's authorization agent. In each case, agents from different organizations (with different identity providers, different governance frameworks, different security postures, and different liability structures) need to establish mutual trust and act on it.

This is the problem that makes identity non-negotiable rather than merely important. Inside the enterprise, you can compensate for weak identity with strong network controls, manual oversight, or limited agent autonomy. Across organizations, those compensating controls do not exist. You cannot rely on the other organization's network security. You cannot manually oversee every cross-boundary interaction. And limiting agent autonomy defeats the purpose of deploying agents in the first place.

Cross-organizational collaboration demands what I would describe as federated agentic identity: a model where organizations can issue identities to their own agents, those identities are verifiable by external parties, and the trust relationships between organizations are explicit, auditable, and revocable. This is analogous to what federated identity (SAML, OIDC) did for human users across organizational boundaries, but with the additional complexity of dynamic authorization, delegation chains, and provenance tracking that agents require.

No vendor has solved this. Microsoft's Entra Agent ID works within the Microsoft ecosystem. SPIFFE handles workload identity within and across cloud platforms. A2A and MCP handle agent communication protocols. But the federated agentic identity layer (the infrastructure that lets Organization A's agent prove its identity and capabilities to Organization B's agent, with both sides confident in the trust chain) does not exist yet.

What Vendors Are Getting Wrong

The current vendor landscape for agentic identity falls into three categories, none of which are adequate.

Extending existing IAM. The most common approach is to treat agent identity as an extension of existing identity and access management. Give agents service accounts, assign them roles, manage them in the existing directory. This is what most enterprises are doing today, and it is the equivalent of using horse-drawn carriages on a highway. Service accounts were designed for static, predictable workloads. Agents are dynamic, autonomous, and context-sensitive. Stretching IAM frameworks to cover agents creates the illusion of governance without the substance.

Platform-specific identity. Some vendors are building agent identity into their specific platforms. This solves the problem within one ecosystem but creates new silos. If your agents from Vendor A have identities that Vendor B's systems cannot verify, you have reproduced the same interoperability problem that federated identity solved for human users two decades ago. We should not need to solve it again from scratch.

Security theater. A troubling number of vendors are conflating security with identity. They offer runtime monitoring, anomaly detection, and behavioral analysis for agents (all valuable capabilities), but present them as "agent identity" solutions. Monitoring what an agent does is not the same as verifying who the agent is. Detection is not authentication. Observability is not accountability. These tools complement an identity framework, but they cannot substitute for one.

The deeper problem is that most vendors are treating agentic identity as a security feature rather than an architectural layer. Security is one dimension of identity. But identity also enables trust, accountability, governance, and interoperability. Building it as an afterthought (bolting it onto existing architectures rather than designing it in from the start) will produce the same kind of brittle, incomplete result that enterprises spent years cleaning up when they retrofitted security onto early cloud deployments.

Toward an Agentic Identity Architecture

So what should an agentic identity architecture look like? Based on the analysis above, I see five essential characteristics:

Agent-native, not human-adapted. The identity framework needs to be designed for how agents actually work: continuous operation, dynamic authorization, delegation chains, sub-agent spawning, and cross-platform interaction. Adapting human identity frameworks to agents is a dead end. The industry needs purpose-built standards that account for agent-specific behaviors from the start. The emerging work from the OpenID Foundation, including the OIDC-A proposal, moves in this direction.

Federated by design. Identity must work across organizational boundaries from day one, not as a later extension. The SAML/OIDC lesson is clear: federation that is retrofitted onto a proprietary identity system always produces friction, exceptions, and gaps. Agent identity needs to be federated from the beginning, with organizations issuing and managing their own agent identities within a shared trust framework.

Contextually dynamic. Authorization cannot be a static lookup. It needs to incorporate the agent's current task, the sensitivity of the data it is accessing, the risk profile of the action it is taking, and the full delegation chain that led to the current operation. This requires a policy engine that evaluates authorization in real time, not a role database that is checked at login.

Provenance-rich. Every agent action needs to carry its full provenance: who authorized it, what model produced it, what data informed it, and what the confidence level was. This provenance needs to be tamper-evident, efficiently stored, and queryable at the granularity that regulators and auditors require.

Interoperable across the stack. Identity cannot be locked to one layer of the architecture. It needs to flow from the Enterprise Platform (where data lives) through the Agentic Platform (where agents reason and act) to the Collaboration layer (where agents interact with humans and each other). The same identity framework that governs an agent's access to an ERP database needs to govern its participation in an Agent Service Bus workflow and its communication with a human through the collaboration interface.

None of these characteristics are individually novel. Federated identity, dynamic authorization, provenance tracking, and cross-layer identity propagation all exist in other contexts. What does not exist is a coherent framework that combines them for the specific requirements of AI agents operating at enterprise scale.

What Enterprise Leaders Should Do Now

Agentic identity infrastructure is immature, but that does not mean enterprises should wait. The decisions you make now about how agents are identified, authorized, and governed will be expensive to reverse later. Here is where to focus:

Inventory your agent landscape. Most enterprises do not know how many agents they are running, what permissions those agents have, or how those agents interact with each other. Start with a complete inventory: which agents are deployed, which identities they use, what data they can access, what actions they can take, and who approved each of those authorizations. If you cannot answer these questions today, you have an exposure you have not sized.

Separate identity from security. Agent monitoring and anomaly detection are important, but they are not identity. Ensure that your agent governance strategy distinguishes between knowing who an agent is (identity), knowing what it can do (authorization), knowing what it did (provenance), and detecting when something goes wrong (security). Conflating these creates gaps.

Push vendors on interoperability. When evaluating agent platforms, ask hard questions about cross-platform identity. Can agents from this platform authenticate to other platforms? Can external agents verify the identity and capabilities of agents running here? If the answer is no, you are building identity silos that will constrain your agent architecture. Demand support for emerging standards like OIDC-A, SPIFFE for agent workloads, and A2A Agent Cards for capability attestation.

Plan for cross-organizational scenarios. Even if your current agent deployments are internal, plan your identity architecture as if agents will need to interact across organizational boundaries. Because they will. Supply chain collaboration, partner ecosystems, and customer-facing agent interactions are all on the near-term roadmap for most enterprises. Building identity for internal-only use and then retrofitting federation is the expensive path.

Engage with the standards process. The OpenID Foundation's AI Identity Management Community Group, IETF discussions around Agent Name Service, and the ongoing A2A and MCP protocol development are all actively shaping the standards that will govern agentic identity. Enterprise voices in these processes matter. If you leave the standards to vendors alone, the standards will reflect vendor interests.

Agentic identity is not glamorous. It does not have the excitement of a new AI model launch or the drama of a trillion-dollar market cap correction. But it is the infrastructure that determines whether enterprise AI agent deployments are trustworthy, governable, and scalable, or whether they become the next generation of ungoverned shadow IT, operating at machine speed with human consequences.

The enterprise software industry spent twenty years building identity infrastructure for human users. It cannot afford to spend another twenty building it for agents. The time to start is now, before the retrofit costs become prohibitive and the regulatory pressure makes the decisions for you.

Next in the series: "Governance Beyond Compliance," examining why traditional security and compliance frameworks are insufficient for governing autonomous AI agents, and what a purpose-built agentic governance architecture looks like.

Michael Fauscette

High-tech leader, board member, software industry analyst, author and podcast host. He is a thought leader and published author on emerging trends in business software, AI, generative AI, agentic AI, digital transformation, and customer experience. Michael is a Thinkers360 Top Voice 2023, 2024 and 2025, and Ambassador for Agentic AI, as well as a Top Ten Thought Leader in Agentic AI, Generative AI, AI Infrastructure, AI Ethics, AI Governance, AI Orchestration, CRM, Product Management, and Design.

Michael is the Founder, CEO & Chief Analyst at Arion Research, a global AI and cloud advisory firm; advisor to G2 and 180Ops, Board Chair at LocatorX; and board member and Fractional Chief Strategy Officer at SpotLogic. Formerly Michael was the Chief Research Officer at unicorn startup G2. Prior to G2, Michael led IDC’s worldwide enterprise software application research group for almost ten years. An ex-US Naval Officer, he held executive roles with 9 software companies including Autodesk and PeopleSoft; and 6 technology startups.

Books: “Building the Digital Workforce” - Sept 2025; “The Complete Agentic AI Readiness Assessment” - Dec 2025

Follow me:

@mfauscette.bsky.social

@mfauscette@techhub.social

www.twitter.com/mfauscette

www.linkedin.com/mfauscette

https://arionresearch.com