Zero Trust Doesn't Change When the Actor Is a Machine

By Domenic DiNatale

Zero trust as a security model emerged from a specific observation: perimeter-based security rests on an assumption that if you are inside the network, you can be trusted. That assumption was always wrong. What changed over time was how expensively wrong it proved to be.

The answer was to remove the assumption. Don't trust based on location. Verify continuously. Grant least privilege. Treat every access request as if it originates from an untrusted source, regardless of where it comes from.

This is not a technology product. It is a design principle — a structural response to a structural problem. And it applies to AI agents with the same force it applies to human users. Not because the threat model is identical, but because the underlying architectural mistake is the same.

Where We Are

The previous piece in this series described a specific gap: AI agents are being deployed under identity models that don't accurately represent the authority being exercised. Actions taken by agents under delegated credentials appear in audit logs as user actions. Permissions granted to agents are scoped by convenience rather than function. Accountability chains are left unresolved.

This piece addresses a related but distinct problem: even when the identity layer is handled correctly, the access architecture surrounding AI agents is frequently reproducing the perimeter-based assumptions that zero trust was designed to replace.

The pattern looks like this. An AI agent is integrated with a production system. The integration is established once — a token issued, an API key provisioned, a credential stored. From that point forward, the agent is "inside" — trusted by virtue of having authenticated once, able to act across the scope of its granted permissions without re-verification. The access relationship is static. The trust is assumed rather than continuously established.

This is perimeter thinking applied to a new layer of infrastructure.

What Zero Trust Actually Requires

The core principles of zero trust have been formalized in various frameworks, but they reduce to a consistent set of structural requirements.

Never trust, always verify. Access should not be granted based on a prior authentication event. Each request to access a resource should be evaluated at the time of the request, against the current state of the requesting identity and the current risk context. A token issued yesterday does not establish legitimacy today.

Assume breach. The architecture should be designed with the assumption that some part of the system is already compromised. This means segmenting access to limit lateral movement, logging everything to support forensic reconstruction, and treating anomalous behavior as a signal rather than an error.

Verify explicitly. Access decisions should use all available signals — identity, device posture, location, behavioral pattern, time, sensitivity of the resource — rather than relying on a single factor or a single event.

Use least privilege. Access should be scoped to the minimum required for the defined function. Permissions should be time-bound where possible, reviewed regularly, and revoked when the function they support is no longer active.

None of these principles reference humans. They describe structural properties of the access architecture. They apply to any principal requesting access to any resource — and a principal that is an AI agent presents no principled exception.

Where AI Deployments Are Getting This Wrong

The patterns that violate zero trust in AI deployments are, with some variation, the same patterns that violated it in traditional infrastructure. The actors have changed. The mistakes have not.

Static, long-lived credentials. An AI agent provisioned with a long-lived API key or an OAuth token with no expiry is an instance of the same problem as a service account with a password that hasn't been rotated in three years. The credential is a standing invitation. If it's compromised — through a breach of the system storing it, through a misconfiguration that exposes it in a log, through a prompt injection attack that exfiltrates it — that standing invitation is now someone else's. The agent's persistent access has become the attacker's persistent access.

The zero trust response is well-established: short-lived, rotated credentials. Tokens that expire. Access that must be re-established rather than assumed. This is more friction. It is also the architecture that limits blast radius when a credential is compromised.

Overly broad permission scope. An agent integrated with a business application typically has access to everything that integration supports, not to the subset it actually needs. A calendar integration often includes contacts. An email integration often includes file storage. A CRM integration often includes billing records. Nobody explicitly granted access to those adjacent resources — they came along with the integration pattern.

Least privilege at the agent layer means scoping permissions to the function being served, not the integration being used. This requires knowing what the function actually needs — which requires designing the function before provisioning the access, rather than provisioning maximum access and building around it.
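One way to make that design step explicit is to compute the grant as a diff between what the function needs and what the integration bundles. The scope names below follow common OAuth conventions but are hypothetical; the point is that the granted set is derived from the function, never from the integration's defaults.

```python
# Everything a hypothetical workspace integration *offers*.
INTEGRATION_DEFAULT_SCOPES = {
    "calendar.read", "calendar.write", "contacts.read",
    "mail.read", "mail.send", "files.read",
}

def scopes_for_function(required: set[str], offered: set[str]) -> set[str]:
    missing = required - offered
    if missing:
        # The integration cannot serve this function; fail loudly rather
        # than provisioning a broader integration to paper over the gap.
        raise ValueError(f"integration cannot satisfy: {sorted(missing)}")
    # Grant exactly what the function needs, nothing the bundle drags in.
    return required

# A scheduling agent only needs to read and write calendar events.
granted = scopes_for_function({"calendar.read", "calendar.write"},
                              INTEGRATION_DEFAULT_SCOPES)
```

The adjacent resources the article describes (contacts, mail, files) never enter the grant because they were never in the function's requirement set.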

No continuous verification. Once authenticated, AI agents typically operate without re-verification until the session expires or the credential is revoked. The access decision made at provisioning time is the access decision that governs every subsequent action. There is no mechanism to detect that the agent is now acting outside its typical behavioral envelope, or that the system it's accessing has changed its sensitivity profile, or that circumstances have changed in ways that would have altered the original access decision.

Continuous verification at the AI layer means monitoring what the agent is actually doing, not just what it's authorized to do. Behavioral baselines. Anomaly detection on agent actions. Alerting when usage patterns diverge from what the function was designed to produce. This is operational overhead that most AI deployments are not currently investing in.
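A deliberately minimal sketch of a behavioral baseline, assuming agent actions are already logged as typed events. The threshold and action names are illustrative; real deployments would use richer features, but the shape is the same: compare the current window against normal operation and flag divergence.

```python
from collections import Counter

def build_baseline(history: list[str]) -> Counter:
    """Count how often each action type appeared during normal operation."""
    return Counter(history)

def anomalies(baseline: Counter, window: list[str], ratio: float = 3.0) -> set[str]:
    """Flag actions never seen in the baseline, or far above their usual share."""
    base_total = sum(baseline.values())
    flagged = set()
    for action, count in Counter(window).items():
        base = baseline.get(action, 0)
        if base == 0:
            flagged.add(action)  # action type outside the behavioral envelope
        elif (count / len(window)) > ratio * (base / base_total):
            flagged.add(action)  # far above its normal share of activity
    return flagged
```

An agent that normally reads records and suddenly starts exporting them would trip the first branch; one that merely reads far more often than usual would trip the second.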

Trust inherited from the integration rather than established by the agent. This is the version of perimeter thinking most specific to AI deployments. When an AI agent operates within a trusted infrastructure — your own cloud environment, your own internal tools — there is a tendency to treat that location as establishing trust. The agent is inside your systems, so it can be trusted with access to those systems. The perimeter has just moved inward.

Zero trust does not accept location as a trust signal. The agent inside your infrastructure is not trusted because it is inside your infrastructure. It is trusted — or not — based on continuous verification of its identity, the legitimacy of its current request, and the behavioral context of that request. The architecture doesn't change because the agent has already crossed what used to be the perimeter.

What This Looks Like in Practice

The zero trust framework does not require building entirely new infrastructure. It requires applying existing principles consistently to a new category of principal.

Scope credentials to the function. Before provisioning agent access, define what the agent needs to do. Then provision access to exactly that. Not the access the integration supports. Not the access that might be useful later. The access the current function requires.

Use short-lived tokens where possible. Many APIs support short-lived token patterns. Many AI framework integrations do not use them, defaulting to persistent credentials for simplicity. Choosing the higher-friction pattern, short-lived tokens over persistent credentials, is a deliberate architectural decision that pays off when something goes wrong.

Log agent actions separately. As described in the previous piece, agent actions should be distinguishable from human actions in audit logs. This is not only an identity issue — it is a zero trust requirement. You cannot apply behavioral baselines to agent activity if you cannot separate agent activity from human activity in your telemetry.

Build in approval gates for high-stakes actions. Zero trust's "verify explicitly" principle, applied to irreversible or high-consequence operations, means requiring human verification before the action executes. This is not a workaround for insufficient agent capability. It is an architectural decision to place a verification event at the boundary that separates reversible from irreversible.
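The gate itself can be small. This sketch assumes a hand-maintained set of irreversible action names, which is itself a design artifact: deciding what belongs in that set is the architectural work the paragraph describes.

```python
# Illustrative: which operations require human sign-off before execution.
IRREVERSIBLE = {"delete_record", "send_payment"}

class ApprovalGate:
    """Queues irreversible actions for human approval; runs the rest directly."""

    def __init__(self):
        self.pending: list[tuple[str, dict]] = []

    def submit(self, action: str, payload: dict) -> str:
        if action in IRREVERSIBLE:
            # Verification event at the reversible/irreversible boundary.
            self.pending.append((action, payload))
            return "pending_approval"
        return self._execute(action, payload)

    def approve(self, index: int = 0) -> str:
        action, payload = self.pending.pop(index)
        return self._execute(action, payload)

    def _execute(self, action: str, payload: dict) -> str:
        # Stand-in for the real side effect.
        return f"executed:{action}"
```

Reversible reads pass straight through; the agent's workflow only blocks at the boundary where a mistake cannot be undone.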

Review access on a schedule. Access granted to AI agents should be reviewed on the same cycle as access granted to human users and service accounts. Functions change. Agents evolve. Access that was appropriate six months ago may be excessive today.
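That review cycle is easy to mechanize once grants carry a last-reviewed date. The 90-day cycle and grant records here are illustrative, not a recommendation for any particular cadence.

```python
from datetime import date, timedelta

REVIEW_CYCLE = timedelta(days=90)  # illustrative cadence

def grants_due_for_review(grants: list[tuple[str, str, date]],
                          today: date) -> list[tuple[str, str]]:
    """grants: (principal, scope, last_reviewed) tuples.
    Returns the (principal, scope) pairs past their review cycle."""
    return [(principal, scope) for principal, scope, reviewed in grants
            if today - reviewed >= REVIEW_CYCLE]
```

Wiring the output into a ticketing queue turns "access that was appropriate six months ago" from an audit finding into a routine work item.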

The Continuity of the Problem

This series has traced a single pattern through multiple instantiations: architectural assumptions, left intact, produce failures regardless of what technology is placed on top of them. The perimeter assumption produced failures in network security. The authentication-as-gate assumption produced failures in credential security. The prompt-as-trusted-input assumption produced failures in AI application security.

Zero trust is the accumulated response to the perimeter assumption. It works because it refuses the assumption rather than defending it. The framework is complete enough that applying it to AI agents doesn't require significant extension — it requires consistent application.

The barrier is not technical. The technical mechanisms are available: scoped OAuth flows, short-lived tokens, behavioral monitoring, approval workflows. The barrier is the same barrier it has always been: the decision to apply architectural discipline before the failure occurs rather than after.

That decision is available right now, for every AI agent being designed and deployed today.

The cost of deferring it is also familiar: it gets paid later, when the assumptions have hardened and the blast radius is larger.

This post is part of a series on security as an architectural problem. Read the full series on the Intellitech blog.
