← Back to Blog
Domenic DiNatale ·
What AI Actually Changes About Security — and What It Doesn't

What AI Actually Changes About Security — and What It Doesn't

By Domenic DiNatale

Every major technology shift produces a version of the same claim: this time, everything is different. The perimeter dissolved with cloud. Mobile endpoints made traditional controls obsolete. Now AI is described as a fundamental transformation of the security landscape — a new era requiring new thinking, new tools, new frameworks.

Some of that is true. Most of the important parts aren't.

What AI changes is surface area and speed. What it doesn't change is the underlying failure modes — the architectural assumptions that this series has been working through. The organizations that understand this distinction will navigate AI risk clearly. The ones that treat AI as a categorically new problem will repeat every mistake they've already made, at a faster pace.

What This Series Has Argued

It's worth naming the pattern before applying it to AI.

The first four pieces in this series traced the same failure through different framings. Phishing succeeds not because users fail, but because architectures assume they won't. Authentication fails not at the login screen, but in the session that follows it. MFA solved a specific problem and left the larger trust model intact. Human error is stable and predictable — the only variable is whether the architecture absorbs it or amplifies it.

In each case, the same structure: a system built on an optimistic assumption about how an input will behave. A threat that exploits that assumption with minimal technical effort. An incident whose severity is determined not by the sophistication of the attack, but by the architectural decisions made before it arrived.

AI is the next input. The architecture hasn't changed yet.

What AI Actually Changes

Three things are meaningfully different with AI in the picture.

Attack velocity. Phishing campaigns that previously required human operators — crafting messages, managing lure infrastructure, conducting reconnaissance — can now be automated at scale. Spear phishing, which required per-target research and was previously expensive to execute broadly, is now cheap. The attacks that required skill and time to personalize can be generated in bulk. Volume and targeting quality increase simultaneously.

This matters not because velocity creates new vulnerability classes, but because it shortens the time window between an architectural weakness existing and being exploited. A gap that might have survived for a year before a sophisticated attacker found it may now be found in a week.

New trust surfaces. AI applications are being integrated into enterprise environments with access to email, documents, codebases, calendars, and communication platforms. Each of these integrations creates a new surface through which the underlying system's trust model can be exercised. The AI component acts as a new principal — an entity capable of taking actions, making requests, and being influenced by inputs.

Most of these integrations are being deployed with the same architectural assumption that's produced every breach this series has described: the AI component is inside the perimeter, so it's trusted. It has access to the data it needs, and potentially a great deal more. It operates under the user's authority, inheriting their permissions without inheriting their judgment.

Reduced friction for exploitation. Vulnerabilities that previously required specialized knowledge to exploit are now accessible to a broader range of attackers. The skill threshold drops when capable AI systems can reason through attack paths, suggest techniques, and accelerate reconnaissance. This isn't hypothetical — it's already observable in the wild.

None of these changes are small. Combined, they compress timelines, expand surfaces, and broaden the pool of capable adversaries. That's a meaningful shift in the operating environment.

What AI Doesn't Change

It doesn't change the failure modes.

An AI-accelerated phishing campaign still succeeds because the session that follows authentication inherits trust it shouldn't. An AI agent with access to enterprise data still exfiltrates through the same overly-permissive access model that let human attackers do the same. A prompt injection attack — where malicious instructions are embedded in data the AI processes — still works because the system treats input as trustworthy once it clears the perimeter.

The specific inputs are new. The architecture that processes them isn't.

This matters because it determines where to focus. If AI created fundamentally new failure modes, the response would require fundamentally new thinking. But if AI primarily accelerates and expands existing failure modes, the architectural responses that address those failure modes still apply — and organizations that have addressed them are more resilient to AI-driven threats than organizations that haven't.

Least privilege limits what an AI agent can do with stolen context. Short-lived credentials limit the window during which captured tokens are usable. Behavioral monitoring that looks for unusual access patterns catches AI-driven lateral movement by the same mechanism it catches human-driven lateral movement. Segmentation limits blast radius regardless of whether the attacker is a person or a system.

This isn't to say existing defenses are sufficient — they rarely are. It's to say the direction of improvement is the same.

The Category Error

There's a specific mistake being made in how AI security risk is currently framed, and it mirrors a mistake this series has already traced.

In the phishing context, the category error was treating phishing as a user awareness problem. The response — training programs, simulated attacks, awareness campaigns — addressed a symptom while leaving the architectural root cause intact. Organizations that ran the programs felt they'd addressed the problem. They had not.

The equivalent category error with AI is treating AI security as an AI problem. The response — AI-specific tools, AI security frameworks, AI risk assessments — may address some real surface area while leaving the architectural foundation intact. Organizations that deploy these tools will feel they've addressed the AI security problem. Many will have not.

An AI system deployed on top of an architecture with implicit internal trust, broad static permissions, and minimal post-authentication monitoring inherits all of those vulnerabilities. Adding AI-specific scanning on top of that architecture is the same logic as adding more phishing training to an architecture that grants every employee access to every database.

The AI layer is new. The attack surface it opens into is old.

The Shift

The security conversation around AI is presently dominated by two failure modes: organizations that dismiss the risk because AI seems exotic, and organizations that treat it as so new that prior frameworks don't apply.

Both are wrong, in opposite directions.

AI is a meaningful shift in the operating environment — faster, broader, harder to detect. It deserves serious architectural attention. But that attention should start where this series started: with the assumptions baked into the existing architecture, and whether those assumptions can tolerate a more capable, higher-velocity, more widely distributed threat actor.

Most architectures can't. The answer is the same architecture conversation we were already having, now with more urgency and a new class of principal that needs to be designed for.

What AI changes is the stakes and the timeline. What it doesn't change is the work.

The next pieces in this series apply these principles to AI systems specifically — how they're built, what trust assumptions they inherit, and what it actually looks like to design an AI-assisted operation with architecture in mind rather than afterthought.

cybersecurity artificial-intelligence architecture failure-modes