
Beyond the Buzz: Why Zero Trust Matters More in the Age of AI

Jim Black

Dec 16, 2025


Jim Black is a Senior Product Marketing Manager in Akamai's Enterprise Security Group. He has spent his entire career in technology, with roles in manufacturing, customer support, business development, product management, public relations, and marketing. 


Remember when Zero Trust was the hottest topic in cybersecurity? Conference keynotes, vendor pitches, and LinkedIn posts all proclaimed it as the future of security architecture. Then artificial intelligence burst onto the scene, and suddenly everyone pivoted. Zero Trust became yesterday's cybersecurity conversation, replaced by breathless discussions about AI-powered threats, machine learning detection, and autonomous security agents.

But here's the uncomfortable truth: While we've been chasing the shiny object that is AI, the fundamental problems that Zero Trust was designed to solve have only gotten worse. In fact, the rise of AI hasn't made Zero Trust architecture obsolete; it's made it absolutely critical.

The Zero Trust basics still apply

Let's revisit the fundamentals. Zero Trust is a term coined by John Kindervag at Forrester Research in 2010 and later championed by advocates like Dr. Chase Cunningham. It rests on a few core principles that fly in the face of traditional perimeter security:

  • Never trust; always verify: Don't assume that because a user or device is inside your network it’s safe. Every access request must be authenticated and authorized.

  • Assume breach: Operate under the assumption that attackers are already in your environment. Design your security to limit lateral movement and contain damage.

  • Least-privilege access: Users and systems should only have access to exactly what they need to do their jobs, nothing more.

  • Continuous validation: Security isn't a one-time checkpoint. It's an ongoing process of verification, monitoring, and validation.
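
Taken together, these four principles amount to a default-deny access decision. Here's a minimal sketch in Python; the identities, resource names, and policy shape are illustrative, not any particular product's API:

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    subject: str          # user, device, or AI agent making the request
    resource: str         # what it is trying to reach
    authenticated: bool   # was the identity verified for this specific request?

@dataclass
class ZeroTrustPolicy:
    allowlist: dict = field(default_factory=dict)  # least privilege: explicit grants only
    audit_log: list = field(default_factory=list)  # continuous validation: record everything

    def authorize(self, req: AccessRequest) -> bool:
        # never trust, always verify: network location plays no part in the decision
        allowed = req.authenticated and req.resource in self.allowlist.get(req.subject, set())
        self.audit_log.append((req.subject, req.resource, allowed))
        return allowed

policy = ZeroTrustPolicy(allowlist={"ai-writer": {"content-library"}})
print(policy.authorize(AccessRequest("ai-writer", "content-library", True)))   # True
print(policy.authorize(AccessRequest("ai-writer", "customer-db", True)))       # False (least privilege)
print(policy.authorize(AccessRequest("ai-writer", "content-library", False)))  # False (unverified)
```

Note that "inside the network" never appears in the decision; only a verified identity plus an explicit grant gets through, and every decision is logged.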

These were never nice-to-have principles. Together, they represented a fundamental shift in how we think about trust in digital environments. And in the age of AI, they're more relevant than ever.

How AI amplifies the need for Zero Trust

AI hasn't replaced the need for Zero Trust. It's actually exposed why we need Zero Trust so desperately.

Consider the modern attack landscape: AI-powered attacks are more sophisticated, faster, and harder to detect than anything we've seen before. 

  • Phishing emails are now often grammatically perfect and contextually aware. 

  • Deepfakes can impersonate executives on video calls. 

  • Automated reconnaissance tools can map your network infrastructure in minutes. 

Traditional perimeter defenses, the old "castle-and-moat" approach, are laughably inadequate against these threats.

But there's another dimension that we don't talk about enough: AI as a security liability within our own organizations.

Every company is rushing to deploy AI tools, such as ChatGPT, Microsoft Copilot, custom large language models (LLMs), and autonomous agents. These tools are powerful, but they're also voraciously hungry for data. An AI assistant needs access to documents, databases, communication channels, and code repositories to be useful. Without proper controls, you're essentially giving a black box system the keys to your kingdom.

Think about what happens when employees start using AI tools without governance. They paste sensitive customer data into public LLMs. They grant AI agents broad access to internal systems. They bypass security controls because AI tools promise productivity gains. This is shadow AI, and it's the shadow IT problem on steroids.

And here's where it gets really concerning: AI can be incredibly convincing. When an attacker uses AI to craft a social engineering attack or impersonate a trusted user, how do you know what's real? Identity verification becomes paramount, not optional.

The dangerous gap: What happens without Zero Trust

Let me paint a picture of what could go wrong when organizations deploy AI without Zero Trust foundations.

Scenario one: Not applying least-privilege access 

Your marketing team adopts an AI writing assistant. To make it useful, the team connects the assistant to your customer database, past campaign performance data, and internal strategy documents. The AI tool has vulnerabilities. An attacker exploits them. Suddenly, your entire customer list and competitive strategy are exposed. This happened because you trusted the tool implicitly instead of applying least-privilege access and continuous monitoring.

Scenario two: Inadvertently exposing intellectual property

An employee uses a popular AI coding assistant. It has access to your entire codebase to provide helpful suggestions. But the AI provider's data retention policies mean your proprietary code is now part of their training data. Your intellectual property leaks not through malice, but through blind trust in a third-party service.

Scenario three: Lack of proper segmentation and access controls

You deploy an autonomous AI agent to handle customer service inquiries. It needs database access to look up account information. Without proper segmentation and access controls, a prompt injection attack tricks the agent into exposing sensitive data or executing unauthorized commands. The AI agent becomes the attack vector.

These are not hypothetical scenarios. They're happening right now, across industries, because organizations are bolting AI onto insecure foundations.

Building AI security on a Zero Trust foundation

The good news? Zero Trust principles map beautifully onto AI security challenges.

Start with identity and access management

Every AI tool, agent, and API should have a verified identity. Every request for data or system access should be authenticated and authorized. Just because an AI agent is "yours" doesn't mean it should have blanket access to everything. Apply the principle of least privilege rigorously. Your AI writing assistant needs access to approved content libraries, not your entire file system.
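One common way to enforce this is with short-lived, narrowly scoped credentials per agent. The sketch below is illustrative, with hypothetical scope names, rather than any specific identity product's API:

```python
import secrets
import time

def issue_agent_token(agent_id: str, scopes: list, ttl_seconds: int = 300) -> dict:
    """Mint a short-lived credential that grants only the named scopes."""
    return {
        "agent": agent_id,
        "scopes": set(scopes),
        "expires": time.time() + ttl_seconds,
        "value": secrets.token_hex(16),
    }

def is_permitted(token: dict, scope: str) -> bool:
    # authenticate (token still valid) and authorize (scope explicitly granted)
    return time.time() < token["expires"] and scope in token["scopes"]

token = issue_agent_token("writing-assistant", ["read:approved-content"])
print(is_permitted(token, "read:approved-content"))  # True
print(is_permitted(token, "read:file-system"))       # False: never granted
```

The short TTL matters as much as the scope list: even a leaked credential expires in minutes, and every renewal is another verification point.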

Implement microsegmentation for AI workloads

Isolate AI processing environments from sensitive data stores. Use strict network controls to limit what AI systems can reach. If an AI tool is compromised, containment strategies should limit the blast radius.
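In practice, segmentation is enforced in the network layer, but the policy logic reduces to a default-deny egress map. A toy sketch, with made-up internal hostnames:

```python
# default-deny egress map: each AI workload segment lists the only hosts it may reach
SEGMENT_EGRESS = {
    "ai-inference": {"vector-store.internal", "model-registry.internal"},
}

def egress_allowed(segment: str, destination: str) -> bool:
    # unknown segments and unlisted destinations are both denied
    return destination in SEGMENT_EGRESS.get(segment, set())

print(egress_allowed("ai-inference", "model-registry.internal"))    # True
print(egress_allowed("ai-inference", "customer-db.internal"))       # False: outside the segment
print(egress_allowed("unregistered-tool", "customer-db.internal"))  # False: no policy, no access
```

If the inference workload is compromised, the customer database simply isn't reachable from its segment; that's the blast radius doing its job.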

Monitor everything

Log every API call, every data access, every action taken by AI systems. Apply behavioral analysis to detect anomalies. Is your AI agent suddenly accessing data it's never touched before? That's a red flag. Continuous validation means you're not just watching the perimeter; you're also watching what's happening inside your environment in real time.
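A simple form of that behavioral check is baselining which resources each agent normally touches and flagging first-time accesses. A minimal sketch, with hypothetical agent and database names:

```python
from collections import defaultdict

class AccessMonitor:
    """Log every access and flag resources an agent has never touched before."""

    def __init__(self, baseline=None):
        self.baseline = defaultdict(set, baseline or {})
        self.log = []

    def record(self, agent: str, resource: str) -> bool:
        novel = resource not in self.baseline[agent]  # deviation from observed behavior
        self.log.append((agent, resource, novel))     # log everything, not just anomalies
        self.baseline[agent].add(resource)
        return novel

monitor = AccessMonitor(baseline={"support-agent": {"accounts-db"}})
print(monitor.record("support-agent", "accounts-db"))  # False: matches its baseline
print(monitor.record("support-agent", "payroll-db"))   # True: first-ever access, raise an alert
```

Real deployments layer richer signals on top (time of day, volume, sequence of calls), but even this crude novelty check catches the "agent suddenly reading payroll" case.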

Apply Zero Trust principles to the AI supply chain

This is critical. Vet your AI vendors. Understand where your data goes, how it's processed, and who has access to it. Don't take vendor security claims at face value; verify them through audits, security assessments, and contractual guarantees.

The AI path forward

Here's the reality: AI isn't going anywhere. It's going to become more embedded in everything we do: more powerful, more autonomous, and more central to business operations. And AI-driven threats will continue to become ever more sophisticated. That's equally exciting and terrifying.

But AI security doesn't require reinventing the wheel. The principles of Zero Trust — never trust, always verify, assume breach, apply least-privilege access, and continuously validate — provide exactly the framework we need to deploy AI safely.

The organizations that will succeed in the AI era aren't the ones that chase every new AI capability without regard for security. They're the ones that build AI on top of solid Zero Trust foundations. They're the ones that understand that buzzwords come and go, but fundamental security principles endure.

Before you deploy that next AI tool, ask yourself these questions: 

  • Have we implemented Zero Trust? 

  • Are we continuously validating access? 

  • Are we operating under the assumption that something will go wrong? 

If the answer to any of these questions is no, then you're building on a weak foundation.

Choose wisely

Zero Trust security may not be the buzziest term in cybersecurity anymore. But it's the foundation that will determine whether your AI initiatives become transformative successes or catastrophic security failures.

The choice is yours.

