In this blog post, which is the final part of a three-part series, we discuss redefining control for the high-velocity, AI-driven data center.
By now, the takeaway from this series should feel both clear and urgent.
In the first post of this series, we challenged the long-standing belief that security must be sacrificed for performance. In the second post, we confronted the reality that most modern data center traffic flows east-west, never crossing the perimeter. This final post merges these ideas and reframes the conversation entirely.
Security frameworks have evolved
Artificial intelligence (AI) has fundamentally changed how applications behave, how data flows, and how risk manifests. AI security is no longer a single control problem; it’s an architectural one.
Today’s AI workloads are distributed across cloud environments, Kubernetes clusters, APIs, and containerized services. AI models consume massive datasets, operate at machine speed, and continuously generate outputs that feed downstream AI applications, business workflows, and real-world decisions.
In that world, no single security control (firewalls included) can do everything.
That isn’t a failure. It’s proof that security frameworks have evolved past single solutions.
Understand the growing misalignment between AI and security architecture
Most organizations haven’t ignored AI security. Instead, they’ve tried to secure AI systems using security controls that were designed for a very different era of computing.
Traditional firewalls remain essential for north-south protection. They play a critical role in cloud security, data security, authentication, and API protection by inspecting inbound requests, enforcing security controls, and protecting users from malicious or unsafe AI outputs.
Purpose-built solutions, such as Akamai Firewall for AI, add an essential layer of protection against AI-specific security risks, including prompt injection, data leaks, data poisoning, adversarial attacks, and misuse of generative AI (GenAI).
But firewalls, AI-specific or otherwise, were never designed to fully secure what happens inside AI environments once traffic is already trusted and flowing east-west.
Inside modern AI systems, reality looks very different.
AI workloads communicate constantly with other AI services.
Kubernetes pods scale dynamically.
Training data, runtime processes, and inference pipelines share infrastructure.
APIs exchange sensitive information in real time.
Cloud native and open source dependencies change continuously.
Automation accelerates everything.
When internal visibility is limited and segmentation is coarse, security teams are forced into uncomfortable trade-offs. Permissions become broader than intended. Access controls loosen, and validation gives way to assumed trust. Over time, those decisions expand the attack surface and weaken the overall AI security posture.
Where AI breaches actually escalate
Most AI-related security incidents don’t begin with catastrophic failure. They begin with something small and familiar, such as:
An exposed API
An overpermissive workload
A compromised endpoint
A poisoned dataset
A misconfigured cloud service
The real damage happens after initial access, when nothing prevents lateral movement.
In AI environments without microsegmentation, attackers can move freely between:
AI models, large language models (LLMs), and GenAI services
Training data, datasets, and sensitive information
Shared cloud services, Kubernetes dependencies, and data pipelines
Attackers can also move easily into downstream applications that implicitly trust AI outputs.
Without microsegmentation, ransomware can spread through AI workloads, data exposure can turn into data leaks, and intellectual property can exit the organization. Firewalls at the edge don't fail in these scenarios; they simply aren't positioned to stop what's happening inside.
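To make the least-privilege idea concrete, the sketch below expresses it with native Kubernetes NetworkPolicy: deny all east-west ingress in a namespace by default, then allow only one named path. The namespace, labels, and port here are hypothetical, chosen for illustration; Akamai Guardicore Segmentation expresses equivalent policies in its own model, with identity and context beyond pod labels.

```yaml
# Hypothetical example: default-deny east-west traffic in an
# "ai-inference" namespace, then allow one explicit path.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: ai-inference
spec:
  podSelector: {}        # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all ingress is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-gateway-to-model
  namespace: ai-inference
spec:
  podSelector:
    matchLabels:
      app: model-server        # assumed label on the inference pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-gateway # assumed label on the gateway workload
      ports:
        - protocol: TCP
          port: 8080           # assumed model-serving port
```

Because the rules select on workload labels rather than IP addresses, they keep applying as pods scale and reschedule, which is the property that makes this class of control viable for dynamic AI environments.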
AI security requires multiple planes of control
AI security must be enforced where AI risk appears, not where it’s easiest to deploy tools.
That means aligning security controls across the entire AI lifecycle. At the edge and API layer, with solutions such as web application and API protection (WAAP) and AI guardrails, security must inspect prompts, outputs, and AI interactions in real time. Inside the data center and cloud fabric, security must control how AI workloads, AI services, and machine-learning systems communicate with one another.
This is where microsegmentation and Zero Trust switching become non-negotiable.
Why microsegmentation and Zero Trust switching can’t wait
AI moves at fabric speed. Internal AI traffic cannot be hairpinned through centralized inspection points without breaking performance, compute efficiency, and real-time workflows. Security controls must live directly in the path of east-west communication.
With Akamai Guardicore Segmentation integrated into HPE Aruba CX 10000 Smart Switches, powered by AMD Pensando DPUs, policy enforcement moves into the data center fabric itself. Instead of relying on static IP-based rules, microsegmentation enforces identity-aware, context-rich access controls at workload granularity. Policies follow AI workloads, not infrastructure.
This approach fundamentally changes AI risk management. Lateral movement is stopped by default. Least-privilege access is enforced continuously. Attack vectors shrink instead of expand. And security teams gain real-time visibility into AI systems, AI data, and AI workflows — without sacrificing performance.
Zero Trust switching secures how AI systems interact internally, which is precisely where modern breaches escalate.
Alignment: A unified AI security architecture
The strongest AI security strategies don’t choose between controls. They align them.
Akamai Firewall for AI secures both the inputs to and the outputs from AI applications. Akamai Guardicore Segmentation secures east-west workload communication across cloud native and containerized environments. Zero Trust switching with HPE Aruba and AMD Pensando enforces those policies at fabric speed, without adding latency.
Together, they deliver a resilient security fabric across the entire AI lifecycle — from prompt to model, from workload to data, and from runtime to real-world impact.
That’s not redundancy; that’s resilience.
The urgency is real
AI environments will only become faster, more autonomous, and more interconnected. Attackers already understand this. Accordingly, they’re targeting internal AI workflows, data pipelines, and permissions — not just perimeter defenses.
Firewalls are foundational to protecting AI apps, and AI-specific firewalls provide purpose-built protection against AI and LLM risks.
But microsegmentation and Zero Trust switching are now critical to secure AI deployments and enterprise ecosystems adopting AI. Waiting doesn’t reduce risk. It compounds it.
Building trust in an AI-driven world
AI security is not about reacting to the latest news cycle or jumping on buzzwords. It is about establishing real, measurable confidence. That means protecting sensitive data, controlling access, properly isolating AI workloads, and ensuring that systems behave as expected in production environments.
Benefit from an integrated approach
If you are rethinking how to secure AI workloads, cloud environments, Kubernetes platforms, or GenAI systems, consider that Akamai offers a uniquely integrated approach. We serve as a strategic partner to customers around the world, helping them power and protect life online.
AI isn’t slowing down. Your security architecture shouldn’t either.
Check out the following resources
Read the Firewall for AI product brief for more information on securing AI interactions.
Read the solution brief on securing AI workloads with Akamai Guardicore Segmentation and Zero Trust security.
Learn about Akamai App & API Protector, which protects web applications and APIs from zero-day vulnerabilities, CVEs, and more.