East-West Is the New North-South: Rethink Security for the AI-Driven Data Center

Jan 15, 2026

Written by

Clint Huffaker

Clint Huffaker started his career on the customer side, managing enterprise networking and security before moving into presales and architecture. Those early lessons gave him a deep appreciation for what customers do every day — balance innovation, risk, and business pressure. Today, as Director of Product Marketing for Security at Akamai, Clint leads initiatives around Akamai Guardicore Segmentation and Zero Trust. 

In this blog post, the second in a three-part series, we'll share how perimeter thinking falls short in a high-speed, high-density world. In the first post in the series, we explored the idea of rethinking the relationship between security and performance.

Securing the data center used to be simple, at least on paper: Build a strong perimeter, deploy firewalls at the north-south boundary, and control what enters and exits the network.

That model worked when applications were monolithic, data stayed in one place, and artificial intelligence (AI) was a future ambition rather than an infrastructure reality.

But those days are over. Today, AI-powered data centers, high-performance machine learning workloads, and cloud native architectures generate more internal traffic than ever before. More than 76% of data center traffic now flows east to west, moving between GPUs, endpoints, APIs, datasets, and internal services. It no longer crosses the perimeter.

That shift has exposed serious security risks.

The rise of east-west traffic in AI infrastructure

In modern data centers, traffic patterns have changed significantly. Distributed AI systems rely on real-time data exchange among training nodes, orchestration tools, and storage systems. These connections occur deep inside the network between workloads rather than across networks. 

For example, traffic may go from Server 1 to Server 2, which may sit in the same server rack. That traffic never leaves the rack, so it never traverses the network to reach the firewall (or a legacy segmentation solution).

A single AI model training session can generate terabytes of east-west traffic as datasets move across GPU clusters, container environments, and scalable compute fabrics. The tolerance for latency is extremely low. Traditional perimeter tools weren’t designed for this level of throughput or complexity.
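To make the distinction concrete, here is a minimal sketch of how a flow could be classified as east-west or north-south based on whether both endpoints sit inside the data center. The internal prefixes and IP addresses below are invented for illustration; real environments would draw this data from their own address plan or flow telemetry.

```python
# Illustrative sketch: classify flows as east-west or north-south by
# checking whether both endpoints live inside internal address space.
# The prefixes and addresses below are made-up examples.
import ipaddress

INTERNAL_PREFIXES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
]

def is_internal(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_PREFIXES)

def classify_flow(src: str, dst: str) -> str:
    """East-west if both endpoints are internal; otherwise north-south."""
    return "east-west" if is_internal(src) and is_internal(dst) else "north-south"

# Two GPU nodes in the same rack: a perimeter firewall never sees this flow.
print(classify_flow("10.1.1.10", "10.1.1.11"))    # east-west
# A request arriving from the internet does cross the perimeter.
print(classify_flow("203.0.113.5", "10.1.1.10"))  # north-south
```

The point the sketch makes: by this definition, the GPU-to-GPU flow is invisible to any control positioned only at the perimeter.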

More significantly, most perimeter tools can’t see this traffic at all. As a result, it’s important to ask these questions:

  • Do our current firewalls protect our AI pipelines, or just our front door?

  • Is our cybersecurity strategy built for east-west traffic, or is it still anchored in north-south assumptions?

From lobby security to departmental access: A new metaphor for segmentation

Imagine your data center as a large, high-security office building.

North-south traffic is like visitors entering and exiting the building. They pass through physical security, badge readers, or reception. Access is logged and monitored. This is where traditional firewalls operate, screening what comes in from the outside.

But once inside, visitors can move freely throughout the building. Some enter human resources offices; others head into finance or engineering. Yet not everyone should have access to payroll systems or sensitive information.

That internal movement, which represents workload-to-workload communication, is east-west traffic. And in most data centers, it is barely secured, if at all.

Here is what that means for your AI infrastructure:

Even with endpoint agents, virtual firewalls, or traditional network security measures, internal traffic among services often bypasses deep inspection because of performance trade-offs.

Rethinking the architecture: Secure where the traffic lives

If the majority of your data center traffic flows east to west, your security controls must be positioned accordingly.

That is exactly what the integration of Akamai Guardicore Segmentation with the Aruba CX 10000 smart switch, powered by the AMD Pensando data processing unit, is designed to deliver.

This approach embeds microsegmentation enforcement directly into the data center switch at the top of every rack. As traffic enters the network fabric, it is inspected, validated, and matched to policy in real time. There’s no need to redirect traffic to centralized firewalls, hairpin traffic, or consume resources on software agents.

With this model, you can gain:

  • High-speed, in-fabric enforcement built for both modernized applications and AI workloads

  • Automation of policy creation and lifecycle management

  • End-to-end visibility across the AI infrastructure ecosystem

  • Reduced attack surface without sacrificing performance

It’s a smarter and more scalable way to defend against unauthorized access, cyberattacks, and supply chain vulnerabilities. This model is especially critical in hyperscale and AI-driven environments.
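The kind of per-flow policy check an in-fabric enforcement point applies can be sketched with a simple label-based allow-list and a default-deny fallback. The workload labels, ports, and rules below are hypothetical and for illustration only; this is not Guardicore's policy model or API.

```python
# Hypothetical sketch of label-based microsegmentation policy evaluation:
# each flow is matched against an allow-list of (source label, destination
# label, port) rules, and anything unmatched is denied by default.
# All labels and rules here are invented examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_label: str   # workload label, e.g. "gpu-training"
    dst_label: str
    port: int

# Allow-list: only these workload-to-workload flows are permitted.
POLICY = [
    Rule("gpu-training", "dataset-store", 443),
    Rule("orchestrator", "gpu-training", 8443),
]

def allowed(src_label: str, dst_label: str, port: int) -> bool:
    """Default deny: a flow passes only if an explicit rule matches it."""
    return Rule(src_label, dst_label, port) in POLICY

print(allowed("gpu-training", "dataset-store", 443))  # True: permitted flow
print(allowed("gpu-training", "payroll-db", 5432))    # False: default deny
```

The default-deny stance is the essence of the approach: east-west traffic is blocked unless a policy explicitly permits the labeled source-to-destination pair, which is what shrinks the attack surface without hairpinning traffic to a central firewall.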

More questions that every data center operator should ask

  • How do we monitor and control east-west movement between workloads today?

  • Are our existing security measures optimized for AI, or are they built for yesterday’s data center?

  • Can we detect security incidents across dynamic containers, APIs, and model training environments?

  • What happens to sensitive data once it moves past the perimeter?

The new mandate: Zero Trust inside the fabric

Zero Trust doesn’t stop at access control. It requires continuous visibility and enforcement across every connection, including internal ones.

In the third and final part of this series (coming soon!), we’ll explore how Zero Trust switching enables just that. You’ll see how Akamai Guardicore Segmentation and AMD Pensando together deliver a high-speed, embedded security architecture for AI-ready environments. This approach avoids the latency, overhead, and visibility gaps of traditional firewalls.

Need help securing east-west traffic in your AI infrastructure?

Akamai is working closely with security teams, network architects, and service providers to rethink segmentation for real-world AI systems. If you’re building next-generation AI data centers, managing large-scale machine learning workloads, or addressing risk management and threat detection across critical infrastructure, we’d love to connect.

Let’s define what secure AI should really look like — starting from the inside out.
