Over the past few years, much has changed in how applications are built and in what teams expect from their cloud infrastructure. When Akamai acquired Linode, we made a clear commitment to invest in the platform and to evolve it alongside our customers’ needs. That commitment was not just about adding capacity or new regions; it was about making deliberate improvements where they matter most.
As we’ve worked closely with customers, we’ve seen the types of applications and services they’re building change in meaningful ways. Some teams are still running familiar workloads, like game servers or community platforms. Others are small and medium-sized businesses building real, production-grade applications for which performance, consistency, and reliability are no longer optional. These workloads behave differently; they are more sensitive to latency, more demanding of CPU and memory, and far less tolerant of variability.
Our existing plans have been a solid place to start, especially for early-stage projects and cost-conscious workloads. But as many of our customers have grown, we’ve heard that those plans can become limiting. Inconsistent performance, noisy neighbors, and the need to work around variability by targeting specific instances or regions are all signs that infrastructure needs have outgrown the original model.
A new generation of compute infrastructure
As applications evolve, an organization’s infrastructure has to evolve with them. That is the broader context for the changes we are making to Akamai Cloud: we’ve introduced a new generation of compute infrastructure with the launch of dedicated hardware powered by the latest 5th Gen AMD EPYC™ processors.
The new plans give customers predictable performance, transparent pricing, and the flexibility to match a wide variety of workload needs. They are designed to remove friction, deliver consistent performance under load, and give you clearer choices as you scale.
In this blog post, we take a look at how we are reshaping our compute lineup and why these changes matter for the workloads that our customers are running today and building for tomorrow.
The new compute lineup
The Akamai Cloud lineup now includes four compute plan tiers that give customers clear choices for balancing performance and cost.
G8 dedicated plan for resource-intensive workloads
G7 dedicated plan for performance workloads
G6 dedicated plan for production workloads
Shared cost-effective CPU option
G8 dedicated plan for resource-intensive workloads
The G8 dedicated plan includes high-consistency compute powered by 5th Gen AMD EPYC processors with new 1:2 and 1:4 (vCPU-to-memory) virtual machine (VM) shapes and expanded memory options. It’s best for enterprise-grade, latency-sensitive, and resource-heavy applications.
The addition of Compute Optimized (1:2) and General Purpose (1:4) VM shapes provides customers with more compute power per VM, enabling higher throughput and more predictable performance for demanding workloads. Unlike many hyperscaler instances, these shapes eliminate oversubscription; minimize noisy neighbor effects; and deliver stable, low-latency performance with clear pricing. This makes the G8 plan ideal for enterprise applications, real-time workloads, and resource-heavy systems for which consistency matters as much as speed.
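If you want to compare these shapes programmatically, the sketch below queries the public plan-types endpoint of the Linode API v4, which backs Akamai Cloud. This post doesn’t list the exact identifiers under which the new G8 plans appear, so the filtering on the "dedicated" class is an assumption for illustration; verify plan IDs in Cloud Manager or the API before relying on them.

```python
# Minimal sketch: list dedicated plan shapes from the public (unauthenticated)
# Linode API v4 types endpoint and compute each plan's vCPU-to-memory ratio.
# Pagination is ignored for brevity; filtering on the "dedicated" class is an
# assumption about how the new plans are surfaced.
import requests

resp = requests.get("https://api.linode.com/v4/linode/types", timeout=10)
resp.raise_for_status()

for plan in resp.json()["data"]:
    if plan["class"] != "dedicated":
        continue
    vcpus = plan["vcpus"]
    mem_gb = plan["memory"] / 1024  # memory is reported in MB
    ratio = mem_gb / vcpus if vcpus else 0
    print(f'{plan["id"]:<24} {vcpus:>3} vCPU  {mem_gb:>6.0f} GB  '
          f'1:{ratio:.0f} shape  ${plan["price"]["monthly"]}/mo')
```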
G7 dedicated plan for performance workloads
This plan includes high-performance dedicated CPU cores backed by AMD Zen 3 processors with a premium memory configuration, which delivers consistent throughput even under load. It’s best for CPU-intensive or business-critical applications that require reliability, low latency, and stable performance at scale.
G6 dedicated plan for production workloads
G6 includes dedicated CPU cores provisioned on available legacy hardware with no resource contention. It’s ideal for steady production workloads that need predictable performance without premium hardware requirements.
Shared cost-effective CPU option
This plan includes balanced resources powered by shared CPU cores on a mix of available legacy hardware. It’s best for development/testing environments and variable workloads where cost efficiency matters more than guaranteed performance.
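Whichever tier you choose, provisioning works the same way. The sketch below uses the Linode API v4 instance-create endpoint with an existing dedicated plan ID ("g6-dedicated-2") purely for illustration; substitute the identifier of the plan you select (discoverable via the types endpoint shown earlier), and note that the label, region, and environment variables here are assumptions.

```python
# Minimal sketch: create an instance on a dedicated plan via the Linode API v4
# (POST /v4/linode/instances). Requires a personal access token.
import os
import requests

TOKEN = os.environ["LINODE_TOKEN"]  # assumption: token supplied via env var

payload = {
    "label": "prod-app-01",           # hypothetical label
    "region": "us-east",              # pick a region where the plan is available
    "type": "g6-dedicated-2",         # replace with the plan ID you chose
    "image": "linode/ubuntu22.04",
    "root_pass": os.environ["ROOT_PASS"],  # assumption: supplied via env var
}

resp = requests.post(
    "https://api.linode.com/v4/linode/instances",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
instance = resp.json()
print("Created:", instance["id"], instance["status"])
```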
Benefits of the new plans
With these options, Akamai Cloud customers can select the right level of performance for their specific use case while maintaining clear visibility into resource allocation. Bandwidth is also now unbundled, so you only pay for what you use, at the lowest rate in the market.
The new plans are designed to address common needs for modern workloads:
Clarity and control — Transparent, unbundled pricing makes it easier to predict and manage cloud costs
Right-sized performance — Shared, dedicated, and high-memory plans allow you to align compute with application requirements
Reliability you can trust — Predictable capacity and consistent performance across instances ensure that workloads run as expected
Future-ready infrastructure — The latest 5th Gen AMD EPYC processors support everything from general-purpose applications to advanced workloads, including AI
Versatility at scale — Flexible plans support both everyday compute tasks and demanding, performance-intensive applications
Availability
The new compute plans are rolling out across global regions, with capacity expanding regularly. This ensures that workloads can be deployed where they are needed most, with reliable access to modern hardware.
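To see where you can deploy today, you can list regions from the public Linode API v4 regions endpoint, as in the sketch below. The API does not expose plan-generation rollout as a simple flag, so treat this as a starting point and confirm availability of a specific plan in a region in Cloud Manager or at deploy time.

```python
# Minimal sketch: list Akamai Cloud regions from the public (unauthenticated)
# Linode API v4 regions endpoint, printing each region's status and a few of
# its advertised capabilities.
import requests

resp = requests.get("https://api.linode.com/v4/regions", timeout=10)
resp.raise_for_status()

for region in resp.json()["data"]:
    caps = ", ".join(region["capabilities"][:4])  # trim long capability lists
    print(f'{region["id"]:<16} {region["label"]:<28} [{region["status"]}] {caps}')
```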
Use cases
The plans are designed to support a broad set of scenarios. Common examples include:
Enterprise clusters and production workloads — Run business-critical applications with predictable performance and no CPU contention
Media and data processing pipelines — Achieve consistent throughput for transcoding and batch jobs across multiple nodes
Latency-sensitive workloads — Support applications that require consistent and reliable CPU performance, including inference and real-time processing
Get performance, choice, and pricing transparency
The introduction of new Akamai Cloud compute plans powered by 5th Gen AMD EPYC processors gives customers the performance, choice, and transparency needed to scale workloads with confidence.
Whether you use them for development, production, or specialized performance-sensitive applications, the new plans provide a flexible foundation for building and running modern cloud workloads.