The market spent most of 2024 and 2025 deciding that the artificial intelligence trade was a story about a handful of names — Nvidia, Microsoft, and the hyperscaler triumvirate of AWS, Google Cloud, and Azure — and that everyone else in cloud infrastructure was a price-taker waiting to be commoditized. Friday morning, Akamai Technologies forced investors to widen the lens. Shares of the Cambridge, Massachusetts-based content delivery and cybersecurity giant jumped 16% just after the opening bell on news of a $1.8 billion, seven-year cloud infrastructure services deal with an unnamed “leading frontier model provider” and first-quarter earnings that landed in line with analyst expectations. According to CNBC’s coverage of the rally, Akamai’s stock is now up 37% over the past 12 months — a return that puts the company in a different conversation than it has been in for most of the past decade.
The deal matters for reasons that go beyond a single quarter or a single contract. It is the clearest signal yet that the frontier AI model providers — the OpenAIs, Anthropics, and a small handful of others training the largest foundation models on the planet — are willing to write nine- and ten-figure checks to infrastructure providers outside the hyperscaler club, provided the alternative gives them something the hyperscalers cannot match. In Akamai’s case, that something is a globally distributed edge network that pushes inference workloads physically closer to end users, cutting latency, reducing bandwidth costs, and making real-time AI applications feasible in places where round-tripping every query to a centralized data center would be a non-starter.
The $1.8 Billion Number Is the Validation
CEO Tom Leighton, the MIT applied mathematics professor who co-founded Akamai in 1998, framed the announcement on CNBC’s Squawk Box as the validation Akamai investors have been waiting for. “I think we’ve been undervalued for a while, and investors have been looking for some real validation that our different approach is going to pay off, and now we’re getting that validation,” Leighton told the network. He added that the company has a “very strong pipeline of major enterprise customers, including some that have very large cloud needs,” and said Akamai will be “in a great position to enable and secure the new AI economy.”
The deal terms break down to roughly $257 million per year in committed revenue, a figure that lands neatly into Akamai’s cloud infrastructure services line — currently the smallest but fastest growing of its three core businesses. For a company that posted total first-quarter revenue just over $1 billion, an incremental quarter-billion-per-year contract is meaningful in a way that hyperscaler announcements never quite are. AWS could lose a $1.8 billion contract on Tuesday and replace it on Wednesday. For Akamai, this is the contract that changes the multiple.
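As a rough sanity check on that run-rate, and on how it compares with the size of the cloud business Akamai already has, here is a back-of-envelope sketch. The straight-line split across seven years is a simplifying assumption; real contracts rarely recognize revenue evenly.

```python
# Back-of-envelope math on the contract, using figures from the announcement
# and Q1 results. Assumes straight-line recognition, which actual ramps rarely follow.

contract_value = 1.8e9   # total committed value, dollars
contract_years = 7

annual_run_rate = contract_value / contract_years
print(f"Annual run-rate: ${annual_run_rate / 1e6:.0f}M")  # ~ $257M per year

# Q1 cloud infrastructure services revenue was $95M, i.e. ~$380M annualized.
q1_cloud_revenue = 95e6
annualized_cloud_revenue = q1_cloud_revenue * 4
uplift = annual_run_rate / annualized_cloud_revenue
print(f"Uplift vs. current cloud run-rate: {uplift:.0%}")  # ~ 68% on top of today's base
```

Even on that crude math, the committed spend is roughly two-thirds of the cloud segment’s current annualized run-rate, which is why a single contract can move the multiple.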
Wall Street’s repricing of the stock confirms the read. Akamai had been trading as a low-multiple cybersecurity and content delivery business with an interesting but unproven cloud bolt-on. The 16% pop signals that investors are now willing to underwrite the cloud infrastructure piece on hyperscaler-adjacent terms, even though the absolute revenue base remains modest.
The Q1 Numbers, Decoded
Akamai’s first-quarter earnings tell a coherent story once you stop looking at the headline number and start looking at the mix shift underneath. Total revenue rose 6% year-on-year to over $1 billion, which is not the kind of growth rate that gets technology investors excited. The line items, however, are doing different things on the same ledger.
Cloud infrastructure services revenue jumped 40% year-on-year to $95 million. That is the line that the $1.8 billion deal will accelerate further, and it is also the line with the highest forward growth potential because it is selling into a structurally expanding category — AI training and inference compute — rather than a mature one. At a 40% growth rate, the segment is comfortably outpacing both the broader public cloud market and the secular growth rate of either content delivery or perimeter security.
Cybersecurity revenue rose 11% to $590 million, which means the largest single business at Akamai is still pulling its weight in a market environment that has favored security spending almost without interruption since 2023. Eleven percent organic growth on a segment that already exceeds half a billion dollars per quarter is a credible result and supports the thesis that perimeter security and bot management remain durable enterprise spending priorities even as IT budgets tighten elsewhere.
Delivery and other cloud applications revenue fell 7% year-on-year to $389 million. That is the legacy content delivery network business, and the decline is not surprising. CDN pricing has been compressing for nearly a decade as bandwidth costs have fallen and competition from Cloudflare, Fastly, and the hyperscalers’ own delivery networks has intensified. Akamai’s challenge has been to manage that decline while reallocating capital and engineering attention into security and cloud infrastructure, and the quarter’s numbers suggest that pivot is working.
For the second quarter, Akamai guided to revenue between $1.08 billion and $1.1 billion and adjusted net income per share between $1.45 and $1.65. That guidance does not yet bake in any meaningful revenue from the new $1.8 billion contract, which means the next several quarters carry upside surprise potential as the new commitment ramps.
What “Edge Inference” Actually Means
The technical case for Akamai’s win is worth slowing down on. Frontier AI models — the largest foundation models being trained today — produce inference workloads that are, by some measures, even harder to serve efficiently than they were to train. Training happens once, in centralized GPU clusters, on a schedule the model lab controls. Inference happens millions of times per second, in response to user queries, across every geography the application serves. The economics of inference are dominated by latency, bandwidth, and proximity. Pushing every user query to a centralized data center in Northern Virginia or Oregon is a fine architecture for low-traffic applications. It is a terrible architecture for real-time agents, voice interfaces, multimodal assistants, and any AI product that needs to feel instant.
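The latency piece of that argument is mostly physics. Here is a quick back-of-envelope comparison, using illustrative distances and the rough two-thirds-of-light-speed propagation of signals in optical fiber; real paths add routing, queuing, and model compute time on top of the propagation floor.

```python
# Rough propagation-delay comparison: a user in Frankfurt querying a model hosted
# in Northern Virginia vs. an edge node in the same metro area.
# Distances are approximate great-circle figures; light in fiber covers roughly 200 km per ms.

FIBER_SPEED_KM_PER_MS = 200  # ~200,000 km/s, about two-thirds the speed of light in vacuum

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay only; ignores routing, queuing, and model compute."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

print(f"Frankfurt -> N. Virginia: ~{round_trip_ms(6500):.0f} ms per round trip")   # ~65 ms
print(f"Frankfurt -> local edge node: ~{round_trip_ms(50):.1f} ms per round trip")  # ~0.5 ms
```

Multiply that gap across every turn of a voice or agentic exchange and the case for serving inference from the edge follows directly.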
This is exactly where Akamai’s footprint becomes a strategic asset. Leighton told CNBC the company operates “the world’s most distributed platform,” with infrastructure in 4,300 locations across 700 cities in 130 countries. That distribution was originally built to deliver static web content faster than the open internet could route it, and then to intercept and absorb cyberattacks at the edge. The same physical footprint, with the right software layer on top, becomes a globally distributed inference network. The model lab uploads a smaller, optimized inference variant of its frontier model to Akamai’s edge nodes; user queries hit the closest node; latency drops from hundreds of milliseconds to single digits; bandwidth costs collapse because most of the conversation never traverses the public internet.
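Akamai has not published the specifics of how the new customer’s workloads will be deployed, so the following is only an illustrative sketch of the routing pattern described above: an optimized model replica hosted at many edge locations, with each query answered from the geographically closest one. Every name, coordinate, and function in it is hypothetical.

```python
# Illustrative sketch of nearest-node inference routing; not Akamai's actual API.
# Node names, coordinates, and the routing logic are hypothetical stand-ins.
import math
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    lat: float
    lon: float

# A tiny stand-in for a catalog of edge locations hosting the optimized model variant.
NODES = [
    EdgeNode("frankfurt", 50.11, 8.68),
    EdgeNode("singapore", 1.35, 103.82),
    EdgeNode("sao-paulo", -23.55, -46.63),
    EdgeNode("ashburn", 39.04, -77.49),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pick_node(user_lat: float, user_lon: float) -> EdgeNode:
    """Route the query to the geographically closest node hosting the model."""
    return min(NODES, key=lambda n: haversine_km(user_lat, user_lon, n.lat, n.lon))

# A user in Berlin gets routed to the Frankfurt node rather than a US data center.
node = pick_node(52.52, 13.40)
print(f"Serving inference from: {node.name}")
```

In a production system the selection would weigh load, capacity, and network health rather than raw distance, but the geographic intuition is the same.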
Chief Technology Officer Robert Blumofe described the architecture to CNBC last week, framing Akamai’s three pillars as content delivery, cybersecurity, and cloud infrastructure services, and identifying the cloud infrastructure piece as the fastest growing — even if it remains the smallest. The company already runs what it calls an AI inference cloud, which packages compute, storage, and developer tooling into a product that competes more or less directly with AWS Bedrock, Google Vertex AI, and Azure OpenAI Service for inference workloads, while differentiating on geographic distribution.
Where This Fits in the Broader AI Capital Cycle
The Akamai deal lands in the middle of a broader AI capital expenditure cycle that has been remaking the public cloud market for two years. Hyperscalers have committed hundreds of billions of dollars to data center buildouts, GPU procurement, and energy contracts. Frontier model labs have raised tens of billions in private capital and signed long-dated compute contracts to lock in supply. The signal Akamai’s contract sends is that frontier labs are not putting all of their inference eggs in the hyperscaler basket — they are willing to pay a premium for distributed edge architectures that the hyperscalers have not built and probably cannot build at the same scale without a meaningfully different cost structure.
For investors screening AI infrastructure stocks heading into 2026, the implication is straightforward: the inference layer of the AI stack is going to be more competitive than the training layer, and companies with pre-existing physical footprints — Akamai, Cloudflare, equipment vendors with installed bases — have a structural advantage that does not show up in their pre-AI revenue lines.
The contrast with the recent Anthropic-SpaceX compute deal is also instructive, even if the two transactions do not directly overlap. Both signal that frontier labs are looking past the obvious hyperscaler relationships when they need either novel architectures (orbital compute, in SpaceX’s case) or distributed footprints (Akamai’s edge). The diversification of compute supply chains is the megatrend underneath both stories.
What Could Go Wrong
The bull case for Akamai requires three things to keep working. First, the unnamed frontier model customer needs to actually consume the committed capacity rather than treating the contract as an option, paying the usage minimums without ever ramping real workloads. Second, Akamai needs to win at least a handful of additional large frontier and enterprise contracts to validate that the first deal was not a one-off. Third, the cybersecurity and content delivery businesses need to remain stable enough to fund the engineering and capex required to scale the cloud infrastructure pillar.
The risks are real. Hyperscalers can compete aggressively on price for inference workloads if they perceive the edge providers as a strategic threat. Cloudflare is an obvious competitor in the same edge inference category and has been moving aggressively into AI workloads of its own. The frontier model labs are also building proprietary inference stacks and may decide, over time, to bring more of the work in-house rather than splitting it across multiple infrastructure partners.
That said, the structural argument for distributed inference is not going away. AI applications are getting more interactive, more multimodal, and more globally distributed every quarter. The hyperscaler-only architecture is an artifact of the early 2020s. The companies that can put inference closer to users — physically, not just logically — are positioned for a tailwind that the public cloud market on its own does not capture.
The Read for Markets
Akamai’s 16% rally is not just about one contract. It is the market repricing a stock that had been valued as a low-growth incumbent, on the basis of evidence that the company has earned a seat at the table where the next phase of AI infrastructure spending is being allocated. For traders, that creates a setup where forward earnings revisions and multiple expansion can both contribute to total returns over the next several quarters. For longer-term investors, the more interesting question is whether the entire edge category — Akamai, Cloudflare, and adjacent providers — gets rerated on the back of the same thesis.
For Akamai’s competitors, the deal is a wake-up call. The hyperscaler-only architecture for AI inference is not the only architecture, the frontier labs know it, and they are willing to pay a premium for diversification. The 12 to 24 months ahead will reveal whether Akamai converts this validation into a sustained earnings inflection or whether it remains, as Leighton put it, a company that has been undervalued for a while and is now getting its day in the sun.