Amazon Says Meta Will Deploy AWS Graviton CPUs at Scale for ‘Agentic AI’
Amazon said Meta has signed an agreement to deploy AWS Graviton processors at scale for the CPU-heavy workloads behind “agentic AI,” starting with tens of millions of cores and potentially expanding further.
The announcement, published Thursday in an About Amazon post credited to Amazon Staff, points to a notable shift in how Meta is building out its AI infrastructure. Rather than relying only on the GPU capacity that has dominated the AI buildout, Meta is also adding large-scale CPU capacity for tasks such as orchestration, search and real-time reasoning, while continuing to spread demand across multiple cloud and infrastructure providers.
Amazon said Meta “has signed an agreement to deploy AWS Graviton processors at scale” and described the social media company as “one of the largest Graviton customers in the world,” a characterization that comes from Amazon, not Meta. The company said the chips will be used for Meta’s agentic AI workloads, including real-time reasoning, code generation, search and orchestrating multi-step tasks.
Santosh Janardhan, Meta’s head of infrastructure, said in the Amazon post that the move is part of a broader diversification strategy.
“As we scale the infrastructure behind Meta's AI ambitions, diversifying our compute sources is a strategic imperative. AWS has been a trusted cloud partner for years, and expanding to Graviton allows us to run the CPU-intensive workloads behind agentic AI with the performance and efficiency we need at our scale,” Janardhan said.
Graviton is Amazon Web Services’ line of Arm-based server CPUs, used in its EC2 cloud instances. That matters here because the deal is centered on processor capacity for AI-related control and inference work, not just the graphics processors typically associated with training large AI models.
Amazon’s post specifically highlighted Graviton5. Amazon says the chip has 192 cores, a cache five times larger than the previous generation’s and up to 25% better performance than the prior version, all vendor claims from the announcement. Amazon also said Graviton runs on the AWS Nitro System, and that Graviton5 instances support Elastic Fabric Adapter, or EFA, for low-latency, high-bandwidth communication between instances.
The deal also lines up with a broader industry argument from Amazon and Arm that so-called agentic AI — systems designed to plan, search, reason and carry out multi-step actions — creates growing demand for CPU-intensive infrastructure alongside GPUs. Arm said on March 24 that Meta is the lead partner for its new Arm AGI CPU, reinforcing Meta’s public interest in using CPUs for this class of AI workload.
Meta’s AWS agreement also fits a broader pattern of buying external AI capacity from multiple vendors. Nebius said on March 16 that it signed a five-year agreement worth up to $27 billion with Meta. CoreWeave said on April 9 that it expanded its Meta agreement to $21 billion through December 2032. Reuters also reported in August 2025 that Meta had signed a multiyear Google Cloud deal worth more than $10 billion.
Amazon framed the Graviton agreement as part of a larger AI stack that includes infrastructure, data and inference services. “This isn't just about chips; it's about giving customers the infrastructure foundation, as well as data and inference services, to build AI that understands, anticipates, and scales efficiently to billions of people worldwide,” said Nafea Bshara, an Amazon vice president and distinguished engineer, in the post.
No financial terms, contract length, minimum spend, exclusivity provisions or rollout timeline were disclosed in the public announcement. As of Friday, no separate Meta newsroom post confirming the agreement had been identified; the public disclosure came through Amazon’s corporate post, which included Janardhan’s statement.