Nvidia Posts Another Record Quarter as AI Spending Soars, but Investors Flinch

Nvidia Corp. capped a year of extraordinary growth with another record quarter in late February, underscoring how quickly artificial intelligence has reshaped the semiconductor industry and corporate capital spending. But investors, already betting heavily on that boom, reacted with caution.

Record revenue and profit

On Feb. 25, the Santa Clara, California-based chipmaker reported fourth-quarter revenue of $68.1 billion, up 73% from a year earlier and 20% from the previous quarter. Net income for the three months ended Jan. 25 jumped 94% to $43 billion, or $1.76 per diluted share.

For its 2026 fiscal year, Nvidia’s revenue reached $215.9 billion, a 65% increase from fiscal 2025. Full-year net income rose 65% to $120.1 billion, or $4.90 per diluted share—figures that put a company once known primarily to gamers in the same revenue and profit league as the largest U.S. technology platforms.

Data centers dominate the business

The growth is being driven overwhelmingly by one business: chips and systems that power AI data centers.

In the fourth quarter, Nvidia’s data center segment generated $62.3 billion in revenue, up 75% from a year earlier and representing roughly 9 out of every 10 dollars the company brought in. For the full year, data center revenue totaled $193.7 billion, a 68% increase.

“Data center revenue has scaled by roughly 13 times since the emergence of ChatGPT in fiscal 2023,” Chief Financial Officer Colette Kress told analysts on a conference call following the results.

Those numbers have made Nvidia one of the clearest gauges of the global race to build AI infrastructure—and they underline the stakes of a capital-spending cycle accelerating across cloud providers, social media platforms and national governments.

A $700 billion capex cycle—and big commitments

Kress said that, based on external analyst estimates, capital expenditures by the five largest cloud and hyperscale providers for 2026 are now projected to approach $700 billion, up about $120 billion from expectations at the start of the year. Those companies—Amazon Web Services, Microsoft Azure, Google Cloud, Oracle Cloud Infrastructure and Meta Platforms—account for a little more than half of Nvidia’s data center sales, according to the company.

To support demand, Nvidia has locked in unprecedented future obligations. As of Jan. 25, it had $95.2 billion in manufacturing, supply and capacity commitments, most of which it expects to pay by the end of fiscal 2027. It has also signed approximately $27 billion in multi-year cloud service agreements, largely to secure computing resources for internal development, and committed $11.4 billion to various investments.

On the earnings call, Kress said supply for Nvidia’s advanced AI platforms remains “tight,” adding that even older lines—including Hopper and six-year-old Ampere graphics processors—are effectively sold out at major cloud providers.

The scale of commitments has also drawn attention to the risk that customers over-order hardware in anticipation of continued constraints—a pattern seen in prior semiconductor cycles. Nvidia has said it believes the AI transition is a structural shift rather than a short-lived spike similar to past booms driven by cryptocurrency mining or gaming upgrades.

Huang: “AI factories” and agentic systems

Chief Executive Jensen Huang argued that a new class of AI systems is driving the current phase of investment.

“Frontier agentic systems have reached an inflection point,” Huang said, referring to AI models that can perform multi-step tasks with limited human supervision. “Adoption is skyrocketing, and tokens are profitable, driving extreme urgency to scale up compute.”

Huang repeated his description of AI data centers as “AI factories,” and said customers view Nvidia platforms as a way to maximize revenue from each unit of electricity or floor space. Across large deployments, he said, “every data center is power constrained.”

Next platforms: Rubin and Blackwell Ultra

Nvidia is trying to reinforce its position with successive generations of hardware. During the quarter, it detailed its upcoming Rubin platform, combining new GPUs, CPUs, networking switches and data processing units.

The company says Rubin-based systems can cut the cost of AI inference—the process of generating answers from trained models—to as little as one-tenth the per-token cost of its current Blackwell platform, and can train mixture-of-experts models with about one-quarter as many GPUs.

It also highlighted Blackwell Ultra, an enhanced version of its existing AI accelerators, which it says can deliver as much as 50 times the performance of its older Hopper architecture at roughly one-thirty-fifth the cost per “agentic AI” token.

Those claims are central to Nvidia’s pitch that, despite the staggering level of capital spending, its systems improve the unit economics of AI enough to make the spending rational.

Market reaction and guidance—and China excluded

For now, the market is not treating the growth as open-ended. Nvidia’s shares fell between 5% and 8% in the trading session after the earnings release, according to market data, even as the company issued robust guidance.

For the first quarter of fiscal 2027, Nvidia forecast revenue of $78 billion, plus or minus 2%, and a GAAP gross margin of about 74.9%. Notably, Kress said the outlook assumes “no data center compute revenue from China,” reflecting U.S. export controls on advanced AI chips.

Some shipments of modified products to China have received case-by-case approvals, Kress said, but the company had not recognized revenue from those at the time of the call. She also warned that Chinese rivals—backed by domestic capital markets—could eventually “disrupt the structure of the global AI industry” if forced to build around U.S. technology restrictions.

Sovereign AI becomes a major customer

The explicit exclusion of China from near-term guidance highlights how deeply geopolitics is intertwined with Nvidia’s growth. At the same time, governments in other regions have become major customers.

Kress said revenue from “sovereign AI” projects—national or state-backed computing infrastructure built on Nvidia hardware—exceeded $30 billion in fiscal 2026, more than triple the prior year. She cited work with Canada, France, the Netherlands, Singapore and the United Kingdom, among others.

Huang has likened sovereign AI initiatives to earlier eras of investment in electricity grids and telecommunications networks, arguing that every country will need domestic AI capability for security, economic and regulatory reasons.

Beyond the data center: “physical AI” and legacy segments

Nvidia is also extending its reach into “physical AI”—applying AI models to robots, vehicles and industrial systems. Automotive and robotics revenue, including its Nvidia Drive and Isaac product lines, rose 39% to $2.3 billion in fiscal 2026, a record for the segment. Kress said physical AI contributed more than $6 billion in revenue across the company when related industrial and robotics platforms are included.

Partners range from Mercedes-Benz, which uses Nvidia technology in its CLA compact cars, to industrial and robotics companies such as Caterpillar, Boston Dynamics, LG Electronics and Siemens.

Outside the data center, Nvidia’s legacy businesses also notched records, though they now represent a minority of sales. Gaming revenue grew 41% to $16 billion for the year, powered by its latest GeForce RTX graphics cards and AI-enhanced graphics features. Professional visualization rose 70% to $3.2 billion.

What the numbers signal about the AI build-out

The scale and concentration of Nvidia’s gains have raised questions that extend well beyond its own financial statements. The company generated about $97 billion in free cash flow in fiscal 2026 and returned $41.1 billion to shareholders through buybacks and dividends. It ended the year with $58.5 billion remaining under its share repurchase authorization, and it continues to rely on a small number of very large customers for much of its revenue.

Nvidia’s size also makes it a major component of stock indexes such as the S&P 500 and Nasdaq Composite, so swings in its valuation can ripple through portfolios held by pension funds and individual investors.

Huang told analysts he expects Nvidia’s revenue to grow sequentially through calendar 2026 and said demand should exceed the “$500 billion Blackwell and Rubin revenue opportunity” the company outlined a year earlier. But neither he nor Kress tried to predict how long hyperscalers and governments would maintain current levels of spending.

For now, Nvidia’s financial statements offer one of the clearest windows into the AI build-out. They show that, within three years of generative AI entering the mainstream, tens of billions of dollars are being committed each quarter to specialized computing systems, backed by long-term supply contracts and energy-hungry data centers.

Whether that pace can continue—and how companies and countries will manage strains on power grids, labor markets and geopolitics—will be tested in the same place investors watch everything else about the AI boom: Nvidia’s future earnings reports.

Tags: #nvidia, #ai, #semiconductors, #datacenters, #earnings