
Still the most important stock in the universe

by Damien Klassen
November 23, 2025

Every cycle has a bellwether. Today it's NVIDIA. We're operating in a two-speed economy where the US consumer looks fatigued, while corporate AI spending is a rocket ship.

Let me share how I'm thinking about NVIDIA: why the recent result mattered, why capex is the fulcrum, what the depreciation debate gets right and wrong, how rental economics are evolving, the circularity risk, and what the valuation implies for returns.

The market setup: expensive but bifurcated

Look around and you'll see a K‑shaped economy. On one side, consumer-facing businesses report weaker trends, tighter wallets, and cyclical fatigue. On the other, AI infrastructure has the pull of a black hole, drawing in capex, talent, power contracts, and strategy. That divergence makes the market fragile. If the AI leg stumbles, indices can drop fast because so much of the earnings growth is concentrated in a handful of names.

That's why NVIDIA's latest report carried so much weight. It wasn't just another tech print. It was a referendum on whether AI capex is still compounding in line with the story we've been told for 18 months.

What the numbers said

The result was solid. It met expectations and nudged guidance higher. Revenue rose about 22% quarter-on-quarter, and earnings followed. The accounting quality passes muster. GAAP and non‑GAAP numbers were very close, which is rare in a market where many tech firms massage "adjusted" metrics by excluding real costs like stock‑based compensation. NVIDIA doesn't need the window dressing.


So if the accounting is clean, the core question becomes durability. How long can this pace hold?

Follow the capex

If you want to understand NVIDIA, don't start with NVIDIA. Start with the buyers. Microsoft, Amazon, Alphabet, Meta—and to a lesser extent Oracle—are committing eye‑watering sums to data centres. The working numbers look like roughly $400 billion of data centre capex this year, well over $500 billion next year, and higher again after that. These plans have been reaffirmed across earnings calls. Plans can change, but today they are funded and pointed in one direction.

Most of that spending is financed by internal cash flows. These companies swim in cash. They hold large cash balances, raise occasional debt, and maintain optionality for buybacks and acquisitions. Oracle is an outlier, leaning more on debt, but the hyperscalers' capex is predominantly self‑funded. That matters because it lowers the risk of a sudden stop tied to capital market access.

Now zoom in. Approximately 40% of data centre spend flows into GPUs. That single line item explains why NVIDIA is central to this cycle and why margins look the way they do.
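Back-of-envelope, that GPU share turns the capex plans above into an addressable spend figure. A minimal sketch, using the article's round working numbers (roughly $400 billion this year, $500 billion-plus next year; the 40% share is the working assumption above):

```python
def implied_gpu_spend(datacentre_capex: float, gpu_share: float = 0.40) -> float:
    """Portion of data-centre capex that flows into GPUs."""
    return datacentre_capex * gpu_share

# Illustrative round numbers from the capex discussion above
for label, capex in (("this year", 400e9), ("next year", 500e9)):
    gpu = implied_gpu_spend(capex)
    print(f"{label}: ~${gpu / 1e9:.0f}bn flowing into GPUs")
```

On those assumptions, GPU spend alone lands in the $160–200 billion-a-year range, which is why a single vendor with dominant share sits at the centre of the cycle.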

The depreciation debate, rental realities, and useful life

Short seller Michael Burry has argued that hyperscalers are overstating profits because they're depreciating chips too slowly. His point: if you spend $100 billion on GPUs and depreciate over five or six years, you book $16–$20 billion of costs per year. If the real economic life is closer to three years, you should be booking $33 billion.
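Burry's arithmetic is plain straight-line depreciation. A minimal sketch, using the hypothetical $100 billion figure from his example:

```python
def straight_line_annual_depreciation(capex: float, useful_life_years: float) -> float:
    """Annual expense when capex is written off evenly over its useful life."""
    return capex / useful_life_years

capex = 100e9  # the hypothetical $100bn GPU spend in Burry's example

for life_years in (3, 5, 6):
    annual = straight_line_annual_depreciation(capex, life_years)
    print(f"{life_years}-year life: ${annual / 1e9:.1f}bn of depreciation per year")
```

Stretching the assumed life from three years to six roughly halves the annual expense, which is the entire earnings effect under debate.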

There's real debate here. Prior to 2020, many firms used three to four years for servers. Over the last few years, useful lives have been extended—often to five to eight years in parts of the stack—as management teams argued that hardware is lasting longer.

Did changes in depreciation flatter earnings growth for some companies over the last five years? Yes. Are the new numbers wrong? I'm unconvinced.

Here's what I see in the market:


- H100 rentals: The H100 is about three years old. Pricing is down from the peaks. In early 2025, you could rent an H100 for roughly $3 per GPU‑hour on some platforms. Today it's closer to $2. That's still comfortably above my rough estimate of about $1 per hour all‑in cost if you own the hardware.

- A100 rentals: The A100 is a five‑year‑old part. By my math, the all‑in ownership cost is roughly $0.50 to $0.70 per GPU‑hour, depending on utilisation and power assumptions. Even today, you can still rent A100s at a profit. If you're earning a 20%–40% markup on a five‑year‑old asset, that doesn't scream "three‑year economic life."

Plus, the marginal operating cost for these chips is less than half the cost figures above. That is, ignoring sunk costs, you would keep renting out A100s even if you only got $0.40 per GPU‑hour.
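A rough sketch of the rental-versus-ownership arithmetic, using the round numbers above (the A100 rental rate below is a hypothetical mid-point consistent with the 20%–40% markup band; all inputs are illustrative, not market quotes):

```python
def markup_over_ownership(rental_rate: float, ownership_cost: float) -> float:
    """Rental markup over all-in ownership cost, as a fraction (0.5 == 50%)."""
    return rental_rate / ownership_cost - 1.0

# Per-GPU-hour figures, illustrative
h100_rent, h100_own = 2.00, 1.00   # ~$2 rental vs ~$1 all-in ownership cost
a100_rent, a100_own = 0.85, 0.65   # hypothetical mid-points of the bands above

print(f"H100 markup: {markup_over_ownership(h100_rent, h100_own):.0%}")
print(f"A100 markup: {markup_over_ownership(a100_rent, a100_own):.0%}")
```

On these inputs the three-year-old H100 still rents at roughly double its ownership cost, and the five-year-old A100 still clears a healthy positive markup, which is the transaction-level evidence against a strict three-year economic life.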

Net effect: current transaction data doesn't support the claim that the entire edifice is one quarter away from collapse. What it supports is this: earlier this year, the market overheated and pricing got well above trend. It has now cooled to still very profitable levels.

For NVIDIA, is the point moot anyway?

If hyperscalers need to buy chips every 3 years rather than every 6, that is a good thing for NVIDIA but bad for the hyperscalers. So why short NVIDIA?

The more nuanced argument is that the old chips are still profitable, but the new ones are more profitable. So, because data centres can't be built fast enough, it is worth ripping out an old chip to put a new one in. That sounds reasonable, but it is still good for NVIDIA, and less bad for the hyperscalers than the headline suggests.

The real bear case

Demand slows, and depreciation effectively accelerates because utilisation drops and pricing falls.

But that is not what is happening at the moment.

Fundamental AI demand: what I'm hearing and seeing

In real-world sectors that use AI, executives aren't cutting AI spend, even the ones whinging about return on investment.

Take delivery and logistics. Full autonomy isn't here. But the outlines of the future of autonomous delivery are. Which company is going to refuse to spend and run the risk of not having a business in five or ten years' time?

The circularity risk

The part of the story I watch closely is the network around OpenAI. It's dense and can get circular. OpenAI signs massive multi‑year compute agreements with partners like Oracle. Those partners build or lease data centres, which means buying NVIDIA chips.


NVIDIA funds OpenAI so that they can afford to pay the partners who, in turn, buy NVIDIA's chips. It is not hard to sketch a reflexive loop: funding leads to orders, orders lead to capex, capex feeds back into demand narratives that unlock more funding.

If funding to OpenAI (alone) dries up, parts of the chain will wobble. Would the whole sector unravel? Unlikely if it is an independent event. The overall AI spend is diversified across many firms and geographies. OpenAI is significant, but nowhere near the entirety. If OpenAI falls because some new company has a dramatically better model, the story stays largely intact.  

However, if funding to the entire sector dries up, that is a different story. If confidence breaks and hyperscalers slash capex, the dynamic changes. Duration is everything.

If this cycle runs another three years, even debt‑funded players like Oracle can amortise risks as cash flows come through. If it ends in six to twelve months, leverage, excess capacity, and unfilled contracts become real problems.

Geopolitics and China

China is the wild card that, paradoxically, looks less dangerous for NVIDIA than for many other US firms. US export controls cap the performance of chips that can be sold into China. NVIDIA's current China exposure is already limited, so downside from a further clampdown is not the main risk case. There is upside risk if Trump relents.

Competition: real, but the pie is growing

AMD has some credible alternatives and a clear road map. Google's TPUs are proven in their environment. Amazon continues to push custom silicon and networking. All true. And yet NVIDIA still holds 90%+ share in the AI accelerator segment that matters most right now. Could share slip? Absolutely. But the pace of demand growth and NVIDIA's software moat keep it in the driver's seat for now.

I don't see evidence of an imminent, dramatic share loss in the data. A reasonable portfolio hedge is to pair NVIDIA with exposures that would benefit if share shifts—AMD, and platform owners like Alphabet and Amazon that capture value through their stacks regardless of which chip wins.

Valuation through a quality-value lens

I use a simple two‑axis framework: quality on the vertical, valuation on the horizontal. The sweet spot is high quality at a reasonable price. NVIDIA screens at the very top of quality right now. Earnings momentum is exceptional, upgrades keep arriving, and the balance sheet is clean. Valuation is not cheap. On most metrics, it sits around the 80th percentile.

The key driver here is earnings. The most useful single chart for me is earnings per share:

[Chart: NVIDIA earnings per share]

In the last month alone, consensus pushed estimates for the next fiscal year up materially. As long as EPS keeps rising, a premium multiple makes sense. Any cracks, and look out below.

Big picture, on valuations, semiconductors are brutal businesses. Leaders come and go—Intel and Texas Instruments have worn the crown and ceded it. Mature chip leaders typically trade at 10–15x earnings.


NVIDIA is not mature; it's a hyper‑growth platform company. If earnings compound 25–30% for three to four years (keeping in mind they grew 21% in the last quarter alone), the forward multiple naturally compresses into the low‑teens even if the price does nothing. If growth stops next year, the stock can fall 30–40%. Both paths are plausible. The job is to weigh the odds.
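The multiple-compression arithmetic is easy to sketch. The starting multiple below is an illustrative assumption, not the article's figure; the growth rates are the 25–30% scenarios above:

```python
def forward_pe_after_growth(current_pe: float, eps_growth: float, years: int) -> float:
    """Implied multiple on today's price if EPS compounds at eps_growth for `years` years."""
    return current_pe / (1.0 + eps_growth) ** years

start_pe = 35.0  # hypothetical starting multiple, for illustration only

for growth in (0.25, 0.30):
    pe = forward_pe_after_growth(start_pe, growth, years=4)
    print(f"{growth:.0%} EPS growth for 4 years: multiple compresses to {pe:.1f}x")
```

Under these assumptions, four years of 25–30% compounding drags the multiple into the low teens with no price move at all, which is the sense in which today's premium "grows into" a mature-semiconductor valuation.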

What could break the thesis?

Keep a short, brutal list:

  • Capex cuts: Hyperscalers curtailing multi‑year data centre plans. Watch their tone, power contracts, and returns.
  • Revisions roll over: next-12-months earnings per share stops going up.
  • GPU rental pricing cracks: If GPU/hour rates fall below all‑in ownership costs across clouds, oversupply is arriving.
  • Funding shock at a keystone lab: If capital markets close, the reflexive loop weakens.
  • Power and build constraints: If grid connections or transformer shortages slow capacity additions materially, revenue timing becomes choppier.
  • Adverse policy: Export controls tighten further or antitrust pressure targets AI chip bundling in a way that hurts ecosystem lock‑in.

How to reduce your risk

NVIDIA screens as a high‑quality growth asset at a premium that is justified if the cycle runs another five years. That's not a prediction; it's a conditional statement. My positioning flows from that conditional:

  • Own, but size for volatility. A name with 5% daily swings is not a "set and forget" overweight.
  • Pair with beneficiaries of share shifts. AMD is the obvious one. Platform owners—Alphabet and Amazon—can monetise AI irrespective of chip vendor through search, cloud, and retail ecosystems.
  • Watch the revision tape like a hawk. The moment next-12-months earnings per share stalls, assume the multiple gets questioned.
  • Hedge the macro. If your portfolio is AI‑heavy, consider offsetting exposures that benefit from higher yields or a stronger dollar, both of which could bite long‑duration tech.
  • Respect duration risk in the "debt‑levered AI" cohort. Oracle is the poster child. If the cycle runs, leverage magnifies good outcomes. If it shortens, the downside is real.

Is it a bubble?

Parts of the ecosystem look bubbly. Some software names trade on stories that are years ahead of their revenue lines. Some GPU rental marketplaces launched into peak pricing and are now resetting. But bubble dynamics require two ingredients: leverage and synchronised belief.

There's leverage in spots (Oracle, some private GPU clouds), but the hyperscalers are mostly self‑funding. Belief is strong, but it's diversified by use case and platform. My read: we're in the mid‑innings of a powerful capex cycle, not the final minutes of a mania. It will end, as all cycles do.

My bottom line

NVIDIA remains the fulcrum of a historic investment cycle. The accounting is clean, the cash is real, and the debate we should be having is about duration. If AI capex compounds for three to five more years, today's premium is defensible. If it stalls next year, downside is significant. Both paths live in the distribution. And remember: in cycles like this, the market pays up for earnings momentum until it doesn't. Your edge isn't predicting the turn—it's preparing for it.