THE AI SUPERCYCLE — A DEFINITIVE ANALYSIS

The Financial View | Global Macro & Technology Research | Q2 2025

This report is produced for informational and educational purposes only. It does not constitute investment advice. All data and estimates reflect publicly available information as of Q2 2025. Past performance does not predict future results.

The Investment Thesis

I believe the market is analyzing the AI Supercycle from entirely the wrong vantage point. Most analysts, most coverage, and most capital are focused on the model layer — who has the best AI, which foundation model wins, whether OpenAI or Anthropic or Google Gemini takes the crown. That debate, while interesting, is almost entirely irrelevant to where the economic surplus in this cycle will actually accumulate. The real story is in the physical infrastructure that every AI model must pass through to function: the lithography machines, the memory stacks, the power grids. And the market has been systematically underpricing it.

The unique insight of this report: energy, not compute, is already the binding constraint on AI expansion. The companies positioned around that constraint — power generators, thermal management specialists, grid builders — are being valued as boring utilities when they are, in structural economic terms, the oil fields of the AI economy. I will explain exactly why.

"$325 billion per year. Four companies. One technological domain. This is not a technology story — it is a capital allocation story. And the capital is flowing to exactly the wrong places for most investors to notice the real winners." — The Financial View,

Section 1: The Asset-Heavy Pivot of the 21st Century

The combined capital expenditure of Microsoft, Alphabet, Meta, and Amazon in 2025 alone is an estimated $325 billion — the majority flowing directly into AI infrastructure: data centers, custom silicon, networking fabric, and power systems. This figure exceeds the annual GDP of Portugal and amounts to more than a third of the entire US defense discretionary budget. It is happening in a single year, by four companies, in a single technological domain. No precedent in modern corporate history comes close.


Combined Hyperscaler CAPEX 2019–2025E (Key Data)

Combined CAPEX trajectory: 2019 ~$97B → 2020 ~$97B → 2021 ~$100B → 2022 ~$150B → 2023 ~$139B → 2024 ~$228B → 2025E ~$325B. 4-year combined CAGR: ~35%. Microsoft CAPEX CAGR 2021–2025: ~54%. Microsoft's CAPEX alone grew by $30 billion in a single year between 2023 and 2024 — a number that, on its own, would rank as a top-10 global infrastructure project. CAPEX as % of Revenue (2025E): Microsoft ~28%, Meta ~28%, Amazon ~20%, Alphabet ~19.5%. Microsoft reinvesting nearly 30 cents of every revenue dollar is without precedent in modern software history.
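For readers who want to reproduce the growth arithmetic, a minimal sketch follows, using the approximate combined CAPEX figures cited above; treating the "4-year combined CAGR" as the 2021-to-2025E compounding period is my assumption about the basis for that figure.

```python
# Sketch: reproducing the combined-CAPEX growth arithmetic from the figures above.
capex_billions = {2019: 97, 2020: 97, 2021: 100, 2022: 150, 2023: 139, 2024: 228, 2025: 325}

def cagr(start, end, years):
    """Compound annual growth rate over `years` compounding periods."""
    return (end / start) ** (1 / years) - 1

# Assuming the "4-year combined CAGR" is measured from 2021 to 2025E:
print(f"2021 -> 2025E CAGR: {cagr(capex_billions[2021], capex_billions[2025], 4):.1%}")  # ~34%
print(f"2024 -> 2025E growth: {capex_billions[2025] / capex_billions[2024] - 1:.1%}")    # ~42-43%
```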



For two decades, the defining financial characteristic of the technology megacap was the capital-light model. Google built a $280 billion annual revenue engine on advertising requiring relatively modest physical infrastructure. Meta generated 40%+ operating margins by connecting two billion people with a product costing almost nothing to replicate at the margin. That structural advantage is now being deliberately sacrificed — because the next competitive frontier requires owned data center capacity, owned networking fabric, and owned power supply. You cannot train next-generation AI models on rented spot instances.

Three forces explain this irreversible pivot. First: competitive necessity — the hyperscaler that falls behind on AI infrastructure loses cloud market share. Second: training economics — training next-generation models costs $500M–$1B+ in compute and requires owned capacity running at near-100% utilization for months. Third: inference demand compounds — unlike training (a one-time event), inference runs billions of times daily and scales permanently with user growth.

The historical analogy: when railroads were built in the 1860s–1880s, the railroad companies themselves mostly earned poor returns. The real wealth was captured by the suppliers of scarce inputs and by the businesses built on top of the infrastructure. The pattern in AI is the same — the bottleneck inputs (power, specialized equipment, advanced packaging) will extract economic rent for years. Most investors are looking at the railroad company. The real money is in the land adjacent to the tracks.

Section 2: The CAPEX Deep Dive — Four Companies, One Arms Race

Microsoft Corporation (NASDAQ: MSFT) — ~$87B 2025E CAPEX

Microsoft is executing the most consequential partnership bet in corporate technology history. The reported $13 billion total commitment to OpenAI — structured as Azure computing credits and equity — has transformed Microsoft's entire strategic identity. The financial architecture is circular but powerful: OpenAI runs on Azure infrastructure, spends its Microsoft investment primarily as Azure credits, and Microsoft retains all the cloud revenue. Microsoft's core AI thesis is about monetizing AI through an existing install base of 380 million M365 commercial seats — a distribution advantage no startup can replicate. GitHub Copilot at $19–39/user/month. Microsoft 365 Copilot at $30/user/month. At just 10% M365 penetration, that is $13+ billion in incremental annual recurring revenue from a near-zero marginal cost software addition. Azure's AI contribution to growth has climbed from 3 percentage points to 7+ in consecutive quarters.
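The $13 billion figure follows directly from the unit math. A quick sketch, using the seat count and list price cited above; the 10% penetration rate is the illustrative assumption.

```python
# Sketch: incremental ARR from Microsoft 365 Copilot at the cited price point.
commercial_seats = 380_000_000      # M365 commercial installed base cited above
copilot_price_per_month = 30        # USD per user per month (list price)
assumed_penetration = 0.10          # illustrative 10% attach rate

incremental_arr = commercial_seats * assumed_penetration * copilot_price_per_month * 12
print(f"Incremental ARR at 10% penetration: ${incremental_arr / 1e9:.1f}B")  # ~$13.7B
```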

However: Microsoft carries the most concentrated partner risk of any hyperscaler. Structural dependence on a single external AI provider is a genuine long-term vulnerability. The Stargate announcement in January 2025 — OpenAI fronting a $500 billion infrastructure initiative with SoftBank and Oracle, notably without Microsoft as the primary named partner — raised legitimate questions about whether OpenAI is actively building toward compute independence.

Alphabet Inc. / Google (NASDAQ: GOOGL) — $75B 2025 CAPEX Guidance

I think Alphabet is the most misunderstood AI story in the market right now. The narrative is that Google is the threatened incumbent — disrupted by ChatGPT, losing search share to AI, scrambling to catch up. That narrative is wrong in important ways. The reason: Google's TPU vertical integration. Since 2015, Google has designed custom silicon optimized for AI workloads. Its sixth-generation Trillium (TPU v6) offers a 4.7x performance improvement over its predecessor. The strategic consequence: Google does not need to purchase NVIDIA GPUs for its internal AI workloads. For approximately 80% of Google's AI compute — Search, YouTube, Gmail, Maps, Translate — the computation runs on proprietary silicon at a cost structure Google controls entirely, a structural cost advantage that compounds with every chip generation. No other hyperscaler has this.

📊 Google TPU Performance by Generation


TPU v1 (2016): 92 TFLOPS → TPU v2 (2017): 180 TFLOPS → TPU v3 (2018): 420 TFLOPS → TPU v4 (2022): 275 TFLOPS → TPU v5e (2023): 197 TFLOPS → TPU v6 Trillium (2024): 918 TFLOPS. (Published figures mix per-chip and per-board peak throughput across precisions, and v5e was a cost-optimized part rather than a flagship, which accounts for the apparent dips.) The Trillium represents a 4.7x improvement over v5e. This compounds every generation as a structural CAPEX efficiency advantage that no other hyperscaler possesses. Google Cloud Revenue: 2020 $13.1B → 2021 $19.2B → 2022 $26.3B → 2023 $33.1B → 2024 $43.2B → 2025E $55B, growing at ~28–30% YoY.

The search cannibalization concern deserves acknowledgment — when Google's AI Overview answers a query directly, the user may not click through to an advertiser's website. The long-term advertising revenue impact of AI search is genuinely uncertain. However: Google is the only hyperscaler with an AI hardware moat. That asymmetry matters more than the search narrative. Alphabet trades at a valuation discount to Microsoft that appears unwarranted on a fundamental basis.

Meta Platforms Inc. (NASDAQ: META) — $60–65B 2025E CAPEX

Meta's AI strategy is the most intellectually provocative and strategically unconventional of the four hyperscalers — and I think it is also the most underestimated. While competitors lock down their models, Zuckerberg is giving his away. The LLaMA open-source strategy is a deliberate bet against the dominant industry logic, and the market has not yet decided whether this is genius or recklessness. I lean toward the former, with one significant caveat. The monetization architecture is elegant: Meta AI doesn't charge users $20/month — it embeds inside products used by 3.5 billion people daily and improves advertising targeting so precisely that advertisers pay more per impression. Meta's advertising revenue grew approximately 20% in 2024, with management citing AI improvements as a key driver. This is probably the most capital-efficient AI monetization model in the industry.

📊 Meta Platforms — Revenue vs Operating Income 2019–2025E


Revenue: 2019 $70.7B → 2020 $85.9B → 2021 $117.9B → 2022 $116.6B → 2023 $134.9B → 2024 $164.5B → 2025E $191B. Operating Income: 2022 $28.9B (Year of Efficiency bottom) → 2024 $58.9B → 2025E $68B. The AI-driven advertising improvement is the primary revenue mechanism — not direct AI subscriptions. Open-Source Policy Risk: LLaMA model weights are publicly available to Chinese research labs and every actor the US government would prefer to exclude from frontier AI access. If the US government mandates that frontier models cannot be publicly released, Meta's entire strategic differentiation disappears overnight. This risk is not priced into the stock.

Amazon.com / AWS (NASDAQ: AMZN) — ~$105B 2025E Total CAPEX


Amazon's AI infrastructure position is the most straightforward and the most durable of the four hyperscalers. AWS controls approximately 32% of global cloud infrastructure market share and generates $107+ billion in annualized revenue with operating margins expanding from 28% in 2021 to nearly 38% in 2024. The deepest competitive moat in cloud computing is not having the best AI models — it is having the most integrated enterprise relationships. AWS has spent 20 years building 200+ services that Fortune 500 enterprises have embedded into their core operations. Switching costs are operational, organizational, and political — not just financial. AWS Revenue: 2020 $45.4B → 2021 $62.2B → 2022 $80.1B → 2023 $90.8B → 2024 $107.6B → 2025E $130B. Operating Margin expanding from 28.8% (2020) to ~39% (2025E) as AI workloads (structurally higher-margin) grow as a proportion of revenue. The $8 billion Anthropic investment, deployed in tranches, is structured correctly: not a bet on Anthropic winning the model wars, but a bet on AWS remaining the preferred deployment platform regardless of who wins.

Section 3: NVIDIA — Why the Pickaxe Seller Wins the Gold Rush



NVIDIA's rise from gaming graphics company to the most valuable semiconductor company in history is the product of a 20-year strategic bet on software ecosystem development that most competitors ignored. The most durable competitive advantage is not the H100 or the B200 — it is CUDA, the programming model NVIDIA began developing in 2006. CUDA is integrated into every major AI framework: PyTorch, TensorFlow, JAX. The entire academic AI community learned their craft on CUDA. Switching to AMD's ROCm requires rewriting institutional codebases in a different programming paradigm. For most organizations, this switching cost is prohibitive regardless of the hardware comparison.

📊 NVIDIA Revenue by Segment — Data Center Takeover

Data Center Revenue: FY2020 $6.8B → FY2021 $10.0B → FY2022 $16.6B → FY2023 $15.0B → FY2024 $47.5B → FY2025 $115.2B → FY2026E $160B. Gaming Revenue FY2025: $11.4B (only 9% of total, down from ~50%). Data Center is now 88%+ of total revenue. Operating Margins comparison (most recent full fiscal year): NVIDIA 55.0%, Microsoft 44.6%, Meta 36.9%, Apple 31.5%, Alphabet 32.0%, Amazon AWS 30.3%. NVIDIA's 55%+ operating margin on $130B+ revenue is the defining financial achievement of the AI cycle. For context, Apple — long considered the gold standard of tech margin — operates at 31.5%.



The DeepSeek Moment — Why Efficiency Strengthens, Not Weakens, Demand: In January 2025, DeepSeek demonstrated that a frontier-quality AI model could be trained for approximately $6 million versus the $100M+ OpenAI reportedly spent. For 48 hours, the market decided this broke the GPU demand thesis. It didn't. The Jevons Paradox explains why: when the efficiency of using a resource improves dramatically, total consumption tends to increase because lower cost makes previously uneconomical applications viable. When DeepSeek made capable AI 95% cheaper, it made AI economically viable for millions of new applications. The addressable market for AI inference expanded faster than the cost per inference fell. The analysts who sold NVIDIA after DeepSeek were looking at the cost side of the equation while ignoring the demand side entirely.
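The Jevons argument can be stated as a simple elasticity condition: if the price of inference falls and demand is price-elastic (elasticity greater than 1), total spending on inference rises. A toy illustration follows; the constant-elasticity demand curve and the elasticity value of 1.3 are my illustrative assumptions, not estimates.

```python
# Toy model: total spend under a constant-elasticity demand curve Q = k * P^(-e).
# Total spend P*Q rises after a price cut whenever elasticity e > 1.
def total_spend(price, elasticity, k=1.0):
    quantity = k * price ** (-elasticity)
    return price * quantity

baseline = total_spend(price=1.00, elasticity=1.3)
after_cut = total_spend(price=0.05, elasticity=1.3)   # a 95% reduction in cost per inference
print(f"Spend multiple after a 95% cost cut (e = 1.3): {after_cut / baseline:.1f}x")  # ~2.5x
```

Whether realized demand elasticity actually exceeds 1 is exactly the empirical question Counterargument 1 in Section 8 addresses.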

Section 4: The Seven-Layer AI Supply Chain

Every ChatGPT query, every Copilot suggestion, every Google AI Overview is the visible end product of a seven-layer physical supply chain spanning four continents, involving hundreds of specialized companies, and containing several single points of failure. The most important structural fact: a failure at any layer propagates through every layer downstream of it. This is not a resilient, distributed system — it is a series of sequential dependencies, several of which have no substitutes whatsoever. The entire AI economy rests on the continued, uninterrupted operation of a handful of firms most financial analysts cannot name.

📊 Supply Chain Geopolitical Risk by Layer (Score: 1=Competitive, 5=Single Point of Failure)

Layer 7 Infrastructure/Power: 3.0 (Competitive)
Layer 6 Systems Integration (GPU): 3.5 (Duopoly, NVIDIA/AMD)
Layer 5 HBM Memory: 3.2 (Oligopoly)
Layer 4 Wafer Fabrication: 4.5 (TSMC near-monopoly + Taiwan risk)
Layer 3 Semiconductor Equipment: 5.0 (ASML monopoly — critical single point of failure)
Layer 2 EDA Tools & IP: 4.5 (Synopsys/Cadence US duopoly)
Layer 1 Raw Materials: 3.2 (Oligopoly, China rare earth concentration)



Layer 1 — Raw Materials

Silicon wafers must be manufactured with 99.9999999% purity. The global silicon wafer market is controlled by essentially two Japanese companies: Shin-Etsu Chemical (~30% global share) and SUMCO (~25%), together roughly 55% of global supply. China controls approximately 85% of global rare earth processing capacity — a genuine long-term supply chain vulnerability. Key companies: Shin-Etsu Chemical (Japan), SUMCO Corporation (Japan), Siltronic AG (Germany), Air Products APD (USA), DuPont / Merck KGaA (USA/Germany).

Layer 2 — EDA Tools & IP (Critical Risk)

This is the most underappreciated chokepoint in the entire AI supply chain. Every semiconductor in the world — NVIDIA's B200, Huawei's Ascend, Apple's M4 — is designed using software from either Synopsys or Cadence Design Systems. These two American companies have an effective duopoly on Electronic Design Automation (EDA) software. The US government's ability to restrict export licenses for this software to China is more powerful than restricting chip shipments directly, and it has already been partially deployed. ARM Holdings provides the CPU/NPU architecture IP underlying most mobile chips and a growing share of AI inference chips — every chip shipped using ARM architecture pays a royalty fee. Key companies: Synopsys SNPS (~30% EDA market), Cadence CDNS (~30% EDA market), ARM Holdings ARM (~99% smartphone chip architecture).

Layer 3 — Semiconductor Equipment (CRITICAL: Single Point of Failure)

ASML is the most strategically important company in the global economy that most people cannot name. ASML holds a complete monopoly on Extreme Ultraviolet (EUV) lithography machines — the equipment required to etch transistors at 7nm and below. There is no backup supplier. There is no substitute technology. Without ASML EUV machines, TSMC cannot produce NVIDIA's Blackwell GPUs. The machines generate plasma hotter than the surface of the sun, contain components from over 5,000 global suppliers, and the newest High-NA EUV machine is priced at approximately €350 million per unit. China cannot legally import any of them. That single export control is the most consequential economic restriction imposed on any country in modern history. Key companies: ASML ASML (Netherlands) — 100% EUV monopoly, Applied Materials AMAT (~20% wafer fab equipment), Lam Research LRCX (~15%), KLA Corporation KLAC (~50% process control — an overlooked near-monopoly), Tokyo Electron TEL (~14%).

Layer 4 — Wafer Fabrication (Critical Risk: Taiwan)

TSMC fabricates approximately 92% of the world's most advanced semiconductor chips. Every NVIDIA GPU, Apple M-chip, AMD EPYC, Amazon Trainium, and Google TPU is manufactured at TSMC. Any military conflict over Taiwan — even a blockade — would stop advanced chip output immediately and exhaust downstream inventory within 12–18 months. There is no short-term substitute. The CoWoS advanced packaging process (a TSMC near-monopoly) was the primary bottleneck limiting NVIDIA's H100 supply throughout 2024. Key companies: TSMC TSM (Taiwan, 92% of sub-10nm logic), Samsung Foundry (South Korea, ~8%), Intel Foundry (USA, strategically important for supply chain resilience), SMIC (China, effectively capped at 7nm-class nodes by ASML EUV export controls).

Layer 5 — Advanced Memory (High Risk: HBM Supply Constraint)

High Bandwidth Memory (HBM) is the most important component of AI GPU economics that receives the least coverage. An NVIDIA H100 delivers 3.35 TB/s of memory bandwidth; a B200 delivers 8 TB/s. Without this bandwidth, AI training at scale is physically impossible. SK Hynix's production yield on HBM3E 12-stack chips directly determines how fast NVIDIA can ship Blackwell GPUs. HBM Market Share: HBM3E (2024) — SK Hynix 53%, Samsung 33%, Micron 14%. HBM3E 12H (2025E) — SK Hynix 52%, Samsung 30%, Micron 18%. SK Hynix has maintained approximately 50%+ share across all HBM generations.
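Why bandwidth, rather than raw FLOPS, is the binding figure: in the decode phase of large-model inference, each generated token requires streaming roughly the full set of model weights from memory, so HBM bandwidth caps single-stream throughput. A back-of-envelope sketch follows; the 70B-parameter model size and 16-bit weights are my illustrative assumptions.

```python
# Sketch: memory-bandwidth ceiling on single-stream decode throughput.
hbm_bandwidth_tbps = {"H100": 3.35, "B200": 8.0}   # TB/s, figures cited above

params = 70e9          # assumed model size (70B parameters)
bytes_per_param = 2    # assumed 16-bit weights
model_bytes = params * bytes_per_param

for gpu, bw in hbm_bandwidth_tbps.items():
    tokens_per_second = (bw * 1e12) / model_bytes   # rough upper bound, ignoring KV cache and batching
    print(f"{gpu}: ~{tokens_per_second:.0f} tokens/s per GPU (single-stream upper bound)")
```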



Layer 6 — Systems Integration (NVIDIA ~80% AI GPU Market Share)

Key players: NVIDIA NVDA (~80% AI GPU share; CUDA software moat is the durable advantage), AMD AMD (~15%; ROCm software gap narrows), Google TPU (internal use only; structural cost advantage for GCP workloads), AWS Trainium 2/Inferentia 2 (cost efficiency play for price-sensitive workloads), Broadcom AVGO (custom AI ASICs for Google/Meta + dominant in data center Ethernet networking), Huawei Ascend 910B/C (China domestic; approaching A100 performance for inference; supply limited by SMIC capacity).

Layer 7 — Infrastructure, Power & Thermal Management (The Binding Constraint)

This is the layer the market has been slowest to price correctly, and where I believe the most compelling long-term economic opportunity currently resides. Power availability is already the primary constraint on AI infrastructure expansion — and unlike GPU supply constraints, which can be resolved by building more fabs, power constraints are fundamentally physical and geographic. You cannot permit and build a new transmission line in 18 months. Key companies: Vertiv Holdings VRT (dominant in liquid cooling for high-density AI racks; mandatory above the ~40 kW/rack threshold), Eaton Corporation ETN (power distribution and backup for all major data centers), Schneider Electric SU (full-stack data center management), Constellation Energy CEG (nuclear power; restarting Three Mile Island Unit 1 under a 20-year PPA with Microsoft), Vistra Corp VST (largest competitive US power generator; nuclear fleet), Quanta Services PWR (builds transmission lines and substations), NuScale/Kairos/X-energy (Small Modular Reactor development for 2030–2035 AI baseload).

Section 5: Power Is the New Oil

The phrase 'power is the new oil' is used casually in technology circles. Most analysts treat it as a colorful metaphor. I think it is a precise structural claim with significant analytical implications that the market has not yet fully absorbed. In the 20th century, access to cheap, reliable hydrocarbon energy was the defining input advantage of industrial economies. Wars were fought over oil fields. The same dynamic is re-emerging with electricity — specifically, firm, reliable, dispatchable power available 24/7, at the scale of hundreds of megawatts to multiple gigawatts. The entities controlling access to this resource adjacent to data center demand clusters are in the structural position that oil field owners occupied in 1950. The market has partially priced this into Constellation Energy CEG and Vistra VST. It has not yet priced it into the second-order plays: Vertiv, Quanta, Eaton. These companies are being valued as boring industrial suppliers when they are, in economic terms, the pipelines and refineries of the AI economy.


📊 US Data Center Power Demand 2015–2030E

US Data Center Power Demand: 2015 ~90 GW (traditional baseline) → 2023 ~121 GW (AI inflection begins) → 2025E ~160 GW → 2027E ~248 GW → 2030E ~432 GW (with AI-attributed demand). The AI-attributed component grows from near-zero in 2022 to approximately 290 GW by 2030 — representing the addition of Canada's entire electricity consumption in under a decade. Power Density per Rack: Standard Server 2015: 5 kW. High Density 2019: 15 kW. A100 GPU Rack 2022: 50 kW. H100 GPU Rack 2023: 80 kW. GB200 NVL72 Rack 2025: 120 kW. Above approximately 40 kW per rack, air cooling becomes physically inadequate and liquid cooling becomes a mandatory engineering requirement — not an optional upgrade. All current AI GPU racks have crossed this threshold.
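To make the density numbers concrete, here is a rough sketch of how rack density translates into facility capacity; the 100 MW campus size and the PUE (cooling and overhead multiplier) of 1.3 are illustrative assumptions.

```python
# Sketch: how many racks fit inside a fixed power envelope as density rises.
rack_kw = {"Standard server (2015)": 5, "A100 rack (2022)": 50,
           "H100 rack (2023)": 80, "GB200 NVL72 rack (2025)": 120}
assumed_pue = 1.3            # power usage effectiveness: cooling and facility overhead
campus_mw = 100              # hypothetical 100 MW data center campus

for name, kw in rack_kw.items():
    racks = (campus_mw * 1000) / (kw * assumed_pue)
    print(f"{name}: ~{racks:,.0f} racks per {campus_mw} MW campus")
```

On these assumptions, a 100 MW campus that could power roughly fifteen thousand 2015-era racks supports only about 640 GB200 racks, which is why the binding constraint has shifted from floor space to power.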

The US power grid was designed for a different era. Northern Virginia — home to more data center capacity than any other geography on Earth — already has a multi-year backlog of data center power requests at Dominion Energy. New data center construction in some Loudoun County locations has been effectively paused pending grid upgrades requiring transmission line construction — a process taking 5–8 years to permit and build. This is why Quanta Services, which builds physical transmission lines and substations, is in structural economic terms an AI infrastructure company despite having no direct relationship with semiconductor or cloud technology.


📊 Hyperscaler Nuclear Power Commitments (Megawatts)


Committed/sought nuclear capacity: Microsoft (Three Mile Island restart via Constellation Energy): 835 MW under 20-year PPA. Amazon (Talen Energy nuclear campus): 960 MW. Meta (Open RFP midpoint): 2,000 MW. Oracle (various PPAs): 1,000 MW. Google (Kairos SMR): 500 MW. Total committed/sought: 5,295+ MW. These are long-term Power Purchase Agreements or direct acquisitions representing structural demand for nuclear power that will persist for 20+ years regardless of AI adoption timelines. The economics: nuclear plants have very high fixed costs and very low variable costs. Microsoft's deal with Constellation at reportedly ~$0.10/kWh for 20 years locks in cheap, clean, reliable electricity while hedging against power price inflation. Constellation Energy CEG and Vistra VST have been repriced accordingly (+150%+ and +300%+ since 2023 respectively). The second-order beneficiaries — Vertiv, Quanta, Eaton — have not been fully repriced as AI infrastructure companies. That is the mispricing.
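The rough arithmetic behind a deal like the Three Mile Island PPA, under illustrative assumptions: the ~95% capacity factor is typical for a US nuclear unit, and the ~$0.10/kWh price is the reported figure rather than a confirmed contract term.

```python
# Sketch: implied scale of an 835 MW, 20-year nuclear PPA at ~$0.10/kWh.
capacity_mw = 835
assumed_capacity_factor = 0.95      # typical for a well-run US nuclear unit
reported_price_per_kwh = 0.10       # reported approximate PPA price
hours_per_year = 8760

annual_mwh = capacity_mw * assumed_capacity_factor * hours_per_year
annual_value = annual_mwh * 1000 * reported_price_per_kwh   # MWh -> kWh
print(f"Annual output: ~{annual_mwh / 1e6:.1f} TWh")
print(f"Implied annual contract value: ~${annual_value / 1e9:.2f}B")
print(f"Implied 20-year contract value: ~${annual_value * 20 / 1e9:.0f}B")
```

On these assumptions, that is roughly $14 billion of contracted revenue over the life of the deal, which helps explain why Constellation repriced so sharply.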

Section 6: The Software & Model Wars

Here is the uncomfortable truth about the AI model competition that most coverage refuses to state plainly: the model layer is commoditizing faster than the companies competing in it want to acknowledge. DeepSeek trained a GPT-4-competitive model for $6 million. Meta's LLaMA 3.1 405B is freely available. The algorithmic innovations driving efficiency are openly published and replicated within months. OpenAI's brand moat is real and durable — 200+ million weekly active users. But its technology moat is weaker than valuations imply. As of 2025, independent benchmarks rate Google Gemini 1.5 Pro, Anthropic Claude 3.5, and Meta LLaMA 3.1 as competitive with GPT-4 across most evaluation tasks.

Anthropic: The Regulatory Optionality Nobody Prices. Anthropic's Constitutional AI — embedding safety constraints at the base layer rather than adding guardrails afterward — is interesting not just as a technical approach but as a strategic asset. As AI legislation tightens in the EU, US, and UK, a company with demonstrably rigorous safety methodology may access regulated industries (healthcare, finance, legal, government) that competitors cannot. This is a moat that grows as regulatory stringency increases — an unusual and underappreciated competitive dynamic. The structural conclusion: the mid-tier model layer will commoditize. The durable moats are distribution at consumer scale (OpenAI/Google), enterprise integration depth (Microsoft/AWS), regulatory safety positioning (Anthropic), and open-source ecosystem control (Meta). A standalone 'we have a good model' proposition is not a durable business. This is why the infrastructure layer will generate more durable economic returns than any individual AI model company.

Section 7: The Silicon Curtain & Sovereign AI


The US has constructed a layered export control regime targeting China's access to advanced semiconductors: NVIDIA H100, H200, and B200 GPUs cannot be legally shipped to China. ASML EUV machines are blocked. EDA software export licenses are under restriction. This is the most consequential economic restriction imposed on any country in modern history. But overstating US dominance is as analytically dishonest as understating it. China's response has been sophisticated: the Huawei Ascend 910B has achieved performance approaching the NVIDIA A100 for certain inference workloads, and DeepSeek's January 2025 publication demonstrates that algorithmic efficiency can partially compensate for a hardware disadvantage.

US/China AI capability scores (out of 100):
Advanced Hardware — US: 95, China: 42 (export controls are the primary asymmetry)
Algorithmic Research — US: 85, China: 75 (DeepSeek demonstrates near parity)
Training Data Scale — US: 88, China: 85
Talent Pipeline — US: 80, China: 72
Cloud Infrastructure — US: 95, China: 58
AI Investment Capital — US: 90, China: 70

📊 Sovereign AI — Committed Investment 2025–2030E

Committed government and strategic AI investment: UAE & Saudi Arabia $100B. Japan (SoftBank + Government) $100B. European Union (combined) $60B. India (Government programs) $10B. Rest of Asia & Middle East $50B. TOTAL committed: ~$320B+. This demand is ROI-agnostic — Saudi Arabia's HUMAIN initiative is not optimized for payback periods; Japan's investment is a national competitiveness decision. Critically, this demand must be sourced from the US-allied supply chain — not China. This creates a structural demand floor for NVIDIA, TSMC, SK Hynix, ASML, Vertiv, and Quanta regardless of commercial AI adoption timelines. Free Cash Flow comparison (most recent fiscal year): Apple $108.8B, Microsoft $74.1B, Alphabet $72.5B, NVIDIA $60.8B, Meta $52.1B, Amazon $44.9B. The infrastructure buildout is largely self-funded by extraordinary cash generation.



Section 8: Why This Thesis Might Be Wrong — The Honest Counterarguments

Counterargument 1: The Efficiency Cascade Outpaces the Jevons Effect

DeepSeek proved models can be trained 95% cheaper. If algorithmic efficiency continues improving faster than new use cases materialize, compute demand could plateau or contract. The Jevons Paradox is a historical pattern, not a law of physics. It has failed before — energy efficiency improvements in Japan in the 1990s did not increase total energy consumption because economic stagnation suppressed demand growth. If enterprise AI adoption proves disappointingly slow, the demand response to lower AI costs may not come quickly enough to sustain GPU pricing. Assessment: Real risk for near-term NVIDIA pricing power. Partially mitigated by the Sovereign AI demand floor.

Counterargument 2: The ROI Reckoning Arrives Before the Revenue

Goldman Sachs asked the right question in 2024: is there enough AI revenue to justify $325B+ in annual CAPEX? The honest answer: not yet. OpenAI generates approximately $4B in revenue against $5B+ in annual losses. Enterprise AI adoption cycles are 24–36 months, not 6 months. If 2026–2027 brings flat cloud growth alongside continued CAPEX escalation, the 'AI premium' in equity multiples compresses regardless of the long-term thesis being correct. Assessment: The most likely near-term challenge. Does not change the structural 5-year thesis.
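One way to frame the gap is to ask how much incremental AI revenue is needed just to absorb the depreciation created by a single year of CAPEX. A back-of-envelope sketch follows; the six-year blended useful life and the 50% pre-depreciation margin on AI revenue are illustrative assumptions.

```python
# Sketch: AI revenue needed just to cover the depreciation from one year of CAPEX.
annual_capex_b = 325            # combined 2025E hyperscaler CAPEX, $B
assumed_useful_life_years = 6   # blended across GPUs, buildings, and power gear
assumed_gross_margin = 0.50     # assumed margin on AI revenue before depreciation

annual_depreciation_b = annual_capex_b / assumed_useful_life_years
breakeven_revenue_b = annual_depreciation_b / assumed_gross_margin
print(f"Annual depreciation from one year of CAPEX: ~${annual_depreciation_b:.0f}B")
print(f"AI revenue needed to cover that charge alone: ~${breakeven_revenue_b:.0f}B")
```

On these assumptions, a bit over $100 billion of incremental AI revenue per year is needed just to break even on depreciation, in an industry where the largest pure AI vendor generates roughly $4 billion. That is the quantitative form of Goldman's question.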

Counterargument 3: SMR Timelines Slip and Nuclear Fails

The nuclear power thesis relies on Small Modular Reactors delivering commercial power by 2030–2032. SMR development has a history of timeline overruns. NuScale — the only NRC-licensed SMR design in the US — cancelled its first commercial project in late 2023 due to cost overruns. If nuclear timelines slip and traditional grid capacity cannot be expanded fast enough, data center operators face genuine power constraints that create a ceiling on AI expansion. Assessment: Genuine tail risk for the power play specifically. Does not affect the chip/equipment layer thesis.

Counterargument 4: China Breaks Through the Hardware Constraint

Export controls assume hardware is the binding constraint. DeepSeek suggests it isn't — or at least, not as binding as assumed. If China achieves sufficient algorithmic efficiency to build competitive frontier AI on domestically produced chips (Huawei Ascend + SMIC 7nm), the Silicon Curtain loses strategic significance. This doesn't eliminate US advantages but undermines the case that ASML's EUV export control is a permanent structural disadvantage for Chinese AI development. Assessment: Medium-term concern for the geopolitical thesis. Does not affect the power/energy layer argument.

These are real risks. The ROI reckoning and the efficiency cascade are the two most likely near-term challenges to the thesis. But none of them alter the structural conclusion: the demand for compute, power, and AI infrastructure is secular, not cyclical. A correction would be a buying opportunity in supply chain bottleneck companies, not evidence that the underlying thesis was wrong.

Section 9: Three Scenarios for 2025–2030

Scenario A — 'The Electricity Analogy' (Probability: 40%): AI becomes a general-purpose utility embedded in every enterprise workflow, consumer product, and government system. Trigger: enterprise AI adoption accelerates in 2026; measurable GDP contribution from AI productivity; inference demand compounds quarterly. Implication: Full CAPEX thesis validated. Infrastructure layer companies re-rate toward utility-grade long-term valuations. Demand cycle extends beyond 2030.

Scenario B — 'The Enterprise Lag' (Probability: 45%): Real but uneven AI productivity gains. Enterprise adoption genuine but slower than CAPEX implies. 2026–2027 sees a 'digestion' period where CAPEX growth slows. Implication: The digestion period that follows every major infrastructure build-out. Capital deployed patiently in supply chain bottleneck companies during this period historically generates the best long-term returns. Sovereign AI demand sustains the underlying ecosystem through the digestion.

Scenario C — 'The Bubble Break' (Probability: 15%): ROI disappointment + efficiency cascade + regulatory shock triggers multi-quarter de-rating of AI equities broadly. Implication: This is a correction, not an end. AI applications already generate genuine value — this is not 2001. A correction creates entry points that 2023–2024 valuations precluded. The underlying secular demand is real and durable; the multiple was the problem, not the thesis.


Section 10: Final Perspective

The market is having the wrong debate. While analysts argue about which AI model wins, the economic surplus of this cycle is quietly accumulating in the companies that control what every AI model needs to function: the equipment to etch transistors, the memory to store weights, and the power to run inference. These are not exciting stories. They are boring, industrial, essential — which is precisely why they are underpriced.

I believe the AI Supercycle is real, secular, and durable. I also believe it is being analyzed through the wrong lens. The infrastructure layer — the seven physical layers described in this report — will generate more durable economic returns than any individual AI model company, because infrastructure moats compound while algorithmic advantages erode. The companies controlling firm baseload power adjacent to data center demand are in the structural position that oil field owners occupied in 1950. Most investors are looking at the car. The real money is in the gasoline.

"Everyone is building the railroad. Very few are watching who controls the coal." — The Financial View

DISCLAIMER: This report is produced by The Financial View for informational and educational purposes only. Nothing herein constitutes financial advice, investment advice, a recommendation to buy or sell any security, or any form of investment counsel. All data, estimates, and analysis reflect publicly available information and the author's analytical judgment as of Q2 2025. Market conditions change and information may become outdated. Past performance of any sector or company does not predict future results. Readers should conduct independent research and consult qualified professionals before making any financial decisions.
