When you sum up what every company in a sector needs to earn to justify current valuations, you sometimes discover an impossible number. The aggregate imputed revenues exceed the total market size. Not by a little. By multiples.
This is the crowding gap, and it's the most practical tool Cornell and Damodaran offer us. Not as a trading signal. As a regime flag. When the gap exceeds one, you know the sector is collectively overpriced. You don't know when the correction comes. But you know the direction is overdetermined.
The beauty of the crowding gap is its simplicity. You can calculate it with public data, basic DCF assumptions, and reasonable market size estimates. And once you see a gap above one, it changes everything about how you should position: time horizons, position sizing, sector rotation, and exit discipline.
How to calculate the crowding gap
The methodology requires three steps and reasonable assumptions. Nothing exotic. Nothing requiring proprietary data or complex models.
Step one: Calculate imputed revenues for each major company in the sector. Start with current market capitalization. Work backwards using a discounted cash flow framework to determine what Year 10 revenues would justify today's price. This requires assumptions about cost of capital (Cornell and Damodaran used 9%), target operating margins (they used 20% or current margin, whichever is higher), sales-to-capital ratios, and terminal growth rates.
The key is to use generous assumptions that bias toward making valuations look reasonable. You're not trying to prove companies are overvalued individually. You're trying to see if they're overvalued collectively. So assume low discount rates, high margins, and efficient capital deployment. If the sector still shows crowding with favorable assumptions, the problem is real.
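The step-one reverse DCF can be sketched in a few lines of Python. This is a minimal illustration, not Cornell and Damodaran's exact model: the constant-CAGR revenue path, the reinvestment treatment, and the parameter defaults are all simplifying assumptions, and figures are in billions.

```python
def imputed_year10_revenue(market_cap, current_revenue, wacc=0.09,
                           margin=0.20, tax_rate=0.25,
                           sales_to_capital=1.5, terminal_growth=0.02):
    """Solve for the Year-10 revenue that makes a simple 10-year DCF
    equal to today's market cap. All parameters are illustrative."""

    def dcf_value(rev10):
        # Assume revenue grows at a constant CAGR from today to Year 10.
        cagr = (rev10 / current_revenue) ** (1 / 10) - 1
        pv, prev_rev = 0.0, current_revenue
        for year in range(1, 11):
            rev = current_revenue * (1 + cagr) ** year
            after_tax_op = rev * margin * (1 - tax_rate)
            reinvestment = (rev - prev_rev) / sales_to_capital
            pv += (after_tax_op - reinvestment) / (1 + wacc) ** year
            prev_rev = rev
        # Terminal value: steady-state cash flow growing at terminal_growth
        # (terminal reinvestment ignored for simplicity).
        terminal = after_tax_op * (1 + terminal_growth) / (wacc - terminal_growth)
        return pv + terminal / (1 + wacc) ** 10

    # Bisect for the Year-10 revenue that reproduces the market cap.
    lo, hi = current_revenue, current_revenue * 1000
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if dcf_value(mid) < market_cap else (lo, mid)
    return (lo + hi) / 2
```

Because the assumptions are deliberately generous, any crowding that survives them is real, which is the point of step one's bias-toward-reasonable rule.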
Step two: Sum the imputed revenues across all significant players. Include public companies at market cap. Include private companies at their most recent funding round valuations. Include the sector-specific value embedded in large conglomerates. For AI, this means isolating the AI-specific infrastructure value in Microsoft, Google, Amazon, and Meta, not their entire market caps.
This sum represents what the market is collectively pricing these companies to achieve in revenue ten years out. It's the aggregate expectation embedded in current valuations.
Step three: Estimate the realistic total addressable market. Use bottom-up analysis of end-user spending patterns. Use top-down analysis from GDP growth and historical sector penetration curves. Triangulate to a reasonable range. Be generous here too. Assume strong adoption, growing budgets, market expansion beyond current users.
The crowding gap is the ratio: summed imputed revenues divided by realistic TAM. When this exceeds 1.0, the sector is arithmetically overpriced. The companies collectively expect to capture more revenue than will exist in the market.
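Putting the three steps together is simple arithmetic. A sketch using the foundation model layer's figures; the per-company split here is hypothetical, and only the $900B aggregate and the $250B TAM come from this analysis:

```python
# Hypothetical per-company split of imputed Year-10 revenues ($B).
# Only the $900B layer total and the $250B TAM appear in this article.
imputed = {"Company A": 400, "Company B": 300, "Company C": 200}
realistic_tam = 250  # $B

# Step three ratio: above 1.0 means the sector is arithmetically overpriced.
gap = sum(imputed.values()) / realistic_tam
print(f"Crowding gap: {gap:.1f}x")  # 3.6x
```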
The AI crowding gap in late 2025
Apply this framework to AI infrastructure as of November 2025, and the crowding is unmistakable. The calculations below use public market caps, recent private funding rounds, and estimated AI-specific value within conglomerate market caps.
Note: Consider the figures here "back of the napkin" calculations.
Compute Infrastructure Layer
(Chips, data centers, hyperscaler infrastructure)
Aggregate value: ~$4.5 trillion
Imputed annual revenues by 2035: $1.2 trillion
Realistic annual spend on AI compute infrastructure by 2035: $600 billion
Crowding gap: 2.0x
| Metric | Size | Details | Calculation Logic |
|---|---|---|---|
| Aggregate Valuation | $4.5T | Nvidia market cap reached ~$4.5T in Nov 2025. Broadcom (~$1.6T) and AMD (~$400B) add further weight. | Sum of Market Caps (Nvidia + AI % of Broadcom/AMD) |
| Imputed 2035 Revenue | $1.2T | Consistent with reverse DCF. To justify $4.5T at 9% WACC, the sector must generate ~$1.2T/yr. | $4.5T Value → requires ~$1.2T revenue in Yr 10 |
| Realistic 2035 TAM | $600B | Analyst forecasts for "AI Hardware" and "AI Chips" range from $473B to $769B by 2035. | Midpoint of analyst forecasts ($473B - $769B) |
| Crowding Gap | 2.0x | The market expects 2x more revenue ($1.2T) than the realistic market size ($600B). | $1.2T Imputed Rev / $600B TAM = 2.0x |
Foundation Model Layer
(OpenAI, Anthropic, model divisions of large tech companies)
Aggregate implied value: ~$3 trillion
Imputed annual revenues by 2035: $900 billion
Realistic annual spend on foundation model API access and licensing by 2035: $250 billion
Crowding gap: 3.6x
| Metric | Size | Details | Calculation Logic |
|---|---|---|---|
| Aggregate Valuation | $3.0T | OpenAI valued ~$500B; Anthropic ~$229B. Implied "AI value" of Microsoft/Google/Meta accounts for remainder. | Private Valuations ($730B) + Big Tech AI Premium (~$2.3T) |
| Imputed 2035 Revenue | $900B | To justify $3T today, these companies must collectively earn ~$900B/yr in pure model revenue. | $3.0T Value → requires ~$900B revenue in Yr 10 |
| Realistic 2035 TAM | $250B | Forecasts for "Generative AI Models" & "Enterprise LLM" markets range from $71B to $150B. $250B is a generous upper bound. | Estimate set above consensus ($150B) to bias toward generosity. |
| Crowding Gap | 3.6x | Market is pricing 3.6x the available revenue. (Gap is 6.0x if using stricter $150B TAM). | $900B Imputed Rev / $250B TAM = 3.6x |
Application Layer
(AI SaaS, copilots, agents, vertical solutions)
Aggregate value: ~$1.2 trillion
Imputed annual revenues by 2035: $400 billion
Realistic annual spend on AI applications by 2035: $150 billion
Crowding gap: 2.7x
| Metric | Size | Details | Calculation Logic |
|---|---|---|---|
| Aggregate Valuation | $1.2T | Top AI startups alone exceeded $1.2T market cap in Nov 2025. Excludes legacy SaaS AI value. | Sum of AI-native startup valuations. |
| Imputed 2035 Revenue | $400B | Reverse DCF on $1.2T valuation requires ~$400B in future annual revenue. | $1.2T Value → requires ~$400B revenue in Yr 10 |
| Realistic 2035 TAM | $150B | Some "GenAI Software" forecasts are ~$150B. Broader "Agentic AI" forecasts reach $450B. | $150B aligns with specific "Generative AI" forecasts. |
| Crowding Gap | 2.7x | Using the $150B TAM yields 2.7x. Using broader $450B TAM yields 0.9x. | $400B Imputed Rev / $150B TAM = 2.7x |
Energy and Infrastructure Layer
(Power generation, cooling, specialized data centers)
Aggregate value: ~$1 trillion
Imputed annual revenues by 2035: $200 billion
Realistic annual revenue opportunity from AI infrastructure power and cooling by 2035: $250 billion
Crowding gap: 0.8x
| Metric | Size | Details | Calculation Logic |
|---|---|---|---|
| Aggregate Valuation | ~$1.0 Trillion | Sum of Eaton (~$140B), Vertiv (~$65B), and AI-exposed Utilities/Industrials approaches $1T. | Market Cap of "AI Power Basket" & Data Center Industrials. |
| Imputed 2035 Revenue | $200B | $1T valuation implies ~$200B future revenue requirement. | $1.0T Value → requires ~$200B revenue in Yr 10 |
| Realistic 2035 TAM | $250B | Data center power demand (~1,300 TWh ≈ $130B) plus the cooling market (~$73B) gives $200B+, with services making up the remainder. | Power ($130B) + Cooling ($73B) + Services |
| Crowding Gap | 0.8x | Sector is priced to capture less revenue ($200B) than is available ($250B). | $200B Imputed Rev / $250B TAM = 0.8x |
The foundation model layer shows the most acute crowding. A 3.6x gap means the market is pricing these companies for nearly four times the realistic revenue pool. This is classic winner-take-most thinking applied to every company simultaneously. The market is pricing OpenAI as if it will dominate, and Anthropic as if it will dominate, and Google's Gemini as if it will dominate, and Meta's Llama, and Microsoft's integrated offerings. Arithmetically, most of these expectations will fail.
The energy layer is the only part of the stack without crowding. In fact, it shows a gap below 1.0, suggesting this layer is underpriced relative to realistic demand. This is exactly what happens late in Perez's Installation phase. The market focuses on exciting, visible infrastructure while underweighting the mundane constraints that will ultimately matter most.
What the gap tells you and what it doesn't
The crowding gap is not a timing tool. For example, Cornell and Damodaran calculated their online advertising crowding gap in August 2015. The sector performed well into 2016 and beyond. Their cannabis analysis in October 2018 showed obvious crowding, but stocks kept rising for nearly a year.
The gap tells you direction, not timing. It tells you that collective expectations exceed what's arithmetically possible. Some combination of the following must occur: market size estimates are drastically wrong, margin assumptions are drastically optimistic, or the sector will reprice. Usually it's the third.
Think of the crowding gap as measuring pressure in a closed system. When pressure exceeds the container's rating, you know a rupture is inevitable. You don't know if it happens in a week or a year. You don't know which weld fails first. But you know the direction.
This makes the gap useless for short-term trading but invaluable for risk management. If you're sizing a sector bet and the crowding gap exceeds one, you're making a momentum bet, not a fundamental bet. The math doesn't work. You're betting on continued capital inflows and narrative strength, not on companies achieving the revenue scales their valuations imply.
How to use the gap for portfolio construction
The crowding gap changes how you should structure sector exposure across five dimensions: position sizing, time horizons, layer selection, company selection, and rebalancing triggers.
Position sizing. Use the gap as a multiplier on your normal position size. At a 1.0x gap, use normal sizing. At 1.5x, cut position size by 25%. At 2.0x, cut by 40%. At 3.0x, cut by 60%. These are heuristics, not rules. The foundation model layer at a 3.6x gap should be tiny positions, not core holdings. You're making venture-style bets on which one or two companies become the platform, not diversified bets on sector growth.
Time horizons. Use the gap to set holding period assumptions. At 1.0x gap, use ten-year DCF horizons. At 1.5x gap, reduce to seven years. At 2.0x gap, reduce to five years. At 3.0x gap, you're making a three-year momentum trade with explicit exit criteria. The AI compute infrastructure layer at 2.0x gap is a mid-cycle trade, not a buy-and-hold foundation position.
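The sizing and horizon heuristics can be encoded mechanically. A sketch that linearly interpolates between the stated breakpoints; the interpolation itself is an assumption, since only the breakpoints are given above:

```python
# Breakpoints from the heuristics above: (gap, size multiplier, horizon in years).
RULES = [(1.0, 1.00, 10), (1.5, 0.75, 7), (2.0, 0.60, 5), (3.0, 0.40, 3)]

def position_rules(gap):
    """Return (size multiplier, DCF horizon in years) for a crowding gap.
    Linearly interpolates between breakpoints; clamps beyond the ends."""
    if gap <= RULES[0][0]:
        return RULES[0][1], RULES[0][2]
    if gap >= RULES[-1][0]:
        return RULES[-1][1], RULES[-1][2]
    for (g0, s0, h0), (g1, s1, h1) in zip(RULES, RULES[1:]):
        if g0 <= gap <= g1:
            w = (gap - g0) / (g1 - g0)
            return s0 + w * (s1 - s0), h0 + w * (h1 - h0)

size, horizon = position_rules(2.0)
print(size, horizon)
```

At a 2.0x gap this returns a 0.60 size multiplier and a five-year horizon, matching the compute-infrastructure prescription above.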
Layer selection. Rotate toward uncrowded layers and away from crowded ones. The energy and infrastructure layer at 0.8x gap offers the best risk-reward in the entire AI stack right now. These are the companies building power generation capacity, grid interconnects, cooling systems, and specialized facilities. They'll capture value regardless of which foundation models or applications win.
Company selection. Within crowded layers, concentrate on companies with the strongest competitive moats and most defensible market positions. In the foundation model layer, this means focusing on companies with unique data advantages, superior model performance, or strong distribution channels. Avoid broadly diversified positions across multiple competitors when the sector gap exceeds 2.0x.
Rebalancing triggers. Watch for gap compression, which signals that repricing is underway. If the foundation model layer gap falls from 3.6x to 3.0x because valuations are declining, reduce exposure by 20%. Don't wait for the gap to reach 1.0x or try to buy the bottom. The compression itself is the signal that reality is reasserting.
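The rebalancing trigger can be mechanized the same way. A sketch: the 20% trim and the valuation-driven condition follow the rule above, while the 0.5x compression threshold is an assumption.

```python
def compression_trim(prev_gap, curr_gap, valuations_declining,
                     threshold=0.5, trim=0.20):
    """Return the fraction of exposure to cut when the gap compresses.

    Fires only when the gap falls by at least `threshold` AND the
    compression is driven by falling valuations, not a bigger TAM.
    """
    if valuations_declining and (prev_gap - curr_gap) >= threshold:
        return trim
    return 0.0

# The foundation model example above: gap falls from 3.6x to 3.0x.
print(compression_trim(3.6, 3.0, valuations_declining=True))   # 0.2
print(compression_trim(3.6, 3.4, valuations_declining=True))   # 0.0
```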
Building a crowding gap monitor
The practical question is how to operationalize this into ongoing portfolio management. This requires a quarterly monitoring framework.
- Define sector boundaries clearly and consistently.
- Establish standard DCF assumptions. Cost of capital, operating margins, sales-to-capital ratios, terminal growth rates.
- Build a TAM model with multiple approaches. Bottom-up from end-user budgets and adoption curves. Top-down from GDP growth and historical penetration rates.
- Calculate gaps quarterly for each layer and major company. Track the time series.
- Set position size and time horizon rules mechanically based on gap levels.
- Monitor catalysts that might trigger narrative shifts. Disappointing earnings from sector leaders. Margin compression. Adoption slowdowns.
- Execute rebalancing rules systematically when gaps compress.
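The checklist above can be sketched as a small quarterly monitor. The structure is illustrative scaffolding, not a production system; the layer figures are this article's late-2025 numbers, and the compression threshold is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class LayerReading:
    name: str
    imputed_revenue: float  # summed imputed Year-10 revenue, $B
    realistic_tam: float    # realistic Year-10 TAM, $B
    history: list = field(default_factory=list)  # prior quarterly gaps

    @property
    def gap(self):
        return self.imputed_revenue / self.realistic_tam

    def record_quarter(self):
        self.history.append(self.gap)

    def compressing(self, threshold=0.3):
        # True when the latest gap fell by at least `threshold` versus
        # the prior quarter (the threshold value is an assumption).
        return len(self.history) >= 2 and \
            self.history[-2] - self.history[-1] >= threshold

# Late-2025 readings from this article ($B):
layers = [
    LayerReading("Compute infrastructure", 1200, 600),
    LayerReading("Foundation models", 900, 250),
    LayerReading("Applications", 400, 150),
    LayerReading("Energy & enablers", 200, 250),
]
for layer in layers:
    layer.record_quarter()
    print(f"{layer.name}: {layer.gap:.1f}x")
```

Each quarter, recompute imputed revenues and TAMs, call `record_quarter`, and feed the resulting gaps into whatever sizing and rebalancing rules you have set mechanically in advance.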
When arithmetic meets psychology
The crowding gap persists before it corrects because arithmetic doesn't immediately override psychology. Understanding why helps you position better.
Entrepreneurs and VCs are systematically overconfident. Each believes they'll win even though arithmetic says most will lose. This is selection bias. The people who start companies and fund them are those who believe, against odds, in their own exceptionalism. Without this overconfidence, transformative innovation doesn't get funded. But the aggregate effect is collective overpricing.
The pricing game dominates during Perez's Installation period. Investors aren't valuing cash flows. They're predicting what others will pay tomorrow. As long as momentum persists and capital keeps flowing, overpricing expands. The greater fool theory works until suddenly it doesn't.
Sector-level gap analysis is uncommon. Most investors analyze companies individually using peer comparables. They miss the aggregate impossibility. Even sophisticated investors often don't sum imputed revenues across competitors and compare to TAM. The exercise feels academic until it's not.
Using the gap without timing the turn
The key to using the crowding gap effectively is accepting that you can't time corrections. You'll exit crowded sectors before the peak. You'll rotate to uncrowded sectors before it's obvious they're the next winners. You'll feel early. That's the cost of using arithmetic instead of trying to predict psychology.
But you'll also avoid the catastrophic losses that come from holding crowded sectors through repricing. You'll compound in uncrowded sectors that offer better risk-reward. And you'll sleep better knowing your portfolio isn't dependent on continued narrative strength in sectors where the math doesn't work.
The crowding gap in AI as of late 2025 is clear. Foundation models at 3.6x. Applications at 2.7x. Compute infrastructure at 2.0x. Energy and enablers at 0.8x. This tells you where to lean and where to lean away. It tells you which positions to size small and which to size large. It tells you where to use short time horizons and where to compound.
It doesn't tell you whether repricing happens in 2026 or 2028. It doesn't tell you whether Nvidia falls 30% or keeps rising another 50% before falling. It doesn't tell you which specific foundation model company becomes the platform. The timing is unknowable. Position accordingly.