As I reflect on 2025, there are perhaps only 4 things I can say about it with certainty. Not a day has gone by without:
From my vantage point, this combined body of hype is the real danger in this situation. Certainly in the UK, it seems like companies are paralyzed by uncertainty, combined with 20 years of building regulation upon regulation.
Point 4 has been particularly prominent during the last month, with Michael Burry (the finance expert who predicted the 2008 crash with a big short, laying the groundwork for Christian Bale to play him in The Big Short) featuring in the headlines. Burry recently placed a ~$912m short against NVIDIA and Palantir, perhaps the two biggest beneficiaries of the AI boom.
Then there was this interview with Gil Luria, someone with an impressive background in both finance and technology, who seems like a great guy:
https://www.youtube.com/watch?v=DS5IIClBA9E&list=TLPQMjIxMTIwMjWUDsCl1lVk9w&index=9
It’s a long interview so I’ll summarize it here, but Gil makes a compelling case, building on Burry’s endless diatribes over the last year, that all of the hyperscalers (Microsoft, Google, ~OpenAI, etc.) have misrepresented the value of their chips.
Early in 2025, Burry did some ‘back of the napkin’ math which essentially concluded there was no possible way the hyperscalers could achieve ROI on these chips. His calculations focused on depreciation, the accounting practice of spreading an asset’s cost over its useful life.
Burry argued that since the hyperscalers spread this depreciation over 5 years, they’d barely recoup the cost of the chip itself in that time; we must therefore assume the asset is loss-making after that period.
In October, Burry revised this further, on the basis that NVIDIA’s roughly two-yearly chip iterations, each claiming a ~10x improvement, mean the real depreciation period is closer to 3 years in practice. Burry was so convinced of this that he placed his gigantic short on the two darlings of AI, eager to prove he isn’t a one-trick pony.
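To make the arithmetic concrete, here’s a rough sketch of the straight-line depreciation math. All figures are hypothetical; Burry’s actual inputs aren’t public in this detail:

```python
# A rough sketch of Burry's depreciation argument, using straight-line
# depreciation. All figures are hypothetical, for illustration only.

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: spread an asset's cost evenly over its life."""
    return cost / useful_life_years

gpu_cost = 30_000  # hypothetical price of one data-centre GPU, in dollars

five_year = annual_depreciation(gpu_cost, 5)   # the schedule hyperscalers book
three_year = annual_depreciation(gpu_cost, 3)  # the life Burry says is realistic

print(f"5-year schedule: ${five_year:,.0f}/year")   # $6,000/year
print(f"3-year schedule: ${three_year:,.0f}/year")  # $10,000/year
print(f"Expense understated by: ${three_year - five_year:,.0f}/chip/year")
```

The point isn’t the exact figures; it’s that shortening the assumed useful life from 5 years to 3 materially raises the annual expense each chip has to earn back before it contributes any ROI at all.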
NVIDIA’s Jensen Huang hit back, claiming that even their Turing-generation T4s from 2018 are at 100% utilization.

NVIDIA’s own chart claims a 350x energy reduction over 8 years, spanning Pascal (2016), Volta (2017), Ampere (2020), Hopper/Ada (2022), Blackwell (2024) and Vera Rubin (~2026). Turing (2018) is suspiciously missing from the visualization… surely not due to massaging the numbers.
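Taken at face value, that headline number implies a steep compound curve. A quick back-of-the-envelope calculation, assuming the roughly two-year cadence listed above:

```python
# What a claimed 350x energy-efficiency gain over 8 years implies,
# assuming the roughly two-year generation cadence listed above.

total_gain = 350
years = 8

per_year = total_gain ** (1 / years)        # ~2.08x every year
per_generation = total_gain ** (2 / years)  # ~4.33x per two-year generation

print(f"{per_year:.2f}x per year, {per_generation:.2f}x per generation")
```

In other words, roughly doubling efficiency every single year for eight years straight.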
In the interview with Big Tech Podcast, Gil argued that there is a difference between ‘utilization’ and ‘valuation / ROI / depreciation’. By that logic, the millions of Turing-, Ampere- and Hopper-generation chips still in use must all be loss-making, and by implication the hyperscalers are generously funding the world’s travel and cooking tips via its favourite chatbot.
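A minimal sketch of Gil’s distinction, again with made-up numbers: a chip can be 100% utilized and still lose money, because utilization says nothing about what the compute sells for relative to what it costs to provide.

```python
# Utilization vs ROI: a chip that is 100% busy can still be loss-making
# if the compute sells for less than it costs to provide.
# All numbers are hypothetical, for illustration only.

HOURS_PER_YEAR = 24 * 365

utilization = 1.00              # Jensen's claim: the T4s are fully busy
revenue_per_hour = 0.30         # hypothetical spot price for old-generation compute
depreciation_per_hour = 0.25    # hypothetical remaining book value, spread hourly
power_and_opex_per_hour = 0.15  # hypothetical electricity, cooling, hosting

annual_revenue = utilization * HOURS_PER_YEAR * revenue_per_hour
annual_cost = HOURS_PER_YEAR * (depreciation_per_hour + power_and_opex_per_hour)

print(f"Annual revenue: ${annual_revenue:,.0f}")   # $2,628
print(f"Annual cost:    ${annual_cost:,.0f}")      # $3,504
print(f"Profitable?     {annual_revenue > annual_cost}")  # False: busy, but losing money
```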
Blackwell chips will have brought NVIDIA well over a trillion dollars over their lifecycle, pushing its market cap up by $2tn in just 2 years. Luria argues that the T4s’ utilization is driven by a temporary market condition: an insatiable appetite for AI compute.