In this post I’ll address some important questions raised in previous articles that fell outside the scope of those articles.

Suppose that we do see rapid advances in AI capabilities in 2026, and that this does lead to a lot of algorithmic progress. Will this actually be sufficient to rapidly shrink the time-horizon doubling time? Or is AI progress largely the result of throwing more compute at the problem, a process which can’t be sped up much without more hardware coming online? On that view, you do an expensive training run, you see some results, and that tells you what worked and what didn’t. An empirical process.

But there’s a lot of evidence against this view. Large breakthroughs produce massive efficiency gains, and they occur at the conceptual level rather than as the result of throwing more compute at training a better model. Smaller innovations occur constantly, and many do not require expensive experimentation to verify. There’s no evidence that this kind of low-hanging fruit has been exhausted. Nor is the problem space so simple that we should expect diminishing returns on intelligence; we should expect the opposite. As models reach the level of expert human researchers in certain important areas (and progress from there), we should expect gains like those we have seen before: dramatic, effective, impactful, and arriving at an ever-increasing pace. Of course, as we scale in parallel, returns will be nonlinear but bountiful; as we scale in intelligence, returns will be superlinear.

https://epoch.ai/blog/algorithmic-progress-in-language-models
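The split between hardware and algorithms above can be made concrete with a toy calculation. The sketch below treats effective training compute as physical compute times algorithmic efficiency, each growing exponentially. The specific doubling times are illustrative round numbers in the spirit of Epoch-style estimates, not figures from the linked post, and the function name is my own.

```python
# Toy model: effective compute = physical compute * algorithmic efficiency.
# Doubling times are illustrative assumptions, not measured values:
#   physical compute assumed to double every 12 months,
#   algorithmic efficiency assumed to double every 8 months.

def effective_compute_multiplier(months, hw_doubling=12.0, algo_doubling=8.0):
    """Multiplier on effective training compute after `months` months."""
    hardware = 2 ** (months / hw_doubling)      # more chips
    algorithms = 2 ** (months / algo_doubling)  # better methods
    return hardware * algorithms

# The combined doubling time d satisfies 1/d = 1/12 + 1/8, i.e. d = 4.8 months,
# so effective compute grows much faster than either factor alone.
for months in (12, 24):
    print(months, round(effective_compute_multiplier(months), 1))
```

The point of the decomposition is that even if the hardware term is fixed in the short run, speeding up the algorithmic term alone still compounds the overall rate, which is what the argument above turns on.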