The lacklustre performance of ChatGPT 4.5 provides further evidence that the era of steady pretraining scaling is over. This has many implications for AI governance.

www.tobyord.com
Inference Scaling Reshapes AI Governance — Toby Ord

What's more likely is that OpenAI simply lost much of its talent to other companies and startups.
Talent alone can't exponentially improve something that has fundamentally peaked. If they're lucky, they'll conclude "this is the limit of this kind of model; we need a different architecture from the ground up." Otherwise they'll just keep trying and failing.
It's like fitting a linear model to points on a parabola: you can keep tweaking it and improve the results a bit each time, but there's a hard limit you can't overcome unless you change how the model itself works.
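That analogy can be made concrete with a quick numerical sketch (toy data I made up, assuming NumPy): a straight-line fit to points drawn from y = x² gets stuck at an irreducible error floor, while a quadratic fit, i.e. changing the model family, removes the error entirely.

```python
import numpy as np

# Toy data: points sampled from a parabola (no noise).
x = np.linspace(-3, 3, 61)
y = x ** 2

# Best linear fit (degree 1) vs. a quadratic fit (degree 2).
lin = np.polyval(np.polyfit(x, y, 1), x)
quad = np.polyval(np.polyfit(x, y, 2), x)

lin_rmse = np.sqrt(np.mean((y - lin) ** 2))
quad_rmse = np.sqrt(np.mean((y - quad) ** 2))

# The linear fit plateaus at a large residual no matter how it's tuned;
# the quadratic fit drives the error to (numerically) zero.
print(f"linear RMSE:    {lin_rmse:.3f}")
print(f"quadratic RMSE: {quad_rmse:.6f}")
```

No amount of optimisation effort moves the linear model past its floor; only switching model families does, which is the point of the analogy.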
Current AI is just a statistical model; it's not intelligent. They built a model that takes data and generates similar data, they've exhausted the resources for training it, and it's hit its limits. Unless it can actually think and extrapolate coherently from the data it has, it's not going to improve.
Yes, I think that's the problem here. Some people used to believe that more scaling would be enough for reasoning to emerge, but that hasn't happened.
AI models show logarithmic progress: the resources needed for each further performance gain grow rapidly. If you have to double the number of processors and the amount of data to add another few percent, you eventually run out of data and money. This was true of previous systems, even chess engines; it was expected here too, and it will be true of the successor to LLMs.
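To illustrate the resource arithmetic in that comment (a toy model with made-up numbers, not a real scaling law): if each fixed-size performance increment requires doubling compute, then performance grows like log₂ of compute, and the marginal cost of the next increment grows exponentially.

```python
import math

# Toy model: performance ~ k * log2(compute), so every doubling of
# compute buys the same fixed increment (+k). Numbers are illustrative.
def performance(compute, k=10.0):
    return k * math.log2(compute)

base = 1.0
for doublings in range(5):
    c = base * 2 ** doublings
    print(f"compute x{int(c):>2} -> performance {performance(c):5.1f}")

# Each row adds the same amount of performance, but the compute bill
# for the next row doubles: 1, 2, 4, 8, 16, ...
```

Under this assumption, the fifth increment alone costs as much compute as all four before it combined, which is why the commenter expects the money and data to run out.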