GPT-5 and the End of 'We Just Need Scale'

In a post two months ago, Sam Altman stated:

We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.

Fast forward to yesterday: OpenAI finally unveiled the long-awaited GPT-5 model.

After all the rumors, the "AGI achieved internally" memes, and the Death Star, what we got was... underwhelming.

[Image: GPT-5... supposedly]

In benchmarks, it seems to be just okay, on par with other models like Claude Opus, Gemini, and even Grok 4, which entered the race later. We'll have to wait until the independent benchmarks are updated, but the fact that they couldn't claim significant improvements even in their own measurements speaks for itself.

There are neither audio nor video inputs.

Its biggest advantage seems to be the competitive pricing, though it's pretty clear they're burning investors' money on this.

So what exactly happened?

Exponential curve or diminishing returns

There are two possible explanations of what happened.

The first explanation relates to the thesis "we just need more scale." This thesis implies that we have already developed a clear path to AGI or ASI, and all we need is more computing power.

More computing power leads to more progress, placing us on an exponential growth curve.

This thesis alone recently raised Nvidia to a $4 trillion valuation.

But what we see in reality is that tech companies are burning through billions of dollars on computing now, without moving the needle much since GPT-4.

This is not exponential growth; rather, it's logarithmic. The more money and computing we spend, the more marginal the results become.
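To make the diminishing-returns point concrete, here is a minimal sketch. It assumes, purely for illustration, that benchmark score grows with the logarithm of training compute; the function name, constants, and compute budgets are made up and are not real OpenAI figures.

```python
import math

def score(compute_flops: float, a: float = 8.0, b: float = -150.0) -> float:
    """Hypothetical benchmark score under an assumed log-scaling relation.

    a and b are invented constants for illustration only.
    """
    return a * math.log10(compute_flops) + b

# Each budget is 10x the previous one (hypothetical training compute, in FLOPs).
budgets = [1e23, 1e24, 1e25, 1e26]
prev = None
for c in budgets:
    s = score(c)
    gain = "-" if prev is None else f"{s - prev:.1f}"
    print(f"compute={c:.0e}  score={s:.1f}  gain_vs_prev={gain}")
    prev = s

# Under this assumption, every 10x jump in compute buys the same fixed gain,
# so each additional point of improvement costs exponentially more money.
```

Under that assumed relation, the curve of score versus dollars spent flattens out: the spending grows exponentially while the visible progress stays roughly linear, which is what "diminishing returns" means in practice.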

The second explanation is that it's just OpenAI that has lost its way. Since the infamous coup, a lot of talent has left the company, starting with Ilya Sutskever and continuing with recent reports of top researchers leaving for Meta.

And perhaps with the brain drain, the company has just lost its ability to innovate.

I personally think that both of these explanations have played their part.

What's next

I think the next several months will be defining for the industry. We'll have to wait for the next generation of models from Google, X.ai, and Anthropic before we can make an educated guess about whether we have really hit a wall or if it's only OpenAI that has lost its way.

In either case, I believe we need to stop chasing generalized AI. Instead, we should focus on specialized models and real-world applications in medicine, robotics, and other fields.

#AI #Public #Technology
