Every time a new technology comes along, we seem to first believe that it will (a) replace all previous technologies and (b) solve all future problems. Inevitably we discover its limitations, and finally we put the new tool where it belongs: in its own niche, where it shines.
As expected, the same is now happening to large language models. The hype machine would have us believe that increasing model size leads to “emergent properties” that would bring us artificial general intelligence (AGI) in no time. The misconception arose that a 10x increase in model parameters would translate directly into a 10x increase in machine intelligence, and that this would arrive with GPT-5, or surely at the latest with GPT-6. Luckily, we have experts like Gary Marcus and Yann LeCun to remind us of the limitations of LLMs.
Sam Altman of OpenAI has been a prominent voice in pumping the hype, fueling anxiety in society and among policymakers. So why did he just now acknowledge that LLMs will not get us to AGI and that a new path of research is needed? Is it possible that GPT-5 development is not working out as hoped? Maybe he’s getting ahead of the “trough of disillusionment”? Should we see this statement in light of his quest to raise more money?
In any case, he’s owning the AI narrative for the moment, which must be pretty frustrating to the other big players.
The positive news might be that attention and investment can now shift a bit away from LLMs, so that new avenues can be explored again. As the field continues to evolve, I’m curious about others’ views on the future of AI. What do you think will be the next significant breakthrough?