Is AI’s hype cycle leading us into a trough of disillusionment? Dive into the reality behind GPT-5’s launch and its impact on the tech world.
A recent MIT report found that 95 percent of generative-AI deployments in business settings generated “zero return.”
i know it’s popular to be very dismissive, but a lot of “AI” has already been integrated into normal workflows. AI autocomplete in development text editors and software keyboards isn’t going away, and neither are question-answering bots. speech-to-text, “smart eraser” tools, subject classification, signal-processing kernels like DLSS and frame generation, and so many more will be with us and improving for a long time. Transformers, chips optimized for machine learning, and the rest of the ML field aren’t going anywhere either. the comparison to NFTs is either angst or misunderstanding.
Nearly all of those can run just fine on-device. I think the part of the bubble that’s ripe to burst is the gigantic gigawatt-scale data centers; we wouldn’t even have the power to run them all if every one under construction were completed. The current trajectory is not sustainable, and the more contact it has with reality, the harder that will be to ignore.
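To make the on-device point concrete: something like Whisper’s tiny model does speech-to-text comfortably on a laptop CPU, no data center required. A minimal sketch, assuming the Hugging Face transformers package (the model choice and audio file are illustrative, not prescriptive):

```python
# Minimal sketch: on-device speech-to-text with a small open model.
# Assumes `pip install transformers torch` plus ffmpeg for audio decoding;
# "openai/whisper-tiny" (~39M parameters) is one plausible small-model choice.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-tiny",
    device=-1,  # -1 = CPU; a model this size runs fine without a GPU
)

result = asr("meeting_recording.wav")  # hypothetical local audio file
print(result["text"])
```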
That’s the issue: right now “AI” means LLMs, not deep learning/ML.
The Deep Learning/ML stuff will keep chugging along.
but LLMs do represent a significant technological leap forward. i also share the skepticism: we haven’t “cracked AGI”, and a lot of these products are dumb. i think another comment made a better analogy to the dotcom bubble.
ETA: i’ve been working in ML engineering since 2019, so i sometimes forget most people didn’t even hear about this hype train until ChatGPT, but i assure you inference hardware and dumb products were already picking up steam back then (Tesla FSD being a classic example).
We definitely haven’t cracked AGI; that’s without a doubt.
But yeah, LLMs are big (I’d say Transformers were the real breakthrough). My point, though, was that Deep Learning is the underlying technology driving all of this, and we certainly haven’t run out of ideas in that space even if LLMs may be hitting a dead end.
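To underline how compact that breakthrough actually is: the core operation of a Transformer, scaled dot-product self-attention, fits in a few lines of numpy. A toy sketch (shapes, names, and random weights are all illustrative, not any particular model):

```python
# Toy sketch of scaled dot-product self-attention (Vaswani et al., 2017),
# the core operation behind Transformers. All dimensions are illustrative.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_model) projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # every token mixes in every other

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                           # 4 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)            # -> (4, 8)
```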
I feel like LLMs didn’t hit a deadend so much as people started trying to use them in completely inappropriate applications.
I think it’s a mixture of that and the fact that when OpenAI saw how drastically throwing more data and compute at the models improved them, they assumed they’d keep seeing jumps like that.
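That expectation runs into power-law math: published scaling-law fits model loss as L(N, D) = E + A/N^α + B/D^β, so each constant-sized improvement costs an exponentially larger amount of data. A rough sketch using the approximate constants from the Chinchilla paper (Hoffmann et al., 2022):

```python
# Rough sketch: Chinchilla-style scaling law, L(N, D) = E + A/N**alpha + B/D**beta
# (Hoffmann et al., 2022). Constants below are the paper's approximate fit.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(N, D):
    """Predicted training loss for N parameters and D training tokens."""
    return E + A / N**alpha + B / D**beta

# Each 10x increase in data buys a smaller absolute drop in loss:
for D in (1e9, 1e10, 1e11, 1e12, 1e13):
    print(f"D = {D:.0e} tokens -> predicted loss ~ {loss(70e9, D):.3f}")
```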
However, we now know that bad benchmarks were overstating how steep the improvements were, and, much like with autonomous vehicles, solving 90% of the problem is still a far cry from solving 100%.
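And the 90%-versus-100% gap compounds: if each step of a multi-step task succeeds 90% of the time, end-to-end reliability collapses as tasks get longer. A toy calculation:

```python
# Toy arithmetic: per-step reliability compounds multiplicatively.
# A step that succeeds 90% of the time, chained n times, succeeds 0.9**n overall.
for n in (1, 5, 10, 20, 50):
    print(f"{n:>2} steps at 90% each -> {0.9**n:6.1%} end-to-end success")
```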