Here we go again: the exodus of AI leaders has begun (and why that's your biggest warning)
Scientists at Radboud University recently sounded the alarm about the direction artificial intelligence is taking. Not because AI itself is dangerous, but because its development is not being conducted in a scientific manner. In this blog, I will share what IT organizations need to be aware of.
Big tech is setting the direction, and that direction is rather narrow. Worse, when AI is used within science itself—to write papers or analyze data, for example—the quality of the output noticeably declines.
More than a century ago, Henry Ford proved that seemingly impossible innovations are achievable as long as there is a concrete technical goal. His engineers declared a V8 engine cast from a single block of iron to be madness—until Ford made them persevere. The result changed the entire industry.
The challenge with AI is of a completely different order. We are not solving a technical problem; rather, we are ignoring the fundamental questions. Major players such as OpenAI, Anthropic, and Google invest almost exclusively in large language models (LLMs). These are clever with text, but the world does not consist solely of words.
This one-sided focus creates a bubble: each new model costs billions, delivers minimal quality improvements, and remains financially unsustainable. Meanwhile, the truly scientific breakthroughs, such as understanding context, reasoning, or causality, remain untouched.
It's a familiar pattern. Think of the dot-com bubble, or blockchain projects where billions disappeared without delivering any real value. It's always the same dynamic: investors pump money into a promise and only insiders who get out in time cash in. And the rest... are left with the mess and the losses.
The signs are now piling up with AI as well. Take one telling sign: leaders of leading companies suddenly "taking on a new challenge." Then you know what time it is: the peak is behind us and the crash follows. For investors, and for the entire sector.
If all you have is a hammer, everything looks like a nail
Large Language Models are marketed as universally applicable. As if one model can do everything: write, analyze, reason, decide. Fortunately, more and more people are realizing that this image is inaccurate. At their core, LLMs are nothing more than extremely advanced text predictors. Compare it to the word prediction on your phone, but on steroids. Useful for certain tasks, but definitely not a jack of all trades.
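The "word prediction on steroids" comparison can be made concrete. The toy sketch below is not how a real LLM works internally (real models use neural networks over trillions of tokens), but it illustrates the same core principle: predict the most likely next word from observed frequencies. The corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on trillions of tokens, but the
# underlying task is the same: predict the most likely next word.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" most often here
```

Note what is missing: there is no model of the world, no goal, no notion of truth. Only frequencies. Scaling this idea up enormously produces fluent text, not reasoning.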
Yet AI agents that rely on LLMs are being built and deployed everywhere. These are agents that supposedly make autonomous decisions and take work off our hands. The problem is that an LLM is designed to generate or transform text, not to make choices. As soon as decisions are hung on it, things go wrong. A text generator is not a decision-maker.
In fact, the name “Artificial Intelligence” is misleading. Intelligence requires reasoning ability. LLMs do not have that. And research shows that they never will. They mimic language; they do not reason. Anyone who presents them as intelligent is selling hot air.
There is another factor to consider: the so-called "general nature" of LLMs is precisely their Achilles heel. Because they have to provide an answer to everything—"I don't know" is obviously not an option—they require enormous computing power. Not to mention the growing stream of well-packaged nonsense that comes out as answers. A model that wants to respond to everything therefore structurally wastes capacity. And that makes the business case even more fragile.
So the question is not: “What can LLMs do?”
The question is: “What are they really suitable for?”
Those who fail to see the difference will fall straight into the trap of the AI bubble.
AI: high costs, underperforming — a classic bubble
You often only recognize a bubble in hindsight. But the signs are already visible with AI.
Imagine a balloon. Each investment round blows more air into it. The balloon becomes bigger, shinier, more impressive. From the outside, it seems as if there is no end to its growth. But inside, nothing has changed: it is mainly filled with air.
That is precisely the problem with LLMs.
The costs of developing and running them are astronomical, while the returns lag behind. To cover costs, subscriptions would have to cost a multiple of what companies are willing to pay. Who would pay $24,000 per year (which is what it would really cost) for a tool that at best generates text and saves some work?
Economically speaking, the signs are textbook. A bubble occurs when expectations and speculation rise faster than the actual value. Investments no longer go to proven value, but to the hope that someone else will pay even more. That was the case with the dot-com hype, the crypto wave, and now with AI.
Another sign: the technology is reaching its ceiling. New LLM versions are becoming increasingly expensive, but offer little extra quality. Economists call this “diminishing returns” – a red flag if ever there was one.
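To see why "diminishing returns" bites so hard, a back-of-the-envelope sketch helps. The numbers below are invented purely for illustration (no real training costs or benchmark scores are implied); what matters is the shape of the curve under these assumptions.

```python
# Purely hypothetical numbers: assume each new model generation costs
# 10x more to train while the quality improvement it delivers halves.
ratios = []                 # quality gain per unit of cost, per generation
cost, gain = 1.0, 10.0      # arbitrary units
for generation in range(1, 6):
    ratios.append(gain / cost)
    print(f"gen {generation}: cost {cost:>9.1f}, gain {gain:.3f}, "
          f"gain per unit cost {gain / cost:.5f}")
    cost *= 10
    gain /= 2
# Under these assumptions, every generation delivers 20x less quality
# per unit of money spent - the textbook shape of diminishing returns.
```

Change the assumed factors and the curve shifts, but as long as costs grow faster than quality, the trend is the same: each generation buys less improvement per euro.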
We know how this movie ends. As soon as investors notice the stretch is gone, the mood changes. Capital dries up, valuations collapse, and companies built on thin air fall over. Only those who can deliver real value at scale will survive.
The AI bubble will continue to float for a while, but the tension in the rubber is increasing. It's not a question of if, but when it will burst.
Quantum AI is not the future, but science fiction
There are whispers here and there that “Quantum AI” will be the salvation of Large Language Models. The idea: quantum computers calculate so much faster that they can effortlessly break through the limits of LLMs. A nice promise. The reality: anyone who says that either doesn't understand quantum computers or doesn't understand LLMs. Or wants to get rich quick.
Quantum computers are not faster versions of ordinary computers. They are built to solve very specific problems with very specific algorithms, such as integer factorization or certain optimization problems. The algorithm behind an LLM—which evaluates billions of parameters in a probabilistic model—simply does not fit that mold. Quantum and LLMs do not speak the same language.
And then there is the timeline. Even the most optimistic researchers expect that large-scale, stable quantum computers will not exist for another 15 years. Others estimate it will take decades, or even a century. And perhaps we will never solve the physical puzzles that are necessary for their development.
That makes “Quantum AI” not a visionary plan, but science fiction. Of course, companies will emerge that use the label to attract capital. That's always the case with hypes. But investing in an idea whose fundamental building blocks are still missing is nothing more than buying hot air.
In short: Quantum AI is not going to save LLMs. It's a sales pitch, not a future scenario.
The real future of AI fits on a server costing €100 per month
So, is AI just a passing trend? No, of course not. AI isn't a one-hit wonder. But the way we use AI is about to change a lot.
Today's large, comprehensive LLMs—which cost billions and demand enormous data centers—are simply not sustainable. What will remain are tailored, compact models. Models that don't have to be able to do everything, but instead do one thing very well.
Other techniques, such as machine learning, are much ‘lighter’ and more practical. They run smoothly on affordable hardware and are often more effective in solving specific problems. And that is precisely where the real future lies! Not in mega-infrastructures that promise everything. But in smart, small-scale solutions that deliver immediate value.
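As an illustration of how little machinery a narrow task can need, here is a deliberately tiny classifier: a nearest-centroid model in pure Python. The task and data are made up for the example; the point is the footprint—a few kilobytes instead of billions of parameters—not the specific application.

```python
# A deliberately tiny classifier: nearest-centroid, pure Python, no GPU.

def train(samples):
    """samples: list of (features, label). Returns one centroid per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy task: classify support-ticket urgency from two made-up features.
training = [([0.9, 0.8], "urgent"), ([0.8, 0.9], "urgent"),
            ([0.1, 0.2], "routine"), ([0.2, 0.1], "routine")]
model = train(training)
print(predict(model, [0.85, 0.9]))   # closest to the "urgent" centroid
```

A model like this trains in microseconds and runs on any hardware. For many well-defined business problems, this class of technique—not a general-purpose LLM—is the economically sensible tool.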
At Sciante, we are already doing this today. Our AI applications run on servers that cost less than a hundred euros per month. Yes, really, read that last sentence again. It all runs efficiently, quickly, and without the data risks of big tech. Your prompts and your data remain yours. No shady contracts, no hidden revenue models.
We believe this is the future of AI: small, targeted, and completely under your control. No pipe dreams, just practical applications that deliver real value.
Want to know what that looks like for your organization? Make a no-obligation appointment. We'll show you how AI in compact form can pay off: safely, affordably, and tailor-made.
Sources:
https://zenodo.org/records/17065099