Artificial Stupidity: welcome to The Matrix

I recently posted something on Facebook. In a group I've been hanging out in for years. With a mention of someone I know personally and am connected to. Nothing exciting: a request to send me some non-public information via DM. I've made posts like this before.

Within a second: rejected. Account locked. For a week. W*F?!

Reason? I was allegedly sending “spam” to strangers. Only after I asked further questions did that come out. I filed an objection. And within five seconds, everything was reversed. With apologies. By the exact same machine that had just condemned me.

This is exactly where we are now.

More and more decisions are being made by AI. Not because it's better. Because it's cheaper than people. It's not innovation. It's a cost-cutting measure with a popular label on it. And the damage is cheerfully passed on to everyone who is not on the balance sheet: users, customers, and citizens.

Yes, the models are getting better. But “better” is not the same as reliable.

In the OpenAI GPT-5 system card, for example, you can see that in some settings there are still clear hallucination rates (incorrect claims) – including a 6.9% score on a FactScore measurement with browsing turned off. And OpenAI itself explains why this is so persistent: standard training often rewards “guessing” over “I don't know.”

That's fine when it comes to questions like: which pizza is good?

Not fine when it comes to: “Are you allowed to post?”, “Are you allowed to bank?”, “Are you allowed to keep your data?”, “Are you suspicious?”, “Will you get that mortgage?”

Today, it's a platform you could still "just walk away from." Tomorrow, it's a provider that closes your account and takes your entire digital life down with it. And in the meantime, the same logic is spreading to finance and compliance: automatic scoring, automatic blocks, automatic notifications. Because: efficient.

This is the core of artificial intelligence—which I increasingly refer to, not so affectionately, as artificial stupidity: we put text machines that are demonstrably unreliable in the referee's chair. And we pretend that "almost right" is good enough, when the consequences are anything but "almost."

So yes, I find this worrying. It's not happening in 50 years. It's happening today.

Welcome to The Matrix.

From gatekeeper to robot: how distrust is being automated

Increasing regulatory pressure: a major cause

The move towards AI in finance is not driven by vision. It is driven by pain. And by cost considerations.

According to the Dutch Banking Association, more than 13,000 bank employees – approximately one in five – are now working on customer background research, transaction monitoring and reporting unusual transactions. The total annual cost: €1.4 billion.

This is not an ‘extra layer of administration’. It is a completely parallel business, built alongside the real banking business.

And banks are not the only ones. The entire chain is becoming a gatekeeper: accountants, administrative offices, bookkeepers, mortgage lenders. Nowadays, you can't ‘just’ start keeping the books for a BV (a Dutch private limited company) without UBO information and client due diligence. The UBO register is held by the Chamber of Commerce, and institutions covered by the Wwft (the Dutch anti-money-laundering act) must request and retain an extract.

The result? Distrust as the default setting.

If you want to make extra mortgage repayments after selling your house—something that policy actively encourages—you suddenly have to relabel your own money: prove its origin, provide documents, answer questions that make it feel like you are applying for your own checking account.

And this is where it rubs us the wrong way: the presumption of innocence is a fundamental right. The European Convention on Human Rights states it clearly: anyone charged with an offence is presumed innocent until proved guilty.

But compliance processes are rarely built on the principle of ‘innocent until proven guilty’. They are built on the principle of ‘prove that you are not suspicious’.

And then comes the logical next step: automation. Not because AI understands the nuances, but because the spreadsheet screams that people are too expensive.

That's where it gets dangerous.

If your policies, forms, and training data are steeped in mistrust, then a model will faithfully reproduce that mistrust. An LLM doesn't think. It repeats patterns. Fundamental rights are not automatically part of those patterns—not unless they are explicitly built into the design choices, the thresholds, the exceptions, the human escalations.

And then you get what I call artificial stupidity, at scale: false positives that are not ‘just annoying’ but completely disruptive. Blocked payments. Cancelled accounts. Suspicion flags that you can never quite get scrubbed from the system. And a customer service department that says, ‘the computer says no’. The comedy of the past has become reality.

Regulatory pressure is pushing finance toward AI. The question is not if. The question is: do we build in brakes—or do we automate the missteps?

Where LLMs do belong: alongside your people

So is AI completely useless?

No. But it is useless if you use it as a substitute for human judgment.

The current generation of AI—LLMs and machine learning—can be extremely helpful with work that is large, boring, repetitive, or data-heavy, as long as you keep it within boundaries. Think of: summarizing tickets and incident reports, clustering reports, finding patterns in logs, creating initial concepts (email, change plan, runbook), generating test cases, cleaning up data, or proposing hypotheses that an engineer then verifies.

In short: decision support. Not: decision automation.
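
To make that distinction concrete, here is a minimal sketch in Python. Every name in it (draft_ticket_summary, Proposal, human_review) is invented for illustration and stands in for whatever model and tooling you actually use; the only point is that the model's output is a draft, and nothing happens until a human accepts, edits, or rejects it.

```python
# Minimal sketch of decision support: the model proposes, a human decides.
# draft_ticket_summary is a stand-in for your own model call.
from dataclasses import dataclass


@dataclass
class Proposal:
    ticket_id: str
    draft: str            # what the model suggests
    accepted: bool = False
    final_text: str = ""  # what the human actually signs off on


def draft_ticket_summary(ticket_text: str) -> str:
    """Stub for an LLM call: it returns a draft, never a decision."""
    return f"DRAFT SUMMARY (verify before use): {ticket_text[:80]}..."


def human_review(proposal: Proposal) -> Proposal:
    """The human is the decision point: accept, edit, or reject the draft."""
    print(f"Ticket {proposal.ticket_id}\nModel draft:\n{proposal.draft}")
    answer = input("Accept (a), edit (e), or reject (r)? ").strip().lower()
    if answer == "a":
        proposal.accepted, proposal.final_text = True, proposal.draft
    elif answer == "e":
        proposal.accepted, proposal.final_text = True, input("Your version: ")
    # On "r" (or anything else) nothing is accepted and nothing is acted on.
    return proposal


if __name__ == "__main__":
    ticket = "Customer reports a blocked payment after selling their house..."
    reviewed = human_review(Proposal("T-1042", draft_ticket_summary(ticket)))
    if reviewed.accepted:
        print("Stored with reviewer sign-off:", reviewed.final_text)
    else:
        print("Draft discarded; no automated action taken.")
```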

The aviation industry has understood this principle for decades: the autopilot can do a lot, but the pilots remain responsible. And for good reason. When automation is declared “foolproof” and the people around it are cut back, you get exactly the kind of drama that became painfully apparent with the Boeing 737 MAX, where an automated system (MCAS) intervened based on faulty sensor input. With fatal consequences.

And now you see the same pattern in the agent hype: tools that “really do things” and are therefore allowed deep into your systems. That's powerful... and dangerous. The popularity of Moltbot (formerly Clawdbot) immediately attracted abuse, including a malicious VS Code extension that installed malware. Security researchers also warn that these kinds of personal agents—with broad permissions—can be a nightmare if your deployment, access controls, and isolation are not mature.

So the valid scope of application is clear (a sketch of what this can look like follows the list):

  • Low risk + human oversight

  • Defined tasks + clear consent

  • Auditability + logging + rollback

  • Sandboxed execution + approvals for actions

  • Measure errors, don't hope for magic
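
What those bullets can look like in code, again only as a sketch: every name below (ALLOWED_ACTIONS, audit, execute_with_guardrails) is made up for illustration and not taken from any particular agent framework. The pattern is what counts: an allowlist instead of open-ended execution, an explicit human approval per action, an append-only audit trail, and a rollback hint recorded with every step.

```python
# Sketch of guardrails for an agent that is allowed to act:
# allowlist, human approval, audit log, rollback hint. Illustrative names only.
import json
import time

ALLOWED_ACTIONS = {"restart_service", "rotate_log", "open_ticket"}  # nothing else runs
AUDIT_LOG = "agent_audit.jsonl"


def audit(entry: dict) -> None:
    """Append-only audit trail: what happened and when, so it can be reviewed and reversed."""
    entry["timestamp"] = time.time()
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


def execute_with_guardrails(action: str, target: str, proposed_by: str) -> bool:
    # 1. Defined tasks: anything outside the allowlist is refused outright.
    if action not in ALLOWED_ACTIONS:
        audit({"action": action, "target": target, "status": "refused_not_on_allowlist"})
        return False

    # 2. Human oversight: no explicit approval, no execution.
    answer = input(f"Agent {proposed_by} wants to {action} on {target}. Approve? (y/N) ")
    if answer.strip().lower() != "y":
        audit({"action": action, "target": target, "status": "rejected_by_human"})
        return False

    # 3. Logging + rollback: record what ran and how to undo it, then execute
    #    inside whatever sandbox your environment provides.
    audit({
        "action": action,
        "target": target,
        "status": "approved",
        "rollback_hint": f"undo_{action}",  # placeholder for your real undo procedure
    })
    print(f"Executing {action} on {target} in the sandboxed environment.")
    return True


if __name__ == "__main__":
    execute_with_guardrails("restart_service", "billing-api", proposed_by="ops-agent")
    execute_with_guardrails("delete_database", "billing-db", proposed_by="ops-agent")
```

The details will differ per organization; the non-negotiable part is that refusal is the default and every step leaves a trace.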

AI is a brilliant assistant. But a terrible judge. And certainly not an autopilot for your organization.

AI that actually works: people make the decisions, the model assists

At Sciante, we use Computer Aided Human Intelligence: AI for the heavy lifting, humans for the judgment.

So no “computer says no.” No black box that can hold your processes hostage. Instead: models that can run through logs, metrics, and configurations faster than any team can. And then an expert who, based on facts, decides what the real cause is, what the risks are, and which fix will yield the best results.

This is how we uncover leaks that are currently costing you money, performance, and energy. Cost-efficient, measurable, and without you relinquishing control. Our knowledge guides the end result—not the “whims” of a model.

This delivers what you want: optimization that is fast, accurate, and affordable. And above all: defensible. To your management, your security, your auditor, and yourself.

Do you want reliable optimization? Would you like to discuss how to use AI responsibly without extra hassle or unexpected damage?

Schedule a no-obligation appointment with me. One conversation. Several eye-openers. No cost.