When AI is just duct-taped to your product
“We need to do something with AI.”
In many boardrooms, that now sounds like a logical step. And the question then lands, almost automatically, with IT. After all, AI is about software, data, and technology, isn’t it?
So IT is tasked with figuring it out. And that’s exactly what you’re seeing happen in many organizations right now.
Software vendors feel their product must have something to do with AI. “AI-powered.” “AI-centric.” “AI-first.” Not always because the product demonstrably benefits from it, but because they don’t want to give the impression they’re falling behind. As if software without AI suddenly doesn’t count anymore.
That’s understandable. But it also leads to many additions that aren’t yet properly integrated into the product.
You see features that weren’t designed around the user’s workflow but were added mainly because AI has to be on the packaging. Not built from a user-centric perspective. Not developed from the customer’s actual needs. But visibly present.
And you notice that quickly.
More buttons. More suggestions. More steps. Sometimes not to work better, but to work around the AI. The software then doesn’t really help you; it demands extra attention while you work.
We see this, for example, with ActiveCampaign. For some time now, they’ve incorporated AI into many parts of the product. But in the meantime, important questions remain unclear. Which engine exactly do they use? Where does your data go? What do they do with it? How long is it stored? And under what conditions?
These aren’t minor issues. This directly touches on governance, data usage, and responsibility. That’s exactly where you want clarity.
And it gets even more complicated when your organization already uses AI that you’ve consciously chosen. Suppose you refine your texts beforehand using tools that fit your way of working, your tone of voice, and your goals. If every platform then layers its own built-in AI on top of that, you don’t automatically gain efficiency. Instead, it creates noise. No consistency, just a break in style.
You’re simply adding complexity under a modern label.
As soon as AI is involved, control shifts
As soon as AI processes your data, control often shifts—at least in part—outside your own environment.
This isn’t meant to be alarmist, but it is something to take seriously.
A Large Language Model doesn’t just run locally on an average laptop, desktop, or web server. It requires serious computing power. So the moment you let such an AI edit texts, documents, emails, or notes, that data usually goes to infrastructure outside your direct control.
And that’s exactly where the governance issue begins.
You often don’t know exactly where that data ends up. Not always how it’s processed. Not who has access to it. Not how long everything is retained. And also not whether the data is used exclusively for your task at that moment.
You see this with multiple providers. ActiveCampaign isn’t unique in this regard. Microsoft isn’t always clear either. Copilot is now woven into many parts of the ecosystem, but clear and easily accessible answers about data processing often remain elusive. Yes, in an Azure tenant you can configure certain aspects regarding classification and confidentiality. But that doesn’t eliminate your dependence on the provider. And if something goes wrong there, you suddenly discover just how much information was processed outside your direct oversight.
For consumers, that influence is usually even smaller.
And with Google, too, built-in AI doesn’t automatically mean everything is handled properly. Gemini is prominently present in Google Docs, and there, too, data can be used for product improvement. That sounds nice, but precisely for that reason it calls for clear choices and explicit agreements.
So when a provider adds AI, they’re not just adding functionality.
It also changes the way your data is processed.
Good AI is a conscious choice
“We need to do something with AI” isn’t a bad idea in itself. In fact, it’s wise to seriously consider where AI can play a role in your organization.
But the quality lies in the execution.
AI is not a strategy. AI is a tool. It should therefore be treated as such: as something you only use when it demonstrably solves a problem better than the tools you already have. Not because it sounds modern. Not because the market demands it. And not because a vendor has added an AI button somewhere.
Good AI is a conscious choice.
That means: chosen for a clear purpose, within clear parameters, with clear agreements on data, output, and usage. Not turned on everywhere at once, but applied where it truly adds value. And turned off precisely where it mainly causes noise.
There is also a clear responsibility here for software vendors.
If you add AI to your product, make it optional. Not forced upon users. Not hidden in the workflow. Simply as a conscious choice by the customer. Even better: give organizations the ability to integrate their own preferred AI. So they can work with the same model, the same instructions, and the same tone of voice everywhere.
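What this could look like in practice is a thin, provider-agnostic adapter: AI is off by default, and the customer plugs in their own model and a single shared instruction set, so every channel speaks with the same voice. A minimal sketch, assuming a hypothetical design (the names `AIConfig`, `make_assistant`, and the `complete` callback are illustrative, not any vendor’s actual API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class AIConfig:
    model: str           # the customer's own preferred model
    system_prompt: str   # shared tone-of-voice instructions, used everywhere
    enabled: bool = False  # AI stays off unless the customer opts in

def make_assistant(
    config: AIConfig,
    complete: Callable[[str, str, str], str],
) -> Callable[[str], str]:
    """Return a text helper bound to the customer's chosen model.

    When AI is disabled, the helper is a pass-through: the product
    keeps working exactly as it did without AI.
    """
    def assist(text: str) -> str:
        if not config.enabled:
            return text  # conscious opt-out, no hidden AI layer
        return complete(config.model, config.system_prompt, text)
    return assist
```

Because every feature (email editor, docs, chat) would call the same `assist`, there is one model, one instruction set, and one data stream to govern, instead of a separate built-in AI per channel.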
That’s when consistency emerges.
Then you’re not working in every channel with yet another AI layer, each with its own style, its own assumptions, and its own data stream. Then you maintain control over quality. Control over governance. And control over your own way of working.
That’s the difference between using AI because it’s useful
and accepting AI because it’s everywhere.
Not more AI, but better choices around AI
Do you want to use AI because it actually solves a problem?
Or because you feel like you’re falling behind otherwise?
That difference is bigger than it seems.
Because as soon as AI enters your processes without a clear choice, it quickly creates extra friction, less control over your data, and less consistency in how you work. AI then doesn’t make your organization sharper; it makes it more complex.
Good AI starts with control.
It involves knowing where AI actually adds value, where it mainly creates noise, and how to prevent every vendor from layering their own AI solution over your workflow.
Would you like to discuss this further?
Schedule a no-obligation consultation with me. Together, we’ll assess your situation and determine where AI truly helps you move forward—and where you’d be better off making a different choice.