"Secure by design" seems to stop at Windows all too often
These days, we expect software vendors to deliver what I believe should be the norm: security by design. Security shouldn’t be an afterthought, but a fundamental principle from the drawing board onward. And rightly so. Because if security is added only later, you rarely end up with a solid foundation. Instead, you get add-ons, quick fixes, and band-aids in places where concrete should have been.
That is precisely why it is ironic that one of the most widely used foundations in business IT has never actually been built according to that principle itself: Windows.
Yes, Windows has security. File permissions. Process permissions. Groups, policies, tools, dashboards, alerts, Defender, hardening guides, and yet another layer to monitor that previous layer. But that is precisely the problem. Much of that security doesn’t feel like a single, well-thought-out whole. It feels like years of patching things up. And that’s exactly what it is. Like trying to save a rickety window frame with ever-thicker coats of paint.
The recent ClickFix attacks make that painfully clear. First, attackers abuse the Run dialog (Win+R). Then a countermeasure is implemented. Next, the attack simply shifts to Windows Terminal. And when that becomes more difficult, Command Prompt, PowerShell, or the Windows Script Host are already waiting in the wings. Unfortunately, this is not an isolated incident. It is a design flaw, and it must be fixed. By you.
Of course, user awareness helps. The right training is necessary. But anyone who relies primarily on human behavior for security is building on quicksand. People make unintentional mistakes. They get distracted. They’re tired. And not least, they get misled. That’s precisely why you want the layer underneath not to contribute to the attack.
In this blog, we’ll look at an uncomfortable truth: at its core, Windows is designed to be far less secure than many organizations like to tell themselves. And we’ll show why Linux is fundamentally stronger in this regard.
The difference between truly limiting and reacting after the fact
When you read the last sentence of the previous paragraph, you might be thinking, “Hugo, there’s no way we’re switching to Linux.” Okay. But please read on anyway, because I’m sure there are some things you’ll want to do.
There are roughly two ways to approach security. You intervene as soon as something exhibits suspicious behavior. Or you ensure in advance that an application is allowed to do very little in the first place. The second approach is almost always stronger.
Under Linux, you can very specifically restrict what processes and applications are allowed to do. Take a Mail Transfer Agent, the digital mail carrier for your email. It only needs to communicate with email traffic and only have access to the files and folders where email is stored. Nothing more. If such a process is compromised, it can’t suddenly start happily downloading malware from a dubious website or installing software elsewhere on the system. The damage remains limited, precisely because the process itself is already caged.
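On Linux, this kind of caging is typically enforced with a mandatory access control system such as AppArmor or SELinux. As a rough illustration, an AppArmor profile for a hypothetical MTA binary might look like this (the path /usr/sbin/my-mta and the directories are placeholders, and a real profile would need more abstractions):

```
# /etc/apparmor.d/usr.sbin.my-mta -- illustrative sketch, not a production profile
/usr/sbin/my-mta {
  #include <abstractions/base>

  # May talk SMTP over the network...
  network inet stream,
  network inet6 stream,

  # ...and touch only its own configuration and the mail spool.
  /etc/my-mta/** r,
  /var/spool/mail/** rw,

  # Everything not listed is implicitly denied: no executing other
  # binaries, no reading home directories, no writing elsewhere.
}
```

Even if the MTA is compromised through a vulnerability, the kernel refuses anything outside this list. The profile, not the user account the process runs under, defines what it may do.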
This is a fundamentally different security model than simply applying permissions to files or users. You don’t blindly trust the user under which something is running. You restrict the application itself - even if it tries to break out through a vulnerability.
On modern Linux distributions, this is not an exotic exception. This principle is widely applied to system processes. As a result, a compromised process does not automatically mean a compromised system. This not only reduces the impact but also the appeal to attackers.
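Modern distributions apply the same idea through systemd’s sandboxing directives, so even services without a dedicated MAC profile can be confined. A sketch of a hardened unit file (the service name and paths are illustrative):

```ini
# /etc/systemd/system/my-mta.service -- illustrative hardening sketch
[Service]
ExecStart=/usr/sbin/my-mta
# /usr, /etc and most of the file system become read-only for this process
ProtectSystem=strict
# No access to /home at all
ProtectHome=yes
# A private /tmp, invisible to other services
PrivateTmp=yes
# The process can never gain new privileges, e.g. via setuid binaries
NoNewPrivileges=yes
# Drop all capabilities except binding privileged ports
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
# Writes are allowed only where mail actually lives
ReadWritePaths=/var/spool/mail
```

Running `systemd-analyze security <unit>` shows how tightly an existing service is confined, and many stock services on current distributions already ship with directives like these.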
Windows still too often operates according to the reverse model. There, you mainly see tools that analyze behavior and intervene as soon as something seems suspicious. But by then, it’s already too late. The process has already started, has already attempted actions, and the initial damage has often already been done. On top of that, continuous behavior analysis consumes resources and slows down systems. So what happens in practice? It is precisely critical system processes that are sometimes monitored less strictly.
And that is another point of contention. On an average Windows machine, hundreds of services run by default, often with very broad or even unrestricted privileges. These are not minor details. This represents a significant attack surface with controlled chaos as a design choice.
In discussions on this topic, Windows Defender Application Control (WDAC) often comes up. It sounds robust, but it tells only half the story. WDAC primarily determines whether an application is allowed to start. That’s useful, but limited. Once the application is running, the control stops. The process is still allowed to do far too much afterward, as long as it fits within Windows’ logic. That’s not a limitation of behavior or permissions at the process level. The name “Application Control” therefore sounds more impressive than it is in practice: allowing is not the same as controlling.
Feel free to call me “grumpy” about Windows. Here’s what I have to say about it: “If everyone were to look through my figurative ;) glasses, there would be a lot less Windows in use. It’s the false sense of security that sells well, and I’m fed up with it.” Let’s move on to solutions!
The old design decision that continues to cause problems for security
One of the smartest ideas in the history of computing is also one of its greatest security liabilities. John von Neumann made it possible to treat software as data. Thanks to that design, you no longer had to physically wire software into hardware or build it using absurd manual constructions. You could store, move, and run code from a disk. Without that idea, modern IT would have looked very different.
And that is precisely where a persistent problem lies.
Anyone who can store code as data can also deliver malware as data. A file arrives innocently via the web, email, or a download, only to transform from harmless information into malicious executable code. Many modern attacks rely on exactly that mechanism.
Linux offers a fundamentally stronger approach here. You can mark partitions—and with modern file systems like ZFS, even specific datasets or folders—so that data there truly remains just data. Files cannot be executed there and are not granted any additional permissions. More importantly: users and processes cannot temporarily disable this just because it happens to be more convenient. The separation remains intact. Data is data. Period.
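Concretely, this is done with mount options or, on ZFS, with per-dataset properties. A sketch, assuming a hypothetical data partition on /dev/sdb1 and a ZFS dataset named tank/uploads:

```
# /etc/fstab: mount a data partition so nothing on it can ever execute
/dev/sdb1  /srv/data  ext4  defaults,noexec,nosuid,nodev  0  2

# ZFS: the same guarantee per dataset, set once by the administrator
#   zfs set exec=off   tank/uploads
#   zfs set setuid=off tank/uploads
```

A downloaded binary on such a mount can still be read and copied, but the kernel refuses to execute it in place, regardless of its permission bits.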
Windows takes a much “looser” approach to this. At its core, virtually every file is a potential stepping stone to execution, as long as the right path is found. Yes, you can set permissions and restrict execution in certain areas. But these are often measures that can be circumvented, disabled, or misconfigured. This leaves the core vulnerable.
That is the difference between security as a setting and security as a design choice.
Security by design has become the norm, except in Windows
Even outside of Linux, you can see that modern operating systems embed security into their core by default. macOS does this with System Integrity Protection. While technically implemented differently than Linux, it serves the same purpose and has the same effect: protecting critical components by default. Android builds on Linux and thus inherits that same principle. iOS and iPadOS follow the same approach as macOS, supplemented by strict app controls and permissions that are enforced centrally. On mobile platforms, an app is generally only allowed to do what has been explicitly permitted in advance. That is design discipline. And that is precisely where the problem lies: among modern operating systems, Windows is the striking exception. Windows doesn’t build security in; it slaps it on afterward. And by no means across the entire surface.
Safer IT starts with reducing reliance on Windows
For a surprisingly large number of organizations, Windows has long since ceased to be the obvious choice. Java runs perfectly well on Linux. .NET does too. Even SQL Server no longer has to be tied to Windows. What was viewed for years as “simply a necessity” often turns out, in practice, to be mostly a matter of habit.
And habits rarely make for a strong security strategy.
At Sciante, we run our servers on Linux, work on Linux and macOS desktops, and have configured that environment so that security relies entirely on security by design. Taking corrective action after the fact costs far too much money, both for large companies and for us as a smaller organization. Setting everything up properly from the start not only results in a stronger security model, but also brings greater peace of mind, less overhead, and more control.
Let’s talk about what can and cannot be separated from Windows in your environment. Not everything has to be a massive undertaking. But do make sure you don’t become the next Odido. Schedule an appointment. It won’t cost you a thing and will only provide you with insights. During such a conversation, we’ll look together at the options, the risks, and the feasibility of any changes.