The Netherlands, and the rest of the world, does not have a capacity problem; it has a waste problem.

In boardrooms and on the work floor, the talk is often about shortages. One of the most important ones is the shortage of data centers. But I have been wondering for some time now whether that is really the problem we are facing. Could there be something else going on?

The Netherlands is not only struggling with a shortage of data centers; we are stuck in our own inefficiency. The Dutch Datacenter Association has been warning for years that we need to build more to remain relevant. At the same time, construction is stalling due to public resistance, which is understandable as long as the sector remains vague about its environmental impact. Add to that rising prices, facilities that are filling up, and vulnerable international connectivity. The result: a digital traffic jam. Expansion is becoming difficult, slow, or simply unaffordable for many organizations.

Isn't there a faster and smarter way than even more square meters and megawatts? Yes, there is: reduce your demand. That sounds obvious. And you may be thinking as you read this: nice idea, but not workable in practice. It is, though. Because we are seeing more and more companies tackling their ‘waste’. And they are doing so without compromising on performance, computing power, or quality.

How? By designing your capacity requirements. Instead of hoping that “more” will solve it. Capacity is not a cause; it is a result, a derivative of choices in architecture, data flows, algorithms, retention policies, and failover models. Those who actively steer those choices, and thereby the cause, gain space without concessions.

The key: measurement. We often make assumptions. But we don't measure. Those who measure instead of guessing win. Design based on actual usage and optimize continuously. Start with the places where capacity is hidden in assumptions: retention that has never been recalibrated, replication that is enabled “just in case,” peak dimensioning that runs permanently. Map your actual load, dampen peaks with buffering and asynchronous processing, and process data where it already is to avoid unnecessary transport. Make instance sizes, storage classes, and network paths the result of measurements, not defaults.
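
To make that tangible, here is a minimal sketch (in Python) of what measuring instead of assuming can look like: compare provisioned capacity against observed load and flag where permanent peak dimensioning hides headroom. The service names, figures, and the 30% threshold are hypothetical placeholders, not recommendations.

    # Minimal sketch: right-size based on measured load instead of defaults.
    # All service names, figures, and thresholds are hypothetical placeholders.
    from statistics import quantiles

    # Observed CPU usage samples (in vCPUs) per service, e.g. exported from monitoring.
    measured_load = {
        "checkout-api": [1.2, 1.5, 2.1, 1.8, 1.4, 1.9, 2.3, 1.7],
        "report-batch": [0.4, 0.5, 0.3, 0.6, 0.5, 0.4, 0.5, 0.6],
    }

    # What is currently provisioned "just in case" (permanent peak dimensioning).
    provisioned_vcpus = {"checkout-api": 8, "report-batch": 4}

    def p95(samples: list[float]) -> float:
        """95th percentile of the observed samples."""
        return quantiles(samples, n=20)[-1]

    for service, samples in measured_load.items():
        peak = p95(samples)
        provisioned = provisioned_vcpus[service]
        utilization = peak / provisioned
        if utilization < 0.30:  # hypothetical threshold: flag anything under 30% at p95
            print(f"{service}: p95 load {peak:.1f} vCPU on {provisioned} provisioned vCPUs "
                  f"({utilization:.0%}) -> candidate for right-sizing")

The point is not the threshold but the habit: the instance size follows from the measurement, not the other way around.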

This is not cost-cutting disguised as downsizing. It is a performance upgrade:

  • shorter wait times

  • higher throughput

  • predictable scalability

And who wouldn't want that? It gives you back your strategic agility. In a market where capacity is scarce and prices fluctuate rapidly, you don't want to be dependent on the next price round. Or on the goodwill of anyone else to build the next data center. You want to be able to reduce demand by adapting your own design. And thereby reduce your costs.

In the rest of this blog, I'll show you how to make that shift systematically: from “buying more” to “designing better.” With concrete steps, measurement methods, and decision frameworks that you can apply immediately. So you can start freeing up capacity today, without compromising on speed, continuity, or security.

Speed in sprints, sluggishness in production

Over the past few decades, software has grown faster than hardware can keep up with. Processors have become faster, but our consumption has grown even faster. In fact, it has exploded. And while hardware and data center contracts are fixed for years, software adds another layer to your stack with every sprint. The result: demand for capacity is growing exponentially, while supply is stagnating. This is not a law of nature. It is a choice.

Below are a number of choices you can reconsider:

1) We choose convenience over efficiency

Popular development languages and frameworks are great for speed of construction but expensive to run. Interpreters, garbage collectors, thick dependencies, ORMs, microservice architectures that call half the world with every request: it all adds up. What runs on one server in a low-level language or lean runtime can quickly require dozens of times more resources in a high-level stack. Gaining development time and wasting runtime is a trade-off that no longer works in a capacity crisis. From now on, choose better: language and tooling that run like a diesel engine, not a thirsty V8.

2) Feature velocity supersedes quality

The market rewards “ship it.” Performance, efficiency, and footprint disappear into the backlog. “We'll throw some hardware at it later.” Later is now coming back like a boomerang: capacity is scarce, prices are rising, and waiting times are increasing. Those who only address performance at the end pay twice: in expensive infrastructure and in lost productivity. Make a better choice from now on: treat performance as a first-class citizen, not as a backlog extra.

3) Telemetry eats away at resources... and takes your data with it

What started as “may we use this for product improvement?” has grown into standard data mining. Products send behavior, context, and sometimes content back to the supplier. Add AI assistants that want to read every window, and you burn bandwidth, CPU, and storage on anything but your core work. You pay for capacity that doesn't add value and increase your dependence. Better to choose telemetry that you control... not the supplier.

4) Bloat wins over utility

Every release comes with shiny buttons, animations, and “useful” extensions. Nice? Sure. Useful? Rarely. Bloat inflates binaries, lengthens load times, slows down cold starts, and fills up your memory. It costs users time and costs you capacity, and no one asks: does this deliver demonstrable value? From now on, choose value over widgets, utility over “nice.”

5) We no longer train for efficiency

Curricula and boot camps are all about “getting the job done”: frameworks, tooling, cloud-native patterns. Efficiency, locality, data structures, algorithmic choices, and cache coherency are footnotes. Developers learn to build faster, but not to run more efficiently. Employers don't ask for it, so educators don't teach it. The bill ends up on your desk, with warm regards from your data center. Choose instead: developers who understand how computers really work, not just how to stack frameworks.

Conclusion: waste is not a technical accident, but the sum of culture (faster!), choices (ease!), and blind spots (telemetry! bloat! education!). In a country where capacity is scarce and expensive, “buying more” is not a strategy. You win by designing down demand: leaner stacks, less chitchat between services, processing data where it is, and measuring what it really costs. Efficiency isn't old-fashioned... it's your competitive advantage when data centers are filling up and costs are skyrocketing. And efficiency is easier to achieve than you might think.

Stop buying -> Start measuring!

Efficiency is not achieved with a new data center, but with new behavior.

Start by measuring, and put off buying for the time being.

What does one user or one transaction cost in euros, milliseconds, and kWh? Compare these three KPIs and you will immediately see where the waste is: CPU that does nothing, data that travels unnecessarily, features that consume capacity without delivering value.
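
To make those three KPIs concrete, here is a minimal sketch (in Python) of how they can be derived from numbers you most likely already have: the period's infrastructure invoice, total processing time from monitoring, metered or estimated energy, and the transaction count. All figures are hypothetical placeholders.

    # Minimal sketch: what does one transaction cost in euros, milliseconds, and kWh?
    # All input figures are hypothetical placeholders; use your own billing,
    # monitoring, and energy data.
    monthly_cost_eur = 42_000        # infrastructure invoice for the period
    total_cpu_seconds = 9_300_000    # summed processing time from monitoring
    energy_kwh = 31_000              # metered (or estimated) energy for the period
    transactions = 18_000_000        # business transactions in the same period

    print(f"€/tx  : {monthly_cost_eur / transactions:.4f}")
    print(f"ms/tx : {total_cpu_seconds * 1000 / transactions:.1f}")
    print(f"kWh/tx: {energy_kwh / transactions:.6f}")

Track these three numbers per release and the effect of every design choice becomes visible on a single line.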

Then trace the bottlenecks back to decisions: architecture, configuration, defaults. Often you don't need to rewrite anything “big.” You win by designing smartly:

  • Turn off bloat and useless telemetry (yes, a lot is enabled by default).

  • Process data where it is (locality first).

  • Dampen peaks with buffering and asynchronous processing (see the sketch after this list).

  • Choose instance sizes and storage classes based on measurement, not feeling.
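
The third bullet deserves a sketch of its own. Below is a minimal illustration (in Python, using asyncio) of dampening peaks with a buffer and asynchronous processing: producers burst, a bounded queue absorbs the bursts and applies back-pressure, and a few workers drain it at a steady pace. Queue size, rates, and job counts are hypothetical placeholders.

    # Minimal sketch: absorb bursty arrivals in a buffer and process them asynchronously,
    # so capacity can be sized for the average rate instead of the peak.
    import asyncio
    import random

    async def producer(queue: asyncio.Queue, jobs: int) -> None:
        """Simulates a bursty front end: jobs arrive in uneven clumps."""
        for i in range(jobs):
            await queue.put(f"job-{i}")       # blocks when the buffer is full (back-pressure)
            if random.random() < 0.2:
                await asyncio.sleep(0.5)      # occasional pause between bursts

    async def worker(queue: asyncio.Queue) -> None:
        """Drains the buffer at a steady pace; the queue absorbs the peaks."""
        while True:
            job = await queue.get()
            await asyncio.sleep(0.05)         # steady, predictable processing rate
            queue.task_done()

    async def main() -> None:
        queue: asyncio.Queue = asyncio.Queue(maxsize=100)  # the buffer that absorbs bursts
        workers = [asyncio.create_task(worker(queue)) for _ in range(4)]
        await producer(queue, jobs=200)
        await queue.join()                    # wait until the backlog is drained
        for w in workers:
            w.cancel()

    asyncio.run(main())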

“But Hugo, Microsoft/supplier X really won't customize their software for me.”

They don't have to. Half of the gain lies in how you deploy the product: settings, policies, retention, replication, caching, service limits. These are knobs that you control and that directly affect throughput and response time. And... your invoice.

This is not a theoretical exercise. It is a cyclical rhythm that pays off:

  1. measure €/tx, ms/tx, kWh/tx;

  2. find the top 3 wasters (see the sketch below);

  3. fix, remeasure, repeat.
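
For step 2, a minimal sketch (in Python) of what “finding the top 3 wasters” can look like once step 1 has been done: rank components by measured cost per transaction. Component names and figures are hypothetical placeholders.

    # Minimal sketch: rank components by measured cost per transaction
    # to find the top 3 wasters. Names and figures are hypothetical placeholders.
    measurements = {
        # component: (euros per month, transactions served per month)
        "search-index-replication": (6_200, 1_000_000),
        "audit-log-retention":      (3_800, 1_000_000),
        "checkout-api":             (9_500, 12_000_000),
        "nightly-report-batch":     (2_100, 900_000),
    }

    cost_per_tx = {name: eur / tx for name, (eur, tx) in measurements.items()}
    top_wasters = sorted(cost_per_tx.items(), key=lambda kv: kv[1], reverse=True)[:3]

    for name, eur_per_tx in top_wasters:
        print(f"{name}: €{eur_per_tx:.4f} per transaction")

Fix the top of that list, remeasure, and repeat; the ranking tells you where the next round of capacity is hiding.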

Is it time to measure? Then make a no-obligation appointment with me.

Do you want to deliver more with less capacity, without compromising on speed or quality?

Let's take a closer look at your environment and identify the biggest gains in a single conversation. At Sciante, we do this every day for CIOs and IT managers who can't wait for new data centers, but want results today.

👉 Book a no-obligation Discovery Call with me. One hour. Three concrete optimizations. Measurable effect within your current capacity. No costs, only gains.

Book your appointment now
