Why The Best Leaders Ignore What The Data Says

What if the precision of data blinds us to the questions that really matter?

The problem with data-driven decision-making is not the data itself, but the confusion of data with wisdom.

The Paradox of Perfect Information

Picture a CEO at three monitors. Customer acquisition cost is trending down. Retention metrics are up 14%. The A/B test shows conclusive results: variant B wins with 97% confidence. The machine learning model predicts Q4 revenue within a 3% margin of error. The data is clean, and the next decision looks obvious.

But in my year-long journey across different industries, I watched CEOs ignore exactly these kinds of results.

They approved the “losing” variant. One CEO doubled down on the very segment the algorithm had flagged as unprofitable and made, by every quantifiable measure, the wrong decision.

Six months later, the company was worth three times as much. The data was right about everything except what mattered. This happens every day in Canada, the US, and around the world. However data-oriented we have become, and however much we love our new AI tools that promise to predict nearly everything, when it comes to a final decision we often go in a different direction.

This is not a story about intuition triumphing over analysis. This is a story about a category error—a fundamental misunderstanding of what decisions actually are.

The Algorithm Knows Everything Except What It Cannot Know

The seduction of algorithmic decision-making rests on a beautiful lie: that the world is a closed system, that the future resembles the past, that what can be measured is what matters.

Data is the perfect history instructor that tells you, with extraordinary precision, what happened. It cannot tell you what has never happened before.

Consider what algorithms optimize for:

  • Efficiency (the fastest path between two known points)
  • Consistency (the elimination of variance)
  • Measurability (if it cannot be quantified, it does not exist)
  • Prediction (the future as a weighted average of the past)

This is not totally wrong; the problem is that it is incomplete. The algorithm sees the world as it is, translated into code, but it cannot see the world as it could be.

When we speak of “non-algorithmic choice,” we are not speaking of randomness or whim. We are speaking of the uniquely human capacity to:

Navigate moral ambiguity — The algorithm optimizes for profit, but it cannot distinguish between profit earned through dignity and profit extracted through exploitation. It knows the number but has no moral or ethical values; it has no conscience.

Hold space for unknown-unknowns — Historical data cannot anticipate the unprecedented. The most consequential events—technological breakthroughs, cultural shifts, black swans—are, by definition, absent from the training set.

Make normative judgments — A decision about what should happen, in line with social values and standards, rather than a prediction of what is likely to happen. The algorithm can tell you how to win the game, but it cannot tell you whether the game is worth playing.

The CEO who overrides the data is not dismissing evidence. Beyond the data, there are, in most cases, employees the CEO is responsible for and clients whose needs must be served. If everything is put at risk on a machine's recommendation, the CEO is the one held responsible. The machine is never responsible, and when the algorithm fails, suing the people who wrote the code is not as easy as it sounds.

A CEO has to react to cultural changes and to daily challenges; variables like these cannot be fed into an algorithm's work. An agentic AI's reasoning process produces detached, unemotional decisions, made without any knowledge of the latest changes in the company or the market. The business moves in real time, but the AI reacts with a time lag, which in many cases makes its output far less valuable.

But Isn’t This Just Glorifying Gut Feeling?

The case against algorithmic determinism can easily slide into a romanticization of the “visionary leader” who operates on pure instinct, unencumbered by pedestrian concerns like evidence. The opposite of algorithmic rigidity is not vibes-based decision-making.

What looks like intuition is often rapid pattern recognition across decades of embodied experience—a form of intelligence the algorithm does not possess because it has no body, no stakes, and no mortality.

Emotions shape our thinking, whatever profession we work in. They can blur our perception and poison our reactions, leading us to overreact. But what would business, or private life, be without emotional decisions, reduced to nothing but big data sheets? Emotional intelligence is what lets successful managers lead team members with very different skills and personalities, especially in uncertain times.

The paradox is this: the value of non-algorithmic choice depends entirely on deep algorithmic literacy. You can only productively override the data if you first understand exactly what the data is telling you—and more importantly, what it is structurally incapable of telling you.

The danger is not too much data, but mistaking data for wisdom.

The Line Between What to Automate And What to Preserve

In today's world, where decisions are increasingly made by AI, companies must weigh carefully in which areas an algorithm should decide, how it should decide, and on which data.

Some decisions should be algorithmic:

  • Resource allocation within known constraints
  • Pattern detection in high-volume, low-stakes environments
  • Optimization of clearly defined, measurable outcomes

Some decisions should never be algorithmic:

  • Choices that define organizational identity and values
  • Strategic pivots that require imagining futures that don’t yet exist
  • Situations where the measurable and the meaningful diverge

The best leaders are not those who ignore data, nor those who blindly follow it. They are those who know exactly where the algorithm’s authority ends, and theirs begins.

We humans are the bearers of moral considerations. Sometimes we must argue against what the data clearly recommends, and even act in the opposite direction, because we understand something the algorithm will never understand: not everything worth doing makes sense on a dashboard.

The risk is not that we will automate too much. The risk is that we will automate decision-making itself and, in doing so, forget that a decision is not an output; it is a consequence that people have to live with.

The data can tell you what is optimal, but only a human can tell you what is right.

Will you know the difference when it matters?


Jens Koester is a strategic advisor focused on the structural friction between exponential technology and the enduring patterns of human culture. Through The Human Datum, he provides the intellectual architecture and foresight necessary for leaders to navigate the AI-driven decade with clarity and intentionality.
