The Machine Decided. But No One Can Tell You Why.

When using AI for decision-making, the question remains: Is accuracy the same as good judgment?


There is a word we have nearly stopped using, even though it is one of the most important in our vocabulary: "judgment." Today we mostly talk about "decisions": about decision-making, decision-making frameworks, or decision fatigue. A decision, as we use the term today, is a choice among several alternatives. And so, in a world dominated by AI applications, it was only a matter of time before we handed this task over to computers and said, "Make these decisions for me, so I can gain further economic and personal efficiency."

But judgment is not the same as decision-making. Judgment is what happens before a decision is made, and sometimes it goes against that decision. Like the doctor who examines a patient's skin, finds nothing alarming, and yet orders a biopsy: not because of the data, but because something in the patient's voice, a fleeting impression he has learned to recognize over 30 years of practice, tells him to look more closely. Or the sailor who takes a different route because he knows the dynamics of the weather at this very point in the voyage and knows the wind will shift. Judgment is unpredictably human because it is unpredictably mortal. It requires a person who is fully immersed in the game of life.

Are AI Agents Good Decision-Makers?

What we are currently observing, as AI agents are deployed across industries, is a civilization that increasingly delegates its highest-level decisions to the only actors who have nothing to lose in the process. An algorithm that rejects your mortgage application won't lie awake at night. An autonomous system that selects a military target does not bear the burden of that decision because it has no body. It merely processes and optimizes, then proceeds to the next target.

Let’s assume that, on average, algorithms perform better than human judgment when it comes to making predictions—a common argument in data science. Automated tools detect tumors that tired radiologists might miss, and many algorithms are more reliable than gut feelings.

But the question remains: is accuracy the same thing as judgment?

Imagine a perfectly ordinary Monday. The map on your phone shows you the fastest route to work. And that’s the route you’ll take; that’s a decision to optimize your commute. An algorithm can handle that for you. But now consider another situation: Your aging mother calls you in the morning while you’re at work. She’s very confused and isn’t feeling well. And you have a meeting in 20 minutes. Your calendar app advises you to keep the conversation with your mother brief and alerts you to a scheduling conflict, since your workday—optimized for efficiency—should not be disrupted. But you stay on the phone and end up late, which is completely inefficient in today’s fast-paced, schedule-driven world.

In that moment, you make a decision—a moment that no system can replicate, because no system understands what it means to you personally to have a mother. Your relationship with your mother and the behavior that stems from it reveal the often difficult and challenging nature of human life. And it is precisely this nature that systems supported by AI agents seek to eliminate.

Let’s take the example of a university professor. If he disregards a test result and recommends a quiet student for an advanced course—not because the data supports it, but because he has closely observed the student during a specific type of assignment—then he recognizes something that the test cannot detect. This isn’t about overriding an algorithm; it’s once again a matter of judgment. And that depends entirely on the professor, who was once a student himself and, as a reserved person, also struggles with the fast pace of life.

Many who currently discuss full AI automation in education or the business sector argue that humans are biased and inconsistent. They say humans get tired and are easily distracted. The algorithm, on the other hand, delivers the same results every time. That is absolutely true, but to claim that we need machines because humans are imperfect would mean that we have given up on the endeavor to promote human wisdom.

In medicine, we are just beginning to rely on AI analysis, and the radiologist’s skills—those hard-earned, embodied human abilities—are atrophying. We don’t fire the office worker; instead, we turn her into a clerk who processes the algorithm’s results. Judgment—in many areas of our lives, the most human of all skills—is slowly being pushed out of the very institutions that need it most.

How Should the AI Systems We Are Building for the Next Generation Work?

I certainly don’t mean to justify incompetence here, but when we automate decision-making processes, we’re not just saving time. Rather, we’re telling the next generation: You don’t need to learn how to make decisions, because the system will decide for you. And they’ll believe it—thanks to the aggressive marketing and sales campaigns promoting AI applications that seem capable of making better judgments than humans. We are the ones currently developing these applications for future generations and making them ready for use.

If we look more closely, we discover something that is harder to put into words and therefore easier to dismiss: a judgment must cost the one who passes it something, or it is not a judgment at all, but merely a process. A civilization that replaces judgments with processes will be optimized, efficient, and precise. It will also be irresponsible in the deepest sense of the word, for responsibility requires a self that can be held accountable: a conscience that weighs things, a body that cannot sleep. The algorithm is always working and never sleeps, and that is precisely the problem.

As a CEO, let an AI handle the fast, repetitive decisions; it performs these monotonous tasks more reliably than a human, who is prone to errors when repeating them over and over. The major, consequential decisions, however, must be made by a human. Communicate this clearly to your teams and throughout your organization, so that no one gets the impression that machines run the company, or that a system will be held responsible if one of these major decisions ever goes wrong.

Here’s a question I’d like you to think about—and I’d like you to answer it briefly right now: If the most important decisions in your life were made by an AI-powered system that is statistically more accurate than any human but will never know what it feels like to be wrong—would you call that progress?

And if you hesitate to answer, pay attention to that resistance. That hesitation may be the last flicker that technical, AI-driven optimization has yet to reach. Because this may be your judgment.


Jens Koester is a strategic advisor focused on the structural friction between exponential technology and the enduring patterns of human culture. Through The Human Datum, he provides the intellectual architecture and foresight necessary for leaders to navigate the AI-driven decade with clarity and intentionality.
