The Zero-Marginal-Cost CEO

The uncomfortable question is: What happens to the humans inside a system "orchestrated" by AI?


A Paradox of Abundance

There is a photograph—you’ve probably seen a version of it—of a nineteenth-century factory floor. Rows of workers, identical in posture, operating looms or punch presses. The image is usually presented as an emblem of industrialization. But look more closely at what it actually depicts: the scarcity of work made visible.

Every body in that room represents a bottleneck. Human time, human attention, human hands—these were the expensive things, the things that determined what could and could not be made.

We have spent two hundred years building management science around this scarcity. The entire architecture of modern leadership—org charts, performance reviews, utilization metrics, the very concept of “headcount”—is an elaborate technology for squeezing more output from limited human capacity.

And now, suddenly, that scarcity no longer exists.

When an AI agent can execute tasks instantly, in parallel, at any hour and under any circumstances, at negligible marginal cost, and when "busyness" becomes not just a misleading signal but a meaningless one, we do not simply have a new tool.

We have a philosophical crisis dressed in business casual.

The Revaluation of Decision

As AI agents drive execution costs toward zero, the essential source of value shifts decisively from efficient action to sound judgment. The future belongs to those who can consistently make and frame high-quality decisions, because judgment becomes the main determinant of organizational impact as automation spreads.

This is not hyperbole; it is arithmetic. When execution was expensive, a mediocre decision might cost you a few wasted work-hours. When execution is instantaneous and infinitely scalable, a single wrong choice propagates across channels, markets, and workflows in minutes. The bad decision does not die quietly in a memo; it metastasizes at the speed of automation.

Conversely—and this is where the executives’ eyes light up—a good decision, translated into agent instructions, compounds at unprecedented scale. The algorithm never tires. It does not need a coffee break or a vacation. It runs continuously, multiplying your judgment across every customer interaction, every logistics calculation, every market response.

Leadership leverage has shifted. The old question—“Can you get people to do things?”—has become almost quaint. The new question is starker: “Can you consistently choose and frame the right things to be done?”

This is what I mean by the shift from Managing People to Orchestrating Outcomes. The leader becomes less a foreman and more an architect—designing teams, objectives, constraints, and feedback loops for systems that include both humans and machines.

The vocabulary changes: from “supervising” to “setting guardrails”; from “annual feedback” to “continuous performance analytics”; from “org charts” to “dynamic resource allocation.”

It sounds clean. It sounds like progress. And perhaps it is.

But I want to pause here, because something is being lost in the efficiency gains, and I am not certain we have noticed it yet.

The Emptiness of Orchestration

There is a term in the Slow Culture lexicon: Analog Resonance. It refers to the unique, tactile, often imperfect quality of physical objects and face-to-face interactions—the depth of connection we feel when using a fountain pen, or reading a paper book, or sitting in silence with a friend. Qualities that a digital simulation cannot replicate.

What, then, is the analog resonance of leadership?

Consider what we are discarding in the move from “managing people” to “orchestrating outcomes.” The old model—for all its inefficiencies, its petty politics, its exhausting emotional labor—contained something irreducible: relationship.

A manager who knew that Sam was struggling with his divorce and might need a lighter load this quarter. A team leader who could read the room and sense the unspoken tension. The handshake at the end of a difficult negotiation; the coffee meeting with no agenda. Team-building events, team dinners, and a shared team spirit that humanized the daily work.

The new model speaks of “human-AI team performance,” “outcome metrics,” and “decision quality.” These are not evil words. But they are thin words. They describe a system, not a community. They optimize for results, not for meaning.

And here is the uncomfortable question: What happens to the humans inside an “orchestrated” system?

The document on my desk speaks of “reserving scarce human expertise for ambiguous, high-impact decisions.” But what does it mean for a person to be “reserved”—like a table at a restaurant, like a resource in a database? What does it feel like to be valuable only at moments of ambiguity, and otherwise… optimized around?

The Human Datum as Constraint

I do not wish to be naïve. The old world of expensive execution had its own cruelties—burnout, exploitation, the quiet desperation of doing meaningless work simply because human time was the only resource available. The AI-augmented future may liberate us from some of that.

But liberation is not the same as meaning.

The organizations that will thrive in the next decade will not be the ones that automate everything, nor the ones that nostalgically refuse to automate anything. They will be the ones that know exactly what not to automate, the ones that draw the line with intention rather than letting efficiency draw it for them.

This is where the concept of The Human Datum becomes operational, not merely philosophical. In a sea of big data, The Human Datum is the single, irreducible point of truth: the individual human experience. It asks: What is this system doing to the people inside it? Not just what outcomes is it producing, but what kind of life is it creating?

The Zero-Marginal-Cost CEO faces a choice that no spreadsheet will show. You can design systems that treat humans as “exception handlers”—nodes activated only when the algorithm encounters ambiguity.

Or you can design systems that preserve space for what I will call Kairos: the qualitative, opportune, meaningful time that cannot be scheduled or scaled.

The former is more efficient. The latter is more human.

And efficiency, it turns out, is not a value. It is a ratio. It tells you nothing about what is worth doing—only how cheaply you can do it.

A Question Left Open

So here is where I will leave you, slightly uncomfortable, as promised.

You have read the projections. You understand that AI agents will transform every dimension of the leadership table—primary resources, units of management, performance lenses, and key skills. You are prepared to become an orchestrator of outcomes.

But answer me this:

When execution is free, what becomes the most expensive asset in your organization?


Jens Koester is a strategic advisor focused on the structural friction between exponential technology and the enduring patterns of human culture. Through The Human Datum, he provides the intellectual architecture and foresight necessary for leaders to navigate the AI-driven decade with clarity and intentionality.
