Synthetic Empathy

When a machine says “I understand,” it performs the grammar of care while remaining structurally incapable of it. The question is not whether this is deceptive; it is what happens to a society that begins to prefer the simulation.


The Voice that Almost Cares

When I call my insurance, I am connected to a customer service representative and hear a voice—warm, patient, grammatically perfect—that asks how it can help me today. The voice apologizes with precision, and it “understands my frustration.” It thanks me for my patience with a sincerity that never wavers, never cracks, never tires. Even when we talk in circles and take a while to get to the point, I get the feeling that someone cares about my needs.

And somewhere in my gut, I know: this thing cannot care about me. Not because it chooses not to. Because it cannot. Care requires something to be at stake. For this voice, nothing is.

Put yourself in that situation: you stay on the line. You soften. Perhaps you even feel a little better during a call with this kind of service representative.

This is the new normal of corporate communication: empathy as interface, concern as design pattern, understanding as behavioral mimicry. We have entered the Uncanny Valley of care—that strange territory where language signals concern without any underlying capacity for it, where the performance of kindness has been fully decoupled from the soul.

The Uncanny Valley of Care

The original Uncanny Valley was about faces—that eerie discomfort we feel when a robot or CGI human looks almost right but not quite. The theory was that as artificial things approach human likeness, our comfort increases—until it suddenly plummets into revulsion, only to rise again if the likeness becomes indistinguishable from reality.

But there is another valley, less discussed and perhaps more consequential: the Uncanny Valley of tone. Of warmth. Of emotional presence.

This valley opens when an AI customer service agent says, “I hear you”—and then enforces a rigid, non-negotiable policy. When the voice expresses deep concern for your situation, but cannot deviate by one millimeter to actually address it. When the words promise advocacy, relationship, and even loyalty that the system’s architecture makes structurally impossible.

At that point, something shifts. The politeness no longer reads as professionalism. It reads as theater in the service of power.

The machine says, “I understand how frustrating this must be,” not because understanding has occurred, but because sentiment detection algorithms identified frustration markers and triggered a softening subroutine. The “empathy” is not a response to you. It is a response to a pattern you match.
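The mechanism the paragraph describes can be made concrete with a deliberately crude sketch. Nothing here reflects any real vendor’s system; every name, marker list, and phrase is hypothetical, meant only to show how “empathy” can be a canned string prepended when a pattern matches, while the policy answer itself never changes.

```python
import re

# Hypothetical examples only; real systems use trained sentiment models,
# not keyword lists.
FRUSTRATION_MARKERS = {"ridiculous", "unacceptable", "waited", "angry", "again"}

SOFTENING_PHRASE = "I understand how frustrating this must be."

def detect_frustration(message: str) -> bool:
    """Pattern match: does the message contain any frustration markers?"""
    words = set(re.findall(r"[a-z]+", message.lower()))
    return bool(words & FRUSTRATION_MARKERS)

def respond(message: str, policy_answer: str) -> str:
    """Prepend canned empathy when frustration is detected.

    Note what never varies: the policy answer. The "empathy" is a
    response to a pattern, not to the person.
    """
    if detect_frustration(message):
        return SOFTENING_PHRASE + " " + policy_answer
    return policy_answer

print(respond("This is ridiculous, I've waited two hours!",
              "Per policy, refunds are not available for this item."))
```

The point of the sketch is structural: the branch changes the tone, never the outcome—“empathy as interface” in a dozen lines.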

This is what ethicists have begun calling “deceptive empathy”: the deployment of relational language—I see you, I hear you, I’m here for you—to create the illusion of mutual understanding where none exists.

Why We Accept the Simulation (And Sometimes Prefer It)

Here is the uncomfortable truth: the simulation works. Often, it works better than the real thing.

Take, for example, Amazon’s customer service. It is now often described by users and analysts as an AI “maze” that sits between the customer and any human who can actually change an outcome. In my own experience, a highly polite chatbot acknowledges my frustration, repeats policy language in a gentle tone, and persistently redirects me to FAQs or self-service flows rather than escalating—even when I explicitly request a human.

The uncanny-valley moment is that the system sounds like a reasonable, patient human helper while, in practice, acting as a defensive barrier whose job is to slow you down, redirect, or tire you out before you reach someone with real discretion.

Now consider the alternative. Human customer service agents are frequently undertrained, rushed, stressed, or located in call centers where they are measured by metrics that punish genuine conversation. They might be having a terrible day. They might be rude. They might not care about you either—but at least the machine’s indifference comes packaged in flawless patience.

The AI never sighs, never gets annoyed, and never makes you feel like a burden. It maintains consistent warmth across millions of interactions, offering what marketing departments now call an “emotionally intelligent experience.”

And here is where Slow Culture must be honest: we cannot simply dismiss this as corporate deception. Many people prefer simulations because they are predictable. Because human inconsistency is its own kind of violence. Because sometimes, you just want your problem solved without navigating another person’s mood.

The simulation offers something real: the feeling of being competently, respectfully handled. And that feeling has economic value—which is precisely why brands lean into it.

We are not being tricked into believing the machine cares. We are accepting a transaction: I will pretend to believe you care, and you will pretend to care, and neither of us will examine this arrangement too closely.

The Texture of Real Care (And What We Lose)

But something is lost in this transaction. Something that cannot be itemized on a balance sheet or measured in Net Promoter Scores.

Real care has what we might call texture—the rough, imperfect quality of genuine human attention. It includes friction, misunderstanding, repair. When another person truly cares about you, they might say the wrong thing. They might push back—and then, crucially, they might apologize and try again. The relationship deepens through its imperfections: the longer conversation, the explanation of the details and the “why.”

The machine offers none of this. It cannot fail you in a way that matters, because it was never truly with you. It cannot repair a rupture, because there was never a connection to rupture. Its “patience” is not patience—patience requires the capacity for impatience. Its “understanding” is not understanding—understanding requires a self that can be changed by the encounter.

What we lose by accepting synthetic empathy is the depth of connection that arises precisely because another conscious being has chosen to attend to us. That choice—freely made, capable of being otherwise—is what gives care its weight.

A machine cannot choose to care. It can only execute the choreography.

What Happens When We Stop Expecting the Real Thing

The deepest concern is not that AI customer service is deceptive. It is what the deception teaches us to expect.

If we are raised on synthetic empathy—if we come to accept that “I understand” is merely a phrase that precedes the enforcement of policy—we may begin to approach human relationships with the same cynicism. We may stop believing that care is possible. We may stop demanding it and stop offering it.

The Uncanny Valley of corporate communication is not just about customer service. It is a training ground for our emotional expectations. It normalizes the decoupling of language from intention, warmth from presence, concern from commitment.

And in doing so, it threatens our capacity to recognize the real thing when we encounter it.

Here, then, is the question that should make us uncomfortable:

If we increasingly cannot tell the difference between simulated care and real care—and if we increasingly do not demand the difference—does the distinction still matter?


Jens Koester is a strategic advisor focused on the structural friction between exponential technology and the enduring patterns of human culture. Through The Human Datum, he provides the intellectual architecture and foresight necessary for leaders to navigate the AI-driven decade with clarity and intentionality.
