On Trust and AI — Applied

When You Stop Owning Your Words

A thought experiment about human review, accountability, and the kind of AI world people will actually accept.

A friend once described himself as “emotionally well-intentioned, but operationally bad at it.”

He isn’t cold, and he isn’t careless. He just isn’t fluent in the way some people are, the kind of fluency that lets you read a message, hear the subtext, and respond in a way that makes the other person feel seen instead of managed. He wants to show up well. He just realizes, too late and too often, that what he sent back landed flat.

So imagine you give him an app.

It sits alongside a messaging thread while he’s texting with his partner. It doesn’t replace the conversation, and it doesn’t impersonate him. It watches quietly, like a thoughtful assistant in the corner, and when his partner texts, it surfaces something like this:

Partner: “I don’t know. Today’s been a lot. I’m just tired.”

Emotional context: overwhelmed, not angry; signaling fatigue and a need to be met, not fixed.
They’re asking for: reassurance and presence, not solutions.

Suggested responses:
“That sounds heavy. Want to tell me more, or do you want a distraction? I’m here either way.”
“I’m sorry it’s been like that. I’m right here.”
“Do you want to talk or just sit with it together?”

He reads it, and he pauses. He edits it so it sounds like him, and he adds the one detail only he would know. Then he hits send, and something important happens: even though AI helped, he still did the human part that matters most. He made the call, chose the words, and owned what went out and how it landed.
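The shape of that flow can be sketched in a few lines of code. This is a minimal, hypothetical sketch, not a real app or API: the names are illustrative, the model call is stubbed out, and the one thing it insists on is that the assistant can only propose, while the commit step is gated on an explicit human decision.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    reading: str          # inferred emotional context
    need: str             # what the sender seems to be asking for
    drafts: list[str]     # candidate replies; never auto-sent

def assist(incoming: str) -> Suggestion:
    # In a real app this would call a model; here the inference is stubbed.
    return Suggestion(
        reading="overwhelmed, not angry",
        need="reassurance and presence, not solutions",
        drafts=["I’m sorry it’s been like that. I’m right here."],
    )

def send(draft: str, human_approved: bool) -> str:
    # The commit step belongs to the human. No approval, no message.
    if not human_approved:
        raise PermissionError("no message leaves without a human choice")
    return draft  # in a real app: hand off to the messaging client
```

The design choice is the whole point: `assist` has no path to the outside world, and `send` refuses to run without a human saying yes.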

That scenario feels fine, and honestly, it feels like the kind of “assistive AI” we should want more of. It’s a prosthetic for emotional fluency, but it’s also a teacher. Over time, you start to recognize the patterns yourself. The suggestions stop feeling like foreign lines and start feeling like your own voice. Eventually you need it less, not because the relationship got easier, but because you got better at showing up.

Now we’re going to take it just one small step further, and unlike most of the AI slop you’ve been reading that claims “this changes everything,” this next part really does change the whole picture.

Scenario two: the AI responds for you

Same couple. Same day. Same message.

“I don’t know. Today’s been a lot. I’m just tired.”

This time, the AI doesn’t advise. It acts. It reads the message, infers emotional context, drafts a response, and sends it back immediately. The partner receives something like this:

“I’m really sorry today’s been so heavy. I’m here with you. You don’t have to carry it alone.”

That might even be a great message.

In fact, for a while, it feels like magic. The conversation flows. The edges are sanded down. His partner is warmer, less upset, more quickly reassured. The relationship feels like it suddenly costs nothing, like someone quietly put an electric motor on a bike that used to be exhausting to ride.

And in the background, something else is happening too, mostly invisible until it isn’t. He isn’t getting better at any of this. He isn’t learning how this specific person signals stress, or what reassurance sounds like to them, or how repair actually works when he misses the mark. A system is performing the outward motions of care, but it isn’t building the internal capability that makes care real.

Eventually, something breaks the spell, because spells always break.

Maybe the model misreads frustration as sadness. Maybe it replies too quickly after a hard message in that unnerving, machine-clean way. Maybe it tries to “help” by escalating, setting boundaries, or interpreting a tense exchange as a cue for closure. In the worst case, it sends something that looks like this:

“I don’t think this relationship is working for me.”

Imagine the cold shock of realizing you didn’t break up with your partner. Your phone did.

But the most devastating moment isn’t the breakup line. It’s what happens next, when the partner realizes the last weeks or months of vulnerability and trust weren’t being held by a person at all. They thought they were being met by you. They were being met by a system that doesn’t love them, doesn’t know them, and doesn’t carry any stake in what it breaks.

And that’s the part that’s impossible to talk your way back from.

How do you repair a relationship after the other person learns they’ve been confiding in a proxy? What does an apology even sound like when the core violation wasn’t a clumsy human mistake, but the decision to delegate something that should never have been delegated, without consent, without review, and without accountability?

What actually changed?

Technically, both scenarios involve AI reading messages and generating language. Structurally, they’re different worlds.

In the first world, the AI is a coach, and it’s leverage. It helps a human participate more fully, and it helps them move faster, letting them send the warm response they were struggling to find words for without spending half an hour dithering over a blank text box. In the second, the AI is a substitute. It helps a human be absent while still producing the appearance of presence. That distinction is paramount because relationships aren’t built on correct outputs. They’re built on intent, effort, repair, and a shared belief that the person on the other end is choosing to show up.

When your partner finds out a model has been “being you,” even if it said all the right things, the trust collapse is immediate. Not because the words were wrong, but because the relationship was quietly moved into the hands of something fundamentally non-human, something that cannot care and cannot be accountable. If someone who loves you makes a mistake and hurts you, you can often forgive them because the injury happened inside a relationship that still has mutual investment. If they delegate that responsibility to a system that doesn’t care, and the system hurts you, it lands differently. It feels less like an accident and more like abandonment.

Human review isn’t a formality. It’s the point.

When people talk about “humans in the loop,” it’s often framed as a governance tax. A checkbox. A speed bump. A cost center that slows down the fun part.

But in high-consequence contexts, human review is the mechanism that produces legitimacy.

In scenario one, the human stays inside the decision perimeter. The AI can propose and translate, but it can’t commit you. In scenario two, the system crosses that perimeter and starts making relational moves on your behalf, moves that change the shape of a life. The outcome might be better nine times out of ten, but the tenth time matters more than anyone wants to admit, because it reveals the deeper problem: when things go wrong, there’s no accountable actor who actually made the choice.

Once you see that clearly in an intimate relationship, it becomes difficult not to recognize the same pattern playing out in business.


Pulling it back: what this has to do with enterprises

In the same way that there is a relationship between my friend and his partner, there is a relationship between an enterprise and its customers.

Trust behaves like the operating system of business exchanges, and honestly, it’s really the operating system of human civilization. It’s what allows strangers to cooperate for mutual benefit, what lets us buy from people we’ll never meet, delegate work to teams we don’t personally supervise, and rely on systems we can’t fully understand. Most of the modern world only functions because we’ve built layered mechanisms that let us take calculated risks together, and then recover when those risks go sideways.

That’s what trust really is: the ability to move forward without perfect certainty because you believe there are guardrails, accountability, and recourse.

And that’s why trust isn’t a slogan. Trust is architecture, and you can break it architecturally too.

When an enterprise ships unreviewed AI outputs that materially affect customers, it’s doing the corporate equivalent of scenario two. It’s outsourcing the accountable relationship while still expecting the customer to experience the product as if a responsible party is standing behind it. Even if the system is “usually correct,” the failure mode is uglier than normal software failure, because the customer isn’t just dealing with an error. They’re dealing with the realization that the firm silently delegated judgment to something that cannot explain itself, cannot be held to account, and doesn’t have any stake in what it damages.

There’s a second, quieter consequence here that leaders should sit with. If you can truly remove human accountability from the loop and still deliver what customers need, then you’ve just demonstrated that your enterprise can be replaced with an API call. You’ve turned your firm into a thin wrapper around a commodity capability, and you’ve trained your customers to view you the way your partner would view you after finding out the AI has been “being you”: functional, maybe, but hollow, and far easier to swap out than you thought.

This is an experience that is becoming all too familiar in the age of AI: something that is weirdly smooth until it isn’t. And when it isn’t, there’s no one on the other end of the line, just an uncaring system with no emotional investment and no personal liability for the damage it just did to you.

That strips value out of the enterprise because the value was never the output alone. In the age of AI, the value of a firm isn’t primarily the text it generates, the analysis it produces, or the decision it recommends. The value is the ability to assure a customer that when the AI gets it wrong, someone will be standing in between with professional accountability on the line, holding the system to account, explaining what happened, and taking responsibility for the outcome.

If that isn’t true, then many customers really can replace what they get from your firm with a well-crafted skills pack and a direct subscription to the same underlying models. They won’t do it on day one, but they’ll do it the first time your product hurts them and nobody can own the judgment that caused it.


The takeaway: put humans back where meaning is created

The AI world people will accept isn’t one where humans disappear. It’s one where humans become more capable and more supported without losing authorship and responsibility over the moments that matter.

So the practical design question becomes: where do we put humans back into the loop, not as a compliance ritual, but as a control that preserves trust?

If an output is low-stakes and easily reversible, automation is usually fine. If an output is identity-bearing, high-stakes, or meaning-making, human review stops being “overhead” and becomes the thing of value your customer is actually buying, because it’s the part that keeps the enterprise accountable even when the model isn’t.
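That routing rule is concrete enough to write down. Here is a minimal sketch, with hypothetical names and a deliberately crude notion of “stakes,” of a gate that automates only what is low-stakes and reversible and sends everything else to a human reviewer who owns the outcome:

```python
from enum import Enum, auto

class Route(Enum):
    AUTO_SEND = auto()      # low-stakes, reversible: automation is fine
    HUMAN_REVIEW = auto()   # a human must review and own the output

def route(stakes: str, reversible: bool, identity_bearing: bool) -> Route:
    # Identity-bearing, high-stakes, or irreversible outputs cross the
    # decision perimeter, so they go to a human.
    if identity_bearing or stakes == "high" or not reversible:
        return Route.HUMAN_REVIEW
    # Everything else is safe to automate.
    return Route.AUTO_SEND

# A routine status notification: low-stakes, reversible, not in anyone's voice.
assert route("low", True, False) is Route.AUTO_SEND
# A message sent in your voice, or your firm's: identity-bearing, so reviewed.
assert route("low", True, True) is Route.HUMAN_REVIEW
```

The asymmetry is intentional: any one of the three flags is enough to force review, because the tenth bad outcome costs more than the nine automated good ones save.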

Humans aren’t perfect, but they are accountable. Accountability is the foundation of trust.

That’s how we can build AI products that customers will actually want: not by shipping unreviewed generations and calling it innovation, but by architecting trust into the system so that at every consequential point the customer can feel something simple and stabilizing.

Someone real is still here. Someone owns this. Someone will stand behind it.