Who’s Actually Thinking?
Does this sound familiar? You receive a draft document from someone on your team. It’s well structured and covers the key points. But something feels off. The tone is generic. Important details are buried in long paragraphs that say less than they seem to.
You find yourself working backwards to figure out what this person is actually trying to tell you, and which details they believed were important.
You’re mentally ‘de-AI-ing’ the communication to extract the core message.
This isn’t a problem with AI accuracy. It’s a problem with when people use it.
When someone asks AI to draft something before they’ve done the thinking about what needs to be said, the output can be well structured, verbose, and pretty hollow. The task gets completed, but the thinking gets outsourced.
Communication without visible judgment compounds over time: eventually, it becomes harder to tell work someone has actually thought through from work that merely sounds reasonable. At best, this is annoying - at worst, it erodes trust.
For leaders, the stakes are higher.
If you’re checking with AI before forming your own view on decisions that shape organizational direction, you’re not just delegating the writing. You’re delegating the judgment. The AI’s framing becomes your framing. Its priorities quietly become yours. Over time, that muscle weakens.
Whether AI strengthens or weakens capability comes down to a simple principle: never outsource your thinking and judgment to AI.
You can use it as a thinking partner. But the thinking has to come first.
Why this happens
When you ask AI a question before you’ve formed your own view, the persuasive nature of the interaction takes over. The response is structured, confident, and internally coherent. It reads like analysis from someone who has already done the thinking.
If your own view is still forming, it’s easy to default to what’s in front of you. Not because it’s correct, but because it feels credible.
The natural language interface makes this particularly seductive. You’re not interrogating data or testing assumptions. You’re reading something that sounds like a knowledgeable colleague explaining their conclusion. The same cognitive shortcuts apply. Scrutiny drops.
One AI provider has started naming this dynamic explicitly. Anthropic recently published Claude’s Constitution, which includes a principle called “epistemic integrity”. The idea is that AI should strengthen human understanding rather than undermine it.
That’s a meaningful step. But it can’t solve the problem at a platform level.
Even AI designed with epistemic integrity in mind can’t stop people from using it as a shortcut past their own thinking. This is an organizational behavior issue, not a tooling issue.
Never outsource your thinking and judgment
The principle itself is straightforward.
For decisions, form your own view first. Work through what matters. Weigh the trade-offs. Reach a tentative conclusion. Then use AI to challenge that view. Ask what you might be missing. Test your assumptions. Explore counterarguments. Used this way, AI becomes a sparring partner, not a substitute.
For creative work, the same logic applies. Start with intent. What’s the core message? What actually matters to this audience? Get that clarity first. Then use AI to refine the expression, tighten the structure, or surface gaps.
If AI is generating the idea, you’ve outsourced the thinking that creates value.
You can see the difference in the work. When someone has done the thinking first, their judgment is explicit. The point of view is clear. Choices are visible.
What leaders need to do
Make this principle core to how work gets done.
When you’re setting up pilots or evaluating AI use cases, success shouldn’t just be measured in efficiency gains. It should include whether people are using AI in ways that preserve and strengthen judgment.
Build it into usage guidelines. ‘Use AI as a thinking partner, not a replacement for judgment’ belongs alongside rules on data security and approved tools.
Make it cultural. Name good examples when you see them. Call out hollow work as a learning moment, not a punishment. This is what ‘AI on purpose’ looks like in practice.
Clear ownership matters. Network activation matters. But if people own the process while outsourcing the thinking, you get adoption without capability.
Anthropic describes epistemic integrity as empowering human thought rather than degrading it. Leaders need to translate that principle into everyday practice.
This is how you protect the capability you’re trying to build.