The Two Fatal Flaws of the Data Age (Part 2)
Why AI Doesn’t Fix the Problem (It Exposes It)
The Data Age rests on two beliefs.
First, that data produces knowledge.
Second, that data explains people.
Both are wrong.
What’s interesting about AI is not that it fixes either flaw — it’s that it makes them impossible to ignore.
AI Is Not the Next Step — It’s the Stress Test
Most organizations talk about AI as if it’s a smarter database.
More data.
Better models.
Faster insights.
But AI doesn’t change the rules of knowledge. It simply amplifies the consequences of misunderstanding them.
AI systems do not generate explanations. They generate outputs that resemble explanations.
They work entirely inside the same assumption space:
Past data stands in for understanding
Patterns stand in for causes
Correlation stands in for meaning
If your underlying data is assumption-laden — and it always is — AI doesn’t correct that. It scales it.
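That third substitution is easy to demonstrate. Here is a toy sketch in Python (invented numbers, the classic confounder setup): two series with no causal link still correlate strongly, because a hidden third variable drives both.

```python
# Toy confounder: neither series causes the other, yet they correlate.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(20, 5, size=1_000)            # the hidden cause
ice_cream_sales = 3.0 * temperature + rng.normal(0, 5, 1_000)
drownings = 0.5 * temperature + rng.normal(0, 1, 1_000)

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation: {r:.2f}")  # strong, despite zero causal connection
```

Any pattern-matcher, human or machine, will find that relationship. None of them will find the temperature.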
Why AI Feels Smart (Until It Doesn’t)
AI feels intelligent because it is very good at producing legible answers.
Clear sentences.
Confident tone.
Convincing structure.
But legibility is not knowledge.
David Deutsch’s standard still applies:
Good explanations are hard to vary. If an answer can be changed freely and nothing breaks, there was no knowledge in it.
AI outputs are extremely easy to vary.
Change the prompt.
Change the context window.
Change the temperature.
Nothing “breaks” — because there was no explanation holding it together in the first place.
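A minimal sketch of how easy the varying is, assuming the official OpenAI Python client (the model name and question are illustrative placeholders):

```python
# Same question, three temperatures, three fluent and confident answers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Why did our Q3 churn rate rise?"
for temperature in (0.0, 0.7, 1.2):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
        temperature=temperature,
    )
    # Each setting produces a legible, plausible story. Nothing forces
    # them to agree, because no explanation anchors any of them.
    print(temperature, reply.choices[0].message.content[:80])
```

Every setting returns a defensible narrative about churn. That is exactly what "easy to vary" means.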
The Psychological Trap Gets Deeper
Rory Sutherland’s critique becomes sharper with AI.
Humans already over-trust numbers.
Now we over-trust well-phrased text.
AI doesn’t understand motivation, intent, or meaning. It predicts what sounds right given the data it was trained on. That’s impressive — but it’s not psychological insight.
The danger is not hallucination.
The danger is plausibility.
When an answer sounds reasonable, we stop asking why.
AI Turns Assumptions into Infrastructure
Here’s the real shift.
Before AI:
Assumptions lived in fields, schemas, and dashboards
With AI:
Assumptions live in models, prompts, and training data
They are harder to see. Harder to challenge. Harder to explain.
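A sketch of the shift, using hypothetical names: the same definitional assumption, first sitting in plain sight in a schema, then buried inside a prompt template.

```python
# Before AI: the assumption is visible. Anyone reading the schema can
# challenge why "engaged" means "logged in within the last 30 days".
SCHEMA = """
CREATE TABLE users (
    id         INTEGER PRIMARY KEY,
    is_engaged BOOLEAN  -- logged in within the last 30 days
);
"""

# With AI: the same assumption is folded into a prompt template. It now
# shapes every answer the system gives, but no dashboard displays it.
PROMPT_TEMPLATE = (
    "You are a retention analyst. A user is 'engaged' if they logged in "
    "within the last 30 days. Given this activity log, explain why the "
    "user churned:\n{activity_log}"
)
```

Same assumption, same arbitrariness. Only the visibility changed.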
The system still doesn’t know why something is true — but now it can argue back.
This is not intelligence.
It’s confidence without accountability.
Why “Better Data” Still Won’t Save You
The usual response is predictable:
“We just need cleaner data.”
“We need better prompts.”
“We need guardrails.”
But none of those address the core problem.
AI doesn’t fail because it lacks information.
It fails because it lacks explanation and criticism.
And no amount of data fixes that.
What Actually Changes in an AI World
AI forces a choice organizations have avoided for decades.
You can:
Keep treating information as self-explanatory
Keep demanding answers instead of explanations
Keep optimizing for legibility
Or you can accept a harder truth:
Understanding cannot be automated.
AI can assist explanation.
It cannot replace it.
The human work — interpretation, disagreement, judgment, context — becomes more important, not less.
The Real Risk of AI
The risk is not that AI will be wrong.
The risk is that it will be convincingly wrong, at scale, inside systems that already confuse memory for knowledge and patterns for causes.
AI doesn’t introduce a new flaw.
It removes the last excuse for ignoring the old ones.
Where That Leaves Us
The Data Age taught us to trust what is measurable.
The AI Age tempts us to trust what is articulate.
Both are mistakes.
Data remembers.
AI predicts.
Only people explain.
And explanation is still the only thing that turns information into knowledge.
