Required Non-Facts: How Enterprise Systems Quietly Poison Reality
How Placeholder Data Quietly Rewrites Reality
Every large organization is full of facts that everyone knows aren’t true.
They’re not lies.
They’re not mistakes.
They’re not even incompetence.
They’re required.
Welcome to the strange world of Required Non-Facts.
A Required Non-Fact is a piece of information that must exist in a system in order for work to proceed—even though everyone involved knows it doesn’t accurately describe reality.
You don’t enter it because it’s true.
You enter it because the form won’t submit without it.
The Checkbox That Ate Reality
You’ve seen these:
A project start date chosen because something must go in the field.
A priority level selected because the dropdown insists on one—even when everything is “sort of urgent, sort of not.”
A customer industry picked from a list that stopped making sense in 2014.
A “root cause” filled in before anyone has time to actually understand the problem.
A confidence level, estimate, or forecast that everyone quietly treats as fictional.
No one believes these values.
But the system does.
And the system is very serious about them.
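To make that concrete, here is a minimal sketch of the mechanism in Python. Every field name and value is invented for illustration: a record that cannot be saved until the blanks are filled, whether or not anyone actually knows what belongs in them.

```python
from datetime import date

# Hypothetical form schema: every field is required; "unknown" is not an option.
REQUIRED_FIELDS = ["project_name", "start_date", "priority", "root_cause"]

def save_record(record: dict) -> dict:
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    if missing:
        # The system blocks real work until every blank is filled.
        raise ValueError(f"Cannot submit: missing required fields {missing}")
    return record

# What the person actually knows: the project exists. Everything else is a guess,
# entered only because the form refuses to submit without it.
save_record({
    "project_name": "Atlas migration",       # real
    "start_date": date.today().isoformat(),  # placeholder: no date has been agreed
    "priority": "Medium",                     # placeholder: "sort of urgent, sort of not"
    "root_cause": "User error",               # placeholder: nobody has investigated yet
})
```

Nothing in this sketch is malicious. The validation rule is doing its job, and the placeholders are doing theirs.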
Why Required Non-Facts Are So Dangerous
Individually, Required Non-Facts feel harmless.
A little white lie to get past the guardrail.
A placeholder.
A “we’ll fix it later.”
Collectively, they are catastrophic.
Because enterprise systems don’t know the difference between truth and compliance.
Once a Required Non-Fact enters the system, it becomes:
Searchable
Reportable
Auditable
Optimizable
Feedstock for dashboards
Training data for AI
Evidence in future arguments
At that moment, fiction hardens into infrastructure. And infrastructure does not forget.
The Accumulation Problem
The real danger isn’t that Required Non-Facts exist. It’s that they accumulate quietly, like plaque in an artery.
Each system adds a few more:
CRM needs deal stages.
Ticketing systems need categories.
Project tools need estimates.
Risk systems need scores.
Compliance tools need attestations.
AI systems need labeled data.
None of these fields are wrong in theory. They’re just too certain for the reality they’re trying to represent. So humans do what humans always do under constraint:
they improvise.
And then the system treats improvisation as fact.
When Optimization Optimizes the Fiction
Here’s where things get interesting—and expensive.
Once Required Non-Facts exist at scale, organizations start optimizing around them.
Dashboards light up.
KPIs improve.
Executives nod.
Consultants celebrate.
But what’s being optimized isn’t reality. It’s the internal consistency of the lie.
You end up with:
Sales forecasts that get “better” without getting more accurate.
Risk scores that look precise but don’t reduce actual risk.
Productivity metrics that rise while real work slows down.
AI models that confidently learn patterns from data everyone knew was made up.
This is how organizations become locally rational and globally delusional. Everyone is doing what the system rewards. No one is doing what reality requires.
Why Humans Aren’t the Problem
It’s tempting to blame people for “bad data.” That’s unfair. Humans are responding rationally to irrational constraints.
If a system demands certainty where none exists, humans will provide something that looks like certainty.
If a form requires an answer, people will give an answer. If ambiguity isn’t allowed, fiction steps in. The problem isn’t human dishonesty. It’s systemic intolerance for ambiguity.
Modern enterprise software is deeply uncomfortable with “I don’t know,” “it depends,” and “this will change.”
Reality, unfortunately, is built almost entirely out of those things.
Required Non-Facts and AI: Gasoline Meets Fire
AI makes this worse—not better. AI systems don’t know which data fields were filled out reluctantly. They don’t know which numbers were guesses. They don’t know which categories were chosen to make the workflow shut up.
They see patterns.
They infer meaning.
They extrapolate confidently.
Which means Required Non-Facts become training data for automated misunderstanding. AI doesn’t hallucinate out of nowhere. It hallucinates from what we gave it. And what we gave it includes years of structurally coerced fiction.
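A toy, hypothetical illustration of that dynamic, with the tickets and categories entirely made up and a simple majority count standing in for whatever model or dashboard sits downstream:

```python
from collections import Counter

# Hypothetical ticket export. "root_cause" was a required dropdown, so agents
# picked the default ("User error") whenever the real cause was still unknown.
tickets = [
    {"summary": "checkout timeout",      "root_cause": "User error"},      # coerced default
    {"summary": "invoice totals off",    "root_cause": "User error"},      # coerced default
    {"summary": "login loop on SSO",     "root_cause": "User error"},      # coerced default
    {"summary": "data export truncated", "root_cause": "Configuration"},   # actually investigated
]

# Any model, dashboard, or baseline sees only the recorded values,
# not the reluctance behind them.
counts = Counter(t["root_cause"] for t in tickets)
dominant_cause = counts.most_common(1)[0][0]

print(counts)           # Counter({'User error': 3, 'Configuration': 1})
print(dominant_cause)   # 'User error' -- the placeholder, now the learned "pattern"
```

The "pattern" the downstream system learns is the dropdown's default, not the world.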
The Real Cost: Reality Drift
Over time, Required Non-Facts cause something more subtle than bad decisions.
They cause reality drift.
The organization slowly stops arguing about what is actually happening and starts arguing about what the system says is happening.
Meetings become debates between dashboards.
Judgment gives way to metrics.
Context gets labeled “anecdotal.”
Experience becomes “unstructured data.”
Eventually, reality loses.
Not because it’s wrong—
but because it’s harder to enter into a form.
Why “Better Data Hygiene” Won’t Save You
This is where most organizations reach for familiar remedies:
More validation rules
Better governance
Cleaner schemas
Tighter definitions
More required fields (ironically)
This only increases the pressure to fabricate certainty. You cannot solve a meaning problem with stricter formatting. Required Non-Facts are not a data quality issue. They are a context design failure.
The InContextable View
Viewed through a context lens, Required Non-Facts are a warning sign:
Your systems are demanding answers faster than your organization can form understanding.
They are optimizing for legibility over truth.
For closure over learning.
For efficiency over sensemaking.
The fix is not fewer systems.
And it’s definitely not more dashboards.
The fix is designing systems that:
Allow ambiguity to exist without punishment
Distinguish unknown from known
Carry rationale alongside values
Treat context as first-class, not optional
Preserve uncertainty instead of forcing premature certainty
In other words:
Stop requiring non-facts.
Start supporting interpretation.
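What might that look like in a data model? One possible sketch, and only a sketch, with all names invented: a field type that can say "unknown" out loud and that carries its certainty and rationale alongside the value.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Certainty(Enum):
    UNKNOWN = "unknown"      # a legitimate, storable state, not a validation error
    ESTIMATE = "estimate"    # a guess that is expected to change
    CONFIRMED = "confirmed"  # someone actually verified this

@dataclass
class ContextualField:
    """A value that carries its own certainty and rationale, not bare false precision."""
    value: Optional[str] = None
    certainty: Certainty = Certainty.UNKNOWN
    rationale: str = ""      # who said so, on what basis, when it should be revisited

# "We don't know yet" can now be recorded without inventing anything.
root_cause = ContextualField(rationale="Investigation scheduled; no findings yet.")

# A guess stays labeled as a guess, with its reasoning attached.
start_date = ContextualField(
    value="2025-Q3",
    certainty=Certainty.ESTIMATE,
    rationale="Rough planning figure from the kickoff call; not a commitment.",
)
```

The design choice that matters is small: "unknown" is a first-class, queryable state rather than a validation failure, and a guess never silently loses its label as a guess.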
The Line That Matters
Every Required Non-Fact is a small betrayal of reality. One or two don’t matter. Thousands do. Because once fiction becomes required, truth becomes optional. And no organization survives that for long.
