The Curse of Proxy Linearity
Knowledge doesn’t flow through predefined channels
Organizations don’t actually know how knowledge moves through them.
They know how information is supposed to move. The org chart. The reporting structure. The meeting cadence. The documentation system. The approval workflow. These describe the designed path — the linear sequence from input to decision that somebody drew on a whiteboard and called a process.
What actually happens is different. The insight that changes the decision comes from an unexpected conversation in a hallway. The understanding that prevents the mistake lives in someone who wasn’t invited to the meeting. The context that would have reframed the entire problem existed in three different people who never spoke to each other. The decision got made anyway, on incomplete understanding, and the organization filed the outcome and moved on.
This is not a failure of execution. It is a failure of design. And it has been designed in from the beginning.
The Linear Approximation
When organizations began building information systems, they faced a genuine constraint. Knowledge — the real thing, the understanding that exists in people’s minds, the pattern recognition that develops through experience, the contextual judgment that resists articulation — couldn’t be put in a system. It was too distributed, too tacit, too dependent on the specific person holding it and the specific moment in which it was needed.
So they built systems for the next best thing. Documents. Records. Processes. Artifacts that captured the explicit residue of knowledge — the part that could be written down, stored, and retrieved. And they designed those systems around the assumption that knowledge moved linearly. From source to recipient. From input to output. From the person who knew to the person who needed to know, along a path that could be mapped and managed.
The approximation was reasonable. It was the best available option given the tools. A document is better than nothing. A process is better than chaos. A linear path is better than no path.
But an approximation treated as truth produces a specific kind of blindness. The organization stopped asking whether the approximation was accurate. The linear model became the design constraint for every system built on top of it. Which meant every system was designed for a version of knowledge flow that was always a simplification of the real thing.
What Knowledge Actually Does
Knowledge in organizations is not linear. It is emergent.
Understanding doesn’t travel from A to B along a designed path. It crystallizes at the intersection of partial understandings held by different people at different times in different functions. The person who understands the customer and the person who understands the technical constraint and the person who remembers what happened last time — when those three people are in the same conversation at the right moment, something becomes possible that none of them could produce individually.
That emergence can’t be stored. It can’t be retrieved. It can’t be documented after the fact in a way that recreates it. The filing cabinet can hold the output of the conversation — the decision, the record, the artifact — but not the understanding that produced it. The understanding was an event. Events don’t file.
This is what quantum means in an organizational context. Not the physics — the principle. The state of knowledge in an organization cannot be fully described at any given moment. It exists in superposition across the people who hold pieces of it. It collapses into a decision when the right collision happens. Trying to manage that by building better linear paths is like trying to catch light by building a better box.
The organization has been building better boxes for fifty years.
The Cost of the Approximation
The cost is visible once you know what to look for.
The decision that keeps getting made wrong. Not because people are incompetent but because the understanding that would change it exists somewhere in the organization and never reaches the person making the decision. The path was never designed. The collision never happened.
The project that fails in implementation. The strategy that was coherent at the executive level and incoherent at the point of execution. The understanding that the strategy required never crossed the functional boundary between the people who made it and the people who had to carry it out.
The institutional knowledge that walks out the door when someone leaves. Not because it wasn’t documented — sometimes it was documented thoroughly — but because the documentation captured the explicit residue of the understanding, not the understanding itself. The tacit knowledge, the judgment, the pattern recognition that made the person valuable — that lived in their head, it left with them, and the filing cabinet keeps a perfectly organized record of everything except what actually mattered.
Every organization has all three of these failures. Most have them operating simultaneously, continuously, at significant cost. None of it appears on the dashboard because the dashboard measures the linear approximation, not the underlying reality.
What Designing for Emergence Looks Like
The question nobody has designed for is: how do you create the conditions for the right collision at the right time?
Not how do you store the information so it can be retrieved. How do you hold the context — the distributed partial understandings across functions, across time, across people — in a way that surfaces the intersection when a decision needs it.
This is what AI makes newly possible. Not because it’s smarter than the people in the organization. Because it can hold context at a scale and across a scope that no human and no previous system could manage simultaneously. It can notice that the understanding the sales team needs exists in a conversation that happened in customer success six months ago and a pattern that the finance team has been seeing for two quarters and a constraint that the engineering team documented last year. It can surface that collision rather than waiting for the hallway conversation that may never happen.
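The shape of that capability can be made concrete with a toy sketch. Everything below is hypothetical and illustrative: the fragments, the teams, and the word-overlap scoring are stand-ins (a real system would use semantic embeddings and far richer context), but the sketch shows the inversion the essay describes — matching a pending decision against partial understandings scattered across functions, rather than filing documents and hoping someone retrieves them.

```python
# Toy sketch of "surfacing the collision": match a decision against
# context fragments held by different teams. Hypothetical data and a
# deliberately crude relevance score; illustrative only.

from collections import namedtuple

Fragment = namedtuple("Fragment", ["team", "age_months", "text"])

# Context the organization already holds, scattered across functions.
fragments = [
    Fragment("customer_success", 6,
             "enterprise clients churn when onboarding exceeds thirty days"),
    Fragment("finance", 2,
             "discounting to close enterprise deals erodes margin over renewal cycles"),
    Fragment("engineering", 12,
             "onboarding automation is blocked by the legacy provisioning constraint"),
    Fragment("marketing", 1,
             "new campaign targets mid-market, not enterprise"),
]

def overlap_score(query: str, text: str) -> int:
    """Count words shared between the decision and a fragment (toy relevance)."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def surface_collision(query: str, fragments, top_n: int = 3):
    """Return the most relevant fragments, one per team, so the
    decision-maker sees intersecting partial understandings at once."""
    ranked = sorted(fragments,
                    key=lambda f: overlap_score(query, f.text),
                    reverse=True)
    seen_teams, collision = set(), []
    for f in ranked:
        if overlap_score(query, f.text) > 0 and f.team not in seen_teams:
            collision.append(f)
            seen_teams.add(f.team)
        if len(collision) == top_n:
            break
    return collision

decision = "should sales discount to close the enterprise onboarding deal"
for frag in surface_collision(decision, fragments):
    print(f"{frag.team} ({frag.age_months} mo ago): {frag.text}")
```

The design point is the query direction: the system starts from the decision and pulls distributed context toward it, instead of starting from a document and waiting for someone to navigate to it in time.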
That is not a filing cabinet. That is not storage and retrieval. That is a system designed for the actual nature of organizational knowledge rather than for the linear approximation that was the best available proxy for fifty years.
Designing for emergence means starting with different questions. Not where does this information live. What understanding needs to exist in this person at this moment for this decision to go well. Not how do we document what we know. How do we create the conditions for knowledge to crystallize when it’s needed rather than after it’s too late.
Not what is the path. What is the collision.
The Design Constraint That Changes Everything
Every system built on the linear approximation asked: how do we move information from where it is to where it needs to be?
The right question is: how do we create the conditions under which understanding emerges in the people who need it, at the moment they need it, from the distributed context the organization already holds?
Those are different questions. They produce different systems. They require different metrics. They cross functional boundaries that the linear model never crossed because linearity respects the path and emergence doesn’t respect anything except the collision.
The organizations that start asking the right question will build something the current model cannot produce. Not incrementally better knowledge management. A different category of organizational capability — one where the understanding needed for a decision is available at the moment the decision is made, not stored somewhere that nobody navigates to in time.
The approximation served its purpose. It was the best available option for fifty years.
It is no longer the best available option.
The question is whether enough people notice before the next generation of systems gets built on the same wrong constraint, faster and more expensively, with better metrics for the thing that was never the point.