Move Fast and Who Knows What Breaks
Internet Speed gave us a decade to notice what was breaking. AI doesn't.
Move fast and break things. We knew things would break. What we didn't account for was losing the ability to know what broke, when it broke, or whether it was breaking at all.
That's the world that developing at the Speed of AI has delivered. And it arrived before most organizations noticed they were in it.
In the late 1990s, "developing at Internet Speed" felt like the most dangerous idea in business. It wasn't. It was still humans moving fast. Humans were cutting corners, shipping earlier, iterating in public — but humans were still in every step. The judgment, the context, the understanding of what was being built and why — that was still in the room, even if the room was on fire.
What's happening now is different in kind, not just degree. It's not humans moving faster. It's execution happening at a scale and speed that removes humans from the loop entirely — not by choice, but by arithmetic. You cannot verify what you cannot observe. You cannot observe what moves faster than human attention can follow.
A new economics paper from MIT, Washington University, and UCLA — "Some Simple Economics of AGI" by Christian Catalini, Xiang Hui, and Jane Wu — has been making the rounds in the right circles. It deserves a wider one.
The paper's central argument: as AI decouples cognition from biology, the binding constraint on economic growth is no longer intelligence. It is verification bandwidth — the scarce human capacity to validate outcomes, audit behavior, and understand what the systems we've built are actually doing.
Catalini and his colleagues arrived at this conclusion through economic modeling. The work here has been arriving at the same place through organizational practice — specifically, through the question of what happens to knowledge integrity when the systems organizations depend on were never designed to preserve it in the first place. [The Risk That Doesn't Have a Date explores this in full.]
When two lines of inquiry converge independently, it is usually a signal that something real is underneath. This is that signal.
Everything Is Happening At Once
The dangerous thing is not that any one of these is happening. It's that all of them are happening simultaneously, at speed, and the window for doing something about it is closing faster than most organizations have noticed.
Consider what is converging right now, in the same moment, inside the same organizations:
Agents are coordinating agents. Until recently, AI felt like a tool you handed tasks to one piece at a time. Something shifted in late 2025. Agents stopped feeling like tools and started feeling like coworkers — ones you could brief, send away, and check back in with. The next step, already underway, is agents that don't just execute long-running tasks but coordinate with other agents to do it. The human is no longer in the loop between steps, only at the top and bottom. Everything in the middle is unmeasured, moving fast, and largely invisible. (A sketch of this shape follows the list below.)
The Codifier's Curse is running at full speed. Catalini names a dynamic that most organizations are living inside without knowing it. The people with the deepest tacit knowledge — the senior practitioners, the domain experts, the people who know why the decision was made and not just what it was — are precisely the people being asked to create training data, write evals, and document processes that will eventually automate their domain. They are doing the work of their own replacement, rationally and willingly, because it's valuable work that gets rewarded in the short term. The cruel mechanic: the better you are at your job, the more valuable your codification work is, and therefore the faster you accelerate your own obsolescence. Expertise is being weaponized against itself.
The Missing Junior Loop is already broken. Human expertise is a stock that gets built through the friction of doing — through entry-level work, through mistakes made under supervision, through the apprenticeship pipeline that has always been how organizations grew their next generation of senior practitioners. That pipeline is being cut. When AI handles tier-1 work, the junior role that used to be the first rung of the expertise ladder disappears. Organizations are eliminating the very process by which they would have built the verifiers they will desperately need. They won't notice for five years. By then it will be too late to rebuild quickly.
Shadow AI is already proliferating. This is the Trojan Horse at the individual level. Employees are running agents to do work that used to take days in hours, delivering output that looks identical to what they would have produced, and the organization has no mechanism to detect it. The deliverable passes review. The metrics look fine. The work gets done. But the organizational understanding that should have formed through the work — the tacit knowledge that develops through doing, the judgment that builds through struggle — never forms. The employee isn't learning. The organization isn't learning. The output exists. The understanding behind it is hollow.
And all of this is landing on infrastructure that was already broken. The CHEF environment — the Complex, Hidden-dependency, Expensive, Fragile technology stack that most organizations have accumulated over thirty years of applying technology to the explicit layer while the tacit layer evaporated [explored in full here] — was already degraded before any agent touched it. Copilot doesn't know why the decision was made. It only knows what's in SharePoint. And what's in SharePoint is a graveyard of documents that nobody updated, built on assumptions that nobody preserved, reflecting a version of the organization that may no longer exist.
Each one of these alone would be manageable. All of them at once, moving at the Speed of AI, on a broken foundation — that is the actual danger.
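To make the first of those concrete, here is a minimal sketch of the orchestration pattern described above. It is illustrative only, assuming a simple sequential pipeline; the class names and structure are invented for this example, not any real framework's API. The point is structural: the human appears at the brief and at the deliverable, and nothing between those two points is observed.

```python
# A minimal sketch of agents coordinating agents. All names here
# (Orchestrator, WorkerAgent) are illustrative assumptions, not a
# real framework's API.

from dataclasses import dataclass, field


@dataclass
class WorkerAgent:
    """A sub-agent that executes one step and hands off to the next."""
    name: str
    log: list = field(default_factory=list)  # internal state; nobody reads this

    def execute(self, task: str) -> str:
        result = f"{self.name}: handled '{task}'"
        self.log.append(result)  # unobserved intermediate activity
        return result


class Orchestrator:
    """Coordinates workers. No human checkpoint between steps."""

    def __init__(self, workers: list[WorkerAgent]):
        self.workers = workers

    def run(self, brief: str) -> str:
        # The human appears here (the brief) ...
        task = brief
        for worker in self.workers:
            task = worker.execute(task)  # agent-to-agent handoff
        # ... and here (the deliverable). Everything in between is unmeasured.
        return task


if __name__ == "__main__":
    pipeline = Orchestrator(
        [WorkerAgent("research"), WorkerAgent("draft"), WorkerAgent("review")]
    )
    print(pipeline.run("quarterly competitive analysis"))
```

Everything a reviewer sees is the final return value. The per-step logs exist, but no process in this sketch ever inspects them — which is the shape of the problem, not a bug in the code.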
The Measurability Gap Is an Organizational Problem, Not Just an Economic One
Catalini's paper formalizes something called the Measurability Gap — the widening structural distance between what agents can execute and what humans can afford to verify. The paper models this as an economic phenomenon with policy implications for governments, investors, and firms.
That framing is correct and important. But the Measurability Gap is also, right now, today, an organizational problem that practitioners are living inside without a name for it.
Every organization deploying AI agents is creating unmeasured activity. Not because they chose to — because they had no choice. The execution scales. The verification doesn't. The gap opens automatically, structurally, the moment you deploy an agent capable of acting faster than a human can observe.
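A toy calculation makes that arithmetic visible. The numbers below are assumptions chosen for illustration, not figures from the paper: agent throughput grows as deployment compounds, human review capacity stays flat, and the unverified backlog is whatever falls in between.

```python
# Toy model of the Measurability Gap. All numbers are illustrative
# assumptions, not data from Catalini, Hui, and Wu.

ACTIONS_PER_AGENT_PER_WEEK = 1_000   # assumed agent throughput
REVIEWS_PER_HUMAN_PER_WEEK = 200     # assumed human verification capacity
REVIEWERS = 5                        # verification headcount stays flat

agents = 10
backlog = 0  # actions executed but never verified

for week in range(1, 9):
    agents = int(agents * 1.5)                 # assumed deployment growth
    executed = agents * ACTIONS_PER_AGENT_PER_WEEK
    verified = REVIEWERS * REVIEWS_PER_HUMAN_PER_WEEK
    backlog += max(0, executed - verified)     # the gap compounds
    print(f"week {week}: {agents:>4} agents, unverified backlog {backlog:,}")
```

Execution grows geometrically while verification stays constant, so the backlog never plateaus; it compounds. That is the automatic, structural opening described above.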
What accumulates in that gap is not just risk in the abstract economic sense. It is the specific organizational knowledge that was never captured — the why behind the what, the reasoning behind the decision, the context that would allow someone to understand not just what the agent did but whether it did the right thing for the right reasons in a way that can be trusted going forward.
This is knowledge integrity failing at a new speed and a new scale. The failure mode is the same one that killed the Chief Knowledge Officer thirty years ago — invisible, undated, unattributable. The difference is the rate of accumulation. What used to take years now takes quarters. What used to take quarters now takes weeks.
The Window
Catalini's paper identifies three countervailing forces that can prevent the drift toward what it calls a Hollow Economy — high nominal output, collapsing actual value, organizations generating activity that looks like understanding and functions as sediment.
Those forces are observability, accelerated mastery, and graceful degradation. They operate at the infrastructure and policy level. They are real and they matter.
But there is a more immediate intervention available, at the organizational level, before any infrastructure exists and before any policy gets written.
It is the question of who, exactly, is responsible for what your organization actually knows.
Right now, in most organizations, the honest answer is nobody. [The organizational response to that question — the Knowledge Integrity Officer and the Knowledge Steward — is explored here.]
The security team is trying to stop employees from pasting internal data into Claude and ChatGPT. That concern is legitimate. But the reason employees route around internal systems is that the internal systems don't give them anything useful. You can lock down the perimeter all you want. If the knowledge infrastructure inside the perimeter is so degraded that people have to go outside to get work done, you haven't solved the problem. You've made the workaround more expensive and more furtive.
The KIO isn't the security team's adversary. It's the function that makes the security team's job actually solvable — because employees who have good internal knowledge infrastructure don't need to take the risk in the first place.
What the Speed Changes
Internet Speed changed the pace of development, but humans could still notice what was breaking and course-correct. The knowledge infrastructure that degraded during that period degraded slowly enough that organizations could pretend it wasn't happening and mostly get away with it.
The Speed of AI doesn't offer that grace period.
Agents coordinating agents means the unmeasured activity is no longer linear — it's compounding. The Codifier's Curse means the tacit knowledge being converted into training data is gone from the organization the moment it's captured — it doesn't return when the model improves. The Missing Junior Loop means the human capacity to verify, already scarce, is being structurally eroded at the same moment verification becomes the most valuable thing a human can do.
These forces don't wait for the organization to notice them. They operate regardless of whether anyone is watching. That is precisely what makes them dangerous.
The organizations that build the function responsible for knowledge integrity now — before the crisis, before the feedback wave, before the board asks how they let this happen — will have something the others won't.
They will know what's actually happening to them.
In the age of agents coordinating agents, that is no longer a nice-to-have. It is the thing that determines whether the organization is still capable of understanding itself when the wave arrives.
The economists are building the model. The practitioners need to build the practice. If you are already doing this work — without the title, without the mandate, inside an organization that doesn't have language for what you're solving — Unmeasured & Unmeasurable (U&U) is a Slack community for people who can see this coming. [Join here.]
Incontextable is a publication about organizational knowledge, technology design, and why the gap between the two keeps getting wider.