About Incontextable

A Knowledge Turn is what happens when knowledge actually does work — captured, refined, applied, and updated for the next person who needs it. It is the basic unit of organizational learning, the mechanism by which knowledge compounds rather than accumulates.

Most organizations have zero effective knowledge turns. Not because knowledge doesn’t move. It moves constantly. But it degrades at every boundary it crosses.

The industrial model didn’t just organize work into departments. It made the hierarchy the knowledge architecture. Information traveled up one chain — filtered at every level, translated by every manager, distilled by every director — and then back down a different chain as instruction rather than understanding. By the time knowledge reached the person who needed to decide, and by the time the decision reached the people who needed to act, both had been processed into something that looked like communication but no longer functioned as communication.

The tools tried to fix this. Email created informal cross-boundary channels. Slack created more of them, faster. Teams created channels for every possible intersection. What they produced was a second parallel knowledge system alongside the hierarchy — unstructured, unaccountable, and impossible to compound. The hierarchy gives you distortion with structure. The tools give you noise without it. Neither produces knowledge turns.

Knowledge Turns measure the health of your information environment the way inventory turns measure the health of a supply chain. High turns mean knowledge is crossing boundaries with integrity intact, improving with each cycle, doing actual work. Low turns — the condition of almost every organization — mean knowledge is moving but degrading, producing the most dangerous organizational condition of all: confident decisions made on distorted information.

That gap — between what was known and what was understood by the time it mattered — is the territory this publication covers.


Why this matters now

We are still running a model of knowledge work that was built for mass production and has stayed intact ever since. The industrial model gave us the job description, the org chart, the KPI, the ticketing system, the knowledge base. Nobody designed the knowledge architecture underneath those systems. It accumulated. It became organizational common sense.

AI doesn’t fix this. It deepens it. Every assumption gets more embedded, more confident, and harder to question. The organization mistakes fluency for understanding and output for insight. The flywheel spins faster inside each department. The cross-boundary turns stay at zero.

The organizations that pull away in the next decade won’t be the ones with the largest AI budgets. They will be the ones that asked the question the industrial model was never designed to answer: what would knowledge work look like if we designed it today, from scratch, for the world we actually live in?

That question is what this publication is trying to answer.


Who is writing this

Alexander J. Cooper has spent thirty years at the intersection of organizational thinking and knowledge work — close enough to the ideas that shaped modern management to watch them get simplified, institutionalized, and eventually mistaken for permanent truth.

That experience produced a specific kind of skepticism. Not toward the ideas themselves, which were genuinely important. Toward what happens to good ideas when they stop being questioned.

The argument in this publication has been thirty years in the making. The moment it describes has finally arrived.


Who should read this

If you have watched a well-funded knowledge system produce confident answers to the wrong questions — this is for you.

If you have sat in a meeting where the AI summary captured everything that was said and nothing that mattered — this is for you.

If you have wondered why every generation of information technology promises transformation and delivers a faster version of the same dysfunction — this is for you.

Subscribe. The flywheel is waiting to engage.
