The Risk That Doesn’t Have a Date
Your organization has a CISO. It doesn't have a KIO. That gap is about to get very expensive.
Most organizations of any size now have a Chief Information Security Officer.
The CISO has a budget, a team, a board presentation slot, and a mandate that has expanded with every headline breach and every new compliance requirement. The risk is taken seriously because the failure mode is visible — it has a date, a cost, a notification requirement, and consequences that are immediate and attributable.
For a brief moment in the late 1990s, some organizations also had a Chief Knowledge Officer.
The CKO was trying to solve a different problem — how to capture, preserve, and leverage what the organization actually knew, so that institutional wisdom didn’t walk out the door with every retirement, so that knowledge accumulated rather than evaporated, so that the organization got smarter over time rather than just busier.
The intellectual foundation for this came largely from Ikujiro Nonaka and Hirotaka Takeuchi’s 1995 book The Knowledge-Creating Company, which argued that Western management theory had been thinking about organizational knowledge in the wrong way. The prevailing assumption was that knowledge was explicit — something that could be captured in documents, procedures, systems, best practices. If you could write it down and store it, you could manage it.
What Nonaka and Takeuchi argued was that the most valuable organizational knowledge is tacit — embedded in practice, in judgment, in the understanding that comes from doing something repeatedly in a specific context. It lives in people and in the relationships between people. It cannot be fully extracted and stored because extraction is precisely what destroys what makes it valuable. The master craftsman’s knowledge of when the metal is ready is not in any manual. The account manager’s understanding of what a particular client actually needs, as distinct from what they say they need, is not in the CRM.
Their SECI model — moving knowledge between tacit and explicit forms through socialization, externalization, combination, and internalization — was essentially a design framework for the conditions under which tacit knowledge becomes shared organizational understanding. Not capture it. Cultivate the conditions for it.
What happened to their ideas is instructive. The explicit half of the model got productized into knowledge management systems, wikis, intranets, and document repositories. The tacit half — the part that was actually the important half — got ignored because it couldn’t be turned into software. The conditions for cultivating tacit knowledge are social, structural, and cultural. They don’t ship in a box.
The Learning Management System is the clearest example of this failure. The LMS tracks completion, not understanding. It measures whether the course was clicked through, not whether anything changed in how someone thinks or works. It produces a compliance report that proves learning happened. It has no mechanism for detecting whether it did. The organization gets a dashboard showing certification rates and hours logged. The tacit knowledge — the judgment that comes from experience, the pattern recognition that can’t be put in a module — continues to evaporate without appearing on any report.
The LMS is how organizations prove that learning is happening without having to check whether it is.
The CHEF ecosystem — the Complex, Hidden-dependency, Expensive, Fragile environment that most organizational technology stacks have become (explored in full here) — is in large part the residue of applying technology to the explicit layer while the tacit layer evaporated without anyone building a system to notice. The CKO role was supposed to hold that distinction. Without it, nobody did.
Then the dot-com bubble burst, budgets contracted, and the CKO disappeared from org charts while the CISO grew into one of the most powerful roles in the enterprise.
That divergence is not accidental. It is a precise record of which risks organizations know how to take seriously.
The Risk That Has a Date
Cyber security commands the attention it does because its failure mode is legible.
A breach happens at a specific moment. It has a scope — this many records, this many systems, this many customers affected. It has a cost that can be calculated — remediation, legal fees, regulatory fines, reputational damage, customer churn. It has parties who are accountable and parties who demand accountability. Insurers price it. Regulators mandate responses to it. Board members ask about it in terms they understand.
The security stack that organizations have built in response to this risk is genuinely impressive. Zero Trust architecture. Endpoint detection and response. Security information and event management. Multi-factor authentication. Penetration testing. Compliance frameworks. Detailed platform security assessments that evaluate every significant technology decision against data residency, access controls, and encryption standards.
All of it is aimed at the risk that has a date. The risk that, if it materializes, will appear in a headline.
The Risk That Doesn’t
Knowledge integrity fails differently.
There is no breach notification for the moment when an organization’s assumptions about its customers stopped being accurate. There is no incident report for the meeting in which forty slides of data produced no shared understanding and no decision. There is no regulatory requirement to disclose that the institutional knowledge of how to serve a key account walked out the door when the account manager retired and was never captured anywhere the next person could find it.
The failure is real. The cost is enormous — in decisions made on drifted assumptions, in strategies built on models of the market that no longer reflect the market, in customer relationships that degrade because nobody preserved the context of how they were built. But the cost is invisible in the way that hidden dependencies are invisible. It accumulates without a date, without a headline, without a notification requirement, and gets attributed to other causes — market conditions, competitive pressure, talent gaps, execution failures — because there is no forensic process that traces organizational underperformance back to the erosion of knowledge integrity.
So the CISO has a budget and a board slot and a mandate. And nobody has a title or a team or a budget line for the risk that is, in most organizations, considerably more likely to determine whether the business succeeds or fails over the next decade.
What the Security Framing Misses
The platform security assessment your IT team runs before any major technology decision is detailed and serious, and it asks exactly the right questions for the risk it is assessing — compliance certifications, data residency, access controls, encryption standards.
But it is assessing the wrong risk.
Every platform that passes the security assessment shares the characteristics that actually determine knowledge integrity, whichever one you choose. Every major platform encodes assumptions about how organizational work should flow. Every one strips context from information to make it portable. Every one creates hidden dependencies between tools, workflows, and data that nobody fully maps. Every one produces outputs that look like knowledge and function as sediment. The security certification provides organizational comfort that the risk has been addressed. The CHEF dynamics get worse regardless of which certified platform you pick.
The security assessment asks: which platform is better at keeping unauthorized people out?
The knowledge integrity question asks: which platform is better at keeping understanding in?
Those are different questions. The second one almost never gets asked, because there is no organizational role with the mandate to ask it.
The Brief Life of the Chief Knowledge Officer
The CKO era deserves more credit than it gets in retrospect.
The people working on knowledge management in the late 1990s were asking the right questions. How does organizational knowledge actually flow? Where does it accumulate and where does it evaporate? How do you design systems that preserve the why alongside the what? How do you measure whether an organization is getting smarter rather than just busier?
Practitioners working at the intersection of lean manufacturing and product development were building the concept of Knowledge Turns — a metric for knowledge health borrowed from inventory turns, designed to measure not how much information the organization had but how efficiently it converted experience into understanding and action. The emphasis was on metrics linked to causal actions, not surface indicators.
The idea faded because the measurement problem was genuinely unsolved. Inventory turns works because inventory is physical and its movement observable. Knowledge isn’t. Documents, training hours, and patents are proxies — they can look healthy while organizational understanding degrades. Without a credible way to measure it, Knowledge Turns stayed conceptually right and practically stranded.
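The contrast between the two metrics can be made concrete with a toy sketch. The function and parameter names below are illustrative, not a proposed measurement — the point is that inventory turns computes cleanly from ledger-observable inputs, while the knowledge analogue has the same arithmetic shape but no observable inputs to feed it:

```python
# Inventory turns works as a metric because every input is observable.
def inventory_turns(cost_of_goods_sold: float, avg_inventory: float) -> float:
    """Standard metric: how many times inventory cycles in a period."""
    return cost_of_goods_sold / avg_inventory

# Observable inputs from the ledger yield a credible number.
print(inventory_turns(cost_of_goods_sold=1_200_000, avg_inventory=300_000))  # 4.0

# The knowledge analogue has the same shape, but its inputs are the problem:
# there is no ledger for "understanding applied" or "knowledge stock".
# These parameter names are hypothetical placeholders.
def knowledge_turns(understanding_applied: float, avg_knowledge_stock: float) -> float:
    return understanding_applied / avg_knowledge_stock

# The quantities that ARE observable -- documents, training hours, patents --
# are proxies that can rise while the true numerator falls, which is why
# the metric stayed conceptually right and practically stranded.
```

The sketch is the measurement problem in miniature: the formula was never the hard part; the inputs were.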
The work was serious and the questions were important. What happened to it is instructive.
The knowledge management discipline got commercialized into products and consulting frameworks. The dot-com contraction eliminated the budget tolerance for anything that couldn’t demonstrate immediate ROI. The CKO role, unable to point to a prevented breach or a compliance certification, couldn’t compete for resources with functions whose risks were legible and whose failures were attributable.
The CISO, meanwhile, got a steady supply of regulatory mandate and catastrophic breach. The Patriot Act and Know Your Customer requirements created serious obligations around identity verification and transaction monitoring that needed real infrastructure and oversight. Then the breach history did the rest — a drumbeat of high-profile compromises that made the case for security investment self-evident to any board and gave the CISO role both budget and urgency that the CKO could never match.
The visible risk won. The invisible risk lost its organizational champion. And in the quarter century since, the gap between how seriously organizations take security and how seriously they take knowledge integrity has grown into one of the most significant and least discussed structural problems in organizational life.
What Knowledge Integrity Would Actually Require
The point is not that organizations should spend less on security. The security risks are real and the investment is largely justified.
The point is that knowledge integrity deserves the same organizational seriousness — a function with a mandate, a budget, a seat at the table, and metrics that actually measure what they claim to measure.
Before getting to what that function looks like, one distinction needs to be made clearly, because the entire backup and recovery industry obscures it.
Saving data is not the same as preserving knowledge integrity.
A perfect backup of every document, every database, every email thread preserves the artifact. It does nothing for the meaning. The backup gives you the what. It never had the why. And the why is what degrades first and restores never. When someone says “we have robust data backup and recovery” as evidence that their knowledge is protected, they are describing a solution to a different problem. The data will survive. The understanding of what it means, why it was created, what problem it was solving, what assumptions were in the room — that was never in the file. No recovery process will restore what was never captured in the first place.
This is Nonaka and Takeuchi’s explicit/tacit distinction applied to backup strategy. Backup systems are extraordinarily good at preserving explicit knowledge. They are structurally incapable of preserving tacit knowledge because tacit knowledge was never in the file to begin with.
So what would genuine knowledge integrity look like organizationally? Two roles, at different levels.
The Knowledge Integrity Officer — call it the KIO — is the organizational peer of the CISO. A function with a mandate to ask the knowledge integrity question before any significant technology decision gets made. Not after implementation, when the hidden dependencies are already baked in and the context is already being stripped. Before. The KIO asks whether a proposed system preserves the reasoning behind decisions or just their outcomes. Whether it makes organizational assumptions visible or buries them in configuration. Whether it creates conditions for genuine understanding or just faster information retrieval. Whether it will make the organization more capable of learning from its own experience five years from now or less.
That role doesn’t exist in most organizations. Nobody has that mandate. Which is why the CHEF environment keeps getting worse with every implementation cycle — each decision locally rational, nobody responsible for the cumulative effect on the organization’s capacity to understand itself.
The Knowledge Steward operates at a different level — within teams and functions rather than across the whole organization. The best analogy is a good board secretary.
A good board secretary doesn’t transcribe everything. They don’t produce a verbatim record that nobody reads. They capture the reasoning behind decisions — the alternatives that were considered and rejected, the context that explains why this choice made sense at this moment, the dissenting view that didn’t prevail but might matter later. Someone reading the minutes a year on can understand not just what was decided but what problem it was solving and what assumptions were in the room.
That’s a skill. It requires judgment about what matters and what doesn’t. It requires enough understanding of the substance to know which disagreement was consequential and which was noise. And it produces something that compounds over time — an organization with good board minutes learns from its own history. An organization without them keeps relearning the same lessons because nobody preserved the reasoning that would have prevented the repetition.
Most organizations apply that discipline at the board level and nowhere else. The decisions that actually shape daily operations — made in project meetings, in vendor negotiations, in the conversations that precede the formal decision — get no equivalent stewardship. They produce action items. The reasoning evaporates.
The Knowledge Steward brings the board secretary’s discipline to the level where most organizational knowledge actually lives and dies. Not documenting everything — that way lies the data lake fallacy applied to meeting notes. Capturing just enough context that the next person can understand not just what was decided but why it made sense, what was traded away, and what would need to change for the decision to be revisited.
This is not an expensive role. It does not require new software. It requires someone with the judgment to know what’s worth preserving and the discipline to preserve it briefly and well. In Nonaka and Takeuchi’s terms, it is the practice that keeps tacit knowledge from evaporating entirely as it moves toward the explicit — the human in the process who holds the context that no system can hold.
The KIO sets the organizational conditions. The Knowledge Steward operates within them. Together they represent the minimum viable answer to the question this piece has been circling: who, exactly, is responsible for what your organization actually knows?
Right now, in most organizations, the honest answer is nobody.
The Question Worth Asking
Your organization almost certainly has a process for evaluating the security posture of every significant technology decision.
It almost certainly does not have an equivalent process for evaluating the knowledge integrity implications of the same decision. Whether the system preserves or destroys context. Whether it creates or obscures hidden dependencies. Whether it makes the organization more or less capable of understanding itself over time.
The platform security assessment will tell you which platform is more secure. It will not tell you which one is less likely to make your organization blind to its own behavior over the next five years.
That question needs someone responsible for asking it.
The CISO role exists because organizations eventually understood that security risk, left unmanaged, compounds into catastrophic failure. The same is true of knowledge integrity risk. The failure mode is slower, the headline less dramatic, the board conversation harder to initiate.
But the organizations that take it seriously before it becomes a crisis will have something the others won’t — the organizational capacity to understand what’s actually happening to them, and to do something about it while there’s still time to choose.
That capacity has a name. It used to have a title.
It’s time to give it one again.
If you are already doing this work — without the title, without the mandate, inside an organization that doesn’t have language for what you’re solving — you are not alone. Unmeasured & Unmeasurable (U&U) is a Slack community for practitioners who can see this coming. [Join here.]
Incontextable is a publication about organizational knowledge, technology design, and why the gap between the two keeps getting wider.


