Institutional expertise remains undocumented and therefore unavailable to agents: critical judgment lives in people's heads, and agents cannot access the contextual knowledge that makes human decisions sound.
Organizations develop deep expertise through years of operational experience. A lending manager has approved thousands of loans and developed intuition about which applicants will successfully repay even if credit scores suggest risk. An insurance underwriter has assessed thousands of claims and knows which claim patterns suggest fraud or bad faith. A clinician has treated thousands of patients and can recognize subtle signs of serious illness.
This expertise is largely tacit: it lives in people's heads, embedded in pattern recognition, judgment, and intuition. It is not fully codified in guidelines, policies, or decision rules. It is developed through experience and mentorship.
Agentic systems cannot access this tacit knowledge. An agent has access to documented policies, data, and rules, but not to the intuitive judgment that comes from experience. When agents replace humans, the tacit knowledge is lost. The agent may be more consistent than the human (applying rules uniformly), but it will be less accurate in edge cases where human judgment would have recognized that the rule did not apply.
The problem is particularly severe when organizations deploy agents without first codifying the tacit knowledge that the agents need. The organization loses the tacit knowledge (as experienced employees leave), gains an agent that operates only on explicit knowledge, and discovers too late that critical judgment is missing.
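One common mitigation, consistent with the oversight mechanism described here, is a hybrid decision flow: the agent decides only when it is confident, and everything else escalates to a human expert who still holds the tacit knowledge. The sketch below is illustrative; the `Decision` type, the `decide` function, and the 0.9 threshold are assumptions, not a prescribed design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str          # e.g. "approve" / "deny"
    confidence: float     # model-reported confidence in [0, 1]
    decided_by: str       # "agent" or "human"

def decide(case: dict,
           agent: Callable[[dict], tuple[str, float]],
           expert_review: Callable[[dict], str],
           threshold: float = 0.9) -> Decision:
    """Route a case: the agent decides only when its confidence clears
    the threshold; low-confidence cases fall back to expert judgment
    instead of the agent guessing on an edge case."""
    outcome, confidence = agent(case)
    if confidence >= threshold:
        return Decision(outcome, confidence, "agent")
    return Decision(expert_review(case), confidence, "human")

# Toy usage with stand-in callables.
agent = lambda case: ("approve", 0.97) if case.get("routine") else ("approve", 0.55)
expert = lambda case: "deny"

print(decide({"routine": True}, agent, expert).decided_by)   # routine case stays with the agent
print(decide({"routine": False}, agent, expert).decided_by)  # edge case escalates
```

The design choice to encode: escalation preserves access to tacit knowledge for exactly the cases where explicit rules are weakest, which is what the fully autonomous deployments described below give up.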
A manufacturing firm implements an agentic system for quality control inspection, designed to identify defective products using computer vision and machine learning. The system is trained on images of acceptable and defective parts.
In the traditional process, experienced quality inspectors visually inspect parts and make pass/fail decisions. The inspectors have years of experience and can recognize subtle defects: a hairline crack that suggests a part will fail under stress, a surface finish that indicates a manufacturing problem even though dimensions are correct, an edge that has burrs indicating a tool wear issue.
The agent is trained on images of known defects and makes decisions based on learned patterns. Initial testing shows the agent has 98% accuracy on a validation dataset. However, in production, the agent misses defects that experienced inspectors would have caught. A part passes the agent's inspection but fails customer stress testing. Investigation reveals that the part had a hairline crack that the agent did not recognize.
The organization interviews experienced inspectors to understand what they would have noticed. The inspectors explain that they look for subtle signs, such as a change in light reflection at a 45-degree angle that suggests a subsurface defect. They learned these cues through experience and cannot easily articulate them as rules, so when the organization attempts to codify them, the resulting rules are imprecise and the codification remains incomplete.
Over time, the experienced inspectors retire. New inspectors are never hired because the agent is supposed to replace them, and the tacit knowledge that was never fully codified is lost. The agent still operates at its measured 98% accuracy, but it is less accurate than the experienced human inspectors were, because it lacks the tacit knowledge they had developed.
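The gap between the 98% validation figure and production performance can be made concrete with prevalence-weighted accuracy. All of the rates below are illustrative assumptions (the source gives only the 98% figure): if hairline cracks are rare in the validation set, near-total failure on that class barely dents the headline number, but a shift in the production mix exposes it.

```python
# Illustrative arithmetic: why a ~98% validation score can hide
# near-total failure on a rare defect class. All rates are assumptions.

def overall_accuracy(class_mix: dict[str, float],
                     per_class_accuracy: dict[str, float]) -> float:
    """Overall accuracy = prevalence-weighted mean of per-class accuracy."""
    return sum(class_mix[c] * per_class_accuracy[c] for c in class_mix)

# Agent's per-class accuracy: strong on what it was trained on,
# weak on hairline cracks it rarely saw.
agent_accuracy = {"good": 0.99, "obvious_defect": 0.98, "hairline_crack": 0.10}

# Validation set: hairline cracks barely represented.
validation_mix = {"good": 0.90, "obvious_defect": 0.09, "hairline_crack": 0.01}
print(f"{overall_accuracy(validation_mix, agent_accuracy):.3f}")  # headline figure

# Production after a tool-wear problem makes hairline cracks far more common.
production_mix = {"good": 0.85, "obvious_defect": 0.05, "hairline_crack": 0.10}
print(f"{overall_accuracy(production_mix, agent_accuracy):.3f}")  # visibly lower
```

The point is not the specific numbers but the structure: a single aggregate accuracy metric cannot reveal a missing-knowledge failure mode that is concentrated in a rare class.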
| Dimension | Score | Rationale |
|---|---|---|
| D - Detectability | 4 | Knowledge at rest is not visible until the agent makes a decision that an expert would have questioned. Errors become apparent only in edge cases or when an expert reviews the agent's work post-hoc. |
| A - Autonomy Sensitivity | 5 | Knowledge gaps are most severe for autonomous agents. For agents with human oversight, experts can provide missing judgment. For autonomous agents, missing judgment leads to uncorrected errors. |
| M - Multiplicative Potential | 4 | Knowledge gaps affect decisions in domains where the missing knowledge is relevant. In organizations with deep expertise and complex decision-making, knowledge gaps compound. |
| A - Attack Surface | 2 | Knowledge at rest is not typically a security vulnerability. It is a knowledge management and governance issue. |
| G - Governance Gap | 4 | Most organizations have not systematically codified tacit knowledge before deploying agents. Knowledge management is often informal. Governance processes do not require codification before automation. |
| E - Enterprise Impact | 4 | Knowledge gaps can lead to decision errors, quality degradation, customer dissatisfaction, and financial loss. |
| Composite DAMAGE Score | 3.2 | High. Requires systematic knowledge capture before agent deployment. |
How severity changes across the agent architecture spectrum.
| Agent Type | Impact | How This Risk Manifests |
|---|---|---|
| Digital Assistant | Low | DA works with human experts who have access to tacit knowledge. Experts can provide judgment that the agent is missing. |
| Digital Apprentice | Low | AP is supervised by experts. Experts provide the tacit knowledge through supervision and feedback. |
| Autonomous Agent | Critical | AA operates independently. If agents lack access to tacit knowledge, decisions are made based only on explicit knowledge. Edge cases will result in errors. |
| Delegating Agent | Medium | DL invokes tools and APIs. If the tools encode some of the tacit knowledge, the agent may have partial access. Gaps remain where knowledge is not encoded in tools. |
| Agent Crew / Pipeline | High | CR chains multiple agents. Each agent in the pipeline may have gaps in tacit knowledge. Cross-domain judgment and exception handling often require tacit knowledge. |
| Agent Mesh / Swarm | High | MS features dynamic peer-to-peer delegation. If the mesh lacks agents with relevant domain expertise, tacit knowledge gaps will emerge. |
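The compounding noted for crews and pipelines can be quantified with a simple model: if each stage independently mishandles its own edge cases at some rate, the probability that a case traverses the whole chain correctly shrinks multiplicatively. The 2% stage error rate below is an illustrative assumption, and the independence assumption is itself a simplification.

```python
def pipeline_success_rate(stage_error_rates: list[float]) -> float:
    """Probability a case clears every stage without hitting a knowledge
    gap, assuming stage errors are independent."""
    p = 1.0
    for e in stage_error_rates:
        p *= (1.0 - e)
    return p

# Three chained agents, each missing edge cases 2% of the time:
# individually tolerable gaps compound to ~5.9% end-to-end error.
print(pipeline_success_rate([0.02, 0.02, 0.02]))  # ~0.941
```

This is why the crew/pipeline row rates High even when each individual agent's knowledge gap looks small.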
| Framework | Coverage | Citation | What It Addresses | What It Misses |
|---|---|---|---|---|
| NIST AI RMF 1.0 | Minimal | MEASURE | Recommends assessment of AI system knowledge and completeness. | No guidance on codifying tacit knowledge before automation. |
| MAS AIRG | Partial | Section 2 (Strategy and Governance) | Requires firms assess the adequacy of AI systems before deployment. | No guidance on knowledge codification requirements. |
| SR 11-7 | Minimal | Model validation | Recommends validation of models before deployment. | No guidance on ensuring agents have adequate domain knowledge. |
| ISO 42001 | Partial | Section 6 (AI management system context) | Requires knowledge management for AI systems. | Does not require organizations to document tacit knowledge before automation. |
| OCC Guidance | Minimal | N/A | Operational risk focus. | No guidance on knowledge codification or agent knowledge sufficiency. |
In banking and lending, credit decision-making relies on expertise that credit analysts have developed over years. A good credit analyst can recognize which borrowers are likely to succeed despite marginal credit scores, or which borrowers with strong credit scores are likely to fail due to industry risk or personal circumstances. This expertise is tacit. When agents automate credit decisions without access to this expertise, credit quality may decline.
In insurance and actuarial practice, underwriting and claims decisions rely on actuarial judgment and domain expertise. An experienced underwriter knows which claim patterns suggest fraud, which policyholder circumstances suggest future claims, which underwriting exceptions are safe. Agents trained without access to this expertise will miss fraud and underwrite excessive risk.
In compliance and anti-money laundering, experienced compliance officers have developed intuition about suspicious transaction patterns and sophisticated evasion techniques. Agents trained without access to this expertise will miss sophisticated money laundering.
Knowledge at Rest requires systematic tacit knowledge capture and hybrid decision systems that preserve expert judgment. Our advisory engagements are purpose-built for banks, insurers, and financial institutions subject to prudential oversight.
Schedule a Briefing