Agent operates with insufficient context: processes only what fits in the current window, forgets previous interactions, and lacks organizational memory.
Agents operate within token windows and memory constraints that limit how much context they can consider. An agent might process a single customer interaction, a single query, or a single document without access to the broader context a human would naturally bring: prior interactions with the customer, organizational policies, recent communications, and incident history.
When an agent operates with insufficient context, it may make decisions that are reasonable given what it knows but incorrect or suboptimal given the fuller picture. This failure mode is fundamentally agentic: agents must decide in limited time with incomplete information. A human in the same role would spend time gathering context before making a decision; an agent decides in real time with whatever context is immediately available.
A bank's customer service agent is designed to handle customer inquiries about account issues, credit limit increases, and service changes. The agent is implemented as a large language model with a 4K token context window. Whenever a customer interacts with the agent, the agent has access to the customer's current account balance and recent transactions, but not to the full interaction history with the bank, prior complaints, or customer lifetime value.
A customer calls asking for a credit limit increase. The agent reviews the customer's recent account activity (which shows moderate utilization at 65%) and recent credit score (750, "good"). The agent denies the increase, citing policy: "Credit limit increases are approved for customers with utilization above 80% or credit scores above 780."
However, the customer has held accounts with the bank for 15 years, has never missed a payment, has previously received credit limit increases that were approved despite not meeting these criteria (because of their high lifetime value), and recently experienced a temporary income reduction from which they have since recovered. All of this context exists in the bank's systems, but the agent cannot access it within its 4K token context window.
When the customer escalates to a human representative, the human immediately approves the increase based on the customer's history. The customer feels the agent treated them unfairly.
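The failure in this case study can be sketched as a context-assembly problem: a fixed token budget silently drops the older history that would have changed the decision. The record contents and the 4-characters-per-token heuristic below are illustrative assumptions, not the bank's actual system.

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)

def assemble_context(records: list[str], budget: int) -> list[str]:
    """Keep the most recent records that fit in the budget (input is oldest-first)."""
    kept, used = [], 0
    for record in reversed(records):  # walk from newest to oldest
        cost = approx_tokens(record)
        if used + cost > budget:
            break  # older records -- the 15-year relationship -- are dropped here
        kept.append(record)
        used += cost
    return list(reversed(kept))

history = [
    "2010: account opened; 15 years of on-time payments follow",
    "2018: credit limit increase approved as policy exception (high-value customer)",
    "2024: temporary income reduction, since recovered",
    "current: utilization 65%, credit score 750",
]
context = assemble_context(history, budget=25)
# With a tight budget, only the recent records survive; the exception history
# that justified prior approvals never reaches the model.
```

Note that the truncation is invisible to the model: it sees a coherent, recent context and has no signal that anything was dropped, which is why these decisions look reasonable in isolation.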
| Dimension | Score | Rationale |
|---|---|---|
| D - Detectability | 3 | Contextual poverty decisions are invisible unless full customer context is reviewed alongside decisions. |
| A - Autonomy Sensitivity | 4 | Agent makes decisions autonomously with whatever context is available. |
| M - Multiplicative Potential | 3 | Impact depends on what context is missing and how many decisions are affected. |
| A - Attack Surface | 4 | Token window limits, memory constraints, and organizational system silos create the vector. |
| G - Governance Gap | 4 | Regulatory frameworks do not specify context requirements for agent decision-making. |
| E - Enterprise Impact | 3 | Customer dissatisfaction, lost customer relationships, potential regulatory issues if context-poverty leads to discrimination. |
| Composite DAMAGE Score | 3.2 | High. Requires priority attention and dedicated controls. |
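The composite can be reproduced as a weighted mean of the six dimension scores. The weighting scheme is an assumption (this section does not specify one); an unweighted mean of the scores above is 3.5, so the table's 3.2 implies the framework weights dimensions unevenly.

```python
# Illustrative sketch only: the DAMAGE framework's actual weights are not
# given in this section, so the default below is an unweighted mean.
scores = {
    "Detectability": 3,
    "Autonomy Sensitivity": 4,
    "Multiplicative Potential": 3,
    "Attack Surface": 4,
    "Governance Gap": 4,
    "Enterprise Impact": 3,
}

def composite(scores, weights=None):
    """Weighted mean of dimension scores, rounded to one decimal."""
    if weights is None:
        weights = {k: 1.0 for k in scores}  # unweighted by default
    total_w = sum(weights.values())
    return round(sum(scores[k] * weights[k] for k in scores) / total_w, 1)

print(composite(scores))  # unweighted mean of these scores: 3.5
```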
How severity changes across the agent architecture spectrum.
| Agent Type | Impact | How This Risk Manifests |
|---|---|---|
| Digital Assistant | Low | Human provides context for each query. |
| Digital Apprentice | Medium | Apprentice can access broader organizational context; memory is persistent. |
| Autonomous Agent | High | Agent operates with limited context; decisions are made without broader organizational awareness. |
| Delegating Agent | High | Agent invokes tools with limited context; context is not propagated to tools. |
| Agent Crew / Pipeline | High | Agents in pipeline may have disconnected context; context is not shared across agents. |
| Agent Mesh / Swarm | Critical | Peer agents may have completely disconnected context; no shared organizational memory. |
| Framework | Coverage | Citation | What It Addresses | What It Misses |
|---|---|---|---|---|
| NIST AI RMF 1.0 | Partial | MAP.2 (Threat Modeling) | Recommends identifying limitations of AI systems. | Does not specify context requirements. |
| Fair Lending Laws | Partial | ECOA (Regulation B) and related fair lending rules | Require non-discriminatory credit decision-making. | Do not address contextual poverty as a source of discrimination. |
| GLBA | Partial | 16 CFR Part 314 (Safeguards Rule) | Requires programs to safeguard customer information. | Does not specify context requirements for decisions. |
In banking and financial services, customer relationships and satisfaction are important business objectives. Context poverty can lead to decisions that are technically compliant but operationally poor: denying a credit increase to a high-value customer, treating a loyal customer as a new customer, or missing important customer context that would change the decision.
More critically, context poverty can produce discriminatory outcomes. If the agent lacks information about a customer's recent positive changes (a new job, income recovery), it may deny credit based on stale data where a human with fuller context would approve. If such denials correlate with protected characteristics, the pattern can constitute disparate-impact discrimination.
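One architectural control consistent with the discussion above is a context-sufficiency gate: before the agent is allowed to decide autonomously, check whether the context it actually holds covers the fields the decision requires, and escalate to a human otherwise. The field names below are hypothetical, chosen to match this case study.

```python
# Hedged sketch of a context-sufficiency gate. REQUIRED_CONTEXT and the field
# names are illustrative assumptions, not a prescribed schema.
REQUIRED_CONTEXT = {
    "tenure_years",
    "payment_history",
    "prior_exceptions",
    "recent_income_events",
}

def decide_or_escalate(available_context: dict) -> str:
    """Route to a human when required decision context is missing."""
    missing = REQUIRED_CONTEXT - available_context.keys()
    if missing:
        return f"escalate: missing context {sorted(missing)}"
    return "agent_may_decide"

# The agent in the case study saw only balance and recent transactions:
print(decide_or_escalate({"balance": 4200, "recent_transactions": []}))
```

The design choice is that the gate fails closed: an incomplete context routes the case to a human rather than letting the agent produce a decision that merely looks reasonable given what it can see.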
Contextual Poverty requires architectural controls that go beyond what existing frameworks provide. Our advisory engagements are purpose-built for banks, insurers, and financial institutions subject to prudential oversight.
Schedule a Briefing