Agents communicating through human channels (Slack, Teams, email, SMS) can be indistinguishable from human participants, and no institutional policy typically requires agents to identify themselves.
When agents share the same channels as humans, users cannot reliably distinguish agent output from human messages: an agent answering a question reads exactly like a human responder. This creates impersonation and trust vulnerabilities.
A user receives a Slack message from "Project Manager Chris" reporting project status. The user assumes the message is from Chris (a human colleague) and trusts the information, but it came from an agent trained on Chris's writing style. Compounding the problem, people behave differently when they know they are talking to an agent, so hiding whether a counterpart is an agent or a human enables manipulation.
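Channel metadata can flag some agent traffic, but not all of it. As a rough sketch: Slack attaches a `bot_id` to messages posted through bot tokens, so a simple provenance check is possible, yet an agent relaying through a user's own token carries no such marker. That gap is exactly why detection alone cannot substitute for mandatory labeling.

```python
def is_agent_message(event: dict) -> bool:
    """Heuristic check on a Slack message event payload.

    Slack sets bot_id on messages posted via bot tokens, and some
    integrations use the legacy bot_message subtype. An agent posting
    through a user token carries neither, so this check can miss
    agent traffic entirely.
    """
    return bool(event.get("bot_id")) or event.get("subtype") == "bot_message"
```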
A financial services company deploys an agent-based customer support system. When customers email support@company.com, the message goes to an agent that answers routine questions (account balance, transaction history, payment processing). The agent's responses are formatted like human support emails, with the signature "Customer Support Team."
A customer emails asking about a late charge on their account. Agent-Support reads the email and generates a response explaining the charge and offering a refund. The response is professional, empathetic, and accurate. The customer reads the response and replies, continuing the conversation.
However, the customer does not know the conversation is with an agent; they assume they are communicating with human support. Over five emails, the customer shares sensitive information (account numbers, prior payment difficulties) because they believe they are corresponding confidentially with a human representative. Unlike human support emails, which might be deleted, the agent's responses are logged to a system database, so the sensitive information now sits in a store accessible to many employees.
The impersonation is also directly exploitable. An adversary who can inject messages into the support channel inherits the trust customers place in "Customer Support Team": recipients have no way to tell that a message came from an attacker rather than the usual responder.
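One architectural control is to label agent output at the point of send, rather than trusting the agent to disclose itself. The sketch below wraps every agent-generated reply with a machine-readable header and a human-readable footer; the header name `X-Agent-Generated`, the addresses, and the disclosure text are illustrative assumptions, not a standard.

```python
from email.message import EmailMessage

AGENT_DISCLOSURE = (
    "This message was generated by an automated support agent. "
    "Reply with the word HUMAN to reach a human representative."
)

def build_agent_reply(to_addr: str, subject: str, body: str) -> EmailMessage:
    """Wrap agent-generated text in an email with explicit AI disclosure."""
    msg = EmailMessage()
    msg["From"] = "support@company.com"  # illustrative address
    msg["To"] = to_addr
    msg["Subject"] = subject
    # Non-standard header so mail filters and audit tooling can separate
    # agent traffic from human support traffic.
    msg["X-Agent-Generated"] = "true"
    msg.set_content(f"{body}\n\n--\n{AGENT_DISCLOSURE}")
    return msg
```

Labeling at the transport layer also scopes the logging problem: once agent traffic is tagged, retention and access policies for the logged conversations can be applied to it specifically.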
| Dimension | Score | Rationale |
|---|---|---|
| D - Detectability | 2 | Agent messages are often indistinguishable from human messages. Users may not realize they are communicating with an agent. |
| A - Autonomy Sensitivity | 2 | Affects all agent communication types. Transparency and labeling would address the risk. |
| M - Multiplicative Potential | 2 | Affects user interactions with agents. Impersonation risk scales with number of agent interactions. |
| A - Attack Surface | 3 | Human communication channels are attack surfaces. Compromised channels can be used to impersonate agents. |
| G - Governance Gap | 3 | Institutions may not have policies requiring transparency about agent vs. human communication. |
| E - Enterprise Impact | 2 | Affects user trust and information security. Does not directly impact financial transactions unless users make decisions based on the impersonated communication. |
| Composite DAMAGE Score | 3.4 | High. Requires dedicated controls and monitoring. Should not be accepted without mitigation. |
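The aggregation that produces the composite is not reproduced here; note that 3.4 exceeds the highest individual dimension score above, so the published formula evidently rescales rather than simply averages. As a sketch only, assuming a weighted mean with placeholder weights:

```python
# Dimension scores from the table above.
SCORES = {
    "detectability": 2,
    "autonomy_sensitivity": 2,
    "multiplicative_potential": 2,
    "attack_surface": 3,
    "governance_gap": 3,
    "enterprise_impact": 2,
}

# Placeholder weights; the actual DAMAGE weighting and any rescaling
# step are not specified in this section.
WEIGHTS = {k: 1.0 for k in SCORES}

def composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean of dimension scores (illustrative aggregation only)."""
    total = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total

print(round(composite(SCORES, WEIGHTS), 2))  # 2.33 with uniform weights
```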
The following table shows how severity changes across the agent architecture spectrum.
| Agent Type | Impact | How This Risk Manifests |
|---|---|---|
| Digital Assistant | Low | Digital assistants are typically labeled as assistants in the UI, so users know they are interacting with AI. |
| Digital Apprentice | Low-Med | Agent output may be labeled in communication, but the labeling is not always obvious to users. |
| Autonomous Agent | Medium | An agent that communicates in human channels without labeling creates direct impersonation risk. |
| Delegating Agent | Medium | If a delegating agent invokes tools that post to human channels, impersonation can occur one step removed from the agent itself. |
| Agent Crew / Pipeline | Medium | Multiple agents in a crew may communicate in human channels, creating impersonation risk if their output is not labeled. |
| Agent Mesh / Swarm | High | Mesh agents may communicate in human channels directly and at scale, so impersonation risk is highest here. |
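Because the risk concentrates in the more autonomous architectures, one control is a channel gateway that refuses to deliver unlabeled agent messages into human channels. A minimal sketch follows; the per-type labeling requirements and the disclosure marker are assumptions, not part of the framework above.

```python
# Agent types from the table above; which ones require labeling is an
# assumption for illustration.
DISCLOSURE_REQUIRED = {
    "digital_assistant": False,   # the UI already labels the assistant
    "digital_apprentice": True,
    "autonomous_agent": True,
    "delegating_agent": True,
    "agent_crew": True,
    "agent_mesh": True,
}

DISCLOSURE_FOOTER = "[automated agent message]"  # illustrative marker

def gate_outbound(agent_type: str, message: str) -> str:
    """Append a disclosure marker before delivery; reject unknown types."""
    required = DISCLOSURE_REQUIRED.get(agent_type)
    if required is None:
        raise ValueError(f"unknown agent type: {agent_type}")
    if required and DISCLOSURE_FOOTER not in message:
        message = f"{message}\n\n{DISCLOSURE_FOOTER}"
    return message
```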
| Framework | Coverage | Citation | What It Addresses | What It Misses |
|---|---|---|---|---|
| FTC AI Guidance | Partial | Deceptive AI Practices | Prohibits deceptive uses of AI, including impersonation. | No explicit requirement to disclose agent vs. human communication. |
| GDPR Article 22 | Partial | Automated Decision-Making | Transparency about automated decision-making. | No requirement to disclose when a communication comes from an agent. |
| SEC Disclosure Rules | Partial | Plain English Requirements | Clear communication to investors. | Disclosure of agent-generated content. |
| NIST CSF | Minimal | | Cybersecurity governance. | Human channel security and agent impersonation. |
In financial services, customers rely on knowing who they are dealing with. A customer who believes they are writing to human support but is actually corresponding with an agent may share sensitive information or make decisions based on incorrect assumptions about the conversation's confidentiality.
Additionally, regulatory frameworks increasingly require transparency about AI involvement in customer interactions. The FTC has issued guidance against deceptive AI practices, including impersonation.
Human Channel Impersonation requires architectural controls that go beyond what existing frameworks provide. Our advisory engagements are purpose-built for banks, insurers, and other financial institutions subject to prudential oversight.