From Proxy.Me: Agentic AI Digital Apprentices, Appendix A

Taxonomy of AI Actors and Deployment Contexts

Three categories of AI actor every enterprise will encounter, and how each deployment context shapes the governance challenge.

"The same qualities that make a digital apprentice valuable, persistence, learning, and coordination, are the qualities that make it difficult to govern. Governance must account for what the apprentice becomes, not just what it is today."

Not All AI Actors Are the Same

The book frames governance as architecture: embedded in Roles, enforced through veto lenses, scaled through scenarios, and made visible through the Work Graph. That framework applies to every digital participant in a Kinetic Organization. But applying it well requires recognizing that different kinds of AI actors present fundamentally different challenges.

Most discussions of AI governance treat all agents as roughly equivalent. They focus on what an agent can do in a single interaction: what data it can access, what actions it can take, what guardrails prevent it from going too far. This is a reasonable starting point for agents that live inside a single application or execute a single workflow. It is not sufficient for a digital apprentice.

To understand why, consider three categories of AI actors that an enterprise is likely to encounter as it adopts agentic technology.

Embedded Agents

An embedded agent lives inside a single application or system. A fraud detection model scoring transactions is an embedded agent. So is an automated rule that flags anomalies in a monitoring dashboard or a chatbot that answers customer questions from a knowledge base.

Embedded agents are naturally contained. They operate within the boundaries of the system they inhabit. Their inputs come from that system. Their outputs stay within that system. Their authority does not extend beyond the application's walls. When something goes wrong, the impact is limited to the system where the agent operates.

Governing embedded agents is relatively straightforward. You validate their inputs, constrain their outputs, monitor their behavior, and maintain the ability to roll back their actions. The application itself serves as the governance boundary.
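The boundary just described can be sketched in a few lines. This is an illustrative shape only, not an implementation from the book; the class, method, and action names are all assumptions:

```python
from dataclasses import dataclass, field

# A minimal sketch of the embedded-agent governance boundary: validate
# inputs, constrain what the agent may do, log every attempt, and retain
# the ability to roll actions back. All names here are illustrative.

@dataclass
class EmbeddedAgentBoundary:
    allowed_actions: set
    audit_log: list = field(default_factory=list)
    undo_stack: list = field(default_factory=list)

    def execute(self, action, payload, apply, revert):
        # Input validation: refuse anything the application has not allowed.
        if action not in self.allowed_actions:
            self.audit_log.append(("rejected", action))
            return False
        apply(payload)                    # effect stays inside the application
        self.audit_log.append(("applied", action))
        self.undo_stack.append(revert)    # keep a way to undo this action
        return True

    def rollback_all(self):
        # Undo every recorded action, most recent first.
        while self.undo_stack:
            self.undo_stack.pop()()

# Example: a fraud-scoring agent may score transactions but nothing else.
scores = {}
boundary = EmbeddedAgentBoundary(allowed_actions={"score_transaction"})
boundary.execute(
    "score_transaction",
    {"id": "t1"},
    apply=lambda p: scores.update({p["id"]: 0.92}),
    revert=lambda: scores.pop("t1", None),
)
boundary.execute("delete_account", {}, apply=lambda p: None, revert=lambda: None)
```

The application remains the governance boundary: the disallowed action is rejected and logged, and the undo stack preserves the rollback path the text calls for.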

Orchestrated Agents

An orchestrated agent operates across multiple systems, typically coordinated by a workflow engine or automation platform. An onboarding workflow that provisions accounts, schedules training, sends welcome messages, and updates the HR system involves an orchestrated agent moving through several boundaries in sequence.

Orchestrated agents introduce a problem that embedded agents do not: accumulated authority. Individually, the agent might have modest permissions in each system it touches. But the combination of those permissions has a greater impact than any single permission alone. An agent that can both read sensitive employee data and send external emails has a different risk profile than one that can only do one of those things.

Governing orchestrated agents requires attention to the path, not just the individual steps. What systems does the workflow traverse? What data moves between them? What is the combined effect of all the permissions the agent accumulates as it progresses? The governance challenge is not any single action but the compound exposure created by chaining actions together.
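The path analysis described above can be made concrete. The permission strings and the list of risky combinations below are illustrative assumptions, not a real policy; the point is that exposure comes from the union of permissions along the path, not from any single step.

```python
# Hypothetical compound-exposure check for an orchestrated workflow.
# Each step looks modest on its own; the risk appears only in combination.

RISKY_COMBINATIONS = [
    {"read:employee_pii", "send:external_email"},
]

def accumulated_authority(steps):
    """Union of every permission the agent picks up along the workflow."""
    authority = set()
    for step in steps:
        authority |= set(step["permissions"])
    return authority

def compound_exposures(steps):
    """Risky combinations visible only when the whole path is considered."""
    authority = accumulated_authority(steps)
    return [combo for combo in RISKY_COMBINATIONS if combo <= authority]

workflow = [
    {"system": "hr",   "permissions": {"read:employee_pii"}},
    {"system": "mail", "permissions": {"send:external_email"}},
]
exposures = compound_exposures(workflow)
```

Neither step would trip a per-system review, but the path as a whole matches the risky pair, which is exactly the compound exposure the text warns about.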

Digital Apprentices

A digital apprentice, what this book calls a Proxy, is different from both. It is persistent. It does not start fresh with each task or terminate when a workflow completes. It carries forward everything it has learned: the lenses it has refined, the patterns it has recognized, the scenarios it has navigated, the decisions it has observed its human steward make. Its capabilities evolve over time because learning is fundamental to its purpose.

It is role-bound. Its identity, authority, and reasoning are tied to a specific Role in the organization, not to a specific workflow or application. When the human who fills that Role leaves, the Proxy remains. When a new person steps into the Role, the Proxy provides continuity. It carries the institutional memory that would otherwise walk out the door.

And it coordinates. Through the mesh, Proxies communicate with other Proxies, negotiate shared scenarios, reconcile dependencies, route work, and flag conditions that require attention. This coordination is what gives the Kinetic Organization its momentum. But it also means that the combined reach of coordinating Proxies extends well beyond what any individual Proxy was authorized to do alone.

These three qualities (persistence, role-binding, and coordination) make the digital apprentice the most capable AI actor in the enterprise. They also make it the most challenging to govern.

Three Governance Regimes

Each category of AI actor requires a governance approach fitted to its nature. Mapped to the four governance surfaces (knowledge, judgment, action, and learning), the regimes distribute differently. An embedded agent's governance lives almost entirely on the knowledge and action surfaces. An orchestrated agent's primary risk is on the action surface as authority accumulates across a chain. A digital apprentice requires governance across all four, because it persists, reasons, acts, and learns.

  • Lifespan. Embedded agent: transient, activated per event or request. Orchestrated agent: transient, activated per workflow run. Digital apprentice: persistent, enduring across tasks, sessions, and personnel changes.
  • Scope. Embedded agent: a single system or application. Orchestrated agent: multiple systems in sequence. Digital apprentice: role-bound, operating wherever the Role's work exists.
  • Authority. Embedded agent: fixed, defined by the application. Orchestrated agent: cumulative, growing with each system traversed. Digital apprentice: evolving, expanding as the Proxy learns and connects to more systems.
  • Learning. Embedded agent: none or minimal. Orchestrated agent: none; each run is independent. Digital apprentice: continuous, learning from every decision and interaction.
  • Coordination. Embedded agent: isolated within its system. Orchestrated agent: follows a predefined sequence. Digital apprentice: coordinates with other Proxies through the mesh.
  • Primary risk. Embedded agent: malfunction within a contained boundary. Orchestrated agent: compound exposure across system boundaries. Digital apprentice: gradual drift in reasoning or unchecked growth in reach.
  • Governance focus. Embedded agent: input validation, output constraints, rollback. Orchestrated agent: path analysis, cumulative authority review, scope limits. Digital apprentice: reasoning curation, connection containment, mesh oversight.

The remainder of this appendix focuses on the third column. But before examining how to govern the apprentice itself, we need to understand where agents live, because the environment an agent operates within shapes the governance challenge it presents.

Where Agents Live: Deployment Context Shapes Governance

The three categories above describe what agents are. But agents also differ in where they operate, and this matters enormously for governance. An agent embedded in a tightly controlled enterprise platform faces different constraints than one running on a developer's desktop or one that reaches across a firewall to interact with customers. Understanding the deployment context is essential for designing the right governance posture.

Agents That Work with Walled Applications

Enterprises depend on applications that govern what they own. Each such application has its own data model, process logic, and security model. Any AI embedded inside the application inherits all of that. Its governance is the application's governance. For work that lives entirely inside the application's walls, this is a well-formed containment model. The interior is the application's own responsibility, and the enterprise's Proxy governance does not reach inside it.

This appendix is about how a Proxy behaves when working with such an application from the outside. A Proxy in this relationship is a peer to the application, not an inhabitant of it. It reaches the application through the same surfaces any other outside actor uses: the user interface a person would operate, and the programmatic surface of APIs, events, messages, and files that a system process would call. The Proxy's authority at these surfaces is bounded exactly as any external caller's would be, by the credentials and scopes the application is willing to accept.

Several properties of this landscape shape how Proxies must operate within it. Enterprises typically run many such applications at once, often multiple instances of the same application in different parts of the business. They are typically loosely coupled, synchronizing frequently but lagging, and reconciled only periodically. Business context and metadata are often just bridged between applications rather than truly integrated. At any given moment, two applications can hold related information that is slightly out of step, and the same nominal entity can carry subtly different meanings in different systems.

No single application holds the enterprise's full picture. Data is pulled from each and consolidated into a separate data plane, where reconciliation and cross-system reasoning can occur. Once data leaves an application, the application's governance no longer protects it. Lineage, access control, masking, retention, and semantic reconciliation on that plane are the enterprise's responsibility, not the source application's.

A Proxy working in this environment has three governance concerns:

  • Behaviour at the integration surface. Credentials, scopes, logging, and human-in-command review. A Proxy operating through a user interface is as much an external actor as one that calls an API, and should be governed accordingly. The surface through which an action is taken does not change the authority the action carries.
  • Reasoning about observed state. Applications are loosely coupled and not always synchronized. The Proxy must treat what it reads as a view taken at a moment, not a permanent truth. Acting on stale or semantically mismatched state is a reasoning failure governed by the judgment surface.
  • Handling data once it leaves its source. The data plane is where a Proxy's reach compounds most quickly and most quietly. Separately governed data becomes jointly reachable once it arrives there. Lineage and cumulative operational authority must be enforced by the enterprise itself, not by any of the source applications.

Across all three concerns, the Proxy's posture is the same. It is an integrated external actor. It does not govern the interior of the applications it works with. It does not inherit their governance either. It operates in the space between and around them.
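The third concern, lineage on the data plane, can be sketched simply: once data leaves its source, the enterprise (not the source application) must know where each derived dataset came from and what becomes jointly reachable when datasets are combined. Dataset and system names below are illustrative assumptions.

```python
# A minimal lineage tracker for the enterprise data plane. A derived
# dataset inherits the lineage of everything that went into it, making
# joint reachability visible to the enterprise's own governance.

class DataPlane:
    def __init__(self):
        self.lineage = {}   # dataset name -> set of source systems

    def ingest(self, name, source):
        self.lineage[name] = {source}

    def combine(self, name, *parents):
        # Separately governed data becomes jointly reachable here.
        self.lineage[name] = set().union(*(self.lineage[p] for p in parents))

    def joint_reach(self, name):
        return self.lineage[name]

plane = DataPlane()
plane.ingest("payroll_extract", source="hr_app")
plane.ingest("order_history", source="erp")
plane.combine("quarterly_report", "payroll_extract", "order_history")
```

Neither source application can see that its data now sits alongside the other's; only the data plane's own lineage record makes that compounding reach visible.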

Agents Inside Workflow Engines

Workflow engines like Pega, ServiceNow, or similar orchestration platforms host agents that coordinate multi-step processes. These agents operate within the engine's own governance model: workflows are predefined, steps are sequenced, and permissions at each step are typically configured by platform administrators.

The governance advantage here is visibility. The workflow engine knows every step the agent takes, every system it touches, and every decision point it encounters. The risk is that the engine becomes a trusted intermediary that aggregates authority across multiple backend systems. An agent that orchestrates twenty steps across five systems accumulates the permissions of all five, even if the workflow engine's own access controls appear modest. Governance for these agents must account for the cumulative authority the engine grants, not just the authority of any individual step.

Agents in Cloud and On-Premise AI Platforms

A different challenge emerges with agents running on general-purpose AI platforms, whether cloud-hosted services or on-premise deployments. These agents are by nature generic and unconstrained. They are designed to be flexible, connect to many systems, handle diverse tasks, and operate across domains. That flexibility is their purpose. It is also their governance challenge.

Unlike a platform-embedded agent that inherits its host's security model, a general-purpose agent starts with no inherent boundaries. Every connection must be explicitly granted and governed. Every system it can reach expands its authority surface. Because these agents are designed to be helpful across many contexts, they tend to accumulate connections rapidly, and each connection increases the potential blast radius if something goes wrong. Organizations deploying agents on general-purpose platforms must build the governance that the platform does not provide, defining boundaries, connection limits, and authority thresholds from scratch.
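One shape such self-built governance could take is an explicit connection allowlist plus a hard cap on how many systems the agent may reach before a review is forced. The thresholds and system names are assumptions for the sketch, not a prescription.

```python
# Hypothetical connection governor for a general-purpose agent: every
# connection must be explicitly granted, and the total number is capped
# so the authority surface cannot grow silently.

class ConnectionGovernor:
    def __init__(self, allowlist, max_connections):
        self.allowlist = set(allowlist)
        self.max_connections = max_connections
        self.connections = set()

    def connect(self, system):
        if system not in self.allowlist:
            raise PermissionError(f"{system} is not on the allowlist")
        if len(self.connections | {system}) > self.max_connections:
            raise PermissionError("connection budget exhausted; review required")
        self.connections.add(system)

governor = ConnectionGovernor(allowlist={"crm", "erp", "mail"}, max_connections=2)
governor.connect("crm")
governor.connect("erp")
```

A third connection, even to an allowlisted system, would now fail until a human raises the budget, which is the review checkpoint the platform itself does not provide.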

Agents That Cross the Firewall

The most complex governance challenge concerns agents that interact with parties outside the organization. An agent that communicates with customers, coordinates with partners, submits regulatory filings, or participates in supply chain exchanges crosses boundaries that internal governance cannot fully control.

When an agent sends a message to a customer, it represents the organization. When it shares data with a partner, it creates an exposure that extends beyond the enterprise's own systems. When it receives information from an external source, it introduces data whose provenance and integrity cannot be guaranteed by internal controls alone. Every external interaction carries reputational, legal, and operational risk that internal interactions do not.

Governing cross-firewall agents requires not only the internal mechanisms described throughout this appendix but also contractual frameworks, data classification policies, and clear boundaries around what the agent can communicate externally without human approval. These agents should face the highest governance scrutiny and the narrowest autonomy of any agent in the enterprise.
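The human-approval boundary for external communication can be reduced to a single gate. The classification labels and the steward callback below are assumptions for illustration; real policy would be richer, but the shape is the same: nothing non-public leaves without a person's decision.

```python
# A minimal release gate for externally bound messages: public material
# flows automatically, everything else is held for the human steward.

def release_external(message, classification, human_approve):
    if classification == "public":
        return True
    return bool(human_approve(message))

held = []

def steward(message):
    held.append(message)   # record what was escalated for review
    return False           # in this example the steward declines release

auto_released = release_external("Service status page updated.", "public", steward)
gated = release_external("Draft partner pricing", "confidential", steward)
```

The escalation log doubles as the audit trail: every message a human had to see is recorded, whatever they decided.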

Desktop Agents and the Emergence of Memory

A newer category of agent is beginning to reshape the landscape: desktop agents like Cowork, Copilot, and similar tools that operate on individual workstations rather than inside enterprise systems. These agents assist individual users with daily tasks, from drafting documents and managing email to searching files and summarizing meetings.

What distinguishes the latest generation of desktop agents is memory. They are developing the ability to retain context across sessions, remember preferences, learn from repeated interactions, and build an increasingly detailed model of the user's work patterns. This makes them more useful over time, but it also means they are quietly accumulating something earlier desktop tools never had: persistent knowledge of how a person works, what they work on, and who they work with.

This is directly relevant to digital apprentices. A desktop agent with memory begins to resemble, in primitive form, the Proxy described throughout this book. It carries continuity. It learns. It operates with growing independence. In many organizations, these desktop agents will become the foundation for Proxies. The Proxy may leverage a desktop agent's capabilities, or the desktop agent may evolve into the core of the Proxy system itself.

Desktop agent governance is therefore not a separate concern from Proxy governance. It is the earliest form. Organizations that allow desktop agents to accumulate memory without oversight, without reviewing what they remember, without understanding what systems they can reach, and without considering how they might eventually coordinate with other agents are building the foundation for ungoverned Proxies. The governance disciplines described in this appendix should begin the moment an agent starts retaining context across sessions, not after it has already become a full digital apprentice.
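The discipline described here, reviewing what an agent remembers before that memory hardens into an ungoverned apprentice, can begin with something as simple as a reviewable memory store with retention limits. The field names and day-based timestamps are assumptions for the sketch.

```python
# A reviewable memory store for a desktop agent: every entry records when
# it was learned and which system it concerns, so a steward can inspect
# what the agent remembers, see what it can reach, and expire old context.

class ReviewableMemory:
    def __init__(self, retention_days):
        self.retention_days = retention_days
        self.entries = []   # (day_learned, system, fact)

    def remember(self, day, system, fact):
        self.entries.append((day, system, fact))

    def reachable_systems(self):
        # The systems the agent's memory touches: its quiet authority surface.
        return {system for _, system, _ in self.entries}

    def review(self):
        return list(self.entries)

    def purge_expired(self, today):
        self.entries = [e for e in self.entries
                        if today - e[0] <= self.retention_days]

memory = ReviewableMemory(retention_days=90)
memory.remember(day=0, system="mail", fact="steward prefers morning summaries")
memory.remember(day=60, system="crm", fact="account reviews happen quarterly")
memory.purge_expired(today=120)
```

The point is not the data structure but the posture: memory that can be listed, attributed to systems, and expired is governable; memory that cannot is a Proxy forming in the dark.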

Agents on Mobile Devices and in Chat

Agents are also appearing in places that lie entirely outside the traditional enterprise perimeter: mobile devices, messaging platforms, and conversational interfaces. Tools like OpenClaw bring agentic capabilities to personal devices. Features like Dispatch in Cowork allow agents to be triggered and coordinated through chat-based interactions. These agents meet users where they already work, making them powerful adoption vehicles but also raising governance concerns that other deployment contexts do not.

Mobile and chat-based agents operate on devices the organization may not fully control. They interact through channels that blend personal and professional use. They may run on consumer hardware with consumer-grade security. They can be invoked casually, in a conversation, without the formality of logging into an enterprise system or launching a governed workflow. This informality is part of their appeal, but it can leave the user assuming the interaction is an enterprise action under enterprise governance when it is not.

The security concerns are significant. A mobile agent that can access enterprise data on a personal device creates an exposure that does not exist when the same data is accessed through a managed workstation behind a firewall. A chat-based agent that can dispatch tasks to other systems introduces a coordination surface that is difficult to monitor because it originates from a conversational interface rather than a structured workflow. If these agents develop memory, they carry enterprise context on devices and platforms the organization does not govern.

Like desktop agents, mobile and chat-based agents may become building blocks of the Proxy system. An organization might construct a Proxy that uses a chat-based agent as its primary interface, a mobile agent as its field-operations extension, a desktop agent as its workstation presence, and platform-embedded agents for its backend access. Any of these could form part of the Proxy's architecture. This makes their governance not a peripheral concern but a foundational one: the security posture of the Proxy is only as strong as the least governed component in its assembly.

Deployment Context and the Proxy

A Proxy does not live in just one of these environments. It is likely to draw on agents across several of them: a chat-based agent as its conversational interface, a desktop agent for workstation tasks, a mobile agent for field operations, a platform-embedded agent for ERP data, a workflow engine agent for business processes, and a cloud-hosted agent for analysis. Any of these could be used to construct the Proxy, either as components within it or as the core of the Proxy system itself. The Proxy's governance challenge is therefore a composite of all the deployment contexts it touches.

This is why cumulative operational authority matters so much. Each deployment context contributes its own permissions, data access, and connection points to the Proxy's total authority surface. A Proxy that individually has modest access in five different environments may collectively have extraordinary reach. Governing the Proxy means understanding not just what it can do in any single environment, but what it can do across all of them.
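Both claims in this section can be made concrete: a composite Proxy's total authority is the union of what its component agents can reach, and its effective governance level is bounded by its least governed component. The component names, permission strings, and maturity scores below are illustrative assumptions.

```python
# Hypothetical profile of a composite Proxy: total authority surface is
# the union across deployment contexts, and the weakest-governed
# component sets the floor for the whole assembly.

def proxy_profile(components):
    authority = set()
    for c in components:
        authority |= set(c["permissions"])
    weakest = min(components, key=lambda c: c["governance_maturity"])
    return authority, weakest["name"]

components = [
    {"name": "chat_agent",    "permissions": {"send:chat_message"},
     "governance_maturity": 1},
    {"name": "desktop_agent", "permissions": {"read:local_files"},
     "governance_maturity": 2},
    {"name": "erp_agent",     "permissions": {"read:ledger", "write:journal"},
     "governance_maturity": 4},
]
authority, weakest = proxy_profile(components)
```

Each component's access is modest on its own, yet the combined surface spans chat, the workstation, and the ledger, and the assembly is only as well governed as the chat agent at the bottom of the list.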

Continue Through the Governance Appendices

Appendix B examines the two governance domains in detail. Appendix C operationalises the mesh layer. Appendix D walks through the apprentice lifecycle.
