A single AI agent has modest permissions. A chain of coordinating agents has combined permissions no role was ever intended to authorise.
The traditional access model assumes one actor, one set of permissions. AI agents break that assumption. Each agent has reasonable, scoped access to its own systems. A flow that chains five agents accumulates the union of all their permissions, often without anyone declaring that combination as a single authority.
This is cumulative operational authority. It cannot be governed if it cannot be measured. The book treats it as a first-class metric: every flow has a calculated total, every total is compared against thresholds, and combinations the enterprise hasn't authorised get blocked or escalated automatically.
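The calculation described above can be sketched in a few lines. This is a hypothetical illustration, not the book's implementation: the permission names, the allow-list shape, and the size threshold are all assumptions made for the sketch.

```python
# Hypothetical sketch of cumulative-authority checking for a chained flow.
# Permission strings, the authorised-combination allow-list, and the
# escalation threshold are illustrative assumptions, not the book's API.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    permissions: frozenset  # e.g. {"crm:read", "email:send"}

def cumulative_authority(flow: list) -> frozenset:
    """Union of every agent's permissions across the chained flow."""
    combined = frozenset()
    for agent in flow:
        combined |= agent.permissions
    return combined

def evaluate(flow, authorised_combinations, escalation_threshold):
    """Allow declared combinations; block or escalate everything else."""
    total = cumulative_authority(flow)
    if total in authorised_combinations:
        return "allow"
    # Undeclared combination: block outright if it exceeds the size
    # threshold, otherwise escalate for human review.
    return "block" if len(total) > escalation_threshold else "escalate"
```

The key point the sketch makes concrete: the flow's total is the *union* of per-agent permissions, and the policy decision is taken on that union, not on any single agent's grant.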
For every proposed flow, the question is simple: if this flow fails, goes wrong, or is misused, what is the impact? Not just the data accessed, but the systems modified, the communications sent, the downstream decisions agents make based on what they received. Authority measures capacity. Blast radius measures consequence.
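One way to make blast radius comparable across flows is a weighted count of externally visible effects. The effect categories and weights below are assumptions for the sketch, not figures from Proxy.Me:

```python
# Illustrative blast-radius scoring. The categories mirror the prose
# (data accessed, systems modified, communications sent, downstream
# agent decisions); the weights are assumed for the sketch.
IMPACT_WEIGHTS = {
    "data_read": 1,        # data accessed
    "system_write": 3,     # systems modified
    "message_sent": 2,     # communications that cannot be recalled
    "downstream_agent": 4, # agents acting on what they received
}

def blast_radius(effects: dict) -> int:
    """Weighted sum of a flow's externally visible effects."""
    return sum(IMPACT_WEIGHTS[kind] * count for kind, count in effects.items())
```

A flow that reads two datasets, writes one system, and feeds one downstream agent would score 2 + 3 + 4 = 9 under these assumed weights; the absolute number matters less than being able to compare and threshold flows consistently.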
Authority that cannot be measured cannot be governed.
Mesh governance becomes a security architecture in its own right. Identity propagation rules. Direct-channel security configurations for agent-to-agent protocols. Replayable evidence as the audit substrate. Drift detection on coordination patterns, not just on individual agent behaviour.
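Drift detection on coordination patterns, as opposed to individual agent behaviour, can be sketched as a diff of the observed agent-to-agent call graph against a baselined one. The edge representation and the two alert categories are assumptions for this sketch, not the book's design:

```python
# Minimal sketch of coordination-pattern drift detection: compare the
# observed agent-to-agent call graph (edges as (caller, callee) pairs)
# against a baselined set of approved edges. Representation is assumed.
def coordination_drift(baseline: set, observed: set) -> dict:
    """Report edges that appeared or disappeared relative to baseline."""
    return {
        "new_edges": observed - baseline,      # unapproved coordination paths
        "missing_edges": baseline - observed,  # expected paths that went quiet
    }
```

Each new edge is a coordination path no one declared, which is exactly the cumulative-authority concern: a fresh edge can splice two previously separate permission sets into one flow.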
For CISOs designing the security model for the next decade of enterprise AI, Proxy.Me is the reference. Three of the four governance appendices address this work directly.
Including Appendix C (mesh governance: cumulative authority, blast radius, drift, control towers, sentinels). By Christopher Jackson, May 2026.
Read about the book. A single email when Proxy.Me is available.