Patent Pending

Governed AI Reasoning

A system for making AI reasoning transparent, inspectable, and auditable. Every perspective considered, every trade-off made, every boundary enforced, all recorded in a complete audit trail.

Patent Application

A System and Method for Governed and Auditable Artificial Intelligence Reasoning Using Composable Lenses and Hierarchical Points of View

A system and method for governed AI reasoning, providing a technical system for dynamically instantiating a context-aware governance framework and generating a verifiable, machine-readable audit trail to satisfy specific regulatory and safety-compliance requirements. The system stores: a "Contextual Identity Substrate" (role, persona, scenario) to instantiate context; "lenses", which are inspectable data structures defining atomic reasoning rules; and "Points of View" (PoVs), which are constellations of lenses. A processor executes a dual-lattice debate: a first-level debate among lenses generates an informed PoV, and a second-level debate occurs among informed PoVs. The system outputs a synthesised outcome and a complete auditable trace of both debates. This process is governed by "veto lenses" for real-time compliance and a "human reflection loop" for long-term evolution, transforming opaque reasoning into a fully transparent and governable process.


Filed: November 2025 (Singapore) | Status: Patent pending

The Problem

AI systems make decisions but don't show their working. You see the conclusion but not which perspectives were weighed, which trade-offs were made, or whether mandatory constraints were respected. In a consumer application, this opacity may be acceptable. In regulated industries where every material decision must be explainable and auditable, it is not.

A bank's AI advisor recommends a portfolio rebalance. A fraud detection system flags a transaction. An underwriting model declines a loan. In each case, regulators and risk officers need to know: what reasoning led to this outcome? Were all relevant viewpoints considered? Were compliance boundaries enforced? Current AI systems cannot answer these questions because they treat reasoning as a monolithic, opaque process.

How It Works

Cognitive Substrate Architecture decomposes AI reasoning into small, inspectable building blocks and forces them through a structured debate before any conclusion is reached.

Inspectable building blocks

Each unit of reasoning is called a "lens." A lens captures a single perspective: what it is trying to achieve, what it assumes, how it decides. Lenses are stored in a versioned library where they can be reused, combined, and audited independently. A compliance lens might encode data privacy requirements. A commercial lens might encode revenue targets. Each is explicit, inspectable, and versioned.
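A lens as described above could be sketched as a small, immutable data structure. The field names and example values below are illustrative assumptions for this overview, not the patented schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Lens:
    """One atomic, inspectable reasoning primitive (illustrative shape)."""
    name: str
    version: str            # lenses are versioned in the library
    objective: str          # what this perspective is trying to achieve
    assumptions: list[str]  # what it takes as given
    criteria: list[str]     # how it decides

# A hypothetical compliance lens entry in a versioned lens library
privacy_lens = Lens(
    name="data_privacy",
    version="1.2.0",
    objective="Prevent disclosure of personal data",
    assumptions=["Data protection law applies to all customer records"],
    criteria=["No personal identifiers may appear in outputs"],
)
```

Because each lens is a plain, versioned record rather than learned weights, it can be diffed, tested, and audited independently of any model.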

Structured perspectives

Lenses are grouped into "Points of View" representing different stakeholder perspectives. A regulatory compliance Point of View might combine lenses for data privacy, financial regulation, and audit requirements. A business growth Point of View might combine lenses for market opportunity, competitive positioning, and revenue optimisation. Each perspective carries configurable weights that determine how much influence it has.
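A Point of View with configurable weights might look like the following sketch; the normalisation step and the example lens names are assumptions made for illustration:

```python
class PointOfView:
    """A weighted constellation of lenses (illustrative shape)."""

    def __init__(self, name: str, weighted_lenses: dict[str, float]):
        self.name = name
        # Normalise so the influence weights sum to 1.0
        total = sum(weighted_lenses.values())
        self.weights = {lens: w / total for lens, w in weighted_lenses.items()}

# Hypothetical regulatory-compliance perspective: privacy and financial
# regulation carry twice the influence of audit requirements.
compliance_pov = PointOfView("regulatory_compliance", {
    "data_privacy": 2.0,
    "financial_regulation": 2.0,
    "audit_requirements": 1.0,
})
```

Normalising the weights keeps perspectives comparable: reweighting one lens redistributes influence within the Point of View rather than inflating its overall voice in the higher-level debate.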

Multi-level debate

The system forces these perspectives to argue with each other through a formal protocol. First, lenses within each Point of View debate internally: propose, critique, refine. This produces an informed position for each perspective. Then the informed perspectives debate at a higher level, with contributions scored on soundness, completeness, alignment, and relevance. The result is a synthesised outcome that has survived structured scrutiny from multiple viewpoints.
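The two-level propose, critique, refine flow can be sketched as nested loops. The toy `critique`, `refine`, and `synthesise` functions here are stand-ins for the real scoring machinery; only the control flow reflects the description above:

```python
def debate(positions, critique, refine, rounds=2):
    """One propose -> critique -> refine loop over a set of positions."""
    for _ in range(rounds):
        positions = [refine(p, critique(p)) for p in positions]
    return positions

def lattice_debate(povs, critique, refine, synthesise):
    """Level 1: lenses debate within each Point of View to produce an
    informed position. Level 2: the informed positions debate each other."""
    informed = {
        name: synthesise(debate(lens_positions, critique, refine))
        for name, lens_positions in povs.items()
    }
    final_positions = debate(list(informed.values()), critique, refine)
    return synthesise(final_positions)

# Toy stand-ins, purely to exercise the control flow
critique = lambda p: "consider edge cases"
refine = lambda p, c: f"{p} (addressed: {c})"
synthesise = lambda positions: " | ".join(positions)

povs = {"legal": ["avoid PII exposure"], "commercial": ["maximise reach"]}
result = lattice_debate(povs, critique, refine, synthesise)
```

Every intermediate position produced inside these loops is what the audit trail records; the synthesis at each level is the "informed" output that advances to the next.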

Hard boundaries

"Veto lenses" enforce non-negotiable constraints that no debate outcome can violate. These represent regulatory limits, safety requirements, and ethical boundaries. If a proposed outcome crosses a veto boundary, it is blocked, downgraded, or sent back for re-debate with additional constraints. Veto boundaries can only be changed through explicit human approval.

Complete audit trail

Every step is recorded in a persistent, machine-readable audit trail: every proposal, every critique, every refinement, every veto, every score, and the final synthesis logic. Any auditor can reconstruct the exact reasoning process and verify that all perspectives were considered and all boundaries were enforced.
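One plausible shape for such a trail is an append-only event log with sequence numbers and timestamps, exportable as JSON for examiners. The event kinds and field names here are assumptions for illustration:

```python
import json
import time

class AuditTrail:
    """Append-only, machine-readable trace of debate events (sketch)."""

    def __init__(self):
        self.events = []

    def record(self, kind: str, actor: str, payload: dict) -> None:
        # kind is one of: proposal, critique, refinement, veto, score, synthesis
        self.events.append({
            "seq": len(self.events),        # strict ordering for reconstruction
            "timestamp": time.time(),
            "kind": kind,
            "actor": actor,                 # which lens or Point of View acted
            "payload": payload,
        })

    def export(self) -> str:
        return json.dumps(self.events, indent=2)
```

Strict sequence numbers plus an explicit actor on each event are what allow an auditor to replay the debate and confirm which perspectives contributed at each step.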

What Makes It New

No prior system combines composable reasoning units, structured multi-level debate, real-time boundary enforcement, and complete audit trails in a single architecture. Existing approaches either provide explainability without governance, or governance without transparency.

Composable

Reasoning is built from reusable, versioned components that can be independently inspected, tested, and audited. Not a monolithic model that produces opaque outputs.

Adversarial

Perspectives are required to argue, critique, and defend their positions through structured debate. Conclusions survive scrutiny rather than emerging unchallenged.

Bounded

Non-negotiable constraints are enforced in real time. Compliance and safety boundaries cannot be overridden by debate outcomes, regardless of how compelling the argument.

Auditable

Every step of every debate is recorded in a machine-readable audit trail. The complete reasoning process can be reconstructed and verified at any time.

Example Applications

Financial advice. An AI advisor recommends portfolio changes. Lenses representing tax optimisation, risk management, regulatory compliance, and client preferences debate through the protocol. A fiduciary veto lens ensures no recommendation violates obligations. The client receives a synthesised recommendation with full transparency into how each perspective contributed.

Fraud detection. Multiple perspectives (transaction patterns, customer behaviour, regulatory compliance, risk tolerance) each contribute informed positions. A false-positive reduction lens prevents over-aggressive flagging. The complete audit trail provides evidence for regulatory examination.

Contract analysis. Lenses representing different legal domains debate within a legal perspective, while business objective lenses debate within a commercial perspective. The higher-level debate surfaces tensions between legal risk and commercial opportunity, with compliance constraints enforced throughout.

Healthcare diagnostics. Clinical, treatment, and patient-context perspectives debate diagnostic and treatment options. A patient safety veto lens enforces non-negotiable clinical boundaries. The audit trail satisfies medical documentation and liability requirements.

Why Current Approaches Fall Short

Several existing technologies address parts of the AI reasoning challenge. None provides composable, governable reasoning with a complete audit trail.

Explainable AI (XAI)
Post-hoc rationalisations of model outputs. Gap: explains after the fact; does not make the reasoning process itself transparent.
Concept Probing (TCAV)
Probes trained neural networks for concepts. Gap: post-hoc probes, not composable reasoning primitives specified by design.
AI Governance Platforms
Data lineage, model lineage, and output logging. Gap: no deterministic trace of the internal reasoning process itself.
Multi-Agent Systems
Flat-level agent interactions. Gap: combinatorial explosion; post-hoc rule extraction rather than predetermined reasoning primitives.
Constitutional AI
A single monolithic global rule that self-refines. Gap: monolithic (not a composable library), opaque (no debate trace), context-free (no Role/Persona/Scenario).
RLHF
Alignment enforced by learned weights. Gap: not inspectable at inference time; alignment is baked into weights, not governable at runtime.

Key Concepts

The core terminology of Governed AI Reasoning and Cognitive Substrate Architecture.

Lens
An atomic, inspectable reasoning primitive. Each lens encodes a single perspective with explicit objectives, assumptions, and decision criteria. Lenses are versioned and stored in a reusable library.
Point of View
A constellation of lenses representing a stakeholder perspective. Each Point of View carries configurable weights that determine the relative influence of its component lenses.
Contextual Identity Substrate
A structured representation of the reasoning context, comprising Role (organisational function), Persona (behavioural profile), and Scenario (situational parameters). The substrate instantiates context-aware governance for each reasoning session.
Veto Lens
A special-class lens enforcing non-negotiable constraints (regulatory limits, safety boundaries, ethical rules). Veto lenses can block, downgrade, or force re-debate of any outcome that crosses their boundaries. Changes to veto lenses require explicit human approval.
Lattice Debate Protocol
The structured multi-level argumentation mechanism. First-level debate occurs among lenses within a Point of View (propose, critique, refine). Second-level debate occurs among informed Points of View. Contributions are scored on soundness, completeness, alignment, and relevance.
Lens Library
A versioned repository of reusable lenses and Points of View. Enables composability: new reasoning configurations can be assembled from existing, tested components.
Human Reflection Loop
A feedback mechanism through which human reviewers can refine lens definitions, adjust weights, and evolve the reasoning framework over time based on observed outcomes.
Auditable Trace
A persistent, machine-readable record of every proposal, critique, refinement, veto, score, and synthesis decision. Enables complete reconstruction and verification of the reasoning process.
SCAR Rubric
The scoring framework used to evaluate contributions during debate. Each contribution is assessed across four dimensions: Soundness, Completeness, Alignment, and Relevance.

Scoring

Contributions during the Lattice Debate are evaluated using the SCAR Rubric. The composite score determines how much weight a contribution carries in the final synthesis.

σ = (S + C + A + R) / 4
  • S (Soundness): Is the argument logically valid and free from fallacies?
  • C (Completeness): Does it address all relevant factors and edge cases?
  • A (Alignment): Is it consistent with the declared objectives of the lens or Point of View?
  • R (Relevance): Does it directly address the question at hand within the current context?
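The composite above is a simple arithmetic mean of the four dimensions. A minimal sketch, assuming each dimension is scored on a 0-to-1 scale (the scale itself is not specified in this overview):

```python
def scar_score(s: float, c: float, a: float, r: float) -> float:
    """Composite SCAR score: the mean of Soundness, Completeness,
    Alignment, and Relevance. Assumes each dimension lies in [0, 1]."""
    for value in (s, c, a, r):
        if not 0.0 <= value <= 1.0:
            raise ValueError("each SCAR dimension is assumed to lie in [0, 1]")
    return (s + c + a + r) / 4
```

A contribution that is logically sound and well aligned but incomplete (say S=0.8, C=0.6, A=1.0, R=0.6) would score 0.75 and carry correspondingly less weight in the final synthesis than a fully scored one.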

Regulatory Alignment

The system directly addresses explainability, auditability, and human oversight requirements across major regulatory frameworks.

Learn How This Applies to Your Organisation

Schedule a complimentary briefing to discuss how governed AI reasoning can address your institution's explainability and audit requirements.
