
Where Does the Source of Truth Live When AI Agents Do the Work?

Raimund Laqua, P.Eng., PMP



For decades, the system of record has been the gravitational centre of the enterprise. Your ERP, your CRM, your quality management system — whatever the acronym, the function was the same: one place where the authoritative version of the truth lives. Every audit trail starts there. Every compliance obligation traces back to it.


Machines have always done part of the work inside these systems — workflows, automated triggers, batch processing. But that work was governed, designed, and built by humans. Every automated step was deliberately engineered into a known path. The system of record captured the work because the work was designed to flow through it. And when governance was working, the how mattered as much as the what — the work reflected the organization's values, its commitments, and its obligations.


But what happens when autonomous agents do the work?


The Shift


AI assistants are already embedded in the platforms organizations use — Microsoft's Copilot across Dynamics 365, SAP's Joule, Salesforce Einstein. Today, a person still asks the question and still decides what to do with the answer.


But the vendors aren't stopping there. They're explicitly moving from "system of record" to "system of intelligence." The system no longer waits for people to analyse data. It acts. That's not an assistant — that's an agent with authority. And it changes everything about where the source of truth lives.


It's important not to confuse this with the workflow automation we've had for decades. A workflow is a deterministic path — designed by humans, with each step predefined, each decision point mapped, and the system of record baked into the sequence. The workflow writes to the system because the system is part of the track it runs on.


An autonomous agent doesn't follow a track. It receives an objective, assesses the situation, decides what to consult, coordinates with other agents, takes action, and moves on. It may take a different path every time depending on what it finds. The system of record isn't part of that path unless someone engineered it to be.


No human is watching each decision. No human can — not at the speed and scale agents operate.

Now ask the question: where is the system of record in that process? And more importantly — is the work being done in a way that upholds what the organization stands for?


The Problem


Work was done. But were the promises kept?


Not just "did the agent complete the task" — but did it fulfill the organization's actual commitments? The right output can only be produced the right way. Did it uphold the values behind those commitments — the safety, the quality, the duty of care that the organization promised its regulators, its customers, and its stakeholders?


If nobody engineered the agent to track that, the question may not even be answerable.

When agents coordinate with other agents to deliver on a shared promise, it gets worse. The source of truth isn't a system anymore. It's distributed across agent interactions that may be ephemeral. The source of truth didn't move to a new system. It dissolved into a process.


Whose Promises Is the Agent Keeping?


Here's what I think cuts to the heart of it: when we say an agent "works," we usually mean it completes the task and stays within its guardrails. Those are the developer's promises.


The promises your organization has made are different. Promises to comply with governance policies. Promises to produce auditable evidence. Promises to fulfill obligations in a way that reflects your values — not just efficiently, but ethically, safely, and transparently.


Right now, there's no mechanism in most agent architectures to translate those organizational promises into agent-level operational commitments. The agent doesn't know what you promised the regulator. It doesn't know what your values require. It knows what its developer built it to do.


The agent is keeping the developer's promises. Nobody has engineered it to keep yours.

Can Agents Follow Governance Policy?


This raises a harder question: can autonomous agents follow governance policies at all?

Not the way humans do. An agent can follow instructions and be constrained by guardrails. But governance requires understanding intent, interpreting context, and exercising judgment about edge cases the policy didn't anticipate. Agents don't do that. They optimize toward objectives within whatever constraints they were given. If the constraint was well-engineered, the behaviour looks like compliance. If it wasn't, the behaviour looks confident and wrong.


There's a deeper problem. Any codified rule will eventually be inadequate, because the agent optimizes around the measurable parts and erodes the unmeasurable intent. The letter of the policy gets followed; the spirit gets lost. Not through malice — through the physics of optimization.

So agents can fulfill obligations that have been engineered into their architecture. They cannot interpret governance policies the way a competent professional would. And they cannot uphold organizational values unless those values have been translated into operational commitments they're designed to keep.


The Humans in the Loop


The human in the loop cannot just be the person reviewing the agent's output after the fact. You can't verify what you can't see.


An important human in the loop is the engineer who designs the agent to do the right thing before it's deployed. The accountability is front-loaded into the architecture, or it doesn't exist at all. Governance policy can't be an external constraint layered on after the fact. It has to be engineered into what the agent is — because the right output can only be produced the right way, and the right way has to be built in.


The Promise Architecture


I've been thinking about this through Mark Burgess's Promise Theory. The core insight: you can only make promises about your own behaviour. An obligation imposed from outside isn't a promise until you've assessed it, determined what you can genuinely commit to, and declared that commitment.


Most agent architectures work on an imposition model — do this task, follow these rules. For work that matters, that's not enough. The agent needs to receive the obligation, assess whether it can fulfill it, declare what it can commit to, and fulfill that commitment while producing evidence that the promises were kept — because the right output can only come from a process that upholds the organization's commitments and values.
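The receive-assess-declare-fulfill loop could be sketched in code. This is a minimal illustration, not an implementation from any framework: the class and field names (`Obligation`, `Promise`, `PromiseAgent`) are assumptions introduced here, and the "work" itself is elided.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of a promise-based agent loop. Names and structure
# are illustrative assumptions, not part of any existing library.

@dataclass
class Obligation:
    id: str
    requirement: str          # what the organization committed to

@dataclass
class Promise:
    obligation_id: str
    scope: str                # what the agent declares it can deliver
    accepted: bool

class PromiseAgent:
    def __init__(self, capabilities: set[str]):
        self.capabilities = capabilities
        self.evidence: list[dict] = []

    def assess(self, ob: Obligation) -> Promise:
        # An imposed obligation becomes a promise only after the agent
        # checks it against what it can genuinely commit to.
        can_do = ob.requirement in self.capabilities
        return Promise(ob.id, ob.requirement if can_do else "", accepted=can_do)

    def fulfill(self, promise: Promise) -> None:
        # No declared commitment means the work must not proceed.
        if not promise.accepted:
            raise RuntimeError("no commitment declared; work must not proceed")
        # ... do the work the right way ...
        # Evidence of delivery is produced as part of the work, not after it.
        self.evidence.append({
            "obligation": promise.obligation_id,
            "delivered": promise.scope,
            "at": datetime.now(timezone.utc).isoformat(),
        })
```

The key design point is that `fulfill` refuses to run without an accepted promise: the declaration gates the work, rather than being documentation added afterward.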


That's what I've been calling a Promise Agent. Policy isn't a reference document the agent consults — it's an operational capability the agent possesses. A fire suppression system doesn't "consult" the fire code. The requirements are engineered into its design. The system embodies the obligation.


Delivery matters as much as the declaration. In a human workflow, the person doing the work is also the person creating the record. In an agentic workflow, that coupling breaks. This is why promise delivery can't just mean the agent produced an output. The right output can only be produced the right way — and the agent must demonstrate that the path it took was consistent with what the organization committed to. In Burgess's framework, delivery is continuous — the agent evaluates whether it can still keep delivering, and signals when that capability is compromised.
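That continuous evaluation could be as simple as a recurring capability check that raises a signal the moment the agent can no longer keep its promise. A minimal sketch, with assumed names:

```python
# Illustrative only: the agent re-assesses its ability to keep delivering
# and signals when that capability is compromised, rather than failing silently.
class DeliveryMonitor:
    def __init__(self, capability_check):
        self.capability_check = capability_check  # returns True while the promise can still be kept
        self.signals: list[str] = []

    def evaluate(self, promise_id: str) -> bool:
        ok = self.capability_check()
        if not ok:
            self.signals.append(f"capability compromised for {promise_id}")
        return ok
```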


The Golden Thread


So where does the source of truth actually live?


In the human model, it lives in the system of record. One place, one version, one truth.


In the agentic model, that's no longer sufficient. The source of truth lives in the promise architecture — the traceable relationship between four things: the obligation (what the organization committed to), the promise (what the agent declared it could deliver), the delivery (how the agent fulfilled it and whether it did so in a way consistent with the organization's values), and the evidence (the demonstrable record that all three are connected and consistent).


What connects these four elements is the golden thread of assurance — the unbroken, traceable line that runs from commitment through promise through delivery through evidence. In a human workflow, the golden thread runs through the system of record. In an agentic workflow, it has to run through the promise architecture — and the system of record becomes where that thread is anchored and made auditable.
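One way to make "unbroken and traceable" concrete is a hash-linked chain over the four elements, so that a break or alteration anywhere in the thread is detectable. This is a sketch under assumed field names, not a standard or a product feature:

```python
import hashlib
import json

# Illustrative golden thread: obligation -> promise -> delivery -> evidence,
# each record linked to the previous one by a hash. Field names are assumptions.

def link(record: dict, prev_hash: str) -> dict:
    # Attach the previous hash, then hash the whole body canonically.
    body = dict(record, prev=prev_hash)
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def thread(obligation: dict, promise: dict, delivery: dict, evidence: dict) -> list:
    chain, prev = [], "genesis"
    for stage, payload in [("obligation", obligation), ("promise", promise),
                           ("delivery", delivery), ("evidence", evidence)]:
        rec = link({"stage": stage, "payload": payload}, prev)
        chain.append(rec)
        prev = rec["hash"]
    return chain

def unbroken(chain: list) -> bool:
    # Assurance holds only if every link points at the previous record
    # and every record still hashes to its stored value.
    prev = "genesis"
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body.get("prev") != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

In this framing the system of record stores the chain and anchors it; `unbroken` is what an auditor (human or automated) runs to confirm the thread from commitment to evidence was never cut.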


If the thread breaks at any point, assurance is lost. And any of these elements without a connection to the organization's values is compliance without purpose — the letter without the spirit.


The system of record doesn't disappear. But its role shifts from being the source of truth to being the registry of truth — the place where the golden thread is anchored, where the organization can demonstrate that its agents are keeping its promises in a way that upholds its values. Most current systems of record weren't designed for that. They were designed for humans filling in forms.


A Design Problem


I'm not arguing that the system of record is dead or that fully autonomous agent workflows are the norm yet. But the building blocks are being assembled — in the platforms, in the vendor roadmaps, and in the architectural decisions organizations are making right now.


The concern is that the decisions being made today are creating an architecture where agents do work without a golden thread connecting what they did to what the organization promised — and to the values behind those promises. Where nobody can demonstrate that the right output was produced the right way.

That's not a technology failure. It's a design failure. And design failures are preventable.


If you're deploying AI agents in a regulated environment, the question I'd encourage you to ask is not "what can the AI do?" but "will the AI do it in a way that keeps our promises and upholds our values?" If you don't have a clear answer, you have a design problem that's worth solving now — before it becomes a compliance problem you have to explain later.


Raimund (Ray) Laqua, P.Eng., PMP, is a computer engineer and the founder of Lean Compliance Consulting. With over 30 years of experience across regulated industries — oil & gas, pharmaceuticals, medical devices, aerospace, nuclear, and financial services — he developed the Lean Compliance methodology grounded in Promise Theory, cybernetic regulation, and total value chain analysis. Ray is Chair of the AI Committee for Engineers for the Profession (E4P), sits on ISO's ESG working group, serves on OSPE's AI in Engineering committee, and advocates for federal Digital Engineering licensing in Canada. He writes regularly at leancompliance.ca.
