Promise Agents: Autonomous Policy Fulfillment in Security Architecture

The systems that run our world make implicit promises — to route traffic, to process transactions, to keep data where it belongs. Most of those promises are never explicitly declared, never monitored, and never reported on until something breaks. Promise Theory, the framework Mark Burgess developed to model autonomous commitment, sits at the heart of the Lean Compliance methodology. This briefing extends it further, asking what becomes possible when security infrastructure is designed to keep its promises the way we expect people to keep theirs.


Most current thinking places AI at the monitoring or response layer: detecting anomalies, flagging incidents, accelerating analyst workflows. That is useful, but it still treats the underlying security equipment as passive infrastructure, governed by static rules and assessed from outside.


Burgess, who built CFEngine on Promise Theory's principles, had a different intuition — one rooted in a security problem he identified before most of us were thinking about it. He observed that the command-and-control model of managing devices was itself producing vulnerabilities: a device designed to receive and execute external commands can be exploited by anyone who can issue those commands. His response was to model a different design principle — devices that govern themselves from within by declaring what they will do, rather than waiting to be told. Autonomy, in his framework, is not just an architectural preference. It is a security property.


He found a concrete example of this already operating in live infrastructure: BGP — the Border Gateway Protocol that governs routing between the large independent networks that make up the internet. BGP routers do not wait for a central controller. They declare their routing promises to neighboring routers and cooperate through voluntary exchange of those declarations. Burgess states this directly: "BGP is a promise-based system." Each router is already a promising agent, governing itself from within, building trust through its history of kept promises.
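The voluntary-exchange pattern described above can be sketched in a few lines. This is an illustrative toy, not BGP itself: the class and method names (`RoutePromise`, `Router`, `declare`, `receive`) are assumptions introduced here to show the Promise Theory shape — each router decides what to promise, and each neighbor decides whether to accept what is offered.

```python
# Illustrative sketch only: routers as autonomous agents that declare
# routing promises to neighbors, rather than executing external commands.
# All names are hypothetical; this is not an implementation of BGP.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class RoutePromise:
    """A promise made by one router to one specific neighbor."""
    promiser: str   # router making the promise
    promisee: str   # neighbor it is made to
    prefix: str     # e.g. "203.0.113.0/24"
    body: str       # what is promised

@dataclass
class Router:
    name: str
    promises_made: list = field(default_factory=list)
    promises_received: list = field(default_factory=list)

    def declare(self, neighbor: "Router", prefix: str) -> RoutePromise:
        # The router decides for itself what to promise; nothing is imposed.
        p = RoutePromise(self.name, neighbor.name, prefix,
                         "I will forward traffic for this prefix")
        self.promises_made.append(p)
        neighbor.receive(p)
        return p

    def receive(self, promise: RoutePromise) -> None:
        # Accepting (and using) a promise is itself a voluntary act.
        self.promises_received.append(promise)

r1, r2 = Router("AS64500"), Router("AS64501")
r1.declare(r2, "203.0.113.0/24")
```

Note that neither router can compel the other: `declare` only publishes an offer, and `receive` records it without obligation — the asymmetry Burgess identifies as the security-relevant property.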


That is the design principle. The question worth exploring is what it would mean to apply it to security obligations — not routing tables, but the high-level commitments an organization makes about what its infrastructure will and will not allow.


I have written a briefing note that develops this as a formal proposal: **Promise Agents** — security equipment with embedded, fine-tuned AI models that receive obligations, assess what they can genuinely commit to, declare those commitments as promises, fulfill them autonomously, and monitor their own performance against them continuously.
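The lifecycle named above — receive, assess, declare, fulfill, monitor — can be made concrete with a minimal sketch. Everything here is a hypothetical interface invented for illustration (`Obligation`, `PromiseAgent`, `assess`, `declare`, `monitor`); the briefing note does not prescribe this API.

```python
# Hypothetical sketch of the Promise Agent lifecycle: receive an obligation,
# assess whether it can genuinely be met, declare a promise only if so,
# then monitor performance against it. Names are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Obligation:
    id: str
    requirement: str  # e.g. "deny egress to unapproved destinations"

@dataclass
class Promise:
    obligation_id: str
    commitment: str
    kept_checks: int = 0
    broken_checks: int = 0

class PromiseAgent:
    def __init__(self, capabilities: set):
        self.capabilities = capabilities
        self.promises: dict = {}

    def assess(self, ob: Obligation) -> bool:
        # Commit only to what the device can actually do.
        return ob.requirement in self.capabilities

    def declare(self, ob: Obligation):
        if not self.assess(ob):
            return None  # refusing beats an empty commitment
        p = Promise(ob.id, f"will enforce: {ob.requirement}")
        self.promises[ob.id] = p
        return p

    def monitor(self, ob_id: str, check_passed: bool) -> None:
        # The agent measures its own performance against its own promise.
        p = self.promises[ob_id]
        if check_passed:
            p.kept_checks += 1
        else:
            p.broken_checks += 1

agent = PromiseAgent({"deny egress to unapproved destinations"})
ob = Obligation("OB-1", "deny egress to unapproved destinations")
promise = agent.declare(ob)
agent.monitor("OB-1", check_passed=True)
```

The design choice worth noticing is that `declare` can return `None`: an agent that cannot meet an obligation refuses it rather than silently accepting, which is what distinguishes a promise from a command.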


The briefing covers the theoretical foundation in Promise Theory, the BGP precedent Burgess himself identifies, the problem that makes this direction worth considering, the architecture it implies, and the prerequisites that would need to be in place before it becomes buildable.


It is offered as a starting point for discussion — not a finished design, but a direction worth examining for security architects, compliance practitioners, AI engineers, and equipment vendors who may see potential in it.


The full briefing note is linked below. I would welcome responses from anyone working in these areas.




Raimund (Ray) Laqua, P.Eng., PMP is the founder of Lean Compliance Consulting, helping organizations build compliance as operational capability rather than procedural overhead. He serves on ISO's ESG working group and OSPE's AI in Engineering committee, and chairs the AI Committee for Engineers for the Profession (E4P), where he advocates for federal licensing of digital engineering disciplines in Canada.

 
 