- The Foundations of Lean Compliance
Lean Compliance rests on foundational principles drawn from promise theory, cybernetic regulation, and value chain analysis. This article presents the logical progression that connects these principles and demonstrates why they necessarily lead to a different understanding of compliance itself.

Understanding Obligations and Promises

Promise Theory & Operational Compliance

Compliance is fundamentally about meeting obligations. For compliance to be successful, these obligations must be operationalized through the fulfillment of promises associated with each obligation. This connection is grounded in Promise Theory, which recognizes that organizations make voluntary commitments to maintain cooperative relationships with their stakeholders.

Regulatory obligations come in four distinct types based on what they require and at what level they operate. The four types determine whether compliance requires procedural adherence (means) or outcome achievement (ends), at either specific (micro) or systemic (macro) levels. To meet these obligations, organizations must develop operational capabilities to fulfill their commitments—to keep their promises.

Compliance as Regulation

Compliance fulfills promises through regulation—regulating organizational effort to achieve targeted outcomes. This includes static controls, but more importantly, dynamic cybernetic systems that adapt and respond through feedback and feedforward controls.

The Foundation: Lean Thinking

Lean is about creating value by eliminating waste in operations. Waste is the manifestation of risk that has become reality. The root cause of both waste and risk is uncertainty, which lean practitioners call variation or variability.

The Core Insight: Regulation Reduces Variation

The act of regulation—through feedback and feedforward controls—reduces variation and variability. This is the fundamental principle underlying both Lean Six Sigma in operations and compliance functions like quality management and safety programs. Both regulate processes to reduce uncertainty.

Expanding Value: From Shareholder to Stakeholder

Expanded Value Chain Analysis (VCA)

Traditional Value Chain Analysis measures value as financial margin, optimized for shareholder value. Michael Porter developed VCA as a tool for achieving competitive advantage through superior margin creation. However, modern organizations must create value for all stakeholders: customers, employees, communities, regulators, the environment, and shareholders.

This expansion requires redefining value beyond margin to include quality, safety, security, sustainability, ethics, and trust. These aren't optional extras—they are obligations and promises to stakeholders. By extending VCA to encompass these dimensions, we create a more comprehensive model that affords management better decision-making tools for achieving competitive advantage in today's stakeholder-driven environment.

From Productivity to Total Value

Value Chain Analysis reveals secondary activities designed to improve productivity—the traditional domain of lean and operational excellence, focused on margin creation. Achieving Total Value requires more. Total Value includes financial margin plus quality, safety, security, sustainability, and ethics, plus value as perceived through the eyes of all stakeholders. This requires activities that improve certainty rather than just productivity. This is the domain of certainty programs and the practice of Lean Compliance.
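As a rough illustration of the core insight above (that regulation through feedback reduces variation), here is a minimal simulation sketch. The process model, noise levels, and gain are assumptions chosen only for illustration, not part of the Lean Compliance method itself:

```python
import random

random.seed(42)
TARGET = 100.0   # desired process outcome (assumed)
GAIN = 0.8       # strength of the feedback correction (assumed)

def run_process(steps: int = 1000, regulated: bool = True) -> list[float]:
    """Simulate a process whose output drifts; optionally counteract deviations via feedback."""
    drift, correction, outputs = 0.0, 0.0, []
    for _ in range(steps):
        drift += random.gauss(0, 0.5)   # slowly accumulating disturbance (uncertainty)
        noise = random.gauss(0, 1.0)    # common-cause variation
        output = TARGET + drift + correction + noise
        outputs.append(output)
        if regulated:
            # Feedback: push the observed deviation back toward the target
            correction -= GAIN * (output - TARGET)
    return outputs

def variance(xs: list[float]) -> float:
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(f"Variance without regulation: {variance(run_process(regulated=False)):.1f}")
print(f"Variance with feedback regulation: {variance(run_process(regulated=True)):.1f}")
```

The unregulated run wanders progressively further from target while the regulated run stays close to it, which is why feedback-based regulation, rather than one-time inspection, is what actually reduces variability.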
The Integrative Force: Certainty + Productivity Programs

Productivity programs use regulation to reduce variation and improve margins. Certainty programs use regulation to reduce uncertainty and ensure Total Value is created. Together, these programs serve as an integrative force within the value chain, ensuring both shareholder and stakeholder value.

Conclusion

Value and Compliance Streams

Lean Compliance is not compliance adapted to lean thinking. It is the natural extension of lean principles into the domain of certainty—making visible what has always been implicit in Total Value creation. These foundational principles enable practical applications including compliance streams, operational compliance models, and cybernetic governance systems that transform external obligations into internal operational capability.

This is Lean Compliance.
- Taking Ownership: The First Step to Operational Compliance
For decades, compliance has been one of the most reactive functions in the enterprise—more reactive than finance, operations, or even IT. While there are reasons why this is the case, this excessive reactivity has created a mission-critical gap: a dangerous vacuum where managerial accountability should exist but has been replaced with busywork.

The Abdication Problem

Managers, for the most part, have quietly abdicated their compliance responsibilities. They've handed them off to third-party consultants, delegated them to understaffed compliance departments, or worst of all, outsourced their thinking entirely to external auditors. When audit findings arrive (although not the only measure of effectiveness), these same managers treat them as someone else's problem to fix rather than their failure to prevent.

This abdication means obligations go unowned. And unowned obligations don't get fulfilled—they get tracked, reported on, and documented, but not actually fulfilled. The organization drifts outside the lines, remains blind to emerging risks, and loses sight of its mission while everyone points to procedures that nobody truly owns.

Why "Be Proactive" Doesn't Work

The obvious answer seems to be: stop being reactive and start being proactive. Get ahead of issues. Anticipate problems. Be forward-thinking. If only it were that simple.

Telling a reactive organization to become proactive is like telling someone who can't swim to simply start swimming better. The problem isn't their technique—it's that they haven't learned to stay afloat. You cannot be genuinely proactive about obligations you don't actually own.

Ownership Comes First

The path forward begins with a foundational shift: organizations must take ownership of their obligations and the risks those obligations address. Not delegated ownership. Not documented ownership. Real ownership—where specific people accept responsibility for ensuring specific promises are kept and specific hazards are controlled.

This means:

- Managers understanding their obligations as personal commitments, not corporate procedures
- Leaders recognizing that compliance risk is operational risk, not a separate concern
- Executives accepting that audit findings represent their management failures, not their auditors' discoveries

What AI Cannot Do

And if you think AI can help you with this, you will be left wanting. Here's the thing: AI cannot take ownership of your obligations. It can't even take ownership of its own outputs.

AI might be able to analyze some of your compliance gaps, generate your procedures, monitor your controls, and flag your risks—assuming you even have a complete set of those. It can make compliance activities faster, cheaper, and more efficient. But it cannot look your stakeholders in the eye and promise them anything. It cannot accept accountability when things go wrong. It cannot decide what matters and what doesn't.

Ownership is an irreducibly human act. It requires judgment, commitment, and the willingness to be held responsible. These aren't features that can be automated or algorithmic capabilities that can be trained. They're moral choices that only people can make.

Organizations rushing to deploy AI for compliance are often doing so precisely to avoid ownership—creating yet another layer of delegation, another place to deflect accountability. "The system didn't flag it" becomes the new "the auditor didn't catch it."
Until Ownership, Nothing Changes

Without this ownership foundation, compliance will remain exactly as it is: reactive, fragmented, and procedural. It won't improve. It won't integrate into operations. It won't create value. Organizations will continue generating documentation that nobody reads, attending training nobody remembers, and responding to findings nobody prevents. They'll add AI tools to the stack, automate the busywork, and still fail to keep their promises because nobody has actually accepted responsibility for keeping them.

The transformation to operational compliance—where obligations become capabilities and compliance creates value—cannot begin until someone looks at the organization's promises and risks and says: "These are mine. I own them."

Everything else follows from that moment. Nothing meaningful happens before it. And no technology, no matter how intelligent, can say those words for you.
- Compliance 2.0 System Requirements
For years, I've been tracking the evolution of compliance technology—and I've noticed a persistent gap between what organizations need and what the market delivers. Many, and perhaps most, compliance systems are designed around a basic understanding: they treat compliance as a documentation problem, or at most a data problem, rather than an operational problem.

This made sense when compliance was only about legal adherence, where the goal was to provide evidence of compliance to regulatory requirements. However, compliance is no longer just about that, and hasn't been for decades, particularly in highly regulated, high-risk sectors.

Compliance does not mean just passing an audit or obtaining a certificate. Compliance is about meeting obligations across many domains, including safety, security, sustainability, quality, ethics, regulatory, and other areas of risk. This requires contending with uncertainty and keeping organizations on-mission, between the lines, and ahead of risk. It's about making certain that value is created and protected.

We have called this Compliance 2.0, although each domain has its own name for it: Total Quality Management, Safety II, HOP, Functional & Process Safety, Cybernetics, Lean, and others. It's all about reducing variability to make certain that value is created rather than waste, which is what you get when risk becomes a reality.

Compliance 2.0 requires operational capabilities to achieve targets and advance outcomes towards better safety, security, sustainability, quality, ethics, legal adherence, and other expectations. Some may argue about the particulars, but overall most agree that this is the purpose of compliance (not the department) implemented as programs led by director and officer-level managerial roles. Compliance 2.0 programs are built on systems and processes that implement and deliver on promises associated with both mandatory and voluntary obligations. The essential (not the basic) capacity to deliver is what we call Minimum Viable Compliance (MVC).

The problem is that while compliance has changed, most technology and practitioners have not kept up. Traditional methods and practices based on inspection and audits are firmly entrenched and are difficult to change. This is why I created Lean Compliance close to 10 years ago—to bridge this gap.

To help organizations evaluate their compliance systems, I've created a list of system requirements for what is needed to support an operational view of compliance. This is not complete, and more work needs to be done. However, it's a start that I hope you might find helpful.
Requirements for Compliance 2.0 Systems

Managing Operations, Not Just Documents:

- Manage ALL four types of obligations—prescriptive rules, practice standards, performance targets, and program outcomes
- Trace promises to the operational capabilities required to fulfill them
- Track promise-makers and promise-keepers—who commits versus who delivers (RACI Model)
- Maintain the golden thread of assurance from obligation through to operational delivery (a minimal data-model sketch appears at the end of this article)
- Establish provenance—knowing where obligations come from and how they flow through operations
- Align stated values with how work actually gets done
- Integrate cross-functionally, breaking down silos between compliance, operations, and quality

Real Intelligence, Not Just Documentation:

- Monitor compliance status AND operational capacity to maintain it in real-time
- Distinguish between operational risk (failure to keep promises) and compliance risk (failure to deliver value)
- Surface operational insights before issues become incidents
- Establish cybernetic feedback loops between operational reality and compliance commitments
- Enable self-regulating mechanisms that maintain compliance through operational design
- Advance capabilities that drive better outcomes across safety, security, quality, sustainability, and ethics
- Provide balanced scorecard/dashboard across the hierarchy: outcomes (results) → performance (capacity) → conformance (practices) → adherence (rules)

Forward-Looking Operations:

- Enable management pre-view instead of only management review
- Plan front-view capabilities instead of reporting rear-view activities
- Conduct pre-incident investigations and program pre-mortems—not just post-mortems
- Assess organizational capability to fulfill obligations (close the "compliance effectiveness gap")
- Improve continuously across conformance, performance, effectiveness, and assurance
- Provide end-to-end visibility from obligation to operational outcome

Built-In, Not Bolted-On:

- Integrate compliance requirements into operational design from the start
- Build in compliance (poka-yoke principles) rather than inspect for it
- Make obligation alignment immediately visible through operational transparency
- Generate evidence as operational by-product, not separate activity
- Identify and eliminate compliance waste—redundant controls and non-value-adding activities

What's Next?

If you want to learn more about Compliance 2.0, I invite you to sign up for our upcoming Lean Compliance Leadership Workshop: How to Lead Compliance 2.0 Transformation (Feb 11).

Raimund Laqua (Ray) is a Professional Engineer (P.Eng.) and Project Management Professional (PMP) with over 30 years of experience in highly regulated industries including oil & gas, medical devices, pharmaceuticals, financial services, and government sectors. He is the founder of Lean Compliance Consulting and co-founder of ProfessionalEngineers.AI. Ray serves on ISO's ESG working group, OSPE's AI in Engineering committee, and as AI Chair for Engineers for the Profession (E4P), where he advocates for federal licensing of digital engineering disciplines in Canada.
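To make the "golden thread" requirement above concrete, here is a minimal sketch of how obligations, promises, and operational capabilities might be linked and checked. The class names, fields, and example data are hypothetical illustrations, not a standard schema or product:

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    name: str
    owner: str                                          # promise-keeper: who delivers
    evidence: list[str] = field(default_factory=list)   # generated as an operational by-product

@dataclass
class Promise:
    statement: str
    promise_maker: str                                   # who commits
    capabilities: list[Capability] = field(default_factory=list)

@dataclass
class Obligation:
    source: str   # provenance: regulation, standard, or voluntary commitment
    kind: str     # prescriptive rule, practice standard, performance target, or program outcome
    promises: list[Promise] = field(default_factory=list)

    def broken_threads(self) -> list[Promise]:
        """Promises with no operational capability behind them break the golden thread."""
        return [p for p in self.promises if not p.capabilities]

# Hypothetical example: trace one obligation end to end
obligation = Obligation(
    source="Process safety regulation (illustrative)",
    kind="performance target",
    promises=[
        Promise(
            statement="Keep loss-of-containment events below the agreed threshold",
            promise_maker="VP Operations",
            capabilities=[
                Capability(name="Relief-system inspection program",
                           owner="Plant Engineer",
                           evidence=["inspection records"]),
            ],
        ),
    ],
)

print([p.statement for p in obligation.broken_threads()])  # [] -> the thread is intact
```

A real system would carry far more attributes (risk, status, dates, metrics), but the essential requirement is the same: every obligation traceable to promises, and every promise to capabilities with named owners and operational evidence.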
- Is This The Best GRC Has To Offer?
I just attended a webinar from a leading GRC vendor promoting continuous risk assessment for AI. The topic seemed timely and the solution promising, so I gave it my full attention.

What I heard: AI introduces significant risk across organizations and within every functional silo. Fair enough.

⚡ The pitch: With all this risk, you need a system to manage it comprehensively. OK.

What they demonstrated was little more than a risk register combined with task management—where tasks are defined as regulatory requirements, framework objectives, and controls tagged with risk scores. The only novel feature was hierarchical task representation. Everything else was standard fare, complete with the obligatory heat map.

⚡ Not Understanding AI Risk

Risk was presented as the typical likelihood x severity calculation. They tried to present risk aggregation, but here's the issue: you can't simply add up risks and average them. Risk is stochastic. Proper aggregation requires techniques like Monte Carlo simulation across probability density functions for each risk (a small sketch of this appears at the end of this post). It's even better when you understand how risk-connected elements interact, enabling evaluation of risk propagation through the system.

The bottom line: This was traditional (and basic) risk management applied to AI—and done poorly. The promise of continuous risk assessment tied to AI was not delivered.

⚡ What AI Risk Actually Requires

If this represents the best that GRC can offer for AI, we're in deep trouble. With infinite possible inputs and outputs, generative AI is better described as an organizational hazard rather than a foundation for stable, predictable performance.

We need:

- Real-time controls, monitoring, and assessments
- Managed risk, not just bigger risk management databases

And we need all of this to be operational.

⚡ Learning From Other Risk Domains

Perhaps we should adopt risk measures and methods from high-hazard sectors:

- Hazard isolation
- HAZOP studies
- Functional and process safety approaches
- STAMP/STPA/CAST analysis
- Cybernetic regulation
- And others

Regardless of methodology, we need advanced software engineered for adaptive real-time systems—not yesterday's tools repackaged. The alternative? What many companies are doing now: buying bigger databases to track all the new risks they've created by deploying AI.

We can—and must—do better.

If you're looking to effectively contend with AI risk within your organization—beyond heat maps and risk registers—let's talk. I work with organizations to build operational approaches that actually manage hazards in real time, not just document them.
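To make the aggregation point concrete, here is a minimal Monte Carlo sketch. The risks, probabilities, and loss distributions are invented for illustration; they are not from the webinar or any real register:

```python
import random

random.seed(1)

# Hypothetical AI risk register: each risk has an annual probability of occurring
# and a lognormal loss distribution if it does. All numbers are made up.
risks = [
    {"p": 0.30, "mu": 11.0, "sigma": 0.8},   # e.g., model drift causes bad decisions
    {"p": 0.10, "mu": 13.0, "sigma": 1.0},   # e.g., data leakage through an AI tool
    {"p": 0.05, "mu": 14.0, "sigma": 1.2},   # e.g., unsafe automated action in operations
]

def simulate_year() -> float:
    """One trial: total loss across all risks for one simulated year."""
    total = 0.0
    for r in risks:
        if random.random() < r["p"]:
            total += random.lognormvariate(r["mu"], r["sigma"])
    return total

trials = sorted(simulate_year() for _ in range(50_000))

expected = sum(trials) / len(trials)
p95 = trials[int(0.95 * len(trials))]   # tail of the aggregate loss distribution

print(f"Expected annual loss: {expected:,.0f}")
print(f"95th percentile annual loss: {p95:,.0f}")
```

Adding or averaging individual risk scores says nothing about that 95th percentile, which is usually the number that matters; aggregation has to be done over the distributions, not over the scores.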
- Why GRC Should be GRE
What GRC Should BE

Traditionally, GRC activities were centered around integrating the siloed functions of Governance, Risk, and Compliance (GRC). While this is necessary, it is based on an old model where meeting obligations (the act of compliance) is a checkbox activity reinforced by audits. Similarly, risk management was building risk registers and heat maps, and governance was providing oversight of objectives completed in the past. All this to say: This was all reactive, misaligned, and focused on activity not outcomes.

However, when you start with an integrative, holistic, and proactive approach to meeting obligations, a different model emerges where the bywords are: Govern, Regulate, and Ensure (GRE). These are essential capabilities that, when working together, improve the probability of success by governing, regulating, and ensuring the ends and the means in the presence of uncertainty. There is no need to integrate disparate functions, as these are already present in their proactive, integrative, and holistic form to deliver the outcome of mission success.

If you're interested in learning more about transforming reactive GRC functions into proactive GRE capabilities, explore The Total Value Advantage Program™
- Regulating the Unregulatable: Applying Cybernetic Principles to AI Governance
As artificial intelligence systems reshape entire industries and societal structures, we face an unprecedented regulatory challenge: how do you effectively govern systems that often exceed human comprehension in their complexity and decision-making processes? Traditional compliance frameworks, designed for predictable industrial processes and human-operated systems, are proving inadequate for the dynamic, emergent behaviors of modern AI.

The rapid proliferation of AI across critical sectors—from healthcare diagnostics to financial trading, autonomous vehicles to criminal justice algorithms—demands a fundamental rethinking of how we approach regulatory design. Yet most current AI governance efforts remain trapped in conventional compliance paradigms: reactive rule-making, checklist-driven assessments, and oversight mechanisms that struggle to keep pace with technological innovation.

This regulatory lag isn't merely a matter of bureaucratic inertia. It reflects a deeper challenge rooted in the nature of AI systems themselves. Unlike traditional engineered systems with predictable inputs and outputs, AI systems exhibit emergent properties, adapt through learning, and often operate through decision pathways that remain opaque even to their creators.

The answer lies in applying cybernetic principles—the science of governance and control—to create regulatory frameworks that can match the complexity and adaptability of the systems they oversee. By understanding regulation as a cybernetic function requiring sufficient variety, accurate modeling, and ethical accountability, we can design AI governance systems that are both effective and ethical.

The stakes couldn't be higher. Without deliberately designing ethical requirements into our AI regulatory systems, we risk creating governance frameworks that optimize for efficiency, innovation, or economic advantage while systematically eroding the safety, fairness, and human values we seek to protect.

What regulatory approaches have you seen that effectively address AI's unique challenges?

Ray Laqua, P.Eng., PMP, is Chair of the AI Committee for Engineers for the Profession (E4P), Co-founder of ProfessionalEngineers.AI, and Founder of Lean Compliance.
- Ethical Compliance
Technology is advancing faster and further than our ability to keep up with the ethical implications. This also applies to the systems that use these technologies to govern, manage, and operate the businesses we work for, and that includes compliance. The speed of technological change poses significant challenges for compliance and its function to regulate the activities of an organization to stay within (or meet) all its regulatory requirements and voluntary obligations. Whether you consider compliance in terms of safety, quality, or professional conduct, these are all closely intertwined with ethics, which is rooted in values, moral attitudes, uncertainty, and ultimately decisions between what is right and wrong.

"It is impossible to design a system so perfect that no one needs to be good." – T.S. Eliot

Ethical Compliance

In this article I explore what makes a compliance system good (or effective) and, more importantly, whether it can be made ethical, assuming that's what you want for your organization. To answer these questions, we will dive into the topic of cybernetics, and specifically the works of Roger C. Conant and W. Ross Ashby along with the more recent works by Mick Ashby. To start, we need to define what cybernetics is and why it is important to this discussion.

What is Cybernetics?

Cybernetics is derived from the Greek word for "governance" or "to steer." Although this word may not be familiar to many, cybernetics is an active field of science involving a "transdisciplinary approach to exploring regulatory systems – their structures, constraints, and possibilities." This is where we derive much of our understanding of system dynamics, feedback, and control theory that we use to control mechanical and electrical systems. However, cybernetics extends far beyond engineering to biology, computer science, management, psychology, sociology, and other areas.

At the basic level, governance has three components: (1) the system that we wish to steer, (2) the governor (or regulator), which is the part that does the steering, and (3) the controller, the part that decides where to go. Consider, as an example, an HVAC system used to maintain a constant temperature in a house: a thermostat regulates the heating and air-conditioning subsystems, which are controlled by the owner.

It is important to understand the difference between the controller and regulator roles. The thermostat cannot tell if it is too hot or too cold; it only knows the number for the temperature. It is the owner (acting as the controller) that must decide whether the temperature is comfortable or not. This distinction is useful to better understand how companies need to be regulated. Regulatory bodies create regulations; however, it is each organization's responsibility to control and perform the function of regulation, not the regulatory body's. In a sense, each company must decide on the degree by which each compliance commitment is met (i.e., is it too high, is it too low, or is it just right) according to the level of uncertainty.
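A minimal sketch of this split between regulator and controller (illustrative only; the temperatures, thresholds, and class names are assumptions) might look like this:

```python
class Thermostat:
    """The regulator: acts only on numbers relative to the setpoint it is given."""

    def __init__(self, setpoint_c: float):
        self.setpoint_c = setpoint_c

    def regulate(self, measured_c: float) -> str:
        # No judgment here - only comparison of a measurement to a target
        if measured_c < self.setpoint_c - 0.5:
            return "heat"
        if measured_c > self.setpoint_c + 0.5:
            return "cool"
        return "idle"


class Owner:
    """The controller: judges whether the outcome is acceptable and moves the goal."""

    def adjust(self, thermostat: Thermostat, feels: str) -> None:
        if feels == "too cold":
            thermostat.setpoint_c += 1.0
        elif feels == "too hot":
            thermostat.setpoint_c -= 1.0


thermostat = Thermostat(setpoint_c=21.0)
owner = Owner()

print(thermostat.regulate(measured_c=19.0))  # "heat" - regulation without judgment
owner.adjust(thermostat, feels="too cold")   # judgment - the controller resets the goal
print(thermostat.setpoint_c)                 # 22.0
```

In compliance terms, an organization's own controls play the thermostat's role, while management plays the owner's: deciding whether a commitment is being met well enough is a control decision that cannot be delegated to the regulator, mechanical or otherwise.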
What is a Good Regulator?

To govern, you need a way of steering, and that is the role of the regulator. A regulator adjusts the system under regulation so that its output states are within the allowable (or desirable) outcomes. The Good Regulator Theorem posited by Conant and Ashby states that "Every Good Regulator of a System Must be a Model of that System."

Examples of models that we are more familiar with include: a city map, which is a model of the actual city streets; a restaurant menu, which is a model of the food that the restaurant prepares; a job description, which is a model of an employee's roles and responsibilities; and so on. In more technical terms, the model of the system and the regulator must be isomorphic.

The theorem does not state how accurate the model needs to be or its technical characteristics. Sometimes a simple list of directions can be more helpful than a detailed map where there is too much information. The theorem is sufficiently general and is applicable to all regulating, self-regulating, and homeostatic systems. What is necessary is sufficient understanding of how the system works to properly know how to regulate it. A critical characteristic to know is how much variety (or variation) exists in the output of the system under regulation.

The Law of Requisite Variety

The Law of Requisite Variety (posited by W. Ross Ashby) states that for a system to be stable, the number of states of its regulator mechanism must be greater than or equal to the number of states in the system being controlled. In other words, variety destroys variety, which is what regulation does.

This law has significant implications for systems in general but also for management systems. For example, according to the law of requisite variety, a manager needs as many options as there are different disturbances (or variations) in the systems he is managing. In addition, when systems are not able to meet compliance (for example), it may be due to a lack of sufficient variety in the control systems. This may help explain why existing controls may not be as effective as we would like. There needs to be enough variation in the control actions to adjust the management system and stay within compliance, be it performance, safety, quality, or otherwise.

What is an Ethical Regulator?

Now that we have a sense of what regulation does and what is needed for it to work, we will consider what it means for the regulation function to be ethical. First and foremost, we need to explain what it means to be ethical. By definition, something that is ethical (1) relates to ethics (ethical theories), (2) involves or expresses moral approval or disapproval (ethical judgments), or (3) conforms to accepted standards of conduct (ethical behavior).

According to Mick Ashby, a regulator could be considered ethical if it meets nine requisite characteristics (six of which are only necessary for the regulator to be effective). An ethical regulator must have:

1. Truth about the past and present.
2. Variety of possible actions (greater than or equal to the number of states of the system under regulation).
3. Predictability of the future effects of actions.
4. Purpose expressed as unambiguously prioritized goals.
5. Ethics expressed as unambiguously prioritized rules.
6. Intelligence to choose the best actions.
7. Influence on the system being regulated.
8. Integrity of all subsystems.
9. Transparency of ethical behaviour (this includes retrospectively).

The challenges of building such a system are many. However, three of these characteristics (Ethics, Integrity, and Transparency) are requisites for a regulator to be ethical rather than merely effective. Interestingly, these are the areas where we have the greatest hurdles to overcome:

- It is not yet possible to build ethical subroutines where goals are unambiguously prioritized.
- Transparency of ethical behaviour is not possible when the rules are not visible or cannot be discovered. This is very much the case with current advances in machine learning and artificial intelligence systems, where we don't even know what the rules are or how they work.
- Systems do not have sufficient integrity to protect against tampering, along with other ways they can be manipulated to produce undesired outcomes.

We can conclude that current limitations prohibit building systems that incorporate the necessary characteristics for the regulation function to be ethical as measured against the ethical regulator theorem. Before we look at how these limitations can be addressed, there is another law that is important to understand for companies to have systems that are ethical.

The Law of Inevitable Ethical Inadequacy

This law is simply stated as, "If you don't specify that you require a secure ethical system, what you get is an insecure unethical system."

This means that unless the system specifies ethical goals it will regulate away from being ethical towards the other goals you have targeted. You can replace the word ethical with "safety" or "quality" or "environmental", which are more concrete examples of ethical-based programs that govern an organization. If they are not part of a value creation system, according to this law, the system will always optimize away from "quality", "safety", or "environmental" goals. This may help explain the tensions that always exist between production and safety, or production and quality, and so on. When productivity is the only goal, the production system will regulate towards that goal at the expense of all others.

Perhaps this provides a form of proof that compliance cannot be a separate objective that is overlaid on top of production systems and processes. We know that quality must be designed in, and we can conclude that this also applies to all compliance goals.

Definition of Ethical Compliance

As previously mentioned, cybernetics describes governance as having, at a basic level, three components: the system under regulation, the regulator, and the controller. We also stated that compliance performs the role of regulation to steer a system towards meeting compliance obligations. When these obligations incorporate such things as quality, safety, and professional conduct, we are adding an ethical dimension to the compliance function. Based on the laws of cybernetics along with the limitations previously discussed, we can now define "Ethical Compliance" as:

Ethical Compliance = Ethical System + Ethical Controller + Effective Regulator

- The system under regulation must be ethical (i.e. it must incorporate quality, safety, and other compliance goals) – Law of Inevitable Ethical Inadequacy
- The regulator must be a good regulator (i.e. it must be a model of the system under regulation) – Good Regulator Theorem
- The regulator must be effective (i.e. it must at least meet the 6 characteristics of the ethical regulator that make it effective) – Ethical Regulator Theorem
- The controller must be human and ethical (as the regulator cannot be) – Ethical Regulator Theorem
- The controller must be human and accountable (i.e. transparent, answerable, and with integrity) – Ethical Regulator Theorem, and Regulatory Statutes and Law

The last one is ultimately what makes compliance ethical and more than just codified values and controls. Taking responsibility and answering for our decisions is imperative for any ethical system. Machines are not accountable, nor do they take responsibility for what they do. However, this is what humans do and must continue to do.

References:
1. Ethical Regulators - http://ashby.de/Ethical%20Regulators.pdf
2. Good Regulators - http://pespmc1.vub.ac.be/books/Conant_Ashby.pdf
3. Law of Requisite Variety - http://pespmc1.vub.ac.be/REQVAR.html
4. Requisite Organization and Requisite Variety, Christopher Lambert - https://vimeo.com/76660223
- Operationalizing AI Governance: A Lean Compliance Approach
AI governance policies typically describe what organizations intend to do. Lean Compliance focuses on how those intentions become operational capabilities that keep promises under uncertainty.

Mapping an AI governance policy means creating an operational regulation framework that links legal, ethical, engineering, and management commitments across AI use-cases and life-cycle stages. The goal isn't compliance documentation—it's designing the operational capabilities that provide assurance of promise-keeping to regulators, customers, and other stakeholders in real time, a necessity for contending with AI uncertainty.

From Policy to Capability

Traditional compliance treats AI governance as a paper exercise. Instead, Lean Compliance treats it as operational infrastructure with three components:

- Guardrails: Controls that prevent harm and contain risk
- Lampposts: Monitoring that makes system behavior visible
- Compliance streams: Flows of promises from legal/ethical commitments through engineering controls to demonstrated outcomes

Start by inventorying AI assets and dependencies, classifying systems by impact and risk, then mapping controls to data quality, model validation, deployment architecture, ongoing monitoring, and human decision points.

Seven Elements of Operational AI Governance

1. Purpose & Scope: Define mission, enumerate AI assets, identify high-risk use-cases that trigger enhanced controls.

2. Roles & Accountability: Assign decision rights: executive sponsor, AI/Model Compliance lead, Engineering, Data Stewards, Legal. Clear accountability prevents governance failure.

3. Life-cycle Controls: Design standards, pre-deployment risk assessment, validation protocols, controlled pilots, change management. Each stage produces evidence of promise-keeping.

4. Operational Controls: Data governance for quality and provenance. Drift detection and performance monitoring. Access controls and third-party assurance. Containment for operational technology and critical systems.

5. Assurance & Metrics: KPIs for safety, fairness, reliability, incidents. Minimal Viable Compliance (MVC) measurement—enough to demonstrate compliance effectiveness without waste.

6. Escalation & Human Oversight: Human judgment layer for ethical decisions, incident response, regulatory reporting. Accountability resides with people, not algorithms.

7. Continuous Improvement: Build-measure-learn cycles. AI-assisted operational controls where they add value. Periodic alignment with ISO 42001, NIST AI RMF, sector frameworks.

Minimal Viable Program (MVP): A Bayesian Approach

Don't build the entire program at once. Treat governance as a learning system that updates its understanding of risk and control effectiveness based on operational evidence—what Bayesian learning does with beliefs, MVP does with governance capability:

- Prior: Start with initial risk assessment and minimal controls for highest-risk systems
- Evidence: Deploy controls and measure actual outcomes—incidents, false positives, operational friction
- Update: Revise your understanding of which controls create value vs. waste
- Iterate: Strengthen what works, eliminate what doesn't, expand to next-priority systems

This is the Lean Startup model applied to governance. Your first control framework is a hypothesis. Operational data tells you if you're right. Each cycle, incident, or signal improves your understanding of how to keep promises effectively. The difference from traditional compliance: you're not trying to build perfect governance upfront. You're building a learning system that gets smarter about risk and control effectiveness over time, using evidence from operations to update your governance model.
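As a toy illustration of this Bayesian framing, here is a minimal sketch of updating a belief about one control's effectiveness from operational evidence. The Beta-Binomial model, the threshold, and all numbers are assumptions made for the sketch, not a prescribed Lean Compliance method:

```python
# Belief about a control's effectiveness: the probability it contains a risky
# event when challenged. Modeled as Beta(alpha, beta); each operational
# challenge is a success (contained) or a failure (missed).

def update(alpha: float, beta: float, successes: int, failures: int) -> tuple[float, float]:
    """Posterior parameters after observing operational outcomes."""
    return alpha + successes, beta + failures

def mean(alpha: float, beta: float) -> float:
    return alpha / (alpha + beta)

# Prior: before deployment we only weakly believe the control works.
alpha, beta = 2.0, 1.0
print(f"Prior effectiveness estimate: {mean(alpha, beta):.2f}")

# Evidence: first quarter of operation - 18 challenges contained, 2 missed.
alpha, beta = update(alpha, beta, successes=18, failures=2)
print(f"Posterior effectiveness estimate: {mean(alpha, beta):.2f}")

# Update / Iterate: act on what the evidence says.
if mean(alpha, beta) < 0.80:
    print("Control underperforming: redesign it before expanding scope")
else:
    print("Control performing: expand to the next-priority systems")
```

Each cycle the posterior becomes the next prior, which is the build-measure-learn loop described above applied to governance capability rather than product features.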
The test isn't whether your policy document passes audit. It's whether your organization reliably keeps its AI-related promises under conditions of uncertainty and change, learning and adapting as both AI systems and the risk landscape evolve. Governance becomes operational capability when it ensures and protects stakeholder value through evidence-based learning, not just regulatory coverage through documentation.

Is your AI governance capable of ensuring and protecting Total Value? Find out by getting your Total Value Assessment available here.
- Compliance as Wisdom
Compliance as Organizational Wisdom: The Strategic Practice of Restraint

Organizations that run algorithmic processes without restraint—or blindly follow operating processes that serve purposes misaligned with their mission—act unwisely. They optimize metrics divorced from their core purpose, cut costs that destroy capabilities essential to their mission, and follow recursive loops that lead them away from sustainable value creation.

Compliance is the means by which organizations practice restraint in service of wisdom. When market pressures create impulses to cut corners, governance uses compliance mechanisms to maintain the discipline to keep promises. When algorithms identify short-term profit opportunities, or when standard procedures push for quarterly targets, compliance provides the means to ask whether these actions serve the organization's actual mission.

This transforms compliance from procedural overhead into the operational means of organizational wisdom. Instead of rule-following, it becomes the systematic means of promise-keeping—providing governance the mechanisms to interrupt processes that serve purposes misaligned with organizational mission.

Consider the difference:

- A cost-cutting algorithm that reduces expenses by 15% regardless of impact on core capabilities
- Governance that uses compliance mechanisms to ask: "What are we actually trying to achieve, and what promises are we keeping or breaking?"

The first serves narrow financial purposes. The second uses compliance as the means to maintain organizational integrity while pursuing the actual mission. In this way, compliance becomes the means by which governance maintains organizational purpose—ensuring that efficiency serves effectiveness, not the other way around.
- From Chaos to Order: The Creation Process
The opening of Genesis describes a progression: formlessness to form, potential to purpose, chaos to order. The sequence—formless and void, then light, then separation, then foundation, then rhythm, then inhabitants, then agency, then rest—keeps showing up when building new organizations, new capabilities, new systems from the ground up. Each stage creates conditions for the next. Skip one, and the whole thing stumbles.

This isn't prescriptive or scientific. But as a lens for understanding how new things come into being, the pattern proves useful.

Starting With What Is

"The earth was formless and void, and darkness was over the surface of the deep."

The Hebrew is tohu wabohu—formless and void. No structure, and nothing inhabiting the structure. Both conditions matter.

Every new venture, every new organizational capability, every genuine innovation begins here. Potential exists. Intent is present—the spirit hovering over waters. But structure hasn't emerged yet, and there's nothing coherent to populate even if it had.

This is the natural starting point for creation. Not a problem to solve, but a condition to work from. You have potential energy, raw materials, purpose—but no form yet. The work starts with naming what is, not what we wish were true.

Observability Precedes Control

"Let there be light."

The first act of creation isn't building anything. It's establishing the capacity to observe. Light enables feedback—the fundamental requirement of any control system. In cybernetic terms: you cannot regulate what you cannot sense. Before structure, before process, before any attempt at order, you need the ability to distinguish signal from noise, day from night, progress from mere activity.

When creating something new, we often rush to build before we can see clearly. We start with solutions before we understand what we're actually working with. But observability comes first.

Creating light means establishing conditions where truth becomes visible. What feedback mechanisms will tell you whether this new thing is working? How will you know if you're making progress? What will reveal the difference between what you imagine and what's actually happening?

Many new ventures fail here. They build elaborate structures without the sensing mechanisms needed to know whether those structures serve any purpose.

Separation Creates Domains

"Let there be an expanse between the waters to separate water from water."

Separating water from water—what meaningful distinction does that create? When creating something new without clear boundaries, you cannot distinguish the new thing from its environment. Internal operations blur with external relationships. What you're creating bleeds into everything around it.

The expanse creates domains. Not barriers, but appropriate separation that allows different types of work to occur under different conditions. What belongs inside this new thing versus outside it? Where does governance sit relative to operations? What boundaries define the system you're creating? Without these boundaries, the new thing never achieves coherent identity.

The boundary isn't about isolation. It's about creating conditions where the new system can develop its own character, separate from everything else. This is about requisite variety in control structures. Different levels of the system need different operating conditions to function effectively.

Foundation and Self-Reproduction

"Let the dry land appear... let the land produce vegetation bearing seed according to its kind."
Two things happen on day three: stable foundation emerges, creating conditions for opportunities to grow.

The dry land creates those conditions—stable ground where something can take root. You cannot build on water. The foundation isn't bureaucracy or rigidity. It's the stable platform that makes growth possible.

Then vegetation appears, bearing seed according to its kind. Self-reproducing capability. Practices that don't require constant intervention to survive. Knowledge that transfers between people. Patterns that perpetuate themselves without heroic individual effort.

The dry land creates the conditions. The vegetation represents what grows from those conditions—opportunities realized, capabilities developed, patterns that regenerate themselves.

When creating something new, you need both. The stable platform that creates conditions for growth, and the self-regenerating capacity that allows the system to develop and persist. A new organization, a new capability, a new system isn't established until its essential patterns can reproduce without depending on specific individuals or constant oversight.

Coordination Through Rhythm

"Let there be lights in the expanse to mark seasons and days and years."

This isn't about creating a calendar. It's about establishing rhythmic structures that allow distributed activity to coordinate without requiring constant direct communication.

Consider how celestial bodies function: they don't command anything. They provide reliable patterns that other systems can synchronize to. Migration, planting, sleeping, waking—all coordinated by rhythm rather than control.

New systems need temporal architecture. When does planning occur? When do we review? When do we commit? When do we reflect? These rhythms are coordinating mechanisms that allow the new thing to operate coherently.

The fourth day establishes the governance cadences that allow the emerging system to coordinate itself across time and distance. It's not time management. It's the creation of predictable patterns that enable distributed decision-making.

Populating Structure With Capability

"Let the waters teem with living creatures, and let birds fly across the expanse."

Only now—after observation, boundaries, foundation, and rhythm are established—does the text populate the system with specialized actors. Fish in water, birds in air. Each in the domain suited to their nature.

We typically try to staff new ventures before we've established what domains exist. Before we know what boundaries matter. Before there's stable ground to work from. Before there are coordinating rhythms to synchronize around.

When you populate too early, people don't know where they belong or what they're optimizing for. When you populate after establishing structure, roles emerge more naturally. The domains reveal what capabilities they need and where those capabilities fit.

This isn't about org charts or hierarchy. It's about alignment between capability and context—putting specialized excellence in the environment where it can function effectively.

The Emergence of Agency

"Then God said, 'Let us make mankind in our image, in our likeness, so that they may rule...'"

Day six distinguishes between land animals and humans. Both are sophisticated—the animals represent complex operational capability. But humans represent something different: the capacity for responsible agency.

What separates execution from stewardship? The ability to exercise judgment. To make promises and adapt means while honouring ends.
To take responsibility for outcomes, not just follow processes. To understand purpose, not just complete tasks.

This is where promise-keeping capability emerges. Where people can say "this is my responsibility" and mean it—not just in their assigned domain, but for the coherence of the whole.

All the previous stages create conditions where this becomes possible. You cannot ask people to exercise responsible judgment when they're working on unstable ground, within unclear boundaries, with no ability to observe what's actually happening, and no coordinating rhythms to synchronize their choices with others'. Agency isn't demanded. It emerges when conditions support it.

Building Rest Into the Rhythm

"By the seventh day God had finished the work he had been doing; so on the seventh day he rested from all his work."

The text declares each stage "good" and the whole "very good." Rest comes not from exhaustion, but as part of the pattern itself.

The sabbath principle is about building rest into the rhythm of creation. Not as recovery from depletion, but as integral structure. As space for reflection. As pause that allows what's been built to settle and stabilize.

When creating something new, we rarely pause. There's always more to build, more to perfect, more to add. But the pattern suggests rest isn't optional—it's part of the architecture. Systems need time to stabilize. New patterns need space to settle. People need breathing room to see what they've built.

Systems that never rest eventually break. Not from the work itself, but from the inability to consolidate learning, to reflect on what's been accomplished, to let new patterns take hold. Sustainability requires rhythm that includes rest. Not as weakness, but as structure itself.

The Pattern

This isn't a methodology. You cannot follow seven steps and create whatever you're trying to build. What this offers is a pattern for noticing—a way of observing what might be missing, or what you might be attempting before conditions are ready to support it.

The sequence matters. Not rigidly—creation isn't a linear process—but directionally. You build observability, then boundaries, then foundation, then rhythm, then populate with capability, then enable agency, then build in rest and reflection. You might cycle through these patterns multiple times, at different scales, in different aspects of what you're creating. The pattern recurs because it describes something fundamental about how complex systems come into being.

After the Seventh Day

The Genesis narrative doesn't end with creation. It continues with stewardship, with relationship, with the ongoing work of maintaining and developing what's been brought into being.

Creation establishes structure. What follows is the responsibility of those who inhabit it—the promise-keeping work of honouring what's been built while adapting to what emerges.

The pattern suggests something important: bringing order from chaos isn't the end of the work. It's the foundation for what comes next. Once you've created the conditions for life, for growth, for agency—the real work begins. The work of stewardship. Of maintenance. Of continuous adaptation within stable structure.

Ancient wisdom doesn't provide formulas. It offers patterns that generations have found useful for making sense of recurring challenges. Whether this particular pattern proves useful in your work with creating new things—that's for you to discover.
The creation process described in Genesis might simply be reminding us: there are natural progressions in how complex things come into being. You work with those progressions, not against them. You create conditions in sequence. You respect the time things need to stabilize. You build rest into rhythm. You enable agency through structure, not despite it.

And then, after the seventh day, the real work of inhabiting what you've created begins.

What patterns have you noticed in how new things come into being?
- Cultivating Opportunities
As we wind down for the year, I find myself looking ahead and wondering what's in store. As leaders, we know there are many forces at work—often too many to deal with, and many outside our control. But here's what I've been thinking: What we experience is also the result of the opportunities we cultivate in the current year.

This insight came to me recently from working with someone I consider wise—a man now retired from a distinguished career as a physician and researcher, well known in his field. I call him the Great Gardener.

The Cultivation Principle

In a project I'm working on with him, he's demonstrated time and again the value of cultivating opportunities. He's shown me how important it is to cultivate opportunities much the same way we cultivate a garden—which, by the way, is one of his greatest passions.

His approach is simple but profound: whenever you see an interest, desire, a spark, or a possibility from someone who can contribute to your endeavour, you need to cultivate it. Even from people you might consider your "enemy" or "competitor."

We may not have control over what will bear fruit and what doesn't, but we do have control over preparing the soil to provide the greatest chance for something good to happen. We also have control over the seeds we plant. The question for us is: Will we plant seeds of purpose, unity, and partnership? Or will we scatter seeds of chaos, discord, and resistance?

Cultivating at Work

In compliance, we also see this principle at work. The organizations that thrive aren't just those with the best control frameworks—they're the ones that have cultivated trust with regulators, built genuine partnerships with business units, and developed the conditions for mission and compliance success. They spend time cultivating the soil.

When they need to find a way forward through complex challenges, these cultivated relationships and developed capabilities—not external forces—are what they lean on to move ahead.

Getting Ready for Spring

Even though winter is almost here and many aren't thinking of gardening, this is precisely the time for us to consider what opportunities to cultivate in the year ahead. What vision needs casting? What sparks in your organization need fanning? What relationships need nurturing to create the probability for opportunities to grow?

In our field, we're experts at spotting threats and building defences. We excel at risk assessments, gap analyses, and control design. These capabilities are essential. But what if our greatest competitive advantage lies not just in the problems we prevent, but in the possibilities we cultivate today?

We may not be able to control everything that happens to us, but we can choose where we invest our time, resources, and energy. This year, let's commit to balancing our portfolio: continue the essential work of managing risks, but also dedicate intentional effort to planting and cultivating opportunities. Let's see what good things will grow.
- Deploy First, Engineer Later: The AI Risk We Can’t Afford
The sequence matters: proper engineering design must occur before deployment, not afterwards.

by Raimund Laqua, PMP, P.Eng

As a professional engineer with over three decades of experience in highly regulated industries, I firmly believe we can and should embrace AI technology. However, the current approach to deployment poses a risk we simply cannot afford.

Across industries, I’m observing a troubling pattern: organizations are bypassing the engineering design phase and jumping directly from AI research and prototyping to production deployment. This “Deploy First, Engineer Later” approach, or as some call it, “Fail First, Fail Fast,” treats AI systems like software products rather than engineered systems that require professional design discipline.

Engineering design goes beyond validation and testing after deployment; it’s a disciplined practice of designing systems for safety, reliability, and trust from the outset. When we want these qualities in AI systems and the internal controls that use them, we must engineer them in from the beginning, not retrofit them later.

Here’s the typical sequence organizations follow:

1. Research and prototype development
2. Direct deployment to production systems
3. Hope to retrofit safety, security, quality, and reliability later

What should happen instead:

1. Research and controlled experimentation
2. Engineering design for safety, reliability, and trust requirements
3. Deployment of properly engineered systems

AI research and controlled experimentation have their place in laboratories where trained professionals can systematically study impacts and develop knowledge for practice. However, we’re witnessing live experimentation in critical business and infrastructure systems, where both businesses and the public bear the consequences when systems fail due to inadequate engineering.

When companies deploy AI without proper engineering design, they’re building systems that don’t account for the most important qualities: safety, security, quality, reliability, and trust. These aren’t features that can be added later; they must be built into the system architecture from the start.

Consider the systems we rely on: medical devices, healthcare, power generation and distribution, financial systems, transportation networks, and many others. These systems require engineering design that considers failure modes, safety margins, reliability requirements, and trustworthiness criteria before deployment. However, AI is being integrated into these systems without this essential engineering work.

This creates what I call an “operational compliance gap.” Organizations have governance policies and risk management statements, but these don’t translate into the engineering design work needed to build or procure inherently safe and reliable systems. Without proper engineering design, governance policies become meaningless abstractions. They give the appearance of protection without the operational capabilities to ensure that what matters most is protected.

The risk goes beyond individual organizations. We currently lack enough licensed professional engineers with AI expertise to provide the engineering design discipline critical systems need. Without professional accountability structures, software developers are making engineering design decisions about safety and mission-critical systems without the professional obligations that engineering practice demands.

Professional engineering licensing ensures accountability for proper design practice.
Engineers become professionally obligated to design systems that meet safety, reliability, and trust requirements. This creates the discipline needed to counteract the “deploy first, engineer later” approach that’s currently dominating AI adoption. The consequences of deploying unengineered AI systems aren’t abstract future concerns; they’re immediate risks to operational integrity, business continuity, and public safety. These risks are simply too great for businesses and society to ignore, especially as they try to retrofit engineering discipline into systems never intended for safety or reliability. Engineering design can’t be an afterthought. The sequence matters: proper engineering design must occur before deployment, not afterwards. Deploying systems first and then engineering them is a risk we simply can’t afford.











