
  • Why GRC Should be GRE

    What GRC Should Be
    Traditionally, GRC activities were centered around integrating the siloed functions of Governance, Risk, and Compliance (GRC). While this is necessary, it is based on an old model in which meeting obligations (the act of compliance) is a checkbox activity reinforced by audits. Similarly, risk management meant building risk registers and heat maps, and governance meant providing oversight of objectives completed in the past. All this to say: this approach was reactive, misaligned, and focused on activity, not outcomes. However, when you start with an integrative, holistic, and proactive approach to meeting obligations, a different model emerges where the bywords are: Govern, Regulate, and Ensure (GRE). These are essential capabilities that, when working together, improve the probability of success by governing, regulating, and ensuring the ends and the means in the presence of uncertainty. There is no need to integrate disparate functions, as these are already present in their proactive, integrative, and holistic form to deliver the outcome of mission success. If you're interested in learning more about transforming reactive GRC functions into proactive GRE capabilities, explore The Total Value Advantage Program™

  • Regulating the Unregulatable: Applying Cybernetic Principles to AI Governance

    As artificial intelligence systems reshape entire industries and societal structures, we face an unprecedented regulatory challenge: how do you effectively govern systems that often exceed human comprehension in their complexity and decision-making processes? Traditional compliance frameworks, designed for predictable industrial processes and human-operated systems, are proving inadequate for the dynamic, emergent behaviors of modern AI. The rapid proliferation of AI across critical sectors—from healthcare diagnostics to financial trading, autonomous vehicles to criminal justice algorithms—demands a fundamental rethinking of how we approach regulatory design. Yet most current AI governance efforts remain trapped in conventional compliance paradigms: reactive rule-making, checklist-driven assessments, and oversight mechanisms that struggle to keep pace with technological innovation. This regulatory lag isn't merely a matter of bureaucratic inertia. It reflects a deeper challenge rooted in the nature of AI systems themselves. Unlike traditional engineered systems with predictable inputs and outputs, AI systems exhibit emergent properties, adapt through learning, and often operate through decision pathways that remain opaque even to their creators. The answer lies in applying cybernetic principles—the science of governance and control—to create regulatory frameworks that can match the complexity and adaptability of the systems they oversee. By understanding regulation as a cybernetic function requiring sufficient variety, accurate modeling, and ethical accountability, we can design AI governance systems that are both effective and ethical. The stakes couldn't be higher. Without deliberately designing ethical requirements into our AI regulatory systems, we risk creating governance frameworks that optimize for efficiency, innovation, or economic advantage while systematically eroding the safety, fairness, and human values we seek to protect. What regulatory approaches have you seen that effectively address AI's unique challenges? Ray Laqua, P.Eng., PMP, is Chair of the AI Committee for Engineers for the Profession (E4P), Co-founder of  ProfessionalEngineers.AI , and Founder of Lean Compliance.

  • Ethical Compliance

    Technology is advancing faster and further than our ability to keep up with the ethical implications. This also applies to the systems that use these technologies to govern, manage, and operate the businesses we work for, and that includes compliance. The speed of technological change poses significant challenges for compliance and its function of regulating an organization's activities to stay within (or meet) all its regulatory requirements and voluntary obligations. Whether you consider compliance in terms of safety, quality, or professional conduct, these are all closely intertwined with ethics, which is rooted in values, moral attitudes, uncertainty, and ultimately decisions between what is right and wrong.
    "It is impossible to design a system so perfect that no one needs to be good." – T.S. Eliot
    Ethical Compliance
    In this article I explore what makes a compliance system good (or effective) and, more importantly, whether it can be made ethical, assuming that's what you want for your organization. To answer these questions, we will dive into the topic of cybernetics, specifically the works of Roger C. Conant and W. Ross Ashby, along with the more recent work of Mick Ashby. To start, we need to define what cybernetics is and why it is important to this discussion.
    What is Cybernetics?
    Cybernetics is derived from the Greek word for "governance" or "to steer." Although the word may not be familiar to many, cybernetics is an active field of science involving a "transdisciplinary approach to exploring regulatory systems – their structures, constraints, and possibilities." This is where we derive much of our understanding of the system dynamics, feedback, and control theory that we use to control mechanical and electrical systems. However, cybernetics extends far beyond engineering to biology, computer science, management, psychology, sociology, and other areas. At the basic level, governance has three components: (1) the system that we wish to steer, (2) the governor (or regulator), the part that does the steering, and (3) the controller, the part that decides where to go. Consider an HVAC system used to maintain a constant temperature in a house: a thermostat regulates the heating and air-conditioning subsystems, which are controlled by the owner. It is important to understand the difference between the controller and regulator roles. The thermostat cannot tell if it is too hot or too cold; it only knows the number for the temperature. It is the owner (acting as the controller) who must decide whether the temperature is comfortable or not. This distinction is useful for understanding how companies need to be regulated. Regulatory bodies create regulations; however, it is each organization's responsibility, not the regulatory body's, to control and perform the function of regulation. In a sense, each company must decide on the degree to which each compliance commitment is met (i.e. is it too high, too low, or just right) according to the level of uncertainty.
    What is a Good Regulator?
    To govern, you need a way of steering, and that is the role of the regulator. A regulator adjusts the system under regulation so that its output states are within the allowable (or desirable) outcomes. The Good Regulator Theorem posited by Conant and Ashby states that "Every Good Regulator of a System Must be a Model of that System."
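    To make the controller/regulator distinction concrete, here is a minimal sketch in Python (my own illustration, not from the original article): the thermostat steers toward a numeric setpoint it cannot judge, while the owner, acting as controller, decides what "comfortable" means and moves the setpoint.

```python
# Sketch of the HVAC example: the regulator (thermostat) only compares
# numbers; the controller (owner) decides what "comfortable" means.

class Thermostat:
    """Regulator: steers the system toward a setpoint it cannot judge."""
    def __init__(self, setpoint_c: float, deadband_c: float = 0.5):
        self.setpoint_c = setpoint_c
        self.deadband_c = deadband_c

    def regulate(self, measured_c: float) -> str:
        # The thermostat knows only the number, not whether it is comfortable.
        if measured_c < self.setpoint_c - self.deadband_c:
            return "heat"
        if measured_c > self.setpoint_c + self.deadband_c:
            return "cool"
        return "idle"

class Owner:
    """Controller: exercises judgment about where to steer."""
    def adjust(self, thermostat: Thermostat, feels: str) -> None:
        if feels == "too cold":
            thermostat.setpoint_c += 1.0
        elif feels == "too hot":
            thermostat.setpoint_c -= 1.0

thermostat = Thermostat(setpoint_c=21.0)
print(thermostat.regulate(measured_c=18.2))   # -> "heat"
Owner().adjust(thermostat, feels="too cold")  # judgment changes the goal
```

    The same split appears in compliance: controls and monitors hold the system to a target, while the organization, as controller, decides whether the target itself is right.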
    Examples of models that we are more familiar with include: a city map, which is a model of the actual city streets; a restaurant menu, which is a model of the food that the restaurant prepares; a job description, which is a model of an employee's roles and responsibilities; and so on. In more technical terms, the model of the system and the regulator must be isomorphic. The theorem does not state how accurate the model needs to be, nor its technical characteristics. Sometimes a simple list of directions can be more helpful than a detailed map that contains too much information. The theorem is sufficiently general to be applicable to all regulating, self-regulating, and homeostatic systems. What is necessary is a sufficient understanding of how the system works to know how to regulate it properly. A critical characteristic to know is how much variety (or variation) exists in the output of the system under regulation.
    The Law of Requisite Variety
    The Law of Requisite Variety (posited by W. Ross Ashby) states that for a system to be stable, the number of states of its regulator mechanism must be greater than or equal to the number of states in the system being controlled. In other words, variety destroys variety, which is what regulation does. This law has significant implications for systems in general, and for management systems in particular. For example, according to the law of requisite variety, a manager needs as many options as there are different disturbances (or variations) in the systems they are managing. In addition, when systems are not able to maintain compliance, it may be due to a lack of sufficient variety in the control systems. This may help explain why existing controls may not be as effective as we would like. There needs to be enough variation in the control actions to adjust the management system and stay within compliance, be it performance, safety, quality, or otherwise.
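    A toy example may help here (again my own sketch, with made-up disturbances): when the regulator has fewer distinct responses than the disturbances it faces, some disturbances necessarily go unregulated.

```python
# Toy illustration of the Law of Requisite Variety: a regulator with fewer
# distinct responses than there are distinct disturbances cannot hold the
# system's output steady.

disturbances = ["heat_wave", "cold_snap", "door_open", "sensor_drift"]

def outcome(disturbance, responses):
    # Disturbances the regulator has no counter-action for pass through.
    return "stable" if disturbance in responses else "unregulated"

under_equipped = {"heat_wave": "cool", "cold_snap": "heat"}   # variety 2
well_equipped = {d: "counter_" + d for d in disturbances}     # variety 4

print([outcome(d, under_equipped) for d in disturbances])
# ['stable', 'stable', 'unregulated', 'unregulated']
print([outcome(d, well_equipped) for d in disturbances])
# ['stable', 'stable', 'stable', 'stable']: variety absorbs variety
```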
    What is an Ethical Regulator?
    Now that we have a sense of what regulation does and what is needed for it to work, we can consider what it means for the regulation function to be ethical. First and foremost, we need to explain what it means to be ethical. By definition, something that is ethical (1) relates to ethics (ethical theories), (2) involves or expresses moral approval or disapproval (ethical judgments), or (3) conforms to accepted standards of conduct (ethical behavior). According to Mick Ashby, a regulator can be considered ethical if it meets nine requisite characteristics (six of which are only necessary for the regulator to be effective). An ethical regulator must have:
    1. Truth about the past and present.
    2. Variety of possible actions (greater than or equal to the number of states of the system under regulation).
    3. Predictability of the future effects of actions.
    4. Purpose expressed as unambiguously prioritized goals.
    5. Ethics expressed as unambiguously prioritized rules.
    6. Intelligence to choose the best actions.
    7. Influence on the system being regulated.
    8. Integrity of all subsystems.
    9. Transparency of ethical behaviour (including retrospectively).
    The challenges in building such a system are many. However, three of these characteristics (Ethics, Integrity, and Transparency) are requisites for a regulator to be ethical rather than merely effective. Interestingly, these are the areas where we have the greatest hurdles to overcome:
    It is not yet possible to build ethical subroutines where goals are unambiguously prioritized.
    Transparency of ethical behaviour is not possible when the rules are not visible or cannot be discovered. This is very much the case with current advances in machine learning and artificial intelligence systems, where we don't even know what the rules are or how they work.
    Systems do not have sufficient integrity to protect against tampering, along with the other ways they can be manipulated to produce undesired outcomes.
    We can conclude that current limitations prohibit building systems that incorporate the necessary characteristics for the regulation function to be ethical, as measured against the Ethical Regulator Theorem. Before we look at how these limitations can be addressed, there is another law that is important to understand for companies to have systems that are ethical.
    The Law of Inevitable Ethical Inadequacy
    This law is simply stated as, "If you don't specify that you require a secure ethical system, what you get is an insecure unethical system." This means that unless the system specifies ethical goals, it will regulate away from being ethical and towards the other goals you have targeted. You can replace the word "ethical" with "safety", "quality", or "environmental", which are more concrete examples of ethics-based programs that govern an organization. If they are not part of a value creation system, according to this law, the system will always optimize away from "quality", "safety", or "environmental" goals. This may help explain the tensions that always exist between production and safety, or production and quality, and so on. When productivity is the only goal, the production system will regulate towards that goal at the expense of all others. Perhaps this provides a form of proof that compliance cannot be a separate objective overlaid on top of production systems and processes. We know that quality must be designed in, and we can conclude that this also applies to all compliance goals.
    Definition of Ethical Compliance
    As previously mentioned, cybernetics describes a governance function that at a basic level includes: the system under regulation, the regulator, and the controller. We also stated that compliance performs the role of regulation to steer a system towards meeting compliance obligations. When these obligations incorporate such things as quality, safety, and professional conduct, we are adding an ethical dimension to the compliance function. Based on the laws of cybernetics along with the limitations previously discussed, we can now define "Ethical Compliance" as:
    Ethical Compliance = Ethical System + Ethical Controller + Effective Regulator
    1. The system under regulation must be ethical (i.e. it must incorporate quality, safety, and other compliance goals). – Law of Inevitable Ethical Inadequacy
    2. The regulator must be a good regulator (i.e. it must be a model of the system under regulation). – Good Regulator Theorem
    3. The regulator must be effective (i.e. it must at least meet the six characteristics of the ethical regulator that make it effective). – Ethical Regulator Theorem
    4. The controller must be human and ethical (as the regulator cannot be). – Ethical Regulator Theorem
    5. The controller must be human and accountable (i.e. transparent, answerable, and with integrity). – Ethical Regulator Theorem, and regulatory statutes and law
    The last one is ultimately what makes compliance ethical and more than just codified values and controls. Taking responsibility and answering for our decisions is imperative for any ethical system. Machines are not accountable, nor do they take responsibility for what they do. However, this is what humans do and must continue to do.
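    One way to read this definition (a hypothetical sketch using names of my own choosing) is as a checklist in which every clause must hold before a compliance system can be called ethical:

```python
# Hypothetical sketch: the definition's clauses expressed as explicit checks.

from dataclasses import dataclass

@dataclass
class ComplianceSystem:
    goals_include_compliance: bool   # ethical system (quality, safety, ...)
    regulator_models_system: bool    # Good Regulator Theorem
    regulator_is_effective: bool     # the six effectiveness requisites
    controller_is_human: bool        # judgment stays with people
    controller_is_accountable: bool  # transparent, answerable, with integrity

def is_ethical_compliance(c: ComplianceSystem) -> bool:
    # All five clauses must hold; any single failure breaks the definition.
    return all([c.goals_include_compliance, c.regulator_models_system,
                c.regulator_is_effective, c.controller_is_human,
                c.controller_is_accountable])

print(is_ethical_compliance(ComplianceSystem(True, True, True, True, False)))
# -> False: without an accountable human controller, compliance isn't ethical.
```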
    References:
    1. Ethical Regulators – http://ashby.de/Ethical%20Regulators.pdf
    2. Good Regulators – http://pespmc1.vub.ac.be/books/Conant_Ashby.pdf
    3. Law of Requisite Variety – http://pespmc1.vub.ac.be/REQVAR.html
    4. Requisite Organization and Requisite Variety, Christopher Lambert – https://vimeo.com/76660223

  • Operationalizing AI Governance: A Lean Compliance Approach

    AI governance policies typically describe what organizations intend to do. Lean Compliance focuses on how those intentions become operational capabilities that keep promises under uncertainty. Mapping an AI governance policy means creating an operational regulation framework that links legal, ethical, engineering, and management commitments across AI use-cases and life-cycle stages. The goal isn't compliance documentation—it's designing the operational capabilities that provide assurance of promise-keeping to regulators, customers, and other stakeholders in real time, a necessity for contending with AI uncertainty.
    From Policy to Capability
    Traditional compliance treats AI governance as a paper exercise. Lean Compliance instead treats it as operational infrastructure with three components:
    Guardrails: controls that prevent harm and contain risk
    Lampposts: monitoring that makes system behavior visible
    Compliance streams: flows of promises from legal/ethical commitments through engineering controls to demonstrated outcomes
    Start by inventorying AI assets and dependencies, classifying systems by impact and risk, then mapping controls to data quality, model validation, deployment architecture, ongoing monitoring, and human decision points.
    Seven Elements of Operational AI Governance
    1. Purpose & Scope: Define the mission, enumerate AI assets, and identify high-risk use-cases that trigger enhanced controls.
    2. Roles & Accountability: Assign decision rights: executive sponsor, AI/Model Compliance lead, Engineering, Data Stewards, Legal. Clear accountability prevents governance failure.
    3. Life-cycle Controls: Design standards, pre-deployment risk assessment, validation protocols, controlled pilots, change management. Each stage produces evidence of promise-keeping.
    4. Operational Controls: Data governance for quality and provenance. Drift detection and performance monitoring. Access controls and third-party assurance. Containment for operational technology and critical systems.
    5. Assurance & Metrics: KPIs for safety, fairness, reliability, and incidents. Minimal Viable Compliance (MVC) measurement—enough to demonstrate compliance effectiveness without waste.
    6. Escalation & Human Oversight: A human judgment layer for ethical decisions, incident response, and regulatory reporting. Accountability resides with people, not algorithms.
    7. Continuous Improvement: Build-measure-learn cycles. AI-assisted operational controls where they add value. Periodic alignment with ISO 42001, NIST AI RMF, and sector frameworks.
    Minimal Viable Program (MVP): A Bayesian Approach
    Don't build the entire program at once. Treat governance as a learning system that updates its understanding of risk and control effectiveness based on operational evidence. What Bayesian learning does with beliefs, the MVP does with governance capability:
    Prior: Start with an initial risk assessment and minimal controls for the highest-risk systems
    Evidence: Deploy controls and measure actual outcomes—incidents, false positives, operational friction
    Update: Revise your understanding of which controls create value vs. waste
    Iterate: Strengthen what works, eliminate what doesn't, and expand to the next-priority systems
    This is the Lean Startup model applied to governance. Your first control framework is a hypothesis. Operational data tells you if you're right. Each cycle, incident, or signal improves your understanding of how to keep promises effectively. The difference from traditional compliance: you're not trying to build perfect governance upfront. You're building a learning system that gets smarter about risk and control effectiveness over time, using evidence from operations to update your governance model.
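    To illustrate the Bayesian framing (a minimal sketch with made-up numbers, not a prescribed tool), belief in a control's effectiveness can be modeled as a Beta distribution updated by operational evidence:

```python
# Minimal sketch of the MVP-as-Bayesian-learning idea: belief in a control's
# effectiveness is a Beta(alpha, beta) distribution updated by operational
# evidence. All numbers are made up for illustration.

def update(alpha: float, beta: float, contained: int, missed: int):
    """Update the Beta prior with observed contained/missed incidents."""
    return alpha + contained, beta + missed

alpha, beta = 1.0, 1.0  # weak prior: no strong belief either way

# Each review cycle: deploy the control, measure outcomes, update belief.
for contained, missed in [(8, 2), (9, 1), (1, 9)]:
    alpha, beta = update(alpha, beta, contained, missed)
    effectiveness = alpha / (alpha + beta)  # posterior mean
    print(f"believed effectiveness: {effectiveness:.2f}")
    # Hypothetical decision rule: strengthen, keep, or redesign the control.
    if effectiveness < 0.6:
        print("-> evidence says this control underperforms; redesign it")
```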
    The test isn't whether your policy document passes an audit. It's whether your organization reliably keeps its AI-related promises under conditions of uncertainty and change, learning and adapting as both the AI systems and the risk landscape evolve. Governance becomes operational capability when it ensures and protects stakeholder value through evidence-based learning, not just regulatory coverage through documentation. Is your AI governance capable of ensuring and protecting Total Value? Find out by getting your Total Value Assessment available here.

  • Compliance as Wisdom

    Compliance as Organizational Wisdom: The Strategic Practice of Restraint
    Organizations that run algorithmic processes without restraint—or blindly follow operating processes that serve purposes misaligned with their mission—act unwisely. They optimize metrics divorced from their core purpose, cut costs that destroy capabilities essential to their mission, and follow recursive loops that lead them away from sustainable value creation. Compliance is the means by which organizations practice restraint in service of wisdom. When market pressures create impulses to cut corners, governance uses compliance mechanisms to maintain the discipline to keep promises. When algorithms identify short-term profit opportunities, or when standard procedures push for quarterly targets, compliance provides the means to ask whether these actions serve the organization's actual mission. This transforms compliance from procedural overhead into the operational means of organizational wisdom. Instead of rule-following, it becomes the systematic means of promise-keeping—providing governance the mechanisms to interrupt processes that serve purposes misaligned with organizational mission. Consider the difference:
    A cost-cutting algorithm that reduces expenses by 15% regardless of impact on core capabilities, versus
    Governance that uses compliance mechanisms to ask: "What are we actually trying to achieve, and what promises are we keeping or breaking?"
    The first serves narrow financial purposes. The second uses compliance as the means to maintain organizational integrity while pursuing the actual mission. In this way, compliance becomes the means by which governance maintains organizational purpose—ensuring that efficiency serves effectiveness, not the other way around.

  • From Chaos to Order: The Creation Process

    The opening of Genesis describes a progression: formlessness to form, potential to purpose, chaos to order. The sequence—formless and void, then light, then separation, then foundation, then rhythm, then inhabitants, then agency, then rest—keeps showing up when building new organizations, new capabilities, new systems from the ground up. Each stage creates conditions for the next. Skip one, and the whole thing stumbles. This isn't prescriptive or scientific. But as a lens for understanding how new things come into being, the pattern proves useful. Starting With What Is "The earth was formless and void, and darkness was over the surface of the deep." The Hebrew is tohu wabohu —formless and void. No structure, and nothing inhabiting the structure. Both conditions matter. Every new venture, every new organizational capability, every genuine innovation begins here. Potential exists. Intent is present—the spirit hovering over waters. But structure hasn't emerged yet, and there's nothing coherent to populate even if it had. This is the natural starting point for creation. Not a problem to solve, but a condition to work from. You have potential energy, raw materials, purpose—but no form yet. The work starts with naming what is, not what we wish were true. Observability Precedes Control "Let there be light." The first act of creation isn't building anything. It's establishing the capacity to observe. Light enables feedback—the fundamental requirement of any control system. In cybernetic terms: you cannot regulate what you cannot sense. Before structure, before process, before any attempt at order, you need the ability to distinguish signal from noise, day from night, progress from mere activity. When creating something new, we often rush to build before we can see clearly. We start with solutions before we understand what we're actually working with. But observability comes first. Creating light means establishing conditions where truth becomes visible. What feedback mechanisms will tell you whether this new thing is working? How will you know if you're making progress? What will reveal the difference between what you imagine and what's actually happening? Many new ventures fail here. They build elaborate structures without the sensing mechanisms needed to know whether those structures serve any purpose. Separation Creates Domains "Let there be an expanse between the waters to separate water from water." Separating water from water—what meaningful distinction does that create? When creating something new without clear boundaries, you cannot distinguish the new thing from its environment. Internal operations blur with external relationships. What you're creating bleeds into everything around it. The expanse creates domains. Not barriers, but appropriate separation that allows different types of work to occur under different conditions. What belongs inside this new thing versus outside it? Where does governance sit relative to operations? What boundaries define the system you're creating? Without these boundaries, the new thing never achieves coherent identity. The boundary isn't about isolation. It's about creating conditions where the new system can develop its own character, separate from everything else. This is about requisite variety in control structures. Different levels of the system need different operating conditions to function effectively. Foundation and Self-Reproduction "Let the dry land appear... let the land produce vegetation bearing seed according to its kind." 
Two things happen on day three: stable foundation emerges, creating conditions for opportunities to grow. The dry land creates those conditions—stable ground where something can take root. You cannot build on water. The foundation isn't bureaucracy or rigidity. It's the stable platform that makes growth possible. Then vegetation appears, bearing seed according to its kind. Self-reproducing capability. Practices that don't require constant intervention to survive. Knowledge that transfers between people. Patterns that perpetuate themselves without heroic individual effort. The dry land creates the conditions. The vegetation represents what grows from those conditions—opportunities realized, capabilities developed, patterns that regenerate themselves. When creating something new, you need both. The stable platform that creates conditions for growth, and the self-regenerating capacity that allows the system to develop and persist. A new organization, a new capability, a new system isn't established until its essential patterns can reproduce without depending on specific individuals or constant oversight. Coordination Through Rhythm "Let there be lights in the expanse to mark seasons and days and years." This isn't about creating a calendar. It's about establishing rhythmic structures that allow distributed activity to coordinate without requiring constant direct communication. Consider how celestial bodies function: they don't command anything. They provide reliable patterns that other systems can synchronize to. Migration, planting, sleeping, waking—all coordinated by rhythm rather than control. New systems need temporal architecture. When does planning occur? When do we review? When do we commit? When do we reflect? These rhythms are coordinating mechanisms that allow the new thing to operate coherently. The fourth day establishes the governance cadences that allow the emerging system to coordinate itself across time and distance. It's not time management. It's the creation of predictable patterns that enable distributed decision-making. Populating Structure With Capability "Let the waters teem with living creatures, and let birds fly across the expanse." Only now—after observation, boundaries, foundation, and rhythm are established—does the text populate the system with specialized actors. Fish in water, birds in air. Each in the domain suited to their nature. We typically try to staff new ventures before we've established what domains exist. Before we know what boundaries matter. Before there's stable ground to work from. Before there are coordinating rhythms to synchronize around. When you populate too early, people don't know where they belong or what they're optimizing for. When you populate after establishing structure, roles emerge more naturally. The domains reveal what capabilities they need and where those capabilities fit. This isn't about org charts or hierarchy. It's about alignment between capability and context—putting specialized excellence in the environment where it can function effectively. The Emergence of Agency "Then God said, 'Let us make mankind in our image, in our likeness, so that they may rule...'" Day six distinguishes between land animals and humans. Both are sophisticated—the animals represent complex operational capability. But humans represent something different: the capacity for responsible agency. What separates execution from stewardship? The ability to exercise judgment. To make promises and adapt means while honouring ends. 
To take responsibility for outcomes, not just follow processes. To understand purpose, not just complete tasks. This is where promise-keeping capability emerges. Where people can say "this is my responsibility" and mean it—not just in their assigned domain, but for the coherence of the whole. All the previous stages create conditions where this becomes possible. You cannot ask people to exercise responsible judgment when they're working on unstable ground, within unclear boundaries, with no ability to observe what's actually happening, and no coordinating rhythms to synchronize their choices with others'. Agency isn't demanded. It emerges when conditions support it. Building Rest Into the Rhythm "By the seventh day God had finished the work he had been doing; so on the seventh day he rested from all his work." The text declares each stage "good" and the whole "very good." Rest comes not from exhaustion, but as part of the pattern itself. The sabbath principle is about building rest into the rhythm of creation. Not as recovery from depletion, but as integral structure. As space for reflection. As pause that allows what's been built to settle and stabilize. When creating something new, we rarely pause. There's always more to build, more to perfect, more to add. But the pattern suggests rest isn't optional—it's part of the architecture. Systems need time to stabilize. New patterns need space to settle. People need breathing room to see what they've built. Systems that never rest eventually break. Not from the work itself, but from the inability to consolidate learning, to reflect on what's been accomplished, to let new patterns take hold. Sustainability requires rhythm that includes rest. Not as weakness, but as structure itself. The Pattern This isn't a methodology. You cannot follow seven steps and create whatever you're trying to build. What this offers is a pattern for noticing—a way of observing what might be missing, or what you might be attempting before conditions are ready to support it. The sequence matters. Not rigidly—creation isn't a linear process—but directionally. You build observability, then boundaries, then foundation, then rhythm, then populate with capability, then enable agency, then build in rest and reflection. You might cycle through these patterns multiple times, at different scales, in different aspects of what you're creating. The pattern recurs because it describes something fundamental about how complex systems come into being. After the Seventh Day The Genesis narrative doesn't end with creation. It continues with stewardship, with relationship, with the ongoing work of maintaining and developing what's been brought into being. Creation establishes structure. What follows is the responsibility of those who inhabit it—the promise-keeping work of honouring what's been built while adapting to what emerges. The pattern suggests something important: bringing order from chaos isn't the end of the work. It's the foundation for what comes next. Once you've created the conditions for life, for growth, for agency—the real work begins. The work of stewardship. Of maintenance. Of continuous adaptation within stable structure. Ancient wisdom doesn't provide formulas. It offers patterns that generations have found useful for making sense of recurring challenges. Whether this particular pattern proves useful in your work with creating new things—that's for you to discover. 
The creation process described in Genesis might simply be reminding us: there are natural progressions in how complex things come into being. You work with those progressions, not against them. You create conditions in sequence. You respect the time things need to stabilize. You build rest into rhythm. You enable agency through structure, not despite it. And then, after the seventh day, the real work of inhabiting what you've created begins. What patterns have you noticed in how new things come into being?

  • Cultivating Opportunities

    As we wind down for the year, I find myself looking ahead and wondering what's in store. As leaders, we know there are many forces at work—often too many to deal with, and many outside our control. But here's what I've been thinking: What we experience is also the result of the opportunities we cultivate in the current year. This insight came to me recently from working with someone I consider wise—a man now retired from a distinguished career as a physician and researcher, well known in his field. I call him the  Great Gardener. The Cultivation Principle In a project I'm working on with him, he's demonstrated time and again the value of cultivating opportunities. He's shown me how important it is to cultivate opportunities much the same way we cultivate a garden—which, by the way, is one of his greatest passions. His approach is simple but profound: whenever you see an interest, desire, a spark, or a possibility from someone who can contribute to your endeavour, you need to cultivate it. Even from people you might consider your "enemy" or "competitor." We may not have control over what will bear fruit and what doesn't, but we do have control over preparing the soil to provide the greatest chance for something good to happen. We also have control over the seeds we plant. The question for us is: Will we plant seeds of purpose, unity, and partnership? Or will we scatter seeds of chaos, discord, and resistance? Cultivating at Work In compliance, we also see this principle at work. The organizations that thrive aren't just those with the best control frameworks—they're the ones that have cultivated trust with regulators, built genuine partnerships with business units, and developed the conditions for mission and compliance success. They spend time cultivating the soil.   When they need to find a way forward through complex challenges, these cultivated relationships and developed capabilities— not external forces —are what they lean on to move ahead. Getting Ready for Spring Even though winter is almost here and many aren't thinking of gardening, this is precisely the time for us to consider what opportunities to cultivate in the year ahead. What vision needs casting? What sparks in your organization need fanning? What relationships need nurturing to create the probability for opportunities to grow? In our field, we're experts at spotting threats and building defences. We excel at risk assessments, gap analyses, and control design. These capabilities are essential. But what if our greatest competitive advantage lies not just in the problems we prevent, but in the possibilities we cultivate today? We may not be able to control everything that happens to us, but we can choose where we invest our time, resources, and energy.  This year, let's commit to balancing our portfolio: continue the essential work of managing risks, but also dedicate intentional effort to planting and cultivating opportunities. Let's see what good things will grow.

  • Deploy First, Engineer Later: The AI Risk We Can’t Afford

    The sequence matters: proper engineering design must occur before deployment, not afterwards.
    by Raimund Laqua, PMP, P.Eng
    As a professional engineer with over three decades of experience in highly regulated industries, I firmly believe we can and should embrace AI technology. However, the current approach to deployment poses a risk we simply cannot afford. Across industries, I'm observing a troubling pattern: organizations are bypassing the engineering design phase and jumping directly from AI research and prototyping to production deployment. This "Deploy First, Engineer Later" approach (or, as some call it, "Fail First, Fail Fast") treats AI systems like software products rather than engineered systems that require professional design discipline. Engineering design goes beyond validation and testing after deployment; it's a disciplined practice of designing systems for safety, reliability, and trust from the outset. When we want these qualities in AI systems and the internal controls that use them, we must engineer them in from the beginning, not retrofit them later.
    Here's the typical sequence organizations follow:
    1. Research and prototype development
    2. Direct deployment to production systems
    3. Hope to retrofit safety, security, quality, and reliability later
    What should happen instead:
    1. Research and controlled experimentation
    2. Engineering design for safety, reliability, and trust requirements
    3. Deployment of properly engineered systems
    AI research and controlled experimentation have their place in laboratories, where trained professionals can systematically study impacts and develop knowledge for practice. However, we're witnessing live experimentation in critical business and infrastructure systems, where both businesses and the public bear the consequences when systems fail due to inadequate engineering. When companies deploy AI without proper engineering design, they're building systems that don't account for the most important qualities: safety, security, quality, reliability, and trust. These aren't features that can be added later; they must be built into the system architecture from the start. Consider the systems we rely on: medical devices, healthcare, power generation and distribution, financial systems, transportation networks, and many others. These systems require engineering design that considers failure modes, safety margins, reliability requirements, and trustworthiness criteria before deployment. Yet AI is being integrated into these systems without this essential engineering work. This creates what I call an "operational compliance gap." Organizations have governance policies and risk management statements, but these don't translate into the engineering design work needed to build or procure inherently safe and reliable systems. Without proper engineering design, governance policies become meaningless abstractions: they give the appearance of protection without the operational capabilities to ensure that what matters most is protected. The risk goes beyond individual organizations. We currently lack enough licensed professional engineers with AI expertise to provide the engineering design discipline that critical systems need. Without professional accountability structures, software developers are making engineering design decisions about safety and mission-critical systems without the professional obligations that engineering practice demands. Professional engineering licensing ensures accountability for proper design practice.
Engineers become professionally obligated to design systems that meet safety, reliability, and trust requirements. This creates the discipline needed to counteract the “deploy first, engineer later” approach that’s currently dominating AI adoption. The consequences of deploying unengineered AI systems aren’t abstract future concerns; they’re immediate risks to operational integrity, business continuity, and public safety. These risks are simply too great for businesses and society to ignore, especially as they try to retrofit engineering discipline into systems never intended for safety or reliability. Engineering design can’t be an afterthought. The sequence matters: proper engineering design must occur before deployment, not afterwards. Deploying systems first and then engineering them is a risk we simply can’t afford.

  • AI Regulating AI: Are we pouring fuel on the fire?

    Raimund Laqua, P.Eng., PMP
    Note: a link to my strategy briefing document is located at the end of this post.
    About a year ago, I heard an AI expert suggest that we might need AI to control AI. My immediate reaction? That's nonsense. Why would you control something uncertain with more uncertainty? It seemed like doubling down on the problem rather than solving it. Turns out I was wrong. Or at least, I was asking the wrong question.
    The Problem That Won't Go Away
    I'm an engineer. I think about systems. And when you look at AI systems through that lens, you run into a problem that won't go away no matter how you approach it: AI systems can generate millions of outputs with infinite variety across contexts that change faster than any human can track, let alone review. This isn't something you fix by hiring more compliance people. The variety of states an AI system can occupy—all the possible outputs it could generate across all possible inputs—grows combinatorially. A compliance officer reviewing dozens of interactions per day simply cannot match an AI system generating millions of interactions per day. We're trying to regulate infinite variety with finite methods. The math doesn't work.
    What I Missed About That AI Expert
    That expert was actually right, though he probably didn't explain it in these terms. W. Ross Ashby figured this out decades ago with his Law of Requisite Variety: if you want to control a system, your regulator needs variety equal to or greater than the variety of what you're trying to control. If your AI system has variety X, your regulatory system needs variety ≥ X. Humans don't have that variety. We're finite. AI regulators can potentially match it. But—and this is important—my initial skepticism wasn't completely off base. We absolutely should not hand over value judgments and ethical decisions to AI systems. The real question isn't "should AI control AI instead of humans?" It's "where do humans exercise judgment in a control system that needs to operate at AI speeds?"
    The Answer Is Both Yes and No
    This is what the briefing document I've written gets into. Do we need AI to regulate AI? Yes and no, depending on what you mean by "regulate." Cybernetic theory breaks regulation into three orders:
    First-order regulation is the operational layer—watching outputs, catching violations, stopping bad things in real time. This is where AI has to regulate AI, because humans lack the requisite variety. We just can't keep up.
    Second-order regulation is watching the watchers—making sure those first-order controls are actually working and adjusting them when things change. Both AI and humans work here, with humans providing oversight.
    Third-order regulation is the values and ethics layer—deciding what we want, what tradeoffs we'll accept, what "good" even means. This is where human judgment isn't optional. These are value judgments that only humans can legitimately make.
    So yes, we need AI to regulate AI where speed and scale matter. And no, we don't give up human authority—we put it where it belongs: at the values level, not in trying to manually review every output or insert deterministic validators into the AI stream.
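    One way to sketch this three-order architecture in code (my own illustration, not the briefing's implementation; all thresholds, names, and checks are hypothetical):

```python
# Illustrative sketch of three-order regulation.

# Third order: humans set values and tradeoffs; code only records them.
HUMAN_POLICY = {"max_harm_score": 0.2, "blocked_topics": {"medical_advice"}}

def first_order(output: dict) -> bool:
    """Operational regulation: fast automated check on every AI output."""
    return (output["harm_score"] <= HUMAN_POLICY["max_harm_score"]
            and output["topic"] not in HUMAN_POLICY["blocked_topics"])

def second_order(passed: list) -> list:
    """Watching the watchers: is the first-order control still working?"""
    alerts = []
    block_rate = 1 - sum(passed) / len(passed)
    if block_rate > 0.5:
        alerts.append("first-order control blocking too much: recalibrate")
    if block_rate == 0.0:
        alerts.append("control never fires: possible drift or evasion")
    return alerts  # escalate alerts to human oversight, not to another AI

outputs = [{"harm_score": 0.1, "topic": "billing"},
           {"harm_score": 0.4, "topic": "billing"}]
passed = [first_order(o) for o in outputs]  # AI-speed regulation
print(passed, second_order(passed))         # oversight signals for humans
```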
    Why This Actually Matters
    This isn't theoretical. Organizations deploying AI systems have a duty of care to protect people from harm. When your control systems can't match the variety of what you're controlling, you can't fulfill that duty. There's a gap between your accountability and your capability. Right now, most organizations are doing manual oversight—reviewing samples, running periodic audits, fixing things after problems happen. Meanwhile, thousands of interactions are happening that nobody sees. Problems spread before anyone notices. We're creating documentation of our inability to regulate, not actual regulation. The briefing lays out why AI regulating AI isn't a nice-to-have—it's the only way to get the variety you need to actually exercise your duty of care. But it also explains why human governance over values can't be negotiated away. Technical systems can implement controls. They can't decide what values those controls should serve.
    What I've Learned
    I'm still skeptical when people claim AI will solve everything. But I'm no longer skeptical about needing AI to regulate AI. That turns out to be grounded in cybernetic theory that's older than modern AI. What matters is how we architect these control systems: AI providing the variety at operational speeds, humans maintaining authority over values and ethics, both doing what they're actually capable of doing. If you're trying to figure out how to govern AI systems responsibly—how to meet your duty of care when AI operates faster and at greater scale than human oversight can match—my strategy briefing document explains the cybernetic principles and practical approaches you can use. The Law of Requisite Variety isn't a suggestion. It's a constraint. We can acknowledge it and design accordingly, or we can keep pretending that manual oversight will somehow catch up. It won't.
    Download my strategy briefing document here:
    About the Author: Raimund Laqua, P.Eng., PMP, has over 30 years of experience in highly regulated industries including oil & gas, medical devices, pharmaceuticals, and others. He serves on OSPE's AI in Engineering committee and is the AI Committee Chair for E4P. He is also co-founder of ProfessionalEngineers.AI.

  • Governing Large Language Models - A Cybernetic Approach to AI Compliance

    I've been thinking a lot about promises lately. Not the kind we make at year-end meetings, but the deeper promises organizations make when they deploy AI systems. Promises about safety, fairness, and accountability. Promises that become very real when something goes wrong. The challenge with Large Language Models is that traditional compliance approaches assume you can audit the decision-making process. You write procedures, train people, create controls around logical steps you can inspect and verify. But LLMs don't work that way. The "thinking" happens in a mathematical space we can't directly examine. You can't audit billions of neural weights the way you'd review a checklist. This has led me back to some foundational work in cybernetics—ideas that help us think about governing systems we can't fully understand or predict. A Cybernetic Approach to AI Compliance Two insights have been particularly valuable: First, trying to control a complex, adaptive system with rigid rules is like trying to hold water in your hands. The system will always find ways around static controls. Your governance needs to learn and adapt, or it becomes irrelevant quickly. Second, there are different kinds of regulation happening at different levels. Some decisions can be automated effectively—checking inputs, classifying outputs, monitoring for drift. But the deeper questions about what outcomes we should permit, what risks we're willing to accept—those require human judgment. Not because the technology isn't advanced enough, but because those are fundamentally human choices about values and priorities. Current regulatory frameworks seem to understand this intuitively, even if they don't say so explicitly. They assume technical controls operating under human oversight—automated compliance within human-defined boundaries. This changes how I think about AI governance. Instead of trying to make the black box transparent, we focus on governing what we can actually control: what goes in, which models we choose, what comes out. We build learning systems around the opacity rather than trying to eliminate it. For those of us working in regulated environments, this offers a more realistic path forward than waiting for "explainable AI" to solve our governance problems. I've been working through these ideas in more detail—how cybernetic principles apply to AI governance, what this means for compliance frameworks, and how to implement these approaches in practice. You can read more in my latest briefing note which you can download here:
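    As an illustration of that "govern the boundary, not the box" idea, here is a minimal sketch (hypothetical names, rules, and thresholds throughout): wrap an opaque model with input screening, model-choice control, output classification, and drift monitoring, routing value judgments to people.

```python
# Hypothetical sketch of governing what we can control around an opaque LLM:
# what goes in, which model is used, and what comes out.

from collections import deque

APPROVED_MODELS = {"model-a-v3"}   # model choice as a governed decision
flags = deque(maxlen=100)          # rolling window for drift monitoring

def contains_restricted_data(prompt: str) -> bool:
    return "ssn:" in prompt.lower()   # stand-in for a real data scanner

def classify_output(answer: str) -> str:
    # Stand-in classifier; a real one would apply human-defined rules.
    return "review" if "guaranteed" in answer.lower() else "allow"

def governed_call(model: str, prompt: str, llm) -> str:
    if model not in APPROVED_MODELS:                 # input control
        raise ValueError("model not on approved list")
    if contains_restricted_data(prompt):             # input control
        return "[blocked: restricted data in prompt]"
    answer = llm(prompt)                             # the opaque step
    verdict = classify_output(answer)                # output control
    flags.append(verdict != "allow")
    if sum(flags) > 20:                              # drift signal to humans
        print("escalate: output classifier firing unusually often")
    return answer if verdict == "allow" else "[withheld for human review]"

# Usage with a toy stand-in for the model itself:
print(governed_call("model-a-v3", "summarize our refund policy",
                    llm=lambda p: "Refunds are processed in 5 days."))
```

    The boundaries here are the human-defined parts; the model in the middle stays a black box, which is exactly the point.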

  • PRESENTATION SUMMARY: Elevating Compliance by Applying Lean Principles

    Presenter: Raimund Laqua, P.Eng., PMP
    Date: November 20, 2025
    For Compliance Officers and Managers
    When compliance becomes operational—which is necessary to meet performance and outcome obligations—you need a method of improvement that focuses on operational systems. This is where LEAN comes in. However, LEAN has to adapt its principles to work with compliance. This presentation explores 10 lean principles and how they are used to improve compliance performance. If you're looking to reduce your compliance costs, don't stop there. Improve value creation with better compliance as well. This is what LEAN COMPLIANCE is all about.
    Why Operational Compliance Requires Different Improvement Methods
    Most compliance teams are stuck managing compliance as separate programs rather than operational systems. But when your organization has performance and outcome obligations—not just rule-following requirements—compliance must become operational. It must deliver results, not just demonstrate activities. Once compliance is operational, you need improvement methods designed for operational systems. Traditional compliance improvement focuses on better documentation, more training, or tighter controls. But operational systems require systematic improvement methods that address flow, waste, variation, and capability—exactly what lean principles provide. The challenge? Standard lean principles assume compliance is waste to be minimized. For operational compliance, lean principles must be adapted to recognize compliance as a value-creating capability that needs optimization, not elimination.
    How Lean Principles Adapt for Compliance Performance
    1. Value and Waste
    Traditional Lean: Value is what customers pay for; compliance is non-value-added waste to minimize.
    Lean Compliance: Value includes stakeholder trust, risk reduction, and operational license; compliance waste (over-regulation, excessive auditing, firefighting) stems from uncertainty in compliance systems.
    2. Flow (Push/Pull)
    Traditional Lean: Smooth movement of materials and work through production processes using pull signals.
    Lean Compliance: Pull promises rather than push obligations—organizational levels pull the promises they need from above rather than having compliance requirements pushed down to them.
    3. Value Streams
    Traditional Lean: Map material and information flow from customer order to delivery, eliminating non-value steps.
    Lean Compliance: Map "compliance streams"—the end-to-end flow of how obligations transform into operational capabilities and delivered outcomes—treating compliance as its own value-creating process.
    4. One-Piece/One-Touch Flow
    Traditional Lean: Process work items individually through each step without batching or queuing.
    Lean Compliance: Handle compliance requirements individually through assessment, design, implementation, and verification without batching (e.g., 5 days of monthly monitoring vs. 20-day annual audits).
    5. Poka Yoke (Mistake-Proofing)
    Traditional Lean: Design processes to prevent manufacturing defects or catch them immediately.
    Lean Compliance: Use behavioral design and environmental cues to make correct compliance actions easier than incorrect ones, replacing training-and-enforcement with system design.
    6. Jidoka (Automation with a Human Touch)
    Traditional Lean: Machines stop automatically when defects are detected; workers solve problems.
    Lean Compliance: Build compliance monitoring into operational processes to signal when they're going off-track, enabling real-time correction rather than periodic audit discovery.
    7. Visual Management
    Traditional Lean: Make production status, problems, and standards immediately visible to everyone.
    Lean Compliance: Real-time dashboards showing rule adherence, system performance, and outcome delivery—compliance status as transparent as production metrics.
    8. Hoshin Kanri (Policy Deployment)
    Traditional Lean: Align strategic objectives with operational execution through cascaded goal deployment.
    Lean Compliance: Connect compliance strategy to business strategy through "catch-ball" dialogue, ensuring compliance priorities serve business objectives.
    9. Pursuit of Perfection
    Traditional Lean: Continuous elimination of waste and improvement of customer value delivery.
    Lean Compliance: Continuously improve organizational capability to deliver compliance outcomes and keep increasingly sophisticated stakeholder promises.
    10. Respect for People
    Traditional Lean: Engage worker knowledge for production process improvement and problem-solving.
    Lean Compliance: Leverage frontline operational knowledge to design better compliance systems rather than imposing top-down compliance controls.
    What This Means for Your Compliance Performance
    Reduced Compliance Costs: Eliminate waste in your compliance processes—over-documentation, redundant activities, firefighting, and rework. Focus resources on activities that actually improve compliance outcomes.
    Improved Value Creation: Better compliance creates stakeholder value through enhanced trust, reduced risk, and operational excellence. This value becomes a competitive advantage, not just a cost of doing business.
    Enhanced Operational Integration: Compliance becomes part of operational excellence rather than a separate overhead function. Your compliance capabilities enable business performance instead of constraining it.
    Systematic Improvement: Apply proven improvement methods to your compliance systems. Move from ad-hoc fixes to systematic enhancement of compliance capability.
    Implementation for Compliance Professionals
    First, operationalize your compliance—get all programs working together as integrated systems focused on outcomes, not just activities. Then adapt lean principles specifically for your compliance context, recognizing that compliance creates value that needs optimization. Finally, apply these adapted principles systematically to improve both compliance performance and value creation. This approach is proven across highly regulated industries including oil & gas, financial services, healthcare, and government sectors. The EPA has applied lean principles to environmental regulation for decades, demonstrating that operational compliance improvement works.
    The Bottom Line for Obligation Owners
    You are accountable for meeting obligations—regulatory requirements, voluntary commitments, stakeholder expectations. You have two choices: continue managing these obligations as overhead to be minimized, or develop them as operational capabilities to be optimized. When obligations have performance and outcome requirements—as most now do—compliance becomes operational. And when compliance is operational, you need improvement methods designed for operational systems. Lean Compliance provides those methods. It's not about doing compliance faster or cheaper (though both happen). It's about building compliance capability that creates value while meeting your obligations reliably. If you're already looking to reduce compliance costs, don't stop there. Use these adapted lean principles to improve value creation with better compliance as well.
That's how you transform accountability from burden into competitive advantage. Your obligations aren't going away—they're getting more complex. The question is whether you'll develop the capability to meet them systematically, or continue managing them reactively. Lean Compliance gives you the systematic approach.

  • Integrative Compliance: Embedding Regulatory Obligations in Operational Capability

    If you're a compliance director or manager, you've probably noticed something frustrating: organizations can have excellent compliance documentation, pass audits, and still get surprised by violations. The gap isn't in what they document—it's in how regulatory obligations are embedded in operational capability. This is where integrative compliance transforms everything. While traditional compliance creates separate activities that run parallel to operations, integrative compliance embeds regulatory obligations directly into operational capability itself. When you achieve integrative compliance, regulatory fulfillment becomes inseparable from value creation.
    What Is Integrative Compliance?
    Integrative compliance embeds regulatory obligations directly into operational capability rather than creating separate compliance activities. It's the difference between having environmental procedures that get referenced during audits versus having environmental obligations embedded in every production decision and automated control system. Compliance streams in integrative compliance represent the flow of promises (commitments) through your organization—and these promises can be fulfilled by humans, machines, or combinations of both. A promise to "encrypt all personal data" might be fulfilled by automated systems. A promise to "conduct safety inspections" might be fulfilled by human operators. A promise to "maintain equipment reliability" might be fulfilled by predictive maintenance algorithms combined with human technicians. The Lean Compliance Operational Model provides the framework for building integrative compliance through four essential dimensions that map to organizational levels:
    Governance Level → Compliance Outcomes: What compliance results must we achieve?
    Program Level → Compliance Targets: What performance measures demonstrate progress?
    System Level → Compliance Practices: What standardized methods ensure capability?
    Process Level → Compliance Rules: What specific actions must be taken?
    From Parallel Activities to Embedded Capability
    Here's the key point: integrative compliance only works when obligations are embedded in operational capability across all four dimensions. You can't just add compliance activities alongside operations—you need to embed obligations into organizational capability itself. Consider environmental compliance for a manufacturing facility. Traditional compliance creates separate environmental activities: quarterly emissions monitoring, annual environmental training, periodic waste audits. These run parallel to production operations. Integrative compliance embeds environmental obligations directly into organizational capability through both human and machine promises:
    Process rules: Automated systems continuously monitor emissions and classify waste in real time, while operators follow specific handling procedures
    System practices: Production scheduling systems incorporate environmental constraints using ISO 14001 practices, with human oversight and decision-making
    Program targets: Monthly production targets include environmental performance metrics tracked by both automated monitoring and human verification
    Governance outcomes: Business performance includes sustained environmental permit compliance demonstrated through machine-generated evidence and human attestation
    Now environmental compliance happens naturally as production happens. Regulatory obligations and operational capability are embedded together.
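    To make the four-level mapping concrete, here is a small sketch (names and checks are illustrative only, not the Lean Compliance tooling) that treats a compliance stream as coherent when each level is supported by the level below it:

```python
# Illustrative sketch of a compliance stream across the four levels.
# Outcomes need targets, targets need practices, practices need rules.

stream = {
    "outcome":  "maintain all environmental permits without violations",
    "targets":  ["monthly emissions below permit limit"],
    "practices": {"monthly emissions below permit limit":
                  ["ISO 14001 monitoring practice"]},
    "rules":    {"ISO 14001 monitoring practice":
                 ["log stack emissions every shift",
                  "classify waste at point of generation"]},
}

def stream_is_supported(s: dict) -> bool:
    """Check that every level is enabled by the level below it."""
    if not s["targets"]:
        return False                      # an outcome with no targets
    for target in s["targets"]:
        practices = s["practices"].get(target, [])
        if not practices:
            return False                  # a target with no practices
        if any(not s["rules"].get(p) for p in practices):
            return False                  # a practice with no rules
    return True

print(stream_is_supported(stream))  # True: the stream is unbroken
```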
Now environmental compliance happens naturally as production happens. Regulatory obligations and operational capability are embedded together.

The Power of Integrative Streams

When operational compliance powers your compliance streams, you don't just get integrated activities—you create integrative streams where value creation and compliance delivery become inseparable. This is the double helix of organizational DNA in action.

[Image: Integrative Streams]

• Synergistic Performance: With integrative streams, improving one stream automatically strengthens the other. Enhanced production processes simultaneously improve compliance outcomes like safety and quality. Better operational methods create both more efficient operations and stronger compliance capability. Investment in one stream pays dividends in both.
• Emergent Capabilities: Integrative streams create capabilities that neither stream could achieve alone. A manufacturing process with embedded compliance monitoring doesn't just meet regulatory requirements—it creates real-time visibility that enables faster optimization, predictive maintenance, and proactive risk management.
• Adaptive Resilience: When compliance and value streams are truly integrative, they adapt together to changing conditions. New regulations don't break operations—they become opportunities to strengthen both compliance and competitive advantage simultaneously.
• Real-Time Visibility: Instead of discovering compliance problems weeks later during reviews, you know immediately when something's not working. If waste classification isn't happening during production, the production system alerts you in real time.
• Predictable Performance: Because compliance is embedded in operations, compliance performance becomes as predictable as operational performance. If your production process is reliable, your compliance delivery is reliable.
• Reduced Waste: You eliminate duplicate activities and conflicting priorities. Instead of production schedules that ignore environmental constraints (requiring later rework), you create schedules that optimize both production and environmental performance.
• Capability Building: Each operational improvement also improves compliance capability. When you enhance production quality, you simultaneously strengthen quality compliance. When you improve safety processes, you build safety compliance capability.

Building Integrative Compliance

The Lean Compliance Operational Model shows how to embed regulatory obligations in operational capability:

1. Start with Outcomes (Governance Level). What regulatory results must your organization achieve? Not just "be environmentally compliant," but specific outcomes like "maintain all environmental permits without violations" or "achieve zero unauthorized emissions."

2. Define Targets (Program Level). What performance measures will demonstrate you're achieving those outcomes? Monthly emission levels, waste diversion rates, incident-free days, permit renewal success.

3. Design Practices (System Level). What systematic methods will deliver those targets? This is where standards like ISO 14001 provide proven approaches to environmental management that can be integrated into operations.

4. Embed Rules (Process Level). What specific actions must happen during each operational task? Real-time monitoring, immediate classification, proper handling procedures, documentation requirements.

5. Create the Compliance Stream. Each level must enable the one above it: rules enable practices, practices enable targets, targets deliver outcomes. And each level must be supported by the one below it: outcomes require targets, targets require practices, practices require rules (a sketch of this chain follows below).
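One way to see the compliance stream is as an explicit chain from governance down to process. The following Python sketch, with hypothetical names and example content (it is not an official Lean Compliance schema), represents the four levels and checks that an outcome is fully traceable down to rules:

```python
# Minimal sketch: the four levels of the operational model as a traceable
# chain. Names and example content are hypothetical illustrations of the
# structure, not an official schema.

from dataclasses import dataclass, field


@dataclass
class Rule:            # Process level: specific actions
    action: str


@dataclass
class Practice:        # System level: standardized methods
    method: str
    rules: list[Rule] = field(default_factory=list)


@dataclass
class Target:          # Program level: performance measures
    measure: str
    practices: list[Practice] = field(default_factory=list)


@dataclass
class Outcome:         # Governance level: results that must be achieved
    result: str
    targets: list[Target] = field(default_factory=list)


def is_traceable(outcome: Outcome) -> bool:
    """Checks the stream is complete: outcomes require targets, targets
    require practices, and practices require rules."""
    return bool(outcome.targets) and all(
        t.practices and all(p.rules for p in t.practices)
        for t in outcome.targets
    )


permit_compliance = Outcome(
    result="Maintain all environmental permits without violations",
    targets=[Target(
        measure="Monthly emissions below permit limits",
        practices=[Practice(
            method="ISO 14001 operational control",
            rules=[Rule(action="Monitor stack emissions in real time")],
        )],
    )],
)

print(is_traceable(permit_compliance))  # True: outcome chains down to rules
```

If is_traceable returns False, the stream is broken somewhere: an outcome without targets, a target without practices, or a practice without rules, which is exactly the gap the five steps above are meant to close.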
The Integrative Compliance Test

Here's how you know if you have integrative compliance rather than just parallel compliance activities:

• Can compliance promises be demonstrated through normal operations? Whether fulfilled by humans, machines, or both—can workers show how compliance is embedded in their work processes, and can systems demonstrate automated compliance delivery?
• Does improving operations also improve compliance? When you enhance production efficiency or operational delivery, do compliance outcomes like safety and quality performance improve simultaneously?
• Can you predict compliance failures before they happen? If operational performance degrades—whether human or machine—can you predict where compliance failures will occur?
• Is compliance visible in real-time? Can you demonstrate current compliance status through both automated monitoring and human verification without waiting for the next audit or review?

If you answered "no" to any of these, you have parallel compliance activities but not integrative compliance.
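Because the test is a strict all-or-nothing check, it can even be written down as a tiny self-assessment. The sketch below is purely illustrative: the four questions come from this article, while the scoring and the wording of the results are assumptions.

```python
# Minimal sketch: the four test questions as an explicit self-assessment.
# The questions come from the article; the scoring is illustrative only.

TEST_QUESTIONS = [
    "Can compliance promises be demonstrated through normal operations?",
    "Does improving operations also improve compliance?",
    "Can you predict compliance failures before they happen?",
    "Is compliance visible in real-time?",
]


def assess(answers: list[bool]) -> str:
    """A single 'no' indicates parallel activities, not integrative compliance."""
    if len(answers) != len(TEST_QUESTIONS):
        raise ValueError("Answer every question")
    failed = [q for q, a in zip(TEST_QUESTIONS, answers) if not a]
    if not failed:
        return "Integrative compliance: obligations are embedded in operations."
    return ("Parallel compliance activities detected. Gaps:\n- "
            + "\n- ".join(failed))


# Example: real-time visibility is missing (hypothetical answers).
print(assess([True, True, True, False]))
```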

The Bottom Line

The future of compliance isn't better documentation or more audits—it's integrative compliance that embeds mandatory and voluntary obligations directly in operational capability. When compliance obligations and operational capability are inseparable, you achieve the double helix of organizational DNA.

Organizations that master integrative compliance don't choose between efficiency and compliance outcomes, between innovation and regulation, between speed and safety compliance. They achieve all of these because regulatory obligations are embedded in the operational capability that makes value creation and compliance delivery mutually reinforcing.

Ready to move from parallel compliance activities to integrative compliance? Start with one critical obligation and embed it in operational capability across all four dimensions. The transformation will demonstrate why integrative compliance is the foundation for sustainable regulatory performance and business success.

Ray Laqua, P.Eng., PMP | Lean Compliance Consulting
Transforming regulatory obligations into operational capability