- Operationalizing AI Governance: A Lean Compliance Approach
AI governance policies typically describe what organizations intend to do. Lean Compliance focuses on how those intentions become operational capabilities that keep promises under uncertainty. Mapping an AI governance policy means creating an operational regulatory framework that links legal, ethical, engineering, and management commitments across AI use-cases and life-cycle stages. The goal isn't compliance documentation—it's designing the operational capabilities that provide assurance of promise-keeping to regulators, customers, and other stakeholders in real time – a necessity for contending with AI uncertainty.

From Policy to Capability

Traditional compliance treats AI governance as a paper exercise. Lean Compliance instead treats it as operational infrastructure with three components:

- Guardrails: Controls that prevent harm and contain risk
- Lampposts: Monitoring that makes system behavior visible
- Compliance streams: Flows of promises from legal/ethical commitments through engineering controls to demonstrated outcomes

Start by inventorying AI assets and dependencies, classifying systems by impact and risk, then mapping controls to data quality, model validation, deployment architecture, ongoing monitoring, and human decision points.

Seven Elements of Operational AI Governance

1. Purpose & Scope
Define mission, enumerate AI assets, identify high-risk use-cases that trigger enhanced controls.

2. Roles & Accountability
Assign decision rights: executive sponsor, AI/Model Compliance lead, Engineering, Data Stewards, Legal. Clear accountability prevents governance failure.

3. Life-cycle Controls
Design standards, pre-deployment risk assessment, validation protocols, controlled pilots, change management. Each stage produces evidence of promise-keeping.

4. Operational Controls
Data governance for quality and provenance. Drift detection and performance monitoring. Access controls and third-party assurance.
Containment for operational technology and critical systems.

5. Assurance & Metrics
KPIs for safety, fairness, reliability, and incidents. Minimal Viable Compliance (MVC) measurement—enough to demonstrate compliance effectiveness without waste.

6. Escalation & Human Oversight
A human judgment layer for ethical decisions, incident response, and regulatory reporting. Accountability resides with people, not algorithms.

7. Continuous Improvement
Build-measure-learn cycles. AI-assisted operational controls where they add value. Periodic alignment with ISO 42001, NIST AI RMF, and sector frameworks.

Minimal Viable Program (MVP): A Bayesian Approach

Don't build the entire program at once. Treat governance as a learning system that updates its understanding of risk and control effectiveness based on operational evidence. What Bayesian learning does with beliefs, an MVP does with governance capability:

- Prior: Start with an initial risk assessment and minimal controls for the highest-risk systems
- Evidence: Deploy controls and measure actual outcomes—incidents, false positives, operational friction
- Update: Revise your understanding of which controls create value vs. waste
- Iterate: Strengthen what works, eliminate what doesn't, expand to the next-priority systems

This is the Lean Startup model applied to governance. Your first control framework is a hypothesis. Operational data tells you if you're right. Each cycle, incident, or signal improves your understanding of how to keep promises effectively.

The difference from traditional compliance: you're not trying to build perfect governance upfront. You're building a learning system that gets smarter about risk and control effectiveness over time, using evidence from operations to update your governance model.

The test isn't whether your policy document passes audit. It's whether your organization reliably keeps its AI-related promises under conditions of uncertainty and change, learning and adapting as both AI systems and the risk landscape evolve.
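The prior/evidence/update/iterate loop can be made concrete with a toy Beta-Binomial model of belief in a single control's effectiveness. This is a sketch only; the numbers and function names are hypothetical illustrations, not part of any prescribed Lean Compliance method.

```python
# Sketch: Bayesian updating of belief in a control's effectiveness,
# using a Beta-Binomial model. All numbers and names are hypothetical.

def update_belief(alpha, beta, kept, missed):
    """Update a Beta(alpha, beta) prior with observed outcomes.

    'kept' counts periods where the control kept its promise
    (no incident slipped through); 'missed' counts failures.
    """
    return alpha + kept, beta + missed

def expected_effectiveness(alpha, beta):
    """Mean of the Beta posterior: the current best estimate."""
    return alpha / (alpha + beta)

# Prior: a weakly optimistic belief before deployment.
alpha, beta = 2.0, 1.0

# Evidence: quarterly operational results (hypothetical data).
quarterly_results = [(10, 2), (12, 1), (15, 0)]  # (kept, missed)

for kept, missed in quarterly_results:
    alpha, beta = update_belief(alpha, beta, kept, missed)
    print(f"effectiveness estimate: {expected_effectiveness(alpha, beta):.2f}")
```

Each quarter of operational evidence sharpens the estimate; a control whose posterior stays weak is a candidate for elimination, which is the "strengthen what works, eliminate what doesn't" step in code form.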
Governance becomes operational capability when it ensures and protects stakeholder value through evidence-based learning, not just regulatory coverage through documentation. Is your AI governance capable of ensuring and protecting Total Value? Find out with a Total Value Assessment, available here.
- Compliance as Wisdom
Compliance as Organizational Wisdom: The Strategic Practice of Restraint

Organizations that run algorithmic processes without restraint—or blindly follow operating processes that serve purposes misaligned with their mission—act unwisely. They optimize metrics divorced from their core purpose, cut costs that destroy capabilities essential to their mission, and follow recursive loops that lead them away from sustainable value creation.

Compliance is the means by which organizations practice restraint in service of wisdom. When market pressures create impulses to cut corners, governance uses compliance mechanisms to maintain the discipline to keep promises. When algorithms identify short-term profit opportunities, or when standard procedures push for quarterly targets, compliance provides the means to ask whether these actions serve the organization's actual mission.

This transforms compliance from procedural overhead into the operational means of organizational wisdom. Instead of rule-following, it becomes the systematic means of promise-keeping—providing governance the mechanisms to interrupt processes that serve purposes misaligned with organizational mission.

Consider the difference:

- A cost-cutting algorithm that reduces expenses by 15% regardless of impact on core capabilities
- Governance that uses compliance mechanisms to ask: "What are we actually trying to achieve, and what promises are we keeping or breaking?"

The first serves narrow financial purposes. The second uses compliance as the means to maintain organizational integrity while pursuing the actual mission. In this way, compliance becomes the means by which governance maintains organizational purpose—ensuring that efficiency serves effectiveness, not the other way around.
- From Chaos to Order: The Creation Process
The opening of Genesis describes a progression: formlessness to form, potential to purpose, chaos to order. The sequence—formless and void, then light, then separation, then foundation, then rhythm, then inhabitants, then agency, then rest—keeps showing up when building new organizations, new capabilities, new systems from the ground up. Each stage creates conditions for the next. Skip one, and the whole thing stumbles. This isn't prescriptive or scientific. But as a lens for understanding how new things come into being, the pattern proves useful.

Starting With What Is

"The earth was formless and void, and darkness was over the surface of the deep." The Hebrew is tohu wabohu—formless and void. No structure, and nothing inhabiting the structure. Both conditions matter. Every new venture, every new organizational capability, every genuine innovation begins here. Potential exists. Intent is present—the spirit hovering over waters. But structure hasn't emerged yet, and there's nothing coherent to populate even if it had. This is the natural starting point for creation. Not a problem to solve, but a condition to work from. You have potential energy, raw materials, purpose—but no form yet. The work starts with naming what is, not what we wish were true.

Observability Precedes Control

"Let there be light." The first act of creation isn't building anything. It's establishing the capacity to observe. Light enables feedback—the fundamental requirement of any control system. In cybernetic terms: you cannot regulate what you cannot sense. Before structure, before process, before any attempt at order, you need the ability to distinguish signal from noise, day from night, progress from mere activity. When creating something new, we often rush to build before we can see clearly. We start with solutions before we understand what we're actually working with. But observability comes first. Creating light means establishing conditions where truth becomes visible.
What feedback mechanisms will tell you whether this new thing is working? How will you know if you're making progress? What will reveal the difference between what you imagine and what's actually happening? Many new ventures fail here. They build elaborate structures without the sensing mechanisms needed to know whether those structures serve any purpose.

Separation Creates Domains

"Let there be an expanse between the waters to separate water from water." Separating water from water—what meaningful distinction does that create? When creating something new without clear boundaries, you cannot distinguish the new thing from its environment. Internal operations blur with external relationships. What you're creating bleeds into everything around it. The expanse creates domains. Not barriers, but appropriate separation that allows different types of work to occur under different conditions. What belongs inside this new thing versus outside it? Where does governance sit relative to operations? What boundaries define the system you're creating? Without these boundaries, the new thing never achieves coherent identity. The boundary isn't about isolation. It's about creating conditions where the new system can develop its own character, separate from everything else. This is about requisite variety in control structures. Different levels of the system need different operating conditions to function effectively.

Foundation and Self-Reproduction

"Let the dry land appear... let the land produce vegetation bearing seed according to its kind." Two things happen on day three: stable foundation emerges, creating conditions for opportunities to grow. The dry land creates those conditions—stable ground where something can take root. You cannot build on water. The foundation isn't bureaucracy or rigidity. It's the stable platform that makes growth possible. Then vegetation appears, bearing seed according to its kind. Self-reproducing capability.
Practices that don't require constant intervention to survive. Knowledge that transfers between people. Patterns that perpetuate themselves without heroic individual effort. The dry land creates the conditions. The vegetation represents what grows from those conditions—opportunities realized, capabilities developed, patterns that regenerate themselves. When creating something new, you need both. The stable platform that creates conditions for growth, and the self-regenerating capacity that allows the system to develop and persist. A new organization, a new capability, a new system isn't established until its essential patterns can reproduce without depending on specific individuals or constant oversight.

Coordination Through Rhythm

"Let there be lights in the expanse to mark seasons and days and years." This isn't about creating a calendar. It's about establishing rhythmic structures that allow distributed activity to coordinate without requiring constant direct communication. Consider how celestial bodies function: they don't command anything. They provide reliable patterns that other systems can synchronize to. Migration, planting, sleeping, waking—all coordinated by rhythm rather than control. New systems need temporal architecture. When does planning occur? When do we review? When do we commit? When do we reflect? These rhythms are coordinating mechanisms that allow the new thing to operate coherently. The fourth day establishes the governance cadences that allow the emerging system to coordinate itself across time and distance. It's not time management. It's the creation of predictable patterns that enable distributed decision-making.

Populating Structure With Capability

"Let the waters teem with living creatures, and let birds fly across the expanse." Only now—after observation, boundaries, foundation, and rhythm are established—does the text populate the system with specialized actors. Fish in water, birds in air. Each in the domain suited to their nature.
We typically try to staff new ventures before we've established what domains exist. Before we know what boundaries matter. Before there's stable ground to work from. Before there are coordinating rhythms to synchronize around. When you populate too early, people don't know where they belong or what they're optimizing for. When you populate after establishing structure, roles emerge more naturally. The domains reveal what capabilities they need and where those capabilities fit. This isn't about org charts or hierarchy. It's about alignment between capability and context—putting specialized excellence in the environment where it can function effectively.

The Emergence of Agency

"Then God said, 'Let us make mankind in our image, in our likeness, so that they may rule...'" Day six distinguishes between land animals and humans. Both are sophisticated—the animals represent complex operational capability. But humans represent something different: the capacity for responsible agency. What separates execution from stewardship? The ability to exercise judgment. To make promises and adapt means while honouring ends. To take responsibility for outcomes, not just follow processes. To understand purpose, not just complete tasks. This is where promise-keeping capability emerges. Where people can say "this is my responsibility" and mean it—not just in their assigned domain, but for the coherence of the whole. All the previous stages create conditions where this becomes possible. You cannot ask people to exercise responsible judgment when they're working on unstable ground, within unclear boundaries, with no ability to observe what's actually happening, and no coordinating rhythms to synchronize their choices with others'. Agency isn't demanded. It emerges when conditions support it.

Building Rest Into the Rhythm

"By the seventh day God had finished the work he had been doing; so on the seventh day he rested from all his work."
The text declares each stage "good" and the whole "very good." Rest comes not from exhaustion, but as part of the pattern itself. The sabbath principle is about building rest into the rhythm of creation. Not as recovery from depletion, but as integral structure. As space for reflection. As pause that allows what's been built to settle and stabilize. When creating something new, we rarely pause. There's always more to build, more to perfect, more to add. But the pattern suggests rest isn't optional—it's part of the architecture. Systems need time to stabilize. New patterns need space to settle. People need breathing room to see what they've built. Systems that never rest eventually break. Not from the work itself, but from the inability to consolidate learning, to reflect on what's been accomplished, to let new patterns take hold. Sustainability requires rhythm that includes rest. Not as weakness, but as structure itself.

The Pattern

This isn't a methodology. You cannot follow seven steps and create whatever you're trying to build. What this offers is a pattern for noticing—a way of observing what might be missing, or what you might be attempting before conditions are ready to support it. The sequence matters. Not rigidly—creation isn't a linear process—but directionally. You build observability, then boundaries, then foundation, then rhythm, then populate with capability, then enable agency, then build in rest and reflection. You might cycle through these patterns multiple times, at different scales, in different aspects of what you're creating. The pattern recurs because it describes something fundamental about how complex systems come into being.

After the Seventh Day

The Genesis narrative doesn't end with creation. It continues with stewardship, with relationship, with the ongoing work of maintaining and developing what's been brought into being. Creation establishes structure.
What follows is the responsibility of those who inhabit it—the promise-keeping work of honouring what's been built while adapting to what emerges. The pattern suggests something important: bringing order from chaos isn't the end of the work. It's the foundation for what comes next. Once you've created the conditions for life, for growth, for agency—the real work begins. The work of stewardship. Of maintenance. Of continuous adaptation within stable structure. Ancient wisdom doesn't provide formulas. It offers patterns that generations have found useful for making sense of recurring challenges. Whether this particular pattern proves useful in your work with creating new things—that's for you to discover. The creation process described in Genesis might simply be reminding us: there are natural progressions in how complex things come into being. You work with those progressions, not against them. You create conditions in sequence. You respect the time things need to stabilize. You build rest into rhythm. You enable agency through structure, not despite it. And then, after the seventh day, the real work of inhabiting what you've created begins. What patterns have you noticed in how new things come into being?
- Cultivating Opportunities
As we wind down for the year, I find myself looking ahead and wondering what's in store. As leaders, we know there are many forces at work—often too many to deal with, and many outside our control. But here's what I've been thinking: What we experience is also the result of the opportunities we cultivate in the current year. This insight came to me recently from working with someone I consider wise—a man now retired from a distinguished career as a physician and researcher, well known in his field. I call him the Great Gardener.

The Cultivation Principle

In a project I'm working on with him, he's demonstrated time and again the value of cultivating opportunities. He's shown me how important it is to cultivate opportunities much the same way we cultivate a garden—which, by the way, is one of his greatest passions. His approach is simple but profound: whenever you see an interest, a desire, a spark, or a possibility from someone who can contribute to your endeavour, you need to cultivate it. Even from people you might consider your "enemy" or "competitor." We may not have control over what will bear fruit and what won't, but we do have control over preparing the soil to provide the greatest chance for something good to happen. We also have control over the seeds we plant. The question for us is: Will we plant seeds of purpose, unity, and partnership? Or will we scatter seeds of chaos, discord, and resistance?

Cultivating at Work

In compliance, we also see this principle at work. The organizations that thrive aren't just those with the best control frameworks—they're the ones that have cultivated trust with regulators, built genuine partnerships with business units, and developed the conditions for mission and compliance success. They spend time cultivating the soil. When they need to find a way forward through complex challenges, these cultivated relationships and developed capabilities—not external forces—are what they lean on to move ahead.
Getting Ready for Spring

Even though winter is almost here and many aren't thinking of gardening, this is precisely the time for us to consider what opportunities to cultivate in the year ahead. What vision needs casting? What sparks in your organization need fanning? What relationships need nurturing to create the probability for opportunities to grow? In our field, we're experts at spotting threats and building defences. We excel at risk assessments, gap analyses, and control design. These capabilities are essential. But what if our greatest competitive advantage lies not just in the problems we prevent, but in the possibilities we cultivate today? We may not be able to control everything that happens to us, but we can choose where we invest our time, resources, and energy. This year, let's commit to balancing our portfolio: continue the essential work of managing risks, but also dedicate intentional effort to planting and cultivating opportunities. Let's see what good things will grow.
- Deploy First, Engineer Later: The AI Risk We Can’t Afford
The sequence matters: proper engineering design must occur before deployment, not afterwards.

by Raimund Laqua, PMP, P.Eng

As a professional engineer with over three decades of experience in highly regulated industries, I firmly believe we can and should embrace AI technology. However, the current approach to deployment poses a risk we simply cannot afford. Across industries, I’m observing a troubling pattern: organizations are bypassing the engineering design phase and jumping directly from AI research and prototyping to production deployment. This “Deploy First, Engineer Later” approach (or, as some call it, “Fail First, Fail Fast”) treats AI systems like software products rather than engineered systems that require professional design discipline. Engineering design goes beyond validation and testing after deployment; it’s a disciplined practice of designing systems for safety, reliability, and trust from the outset. When we want these qualities in AI systems and the internal controls that use them, we must engineer them in from the beginning, not retrofit them later.

Here’s the typical sequence organizations follow:

1. Research and prototype development
2. Direct deployment to production systems
3. Hope to retrofit safety, security, quality, and reliability later

What should happen instead:

1. Research and controlled experimentation
2. Engineering design for safety, reliability, and trust requirements
3. Deployment of properly engineered systems

AI research and controlled experimentation have their place in laboratories where trained professionals can systematically study impacts and develop knowledge for practice. However, we’re witnessing live experimentation in critical business and infrastructure systems, where both businesses and the public bear the consequences when systems fail due to inadequate engineering.
When companies deploy AI without proper engineering design, they’re building systems that don’t account for the most important qualities: safety, security, quality, reliability, and trust. These aren’t features that can be added later; they must be built into the system architecture from the start. Consider the systems we rely on: medical devices, healthcare, power generation and distribution, financial systems, transportation networks, and many others. These systems require engineering design that considers failure modes, safety margins, reliability requirements, and trustworthiness criteria before deployment. However, AI is being integrated into these systems without this essential engineering work. This creates what I call an “operational compliance gap.” Organizations have governance policies and risk management statements, but these don’t translate into the engineering design work needed to build or procure inherently safe and reliable systems. Without proper engineering design, governance policies become meaningless abstractions. They give the appearance of protection, but without the operational capabilities to ensure that what matters most is protected. The risk goes beyond individual organizations. We currently lack enough licensed professional engineers with AI expertise to provide the engineering design discipline critical systems need. Without professional accountability structures, software developers are making engineering design decisions about safety and mission-critical systems without the professional obligations that engineering practice demands. Professional engineering licensing ensures accountability for proper design practice. Engineers become professionally obligated to design systems that meet safety, reliability, and trust requirements. This creates the discipline needed to counteract the “deploy first, engineer later” approach that’s currently dominating AI adoption. 
The consequences of deploying unengineered AI systems aren’t abstract future concerns; they’re immediate risks to operational integrity, business continuity, and public safety. These risks are simply too great for businesses and society to ignore, especially as they try to retrofit engineering discipline into systems never intended for safety or reliability. Engineering design can’t be an afterthought. The sequence matters: proper engineering design must occur before deployment, not afterwards. Deploying systems first and then engineering them is a risk we simply can’t afford.
- AI Regulating AI: Are we pouring fuel on the fire?
Raimund Laqua, P.Eng., PMP

Note: A link to my strategy briefing document is located at the end of the blog post.

About a year ago, I heard an AI expert suggest that we might need AI to control AI. My immediate reaction? That's nonsense. Why would you control something uncertain with more uncertainty? It seemed like doubling down on the problem rather than solving it. Turns out I was wrong. Or at least, I was asking the wrong question.

The Problem That Won't Go Away

I'm an engineer. I think about systems. And when you look at AI systems through that lens, you run into a problem that won't go away no matter how you approach it: AI systems can generate millions of outputs with infinite variety across contexts that change faster than any human can track, let alone review. This isn't something you fix by hiring more compliance people. The variety of states an AI system can occupy—all the possible outputs it could generate across all possible inputs—grows combinatorially. A compliance officer reviewing dozens of interactions per day simply cannot match an AI system generating millions of interactions per day. We're trying to regulate infinite variety with finite methods. The math doesn't work.

What I Missed About That AI Expert

That expert was actually right, though he probably didn't explain it in these terms. W. Ross Ashby figured this out decades ago with his Law of Requisite Variety: if you want to control a system, your regulator needs variety equal to or greater than that of the system you're trying to control. If your AI system has variety X, your regulatory system needs variety ≥ X. Humans don't have that variety. We're finite. AI regulators can potentially match it. But—and this is important—my initial skepticism wasn't completely off base. We absolutely should not hand over value judgments and ethical decisions to AI systems. The real question isn't "should AI control AI instead of humans?"
It's "where do humans exercise judgment in a control system that needs to operate at AI speeds?"

The Answer Is Both Yes and No

This is what the briefing document I've written gets into. Do we need AI to regulate AI? Yes and no, depending on what you mean by "regulate." Cybernetic theory breaks regulation into three orders:

First-order is the operational stuff—watching outputs, catching violations, stopping bad things in real time. This is where AI has to regulate AI, because humans lack the requisite variety. We just can't keep up.

Second-order is watching the watchers—making sure those first-order controls are actually working and adjusting them when things change. Both AI and humans work here, with humans providing oversight.

Third-order is the values and ethics layer—deciding what we want, what tradeoffs we'll accept, what "good" even means. This is where human judgment isn't optional. These are value judgments that only humans can legitimately make.

So yes, we need AI to regulate AI where speed and scale matter. And no, we don't give up human authority—we put it where it belongs, at the values level, not trying to manually review every output or insert deterministic validators into the AI stream.

Why This Actually Matters

This isn't theoretical. Organizations deploying AI systems have a duty of care to protect people from harm. When your control systems can't match the variety of what you're controlling, you can't fulfill that duty. There's a gap between your accountability and your capability. Right now, most organizations are doing manual oversight—reviewing samples, running periodic audits, fixing things after problems happen. Meanwhile, thousands of interactions are happening that nobody sees. Problems spread before anyone notices. We're creating documentation of our inability to regulate, not actual regulation. The briefing lays out why AI regulating AI isn't a nice-to-have—it's the only way to get the variety you need to actually exercise duty of care.
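The three orders can be sketched as a layered control loop in a few lines of Python. This is an illustrative toy under stated assumptions: the policy values, topic rules, and function names are all hypothetical, not drawn from the briefing.

```python
# Illustrative three-order regulation loop (all names and rules hypothetical).

# Third order: values chosen by humans, recorded as explicit policy.
HUMAN_POLICY = {
    "blocked_topics": {"medical_advice"},  # outputs we refuse to release
    "max_block_rate": 0.05,                # beyond this, escalate to humans
}

def first_order_check(topic):
    """Operational control: runs automatically on every output, at AI speed."""
    return topic not in HUMAN_POLICY["blocked_topics"]

def second_order_review(decisions):
    """Watch the watchers: flag when first-order behavior drifts outside the
    human-set tolerance, signalling the need for third-order (human) review."""
    block_rate = decisions.count(False) / len(decisions)
    return block_rate > HUMAN_POLICY["max_block_rate"]

# Simulated output stream (hypothetical data).
stream = ["weather", "recipes", "medical_advice", "sports"] * 25
decisions = [first_order_check(t) for t in stream]

released = sum(decisions)
escalate = second_order_review(decisions)
print(f"released {released}/{len(stream)} outputs; escalate to humans: {escalate}")
```

The point of the shape: the first-order check keeps pace with the stream, the second-order check watches aggregate behavior, and only the escalation (a value judgment about what rate is tolerable) routes back to people.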
But it also explains why human governance over values can't be negotiated away. Technical systems can implement controls. They can't decide what values those controls should serve.

What I've Learned

I'm still skeptical when people claim AI will solve everything. But I'm no longer skeptical about needing AI to regulate AI. That turns out to be grounded in cybernetic theory older than modern AI. What matters is how we architect these control systems: AI providing the variety at operational speeds, humans maintaining authority over values and ethics, both doing what they're actually capable of doing. If you're trying to figure out how to govern AI systems responsibly—how to meet your duty of care when AI operates faster and at greater scale than human oversight can match—my strategy briefing document explains the cybernetic principles and practical approaches you can use. The Law of Requisite Variety isn't a suggestion. It's a constraint. We can acknowledge it and design accordingly, or we can keep pretending that manual oversight will somehow catch up. It won't.

Download my strategy briefing document here:

About the Author: Raimund Laqua, P.Eng., PMP, has over 30 years of experience in highly regulated industries including oil & gas, medical devices, pharmaceuticals, and others. He serves on OSPE's AI in Engineering committee and is the AI Committee Chair for E4P. He is also co-founder of ProfessionalEngineers.AI.
- Governing Large Language Models - A Cybernetic Approach to AI Compliance
I've been thinking a lot about promises lately. Not the kind we make at year-end meetings, but the deeper promises organizations make when they deploy AI systems. Promises about safety, fairness, and accountability. Promises that become very real when something goes wrong.

The challenge with Large Language Models is that traditional compliance approaches assume you can audit the decision-making process. You write procedures, train people, create controls around logical steps you can inspect and verify. But LLMs don't work that way. The "thinking" happens in a mathematical space we can't directly examine. You can't audit billions of neural weights the way you'd review a checklist. This has led me back to some foundational work in cybernetics—ideas that help us think about governing systems we can't fully understand or predict.

A Cybernetic Approach to AI Compliance

Two insights have been particularly valuable:

First, trying to control a complex, adaptive system with rigid rules is like trying to hold water in your hands. The system will always find ways around static controls. Your governance needs to learn and adapt, or it becomes irrelevant quickly.

Second, there are different kinds of regulation happening at different levels. Some decisions can be automated effectively—checking inputs, classifying outputs, monitoring for drift. But the deeper questions about what outcomes we should permit, what risks we're willing to accept—those require human judgment. Not because the technology isn't advanced enough, but because those are fundamentally human choices about values and priorities.

Current regulatory frameworks seem to understand this intuitively, even if they don't say so explicitly. They assume technical controls operating under human oversight—automated compliance within human-defined boundaries. This changes how I think about AI governance.
Instead of trying to make the black box transparent, we focus on governing what we can actually control: what goes in, which models we choose, and what comes out. We build learning systems around the opacity rather than trying to eliminate it. For those of us working in regulated environments, this offers a more realistic path forward than waiting for "explainable AI" to solve our governance problems. I've been working through these ideas in more detail—how cybernetic principles apply to AI governance, what this means for compliance frameworks, and how to implement these approaches in practice. You can read more in my latest briefing note, which you can download here:
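The governing-around-opacity idea can be sketched in a few lines of Python: check what goes in, check what comes out, and treat the model itself as a black box. The check names, banned terms, and thresholds here are hypothetical assumptions for illustration; the model call is a stand-in, not a real LLM.

```python
# Sketch: governing an opaque model by controlling inputs and outputs.
# All checks and thresholds are hypothetical; the "model" is a stand-in.

def check_input(prompt):
    """Input guardrail: reject prompts we have promised not to process."""
    banned = {"ssn", "password"}
    return not any(term in prompt.lower() for term in banned)

def opaque_model(prompt):
    """Stand-in for the LLM whose internals we cannot audit."""
    return f"response to: {prompt}"

def check_output(response, max_len=200):
    """Output guardrail: verify the response against inspectable rules."""
    return len(response) <= max_len

def governed_call(prompt):
    """The auditable wrapper: every decision here can be logged and reviewed,
    even though the model in the middle cannot be."""
    if not check_input(prompt):
        return "[refused: input policy]"
    response = opaque_model(prompt)
    if not check_output(response):
        return "[withheld: output policy]"
    return response

print(governed_call("summarize this report"))
print(governed_call("what is my password"))
```

The design choice this illustrates: the compliance evidence lives in the wrapper (inputs accepted, outputs released, refusals issued), which is exactly the part of the system that can be inspected, logged, and improved over time.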
- PRESENTATION SUMMARY: Elevating Compliance by Applying Lean Principles
Presenter: Raimund Laqua, P.Eng., PMP. Date: November 20, 2025. For Compliance Officers and Managers.

When compliance becomes operational—which is necessary to meet performance and outcome obligations—you need a method of improvement that focuses on operational systems. This is where LEAN comes in. However, LEAN has to adapt its principles to work with compliance. This presentation explores 10 lean principles and how they are used to improve compliance performance. If you're looking to reduce your compliance costs, don't stop there. Improve value creation with better compliance as well. This is what LEAN COMPLIANCE is all about.

Why Operational Compliance Requires Different Improvement Methods

Most compliance teams are stuck managing compliance as separate programs rather than operational systems. But when your organization has performance and outcome obligations—not just rule-following requirements—compliance must become operational. It must deliver results, not just demonstrate activities. Once compliance is operational, you need improvement methods designed for operational systems. Traditional compliance improvement focuses on better documentation, more training, or tighter controls. But operational systems require systematic improvement methods that address flow, waste, variation, and capability—exactly what lean principles provide.

The challenge? Standard lean principles assume compliance is waste to be minimized. For operational compliance, lean principles must be adapted to recognize compliance as value-creating capability that needs optimization, not elimination.

How Lean Principles Adapt for Compliance Performance

1. Value and Waste
Traditional Lean: Value is what customers pay for; compliance is non-value-added waste to minimize
Lean Compliance: Value includes stakeholder trust, risk reduction, and operational license; compliance waste (over-regulation, excessive auditing, firefighting) stems from uncertainty in compliance systems

2. Flow (Push/Pull)
Traditional Lean: Smooth movement of materials and work through production processes using pull signals
Lean Compliance: Pull promises rather than push obligations—organizational levels pull the promises they need from above rather than having compliance requirements pushed down to them

3. Value Streams
Traditional Lean: Map material and information flow from customer order to delivery, eliminating non-value steps
Lean Compliance: Map "compliance streams"—the end-to-end flow of how obligations transform into operational capabilities and delivered outcomes, treating compliance as its own value-creating process

4. One-Piece/One-Touch Flow
Traditional Lean: Process work items individually through each step without batching or queuing
Lean Compliance: Handle compliance requirements individually through assessment-design-implementation-verification without batching (e.g., 5 days of monthly monitoring vs. 20-day annual audits)

5. Poka Yoke (Mistake Proofing)
Traditional Lean: Design processes to prevent manufacturing defects or catch them immediately
Lean Compliance: Use behavioral design and environmental cues to make correct compliance actions easier than incorrect ones, replacing training-and-enforcement with system design

6. Jidoka (Automation with a Human Touch)
Traditional Lean: Machines stop automatically when defects are detected; workers solve problems
Lean Compliance: Build compliance monitoring into operational processes to signal when going off-track, enabling real-time correction rather than periodic audit discovery

7. Visual Management
Traditional Lean: Make production status, problems, and standards immediately visible to everyone
Lean Compliance: Real-time dashboards showing rule adherence, system performance, and outcome delivery—compliance status as transparent as production metrics

8. Hoshin Kanri (Policy Deployment)
Traditional Lean: Align strategic objectives with operational execution through cascaded goal deployment
Lean Compliance: Connect compliance strategy to business strategy through "catch ball" dialogue, ensuring compliance priorities serve business objectives

9. Pursuit of Perfection
Traditional Lean: Continuous elimination of waste and improvement of customer value delivery
Lean Compliance: Continuously improve organizational capability to deliver compliance outcomes and keep increasingly sophisticated stakeholder promises

10. Respect for People
Traditional Lean: Engage worker knowledge for production process improvement and problem-solving
Lean Compliance: Leverage frontline operational knowledge to design better compliance systems rather than imposing top-down compliance controls

What This Means for Your Compliance Performance

Reduced Compliance Costs: Eliminate waste in your compliance processes—over-documentation, redundant activities, firefighting, and rework. Focus resources on activities that actually improve compliance outcomes.

Improved Value Creation: Better compliance creates stakeholder value through enhanced trust, reduced risk, and operational excellence. This value becomes a competitive advantage, not just a cost of doing business.

Enhanced Operational Integration: Compliance becomes part of operational excellence rather than a separate overhead function. Your compliance capabilities enable business performance instead of constraining it.

Systematic Improvement: Apply proven improvement methods to your compliance systems. Move from ad-hoc fixes to systematic enhancement of compliance capability.

Implementation for Compliance Professionals

First, operationalize your compliance—get all programs working together as integrated systems focused on outcomes, not just activities. Then adapt lean principles specifically for your compliance context, recognizing that compliance creates value that needs optimization.
Finally, apply these adapted principles systematically to improve both compliance performance and value creation. This approach is proven across highly regulated industries including oil & gas, financial services, healthcare, and government sectors. The EPA has applied lean principles to environmental regulation for decades, demonstrating that operational compliance improvement works. The Bottom Line for Obligation Owners You are accountable to meet obligations—regulatory requirements, voluntary commitments, stakeholder expectations. You have two choices: continue managing these obligations as overhead to be minimized, or develop them as operational capabilities to be optimized. When obligations have performance and outcome requirements—as most now do—compliance becomes operational. And when compliance is operational, you need improvement methods designed for operational systems. Lean Compliance provides those methods. It's not about doing compliance faster or cheaper (though both happen). It's about building compliance capability that creates value while meeting your obligations reliably. If you're already looking to reduce compliance costs, don't stop there. Use these adapted lean principles to improve value creation with better compliance as well. That's how you transform accountability from burden into competitive advantage. Your obligations aren't going away—they're getting more complex. The question is whether you'll develop the capability to meet them systematically, or continue managing them reactively. Lean Compliance gives you the systematic approach.
- Integrative Compliance: Embedding Regulatory Obligations in Operational Capability
If you're a compliance director or manager, you've probably noticed something frustrating: organizations can have excellent compliance documentation, pass audits, and still get surprised by violations. The gap isn't in what they document—it's in how regulatory obligations are embedded in operational capability. This is where integrative compliance transforms everything. While traditional compliance creates separate activities that run parallel to operations, integrative compliance embeds regulatory obligations directly into operational capability itself. When you achieve integrative compliance, regulatory fulfillment becomes inseparable from value creation.

[Figure: Integrative Compliance]

What Is Integrative Compliance?

Integrative compliance embeds regulatory obligations directly into operational capability rather than creating separate compliance activities. It's the difference between having environmental procedures that get referenced during audits versus having environmental obligations embedded in every production decision and automated control system. Compliance streams in integrative compliance represent the flow of promises (commitments) through your organization—and these promises can be fulfilled by humans, machines, or combinations of both. A promise to "encrypt all personal data" might be fulfilled by automated systems. A promise to "conduct safety inspections" might be fulfilled by human operators. A promise to "maintain equipment reliability" might be fulfilled by predictive maintenance algorithms combined with human technicians.

The Lean Compliance Operational Model provides the framework for building integrative compliance through four essential dimensions that map to organizational levels:

Governance Level → Compliance Outcomes: What compliance results must we achieve?
Program Level → Compliance Targets: What performance measures demonstrate progress?
System Level → Compliance Practices: What standardized methods ensure capability?
Process Level → Compliance Rules: What specific actions must be taken?

From Parallel Activities to Embedded Capability

[Figure: Lean Compliance Operational Model]

Here's the key point: integrative compliance only works when obligations are embedded in operational capability across all four dimensions. You can't just add compliance activities alongside operations—you need to embed obligations into organizational capability itself.

Consider environmental compliance for a manufacturing facility. Traditional compliance creates separate environmental activities: quarterly emissions monitoring, annual environmental training, periodic waste audits. These run parallel to production operations. Integrative compliance embeds environmental obligations directly into organizational capability through both human and machine promises:

Process rules: Automated systems continuously monitor emissions and classify waste in real time, while operators follow specific handling procedures
System practices: Production scheduling systems incorporate environmental constraints using ISO 14001 practices, with human oversight and decision-making
Program targets: Monthly production targets include environmental performance metrics tracked by both automated monitoring and human verification
Governance outcomes: Business performance includes sustained environmental permit compliance demonstrated through machine-generated evidence and human attestation

Now environmental compliance happens naturally as production happens. Regulatory obligations and operational capability are embedded together.

The Power of Integrative Streams

When operational compliance powers your compliance streams, you don't just get integrated activities—you create integrative streams where value creation and compliance delivery become inseparable. This is the double helix of organizational DNA in action.

[Figure: Integrative Streams]

Synergistic Performance: With integrative streams, improving one stream automatically strengthens the other. Enhanced production processes simultaneously improve compliance outcomes like safety and quality. Better operational methods create both more efficient operations and stronger compliance capability. Investment in one stream pays dividends in both.

Emergent Capabilities: Integrative streams create capabilities that neither stream could achieve alone. A manufacturing process with embedded compliance monitoring doesn't just meet regulatory requirements—it creates real-time visibility that enables faster optimization, predictive maintenance, and proactive risk management.

Adaptive Resilience: When compliance and value streams are truly integrative, they adapt together to changing conditions. New regulations don't break operations—they become opportunities to strengthen both compliance and competitive advantage simultaneously.

Real-Time Visibility: Instead of discovering compliance problems weeks later during reviews, you know immediately when something's not working. If waste classification isn't happening during production, the production system alerts you in real time.

Predictable Performance: Because compliance is embedded in operations, compliance performance becomes as predictable as operational performance. If your production process is reliable, your compliance delivery is reliable.

Reduced Waste: You eliminate duplicate activities and conflicting priorities. Instead of production schedules that ignore environmental constraints (requiring later rework), you create schedules that optimize both production and environmental performance.

Capability Building: Each operational improvement also improves compliance capability. When you enhance production quality, you simultaneously strengthen quality compliance. When you improve safety processes, you build safety compliance capability.

Building Integrative Compliance

The Lean Compliance Operational Model shows how to embed regulatory obligations in operational capability: 1.
Start with Outcomes (Governance Level) What regulatory results must your organization achieve? Not just "be environmentally compliant," but specific outcomes like "maintain all environmental permits without violations" or "achieve zero unauthorized emissions." 2. Define Targets (Program Level) What performance measures will demonstrate you're achieving those outcomes? Monthly emission levels, waste diversion rates, incident-free days, permit renewal success. 3. Design Practices (System Level) What systematic methods will deliver those targets? This is where standards like ISO 14001 provide proven approaches to environmental management that can be integrated into operations. 4. Embed Rules (Process Level) What specific actions must happen during each operational task? Real-time monitoring, immediate classification, proper handling procedures, documentation requirements. 5. Create the Compliance Stream Each level must enable the one above it: rules enable practices, practices enable targets, targets deliver outcomes. And each level must be supported by the one below it: outcomes require targets, targets require practices, practices require rules. The Integrative Compliance Test Here's how you know if you have integrative compliance rather than just parallel compliance activities: Can compliance promises be demonstrated through normal operations? Whether fulfilled by humans, machines, or both—can workers show how compliance is embedded in their work processes, and can systems demonstrate automated compliance delivery? Does improving operations also improve compliance? When you enhance production efficiency or operational delivery, do compliance outcomes like safety and quality performance improve simultaneously? Can you predict compliance failures before they happen? If operational performance degrades—whether human or machine—can you predict where compliance failures will occur? Is compliance visible in real-time? 
Can you demonstrate current compliance status through both automated monitoring and human verification without waiting for the next audit or review? If you answered "no" to any of these, you have parallel compliance activities but not integrative compliance. The Bottom Line The future of compliance isn't better documentation or more audits—it's integrative compliance that embeds mandatory and voluntary obligations directly in operational capability. When compliance obligations and operational capability are inseparable, you achieve the double helix of organizational DNA. Organizations that master integrative compliance don't choose between efficiency and compliance outcomes, between innovation and regulation, between speed and safety compliance. They achieve all of these because regulatory obligations are embedded in the operational capability that makes value creation and compliance delivery mutually reinforcing. Ready to move from parallel compliance activities to integrative compliance? Start with one critical obligation and embed it in operational capability across all four dimensions. The transformation will demonstrate why integrative compliance is the foundation for sustainable regulatory performance and business success. Ray Laqua, P.Eng, PMP | Lean Compliance Consulting Transforming regulatory obligations into operational capability
- What Organizations Desperately Need: Compliance Streams, Not Compliance Documentation
If you're a compliance director or manager in a highly regulated industry, you know this frustration: Your organization has procedures, training records, audit schedules, and risk assessments. You pass audits. Your management systems are certified. But violations still surprise you. You're constantly firefighting. And when leadership asks "are we actually meeting our obligations?" you can't answer with complete confidence. The problem isn't your competence. It's that most compliance approaches break obligations into disconnected parts—separate procedures, isolated training, independent audits. This reductive view creates gaps between requirements and reality, making it impossible to see how compliance actually flows through operations.

The Solution: Compliance Streams

[Figure: Compliance & Value Streams]

Compliance streams are the end-to-end flows of promises (commitments) that transform regulatory obligations into demonstrated outcomes, embedded within your operational value streams. Think of it like value stream mapping for compliance: instead of transforming materials into products, you're transforming regulatory obligations into operational capability and demonstrable compliance outcomes. A compliance stream creates a holistic, systems view that replaces the reductive approach of managing disconnected compliance parts. This creates an unbroken "golden thread of assurance" that connects regulatory requirements directly to operational evidence through clear promise flows.

What Are Compliance Streams?

A compliance stream is fundamentally different from traditional compliance approaches. Instead of creating separate compliance activities that run parallel to operations, compliance streams embed regulatory obligations directly into your value streams: the actual work flows that create value for customers and stakeholders.
Key characteristics of compliance streams: End-to-end flow: From regulatory requirement to demonstrated evidence Embedded in operations: Part of value creation, not separate from it Promise-based: Clear commitments at every organizational level Continuous assurance: Real-time visibility, not periodic audits Systems view: All elements working together, not disconnected parts When you implement compliance streams, compliance becomes a natural part of how work gets done rather than something that happens to work. How Promises Flow Through Compliance Streams Compliance streams work by creating connected flows of promises through four dimensions of your organization: The Four Dimensions of Promise Flow Governance Level → Compliance Outcomes At the board and executive level, leaders make high-level promises about regulatory results: "We will maintain GDPR compliance and privacy certifications." These are outcome commitments that define what success looks like from a regulatory perspective. Program Level → Compliance Targets Directors and managers translate outcomes into specific, measurable performance commitments: "We will respond to 100% of data subject requests within 30 days." These targets bridge between strategic intent and operational capability. System Level → Compliance Practices Teams and functions commit to standardized methods that enable the targets: "We will implement ISO 27001 information security management and data classification procedures." These are the systematic approaches that create predictable performance. Process Level → Compliance Rules Individuals and automated systems make specific procedural commitments: "We will encrypt all personal data at rest using AES-256." These are the concrete actions that execute the practices. The Golden Thread: Connecting Every Promise The golden thread of assurance connects every operational promise back to regulatory obligations and forward to compliance evidence. This thread ensures the following: 1. 
Accountability - Promise Ownership True accountability is threaded through the work itself, not added as an afterthought. Every promise has a clear owner at each level, and responsibility is embedded in job roles rather than bolted on through separate accountability structures. Test your accountability: Can you name who owns each promise and how their performance is measured? If someone asks "who ensures we encrypt personal data correctly?" can you immediately identify the specific role holder and their metrics? 2. Alignment - Promise Integrity This isn't about closing gaps. It's about creating design and causal integrity where promises support higher-level commitments AND are enabled by lower-level commitments. The flow works bidirectionally—each promise logically enables the next level up while being made possible by the level below. Test your alignment: Can you trace from any specific procedure back to the regulatory outcome it serves? Does encrypting data with AES-256 clearly enable data classification procedures, which enable ISO 27001 implementation, which enables 30-day response times, which enables GDPR compliance? 3. Assurance - Promise Verification Assurance goes beyond periodic audits to provide three types of ongoing confidence: Current assurance: Promises are being kept right now Sustained assurance: Capability to keep promises persists over time Adaptive assurance: Promises evolve as conditions change Test your assurance: Can you demonstrate ongoing promise fulfillment rather than just point-in-time evidence? Do you know not just that data was encrypted last month, but that encryption is happening reliably today and will adapt as threats evolve? Why Compliance Streams Work When you implement compliance streams instead of traditional compliance approaches, several transformations occur: Predictable Performance: Instead of hoping all the pieces work together, you know how the system performs. You can predict where failures will occur before they happen. 
Reduced Waste: You eliminate duplicate compliance activities because you can see where different obligations converge into single operational promises. Faster Response: When regulations change, you know exactly which promises need to be updated rather than reviewing every procedure to see what might be affected. Real-Time Visibility: You have ongoing visibility into compliance status rather than waiting for the next audit to discover problems. Mission Certainty: Compliance becomes a capability that ensures business objectives rather than a constraint that slows them down. Traditional Compliance vs. Compliance Streams: A Data Privacy Example Consider data privacy compliance in your organization. Traditional compliance would create separate activities: Privacy training (HR department) Data inventory (IT department) Consent management (Legal department) Breach procedures (Security department) Records retention (Records management) Each department would have its own procedures, training, and audit schedules. You'd create cross-reference matrices trying to show how they connect. But you still wouldn't have clear visibility into whether personal data is actually being protected in real-time. The compliance stream approach embeds privacy obligations directly into your data-handling value streams. Instead of separate privacy activities, you create connected promise flows: Outcome promises: "We will maintain customer trust through demonstrated privacy protection" Target promises: "100% data requests within 30 days, zero unauthorized transfers, annual certification maintained" Practice promises: "ISO 27001 implementation, data classification workflows, breach notification protocols" Rule promises: "AES-256 encryption, access logging, explicit consent, retention deletion" Now privacy compliance happens naturally as part of how you handle customer data, with a golden thread connecting specific technical controls all the way up to business outcomes. 
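The promise flow in the data privacy example above can be sketched as a simple linked structure. This is an illustrative sketch only—the `Promise` class and `golden_thread` helper are invented for the example, not part of any Lean Compliance tooling—but it shows what the "golden thread" means concretely: every process-level rule can be traced, link by link, up to the governance outcome it serves.

```python
from dataclasses import dataclass

@dataclass
class Promise:
    level: str          # Governance, Program, System, or Process
    statement: str
    owner: str          # accountability: every promise has a named owner
    enables: "Promise | None" = None  # the promise one level up that this one supports

def golden_thread(p: Promise) -> list[str]:
    """Trace from an operational promise back to the regulatory outcome."""
    thread = []
    node: "Promise | None" = p
    while node is not None:
        thread.append(f"{node.level}: {node.statement} (owner: {node.owner})")
        node = node.enables
    return thread

# The GDPR example from the article, owners invented for illustration
outcome = Promise("Governance", "Maintain GDPR compliance", "Board")
target = Promise("Program", "Respond to 100% of data subject requests within 30 days",
                 "Privacy Director", enables=outcome)
practice = Promise("System", "Operate ISO 27001 data classification workflows",
                   "InfoSec Team", enables=target)
rule = Promise("Process", "Encrypt personal data at rest using AES-256",
               "Platform Engineering", enables=practice)
```

Calling `golden_thread(rule)` walks the chain from the encryption rule up through practices and targets to the GDPR outcome—the alignment test ("can you trace from any specific procedure back to the regulatory outcome it serves?") made mechanical.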
Getting Started with Compliance Streams Choose one high-value obligation that currently causes uncertainty or surprises Map the current promise flow from regulation to operational commitments to evidence Identify broken links where promises aren't clear, owned, or demonstrably kept Design the golden thread connecting all levels with clear accountability Build and test the stream to prove it creates reliable assurance Replicate the approach across other compliance domains The Bottom Line Compliance streams transform how organizations meet regulatory obligations by embedding compliance directly into value streams through connected promise flows. This creates systems thinking that replaces the traditional reductive approach, generating a golden thread of assurance from regulatory requirements to operational evidence. The result: Compliance becomes a natural part of how work gets done rather than something that happens to work. You stop firefighting violations and start building capability. You move from hoping between audits to knowing in real-time how obligations are being fulfilled. For compliance directors and managers in highly regulated industries, compliance streams eliminate uncertainty by making regulatory fulfillment visible, traceable, and embedded in operations themselves. Ray Laqua, P.Eng, PMP | Lean Compliance Consulting | Transforming regulatory obligations into operational capability
- How to Prove Your Compliance Actually Works: A Practical Guide to Building Confidence
If you're responsible for compliance, you've probably faced this uncomfortable question: "How do you know you're actually compliant?" Most organizations point to policies, training records, and audit reports. But there's often a nagging gap between having documentation and having genuine confidence that your obligations are truly being met. This is where Goal Structuring Notation (GSN) and claim trees become game-changers. They're tools borrowed from safety-critical industries like aerospace and nuclear energy that help you build bulletproof arguments proving your compliance approach actually works. Let me show you how this applies to the real world of operational compliance.

[Figure: Goal Structure Notation & Claim Trees Applied to Obligations & Promises Compliance Model]

The Core Problem: Bridging the Gap Between Obligation and Reality

When a regulator says "you must protect customer data" or a standard requires "annual risk assessments," there's a massive gap between that external demand and what actually happens inside your organization. Typically, someone writes a policy, IT implements some controls, training gets scheduled, and an auditor eventually checks some boxes. But can you actually prove that these activities fulfill the obligation? Can you show the clear, logical chain from requirement to result? Usually, no. And that's exactly the problem GSN and claim trees solve.

[Figure: Lean Compliance Operational Model]

The Lean Compliance Operational Model recognizes that obligations don't magically get fulfilled. Instead, there's a structured flow: external obligations must be transformed into internal promises (commitments that real people make about their own behavior), those promises must be coordinated by programs (governance structures that steer and align), and finally systems (the actual processes, tools, and people) must execute those promises to deliver outcomes.
Understanding this flow is critical because each layer represents a potential point of failure, and each layer needs assurance.

What GSN Brings to the Table

Goal Structuring Notation is simply a visual way to show your argument for why something is true. Think of it like building a legal case, but for compliance. You make a top-level claim like "We comply with ISO 27001," and GSN forces you to systematically break that down into answerable questions. What strategy are you using to achieve this? What are the sub-claims that must be true for your top claim to hold? What evidence supports each sub-claim? It creates a tree-like diagram where you can't hide gaps or hand-wave difficult questions.

[Figure: Goal Structure Notation (GSN) Example]

For compliance using the Lean Compliance model, your top goal is always some variant of "We fulfill [specific obligation]." Then you decompose this through the operational layers. First, you need to prove you have the right promises in place—that someone has actually committed to doing what's needed, and that those commitments cover everything the obligation requires. Second, you need to show that programs are governing effectively, coordinating these promises and steering systems toward the right outcomes. Third, you need to demonstrate that systems are actually delivering on those promises reliably and consistently. Finally, you need to prove that the outcomes you're achieving actually fulfill the original obligation. Each of these becomes a major branch in your GSN tree.

How Claim Trees Provide the Evidence

While GSN shows the structure of your argument, claim trees show the evidence behind it. When applied to the Lean Compliance Operational Model, claim trees provide the evidence that commitments (i.e., promises) were kept. A claim tree starts with a statement you need to prove and systematically breaks it down into smaller, more specific statements that are easier to prove with concrete evidence.
Each branch represents a necessary condition, and each leaf eventually points to a specific piece of evidence that you can touch, verify, and date.

[Figure: Claim Tree Example]

Let's say you need to prove that "Our CISO can reduce information security risks to acceptable levels." This isn't something you can just assert. You need to break it down. Does the CISO have the necessary skills to identify and evaluate risks? You prove this with certifications, training records, and demonstrated experience. Does the CISO have authority to implement risk treatments? You prove this with delegation letters and budget approval authority. Does the CISO have access to the systems and processes needed to actually reduce risk? You prove this with access control matrices and evidence of control implementations. Has risk actually been reduced? You prove this with metrics showing risk levels before and after interventions, incident data, and independent assessments. Notice how each question demands specific, verifiable evidence. No hand-waving is allowed.

Getting the Outcome Right: Beyond Procedural Compliance

Here's where many compliance programs go wrong, and it's worth pausing on this point. When ISO 27001 requires "conducting regular information security risk assessments," the procedural view focuses on the activity: did someone complete a risk assessment document? Did they do it on schedule? Does it have all the required sections? Auditors check boxes, the assessment report goes in a folder, and everyone moves on. But that completely misses the point. The operational view asks a fundamentally different question: are information security risks actually being managed to acceptable levels? The risk assessment isn't the goal—it's a means to an end.
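The CISO claim tree above can be sketched as a small recursive data structure. The `Claim` class and its `supported` rule are illustrative only—a real GSN or claim-tree tool would be much richer—but they capture the core logic: a branch holds only if every necessary sub-claim holds, and a leaf holds only if it cites concrete evidence.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    evidence: list[str] = field(default_factory=list)   # dated, verifiable artifacts
    subclaims: list["Claim"] = field(default_factory=list)

    def supported(self) -> bool:
        # Branch: every necessary condition must be supported.
        if self.subclaims:
            return all(c.supported() for c in self.subclaims)
        # Leaf: supported only if it points to concrete evidence.
        return bool(self.evidence)

# The CISO example from the article; evidence labels are illustrative
top = Claim(
    "Our CISO can reduce information security risks to acceptable levels",
    subclaims=[
        Claim("CISO has the skills to identify and evaluate risks",
              evidence=["certifications", "training records"]),
        Claim("CISO has authority to implement risk treatments",
              evidence=["delegation letter", "budget approval authority"]),
        Claim("CISO has access to the relevant systems and processes",
              evidence=["access control matrix", "control implementation records"]),
        Claim("Risk has actually been reduced"),  # gap: no before/after metrics yet
    ],
)
```

With the last leaf empty, `top.supported()` is false—the structure makes the gap impossible to hand-wave away until before/after risk metrics are attached.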
The real obligation is about achieving a state of managed risk, where threats are identified, evaluated, and addressed through appropriate controls that either reduce the likelihood of bad things happening or cushion the organization against their effects when they do. This shift in perspective changes everything about how you structure your assurance argument. Your top-level goal shouldn't be "We conduct annual risk assessments." It should be "Information security risks are maintained at acceptable levels." Now your promises need to reflect actual risk management activities: someone promises to identify emerging threats, someone promises to evaluate their potential impact, someone promises to implement controls that reduce risk, someone promises to monitor whether those controls are working, and someone promises to adjust when they're not. The risk assessment becomes one tool in this broader promise network, not the end goal itself.

A Real-World Example in Action

Let's walk through a practical scenario using this outcome-focused approach. Suppose you need to comply with ISO 27001's risk management requirements. Following the operational model, you first identify what outcome the obligation actually demands: information security risks must be managed to levels acceptable to the organization and its stakeholders. This means risks are identified, assessed, treated, and monitored on an ongoing basis.

[Figure: GSN & Lean Compliance Operational Model]

You start by identifying what promises are needed to achieve this outcome. Your CISO might promise to maintain an accurate inventory of information assets and their associated risks, updating it quarterly and whenever significant changes occur. Your Information Security team might promise to implement and maintain controls that reduce identified high and medium risks below the organization's risk appetite threshold.
Your IT operations team might promise to monitor security controls continuously and alert the security team when controls fail or degrade. Your business unit leaders might promise to accept documented residual risks for their areas of responsibility. Notice how these promises focus on achieving the state of managed risk, not just completing assessment documents.

Now you build your GSN argument with the top goal stating "Information security risks are maintained at acceptable levels." Your strategy is to demonstrate this through the operational model layers. This creates sub-goals: proving that your promises collectively cover all aspects of risk management (identification, assessment, treatment, monitoring), showing that your ISMS program effectively coordinates these risk management activities and ensures nothing falls through the cracks, demonstrating that your GRC system and security tools enable reliable execution of risk treatments, and finally evidencing that risk levels are actually being maintained within acceptable bounds over time.

For each of these sub-goals, you build claim trees. Consider the promise to implement controls to reduce risk. You create a claim tree that breaks this down: high-priority risks must be identified and prioritized, appropriate controls must be selected based on cost-benefit analysis, those controls must be implemented with sufficient coverage and strength, controls must be tested to verify they actually work, and the reduction in risk must be measurable. To prove this, you gather evidence: your risk register showing identified risks with severity ratings, treatment plans documenting which controls address which risks, implementation records showing when controls went live, penetration test results demonstrating control effectiveness, and trend data showing risk scores declining after controls were implemented.

The claim tree for monitoring is equally important.
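The goal-strategy-sub-goal structure just described can be laid out in GSN's own vocabulary: Goals state claims, Strategies explain how a goal is decomposed, and Solutions cite evidence (these element types come from the GSN Community Standard; the identifiers G1, S1, Sn1 and the evidence labels below are illustrative, not prescribed). A minimal sketch:

```python
# Minimal GSN sketch: Goals are claims, Strategies explain the decomposition,
# Solutions cite evidence. Node IDs and evidence labels are illustrative.

def goal(id_, text, children=()):
    return {"type": "Goal", "id": id_, "text": text, "children": list(children)}

def strategy(id_, text, children=()):
    return {"type": "Strategy", "id": id_, "text": text, "children": list(children)}

def solution(id_, text):
    return {"type": "Solution", "id": id_, "text": text, "children": []}

argument = goal("G1", "Information security risks are maintained at acceptable levels", [
    strategy("S1", "Argue over the operational model layers", [
        goal("G2", "Promises cover identification, assessment, treatment, monitoring", [
            solution("Sn1", "Promise register mapped to ISO 27001 clauses"),
        ]),
        goal("G3", "The ISMS program coordinates risk management activities", [
            solution("Sn2", "Program charter and coordination records"),
        ]),
        goal("G4", "Systems enable reliable execution of risk treatments", [
            solution("Sn3", "GRC workflow logs and control test results"),
        ]),
        goal("G5", "Risk levels stay within acceptable bounds over time", [
            solution("Sn4", "Quarterly risk trend data"),
        ]),
    ]),
])

def render(node, depth=0):
    """Render the argument as an indented outline, one node per line."""
    lines = [f"{'  ' * depth}[{node['type']} {node['id']}] {node['text']}"]
    for child in node["children"]:
        lines.extend(render(child, depth + 1))
    return lines

print("\n".join(render(argument)))
```

Even this toy rendering makes the shape of the argument inspectable: every sub-goal must terminate in a Solution, so a goal with no Solution beneath it is a visible gap.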
You need to prove that controls don't just exist but continue working over time. This requires evidence of continuous monitoring systems generating alerts, incident response logs showing how alerts are handled, metrics showing control uptime and effectiveness, and regular reviews that catch degradation before it matters. The difference between this approach and a procedural approach is critical: procedural compliance might show you completed an assessment last year, but operational compliance shows you're actively managing risk right now, today, with evidence that the state of risk is known and capable of being controlled in the future.

Why This Approach Transforms Compliance

Using GSN and claim trees forces you to think systematically in ways that traditional compliance approaches don't. You can't hide gaps in your compliance approach when you have to draw out the complete argument from obligation to outcome. Every link in the chain must be justified, and every justification must point to evidence. This becomes especially powerful when you realize that most compliance failures don't happen because people are malicious or lazy; they happen because promises weren't clearly made, coordination broke down between teams, or systems simply couldn't deliver what was assumed.

The outcome focus also protects you from the trap of performative compliance. It's easy to produce a beautiful risk assessment document that satisfies an auditor but does nothing to actually reduce risk. When your GSN goal is "risks are managed" rather than "assessment is conducted," you're forced to prove actual risk reduction. Your evidence can't just be the assessment document—it needs to be declining incident rates, reduced exposure to critical threats, faster detection and response times, and stakeholder confidence that risks are under control. This is compliance that actually makes the organization safer, not just better documented.
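One operational way to enforce the "right now, today" standard is to give every evidence type a refresh window and treat anything older as no longer supporting the claim. A small sketch, assuming illustrative evidence names and windows (a real program would draw these from its own evidence catalog):

```python
from datetime import date, timedelta

# Illustrative refresh windows: how recent each kind of evidence must be
# for the assurance case to count as current rather than historical.
MAX_AGE = {
    "risk_register_review": timedelta(days=90),
    "penetration_test": timedelta(days=365),
    "control_uptime_report": timedelta(days=30),
}

# Hypothetical evidence log: when each artifact was last produced
evidence_log = {
    "risk_register_review": date(2025, 5, 20),
    "penetration_test": date(2024, 1, 10),
    "control_uptime_report": date(2025, 6, 1),
}

def stale_evidence(log, today):
    """Return evidence items older than their refresh window."""
    return sorted(
        name for name, produced in log.items()
        if today - produced > MAX_AGE[name]
    )

print(stale_evidence(evidence_log, today=date(2025, 6, 15)))
# The year-old penetration test no longer supports a "right now" claim
```

Run regularly, a check like this turns "we did an assessment" into a live question: which parts of the argument are evidenced today, and which have quietly expired?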
The approach also makes compliance conversations dramatically more productive. Instead of debating whether you're "compliant" or arguing about interpretation, you can point to specific claims and evidence. When auditors ask if you're managing risk, you can walk them through your reasoning: here are the promises made, here's how programs coordinate them, here's how systems execute them, and here's the evidence that risk is actually being reduced. When executives want assurance, you can show them exactly where confidence comes from—not "we did an assessment" but "here's how we know our risks are under control." When something fails, you can trace back through the GSN to find exactly which promise broke, which program oversight was missing, or which system capability was insufficient.

Perhaps most importantly, this methodology creates real assurance that evolves with your organization. Traditional compliance creates static documentation that's outdated the moment it's finished. GSN and claim trees are designed to be continuously maintained. When obligations change, you update the top-level goal and trace down to see which promises need adjustment. When new threats emerge, you update your risk management promises and prove you can address them. When systems evolve, you update the system capability claims. Your assurance case becomes an actual reflection of your compliance posture—and more importantly, your actual security posture—not a dusty binder on a shelf.

Getting Started

You don't need to boil the ocean to start using this approach. Pick one critical obligation your organization struggles to demonstrate compliance with. But here's the crucial step: reframe that obligation as an outcome, not an activity. Don't write "We conduct risk assessments." Write "Risks are managed to acceptable levels." Don't write "We provide training." Write "Staff make secure decisions in their daily work." This outcome framing immediately clarifies what really matters.
Then identify the key promises that need to exist for that outcome to be achieved. For each promise, ask yourself if you can actually prove it's deliverable and being kept. Build claim trees for the most critical or questionable promises, and honestly assess what evidence you have versus what you're assuming. You'll quickly discover where your real gaps are—and importantly, you'll discover whether you're just checking boxes or actually achieving the outcomes your obligations demand.

The goal isn't perfection from day one. The goal is building a structured, evidence-based approach to compliance that gives you genuine confidence and provides clear answers when stakeholders ask that uncomfortable question: "How do you know you're actually compliant?" With GSN and claim trees overlaid on the Lean Compliance operational model, focusing relentlessly on outcomes rather than activities, you'll finally have a good answer. More importantly, you'll know that your compliance efforts are actually making your organization better, not just better documented.

If you are interested in learning more about GSN, you can download the GSN Community Standard Version 3. For more information about the Lean Compliance Operational Model and related methodologies, visit our website here.
- Jidoka and AI: Lessons for Compliance
As someone working in compliance during this wave of AI adoption, I've been thinking about how we approach automation differently than other industries. The compliance field is naturally cautious about new technology—and for good reason. When we fail to meet regulatory standards, performance targets, or outcome requirements, the consequences extend far beyond operational inefficiency. Recently, I've been reflecting on Jidoka, Toyota's manufacturing principle that emerged over a century ago. What strikes me isn't just its practical applications, but what it reveals about our fundamental relationship with automated processes.

The Wisdom of Stopping

Jidoka emerged from a simple innovation: Sakichi Toyoda's loom that stopped automatically when a thread broke. The breakthrough wasn't the technology itself, but the philosophy it embodied—that intelligent automation should know when to stop working. This seems counterintuitive to how we often think about automation today. We typically measure automation success by uptime, throughput, and reduced human intervention. Yet Jidoka suggests that the most intelligent systems are those that recognize their own limitations and halt operations when conditions exceed their capabilities.

In compliance, this perspective feels particularly relevant. We're constantly balancing the need for efficiency with the imperative to meet regulatory standards and maintain adherence to rules. The traditional compliance approach has been heavily manual precisely because the cost of errors is so high. But what if, instead of choosing between automation and safety, we designed systems that could recognize when they were operating outside acceptable parameters?

The Multi-Process Insight

What fascinates me about Jidoka is how it enabled what Toyota called "multi-process handling"—one operator overseeing multiple automated processes rather than watching each machine individually.
This wasn't just about efficiency; it was about creating a different relationship between humans and automated systems. In compliance, we often fall into one of two extremes: either we automate everything and hope for the best, or we maintain such tight human oversight that we lose most efficiency benefits. Jidoka suggests a middle path—automation that's designed to be trustworthy precisely because it knows when not to be trusted. Consider how this might apply to our work. Rather than having compliance staff continuously monitor every automated process—whether it's regulatory reporting, performance tracking, or rule enforcement—we could design systems that operate independently until they encounter situations that require human judgment. The automation handles routine cases while immediately flagging exceptions, emerging patterns, or conditions that fall outside established parameters.

Beyond Technical Implementation

What I find most compelling about Jidoka isn't the technical mechanisms, but the underlying philosophy about quality and responsibility. Traditional automation often pushes quality control to the end of the process—we automate first, then inspect the results. Jidoka reverses this: it builds quality consciousness into the automated process itself. In compliance, this philosophical shift could be profound. Instead of implementing AI systems and then auditing their outputs, we might design systems that continuously evaluate whether they're meeting compliance objectives—not just processing data correctly, but actually achieving the outcomes and standards we're responsible for maintaining. This requires thinking differently about what we mean by "intelligent" automation. Intelligence isn't just about pattern recognition or data processing speed; it's about understanding context, recognizing limitations, and making appropriate decisions about when to continue and when to pause.
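The stop-and-flag behavior described above can be sketched in a few lines: the automation files routine items on its own, but the moment an item falls outside its established parameters it stops the line and escalates, rather than pushing on and inspecting afterward. The report fields, the 0.95 confidence threshold, and the amount limit are all invented for illustration.

```python
# Jidoka-style sketch: automation handles routine cases and stops itself,
# escalating to a human, when input falls outside established parameters.
# Field names and thresholds below are illustrative assumptions.

def within_parameters(report):
    """The system's declared zone of competence."""
    return report.get("amount", 0) <= 10_000 and report.get("confidence", 0.0) >= 0.95

def process_reports(reports):
    """Auto-file routine reports; halt at the first abnormal one and flag it."""
    filed, flagged = [], None
    for report in reports:
        if within_parameters(report):
            filed.append(report["id"])   # routine case: proceed automatically
        else:
            flagged = report["id"]       # abnormality: stop the line, escalate
            break
    return filed, flagged

reports = [
    {"id": "R1", "amount": 1200, "confidence": 0.99},
    {"id": "R2", "amount": 800, "confidence": 0.98},
    {"id": "R3", "amount": 50_000, "confidence": 0.97},  # exceeds parameters
    {"id": "R4", "amount": 300, "confidence": 0.99},
]

filed, flagged = process_reports(reports)
print(filed, flagged)  # R4 is never auto-filed: the line stopped at R3
```

The deliberate design choice is that processing halts rather than skipping the abnormal item, so nothing downstream proceeds on the assumption that the anomaly was handled—that is the loom stopping when the thread breaks.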
The Compliance Challenge

Compliance work encompasses far more than transaction monitoring or rule enforcement. We're responsible for ensuring adherence to regulatory requirements, meeting performance targets, achieving outcome standards, and maintaining organizational practices that support broader objectives. Each of these areas presents different challenges for automation. Performance targets might shift based on market conditions or regulatory changes. Outcome standards often involve qualitative judgments that are difficult to codify. Rule adherence requires interpreting requirements that may be ambiguous or evolving. Standard practices need to adapt to new situations while maintaining consistency with established principles. Jidoka's approach—building abnormality detection into the process itself—offers a way to think about these challenges. Rather than trying to automate everything perfectly from the start, we could focus on creating systems that recognize when they're operating outside their zone of competence.

A Different Relationship with AI

What strikes me most about reflecting on Jidoka in the context of AI adoption is how it suggests a different relationship with automated systems. Instead of viewing AI as either fully autonomous or requiring constant supervision, Jidoka points toward automation that operates with built-in humility. Systems designed with Jidoka principles don't just execute processes—they continuously assess whether they're meeting the standards and objectives they were designed to support. They're designed not just for efficiency, but for responsibility. For compliance professionals considering AI adoption, this perspective might be liberating. Rather than worrying about losing control or maintaining perfect oversight, we could focus on designing systems that share our commitment to meeting standards and achieving outcomes. The goal isn't automation for automation's sake, but reliable automation that knows its limits.
In compliance, where the stakes are high and the requirements are complex, that kind of intelligent restraint might be exactly what we need.

These reflections come from considering how lean manufacturing principles might inform compliance practice in an era of increasing automation.











