
  • Compliance Operability Assessment Using Total Value Chain and Compliance Criticality Analysis

    Why Is This Assessment Necessary?

    For compliance to be effective, it must generate desired outcomes. These outcomes may include reducing violations and breaches, minimizing identity thefts, enhancing integrity, and ultimately fostering greater stakeholder trust. Realizing these benefits requires compliance to function as more than just the sum of its parts. Unfortunately, many organizations focus solely on individual components rather than the whole system – they see the trees but miss the forest, or concentrate on controls instead of the overall program. Too often, compliance teams work hard and hope for the best. While hope is admirable, it's an inadequate strategy for ensuring concrete outcomes.

    To rise above being merely a collection of parts, compliance needs to operate as a cohesive system. In this context, operability is defined as the extent to which the compliance function is fit for purpose, capable of achieving compliance objectives, and able to realize the benefits of being compliant. The minimum level of compliance operability is achieved when all essential functions, behaviors, and interactions exist and perform at levels necessary to create the intended outcomes of compliance. This defines what is known as Minimal Viable Compliance (MVC), which must be reached, sustained, and then advanced to realize better outcomes.

    For this to occur, we need a comprehensive approach. We need:

    • Governance to set the direction
    • Programs to steer the efforts
    • Systems to keep operations between the lines
    • Processes to help stay ahead of risks

    All of these elements must work together as an integrated whole.
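    To illustrate the idea (this is a sketch, not the author's assessment method), operability can be thought of as a property of both the parts and their interactions. All names, scores, and the threshold below are hypothetical; the point is that a single weak interaction leaves the whole below MVC even when every part exists.

```python
# Illustrative sketch only: names, scores, and threshold are hypothetical.
essential_functions = {"governance": 0.8, "programs": 0.7, "systems": 0.9, "processes": 0.6}
essential_interactions = {
    ("governance", "programs"): 0.7,   # direction actually steers the efforts
    ("programs", "systems"): 0.5,      # weak link: programs don't inform operations
    ("systems", "processes"): 0.8,
}

def minimal_viable_compliance(functions, interactions, threshold=0.6):
    """MVC holds only when every essential function AND every interaction
    performs at or above the level needed, not just the parts on their own."""
    weakest = min(list(functions.values()) + list(interactions.values()))
    return weakest >= threshold

print(minimal_viable_compliance(essential_functions, essential_interactions))
# False: every part exists, but one interaction underperforms, so the whole is not operational
```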

  • AI Engineering: The Last Discipline Standing

    Software engineering and related domains are undergoing their most dramatic transformation in decades. In discussions I have had over the last year, IT product companies appear to be moving towards an AI-first model. As AI capabilities rapidly advance, a stark prediction is emerging from industry leaders: AI Engineering may soon become the dominant—perhaps only remaining—engineering discipline in many IT domains.

    How Product Teams Are Already Changing

    Looking at how IT technology companies are adapting to AI uncovers an interesting pattern: teams of three to five people are building products that traditionally required much larger engineering groups. The traditional model—where product managers coordinate with software engineers, UI designers, data analysts, DevOps specialists, and scrum leaders—is being replaced by something fundamentally different. Instead, these companies operate with product managers working directly with AI Engineers who can orchestrate entire development lifecycles. These professionals are learning to master a new set of skills: AI system design (architecting intelligent solutions from requirements), AI integration (embedding capabilities seamlessly into products), and AI operations (managing and maintaining AI-powered systems at scale). Companies like Vercel, Replit, and dozens of Y Combinator startups demonstrate this model in action daily. What once required full engineering teams now happens through sophisticated prompt engineering and AI orchestration.

    A Pattern We've Seen Before

    This transformation feels familiar because I lived through something similar in integrated circuit manufacturing. In the early days, I worked for an integrated circuit manufacturer in Canada that at first designed circuits by hand, built prototypes in physical labs, and painstakingly transferred designs to mylar tape for silicon fabrication. This process required teams of specialists: layout technicians, CAD operators, lab engineers—each role seemingly indispensable. Over the years, each function was improved as computer technology was adopted. We started using circuit simulation, computer-aided design with automated design rule checking, and wafer fabrication layout tools. This is not unlike how organizations are now adopting AI to improve individual tasks and functions. Then silicon compilers arrived and changed everything overnight. Suddenly, engineers could create entire circuit designs by simply describing what the circuit should accomplish using Hardware Description Languages like VHDL and Verilog. The compiler handled layout optimization, timing analysis, and fabrication preparation automatically. The entire process could be automated: from ideation to the fab in one step. Entire job categories vanished, but the engineers who adapted became exponentially more productive.

    ONE-SPRINT MVP

    Today's product development is following a similar pattern. AI Engineers translate application requirements through sophisticated prompts into working minimum viable products (MVPs): the one-sprint MVP. This approach lets fewer people deliver working solutions faster while supporting rapid iteration cycles that make even Agile development methodologies feel glacially slow.

    The Tools Driving This Shift

    The evidence surrounds us. GitHub Copilot and Cursor generate entire codebases from natural language descriptions. Vercel's V0 creates production-ready React components from simple prompts. Claude Artifacts builds functional prototypes through conversation.
Replit Agent handles full-stack development tasks autonomously. These aren't novelty demos—they're production tools that engineers use to create real products for customers. However, this is just the beginning.

    Where Traditional Engineering Still Matters

    Now this wave won't wash away all engineering domains equally. Critical areas will maintain their need for specialized expertise: embedded systems interfacing with hardware, high-performance computing requiring deep optimization, safety-critical applications in aerospace and medical devices, large-scale infrastructure architecture, and cybersecurity frameworks. But the domains most vulnerable to AI consolidation—web applications, mobile apps, data pipelines, standard enterprise software, code creation, and prototype development—represent the majority of current engineering employment.

    The Economic Forces at Play

    The economics driving this shift are brutal in their simplicity. When a single AI Engineer can deliver 80% of what a five-person traditional team produces, at a fraction of the cost and timeline, market forces make the choice inevitable. This isn't a gradual transition that companies will deliberate over for years. Organizations that successfully implement AI-first methodologies will out-compete those clinging to traditional approaches. The advantage gap widens daily as AI capabilities improve and more teams discover these efficiencies. Venture capital flows increasingly toward AI-first startups with lean technical teams, while traditional software companies scramble to demonstrate AI integration strategies or risk irrelevance.

    Survival Strategies in an AI-First World

    AI represents a genuine threat to traditional engineering careers. The question isn't whether disruption will occur, but how to position yourself to survive and thrive as AI-first methodologies become standard practice. Critical survival tactics:

    Immediate actions (next 6-12 months):

    • Master AI tools now - become proficient with GitHub Copilot, Claude, ChatGPT, and emerging AI development platforms
    • Learn prompt engineering - this is becoming as fundamental as learning programming languages once was
    • Shift to AI-augmented workflows - don't just use AI as a helper; restructure how you approach problems entirely
    • Build AI system integration skills - focus on connecting AI components rather than building from scratch

    Strategic positioning (1-2 years):

    • Become an AI Engineer - shift your practice from traditional engineering to AI system design; adopt AI engineering knowledge and methods into your practice
    • Specialize in AI reliability and maintenance - AI systems need monitoring, debugging, and optimization
    • Develop AI model customization expertise - fine-tuning, prompt optimization, and model selection
    • Master AI-human collaboration patterns - understand when to use AI vs. when human expertise is still required

    Why Waiting Is Dangerous

    Critics point to legitimate current limitations: AI-generated code often lacks production robustness, complex integrations still require deep expertise, and security considerations demand human judgment. These concerns echo the early objections to silicon compilers, which initially produced inferior results compared to expert human designers. But here's what history teaches us: the technology improved rapidly and soon exceeded human capabilities in most scenarios. The engineers who adapted early secured the valuable remaining roles.
Those who waited found themselves competing against both improved tools and colleagues who had already mastered them.

    Understanding the Challenge

    This isn't another gradual technology transition that engineers can adapt to over several years. AI-first methodologies represent a substantial challenge to traditional engineering roles, with the potential for significant displacement across the industry.

    The reality: Engineers who don't adapt may find themselves competing against AI-first approaches, systems, and tools that operate continuously, require no salaries or benefits, and improve steadily. This will be an increasingly difficult competition to win.

    The opportunity: Engineers who proactively embrace AI-first approaches will be better positioned to secure valuable roles in the evolving landscape. Leading this transformation offers better prospects than waiting for external pressure to force change.

    The window for proactive adaptation becomes smaller with time. Each month of delay reduces competitive advantage as AI capabilities advance and more engineers begin their own transformation journeys. The choice ahead is significant: evolve into an AI Engineer who works with intelligent systems, or risk being replaced by someone who does.

    Raimund Laqua, PMP, P.Eng is co-founder of ProfessionalEngineers.AI (ray@professionalengineers.ai), a Canadian engineering practice focused on advancing AI engineering in Canada. Raimund Laqua is also founder of Lean Compliance (ray.laqua@leancompliance.ca), a Canadian consulting practice focused on helping organizations operating in highly regulated, high-risk sectors always stay ahead of risk, between the lines, and on-mission.

  • Understanding Operational Compliance: Key Questions Answered

    Operational Compliance

    Organizations investing in compliance often have legitimate questions about how the Operational Compliance Model relates to their existing frameworks, tools, and investments. These questions reflect the reality that most organizations have already implemented various compliance approaches—ISO management standards, GRC platforms, COSO frameworks, Three Lines of Defence models, and others. Rather than viewing these as competing approaches, the Operational Compliance Model serves as an integrative architecture that amplifies the value of existing investments while addressing fundamental gaps that prevent compliance from achieving its intended outcomes. The following responses explore how Operational Compliance works with, enhances, and elevates traditional approaches to create the socio-technical systems necessary for sustainable mission and compliance success.

    "Why can I not use an ISO management systems standard?"

    ISO management standards are excellent for procedural compliance but fall short of achieving operational compliance. Operational Compliance defines a state of operability in which all essential compliance functions, behaviours, and interactions exist and perform at levels necessary to create the outcomes of compliance. The fundamental limitation is that ISO standards focus on building the parts of a system (processes, procedures, documentation) rather than the interactions between parts that create actual outcomes. Companies usually run out of time, money, and motivation to move beyond implementing the parts of a system to implementing the interactions, which are essential for a system to be considered operational. ISO standards help you pass audits, but the Operational Compliance Model helps you achieve the outcomes those audits are supposed to ensure—better safety, security, sustainability, quality, and stakeholder trust.

    "Doesn't GRC cover this, at least for IT obligations?"

    GRC (Governance, Risk, and Compliance) platforms are tools, not operational models. Traditional procedural compliance is based on a reactive model that sits apart from, and is not embedded within, the business. Most GRC implementations create sophisticated reporting systems but don't address the fundamental challenge: how to make compliance integral to value creation. The Operational Compliance Model recognizes that obligations arise from four types of regulatory design (micro-means, micro-ends, macro-means, macro-ends) that each require different approaches. GRC tools can support this model, but they can't create the socio-technical processes that actually regulate organizational effort toward desired outcomes.

    "I already have dozens of frameworks."

    This objection actually proves the need for the Operational Compliance Model. Having dozens of frameworks is precisely the problem—it creates framework proliferation without operational integration. Lean TCM incorporates an Operational Compliance Model that supports all obligation types and commitments using design principles derived from systems theory and modern regulatory designs. The Operational Compliance Model doesn't replace your frameworks; it provides the integrative architecture to make them work together as a system rather than as competing silos. It's the difference between having a collection of car parts and having a functioning vehicle.

    "What about COSO? Doesn't this already provide an overarching framework?"
    COSO is excellent for internal control over financial reporting but was designed primarily for audit and governance purposes. The Operational Compliance Model addresses several limitations of COSO:

    • Scope: COSO focuses on control activities; Operational Compliance focuses on outcome creation
    • Integration: COSO's five components work within compliance functions; Operational Compliance embeds compliance into operations
    • Regulatory design: COSO assumes one type of obligation; Operational Compliance handles four distinct types that require different approaches
    • Uncertainty: COSO manages risk; Operational Compliance improves the probability of success in uncertain environments

    COSO can be a component within the Operational Compliance Model, but it's insufficient by itself to achieve operational compliance.

    "What about the Three Lines of Defence audit model?"

    The Three Lines of Defence model is fundamentally reactive—it's designed to catch problems after they occur. Operational Compliance is based on a holistic and proactive model that defines compliance as integral to the value chain. The limitations of Three Lines of Defence:

    • Line 1 (operations) sees compliance as separate from its real work
    • Line 2 (risk/compliance) monitors rather than enables performance
    • Line 3 (audit) confirms what went wrong after the fact

    The Operational Compliance Model collapses these artificial lines by making compliance inherent to operational processes. Instead of three defensive lines, you get one integrated system where compliance enables rather than constrains performance.

    The Essential Difference

    For compliance to be effective, it must first be operational—achieved when all essential compliance functions, behaviours, and interactions exist and perform at levels necessary to create the outcomes of compliance. The majority of existing frameworks and models serve important functions, but they operate within the procedural compliance paradigm. The Operational Compliance Model represents a paradigm shift from compliance as overhead to compliance as value creation—from meeting obligations to achieving outcomes.

  • AI's Category Failure

    When a technology can reshape entire industries, automate critical decisions, and potentially act autonomously in the physical world, how we define it matters. Yet our current approach to defining artificial intelligence is fundamentally flawed—and this definitional confusion is creating dangerous blind spots in how we regulate, engineer, deploy, and think about AI systems. We can always reduce complex systems to their constituent parts, each of which can be analyzed further. However, the problem is not with the parts but with the whole. Consider how we approach regulation: we don't just regulate individual components—we regulate systems based on their emergent capabilities and potential impacts. Take automobiles. We don't primarily regulate steel, rubber, or microchips. We regulate vehicles because of what they can do: transport people at high speeds, potentially causing harm. A car moving at 70 mph represents an entirely different category of risk than the same steel and plastic sitting motionless in a factory. The emergent property of high-speed movement, not the individual components, drives our regulatory approach. The same principle should apply to artificial intelligence, but currently doesn't. Today's definitions focus on algorithms, neural networks, and training data rather than on what AI systems can actually accomplish. This reductionist thinking creates a dangerous category error that leaves us unprepared for the systems we're building.

    The Challenge of Definition

    Today's AI definitions focus on technical components rather than capabilities and behaviours. This is like defining a car as "metal, plastic, and electronic components" instead of "a system capable of autonomous movement that can transport people and cargo." This reductionist approach creates real problems. When regulators examine AI systems, they often focus on whether the software meets certain technical standards rather than asking: What can this system actually do? What goals might it pursue? How might it interact with the world? And what are the risks of its impact? Defining AI properly is challenging because we're dealing with systems that emulate knowledge and intelligence—concepts that remain elusive even in human contexts. But the difficulty isn't in having intelligent systems; it's in understanding what these systems might do with their capabilities.

    A Fundamental Category Error

    What we have is a category failure. We have not done our due diligence to properly classify what AI represents—which is ironic, since classification is precisely what machine learning systems excel at. We lack the foundational work needed for proper AI governance. Before we can develop effective policies, we need a clear conceptual framework (an ontology) that describes what AI systems are and how they relate to each other. From this foundation, we can build a classification system (a taxonomy) that groups AI systems by their actual capabilities rather than their technical implementations. Currently, we treat all AI systems similarly, whether they're simple recommendation algorithms or sophisticated systems capable of autonomous planning and action. This is like having the same safety regulations for bicycles and fighter jets because both involve "transportation technology."

    The Agentic AI Challenge

    Let's consider autonomous AI agents—systems that can set their own goals and take actions to achieve them.
A customer service chatbot that can only respond to pre-defined queries is fundamentally different from an AI system that can analyze market conditions, formulate investment strategies, and execute trades autonomously. These agentic systems represent a qualitatively different category of risk. Unlike traditional software that follows predetermined paths, they can exhibit emergent behaviours that even their creators didn't anticipate. When we deploy such systems in critical infrastructure—financial markets, power grids, transportation networks—we're essentially allowing non-human entities to make consequential decisions about human welfare. The typical response is that AI can make decisions better and faster than humans. This misses the crucial point: current AI systems don't make value-based decisions in any meaningful sense. They optimize for programmed objectives without understanding broader context, moral implications, or unintended consequences. They don't distinguish between achieving goals through beneficial versus harmful means.

    Rethinking Regulatory Frameworks

    Current AI regulation resembles early internet governance—focused on technical standards rather than systemic impacts. We need an approach more like nuclear energy regulation, which recognizes that the same underlying technology can power cities or destroy them. Nuclear regulation doesn't focus primarily on uranium atoms or reactor components. Instead, it creates frameworks around containment, safety systems, operator licensing, and emergency response—all based on understanding the technology's potential for both benefit and catastrophic harm. For AI, this means developing regulatory categories based on capability rather than implementation. A system's ability to act autonomously in high-stakes environments matters more than whether it uses transformers, reinforcement learning, or symbolic reasoning.

    The European Union's AI Act represents significant progress toward this vision. It establishes a risk-based framework with four categories—unacceptable, high, limited, and minimal risk—moving beyond purely technical definitions toward impact-based classification. The Act prohibits clearly dangerous practices like social scoring and cognitive manipulation while requiring strict oversight for high-risk applications in critical infrastructure, healthcare, and employment. However, the EU approach still doesn't fully solve our category failure problem. While it recognizes "systemic risks" from advanced AI models, it primarily identifies these risks through computational thresholds rather than emergent capabilities. The Act also doesn't systematically address the autonomy-agency spectrum that makes certain AI systems particularly concerning—the difference between a system that can set its own goals versus one that merely optimizes predefined objectives. Most notably, the Act treats powerful general-purpose AI models like GPT-4 as requiring transparency rather than the stringent safety measures applied to high-risk systems. This potentially under-regulates foundation models that could be readily configured for autonomous operation in critical domains. The regulatory framework is a strong first step, but the fundamental challenge of properly categorizing AI by what it can do rather than how it's built remains only partially addressed.

    Toward Engineering-Based Solutions

    How do we apply rigorous engineering principles to build reliable, trustworthy AI systems?
    The engineering method is fundamentally an integrative and synthesis process that considers the whole as well as the parts. Unlike reductionist approaches that focus solely on components, engineering emphasizes understanding how parts interact to create emergent system behaviors, identifying failure modes across the entire system, building in safety margins, and designing systems that fail safely rather than catastrophically. This requires several concrete steps (a classification sketch follows at the end of this article):

    • Capability-based classification: Group AI systems by what they can do—autonomous decision-making, goal-setting, real-world action—rather than how they're built.
    • Risk-proportionate oversight: Apply more stringent requirements to systems with greater autonomy and potential impact, similar to how we regulate medical devices or aviation systems.
    • Mandatory transparency for high-risk systems: Require clear documentation of an AI system's goals, constraints, and decision-making processes, especially for systems operating in critical domains.
    • Human oversight requirements: Establish clear protocols for meaningful human control over consequential decisions, recognizing that "human in the loop" can mean many different things.

    Moving Forward

    The path forward requires abandoning our component-focused approach to AI governance. Just as we don't regulate nuclear power by studying individual atoms, we shouldn't regulate AI by examining only algorithms and datasets. We need frameworks that address AI systems as integrated wholes—their emergent capabilities, their potential for autonomous action, and their capacity to pursue goals that may diverge from human intentions. Only by properly categorizing what we're building can we ensure that artificial intelligence enhances human flourishing rather than undermining it. The stakes are too high for continued definitional confusion. As AI capabilities rapidly advance, our conceptual frameworks and regulatory approaches must evolve to match the actual nature and potential impact of these systems. The alternative is governance by accident rather than design—a luxury we can no longer afford.
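    To make the capability-based classification step above concrete, here is a minimal sketch in Python. The capability flags and tier rules are illustrative assumptions, not the EU AI Act's actual criteria; the point is that classification keys off what a system can do rather than how it is built.

```python
# Hypothetical sketch of capability-based (not component-based) classification.
# Capability names and tier rules are illustrative, not a real standard.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    sets_own_goals: bool      # can formulate its own objectives
    acts_in_world: bool       # takes real-world actions without human approval
    high_stakes_domain: bool  # critical infrastructure, health, finance, etc.

def risk_tier(s: AISystem) -> str:
    """Classify by what the system can do, regardless of how it is built."""
    if s.sets_own_goals and s.acts_in_world and s.high_stakes_domain:
        return "unacceptable-without-strict-oversight"
    if s.acts_in_world and s.high_stakes_domain:
        return "high"
    if s.sets_own_goals or s.acts_in_world:
        return "limited"
    return "minimal"

chatbot = AISystem("FAQ chatbot", False, False, False)
trader = AISystem("autonomous trading agent", True, True, True)
print(risk_tier(chatbot))  # minimal
print(risk_tier(trader))   # unacceptable-without-strict-oversight
```

    Note that nothing in the sketch asks whether the system uses transformers, reinforcement learning, or symbolic reasoning: the same implementation could land in different tiers depending on how it is deployed.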

  • Lean Compliance: A Founder's Reflection

    Lean Compliance Reflections

    I often think about the future of Lean Compliance, especially lately as I feel compliance is approaching a turning point, one we've always been heading toward but are now approaching faster due to AI. In this article, I consider the future of Lean Compliance in the context of where regulators are heading, where industry is at, what industry now needs, and what Lean Compliance offers. Navigating this space has not only shaped our company's direction but also highlighted the fundamental challenge facing compliance professionals today: an industry caught between old habits and new realities.

    The Vision Behind Lean Compliance

    I founded Lean Compliance in 2017 because I saw an industry trapped in an outdated paradigm. Too many organizations treat compliance as a documentation exercise—paper-based, procedural, reactive. They've built systems around checking boxes rather than meeting obligations and managing actual risk. Now, this was not necessarily their fault. Regulations, a significant source of obligations, were for the most part rules-based and prescriptive, enforced by adherence audits. However, obligations were changing, and organizations needed a different approach to how compliance and risk should be managed. Our goal was to support the inevitable transition toward performance and outcome-based obligations, helping companies move beyond mere documentation toward demonstrating real progress in advancing obligation outcomes. We recognized that compliance should be integrated into business operations right from the start, rather than treated as a separate function in need of future integration. In addition, we saw how effective compliance could enable organizations to operate with greater confidence when they genuinely understood and managed their risks, which is primarily a proactive and integrative behaviour.

    Where Regulators Are Leading

    Regulators have been signalling a clear direction for several decades, particularly in high-risk sectors. They're moving away from prescriptive, one-size-fits-all requirements toward performance and outcome-based obligations that focus on effectiveness over process, assurance over documentation, and managed risk over compliance theatre. This paradigm shift presents opportunities for organizations that can adapt to these changing expectations. Those that can demonstrate real effectiveness in realizing obligation outcomes—rather than just following procedures—will find themselves better positioned as regulations continue to evolve.

    Where the Market Remains

    Yet most organizations (along with external auditors) are still entrenched in paper-based and procedural compliance, even when performance and outcome-based obligations are specified. While there is comfort in the known, viewing everything through a prescriptive lens prevents organizations from realizing the benefits of being in compliance. This contributes to why many who pass audits and achieve certifications seldom improve the object under regulation: safety, security, sustainability, quality, legal, and now responsible AI obligations. The market reflects this reality in what it's asking for: technology-first solutions that promise productivity improvements without fundamental change. Companies want tools that take away reactive pain—the scramble to respond to audit findings, the stress of regulatory examinations, the endless documentation requirements. They're looking for ways to do what they've always done, just faster and with less manual effort. This creates both opportunity and challenge.
While there's clear appetite for improvement, there's resistance to the deeper transformation that truly effective compliance requires.

    The Territory We Inhabit

    Lean Compliance operates in the space between regulatory direction and market reality. Rather than being another consulting company promising incremental improvements, we focus on bridging this gap through awareness, education, transformation, and community building. We've found that many organizations simply aren't aware of how significant the gap has become between their current practices and regulatory and stakeholder expectations. Our work often begins with helping them understand where they stand and what opportunities exist. The educational component has proven essential because many don't know what being proactive, integrative, or operational looks like in practice. Sustainable change requires obligation owners who understand both the rationale behind obligations and how to operationalize them. We're not just implementing disconnected controls—we're building systems that deliver on compliance. The transformation programs we created provide structured approaches for moving from procedural to operational compliance. This involves more than new tools—it requires rethinking governance, programs, systems, and processes, and often rebuilding organizational culture around continuously meeting obligations and keeping promises. We're also working to build a community of practice among compliance professionals who are navigating similar challenges. This community serves as a source of continued learning and peer support as the profession evolves.

    Looking Ahead

    The gap between regulatory expectations and current market practices continues to widen. Organizations that remain focused on paper-based, procedural approaches will continue to struggle as regulators increasingly demand evidence of effectiveness rather than just documentation. This challenge becomes particularly evident when considering emerging obligations from AI regulations and stakeholder expectations. Meeting these obligations using paper-based, procedural compliance simply won't be enough. Compliance will require demonstrating actual performance and outcomes—how AI systems behave in practice, not just what policies exist on paper. This reality further highlights the need for operational compliance approaches.

    There seems to be increasing recognition that compliance needs to evolve toward operational approaches—where organizations invest in building systems that deliver on promises to meet obligations rather than on documentation alone. More and more are beginning to view compliance as increasing the probability of meeting business objectives rather than simply constraining them. The question is not whether, but how long industry will continue in its reactive, siloed, and procedural ways before it embraces the shift toward operational compliance. And will this timeline now be shortened due to AI?

    The organizations that embrace operational compliance now will be better positioned to turn meeting obligations into business advantages while preserving value creation. This shift offers an opportunity to move from reactive to proactive approaches, where compliance supports rather than hinders business objectives. This transformation needs informed leadership and new approaches to compliance, which we've been preparing for over the past decade.
This is why Lean Compliance is uniquely positioned to guide organizations through this critical transition. At Lean Compliance, we're always looking to connect with organizations and professionals grappling with these same tensions. If you're interested in exploring what operational compliance means for your specific context, let's start the conversation.

  • Promise Architectures: The New Guardrails for Agentic AI

    As AI systems evolve from simple tools into autonomous agents capable of independent decision-making and action, we face a fundamental choice in how we approach AI safety and reliability. Current approaches rely on guardrails—external constraints, rules, and control mechanisms designed to prevent AI systems from doing harm. But as AI agents increasingly become the actual means by which organizations and individuals fulfill their promises and obligations, we can consider a different approach: promise fulfillment architectures embedded within the agents themselves. This represents a shift from asking "How do we prevent AI from doing wrong?" to "How do we enable AI to reliably meet obligations?"

    Promise Theory, developed by Mark Burgess and recognized by Raimund Laqua (Founder of Lean Compliance) as an essential concept in operational compliance, offers a powerful framework for understanding this fundamental transformation—where AI agents serve as the operational means for keeping commitments rather than simply entities that need to be controlled through external guardrails.

    The Architecture of Compliance

    Promise Theory reveals that compliance follows a fundamental three-part structure:

    Obligation → Promise → Compliance

    This architecture exists, although it is not often explicit in current compliance frameworks. Obligations create the need for action, promises define how that need will be met, and compliance is the actual execution of those promises. Understanding this helps us see that compliance is never just "rule-following"—it's always the fulfillment of some underlying promise structure. When we apply this lens to AI agents, we discover something significant. Consider an AI agent managing customer service operations. This agent isn't just "following business rules"—it has become the actual means by which the company fulfills its promises to customers. The company has obligations to resolve issues and maintain service quality. The AI agent becomes the means of fulfilling promises made to meet these obligations through specific commitments about response times, solution quality, and escalation protocols. Compliance is the AI agent's successful execution of these promises, making it the operational mechanism through which the company keeps its commitments. Unlike current AI systems that respond to prompts, agentic AI agents must serve as the reliable fulfillment mechanism across extended periods of autonomous operation. The agent doesn't just make its own promises—it becomes the operational means by which organizational promises get kept.

    From External Constraints to Internal Architecture

    Traditional AI safety approaches focus on external constraints and control mechanisms. But understanding AI agents as promise fulfillment mechanisms highlights the need for a fundamental shift in system design. Instead of guardrails as external constraints, we need promise fulfillment architectures embedded in the AI systems themselves. This perspective shows that effective AI agents require internal promise fulfillment architectures—systems designed from the ground up to serve as reliable promise delivery mechanisms. When AI agents are designed as promise fulfillment mechanisms, they become the operational means by which promises get kept rather than entities that happen to follow rules. This becomes crucial when organizations depend on agents as their primary mechanism for keeping commitments and meeting obligations.
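    As a minimal illustration (not the author's or Promise Theory's formal notation), the Obligation → Promise → Compliance structure can be sketched in code; all class, field, and function names here are hypothetical:

```python
# Hypothetical sketch of Obligation -> Promise -> Compliance inside an agent.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Obligation:
    description: str                      # e.g. "resolve customer issues"

@dataclass
class Promise:
    obligation: Obligation                # what the promise operationalizes
    commitment: str                       # e.g. "first response within 4 hours"
    is_fulfilled: Callable[[dict], bool]  # check against observed behaviour

@dataclass
class PromiseKeepingAgent:
    promises: List[Promise] = field(default_factory=list)

    def compliant(self, situation: dict) -> bool:
        """Compliance as the execution of promises, not rule-following:
        the agent checks every promise against the situation it produced."""
        return all(p.is_fulfilled(situation) for p in self.promises)

obligation = Obligation("maintain customer service quality")
promise = Promise(obligation, "respond within 4 hours",
                  is_fulfilled=lambda s: s["response_hours"] <= 4)
agent = PromiseKeepingAgent([promise])
print(agent.compliant({"response_hours": 3}))  # True: the promise was kept
```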
For agentic AI, promise fulfillment architecture becomes the foundation that enables agents to serve as reliable operational mechanisms for keeping promises. Instead of relying on external monitoring and control, we build agents whose core purpose is to function as the means by which promises get fulfilled autonomously and reliably.

    Promise Networks in Multi-Agent Systems

    When multiple AI agents work together, Promise Theory helps us see how they can serve as the operational means for fulfilling complex, interconnected promises. Rather than monolithic compliance, we see networks of agents serving as fulfillment mechanisms for interdependent promises. An analysis agent serves as the means for fulfilling promises about accurate data interpretation, while a planning agent fulfills promises about generating feasible action sequences, and an execution agent fulfills promises about carrying out plans within specified parameters. Each agent's function as a promise fulfillment mechanism enables other agents to serve as fulfillment mechanisms for their own promises. System-level promise fulfillment emerges from this network of agents serving as operational means for keeping commitments. This becomes especially important in agentic AI systems where multiple agents must coordinate as the collective means for fulfilling organizational promises without constant human oversight. In fact, they must operationalize the commitments the organization has made regarding its obligations, particularly with respect to the "Duty of Care."

    Operational Compliance Through Promise Theory

    Raimund Laqua's work in Lean Compliance emphasizes Promise Theory as essential to understanding operational compliance. In this framework, operational compliance is fundamentally about making and keeping promises to meet obligations—operationalizing obligations through concrete commitments. This transforms how we analyze AI agent compliance. Traditional approaches view AI agents as executing programmed constraints and behavioral rules. The promise-keeping view shows AI agents operationalizing their obligations through promises and fulfilling those commitments while making autonomous decisions. The difference helps explain why some AI agents can be more reliable and trustworthy—they have clearer, more consistent promise structures that effectively operationalize their obligations and guide their autonomous behavior.

    AI Agents Enabling Human Promise Fulfillment

    Promise Theory also helps us see that when AI agents function as reliable promise fulfillment mechanisms, they can enable human agents to meet their own obligations more effectively. This creates a symbiotic relationship where AI agents serve as the operational means for human promise-keeping. Consider a healthcare administrator who has obligations to ensure patient care quality, regulatory compliance, and operational efficiency. By deploying AI agents designed with promise fulfillment architectures, the administrator can rely on these systems to consistently deliver on specific commitments—maintaining patient records accurately, flagging compliance issues proactively, and optimizing resource allocation. The AI agents become the reliable mechanisms through which the human agent fulfills their broader organizational obligations. This relationship extends beyond simple task delegation.
When AI agents are designed as promise fulfillment mechanisms, they provide humans with predictable, accountable partners in meeting complex obligations. The human can make promises to stakeholders with confidence because they have AI agents that reliably execute the operational components of those promises. This enables humans to take on more ambitious obligations and make more significant commitments, knowing they have trustworthy AI partners designed to help fulfill them. The key insight is that AI agents with embedded promise fulfillment architecture don't just complete tasks—they become part of the human's promise-keeping capability, extending what humans can reliably commit to and deliver on in their professional and organizational roles.

    Measuring Promise Assurance

    Understanding AI agent behavior through promise keeping enables evaluation approaches that go beyond simple reliability metrics to include assurance—our confidence in an agent's trustworthiness during autonomous operation. Promise consistency (promises kept / promises made) measures how reliably the agent fulfills its commitments across extended autonomous operation. Promise clarity examines how well the agent's commitments are communicated and understood. Promise adaptation evaluates how well the agent maintains its core commitments while adapting to new contexts during independent decision-making. Promise-keeping becomes not just a measure of performance, but a foundation for assurance in autonomous AI systems operating with reduced human oversight. This provides a more nuanced view of AI agent trustworthiness than simple rule-compliance measures. (A small calculation sketch follows at the end of this article.)

    Promise Architectures: The Future of Agentic AI

    Promise Theory provides an analytical framework for understanding why compliance works the way it does. By revealing the hidden promise structures underlying all compliant behavior, it helps us design, evaluate, and improve AI systems more systematically. Rather than asking "Is the AI agent following the rules?" we can ask more nuanced questions about what obligations the agent is trying to fulfill, what promises it has made about fulfilling them, and how consistently it executes those promises across independent decisions. As we make AI agents more autonomous, we need to understand how they function as the operational means for fulfilling promises and design agentic systems with embedded promise fulfillment architecture. In a world of increasingly autonomous AI agents, understanding compliance through Promise Theory offers a path toward more reliable, predictable, and assured agentic behavior where agents serve as the primary operational mechanisms for fulfilling organizational and individual promises.

    Compliance is never just about following orders—it's always about keeping promises. Promise Theory helps us see those promises clearly, providing a foundation for building AI agents that function as effective promise fulfillment mechanisms, where assurance comes from their demonstrated capability to serve as reliable means for keeping commitments rather than from imposed constraints. As AI systems become more agentic, this embedded promise fulfillment capability may prove to be the most effective approach to maintaining reliable, ethical, and trustworthy autonomous behavior that actively delivers on commitments.
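    The promise-consistency metric defined above is a simple ratio. A minimal sketch, assuming a log with one kept/broken flag per promise made (the log format is an assumption for illustration):

```python
# Promise consistency = promises kept / promises made, as named in the article.

def promise_consistency(log: list[bool]) -> float:
    """Fraction of made promises the agent actually kept over an operating period."""
    return sum(log) / len(log) if log else 0.0

# One True/False entry per promise made during autonomous operation:
history = [True, True, False, True, True, True, True, False, True, True]
print(f"promise consistency: {promise_consistency(history):.0%}")  # 80%
```

    Promise clarity and promise adaptation are harder to reduce to a single number; they would need qualitative review or scenario testing rather than a ratio.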

  • Does Your AI Strategy Pass the Ketchup Test?

    A simple test to bust through the hype

    These days, AI providers, leaders, and evangelists claim that AI technology will transform any organization's operations. Just add AI to what you're doing, and everything gets better – like adding ketchup to your food. But here's what I discovered after reviewing AI implementation plans: most aren't actually about AI at all. They're generic digital transformation playbooks with "AI" replacing whatever technology was trendy last year.

    ⚡ The Ketchup Test

    I recently reviewed an AI plan from a major organization. It looked comprehensive at first – clear values, comprehensive strategies, concrete actions. Then I tried an experiment: I replaced every occurrence of "AI" with "KETCHUP."

    Original:
    • Accelerate the integration and utilization of AI at scale
    • Empower staff with knowledge, skills, and tools to rapidly deploy AI
    • Grow an AI-first workforce to oversee and integrate AI throughout the enterprise

    After the Ketchup Test:
    • Accelerate the integration and utilization of KETCHUP at scale
    • Empower staff with knowledge, skills, and tools to rapidly deploy KETCHUP
    • Grow a KETCHUP-first workforce to oversee and integrate KETCHUP throughout the enterprise

    Both versions read like legitimate strategic initiatives. That's the problem.

    ⚡ Why This Matters

    Real AI strategy requires addressing AI-specific challenges that don't apply to other technologies: How will you handle AI hallucinations in critical decisions? What's your approach to algorithmic bias detection? How will you maintain explainability for regulators? What happens when your models degrade over time? If your strategy doesn't address questions like these, you're not planning for AI – you're planning for generic technology that happens to be called AI.

    ⚡ AI Isn't Ketchup

    Too many organizations treat AI like a condiment – something you add to existing processes to make them "better." But AI isn't ketchup. It fundamentally changes how decisions are made and how humans interact with systems. It requires new governance, different risk management, and entirely new expertise. Adding AI to a poorly designed process doesn't improve it – it amplifies existing problems at machine speed. Ketchup won't turn a badly cooked steak into a good one. It just makes it worse, faster.

    ⚡ The Challenge

    Try the Ketchup Test on your AI strategy today. Replace "AI" with "KETCHUP" and read it again. If it still makes sense, you have boilerplate, not an AI plan, and you have work to do. What you need is a deep understanding of what AI actually is, how it works, its limitations, and its genuine benefits. Not everything is better with ketchup – and not everything needs AI. The organizations that succeed with AI won't be the ones with comprehensive plans taken from last year's playbook. They'll be the ones that understand the technology well enough to know when and how to use it appropriately.
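    The test is mechanical enough to run as a few lines of Python (a minimal sketch; the word-boundary regex is simply one way to match "AI" as a standalone word):

```python
# The Ketchup Test as a tiny script: substitute "AI" (whole word only)
# and see whether the strategy still reads as a legitimate plan.
import re

def ketchup_test(strategy: str) -> str:
    return re.sub(r"\bAI\b", "KETCHUP", strategy)

plan = "Accelerate the integration and utilization of AI at scale."
print(ketchup_test(plan))
# -> Accelerate the integration and utilization of KETCHUP at scale.
```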

  • ERP vs GRC: Feed-Forward vs Feed-Back Systems

    The distinction between Enterprise Resource Planning (ERP) and Governance, Risk, and Compliance (GRC) platforms reveals a fundamental difference in operational philosophy that has significant implications for organizational effectiveness. While both systems aim to ensure organizational obligations are met, they approach this goal from opposite directions.

    Proactive versus Reactive Compliance

    ERP: The Feed-Forward Compliance System

    ERP systems exemplify feed-forward compliance architecture. They are operational systems designed around planning, forecasting, and ensuring product delivery by orchestrating all necessary resources at the right time, with the right specifications, and through the right processes. This forward-looking approach means ERP systems actively prevent problems before they occur. The feed-forward nature of ERP manifests in several ways. Production planning modules ensure materials are ordered and available before manufacturing begins. Financial planning components forecast cash flow needs and trigger procurement decisions. Human resource modules anticipate staffing requirements and initiate hiring processes. Each function is designed to identify requirements and deploy resources proactively, creating a continuous cycle of planning, execution, and adjustment that keeps operations flowing smoothly.

    GRC: The Feed-Back Compliance System

    In contrast, most GRC platforms operate as feed-back systems, focusing primarily on reporting and monitoring what has already occurred. These systems are fundamentally reactive rather than proactive, concentrating on audits, compliance reporting, and risk assessment after events have transpired. While this backward-looking approach provides valuable insights for accountability and learning, it often fails to prevent compliance failures or operational disruptions. The feed-back nature of traditional GRC systems creates inherent limitations. By the time a compliance violation is detected and reported, the damage may already be done. Risk assessments become exercises in documenting past failures rather than preventing future ones. Governance frameworks become bureaucratic reporting mechanisms rather than operational guidance systems that actively steer organizational behavior.

    The Operational Gap

    What becomes apparent when examining many GRC implementations is that they are not operational in the systems sense of the word. They lack the forward-looking, resource-orchestrating capabilities that make ERP systems effective operational tools. Instead of ensuring obligations are continuously met through proactive planning and resource allocation, GRC platforms often become elaborate documentation and reporting systems that react to problems after they manifest. This reactive posture explains why many organizations struggle with GRC effectiveness. When compliance and risk management are treated as reporting functions rather than operational imperatives, they become disconnected from the daily flow of business activities. The result is often a compliance program that exists parallel to, rather than integrated with, actual business operations.

    A Path Forward: Operational Compliance

    GRC would benefit significantly from adopting more ERP-like characteristics.
    An Operational Compliance system would function as a feed-forward compliance engine, using planning and forecasting to ensure all obligation requirements and commitments are met, risks are mitigated before they materialize, and governance objectives are achieved through proactive resource allocation and process design. Such a system would anticipate compliance deadlines and automatically trigger necessary actions, allocate resources for risk mitigation activities before threats become critical, and integrate governance requirements directly into operational workflows. Instead of asking "Are we in compliance?" an Operational Compliance system would continuously ask "How do we meet all our obligations in the presence of uncertainty?" (A minimal sketch of the two postures follows at the end of this article.)

    What's Next?

    The fundamental difference between feed-forward ERP systems and feed-back GRC platforms reflects deeper philosophical approaches to organizational management. While ERP systems actively shape future outcomes through proactive planning and resource orchestration, traditional GRC platforms remain trapped in reactive reporting cycles. Organizations seeking more effective governance, risk management, and compliance outcomes should consider how to make their GRC capabilities more operational and forward-looking, drawing inspiration from the proven effectiveness of ERP system design principles. The most successful organizations will be those that transform GRC from a backward-looking reporting function into a forward-looking operational capability that actively ensures continuous compliance and proactive risk management.
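    To make the contrast concrete, here is a minimal sketch in Python; the obligation fields and lead times are illustrative assumptions, not from any ERP or GRC product. The feed-forward function triggers work before a deadline is at risk, while the feed-back function can only report what has already gone wrong:

```python
from datetime import date, timedelta

# Illustrative obligations with lead times (all values hypothetical).
obligations = [
    {"name": "emissions report", "due": date.today() + timedelta(days=20), "lead_time_days": 30},
    {"name": "safety training refresh", "due": date.today() + timedelta(days=90), "lead_time_days": 30},
]

def feed_forward(obligations, today=None):
    """ERP-style: look ahead and trigger work BEFORE an obligation is at risk."""
    today = today or date.today()
    return [o["name"] for o in obligations
            if today >= o["due"] - timedelta(days=o["lead_time_days"])]

def feed_back(events):
    """Typical GRC-style: report violations AFTER they have occurred."""
    return [e["name"] for e in events if e["violated"]]

print("start now:", feed_forward(obligations))   # ['emissions report']
print("report:", feed_back([{"name": "late filing", "violated": True}]))  # ['late filing']
```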

  • Minimal Viable Performance (MVP)

    Minimal Viable Compliance / Performance

    Outcomes are the effects of capabilities, which means that if you want to advance your outcomes you need to advance your capabilities. The purpose of a management program is to adjust system set-points to the values needed (i.e. Minimal Viable Performance - MVP) to achieve the desired outcomes. This works the same way a thermostat works in your home. If you want to feel warmer, you need to set the thermostat to a higher value. It is then the responsibility of the heating system to first achieve and then maintain that value. This is called a persistent achievement obligation. You may find your compliance systems do not have the capabilities you need to achieve and then maintain your higher standards.

    There are three categories of measures to help you know if your systems are operating at levels to meet persistent achievement obligations. These are:

    • Measures of conformance - evidentiary artifacts that demonstrate conformance to requirements
    • Measures of performance - abilities to meet compliance objectives
    • Measures of effectiveness - progress against compliance outcomes towards zero: non-conformances, injuries, violations, emissions, etc.

    Internal and external audits mostly focus on verifying conformance. However, the purpose of the compliance function goes further: to ensure that safety, quality, environmental, and regulatory systems are operating at the levels needed to achieve targeted outcomes. This requires an integrated approach focused not only on conformance of each element but also on how each element performs in the context of the entire system.
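    The thermostat analogy can be made concrete with a toy control loop. This is only an illustrative sketch (the gain, values, and function names are assumptions): a management program raises the set-point, and the underlying system must first achieve it and then hold it there.

```python
# Thermostat-style sketch of a persistent achievement obligation:
# first achieve the set-point (MVP), then maintain it.

def heating_system_step(current: float, set_point: float, gain: float = 0.5) -> float:
    """Close part of the gap to the set-point each cycle (toy proportional control)."""
    return current + gain * (set_point - current)

performance = 15.0   # current level of the compliance system
mvp = 21.0           # Minimal Viable Performance set by the management program
for cycle in range(8):
    performance = heating_system_step(performance, mvp)
print(round(performance, 2))  # ~20.98: achieved, and now held near the set-point
```

    If the system lacks the capability (here, the gain) to close the gap, no amount of raising the set-point produces the outcome, which is the article's point about capabilities preceding outcomes.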

  • Bounded-set Versus Centred-set Compliance

    Understanding compliance mindsets using set theory TLDR Those involved with compliance will eventually observe two different mindsets at work. Each one is concerned about compliance yet differ in their focus, goals, objectives and almost everything else. Both of these groups have something to offer but when not understood can create confusion and misalignment with respect to compliance. The first one is concerned with protecting the organization by staying between the lines. This one follows Compliance 1 practices as discussed in our recent post . The other wants to change the lines to achieve higher standards and better outcomes for all stakeholders. This one follows Compliance 2 practices. The first is all about following the rules and keeping things the same, the second one invites change to make progress. This has all the makings of conflict unless a way can be found for each to work together. Organizations that desire to meet all their stakeholder obligations will need to effectively contend with each compliance group. An integrative approach offers a path forward that recognizes the benefits of both when combined may increase the probability of an organization keeping all its promises. In this article, we will use the concept of social sets to better understand these two groups to see how they might work as one for the benefit of the organization as a whole. Social Sets Bounded-Set and Centred-Set The concept of sets and set theory is well established and used to describe collections of objects from which much of our mathematics is derived. Set theory is also used to better understand social groups and communities specifically using the concepts of bounded and centred sets. Roughly speaking, bounded-sets are defined by boundaries and our relation to them (in or out). Whereas centred-sets are defined by a centre (e.g. values) and our direction of movement relative to it (advancing towards or retreating away). For the purpose of discussion bounded and centred sets can be mapped to Compliance 1 and Compliance 2 practices as shown in the next figure: Bounded-Set Compliance versus Centred-Set Compliance We will explore each one in turn except the Fuzzy-set (C0) which describes a group not concerned about compliance. Bounded-set Compliance (C1) A bounded-set can be defined by these characteristics: A focus on boundaries - are we in or out? Static - evaluated at a fixed point in time Homogeneous practices, variety of values "Adherence” mindset Some say we tend to think mostly in terms of bounded-set categories. We think of characteristics that define one group compared with another. This seems to be the case for Compliance 1 which thinks of compliance in terms of passing a boundary (e.g. an audit) that defines whether or not we are in compliance or out. Boundaries in compliance consist of such things as: inspections, controls, management reviews, governance, obligations & risk registers, and so on. Evaluation is done at a point in time by identifying minimum thresholds and standards with respect to these characteristics. The bounded-set is hard at the edges and soft in the middle. It requires substantial training and discussion to create consistent behaviours conforming to desired standards. It also requires an “adherence” mindset. Community is created by adhering to common practices. Improvement is hard to define in the bounded-set. There is no change to see. Once you are "in compliance" what else is there to improve? 
Transformation if it exists at all is more about repairing the fences than moving the boundaries towards a higher standard or ideal. All this makes getting "in compliance" a barrier that is difficult for many to obtain. However, once it is achieved many consider the hard part to be done and what is left to do is only maintenance. Monitoring the boundaries and making repairs (i.e. closing gaps) are key activities for bounded-set compliance that are in the "in" group. Centred-set Compliance (C2) A centred-set can be defined by these characteristics: A focus on a centre (values, an ideal, etc.) - are we heading towards or away? Dynamic - evaluated using multiple points over time Homogeneous values, variety of practices “Progress” mindset Centred-set compliance is concerned by the direction you are heading towards or away from the centre or ideal. For compliance the direction either advances or hinders the creation of compliance outcomes identified by goals, targets, and objectives to create what we don't already have. Evaluation is based on measuring progress towards an ideal over a period of time. This requires multiple data points to confirm direction and progress. The centred-set is soft at the edges and hard at the middle (the ideal). Community is created by bringing people together based on commonly-shared interests and values. You might call this a “missional” mindset. Centred-set compliance requires educating stakeholders who are assumed to have a “bounded-set” mindset which creates additional challenges. Centred-set compliance is all about change which is necessary to make progress. However, transformation is less about improving what is and more about creating what isn't which is a riskier endeavour. Centred-set compliance has lower barriers to get started. All you need is a group of people who are passionate about creating stakeholder value. However, that is harder than it might seem. Passion alone is seldom enough. Trying to achieve ambitious (perhaps, even necessary) goals without a critical mass of support often will lead to failed initiatives. In addition, without structure and discipline these initiatives are often poorly managed which also contributes to failure. Is it one or the other or both or something else? When organizations begin to take ownership of their obligations they start their compliance journey usually with bounded-set compliance. The belief is that organizations benefit from structure and discipline to change values and behaviours. By repeating common practices a compliance culture and community is created. As organizations mature in their compliance they may find that they don’t need the structures as much to be their tutor. Organizational and personal conscience informed by previous habits and practices may replace adherence to prescriptive rules. Organizations may also now benefit from having a community to help keep them in line. However, a key problem is that values, beliefs, and community have all been shaped by practices around the boundaries. Organizations in the bounded-set face the wrong direction for advancing compliance outcomes. They are looking for holes in the wall and are not facing the direction they should be heading to meet all their obligations. No wonder this can be a source of conflict with centred-set compliance groups. Bounded-set compliance groups often do not realize that passing and maintaining the boundary was never the end but rather the beginning. There is a higher standard to obtain. 
Unfortunately, those in the bounded-set are often so overwhelmed and preoccupied with maintaining the boundaries (the walls) that they don't have the resources to make any progress towards a “centre”, no matter how important that may be.

Conversely, organizations that start with a centred-set approach have their own advantages and disadvantages. One key advantage is that it attracts those who are passionate about the end goals, although perhaps not so much about how to get there. Nevertheless, centred-set compliance groups can bring needed energy and enthusiasm to drive compliance to higher levels and achieve more.

Centred-set compliance's largest challenge is contending with bounded-set compliance groups. Centred-set groups are often asked, metaphorically, to fit a square peg in a round hole – with the hole being in the boundary and nowhere near the centre they want to move towards. The two groups may as well be speaking different languages. Transforming one group into the other presents many challenges, many of them similar to those of combining two cultures. Left to themselves, the two will operate independently as silos and not benefit from each other. The solution for compliance may not be to add one to the other, but to have something else altogether.

Integrative-set Compliance (C3)

Clearly, compliance needs to stay between the lines AND change the lines simultaneously if it wants to meet all of its stakeholder obligations: mandatory and voluntary. What is needed is another set that is integrative in nature and focused on the whole. Integrative means combining two or more things to form an effective unit or system – precisely what compliance needs. We can define this set by:

- A focus on the connections - are we working as a whole?
- Continuous assessment
- Integrative values and behaviours
- “Holistic” mindset

It is by managing connections that organizations can harness the power of both bounded-set and centred-set compliance. This is not about achieving balance or adding one to the other. Instead, it is about establishing essential capabilities that work together, reinforcing each other to achieve the objectives of both. Not simple, but not impossible either.

Organizations that desire to meet all their stakeholder obligations will need to contend effectively with bounded-set and centred-set compliance groups. An integrative approach offers a path forward: recognizing the benefits of both and combining them may increase the probability of an organization keeping all its promises.
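To make the contrast between the two evaluations concrete, here is a minimal sketch in Python. The scores, threshold, and function names are illustrative assumptions rather than anything from a compliance standard; the point is only that bounded-set status is a static, point-in-time test, while centred-set status is a direction measured over multiple points:

```python
AUDIT_THRESHOLD = 80  # hypothetical minimum score to count as "in compliance"

def bounded_set_status(score_at_audit: float) -> bool:
    """Bounded-set (C1): a static, point-in-time test -- in or out."""
    return score_at_audit >= AUDIT_THRESHOLD

def centred_set_status(scores_over_time: list) -> str:
    """Centred-set (C2): a dynamic test over multiple points -- what
    matters is the direction of travel relative to the centre (ideal)."""
    if len(scores_over_time) < 2:
        return "insufficient data points to confirm direction"
    trend = scores_over_time[-1] - scores_over_time[0]
    return "advancing towards the ideal" if trend > 0 else "retreating from the ideal"

# A group can pass the audit (inside the bounded set) while drifting away
# from the ideal -- the very source of conflict described above.
history = [92, 88, 85, 82]  # hypothetical quarterly compliance scores
print(bounded_set_status(history[-1]))  # True: "in compliance" today
print(centred_set_status(history))      # "retreating from the ideal"
```

The sketch also shows why an integrative (C3) view needs both measures: either one alone can report success while the other reports trouble.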

  • Engineered Compliance: Mapping Obligations to Outcomes in Regulated Industries

By Raimund Laqua, PMP, P.Eng., Founder and Chief Compliance Engineer at Lean Compliance

I've spent 30 years in the trenches of compliance, and one question keeps coming up: "Are all compliance obligations implemented as controls?" This isn't just a theoretical question. It has real consequences for safety, operations, and organizational success. I've walked through facilities where managers proudly showed me comprehensive compliance documentation, yet their controls weren't effectively addressing the risks they were designed to manage.

Many organizations treat compliance as a simple equation: identify requirements → implement controls → document everything → pass audits. But when I look at what actually happens in practice, I see something different. Organizations can check all the right boxes and still fail to achieve what matters most: the outcomes that regulations were intended to achieve.

In this article, I'm sharing what I've learned from three decades of helping companies move from procedural to operational compliance. This shift isn't just about better compliance—it's about safer operations, improved efficiency, and sustainable success in regulated industries.

The Problem with Controls

Early in my career, I worked with a pipeline company that was dealing with issues across several areas: their management systems had gaps, they were experiencing worker safety incidents, pipe handling problems were occurring, and there were environmental protection concerns. They had implemented control systems with procedures covering these areas, but the controls weren't effectively preventing these issues from recurring. Having controls in place doesn't automatically translate into the protection those controls were intended to provide.

This is a pattern I've seen repeatedly across industries—oil & gas, healthcare, manufacturing, you name it. Companies invest in comprehensive control systems, create detailed procedures, and maintain voluminous records. Then they're shocked when incidents occur or when regulators issue findings.

The reality is that traditional control-based approaches often emphasize implementation over effectiveness. They're built around passing audits rather than achieving outcomes. And they typically react to problems rather than preventing them. I've seen this reactive cycle play out hundreds of times:

1. A finding or incident occurs
2. The organization implements more controls and documentation
3. Things look better on paper
4. Another issue occurs in a different area
5. Rinse and repeat

This approach isn't just ineffective—it's exhausting. It burns out compliance professionals across all domains, frustrates operations teams, and wastes resources. Worst of all, it doesn't adequately protect what matters – it doesn't actually work.

Companies that break this cycle take a fundamentally different approach. They focus on what actually works in the field, not just what controls are documented in the office. They build systems that detect problems before they manifest. Most importantly, they design their programs around the outcomes they need to achieve, not just the controls they need to implement.

When companies make this shift, something remarkable happens. They create an upward momentum where better outcomes lead to increased stakeholder trust, which supports more effective compliance, which delivers even better outcomes—a virtuous cycle that creates real value.
What Regulators Actually Want

Working alongside regulatory professionals for decades has given me an interesting perspective. While many people have a narrow view of regulators, the reality is much more nuanced. Modern regulatory frameworks contain four distinct types of obligations, a distinction that is often overlooked:

Rules-based requirements tell you exactly what to do. When a regulation states "pressure vessels must be inspected every 36 months," there's no ambiguity. You either did the inspection on schedule or you didn't.

Practice standards define approaches you need to follow. Requirements to "implement management of change procedures" don't prescribe exact steps, but they do require specific processes to be in place and functioning.

Performance-based requirements specify what you need to achieve. When regulations require "99.95% availability of safety systems," they don't specify how you achieve it—what matters is that you do.

Outcome-based obligations focus on the protection you need to provide. Requirements to "prevent releases" or "ensure process safety" focus on the ultimate goal without specifying methods or performance standards.

I've watched this evolution unfold over my career. Twenty years ago, most regulations were prescriptive rules. Today, regulators increasingly focus on performance and outcomes, giving organizations flexibility in how they achieve compliance while holding them accountable for results.

Here's the thing: the approach that works for rules-based requirements fails miserably for outcome-based ones. This disconnect explains something I've observed repeatedly: organizations can be simultaneously "in compliance" according to their documentation but failing to deliver the outcomes regulations were intended to ensure.

Matching Your Approach to Your Obligations

Over time, I've developed a practical framework for matching compliance approaches to the primary types of obligations:

For rules-based requirements: traditional controls with verification work fine. When regulations specify exact inspection frequencies or precise parameter settings, implementing those specific controls and verifying they happened is appropriate. I worked with a medical device manufacturer that needed to document specific quality checks. For these clear requirements, we implemented straightforward controls and verification processes, which worked perfectly for these types of obligations.

For practice standards: you need functioning processes, not just documented ones. For requirements specifying management systems or processes, having documentation isn't enough—those processes must function effectively in practice. At an energy company, we moved beyond just documenting their management of change process to ensuring it actually managed the risks resulting from planned changes. This shift from "having a process" to "having a process that works" made all the difference.

For performance-based requirements: you need monitoring and adaptive approaches. When regulations specify performance targets, you need systems that continuously monitor performance and adapt when targets aren't being met. A refinery implemented real-time monitoring of their safety-critical systems rather than just periodic checks. This allowed them to address potential issues before they affected system reliability, consistently meeting their 99.9% availability requirements for emergency shutdown systems.
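To illustrate what that kind of continuous monitoring might look like, here is a minimal sketch in Python. The target, window size, alert margin, and sample data are all assumptions for illustration; a real system would pull status from instrumentation and data historians rather than a hard-coded list:

```python
from collections import deque

TARGET_AVAILABILITY = 0.999  # e.g. a 99.9% availability requirement
WINDOW = 24 * 30             # rolling window of hourly samples (~30 days)

class AvailabilityMonitor:
    """Tracks rolling availability and flags degradation early,
    before the performance target is actually breached."""

    def __init__(self, target: float = TARGET_AVAILABILITY, window: int = WINDOW):
        self.target = target
        self.samples = deque(maxlen=window)  # True = system up this hour

    def record(self, system_up: bool) -> None:
        self.samples.append(system_up)

    def availability(self) -> float:
        if not self.samples:
            return 1.0
        return sum(self.samples) / len(self.samples)

    def needs_attention(self, margin: float = 0.0005) -> bool:
        # Alert while still above target but trending close to it,
        # so issues are addressed before the requirement is missed.
        return self.availability() < self.target + margin

monitor = AvailabilityMonitor()
for hour_ok in [True] * 2000 + [False, True, True, False]:
    monitor.record(hour_ok)
if monitor.needs_attention():
    print(f"Availability {monitor.availability():.4%}: investigate before a breach")
```

The design choice worth noting is the margin: the point of performance-based compliance is to adapt before the target is missed, not to report afterward that it was.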
For outcome-based obligations: you need integrated programs that address all factors. For requirements focused on outcomes like safety or environmental protection, you need comprehensive programs that address technical, human, and organizational factors. With a pipeline operator, we helped develop a holistic approach to process safety management that went beyond inspections to address all factors affecting pipeline safety. This program-based approach delivered much better protection than their previous control-centric system.

This framework isn't just another approach to compliance—it's what's needed to meet all your obligations, not just the ones you are most familiar with (a compact summary in code appears at the end of this article). In addition, the further you move from rules toward outcomes, the more you need to shift from documentation to operational effectiveness.

Four Practical Steps to Transform Your Approach

Based on my experience helping organizations make this transition, I've developed a four-step process at Lean Compliance called The Proactive Certainty Program™. It's designed to help companies move from procedural to operational compliance.

1. ORIENT

Start by understanding which direction you are heading. This begins with a comprehensive scorecard assessment that evaluates 10 essential aspects of operational compliance, revealing gaps in your compliance approach and readiness for transformation that typical reviews often miss. During this activity:

- Identify your highest-risk areas and greatest improvement opportunities
- Evaluate your operational compliance across the 10 essential aspects
- Determine what's preventing you from being more proactive
- Assess your readiness for transforming your approach

This step is about honest assessment. Many organizations believe their compliance programs are more effective than they actually are. The orientation phase provides clarity on the true starting point.

2. MAP

With a clear understanding of the current situation, develop a practical roadmap. This 13-week process includes structured learning objectives that teach you what you need to know about operational compliance, combined with hands-on work to create a viable pathway from the current state to where you need to be. During this phase:

- Learn the essential concepts and principles that drive effective operational compliance
- Evaluate current approaches against what actually works in similar organizations
- Design a roadmap toward what's called "Minimal Viable Compliance"
- Create a clear pathway from the current state to operational compliance

This mapping creates the blueprint for transformation. It's not about theory—it's about establishing a practical path forward based on specific situations and resources while building the knowledge foundation needed for success.

3. OPERATIONALIZE

Implementation is where many transformations fail. The focus must be on building what's essential: establishing practices that keep organizations between the lines and ahead of risk in their operations, rather than creating more documentation. During this step:

- Establish the essential practices required for operational compliance
- Implement the minimum necessary foundation rather than trying to boil the ocean
- Create operational mechanisms that make compliance part of regular work
- Develop monitoring systems that provide early warning of potential issues

This activity ensures you build a foundation that delivers real protection before expanding to address less critical areas. It's about focusing resources where they matter most to stay between the lines and ahead of risk.
4. ELEVATE

With the essentials in place, performance can be elevated and outcomes advanced. This phase involves implementing continuous improvement cycles that steadily advance capabilities beyond minimum requirements. During this activity:

- Systematically raise standards beyond minimal compliance
- Advance capabilities to achieve better outcomes with less effort
- Implement improvement cycles based on lean principles
- Realize the full benefits of proactive compliance

This elevation phase transforms compliance from a cost center into a value creator. Organizations that reach this level consistently outperform their peers in both compliance and operational metrics.

These four steps—ORIENT, MAP, OPERATIONALIZE, ELEVATE—aren't academic. They've guided dozens of organizations from reactive, procedure-focused compliance to proactive, operationally oriented programs. The transformation doesn't happen overnight, but each step delivers tangible benefits that make the journey worthwhile.

The Path Forward

So, let's return to our original question: are all compliance obligations implemented as controls? After 30 years in the field, my answer is clear: while controls are essential for rules-based requirements, they're insufficient for performance- and outcome-based obligations. Those require operational approaches focused on what actually happens in the field, not just what's documented in the office.

I've seen organizations waste millions on compliance efforts that look good on paper but fail to deliver real value. I've also seen organizations transform their approach and achieve better outcomes with fewer resources. The difference comes down to recognizing that compliance isn't primarily a procedural challenge—it's an operational one. It's about ensuring that what happens in the field consistently delivers the outcomes regulations were intended to protect.

The organizations that thrive in today's complex regulatory environment are those that:

- Take ownership of their obligations rather than just reacting to audits
- Establish real-time monitoring systems rather than waiting for periodic checks
- Continuously improve their approach based on operational feedback

This transformation isn't just about better compliance—it's about safer operations, improved efficiency, and sustained organizational success. It's about protecting what matters while eliminating activities that don't add value.

In my experience, this isn't a journey you can skip or shortcut. There's no magical tool that will transform your compliance program overnight. But by following a structured approach and focusing on what actually works, you can steadily move from where you are to where you need to be.

The companies I've seen make this journey successfully share one characteristic: they're committed to doing the right thing, not just checking the right boxes. They see compliance not as a burden to be minimized but as a capability to be developed. If that describes your organization, you're already on the right path. And if you're struggling with compliance that feels heavy on procedures but light on effectiveness, there's a better way forward. I've seen it work repeatedly across industries, and I'm confident it can work for you too.
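For readers who think in code, here is the matching framework from earlier distilled into a simple lookup. This is a sketch only: the enum names and approach descriptions paraphrase the four obligation types discussed above and are not a prescribed taxonomy:

```python
from enum import Enum, auto

class ObligationType(Enum):
    RULES_BASED = auto()        # e.g. "inspect pressure vessels every 36 months"
    PRACTICE_STANDARD = auto()  # e.g. "implement management of change procedures"
    PERFORMANCE_BASED = auto()  # e.g. "99.95% availability of safety systems"
    OUTCOME_BASED = auto()      # e.g. "prevent releases"

# The further down the list, the more the emphasis shifts
# from documentation toward operational effectiveness.
APPROACH = {
    ObligationType.RULES_BASED: "traditional controls with verification",
    ObligationType.PRACTICE_STANDARD: "functioning processes, not just documented ones",
    ObligationType.PERFORMANCE_BASED: "continuous monitoring with adaptation",
    ObligationType.OUTCOME_BASED: ("integrated programs addressing technical, "
                                   "human, and organizational factors"),
}

def recommended_approach(obligation: ObligationType) -> str:
    """Look up the compliance approach suited to an obligation type."""
    return APPROACH[obligation]

print(recommended_approach(ObligationType.OUTCOME_BASED))
```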
Raimund Laqua, PMP, P.Eng., is Founder and Chief Compliance Engineer at Lean Compliance Consulting, Inc., which he founded in 2017. With over 30 years of consulting experience across North America, he focuses on helping ethical, ambitious companies in highly-regulated, high-risk industries improve the effectiveness of their compliance programs. His expertise spans safety & security, quality, regulatory, and environmental objectives across multiple sectors, including oil & gas, energy, pharmaceutical, medical device, financial, technology, and government. He writes weekly blog articles, is the author of an upcoming book on operational compliance, and regularly speaks on risk, compliance, lean, and responsible and safe AI.

  • AI's Most Serious Blindspot and Bias

Working with AI over the past year opened my eyes to a systemic problem: AI systems are stuck in the past. This creates both a serious blindspot and a serious bias.

It's a blindspot because AI systems literally cannot "see" emerging trends, innovations, or approaches that aren't well-represented in their training data. They have a gap in their perception of what's happening at the leading edge of any field.

It's also a bias because these systems are statistically weighted toward dominant patterns in their training data. They're biased toward what was common, established, or traditional, and against what's novel, emerging, or revolutionary—even when the newer approaches might be superior.

The two problems reinforce each other: the blindspot creates the bias, and the bias makes it harder to overcome the blindspot - a vicious cycle that keeps these systems anchored in the past.

⚡️ What I Discovered in Practice

Every time I ask ChatGPT about risk and compliance, I get the same old story—procedural compliance with its reactive, audit-focused approach. No surprise there. That's how most companies still operate, and that's what fills the training data.

But here's the thing: forward-thinking organizations are already moving toward something different. They're embracing operational compliance—integrative, proactive, and risk-based—to meet modern regulatory demands that focus on performance and outcomes. This shift may be the future, but it barely exists in AI's world. The data doesn't show it enough, so the AI rarely mentions it.

I've tried everything. Even when I spell out operational compliance in my prompts, the AI keeps drifting back to the old ways. It's frustrating to watch traditional approaches seep into responses about the future simply because they're what the system has seen most often.

Sure, some principles remain constant—like the laws of physics. But strategies and methodologies evolve. That's the painful irony here: the very tool I hoped would help generate fresh insights is handcuffed by yesterday's patterns. Maybe Hume had it right all along: data shows what is—not what should be.

⚡️ Breaking Free From Outdated Approaches

To get past this limitation, I've learned to:

- Question the responses: "What emerging shifts might you be missing here?" (one way to automate this is sketched below)
- Add my own knowledge about current transitions that haven't made it into the data yet
- Build better reference materials focused on innovative approaches
- Look for tools that flag when responses are stuck in outdated thinking
- Remember that AI shows what was common, not what's becoming common

We need the past to learn from, but we can't let it trap us there. By pushing against the limits of probability-based responses, we can use these tools while holding onto our uniquely human ability to imagine what's never existed before.
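One way to put the first two tactics into practice is a generate-critique-revise prompt pattern. The sketch below is an illustration only: ask_llm is a hypothetical stand-in for whatever model API you use, not a call from any real library:

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for your model call (an API client, a local model, etc.).
    Replace with a real implementation; this stub exists only so the
    pattern below is self-contained."""
    raise NotImplementedError("wire up your own model call here")

def ask_with_recency_check(question: str, domain_context: str) -> str:
    # Pass 1: the ordinary answer, seeded with your own knowledge of
    # current transitions that may be missing from the training data.
    first = ask_llm(
        f"{question}\n\nEmerging practice to take into account:\n{domain_context}"
    )
    # Pass 2: force the model to question its own defaults.
    critique = ask_llm(
        "Review the answer below. What emerging shifts, newer approaches, "
        "or recent changes might it be missing because they are "
        "under-represented in training data?\n\n" + first
    )
    # Pass 3: revise the answer with the critique applied.
    return ask_llm(
        f"Revise the answer to address this critique.\n\n"
        f"Answer:\n{first}\n\nCritique:\n{critique}"
    )
```

The pattern does not cure the blindspot, but it makes the bias visible by asking the model to argue against its own most probable answer.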

© 2017-2025 Lean Compliance™ All rights reserved.