
  • Double Your Capacity to Deliver Total Value

    Taiichi Ohno's Secret to Delivering Total Value

To understand this approach, we need to return to the origins of LEAN manufacturing, when Taiichi Ohno first introduced it at Toyota in the 1950s. While Ohno is widely known as the father of LEAN who taught waste removal, standard work, and continuous flow, there's a crucial element of his approach that often gets overlooked. Ohno's transformational insight (not really a secret) was that the production leader should "break" the standard by continuously improving it. When you achieve an improvement that allows you to remove your best person from the production line, what that person does next becomes the key to exponential growth rather than incremental gains. These freed-up resources didn't disappear—they worked on creating further improvements that resulted in even more people being removed from the line. Through this compounding effect, Ohno eventually had enough people to start an entire second production line. Instead of achieving fractional improvements, he was able to double his capacity using existing resources.

As Ohno explained: "Making an improvement that can take one person out results in just one person's cost being saved. If you take that person and have her make improvements, you start getting savings of two, three, four, and five people and so forth. Taking out the best person and making her improve the rest is really effective."

This same principle applies to creating Total Value through productivity and compliance programs. You begin by reducing waste, standardizing work, and streamlining workflow—but that's only the foundation of what's possible. The real transformation happens when freed-up resources from reactive, unproductive activities are redirected toward proactive, productive work. These resources can then anticipate changes, address root causes, and introduce new capabilities that keep the organization ahead of risk, operating between the lines, and staying on-mission.
By following this approach, organizations can double their capacity to meet not just regulatory obligations, but all their obligations—using the resources they already have. The capacity for dramatic improvement often already exists within organizations; it simply requires a more holistic approach to unlock it.
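The compounding effect Ohno describes can be sketched as a toy model. This is our own illustration, not anything from Ohno: we simply assume each freed-up person, redirected to improvement work, frees roughly one more person per cycle, so freed capacity compounds until a second line can be staffed.

```python
# Toy model of Ohno's compounding improvement effect.
# Assumption (ours, for illustration only): each person redirected to
# improvement work frees roughly one additional person per cycle.

def cycles_to_double(line_size: int, freed_per_improver: int = 1) -> int:
    """Cycles until freed capacity can staff a second line of `line_size`."""
    freed = 1   # the first improvement frees one person
    cycles = 1
    while freed < line_size:
        # Every freed person works on improvements that free more people.
        freed += freed * freed_per_improver
        cycles += 1
    return cycles

if __name__ == "__main__":
    # With capacity doubling each cycle, a 16-person line is matched
    # after 5 cycles rather than 16 one-off savings.
    print(cycles_to_double(16))
```

The point of the sketch is the shape of the curve: one-off improvements save linearly, while redirecting freed people into further improvement saves geometrically.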

  • When Automation Hides Waste

    Applying Lean to Digital Waste

The digital transformation has fundamentally changed how work gets done, but it has also created a new challenge for operational excellence. While LEAN methodology has long focused on eliminating waste in manufacturing and physical processes, the rise of digital operations has introduced new forms of waste that are often harder to see and understand. Today's organizations increasingly operate through layers of software, automation, and algorithms that obscure the reality of what's actually happening in their processes. This digital opacity creates a fundamental problem: you cannot improve what you cannot see. As more organizations cross the threshold where digital processes outnumber physical ones, the need to identify and eliminate digital waste becomes critical to maintaining operational excellence.

The Visibility Problem in Digital Operations

Speed, efficiency, and effectiveness are not synonymous. When organizations prioritize doing things faster through automation, they often inadvertently conceal the very waste that LEAN methodology seeks to eliminate—over-processing, excessive movement, and other forms of operational inefficiency. More critically, automation buries operational reality within layers of code, making processes invisible to the stakeholders and decision-makers who need to understand them. What actually happens becomes locked away in digital black boxes, inaccessible to those responsible for improvement and oversight. The rise of AI has both amplified this challenge and brought it into sharp focus. As organizations face new obligations for transparency and explainability in their AI systems, they're discovering that the visibility problem extends far beyond artificial intelligence. This need for transparency was always essential once we entered the digital era—we simply didn't recognize its urgency.
The critical difference today is that many organizations have crossed a threshold where digital processes outnumber physical ones. While this shift doesn't apply to every industry, it represents the new reality for a significant portion of the business world. This makes the LEAN principle of visibility—the practice of "walking the Gemba" to see what's actually happening—more important than ever. You cannot improve what you cannot see, and in our increasingly digital world, automation has made it easier to operate blindly. The challenge isn't just maintaining visibility; it's actively creating it in environments where the real work happens behind screens rather than on factory floors.

The Eight Digital Wastes

To address digital waste, we must first identify it. Here are the eight traditional LEAN wastes translated into their digital equivalents:

1. Overproduction → Over-Engineering/Feature Bloat: Building more features than users need or want. Creating complex solutions when simple ones would suffice, or developing features "just in case" without validated demand.

2. Waiting → System Delays/Loading Times: Users waiting for pages to load, API responses, system processing, or approval workflows. Also includes developers waiting for builds, deployments, or code reviews.

3. Over-processing → Excessive Processing/Computations: Using more computational power than necessary to achieve desired outcomes. This includes deploying large language models for simple text tasks that simpler algorithms could handle, running complex AI models when rule-based systems would suffice, or using resource-intensive processing when lightweight alternatives exist. The massive compute requirements of modern AI often exemplify this waste.

4. Inventory → Technical Debt: Accumulated shortcuts, suboptimal code, outdated dependencies, architectural compromises, and deferred maintenance that slow down future development and increase system fragility. This includes both intentional debt (conscious trade-offs) and unintentional debt (poor practices that compound over time).

5. Motion → Inefficient User Interactions: Excessive clicks, complex navigation paths, switching between multiple applications to complete simple tasks, or poor user interface design that requires unnecessary user movements and interactions.

6. Defects → Bugs/Quality Issues: Software bugs, data corruption, system errors, security vulnerabilities, or any digital output that doesn't meet requirements and needs to be fixed or reworked.

7. Unused Human Creativity → Underutilized Digital Capabilities: Not leveraging automation opportunities, failing to use existing system capabilities, or having team members perform manual tasks that could be automated. Also includes not utilizing data insights or analytics capabilities.

8. Transportation → Non-Value-Added Automation: Automating processes that don't actually improve outcomes or create value—like automated reports no one reads, robotic processes that move data unnecessarily between systems, or AI features that complicate rather than simplify user workflows. The automation itself becomes the waste, moving work around without improving it.

Apply LEAN to Reduce Digital Waste

Understanding digital waste is only the first step. Organizations must actively work to make their digital operations as transparent and improvable as physical processes once were. Here's how to apply these concepts:

Create Digital Gemba Walks: Establish regular practices to observe digital processes in action. This might include reviewing system logs, monitoring user journeys, analyzing performance metrics, and sitting with users as they navigate your systems.

Implement Visibility Tools: Deploy monitoring, logging, and analytics that make digital processes observable. Create dashboards that show not just outcomes, but the steps and resources required to achieve them.

Question Automation: Before automating any process, ask whether the automation truly adds value or simply moves work around. Ensure that automated processes remain observable and improvable.

Address Technical Debt Systematically: Treat technical debt as you would physical inventory—track it, prioritize its reduction, and prevent its accumulation through better practices.

Optimize for Actual Value: Regularly audit your digital systems to identify over-processing, unnecessary features, and inefficient interactions. Focus computational resources on tasks that truly benefit from them.

Design for Transparency: When building new digital processes, make observability and explainability first-class requirements, not afterthoughts.

The path to eliminating digital waste begins with increased transparency. Organizations must prioritize making their digital processes observable and understandable, creating the visibility necessary to identify, measure, and systematically eliminate these new forms of waste. Only through this enhanced transparency can we unlock the true potential of digital operations while maintaining the continuous improvement capabilities that drive lasting operational excellence.
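As a minimal sketch of the "visibility tools" idea (our own illustration, using only the Python standard library; the `approve_invoice` step and its threshold are invented), a digital process step can be wrapped so that its duration and outcome are logged instead of disappearing into a black box:

```python
# Sketch: make a digital process step observable by logging its name,
# outcome, and duration. Real systems would feed this into proper
# monitoring infrastructure; this only shows the principle.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)

def observable(step):
    """Decorator that logs status and duration of each call to `step`."""
    @functools.wraps(step)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = step(*args, **kwargs)
            status = "ok"
            return result
        except Exception:
            status = "error"
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("%s status=%s duration_ms=%.1f",
                         step.__name__, status, elapsed_ms)
    return wrapper

@observable
def approve_invoice(amount: float) -> bool:
    # Stand-in for a real workflow step; the 10,000 limit is made up.
    return amount < 10_000

print(approve_invoice(2_500.0))
```

Wrapping every step this way is one concrete form of a "digital Gemba": the log becomes a walkable record of what the process actually did, not what the flowchart says it does.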

  • Management PDCA - Hero or Zero?

    If you are responsible for management systems, you have most likely noticed the elevation of continuous improvement, and specifically the use of a Plan-Do-Check-Act (PDCA) cycle, in related standards, guidelines, and even regulations. Here are a few examples: API RP 1173, ISO 9001, ISO 22301.

The use of improvement cycles has been effective in specific contexts and areas, so it is no surprise to see PDCA (or similar) cycles also being applied to management programs and systems. However, guidance on what PDCA is to do, and how it is to work, at the systems level has been sparse. At a macro level the same acronym (PDCA) is being used, but the details of what is to happen within each step are vague and differ from standard to standard. In some cases PDCA is used as a process to build the system, as if it were a project methodology. In most cases PDCA has been re-defined as the model for the system processes within a given standard. It looks like PDCA is used as magical pixie dust, sprinkled everywhere that things are managed.

If you are confused by all of this, you are not alone. Research has shown that the inconsistent use of PDCA has contributed to the failure not only of what we might call "Management PDCAs" but of traditional process improvement as well. It is difficult for organizations to get the benefits from PDCA when it is being re-defined, co-opted, and misapplied. In this article we take a look at "Management PDCAs" and how these compare with traditional continuous improvement cycles. We will try to clear up some of the confusion and find out whether Management PDCAs are going to be a hero or end up as a zero: not amounting to very much and perhaps making things worse.

History of PDCA

There is much written and available on the topic of continuous improvement. PDCA is not new and has evolved over the years. Here are a few of the familiar variants you have probably heard of:

Deming Wheel
Shewhart Cycle
Japanese PDCA
PDSA
PDCA / A3 (Lean)
DMAIC (Six Sigma)
Kaizen / Toyota Kata
Observe-PDCA
OODA
Build-Measure-Learn (Lean Startup)
And others

At a basic level, PDCA is a model for continuous improvement that uses iterations to optimize towards a goal. In practice, focusing on smaller improvements with frequent iterations accelerates learning and establishes behaviours that build towards an improvement culture. When this is done well it results in a virtuous cycle where action and behaviours reinforce each other, delivering more and better improvements over time. No wonder management standards and regulatory bodies are looking at harnessing the power of PDCA; it has been a real super power.

What all these continuous improvement cycles have in common is that they are meta-processes that stand outside of what you want to improve. You can in theory (practice may be different) apply them to improving tasks, processes, systems, programs, and many other things. Each encapsulates a methodology where the specifics of what happens inside the cycle depend on what you want to improve. For example, some are focused on problem solving, while others focus on discovering better ways to achieve a particular target or goal. The majority are most effective when applied to incremental changes at the process level and less so for system-wide improvements.

What is the Problem with Management PDCA?

Let's now take a look at how PDCA is being used by many management systems standards and guidelines. We will consider:

PDCA as a project methodology
PDCA as a systems model
PDCA as a new variant for continuous improvement
PDCA as a replacement for CAPA (corrective actions / preventive actions)

PDCA as a project methodology

Many have adopted the practice of viewing all management processes through the lens of P-D-C-A.
While PDCA may define a natural process for management, where we plan the work, work the plan, and then check to make sure the plan was done, this is not the same as continuous improvement and not what PDCA was intended for. As an example, ISO defines PDCA in the following way:

PDCA is a tool that can be used to manage processes and systems.
P-Plan: set the objectives of the system and processes to deliver results ("what to do" and "how to do it")
D-Do: implement and control what was planned
C-Check: monitor and measure processes and results against policies, objectives and requirements, and report results
A-Act: take actions to improve the performance of processes
PDCA operates as a cycle of continual improvement, with risk-based thinking at each stage.

On paper this sounds good, but this is a form of linear thinking. In this case PDCA has been flattened out to form a sequence of steps. There is no improvement cycle, and the only improvement activity is specified in the ACT step, not the DO step where it happens in traditional PDCA.

PDCA as a system model

Several management system standards have conceptualized their management activities as part of an overarching PDCA cycle. In essence, PDCA has become a system cycle and not an improvement cycle in the traditional sense. To help us understand this we need to consider the difference between management systems and management programs. At a high level, when you want consistency you use a system; when you want to change something you launch a program.

Management systems, which is what ISO and others provide standards for, are meant to maintain state: consistently achieving a specific level of performance with respect to such things as quality, safety, security, and so on. This is accomplished by monitoring processes and taking action to correct for deviations in whatever way is defined. Management programs, on the other hand, are used to change state to achieve new levels of performance. This is a feed-forward control loop that adjusts system capabilities to achieve higher standards of effectiveness. It fits closer to the notion of continuous improvement towards better outcomes rather than correction of deviations from a standard.

Both feed-back and feed-forward processes can benefit from PDCA, but only partially. The benefit of iteration only occurs as often as "defects" are discovered or "standards" are raised. This limits the scope of improvements to those events, and mostly to the reactive side of the equation, when risk has already become an issue.

PDCA as a new variant

When standards envision their systems as improvement cycles they are creating a new variation of PDCA that works differently than traditional PDCA cycles. The processes that are linked to Plan-Do-Check-Act steps are intended to operate simultaneously. For example, in the case of API RP 1173 (Pipeline Safety Management Systems), you never stop DO'ing operational controls or CHECK'ing safety assurance. There is no sequencing of steps, and no iteration happening here. Instead, PDCA is used to describe a function that the set of processes performs. This is different than conducting a PDCA followed by another PDCA and then another until you achieve your goal.

PDCA as a replacement for CAPA

Continuous improvement in the form of PDCA has been placed on the reactive side and embedded in the system mostly as a replacement for CAPA. All too often I have seen PDCA used to define a process for actions. Again, this is linear thinking applied to managed work. There is no iteration, no striving towards a goal, no incremental improvement.

From Zero to Hero

What seems to have happened is that we have a conflation of improvement strategies, all under the umbrella of PDCA. It's no wonder there has been confusion and a lack of success. For PDCA to be more than words on a page (or magical pixie dust) it should follow the principles defined by each methodology.
Failure to follow these principles has been reported as a large contributor (perhaps the largest) to why PDCA has not been effective. With respect to Management PDCAs, these should:

Not be used as a process to build a system. PDCA is intended to improve the system after it has become operational. PDCA is a cycle that is repeated, not a linear sequence of project steps. There are other methodologies for establishing systems, such as Lean Startup.

Not be used as a replacement for CAPA. PDCA should instead be a proactive process for continuous improvement, focused on staying ahead of risk and on prevention, not only on reacting to incidents.

Be part of the system but not the system itself. Mapping management system processes to PDCA steps misrepresents management system dynamics, which will lead to ineffective implementation and operations.

Be repeated as often as possible to develop habits and leverage iterative improvements. The power of PDCA comes from proactive actions reinforced by proactive behaviours to establish a virtuous cycle. What most organizations have instead is a vicious cycle: reactive actions reinforced by reactive behaviours.

Where best to use PDCA?

Continuous improvement needs to occur across all levels, but at a minimum should be used to improve processes (loop 1) and to improve systems (loop 2):

Loop 1: At the process level, PDCA should focus on improving efficiency and consistency. This is where Lean practices are most useful. Process-level improvements tend to utilize existing capabilities to reduce waste and improve alignment. These improvements can be accomplished using frequent incremental changes over time.

Loop 2: At the program level, PDCA would focus on improving the effectiveness of a system. This could be called a Program PDCA. It should follow approaches that utilize experimentation and system-level interventions. System-level improvements benefit from step-wise improvements that elevate capabilities to effect better outcomes. It is more difficult to improve incrementally through a maturity curve.

What do you think?
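The difference between a flattened, one-pass PDCA and a true improvement cycle can be sketched in a few lines. This is our own toy model (the improvement rule and numbers are invented): the linear version runs Plan-Do-Check-Act once, while the iterative version repeats the cycle until a target is reached.

```python
# Toy contrast (our illustration, invented numbers) between "flattened"
# PDCA, a one-pass sequence of steps, and traditional PDCA, a repeated
# cycle that iterates toward a goal.

def pdca_linear(metric: float, improve) -> float:
    """Plan -> Do -> Check -> Act executed once; any remaining gap remains."""
    return improve(metric)

def pdca_iterative(metric: float, target: float, improve,
                   max_cycles: int = 50) -> tuple:
    """Repeat the cycle until the target is reached (or we give up)."""
    cycles = 0
    while metric < target and cycles < max_cycles:
        metric = improve(metric)  # Plan/Do: attempt an improvement
        cycles += 1               # Check/Act: measure, adjust, go again
    return metric, cycles

# Assume each cycle recovers 20% of the remaining gap to a score of 100.
improve = lambda m: m + 0.2 * (100 - m)

print(pdca_linear(50, improve))         # one pass leaves the metric at 60.0
print(pdca_iterative(50, 90, improve))  # iterating closes the gap to 90+
```

The single pass captures only one increment; the repeated cycle is what compounds toward the goal, which is the behaviour the standards lose when they flatten PDCA into a sequence.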

  • Compliance Chain Analysis

    Harvard Business School's Michael E. Porter introduced the concept of a value chain in his 1985 book, "Competitive Advantage: Creating and Sustaining Superior Performance." In it he writes: "Competitive advantage cannot be understood by looking at a firm as a whole. It stems from the many discrete activities a firm performs in designing, producing, marketing, delivering and supporting its product. Each of these activities can contribute to a firm's relative cost position and create a basis for differentiation."

Porter believed that competitive advantage comes from: (1) cost leadership, and (2) differentiation. Value chain analysis (VCA) helps us understand how both affect margin. It considers the contribution of an organization's activities towards the optimization of margin, where margin is an organization's ability to deliver a product or service for which the customer is willing to pay more than the sum of the costs of all activities in the value chain. Porter argues that a company can improve its margin by the way "primary activities" are chained together and how they are linked to supporting activities. He defines "primary activities" as those that are essential to adding value and creating competitive advantage. Secondary activities assist the primary activities to maintain or enhance the product's value by means of cost reduction or value improvement. This is the domain of LEAN and operational excellence. An example value chain, along with general processes, is shown in the following diagram: Value Chain Analysis

A Compliance Perspective

In recent years, compliance has increased in both complexity and demand, driven by regulation and industry standards. It is, therefore, worth taking another look at the value chain in terms of how compliance should now be considered. Porter includes the quality assurance (QA) function as part of the "Firm Infrastructure." At a basic level, this places QA outside of the core processes, considered as a means to improve value and reduce cost. The latter is the more common emphasis, as many organizations view quality and other compliance functions as overhead that needs to be reduced.

For the purpose of this discussion, we will use the same primary activities from the typical value chain. However, infrastructure activities are expanded to include other compliance activities such as: quality, safety, environmental, and ethics & compliance. Compliance activities can in principle contribute to value improvement as well as cost reduction, although the effects may not be direct or immediate. A key role of compliance is to drive down risk, which has effects that may be delayed or mitigated. Therefore, instead of margin, it might be more useful to consider the level of risk as the measure to be optimized.

It is common for compliance to be organized into isolated functions that are separate from the primary activities. However, we know that these programs are not effective when implemented in this way. Instead, they are more effective when seen as horizontal capabilities that cross the entire value chain.
The following diagram illustrates how a compliance chain can be constructed using Porter's value chain as a model: Compliance Chain Analysis

By analyzing the relationship between compliance and primary activities (including secondary), it is possible to gain a better understanding of the following:

Cost of compliance and non-compliance
How and to what degree compliance affects risk
Value of compliance (cost avoidance, increased trust, and reduction in: defects, incidents, fatalities, financial losses, etc.)

Strategies aligned with competitive advantages can then be applied to improve margin as well as drive down overall risk:

Cost Advantage

Porter argued that there are 10 drivers that improve cost advantage:

Create greater economies of scale
Increase the rate of organizational learning
Improve capacity utilization
Create stronger linkages between activities
Develop synergies between business units
Look to increase vertical integration
Improve the timing of market entry
Alter the firm's strategy regarding cost or differentiation leadership
Change the geographic location of the activities
Look to address institutional factors such as regulation and tax efficiency

Differentiation Advantage

Porter further identifies 9 factors that promote unique value:

Change policies and strategic decisions
Improve linkages among activities
Alter market timing
Alter production locations
Increase the rate of organizational learning
Create stronger linkages between activities
Develop relationships between business units
Change the scale of operations
Look to address institutional factors such as regulation and product requirements

Compliance Advantage

We suggest 10 principles to drive compliance advantage:

Keep all your promises
Take ownership of all your compliance obligations (required and voluntary)
Develop programs and systems that always keep you in compliance
Incrementally and continuously improve your compliance
Make compliance an integral part of your performance and productivity processes
Use proactive strategies to always stay in compliance
Monitor in real time your status and ability to stay in compliance
Audit the outcomes of your compliance programs, not activity
Develop a learning culture around compliance
Always strengthen your ability to easily meet and maintain compliance

Summary: Total Value Chain Analysis

Value chain analysis (VCA) has been used successfully to help companies create both cost and differentiation advantage to improve their margins. In today's highly regulated marketplace, tools like VCA can also be used to create a compliance advantage to decrease overall risk. While this may not result in immediate cost reduction, it can avoid future costs and differentiate a company from its competitors by achieving higher quality, safer operations, and improved trust from stakeholders.
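Porter's definition of margin, what the customer is willing to pay minus the summed cost of all value chain activities, reduces to a one-line calculation. The activity names and figures below are invented purely for illustration:

```python
# Margin in Porter's sense: willingness to pay minus the sum of the
# costs of all value-chain activities. All names and numbers here are
# made up for illustration.

def margin(willingness_to_pay: float, activity_costs: dict) -> float:
    return willingness_to_pay - sum(activity_costs.values())

costs = {
    "inbound_logistics": 12.0,
    "operations": 30.0,
    "outbound_logistics": 8.0,
    "marketing_and_sales": 15.0,
    "service": 5.0,
    "compliance": 6.0,  # modeled here as a horizontal capability
}

print(margin(90.0, costs))  # 90 - 76 = 14.0
```

Treating compliance as one more cost line is the conventional view; the article's argument is that its contribution shows up instead as reduced risk, a quantity this simple margin arithmetic does not capture.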

  • Which is Better for AI Safety: STAMP/STPA or HAZOP/PHA?

    STAMP/STPA and traditional PHA methods like HAZOP represent fundamentally different safety analysis philosophies. STAMP/STPA views accidents as control problems in complex socio-technical systems, focusing on hierarchical control structures and unsafe control actions that can occur even when all components function properly. In contrast, HAZOP operates on the principle that deviations from design intent cause accidents, using systematic guide words (No, More, Less, etc.) applied to process parameters to identify potential failure scenarios. Traditional PHA methods like FMEA and What-If analysis similarly focus on component failures and bottom-up analysis approaches.

Research demonstrates these methodologies are complementary rather than competitive. Studies show STPA identifies approximately 27% of hazards missed by HAZOP, while HAZOP finds about 30% of hazards that STPA overlooks. STAMP/STPA excels at analyzing software-intensive systems, complex organizational interactions, and novel technologies where traditional failure-based analysis falls short. HAZOP proves better suited for traditional process systems with well-defined physical parameters and established operational procedures, benefiting from decades of industrial experience and mature tooling.

For AI safety analysis, STAMP/STPA appears better suited to AI's systemic and emergent risks, but the choice becomes more nuanced when considering AI's integration into traditional process systems. While STPA naturally addresses algorithmic decision-making, human-AI interactions, and emergent behaviors that traditional failure analysis struggles with, AI increasingly operates within conventional industrial processes where HAZOP's systematic parameter analysis remains valuable.

The real challenge lies in analyzing AI-augmented process control systems, where an AI controller making real-time decisions about flow rates or temperatures requires both STPA's systems perspective to understand the AI's control logic and HAZOP's structured approach to analyze how AI decisions affect physical process parameters. Rather than viewing these as competing methodologies, the most thoughtful approach recognizes that AI safety analysis may require STPA for understanding the AI system itself, while leveraging HAZOP's proven framework for analyzing how AI decisions propagate through traditional process systems: a hybrid necessity as AI becomes embedded throughout industrial infrastructure.

  • You're Not Managing Risk—You're Just Cleaning Up Messes

    Imagine you're a ship captain navigating treacherous waters. Most captains rely on their damage control teams—when the hull gets breached, they spring into action, pumping out water and patching holes. That's feedback control, and while it's essential, it's not what separates legendary captains from the rest.

Risk Management is a Feedforward Process

The best captains? They're obsessed with their barometer readings, wind patterns, and ocean swells before the storm hits. They're tracking leading indicators—subtle changes that whisper of trouble long before it screams. That's feedforward control, and it's the secret that transforms risk management from crisis response into strategic advantage. Here's the truth that will revolutionize how you think about risk: Risk management is a feedforward process. Everything else is just damage control.

Walk into any company's "risk management" meeting, and you'll see the problem immediately. They're not managing risk at all—they're managing the aftermath of risks that already materialized. These meetings are filled with lagging indicators—the equivalent of counting holes in your ship's hull after the storm has passed. True risk management is feedforward by definition. It's about reading the environment, anticipating what's coming, and adjusting course before the storm hits. When you're reacting to problems that already happened, you've left risk management behind and entered crisis response.

This means fundamentally changing what you track. You measure leading indicators:

Employee engagement scores before they become turnover rates
Customer complaint sentiment before it becomes churn
Process deviation patterns before they become quality failures
Market volatility signals before they become financial losses
Compliance inoperability before it becomes violations

Organizations that make this shift see remarkable transformations in their risk posture by changing their measurement focus from "How badly did we get hit?" to "What's building on the horizon?" Consider how this works in practice: instead of tracking injury rates (lagging), organizations can track near-miss reporting frequency and planned change frequency (leading). This approach often leads to dramatic reductions in actual injuries—not because teams get better at treating injuries, but because they get better at preventing the conditions that create them.

True risk management isn't about reading storms or cleaning up after them—it's about creating the conditions for smooth sailing. What leading indicators is your organization ignoring while it counts yesterday's damage?
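A toy sketch of acting on a leading indicator rather than a lagging one (our own illustration; the indicator, threshold, and readings are invented): a simple monitor flags readings that breach a threshold before the lagging outcome has a chance to occur.

```python
# Toy feedforward monitor (invented data and threshold): flag a leading
# indicator the moment it crosses its threshold, before the lagging
# outcome (the incident, the churn, the violation) materializes.

def leading_alerts(readings: list, threshold: float) -> list:
    """Return the indices of readings that breach the threshold."""
    return [i for i, value in enumerate(readings) if value > threshold]

# e.g. weekly process-deviation counts; investigate when deviations
# exceed 5 in a week, rather than waiting for a quality failure.
deviations = [2, 3, 4, 6, 8, 3]
print(leading_alerts(deviations, threshold=5))  # weeks 3 and 4 need attention
```

The mechanism is trivial by design: the shift the article describes is not analytical sophistication but what gets measured, a signal that precedes the loss instead of a count of losses already taken.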

  • What Is Your MOC Maturity Index?

    MOC Maturity Index

Change can be (and often is) a significant source of new risk. As a result, many companies have implemented the basics when it comes to Management of Change (MOC). This may be enough to pass an audit, but it is not enough to effectively manage the risks due to asset, process, or organizational change. For that you need processes that are adequately scoped, have clear accountability, and that effectively manage risks during and after the change is implemented. You also need to properly measure both the performance and effectiveness of the MOC process to know whether: (1) there is sufficient capacity to manage planned changes, and (2) risks are properly mitigated. We created a quick assessment for you to get an idea of how well you are doing. You can take this free assessment by clicking here.

  • LEAN - Lost in Translation

    There are times when leadership sets their gaze on operations in order to better delight their customers, increase margins, or improve operational excellence. This gaze for many companies has translated into a journey of continuous improvement – the playground for LEAN. All across the world companies have embraced LEAN principles and practices in almost every business sector. In many cases, LEAN initiatives have produced remarkable results and for some created a new “way of organizational life.” Continuous improvement has become a centring force as a means for aligning a company’s workforce with management objectives. With this success, the mantra of continuous improvement has expanded, along with the LEAN tools and practices, to other areas of the business such as: quality, safety, environmental, regulatory and other compliance functions. However, in these cases, LEAN has not helped as much as it could and in fact in some cases has made things worse. The problem has not been with the translation of Japanese words such as “Gemba”, “Kaizen”, “Muda”, “Muri”, and others. Instead, the problem is with the translation of LEAN itself. The objectives, principles, and practices that have worked on the production floor where improvement is measured by cycle time reductions has not been as effective when applied to areas where improvement is measured by the reduction in risk. The following are key areas where inadequate translations have caused confusion resulting in a lack of effectiveness when applying LEAN: Value Improvement Uncertainty Objectives Translating Value When it comes to compliance programs the translation of such things as: “value”, “value stream”, “waste” , “flow” and “customer” are not as obvious or as easy to see compared to what happens on a production floor. Some LEAN practitioners chose not to do a translation and instead consider everything only from a “manufacturing” point of view. 
    This means that everything that does not directly “touch a product” is considered non-value added, which makes almost everything else “waste.” Using this narrow definition has led LEAN teams to remove essential activities and outcomes such as:

    - evidentiary artifacts needed to verify compliance
    - risk measures needed to protect the value stream
    - activities needed to meet compliance objectives
    - preventive risk controls
    - tasks that are used only occasionally (e.g., risk assessments)
    - tasks whose purpose people have forgotten
    - critical-to-compliance activities

    Translating Improvement

    Without an adequate translation for value, some practitioners choose “simplification” as a guiding principle for process improvement. Unfortunately, after simplification is complete, what often remains is a process that does very little to advance any outcome, let alone those associated with compliance obligations. There is, however, one outcome that is advanced: the possibility of risk. Over-simplification (a source of improvement waste) can create new vulnerabilities, or expose existing ones, that threaten value creation. Companies may end up with more risk than any productivity improvement they might have realized by eliminating “waste.”

    Translating Uncertainty

    The value chain is responsible for creating value in the eyes of a company’s stakeholders, and it does so in the presence of uncertainty. Productivity programs serve the value chain by improving margins through operational excellence and LEAN initiatives. Margins are necessary to mitigate the effects of aleatory (i.e. irreducible) uncertainty. They also afford an organization a degree of resilience against disruption. While the removal of waste can improve financial margins, many LEAN practitioners do not properly recognize the margins (extra time, extra capacity, extra resources, etc.) put in place to contend with uncertainty.
    As a result, any gains in financial margins will be lost (and often more) addressing the disruption when uncertainty does become reality and there is no margin in place to attenuate its effects. From a LEAN perspective, compliance is considered necessary, but a waste. However, compliance programs serve the value chain by mitigating the effects of epistemic (i.e. reducible) uncertainty. They do this by buying down risk through effective quality, safety, security, environmental, and regulatory systems and processes. Without margins, or the buying down of risk, companies are less resilient and more vulnerable to threats to value creation.

    Translating Objectives

    LEAN can be applied to all processes provided that effective translations are made to the domain where LEAN is being applied. When it comes to compliance programs, it is important to recognize that the primary objective is to eliminate and reduce risk rather than "waste". The following chart contains translations of other LEAN objectives as applied to the compliance domain:

    LEAN practitioners would do well to consider the impact of risk when engaged in process improvement that involves compliance obligations. Compliance processes will still contain traditional sources of "waste." Therefore, risk specialists would also benefit from applying LEAN to existing risk control processes to improve detection and response times, to better prevent risk as well as mitigate its effects. You might say that since every process operates in the presence of uncertainty, LEAN practitioners should all be risk managers. You could also say that risk is the cause of the waste that LEAN has traditionally focused on eliminating.

  • Closing the Compliance Effectiveness Gap

    Compliance Effectiveness Gap

    Compliance has been heading in a new direction over the last decade. It is moving beyond paper and procedural compliance towards performance and operational compliance. This change is necessary to accommodate modern risk-based regulatory designs, which elevate outcomes and performance over instructions and rules. Instead of checking boxes, compliance needed to become operational, which is something that LEAN, along with Operational Excellence principles and practices, helps to establish.

    As LEAN endeavours to eliminate operational waste, those who are accountable for mission success have noticed that defects, violations, incidents, injuries, fines, and misconduct are also wastes that take away from the value businesses strive to create. This waste results predominantly from a misalignment between organizational values and operational objectives – a failure of business integrity, which at its core is a lack of effective regulation: The Compliance Effectiveness Gap.

    Total Value Chain

    The Problem with Compliance

    In a nutshell, compliance should ensure mission success, not hinder it. Over the years, compliance has come alongside the value chain in the form of programs associated with safety, security, sustainability, quality, legal adherence, ethics, and now responsible AI. However, many organizations find that these programs operate reactively, separately, and disconnected from the purpose of protecting and ensuring mission success – the creation of value. They are misaligned not only in terms of program outcomes, but also with respect to business value. This creates waste in the form of duplicated effort, technology, tools, and executive attention. Perhaps more importantly, the lack of effectiveness creates the conditions for non-conformance, defects, incidents, injuries, legal violations, misconduct, and business uncertainty.
    Closing The Compliance Effectiveness Gap is now a strategic objective for organizations looking to maximize value creation.

    A Program by a New Name

    To prioritize this objective, we have renamed our advanced program from "The Proactive Certainty Program™" to "The Total Value Compliance Program™". This program builds on our previous work and adds a Value Operational Assessment to identify the operational capabilities needed to close The Compliance Effectiveness Gap – the gap between organizational values and operational objectives. With greater alignment (a measure of integrity), uncertainty decreases, risk is reduced, waste is eliminated, and value is maximized.

    The First Step

    The first step toward closing The Compliance Effectiveness Gap is a TOTAL VALUE COMPLIANCE AUDIT. This is not a traditional audit. Instead, it is a 10-week participatory engagement (4 hours per week investment), where compliance program and obligation owners, managers, and teams (depending on the package chosen) actively engage in learning, evaluation, and development of a detailed roadmap to compliance operability – compliance that is capable of being effective.

    The deliverables you receive include:

    - Executive / Management Education (Operational Compliance)
    - Integrative Program Evaluation (Values Operations Alignment)
    - Total Value Compliance Roadmap (Minimal Viable Compliance Operability)

    The compounding value you will enjoy:

    - Turning compliance from a roadblock into a business accelerator
    - Aligning your values with your operations for better business integrity
    - Creating competitive advantage and greater stakeholder trust
    - Enabling innovation and productivity instead of hindering them

    Are you ready to finally close The Compliance Effectiveness Gap?

  • Compliance Operability Assessment Using Total Value Chain and Compliance Criticality Analysis

    Why Is This Assessment Necessary?

    For compliance to be effective, it must generate desired outcomes. These outcomes may include reducing violations and breaches, minimizing identity thefts, enhancing integrity, and ultimately fostering greater stakeholder trust. Realizing these benefits requires compliance to function as more than just the sum of its parts. Unfortunately, many organizations focus solely on individual components rather than the whole system – they see the trees but miss the forest, or concentrate on controls instead of the overall program. Too often, compliance teams work hard and hope for the best. While hope is admirable, it's an inadequate strategy for ensuring concrete outcomes.

    To elevate above merely a collection of parts, compliance needs to operate as a cohesive system. In this context, operability is defined as the extent to which the compliance function is fit for purpose, capable of achieving compliance objectives, and able to realize the benefits of being compliant. The minimum level of compliance operability is achieved when all essential functions, behaviours, and interactions exist and perform at levels necessary to create the intended outcomes of compliance. This defines what is known as Minimal Viable Compliance (MVC), which must be reached, sustained, and then advanced to realize better outcomes.

    For this to occur, we need a comprehensive approach. We need:

    - Governance to set the direction
    - Programs to steer the efforts
    - Systems to keep operations between the lines
    - Processes to help stay ahead of risks

    All of these elements must work together as an integrated whole. To use an analogy, an effective compliance system may not need to be as complex as a car, but it should be at least as functional as a bicycle. The key point is that it must be more than just a box of disconnected car or bicycle parts.
    This holistic perspective on compliance operability allows organizations to:

    - Identify gaps in their current compliance
    - Prioritize areas for improvement
    - Ensure that all components of the compliance system are working in harmony
    - Continuously improve and adapt their compliance efforts to meet changing requirements and expectations

    By conducting a Compliance Operability Assessment, organizations can move beyond a piecemeal approach to compliance and develop a robust, systemic strategy that is more likely to achieve desired outcomes and create lasting value.

    Total Value Chain Analysis

    Value Chain Analysis (VCA), introduced by Michael E. Porter in 1985, is a strategic tool that examines how a firm's activities contribute to its competitive advantage. Porter argued that competitive advantage stems from cost leadership and differentiation, and VCA helps explain how various activities affect a company's margin. This concept has been foundational in helping businesses optimize their operations and improve their market position. In recent years, the increasing complexity and demands of regulatory compliance have necessitated an adaptation of Porter's model. This has led to the development of Total Value Chain Analysis, which integrates compliance activities into the traditional value chain framework. Unlike the original model, which often viewed compliance as a separate, overhead function, this new approach considers compliance as a set of horizontal capabilities that span the entire value chain. The focus shifts from purely optimizing margin to also minimizing overall risk. Total Value Chain Analysis offers several key insights.
    It helps organizations understand the true cost of both compliance and non-compliance, illustrates how compliance activities affect risk across different business functions, and demonstrates the value of compliance in terms of cost avoidance, increased trust, and reduction in defects, incidents, and other negative outcomes. This holistic view allows companies to develop more comprehensive strategies for competitive advantage.

    Building on Porter's strategies for cost and differentiation advantage, Compliance Chain Analysis introduces the concept of compliance advantage. This perspective suggests ten principles for driving compliance advantage, including keeping all promises, taking ownership of compliance obligations, integrating compliance into performance processes, and developing a learning culture around compliance. By adhering to these principles, companies can create a robust compliance framework that not only meets regulatory requirements but also contributes to overall business success.

    The Total Value Chain concept provides an integrated approach, combining traditional VCA with a strong focus on compliance. While this strategy may not always lead to immediate cost reductions, it offers significant long-term benefits. Companies can avoid future costs associated with non-compliance and differentiate themselves in the market through higher quality products and services, safer operations, and improved stakeholder trust. In today's highly regulated marketplace, this comprehensive approach to value chain and compliance management provides a powerful tool for creating and sustaining competitive advantage.

    Compliance Operability Assessment Process

    The Compliance Operability Assessment Process uses Total Value Chain Analysis as its foundation. This comprehensive approach helps evaluate the level of compliance operability and maturity for all compliance obligations, both mandatory and voluntary.
    The process consists of the following steps:

    1. Identify Business Requirements
    This initial step involves understanding the core business needs, objectives, and strategic goals of the organization. It provides context for how compliance fits into the overall business model.

    2. Create Operations Business Model
    In this step, we identify and map out the operational functions, behaviours, and interactions within the organization. This creates a clear picture of how the business operates on a day-to-day basis.

    3. Identify Compliance Requirements
    Here, we determine all relevant compliance requirements that apply to the organization. This includes industry-specific regulations and general legal obligations, along with voluntary commitments made to stakeholders.

    4. Create Obligations and Promises Register
    This step involves creating a comprehensive register that identifies:

    - Legal and regulatory obligations, along with voluntary commitments made to stakeholders
    - Promises and policies created in association with meeting obligations and stakeholder commitments
    - Areas of uncertainty in staying between the lines and ahead of risk
    - Potential compliance and operational risk associated with meeting obligations and stakeholder commitments

    5. Evaluate and Map Compliance Criticality
    In this step, we assess what is critical-to-compliance (described below). This helps prioritize compliance efforts and resource allocation to focus on the areas that matter most to staying between the lines and ahead of risk.

    6. Create Integrated Operational Compliance Model
    This step involves integrating the compliance requirements into the operational business model. It shows how compliance obligations interact with and affect day-to-day business, how obligations will be met, and how the promises associated with them will be kept.

    7. Evaluate Compliance Operability
    The final step is to assess how well the integrated compliance model functions within the organization.
    This evaluation helps identify areas of strength and weakness in the compliance program and guides future improvement efforts. By following this structured process, organizations can gain a holistic view of their compliance landscape and how it integrates with their business operations. This approach allows for:

    - Better alignment between compliance efforts and business objectives
    - Identification of potential gaps or overlaps in compliance activities
    - More efficient resource allocation for compliance management
    - Improved ability to anticipate and mitigate compliance risks
    - Enhanced overall effectiveness of the compliance program

    The Compliance Operability Assessment Process provides a systematic method for organizations to move beyond a checkbox approach to compliance. Instead, it fosters the development of a mature, integrated compliance system that adds value to the organization while effectively managing regulatory and stakeholder obligations.

    Compliance Criticality Analysis

    The concept of compliance criticality is often encountered in various contexts, similar to other "Critical-to-X" frameworks:

    - Critical-to-Quality (CTQ)
    - Critical-to-Safety (CTS)
    - Critical-to-Environment (CTE)
    - Critical-to-Sustainability
    - Critical-to-Value
    - And others

    Critical-to-Compliance refers to: essential structures, functions, behaviours, or interactions that directly impact an organization's ability to meet obligations or keep promises.

    Compliance Criticality Map

    Examples include organizational structure, roles, culture, governance, programs, systems, processes, procedures, protocols, resources, capacity, goals, priorities, and strategy. The importance of Critical-to-Compliance can be understood through several key benefits:

    - Change Management: By identifying critical-to-compliance elements, organizations can prioritize efforts to mitigate risks and prevent non-compliance when implementing planned changes.
    - Resource Optimization: Focusing on critical-to-compliance elements helps organizations avoid wasting time and resources on less significant areas, ensuring that compliance efforts are concentrated on what matters most.
    - Risk Management: Understanding which elements are critical-to-compliance allows organizations to establish necessary controls, reducing the probability of non-conformance and mitigating high-impact risks.
    - Comprehensive Coverage: Identifying critical-to-compliance elements helps organizations ensure that all essential capabilities are in place to meet relevant regulatory requirements and voluntary stakeholder obligations.
    - Enhanced Confidence: Recognizing and addressing critical-to-compliance aspects demonstrates an organization's commitment to meeting obligations and keeping promises.

    To assess the importance of different elements, a criticality ranking can be applied:

    - Critical: Discontinuing or substantially changing this aspect will result in a high likelihood of failure to meet compliance obligations or keep promises.
    - Significant: Discontinuing or substantially changing this aspect will significantly affect the ability to meet compliance obligations or keep promises.
    - Moderate: Discontinuing or substantially changing this aspect will moderately affect the ability to meet compliance obligations or keep promises.
    - Not Significant: Discontinuing or substantially changing this aspect will not significantly affect the ability to meet compliance obligations or keep promises.

    By utilizing this framework, organizations can effectively prioritize their compliance efforts and ensure they are focusing on the most crucial aspects of their operations.

    Operational Compliance Maturity

    Regulatory bodies and standards organizations are increasingly expecting companies to utilize capability maturity models to enhance performance and progress towards ambitious targets such as zero incidents, zero fatalities, zero harm, zero emissions, and zero violations.
While capability maturity models have been around for some time, their application in compliance improvement has been limited. However, this trend is beginning to change. One area where capability maturity models have been successfully employed is in software development, particularly in aerospace and defence applications. The CMMI (Capability Maturity Model Integration) Institute, building on research originally conducted by Carnegie Mellon University, continues to develop and publish maturity models. In response to the shift towards outcome and performance-based regulatory obligations, we have adapted the CMMI model to better support the capabilities needed to advance outcomes over time. It's important to note that certain minimum operability requirements must be met before any significant progress in outcomes can be achieved. Fundamentally, better outcomes are obtained when processes function more like a purposeful system rather than as individual components. This principle is derived from systems theory, which posits that outcomes are emergent properties resulting from the product of a system's interactions, rather than simply the sum of its parts. The following model provides a framework for organizations to assess their current compliance maturity level and identify areas for improvement, ultimately working towards more effective and efficient compliance management. 
    Operational Compliance Maturity Model:

    5 - Leading: Advancing Outcomes
    - Advancing overall compliance outcomes
    - Reducing risk and ensuring value
    - Continuous innovation and learning at all organizational levels

    4 - Governing: Regulating Effectiveness
    - Tracking compliance outcomes
    - Introducing feed-forward processes to improve effectiveness
    - Focusing on achieving outcomes and improving capabilities
    - Continuous improvement is ingrained in organizational culture

    3 - Managing: Regulating Performance
    - Standards provide guidance and normative practices and behaviours
    - Introducing feedback processes to improve consistency
    - Focusing on compliance performance and risk management
    - Continuous improvement is intentional and proactive

    2 - Controlling: Regulating Conformance
    - Planning, performing, measuring, and controlling compliance processes
    - Defining and mostly following compliance procedures
    - Focusing on inspections, audits, and corrective actions
    - Continuous improvement is reactive

    1 - Perceiving: Recognizing Obligations
    - Developing obligation and risk awareness
    - Focusing on prescriptive compliance and training
    - Work is completed but often delayed or over budget
    - Procedures are sometimes followed, with unpredictable output or outcomes

    0 - Avoiding: Unknown
    - Lack of obligation and risk awareness
    - Obligations may or may not be achieved
    - Procedures are rarely followed
    - Compliance risk is unknown

    Conclusion

    The Compliance Operability Assessment using Total Value Chain and Compliance Criticality Analysis provides organizations with a comprehensive framework to evaluate and enhance their compliance efforts. By integrating compliance into the broader business strategy and operations, this approach moves beyond traditional checkbox compliance to create a more robust, effective, and value-driven compliance system.
    The process outlined – from identifying business requirements to evaluating compliance operability – allows organizations to gain a holistic view of their compliance landscape. This systematic method helps align compliance efforts with business objectives, identify gaps and overlaps in compliance activities, optimize resource allocation, and improve risk management.

    Furthermore, the introduction of concepts like Minimal Viable Compliance (MVC), Compliance Criticality Analysis, and the Operational Compliance Maturity Model provides organizations with concrete tools to assess their current state and chart a path for improvement. These frameworks enable companies to prioritize their compliance efforts, focusing on the most critical aspects that directly impact their ability to meet obligations and keep promises.

    As regulatory environments continue to evolve and stakeholder expectations increase, this integrated approach to compliance management becomes increasingly vital. By viewing compliance as an integral part of the value chain rather than a separate overhead function, organizations can not only meet their regulatory obligations but also create competitive advantage. This shift in perspective transforms compliance from a cost centre into a strategic asset that contributes to overall business success, fosters stakeholder trust, and drives continuous improvement across the organization.

    This assessment is now part of our Total Value Compliance Program.
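    As an informal illustration (not part of the assessment methodology itself), the criticality ranking and maturity levels described above can be captured in a simple register and used to order improvement work so that the most critical, least mature elements surface first. All names and example entries below are hypothetical:

```python
from dataclasses import dataclass
from enum import IntEnum


class Criticality(IntEnum):
    """Ranking from the Compliance Criticality Analysis (higher = more critical)."""
    NOT_SIGNIFICANT = 0
    MODERATE = 1
    SIGNIFICANT = 2
    CRITICAL = 3


class Maturity(IntEnum):
    """Levels from the Operational Compliance Maturity Model."""
    AVOIDING = 0
    PERCEIVING = 1
    CONTROLLING = 2
    MANAGING = 3
    GOVERNING = 4
    LEADING = 5


@dataclass
class Element:
    """A critical-to-compliance element from the obligations register."""
    name: str
    criticality: Criticality
    maturity: Maturity


def prioritize(elements):
    """Order elements: highest criticality first, then lowest maturity first."""
    return sorted(elements, key=lambda e: (-e.criticality, e.maturity))


# Hypothetical register entries for illustration only
register = [
    Element("visitor sign-in", Criticality.NOT_SIGNIFICANT, Maturity.CONTROLLING),
    Element("management of change", Criticality.CRITICAL, Maturity.CONTROLLING),
    Element("incident reporting", Criticality.CRITICAL, Maturity.PERCEIVING),
]

for e in prioritize(register):
    print(f"{e.name}: criticality={e.criticality.name}, maturity={e.maturity.name}")
```

    A sketch like this makes the prioritization rule explicit: a critical element still at the Perceiving level ranks ahead of a critical element already at Controlling, while not-significant elements fall to the bottom regardless of maturity.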

  • AI Engineering: The Last Discipline Standing

    Software engineering and related domains are undergoing their most dramatic transformation in decades. In discussions I have had over the last year, IT product companies appear to be moving towards an AI-first model. As AI capabilities rapidly advance, a stark prediction is emerging from industry leaders: AI Engineering may soon become the dominant—perhaps only remaining—engineering discipline in many IT domains.

    How Product Teams Are Already Changing

    Looking at how IT technology companies are adapting to AI uncovers an interesting pattern: teams of three to five people are building products that traditionally required much larger engineering groups. The traditional model—where product managers coordinate with software engineers, UI designers, data analysts, DevOps specialists, and scrum leaders—is being replaced by something fundamentally different. Instead, these companies operate with product managers working directly with AI Engineers who can orchestrate entire development lifecycles. These professionals are learning to master a new set of skills: AI system design (architecting intelligent solutions from requirements), AI integration (embedding capabilities seamlessly into products), and AI operations (managing and maintaining AI-powered systems at scale). Companies like Vercel, Replit, and dozens of Y Combinator startups demonstrate this model in action daily. What once required full engineering teams now happens through sophisticated prompt engineering and AI orchestration.

    A Pattern We've Seen Before

    This transformation feels familiar because I lived through something similar in integrated circuit manufacturing. In the early days, I worked for an integrated circuit manufacturer in Canada where circuits were at first designed by hand, prototypes were built in physical labs, and designs were painstakingly transferred to mylar tape for silicon fabrication.
    This process required teams of specialists—layout technicians, CAD operators, lab engineers—each role seemingly indispensable. Over the years, each function improved as computer technology was adopted: circuit simulation, computer-aided design with automated design rule checking, and wafer fabrication layout tools. This is not unlike how organizations are now adopting AI to improve individual tasks and functions. Then silicon compilers arrived and changed everything overnight. Suddenly, engineers could create entire circuit designs by simply describing what the circuit should accomplish using Hardware Description Languages like VHDL and Verilog. The compiler handled layout optimization, timing analysis, and fabrication preparation automatically. The entire process could be automated—from ideation to the fab in one step. Entire job categories vanished, but the engineers who adapted became exponentially more productive.

    ONE-SPRINT MVP

    Today's product development is following a similar pattern. AI Engineers translate application requirements through sophisticated prompts into working minimum viable products (MVPs)—the one-sprint MVP. This approach enables fewer people to deliver working solutions faster, while supporting rapid iteration cycles that make even Agile development methodologies feel glacially slow.

    The Tools Driving This Shift

    The evidence surrounds us. GitHub Copilot and Cursor generate entire codebases from natural language descriptions. Vercel's V0 creates production-ready React components from simple prompts. Claude Artifacts builds functional prototypes through conversation. Replit Agent handles full-stack development tasks autonomously. These aren't novelty demos—they're production tools that engineers use to create real products for customers. However, this is just the beginning.

    Where Traditional Engineering Still Matters

    This wave won't wash away all engineering domains equally.
    Critical areas will maintain their need for specialized expertise: embedded systems interfacing with hardware, high-performance computing requiring deep optimization, safety-critical applications in aerospace and medical devices, large-scale infrastructure architecture, and cybersecurity frameworks. But the domains most vulnerable to AI consolidation—web applications, mobile apps, data pipelines, standard enterprise software, code creation, and prototype development—represent the majority of current engineering employment.

    The Economic Forces at Play

    The economics driving this shift are brutal in their simplicity. When a single AI Engineer can deliver 80% of what a five-person traditional team produces, at a fraction of the cost and timeline, market forces make the choice inevitable. This isn't a gradual transition that companies will deliberate over for years. Organizations that successfully implement AI-first methodologies will out-compete those clinging to traditional approaches. The advantage gap widens daily as AI capabilities improve and more teams discover these efficiencies. Venture capital flows increasingly toward AI-first startups with lean technical teams, while traditional software companies scramble to demonstrate AI integration strategies or risk irrelevance.

    Survival Strategies in an AI-First World

    AI represents a genuine threat to traditional engineering careers. The question isn't whether disruption will occur, but how to position yourself to survive and thrive as AI-first methodologies become standard practice.
    Critical survival tactics:

    Immediate actions (next 6-12 months):

    - Master AI tools now – Become proficient with GitHub Copilot, Claude, ChatGPT, and emerging AI development platforms
    - Learn prompt engineering – This is becoming as fundamental as learning programming languages once was
    - Shift to AI-augmented workflows – Don't just use AI as a helper; restructure how you approach problems entirely
    - Build AI system integration skills – Focus on connecting AI components rather than building from scratch

    Strategic positioning (1-2 years):

    - Become an AI Engineer – Shift your practice from traditional engineering to AI system design; adopt AI engineering knowledge and methods into your practice
    - Specialize in AI reliability and maintenance – AI systems need monitoring, debugging, and optimization
    - Develop AI model customization expertise – Fine-tuning, prompt optimization, and model selection
    - Master AI-human collaboration patterns – Understand when to use AI vs. when human expertise is still required

    Why Waiting Is Dangerous

    Critics point to legitimate current limitations: AI-generated code often lacks production robustness, complex integrations still require deep expertise, and security considerations demand human judgment. These concerns echo the early objections to silicon compilers, which initially produced inferior results compared to expert human designers. But here's what history teaches us: the technology improved rapidly and soon exceeded human capabilities in most scenarios. The engineers who adapted early secured the valuable remaining roles. Those who waited found themselves competing against both improved tools and colleagues who had already mastered them.

    Understanding the Challenge

    This isn't another gradual technology transition that engineers can adapt to over several years. AI-first methodologies represent a substantial challenge to traditional engineering roles, with the potential for significant displacement across the industry.
    The reality: Engineers who don't adapt may find themselves competing against AI-first approaches, systems, and tools that operate continuously, require no salaries or benefits, and improve steadily. This will be an increasingly difficult competition to win.

    The opportunity: Engineers who proactively embrace AI-first approaches will be better positioned to secure valuable roles in the evolving landscape. Leading this transformation offers better prospects than waiting for external pressure to force change.

    The window for proactive adaptation becomes smaller with time. Each month of delay reduces competitive advantage as AI capabilities advance and more engineers begin their own transformation journeys. The choice ahead is significant: evolve into an AI Engineer who works with intelligent systems, or risk being replaced by someone who does.

    Raimund Laqua, PMP, P.Eng is co-founder of ProfessionalEngineers.AI (ray@professionalengineers.ai), a Canadian engineering practice focused on advancing AI engineering in Canada. Raimund Laqua is also founder of Lean Compliance (ray.laqua@leancompliance.ca), a Canadian consulting practice focused on helping organizations operating in highly regulated, high-risk sectors always stay ahead of risk, between the lines, and on-mission.

  • Understanding Operational Compliance: Key Questions Answered

    Operational Compliance

    Organizations investing in compliance often have legitimate questions about how the Operational Compliance Model relates to their existing frameworks, tools, and investments. These questions reflect the reality that most organizations have already implemented various compliance approaches—ISO management standards, GRC platforms, COSO frameworks, Three Lines of Defence models, and others. Rather than viewing these as competing approaches, the Operational Compliance Model serves as an integrative architecture that amplifies the value of existing investments while addressing fundamental gaps that prevent compliance from achieving its intended outcomes. The following responses explore how Operational Compliance works with, enhances, and elevates traditional approaches to create the socio-technical systems necessary for sustainable mission and compliance success.

    Responses to Questions

    "Why can I not use an ISO management systems standard?"

    ISO management standards are excellent for procedural compliance but fall short of achieving operational compliance. Operational Compliance defines a state of operability in which all essential compliance functions, behaviours, and interactions exist and perform at levels necessary to create the outcomes of compliance.

    The fundamental limitation is that ISO standards focus on building the parts of a system (processes, procedures, documentation) rather than the interactions between parts that create actual outcomes. Companies usually run out of time, money, and motivation to move beyond implementing the parts of a system to implementing the interactions, which is essential for a system to be considered operational. ISO standards help you pass audits, but the Operational Compliance Model helps you achieve the outcomes those audits are supposed to ensure—better safety, security, sustainability, quality, and stakeholder trust.

    "Doesn't GRC cover this, at least for IT obligations?"
GRC (Governance, Risk, and Compliance) platforms are tools, not operational models. Traditional "procedural compliance" is based on a reactive model that sits apart from, and is not embedded within, the business. Most GRC implementations create sophisticated reporting systems but don't address the fundamental challenge: how to make compliance integral to value creation.

The Operational Compliance Model recognizes that obligations arise from four types of regulatory design (micro-means, micro-ends, macro-means, macro-ends) that each require different approaches. GRC tools can support this model, but they can't create the socio-technical processes that actually regulate organizational effort toward desired outcomes.

"I already have dozens of frameworks"

This objection actually proves the need for the Operational Compliance Model. Having dozens of frameworks is precisely the problem—it creates framework proliferation without operational integration. Lean TCM incorporates an Operational Compliance Model that supports all obligation types and commitments using design principles derived from systems theory and modern regulatory designs.

The Operational Compliance Model doesn't replace your frameworks; it provides the integrative architecture to make them work together as a system rather than as competing silos. It's the difference between having a collection of car parts and having a functioning vehicle.

"What about COSO? Doesn't it already provide an overarching framework?"

COSO is excellent for internal control over financial reporting but was designed primarily for audit and governance purposes.
The Operational Compliance Model addresses several limitations of COSO:

• Scope - COSO focuses on control activities; Operational Compliance focuses on outcome creation
• Integration - COSO's five components work within compliance functions; Operational Compliance embeds compliance into operations
• Regulatory design - COSO assumes one type of obligation; Operational Compliance handles four distinct types that require different approaches
• Uncertainty - COSO manages risk; Operational Compliance improves the probability of success in uncertain environments

COSO can be a component within the Operational Compliance Model, but it is insufficient by itself to achieve operational compliance.

"What about the audit Three Lines of Defence?"

The Three Lines of Defence model is fundamentally reactive—it's designed to catch problems after they occur. Operational Compliance is based on a holistic and proactive model that defines compliance as integral to the value chain.

The limitations of Three Lines of Defence:

• Line 1 (operations) sees compliance as separate from its real work
• Line 2 (risk/compliance) monitors rather than enables performance
• Line 3 (audit) confirms what went wrong after the fact

The Operational Compliance Model collapses these artificial lines by making compliance inherent to operational processes. Instead of three defensive lines, you get one integrated system where compliance enables rather than constrains performance.

The Essential Difference

For compliance to be effective, it must first be operational—achieved when all essential compliance functions, behaviours, and interactions exist and perform at levels necessary to create the outcomes of compliance. The majority of existing frameworks and models serve important functions, but they operate within the procedural compliance paradigm. The Operational Compliance Model represents a paradigm shift from compliance as overhead to compliance as value creation—from meeting obligations to achieving outcomes.
