
  • When Automation Hides Waste

Applying Lean to Digital Waste

The digital transformation has fundamentally changed how work gets done, but it has also created a new challenge for operational excellence. While LEAN methodology has long focused on eliminating waste in manufacturing and physical processes, the rise of digital operations has introduced new forms of waste that are often harder to see and understand. Today's organizations increasingly operate through layers of software, automation, and algorithms that obscure the reality of what's actually happening in their processes. This digital opacity creates a fundamental problem: you cannot improve what you cannot see. As more organizations cross the threshold where digital processes outnumber physical ones, the need to identify and eliminate digital waste becomes critical to maintaining operational excellence.

The Visibility Problem in Digital Operations

Speed, efficiency, and effectiveness are not synonymous. When organizations prioritize doing things faster through automation, they often inadvertently conceal the very waste that LEAN methodology seeks to eliminate—over-processing, excessive movement, and other forms of operational inefficiency. More critically, automation buries operational reality within layers of code, making processes invisible to the stakeholders and decision-makers who need to understand them. What actually happens becomes locked away in digital black boxes, inaccessible to those responsible for improvement and oversight.

The rise of AI has both amplified this challenge and brought it into sharp focus. As organizations face new obligations for transparency and explainability in their AI systems, they're discovering that the visibility problem extends far beyond artificial intelligence. This need for transparency was always essential once we entered the digital era—we simply didn't recognize its urgency. The critical difference today is that many organizations have crossed a threshold where digital processes outnumber physical ones. While this shift doesn't apply to every industry, it represents the new reality for a significant portion of the business world.

This makes the LEAN principle of visibility—the practice of "walking the Gemba" to see what's actually happening—more important than ever. You cannot improve what you cannot see, and in our increasingly digital world, automation has made it easier to operate blindly. The challenge isn't just maintaining visibility; it's actively creating it in environments where the real work happens behind screens rather than on factory floors.

The Eight Digital Wastes

To address digital waste, we must first identify it. Here are the eight traditional LEAN wastes translated into their digital equivalents:

1. Overproduction → Over-Engineering/Feature Bloat: Building more features than users need or want. Creating complex solutions when simple ones would suffice, or developing features "just in case" without validated demand.

2. Waiting → System Delays/Loading Times: Users waiting for pages to load, API responses, system processing, or approval workflows. Also includes developers waiting for builds, deployments, or code reviews.

3. Over-processing → Excessive Processing/Computations: Using more computational power than necessary to achieve desired outcomes. This includes deploying large language models for simple text tasks that simpler algorithms could handle, running complex AI models when rule-based systems would suffice, or using resource-intensive processing when lightweight alternatives exist. The massive compute requirements of modern AI often exemplify this waste.

4. Inventory → Technical Debt: Accumulated shortcuts, suboptimal code, outdated dependencies, architectural compromises, and deferred maintenance that slow down future development and increase system fragility. This includes both intentional debt (conscious trade-offs) and unintentional debt (poor practices that compound over time).

5. Motion → Inefficient User Interactions: Excessive clicks, complex navigation paths, switching between multiple applications to complete simple tasks, or poor user interface design that requires unnecessary user movements and interactions.

6. Defects → Bugs/Quality Issues: Software bugs, data corruption, system errors, security vulnerabilities, or any digital output that doesn't meet requirements and needs to be fixed or reworked.

7. Unused Human Creativity → Underutilized Digital Capabilities: Not leveraging automation opportunities, failing to use existing system capabilities, or having team members perform manual tasks that could be automated. Also includes not utilizing data insights or analytics capabilities.

8. Transportation → Non-Value-Added Automation: Automating processes that don't actually improve outcomes or create value—like automated reports no one reads, robotic processes that move data unnecessarily between systems, or AI features that complicate rather than simplify user workflows. The automation itself becomes the waste, moving work around without improving it.

Apply LEAN to Reduce Digital Waste

Understanding digital waste is only the first step. Organizations must actively work to make their digital operations as transparent and improvable as physical processes once were. Here's how to apply these concepts:

- Create Digital Gemba Walks: Establish regular practices to observe digital processes in action. This might include reviewing system logs, monitoring user journeys, analyzing performance metrics, and sitting with users as they navigate your systems.

- Implement Visibility Tools: Deploy monitoring, logging, and analytics that make digital processes observable. Create dashboards that show not just outcomes, but the steps and resources required to achieve them (a minimal sketch follows this article).

- Question Automation: Before automating any process, ask whether the automation truly adds value or simply moves work around. Ensure that automated processes remain observable and improvable.

- Address Technical Debt Systematically: Treat technical debt as you would physical inventory—track it, prioritize its reduction, and prevent its accumulation through better practices.

- Optimize for Actual Value: Regularly audit your digital systems to identify over-processing, unnecessary features, and inefficient interactions. Focus computational resources on tasks that truly benefit from them.

- Design for Transparency: When building new digital processes, make observability and explainability first-class requirements, not afterthoughts.

The path to eliminating digital waste begins with increased transparency. Organizations must prioritize making their digital processes observable and understandable, creating the visibility necessary to identify, measure, and systematically eliminate these new forms of waste. Only through this enhanced transparency can we unlock the true potential of digital operations while maintaining the continuous improvement capabilities that drive lasting operational excellence.
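To make the "Implement Visibility Tools" step concrete, here is a minimal sketch of instrumenting a digital process step so it becomes observable, emitting a structured log event with each step's duration and outcome that a dashboard could consume. This is my illustration, not from the article; the invoice-validation step and field names are hypothetical assumptions.

```python
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("digital-gemba")

def observable(step_name):
    """Make a process step visible: emit a structured log event with
    the step's duration and outcome, suitable for feeding a dashboard."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            status = "error"
            try:
                result = func(*args, **kwargs)
                status = "ok"
                return result
            finally:
                log.info(json.dumps({
                    "step": step_name,
                    "status": status,
                    "duration_ms": round((time.perf_counter() - start) * 1000, 2),
                }))
        return wrapper
    return decorator

@observable("validate_invoice")  # hypothetical step in a digital process
def validate_invoice(invoice):
    # Real validation work would happen here.
    return {"id": invoice["id"], "valid": True}

if __name__ == "__main__":
    validate_invoice({"id": 42})
```

The design choice worth noting: visibility is attached to the process step itself, so every execution leaves a trace, rather than relying on after-the-fact reconstruction of what the automation did.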

  • Management PDCA - Hero or Zero?

If you are responsible for management systems, you have most likely noticed the elevation of continuous improvement, and specifically the use of a Plan-Do-Check-Act (PDCA) cycle, in related standards, guidelines, and even regulations. Here are a few examples: API RP 1173, ISO 9001, ISO 22301.

The use of improvement cycles has been effective in specific contexts and areas. So it's not a surprise to see PDCA (or similar) cycles also being applied to management programs and systems. However, guidance on what PDCA is meant to do, and how it is to work at the systems level, has been sparse. At a macro level the same acronym (PDCA) is being used; however, the details of what is to happen within each step are vague, and differ from standard to standard. In some cases PDCA is being used as a process to build the system, as if it were a project methodology. In most cases PDCA has been re-defined as the model for the system processes within a given standard. It looks like PDCA is used as magical pixie dust sprinkled wherever things are managed.

If you are confused by all of this, you are not alone. Research has shown that the inconsistent use of PDCA has contributed to the failure of not only what we might call "Management PDCAs" but traditional process improvement as well. It is difficult for organizations to get the benefits from PDCA when it is being re-defined, co-opted, and misapplied.

In this article we take a look at "Management PDCAs" and how these compare with traditional continuous improvement cycles. We will try to clear up some of the confusion and find out if Management PDCAs are going to be a hero or end up as a zero – not amounting to very much and perhaps making things worse.

History of PDCA

There is much written and available on the topic of continuous improvement. PDCA is not new and has evolved over the years. Here are a few of the familiar ones you probably have heard or know about:

- Deming Wheel
- Shewhart Cycle
- Japanese PDCA
- PDSA
- PDCA / A3 (Lean)
- DMAIC (Six Sigma)
- Kaizen / Toyota Kata
- Observe-PDCA
- OODA
- Build-Measure-Learn (Lean Startup)
- And others

At a basic level, PDCA is a model for continuous improvement that uses iterations to optimize towards a goal. In practice, focusing on smaller improvements with frequent iterations accelerates learning and establishes behaviours that build towards an improvement culture. When this is done well it results in a virtuous cycle where both action and behaviours reinforce each other, delivering more and better improvements over time. No wonder management standards and regulatory bodies are looking at harnessing the power of PDCA – it has been a real superpower.

What all these continuous improvement cycles have in common is that they are meta-processes that stand outside of what you want to improve. You can in theory (practice may be different) apply them to improving tasks, processes, systems, programs, and many other things. Each encapsulates a methodology where the specifics of what happens inside the cycle depend on what you want to improve. For example, some are focused on problem solving, while others focus on discovering better ways to achieve a particular target or goal. The majority of them are most effective when applied to incremental changes at the process level, and less so for system-wide improvements.

What is the Problem with Management PDCA?

Let's now take a look at how PDCA is being used by many management systems standards and guidelines. We will consider:

- PDCA as a project methodology
- PDCA as a systems model
- PDCA as a new variant for continuous improvement
- PDCA as a replacement for CAPA (corrective actions / preventative actions)

PDCA as a project methodology

Many have adopted the practice of viewing all management processes through the lens of P-D-C-A. While PDCA may define a natural process for management where we plan the work, work the plan, and then check to make sure the plan was done, this is not the same as continuous improvement and what PDCA was intended for. As an example, ISO defines PDCA in the following way:

PDCA is a tool that can be used to manage processes and systems.

- P-Plan: set the objectives of the system and processes to deliver results ("what to do" and "how to do it")
- D-Do: implement and control what was planned
- C-Check: monitor and measure processes and results against policies, objectives and requirements, and report results
- A-Act: take actions to improve the performance of processes

PDCA operates as a cycle of continual improvement, with risk-based thinking at each stage.

On paper this sounds good, but this is a form of linear thinking. In this case PDCA has been flattened out to form a sequence of steps. There is no improvement cycle, and the only activity to improve is specified in the ACT step, not the DO step where it happens in traditional PDCA.

PDCA as a system model

Several management system standards have conceptualized their management activities as part of an overarching PDCA cycle. In essence, PDCA has become a system cycle and not an improvement cycle in the traditional sense. To help us understand this we need to consider the difference between management systems and management programs. At a high level, when you want consistency you use a system; when you want to change something, you launch a program.

Management systems, which is what ISO and others provide standards for, are meant to maintain state, which means consistently achieving a specific level of performance with respect to such things as quality, safety, security, and so on. This is accomplished by monitoring processes and taking action to correct for deviations in whatever way is defined. Management programs, on the other hand, are used to change state to achieve new levels of performance. This is a feed-forward control loop that adjusts system capabilities to achieve higher standards of effectiveness. This fits closer to the notion of continuous improvement towards better outcomes rather than correction of deviation from standard.

Both feed-back and feed-forward processes can benefit from PDCA, but only partially. The benefit of iterations only occurs as often as "defects" are discovered or "standards" are raised. This limits the scope of improvements to those events, and mostly to the reactive side of the equation, when risk has already become an issue.

PDCA as a new variant

When standards envision their systems as improvement cycles they are creating a new variation of PDCA that works differently than traditional PDCA cycles. The processes that are linked to Plan-Do-Check-Act steps are intended to operate simultaneously. For example, in the case of API RP 1173 Pipeline Safety Management System, you never stop DO'ing operational controls or CHECK'ing safety assurance. There is no sequencing of steps, or iteration, happening here. Instead, PDCA is used to describe a function that the set of processes performs. This is different than conducting a PDCA followed by another PDCA and then another until you achieve your goal.

PDCA as a replacement for CAPA

Continuous improvement in the form of PDCA has been placed on the reactive side and embedded in the system mostly as a replacement for CAPA. All too often I have seen PDCA used to define a process for actions. Again, this is linear thinking applied to managed work. There is no iteration, no striving towards a goal, no incremental improvement.

From Zero to Hero

What seems to have happened is that we have a conflation of improvement strategies all under the umbrella of PDCA. It's no wonder there has been confusion and a lack of success. For PDCA to be more than words on a page (or magical pixie dust) it should follow the principles defined by each methodology. Failure to follow these principles has been reported as a large contributor (perhaps the largest) to why PDCA has not been effective. With respect to Management PDCAs, these should:

- Not be used as a process to build a system. PDCA is intended to improve the system after it has become operational. PDCA is a cycle that is repeated, not a linear sequence of project steps (see the sketch at the end of this article). There are other methodologies to establish systems, such as Lean Startup.

- Not be used as a replacement for CAPA. PDCA should instead be a proactive process for continuous improvement focused on staying ahead of risk and on prevention, not only on reacting to incidents.

- Be part of the system but not the system itself. Mapping management system processes to PDCA steps misrepresents management system dynamics, which will lead to ineffective implementation and operations.

- Be repeated as often as possible to develop habits and leverage iterative improvements. The power of PDCA comes from proactive actions reinforced by proactive behaviours to establish a virtuous cycle. What most have instead is a vicious cycle – reactive actions reinforced by reactive behaviours.

Where best to use PDCA?

Continuous improvement needs to occur across all levels, but at a minimum it should be used to improve processes (loop 1) and to improve systems (loop 2):

Loop 1: At the process level, PDCA should focus on improving efficiencies and consistency. This is where Lean practices are most useful. Process-level improvements tend to utilize existing capabilities to reduce waste and improve alignment. These improvements can be accomplished using frequent incremental changes over time.

Loop 2: At the program level, PDCA would focus on improving the effectiveness of a system. This could be called a Program PDCA. This should follow approaches that utilize experimentation and system-level interventions. System-level improvements benefit from step-wise improvements that elevate capabilities to effect better outcomes. It is more difficult to incrementally improve through a maturity curve.

What do you think?
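To make the contrast above concrete, the difference between a repeated improvement cycle and a flattened one-pass sequence of steps, here is a minimal sketch. It is my illustration, not from any standard; the yield metric, target, and countermeasure stub are hypothetical.

```python
import random

process_state = {"yield": 0.80}  # hypothetical process metric (first-pass yield)
TARGET = 0.90

def apply_countermeasure():
    """Stand-in for a small, reversible process change (hypothetical):
    some experiments help, some don't."""
    return random.uniform(-0.01, 0.03)

# Traditional PDCA: a repeated improvement cycle, not a one-pass
# Plan -> Do -> Check -> Act sequence. Cycles continue until the goal
# is reached, and each cycle keeps only the changes that worked.
cycle = 0
while process_state["yield"] < TARGET and cycle < 50:
    cycle += 1
    planned_change = apply_countermeasure()               # PLAN: hypothesize a small change
    trial_yield = process_state["yield"] + planned_change # DO: run the experiment
    if trial_yield > process_state["yield"]:              # CHECK: compare against baseline
        process_state["yield"] = trial_yield              # ACT: standardize the gain
    # else ACT: abandon the change and plan the next experiment

print(f"Reached {process_state['yield']:.2%} after {cycle} PDCA cycles")
```

Each pass through the loop is a complete Plan-Do-Check-Act iteration; the flattened "project" reading of PDCA criticized above would run this body exactly once.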

  • Compliance Chain Analysis

Harvard Business School's Michael E. Porter introduced the concept of a value chain in his 1985 book, "Competitive Advantage: Creating and Sustaining Superior Performance." In it he writes:

"Competitive advantage cannot be understood by looking at a firm as a whole. It stems from the many discrete activities a firm performs in designing, producing, marketing, delivering and supporting its product. Each of these activities can contribute to a firm's relative cost position and create a basis for differentiation."

Porter believed that competitive advantage comes from: (1) cost leadership, and (2) differentiation. Value chain analysis (VCA) helps to understand how both affect margin. Value chain analysis considers the contribution of an organization's activities towards the optimization of margin, where margin is an organization's ability to deliver a product or service for which the customer is willing to pay more than the sum of the costs of all activities in the value chain.

Porter argues that a company can improve its margin by the way "primary activities" are chained together and how they are linked to supporting activities. He defines "primary activities" as those that are essential to adding value and creating competitive advantage. Secondary activities assist the primary activities to maintain or enhance the product's value by means of cost reduction or value improvement. This is the domain of LEAN and operational excellence. An example value chain along with general processes is shown in the following diagram:

Value Chain Analysis

A Compliance Perspective

In recent years, compliance has increased in both complexity and in what regulation and industry standards demand. It is, therefore, worth taking another look at the value chain in terms of how compliance should now be considered.

Porter includes the quality assurance (QA) function as part of the "Firm Infrastructure." At a basic level, this places QA outside of the core processes, considered as a means to improve value and reduce cost. The latter is the more common emphasis, as many organizations view quality and other compliance functions as an overhead that needs to be reduced.

For the purpose of this discussion, we will use the same primary activities from the typical value chain. However, infrastructure activities are expanded to include other compliance activities such as quality, safety, environmental, and ethics & compliance. Compliance activities can in principle contribute to value improvement as well as cost reduction, although the effects may not be direct or immediate. A key role of compliance is to drive down risk, which as we know has effects that may be delayed or mitigated. Therefore, instead of margin, it might be more useful to consider the level of risk as the measure to be optimized.

It is common for compliance to be organized into isolated functions that are separate from the primary activities. However, we know that these programs are not effective when implemented in this way. Instead, they are more effective when seen as horizontal capabilities that cross the entire value chain. The following diagram illustrates how a compliance chain can be constructed using Porter's value chain as a model:

Compliance Chain Analysis

By analyzing the relationship between compliance and primary activities (including secondary), it is possible to gain a better understanding of the following:

- Cost of compliance and non-compliance
- How and to what degree compliance affects risk
- Value of compliance (cost avoidance, increased trust, and reduction in defects, incidents, fatalities, financial losses, etc.)

Strategies aligned with competitive advantages can then be applied to improve margin as well as drive down overall risk:

Cost Advantage

Porter argued that there are 10 drivers that improve cost advantage:

1. Create greater economies of scale
2. Increase the rate of organizational learning
3. Improve capacity utilization
4. Create stronger linkages between activities
5. Develop synergies between business units
6. Look to increase vertical integration
7. Improve the timing of market entry
8. Alter the firm's strategy regarding cost or differentiation leadership
9. Change the geographic location of the activities
10. Look to address institutional factors such as regulation and tax efficiency

Differentiation Advantage

Porter further identifies 9 factors to promote unique value:

1. Changing policies and strategic decisions
2. Improving linkages among activities
3. Altering market timing
4. Altering production locations
5. Increase the rate of organizational learning
6. Create stronger linkages between activities
7. Develop relationships between business units
8. Change the scale of operations
9. Look to address institutional factors such as regulation and product requirements

Compliance Advantage

We suggest 10 principles to drive compliance advantage:

1. Keep all your promises
2. Take ownership for all your compliance obligations (required and voluntary)
3. Develop programs and systems that always keep you in compliance
4. Incrementally and continuously improve your compliance
5. Make compliance an integral part of your performance and productivity processes
6. Use proactive strategies to always stay in compliance
7. Monitor in real-time the status of, and your ability to stay in, compliance
8. Audit outcomes of your compliance programs, not activity
9. Develop a learning culture around compliance
10. Always strengthen your ability to easily meet and maintain compliance

Summary

Total Value Chain Analysis

Value chain analysis (VCA) has been used successfully to help companies create both cost and differentiation advantage to improve their margins. In today's highly regulated marketplace, tools like VCA can also be used to create a compliance advantage to decrease overall risk. While this may not result in immediate cost reduction, it can avoid future costs and differentiate a company from its competitors by achieving higher quality, safer operations, and improved trust from its stakeholders.

  • Which is Better for AI Safety: STAMP/STPA or HAZOP/PHA?

STAMP/STPA and traditional PHA methods like HAZOP represent fundamentally different safety analysis philosophies. STAMP/STPA views accidents as control problems in complex socio-technical systems, focusing on hierarchical control structures and unsafe control actions that can occur even when all components function properly. In contrast, HAZOP operates on the principle that deviations from design intent cause accidents, using systematic guide words (No, More, Less, etc.) applied to process parameters to identify potential failure scenarios (a toy sketch of this mechanic follows this article). Traditional PHA methods like FMEA and What-If analysis similarly focus on component failures and bottom-up analysis approaches.

Research demonstrates these methodologies are complementary rather than competitive. Studies show STPA identifies approximately 27% of hazards missed by HAZOP, while HAZOP finds about 30% of hazards that STPA overlooks. STAMP/STPA excels at analyzing software-intensive systems, complex organizational interactions, and novel technologies where traditional failure-based analysis falls short. HAZOP proves to be better for traditional process systems with well-defined physical parameters and established operational procedures, benefiting from decades of industrial experience and mature tooling.

For AI safety analysis, STAMP/STPA appears better suited to AI's systemic and emergent risks, but the choice becomes more nuanced when considering AI's integration into traditional process systems. While STPA naturally addresses algorithmic decision-making, human-AI interactions, and emergent behaviours that traditional failure analysis struggles with, AI increasingly operates within conventional industrial processes where HAZOP's systematic parameter analysis remains valuable. The real challenge lies in analyzing AI-augmented process control systems—where an AI controller making real-time decisions about flow rates or temperatures requires both STPA's systems perspective to understand the AI's control logic and HAZOP's structured approach to analyze how AI decisions affect physical process parameters.

Rather than viewing these as competing methodologies, the most thoughtful approach recognizes that AI safety analysis may require STPA for understanding the AI system itself, while leveraging HAZOP's proven framework for analyzing how AI decisions propagate through traditional process systems—a hybrid necessity as AI becomes embedded throughout industrial infrastructure.
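As a toy illustration of the guide-word mechanic described above (my sketch, not part of the original comparison), here is how HAZOP's systematic pairing of guide words with process parameters can be enumerated before a review team assesses causes and consequences. The guide words and parameters shown are a small, hypothetical subset.

```python
from itertools import product

# A small, hypothetical subset of HAZOP guide words and process parameters.
GUIDE_WORDS = ["No", "More", "Less", "Reverse", "Other than"]
PARAMETERS = ["flow", "temperature", "pressure"]

# HAZOP systematically pairs each guide word with each parameter to
# produce candidate deviations from design intent; the review team then
# works through each one to identify causes, consequences, and safeguards.
deviations = [f"{gw} {param}" for gw, param in product(GUIDE_WORDS, PARAMETERS)]

for d in deviations[:6]:
    print(d)   # e.g. "No flow", "No temperature", "More flow", ...
print(f"... {len(deviations)} candidate deviations to review in total")
```

The exhaustive pairing is the method's strength for well-defined physical parameters, and also why it struggles with AI behaviours that have no fixed parameter list.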

  • You're Not Managing Risk—You're Just Cleaning Up Messes

Imagine you're a ship captain navigating treacherous waters. Most captains rely on their damage control teams—when the hull gets breached, they spring into action, pumping out water and patching holes. That's feedback control, and while it's essential, it's not what separates legendary captains from the rest.

Risk Management is a Feedforward Process

The best captains? They're obsessed with their barometer readings, wind patterns, and ocean swells before the storm hits. They're tracking leading indicators—subtle changes that whisper of trouble long before it screams. That's feedforward control, and it's the secret that transforms risk management from crisis response into strategic advantage.

Here's the truth that will revolutionize how you think about risk: Risk management is a feedforward process. Everything else is just damage control.

Walk into any company's "risk management" meeting, and you'll see the problem immediately. They're not managing risk at all—they're managing the aftermath of risks that already materialized. These meetings are filled with lagging indicators—the equivalent of counting holes in your ship's hull after the storm has passed.

True risk management is feedforward by definition. It's about reading the environment, anticipating what's coming, and adjusting course before the storm hits. When you're reacting to problems that already happened, you've left risk management behind and entered crisis response.

This means fundamentally changing what you track. You measure leading indicators (a monitoring sketch follows this article):

- Employee engagement scores before they become turnover rates
- Customer complaint sentiment before it becomes churn
- Process deviation patterns before they become quality failures
- Market volatility signals before they become financial losses
- Compliance inoperability before it becomes violations

Organizations that make this shift see remarkable transformations in their risk posture by changing their measurement focus from "How badly did we get hit?" to "What's building on the horizon?"

Consider how this works in practice: instead of tracking injury rates (lagging), organizations can track near-miss reporting frequency and planned change frequency (leading). This approach often leads to dramatic reductions in actual injuries—not because teams get better at treating injuries, but because they get better at preventing the conditions that create them.

True risk management isn't just about reading storms or cleaning up after them—it's about creating the conditions for smooth sailing. What leading indicators is your organization ignoring while it counts yesterday's damage?
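To show what feedforward measurement can look like in practice, here is a minimal sketch of a leading-indicator review that raises attention before an indicator becomes an outcome. The indicator names and thresholds are hypothetical assumptions, not prescriptions from the article.

```python
# Feedforward monitoring: act on leading indicators before they become
# lagging outcomes. Indicator names and thresholds are hypothetical.
LEADING_INDICATORS = {
    "near_miss_reports_per_month": {"value": 4, "warn_below": 10},
    "employee_engagement_score": {"value": 62, "warn_below": 70},
    "process_deviation_rate": {"value": 0.08, "warn_above": 0.05},
}

def feedforward_review(indicators):
    """Return early-warning signals: trends that precede incidents,
    churn, or violations, rather than counts of damage already done."""
    warnings = []
    for name, data in indicators.items():
        if "warn_below" in data and data["value"] < data["warn_below"]:
            warnings.append(f"{name} trending low: {data['value']}")
        if "warn_above" in data and data["value"] > data["warn_above"]:
            warnings.append(f"{name} trending high: {data['value']}")
    return warnings

for signal in feedforward_review(LEADING_INDICATORS):
    print("EARLY WARNING:", signal)  # adjust course before the storm hits
```

Note that low near-miss reporting is treated as a warning: fewer reports usually means weaker observation, not fewer hazards, which is exactly the kind of inversion a lagging-only view misses.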

  • What Is Your MOC Maturity Index?

MOC Maturity Index

Change can be (and often is) a significant source of new risk. As a result, many companies have implemented the basics when it comes to Management of Change (MOC). This may be enough to pass an audit, but it is not enough to effectively manage the risks due to asset, process, or organizational change. For that you need processes that are adequately scoped, have clear accountability, and that effectively manage risks during and after the change is implemented. You also need to properly measure both the performance and effectiveness of the MOC process to know whether or not: (1) there is sufficient capacity to manage planned changes, and (2) risks are properly mitigated.

We created a quick assessment for you to get an idea of how well you are doing. You can take this free assessment by clicking here.

  • LEAN - Lost in Translation

There are times when leadership sets its gaze on operations in order to better delight customers, increase margins, or improve operational excellence. For many companies this gaze has translated into a journey of continuous improvement – the playground for LEAN. All across the world companies have embraced LEAN principles and practices in almost every business sector. In many cases, LEAN initiatives have produced remarkable results and for some created a new "way of organizational life." Continuous improvement has become a centring force, a means for aligning a company's workforce with management objectives.

With this success, the mantra of continuous improvement has expanded, along with the LEAN tools and practices, to other areas of the business such as quality, safety, environmental, regulatory and other compliance functions. However, in these cases, LEAN has not helped as much as it could and in fact in some cases has made things worse. The problem has not been with the translation of Japanese words such as "Gemba", "Kaizen", "Muda", "Muri", and others. Instead, the problem is with the translation of LEAN itself.

  • Closing the Compliance Effectiveness Gap

Compliance Effectiveness Gap

Compliance has been heading in a new direction over the last decade. It's moving beyond paper and procedural compliance towards performance and operational compliance. This change is necessary to accommodate modern risk-based regulatory designs, which elevate outcomes and performance over instructions and rules. Instead of checking boxes, compliance needs to become operational, which is something that LEAN, along with Operational Excellence principles and practices, helps to establish.

As LEAN endeavours to eliminate operational waste, those who are accountable for mission success have noticed that such things as defects, violations, incidents, injuries, fines, and misconduct are also wastes that take away from the value businesses strive to create. This waste results predominantly from a misalignment between organizational values and operational objectives. You can call this a failure of business integrity, which at its core is a lack of effective regulation – The Compliance Effectiveness Gap.

Total Value Chain

The Problem with Compliance

In a nutshell, compliance should ensure mission success, not hinder it. Over the years compliance has come alongside the value chain in the form of programs associated with safety, security, sustainability, quality, legal adherence, ethics, and now responsible AI. However, many organizations find that these programs operate reactively, separately, and disconnected from the purpose of protecting and ensuring mission success – the creation of value. They are misaligned not only in terms of program outcomes, but also with respect to business value.

This creates waste in the form of duplication of effort, technology, tools, and executive attention. However, perhaps more importantly, the lack of effectiveness ends up creating the conditions for non-conformance, defects, incidents, injuries, legal violations, misconduct, and business uncertainty. Closing The Compliance Effectiveness Gap is now a strategic objective for organizations that are looking to maximize value creation.

A Program by a New Name

To prioritize this objective, we have renamed our advanced program from "The Proactive Certainty Program™" to "The Total Value Compliance Program™". This program builds on our previous work and adds a Value Operational Assessment to identify the operational capabilities needed to close The Compliance Effectiveness Gap – the gap between organizational values and operational objectives. With greater alignment (a measure of integrity), uncertainty decreases, risk is reduced, waste is eliminated, and value is maximized.

The First Step

The first step toward closing The Compliance Effectiveness Gap is a:

TOTAL VALUE COMPLIANCE AUDIT

This is not a traditional audit. Instead, this is a 10-week participatory engagement (4 hours per week investment), where compliance program & obligation owners, managers, and teams (depending on the package chosen) will actively engage in learning, evaluation, and development of a detailed roadmap to compliance operability – compliance that is capable of being effective.

The deliverables you receive include:

- Executive / Management Education (Operational Compliance)
- Integrative Program Evaluation (Values Operations Alignment)
- Total Value Compliance Roadmap (Minimal Viable Compliance Operability)

The compounding value you will enjoy:

- Turning compliance from a roadblock into a business accelerator
- Aligning your values with your operations for better business integrity
- Creating competitive advantage, and greater stakeholder trust
- Enabling innovation and productivity instead of hindering them

Are you ready to finally close The Compliance Effectiveness Gap?

  • Compliance Operability Assessment Using Total Value Chain and Compliance Criticality Analysis

Why Is This Assessment Necessary?

For compliance to be effective, it must generate desired outcomes. These outcomes may include reducing violations and breaches, minimizing identity thefts, enhancing integrity, and ultimately fostering greater stakeholder trust. Realizing these benefits requires compliance to function as more than just the sum of its parts.

Unfortunately, many organizations focus solely on individual components rather than the whole system – they see the trees but miss the forest, or concentrate on controls instead of the overall program. Too often, compliance teams work hard and hope for the best. While hope is admirable, it's an inadequate strategy for ensuring concrete outcomes.

To elevate above merely a collection of parts, compliance needs to operate as a cohesive system. In this context, operability is defined as the extent to which the compliance function is fit for purpose, capable of achieving compliance objectives, and able to realize the benefits of being compliant.

The minimum level of compliance operability is achieved when: All essential functions, behaviours, and interactions exist and perform at levels necessary to create the intended outcomes of compliance.

This defines what is known as Minimal Viable Compliance (MVC), which must be reached, sustained, and then advanced to realize better outcomes (a sketch of this idea follows this article). For this to occur, we need a comprehensive approach. We need:

- Governance to set the direction
- Programs to steer the efforts
- Systems to keep operations between the lines
- Processes to help stay ahead of risks

All of these elements must work together as an integrated whole.
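As an illustration of operability as a property of the whole system, here is a minimal sketch of a Minimal Viable Compliance (MVC) check. The function names, interaction pairs, and threshold are hypothetical assumptions, not part of the assessment itself.

```python
# Minimal Viable Compliance (MVC): all essential functions, behaviours,
# and interactions must exist AND perform at required levels. Functions,
# interactions, and threshold values below are hypothetical examples.
REQUIRED_LEVEL = 0.7

compliance_functions = {
    "governance_direction": 0.9,   # governance sets the direction
    "program_steering": 0.8,       # programs steer the efforts
    "system_regulation": 0.6,      # systems keep operations between the lines
    "risk_anticipation": 0.75,     # processes stay ahead of risk
}

# Interactions matter as much as parts: pairs of functions that must
# work together are scored as well (hypothetical subset).
interactions = {
    ("governance_direction", "program_steering"): 0.8,
    ("program_steering", "system_regulation"): 0.5,
}

def mvc_reached(functions, interactions, threshold=REQUIRED_LEVEL):
    """MVC holds only if every part and every interaction performs."""
    weakest = min(list(functions.values()) + list(interactions.values()))
    return weakest >= threshold

print("MVC reached?", mvc_reached(compliance_functions, interactions))
# -> False: the system is only as operable as its weakest function or
#    interaction, not the average of its parts.
```

The design choice worth noting: operability is gated on the weakest element rather than an average, reflecting the point above that the interactions between parts, not just the parts, create outcomes.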

  • AI Engineering: The Last Discipline Standing

Software engineering and related domains are undergoing their most dramatic transformation in decades. In discussions I have had over the last year, IT product companies appear to be moving towards an AI-first model. As AI capabilities rapidly advance, a stark prediction is emerging from industry leaders: AI Engineering may soon become the dominant—perhaps only remaining—engineering discipline in many IT domains.

How Product Teams Are Already Changing

Looking at how IT technology companies are adapting to AI uncovers an interesting pattern: teams of three to five people are building products that traditionally required much larger engineering groups. The traditional model—where product managers coordinate with software engineers, UI designers, data analysts, DevOps specialists, and scrum leaders—is being replaced by something fundamentally different. Instead, these companies operate with product managers working directly with AI Engineers who can orchestrate entire development lifecycles.

These professionals are learning to master a new set of skills: AI system design (architecting intelligent solutions from requirements), AI integration (embedding capabilities seamlessly into products), and AI operations (managing and maintaining AI-powered systems at scale). Companies like Vercel, Replit, and dozens of Y Combinator startups demonstrate this model in action daily. What once required full engineering teams now happens through sophisticated prompt engineering and AI orchestration.

A Pattern We've Seen Before

This transformation feels familiar because I lived through something similar in integrated circuit manufacturing. In the early days, I worked for an integrated circuit manufacturer in Canada that at first designed circuits by hand, built prototypes in physical labs, and painstakingly transferred designs to mylar tape for silicon fabrication. This process required teams of specialists: layout technicians, CAD operators, lab engineers—each role seemingly indispensable.

Over the years, each function was improved as computer technology was adopted. We started using circuit simulation, computer-aided design with automated design rule checking, and wafer fabrication layout tools. This is not unlike how organizations are now adopting AI to improve individual tasks and functions.

Then silicon compilers arrived and changed everything overnight. Suddenly, engineers could create entire circuit designs by simply describing what the circuit should accomplish using Hardware Description Languages like VHDL and Verilog. The compiler handled layout optimization, timing analysis, and fabrication preparation automatically. The entire process could be automated—from ideation to the fab in one step. Entire job categories vanished, but the engineers who adapted became exponentially more productive.

ONE-SPRINT MVP

Today's product development is following a similar pattern. AI Engineers translate application requirements through sophisticated prompts into working minimum viable products (MVPs)—the one-sprint MVP. This approach allows fewer people to deliver working solutions faster while supporting rapid iteration cycles that make even Agile development methodologies feel glacially slow.

The Tools Driving This Shift

The evidence surrounds us. GitHub Copilot and Cursor generate entire codebases from natural language descriptions. Vercel's V0 creates production-ready React components from simple prompts. Claude Artifacts builds functional prototypes through conversation. Replit Agent handles full-stack development tasks autonomously. These aren't novelty demos—they're production tools that engineers use to create real products for customers. However, this is just the beginning.

Where Traditional Engineering Still Matters

Now, this wave won't wash away all engineering domains equally. Critical areas will maintain their need for specialized expertise: embedded systems interfacing with hardware, high-performance computing requiring deep optimization, safety-critical applications in aerospace and medical devices, large-scale infrastructure architecture, and cybersecurity frameworks. But the domains most vulnerable to AI consolidation—web applications, mobile apps, data pipelines, standard enterprise software, code creation, and prototype development—represent the majority of current engineering employment.

The Economic Forces at Play

The economics driving this shift are brutal in their simplicity. When a single AI Engineer can deliver 80% of what a five-person traditional team produces, at a fraction of the cost and timeline, market forces make the choice inevitable. This isn't a gradual transition that companies will deliberate over for years. Organizations that successfully implement AI-first methodologies will out-compete those clinging to traditional approaches. The advantage gap widens daily as AI capabilities improve and more teams discover these efficiencies. Venture capital flows increasingly toward AI-first startups with lean technical teams, while traditional software companies scramble to demonstrate AI integration strategies or risk irrelevance.

Survival Strategies in an AI-First World

AI represents a genuine threat to traditional engineering careers. The question isn't whether disruption will occur, but how to position yourself to survive and thrive as AI-first methodologies become standard practice. Critical survival tactics:

Immediate actions (next 6-12 months):

- Master AI tools now - Become proficient with GitHub Copilot, Claude, ChatGPT, and emerging AI development platforms
- Learn prompt engineering - This is becoming as fundamental as learning programming languages once was
- Shift to AI-augmented workflows - Don't just use AI as a helper; restructure how you approach problems entirely
- Build AI system integration skills - Focus on connecting AI components rather than building from scratch

Strategic positioning (1-2 years):

- Become an AI Engineer - Shift your practice from traditional engineering to AI system design; adopt AI engineering knowledge and methods into your practice
- Specialize in AI reliability and maintenance - AI systems need monitoring, debugging, and optimization
- Develop AI model customization expertise - Fine-tuning, prompt optimization, and model selection
- Master AI-human collaboration patterns - Understanding when to use AI vs. when human expertise is still required

Why Waiting Is Dangerous

Critics point to legitimate current limitations: AI-generated code often lacks production robustness, complex integrations still require deep expertise, and security considerations demand human judgment. These concerns echo the early objections to silicon compilers, which initially produced inferior results compared to expert human designers. But here's what history teaches us: the technology improved rapidly and soon exceeded human capabilities in most scenarios. The engineers who adapted early secured the valuable remaining roles. Those who waited found themselves competing against both improved tools and colleagues who had already mastered them.

Understanding the Challenge

This isn't another gradual technology transition that engineers can adapt to over several years. AI-first methodologies represent a substantial challenge to traditional engineering roles, with the potential for significant displacement across the industry.

The reality: Engineers who don't adapt may find themselves competing against AI-first approaches, systems, and tools that operate continuously, require no salaries or benefits, and improve steadily. This will be an increasingly difficult competition to win.

The opportunity: Engineers who proactively embrace AI-first approaches will be better positioned to secure valuable roles in the evolving landscape. Leading this transformation offers better prospects than waiting for external pressure to force change.

The window for proactive adaptation becomes smaller with time. Each month of delay reduces competitive advantage as AI capabilities advance and more engineers begin their own transformation journeys. The choice ahead is significant: evolve into an AI Engineer who works with intelligent systems, or risk being replaced by someone who does.

Raimund Laqua, PMP, P.Eng is co-founder of ProfessionalEngineers.AI (ray@professionalengineers.ai), a Canadian engineering practice focused on advancing AI engineering in Canada. Raimund Laqua is also founder of Lean Compliance (ray.laqua@leancompliance.ca), a Canadian consulting practice focused on helping organizations operating in highly regulated, high-risk sectors always stay ahead of risk, between the lines, and on-mission.

  • Understanding Operational Compliance: Key Questions Answered

Operational Compliance

Organizations investing in compliance often have legitimate questions about how the Operational Compliance Model relates to their existing frameworks, tools, and investments. These questions reflect the reality that most organizations have already implemented various compliance approaches—ISO management standards, GRC platforms, COSO frameworks, Three Lines of Defence models, and others. Rather than viewing these as competing approaches, the Operational Compliance Model serves as an integrative architecture that amplifies the value of existing investments while addressing fundamental gaps that prevent compliance from achieving its intended outcomes. The following responses explore how Operational Compliance works with, enhances, and elevates traditional approaches to create the socio-technical systems necessary for sustainable mission and compliance success.

Responses to Questions

"Why can I not use an ISO management systems standard?"

ISO management standards are excellent for procedural compliance but fall short of achieving operational compliance. Operational Compliance defines a state of operability when all essential compliance functions, behaviours, and interactions exist and perform at levels necessary to create the outcomes of compliance. The fundamental limitation is that ISO standards focus on building the parts of a system (processes, procedures, documentation) rather than the interactions between parts that create actual outcomes. Companies usually run out of time, money, and motivation to move beyond implementing the parts of a system to implementing the interactions, which are essential for a system to be considered operational. ISO standards help you pass audits, but the Operational Compliance Model helps you achieve the outcomes those audits are supposed to ensure—better safety, security, sustainability, quality, and stakeholder trust.

"Doesn't GRC cover this, at least for IT obligations?"

GRC (Governance, Risk, and Compliance) platforms are tools, not operational models. Traditional "Procedural Compliance" is based on a reactive model for compliance that sits apart from the business rather than being embedded within it. Most GRC implementations create sophisticated reporting systems but don't address the fundamental challenge: how to make compliance integral to value creation. The Operational Compliance Model recognizes that obligations arise from four types of regulatory design (micro-means, micro-ends, macro-means, macro-ends) that each require different approaches (a small sketch follows this article). GRC tools can support this model, but they can't create the socio-technical processes that actually regulate organizational effort toward desired outcomes.

"I already have dozens of frameworks"

This objection actually proves the need for the Operational Compliance Model. Having dozens of frameworks is precisely the problem—it creates framework proliferation without operational integration. Lean TCM incorporates an Operational Compliance Model that supports all obligation types and commitments using design principles derived from systems theory and modern regulatory designs. The Operational Compliance Model doesn't replace your frameworks; it provides the integrative architecture to make them work together as a system rather than competing silos. It's the difference between having a collection of car parts versus having a functioning vehicle.

"What about COSO? Doesn't this already provide an overarching framework?"

COSO is excellent for internal control over financial reporting but was designed primarily for audit and governance purposes. The Operational Compliance Model addresses several limitations of COSO:

- Scope: COSO focuses on control activities; Operational Compliance focuses on outcome creation
- Integration: COSO's five components work within compliance functions; Operational Compliance embeds compliance into operations
- Regulatory Design: COSO assumes one type of obligation; Operational Compliance handles four distinct types that require different approaches
- Uncertainty: COSO manages risk; Operational Compliance improves the probability of success in uncertain environments

COSO can be a component within the Operational Compliance Model, but it's insufficient by itself to achieve operational compliance.

"What about Audit 3 Lines of Defence?"

The Three Lines of Defence model is fundamentally reactive—it's designed to catch problems after they occur. Operational Compliance is based on a holistic and proactive model that defines compliance as integral to the value chain. The limitations of Three Lines of Defence:

- Line 1 (operations) sees compliance as separate from their real work
- Line 2 (risk/compliance) monitors rather than enables performance
- Line 3 (audit) confirms what went wrong after the fact

The Operational Compliance Model collapses these artificial lines by making compliance inherent to operational processes. Instead of three defensive lines, you get one integrated system where compliance enables rather than constrains performance.

The Essential Difference

For compliance to be effective, it must first be operational—achieved when all essential compliance functions, behaviours, and interactions exist and perform at levels necessary to create the outcomes of compliance. The majority of existing frameworks and models serve important functions, but they operate within the procedural compliance paradigm. The Operational Compliance Model represents a paradigm shift from compliance as overhead to compliance as value creation—from meeting obligations to achieving outcomes.
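To illustrate the claim that the four regulatory design types each require a different approach, here is a minimal sketch. The response strategies mapped below are my interpretive assumptions for illustration, not definitions from the Operational Compliance Model.

```python
from enum import Enum

class RegulatoryDesign(Enum):
    """The four regulatory design types named above. Comments are
    illustrative readings of the micro/macro and means/ends axes."""
    MICRO_MEANS = "micro-means"   # prescribed actions at the task level
    MICRO_ENDS = "micro-ends"     # prescribed results at the task level
    MACRO_MEANS = "macro-means"   # prescribed systems and processes
    MACRO_ENDS = "macro-ends"     # prescribed outcomes and performance

# Hypothetical mapping: each design type demands a different kind of
# organizational response, which is why one-size-fits-all tooling falls short.
RESPONSE_STRATEGY = {
    RegulatoryDesign.MICRO_MEANS: "follow and evidence specified procedures",
    RegulatoryDesign.MICRO_ENDS: "verify each required result is produced",
    RegulatoryDesign.MACRO_MEANS: "operate the required management system",
    RegulatoryDesign.MACRO_ENDS: "regulate effort toward the stated outcome",
}

def approach_for(obligation_type: RegulatoryDesign) -> str:
    return RESPONSE_STRATEGY[obligation_type]

print(approach_for(RegulatoryDesign.MACRO_ENDS))
```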

  • AI's Category Failure

When a technology can reshape entire industries, automate critical decisions, and potentially act autonomously in the physical world, how we define it matters. Yet our current approach to defining artificial intelligence is fundamentally flawed—and this definitional confusion is creating dangerous blind spots in how we regulate, engineer, deploy, and think about AI systems.

We can always reduce complex systems to their constituent parts, each of which can be analyzed further. However, the problem is not with the parts but with the whole. Consider how we approach regulation: we don't just regulate individual components—we regulate systems based on their emergent capabilities and potential impacts.

Take automobiles. We don't primarily regulate steel, rubber, or microchips. We regulate vehicles because of what they can do: transport people at high speeds, potentially causing harm. A car moving at 70 mph represents an entirely different category of risk than the same steel and plastic sitting motionless in a factory. The emergent property of high-speed movement, not the individual components, drives our regulatory approach.

The same principle should apply to artificial intelligence, but currently doesn't. Today's definitions focus on algorithms, neural networks, and training data rather than on what AI systems can actually accomplish. This reductionist thinking creates a dangerous category error that leaves us unprepared for the systems we're building.

The Challenge of Definition

Today's AI definitions focus on technical components rather than capabilities and behaviours. This is like defining a car as "metal, plastic, and electronic components" instead of "a system capable of autonomous movement that can transport people and cargo."

This reductionist approach creates real problems. When regulators examine AI systems, they often focus on whether the software meets certain technical standards rather than asking: What can this system actually do? What goals might it pursue? How might it interact with the world? And what are the risks of its impact?

Defining AI properly is challenging because we're dealing with systems that emulate knowledge and intelligence—concepts that remain elusive even in human contexts. But the difficulty isn't in having intelligent systems; it's in understanding what these systems might do with their capabilities.

A Fundamental Category Error

What we have is a category failure. We have not done our due diligence to properly classify what AI represents—which is ironic, since classification is precisely what machine learning systems excel at.

We lack the foundational work needed for proper AI governance. Before we can develop effective policies, we need a clear conceptual framework (an ontology) that describes what AI systems are and how they relate to each other. From this foundation, we can build a classification system (a taxonomy) that groups AI systems by their actual capabilities rather than their technical implementations.

Currently, we treat all AI systems similarly, whether they're simple recommendation algorithms or sophisticated systems capable of autonomous planning and action. This is like having the same safety regulations for bicycles and fighter jets because both involve "transportation technology."

The Agentic AI Challenge

Let's consider autonomous AI agents—systems that can set their own goals and take actions to achieve them. A customer service chatbot that can only respond to pre-defined queries is fundamentally different from an AI system that can analyze market conditions, formulate investment strategies, and execute trades autonomously.

These agentic systems represent a qualitatively different category of risk. Unlike traditional software that follows predetermined paths, they can exhibit emergent behaviours that even their creators didn't anticipate. When we deploy such systems in critical infrastructure—financial markets, power grids, transportation networks—we're essentially allowing non-human entities to make consequential decisions about human welfare.

The typical response is that AI can make decisions better and faster than humans. This misses the crucial point: current AI systems don't make value-based decisions in any meaningful sense. They optimize for programmed objectives without understanding broader context, moral implications, or unintended consequences. They don't distinguish between achieving goals through beneficial versus harmful means.

Rethinking Regulatory Frameworks

Current AI regulation resembles early internet governance—focused on technical standards rather than systemic impacts. We need an approach more like nuclear energy regulation, which recognizes that the same underlying technology can power cities or destroy them.

Nuclear regulation doesn't focus primarily on uranium atoms or reactor components. Instead, it creates frameworks around containment, safety systems, operator licensing, and emergency response—all based on understanding the technology's potential for both benefit and catastrophic harm. For AI, this means developing regulatory categories based on capability rather than implementation. A system's ability to act autonomously in high-stakes environments matters more than whether it uses transformers, reinforcement learning, or symbolic reasoning.

The European Union's AI Act represents significant progress toward this vision. It establishes a risk-based framework with four categories—unacceptable, high, limited, and minimal risk—moving beyond purely technical definitions toward impact-based classification. The Act prohibits clearly dangerous practices like social scoring and cognitive manipulation while requiring strict oversight for high-risk applications in critical infrastructure, healthcare, and employment.

However, the EU approach still doesn't fully solve our category failure problem. While it recognizes "systemic risks" from advanced AI models, it primarily identifies these risks through computational thresholds rather than emergent capabilities. The Act also doesn't systematically address the autonomy-agency spectrum that makes certain AI systems particularly concerning—the difference between a system that can set its own goals versus one that merely optimizes predefined objectives. Most notably, the Act treats powerful general-purpose AI models like GPT-4 as requiring transparency rather than the stringent safety measures applied to high-risk systems. This potentially under-regulates foundation models that could be readily configured for autonomous operation in critical domains. The regulatory framework remains a strong first step, but the fundamental challenge of properly categorizing AI by what it can do rather than how it's built remains only partially addressed.

Toward Engineering-Based Solutions

How do we apply rigorous engineering principles to build reliable, trustworthy AI systems? The engineering method is fundamentally an integrative and synthetic process that considers the whole as well as the parts. Unlike reductionist approaches that focus solely on components, engineering emphasizes understanding how parts interact to create emergent system behaviours, identifying failure modes across the entire system, building in safety margins, and designing systems that fail safely rather than catastrophically. This requires several concrete steps:

- Capability-based classification: Group AI systems by what they can do—autonomous decision-making, goal-setting, real-world action—rather than how they're built (a classification sketch follows this article).

- Risk-proportionate oversight: Apply more stringent requirements to systems with greater autonomy and potential impact, similar to how we regulate medical devices or aviation systems.

- Mandatory transparency for high-risk systems: Require clear documentation of an AI system's goals, constraints, and decision-making processes, especially for systems operating in critical domains.

- Human oversight requirements: Establish clear protocols for meaningful human control over consequential decisions, recognizing that "human in the loop" can mean many different things.

Moving Forward

The path forward requires abandoning our component-focused approach to AI governance. Just as we don't regulate nuclear power by studying individual atoms, we shouldn't regulate AI by examining only algorithms and datasets. We need frameworks that address AI systems as integrated wholes—their emergent capabilities, their potential for autonomous action, and their capacity to pursue goals that may diverge from human intentions. Only by properly categorizing what we're building can we ensure that artificial intelligence enhances human flourishing rather than undermining it.

The stakes are too high for continued definitional confusion. As AI capabilities rapidly advance, our conceptual frameworks and regulatory approaches must evolve to match the actual nature and potential impact of these systems. The alternative is governance by accident rather than design—a luxury we can no longer afford.
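As a sketch of the capability-based classification step above (my construction; the capability fields and tiers are hypothetical, loosely echoing the EU AI Act's four risk categories mentioned earlier):

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Classify by what a system can DO, not how it is built.
    Fields are hypothetical illustrations of capability attributes."""
    name: str
    sets_own_goals: bool        # agency: can it formulate its own objectives?
    acts_autonomously: bool     # can it take real-world actions unsupervised?
    high_stakes_domain: bool    # critical infrastructure, health, finance, ...

def risk_tier(system: AISystem) -> str:
    """Map emergent capabilities to oversight tiers (illustrative only)."""
    if system.sets_own_goals and system.acts_autonomously and system.high_stakes_domain:
        return "unacceptable / strictest oversight"
    if system.acts_autonomously and system.high_stakes_domain:
        return "high risk"
    if system.acts_autonomously or system.sets_own_goals:
        return "limited risk"
    return "minimal risk"

chatbot = AISystem("FAQ chatbot", False, False, False)
trader = AISystem("autonomous trading agent", True, True, True)
print(risk_tier(chatbot))  # minimal risk
print(risk_tier(trader))   # unacceptable / strictest oversight
```

The point of the sketch: two systems built on the same underlying technology can land in entirely different tiers, because the classification keys on what they can do rather than on how they are implemented.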
