
  • Lean Compliance - A Lamppost in an Uncertain World

    After three decades in engineering and compliance, I took a leap of faith to address a critical gap I kept seeing in our industry. Eight years ago, I founded Lean Compliance because I believed there had to be a better way than reactive box-checking and last-minute audit preparation. Leaders in high-risk, highly regulated industries don't just want to pass inspections—they want genuine assurance that they're meeting their duty of care to employees, customers, and communities. In this reflection, I share my journey of trying to transform compliance from a reactive necessity into a proactive business advantage, the challenges we've faced, and why, despite the obstacles, this remains a mission worth pursuing.

In life and in business, you will face struggles. Some result from the actions of others and the environment we live in. Others are caused by our own choices. Anyone who has started a new business knows exactly what I'm talking about.

When I founded Lean Compliance back in 2017, this was my situation. After working as an engineer for another company for over 30 years—designing and building systems for companies in highly regulated, high-risk industries—it was time to part ways. This wasn't the reason but rather the catalyst for something I should have done years before.

The chances of success for any new business are slim, particularly when you're trying something innovative. This challenge is compounded when, as in the case of compliance, many don't have the desire to improve or see the need to do something different. "I'm already in compliance, so what is there to improve?"

Sure, there's a business case for doing compliance more efficiently—and by compliance, most mean passing audits and inspections. Some call this GRC engineering, automation, or just IT development—something I, and many others, had done for years.
While management systems and automation are solutions for efficiency, they weren't answering the real issue facing leaders. Leaders wanted to know if their compliance efforts were enough—whether they were effective. Not just effective at passing audits and inspections, as important as that is. They wanted to know if they were meeting their obligations associated with safety, security, sustainability, quality, ethics, regulatory requirements, privacy, and so on. They were concerned about their duty of care. What assurance was there that their efforts would be enough? Could they keep their plants operational, employees safe, customer data secure, products and services at the highest standard of quality, and maintain the trust of all stakeholders? Those in highly regulated, high-risk sectors understand that without trust, you'll never have a legal license, let alone a social license to operate.

This wasn't a technical problem looking for a technical solution. It was something more. It was about integrity: consistently meeting obligations and keeping promises. Not just once or right before an audit, but all the time. And here's the thing: they wanted this not primarily to pass audits and inspections. They wanted this because they cared for the welfare of the business, employees, customers, and communities. For them, compliance wasn't optional. It was essential to keep them on mission, between the lines, and ahead of risk.

And the way compliance was being done wasn't working. Doubling down on audits or doing them faster was never going to be enough. Course correction after the fact is always too slow and too late when it comes to duty of care.

So this is why I created Lean Compliance—to help businesses deliver on their duty of care. Compliance could learn from Lean principles about processes, controls, continuous flow, problem-solving, and how to continuously improve toward better outcomes. This would create room to be proactive—something that is desperately needed.
But compliance also needed to be thought of differently—not as a checklist of things needed to pass (Procedural Compliance), but as something organizations need to continuously do (Operational Compliance).

After 8 years, this remains, for me and others on this journey, a road less travelled. Organizations appear just as reactive with their compliance as what I observed throughout my career. Compliance budgets are insufficient, and what little they have is invested in technical solutions to provide some relief, in hopes of catching a breath. Some are now looking to AI to accelerate their reactivity, and time will tell if this helps or makes matters worse.

Lean Compliance exists because compliance remains predominantly reactive, siloed, and uncertain. I realize that Lean Compliance is not yet what it could or needs to be. However, for some, Lean Compliance has been and continues to be a lamppost shining light toward a better way to approach compliance—compliance defined by proactivity, integration, and certainty.

As someone once said, "Good things take time. Great things take longer." The important thing is not to give up, and I don't intend to. If you want to join me and the others who have already begun this journey, I welcome the opportunity to meet you, share stories, and discuss the future of Lean Compliance. Reach out to me on LinkedIn: https://www.linkedin.com/in/raimund-laqua/

  • Business Intelligence: Are We Asking the Right Question?

    During our Elevate Compliance huddle this week, we explored how to transform data into compliance intelligence. Everyone agrees intelligence is critical for business and compliance success—with companies investing heavily in data collection for dashboards and scorecards. However, I wonder if our approach is missing something important.

Data provides explicit knowledge—information that can be easily articulated, documented, and shared. There's also tacit knowledge—insights embodied in experience, connected to intuition, values, and ideas. The real question we should ask is:

⚡️ How can we convert all forms of knowledge, both explicit and tacit, into meaningful business intelligence?

Artificial intelligence has limitations because it operates primarily on explicit knowledge (data and facts). Organizations relying on AI as their main intelligence source should recognize this constraint. To truly elevate business and compliance intelligence, we must incorporate embodied knowledge. We need to learn to make value-based decisions aligned with ethical principles about how things should be, rather than merely following predictions about what might happen.

While "keeping humans in the loop" with AI is commonly advocated, even this approach falls short. Genuine intelligence requires embodied knowledge through which we continuously learn to be good and behave well—what we call integrity. As we pursue Artificial General Intelligence (AGI), let's remember that only humans can bridge the divide between what is and what ought to be (Hume's Guillotine). This human intelligence, combining data with ethical judgment, leads us toward integrity and ultimately wisdom.

What do you think? Join me (Raimund Laqua) every week for our Elevate Compliance Huddles, where we discuss essential compliance principles to put into practice. https://www.leancompliance.ca/elevate-compliance-huddle

  • Where Does Compliance Belong

    Organizations today grapple with numerous compliance requirements: safety, security, sustainability, privacy, quality, environmental, social, regulatory, and responsible AI practices. A fundamental challenge many face is determining where these compliance functions belong within the organizational structure. As a result, these programs often end up relegated to the sides and corners of organizational charts. Some leaders deliberately position compliance functions far from core operations, perhaps viewing them as necessary burdens rather than strategic assets. This approach is understandable but misses a deeper truth.

The difficulty in placing compliance programs stems from an intuitive understanding that effective compliance requires participation from every part of an organization. Meeting obligations and keeping promises isn't solely the responsibility of a designated department; it's an essential property of every function across the business. In this sense, compliance reflects the character of an organization rather than merely being one characteristic among many. It represents how the organization functions at a fundamental level. No wonder we struggle with where to position compliance—it doesn't fit neatly into traditional hierarchical structures precisely because it must influence everything.

When considering compliance's proper place, we should recognize that it isn't analogous to a hand or foot of the business—appendages that perform specific tasks but can operate somewhat independently. Instead, compliance functions more like the heart of an organization, circulating vital resources to every "cell" while removing harmful waste to maintain overall health. Just as the heart regulates blood pressure to sustain life, compliance regulates business practices to ensure the life of the business. It establishes the rhythm and ensures that resources, standards, and requirements flow to every corner of operations.
This is where compliance truly belongs—not at the periphery but at the centre, the heart, of the business. When positioned properly, compliance doesn't constrain an organization but gives it life to fulfill its purpose. What do you think?

  • The Trinity of Trust: Monitoring, Observability, and Explainability in Modern Systems

    In today's compliance landscape, organizations face mounting pressure to build reliable systems while meeting an expanding array of compliance obligations. Understanding how systems behave—whether traditional software or advanced AI—has become essential not just for performance but for trust and accountability. Three interconnected concepts have emerged as the foundation for this understanding: monitoring, observability, and explainability.

Monitoring: The Vigilant Guardian

Monitoring serves as our first line of defence, continuously tracking predefined metrics and triggering alerts when thresholds are crossed. In traditional software, this means watching system resources, application performance, and infrastructure health. For AI systems, monitoring extends to model performance metrics, prediction latency, and data drift detection. While monitoring excels at answering anticipated questions like "Is the system down?" or "Is performance degraded?", it struggles with novel or complex failure modes. Think of monitoring as a vigilant guard—essential but limited to checking what it's been instructed to watch.

Observability: The Insightful Explorer

Observability takes us deeper, enabling us to infer a system's internal state from its external outputs. Built on metrics, logs, and traces, observability empowers teams to ask new questions they didn't anticipate when designing the system. In AI contexts, observability encompasses the full model lifecycle—from data ingestion through training to deployment and inference. It provides the context needed to understand not just that something happened, but how it happened, allowing for effective troubleshooting of novel problems.

Explainability: The Transparent Interpreter

Explainability completes our trinity by answering the critical "why" questions. For traditional software, explainability comes from clean architecture, comprehensive documentation, and traceable execution flows. In AI systems—where complex models often operate as black boxes—explainability techniques like SHAP, LIME, and counterfactual explanations become essential. Explainability transforms compliance from a checkbox exercise into genuine accountability. It provides the justification for why specific decisions were made, enabling human oversight of complex system behaviours and supporting the increasingly mandated right to explanation.

Weaving the Golden Thread of Assurance

Together, these three concepts create what compliance professionals call the "golden thread"—a continuous, traceable connection between obligations and evidence of their fulfillment. Each plays a distinct and vital role:

• Monitoring verifies that promises are being kept in real time
• Observability provides the evidence trail needed to prove compliance retrospectively
• Explainability delivers the justification for why specific decisions were made

For compliance teams and obligation owners, this trinity creates unprecedented visibility:

• Monitoring lets them track adherence to regulatory thresholds and alert on potential violations before they become serious breaches
• Observability enables tracing sensitive data or decisions through distributed systems and investigating compliance issues with complete context
• Explainability demonstrates that algorithmic processes align with stated policies and regulatory requirements

A Comparative Lens

When we compare these approaches, we see their complementary strengths.

Depth of understanding:
• Monitoring shows what happened
• Observability reveals how it happened
• Explainability clarifies why it happened

For proactive insights:
• Monitoring excels at immediate alerting
• Observability detects emerging patterns
• Explainability identifies problematic reasoning before serious failures

For retrospective analysis:
• Explainability provides the deepest understanding of decisions
• Observability offers the most comprehensive view of system behaviour
• Monitoring provides basic historical metrics

The Compliance Intelligence Imperative

As regulatory pressures intensify across industries—from GDPR's right to explanation to emerging AI regulations—organizations cannot afford to treat compliance as an afterthought. The most forward-thinking companies are adopting compliance initiatives that implement the Trinity of Trust in their core operations. Lean Compliance's Compliance Intelligence Program stands at the forefront of this evolution, transforming obligation management from a static documentation exercise into a dynamic, intelligence-driven practice. By embedding monitoring, observability, and explainability into compliance, organizations gain:

• Real-time visibility into compliance status
• Rich context for investigating potential violations
• Clear explanations for regulators and stakeholders
• Proactive identification of compliance risks before they materialize

A Call to Action

As we navigate the complexities of modern systems, particularly those powered by AI, the trinity of monitoring, observability, and explainability moves from optional to essential. Organizations that fail to embrace these practices face not just technical risks but also compliance risks that lead to loss of reputation and stakeholder trust. Make implementing Lean Compliance's Compliance Intelligence Program a priority this year. By weaving the Trinity of Trust into your compliance fabric, you transform obligations from burdens into competitive advantages—creating systems that are not just certified but worthy of the trust placed in them by customers, partners, and regulators.
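The monitoring role described above—predefined metrics compared against thresholds, with alerts raised on breaches—can be sketched in a few lines. This is a minimal illustration, not part of the Compliance Intelligence Program itself; the class, metric names, and limits are all hypothetical choices:

```python
# Illustrative sketch of "monitoring": predefined metrics are checked
# against thresholds, and breaches produce alerts before they become
# serious violations. All names and limits here are hypothetical.

class MetricMonitor:
    """Tracks predefined metrics and flags threshold breaches."""

    def __init__(self, thresholds):
        # thresholds: metric name -> maximum acceptable value
        self.thresholds = thresholds

    def check(self, metrics):
        """Return a list of alert strings for any breached thresholds."""
        alerts = []
        for name, limit in self.thresholds.items():
            value = metrics.get(name)
            if value is not None and value > limit:
                alerts.append(f"ALERT: {name}={value} exceeds limit {limit}")
        return alerts


monitor = MetricMonitor({"prediction_latency_ms": 200, "error_rate": 0.05})
alerts = monitor.check({"prediction_latency_ms": 350, "error_rate": 0.01})
print(alerts)  # one alert: latency breached, error rate within limits
```

The limitation the article notes is visible even in this toy version: the monitor can only ever alert on the metrics it was told to watch, which is exactly why observability and explainability are needed alongside it.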
The organizations that thrive in today's landscape will be those that recognize compliance not as a cost centre but as an intelligence centre—one that delivers deeper understanding, greater assurance, and ultimately, unshakable trust.

About the author: Raimund Laqua, PMP, P.Eng, is founder of Lean Compliance (www.leancompliance.ca) and co-founder of ProfessionalEngineers.AI.

  • Why Your GRC Efforts Are Failing

    When it comes to designing systems, a common mistake is confusing essential properties with essential parts. This fundamental error explains why many Governance, Risk, and Compliance (GRC) initiatives fall short of their objectives.

⚡️ Learning from Systems Thinking

Russell L. Ackoff's systems thinking principles provide valuable insights:

• Understanding proceeds from the whole to its parts, not from the parts to the whole as knowledge does.
• The essential properties that define any system are properties of the whole which none of the parts have independently.
• Essential parts are necessary for the system to perform its function but are not sufficient on their own.
• Properties derive from the interaction of parts, not from their actions taken separately.

⚡️ The GRC Challenge

GRC efforts will never be effective as long as they focus solely on the individual components. Instead, we must first ask a fundamental question: "What properties does my information security and privacy program need to deliver that none of the parts by themselves provide?" The answer is not simply governance, risk management, or compliance. These are merely parts of a larger system, not the essential properties themselves.

⚡️ The Path Forward

The true path forward is to define the system's purpose. Without a clear understanding of what your security and privacy program is ultimately meant to achieve as a unified whole, individual GRC components will remain fragmented and ineffective. By first establishing the system's overarching purpose, you create the foundation for governance, risk management, and compliance activities to work together to provide those essential properties. Only by defining this systemic purpose can you determine the essential properties and how the parts must interact to produce them. This purpose-driven approach transforms GRC from disconnected activities into a cohesive system that delivers genuine value.

  • Systems Thinking

    Machines, organizations, and communities include systems and are themselves part of larger systems.

Russell L. Ackoff, a pioneer in systems thinking, defined a system not as the sum of its parts but as the product of the interactions of those parts: "...the essential properties that define any system are the properties of the whole which none of the parts have."

The example he gives is that of a car. The essential property of a car is to take us from one place to another. This is something that only the car as a whole can do. The engine by itself cannot do this. Neither can the wheels, the seats, the frame, and so on. Ackoff continues: "In systems thinking, increases in understanding are believed to be obtainable by expanding the systems to be understood, not by reducing them to their elements. Understanding proceeds from the whole to its parts, not from the parts to the whole as knowledge does."

A system is a whole which is defined by its function in a larger system of which it is a part. For a system to perform its function it has essential parts:

• Essential parts are necessary for the system to perform its function, but they are not sufficient.
• This implies that an essential property of a system is that it cannot be divided into independent parts.
• Its properties derive from the interaction of its parts, not the actions of its parts taken separately.

When you apply analysis (reductionism) to a system, you take it apart, and it loses all its essential properties—and so do the parts. This gives you knowledge (know-how) about how the parts work, but not what they are for. To understand what parts are for, you need synthesis (holism), which considers the role each part plays within the whole.

Why is this important, and what does it have to do with quality, safety, environmental, or regulatory objectives? The answer is that, when it comes to management systems, we often take a reductionist approach to implementation. We divide systems into constituent parts and focus our implementation and improvement at the component level. This, according to Ackoff, is necessary but not sufficient for the system to perform.

We only need to look at current discussions on compliance to understand that the problem with performance is not only the performance of the parts themselves, but rather failures in the links (i.e., dependencies) the parts have with each other. Todd Conklin (Senior Advisor to the Associate Director at Los Alamos National Laboratory) calls this "between and among" the nodes. To solve these problems you cannot optimize the system by optimizing the parts, making each one better. You must consider the system as a whole—you must consider dependencies.

However, this is not how most compliance systems are implemented or improved. Instead, the parts of systems are implemented in silos that seldom if ever communicate with each other. Coordination and governance are also often lacking to properly establish purpose, goals, and objectives for the system. In practice, optimization mostly happens at the nodes and not the dependencies. It is this lack of systems attention that contributes to poor performance. No wonder we often hear of companies that have implemented all the "parts" of a particular management system and yet fail to receive any of the benefits of doing so. For them it has only been a cost without any return.

However, by applying systems thinking you can achieve a better outcome. "One can survive without understanding, but not thrive. Without understanding one cannot control causes; only treat effect, suppress symptoms. With understanding one can design and create the future ... people in an age of accelerating change, increasing uncertainty, and growing complexity often respond by acquiring more information and knowledge, but not understanding." —Russell Ackoff
For those looking for a deeper dive, the following video (90 minutes) provides an excellent survey of systems thinking by Russell L. Ackoff, a pioneer in the area of systems improvement who worked alongside others such as W. Edwards Deming.

  • The Easter Egg Hidden in Plain Sight: How We Elevate GRC

    Like all great Easter egg hunts, sometimes the most valuable treasures aren't lost—they're simply hidden where few think to look.

For eight years, our Proactive Certainty Program has contained a special Easter egg that many organizations have overlooked. This egg wasn't tucked away in some remote corner or buried underground—it was displayed prominently, hiding in plain sight: the program's ability to elevate GRC, along with many other compliance programs.

"We already have a GRC framework," compliance leaders would say, walking right past our not-so-secret Easter egg. "We don't need another approach." The subtext was obvious; they were too busy fighting fires, patching the next vulnerability, or closing gaps from their audits to realize a better way was in front of them. What they didn't realize was that our Easter egg wasn't a replacement for their GRC efforts—it was the key to unlocking their full potential.

The Treasure Hidden in Plain Sight

Organizations that discovered this hidden gem experienced a transformation. They watched as their governance structures evolved from merely existing to actively anticipating challenges. They witnessed their previously siloed and partially integrated systems become truly integrative, working in harmony rather than just coexisting. Most remarkably, they saw their risk management approach transform into certainty creation—ensuring obligations would be met even in unpredictable circumstances.

The Easter egg was always there, if you took the time to look. It was hidden (not intentionally) from those who still looked through the lens of Procedural Compliance. However, some organizations would pause long enough to ask: "How exactly does your program differ from traditional GRC?" That question would uncover the egg's location—the crucial understanding that GRC viewed through a procedural lens stays hidden, while GRC elevated through our Proactive Certainty Program reveals the key to success.

The Easter Egg - Now Revealed

The way our program elevates GRC is by transforming:

• From reactive governance to proactive governance — We don't just ensure governance structures exist; we help them learn to steer the organization to achieve mission success.
• From risk management to certainty creation — Rather than just managing risks to avoid loss, we increase the probability of success, ensuring obligations will be met even amidst uncertainty.
• From integrated to truly integrative compliance — Beyond simply mapping or connecting subsystems, we ensure they work together as one to achieve targeted compliance outcomes.

Over the last several years, some organizations have discovered this treasure and have realized better outcomes from their GRC efforts.

Are You Still Looking?

Others continue to hunt, filling their baskets with more governance structures, management frameworks, risk assessments, compliance controls, and procedures—never realizing the real treasure isn't just another egg but the special one that pulls everything together: the Proactive Certainty Program. This program transforms your compliance to ensure your organization always stays on mission, between the lines, and ahead of risk.

Our Easter egg isn't new. It wasn't lost. It's been hiding in plain sight all along. It's not for everyone, but it could be for you. Will your organization be the next to experience the Lean Compliance Easter Egg? You can find out by filling in the Proactive Certainty Scorecard, and perhaps you will discover the treasure within.

  • Why Engineering Matters to AI

    As organizations rush to adopt artificial intelligence, one common mistake is treating AI systems like just another IT solution. After all, both are software-based, require infrastructure, and are built by technical teams. But here's the thing: AI systems behave fundamentally differently from traditional IT systems, and trying to design and manage them the same way can lead to failure, risk, and even regulatory trouble. To use AI responsibly and effectively, we need to engineer it—with discipline, oversight, and purpose-built practices. Here's why.

Traditional IT Systems: Predictable by Design

Traditional IT systems are built using explicit rules and logic. Developers write code that tells the system exactly what to do in every scenario. For example, if a customer forgets their password, the system follows a defined process to reset it. There's no guesswork involved. These systems are:

• Deterministic: Given the same input, they always produce the same output.
• Transparent: The logic is visible in the code and can be easily audited.
• Testable: You can run tests to verify whether each function behaves correctly.
• Static: Once deployed, the system doesn't change unless someone updates the code.

This predictability makes traditional systems easier to govern. Compliance, security, and operational risk controls are well established.

AI Systems: Learning Machines with Unpredictable Behaviour

AI systems—especially those based on machine learning (ML)—work differently. Instead of being programmed with rules, they are trained on data to find patterns and make decisions. Key characteristics of AI systems include:

• Probabilistic behaviour: The same input can produce different outputs, depending on the model's training.
• Emergent logic: The rules are not written by developers but learned from data, which can make them hard to understand or explain.
• Continuous change: Models may be retrained over time, either manually or automatically, as new data becomes available.
• Hidden risks: Bias, drift, or performance degradation can emerge silently if not monitored.

In short, AI systems are dynamic, opaque, and complex—which makes them harder to test, trust, and manage using traditional IT approaches.

Why Engineering Matters for AI

Because of these differences, AI systems need a new layer of discipline—AI engineering—to ensure they are safe, reliable, and aligned with business and societal goals. Here are some key concepts behind engineering AI systems:

1. Robustness — AI needs to perform reliably, even when it encounters data it hasn't seen before. Engineering for robustness means testing models under various scenarios, stress conditions, and edge cases—not just relying on average accuracy.

2. Explainability — When an AI system makes a decision, stakeholders—whether users, regulators, or auditors—need to understand why. Explainability tools and techniques help uncover what's driving the model's decisions, which is essential for trust and accountability.

3. Adaptive regulation and monitoring — AI systems can degrade over time if the data they see starts to shift—a phenomenon known as model drift. Engineering for AI involves setting up real-time monitoring, alerting, and feedback loops to catch and respond to issues before they cause harm.

4. Bias and fairness — Since AI learns from historical data, it can inherit and amplify existing biases. Engineering practices must include fairness checks, bias audits, and tools that help identify and mitigate discriminatory behaviour.

5. Life-cycle management — AI development doesn't end at deployment. Engineering includes versioning models, tracking data changes, managing retraining pipelines, and ensuring models continue to meet performance and compliance requirements over time.

Comparing the Two Approaches

Here's a simplified comparison:

• Behaviour — Traditional IT: deterministic, same output for the same input | AI: probabilistic, outputs can vary
• Logic — Traditional IT: explicit, auditable code | AI: emergent, learned from data
• Change — Traditional IT: static once deployed | AI: continuously changing through retraining
• Assurance — Traditional IT: testable up front | AI: requires ongoing monitoring for drift, bias, and degradation

The Bottom Line

AI systems hold enormous potential—but with that power comes greater complexity and risk. Unlike traditional IT systems, they:

• Learn instead of follow
• Adapt instead of stay static
• Predict instead of execute

To manage this effectively, we need to engineer AI with rigour—just like we do with bridges, aircraft, or medical devices. This means combining the best of digital engineering with new practices in data and cognitive science, systems and model engineering, adaptive regulation, AI safety, and ethical design. It's not enough to build AI systems that work. We need to build AI systems we can trust.

This article was written by Raimund Laqua, Founder of Lean Compliance and Co-founder of ProfessionalEngineers.AI.
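Model drift, mentioned under adaptive regulation and monitoring above, can be made concrete with a simple distribution check. The sketch below is a minimal, stdlib-only illustration—the feature values, the two-standard-deviation threshold, and the function names are assumptions for this example, not prescriptions from any AI engineering standard:

```python
# Minimal sketch of a data-drift check: compare the mean of a feature in
# live (inference-time) data against its training baseline, measured in
# baseline standard deviations. Real drift detectors compare full
# distributions; this only illustrates the idea.
from statistics import mean, stdev

def drift_score(baseline, live):
    """Shift of the live mean from the baseline mean, in baseline std devs."""
    return abs(mean(live) - mean(baseline)) / stdev(baseline)

def has_drifted(baseline, live, threshold=2.0):
    """Flag drift when the live mean moves more than `threshold` std devs."""
    return drift_score(baseline, live) > threshold

training_ages = [34, 29, 41, 38, 30, 36, 33, 40]
live_ages = [55, 61, 58, 63, 57, 60, 59, 62]   # population has shifted
print(has_drifted(training_ages, live_ages))   # prints: True
```

Wired into a monitoring loop, a check like this is one of the feedback mechanisms the article calls for: it catches a silently shifting input population before degraded predictions cause harm.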

  • Artificial Intelligence Doesn't Care, You Must!

    Artificial intelligence feels no remorse when it discriminates, no concern when it violates privacy, and no accountability when its decisions harm human lives. This reality—that AI inherently lacks the capacity to care about its impacts—places a real and immediate burden of responsibility on the organizations that deploy these increasingly powerful systems.

As AI technologies transform modern businesses, the obligation of "duty of care" has surfaced as a critical priority for responsible deployment. This duty represents the specific obligations that fall to organizations that integrate AI into their operations, requiring them to act as the ethical and practical stewards for systems that cannot steward themselves. Because AI itself doesn't care, the responsibility falls squarely on those who lead their organizations to care enough to deploy it wisely.

Organizations deploying AI face a critical choice today: Will you embrace your duty of care, or risk the consequences of unchecked artificial intelligence? The time for passive implementation is over. Take these essential steps now:

⚡️ Identify and evaluate AI obligations and commitments (regulatory, voluntary, and ethical)
⚡️ Implement effective management and technical programs to contend with uncertainty and risk
⚡️ Train leadership (business and technical) on AI ethics and responsible deployment principles
⚡️ Create clear accountability frameworks that connect technical teams with executive oversight

Don't wait for regulations to force your hand or for AI failures to damage your reputation and harm those who trust you. Contact us today (pmo@leancompliance.ca) to schedule an AI Duty of Care Assessment and take the first step toward fulfilling your responsibility in the age of artificial intelligence that doesn't care—but you must.

  • Capabilities Driven Business Canvas

    A principle that is easily forgotten: to change outcomes, you need to change your capabilities. Michael Porter's value chain analysis helps visualize the chain of capabilities needed to create business value. However, capabilities are needed for every endeavor where an outcome must be achieved, and even more so to sustain and improve over time. Practicing this principle is essential for compliance to meet objectives associated with regulatory performance and outcome-based obligations. It is also necessary for solving problems in pursuit of those goals. The following Capabilities Driven Business Canvas will help you focus your attention on what matters most when improving outcomes. This canvas is available in PowerPoint format along with other templates, workshops, and resources by becoming a Lean Compliance Member.

  • Remove Roadblocks Not Guardrails

    Are you doing Value Stream Mapping (VSM) wrong? Value Stream Mapping is a powerful tool for eliminating waste in organizational processes. When implemented correctly, it creates leaner, more efficient operations by removing unnecessary activities. However, the challenge lies in distinguishing between what truly diminishes value and what actually creates or protects it. This critical blind spot leads to cutting elements that appear wasteful but are essential for mission success. ⚡️ How often have organizations eliminated safety stock as “waste,” only to discover it was their crucial buffer against supply chain uncertainties? ⚡️ How frequently have approval processes been streamlined for efficiency without considering their role in ensuring proper duty of care? ⚡️ How many times have compliance measures been reduced, inadvertently pushing operations to the edge of uncertainty and creating fragility instead of resilience? The key to effective process improvement isn’t just cutting—it’s strategic discernment. Yes, eliminate true waste, but equally important: ensure you’re adding what’s necessary for mission success - you need to do both. 🔸 Call to Action: Identify the Guardians of Your Commitments 🔸 Three practical steps to protect your promises while eliminating waste: ⚡️ Map commitment touch points - Identify each process step that directly supports meeting your regulatory obligations, policy requirements, or stated objectives. These are your value protection points. ⚡️ Distinguish promise fulfillment from waste - Ask: "Does this step directly help us fulfill a specific commitment we've made?" If yes, it's not waste—it's essential. ⚡️ Create a commitment impact assessment - Before removing any step, evaluate: "Will this change hamper our ability to keep our promises to regulators, customers, or stakeholders?"
Remember: True LEAN COMPLIANCE doesn't compromise your ability to meet obligations—it enhances it by removing only what doesn't support your commitments. Need help aligning your efficiency efforts with your commitment framework? Let's connect.

  • The Cost of AI

    Is the collateral damage from AI worth it, and who should decide? When it comes to AI, we appear to be “hell-bent” on developing Artificial General Intelligence (AGI), consuming all available energy, conducting uncontrolled AI experiments in the wild at scale, and disrupting society without a hint of caution or duty of care. The decision of “Should we?” has always been the question. However, when asked, the conversation often turns to silence. Creating smart machines that can simulate intelligence is not the primary issue; giving them agency to act in the real world without understanding the risk is the real problem. Some might even call this foolishness. The agentic line should never have been crossed without adequate safeguards. And yet, without understanding the risk, how will we know what is adequate? Nevertheless, here we are developing AI agents ready to be deployed in full force—for what purpose and at what cost? Technology is often considered neutral, and this appears to be how we are treating AI: just like other IT applications, morally agnostic. Whether technology is agnostic or not, the question is: are we morally blind, or just wilfully ignorant? Do we really know what we are giving up to gain something we know very little about? To address some of this risk, organizations are adopting ISO 42001 certification as a possible shield against claims of negligence or wrongdoing, and AI insurance will no doubt be available soon. But perhaps we would do better by learning from the medical community and treating AI as something that is both a help and a harm – not neutral. More importantly, it is something that requires a measure of precaution, a duty of care, and professional engineering. If we did, we would keep AI in the lab until we had studied it carefully. We would conduct controlled clinical trials to ensure that specific uses of AI actually create the intended benefits and minimize harms, anticipated or otherwise.
Time will tell if the decisions surrounding AI will prove to be reckless, foolish, or wise. However, what should not happen is for those who will gain the most to decide if the collateral damage is worth it. What are we sacrificing, what will we gain, and will it be worth the risk? Let’s face the future, but with our eyes open so we can count the cost. For organizations looking to implement AI systems responsibly, education is the crucial first step. Understanding how these standards apply to your specific context creates the foundation for successful implementation. That's why Lean Compliance is launching a new educational program to help organizations understand and take a standards-based approach to AI. From introductory webinars to comprehensive implementation workshops, we're committed to building your capacity for responsible and safe AI.

© 2017-2025 Lean Compliance™ All rights reserved.

Ensuring Mission Success Through Compliance

bottom of page