- Engineering Responsibility: A Practitioner's Guide to Meaningful AI Oversight
As a compliance engineer, I've watched AI transform from research curiosity to world-changing technology. What began as exciting progress has become a complex challenge that demands our attention. Three critical questions now face us: Can we control these systems? Can we afford them? And what might we lose in the process?

The Control Challenge

AI systems increasingly make decisions with minimal human input, often delivering better results than human-guided processes. This efficiency is both promising and concerning. I've noticed a troubling shift: human oversight, once considered essential, is increasingly viewed as a bottleneck. Organizations are eager to remove humans from the loop, seeing us as obstacles to efficiency rather than essential guardians of safety and ethics.

As compliance professionals, we must determine where human judgment remains non-negotiable. In healthcare, finance, and public safety, human understanding provides context and ethical consideration that algorithms simply cannot replicate. Our responsibility is to build frameworks that clearly define these boundaries, ensuring automation serves humanity rather than the reverse.

The Sustainability Dilemma

The resource demands of advanced AI are staggering. Training requirements for large models double approximately every few months, creating an unsustainable trajectory for energy consumption that directly conflicts with climate goals. Only a handful of companies can afford to develop cutting-edge AI, creating a technological divide. If access becomes limited to those who can pay premium prices, we risk deepening existing inequalities.

The environmental burden often falls on communities already vulnerable to climate impacts. Data centres consume vast amounts of water and electricity, frequently in regions already facing resource scarcity. Our compliance frameworks must address both financial and environmental sustainability. We need clear standards for resource consumption reporting and incentives for more efficient approaches.

What We Stand to Lose

Perhaps most concerning is what we surrender when embedding AI throughout society. Beyond job displacement, we risk subtle but profound impacts on human capabilities and connections. Medical professionals may lose diagnostic skills when relying heavily on AI. Students using AI writing tools may develop different—potentially diminished—critical thinking abilities. Skills developed over generations could erode within decades.

There's also the irreplaceable value of human connection. Care work, education, and community-building fundamentally rely on human relationships. When these interactions become mediated by AI, we may lose essential aspects of our humanity—compassion, empathy, and shared experience.

Engineering Responsibility: A Practical Framework

As compliance professionals, we must engineer responsibility into AI systems. I propose these actionable steps:

Implement Real-Time Governance Controls: Deploy continuous monitoring systems that track AI decision patterns, identify anomalies, and enforce boundaries in real time. These controls should automatically flag or pause high-risk operations that require human review, rather than relying on periodic audits after potential harm occurs. A sketch of what such a control could look like appears at the end of this article.

Require Environmental Impact Assessments: Before deploying large AI systems, organizations should assess energy requirements and environmental impact. Not every process needs AI—sometimes simpler solutions are both sufficient and sustainable.
Promote Accessible AI Infrastructure: Support initiatives creating public AI resources and open-source development. Compliance frameworks should reward knowledge-sharing rather than secrecy.

Protect Human Capabilities: Establish guidelines ensuring AI complements rather than replaces human skill development. This includes policies requiring ongoing training in core skills even as AI assistance becomes available.

Establish Cross-Disciplinary Oversight Councils: Create formal oversight bodies with representation across technical, ethical, social, and legal domains. These councils must have binding authority over AI implementations and clear enforcement mechanisms to ensure accountability when standards aren't met.

As compliance engineers, we must move beyond checkbox exercises to become true stewards of responsible innovation. Our goal isn't blocking progress but ensuring that technology serves humanity's best interests. The questions we face don't have simple answers. But by addressing them directly and engineering thoughtful oversight systems, we can shape an AI future that enhances human potential rather than diminishing it. Our moment to influence this path is now, before technological momentum makes meaningful oversight impossible. Let's rise to this challenge by engineering responsibility into every aspect of AI development and deployment.
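To make the "Implement Real-Time Governance Controls" step above more concrete, here is a minimal sketch of a governance gate that flags or pauses high-risk AI decisions before they take effect. The risk rules, thresholds, and field names are illustrative assumptions rather than a prescribed standard.

```python
# Minimal sketch of a real-time governance gate for AI decisions.
# The risk scoring rules, thresholds, and decision fields are illustrative
# assumptions, not a prescribed standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    FLAG = "flag"    # proceed, but record for human review
    PAUSE = "pause"  # block until a human approves


@dataclass
class Decision:
    model_id: str
    use_case: str          # e.g. "credit_scoring", "triage"
    confidence: float      # model-reported confidence, 0..1
    impacts_person: bool   # does the decision affect an individual?


@dataclass
class GovernanceGate:
    high_risk_use_cases: set = field(default_factory=lambda: {"credit_scoring", "triage"})
    min_confidence: float = 0.80
    audit_log: list = field(default_factory=list)

    def review(self, d: Decision) -> Action:
        # Rule 1: high-risk use cases affecting people always go to a human.
        if d.use_case in self.high_risk_use_cases and d.impacts_person:
            action = Action.PAUSE
        # Rule 2: low-confidence outputs are flagged for later review.
        elif d.confidence < self.min_confidence:
            action = Action.FLAG
        else:
            action = Action.ALLOW
        # Every decision leaves an audit trail, not just the blocked ones.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "model": d.model_id,
            "use_case": d.use_case,
            "action": action.value,
        })
        return action


if __name__ == "__main__":
    gate = GovernanceGate()
    print(gate.review(Decision("risk-model-v3", "credit_scoring", 0.95, True)))    # Action.PAUSE
    print(gate.review(Decision("chat-summary-v1", "summarisation", 0.62, False)))  # Action.FLAG
```

The point is not these specific rules but that the gate runs in-line with each decision and leaves an audit trail, rather than waiting for a periodic audit after potential harm has occurred.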
- Transforming Business Through AI: Key Insights
The business world is changing fast as companies adopt AI technology. At a recent conference that I attended, experts shared valuable insights on making this transition successfully. Here's what stood out.

Finding the Balance

AI offers two main benefits for businesses: it can make your current work more efficient, and it can help you do things that weren't possible before. But there's a catch – as one speaker put it, "AI becomes an accelerant - whatever is weak will break." In other words, AI will make your strengths stronger but also expose your weaknesses faster.

This dynamic creates both opportunity and risk. Organizations with solid foundations in data management, security, and operational excellence will see AI amplify these strengths. Meanwhile, companies with existing weaknesses may find AI implementations expose these vulnerabilities. The tension between innovation and exposure stood out as a consistent theme. Leaders face the challenge of encouraging creative AI applications while managing potential risks. As one presenter noted, "adopting AI is an opportunity to strengthen your foundations," suggesting that the implementation process itself can improve underlying systems and processes.

Getting Governance Right

Companies need clear rules for using AI safely. Mercedes-Benz showed how they've built AI risk management into their existing structures. Many experts suggested moving away from rigid checklists toward more flexible guidelines that can evolve with the technology. What matters most? Trust. Customers don't just want AI – they want AI they can trust. This means being careful about where your data comes from, protecting privacy, and being open about how your AI systems work.

The establishment of ISO 42001 as an audit standard signals the maturing governance landscape. However, many speakers emphasized that truly effective governance requires moving "from compliance to confidence" – shifting focus from simply checking boxes to building genuinely trustworthy systems. A key insight was that "you can do security without compliance, but you can't do compliance without security." This highlights how fundamental security practices must underpin any meaningful compliance effort. Well-designed guardrails, now developing as the new compliance measures, should be risk-based rather than prescriptive, allowing for innovation within appropriate boundaries.

Data provenance received particular attention, with speakers noting that "AI loves data and you will need to manage/govern your use of data." This becomes especially challenging when considering privacy regulations, as legal departments often restrict the use of existing customer data for AI applications. Speakers suggested more nuanced approaches are needed to balance innovation with appropriate data protection.

Different Approaches Around the World

How companies use AI varies greatly by location. European businesses tend to focus heavily on compliance, with frameworks like the EU AI Act shaping implementation strategies. Regional differences significantly impact how organizations approach AI adoption and governance. Some participants questioned whether the EU AI Act might be too restrictive, noting discussions about potentially toning down certain requirements – similar to adjustments made to GDPR after implementation. This reflects the ongoing challenge of balancing protection with innovation. Compliance expertise varies by region as well.
I observed that "compliance is a bigger deal in Europe and they are good at it," suggesting that European organizations may have advantages in navigating complex regulatory environments. This expertise could become increasingly valuable as AI regulations mature globally.

Workforce Changes

We can't ignore that some jobs will be replaced by automation. This creates a potential two-tier economy and raises important questions about training and developing people for new roles. Companies need to build AI literacy across all departments, from engineering to legal, HR, and marketing. The conference highlighted that AI literacy isn't one-size-fits-all – training needs to be tailored to different functions. Engineers need technical understanding, while legal teams require compliance and risk perspectives. Marketing departments might focus on ethical use cases and customer perception.

A particularly interesting trend is taking shape around AI skills development. Many professionals are moving into AI governance roles, but fewer are pursuing AI engineering due to the longer lead time for developing technical expertise. This could create imbalances, with potentially too many governance specialists and too few engineers who can implement AI systems properly. Beyond job replacement, AI promises to transform how knowledge workers engage with information. Rather than simply replacing analysts, AI can help them process "the mountain of existing data" – shifting focus from basic results to deeper insights. This suggests a future where AI augments human capabilities rather than simply substituting for them.

The "Shadow AI" Problem

Just like when employees started bringing their own devices to work, companies now face "shadow AI" – people using AI tools without official approval. This growing challenge is more pervasive than previous BYOD issues, as AI tools are easily accessible online and often leave fewer traces. Implementing an AI acceptable use policy is the most effective way to address this challenge. Such a policy clearly defines which AI tools are approved, how they can be used, and what data can be processed through them. Rather than simply banning unofficial tools, effective policies create reasonable pathways for employees to suggest and adopt new AI solutions through proper channels.

The policy should balance security concerns with practical needs – if official tools are too restrictive or cumbersome, employees will find workarounds. By acknowledging legitimate use cases and providing approved alternatives, companies can bring shadow AI into the light while maintaining appropriate oversight. Regular training on the policy helps employees understand not just the rules but the reasoning behind them – particularly the security and privacy risks that shadow AI can introduce. When employees understand both the "what" and the "why," they're more likely to follow guidelines voluntarily. The proliferation of shadow AI creates a fundamental governance challenge captured by the insight that "you can't protect what you can't see." Organizations first need visibility into AI usage before they can establish effective governance. This requires technical solutions to detect AI applications across the enterprise, combined with cultural approaches that encourage transparency.

Bringing Teams Together

One clear message from the conference: AI governance and engineering must work hand-in-hand. No single person or team has all the answers for creating responsible AI systems.
This calls for collaboration across departments and sometimes specialized roles like AI Compliance Engineering. A key challenge is that traditional organizational structures often separate these functions. In practice, it appears that AI governance cannot be effectively separated from AI engineering, yet many companies attempt to do just that. Successful organizations are creating new collaborative structures that bridge these domains.

The automotive industry provides useful parallels. As one presenter noted, "automotive has 180 regulations, now AI is being introduced from an IT perspective." This highlights how AI governance is emerging from IT but needs to learn from industries with long histories of safety-critical regulation. However, important differences exist. One speaker emphasized that "IT works differently than the automotive industry," suggesting that governance approaches need adaptation rather than simple transplantation between sectors. The growing consensus suggests that use case-based approaches to AI risk management may be more effective than broad categorical rules. Defining clear interfaces between governance and engineering appeared as a potential solution, with one suggestion to "define KPIs for AI that should be part of governance." This metrics-based approach to governance integration could help standardize how AI systems are measured and evaluated within governance frameworks.

Moving Forward

As your company builds AI capabilities, you'll need both effective safeguards and room for innovation. This is a chance to strengthen your organization's foundation through better data management and security practices. The most successful companies will develop approaches tailored to specific uses rather than applying generic rules everywhere. And as AI systems become more independent, finding the right balance between automation and human oversight will be crucial.

The rise of autonomous AI agents introduces new challenges. As AI systems become more sophisticated, there are legitimate concerns that certain types of AI agents might operate with limited human oversight and could potentially act in unexpected ways depending on their autonomy levels. These considerations highlight the need for governance approaches that can handle increasingly sophisticated AI systems. The conference acknowledged that "an evergreen process has not been developed yet" for AI governance, suggesting that organizations must remain adaptable as best practices continue to evolve. This dynamic environment creates space for innovation in governance itself – developing new methods and controls that can effectively manage AI risks while enabling beneficial applications.

In this changing landscape, the winners will be those who can blend good governance with practical engineering while keeping focused on what matters most – creating value for customers and the business. By addressing AI governance as an enabler rather than just a constraint, organizations can build the confidence needed for successful adoption while managing the inherent risks of these powerful technologies.
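Picking up the earlier suggestion to "define KPIs for AI that should be part of governance," here is an illustrative sketch of governance KPIs defined as code, so engineering and governance teams share one definition. The metric names, formulas, and targets are assumptions for illustration only.

```python
# Illustrative sketch: AI governance KPIs defined in code so that governance
# and engineering evaluate the same thing. Metric names and targets are
# assumptions, not recommended values.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class KPI:
    name: str
    target: float
    higher_is_better: bool
    compute: Callable[[Dict], float]  # derives the metric from raw telemetry

    def evaluate(self, telemetry: Dict) -> dict:
        value = self.compute(telemetry)
        met = value >= self.target if self.higher_is_better else value <= self.target
        return {"kpi": self.name, "value": round(value, 3), "target": self.target, "met": met}


AI_GOVERNANCE_KPIS = [
    KPI("human_review_coverage", 0.95, True,
        lambda t: t["reviewed_high_risk"] / max(t["high_risk_decisions"], 1)),
    KPI("incident_rate_per_1k_decisions", 1.0, False,
        lambda t: 1000 * t["incidents"] / max(t["decisions"], 1)),
    KPI("mean_drift_score", 0.10, False,
        lambda t: t["drift_score"]),
]

if __name__ == "__main__":
    telemetry = {"reviewed_high_risk": 48, "high_risk_decisions": 50,
                 "incidents": 2, "decisions": 12000, "drift_score": 0.07}
    for kpi in AI_GOVERNANCE_KPIS:
        print(kpi.evaluate(telemetry))
```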
- When Rules Are Meant to Be Broken: Tackling Deliberate Non-Compliance
Every organization faces an uncomfortable reality that few discuss openly: some people deliberately circumvent established standards and protocols, and break rules. While compliance systems effectively guide well-intentioned employees, they often fall short when confronted with those who intentionally work around safeguards.

The Expansive Scope of Modern Compliance

Today's compliance encompasses far more than basic regulatory adherence. Organizations must navigate obligations across multiple domains:

Safety protocols protecting employees, customers, and communities
Security frameworks safeguarding information and physical assets
Privacy requirements preserving confidential and personal data
Quality standards ensuring product and service excellence
Sustainability commitments upholding environmental and social responsibility
Regulatory mandates meeting industry-specific legal requirements

Each domain creates unique challenges when addressing deliberate non-compliance.

Beyond Good Intentions: The Triple Purpose of Compliance

Compliance frameworks serve three essential functions across these domains:

Guiding well-intentioned people through complex requirements
Preventing accidental missteps through education and systems design
Limiting harm from deliberate circumvention through detection and consequences

Most compliance efforts focus heavily on the first two—creating dangerous blind spots when confronted with intentional violations.

The Sophisticated Strategies of Willful Non-Compliance

Those who deliberately circumvent standards rarely do so openly. Instead, they use calculated approaches:

Feigning technical confusion ("This sustainability reporting system makes no sense")
Creating plausible deniability ("The privacy assessment? That was handled elsewhere")
Pressuring compliance professionals ("We'll miss our safety certification if we document every test")
Undermining specialized expertise ("Security doesn't understand what we're trying to do")
Finding technical loopholes while violating the spirit of commitments

How Rule-Breakers Navigate Different Types of Obligations

Regulatory Requirements: Calculated risk-takers understand enforcement limitations and make cold assessments about detection probability. They hide deliberate violations within seemingly compliant operations—whether in financial reporting, environmental compliance, or product safety.

Voluntary Standards and Certifications: When organizations publicly commit to voluntary standards (ISO certifications, sustainability frameworks, industry best practices), some individuals view these as optional "stretch goals" rather than binding commitments—creating significant reputation risks.

Organizational Values and Commitments: Most concerning are those who publicly champion quality, safety, or ethical commitments while systematically undermining them behind closed doors—appearing compliant while subverting obligations and promises.

The Critical Distinction: Deliberate Violations vs. Approved Deviations

Not all deviations from standard procedures represent non-compliance. In complex environments, rigid adherence to every protocol may occasionally impede safety, quality, or other objectives. Smart organizations distinguish between:

Unauthorized violations, where individuals circumvent standards without proper review
Approved deviations, where exceptions receive documentation, risk assessment, and authorization

Good compliance frameworks include straightforward processes for requesting deviations when legitimate operational needs arise.
These typically require risk assessments, appropriate approvals, compensating controls, and time limitations. By creating clear pathways for authorized exceptions, organizations maintain integrity while allowing necessary flexibility. The key difference lies in transparency—approved deviations remain visible and governed, while violations deliberately hide.

Why Traditional Approaches Fall Short

Standard compliance tools assume good intentions. Policies, training modules, and basic monitoring catch honest mistakes but miss deliberate evasion. Cross-domain challenges make detection particularly difficult—a privacy violation might hide within technical security documentation, or safety shortcuts might be buried in quality process paperwork.

Forward-Thinking Strategies Against Cross-Domain Non-Compliance

Leading organizations are developing more sophisticated approaches:

Integrated compliance frameworks detecting patterns across safety, quality, privacy, and other domains
Root cause analysis examining motivations behind deliberate circumvention
Cultural assessment tools measuring psychological safety for raising concerns
Cross-functional relationship mapping identifying problematic influence dynamics
Advanced detection systems finding subtle signals of potential circumvention

The Evolving Role of Compliance Professionals

Addressing willful non-compliance requires a more sophisticated stance:

Building cross-domain expertise to spot evasion techniques
Ensuring meaningful consequences for deliberate violations
Implementing integrated detection frameworks across safety, quality, privacy, and other areas
Developing partnerships with leaders who understand how compliance failures create cascading risks
Creating genuine safe channels for reporting concerns about misconduct

Building a Culture of True Commitment

The most effective defence against deliberate circumvention isn't found in more policies—it's in creating environments where:

Compliance serves as a strategic asset, not a necessary evil
Leaders model commitment to standards, not just technical compliance
People feel empowered to raise concerns without fear
Those who circumvent standards face consequences, regardless of seniority
The organization learns from past violations to strengthen its approach

Moving Forward

The uncomfortable reality about compliance is that it must function both as a guide for the well-intentioned and as a defence against those who deliberately subvert standards—across safety, security, privacy, quality, sustainability, and regulatory domains. By developing targeted approaches to identify and address wilful non-compliance, organizations protect themselves against potentially devastating threats from within.

How does your organization manage the tension between strict compliance and necessary operational flexibility? Share your experiences in the comments.
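To make the approved-deviation process above more concrete, here is a minimal sketch of a deviation record capturing the elements mentioned: a documented risk assessment, approval, compensating controls, and a time limit. The field names and the 90-day default are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of an approved-deviation record. Field names, the example
# standard, and the 90-day default expiry are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List, Optional


@dataclass
class Deviation:
    standard: str                      # which standard or procedure is being deviated from
    justification: str
    risk_assessment: str               # reference to the documented assessment
    compensating_controls: List[str]
    requested_by: str
    approved_by: Optional[str] = None
    expires: date = field(default_factory=lambda: date.today() + timedelta(days=90))

    def is_authorized(self, today: Optional[date] = None) -> bool:
        """A deviation is only valid if it was approved and has not expired."""
        today = today or date.today()
        return self.approved_by is not None and today <= self.expires


if __name__ == "__main__":
    dev = Deviation(
        standard="Quality SOP-12: dual sign-off on batch release",
        justification="Second reviewer unavailable during plant shutdown",
        risk_assessment="RA-2024-031",
        compensating_controls=["Retrospective review within 48h", "Batch hold until QA sample passes"],
        requested_by="plant.manager",
    )
    print(dev.is_authorized())        # False - not yet approved
    dev.approved_by = "quality.director"
    print(dev.is_authorized())        # True - approved and within the time limit
```

The record stays visible and governed, which is the transparency the article contrasts with hidden violations.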
- Compliance Programs and Systems
What do quality, safety, security, sustainability, environmental, regulatory, and ethics programs have in common? All these programs have the same purpose. They exist to make certain that organizational values are realized by introducing change to culture, behaviours, systems, and processes within a business. Programs are the means by which operational governance steers. They also bridge the gap between organizational values and operational objectives.

Management programs differ from management systems (examples: ISO 27001, ISO 9001, ISO 42001, etc.) in the following way: management systems are reactive by design, to stay between the lines; management programs are proactive by design, to stay ahead of risk.

Compliance Programs and Systems

Programs are the feed-forward processes of Operational Compliance, an example of double-loop learning. A thermostat (system loop) may help keep your room at a specified temperature. However, it will never tell you if the room is warm enough (program loop). The system loop regulates towards a specific target. The program loop adjusts the target to regulate towards better outcomes. This is one of the reasons why organizations need programs: they are essential to regulate systems. Systems by design optimize towards the set target by removing variation in their inputs, work in progress, and outputs, and will never on their own adapt to higher standards. That's why you need management programs—they are the feed-forward process necessary to steer towards better outcomes.
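The thermostat analogy can be made concrete with a toy sketch: the system loop closes the gap to a fixed setpoint, while the program loop questions whether the setpoint itself should change. The comfort check and the adjustment rule are illustrative assumptions.

```python
# Toy sketch of the thermostat analogy. The system loop regulates toward a
# fixed setpoint; the program loop adjusts the setpoint toward a better
# outcome. The comfort threshold and step sizes are illustrative assumptions.

def system_loop(room_temp: float, setpoint: float) -> float:
    """Single-loop control: close the gap between measurement and target."""
    error = setpoint - room_temp
    return room_temp + 0.5 * error          # heater/cooler nudges toward the target


def program_loop(setpoint: float, occupants_comfortable: bool) -> float:
    """Double-loop control: question the target itself, not just the gap."""
    return setpoint if occupants_comfortable else setpoint + 1.0


if __name__ == "__main__":
    temp, setpoint = 17.0, 20.0
    for hour in range(6):
        temp = system_loop(temp, setpoint)              # keeps the room at the target
        comfortable = temp >= 21.0                      # stand-in for asking "is it warm enough?"
        setpoint = program_loop(setpoint, comfortable)  # raises the target until people are comfortable
        print(f"hour {hour}: temp={temp:.1f}, setpoint={setpoint:.1f}")
```

The system loop alone would hold the room at 20 degrees indefinitely; only the program loop ever asks whether 20 degrees is the right target.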
- Operational Compliance - Update
The following diagram is a vertical orientation of our Operational Compliance Model, updated to better emphasize how bridging the gap between the ends and the means happens in accountable organizations.

Operational Compliance Model (Updated)

We use the Operational Compliance Model to ensure policy-driven outcomes, targets, standard practices, and rules arising from modern and risk-based regulatory designs are properly handled with assurance at the right level of accountability and responsibility from the top to the bottom of the organization. The Operational Compliance Model includes built-in risk management, compliance, and governance right from the start in one integrative model. It's also AI-ready, with reinforcement learning loops that not only course correct but also teach you how to improve at the same time.

The Operational Compliance Model incorporates the essential principles for meeting obligations and keeping promises with accountability, and it scales to businesses of every size, whether small, medium, or large. This model is best implemented using the Lean Startup approach to achieve Minimal Viable Compliance (MVC), on which improvements can be made over time. At all times, you will learn how to be effective at compliance as you build capability from a scooter, to a motorcycle, and then a car. Compliance cannot be achieved by the parts alone. Only when the parts are working together as one system can the outcome of compliance be realized.

Our methodology lifts businesses in highly regulated, high-risk industries above the reactivity of being buried by standards, frameworks, and controls focused on certifications and audits. By effectively meeting your safety, security, sustainability, quality, regulatory, and ethical obligations, you'll always stay on mission, between the lines, and ahead of risk.

Book a meeting with me (Raimund Laqua) to discuss how Lean Compliance can help ensure your Mission Success Through Compliance.
- Organizational Silos, Root Causes, and the Promise of GRC
A fundamental root cause of organizational dysfunction can be traced to Taylorism and scientific management approaches to organizational design. This management philosophy has fragmented organizations into isolated components that operate without understanding their function in relation to the whole system. Each unit focuses narrowly on its specific tasks rather than comprehending how its work contributes to the organization's broader mission.

Taylorism, developed by Frederick Winslow Taylor in the early 20th century, revolutionized industrial management by breaking complex processes into specialized, measurable tasks. This scientific management approach emphasized efficiency through standardization, detailed time studies, and rigid division of labor—separating planning from execution and managers from workers. While it dramatically increased productivity in manufacturing settings, Taylorism's legacy includes the fragmentation of work into disconnected activities, the devaluation of worker knowledge and autonomy, and the creation of organizational structures where specialists operate in isolation without understanding how their work contributes to the whole. This mechanistic view of organizations treats humans as interchangeable parts in a machine rather than as adaptive components in a living system, laying the groundwork for today's organizational silos.

This fragmentation has progressively diluted managerial accountability, creating a paradoxical situation where responsibility is distributed widely, yet true accountability remains elusive. The few managers who are genuinely accountable often lack sufficient span of control to fulfill their obligations effectively or to properly address organizational risks. Their authority is constrained to specific domains, preventing them from implementing comprehensive solutions that cross departmental boundaries.

The Promise of GRC

Governance, Risk, and Compliance (GRC) emerged as a framework intended to harmonize disparate control mechanisms and create organizational coherence amidst increasing regulatory complexity. In theory, GRC should align governance structures, risk management practices, and compliance activities to ensure strategic objectives are met while navigating uncertainty and meeting obligations. However, in practice, GRC has often deteriorated into a technical exercise focused on tools, documentation, and process integration rather than meaningful business outcomes. Organizations implement expensive GRC systems that track controls and compliance tasks but fail to create the integrative force they promise. GRC has become fixated on the mechanics of integration while losing sight of its intended purpose—bridging the gap between the ends and the means through improved alignment, accountability, and assurance. The result is a parallel bureaucracy that adds complexity without addressing the fundamental disconnection between operational activities and organizational purpose, creating the illusion of better control while leaving the organization vulnerable to the very risks it aims to mitigate. The critical gap between means (how we operate) and ends (what we aim to achieve) persists, despite GRC's original promise to bridge this divide.

A Path Forward

GRC initiatives are fundamentally incapable of achieving their intended purpose without first addressing the root cause of organizational dysfunction—the Taylorist fragmentation that has created siloed thinking and diluted accountability.
No amount of sophisticated GRC technology, integrated controls, or compliance documentation can overcome an organizational design where units operate in isolation, managers lack proper authority, and employees don't understand how their work contributes to strategic outcomes. Attempting to implement GRC in such environments merely adds another layer of complexity atop an already disjointed system. True GRC effectiveness requires a complete reimagining of organizational structure—one that reconnects fragmented parts into a coherent whole, restores clear lines of accountability with commensurate authority, and creates transparency between operational activities and strategic objectives. Only by rebuilding the foundation can GRC fulfill its promise as an integrative force rather than another disconnected management program.

Here are actions you can take to deliver the promise of GRC:

Reimagine Organizational Design: Move beyond Taylorist fragmentation by designing organizations around end-to-end value streams rather than specialized functions. This approach connects each activity directly to customer and stakeholder outcomes.

Establish Clear Accountability Frameworks: Implement a formal accountability structure that clearly delineates decision rights, empowers responsible individuals with appropriate authority, and aligns accountability with organizational objectives.

Expand Managerial Span of Control: Broaden the authority of accountable managers to encompass all resources necessary to fulfill their responsibilities, enabling them to address risks holistically across traditional boundaries.

Redefine GRC Purpose: Shift GRC focus from mere integration of controls to becoming an integrative force that enhances organizational capability to achieve strategic objectives while navigating uncertainty.

Implement Systems Thinking: Adopt a holistic approach where leaders and employees understand both their specific roles and how they contribute to the larger system, fostering shared understanding of interdependencies.

Develop Integrative Leadership Capabilities: Train leaders to think across boundaries, understand complex systems, and make decisions that optimize the whole rather than sub-optimizing components.

Create Mission-Focused Metrics: Develop performance measures that track progress toward strategic outcomes rather than merely monitoring compliance or departmental outputs, reinforcing the connection between daily activities and organizational purpose.

The path forward requires courage to challenge deeply entrenched management paradigms that have shaped our organizations for over a century. By recognizing Taylorism's limitations and reimagining organizational design around wholeness rather than fragmentation, leaders can create systems where accountability flows naturally from clear purpose. This transformation demands that we reconceive GRC not as a technical solution but as a strategic capability that connects governance to execution through integrative leadership. The organizations that thrive in today's complex landscape will be those that successfully unite their fragmented parts into purposeful wholes, establish meaningful accountability with appropriate authority, and leverage GRC as an integrative force that bridges the gap between strategic intent and operational reality.
The challenge is significant, but the alternative—continuing to build increasingly complex control systems atop fundamentally flawed foundations—is a recipe for continued disappointment and organizational dysfunction.
- Lean Compliance - A Lamppost in an Uncertain World
After three decades in engineering and compliance, I took a leap of faith to address a critical gap I kept seeing in our industry. Eight years ago, I founded Lean Compliance because I believed there had to be a better way than reactive box-checking and last-minute audit preparation. Leaders in high-risk, highly regulated industries don't just want to pass inspections—they want genuine assurance they're meeting their duty of care to employees, customers, and communities. In this reflection, I share my journey of trying to transform compliance from a reactive necessity into a proactive business advantage, the challenges we've faced, and why, despite the obstacles, this remains a mission worth pursuing. Lean Compliance - A Lamppost in an Uncertain World In life and in business, you will face struggles. Some result from the actions of others and the environment we live in. Others are caused by our own choices. Anyone who has started a new business knows exactly what I'm talking about. When I founded Lean Compliance back in 2017, this was my situation. After working as an engineer for another company for over 30 years—designing and building systems for companies in highly regulated, high-risk industries—it was time to part ways. This wasn't the reason but rather the catalyst for something I should have done years before. The chances of success for any new business are slim, particularly when you're trying something innovative. This challenge is compounded when, as in the case of compliance, many don't have the desire to improve or see the need to do something different. "I'm already in compliance, so what is there to improve?" Sure, there's a business case for doing compliance more efficiently, and by compliance, most mean passing audits and inspections. Some call this GRC engineering, automation, or just IT development—something I had done, and many have done, for years. While management systems and automation are solutions for efficiency, they weren't answering the real issue facing leaders. Leaders wanted to know if their compliance efforts were enough, were they effective. Not just effective at passing audits and inspections, as important as that is. They wanted to know if they were meeting their obligations associated with safety, security, sustainability, quality, ethics, regulatory requirements, privacy, and so on. They were concerned about their duty of care. What assurance was there that their efforts would be enough? Could they keep their plants operational, employees safe, customer data secure, products and services at the highest standard of quality, and maintain the trust of all stakeholders? Those in highly regulated, high-risk sectors understand that without trust, you'll never have a legal license, let alone a social license to operate. This wasn't a technical problem looking for a technical solution. It was something more. It was about integrity, consistently meeting obligations and keeping promises. Not just once or right before an audit, but all the time. But here's the thing: they wanted this not primarily to pass audits and inspections. They wanted this because they cared for the welfare of the business, employees, customers, and communities. For them, compliance wasn't optional. It was essential to keep them on mission, between the lines, and ahead of risk. And the way compliance was being done wasn't working. Doubling down on audits or doing them faster was never going to be enough. 
Course correction after the fact was always too slow and too late when it comes to duty of care. So this is why I created Lean Compliance —to help businesses deliver on their duty of care. Compliance could learn from Lean principles about processes, controls, continuous flow, problem-solving, and how to continuously improve toward better outcomes. This would create room to be proactive—something that is desperately needed. But compliance also needed to be thought of differently. Not as a checklist of things needed to pass (Procedural Compliance), but as something organizations need to continuously do (Operational Compliance). After 8 years, this remains for me and others on this journey a road less travelled. Organizations appear just as reactive with their compliance as what I observed throughout my career. Compliance budgets are insufficient, and what little they have is used to invest in technical solutions to provide some relief, in hopes of catching a breath. Some are now looking to AI to accelerate their reactivity, and time will tell if this helps or makes matters worse. Lean Compliance exists because compliance remains predominantly reactive, siloed, and uncertain. I realize that Lean Compliance is not yet what it could or needs to be. However, for some, Lean Compliance has been and continues to be a lamppost shining light toward a better way to approach compliance—compliance defined by proactivity, integration, and certainty. As someone once said, "Good things take time. Great things take longer." The important thing is not to give up, which is what I intend not to do. If you want to join me and the others who have already begun this journey, I welcome the opportunity to meet you, share stories, and discuss the future of Lean Compliance . Reach out to me on Linkedin: https://www.linkedin.com/in/raimund-laqua/
- Business Intelligence: Are We Asking the Right Question?
During our Elevate Compliance huddle this week, we explored how to transform data into compliance intelligence. Everyone agrees intelligence is critical for business and compliance success—with companies investing heavily in data collection for dashboards and scorecards. However, I wonder if our approach is missing something important.

Elevate Compliance Huddle - Compliance Intelligence

Data provides explicit knowledge—information that can be easily articulated, documented, and shared. There's also tacit knowledge—insights embodied in experience, connected to intuition, values, and ideas. The real question we should ask is:

⚡️ How can we convert all forms of knowledge, both explicit and tacit, into meaningful business intelligence?

Artificial Intelligence has limitations because it operates primarily on explicit knowledge (data and facts). Organizations relying on AI as their main intelligence source should recognize this constraint. To truly elevate business and compliance intelligence, we must incorporate embodied knowledge. We need to learn to make value-based decisions aligned with ethical principles about how things should be, rather than merely following predictions about what might happen. While "keeping humans in the loop" with AI is commonly advocated, even this approach falls short. Genuine intelligence requires embodied knowledge where we continuously learn to be good and behave well—what we call integrity.

As we pursue Artificial General Intelligence (AGI), let's remember that only humans can bridge the divide between what is and what ought to be (Hume's Guillotine). This human intelligence, combining data with ethical judgment, leads us toward integrity and ultimately wisdom. What do you think?

Join me (Raimund Laqua) every week for our Elevate Compliance Huddles where we discuss essential compliance principles to practice. https://www.leancompliance.ca/elevate-compliance-huddle
- Where Does Compliance Belong
Organizations today grapple with numerous compliance requirements: safety, security, sustainability, privacy, quality, environmental, social, regulatory, and responsible AI practices. A fundamental challenge many face is determining where these compliance functions belong within the organizational structure. As a result, these programs often end up relegated to the sides and corners of organizational charts. Some leaders deliberately position compliance functions far from core operations, perhaps viewing them as necessary burdens rather than strategic assets. This approach is understandable but misses a deeper truth. The difficulty in placing compliance programs stems from an intuitive understanding that effective compliance requires participation from every part of an organization. Meeting obligations and keeping promises isn't solely the responsibility of a designated department; it's an essential property of every function across the business. In this sense, compliance reflects the character of an organization rather than merely being one characteristic among many. It represents how the organization functions at a fundamental level. No wonder we struggle with where to position compliance—it doesn't fit neatly into traditional hierarchical structures precisely because it must influence everything. When considering compliance's proper place, we should recognize that it isn't analogous to a hand or foot of the business—appendages that perform specific tasks but can operate somewhat independently. Instead, compliance functions more like the heart of an organization, circulating vital resources to every "cell" while removing harmful waste to maintain overall health. Just as the heart regulates blood pressure to sustain life, compliance regulates business practices to ensure the life of the business. It establishes the rhythm and ensures that resources, standards and requirements flow to every corner of operations. This is where compliance truly belongs—not at the periphery but at the centre, the heart, of the business. When positioned properly, compliance doesn't constrain an organization but gives it life to fulfill its purpose. What do you think?
- The Trinity of Trust: Monitoring, Observability, and Explainability in Modern Systems
In today's compliance landscape, organizations face mounting pressure to build reliable systems while meeting an expanding array of compliance obligations. Understanding how systems behave—whether traditional software or advanced AI—has become essential not just for performance but for trust and accountability. Three interconnected concepts have emerged as the foundation for this understanding: monitoring, observability, and explainability.

Lean Compliance: Trinity of Trust

Understanding the Trinity of Trust

Monitoring: The Vigilant Guardian

Monitoring serves as our first line of defence, continuously tracking predefined metrics and triggering alerts when thresholds are crossed. In traditional software, this means watching system resources, application performance, and infrastructure health. For AI systems, monitoring extends to model performance metrics, prediction latency, and data drift detection. While monitoring excels at answering anticipated questions like "Is the system down?" or "Is performance degraded?", it struggles with novel or complex failure modes. Think of monitoring as a vigilant guard—essential but limited to checking what it's been instructed to watch.

Observability: The Insightful Explorer

Observability takes us deeper, enabling us to infer a system's internal state from its external outputs. Built on metrics, logs, and traces, observability empowers teams to ask new questions they didn't anticipate when designing the system. In AI contexts, observability encompasses the full model lifecycle—from data ingestion through training to deployment and inference. It provides the context needed to understand not just that something happened, but how it happened, allowing for effective troubleshooting of novel problems.

Explainability: The Transparent Interpreter

Explainability completes our trinity by answering the critical "why" questions. For traditional software, explainability comes from clean architecture, comprehensive documentation, and traceable execution flows. In AI systems—where complex models often operate as black boxes—explainability techniques like SHAP, LIME, and counterfactual explanations become essential. Explainability transforms compliance from a checkbox exercise to genuine accountability. It provides the justification for why specific decisions were made, enabling human oversight of complex system behaviours and supporting the increasingly mandated right to explanation.

Weaving the Golden Thread of Assurance

Together, these three concepts create what compliance professionals call the "golden thread"—a continuous, traceable connection between obligations and evidence of their fulfillment.
Each plays a distinct and vital role:

Monitoring verifies that promises are being kept in real time
Observability provides the evidence trail needed to prove compliance retrospectively
Explainability delivers the justification for why specific decisions were made

For compliance teams and obligation owners, this trinity creates unprecedented visibility:

Monitoring allows them to track adherence to regulatory thresholds and alert on potential violations before they become serious breaches
Observability enables tracing sensitive data or decisions through distributed systems and investigating compliance issues with complete context
Explainability demonstrates that algorithmic processes align with stated policies and regulatory requirements

A Comparative Lens

When we compare these approaches, we see their complementary strengths.

Depth of understanding: Monitoring shows what happened; observability reveals how it happened; explainability clarifies why it happened.

Proactive vs. retrospective value: For proactive insights, monitoring excels at immediate alerting, observability detects emerging patterns, and explainability identifies problematic reasoning before serious failures. For retrospective analysis, explainability provides the deepest understanding of decisions, observability offers the most comprehensive view of system behaviour, and monitoring provides basic historical metrics.

The Compliance Intelligence Imperative

As regulatory pressures intensify across industries—from GDPR's right to explanation to emerging AI regulations—organizations cannot afford to address compliance as an afterthought. The most forward-thinking companies are adopting compliance initiatives that implement the Trinity of Trust into their core operations. Lean Compliance's "Compliance Intelligence Program" stands at the forefront of this evolution, transforming obligation management from a static documentation exercise into a dynamic, intelligence-driven practice. By embedding monitoring, observability, and explainability into compliance, organizations gain:

Real-time visibility into compliance status
Rich context for investigating potential violations
Clear explanations for regulators and stakeholders
Proactive identification of compliance risks before they materialize

A Call to Action

As we navigate the complexities of modern systems, particularly those powered by AI, the trinity of monitoring, observability, and explainability moves from optional to essential. Organizations that fail to embrace these practices face not just technical risks but also compliance risks leading to loss of reputation and stakeholder trust. Make implementing Lean Compliance's "Compliance Intelligence Program" a priority this year. By weaving the Trinity of Trust into your compliance fabric, you transform obligations from burdens into competitive advantages—creating systems that are not just certified but worthy of the trust placed in them by customers, partners, and regulators. The organizations that thrive in today's landscape will be those that recognize compliance not as a cost centre but as an intelligence centre—one that delivers deeper understanding, greater assurance, and ultimately, unshakable trust.

About the author: Raimund Laqua, PMP, P.Eng, is founder of Lean Compliance (www.leancompliance.ca) and co-founder of ProfessionalEngineers.AI.
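As a rough illustration of how the three layers might surround a single prediction, here is a standard-library-only sketch built around a toy linear scoring model: a monitor compares a latency metric to a threshold, an observability event records the full context as structured JSON, and an explanation reports each feature's contribution. The thresholds, feature names, and weights are assumptions; production systems would rely on proper metrics pipelines, tracing, and techniques such as SHAP or LIME, as the article notes.

```python
# Illustrative sketch of monitoring, observability, and explainability around
# one prediction from a toy linear model. Thresholds, features, and weights
# are assumptions for illustration only.
import json
from datetime import datetime, timezone

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}  # toy linear model
APPROVAL_THRESHOLD = 0.0
LATENCY_ALERT_MS = 200.0


def predict(features: dict) -> float:
    return sum(WEIGHTS[name] * value for name, value in features.items())


def monitor(latency_ms: float) -> list:
    """Monitoring: compare a predefined metric against a threshold."""
    return ["latency_exceeded"] if latency_ms > LATENCY_ALERT_MS else []


def observe(features: dict, score: float, latency_ms: float) -> str:
    """Observability: emit a structured event so later questions can be answered."""
    return json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "inputs": features, "score": round(score, 3), "latency_ms": latency_ms,
    })


def explain(features: dict) -> dict:
    """Explainability: per-feature contributions for this simple linear model."""
    return {name: round(WEIGHTS[name] * value, 3) for name, value in features.items()}


if __name__ == "__main__":
    features = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}
    score = predict(features)
    print("alerts:", monitor(latency_ms=35.0))        # what happened (monitoring)
    print("event:", observe(features, score, 35.0))   # how it happened (observability)
    print("why:", explain(features),
          "decision:", "approve" if score > APPROVAL_THRESHOLD else "refer")
```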
- Why Your GRC Efforts Are Failing
When it comes to designing systems, a common mistake is confusing essential properties with essential parts. This fundamental error explains why many Governance, Risk, and Compliance (GRC) initiatives fall short of their objectives.

⚡️ Learning from Systems Thinking

Russell L. Ackoff's systems thinking principles provide valuable insights:

Understanding proceeds from the whole to its parts, not from the parts to the whole as knowledge does.
The essential properties that define any system are properties of the whole which none of the parts have independently.
Essential parts are necessary for the system to perform its function but are not sufficient on their own.
Properties derive from the interaction of parts, not from their actions taken separately.

⚡️ The GRC Challenge

GRC efforts will never be effective as long as they focus solely on the individual components. Instead, we must first ask a fundamental question: "What properties does my information security and privacy program need to deliver that none of the parts by themselves provide?" The answer is not simply governance, risk management, or compliance. These are merely parts of a larger system, not the essential properties themselves.

⚡️ The Path Forward

The true path forward is to define the system's purpose. Without a clear understanding of what your security and privacy program is ultimately meant to achieve as a unified whole, individual GRC components will remain fragmented and ineffective. By first establishing the system's overarching purpose, you create the foundation for governance, risk management, and compliance activities to interact meaningfully and work together to provide the essential properties. Only by defining this systemic purpose can you determine these essential properties and how the parts must interact to produce them. This purpose-driven approach transforms GRC from disconnected activities into a cohesive system that delivers genuine value.
- Systems Thinking
Machines, organizations, and communities include and are themselves part of systems.

Systems Thinking

Russell L. Ackoff, a pioneer in systems thinking, defined a system not as the sum of its parts but as the product of the interactions of those parts: "... the essential properties that define any system are the properties of the whole which none of the parts have."

The example he gives is that of a car. The essential property of a car is to take us from one place to another. This is something that only a car as a whole can do. The engine by itself cannot do this. Neither can the wheels, the seats, the frame, and so on. Ackoff continues: "In systems thinking, increases in understanding are believed to be obtainable by expanding the systems to be understood, not by reducing them to their elements. Understanding proceeds from the whole to its parts, not from the parts to the whole as knowledge does."

A system is a whole which is defined by its function in a larger system of which it is a part. For a system to perform its function it has essential parts. Essential parts are necessary for the system to perform its function, but they are not sufficient. This implies that an essential property of a system is that it cannot be divided into independent parts: its properties derive from the interaction of its parts and not from the actions of its parts taken separately. When you apply analysis (reductionism) to a system, you take it apart and it loses all its essential properties, and so do the parts. This gives you knowledge (know-how) of how the parts work, but not what they are for. To understand what parts are for, you need synthesis (holism), which considers the role the part plays in the whole.

Why is this important, and what does it have to do with quality, safety, environmental, or regulatory objectives? The answer is that when it comes to management systems, we often take a reductionist approach to implementation. We divide systems into constituent parts and focus our implementation and improvement at the component level. This, according to Ackoff, is necessary but not sufficient for the system to perform. We only need to look at current discussions with respect to compliance to understand that the problem with performance is not only the performance of the parts themselves, but rather failures in the links (i.e., dependencies) the parts have with each other. Todd Conklin (Senior Advisor to the Associate Director at Los Alamos National Laboratory) calls this "between and among" the nodes. To solve these problems you cannot optimize the system by optimizing the parts, making each one better. You must consider the system as a whole – you must consider dependencies.

However, this is not how most compliance systems are implemented or improved. Instead, the parts of systems are implemented in silos that seldom if ever communicate with each other. Coordination and governance are also often lacking to properly establish purpose, goals, and objectives for the system. In practice, optimization mostly happens at the nodes and not the dependencies. It is this lack of systems attention that contributes to poor performance. No wonder we often hear of companies who have implemented all the "parts" of a particular management system and yet fail to receive any of the benefits from doing so. For them it has only been a cost without any return. However, by applying Systems Thinking you can achieve a better outcome.

"One can survive without understanding, but not thrive. Without understanding one cannot control causes; only treat effect, suppress symptoms. With understanding one can design and create the future ... people in an age of accelerating change, increasing uncertainty, and growing complexity often respond by acquiring more information and knowledge, but not understanding." -- Russell Ackoff

For those looking for a deeper dive, the following video (90 minutes) provides an excellent survey of systems thinking by Russell L. Ackoff, a pioneer in the area of systems improvement who worked alongside others such as W. Edwards Deming.