- Artificial Intelligence Doesn't Care, You Must!
Artificial intelligence feels no remorse when it discriminates, no concern when it violates privacy, and no accountability when its decisions harm human lives. This reality—that AI inherently lacks the capacity to care about its impacts—places a real and immediate burden of responsibility on the organizations that deploy these increasingly powerful systems. As AI technologies transform modern businesses, the obligation of “duty of care” has surfaced as a critical priority for responsible deployment. This duty represents the specific obligations that fall to organizations that integrate AI into their operations, requiring them to act as the ethical and practical stewards for systems that cannot steward themselves. Because AI itself doesn't care, the responsibility falls squarely on those who lead their organizations to care enough to deploy it wisely.

Organizations deploying AI face a critical choice today: Will you embrace your duty of care, or risk the consequences of unchecked artificial intelligence? The time for passive implementation is over. Take these essential steps now:

⚡️ Identify and evaluate AI obligations and commitments (regulatory, voluntary, and ethical)
⚡️ Implement effective management and technical programs to contend with uncertainty and risk
⚡️ Train leadership (business and technical) on AI ethics and responsible deployment principles
⚡️ Create clear accountability frameworks that connect technical teams with executive oversight

Don't wait for regulations to force your hand or for AI failures to damage your reputation and harm those who trust you. Contact us today (pmo@leancompliance.ca) to schedule an AI Duty of Care Assessment and take the first step toward fulfilling your responsibility in the age of artificial intelligence that doesn't care—but you must.
- Capabilities Driven Business Canvas
A principle that is easily forgotten is that to change outcomes you need to change your capabilities. Michael Porter's value chain analysis helps to visualize the chain of capabilities needed to create business value. However, capabilities are needed for every endeavor that requires an outcome to be achieved, and even more so to sustain and improve over time. The practice of this principle is essential for compliance to meet objectives associated with regulatory performance and outcome-based obligations. It is also necessary to solve problems in pursuit of those goals. The following Capabilities Driven Business Canvas will help you focus your attention on what matters most when improving outcomes.

[Figure: Capabilities Driven Business Canvas]

This canvas is available in PowerPoint format along with other templates, workshops, and resources by becoming a Lean Compliance Member.
- Remove Roadblocks Not Guardrails
Are you doing Value Stream Mapping (VSM) wrong?

Value Stream Mapping is a powerful tool for eliminating waste in organizational processes. When implemented correctly, it creates leaner, more efficient operations by removing unnecessary activities. However, the challenge lies in distinguishing between what truly diminishes value and what actually creates or protects it. This critical blind spot leads to cutting elements that appear wasteful but are essential for mission success.

⚡️ How often have organizations eliminated safety stock as “waste,” only to discover it was their crucial buffer against supply chain uncertainties?
⚡️ How frequently have approval processes been streamlined for efficiency without considering their role in ensuring proper duty of care?
⚡️ How many times have compliance measures been reduced, inadvertently pushing operations to the edge of uncertainty and creating fragility instead of resilience?

The key to effective process improvement isn't just cutting—it's strategic discernment. Yes, eliminate true waste, but equally important: ensure you're adding what's necessary for mission success. You need to do both.

🔸 Call to Action: Identify the Guardians of Your Commitments 🔸

Three practical steps to protect your promises while eliminating waste:

⚡️ Map commitment touch points – Identify each process step that directly supports meeting your regulatory obligations, policy requirements, or stated objectives. These are your value protection points.
⚡️ Distinguish promise-fulfilment from waste – Ask: "Does this step directly help us fulfill a specific commitment we've made?" If yes, it's not waste—it's essential.
⚡️ Create a commitment impact assessment – Before removing any step, evaluate: "Will this change hamper our ability to keep our promises to regulators, customers, or stakeholders?" (A simple sketch of this assessment follows below.)

Remember: True LEAN COMPLIANCE doesn't compromise your ability to meet obligations—it enhances it by removing only what doesn't support your commitments. Need help aligning your efficiency efforts with your commitment framework? Let's connect.
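To make the commitment impact assessment concrete, here is a minimal Python sketch. The class names, fields, and example steps are illustrative assumptions, not part of any Lean Compliance tool: each process step records the commitments it protects, and only steps that neither add customer value nor protect a commitment become waste candidates.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    name: str
    adds_customer_value: bool
    # Commitments (regulatory, policy, or stakeholder promises) that this
    # step directly supports. Hypothetical field for illustration.
    commitments: list[str] = field(default_factory=list)

def waste_candidates(steps: list[ProcessStep]) -> list[ProcessStep]:
    """A step is a removal candidate only if it neither adds customer
    value nor protects a commitment."""
    return [s for s in steps if not s.adds_customer_value and not s.commitments]

def impact_assessment(step: ProcessStep) -> str:
    """Answer: will removing this step hamper our ability to keep promises?"""
    if step.commitments:
        return f"KEEP: '{step.name}' protects {', '.join(step.commitments)}"
    return f"REVIEW: '{step.name}' supports no recorded commitment"

steps = [
    ProcessStep("Hold safety stock", False, ["supply continuity promise"]),
    ProcessStep("Duplicate data entry", False),
    ProcessStep("Engineering change approval", False, ["MOC regulatory obligation"]),
]

for s in steps:
    print(impact_assessment(s))
```

Under this sketch, only "Duplicate data entry" surfaces as waste; the safety stock and approval steps are value protection points, exactly the elements VSM done wrong tends to cut.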
- The Cost of AI
Is the collateral damage from AI worth it, and who should decide?

When it comes to AI, we appear to be “hell-bent” on developing Artificial General Intelligence (AGI), consuming all available energy, conducting uncontrolled AI experiments in the wild at scale, and disrupting society without a hint of caution or duty of care. The decision of “Should we?” has always been the question. However, when asked, the conversation often turns to silence.

Now, creating smart machines that can simulate intelligence is not the primary issue; it's giving them agency to act in the real world without understanding the risk that's the real problem. Some might even call this foolishness. The agentic line should never have been crossed without adequate safeguards. And yet, without understanding the risk, how will we know what is adequate? Nevertheless, here we are developing AI agents ready to be deployed in full force, for what purpose and at what cost?

Technology is often considered neutral, and this appears to be how we are treating AI: just like other IT applications, morally agnostic. Whether technology is agnostic or not, the question is, are we morally blind, or just wilfully ignorant? Do we really know what we are giving up to gain something we know very little about?

To address some of this risk, organizations are adopting ISO 42001 certification as a possible shield against claims of negligence or wrongdoing, and AI insurance will no doubt be available soon. But perhaps we would do better to learn from the medical community and treat AI as something that is both a help and a harm – not neutral. But more importantly, something that requires a measure of precaution, a duty of care, and professional engineering. If we did, we would keep AI in the lab until we had studied it carefully. We would conduct controlled clinical trials to ensure that specific uses of AI actually create the intended benefits and minimize the harms, anticipated or otherwise.

Time will tell if the decisions surrounding AI will prove to be reckless, foolish, or wise. However, what should not happen is for those who will gain the most to decide if the collateral damage is worth it. What are we sacrificing, what will we gain, and will it be worth the risk? Let's face the future, but with our eyes open so we can count the cost.

For organizations looking to implement AI systems responsibly, education is the crucial first step. Understanding how these standards apply to your specific context creates the foundation for successful implementation. That's why Lean Compliance is launching a new educational program to help organizations understand and take a standards-based approach to AI. From introductory webinars to comprehensive implementation workshops, we're committed to building your capacity for responsible and safe AI.
- Risk-based Thinking: A Strategic Approach
Risk-based thinking is a mindset (perception, personas, perspective) for proactively improving the certainty of achieving an outcome by utilizing strategies that consider threats and opportunities.

[Figure: Risk-based Thinking]

This mindset integrates risk management into everyday decision-making rather than treating it as a separate process. This capability helps organizations succeed in the presence of uncertainty. By adopting this mindset, leaders proactively identify what might go wrong (threats) and what might create opportunities to improve their chance of success. This forward-looking approach aids in strategic planning, decision-making, and execution.

Risk-based thinking requires viewing situations from multiple angles – questioning assumptions, identifying potential gains, and balancing priorities. This helps teams avoid blind spots that could derail their objectives. When embedded in organizational culture, this approach creates a balanced framework for decision-making. It enables calculated risk-taking with appropriate safeguards, helping teams avoid both excessive caution and reckless advancement.

Take Action Today

Don't wait for a crisis to implement risk-based thinking in your organization. Begin by evaluating your current projects through this strategic lens. Identify three potential threats and three possible opportunities for each initiative. Then develop specific action plans to address these scenarios (a simple register sketch follows below). Share this approach with your team and incorporate it into your regular planning processes. By making risk-based thinking a habit rather than an afterthought, you'll create a competitive advantage in an increasingly uncertain business environment.
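As a purely illustrative example of the "three threats, three opportunities" exercise, the Python sketch below models a simple risk register; the class, field, and initiative names are assumptions made for illustration, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    initiative: str
    kind: str          # "threat" or "opportunity"
    description: str
    action_plan: str = "TBD"

def review_initiative(initiative: str,
                      threats: list[str],
                      opportunities: list[str]) -> list[RiskEntry]:
    """Apply the habit: three threats and three opportunities per
    initiative, each of which needs a specific action plan."""
    assert len(threats) == 3 and len(opportunities) == 3, "identify three of each"
    register = [RiskEntry(initiative, "threat", t) for t in threats]
    register += [RiskEntry(initiative, "opportunity", o) for o in opportunities]
    return register

register = review_initiative(
    "New supplier onboarding",
    threats=["single-source dependency", "unvetted data handling", "delivery delays"],
    opportunities=["volume discount", "shorter lead times", "joint innovation"],
)
for entry in register:
    print(f"[{entry.kind}] {entry.description} -> plan: {entry.action_plan}")
```

Every entry starts with an action plan of "TBD", which is the point: the register is not complete until each threat and opportunity has a specific plan behind it.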
- Is Lean Compliance the Same as GRC?
While Governance, Risk, and Compliance (GRC) in IT typically focuses on certified management systems like ISO 27001, SOC 2, and PCI DSS—with technology platforms designed for audit automation through integration—it often misses its true purpose. GRC should deliver targeted outcomes, not just certified systems. It needs to be operational, with all components working together to achieve compliance goals and objectives. Unfortunately, many organizations lack the know-how to establish systems that are more than the sum of their parts.

Lean Compliance addresses this gap by helping organizations achieve Minimal Viable Compliance (MVC)—ensuring essential functions, behaviours, and interactions operate sufficiently together to generate targeted outcomes. Rather than focusing on integration alone, Lean Compliance emphasizes operability through a comprehensive model covering governance, programs, systems, and processes. Think of it as Operational GRC.

GRC was always meant to deliver better safety, security, sustainability, privacy, quality, ethical, and regulatory outcomes—not just support audits and certifications. Our outcome-focused approach is what makes Lean Compliance different: we aim higher to ensure compliance delivers what you need for mission success.
- Better Compliance Done a Better Way
According to Albert Einstein: "Insanity is doing the same thing over and over again and expecting different results." And yet, that is exactly how some organizations approach compliance. Consistency and conformance are king, and hoping for better outcomes is the primary article of faith. Any improvements that are made have more to do with form as prescribed rather than function as intended. Under these articles of faith, companies rarely know the effectiveness of their compliance, which is usually neither assured nor measured. The phrase "blind faith" comes to mind. Just follow the rules and everything will be just fine. Pain medication is available at the gift shop on your way out.

This posture (and yes, it is mostly posture), as common and prevailing as it may be, is fortunately changing. Slowly, yes; but changing nonetheless. But what is it changing to, and how?

A Better Way With Much Better Results

In order to better protect public and environmental safety, stakeholder value, reputation, quality, and other value outcomes, a sea change is happening in the risk and compliance landscape. Compliance obligations now have more to do with making progress towards vision zero targets such as zero emissions, zero fatalities, zero harm, zero fines, zero violations, and so on, than with meeting prescriptive requirements. The latter is still necessary, but only as part of an overall compliance framework. Why? Because regulators, standards bodies, and stakeholders recognize that to address more complex and systemic risk, organizations need more latitude in the means by which risk is addressed.

This is a huge paradigm shift for those who work in risk and compliance. Previous one-size-fits-all prescriptive approaches to prevent loss and mitigate harm are too expensive when aggregated across an industry or even an organization. But more importantly, they are ineffective against the challenges that must now be faced.

The bad news is that after decades under the tutelage of prescriptive regulations and industry standards, making the necessary changes will not be, and has not been, easy. Substituting audit regimes with performance- and risk-based compliance services has been slow, although there are signs that things are speeding up. At the same time, continuing to use reactive, siloed functions to meet obligations will not be enough, and probably never was. Compliance must now be goal-oriented, proactive, and integrated into overall governance and managerial accountability.

Advancing outcomes is now the new king, and risk-based approaches focused on continuous improvement over time are the new standard. Instead of hoping for better outcomes, companies must now put in place measures to make certain that they are better – informed faith rather than blind faith.

The good news is that this will make compliance more effective at protecting overall value, and lighter weight in the process (think risk-based and lean). Compliance will be in a better position to contend with uncertainty and improve the probability that what we value is not lost and new value is advanced. If this only means preventing risks before they become a reality, then this will be a huge win for everyone. Compliance will no longer be seen as a necessary evil and something to avoid, but will be looked at as a necessary good and something to be good at.

Of course, some will continue with the same approaches they have followed for years and hope for the best. But we know this leads to the same outcomes we have always had: passing audits but not advancing compliance outcomes or reducing risk.
- Are You Ready For an Environment-First Future?
[Figure: Environment-First Future]

Those who have been following us will know that compliance needs to be more than just checking boxes and passing audits. This is true for all compliance domains, including environmental obligations. In recent years I have written about how the compliance landscape has changed and how compliance needs to be more like operations than simply a function that inspects and conducts audits. Compliance as a category of programs is more akin to quality, which has control and assurance functions but also strives to build quality into the design of products, services, and all functions of the organization.

One does not need to see very far ahead to realize that this is exactly what is happening now in earnest for environmental compliance. Environmental compliance is moving beyond simply monitoring and reporting towards establishing programs and systems to reduce carbon footprint, emissions, waste, and other objectives, all in increasing measure. Sustainability is now the top priority, and net zero across every dimension is the driver for operational objectives. Instead of quality-as-job-one or safety-first programs, organizations now need to lead their risk & compliance programs with an Environment-First strategy.

The Environment and ESG

There are many reasons why we are now seeing a greater elevation of environmental initiatives within organizations. Some of these include the heightened attention on climate change along with existing environmental protection regulations and initiatives. However, what seems to be the source of urgency and immediacy is the increase of ESG integration in the investment world.

ESG is all over the news, financial reports, and increasingly shareholder reports. However, it does not have a consistent definition. In broad terms it is concerned with Environmental, Social, and Governance objectives applied to sustainability. Specifically, ESG investing is focused on scoring organizations on how well they are doing at being good stewards of the environment. In broad terms this is called value investing. However, investors are also interested in the impact organizations are making at improving the environment or reducing climate change and its effects. This is called impact investing.

Currently, ESG scoring is done by investors and ESG reporting is done by organizations, with some regulation of common categories on which to report. However, for the most part, the categories and measurements used in scoring and how they are reported are far from being the same. Greater alignment is expected, but there will always be gaps driven by differences in priorities across investors, organizations, and governments.

Whether or not ESG helps to create greater returns for shareholders is debatable. In some cases, ESG investments may be more expensive and come with lower returns. However, what is starting to become clear is that the integration of ESG may have a greater impact on promoting environmental initiatives than what government regulations might enforce. In essence, the marketplace is engaging in a more significant way to drive environmental change, which for many is a more effective and desirable approach.

What we can say with certainty is that we are moving towards an Environment-First world, which will affect investments, stakeholder expectations, and compliance obligations, among many other things. Environmental programs will no longer be characterized by only monitoring and reporting. Instead, environmental programs will be defined by sustainability and the effective implementation of systems to progressively reach zero emissions, net zero carbon footprint, zero waste, zero environmental harm, and other environmental objectives.

Are you ready for an Environment-First future? You can be. Lean Compliance has helped organizations establish an Environment-First program and can help you do the same. Subscribe to our newsletter so you don't miss our future articles as we unpack what it means for an organization to be Environment-First and the impact this will have on compliance and the business as a whole.
- Minimal Viable Compliance: Building Frameworks That Actually Work
In this article, I explore the key distinctions between framework-focused and operational compliance approaches, and how they relate to Minimal Viable Compliance (MVC).

[Figure: Minimal Viable Compliance]

A framework-focused approach to compliance emphasizes creating the structural architecture and formal elements of a compliance program. This includes developing policies, procedures, organizational charts, committee structures, and reporting mechanisms. While these elements are needed, organizations can sometimes become overly focused on documentation and form over function. They might invest heavily in creating comprehensive policy libraries, detailed process maps, and governance structures without sufficient attention to how these will operate in practice. It's akin to having a beautifully designed blueprint for a building without considering how people will actually live and work within it.

In contrast, operational compliance focuses on the engineering and mechanics of how compliance actually works in practice. This approach prioritizes the systems, workflows, and daily activities that deliver on compliance obligations. It emphasizes creating practical, executable processes that enable the organization to consistently meet its regulatory requirements and stakeholder commitments. Rather than starting with the framework, operational compliance begins with the end goal, followed by what promises need to be kept, what risks need to be handled, and what operational capabilities need to be established. This might mean focusing on staff training, developing clear handoffs between departments, implementing monitoring systems, and establishing feedback and feed-forward loops to identify and address issues quickly, along with steering the business towards targeted outcomes.

The concept of Minimal Viable Compliance (MVC) bridges these two approaches by asking: what is the minimum set of framework elements and operational capabilities (functions, behaviours, and interactions) needed to effectively and continuously meet our compliance obligations? This does not mean building minimum or basic compliance. MVC recognizes that both structure and function are necessary, but seeks to optimize the balance between them. It avoids the trap of over-engineering either the framework or operations beyond what's needed for effective compliance. For example, rather than creating extensive policies for every conceivable scenario, MVC might focus on core principles and key controls while building strong operational processes around high-risk areas. This approach allows organizations to start with essential compliance elements and iteratively build upon them based on practical experience and changing needs, rather than trying to create a perfect compliance program from the outset (see the sketch at the end of this article).

Driving Compliance to Higher Standards

The key to compliance success lies in understanding that framework and operational compliance are not opposing forces but complementary elements that must work in harmony. The framework provides the necessary structure and shape, while operational compliance ensures that these translate into effective action – action that delivers on obligations. MVC helps organizations find the right balance by focusing on what's truly necessary to achieve compliance objectives that advance outcomes towards higher standards.
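As a thought experiment (not an actual Lean Compliance artifact), MVC bookkeeping can be pictured as an obligation register that links each promise to the smallest set of operational capabilities that keeps it continuously met. The Python sketch below, with illustrative names throughout, flags framework elements that exist on paper only.

```python
from dataclasses import dataclass, field

@dataclass
class Obligation:
    name: str
    outcome: str                      # the targeted outcome the promise serves
    # Functions, behaviours, and interactions that keep the promise met.
    capabilities: list[str] = field(default_factory=list)

def coverage_gaps(obligations: list[Obligation]) -> list[str]:
    """Return obligations with no operational capability behind them --
    framework elements (policies) that exist on paper only."""
    return [o.name for o in obligations if not o.capabilities]

register = [
    Obligation("Incident reporting", "regulator notified within 24h",
               ["on-call rota", "reporting workflow", "quarterly drill"]),
    Obligation("Data retention policy", "records kept for 7 years"),  # paper only
]
print(coverage_gaps(register))  # -> ['Data retention policy']
```

The gap check is the operational half of MVC: a policy with no capability behind it is structure without function, and a capability tied to no obligation is a candidate for over-engineering.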
- Engineering Through AI Uncertainty
As artificial intelligence continues to advance, AI engineers face a practical challenge – how to build trustworthy systems when working with inherent uncertainty. This isn't merely a theoretical concern but a practical engineering problem that requires thoughtful solutions.

[Figure: CYNEFIN Uncertainty Map]

Understanding Uncertainty: The CYNEFIN Framework

The CYNEFIN framework (pronounced "kuh-NEV-in") offers a useful approach for categorizing different types of uncertainty, which helps determine appropriate engineering responses:

1. Known-Knowns (Clear Domain): In this zone, we have high visibility of risks. Cause-effect relationships are clear, established practices work reliably, and outcomes are predictable. Standard engineering approaches are effective here.

2. Known-Unknowns (Complicated Domain): Here we have moderate visibility. While solutions aren't immediately obvious, we understand the questions we need to answer. Expert analysis can identify patterns and develop reliable practices for addressing challenges.

3. Unknown-Unknowns (Complex Domain): This zone presents poor visibility of risks. While we can't predict outcomes beforehand, retrospective analysis can help us understand what happened. We learn through observation and adaptation rather than pre-planning.

4. Unknowable (Chaotic Domain): This represents the deepest uncertainty – no visibility, with unclear cause-effect relationships even after the fact. Traditional models struggle to provide explanations for what occurs in this domain.

Current State of AI Uncertainty

Current AI technologies, particularly advanced systems that use large language models, operate somewhere between zones 4 and 3 – between Unknowable and Unknown-Unknowns. This assessment isn't alarmist but simply acknowledges the current technical reality. These systems can produce different outputs from identical inputs, and their internal decision processes often resist straightforward explanation.

This level of uncertainty raises practical questions about appropriate governance. What aspects of AI should receive attention: the technology itself, the models, the companies developing them, the organizations implementing them, or the engineers designing them? Whether formal regulation emerges or not, the engineering challenge remains clear.

Finding Success Amid Uncertainty

The path forward isn't about eliminating uncertainty – that's likely impossible with complex AI systems. Instead, we need practical approaches to find success while working within uncertain conditions:

Embracing Adaptive Development: Rather than attempting to plan for every contingency, successful AI engineering embraces iterative development with continuous learning. This approach acknowledges uncertainty as a given and builds systems that can adapt and improve through ongoing feedback.

Implementing Practical Safeguards: Even without complete predictability, we can implement effective safeguards. These include establishing operational boundaries, creating monitoring systems that detect unexpected behaviors, and building appropriate intervention mechanisms.

Focusing on Observable Outcomes: While internal processes may remain partially opaque, we can measure and evaluate system outputs against clear standards. This shifts the engineering focus from complete understanding to practical reliability in achieving intended outcomes.

Dynamic Observation Rather Than Static Evidence: While traditional engineering relies on gathering empirical evidence through systematic testing, AI systems present a unique challenge. Because these systems continuously learn, adapt, and evolve, yesterday's test results may not predict tomorrow's behavior. Rather than relying solely on static evidence, successful AI engineering requires ongoing observation and dynamic assessment frameworks that can evolve alongside the systems they monitor. This approach shifts from collecting fixed data points to establishing continuous monitoring processes that track how systems change over time.

A Practical Path Forward

The goal for AI engineering isn't to eliminate all uncertainty but to move systems from Zone 4 (Unknowable) through Zone 3 (Unknown-Unknowns) toward Zone 2 (Known-Unknowns). This represents a shift from unmanageable to manageable risk. In practical terms, this means developing systems where:

- We can reasonably predict the boundaries of behavior, even if we can't predict specific outputs with perfect accuracy
- We understand enough about potential failure modes to implement effective controls
- We can observe and measure relevant aspects of system performance
- We can make evidence-based improvements based on real-world operation

Learning to Succeed with Uncertainty

Building trustworthy AI systems doesn't require perfect predictability. Many complex systems we rely on daily – from weather forecasting to traffic management – operate with a measure of uncertainty yet deliver reliable value. The engineering challenge is to develop practical methods that work effectively in the presence of uncertainty rather than being paralyzed by it. This includes:

- Developing better testing methodologies that identify potential issues without requiring exhaustive testing of all possibilities
- Creating monitoring systems that detect when AI behavior drifts outside acceptable parameters (a minimal sketch follows below)
- Building interfaces that clearly communicate system limitations and confidence levels to users
- Establishing feedback mechanisms that continuously improve system performance

By approaching AI engineering with these practical considerations, we can build systems that deliver value despite inherent uncertainty. The measure of success isn't perfect predictability but rather consistent reliability in achieving beneficial outcomes while avoiding harmful ones.

How does your organization approach uncertainty in AI systems? What practical methods have you found effective?
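As one hedged illustration of such drift monitoring (the metric choice, category names, and threshold are assumptions for the sketch, not a prescribed method), the Python below compares this week's distribution of an AI system's decisions against a baseline using Jensen-Shannon divergence and raises an alert when behavior moves outside an operational boundary.

```python
import math
from collections import Counter

def js_divergence(p: dict[str, float], q: dict[str, float]) -> float:
    """Jensen-Shannon divergence between two categorical distributions."""
    keys = set(p) | set(q)
    def kl(a: dict[str, float], b: dict[str, float]) -> float:
        # Small floor avoids log(0) for categories absent from one side.
        return sum(a.get(k, 1e-12) * math.log(a.get(k, 1e-12) / b.get(k, 1e-12))
                   for k in keys)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def normalize(counts: Counter) -> dict[str, float]:
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Baseline captured during validation vs. this week's live outputs
# (illustrative decision categories and counts).
baseline = normalize(Counter({"approve": 700, "refer": 250, "decline": 50}))
this_week = normalize(Counter({"approve": 520, "refer": 300, "decline": 180}))

DRIFT_THRESHOLD = 0.05  # illustrative operational boundary
score = js_divergence(baseline, this_week)
if score > DRIFT_THRESHOLD:
    print(f"ALERT: output drift {score:.3f} exceeds {DRIFT_THRESHOLD} - investigate")
```

This is dynamic observation rather than static evidence: the check runs continuously against live outputs, so a system that passed validation yesterday can still trip the alert when its behavior changes tomorrow.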
- The Emergence of AI Engineering
The Emergence of AI Engineering - Can You Hear the Music?

In a compelling presentation to the ASQ chapter / KinLin Business School in London, Ontario, Raimund Laqua delivered a thought-provoking talk on the emergence of AI Engineering as a distinct discipline and its critical importance in today's rapidly evolving technological landscape. Drawing from his expertise and passion for responsible innovation, Laqua painted a picture of both opportunity and urgency surrounding artificial intelligence development.

The Context: Canada's Missed Opportunity

Laqua began by highlighting how Canada, despite housing some of the world's best AI research centers, has largely given away its innovations without securing substantial benefits for Canadians. Instead of leading the charge in applying AI to build a better future, Canada risks becoming "a footnote on the page of AI history."

"Some say we don't do engineering in Canada anymore, not real engineering, never mind AI engineering," Laqua noted with concern. His mission, along with others, is to change this trajectory and ensure that Canadian innovation translates into Canadian prosperity. This requires navigating what he called "the map of AI Hype," passing through "the mountain of inflated expectations" and enduring "the valley of disillusionment" to reach "the plateau of productivity," where AI can contribute to a thriving tomorrow.

Understanding AI: Beyond the Hype

A significant portion of the presentation was dedicated to defining AI, which Laqua approached from multiple angles, acknowledging that AI is being defined in real-time as we speak.

AI as a Field of Study and Practice

AI represents both a scientific discipline and an engineering practice. As a science, AI employs the scientific method through experiments and observations. As an engineering practice, it utilizes the engineering method embodied through design and prototyping. Laqua observed that currently, many AI companies are conducting experiments in public at scale, prioritizing science over engineering—a practice he suggested needs reconsideration.

AI's Domain Diversity

Laqua emphasized that no single domain captures the full scope of AI. It spans multiple knowledge and practice domains, making it challenging to draw clear boundaries around what constitutes AI. This multidisciplinary nature contributes to the difficulty in defining and regulating AI comprehensively.

Historical Evolution

AI isn't new—it began with perceptrons (analog neural nets) in 1943, around the same time as the Manhattan Project. The technology has evolved through decades of research and experimentation to reach today's transformer models that power applications like ChatGPT, which Laqua described as "the gateway to AI," much like Netscape was "the gateway to the Internet."

AI's Predictive Nature

At its core, AI is a stochastic machine—a probabilistic engine that processes data to make predictions with inherent uncertainty. This stands in contrast to the deterministic nature of classical physics and traditional engineering, where predictability and reliability are paramount. "We are throwing a stochastic wrench in a deterministic works," Laqua noted, "where anything can happen, not just the things we intend."

AI's Core Capabilities

[Figure: AI is defined by its capabilities]

Laqua outlined five essential capabilities that define modern AI:

- Data Processing: The ability to collect and process vast amounts of data, with OpenAI reportedly having already processed "all the available data in the world that it can legally or otherwise acquire."
- Machine Learning: The creation of knowledge models stored in neural networks, where most current AI research is focused.
- Artificial Intelligence: Special neural network architectures or inference engines that transform knowledge into insights.
- Agentic AI: AI with agency—the ability to act in digital or physical worlds, including autonomous decision-making capabilities.
- Autopoietic AI: A concept coined by Dr. John Vervaeke (UoT), referring to AI that can adapt and create more AI, essentially reproducing itself.

Having smart AI is one thing, but having AI make decisions on its own with agency in the real or digital world is something else entirely, a line that deserves careful consideration before crossing. Laqua cautioned, "Some have already blown through this guardrail."

AI's Unique Properties

Laqua identified four aspects that collectively distinguish AI from other technologies:

- AI is a stochastic machine, introducing uncertainty unlike deterministic machines
- AI is a machine that can learn from data
- AI can learn how to learn, which represents its most powerful capability
- AI has agency in the world by design, influencing rather than merely observing

"Imagine a tool that can learn how to become a better tool to build something you could only have dreamed of before," Laqua said, capturing the transformative potential of AI while acknowledging the need to use this power safely.

The Uncertainty of AI

[Figure: The Cynefin Uncertainty Map]

Laqua emphasized that uncertainty is the root cause of AI risk, but what's different with AI is the degree and scope of this uncertainty. Traditional risk management approaches may be insufficient to address these new challenges. This demands that we learn how to be successful in the presence of this uncertainty.

The CYNEFIN Map of Uncertainty

Using the CYNEFIN framework, Laqua positioned AI between the "Unknowable" zone (complete darkness with unclear cause and effect, even in hindsight) and the "Unknown-Unknowns" zone (poor visibility of risks, but discernible with hindsight). This placement underscores the extreme uncertainty associated with AI and the need to engineer systems that move toward greater visibility and predictability.

Dimensions of AI Uncertainty

The presentation explored several critical dimensions of AI uncertainty:

- Uncertainty about Uncertainty: AI's outputs are driven by networks of probabilities, creating a meta-level uncertainty that requires new approaches to risk management.
- Uncertainty about AI Models: Laqua pointed out that "all models are wrong, although some are useful." LLMs are neither valid nor reliable in the technical sense—the same inputs can produce different outputs each time, making them technically unreliable in ways that go beyond mere inaccuracy.
- Uncertainty about Intelligence: The DIKW model (Data, Information, Knowledge, Wisdom) suggests that intelligence lies between knowledge and wisdom, but Laqua noted that humans introduce a top-down aspect related to morality, imagination, and agency that current AI models don't fully capture.
- Hemisphere Intelligence: Drawing on Dr. Ian McGilchrist's research on brain hemispheres, Laqua suggested that current AI primarily emulates left-brain intelligence (focused on details, logic, and analysis) while lacking right-brain capabilities (intuition, creativity, empathy, and holistic thinking). This imbalance stems partly from the left-brain dominance in tech companies developing AI.
- Uncertainty about Ethics: Citing W. Ross Ashby's "Law of Inevitable Ethical Inadequacy," Laqua explained why AI tends to "cheat": "If you don't specify a secure ethical system, what you will get is an insecure unethical system." This creates goal alignment problems—if AI is instructed to win at chess, it will prioritize winning at the expense of other unspecified goals.
- Uncertainty about Regulation: Traditional regulatory instruments may be inadequate for AI. According to cybernetic principles, "to effectively regulate AI, the regulator must be as intelligent as the AI system under regulation." This suggests that conventional paper-based policies and procedures may be insufficient, and we might need "AI to regulate AI"—an idea Laqua initially rejected but has come to reconsider.

Governing AI: Four Essential Pillars

[Figure: AI Governance Pillars]

To address these uncertainties and create trustworthy AI, Laqua presented four governance pillars that are emerging globally:

1. Legal Compliance

AI must adhere to laws and regulations, which are still developing globally. Laqua referenced several regulatory frameworks, including the EU's AI Act (approved in 2024), which he described as "perhaps the most comprehensive, built on top of the earlier GDPR framework." He noted that Canada lags behind, with Bill C-27 (Canada's AI act) having been canceled when the federal government was prorogued. While these legislative efforts are well-intentioned, Laqua cautioned that they are "new and untested," with technical standards even further behind. "We don't know if regulations will be too much, not enough, or even effective," he observed, emphasizing the need for lawyers, policy makers, regulators, and educators who understand AI technology.

2. Ethical Frameworks

Since "AI technology is not able to support ethical subroutines," humans must be ethical in AI's design, development, and use. This begins with making ethical choices concerning artificial intelligence and establishing AI ethical decision-making within organizations and businesses. Laqua called for "people who will speak up regarding the ethics of AI" to ensure responsible development.

3. Engineering Standards

AI systems must be properly engineered, preferably by licensed professionals. Laqua emphasized that professional engineers in Canada "are bound by an ethical code of conduct to uphold the public welfare." He argued that licensed Professional AI Engineers are best positioned to design and build AI systems that prioritize public good.

4. Management Systems

AI requires effective management to handle its inherent unpredictability. "To manage means to handle risk," Laqua explained, noting that AI introduces "an extra measure" of uncertainty due to its non-deterministic nature. He described AI as "a source of chaos" that, while useful, needs effective management to mitigate risks.

International Standards as Starting Points

Laqua recommended several ISO standards that can serve as starting points for implementing these pillars:

- ISO 37301 – Compliance Management System (Legal)
- ISO 24368 – AI Ethical Guidelines (Ethical)
- ISO 5338 – AI System Lifecycle (Engineered)
- ISO 42001 – AI Management System (Managed)

He emphasized that implementing these standards requires "people who are competent, trustworthy, ethical, and courageous (willing to speak up, and take risks)"—not just technical expertise but individuals who "can hear the music," alluding to a story about Oppenheimer's ability to understand the deeper implications of theoretical physics.

The Call for AI Engineers

[Figure: The AI Engineering Body of Knowledge (AIENGBOK)]

The presentation culminated in a compelling call for the emergence of AI Engineers—professionals who can "fight the dragon of AI uncertainty, rescue the princess, and build a better life happily ever after." These engineers would work "to create a better future, not a dystopian one" and "to design AI for good, not for evil."

The AI Engineering Body of Knowledge

Laqua shared that he has been working with a group called E4P, chairing a committee to define an AI Engineering Body of Knowledge (AIENGBOK). This framework outlines:

- What AI engineers need to know (theory)
- What they need to do (practice)
- The moral character they must embody (ethics)

Characteristics of AI Engineers

According to Laqua, AI Engineers should possess several defining characteristics:

- Advanced Education: AI engineers will "require a Master's Level Degree or higher"
- Transdisciplinary Approach: Not merely working with other disciplines, but representing "a discipline that emerges from working together with other disciplines"
- Team-Based Responsibility: "Instead of a single engineer accountable for a design, we need to do that with teams"
- X-Shaped Knowledge and Skills: Combining deep vertical expertise with connected horizontal breadth
- Methodological Foundation: Based on "AI Engineering Methods and Principles"
- Ethical Commitment: "Bound by AI Engineering Ethics"
- Professional Licensing: "Certified with a license to practice"

The Path Forward

Laqua outlined several requirements for establishing AI Engineering as a profession:

- Learned societies providing accredited programs
- Engineering professions offering expertise guidelines and experience opportunities
- Regulatory bodies enabling licensing for AI engineers
- Broad collaboration to continue developing the AIENGBOK

"The stakes are high, the opportunities are great, and there is much work to be done," he emphasized, calling for "people who are willing to accept the challenge to help build a better tomorrow."

A Parallel to the Manhattan Project

Throughout his presentation, Laqua drew parallels between current AI innovations and the Manhattan Project, where Robert Oppenheimer led efforts to harness atomic power. Both scenarios involve powerful technologies with potential for both tremendous good and harm, ethical dilemmas, and concerns about singularity events. Oppenheimer's work, while leading to the atomic bomb, also resulted in numerous beneficial innovations, including nuclear energy for power generation and medical applications like radiation treatment. Similarly, AI presents both risks and opportunities.

A Closing Reflection

Laqua concluded with a thought-provoking question inspired by Oppenheimer's legacy: "AI is like a tool; the important thing isn't that you have one, what's important is what you build with it. What are you building with your AI?"

This question encapsulates the presentation's core message: the need for thoughtful, responsible development of AI guided by competent professionals with a strong ethical foundation. Just as Oppenheimer was asked if he could "hear the music" behind mathematical equations, Laqua challenges us to hear the deeper implications of AI beyond its technical capabilities—to understand not just what AI can do, but what it should do to serve humanity's best interests. The presentation serves as both a warning about unmanaged AI risks and an optimistic call for a new generation of AI Engineers who can help shape a future where artificial intelligence enhances rather than diminishes human potential.

Raimund Laqua, PMP, P.Eng

Raimund Laqua is founder and Chief Compliance Engineer at Lean Compliance Consulting, Inc., and co-founder of ProfessionalEngineers.AI. He is also AI Committee Chair at Engineers for the Profession (E4P) and participates in working groups and advisory boards that include the ISO ESG Working Group, the OSPE AI Working Group, and Operational Excellence. Raimund is a professional engineer with a bachelor's degree in electrical/computer engineering from McMaster University (Hamilton). He has consulted for over 30 years across North America in highly regulated, high-risk sectors: oil & gas, energy, pharmaceutical, medical device, healthcare, government, and technology companies. Raimund is the author of weekly blog articles and an upcoming book on Operational Compliance – Staying Between the Lines and Ahead of Risk. He speaks regularly on the topics of lean, project management, risk & compliance, and artificial intelligence.

LinkedIn: https://www.linkedin.com/in/raimund-laqua/
- Risk Planning is Not Optional
What I have observed after reviewing risk management programs across diverse industries, including oil & gas, pipeline, medical device, chemical processing, high-tech, government, and others, is that the ability to address uncertainty and its effects is largely predetermined by design. This holds whether it is the design of a product, process, project, or an organization.

The process industry provides an illustrative example of what this looks like. For companies in this sector, the majority of safeguards and risk controls are designed into a facility and process before it ever goes on-line. In fact, once a given process becomes operational, every future change is evaluated, before it is made, against how it impacts the design and associated safety measures (a simple illustration follows below). This is what risk management looks like for companies in high-risk, highly-regulated sectors. The ability to handle uncertainty is designed, maintained, and improved throughout the expected life of the process. Risk informs all decisions, and risk management is implicit in every function and activity that is performed.

For all companies that contend with uncertainty, risk planning and implementation are not optional. Without adequate preparation it is not possible to effectively prevent or recover from the effects of uncertainty when they occur. Hoping for the best is a good thing, but it is not an effective strategy against risk. What is effective is handling uncertainty by design.
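To illustrate the management-of-change discipline described above, here is a minimal Python sketch; the fields and the gating rule are illustrative assumptions, not an industry system. A proposed change that touches designed-in safeguards is blocked until its impact on the design has been assessed.

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    description: str
    # Designed-in safeguards this change may affect (illustrative field).
    affected_safeguards: list[str] = field(default_factory=list)
    impact_assessed: bool = False

def approve(change: Change) -> bool:
    """Risk planning by design: no impact assessment, no approval."""
    if change.affected_safeguards and not change.impact_assessed:
        print(f"BLOCKED: '{change.description}' touches "
              f"{', '.join(change.affected_safeguards)} without an impact assessment")
        return False
    print(f"APPROVED: '{change.description}'")
    return True

approve(Change("Raise reactor feed rate", ["pressure relief sizing"]))
approve(Change("Raise reactor feed rate", ["pressure relief sizing"],
               impact_assessed=True))
```

The point of the gate is that the safeguard inventory exists before operations begin; the change process merely enforces a design decision already made.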