

  • GRC Engineering: The Need for Practice Standards

    When it comes to GRC systems, there can be a significant gap between what gets implemented and what's actually needed to achieve the performance and outcomes we're after. GRC system failures can be attributed to (among other things) practitioners lacking the fundamentals: understanding regulatory requirements, control theory, and how to translate compliance obligations into effective socio-technical solutions. At its core, this is requirements engineering and system design work. Yet how many self-proclaimed "GRC engineers" can actually design systems and processes that deliver meaningful data privacy, security, or compliance outcomes? Simply calling yourself an engineer doesn't make you one. This isn't just about credentials—it's about competence and trust. Organizations and the public deserve systems built by people who truly understand their craft. We demand reliability and integrity from our systems; shouldn't we expect the same from the people who build them? Other engineering disciplines have practice standards and licensing for good reason. As GRC automation becomes increasingly critical to organizational governance and public safety, we need similar standards to ensure practitioners are actually qualified for the work they claim to do. It's time to establish formal practice standards for GRC engineering—education requirements, competency assessments, and right-to-practice protections that ensure only qualified professionals design and implement the systems protecting our organizations and communities. What's your take on this? I'd love to hear your thoughts.

  • Why Ethics Makes AI Innovation Better

    Ethics in AI is fundamentally an alignment problem between technological capabilities and human values. While discussions often focus on theoretical future risks, we face immediate ethical challenges today that demand practical solutions, not just principles. Many organizations approach AI ethics as an obstacle to innovation - something to be minimized or sidestepped in the pursuit of capability development. This creates a false dichotomy between progress and safety. Instead, we need to integrate ethics directly into development processes to address real issues and risks. The practical application of ethics doesn't hinder innovation but ensures AI systems are truly safe. This integration requires understanding that AI challenges span multiple dimensions. At its core, AI is simultaneously a technical, organizational, and social problem. Technically, we must build robust safety mechanisms and engineering practices. Organizationally, we must consider how AI systems interact with existing processes and infrastructures. Socially, we must acknowledge how AI reflects and amplifies human values, biases, and power structures. Any effective solution must address all three dimensions. A multi-faceted approach helps us tackle issues like fairness. When we talk about mitigating bias in AI, we're really asking: when is statistical bias a legitimate problem versus simply representing a different, valid perspective? Applied ethics in AI helps us address these complex issues and balance competing values such as privacy versus security, or transparency versus intellectual property protection – with no perfect solutions, only thoughtful compromises. Even seemingly technical decisions carry ethical weight. Consider prompt efficiency, which directly impacts energy consumption – making our usage choices inherently ethical ones with environmental consequences. Technical decisions accumulate to create systems with profound social impacts. This is why we need clear metrics to measure success in ethical AI deployment – how do we quantify fairness, transparency, and explainability in meaningful ways? The distinction between human and artificial intelligence also creates an opportunity to uncover previously overlooked human potential – qualities and capabilities that may have been undervalued in our efficiency-focused world. As we build AI systems, we should continuously ask: where can AI best complement human work, and which capabilities should remain distinctly human?

    Moving Forward: From Principles to Practice

    The future of AI will be determined not by what we wish or hope for, but by what we actually create through concrete actions. Instead of abstract principles, we need practical implementations built on clear ethical requirements. In regions considering AI deregulation, organizations must strengthen self-regulation practices. While reduced regulation may accelerate certain types of commercial innovation, it risks neglecting safety innovation without proper oversight and incentives. We need breakthroughs in AI safety just as much as we need advances in AI capabilities. The path forward isn't about choosing between innovation and ethics, but recognizing that ethical considerations make our innovations truly valuable and sustainable. Through all of this, remember the simplest principle: be good with AI.
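    As a concrete illustration of the measurement question raised above, here is a minimal sketch of one way to quantify a single fairness notion (demographic parity): the gap in favourable-decision rates between groups. The function name, example data, and the 0.10 review threshold are illustrative assumptions, not a standard prescribed by the article.

    ```python
    # Minimal sketch: quantifying one fairness notion (demographic parity difference).
    # All names, data, and thresholds are illustrative assumptions.
    from collections import defaultdict

    def demographic_parity_difference(decisions, groups):
        """Return (gap, per-group rates) for 0/1 decisions (1 = favourable outcome)."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for d, g in zip(decisions, groups):
            totals[g] += 1
            positives[g] += d
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    if __name__ == "__main__":
        decisions = [1, 0, 1, 1, 0, 1, 0, 0]
        groups    = ["A", "A", "A", "B", "B", "B", "B", "B"]
        gap, rates = demographic_parity_difference(decisions, groups)
        print(rates)                        # per-group favourable-decision rates
        print(f"parity gap = {gap:.2f}")    # e.g. flag for review if gap > 0.10
    ```

    A metric like this is only one slice of "fairness"; the point is that whatever definition an organization adopts, it should be computed and tracked deliberately rather than assumed.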

  • Time to Poka-Yoke Your Compliance

    By Raimund Laqua, Lean Compliance Engineer Mistakes aren't failures—they’re lessons. You see this quote everywhere. LinkedIn. Motivational posters. Team meetings. It sounds wise until you work in compliance. Because when compliance engineers make mistakes, people die. The Problem with Mistake Worship The Challenger explosion. Boeing's 737 MAX crashes. The 2008 financial meltdown. These weren't "learning opportunities"—they were preventable disasters where someone's mistake became everyone else's tragedy. I've watched too many post-incident reviews where we nod solemnly, update our procedures, and promise to "learn from this." But learning from mistakes is fundamentally reactive. We're saying: "Let's fail first, then get better." What if we didn't have to fail at all? Poka-Yoke: From Mistake-Proofing to Promise-Keeping In LEAN management, there's a concept called Poka-Yoke—traditionally defined as mistake-proofing. But I prefer to think of it as engineering processes where obligations will always be met and promises kept. Instead of training people to be perfect, you design systems that reliably help organizations deliver on commitments. You make it easier to keep promises rather than break them. Think about USB-C cables. You can't plug them in wrong because there is no wrong way. The connection is engineered to work every time. Now apply this to compliance. Engineering Reliable Delivery Build obligation fulfillment into the process.  If safety inspections must happen before equipment startup, don't rely only on procedures—make startup electronically impossible without all the essential safety aspects in place and operational. Engineer commitment keeping.  Your car won't start without a seatbelt. Your procurement system shouldn't approve purchases without environmental assessments. Design continuous assurance.  Don't wait for quarterly audits to verify compliance. Build systems that provide real-time confirmation—dashboards that show obligation status, alerts that trigger before deadlines, processes that maintain compliance automatically. The key insight: engineer systems where keeping promises is the natural outcome, even when people are stressed and rushing. When Prevention Fails Even perfect systems have failures. But Poka-Yoke isn't just about prevention—it's about rapid detection. Fail small and fast before small problems become big disasters. Manufacturing uses statistical process control to catch deviations immediately. Compliance needs similar real-time monitoring. Not quarterly reports or yearly audits—constant visibility into drift before it becomes non-compliance. Stop Blaming People, Start Fixing Systems When compliance fails, we ask "Who screwed up?" Better question: "What in our system allowed this to happen?" Individual blame misses the point. In complex systems, human error is usually a symptom of poor design. Fix the system, and you fix the error. The Reality Check Perfect systems don't exist. People will always find workarounds when pressured. But that's exactly why we need Poka-Yoke thinking—design for the humans you have, not the perfect humans you wish you had. Stop celebrating your ability to learn from mistakes. Start celebrating your ability to prevent them. The best lesson is the one you never have to learn the hard way. Raimund Laqua is a Lean Compliance Engineer focused on applying operational and lean principles to operationalizing regulatory and voluntary obligations.
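    To make the interlock idea above concrete, here is a minimal sketch of a poka-yoke style startup check: the request is refused unless every prerequisite obligation has been recorded as complete. The obligation names and data structure are illustrative assumptions, not a real control-system interface.

    ```python
    # Minimal sketch of a compliance interlock: startup is blocked unless every
    # required obligation is satisfied. Obligation names are illustrative.
    REQUIRED_BEFORE_STARTUP = {"safety_inspection", "pressure_test", "operator_signoff"}

    def startup_allowed(completed_obligations):
        """Interlock check: startup proceeds only if nothing required is missing."""
        missing = REQUIRED_BEFORE_STARTUP - set(completed_obligations)
        return len(missing) == 0, sorted(missing)

    if __name__ == "__main__":
        ok, missing = startup_allowed({"safety_inspection", "operator_signoff"})
        if not ok:
            # The system blocks startup instead of relying on people
            # remembering the procedure under pressure.
            print("Startup blocked; outstanding obligations:", missing)
    ```

    The design choice is the point: keeping the promise is the default path, and breaking it requires deliberately bypassing the system rather than simply forgetting a step.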

  • Operational Rings of Power

    Three operational rings power organizations towards total value from their GRC, ESG, Quality, Security, Regulatory, Ethics, and Compliance investments, even when facing uncertainty:

    🔸 Ring of Alignment (coordinated effort towards targeted outcomes)
    🔸 Ring of Performance (capabilities to meet obligations)
    🔸 Ring of Consistency (conformance to standards)

    Operational Rings of Power

    These are held together by the fellowship of:

    🔸 Feed Forward Processes - leading indicators and actions, and
    🔸 Feed Back Processes - lagging indicators and actions

    When these are operating together as one, obligations can be met and stakeholders will experience the benefits from being in compliance: improved quality, safety, environment, security, sustainability, and so on – the real power of compliance. And who knows, you might even defeat the forces of Mordor and save Middle Earth. Now wouldn't that be something.

  • What Creates Risk Opportunities in Your System?

    By Raimund Laqua, P.Eng. - The Lean Compliance Engineer

    Uncertainty Creates the Opportunity for Risk

    I've sat through countless meetings where we talk about being "proactive"—whether it's safety, security, or quality. Yet here we are, still chasing incidents after they happen, still writing corrective actions for problems we should have seen coming. Sound familiar? Here's what I've learned after three decades in risk & compliance: we're fighting the wrong battle.

    The Real Enemy Isn't What You Think

    We obsess over the symptoms—incidents, failures, breaches, defects. But here's what we miss: uncertainty creates the opportunity for risk. These incidents are just manifestations of that risk. Hazards, threats, and failure modes? They're all manifestations of uncertainty. Think about your last major incident—safety, security, or quality related. The failure, the breach, the defect—those were risks that became a reality. But the real question is: why didn't we see it coming? Because we weren't looking at the uncertainties that created those risk opportunities in the first place.

    Why Traditional Programs Feel Like Whack-a-Mole

    Most risk & compliance management programs treat risk as a pest to eliminate. Write better procedures! More training! Tighter controls! But you can't eliminate risk when uncertainty keeps creating new risks. I've seen this pattern repeatedly across different organizations and domains. The root cause isn't equipment, people, or processes—it's uncertainties that keep creating fresh risk opportunities.

    What Actually Works

    The smartest professionals I know don't chase the symptoms of uncertainty (i.e. risk)—they map the uncertainties creating those opportunities. When we run HAZOPs in process safety, we're asking: "What uncertainties exist here, and what risk opportunities might they create?" In cybersecurity, threat modelling does the same thing—identifying uncertainties in system behaviour that create attack opportunities. Quality engineers use FMEA to map uncertainties in manufacturing processes that create defect opportunities. In aerospace, STAMP analysis tracks how uncertainties cascade through control systems, creating risk opportunities at every interaction. Even business consultants figured this out. CYNEFIN maps help teams recognize different types of uncertainty and the unique risk opportunities each creates—whether you're managing operational safety, cyber-security threats, or product quality.

    The Question That Changes Everything

    After years of watching organizations struggle with this across multiple domains, I'm convinced the future belongs to teams that hunt uncertainties—not the ones still swatting at symptoms – the effects of uncertainty. Instead of asking "How do we prevent this incident?" try asking "What uncertainties are creating the opportunity for risk to become a reality?"
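    A minimal sketch of what "mapping the uncertainties" could look like in practice, borrowing the FMEA-style scoring mentioned above (risk priority number = severity x occurrence x detection). The field names, example entries, and 1-10 scales are illustrative assumptions, not a prescribed method.

    ```python
    # Minimal sketch of an "uncertainty register": each entry names an uncertainty,
    # the risk opportunity it creates, and an FMEA-style priority score.
    from dataclasses import dataclass

    @dataclass
    class Uncertainty:
        description: str        # the uncertainty itself, not the incident
        risk_opportunity: str   # what the uncertainty could allow to happen
        severity: int           # 1 (negligible) .. 10 (catastrophic)
        occurrence: int         # 1 (rare) .. 10 (almost certain)
        detection: int          # 1 (easily detected) .. 10 (undetectable)

        @property
        def rpn(self) -> int:
            # Risk priority number: severity x occurrence x detection
            return self.severity * self.occurrence * self.detection

    register = [
        Uncertainty("Feed composition varies between suppliers",
                    "Runaway reaction during batch start-up", 9, 4, 6),
        Uncertainty("Patch status of edge devices is unknown",
                    "Unauthorised access via unpatched firmware", 8, 5, 7),
    ]

    # Hunt the biggest uncertainties first, not the most recent incidents.
    for u in sorted(register, key=lambda x: x.rpn, reverse=True):
        print(f"RPN {u.rpn:4d}  {u.description} -> {u.risk_opportunity}")
    ```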

  • AI Assistants - Threat or Opportunity?

    AI Assistants - Blessing or Curse?

    The rise of Generative AI has taken the world by storm, and AI assistants are popping up all over the place, providing a new way for people to approach their work. These assistants automate repetitive and time-consuming tasks, enabling individuals to focus on more complex and creative work. However, for some, it is just an improvement in productivity, and they question whether the use of AI assistants may lead to them losing their jobs. For those starting to use AI assistants, they are indeed a blessing, providing much-needed relief for overworked employees. The improved productivity is creating needed capacity and some extra space in already full workloads. However, this is expected to be short-lived as these benefits become normalized and expected. The buffer we now experience will be consumed and used for something – the question is what? No wonder there is a fear that the widespread use of AI assistants may lead to significant job reductions. Some jobs will be redundant, while others will be expected to double their workloads. For instance, if someone used to write ten articles a week, they may now be expected to do twenty using AI assistants. So, where is the real gain for the organization apart from fewer people and perhaps marginal cost reductions? Is this the same story of bottom-line rather than top-line thinking?

    How To Use AI Assistants To Achieve Better Outcomes

    The key to realizing the transformational benefits of AI lies in adapting businesses to fully exploit the capabilities of these tools, without exploiting the people impacted by the technology. Dr. Eliyahu Goldratt (Father of the Theory of Constraints) believed that technology could only bring benefits if it diminished a limitation. Therefore, organizations must ask critical questions to exploit the power of AI technology: What is the power of the new technology? What limitation does the technology diminish? What rules enabled us to manage this limitation? And most importantly, what new rules will we now need? Keeping the old rules we had before the new technology limits the benefits we can realize. It is by removing the old rules and adopting new ones that transformational benefits are created. By providing credible answers to these questions, organizations can achieve a return on investment that is both efficient and effective, enabling their employees to focus on higher-level tasks and achieve more significant outcomes – higher returns, not just lower costs. This will enable companies to move beyond the short-lived relief of AI and realize its true potential as a transformational tool.

    Which Path Will You Take?

    The use of AI will be a threat for some but an opportunity for others. If history repeats itself, many organizations will adopt AI assistants, realize the efficiency gains, and pat themselves on the back for a short-term win. However, as these benefits become normalized, they will soon be back to where they began. Any gains they might have realized will be lost and they will be left doing more with less, except now with their new AI assistant. On the other hand, there will be others who ask the right questions, change existing processes, and create new rules that enable them to reap the full benefits of AI technology. They will realize compounding benefits that accrue over time. What the future holds will depend on which path you take and your willingness to take a longer-term perspective focused on improving outcomes rather than just reducing costs.
Which path will you take?

  • The Need for LEAN AI Regulation

    There's a growing urgency to establish regulations for artificial intelligence (AI). Public concerns about potential harm and human rights violations are valid. However, proposed regulatory regimes can add significant compliance burdens for organizations already navigating a complex landscape. It's important to consider how existing regulations, standards, and professional oversight bodies can be leveraged for AI. Professional engineers, for example, already adhere to strict ethical codes. Adapting these frameworks to address AI-specific risks could be a quicker and more efficient approach than building entirely new regulatory structures. By focusing on existing resources that safeguard critical infrastructure, public safety, and environmental sustainability, we can promote responsible AI development without stifling innovation. This requires a thoughtful and collaborative approach that balances both innovation and risk mitigation. It’s time we considered Lean AI Regulation.

  • AI Governance, Assurance, and Safety

    AI Governance, Assurance, and Safety As AI becomes more prevalent and sophisticated, it is being used in critical applications, such as healthcare, transportation, finance, and national security. This raises a number of concerns that include: AI systems have the potential to cause harm : AI systems can cause harm if they are not designed and implemented properly. For example, if an AI system is used to make decisions in a critical application such as healthcare, and it makes a wrong decision, it could result in harm to the patient. Therefore, it is important to ensure that AI systems are safe and reliable. AI is becoming more complex : AI systems are becoming more complex as they incorporate more advanced algorithms and machine learning techniques. This complexity can make it difficult to understand how the AI system is making decisions and to identify potential risks. Therefore, it is important to have a governance framework in place to ensure that AI systems are designed and implemented properly. Trust and transparency are necessary : Trust and transparency are critical for the adoption and use of AI systems. If users cannot trust an AI system, they will be reluctant to use it. Therefore, it is important to have mechanisms in place to ensure that AI systems are transparent, explainable, and trustworthy. Regulations and standards are needed: As AI becomes more prevalent and critical, there is a need for regulations and standards to ensure that AI systems are safe and reliable. These regulations and standards can help to ensure that AI systems are designed and implemented properly and that they meet certain safety and reliability standards. As a result, AI governance, assurance, and safety are increasingly important and necessary. Let’s take a closer look at what these mean and how they impact compliance. AI Governance AI governance refers to the set of policies, regulations, and practices that guide the development, deployment, and use of artificial intelligence (AI) systems. It encompasses a wide range of issues, including data privacy, accountability, transparency, and ethical considerations. The goal of AI governance is to ensure that AI systems are developed and used in a way that is consistent with legal and ethical norms, and that they do not cause harm or negative consequences. It also involves ensuring that AI systems are transparent, accountable, and aligned with human values. AI governance is a complex and rapidly evolving field, as the use of AI systems in various domains raises new and complex challenges. It requires the involvement of a range of stakeholders, including governments, industry leaders, academic researchers, and civil society groups. Effective AI governance is crucial for promoting responsible AI development and deployment, and for building trust and confidence in AI systems among the public. AI Assurance AI assurance refers to the process of ensuring the reliability, safety, and effectiveness of artificial intelligence (AI) systems. It involves a range of activities, such as testing, verification, validation, and risk assessment, to identify and mitigate potential issues that could arise from the use of AI. The goal of AI assurance is to build trust in AI systems by providing stakeholders, such as regulators, users, and the general public, with confidence that the systems are functioning as intended and will not cause harm or negative consequences. 
AI assurance is a critical component of responsible AI development and deployment, as it helps to mitigate potential risks and ensure that AI systems are aligned with ethical and legal norms. It is also important for ensuring that AI systems are transparent and accountable, which is crucial for building trust and promoting responsible AI adoption. AI Safety AI safety refers to the set of principles, strategies, and techniques aimed at ensuring the safe and beneficial development and deployment of artificial intelligence (AI) systems. It involves identifying and mitigating potential risks and negative consequences that could arise from the use of AI, such as unintended outcomes, safety hazards, and ethical concerns. The goal of AI safety is to develop AI systems that are aligned with human values, transparent, and accountable. It also involves ensuring that AI systems are designed and deployed in a way that does not harm humans, the environment, or other living beings. AI safety is a rapidly growing field of research and development, as the increasing use of AI systems in various domains poses new and complex challenges. AI safety is closely related to the broader field of responsible AI, which aims to ensure that AI systems are developed and used in a way that is ethical, transparent, and socially beneficial. AI assurance and AI safety are both important concepts in the field of artificial intelligence (AI), but they refer to different aspects of ensuring the proper functioning of AI systems. AI assurance refers to the process of ensuring that an AI system is operating correctly and meeting its intended goals. This involves testing and validating the AI system to ensure that it is functioning as expected and that its outputs are accurate and reliable. The goal of AI assurance is to reduce the risk of errors or failures in the system and to increase confidence in its outputs. On the other hand, AI safety refers to the specific objective of ensuring that AI systems are safe and do not cause harm to humans or the environment. This involves identifying and mitigating potential risks and unintended consequences of the AI system. The goal of AI safety is to ensure that the AI system is designed and implemented in a way that minimizes the risk of harm to humans or the environment. Impact on Compliance AI governance, AI assurance, and AI safety are critical components to support current and upcoming regulations and standards related to the use of AI systems. These functions will impact compliance in the following ways: AI Governance : AI governance refers to the policies, processes, and controls that organizations put in place to manage and oversee their use of AI. Effective AI governance is essential for compliance because it helps organizations ensure that their AI systems are designed and implemented in accordance with applicable laws and regulations. AI governance frameworks can include policies and procedures for data management, risk management, and ethical considerations related to the use of AI. AI Assurance : AI assurance refers to the process of testing and validating AI systems to ensure that they are functioning correctly and meeting their intended goals. This is important for compliance because it helps organizations demonstrate that their AI systems are reliable and accurate. AI assurance measures can include testing and validation procedures, performance monitoring, and quality control processes. 
AI Safety: AI safety refers specifically to ensuring that AI systems are safe and do not cause harm to humans or the environment. This is important for compliance because it helps organizations demonstrate that their AI systems are designed and implemented in a way that meets safety and ethical standards. AI safety measures can include risk assessments, safety testing, and ethical considerations related to the use of AI. Together, AI governance, AI assurance, and AI safety help organizations comply with regulations and standards related to the use of AI. These measures ensure that AI systems are designed and implemented in a way that meets safety, ethical, and legal requirements. In addition, compliance with AI-related regulations and standards is essential for building trust with stakeholders and ensuring the responsible and ethical use of AI.

    Measures of AI Governance, Assurance, and Safety

    The following are steps that organizations can take to introduce AI governance, assurance, and safety:

    Establishing AI Regulatory Frameworks: Governments, industry, and organizations need to create frameworks that govern the development, deployment, and use of AI technologies. The regulations should include guidelines for data privacy, security, transparency, and accountability.
    Implementing Ethical Guidelines: AI systems must adhere to ethical guidelines that consider the impact on society, respect human rights and dignity, and promote social welfare. Ethical considerations must be factored into the design, development, and deployment of AI systems.
    Promoting Transparency and Explainability: AI systems should be transparent and explainable. This means that the decision-making process of AI systems should be understandable and interpretable by humans. This will enable people to make informed decisions about the use of AI systems.
    Ensuring Data Privacy and Security: Data privacy and security must be a priority for any AI system. This means that personal data must be protected, and cybersecurity measures must be implemented to prevent unauthorized access to the data.
    Implementing Risk Management Strategies: Organizations need to develop risk management strategies to address the potential risks associated with the use of AI systems. This includes identifying potential risks, assessing the impact of those risks, and developing mitigation strategies.
    Establishing Testing and Validation Standards: There must be established testing and validation standards for AI systems to ensure that they meet the required performance, reliability, and safety standards.
    Creating Accountability Mechanisms: Organizations must be held accountable for the use of AI systems. This includes establishing accountability mechanisms that ensure transparency, fairness, and ethical decision-making.
    Investing in Research and Development: Investment in research and development is crucial to advance the state of AI technology and address the challenges associated with AI governance, assurance, and safety.

    In next week's blog post, we take a deep dive into upcoming cross-cutting AI regulations and guidelines that organizations will need to prepare for and where AI Governance, Assurance and Safety will be required:
    Canadian Bill C-27 AIDA (in its second reading)
    European Union AI Act (proposed)
    UK AI National Strategy (updated Dec 18, 2022)
    USA NIST AI Framework (released Jan 26, 2023)
    If you haven't subscribed to our newsletter, make sure that you do so you don't miss it.
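    To illustrate how the three functions above can be tied together operationally, here is a minimal sketch of a pre-deployment gate that checks for assurance evidence (validation results), safety evidence (a completed risk assessment), and governance sign-off (an accountable approver) before a model release proceeds. The field names, the 0.90 threshold, and the example release are illustrative assumptions, not requirements drawn from any of the regulations listed above.

    ```python
    # Minimal sketch of a deployment gate combining governance, assurance, and safety checks.
    # Requires Python 3.10+. All names and thresholds are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class ModelRelease:
        name: str
        validation_accuracy: float | None = None   # assurance evidence
        risk_assessment_done: bool = False         # safety evidence
        approver: str | None = None                # governance sign-off
        issues: list[str] = field(default_factory=list)

        def ready_to_deploy(self, min_accuracy: float = 0.90) -> bool:
            self.issues.clear()
            if self.validation_accuracy is None or self.validation_accuracy < min_accuracy:
                self.issues.append("assurance: validation evidence missing or below threshold")
            if not self.risk_assessment_done:
                self.issues.append("safety: risk assessment not completed")
            if not self.approver:
                self.issues.append("governance: no accountable approver recorded")
            return not self.issues

    release = ModelRelease("triage-model-v2", validation_accuracy=0.93, risk_assessment_done=True)
    if not release.ready_to_deploy():
        print("Deployment blocked:", release.issues)  # governance sign-off is missing here
    ```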

  • Why Your IT Playbook Won't Work for AI Systems

    Organizational leadership faces a critical decision: apply familiar commodity IT approaches to AI development or invest in systematic design processes for fundamentally different technology. The wrong choice creates cascading risks that compound as AI systems learn and adapt in unpredictable ways. A Fundamental Difference Commodity IT succeeds with assembly and agile approaches because it works with predictable components that have stable behaviours and known interfaces. A database or API behaves consistently according to its specifications, making integration challenges manageable and testing straightforward. Development teams can iterate rapidly because outcomes are predictable and systems remain stable after deployment. AI Systems violate every assumption that makes commodity IT approaches successful. These systems change behaviour over time through learning and adaptation, making their responses non-deterministic and their long-term behaviour unpredictable. Unlike traditional software that executes according to programmed logic, AI systems evolve their responses based on new data, environmental changes, and feedback mechanisms—creating fundamentally different engineering challenges. Why Familiar Approaches Fail Assembly Approaches appear to work initially but break down under real-world conditions. What looks like "assembling" pre-built AI components actually requires substantial custom engineering to handle behavioural consistency across model updates, performance monitoring as systems drift, bias detection and correction as they adapt, and compliance maintenance as behaviour evolves. The integration complexity is magnified because AI components can change their characteristics over time, breaking assumptions about stable interfaces. Agile Limitations become apparent when dealing with systems that require extended observation periods to reveal their true behaviour. Traditional sprint cycles assume you can fully test and validate functionality within short time-frames, but AI systems may only reveal critical issues weeks or months after deployment as they learn from real-world data. The feedback loops that make agile effective in commodity IT don't work when system behaviour continues evolving in production. Testing Assumptions fail because AI systems don't produce repeatable outputs from identical inputs. Traditional testing validates that a system behaves according to specifications, but AI systems are designed to adapt and change. Point-in-time validation becomes meaningless when the system you tested yesterday may behave differently today based on what it has learned. This requires fundamentally different approaches to verification and validation. The Engineering Necessity AI's adaptive and unpredictable nature makes disciplined design processes absolutely essential. Organizations must develop new capabilities specifically designed to control and regulate AI technology. Goal Boundaries must be explicitly designed to define what the system should optimize for and what constraints must never be violated. Without systematic design for acceptable learning parameters, AI systems can adapt in ways that conflict with business objectives, ethical requirements, or regulatory compliance. Behavioural Governance requires systematic approaches for monitoring, evaluating, and controlling AI system behaviour as it evolves. 
This includes creating capabilities for detecting when systems drift outside acceptable boundaries and designing interventions to correct problematic adaptations before they cause operational or compliance issues. Continuous Verification becomes essential because AI systems require ongoing monitoring rather than periodic validation. Organizations must build systems with comprehensive monitoring capabilities that track not just performance metrics but behavioural evolution, bias emergence, and compliance drift throughout the system life-cycle. Adaptation Management demands new processes for managing beneficial learning while preventing harmful evolution. This includes designing model versioning and rollback capabilities, creating human oversight mechanisms for critical adaptations, and building processes for systematic feedback and correction. Strategic Implications The choice between commodity IT approaches and AI engineering has profound strategic consequences that will determine organizational success in an AI-driven competitive landscape. Competitive Risk emerges when organizations treat AI systems like traditional software. Ad-hoc approaches create operational risks that compound as systems evolve unpredictably, while engineered approaches enable organizations to deploy AI capabilities that adapt within controlled boundaries and provide sustainable competitive advantages through reliable performance. * Regulatory Exposure is amplified by AI's adaptive nature. The EU AI Act and emerging regulations specifically address systems that change behaviour over time, creating significant liability for non-compliant adaptive systems. Organizations using static approaches face unknown compliance gaps that multiply as their systems learn and evolve, while engineered design provides verifiable compliance and defensible audit trails. Technical Debt Accumulation happens faster with AI systems because each quick implementation becomes a maintenance burden requiring specialized oversight. Ad-hoc AI deployments create knowledge silos and operational dependencies that become increasingly expensive to manage. Systematic approaches build reusable capabilities and organizational knowledge that compound value rather than costs. Organizational Capability determines long-term success in AI deployment. The scarcity of AI talent makes internal capability development critical, but commodity IT approaches don't develop the specialized knowledge needed for managing adaptive systems. Systematic engineering approaches create organizational expertise in designing, deploying, and governing AI systems that becomes increasingly valuable as AI adoption scales. Choose Wisely Organizational leadership must choose between two fundamentally different approaches to AI development: Continue with IT Playbook : Apply familiar assembly and agile methods, hoping that AI components will integrate smoothly and systems will learn beneficially without systematic oversight. This approach appears faster initially but creates compounding risks as systems adapt in unpredictable ways. Invest in AI Engineering : Develop systematic design capabilities specifically for adaptive systems, creating controlled learning environments with proper governance, monitoring, and intervention capabilities. This approach requires upfront investment but builds sustainable AI capabilities with manageable risks. 
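    As one small illustration of the continuous verification and adaptation management capabilities described above, here is a minimal sketch that compares a recent window of model decisions against a reference window captured at approval time and flags the release for rollback review when the shift exceeds a bound. The statistic (a simple shift in positive-decision rate) and the 0.05 bound are illustrative stand-ins for whatever drift measure and threshold a team actually adopts.

    ```python
    # Minimal sketch of continuous behavioural verification for an adaptive system.
    # The drift statistic and threshold are illustrative assumptions.
    def positive_rate(outputs):
        return sum(outputs) / len(outputs)

    def check_behavioural_drift(reference_outputs, recent_outputs, max_shift=0.05):
        """Return (drifted, shift) comparing two streams of 0/1 decisions."""
        shift = abs(positive_rate(recent_outputs) - positive_rate(reference_outputs))
        return shift > max_shift, shift

    reference = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # captured when the model was approved
    recent    = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # what the adaptive system does today

    drifted, shift = check_behavioural_drift(reference, recent)
    if drifted:
        # In a fuller design this would open an incident, freeze further learning,
        # and make the previously approved model version available for rollback.
        print(f"Behavioural drift {shift:.2f} exceeds bound; trigger rollback review")
    ```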
Bottom Line AI systems are not commodity IT with different algorithms—they are fundamentally different technology that requires the application of engineering methods and practice. The adaptive capability that makes AI powerful also makes design essential because these systems will continue changing behaviour throughout their operational life-cycle. Organizations that continue applying familiar IT methods to AI will create operational risks, compliance gaps, and technical debt that become increasingly expensive to address as their systems scale and evolve. Those that invest in engineered approaches will build sustainable competitive advantages through reliable AI capabilities that adapt within controlled boundaries. Skipping the engineering stage to accelerate AI adoption is not only unwise, it's a failure to exercise proper duty of care. About the Author: Raimund Laqua, P.Eng , is a professional computer engineer with over 30 years of expertise in high-risk and regulated industries, specializing in lean methodologies and operational compliance. He is the founder of Lean Compliance and co-founder of ProfessionalEngineers.AI , organizations dedicated to advancing engineering excellence. As a Professional Digital/AI Engineering Advocate, Raimund champions proper licensure across the entire spectrum of digital engineering disciplines. He actively contributes to the profession through his leadership roles, serving as AI Committee Chair for Engineers for the Profession (E4P) and as a member of the Ontario Society of Professional Engineers (OSPE) working group on AI in Engineering, where he helps shape the future of professional engineering practice in the digital domain.

  • Intelligent Design for Intelligent Systems: Restoring Engineering Discipline in AI Development

    The Current Challenge AI systems are increasingly deployed without the systematic design approaches that have proven effective in other engineering disciplines. Development teams often prioritize rapid deployment over comprehensive analysis of system behaviour and potential consequences, viewing detailed design work as an impediment to progress. This approach has led to AI systems that exhibit unintended biases, perform poorly in edge cases, or create consequences that become apparent only after deployment. These issues typically stem not from poor intentions, but from the absence of established design practices that help engineers anticipate and address such problems systematically. This represents a significant challenge for the engineering profession as AI systems take on increasingly critical roles in society. The Divergence of Engineering Practices The software industry's adoption of agile methodologies and rapid iteration cycles has brought valuable benefits in flexibility and responsiveness to changing requirements. However, these approaches have also shifted emphasis away from comprehensive upfront design toward emergent solutions that develop through iterative refinement. This shift made sense for many consumer applications where failures result in minor inconvenience and rapid correction is possible. However, applying the same approach to AI systems that influence significant decisions—about loans, healthcare, employment, or criminal justice—may not be appropriate given the different risk profiles and consequences involved. The gap between current AI development practices and established engineering design principles has widened precisely when AI applications have become more consequential. This divergence raises fundamental questions about professional standards and public responsibility. Lessons from Engineering Disciplines Other engineering fields offer instructive examples of how systematic design practices manage complexity and risk. These examples aren't perfect templates for AI, but they illustrate principles that could be adapted. Process Engineering Excellence Chemical engineers approach new processes through systematic analysis. They begin with fundamental principles—mass balances, energy balances, reaction kinetics, thermodynamics. Hazard analysis follows: What can go wrong? How likely is it? What are the consequences? Safety systems are designed to handle credible failure scenarios, control strategies are developed, and process flow diagrams are created. Only then does detailed engineering and construction begin. This methodical approach doesn't guarantee perfection, but it systematically addresses known risks and creates documentation that helps future engineers understand design decisions. When problems arise, the design history provides context for effective troubleshooting and modification. Medical Device Standards The medical device industry operates under regulatory frameworks that require comprehensive design controls. Companies must demonstrate systematic design planning, establish clear requirements, perform risk analysis, and validate that devices meet their intended use. Design History Files document not just final specifications, but the reasoning behind design choices, alternatives considered, and risk assessments performed. This documentation serves multiple purposes: regulatory compliance, quality assurance, and knowledge transfer. 
When devices perform unexpectedly or require modification, engineers can trace decisions back to their original rationale and assess the implications of changes. Aerospace and Nuclear Precedents High-consequence industries like aerospace and nuclear engineering demonstrate how design rigour scales with potential impact. Multiple design reviews, extensive analysis and simulation, redundant safety systems, and comprehensive documentation are standard practice. The principle of defence in depth ensures that no single failure leads to catastrophic outcomes. These industries accept higher development costs and longer timelines because the consequences of failure justify the investment in thorough design. They've learned through experience that shortcuts in design often lead to much higher costs later. The Unique Nature of AI Systems AI systems present design challenges that both parallel and extend beyond those in traditional engineering. Understanding these characteristics is essential for developing appropriate design approaches. AI systems exhibit emergent behaviours that can surprise even their creators. Unlike a chemical process whose behaviour follows predictable physical laws, AI systems learn patterns from data that may not be obvious to human designers. A trained model's decision-making process often remains opaque, making it difficult to predict behaviour in edge cases or novel situations. This opacity doesn't excuse engineers from design responsibility—it demands more sophisticated approaches to understanding and validating system behaviour. Traditional testing methods may be insufficient for systems that can behave differently with each new dataset or operational context. AI systems also evolve continuously. Traditional engineered systems are static once deployed, but AI systems often adapt their behaviour based on new data or feedback. This creates ongoing design challenges: How do teams maintain safety and reliability in systems that change their behavior over time? How do they validate performance when the system itself is learning and adapting? The societal implications of AI systems amplify these technical challenges. When AI systems influence medical diagnoses, financial decisions, or criminal justice outcomes, their effects ripple through communities and institutions. Design decisions that seem purely technical can have profound social consequences. Design: The Missing Foundation The software industry has developed a narrow view of what design means, often reducing it to user interface considerations or architectural patterns. True engineering design is a more fundamental process—the intellectual synthesis of requirements, constraints, and knowledge into coherent solutions. Effective design involves creativity within constraints. It requires understanding problems deeply enough to anticipate how solutions might fail or create unintended consequences. It demands making explicit trade-offs rather than allowing them to emerge accidentally through implementation. Design also involves systematic thinking about the entire system life-cycle. What happens when requirements change? How will the system behave in unexpected conditions? What knowledge must be preserved for future maintainers? These questions require deliberate consideration, not emergent discovery. 
The absence of systematic design shows up predictably in AI projects: requirements that conflate technical capabilities with user needs, no systematic analysis of failure modes or edge cases, ad hoc validation approaches, unclear definitions of acceptable performance, and changes made without understanding their broader implications. Toward Intelligent Design for AI Developing design practices for AI systems requires adapting proven engineering principles while acknowledging the unique characteristics of intelligent systems. This adaptation represents an evolution of engineering practice, not a rejection of software development methodologies. Adaptive Requirements Management : Traditional design assumes relatively stable requirements, but AI systems often operate in environments where requirements evolve with understanding. Design processes must accommodate this evolution while maintaining clear criteria for success and failure. Systematic Behaviour Analysis : Since AI behaviour emerges from training data and algorithmic interactions, design must include systematic approaches to understanding and predicting system behaviour. This includes analyzing training data characteristics, assessing potential biases, and evaluating performance across diverse scenarios. Dynamic Validation Frameworks : Static validation is insufficient for systems that continue learning. Design must incorporate ongoing validation approaches that can detect when system behaviour drifts from acceptable parameters or when operating conditions exceed design assumptions. Living Documentation : Design documentation for AI systems must evolve with the systems themselves. This requires new approaches to capturing design rationale, tracking changes, and maintaining understanding of system behaviour over time. Risk-Proportionate Processes : The level of design rigour should correspond to system impact and risk. Consumer recommendation systems warrant different treatment than medical diagnostic tools, but both require systematic approaches appropriate to their consequences. Transparency by Design : While AI systems may be inherently complex, their design processes need not be opaque. Building in explainability, auditability, and interpretability from the beginning makes it easier to understand, validate, and maintain system behaviour. These approaches don't slow development when implemented thoughtfully. Instead, they can accelerate progress by identifying issues early, building stakeholder confidence, and reducing costly failures that result from inadequate planning. The Professional Obligation Effective design practices represent more than technical improvements—they reflect professional responsibility. Engineers in Canada have an obligation to consider public welfare when developing systems that affect people's lives, careers, and opportunities. The software industry's acceptance of post-deployment fixes may be adequate for many applications, but becomes problematic when applied to AI systems with significant societal impact. When AI systems influence medical treatments, criminal justice decisions, or economic opportunities, the traditional "patch it later" approach may not align with engineering ethics and professional standards. This shift in perspective requires acknowledging that AI development has moved beyond the realm of experimental software into the domain of engineered systems with real-world consequences. With this transition comes the professional obligations that engineers in other disciplines have long accepted. 
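    Before returning to the professional dimension, here is a minimal sketch of what the "Systematic Behaviour Analysis" and "Dynamic Validation Frameworks" ideas above could look like at their simplest: performance is checked per scenario slice, edge cases included, against an explicit acceptance criterion rather than as a single aggregate score. The slice names, example data, and 0.85 threshold are illustrative assumptions.

    ```python
    # Minimal sketch of scenario-sliced validation. Names and thresholds are illustrative.
    def accuracy(pairs):
        return sum(1 for pred, truth in pairs if pred == truth) / len(pairs)

    def validate_by_scenario(slices, min_accuracy=0.85):
        """slices: {scenario_name: [(prediction, ground_truth), ...]}"""
        failures = {}
        for name, pairs in slices.items():
            score = accuracy(pairs)
            if score < min_accuracy:
                failures[name] = score
        return failures   # empty dict means the acceptance criterion is met everywhere

    slices = {
        "typical_requests": [(1, 1), (0, 0), (1, 1), (0, 0)],
        "rare_edge_cases":  [(1, 0), (0, 0), (1, 0), (0, 0)],   # model struggles here
    }
    for name, score in validate_by_scenario(slices).items():
        print(f"Scenario '{name}' below threshold: {score:.2f}")
    ```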
Engineers need to consider what society expects when AI systems are deployed in critical applications. Each unaddressed bias, unanticipated failure mode, or unhandled edge case represents a choice about acceptable risk that deserves deliberate consideration rather than default acceptance. Intelligent Design as Professional Practice The engineering profession stands at a critical juncture. AI systems are becoming more capable and widespread, taking on roles that directly affect human welfare and social outcomes. The practices that guide their development will shape not only technological progress but also public trust in engineering expertise. We need intelligent design practices that match the sophistication of the artificial intelligence we're creating. This means design approaches that can handle uncertainty and adaptation while maintaining safety and reliability. It means documentation that evolves with systems rather than becoming obsolete artifacts. It means validation approaches that continue throughout system lifecycles rather than ending at deployment. The goal isn't to slow AI development with bureaucratic processes, but to accelerate responsible innovation through better engineering practices. Other engineering disciplines have learned that systematic design ultimately speeds development by preventing costly mistakes and building stakeholder confidence. Developing these practices will require collaboration across multiple communities: AI researchers who understand algorithmic behaviour, software engineers who build production systems, domain experts who understand application contexts, and engineers from other disciplines who bring experience with systematic design. The transformation won't happen overnight, but it can begin immediately with recognition that AI systems deserve the same thoughtful design consideration we apply to other engineered systems that affect public welfare. This means asking harder questions about requirements, spending more time analyzing potential failure modes, documenting design decisions more thoroughly, and validating performance more systematically. A Professional Opportunity The engineering profession has historically risen to meet new challenges by adapting its principles to emerging technologies. From mechanical systems to electrical power to chemical processes, engineers have learned to apply systematic design thinking to complex systems with significant societal impact. AI represents the latest such challenge—and perhaps the most important. These systems will increasingly shape economic opportunities, healthcare outcomes, transportation safety, and social interactions. The design practices we establish now will influence how AI develops and how society experiences its benefits and risks. This is fundamentally about professional identity and public responsibility. Engineers have always carried the obligation to consider the broader implications of their work. As AI systems become more powerful and pervasive, that obligation becomes more pressing, not less. The question facing the profession isn't whether AI development should be subject to engineering design principles, but how those principles should evolve to address the unique characteristics and growing importance of intelligent systems. The answer will determine not only the technical trajectory of AI, but also whether the engineering profession continues to merit society's trust in an age of artificial intelligence. 
We need intelligent design as much as we need artificial intelligence—perhaps more. The two must develop together, each informing and strengthening the other, as we navigate toward a future where engineered intelligence serves human flourishing reliably, safely, and ethically. About the author: Raimund Laqua is a Professional Engineer, founder of Lean Compliance, co-founder of ProfessionalEngineers.AI, and AI Committee Chair for E4P.

  • Have We Reached The End of Software Engineering?

    By Raimund Laqua, P.Eng The End of Software Engineering? I've spent over three decades practising engineering in both Canada and the United States, and what I've witnessed represents something I, along with others, have been slow to understand. The death of software engineering isn't only a result of artificial intelligence, or perhaps ineffective engineering governance—it's also because information technology itself is reaching the end of its natural life-cycle . The technological era that needed it has run its course. The Decline of Engineering in Canada Over my career, I kept hearing "We don't do engineering in Canada anymore." For years, I brushed this off as professional griping. Turns out I was wrong. Working across different sectors and organizations, I learned that while we were still building things, we weren't building them like engineers anymore. This was especially true in Canada. We'd stopped engineering the big infrastructure projects that define industrial nations—refineries, pipelines, nuclear plants, major data centres. Most of our work had shifted to maintaining and operating what earlier generations had actually engineered and built. So when people said engineering was dying, they had a point—at least when it came to designing new infrastructure and mission-critical systems. Information Era at Its End The software world showed this decline even more clearly. What I've come to realize is that information technology itself was hitting the end of its life-cycle as a technological pursuit. You could see it everywhere, but nowhere more obviously than in the rise of Agile methodology. Agile wasn't just push-back against heavy processes—it was information technology's death rattle as an engineering discipline. When any field abandons systematic design in favour of rapid iteration and "working software over comprehensive documentation," it's telling you that the core engineering problems have been solved. This is exactly why software engineering struggles to establish itself as a legitimate engineering discipline. We were trying to professionalize a field just as its fundamental engineering challenges were disappearing. The infrastructure was already built and waiting in the cloud. Design patterns were baked into frameworks. Deployment was increasingly automated. Unless you worked at one of the few companies still tackling basic computing problems, genuine engineering work had largely vanished. Agile just made this official. It acknowledged that you could build most systems through iterative assembly rather than systematic engineering. The methodology wasn't improving our practice; it was adapting to a world where the engineering had already been done by others. The Dawn of Intelligence Technology I was one of the people fighting to revive software engineering as a profession. I believed we could bring back engineering discipline to software development. But sitting here now, I think I was fighting the wrong battle. What I see today isn't the revival of software engineering, but something bigger: the end of the information technology era and the start of the intelligence technology era. AI isn't just another tech advance—it's a fundamental paradigm shift like going from mechanical to electrical engineering, or from electrical to information technology. Unlike the commoditized world of cloud computing and agile development, AI systems need real engineering thinking. 
They force us to understand complex systems, manage uncertainty, design for safety, and deal with behaviours that emerge in ways we can't always predict—behaviours that can have serious consequences for society. The stakes are enormous. AI systems are being deployed in critical areas—healthcare, transportation, finance, criminal justice—often without the engineering oversight we'd require for any other system with similar potential for harm. We're seeing biased algorithms, unreliable predictions, systems that fail in unexpected ways, and growing public distrust of automated decisions. Digital Engineering: The Next Generation of Software Engineering This is where digital engineering becomes essential. Digital engineering is the systematic application of engineering principles across evolving digital paradigms—from information technology to intelligence technology and whatever comes next. As engineers, we need to establish digital engineering as a proper discipline with clear practice standards, professional accountability, and systematic approaches to managing risk. This means developing methods for analysing requirements in uncertain environments, design patterns for safe AI systems, testing frameworks that can handle non-deterministic behaviours, and maintenance practices for systems that keep learning and evolving. The death of software engineering isn't a failure—it's the natural end of information technology's life-cycle. But this ending marks the beginning of something far more significant: digital engineering as the discipline that adapts engineering rigour to whatever digital paradigm emerges: AI systems, cybersecurity, machine learning, compute and inference engines, and even existing cloud technologies. We stand at the threshold of the AI era. The question is whether we'll build these systems with proper engineering discipline from the start, or repeat the same mistakes that left software engineering struggling for legitimacy. Digital engineering gives us the framework to get it right this time—if we choose to use it. About the Author: Raimund Laqua, P.Eng, is a professional computer engineer with over 30 years of expertise in high-risk and regulated industries, specializing in lean methodologies and operational compliance. He is the founder of Lean Compliance and co-founder of ProfessionalEngineers.AI , organizations dedicated to advancing engineering excellence. As a Professional Digital/AI Engineering Advocate, Raimund champions proper licensure across the entire spectrum of digital engineering disciplines. He actively contributes to the profession through his leadership roles, serving as AI Committee Chair for Engineers for the Profession (E4P) and as a member of the Ontario Society of Professional Engineers (OSPE) working group on AI in Engineering, where he helps shape the future of professional engineering practice in the digital domain.

  • Why AI Isn't Ready for Commoditization

    Technology Life-cycle

    As I observe the current state of Artificial Intelligence (AI) and the rush surrounding its deployment, I find myself reflecting on a pattern that has repeated throughout technological history, a life-cycle we ignore at our peril. Understanding this cycle will be crucial as we navigate the turbulent waters of machine intelligence in the coming decades.

    Technology Birth: The Age of Polymaths

    At the start of something new, technology emerges from the minds of individuals who must be both theorists and builders out of necessity. During this nascent phase, technology represents the promise of future benefits, a tantalizing glimpse of what could be possible if we can unlock nature's secrets. But here's the thing: these pioneers cannot simply theorize; they must also engineer the very methods and means to test their theories and conduct their experiments.

    I think of figures like Alan Turing, who didn't just conceive of computation as a mathematical abstraction but had to grapple with the practical challenges of building machines that could embody his ideas. Robert Oppenheimer, who couldn't rely on existing infrastructure but had to orchestrate the creation of entirely new engineering capabilities to transform theoretical physics into reality. Niels Bohr, whose quantum insights required him to work hand-in-hand with experimentalists and instrument makers to probe the atomic realm.

    These pioneers are remembered not as narrow specialists but as polymaths who embodied both scientific curiosity and engineering necessity in a single person. They had no choice: the specialized infrastructure we take for granted today simply didn't exist. They had to build their own tools, design their own experiments, and create their own methods for testing the boundaries of the possible.

    At this stage, the technology exists primarily in the realm of possibility, and that possibility can only be explored through ingenious combinations of theory and practice. The science dominates the vision, but the engineering dominates the day-to-day reality of actually making progress. We explore uncharted territory where both the map and the vehicle must be invented simultaneously.

    Technology Maturation: The Great Separation

    This pioneering phase, however, cannot sustain itself indefinitely. In the evolution of any transformative technology, science and engineering eventually part ways to serve its continued development. This separation marks the beginning of true maturation, when technology transitions from promise to realizing that promise.

    During this critical phase, we see the emergence of engineering as a distinct discipline with its own methodologies, constraints, and objectives. While scientists continue to push the boundaries of what's theoretically possible, engineers focus on the art of the practical: How do we make this work reliably? How do we scale it? How do we manage its complexity and cost?

    This separation isn't arbitrary; it's a natural evolution that allows each discipline to flourish. This is where engineering truly comes into its own. The theoretical insights gained during the science-dominated birth phase become the raw materials for solving real-world problems. We see the development of standardized practices, specialized tools, and systematic approaches to implementation. The technology gains structure, reliability, and predictability.

    Technology Industrialization: The Commodity Phase

    The maturation phase gradually gives way to something entirely different. Mature technologies enter their final stage: widespread adoption through scaling and refinement. At this point, technology becomes a utility and commodity, much like electricity or telecommunications today. The focus shifts from fundamental innovation to assembly, component refinement, and optimization.

    This transformation has its purpose. The cutting-edge science becomes background knowledge. The specialized engineering practices become standardized procedures. The technology that once required polymaths, at once scientists and engineers, now operates through well-understood processes and established infrastructure.

    This is precisely where I believe information technology finds itself now. The days of inventing new information technology paradigms have largely passed. Instead, we are in an era of integration, standardization, and incremental improvement. Agile is a perfect example: we care less about engineering the technology stack than about using it. The science is well established, the engineering principles are codified, and the primary challenge becomes efficient deployment at scale.

    History Repeating

    As I look at the current state of artificial intelligence, I see clear parallels to this historical pattern. We are witnessing the emergence of our modern equivalents of Bohr, Oppenheimer, and Turing: visionaries who are advancing the science of intelligence while grappling with its practical implications. The field remains dominated by scientific discovery, with engineering practices still in their infancy.

    However, I am already seeing early signs of the great separation. As AI moves beyond pure research, distinct engineering domains are starting to crystallize. We are beginning to see specialized practices emerge around model deployment, safety engineering, human-AI interaction design, and scalable training infrastructure. This mirrors exactly what happened with previous transformative technologies. The science-engineering split is under way, though many haven't recognized it yet.

    The Critical Mistake We Must Avoid

    Here is where I believe we are making a fundamental error. Too many organizations and leaders are treating AI as if it were already in the commodity phase, ready for immediate, large-scale adoption with minimal specialized expertise. This represents a dangerous misunderstanding of where we actually stand in the technology life-cycle.

    This misconception has real consequences. AI should not be rushed into the utility and commodity stage while skipping the crucial engineering maturation phase. Just as we wouldn't have expected the early pioneers of computing to immediately build data centres, we shouldn't expect AI to integrate seamlessly into every business process before robust engineering practices exist.

    The consequences of this premature commoditization are already becoming apparent: systems deployed without adequate safety measures, unrealistic expectations about reliability and performance, and a general underestimation of the specialized knowledge required to implement AI effectively.

    Purpose in the Process

    As I think about the path ahead, I am convinced that respecting this technological life-cycle will be essential to realizing AI's full potential. We must allow the engineering phase to unfold naturally, developing the specialized practices and institutional knowledge necessary for responsible deployment. This requires a fundamental shift in expectations: accepting that we are still in the early stages of a much longer journey. The scientists continue their essential work of expanding the boundaries of what's possible, while a new generation of AI engineers is already emerging to bridge the gap between laboratory breakthroughs and real-world applications.

    The technology life-cycle teaches us that shortcuts are illusions. Each phase serves a purpose, and attempting to bypass any stage risks undermining the entire enterprise. As we stand at this critical juncture in the development of artificial intelligence, I believe our patience and respect for this natural progression will determine whether AI becomes a transformative force for good or another cautionary tale of technological hubris.

    The future of AI, and perhaps the future of human progress itself, depends on our wisdom to let this life-cycle unfold as it should, rather than as we wish it would.

    About the Author:

    Raimund Laqua, P.Eng, is a professional computer engineer with over 30 years of expertise in high-risk and regulated industries, specializing in lean methodologies and operational compliance. He is the founder of Lean Compliance and co-founder of ProfessionalEngineers.AI, organizations dedicated to advancing engineering excellence. As a Professional Digital/AI Engineering Advocate, Raimund champions proper licensure across the entire spectrum of digital engineering disciplines. He actively contributes to the profession through his leadership roles, serving as AI Committee Chair for Engineers for the Profession (E4P) and as a member of the Ontario Society of Professional Engineers (OSPE) working group on AI in Engineering, where he helps shape the future of professional engineering practice in the digital domain.
