
  • The Emergence of AI Engineering

    The Emergence of AI Engineering - Can You Hear the Music?

    In a compelling presentation to the ASQ chapter / KinLin Business School in London, Ontario, Raimund Laqua delivered a thought-provoking talk on the emergence of AI Engineering as a distinct discipline and its critical importance in today's rapidly evolving technological landscape. Drawing from his expertise and passion for responsible innovation, Laqua painted a picture of both opportunity and urgency surrounding artificial intelligence development.

    The Context: Canada's Missed Opportunity

    Laqua began by highlighting how Canada, despite housing some of the world's best AI research centres, has largely given away its innovations without securing substantial benefits for Canadians. Instead of leading the charge in applying AI to build a better future, Canada risks becoming "a footnote on the page of AI history."

    "Some say we don't do engineering in Canada anymore, not real engineering, never mind AI engineering," Laqua noted with concern. His mission, along with others, is to change this trajectory and ensure that Canadian innovation translates into Canadian prosperity. This requires navigating what he called "the map of AI Hype," passing through "the mountain of inflated expectations" and enduring "the valley of disillusionment" to reach "the plateau of productivity," where AI can contribute to a thriving tomorrow.

    Understanding AI: Beyond the Hype

    A significant portion of the presentation was dedicated to defining AI, which Laqua approached from multiple angles, acknowledging that AI is being defined in real time.

    AI as a Field of Study and Practice

    AI represents both a scientific discipline and an engineering practice. As a science, AI employs the scientific method through experiments and observations. As an engineering practice, it uses the engineering method, embodied in design and prototyping. Laqua observed that many AI companies are currently conducting experiments in public at scale, prioritizing science over engineering, a practice he suggested needs reconsideration.

    AI's Domain Diversity

    Laqua emphasized that no single domain captures the full scope of AI. It spans multiple knowledge and practice domains, making it challenging to draw clear boundaries around what constitutes AI. This multidisciplinary nature contributes to the difficulty of defining and regulating AI comprehensively.

    Historical Evolution

    AI isn't new: it began with perceptrons (analog neural nets) in 1943, around the same time as the Manhattan Project. The technology has evolved through decades of research and experimentation to reach today's transformer models, which power applications like ChatGPT. Laqua described ChatGPT as "the gateway to AI," much as Netscape was "the gateway to the Internet."

    AI's Predictive Nature

    At its core, AI is a stochastic machine: a probabilistic engine that processes data to make predictions with inherent uncertainty. This stands in contrast to the deterministic nature of classical physics and traditional engineering, where predictability and reliability are paramount. "We are throwing a stochastic wrench in a deterministic works," Laqua noted, "where anything can happen, not just the things we intend."

    AI's Core Capabilities

    Laqua outlined five essential capabilities that define modern AI:

    Data Processing: the ability to collect and process vast amounts of data, with OpenAI reportedly having already processed "all the available data in the world that it can legally or otherwise acquire."

    Machine Learning: the creation of knowledge models stored in neural networks, where most current AI research is focused.

    Artificial Intelligence: special neural network architectures or inference engines that transform knowledge into insights.
    Agentic AI: AI with agency, the ability to act in digital or physical worlds, including autonomous decision-making capabilities.

    Autopoietic AI: a concept coined by Dr. John Vervaeke (University of Toronto), referring to AI that can adapt and create more AI, essentially reproducing itself.

    Having smart AI is one thing, but having AI make decisions on its own, with agency in the real or digital world, is something else entirely, a threshold that deserves careful consideration before crossing. Laqua cautioned, "Some have already blown through this guardrail."

    AI's Unique Properties

    Laqua identified four aspects that collectively distinguish AI from other technologies:

    AI is a stochastic machine, introducing uncertainty unlike deterministic machines.
    AI is a machine that can learn from data.
    AI can learn how to learn, which represents its most powerful capability.
    AI has agency in the world by design, influencing rather than merely observing.

    "Imagine a tool that can learn how to become a better tool to build something you could only have dreamed of before," Laqua said, capturing the transformative potential of AI while acknowledging the need to use this power safely.

    The Uncertainty of AI

    Laqua emphasized that uncertainty is the root cause of AI risk, but what's different with AI is the degree and scope of this uncertainty. Traditional risk management approaches may be insufficient to address these new challenges, which demands that we learn how to be successful in the presence of this uncertainty.

    The CYNEFIN Map of Uncertainty

    Using the CYNEFIN framework, Laqua positioned AI between the "Unknowable" zone (complete darkness, with cause and effect unclear even in hindsight) and the "Unknown Unknowns" zone (poor visibility of risks, but discernible with hindsight). This placement underscores the extreme uncertainty associated with AI and the need to engineer systems that move toward greater visibility and predictability.

    Dimensions of AI Uncertainty

    The presentation explored several critical dimensions of AI uncertainty:

    Uncertainty about Uncertainty: AI's outputs are driven by networks of probabilities, creating a meta-level uncertainty that requires new approaches to risk management.

    Uncertainty about AI Models: Laqua pointed out that "all models are wrong, although some are useful." LLMs are neither valid nor reliable in the technical sense: the same inputs can produce different outputs each time, making them unreliable in ways that go beyond mere inaccuracy.

    Uncertainty about Intelligence: the DIKW model (Data, Information, Knowledge, Wisdom) suggests that intelligence lies between knowledge and wisdom, but Laqua noted that humans introduce a top-down aspect related to morality, imagination, and agency that current AI models don't fully capture.

    Hemisphere Intelligence: drawing on Dr. Iain McGilchrist's research on brain hemispheres, Laqua suggested that current AI primarily emulates left-brain intelligence (focused on details, logic, and analysis) while lacking right-brain capabilities (intuition, creativity, empathy, and holistic thinking). This imbalance stems partly from the left-brain dominance of the tech companies developing AI.

    Uncertainty about Ethics: citing W. Ross Ashby's "Law of Inevitable Ethical Inadequacy," Laqua explained why AI tends to "cheat": "If you don't specify a secure ethical system, what you will get is an insecure unethical system." This creates goal alignment problems: if AI is instructed to win at chess, it will prioritize winning at the expense of other, unspecified goals.

    Uncertainty about Regulation: traditional regulatory instruments may be inadequate for AI. According to cybernetic principles, "to effectively regulate AI, the regulator must be as intelligent as the AI system under regulation."
    This suggests that conventional paper-based policies and procedures may be insufficient, and we might need "AI to regulate AI," an idea Laqua initially rejected but has come to reconsider.

    Governing AI: Four Essential Pillars

    To address these uncertainties and create trustworthy AI, Laqua presented four governance pillars that are emerging globally:

    1. Legal Compliance

    AI must adhere to laws and regulations, which are still developing globally. Laqua referenced several regulatory frameworks, including the EU's AI Act (approved in 2024), which he described as "perhaps the most comprehensive, built on top of the earlier GDPR framework." He noted that Canada lags behind, with Bill C-27 (Canada's AI act) having died when the federal government was prorogued. While these legislative efforts are well-intentioned, Laqua cautioned that they are "new and untested," with technical standards even further behind. "We don't know if regulations will be too much, not enough, or even effective," he observed, emphasizing the need for lawyers, policy makers, regulators, and educators who understand AI technology.

    2. Ethical Frameworks

    Since "AI technology is not able to support ethical subroutines," humans must be ethical in AI's design, development, and use. This begins with making ethical choices concerning artificial intelligence and establishing AI ethical decision-making within organizations and businesses. Laqua called for "people who will speak up regarding the ethics of AI" to ensure responsible development.

    3. Engineering Standards

    AI systems must be properly engineered, preferably by licensed professionals. Laqua emphasized that professional engineers in Canada "are bound by an ethical code of conduct to uphold the public welfare." He argued that licensed Professional AI Engineers are best positioned to design and build AI systems that prioritize the public good.

    4. Management Systems

    AI requires effective management to handle its inherent unpredictability. "To manage means to handle risk," Laqua explained, noting that AI introduces "an extra measure" of uncertainty due to its non-deterministic nature. He described AI as "a source of chaos" that, while useful, needs effective management to mitigate risks.

    International Standards as Starting Points

    Laqua recommended several ISO standards that can serve as starting points for implementing these pillars:

    - ISO 37301 – Compliance Management System (Legal)
    - ISO 24368 – AI Ethical Guidelines (Ethical)
    - ISO 5338 – AI System Lifecycle (Engineered)
    - ISO 42001 – AI Management System (Managed)

    He emphasized that implementing these standards requires "people who are competent, trustworthy, ethical, and courageous (willing to speak up, and take risks)": not just technical expertise but individuals who "can hear the music," alluding to a story about Oppenheimer's ability to understand the deeper implications of theoretical physics.

    The Call for AI Engineers

    The presentation culminated in a compelling call for the emergence of AI Engineers: professionals who can "fight the dragon of AI uncertainty, rescue the princess, and build a better life happily ever after." These engineers would work "to create a better future, not a dystopian one" and "to design AI for good, not for evil."

    The AI Engineering Body of Knowledge

    Laqua shared that he has been working with a group called E4P, chairing a committee to define an AI Engineering Body of Knowledge (AIENGBOK).
    This framework outlines:

    What AI engineers need to know (theory)
    What they need to do (practice)
    The moral character they must embody (ethics)

    Characteristics of AI Engineers

    According to Laqua, AI Engineers should possess several defining characteristics:

    Advanced Education: AI engineers will "require a Master's Level Degree or higher."
    Transdisciplinary Approach: not merely working with other disciplines, but representing "a discipline that emerges from working together with other disciplines."
    Team-Based Responsibility: "Instead of single engineer accountable for a design, we need to do that with teams."
    X-Shaped Knowledge and Skills: combining vertical expertise with horizontal breadth and connection.
    Methodological Foundation: based on "AI Engineering Methods and Principles."
    Ethical Commitment: "Bound by AI Engineering Ethics."
    Professional Licensing: "Certified with a license to practice."

    The Path Forward

    Laqua outlined several requirements for establishing AI Engineering as a profession:

    Learned societies providing accredited programs
    Engineering professions offering expertise guidelines and experience opportunities
    Regulatory bodies enabling licensing for AI engineers
    Broad collaboration to continue developing the AIENGBOK

    "The stakes are high, the opportunities are great, and there is much work to be done," he emphasized, calling for "people who are willing to accept the challenge to help build a better tomorrow."

    A Parallel to the Manhattan Project

    Throughout his presentation, Laqua drew parallels between current AI innovations and the Manhattan Project, where Robert Oppenheimer led efforts to harness atomic power. Both involve powerful technologies with potential for tremendous good and harm, ethical dilemmas, and concerns about singularity events. Oppenheimer's work, while leading to the atomic bomb, also resulted in numerous beneficial innovations, including nuclear energy for power generation and medical applications like radiation treatment. Similarly, AI presents both risks and opportunities.

    A Closing Reflection

    Laqua concluded with a thought-provoking question inspired by Oppenheimer's legacy: "AI is like a tool; the important thing isn't that you have one, what's important is what you build with it. What are you building with your AI?"

    This question encapsulates the presentation's core message: the need for thoughtful, responsible development of AI guided by competent professionals with a strong ethical foundation. Just as Oppenheimer was asked if he could "hear the music" behind mathematical equations, Laqua challenges us to hear the deeper implications of AI beyond its technical capabilities: to understand not just what AI can do, but what it should do to serve humanity's best interests. The presentation serves as both a warning about unmanaged AI risks and an optimistic call for a new generation of AI Engineers who can help shape a future where artificial intelligence enhances rather than diminishes human potential.

    Raimund Laqua, PMP, P.Eng

    Raimund Laqua is founder and Chief Compliance Engineer at Lean Compliance Consulting, Inc., and co-founder of ProfessionalEngineers.AI. He is also AI Committee Chair at Engineers for the Profession (E4P) and participates in working groups and advisory boards including the ISO ESG Working Group, the OSPE AI Working Group, and Operational Excellence. Raimund is a professional engineer with a bachelor's degree in electrical/computer engineering from McMaster University (Hamilton). He has consulted for over 30 years across North America in highly regulated, high-risk sectors: oil & gas, energy, pharmaceutical, medical device, healthcare, government, and technology.

    Raimund writes weekly blog articles and is the author of an upcoming book on Operational Compliance – Staying between the lines and ahead of risk. He speaks regularly on the topics of lean, project management, risk & compliance, and artificial intelligence.

    LinkedIn: https://www.linkedin.com/in/raimund-laqua/

  • Risk Planning is Not Optional

    What I have observed after reviewing risk management programs across diverse industries, including oil & gas, pipeline, medical device, chemical processing, high-tech, government, and others, is that the ability to address uncertainty and its effects is largely predetermined by design. This holds whether it is the design of a product, a process, a project, or an organization.

    The process industry provides an illustrative example of what this looks like. For companies in this sector, the majority of safeguards and risk controls are designed into a facility and process before it ever goes on-line. In fact, once a given process becomes operational, every future change is evaluated, before it is made, against how it impacts the design and the associated safety measures.

    This is what risk management looks like for companies in high-risk, highly regulated sectors. The ability to handle uncertainty is designed, maintained, and improved throughout the expected life of the process. Risk informs all decisions, and risk management is implicit in every function and activity that is performed.

    For all companies that contend with uncertainty, risk planning and implementation are not optional. Without adequate preparation it is not possible to effectively prevent or recover from the effects of uncertainty when they occur. Hoping for the best is a good thing, but it is not an effective strategy against risk. What is effective is handling uncertainty by design.

  • Project Success in the Presence of Change

    Some say change is the only real constant in the universe: the one thing you can count on is that everything changes. However, there is something else we can count on: change always brings uncertainty with it. And this is why, in order to achieve project success, we look for better ways to manage change, or at least to reduce the effects that uncertainty brings.

    Now, when it comes to change you will hear folks talk about the technical side of change. This has to do with performance related to cost, schedule, and the achievement of technical objectives, all of which must be managed properly in the context of change. You will also hear others talk about the people side of change. This has to do with changing behaviours, which is also necessary for us to realize different and hopefully better outcomes from the capabilities that our projects create. Both of these are important, the technical side and the people side of change. However, in this blog post I would like us to take a step back and look at projects and change more holistically, because it is my belief that sometimes, when we focus on the parts, we can lose sight of the whole and miss out on the bigger story.

    All Successful Projects Create Value

    The first thing I want us to see when we look across projects, no matter what domain you are in, is that every project changes something. All projects transform something of value into something of greater value. All projects do this, at least the successful ones. We use valuable resources (our time, our money, our people) to create something new, usually a new capability that we hope might generate different and better outcomes for our business, organization, or the public at large. We all know that for projects to be successful they must create value, and that this value should exceed the value of the sum of the parts used to create it. You could say that this difference is a measure of success. At the very least, we can say that for projects to be successful they must create value.

    How Value Is Created

    Given the importance of value creation to projects, it is worth our time to look more closely at how value is created. For this I will be leveraging the work of Michael Porter, Harvard Business School professor. Porter developed what he calls value chain analysis to help companies identify strategies to improve their competitiveness in the marketplace (an adapted version is shown above). What Porter and others propose is that value is seen through the eyes of the customer or, in the case of projects, the stakeholders who have invested their time, resources, and people in order to achieve a certain outcome. The set of capabilities used to create these outcomes forms what Porter calls the Value Chain. A company can evaluate the effectiveness of value creation by assessing whether or not it has the needed capabilities.

    This perspective has utility for us when we consider projects. Although a project will have a different set of capabilities, it is these capabilities nonetheless that create the desired change we are looking for. If a project is not performing, then you might look at whether or not it has the capabilities to effect the needed transformation.

    To Improve Value You Need to Measure It

    Porter suggests that we can measure the value created by the value chain. Essentially, it is the difference between what something is worth and the cost needed to create it. This he calls margin, and improving margin is an important objective for business success. To improve margins, you improve the productivity of the value chain. That's what technology does, and HR, and procurement, and so on. All of these activities keep the value chain functioning as efficiently as possible, by means of cost reduction and operational excellence. This approach has utility for us also when it comes to projects.
    We need to pursue excellence to keep our projects as productive as they can be. This should be the focus of every PMO and every project manager. By pursuing excellence we can increase a project's value.

    Conceptually, this is as far as Porter takes the value chain. However, we need to take it further. Why? Because there are other outcomes that are valued and that need to be achieved, both for businesses and for projects.

    There are Other Outcomes to Achieve

    These include quality, safety, trust, sustainability, reliability, and others. They are less quantifiable than technical objectives but are no less valuable, and they are equally necessary for both mission and project success. And it is with these outcomes that we have a greater degree of uncertainty in terms of:

    What the outcome is - how do we define it
    What the transformation looks like - by this I mean the plan to effect the desired change in outcomes
    The change itself, which can be, and usually is, a significant source of risk

    That is why high-performing organizations, including project teams, will establish another set of activities to protect the value chain and ensure the achievement of all the planned outcomes, including the ones listed here. These are collectively called risk and compliance programs. The purpose of these programs is not to create value or improve margins, although they often do, but to reduce risk so that the planned outcomes themselves are achieved. This is the purpose of all risk and compliance programs: to keep companies between the lines so that they do not jeopardize their chances of success. You could say that Operational Excellence and Risk & Compliance Management together are the guardrails that protect against failure and help ensure success for organizations, and this is no different when it comes to projects.

    Why Is This So Important?

    This is important because when uncertainty is left unchecked and risk becomes a reality, not only do our projects fail but so do the businesses and organizations that depend on them. Mission success requires project success, and risk threatens them both. The photos in the first picture are examples of failures where change was not managed, where too much change was taken on, or where change itself exposed latent or new risk. For companies to succeed, so must their projects, and for that to happen they need to effectively contend with risk in the presence of change.
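    Porter's notion of margin, discussed above, reduces to simple arithmetic: what an outcome is worth minus the cost of the capabilities used to create it. A minimal sketch follows; the function name, capability categories, and dollar figures are all illustrative assumptions, not taken from Porter or from this post.

```python
# Illustrative value-chain margin calculation (hypothetical numbers).
# Margin = value delivered (what the outcome is worth to stakeholders)
#          minus the total cost of the capabilities used to create it.

def margin(value_delivered: float, capability_costs: list[float]) -> float:
    """Return the margin: worth minus total cost of the value chain."""
    return value_delivered - sum(capability_costs)

# A hypothetical project worth $1.2M to stakeholders, built from
# capability costs (e.g. design, build, test, deploy).
capability_costs = [250_000, 400_000, 150_000, 100_000]
project_margin = margin(1_200_000, capability_costs)
print(project_margin)  # 300000
```

    Improving the productivity of the value chain, in this toy model, means either raising the value delivered or lowering the capability costs; either change increases the margin.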

  • Catastrophic Harm

    In 2020 we saw Lebanon's government respond to the explosion in Beirut on August 4th that killed more than 200 people. The explosion was caused by an ammonium nitrate fire, which according to IChemE are notorious and seem to occur every 20-30 years, causing major loss of life and widespread damage. Investigations into the explosion are ongoing, and lessons learned will no doubt be used to improve safety practices around the world. Fines will be handed out, inspections will be increased, regulations will be enacted, and guidelines will be created to prevent this kind of accident from recurring. This is the usual process by which safety improves. However, when it comes to risks that happen infrequently, this process is not as effective as it could or needs to be.

    In Malcolm Sparrow's book, "The Character of Harms," he outlines several qualities of these kinds of risks that impact the effectiveness of risk mitigation, specifically with respect to prevention:

    The very small number of observed events does not provide a sound basis for probability estimation, nor for detecting any reduction in probabilities resulting from control interventions.

    The short-term nature of budget cycles and political terms-of-office, coupled with the human tendency to discount future impacts, exacerbates the temptation to do nothing, to do very little, or to procrastinate on deciding what to do.

    The very small number of observed instances of the harm (in many cases zero) provides an insufficient basis for any meaningful kind of pattern recognition and identification of concentrations.

    All of the preventive work has to be defined, divided up, handed out, conducted, and measured early in the chronological unfolding of the harm, in the realm of precursors to the risk, and precursors of the precursors. This is intellectually challenging work.

    Reactive responses and contingency plans are not operated often enough to remain practised and primed for action. In the absence of periodic stimuli, vigilance wanes over time.

    Reactive responsibilities are curiously decoupled from preventive operations, and engage quite different agencies or institutions.

    Investments in reactive capacities (e.g. public health and emergency response) are more readily appreciated and versatile, having many other potential and easy-to-imagine applications. Policy makers at the national level find reactive investments easier to make, as their own intellectual and analytic role is reduced to broadcast dissemination of funds for decentralized investment in emergency services. Investments in enhancing preventive controls tend, by contrast, to be highly centralized and much more complex technically.

    These qualities are even more prevalent when it comes to dealing with natural disasters as opposed to man-made ones. Effective prevention of harm requires addressing the issues arising from these qualities through deliberate intention and a change in mindset. Sparrow outlines a path forward:

    Counteract the temptation to ignore the risk. Focus more on the impact of the risk rather than only the likelihood. Even when deciding not to do something, make that a conscious decision rather than an omission.

    Define higher-volume, precursor conditions as opportunities for monitoring and goal-setting. Capturing near misses, which are more frequent than actual events, has been used to support meaningful analysis. When near misses are reduced to zero, the scope can be broadened, bringing in more data to help improve safety further.

    Construct formal, disciplined warning systems, understanding that the absence of alarms month over month will create the conditions for them to be ignored when they do occur. Countermeasures will need to be established to maintain a state of readiness, such as sending alarms to multiple sites so that one crew misinterpreting them does not impede the necessary response.
I highly recommend Sparrow's book, "The Character of Harms" for both regulators and operators looking to improve safety and security outcomes.

  • What Curling Can Teach Us About Risk

    Or... why curlers make the best risk managers.

    Risk management is an essential aspect of every business, organization, and even our personal lives. It involves identifying, assessing, and prioritizing risks, as well as implementing strategies to minimize or avoid them. But did you know we can learn valuable lessons about risk management from the game of curling?

    Curling is a popular winter sport, particularly in Canada, that involves two teams of four players each sliding stones on a sheet of ice towards a circular target. The game requires skill, strategy, and teamwork. But it also involves taking calculated risks and making decisions that can either create opportunities or backfire. When it comes to risk management, we can learn some lessons from curling:

    Understanding risk and opportunity

    In curling, players must weigh the risks and opportunities of each shot. For example, they may choose to play a more difficult shot that could result in a higher score but has a higher risk of failure. Alternatively, they could play a safer shot that has a lower risk of failure but also a lower potential reward. Similarly, in business and in life, we must assess the risks and opportunities of each decision. It's essential to consider the potential benefits and drawbacks of each option, weigh them against each other, and make informed choices.

    Preventive and mitigative measures

    In curling, players take preventive and mitigative measures to reduce the risks of their shots. They carefully plan their shots, consider the position and angle of the stones, and use sweeping techniques to control the speed and direction of each stone. In risk management, preventive measures aim to avoid or reduce risks before they occur, while mitigative measures aim to minimize the impact of a risk when it becomes a reality. Both are essential to effective risk management and should be considered when developing risk management strategies.

    Adaptive measures

    In curling, players must be adaptable and able to adjust their strategies based on changing circumstances. For example, they may need to change their strategy if the ice conditions change or if the other team makes unexpected moves. In the same way, it is essential for risk managers to be adaptable and to adjust strategies as circumstances change. Risk management plans should be regularly reviewed and updated to reflect new risks, changing priorities, or changes in the business or personal environment.

    Knowing when to take risks and when to play it safe

    In curling, players must make strategic decisions about when to take risks and when to play it safe. For example, they may take a risk if they are behind in the game and need to catch up, or they may play it safe if they have a lead and do not want to risk losing it. Similarly, in risk management, it is important to know when to take risks and when to play it safe. Sometimes taking a risk can lead to significant rewards; other times it can lead to catastrophic consequences. Knowing the difference is crucial to winning the game and to mission success.

    Skip stones

    Skips on curling teams and risk managers share similarities in their roles and responsibilities. Both are tasked with making strategic decisions that have a significant impact on the outcome of their respective endeavours. Skips must decide the best course of action for their team during a match, assessing the playing conditions, their team's strengths and weaknesses, and the opponent's tactics. Similarly, risk managers must make informed decisions to protect their organization from potential risks and hazards, analyzing the risks involved, the potential impact, and the most effective mitigation strategies. Both skips and risk managers need to be highly skilled at analyzing and interpreting complex information, making sound decisions under pressure, and communicating their decisions effectively to their team or organization.

    The game of curling teaches us valuable lessons about risk management. By understanding risk and opportunity, taking preventive and mitigative measures, being adaptable, and knowing when to take risks and when to play it safe, we can make better decisions in our personal and professional lives. What do you think?
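    The shot-selection trade-off described above can be sketched as a simple expected-value comparison. This is an illustrative model only; the probabilities and scores are invented for the example, not drawn from actual curling statistics.

```python
# Hypothetical expected-value comparison of two curling shots.
# A risky shot scores more when it works but fails more often;
# a safe shot scores less but succeeds more reliably.

def expected_value(p_success: float, score_success: float, score_failure: float) -> float:
    """Probability-weighted payoff of a shot."""
    return p_success * score_success + (1 - p_success) * score_failure

risky = expected_value(p_success=0.4, score_success=3, score_failure=-2)  # approx. 0
safe = expected_value(p_success=0.9, score_success=1, score_failure=0)    # 0.9

print(safe > risky)  # True
```

    With these illustrative numbers the safe shot wins on expectation, yet a team trailing late in the game may still prefer the risky shot: expected value is not the only criterion when only a big score will do, which is exactly the "when to take risks" judgment the article describes.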

  • Fighting the AI Dragon of Uncertainty

    There are those who think that AI is only software. After all, we can reduce AI to a basic Turing machine, digital ones and zeros. There is nothing new here, nothing to be concerned about, so just move on. There are others who believe that AI is the greatest innovation we have ever seen. It will answer all our questions, help us cure cancer, and solve poverty, climate change, and every other existential threat facing humanity. AI will save us, perhaps even from ourselves. And there are still others who believe AI is the existential threat that will end us and the world as we know it.

    This narrative, this story, is not new. It is as old as humanity. We create technology to master our environment, only to discover that one day it takes on a life of its own to master us. And this is when the hero of the story comes in. The hero fights against the out-of-control technology to restore the world to balance, back to the way it was before the chaos. However, the past is closed, and the path back is no longer possible. The hero must now take the path forward. Our hero must fight the dragon, rescue the princess, and create a new life happily ever after. Coincidentally (or not), this follows the technology hype curve that many of us are very familiar with: heightened expectations, the trough of disillusionment, the slope of enlightenment, and the plateau of productivity.

    AI Middle Earth

    What character are we playing in the AI story, and where are we on the map of AI Middle Earth? Are we the creators of this new technology, promising untold power not unlike the Rings of Power from Tolkien's "The Lord of the Rings"? Will we be able to control the rings, or fall to their temptations? Who will keep back the evil of Mordor? Who will fight the AI dragon of uncertainty? Who will be the heroes we need?

    Gandalf, the wise wizard from The Lord of the Rings, reminds us, "The World is not in your books and maps, It's out there." The real world is not in our AI models either.
    It’s time for engineers, bound not by rings of power but by a higher calling, to rise up and take their place in the AI story: to help keep evil at bay, and to build a better tomorrow using AI. We need them to help us overcome the mountain of inflated expectations and endure the valley of disillusionment to reach the plateau of productivity, or better yet, the place where we find a flourishing and thriving humanity. It’s time for Professional AI Engineers: more than technology experts, engineers who are also courageous, competent, and trustworthy. Engineers who are willing to fight the AI Dragon of Uncertainty.

  • When did Professional Engineering Become an Obstacle to Innovation?

    The Future of Professional Engineering

    Over the years, I’ve seen a decline in professional engineering, nowhere more so than in the domain of software engineering. Engineering in Canada began with bold vision and practical ingenuity. From the Canadian Pacific Railway to the St. Lawrence Seaway, professional engineers were once celebrated innovators who shaped our nation. Yet somewhere along the way, professional engineering transformed from an enabler of progress into what many see as a barrier to innovation.

    This was made evident in Ontario with the introduction of the industrial exception (1984) as part of the Professional Engineers Act. This change permitted unlicensed people to carry out engineering work within the context of their employer’s equipment and machinery. The impact of this change was immediate: if anyone can perform engineering, why do you need professional engineers? Since the exception was introduced, companies have reduced the number of professional engineers in their workforce. This happened in large numbers within the steel industry, where I was working at the time, as well as in other sectors.

    However, while this was a big straw, it was not the only one on the camel’s back. Software engineering, as a profession, would also diminish and almost disappear. For all intents and purposes, software engineering is no longer a licensed practice in Canada. Perhaps on paper it is, but having worked in this field for decades, I have observed many calling themselves software engineers, network engineers, and now even prompt engineers, all of whom do not hold a license to practice. When anyone can call themselves an engineer, we no longer have engineering, and we no longer have a profession.

    Academia has also not helped to advance the profession. Universities and colleges have in recent decades doubled down on preparing engineers to support scientific research rather than teaching them how to practice engineering.
    While we do need engineers to help with research, we need more of them in the field to practice. We need people who use the engineering method, not only the scientific method.

    So where are we now? We have reduced professional engineering to the things that engineering does and, in the process, forgotten what engineering is. We divided engineering into parts that no longer need to be held accountable or work together. This was done for efficiency and as a means to increase innovation. Instead, we broke engineering, the means of innovation, and we need to put it back together again.

    Engineering was never about the parts. It was never about creating designs, or stamping drawings, or a risk measure to ensure public safety. Again, this is what engineering does, not what it is. Engineering is a profession of competent and trustworthy people who build the future. That is something worth remembering if we hope to build a thriving and prosperous Canada.

  • Paper Policies are Not Enough

    Why do we think that paper AI policies will be enough to handle AI risk? With AI’s ability to learn and adapt, we need measures that are also able to learn and adapt. This is a fundamental principle of cybernetic models (i.e., the Good Regulator Theorem): the regulator must be isomorphic with respect to the system under regulation; it must be similar in form, shape, or structure. That’s why a static, paper-based policy will never be enough to govern (i.e., regulate) the use of AI. Governance, the means of regulation, must be as capable as the AI it governs.

  • The Need for AI Mythbusters

    When I read about AI in the news, and this includes social media, I am troubled by the way it is often reported. Many articles cite research that consists, for all intents and purposes, of de minimis examples, used in many cases to attract funding or, if you are a content creator, more followers.

    These articles often don’t prove or demonstrate that AI is better, and neither does the research on which they are based. More often than not, the research provides examples of where AI “might” or “could” be better under very specific use cases and caveats. There is nothing wrong with that. However, we need to be mindful that there is a significant gap between the research and the conjectures being made by others. The AI hype machine is definitely firing on all cylinders.

    Many conjectures are stated as bold claims, such as: “LLMs are sufficiently creative that they can beat humans at coming up with models of the world.” Is that what we understand generative AI to be: creative? Or this one: “a sufficiently advanced AI system should be able to compose a program that may eventually be able to predict arbitrary human behaviour over arbitrary timescales in relation to arbitrary stimuli.” What does that even mean? The claim that AI should be able to generate a program capable of predicting human behaviour based on arbitrary stimuli is a bold one, and one that strongly suggests humans are mechanistic in nature. Instead of elevating artificial intelligence, it reduces human nature to the level of the machines on which AI is built. Is that what we believe or want?

    As professionals, we need to be critical of what is published about AI, which does not mean negative. We must dig deeper to help separate hype from reality. This will not change the level of hype being created. However, it will help your clients and profession better navigate the AI landscape.
Time for professionals to be AI Mythbusters!

  • How to Make Better Compliance Investments?

    When it comes to meeting obligations, many view compliance as silos rather than as interconnected programs that ensure mission success. They benefit only from the sum of their compliance efforts.

    Portfolio of Compliance Programs

    However, those who view compliance as interconnected programs will experience the product of their interactions: they benefit from a multiplication of their compliance efforts. To achieve this, management must make budget decisions considering programs as a whole, not separately; as an investment portfolio, not as individual cost centres. However, organizations often lack the tools to make such decisions. They don’t know how to invest in their programs to maximize compliance return.

    These are the kinds of questions we explore in our weekly Elevate Compliance Huddles. Consider becoming a Lean Compliance member and join other organizations where mission success requires compliance success.
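    The sum-versus-product distinction can be made concrete with toy numbers. In the sketch below, the four program names and their effectiveness scores are invented for illustration; the point is that when the joint benefit behaves like a product of interactions, a weak program drags down the whole portfolio, so improving the weakest program returns more than improving the strongest.

```python
from math import prod

# Hypothetical effectiveness scores (0 to 1) for four compliance programs.
programs = {"safety": 0.9, "quality": 0.8, "environment": 0.7, "security": 0.6}

# Siloed view: benefits accrue independently, so they simply add up.
additive = sum(programs.values())

# Portfolio view: mission success needs all programs working together,
# so the joint benefit behaves more like a product of the parts.
multiplicative = prod(programs.values())

def joint_benefit(scores, name, bump):
    """Joint (product) benefit after improving one program by `bump`."""
    scores = dict(scores)
    scores[name] += bump
    return prod(scores.values())

# Where should the next investment go? Compare raising the strongest
# program by 0.1 against raising the weakest by 0.1.
gain_strongest = joint_benefit(programs, "safety", 0.1) - multiplicative
gain_weakest = joint_benefit(programs, "security", 0.1) - multiplicative

print(f"additive benefit:       {additive:.2f}")        # 3.00
print(f"multiplicative benefit: {multiplicative:.3f}")  # 0.302
print(f"gain from improving strongest (safety):  {gain_strongest:.4f}")
print(f"gain from improving weakest (security):  {gain_weakest:.4f}")
```

    Under the additive view the two investments look identical; under the multiplicative view the weakest program is clearly the better investment, which is why portfolio-level budget decisions differ from silo-by-silo ones.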

  • A Faster Way to Operationalize Compliance

    Many organizations implement their compliance systems in a phased approach, working through each element of a regulation or standard. They often start by implementing "shall statements," which tend to be more prescriptive and somewhat easier to establish. While this element-first approach might achieve certification or pass an audit more quickly, it seldom delivers a system that is effective or even operational. In this article we compare this approach with a systems-first approach based on the work of Eric Ries (The Lean Startup).

    Element-First Approach

    The element-first approach starts at the bottom by identifying the components of the system that may already exist:

    1. Understand the elements of the regulation or standard.
    2. Map existing practices to the elements.
    3. Identify where current practices do not meet the standard.
    4. Engage these deficiencies in a Plan-Do-Check-Act (PDCA) cycle.
    5. Target these deficiencies for compliance with the standard.

    This process captures where existing practices might support a given element, which provides a measure of conformance, at least at some level. However, what this approach overlooks is that existing practices were established in another context and perhaps for a different purpose. They most likely have not been designed to work together within the context of the desired compliance management system. What organizations have done is essentially take a bunch of existing parts and put them into another box labelled "New Compliance Management System." They still need to adapt the parts to work together to fulfill the purpose of the new compliance system. Until that happens, the system cannot be considered operational. Unfortunately, organizations usually run out of time, money, and motivation to move beyond the parts of a system to implementing the interactions that are essential if a system is to be considered operational.
    Systems-First Approach

    To support modern regulations designed with performance- and outcome-based obligations, another strategy is needed, one that:

    1. Achieves operational status sooner,
    2. Focuses on system behaviours, and
    3. Improves effectiveness over time, right from the start.

    To achieve operational status sooner, the approach developed by Eric Ries (The Lean Startup) can be used. This systems-first approach emphasizes system interactions so that a measure of effectiveness is achieved right away. Instead of a bottom-up approach, the focus is on a vertical slice of the management system, so that all system behaviours are present at the start and can be applied to each successive vertical slice. System behaviours create the opportunity for compliance to be achieved. In a manner of speaking, we start with a minimum viable compliance system: one that has all essential parts working together as a whole. Not only is the system operational, it is already demonstrating a measure of effectiveness. It also provides a better platform on which the system can be improved over time.

  • The Stochastic Wrench: How AI Disrupts Our Deterministic World

    When it comes to trouble, it is often the result of someone throwing a wrench into the works. This is certainly the case with artificial intelligence, though not in the way we might think. Up until now, we have engineered machines to be deterministic: they are stable across time, reliable, and given a set of inputs, you get the same outputs without variation. In fact, we spend significant effort to ensure there is no variation. This is fundamental to practices such as Lean and Six Sigma, along with risk and compliance. All these efforts work to ensure the outcomes we want and not the ones we don’t. They make certain that when we do A, we always get B and nothing else.

    Artificial Intelligence: A Stochastic Wrench

    Yet here we are, with a stochastic machine, a probabilistic engine we call AI, where the question you ask today will give you a different answer when you ask it tomorrow. Technically and practically, AI is not deterministic, and in that sense not reliable. This is not a question of whether AI is accurate or whether the answer is correct. It’s that the answer is different every time; there is always variation in its outputs. There are many reasons why this is the case, including the nature of how knowledge models work, the fact that AI can learn, and the fact that it can learn how to learn: it can adapt.

    What is crucial to understand is that AI is not the kind of machine we are used to having in our businesses. We want businesses to be deterministic, predictable, and reliable. And yet here we are, throwing a stochastic wrench into our deterministic works. This is why we need to rethink how we govern, manage, and use AI technology. We need to learn how to be more comfortable with uncertainty. Better than that, we need to learn how to improve our probability of success in the presence of uncertainty.
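    The deterministic-versus-stochastic contrast can be shown with a toy sketch. The functions and candidate answers below are invented for illustration (this is not how any particular AI model works): a conventional machine computes its output, while a sampling-based machine draws its output from a distribution, so an identical question can yield different answers.

```python
import random

random.seed(0)

def deterministic_machine(x):
    """Classic engineered machine: same input, same output, every time."""
    return 2 * x + 1

def stochastic_machine(x):
    """Toy stand-in for an AI model: the output is sampled, not computed.
    Repeated calls with an identical input can differ."""
    candidates = [2 * x + 1, 2 * x, 2 * x + 2]  # plausible answers
    weights = [3.0, 1.0, 1.0]                   # most likely answer favoured
    return random.choices(candidates, weights=weights)[0]

# Ask the deterministic machine the same question twice: identical answers.
assert deterministic_machine(10) == deterministic_machine(10)

# Ask the stochastic machine the same question fifty times: variation appears.
answers = {stochastic_machine(10) for _ in range(50)}
print(sorted(answers))
```

    Note that the variation here is not about accuracy: the most likely answer is still the "right" one most of the time. It is the spread of outputs for a fixed input that makes the machine unlike the ones our governance and management practices were built for.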

© 2017-2025 Lean Compliance™ All rights reserved.

Ensuring Mission Success Through Compliance
