
  • Project Success in the Presence of Change

    Some say change is the only real constant in the universe – the one thing you can count on is that everything changes. However, there is something else we can count on: change always brings uncertainty with it. And this is why, in order to achieve project success, we look for better ways to manage change, or at least to reduce the effects that uncertainty brings. When it comes to change, you will hear folks talk about the technical side of change. This has to do with performance related to cost, schedule, and the achievement of technical objectives, all of which must be managed properly in the context of change. You will also hear others talk about the people side of change. This has to do with changing behaviors, which is also necessary if we are to realize different and hopefully better outcomes from the capabilities that our projects create. Both of these are important: the technical side and the people side of change. However, in this blog post I would like us to take a step back and look at projects and change more holistically, because it is my belief that when we focus on the parts, we can lose sight of the whole and miss out on the bigger story.

    All Successful Projects Create Value

    The first thing I want us to see when we look across projects, no matter what domain you are in, is that every project changes something. All projects transform something of value into something of greater value. All projects do this, at least the successful ones. We use valuable resources (our time, our money, our people) to create something new, usually a new capability that we hope might generate different and better outcomes for our business, organization, or the public at large. We all know that for projects to be successful they must create value, and that this value should exceed the value of the sum of the parts used to create it. You could say that this difference is a measure of success.
    But at the very least we can say that for projects to be successful they must create value.

    How Value Is Created

    Given the importance that value creation has for projects, it is worth our time to look more closely at how value is created, and for this I will be leveraging the work of Michael Porter, Harvard Business School professor. Porter developed what he calls value chain analysis to help companies identify strategies to improve their competitiveness in the marketplace (an adapted version is shown above). What Porter and others propose is that value is seen through the eyes of the customer or, in the case of projects, the stakeholders who have invested their time, resources, and people in order to achieve a certain outcome. The set of capabilities used to create these outcomes forms what Porter calls the Value Chain. A company can evaluate the effectiveness of value creation by assessing whether or not it has the needed capabilities. This perspective has utility for us when we consider projects. Although a project will have a different set of capabilities, it is these capabilities nonetheless that create the desired change we are looking for. If a project is not performing, you might look at whether or not it has the capabilities to effect the needed transformation.

    To Improve Value You Need to Measure It

    Porter suggests that we can measure the value created by the value chain. Essentially, it is the difference between what something is worth and the cost needed to create it. This he calls margin, and improving margin is an important objective for business success. To improve margins, you improve the productivity of the value chain. That's what technology does, and HR, and procurement, and so on. All of these activities keep the value chain functioning as efficiently as possible by means of cost reduction and operational excellence. This approach has utility for us also when it comes to projects.
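    As a back-of-the-envelope illustration, Porter's margin can be expressed in a few lines of code. The function name and figures below are mine and purely hypothetical, a sketch rather than anything from Porter's framework:

```python
# A minimal sketch of "margin": the difference between what something is
# worth and the cost needed to create it. All numbers are hypothetical.

def margin(worth: float, cost: float) -> float:
    """Value created by the chain: worth minus cost."""
    return worth - cost

# A project delivering a capability worth $1.2M at a cost of $900k:
baseline = margin(1_200_000, 900_000)   # 300,000

# Improving value-chain productivity lowers cost and widens the margin:
improved = margin(1_200_000, 800_000)   # 400,000

print(baseline, improved)
```

    The same arithmetic underlies the pursuit of operational excellence: the worth of the capability is unchanged, but a more productive value chain creates more value from the same inputs.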
    We need to pursue excellence to keep our projects as productive as they can be. This should be the focus of every PMO and every project manager. By pursuing excellence, we can increase a project's value. Conceptually, this is as far as Porter takes the value chain. However, we need to take it further. Why? Because there are other outcomes that are valued and that need to be achieved, for businesses as well as for projects.

    There Are Other Outcomes to Achieve

    These include quality, safety, trust, sustainability, reliability, and others. They are less quantifiable than technical objectives but are no less valuable, and they are equally necessary for both mission and project success. It is with these outcomes that we have a greater degree of uncertainty in terms of:

    - What the outcome is: how do we define it?
    - What the transformation looks like: by this, I mean the plan to effect the desired change in outcomes.
    - The change itself, which can be, and usually is, a significant source of risk.

    That is why high-performing organizations, including project teams, will establish another set of activities to protect the value chain and ensure the achievement of all the planned outcomes, including the ones listed here. These are collectively called risk and compliance programs. The purpose of these programs is not to create value or improve margins, although they often do, but to reduce risk so that the planned outcomes themselves are achieved. This is the purpose of all risk and compliance programs: to keep companies between the lines so that they do not jeopardize their chances of success. You could say that Operational Excellence and Risk & Compliance Management together are the guardrails that protect against failure and help ensure success for organizations, and this is no different when it comes to projects.

    Why Is This So Important?
    This is important because when uncertainty is left unchecked and risk becomes a reality, not only do our projects fail but so do the businesses and organizations that depend on them. Mission success requires project success, and risk threatens them both. Each of the photos in the first picture is an example of a failure where change was not managed, or too much change was taken on, or change itself exposed latent or new risk. For companies to succeed, their projects must succeed, and for that to happen they need to effectively contend with risk in the presence of change.

  • Catastrophic Harm

    In 2020 we saw Lebanon's government resign in response to the explosion in Beirut on August 4th that killed more than 200 people. The explosion was caused by an ammonium nitrate fire; according to IChemE, such fires are notorious and seem to occur every 20-30 years, causing major loss of life and widespread damage. Investigations into the explosion are on-going, and lessons learned will no doubt be used to improve safety practices around the world. Fines will be handed out, inspections will be increased, regulations will be enacted, and guidelines will be created to prevent this kind of accident from reoccurring. This is the usual process by which safety improves. However, when it comes to risks that happen infrequently, this process is not as effective as it could or needs to be. In Malcolm Sparrow's book, "The Character of Harms," he outlines several qualities of these kinds of risks that impact the effectiveness of risk mitigation, specifically with respect to prevention:

    - The very small number of observed events does not provide a sound basis for probability estimation, nor for detecting any reduction in probabilities resulting from control interventions.
    - The short-term nature of budget cycles and political terms-of-office, coupled with the human tendency to discount future impacts, exacerbates the temptation to do nothing, to do very little, or to procrastinate on deciding what to do.
    - The very small number of observed instances of the harm (in many cases zero) provides an insufficient basis for any meaningful kind of pattern recognition and identification of concentrations.
    - All of the preventive work has to be defined, divided up, handed out, conducted, and measured early in the chronological unfolding of the harm, in the realm of precursors to risk, and precursors of the precursors. This is intellectually challenging work.
    - Reactive responses and contingency plans are not operated often enough to remain practised and primed for action.
    - In the absence of periodic stimuli, vigilance wanes over time.
    - Reactive responsibilities are curiously decoupled from preventive operations, and engage quite different agencies or institutions.
    - Investments in reactive capacities (e.g., public health and emergency response) are more readily appreciated, being versatile and having many other potential and easy-to-imagine applications. Policy makers at the national level find reactive investments easier to make, as their own intellectual and analytic role is reduced to the broadcast dissemination of funds for decentralized investment in emergency services. Investments in enhancing preventive controls tend, by contrast, to be highly centralized and much more complex technically.

    These qualities are even more prevalent when it comes to dealing with natural disasters as opposed to man-made ones. Effective prevention of harm requires addressing the issues arising from these qualities through deliberate intention and a change in mindset. Sparrow outlines a path forward:

    - Counteract the temptation to ignore the risk. Focus more on the impact of the risk rather than only the likelihood. Even when deciding not to do something, make that a conscious decision, not an omission.
    - Define higher-volume, precursor conditions as opportunities for monitoring and goal-setting. Capturing near misses, which are more frequent, has been used to support meaningful analysis. When these are reduced to zero, the scope can be broadened, bringing in more data to help improve safety further.
    - Construct formal, disciplined warning systems, understanding that the absence of alarms month over month will create the conditions for them to be ignored when they do occur. Countermeasures will need to be established to maintain a state of readiness, such as sending alarms to multiple sites so that one crew misinterpreting them does not impede the necessary response.
I highly recommend Sparrow's book, "The Character of Harms" for both regulators and operators looking to improve safety and security outcomes.

  • What Curling Can Teach Us About Risk

    Or... why curlers make the best risk managers.

    Risk management is an essential aspect of every business, organization, and even our personal lives. It involves identifying, assessing, and prioritizing risks, as well as implementing strategies to minimize or avoid them. But did you know we can learn valuable lessons about risk management from the game of curling? Curling is a popular winter sport, particularly in Canada, that involves two teams of four players each sliding stones on an ice sheet towards a circular target. The game requires skill, strategy, and teamwork. But it also involves taking calculated risks and making decisions that can either lead to opportunities or backfire. When it comes to risk management, we can learn some lessons from curling.

    Understanding risk and opportunity

    In curling, players must weigh the risks and opportunities of each shot. For example, they may choose to play a more difficult shot that could result in a higher score but also has a higher risk of failure. Alternatively, they could play a safer shot that has a lower risk of failure but also a lower potential reward. Similarly, in business and in life, we must assess the risks and opportunities of each decision. It's essential to consider the potential benefits and drawbacks of each option, weigh them against each other, and make informed choices.

    Preventive and mitigative measures

    In curling, players take preventive and mitigative measures to reduce the risks of their shots. They carefully plan their shots, consider the position and angle of the stones, and use sweeping techniques to control the speed and direction of each stone. In risk management, preventive measures aim to avoid or reduce risks before they occur. Mitigative measures aim to minimize the impact of a risk when it becomes a reality.
    Both preventive and mitigative measures are essential to effective risk management and should be considered when developing risk management strategies.

    Adaptive measures

    In curling, players must be adaptable and able to adjust their strategies based on changing circumstances. For example, they may need to change their strategy if the ice conditions change or if the other team makes unexpected moves. In a similar way, it is essential for risk managers to be adaptable and able to adjust strategies based on changing circumstances. Risk management plans should be regularly reviewed and updated to reflect new risks, changing priorities, or changes in the business or personal environment.

    Knowing when to take risks and when to play it safe

    In curling, players must make strategic decisions about when to take risks and when to play it safe. For example, they may take a risk if they are behind in the game and need to catch up, or they may play it safe if they have a lead and do not want to risk losing it. Similarly, in risk management, it is important to know when to take risks and when to play it safe. Sometimes taking a risk can lead to significant rewards, while other times it can lead to catastrophic consequences. Knowing the difference is crucial to winning the game and to mission success.

    Skip stones

    Skips on curling teams and risk managers share similarities in their roles and responsibilities. Both skips and risk managers are tasked with making strategic decisions that have a significant impact on the outcome of their respective endeavours. Skips must decide the best course of action for their team during a curling match, assessing the playing conditions, their team's strengths and weaknesses, and the opponent's tactics. Similarly, risk managers must make informed decisions to protect their organization from potential risks and hazards, analyzing the risks involved, the potential impact, and the most effective risk mitigation strategies.
Both skips and risk managers need to be highly skilled at analyzing and interpreting complex information, making sound decisions under pressure, and communicating their decisions effectively to their team or organization. The game of curling teaches us valuable lessons about risk management. By understanding risk and opportunity, taking preventive and mitigative measures, being adaptable, and knowing when to take risks and when to play it safe, we can make better decisions in our personal and professional lives. What do you think?
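    The shot-selection trade-off described in this post can be sketched as a simple expected-value comparison. The probabilities and scores below are invented purely for illustration:

```python
# Hedged sketch: weighing a risky shot against a safe one by expected
# value, the way a skip might reason through a call. All numbers are
# made up for illustration.

def expected_value(p_success: float, reward: float, penalty: float) -> float:
    """Average outcome of a shot over many attempts."""
    return p_success * reward + (1 - p_success) * penalty

risky = expected_value(0.4, reward=3, penalty=-1)  # difficult double takeout
safe = expected_value(0.9, reward=1, penalty=0)    # simple draw

# Behind late in the game, the risky shot's upside may justify the call;
# protecting a lead, the safe draw wins on expected value.
print(risky, safe)
```

    The point is not the arithmetic itself but that the same decision can be the right call or the wrong one depending on the game situation, which is why context matters as much as probability.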

  • Fighting the AI Dragon of Uncertainty

    There are those who think that AI is only software. After all, we can reduce AI to a basic Turing machine, digital ones and zeros. There is nothing new here, nothing to be concerned about, so just move on. There are others who believe that AI is the greatest innovation we have seen. It will answer all our questions, help us cure cancer, solve poverty, climate change, and all other existential threats facing humanity. AI will save us, perhaps, even from ourselves. And there are still others who believe AI is the existential threat that will end us and the world as we know it. This narrative, this story, is not new. It is as old as humanity. We create technology to master our environment, only to discover that one day it takes on a life of its own to master us. And this is when the hero of the story comes in. The hero fights against the out-of-control technology to restore the world back into balance, before the chaos. However, the past is closed, and the path back is no longer possible. The hero must now take the path forward. Our hero must fight the dragon, rescue the princess, and create a new life happily ever after. Coincidentally (or not), this follows the technology hype curve that many of us are very familiar with (heightened expectations, trough of disillusionment, slope of enlightenment, and plateau of productivity).

    AI Middle Earth

    What character are we playing in the AI story, and where are we on the map of AI Middle Earth? Are we the creators of this new technology promising untold power, not unlike the Rings of Power from Tolkien’s books, “The Lord of the Rings”? Will we be able to control the rings, or fall to their temptations? Who will keep back the evil of Mordor? Who will fight the AI dragon of uncertainty? Who will be the heroes we need? Gandalf, the wise wizard from the Lord of the Rings, reminds us, “The World is not in your books and maps, It’s out there.” The real world is not in our AI models either.
    It’s time for engineers, bound not by rings of power but by a higher calling, to rise up and take their place in the AI story: to help keep evil at bay, and to build a better tomorrow using AI. We need them to help us overcome the mountain of expectations and endure the trough of disillusionment to reach the plateau of productivity, or better yet, the place where we find a flourishing and thriving humanity. It’s time for Professional AI Engineers: more than technology experts, engineers who are also courageous, competent, and trustworthy. Engineers who are willing to fight the AI Dragon of Uncertainty.

  • When did Professional Engineering Become an Obstacle to Innovation?

    The Future of Professional Engineering

    Over the years, I’ve seen a decline in professional engineering, nowhere more so than in the domain of software engineering. Engineering in Canada began with bold vision and practical ingenuity. From the Canadian Pacific Railway to the St. Lawrence Seaway, professional engineers were once celebrated innovators who shaped our nation. Yet somewhere along the way, professional engineering transformed from an enabler of progress into what many see as a barrier to innovation. This was made evident in Ontario with the introduction of the industrial exception (1984) as part of the Professional Engineers Act. This change permitted unlicensed people to carry out engineering work within the context of their employer’s equipment and machinery. The impact of this change was immediate. If anyone can perform engineering, then why do you need professional engineers? Since the exception was introduced, companies have reduced the number of professional engineers in their workforce. This happened in large numbers within the steel industry, where I was working at the time, as well as in other sectors. However, while this was a big straw, it was not the only one on the camel’s back. Software engineering, as a profession, would also see itself diminish and almost disappear. For all intents and purposes, software engineering is no longer a licensed practice in Canada. Perhaps, on paper, it is, but having worked in this field for decades, I have observed many calling themselves software engineers, network engineers, and even now prompt engineers, all of whom do not have a license to practice. When anyone can call themselves an engineer, we no longer have engineering, and we no longer have a profession. Academia has also not helped to advance the profession. Universities and colleges have in recent decades doubled down on preparing engineers to support scientific research rather than teaching them how to practice engineering.
    While we do need engineers to help with research, we need more of them in the field to practice. We need people who use the engineering method, not only the scientific method. So where are we now? We have reduced professional engineering to the things that engineering does, and in the process, forgotten what engineering is. We divided engineering into parts that no longer need to be held accountable or to work together. This was done for efficiency and as a means to increase innovation. Instead, however, we broke engineering, the means of innovation, and we need to put it back together again. Engineering was never about the parts. It was never about creating designs, or stamping drawings, or a risk measure to ensure public safety. Again, this is what engineering does, but not what it is. Engineering is a profession of competent and trustworthy people who build the future. And this is something worth remembering if we hope to build a thriving and prosperous Canada.

  • Paper Policies are Not Enough

    Why do we think that paper AI policies will be enough to handle AI risk? With AI’s ability to learn and adapt, we need measures that are also able to learn and adapt. This is a fundamental principle of cybernetic models (i.e., the Good Regulator Theorem): the regulator must be isomorphic with respect to the system under regulation. It must be similar in form, shape, or structure. That’s why a static, paper-based policy will never be enough to govern (i.e., regulate) the use of AI. Governance – the means of regulation – must be as capable as AI.
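    The cybernetic point can be made concrete with a toy sketch. Everything here (the names, numbers, and update rule) is my own illustration, not a real governance system: a static, paper-like policy never changes, while an adaptive regulator maintains and updates a model of the system it governs.

```python
# Toy illustration: a fixed rule versus a regulator that keeps a model
# of the (drifting) system it regulates. All values are invented.

def static_policy(observation: float) -> float:
    """A 'paper policy': the same response no matter what is observed."""
    return 1.0

class AdaptiveRegulator:
    """Maintains an internal model of the system, updated as it learns."""
    def __init__(self, learning_rate: float = 0.5):
        self.learning_rate = learning_rate
        self.model = 0.0
    def respond(self, observation: float) -> float:
        # Move the internal model toward what the system is actually doing.
        self.model += self.learning_rate * (observation - self.model)
        return self.model

reg = AdaptiveRegulator()
for obs in [1.0, 2.0, 3.0]:        # the governed system keeps changing
    adaptive_response = reg.respond(obs)

# The adaptive regulator tracks the drift; the static policy does not.
print(static_policy(3.0), adaptive_response)
```

    The gap between the static response and the drifting system is the gap a paper policy leaves open when the thing it governs can learn.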

  • The Need for AI Mythbusters

    When I read about AI in the news, and this includes social media, I am troubled by the way it is often reported. Many articles are written citing research that, for all intents and purposes, consists of de minimis examples, used in many cases for the purpose of attracting funding or, if you are a content creator, more followers. These articles often don’t prove or demonstrate that AI is better, and neither does the research upon which the articles are based. The research more often than not serves to provide examples of where AI “might” or “could” be better under very specific use cases and caveats. There is nothing wrong with that. However, we need to be mindful that there is a significant gap between the research and the conjectures being made by others. The AI hype machine is definitely operating on all cylinders. Many conjectures are stated as bold claims, such as: “LLMs are sufficiently creative that they can beat humans at coming up with models of the world.” Is that what we understand generative AI to be: creative? Or this one: “a sufficiently advanced AI system should be able to compose a program that may eventually be able to predict arbitrary human behaviour over arbitrary timescales in relation to arbitrary stimuli.” What does that even mean? The claim that AI should be able to generate a program capable of predicting human behaviour based on arbitrary stimuli is a bold one, and one that strongly suggests humans are mechanistic in nature. Instead of elevating artificial intelligence, human nature is reduced to the level of the machines on which AI is built. Is that what we believe or want? As professionals, we need to be critical of what is published when it comes to AI, and being critical does not mean being negative. We must dig deeper to help separate hype from reality. This will not change the level of hype being created. However, it will help your clients and profession better navigate the AI landscape.
Time for professionals to be AI Mythbusters!

  • How to Make Better Compliance Investments?

    When it comes to meeting obligations, many organizations view compliance as silos rather than as interconnected programs that ensure mission success. As a result, they benefit only from the sum of their compliance efforts.

    Portfolio of Compliance Programs

    However, those who view compliance as interconnected programs will experience the product of their interactions. They will benefit from a multiplication of their compliance efforts. To achieve this, management must make budget decisions considering programs as a whole, not separately: as an investment portfolio, not as individual cost centres. However, organizations often lack the tools to make such decisions. They don’t know how to invest in their programs to maximize compliance return. These are the kinds of questions we explore in our weekly Elevate Compliance Huddles. Consider becoming a Lean Compliance member and join other organizations where mission success requires compliance success.
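    The difference between summing and multiplying compliance efforts can be shown with a short sketch. The improvement factors are invented for illustration only:

```python
# Hypothetical per-program improvement factors: three programs that, on
# their own, improve outcomes by 10%, 20%, and 15% respectively.
factors = [1.10, 1.20, 1.15]

# Siloed view: gains are counted separately and simply added.
additive_gain = sum(f - 1.0 for f in factors)            # 0.45

# Portfolio view: interconnected programs compound their interactions.
multiplicative_gain = 1.0
for f in factors:
    multiplicative_gain *= f
multiplicative_gain -= 1.0                               # ~0.518

print(additive_gain, multiplicative_gain)
```

    Even with these modest made-up numbers, the portfolio view yields more than the sum of the parts, and the gap widens as more programs interact.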

  • A Faster Way to Operationalize Compliance

    Many organizations implement their compliance systems in a phased approach by working through each element of a regulation or standard. They often start by implementing "shall statements," which tend to be more prescriptive and somewhat easier to establish. While this element-first approach might achieve a certification or pass an audit quicker, it seldom delivers a system that is effective or even operational. In this article we compare this approach with a systems-first approach based on the work of Eric Ries (Lean Startup).

    Element-First Approach

    The element-first approach starts at the bottom by identifying the components of the system that may already exist:

    1. Understand the elements of the regulation or standard.
    2. Map existing practices to the elements.
    3. Identify where current practices do not meet the standard.
    4. Engage these deficiencies in a Plan-Do-Check-Act (PDCA) cycle.
    5. Target these deficiencies for compliance with the standard.

    This process captures where existing practices might support a given element, which provides a measure of conformance, at least at some level. However, what this approach overlooks is that existing practices were established in another context and perhaps for a different purpose. They most likely have not been designed to work together within the context of the desired compliance management system. What organizations have done is essentially take a bunch of existing parts and put them into another box labelled "New Compliance Management System." They still need to adapt those parts to work together to fulfill the purpose of the new compliance system. Until that happens, the system cannot be considered operational. Unfortunately, organizations usually run out of time, money, and motivation to move beyond the parts of a system to implementing the interactions, which are essential if a system is to be considered operational.
    Systems-First Approach

    To support modern regulations designed with performance- and outcome-based obligations, another strategy is needed, one that:

    - achieves operational status sooner,
    - focuses on system behaviours, and
    - improves effectiveness over time right from the start.

    To achieve operational status sooner, the Lean Startup approach developed by Eric Ries can be used. This systems-first approach emphasizes system interactions so that a measure of effectiveness is achieved right away. Instead of a bottom-up approach, the focus is on a vertical slice of the management system so that all system behaviours are present at the start and can be applied to each vertical slice. System behaviours create the opportunity for compliance to be achieved. In a manner of speaking, we start with a minimal viable compliance system: one that has all essential parts working together as a whole. Not only is the system operational, it is already demonstrating a measure of effectiveness. It also provides a better platform on which the system can be improved over time.

  • The Stochastic Wrench: How AI Disrupts Our Deterministic World

    When it comes to trouble, it is often the result of someone throwing a wrench into the works. This is certainly the case when it comes to artificial intelligence, although not in the way we might think. Up until now, we have engineered machines to be deterministic, which means they are stable across time, reliable, and, given a set of inputs, you get the same outputs without variation. In fact, we spend significant effort to ensure there is no variation. This is fundamental to practices such as Lean and Six Sigma, along with risk and compliance. All these efforts work to ensure the outcomes we want and not the ones we don't. They make certain that when we do A, we always get B and nothing else.

    Artificial Intelligence - A Stochastic Wrench

    Yet here we are, with a stochastic machine, a probabilistic engine we call AI, where the question you ask today will give you a different answer when you ask it tomorrow. Technically and practically, AI is not reliable; it's not deterministic. This is not a question of whether AI is accurate or if the answer is correct. It's about the answer being different every time. There is always variation in its outputs. There are many reasons why this is the case, including the nature of how knowledge models work, the fact that AI can learn, and the fact that it can learn how to learn: it can adapt. However, what is crucial to understand is that AI is not the kind of machine we are used to having in our businesses. We want businesses to be deterministic, predictable, and reliable. And yet here we are, throwing a stochastic wrench into our deterministic works. This is why we need to rethink how we govern, manage, and use AI technology. We need to learn how to be more comfortable with uncertainty. But better than that, we need to learn how to improve our probability of success in the presence of uncertainty.
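    The contrast between deterministic and stochastic machines can be shown in a few lines. This is a toy model with invented names and noise, not a representation of how any actual AI system works:

```python
import random

def deterministic_machine(x: float) -> float:
    """The machines we are used to: same input, same output, every time."""
    return 2 * x

def stochastic_machine(x: float, rng: random.Random) -> float:
    """A toy stand-in for AI: the answer varies from one call to the next."""
    return 2 * x + rng.gauss(0, 0.1)

rng = random.Random()
# Ask the deterministic machine the same question twice: identical answers.
assert deterministic_machine(21) == deterministic_machine(21)

# Ask the stochastic machine the same question twice: the answers
# (almost surely) differ.
first = stochastic_machine(21, rng)
second = stochastic_machine(21, rng)
print(first == second)  # almost always False
```

    Managing the first kind of machine means eliminating variation; managing the second means deciding how much variation you can live with, and governing accordingly.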

  • How to Make Things More Certain

    Author's note: In the pursuit of improving anything, we need to explore the edge of our understanding. This is no different when it comes to compliance. In this article, I delve into philosophy and future causality. You may wonder what this has to do with compliance. As it turns out, how we conceptualize the future influences how we think about risk, compliance, and even AI.

    Interfering with the Future

    The world according to classical physics is deterministic. If you know the initial conditions, and given fixed laws of nature, then the future will also be “fixed” – what will be, will be. This provides a sense of certainty and predictability. However, that's not how we experience the world. We do observe the past as fixed, but the future appears open to possibilities; in a deep sense, anything can happen – a source of potential but also of uncertainty. According to Dr. Jenann Ismael, Professor of Philosophy at Johns Hopkins University, the future is not so much something for us to know as it unfolds (an epistemic perspective) but something that is becoming through the application of the knowledge we have collected. We use knowledge about the past to interfere with the future. It's our agency that determines the future and makes it more certain. Dr. Ismael provides an explanation for this from the domain of physics, her focus with respect to philosophy. Classical physics uses a bird's-eye, third-person view rather than an immersive, first-person perspective to model the world. This separates the observer from the environment to isolate interactions, but it also leaves out how observers interact with that environment. From an observer's point of view, we participate in the environment we are trying to represent, and therefore interference is inevitable. Dr. Ismael uses "interference" over other words such as "influence" because of its dynamic behaviour. We gather knowledge to represent the world at the same time that we are acting in the world. This creates the opportunity for interference, behaving much like the ripples in a pond when we skip stones. Knowledge of the past can be applied to delay, discourage, or prevent what we don't want, as well as to advance, encourage, and make certain what we do want. This is not unlike the practice of risk management, where measures are used to interfere with the natural course of events to achieve preferred ends. Our choices make some possibilities more probable than others. The future becomes more “fixed” perspectivally (from our point of view) not because of determinism but because of agency. This doesn't mean we can bend physics to our will, only that our choices influence the way the future becomes, understanding that there are other forces at work. However, up until the time we decide, the future does not have that information from which to make certain the course of preferred events. This contributes to the uncertainty we experience. We can get a better appreciation of this dynamic from the field of quantum mechanics. At a quantum level, the act of measuring affects what we observe. According to the Heisenberg Uncertainty Principle, we can't know with perfect accuracy both the position and the speed (momentum) of a particle at the same time. Until the measurement is taken, knowledge of both the particle's position and its speed is possible but also uncertain. It's only when we take the measurement that one is made more certain and the other less so.

    Ripples of Intent

    Dr. Ismael further suggests that our decisions create ripples in the future record that become part of the future we are trying to anticipate. When the future becomes a reality, we observe not only what “is” but also records of what “is now” the effects of our prior choices. In other words, our choices have effects beyond proximal events. Our day-to-day experiences also reinforce our intuitions regarding how our decisions interfere with the future.
When we consider the future and act on our predictions, we affect the future itself. This arises from the self-referencing nature of the processes involved.

"As long as one's activity is connected with the domain one is representing; some of what one encounters will be the ripples produced by one's own activities. One can't treat those features of the landscape as potential objects of knowledge." – Dr. Jenann Ismael

This is one of the reasons why we limit the publication of poll predictions during elections: we don't want the measurement of what "is" to affect what "will be." To limit the effect, we isolate the measurement from the reality we are observing. However, when the measurement becomes part of that reality, it can't help but interfere with it, creating ripples in the future record.

Another example is the use of Artificial General Intelligence (AGI). AI systems of this kind are also self-referencing: the output they generate interferes with the future they are trying to represent. AI is not an impartial observer in the classical sense. AI is an observer-participant, which gives it a measure of agency – something that may or may not be desirable, but that in any case should be accounted for.

Some may interpret this as the makings of a self-fulfilling prophecy, or as creating what we colloquially call luck (good or bad). It could also be the effect of ripples in the future made by our prior choices. We can establish safeguards, quarantine the effects, or introduce other precautions concerning these ripples. At the same time, these ripples can be used strategically, which is what we do most of the time: we act as if our decisions matter and have causal effects on the future.

Are we standing still, moving towards, or creating the future?

When we think of the future as unfolding and deterministic, we envision ourselves as standing still, waiting for the future to present itself. In this context, we can decide to:

Hope for the best.
Prepare for the future we anticipate by strengthening resiliency.

However, if the future is also becoming, we can decide to:

Steer towards a preferred possibility, making it more probable than others.

Interfere with the future by creating ripples of potential opportunity.

The observer-participant dynamic may not be ideal for gaining knowledge; however, it is strategic for making things happen in the presence of possibilities.
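The claim that steering makes a preferred possibility more probable can be illustrated with a toy simulation (my own sketch, not from the article, and all names in it are illustrative). Each trial is a run of chance events; a passive observer accepts every event, while a steering agent gets a single re-roll of unfavourable events – interference, not control.

```python
import random

def simulate(steer: bool, trials: int = 10_000, steps: int = 20) -> float:
    """Estimate the probability of reaching a preferred outcome.

    Each trial is a sequence of chance events (+1 / -1). A passive
    observer accepts every event as it comes; a steering agent may
    re-roll an unfavourable event once, nudging the odds toward the
    preferred end state without determining it.
    """
    random.seed(42)  # fixed seed so both runs face comparable "luck"
    successes = 0
    for _ in range(trials):
        position = 0
        for _ in range(steps):
            event = random.choice([-1, 1])
            if steer and event == -1:
                event = random.choice([-1, 1])  # one re-roll: interference, not control
            position += event
        if position > 0:  # the preferred possibility was realised
            successes += 1
    return successes / trials

print(simulate(steer=False))  # roughly 0.4 by chance alone
print(simulate(steer=True))   # substantially higher with steering
```

The steering agent never dictates the outcome of any single event; it only shifts the odds, which is enough to make one future far more probable than the others.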

  • The Limits of Paper-Based Governance in Regulating AI in Business Systems

    In a world increasingly defined by the rapid advancement and integration of artificial intelligence (AI) into business systems, the traditional tools of governance are showing their age. Paper-based governance – rooted in static policies, procedures, and compliance checklists – was designed for a time when systems were stable and human-controlled. But AI is neither static nor entirely human-controlled. Its adaptive, self-learning, and agentic nature fundamentally challenges the effectiveness of these legacy mechanisms.

Paper-based versus Operational Governance

Why Paper Policies Fall Short

Paper-based governance relies on predefined rules, roles, and responsibilities that are documented, communicated, and enforced through audits and assessments. While this approach has been effective for many traditional business systems, it assumes that systems operate in a predictable manner and that risks can be anticipated and mitigated through static controls. Unfortunately, this assumption does not hold for AI technologies.

AI systems are inherently stochastic machines that operate in the domain of probabilities and uncertainty. These systems also evolve through self-learning, often adapting to new data in ways that cannot be fully predicted at the time of deployment. They operate dynamically, making decisions based on complex, interrelated algorithms that may change over time. Static paper policies are inherently incapable of keeping up with this fluidity, leaving organizations vulnerable to unforeseen risks and compliance gaps.

Consider an AI system used for dynamic pricing in e-commerce. Such a system continuously adjusts prices based on real-time market conditions, competitor pricing, and consumer behavior. A static policy dictating acceptable pricing strategies might quickly become irrelevant or fail to address emergent risks like discriminatory pricing or market manipulation.
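To make the dynamic-pricing example concrete, here is a minimal sketch (my own illustration, with hypothetical segment names and an assumed 10% fairness threshold) of the kind of runtime check an operational control could apply, where a paper policy can only state the intent "do not discriminate":

```python
from statistics import mean

# Assumed threshold: flag any segment paying more than 10% above the overall mean.
FAIRNESS_THRESHOLD = 0.10

def check_pricing_fairness(quotes):
    """Operational control: compare prices quoted to each customer segment.

    `quotes` is a stream of (segment, quoted_price) observations from a
    dynamic-pricing system. Returns the segments whose average premium
    over the overall mean price exceeds the fairness threshold.
    """
    overall = mean(price for _, price in quotes)
    by_segment = {}
    for segment, price in quotes:
        by_segment.setdefault(segment, []).append(price)
    violations = []
    for segment, prices in by_segment.items():
        premium = (mean(prices) - overall) / overall
        if premium > FAIRNESS_THRESHOLD:
            violations.append((segment, round(premium, 3)))
    return violations

# Illustrative observations: region_b is being quoted systematically higher prices.
quotes = [
    ("region_a", 100.0), ("region_a", 102.0),
    ("region_b", 128.0), ("region_b", 130.0),
]
print(check_pricing_fairness(quotes))  # [('region_b', 0.122)]
```

The point is not this particular metric but the shift it represents: the written rule becomes a measurable, continuously evaluated signal that moves at the same speed as the pricing system itself.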
Paper policies or guardrails, no matter how thoughtfully constructed, simply cannot adapt as quickly as the systems they aim to govern.

The Need for Operational Governance

To effectively regulate AI, the regulatory mechanisms themselves must be as adaptive, intelligent, and dynamic as the systems they oversee. This principle is encapsulated in the Good Regulator Theorem of cybernetics, which states that every good regulator of a system must be a model of that system – it must be isomorphic, matching in structure and variety the system it regulates.

In practical terms, this means moving beyond paper-based policies and guardrails to develop operational governance frameworks that are:

Dynamic: Capable of real-time monitoring and adjustment to align with the evolving behaviour of AI systems.

Data-Driven: Leveraging the same data streams and analytical capabilities as the AI systems to detect anomalies, biases, or potential violations.

Automated: Incorporating AI-powered tools to enforce compliance, identify risks, and implement corrective actions in real time.

Transparent and Observable: Ensuring that AI systems and their governance mechanisms are explainable and auditable, both internally and externally.

Building Operational Governance Systems

The shift from paper-based to operational governance involves several critical capabilities:

Real-Time Monitoring: Implement systems that continuously monitor AI behaviour, performance, and outcomes to detect deviations from intended purposes or compliance requirements.

Continuous Algorithmic Auditing: Conduct continuous audits of AI algorithms to assess their fairness, transparency, and adherence to ethical standards.

Feedback and Feedforward Loops: Establish closed-loop systems that allow regulatory mechanisms to steer and adapt based on observed behaviour and anticipated risk.
Collaborative Ecosystems: Foster collaboration between stakeholders, business leaders, and engineers to develop shared frameworks and best practices for AI governance.

These must work together as part of Operational Compliance, defined as a state of operability in which all essential compliance functions, behaviours, and interactions exist and perform at the levels necessary to create the outcomes of compliance: better safety, security, sustainability, quality, ethics, and ultimately trust.

Looking Forward

AI is transforming the business landscape, introducing unprecedented opportunities and risks. To govern these systems effectively, organizations must embrace governance mechanisms that are as intelligent and adaptive as the AI technologies they regulate. Paper-based governance, while foundational, is no longer sufficient. The future lies in dynamic, data-driven, and automated regulatory frameworks that embody the principle of isomorphic governance. Only then can organizations stay between the lines and ahead of risk in an AI-powered world.

© 2017-2025 Lean Compliance™ All rights reserved.