  • Stopping AI from Lying

    Recently, I asked Microsoft’s Copilot to describe "Lean Compliance." I knew that the information about Lean Compliance in current foundation models was not up to date and would need to be merged with real-time information, which is what Copilot attempted to do. However, what it came up with was a mix of accuracy and inaccuracy. It said someone else founded Lean Compliance rather than me. Instead of omitting that aspect of "Lean Compliance," it made it up. I instructed Copilot to make the correction, which it did, at least within the context of my prompt session. It also apologized for making the mistake.

    While this is just one example, I know my experience with AI chat applications is not unique. Had I not known the information was incorrect, I might have used it in decision-making or disseminated the wrong information to others.

    Many are fond of attributing human qualities to AI, a practice called anthropomorphism. Instead of considering output as false and in need of correction, many will say that the AI system hallucinated — as if that makes it better. And why did Copilot apologize? This practice muddies the waters and makes it difficult to discuss machine features and properties, such as how to deal with incorrect output. However, if we are going to anthropomorphize, then why not go all the way and say the AI lied? We don’t do this because it applies a standard of morality to the AI system. We know that machines are not capable of being ethical. They don’t have ethical subroutines to discern between right and wrong. That is a quality of humans, not machines. That's why, when it comes to AI systems, we need to stop attributing human qualities to them if we hope to stop the lies and get on with the task of improving output quality.

  • For Compliance to Change It Must Raise Its Standard

    Compliance in many circles is viewed as a solved problem. Organizations declare their compliance by attestation, verified by internal audits and confirmed by external audits. Any gaps are quickly closed to sustain a status of “In Compliance.” What then is left to do?

    However, for many organizations, the scope of obligations that determines “In Compliance” consists only of legal requirements. Obligations that are voluntary, ethical, social, or even simply beneficial to an organization are left out of consideration. These other obligations arise from commitments to sustainability, safety, security, quality, environmental protection, and other strategic outcomes. They have more to do with buying down risk, meeting industry targets, and advancing better outcomes than with adherence to prescriptive rules, legal or otherwise. Meeting this broader set of obligations requires intentional and sustained effort, where measures of performance and effectiveness define success rather than measures of conformance alone.

    Unfortunately, the impetus to pursue an operational approach is hard to find when you believe you are already “In Compliance,” confirmed by audits and certified by standards organizations. For compliance to change it must raise its standard. That’s why we created Lean TCM (Total Compliance Management): to help organizations raise their compliance standards to meet all their obligations and keep all their promises connected with rules, standards, targets, and outcomes. From legal requirements to ESG commitments and everywhere in between.

    This transformation starts when you decide to raise your standards, which can begin today. The sooner you decide, the sooner you experience the benefits that come from always staying between the lines and ahead of risk.

  • Keep Humans In The Loop

    When there is a chance of harm, the decision to proceed is an ethical choice and can only be made by humans. AI should not make ethical decisions for you. AI systems are not accountable and cannot answer for the outcome.

  • AI Risks Document-Centric Compliance

    In domains where compliance is "document-centric," focused on procedural conformance, the use of AI poses significant risk: AI can be used inappropriately to create, evaluate, and assess the documentation we use to describe what we do (or should do). Disclosure of AI use will be an important safeguard going forward, but it will not be enough to limit exposure resulting from the adverse effects of AI. To contend with these uncertainties, organizations must better understand how AI works and how to use it responsibly. To bring the risks into focus, let’s consider the Large Language Models (LLMs) used in applications such as ChatGPT, Bard, Gemini, and others.

    What do LLMs model?

    While it's important to understand what these LLMs do, it's also important to know what they don't do, and what they don't know. First and foremost, LLMs create a representation of language based on a training set of data. LLMs use this representation to predict words and nothing else. LLMs do not create a representation of how the world works (i.e., physics), or of the systems, controls, and processes within your business. They do not model your compliance program, your cybersecurity framework, or any other aspect of your operations. LLMs are very good (and getting better) at predicting words. And so it's easy to imagine that AI systems actually understand the words they digest and the output they generate, but they don't. It may look like AI understands, but it doesn't, and it certainly cannot tell you what you should do. (The sketch at the end of this piece illustrates the point.)

    Limitations of Using AI to Process Documents

    Let's dial in closer and consider a concrete example. This week the Responsible AI Institute, as part of their work (which I support), released an AI tool that can evaluate your organization's existing RAI policies and procedures and generate a gap analysis based on the National Institute of Standards and Technology (NIST) risk management framework. Sounds wonderful! This application is no doubt well intended and is neither the first nor the last AI tool to process compliance documentation. However, tools of this kind raise several questions concerning the nature of the gaps that can be discovered and whether a false sense of assurance will be created by using them.

    More Knowledge Required

    Tools that use LLMs to generate content, for example remedies to address gaps in conformance with a standard, may produce what look like plausible steps to achieve compliance objectives, or controls to contend with risk. However, and this is worth repeating, LLMs do not understand or have knowledge of how controls work, or management systems, or how to contend effectively with uncertainty. They also don't have knowledge of your specific goals, targets, or planned outcomes. LLMs model language to predict words; that's all. This doesn't mean the output from AI is not correct or may not work. However, only you – a human – can make that determination.

    We also know that AI tools of this kind can at best identify procedural conformance with prescription. They do not (and cannot) evaluate how effective a given policy is at meeting your obligations. Given that many standards consist of a mixture of prescriptive, performance, and outcome-based obligations, this leaves a sizeable portion of "conformance" out of consideration. Evaluating the gaps that matter requires operational knowledge of the compliance functions, behaviours, and interactions necessary to achieve the outcome of compliance, which is something LLMs do not model and do not know.

    The problem is that many who are responsible for compliance don't know these things either. Lack of operational knowledge is a huge risk. If you don’t have operational knowledge of compliance you will not know whether the output from AI is reasonable, safe, or harmful. Not only that: if you are using AI to reduce your complement of compliance experts (analysts, engineers, data scientists, etc.), your situation will be far worse. And you won't know how bad until it happens, when it's too late to do anything about it.

    Not the Only Risk

    As I wrote in a previous article, AI is not an impartial observer in the classical sense. AI systems are self-referencing. The output they generate interferes with the future they are trying to represent. This creates a feedback loop which gives AI a measure of agency that is undesirable and contributes in part to public fear and worry concerning AI. We don't want AI to amplify or attenuate the signal; it should be neutral, free of biases. We don't yet understand well enough the extent to which AI interferes with our systems and processes and, in the case of compliance, the documentation we use to describe them. I raised these concerns during a recent Responsible AI Institute webinar, where this interference was acknowledged as a serious risk. Unfortunately, it's not on anyone’s radar. While there are discussions that risks exist, there is less conversation about what they are, or how they might be ameliorated. Clearly, AI is still in the experimental stage.

    Not the Last Gap

    When it comes to compliance there are always gaps. Some of these are between what's described in documentation and a given standard. Others include gaps in performance, effectiveness, and overall assurance. Adopting AI-generated remedies creates another category of gaps, and therefore risk, that needs to be handled. The treatment is to elevate your knowledge of AI and its use. You need to understand what AI can and cannot do. You also need to know what it should or shouldn't do. The outputs from AI may look reasonable, the promise of greater efficiencies compelling. But these are not the measures of success. To succeed at compliance requires operational knowledge of what compliance is and how it works. This will help you contend with the risks associated with the use of AI, along with how best to meet all your obligations in the presence of uncertainty.
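
    To make "LLMs model language to predict words" concrete, here is a minimal sketch using a toy bigram model as a stand-in for an LLM. The corpus and code are my illustration (not from any vendor's system); real models are vastly larger, but the training objective is the same: predict the next word from the words that came before.

    ```python
    from collections import Counter, defaultdict

    # Toy "language model": learn which word tends to follow which,
    # from nothing but word co-occurrence in a tiny training corpus.
    corpus = (
        "the audit found a gap . the audit closed the gap . "
        "the policy closed the gap ."
    ).split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1  # count how often nxt follows prev

    def next_word_distribution(word):
        """Probability of each candidate next word, given the previous word."""
        counts = follows[word]
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    # The model predicts words from statistics alone; it has no notion of
    # what an "audit" or a "gap" is, no model of controls or processes.
    print(next_word_distribution("the"))
    # {'audit': 0.4, 'gap': 0.4, 'policy': 0.2}
    ```

    Nothing in the model represents audits, gaps, or controls; it holds only word statistics, which is why fluent output can still be wrong.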

  • How to Make Things More Certain

    Author's note: in the pursuit of improving anything, we need to explore the edge of our understanding. This is no different when it comes to compliance. In this article, I delve into philosophy and future causality. You may wonder what this has to do with compliance. As it turns out, how we conceptualize the future influences how we think about risk and, more importantly, our posture.

    The world according to classical physics is deterministic. If you know the initial conditions, and given fixed laws of nature, then the future will also be “fixed” – what will be, will be. This provides a sense of certainty and predictability. However, that’s not how we experience the world. We do observe the past as fixed, but the future appears open to possibilities; in a deep sense, anything can happen – a source of potential but also uncertainty.

    According to Dr. Jenann Ismael, Professor of Philosophy at Johns Hopkins University, the future is not so much something for us to know as it unfolds, from an epistemic perspective, but something that is becoming through the application of knowledge we have collected. We use knowledge about the past to interfere with the future. It's our agency that determines the future and makes it more certain.

    Dr. Ismael provides an explanation for this from the domain of physics, her focus with respect to philosophy. Classical physics uses a bird's-eye, third-person view rather than an immersive, first-person perspective to model the world. This separates the observer from the environment to isolate interactions, but it also leaves out how observers interact with it. From an observer's point of view, we participate in the environment we are trying to represent, and therefore interference is inevitable. Dr. Ismael uses "interference" over other words such as "influence" because of its dynamic behaviour. We gather knowledge to represent the world at the same time that we are acting in the world. This creates the opportunity for interference, behaving much like ripples in a pond when we skip stones.

    Interfering with the Future

    Knowledge of the past can be applied to delay, discourage, or prevent what we don’t want, as well as advance, encourage, and make certain what we do want. This is not unlike the practice of risk management, where measures are used to interfere with the natural course of events to achieve preferred ends. Our choices make some possibilities more probable than others. The future becomes more “fixed” perspectivally (from our point of view) not because of determinism but because of agency. This doesn’t mean we can bend physics to our will, only that our choices influence the way the future becomes, understanding there are other forces at work. However, up until the time we decide, the future does not have that information from which to make certain the course of preferred events. This contributes to the uncertainty we experience.

    We can get a better appreciation of this dynamic from the field of quantum mechanics. At a quantum level, the act of measuring affects what we observe. According to the Heisenberg Uncertainty Principle (written out at the end of this piece), we can’t know with perfect accuracy both the position and the speed (momentum) of a particle at the same time. Until the measurement is taken, knowledge of both the particle’s position and speed is possible but also uncertain. It's only when we take the measurement that one is made more certain and the other less so.

    Ripples of Intent

    Dr. Ismael further suggests that our decisions create ripples in the future record that become part of the future we are trying to anticipate. When the future becomes a reality, we observe not only what “is” but also records of what “is now” the effects of our prior choices. In other words, our choices have effects beyond proximal events. Our day-to-day experiences also reinforce our intuitions regarding how our decisions interfere with the future. When we consider the future and act on our predictions, we affect the future itself. This arises because of the self-referencing nature of the processes involved.

    "As long as one's activity is connected with the domain one is representing, some of what one encounters will be the ripples produced by one's own activities. One can't treat those features of the landscape as potential objects of knowledge." – Dr. Jenann Ismael

    This is one of the reasons why we limit the publication of poll predictions during elections. We don’t want the measurement of what “is” to affect what “will be.” To limit the effect we isolate the measurement from the reality we are observing. However, when the measurement becomes part of that reality it can’t help but interfere with it, creating ripples in the future record.

    Another example is the use of Artificial General Intelligence (AGI). AI systems of this kind are also self-referencing. The output they generate interferes with the future they are trying to represent. AI is not an impartial observer in the classical sense. AI is an observer-participant, which gives it a measure of agency, something that may or may not be desirable but in any case should be accounted for.

    This may be interpreted by some as the makings of a self-fulfilling prophecy, or as creating what we colloquially call luck (good or bad). It could also be the effects of ripples in the future made by our prior choices. We can establish safeguards, quarantine the effects, or introduce other precautions concerning these ripples. At the same time, these ripples can be used strategically, which we do most of the time. We act as if our decisions matter and have causal effects on the future.

    Are we standing still, moving towards, or creating the future?

    When we think of the future as unfolding and deterministic, we envision ourselves as standing still, waiting for the future to present itself. In this context, we can decide to:

      • Hope for the best.
      • Prepare for the future we anticipate by strengthening resiliency.

    However, if the future is also becoming, we can decide to:

      • Steer towards a preferred possibility, making it more probable than others.
      • Interfere with the future by creating ripples of potential opportunity.

    The observer-participant dynamic may not be ideal for gaining knowledge; however, it's strategic for making things happen in the presence of possibilities.
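
    For reference, the Heisenberg Uncertainty Principle appealed to above is commonly written as the inequality below (standard textbook form; the article itself states it only in words):

    ```latex
    % Heisenberg Uncertainty Principle: the product of the uncertainty
    % in position (\Delta x) and the uncertainty in momentum (\Delta p)
    % is bounded below by the reduced Planck constant (\hbar).
    \Delta x \, \Delta p \ge \frac{\hbar}{2}
    ```

    Shrinking Δx (a more certain position) forces Δp to grow (a less certain momentum), which is exactly the trade-off the passage describes: measurement makes one quantity more certain and the other less so.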

  • Is your compliance software hindering your effectiveness?

    Technology is a pervasive force that significantly influences our lives in various ways, particularly with the widespread integration of AI. The impact of software on us is not always apparent, as we've learned from years of using social media. It's crucial to be aware of how technology can amplify certain behaviors while constraining others. Gone are the days when we could perceive technology as neutral, merely consisting of data collection, processing, or output devices. We now understand that information possesses influence beyond our explicit requests or desires. In many ways, information has agency. Therefore, it's imperative to ensure that our technology choices align with our values, contribute to our objectives, and, most importantly, reinforce the behaviors essential for achieving our mission. Failing to do so may result in reinforcing what benefits technology at the expense of our own interests. Be mindful of your technology choices and choose wisely.

  • Hacking Reactivity in Pursuit of Future Goals

    Over the last several years I have written, along with others, about the need for compliance to be more proactive. This is set against a prevailing reactive approach characterized by waiting until something bad happens, or being compelled by laws or pressured by stakeholders, before improving compliance, particularly with respect to safety, security, sustainability, environmental, and other high-risk objectives. Reactivity, in these contexts, is not desirable or the best behaviour for organizations that want to stay between the lines and ahead of risk.

    However, reactivity is not negative on its own. There are many cases where reacting to past events is exactly what's needed. One such place, critical to compliance, is adapting to variations in systems and processes to ensure systems perform within specified boundaries. This is accomplished by measuring outputs and comparing them to a defined standard. Deviation from the standard results in corrective actions to eliminate the gap and return to normal operations. This reactive process is foundational for regulating processes of all kinds, including those used in compliance. It's found everywhere within organizations and contributes to shaping overall corporate culture.

    In this article we consider how to exploit the power of reactivity to achieve more than just staying between the lines. We will explore how to hack reactivity in pursuit of future goals, so that we can also stay ahead of risk.

    The Power of Systems – Resisting Change

    Compliance systems are used to meet procedural obligations such as adherence to standard operating procedures, controls, measurements, management review, audits, and so on. In addition, compliance will also have performance obligations associated with goals and targets connected with commitments. These include, for example, targets connected with zero emissions, zero violations, zero defects, zero breaches, and other vision-zero initiatives.

    In both cases, processes are established to measure change from conformance or performance standards. Any change from the standard (called a deviation) is then eliminated. The presence of a deviation initiates corrective actions in the form of a CAPA (Corrective Action and Preventive Action). Corrective actions may arise from audits or inspections but also as part of system-level monitoring. To address a deviation, an iterative process such as a Plan-Do-Check-Act cycle may be conducted and repeated until the deviation is minimized or eliminated.

    While this process is reactive, since corrective actions are triggered by past events, it's possible to harness this reactivity to meet future goals. The key to leveraging reactivity for proactive ends lies in bringing the future into the present: making anticipated goals into actual goals and raising standards to meet future needs.

    Changing Goals

    When embracing a new goal, a gap emerges between the current and desired system states. This gap shares similarities with the deviations that are addressed by means of corrective actions. Since this gap has not yet happened, instead of executing corrective actions in response to actual performance, improvement actions are conducted in anticipation of future levels of performance.

    An example of this approach is the Toyota Kata, a process associated with the Toyota Production System. It involves:

      • The Improvement Kata, a four-step routine focused on setting challenging objectives, understanding the current situation, defining the next target, and experimenting toward that target.
      • The Coaching Kata, which represents leadership's role in guiding individuals or teams through this improvement process, fostering continuous learning and problem-solving.

    Toyota Kata can be viewed as an adaptive process that integrates both proactive and reactive behaviours to pursue a better future state. Defining future objectives and targets is proactive, while experimenting towards successive targets is reactive.

    Raising Standards

    Improvement methodologies such as Toyota Kata are not the only way we can harness reactive behaviours to achieve proactive ends. Another approach is to leverage the system itself to improve. Raising standards induces the affected system, in the present, to adapt to new performance targets by invoking reactive behaviours. The system will initiate corrective actions to achieve and sustain the new level of performance. In this case, corrective actions are used as improvement actions triggered by the adoption of higher standards. This approach is proactive in terms of the future state of the system but reactive concerning the gap between the old and new standard. (A sketch of this dynamic follows at the end of this article.)

    An Integrative Approach

    The cases we have considered share similarities. Both change system performance, triggered by either past or future events, which create corrective or improvement actions respectively. When combined, they form an adaptive system: a system that has the ability to adjust and modify itself in response to changes in its environment or in accordance with specified goals. Such systems are designed to be flexible and responsive, allowing them to thrive in dynamic and evolving conditions.

    Adaptability is one of the properties of the Operational Compliance Model we introduced in previous articles. Instead of building compliance systems that react only to past events, we design them to respond to anticipated future events. This is accomplished by introducing feed-forward processes and behaviours that, when combined with feed-back processes and behaviours, create adaptive cycles of change across three critical aspects: conformance, performance, and effectiveness.

    Creating an adaptive system harnesses the power of reactivity to achieve proactive ends. When it comes to compliance, proactivity is needed to stay ahead of risk, and reactivity to stay between the lines. Together they provide a powerful means for compliance to continuously adapt in the midst of changing obligations and uncertainties. This ensures that organizations always stay between the lines and ahead of risk. Not a luxury, but a necessity for mission success.
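
    Here is the sketch promised above: a minimal, illustrative model (my own toy numbers and gain, not from any real compliance system) of a reactive loop that corrects deviations from a standard. Raising the standard reuses the same corrective machinery as an improvement process.

    ```python
    # Reactive regulation: each cycle closes part of the gap (deviation)
    # between measured performance and the standard, like a PDCA loop.

    def corrective_step(current: float, standard: float, gain: float = 0.5) -> float:
        """Apply a corrective action proportional to the deviation."""
        deviation = standard - current
        return current + gain * deviation

    performance = 60.0   # measured performance (arbitrary units)
    standard = 70.0      # today's standard: the loop regulates toward it

    for _ in range(5):   # repeated corrective cycles
        performance = corrective_step(performance, standard)
    print(round(performance, 1))  # 69.7 -> deviation nearly eliminated

    # "Hacking" reactivity: raise the standard to a future target and the
    # same reactive loop now behaves as an improvement process.
    standard = 90.0
    for _ in range(5):
        performance = corrective_step(performance, standard)
    print(round(performance, 1))  # 89.4 -> the loop pursues the new goal
    ```

    The loop itself only ever reacts to a deviation; what makes the second phase proactive is the deliberate choice to raise the standard before any failure occurs.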

  • What Prevents Compliance From Failing?

    James Clear, author of Atomic Habits, writes: “You do not rise to the level of your goals. You fall to the level of your systems.” He is correct. Left on our own, we drift into disorder, away from our goals. Systems prevent you from falling into disorder. They act as guardrails by resisting change to reduce variation. Now, how do you raise your system levels? That’s the role of management programs, which introduce change. They adjust system targets to higher levels of performance to advance overall outcomes. Programs bridge the gap between operational objectives and organizational outcomes by elevating the quality of our systems. Without them you fall to the level of procedural conformance. With them you elevate your compliance to higher standards of safety, security, sustainability, and other compliance objectives. Programs are an essential component of operational compliance, necessary (but not sufficient) to meet performance and outcome-based obligations. Are you missing this essential function of compliance?

  • One Day or Day 1

    Many organizations recognize that meeting all their obligations and staying ahead of risk requires adopting a holistic, proactive, and integrative approach to compliance. However, they also find themselves trapped by a siloed, reactive, and divided practice reinforced by years of prescriptive rules and audits.

    They often tell me: “I know we need to change, but we have too much on our plate. We’re too busy putting in controls, auditing, and working on corrective actions to be proactive. Perhaps one day we will be in better shape to change.”

    But I tell them: that day will never come, you will never catch up, and you will never make the changes you need to really protect value creation and keep all your stakeholder commitments.

    The difference between compliance failure and success depends on one decision: One Day or Day 1?

    You need to decide to change today. You may not know what’s needed or how to proceed at first. That can be improved over time. But no change will happen until you decide to start. You can wait until something bad happens, when it might be too late to change. Or you can decide to make One Day into Day 1.

  • Operational Compliance

    🔸The Law of Inevitable Ethical Inadequacy🔸

    The cybernetics law of Inevitable Ethical Inadequacy is simply stated as: “If you don’t specify that you require a secure ethical system, what you get is an insecure unethical system."

    This means that unless the system specifies ethical goals, it will regulate away from being ethical and towards the other goals you have targeted. You can replace the word "ethical" with "safety" or "quality" or "environmental," which are more concrete examples of ethics-based programs that govern an organization. If they are not part of a value creation system, then according to this law the system (in this case the value chain) will always optimize away from "quality," "safety," or "environmental" goals towards non-ethical outcomes. This dynamic may help explain the tensions that always exist between production and safety, or production and quality, and so on. When productivity is the only goal, the value chain will regulate towards that goal at the expense of all others. (The toy example below illustrates the idea.)

    This has never been more important than now, when it comes to the use of Artificial Intelligence (AI). If organizations want to steer away from harms associated with the use of AI in their value chain, they must explicitly state their objectives for the responsible use of AI. Otherwise they will inevitably optimize towards productivity at the expense of ethical values.

    In theory and in practice, compliance outcomes cannot be separate objectives overlaid on top of operational systems and processes. Compliance goals must be explicitly specified in the value outcomes we intend to achieve. Compliance must also have corresponding operational programs to regulate the business towards those outcomes. That’s why we are seeing more roles in the “C-Suite” such as Chief Security Officer, Chief Safety Officer, Chief Sustainability Officer, and so on. These are the general managers of the programs needed to regulate the organization towards targeted compliance outcomes.

    This is the world of Operational Compliance – the way organizations operate in high-risk, highly regulated environments. They are highly regulated not only because of government regulation, but also because they want to ensure they advance the outcomes they want and avoid the ones they don't.
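
    A toy example (my illustration; the operating modes and numbers are invented) of how an optimizer regulates away from any goal that is absent from its objective:

    ```python
    # Hypothetical operating modes: (productivity, safety) pairs.
    modes = [
        (100, 0.2),  # run hot: maximum output, minimal safeguards
        (85, 0.6),
        (70, 0.9),   # run safe: lower output, strong safeguards
    ]

    # Objective 1: productivity only. Safety never enters the comparison,
    # so the system optimizes away from it, as the law predicts.
    print(max(modes, key=lambda m: m[0]))          # (100, 0.2)

    # Objective 2: safety specified as part of the goal. Now the system
    # regulates towards value creation and safety together.
    print(max(modes, key=lambda m: m[0] * m[1]))   # (70, 0.9)
    ```

    The point is not the particular scoring function but that any goal left out of the specification carries zero weight in what the system optimizes.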

  • Why Compliance Might Be Caught In A Trap

    Over the years I have learned that many organizations increasingly find they are not able to keep up with all their compliance obligations. On paper they are fine, but in practice it is another story altogether. The cause can be attributed partly to the expansion of regulatory requirements. To stay between the lines, many choose to double down on audits and inspections. However, this often proves too slow and too late to drive needed improvements, let alone keep up with the speed of risk.

    The traditional approach to compliance, characterized by reactive, siloed, and reductive practices, is unable to deliver what organizations need to meet all their obligations associated with safety, security, sustainability, environmental, quality, regulatory, fraud, and other compliance objectives. Working hard at following rules and procedures is not working, nor is it enough to realize the benefits of their efforts. Organizations are still unable to answer questions such as: Are they any safer? Is their quality better? Does their security provide adequate protection? Is fraud reduced? These have more to do with the outcomes of compliance than with adherence to prescriptive rules.

    In many ways, organizations are caught in a trap of working hard and hoping for the best, not knowing if their efforts will be effective in any unit of measure. As a result, these organizations are vulnerable and perhaps only one mishap, one non-conformance, one violation, one breach, or one explosion away from mission failure.

    An Old Sign On The Door

    How can organizations escape this trap when the sign on the compliance door reads: “We are in compliance with all applicable rules, laws and regulations as far as we know. Will be back after our next incident." When there is nothing to improve, there is no need of escape.

    However, there are important reasons to escape this trap. Over the last decade regulators have started to modernize their programs to become more risk-based, moving away from rules towards performance and outcome-based designs. The intended impact is to enhance public safety beyond what prescription alone could provide. This means that regulators are now more focused on risk mitigation than on adherence to rules. Also, in recent years the number and nature of obligations has increased, coming from industry, stakeholders, and the investment community, connected with ESG, climate change, carbon neutrality, environmental sustainability, cyber security, and many other objectives. We have reached a tipping point where there are just as many non-regulatory as regulatory requirements that need to be managed. Compliance needs a new sign.

    A Better Sign And A New Hope For Compliance

    Operationalizing obligations requires more than training, following procedures, completing checklists, and conducting audits. Organizations must learn how to advance towards targets, handle risk, and continually improve their performance. This requires that organizations adopt an operational approach: one that is proactive, integrative, and holistic. A program that reduces waste, handles risk, and delivers compliance outcomes rather than only audit reports. Compliance must become an operational function, not just an administrative expense.

    Organizations that have implemented an operational program for their compliance have a new sign on their door: “We are experiencing the benefits of our compliance and improving our effectiveness with confidence every day. Meet you up ahead, already there." That's a better sign and a better way to do compliance.

  • How Do You Feel About Compliance?

    When it comes to practising compliance, it often feels like driving a car, or more precisely a standard (small pun intended), one with a gear shift and a clutch. My first car was a standard, and I remember what it was like to use a clutch, watch where I was going, and steer the car to avoid hitting anyone – all at the same time. It was overwhelming, at least at first. You definitely wonder if it might be better to just focus on one thing, to make it simpler and less overwhelming. Perhaps just focus on the brakes – that should be enough? But will it really be enough to get you from where you are now to where you want to go? The answer is no. You need to learn all that’s essential for you to drive, and that means learning how things work together, not just on their own. And this can only be learned by practising them at the same time.

    The same question should be asked of compliance when it comes to meeting obligations. Is focusing on the parts of compliance really enough to get you from where you are now to where you need to be with your obligations? For compliance to be successful, you also need to practise everything that’s needed – all at the same time. We need to master how to drive the whole compliance system – not just how to work the parts. And yes, it will feel like driving a standard. However, in time, driving compliance will become second nature and you will focus more on the journey and what destinations you might visit rather than on the different parts of the system and the dynamics of driving.

    You will start to experience the benefits of compliance. And this will feel very different. You will look forward with anticipation to the benefits you will experience because you have learned how to successfully drive compliance towards targeted outcomes. And you will be filled with assurance rather than anxiety, knowing that you have what it takes to make it happen. This may sound like a luxury or a nice-to-have, but it is a necessity for those where compliance failure means mission failure.

    When it comes to getting to where we want to go, we expect to use an entire car, and to learn how to drive it so that it delivers what’s promised. Not a luxury, but what’s expected. Why don’t we expect the same from our compliance? Why are we not using the entire program and learning how to drive it so that it delivers what's promised – all the commitments we have made associated with all our obligations? We can feel differently about our compliance. We can feel assurance (confidence and certainty) rather than anxiety (unease and worry). But we first need to learn how to drive.
