
  • To Move Forward, You Need to Leave Some Things Behind

    Running the Race: To succeed in life and in business, we need to avoid obstacles on our path or mitigate their effects. This is historically the practice of Risk Management – the identification and handling of the effects of uncertainty on the objectives that guide us to our goals. What is also necessary, however, is to leave behind the obstacles that are holding us back or might slow us down from achieving our objectives. At a fundamental level, this is the practice of Lean Management – the identification and removal of waste (another form of risk) that consumes our energy, leaving us without the strength we need to reach our goals. To achieve what matters most, there is a saying that captures this truth: “So then, like people running a race, we must take off everything that is heavy. We must put off all wrong, wrong things that get in our way. We must not stop running until we reach the mark that has been put in front of us.” – Worldwide English (New Testament)
    To move forward, we need to leave some things behind – those things that trip us up, slow us down, or keep us from achieving our mission:
    - What habits or practices may cause you to trip or fall?
    - What work are you doing that no longer needs to be done or could be done by someone else?
    - What might cause you to give up prematurely?
    - What do you need to take off and leave behind to better run your race?
    If you need a risk-adjusted plan of success for your compliance, consider engaging in one of our Compliance Kaizens.

  • Culture Doesn't Drive Practice – Practice Drives Culture

    There's a common misconception in organizational development that culture is something we can deliberately engineer to achieve success. Many leaders and consultants advocate for "building the right culture" as a prerequisite for implementing quality improvements or organizational change. This thinking, however, fundamentally misunderstands how culture actually develops and functions within organizations.
    The Cart Before the Horse
    When executives say, "We need to create a quality culture before we can improve our processes," they're putting the cart before the horse. Culture isn't a lever we can pull to generate desired outcomes. Rather, it's the accumulated residue of consistent actions, decisions, and behaviours over time. It's more like a shadow that follows us than a tool we can wield. Think of organizational culture as similar to a person's character. You don't develop integrity by deciding to "have an integrity culture." You develop integrity by consistently making ethical choices, telling the truth, and following through on commitments. The reputation for integrity follows these actions; it doesn't precede them.
    The True Path to Cultural Change
    The reality is that meaningful cultural change begins with concrete actions and practices. If you want a quality culture, start by:
    - Implementing robust quality control processes
    - Training teams in quality management techniques
    - Measuring and tracking quality metrics
    - Recognizing and rewarding quality-focused behaviour
    - Addressing quality issues promptly and thoroughly
    Over time, as these practices become routine and their benefits become apparent, they naturally shape the organizational culture. Team members begin to internalize quality-focused thinking not because they were told to have a "quality mindset," but because they've experienced firsthand the value of quality practices.
    Learning from Successful Organizations
    Organizations that successfully develop strong cultures don't achieve this by focusing on culture itself. Toyota didn't become synonymous with quality by launching culture initiatives. Instead, they relentlessly focused on implementing and refining their production system, developing standardized work processes, and practising continuous improvement. The renowned Toyota culture emerged as a natural consequence of these sustained practices.
    The Danger of Culture-First Thinking
    Treating culture as a tool or prerequisite for improvement can be actively harmful. It often leads to:
    - Paralysis: teams waiting for the "right culture" before making necessary changes
    - Superficial solutions: focusing on cultural artifacts (mission statements, values posters) rather than substantive changes
    - Misallocation of resources: investing in culture-building exercises instead of practical improvements
    - Frustration: when cultural change initiatives fail to deliver tangible results
    Culture is Not a Tool for Success, it's Evidence of Success
    Culture is the natural byproduct of consistent actions and practices over time. By focusing on implementing and maintaining the right practices, rather than trying to engineer culture directly, organizations can achieve both their immediate objectives and the cultural changes they desire. The next time someone suggests you need to change your culture before you can improve, remember: culture doesn't drive practice – practice drives culture.
    Moving Forward: Action First, Culture Follows
    Instead of viewing culture as a tool for success, organizations should focus on implementing the specific practices and behaviours they want to see. Want a culture of innovation? Start by creating time and space for experimentation. Want a culture of customer service? Begin by improving your response times and service quality metrics. The cultural shift will follow naturally as these practices prove their value and become embedded in daily operations. It's through this sustained practice that beliefs, attitudes, and ultimately culture evolve.

  • When Words Are Not Enough: The Limitations of AI in Understanding Reality

    When Words are Not Enough: The Limitations of AI
    In the race toward artificial general intelligence, we find ourselves at a curious crossroads. Massive data centers spring up across the globe like modern-day temples, housing the computational power needed to process vast amounts of human knowledge. These centers feed our most advanced language models, which parse through billions of words describing everything from scientific discoveries to human experiences, searching for patterns that might unlock deeper understanding of our world.
    This technological pursuit has undeniably accelerated our scientific understanding. AI systems can now analyze research papers at unprecedented speeds, identify patterns in complex datasets, and generate hypotheses that might have taken humans years to formulate. They serve as powerful tools in our quest to understand the universe's underlying mechanics.
    Yet, there's a fundamental limitation in this approach that we must acknowledge: AI systems don't directly observe or experience the world – they only see it through the lens of human description. It's as if we're asking them to understand a sunset by reading poetry about it, without ever witnessing the actual play of light across the evening sky.
    This abstraction from reality creates a significant blind spot. The world as described in text, no matter how detailed or extensive, represents only a fraction of what exists. Consider how much of your daily experience resists capture in words: the precise sensation of warm sand between your toes, the ineffable feeling of connecting with a piece of music, or the subtle emotional resonance of a loved one's presence.
    Perhaps most crucially, words fall short when we attempt to capture the most fundamental aspects of human experience – beauty, goodness, and truth. These concepts exist in a realm beyond mere description. Beauty isn't just a set of aesthetic principles; it's a lived experience that touches something deep within us. Goodness cannot be reduced to a list of moral rules; it emerges from the complex interplay of intention, action, and consequence. And truth? Truth often reveals itself in the spaces between words, in the direct experience of reality that no description can fully convey.
    As we continue to advance AI technology, we must remain mindful of these limitations. While AI represents a powerful tool for processing and analyzing human knowledge, it cannot replace the direct experience of being in the world. The map, no matter how detailed, is not the territory. Perhaps the real promise of AI lies not in its ability to replicate human understanding, but in its potential to complement it, leaving us more time and space to engage with those aspects of existence that transcend description.
    In our pursuit of artificial intelligence, we would do well to remember that some of life's most profound truths can only be known through direct experience. They must be lived, felt, and understood in ways that no amount of data processing can capture.

  • Will AI Replace Professionals?

    Professional practice represents far more than technical expertise or procedural knowledge. It embodies a complex integration of technical mastery with moral judgment, developed through years of learning and experience. Doctors, lawyers, engineers, geologists, and other professionals operate within ethical frameworks that guide their decisions and actions. These professionals don't simply apply rules—they exercise wisdom, judgment, and moral reasoning in service of society.
    The Current State of AI
    Artificial intelligence has indeed made remarkable progress in performing specific tasks within professional domains. AI can analyze medical images, review legal documents, optimize engineering designs, or process geological data with impressive accuracy. However, this capability in executing discrete tasks should not be confused with the full scope of professional practice.
    The Two Modes of Thinking
    To understand AI's limitations in professional practice, we can turn to neuroscientist Iain McGilchrist's framework of brain hemisphere functionality (The Master and His Emissary). This framework helps explain why AI excels at certain tasks while falling short of what is required for professional practice.
    Machine-Like Intelligence (Left Hemisphere – apprehending)
    AI demonstrates remarkable proficiency in functions that mirror left-hemisphere characteristics:
    - Sequential processing and analytical reasoning
    - Categorization and rule-based decision making
    - Processing explicit knowledge and fixed representations
    - Focusing on isolated parts rather than wholes
    - Operating within predetermined parameters
    - Quantitative analysis and literal interpretation
    This alignment explains why AI, and computing in general, has successfully replaced many mechanistic, routine tasks in organizations. Traditional organizational structures, with their emphasis on standardization and procedural efficiency, have created natural opportunities for AI integration.
    Professional Wisdom (Right Hemisphere – comprehending)
    However, professional practice also requires capabilities that align with right-hemisphere functions:
    - Understanding context and implicit meaning
    - Processing new experiences and adapting to uncertainty
    - Exercising emotional intelligence and empathy
    - Recognizing complex patterns and relationships
    - Making nuanced judgments based on experience
    - Integrating ethical considerations with technical knowledge
    These capabilities emerge from human experience, moral development, and professional wisdom—qualities that cannot be reduced to algorithms or data processing.
    Looking Forward
    Organizations are increasingly recognizing the limitations of purely mechanistic approaches. This awareness has led to a growing emphasis on what McGilchrist, along with others, terms "whole-brain" thinking in professional practice and organizational governance. This shift acknowledges that effective organizational and professional practice requires both technical expertise and human wisdom. Current AI systems, despite their sophistication, remain firmly within the domain of left-hemisphere functionality. They can process information, follow rules, and may even make up their own rules, but they cannot replicate the contextual understanding, ethical reasoning, and professional judgment that characterize true professional practice.
    The relationship between AI and professional practice will no doubt continue to be defined in the years ahead. AI will continue to evolve, increasingly handling the routine, mechanistic aspects of organizational and professional work. However, the core of professional practice—the integration of technical expertise with moral judgment, contextual wisdom, and ethical reasoning—will remain uniquely human. Professional practice ultimately represents the embodiment of not just knowledge, but conscience, wisdom, and a fundamental commitment to serving society's best interests – to do good, not harm. These essential qualities ensure that while AI may enhance professional practice, it cannot and should not replace the professionals themselves.

  • Compliance Improvement Spiral

    Everything flows, and so must compliance. Compliance cannot stay the same; it must continually improve, but just as importantly, it must continually innovate. These forces help define the difference between compliance programs and compliance systems:
    - Compliance Programs introduce change to achieve better outcomes. Innovation is characterized by creating potential, introducing novelty, and exploiting opportunities on objectives (positive risk) – pro-activity.
    - Compliance Systems resist change to achieve greater consistency. Improvement is characterized by the closing of gaps, reduction in variation, and amelioration of threats on objectives (negative risk) – re-activity.
    While compliance needs both, the emphasis today is on building systems without the benefit of a program. It's no wonder compliance has struggled to measure, let alone achieve, effectiveness. Without programs, compliance does not have the context or the conditions for compliance systems to know what and how to improve to achieve better outcomes. To achieve compliance success in the year ahead, ensure you have an operational compliance program to help guide and steer your systems towards higher standards.

  • Compliance Must Be Intelligent

    AI Safety Labels
    There is an idea floating around the internet and within some regulatory bodies that we should apply safety labels to AI systems, akin to pharmaceutical prescriptions. While well intended, this is misguided for a variety of reasons, chiefly AI's adaptive nature. Unlike static technologies, AI systems continuously learn and evolve, rendering traditional regulatory controls such as audits and labelling obsolete the moment they are conducted. To effectively manage AI safety, regulatory frameworks (i.e., systems of regulation) must be real-time, intelligent, and capable of anticipating potential deviations. Following the laws of cybernetics, to be a good regulator, a system must be a model of the system it regulates. What this means in practice is that to regulate artificial intelligence, compliance must also be intelligent.
    Why AI Safety is Different
    The prevailing approach to meeting compliance obligations (e.g., safety, security, sustainability, quality) consists of conducting point-in-time comprehensive audits designed to validate a system's performance and assess potential risks. This method works effectively for traditional technologies but becomes fundamentally flawed when applied to AI. Traditional engineered systems are static entities with predefined, unchanging behaviours. In contrast, AI systems represent a new paradigm of adaptive intelligence. An AI system's behaviour is not a fixed state but a continuously shifting landscape, making any single-point assessment obsolete almost instantaneously. Unlike a medication with a fixed chemical composition or a traditional software application with static code, AI possesses the remarkable ability to learn, evolve, and dynamically modify its own behavioural parameters – it can change the rules. This means effective AI safety cannot be reduced to a simple label based on an assessment that happened sometime in the past.
    Learning from Other Domains
    Software as a Medical Device (SaMD): The Software as a Medical Device (SaMD) domain provides a nuanced perspective on managing adaptive systems. In this field, "freezing" a model is a critical strategy to ensure consistent performance and safety. However, this approach directly conflicts with AI's core value proposition – its ability to learn, adapt, and improve.
    Design Spaces as Guardrails: Borrowing from the International Council for Harmonisation of Technical Requirements for Pharmaceuticals (ICH), we can conceptualize a more sophisticated approach centered on "design spaces" for AI systems. This approach transcends traditional compliance frameworks by establishing design boundaries of acceptable system behaviour: changes (or system adaptations) are permitted as long as the overall system operates within validated design constraints. In pharmaceuticals this is used to accelerate commercialization of derivative products, but it also offers important insights into how safety could be managed for adaptive systems such as AI (a brief sketch of the idea follows at the end of this post).
    An AI Regulatory Framework: Intelligent Compliance
    Laws of AI Regulation for Compliance
    Cybernetics pioneer Ross Ashby's Law of Requisite Variety provides a critical insight into managing complex systems. The law stipulates that to effectively control a system, the regulatory mechanism must possess at least as much complexity and adaptability as the system being regulated. For AI governance, this translates to developing regulatory frameworks (i.e., systems of regulation) that are:
    - Dynamically intelligent
    - Contextually aware
    - Capable of anticipating and preempting potential behavioural deviations in the systems they regulate
    The bottom line is that regulation, the function of compliance, must be as intelligent as the system it is regulating.
    Looking Forward
    Safety labels, while well-intentioned, represent a reductive approach to a profoundly complex challenge. Our governance models must innovate beyond traditional, static approaches and embrace the inherent complexity of adaptive intelligence to ensure critical system attributes are present, including:
    - Safety: proactively preventing direct harm to users, systems, and broader societal contexts
    - Security: robust protection against potential manipulation, unauthorized access, and malicious exploitation
    - Sustainability: ensuring long-term ethical, environmental, and resource-conscious considerations
    - Quality: maintaining consistent performance standards and reliable outputs
    - Ethical Compliance: adhering to evolving societal, moral, and cultural standards
    - And many others
    Developing intelligent, responsive compliance mechanisms represents a complex, multidisciplinary challenge. These guardrails must themselves be:
    - Self-learning and self-updating
    - Transparent in decision-making processes
    - Capable of sophisticated, nuanced reasoning
    - Flexible enough to accommodate emerging technologies and societal changes
    The path forward requires unprecedented collaboration across domains:
    - Researchers pushing theoretical and technological boundaries
    - Ethicists exploring philosophical and moral implications
    - Legal experts developing adaptive regulatory frameworks
    - Compliance professionals creating innovative regulation mechanisms
    - Policymakers establishing forward-looking governance structures
    - Engineers designing and building responsible and safe AI
    The future of AI governance, including the associated systems of regulation, lies not in simplistic warnings based on static audits, but in developing intelligent, responsive, and dynamically evolving regulatory ecosystems. It's time for compliance to be intelligent.
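    To make the "design space" idea concrete, here is a minimal sketch of a continuous guardrail that checks whether an adapted system still operates inside validated boundaries. It is illustrative only: the metric names, thresholds, and observed values are assumptions, not part of any standard or product.
    ```python
    # Minimal sketch of a "design space" guardrail for an adaptive AI system.
    # Metric names and limits below are illustrative assumptions.
    from dataclasses import dataclass


    @dataclass
    class DesignSpace:
        """Validated operating boundaries agreed during system validation."""
        limits: dict[str, tuple[float, float]]  # metric -> (min, max)

        def violations(self, observed: dict[str, float]) -> list[str]:
            """Return the metrics that are missing or outside their validated range."""
            out = []
            for metric, (low, high) in self.limits.items():
                value = observed.get(metric)
                if value is None or not (low <= value <= high):
                    out.append(metric)
            return out


    # Hypothetical boundaries established when the system was validated.
    space = DesignSpace(limits={
        "false_negative_rate": (0.0, 0.02),
        "prediction_drift":    (0.0, 0.10),
        "response_latency_s":  (0.0, 1.50),
    })

    # Metrics observed after the system has adapted (learned) in production.
    observed = {"false_negative_rate": 0.035,
                "prediction_drift": 0.04,
                "response_latency_s": 0.80}

    breaches = space.violations(observed)
    if breaches:
        print("Adaptation outside the validated design space:", breaches)  # escalate or roll back
    else:
        print("Adaptation accepted: system remains within validated bounds.")
    ```
    In this spirit, the check runs continuously alongside the system rather than as a one-time audit, which is what gives the regulating mechanism something closer to the requisite variety needed to keep pace with an adaptive system.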

  • AI Risk: When Possibilities Become Exponential

    Artificial Intelligence (AI) risk databases are growing, AI risk taxonomies and classifications are expanding, and AI risk registers are being created and added to at an accelerated rate. Here are a few resources that are attempting to capture them:
    - AI Risk Repository by MIT – https://airisk.mit.edu/
    - AI Risk Database – https://airisk.io/
    Unfortunately, this exercise is like "trying to stop the tide with a broom." How can we stay ahead of all the risk that is coming our way? A wise risk manager once told me, “If you want to eliminate the risk, eliminate the hazard.” Conceptually, this is how we now think about risk. Hazards are sources of uncertainty, and as we know, uncertainty creates opportunities for risk. You can try, and many will, to deal with the combinatorial explosion of the effects of AI uncertainty. They will create an ever-expanding risk taxonomy and corresponding practices. Unfortunately, they will soon discover that there will never be enough time, enough resources, or enough money to contend with all the risks that really matter. There are not enough brooms to push back the tsunami of AI risk. Yet, some will take the advice of the wise risk manager and contend with the uncertainties first. Their AI systems will handle not only the risks that are identified but also the ones still to emerge, because they will have removed the opportunity for risk to manifest in the first place. They will stop the tsunami from being created in the first place. Heed the advice of the wise risk manager: “If you want to handle AI risk, contend with the uncertainties first.”

  • The Evolution of AI Systems: From Learning to Self-Creation

    In today's world of artificial intelligence, not all systems are created equal. As we push the boundaries of technological innovation, we're witnessing a fascinating progression of AI capabilities that promises to reshape our understanding of intelligence itself.
    The Learning Foundation: Machine Learning Systems
    Imagine an AI that can learn from past experiences, much like a student studying for an exam. Machine Learning Systems are our first step into computational intelligence. These systems digest vast amounts of data, recognizing patterns and improving their performance over time. Think of recommendation algorithms that get better at suggesting movies or navigation apps that learn optimal routes – that's machine learning in action.
    Insights Beyond Patterns: Artificial Intelligence Systems
    But learning isn't just about recognition – it's about understanding. Artificial Intelligence Systems take the next leap by deriving meaningful insights from data. Where machine learning sees patterns, AI systems see stories, connections, and deeper meanings. They're not just calculating; they're interpreting. Picture an AI that can analyze market trends, predict scientific breakthroughs, or understand complex human behaviors.
    Autonomous Action: Agentic AI Systems
    The plot thickens with Agentic AI Systems – the problem-solvers with a mind of their own. These systems don't just analyze; they act. Imagine an AI that can make decisions, create strategies, and execute complex tasks with minimal human intervention. Still, they operate under human supervision, like a highly capable assistant who knows when to ask for guidance.
    The Frontier of Self-Evolution: Autopoietic AI Systems
    Here's where things get truly mind-bending. Autopoietic AI Systems represent the future edge of artificial intelligence – systems capable of changing both themselves and their environment. They're not just learning or acting; they're actively reshaping their world. Imagine an AI that can simultaneously redesign its own internal architecture and modify the external environment around it. These systems don't just adapt to the world – they transform it, creating new conditions, solving complex challenges, and fundamentally reimagining the interactions between technology and environment.
    Looking Forward
    From recognizing patterns to potentially redesigning themselves, AI systems are on an extraordinary journey. Each stage builds upon the last, pushing the boundaries of what we believe is possible. As we hurtle forward in this technological revolution, we must pause and ask the fundamental question: to what end? The artificial intelligence we are developing holds immense potential for transformative good—solving global challenges, advancing medical breakthroughs, and expanding human understanding. Yet, it also carries profound risks of unintended consequences, potential harm, and systemic disruption. Our task is not merely to create powerful technologies, but to guide them with wisdom, foresight, and a deep commitment to collective human well-being. We stand at a critical juncture where our choices will determine whether these intelligent systems become tools of progress or sources of unprecedented complexity and potential harm. The moral imperative is clear: we must approach this technological frontier with humility, ethical scrutiny, and a holistic vision that prioritizes the broader implications for humanity and our shared planetary future.

  • Safety of the Intended Functionality: Re-imagining Safety in Intelligent Systems

    When it comes to intelligent systems, safety has outgrown its traditional boundaries of risk assessment. While the traditional approach of Functional Safety focuses on protecting against system failures and random hardware malfunctions, Safety of the Intended Functionality (SOTIF) addresses new challenges of intelligent systems that can operate without experiencing a traditional "failure" yet still produce unintended or unsafe outcomes. The ISO 21448 (SOTIF) standard was introduced in 2022 to address these challenges and risk scenarios that include:
    - the inability of the function to correctly perceive the environment;
    - the lack of robustness of the function, system, or algorithm with respect to sensor input variations, heuristics used for fusion, or diverse environmental conditions;
    - the unexpected behaviour due to decision-making algorithms and/or divergent human expectations.
    These factors are particularly pertinent to functions, systems, or algorithms that rely on machine learning, making SOTIF crucial to ensure responsible and safe AI.
    Functional Safety vs. SOTIF
    Traditional Functional Safety, as used by standards like ISO 26262, primarily addresses risks arising from electronic system or component malfunctions. It operates on a predictable model where potential failures can be identified, quantified, and mitigated through redundancy and error-checking mechanisms. In contrast, SOTIF recognizes that modern intelligent systems—particularly those incorporating artificial intelligence and machine learning—can generate unsafe scenarios even when all components are technically functioning correctly.
    “An acceptable level of safety for road vehicles requires the absence of unreasonable risk caused by every hazard associated with the intended functionality and its implementation, including both hazards due to insufficiencies of specification or performance insufficiencies.” – ISO 21448
    Where Functional Safety sees systems as collections of components with measurable failure rates, SOTIF views systems as complex, adaptive entities capable of generating both intended and unexpected behaviours in the presence of uncertainty. Addressing this risk requires a more nuanced understanding of potential unintended consequences, focusing not just on what can go wrong mechanically or electrically, but on the broader ecosystem of system interactions and decision-making processes.
    Expanding Beyond Failure Mode Analysis
    Traditional safety models operate on a binary framework of function and failure, typically addressing risks through statistical probability and hardware redundancy. SOTIF introduces a more nuanced perspective that recognizes inherent uncertainty in intelligent systems. It shifts the safety conversation from "How can we prevent specific failures?" to "How can we understand and manage potential hazardous situations?" This is driven by the understanding that intelligent systems may exist within a context of profound uncertainty. Unlike mechanical systems with predictable, linear behaviours, intelligent systems such as autonomous vehicles interact with complex, often unpredictable environments.
    ISO 21448 uses the "Three Circle Behavioural Model" to illustrate where possible gaps may exist in overall safety. In this model, behaviour is categorized as follows (a small illustration follows at the end of this post):
    - The desired behaviour is the ideal (and sometimes aspirational) safety-oriented behaviour that disregards any technical limitations. It embodies the user's and society's expectations of the system's behaviour.
    - The specified behaviour (intended functionality) is a representation of the desired behaviour that takes into account constraints such as legal, technical, commercial, and customer acceptance.
    - The implemented behaviour is the actual system behaviour in the real world.
    From Automotive Origins to Broader Applications
    While SOTIF was created to support autonomous vehicles, its principles are universally applicable. The framework provides a conceptual model for understanding safety in any system that must make intelligent decisions in complex, dynamic environments. SOTIF represents a shift from reactive to proactive risk management. Instead of waiting for problems to emerge, this approach seeks to anticipate and design for potential challenges before they occur. It's a form of predictive engineering that requires a deep understanding of system design, limitations, and potential interactions. A critical aspect of SOTIF is its recognition of human factors. It's not just about how a system functions in isolation, but how it interacts with human operators, users, and the broader environment. This holistic view acknowledges that safety is fundamentally about creating systems that can work intelligently and responsibly alongside human beings.
    Looking Forward
    Safety of the Intended Functionality (SOTIF) is more than a technical standard—it's a new approach to understanding safety in an increasingly complex and uncertain landscape. It challenges us to think beyond traditional safety approaches, to see safety not only as the prevention of technical failure, but also as the assurance of intended outcomes. As we continue to develop more sophisticated intelligent systems, the principles of SOTIF offer a crucial framework for ensuring that our technological advances are not just beneficial, but fundamentally responsible.
    References:
    - ISO 26262:2018 (Road Vehicles – Functional Safety) – https://www.iso.org/standard/68383.html
    - ISO 21448:2022 (Road Vehicles – Safety of the Intended Functionality) – https://www.iso.org/standard/77490.html
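    As an informal illustration of the Three Circle Behavioural Model, the sketch below treats each circle as a set of behaviours over a handful of made-up driving scenarios and computes the gaps the model is meant to expose. The scenario names are illustrative assumptions only, not taken from ISO 21448.
    ```python
    # Informal sketch of the ISO 21448 "Three Circle Behavioural Model" using sets.
    # Scenario names are made-up examples, not from the standard.
    desired     = {"stop for pedestrian", "yield to cyclist", "hold lane in low sun glare"}
    specified   = {"stop for pedestrian", "yield to cyclist"}              # intended functionality
    implemented = {"stop for pedestrian", "phantom braking on shadows"}    # actual behaviour in the real world

    # Gaps the model highlights:
    specification_insufficiency = desired - specified      # expected by users/society, never specified
    performance_insufficiency   = specified - implemented  # specified but not realized on the road
    unintended_behaviour        = implemented - specified  # realized behaviour that nobody specified

    print("Specification insufficiency:", specification_insufficiency)
    print("Performance insufficiency:  ", performance_insufficiency)
    print("Unintended behaviour:       ", unintended_behaviour)
    ```
    Shrinking the last two sets, and narrowing the first, is essentially what SOTIF activities aim to do over the life of the system.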

  • The Philosophy of Operational Compliance

    The philosophy of Operational Compliance is a guiding mindset and approach that shapes how individuals and organizations align their actions with obligations associated with laws, regulations, ethical standards, and internal policies. It goes beyond mere rule-following to embrace a culture of accountability, integrity, and promise-keeping. This week I would like to share the core tenets of the philosophy of Operational Compliance:
    - Proactive Rather than Reactive: Operational Compliance philosophy revolves around anticipating, planning, and acting to achieve the outcomes of compliance. This extends to identifying risks and implementing controls before they become a reality. It also makes sure that everything goes right by ensuring the conditions for success are always present.
    - Risk-Based Approach: Not all risks are equal; operational compliance prioritizes areas with the highest potential impact on stakeholders. Tailoring compliance efforts to the organization's size, industry, and operational complexity ensures efficient resource allocation.
    - Culture of Integrity: Operational Compliance is seen as part of an organization's mode of operation, not just a regulatory function. Building a culture of always 'doing the right thing the right way' fosters trust among employees, customers, and regulators.
    - Alignment with Organizational Goals: Operational Compliance integrates with business objectives rather than being a separate or opposing force. The philosophy recognizes that ethical behaviour and meeting obligations contribute to long-term success and sustainability.
    - Continuous Compliance: Operational Compliance acknowledges that laws, regulations, voluntary obligations, and risks evolve, requiring continuous compliance. Ongoing monitoring, training, and updates to policies ensure the organization remains on mission and between the lines.
    - Transparency and Accountability: Embracing open communication about compliance priorities, challenges, and successes strengthens trust. Holding all members of the organization accountable—from leadership to entry-level staff—is central to effective compliance.
    - Engineered and Ethical: Operational Compliance involves leveraging knowledge and tools to make ethical decisions that are effectively implemented in practice. It embodies the essence of engineering, where obligations are fulfilled through intentional design informed by organizational values rather than relying solely on hard work and hoping for the best.
    - Focus on Stakeholders: Operational Compliance supports the organization while also benefiting its broader community of stakeholders, including customers, suppliers, and the wider community. It ensures that the organization's actions uphold its commitment to honouring all promises made to those invested in its activities.
    - Balancing Flexibility with Discipline: Operational Compliance allows for innovation and agility within the boundaries of obligations. It avoids rigidity that might stifle growth while maintaining strong controls where necessary.
    - Keeping Promises by Example: Leaders embody operational compliance by keeping promises and setting the tone from the top and throughout all management levels. Visible commitment fosters a strong compliance culture throughout the organization.
    The philosophy of Operational Compliance is about embedding a mindset of accountability, integrity, and promise-keeping into the DNA of an organization. It is less about checking boxes and more about fostering a resilient culture that protects value creation to ensure mission success and builds stakeholder trust. What do you think about Operational Compliance? What is your mode of operation for compliance? What defines your foundational principles and beliefs surrounding compliance?

  • Emergent Uncertainty

    As systems improve, we expect that the certainty of meeting objectives increases. Instead, what often happens is that systems become more complex over time, which results in the emergence of new uncertainty and, in turn, increased risk. It is at this point that systems become unresponsive and are no longer able to meet their objectives. Remediation is necessary to bring the system under control. However, this can be too slow and often too late to prevent future consequences. This is one of the reasons why you need to be proactive, which means: anticipate, plan, and act to prevent these conditions before they happen.

  • How to Define Compliance Goals

    Properly defining and setting goals is critical to mission success, including the success of environmental, safety, security, quality, regulatory, and other compliance programs. However, defining compliance goals remains a real challenge, particularly for obligations associated with outcome- and performance-based regulations and standards. When these goals are ambiguous or ill-defined, they contribute to wasted effort and ultimately compliance risk for an organization. To be more certain about goals, we first need to define what we mean by a goal and such things as objectives, targets, and the like. The following are definitions we have used that lay out a framework for goal-directed obligations (a sketch of how these pieces fit together follows at the end of this post).
    Outcomes
    These are the ends that we expect to attain over time, where progress is expected through the achievement of planned goals. They are often described in qualitative terms but may also have defined measures to indicate and track progress towards the desired outcome. An example outcome would be achieving carbon neutrality by 2050.
    Goals
    Goals are defined measures of intermediate success or progress. They are often binary, comparable to goal lines that are reached or not. Goals are usually connected to outcomes that are long-term in nature, whereas targets tend to be associated with performance and are short-term achievements. There are two kinds of goals, terminal and instrumental:
    - Terminal goals are the highest-level outcomes that we want to reach. They define the "ends" of our endeavours. For compliance these might include: zero defects, zero fatalities, zero violations, zero releases, zero fines, and others.
    - Instrumental goals are intermediate outcomes or results that are critical or that must occur in order to achieve the higher-level outcome. These are often used to define measures of effectiveness (MoE) for compliance programs as they provide a clear indication of progress towards terminal goals.
    Objectives
    Objectives are the results that we expect to attain over a planned period of time. These results contribute to (or cause) progress towards the targeted outcome. An outcome may require several objectives done in parallel, sequentially, continuously, and some contingent on others. Some form of causation model (deterministic, probabilistic, linear, non-linear, etc.) is needed to estimate the confidence level of creating the desired outcomes using planned objectives. In cases of greater uncertainty these models will be adjusted over time as more information is gathered and the correlation between objectives and outcomes is better known.
    Risk
    Risk is defined (ISO 31000, COSO) as the effect of uncertainty on objectives, which involves having a causation model. In practice, outcomes tend to be more uncertain than the achievement of objectives. However, everything happens in the presence of uncertainty, so it is important to properly identify uncertainty and contend with its effects. There are two primary forms of uncertainty:
    - Epistemic uncertainty: a lack of knowledge or know-how; this risk is reducible. Reducible risk is treated by buying down uncertainty to improve the probability of meeting each objective.
    - Aleatory uncertainty: caused by inherent randomness or natural/common variation; this risk is irreducible. Irreducible risk is treated by applying margin in the form of contingency, management reserve, buffers, insurance, and other measures to mitigate the effects of the risk.
    Targets
    Targets are a measure of performance (MoP) or progress when connected to an objective. These targets may be a single point or a range (min and max) of performance needed to achieve an objective.
    Strategy
    Strategy defines a plan for how goals, objectives, and targets will be obtained. Strategy is the approach to create the desired outcomes, as measured by terminal and instrumental goals, by achieving planned objectives at the targeted levels of performance, in the presence of uncertainty.
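    To make the relationships between these terms concrete, here is a minimal sketch of how the framework might be represented as a simple data model. The class names, fields, and example values are illustrative assumptions, not a prescribed schema.
    ```python
    # Minimal sketch of the goal-directed framework: an outcome is pursued through
    # objectives (each tracked by a target, a measure of performance), while goals
    # (terminal or instrumental) mark progress toward the outcome.
    from dataclasses import dataclass, field


    @dataclass
    class Target:                 # measure of performance (MoP) for one objective
        metric: str
        minimum: float
        maximum: float


    @dataclass
    class Objective:              # result expected over a planned period of time
        description: str
        target: Target


    @dataclass
    class Goal:                   # binary measure of progress toward the outcome
        description: str
        kind: str                 # "terminal" or "instrumental"
        achieved: bool = False


    @dataclass
    class Outcome:                # the long-term end the program aims at
        description: str
        goals: list[Goal] = field(default_factory=list)
        objectives: list[Objective] = field(default_factory=list)


    # Illustrative example only: carbon neutrality pursued through one near-term objective.
    outcome = Outcome(
        description="Achieve carbon neutrality by 2050",
        goals=[
            Goal("Zero net emissions", kind="terminal"),
            Goal("All sites reporting verified emissions data", kind="instrumental"),
        ],
        objectives=[
            Objective(
                description="Reduce Scope 1 emissions this planning period",
                target=Target(metric="emissions reduction (%)", minimum=5.0, maximum=10.0),
            ),
        ],
    )

    print(f"{outcome.description}: {len(outcome.objectives)} objective(s), "
          f"{sum(g.kind == 'terminal' for g in outcome.goals)} terminal goal(s)")
    ```
    A strategy, in these terms, would be the plan that selects and sequences the objectives (and their targets) expected to move the goals, and ultimately the outcome, in the presence of uncertainty.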
