
  • Zones of Compliance

    Which Zone Are You Operating In? Regulatory designs, of which there are four primary types spanning micro-means to macro-ends, demand different operational capabilities for compliance. In fact, at least half of an organization's obligations are non-legal requirements, having more to do with outcomes and performance than with rules or controls. Meeting all these obligations requires measures of conformance, measures of performance, measures of effectiveness, and measures of assurance. To establish these capabilities, organizations must transform how they address compliance. They need to adopt operational principles and practices that help ensure essential functions, behaviours, and interactions are working at levels sufficient to create the outcome of compliance. However, many organizations are caught in a prescriptive, reactive, and reductive trap where audits, complaints, and incidents are the only drivers for change. They are operating at the edge of uncertainty: one violation, one injury, one defect, or one mishap away from mission failure. They are operating in the REACTIVE COMPLIANCE ZONE. Here, compliance functions as a guardrail, the last line of defence at the end of the line. Instead of operating at the edge of uncertainty, ethical and forward-thinking organizations operate in the PROACTIVE COMPLIANCE ZONE, where compliance functions as an offensive force, ensuring that organizations are always between the lines and ahead of risk. Instead of a guardrail, compliance becomes a dynamic enabler of compliance outcomes, proactivity, and holistic improvements, triggered by the presence of uncertainty rather than only by incidents that happened in the past. Operating in the PROACTIVE COMPLIANCE ZONE creates a strong compliance culture, ensuring not only compliance success but also mission success.

  • Training Users To Be Unethical

    The decline of moral integrity. Every time we skip past an EULA (end-user license agreement) and just click the checkbox, we give up something, and probably more than we realize. We have given up our data, our privacy rights, our software ownership, the content we create, and other things that may not be in our best interest. But more than all of those, we have given up our moral integrity. This doesn't mean these practices were necessarily illegal or in violation of any government regulation. However, slowly but surely, we have agreed to practices that in some cases were arguably unethical, unjust, unfair, and unwarranted. Even the act of signing the EULA without reading it has ceded moral territory. By our actions we reinforced that these licenses don't matter, and that what we are giving up doesn't matter either. We were implicitly being asked to "just trust." And here's the thing: by agreeing to what is unethical, we become more unethical ourselves. Over time we lower our standards, our values, and our morality. And for what? Who knows what else we might agree to, knowingly or unknowingly, for the promise of a shiny new application, platform, or AI chatbot. How far are we willing to lower our standards? There is more at stake than access to software; we are at risk of losing our souls. With the rise of artificial intelligence systems, the demand for data and access to our digital representations is growing without bounds. Many have already naively given up confidential corporate and private data to AI chatbots, putting themselves and their businesses at risk. We are placing our trust in something that has not earned it. We haven't performed our due diligence. We did not ask critical and important questions. We just clicked the box. How did we get here? It's not hard to believe that years of skipping EULAs has trained us to simply trust technology and the organizations behind it. Don't look too closely, don't ask too many questions, and don't read the small print. Just click the box and everything will be fine. We may believe we don't have any choices, but we always do. Don't accept anything that weakens your ability to live by your higher standards or might otherwise compromise your moral integrity.

  • Are AI-Enhanced KPIs Smarter?

    Using Key Performance Indicators (KPIs) to regulate and drive operational functions is table stakes for effective organizations and for those that want to elevate their compliance. In a recent report by MIT Sloan Management Review and Boston Consulting Group (BCG), "The Future of Strategic Management: Enhancing KPIs with AI," the authors present the results of a global survey of more than 3,000 managers and interviews with 17 executives, examining how managers and leaders use AI to enhance strategic measurement and advance strategic outcomes. More specifically, their study explores how these organizations have adopted KPIs and created new ones using AI. The authors categorize AI-enhanced KPIs as follows: Smart Descriptive KPIs synthesize historical and current data to deliver insights into what happened or what is happening. Smart Predictive KPIs anticipate future performance, producing reliable leading indicators and providing visibility into potential outcomes. Smart Prescriptive KPIs use AI to recommend actions that optimize performance. Furthermore, the report identifies that developing smart KPIs requires categorizing variables into three distinct types: strategic outcome variables, the well-known overarching targets such as revenue or profit; operational drivers, the variables that might impact the strategic outcome, such as pricing, consumer reviews, or website traffic; and contextual factors, external factors beyond a company's control, typically measured or tracked through external data such as consumer spending forecasts, inter-country freight, or government regulation.
    While there is some evidence that KPIs can be enhanced, the report suggests the need for a shift in mindset and practice with respect to each category of KPIs: from performance tracking to redefining performance; from static benchmarks to dynamic predictors; from judgment-first to algorithmically defined strategic metrics; from KPI management to smart KPI governance and oversight; from keeping an eye on KPIs to KPI dialogues and discussion; and from strategy with KPIs to strategy for and with KPIs. To facilitate these transitions (or disruptions), the authors provide several recommendations: realign data governance to enable measurably smarter KPIs; establish KPI governance systems; use digital twins to enhance key performance metrics; prioritize cultural readiness and people-centric approaches; and pursue strategic alignment with smart KPIs. My Thoughts: In general, Key Performance Indicators (KPIs) should by definition have predictive utility, which separates them from the broader set of metrics one might otherwise measure. The three categories of KPIs outlined in the report suggest how KPIs might be used given their predictive quality. KPIs with low correlation might help describe what's happening, but they are poor candidates compared with those showing significant correlation. However, even good KPIs cannot suggest how to effect performance changes. Making system changes relies on knowing which measures of effectiveness, performance, conformance, and assurance are targeted, along with an understanding of the underlying concept of operations. Notwithstanding, the use of AI does hold promise for lagging indicators, where it may find new and different correlations. Leading indicators, however, are a different story. Leading indicators are the holy grail of operational performance and require knowledge of what should be rather than only what once was. Data describing this knowledge seldom appears in operational records or logs and would need to be integrated with an AI system.
    Without controlled experiments, claims of causation should always be taken with a grain of salt. We need to be mindful that the future is not as deterministic as some may believe. When human agency is involved, the future is open, not closed or bound to AI predictions. It's helpful to remember that there are other forces at work: You can't turn lagging indicators into leading indicators. (Risk Theory) You can't turn an "is," a description of what is, into an "ought," a prescription of what should be. (Hume's Law) A system will always regulate away from outcomes you don't specify. (Ashby's Cybernetics Law of Ethical Inadequacy) When a measure becomes a target, it ceases to be a good measure. (Goodhart's Law) What steps should be followed when using AI for KPIs? Instead of treating AI as a solution looking for a problem, first identify the problem that needs solving. Do you have a problem with decision making? Execution or follow-through? Conformance or regulation? Lack of understanding of operational systems, processes, and behaviours? Uncertainty and risk? Insufficient or untapped performance? When the problem is a lack of quality KPIs, then one might consider establishing a Smarter KPI Program. The MIT-BCG report makes an important point that is worth repeating: what they suggest is not so much about creating better KPIs as it is about establishing an ongoing set of processes, practices, and a mindset for using algorithmically defined metrics. This requires more than following a procedure. The following questions will help define the context for such a program: What do better KPIs look like? What strategy should we follow to achieve that? What capabilities do we need to support this strategy? What obstacles or opportunities need to be negotiated or exploited? What measures will be used to define success?
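The point about correlation as a first screen for KPI candidates can be sketched in code. A minimal, hypothetical example (the metric names, data, and lag are invented for illustration): it ranks candidate metrics by how strongly their earlier values correlate with a later outcome, which is only a screen for leading-indicator potential, not evidence of causation.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def screen_kpis(candidates, outcome, lag=1):
    """Rank candidate metrics by how well their lagged values
    correlate with a later outcome (a leading-indicator screen).
    Correlation says nothing about causation or about how to
    change the underlying system -- see Goodhart's and Hume's
    cautions above."""
    scores = {}
    for name, series in candidates.items():
        # Shift each candidate back by `lag` periods so every value
        # is compared against the outcome that follows it.
        scores[name] = pearson(series[:-lag], outcome[lag:])
    return sorted(scores.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical monthly data: one metric leads the outcome, one is noise.
candidates = {
    "training_hours": [1, 2, 3, 4, 5, 6],
    "noise": [3, 1, 4, 1, 5, 9],
}
outcome = [0, 2, 4, 6, 8, 10]
print(screen_kpis(candidates, outcome, lag=1))
```

A screen like this can only surface candidates from historical (lagging) data; as noted above, true leading indicators require knowledge of what should be, which seldom lives in operational logs.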

  • Don’t Confuse Computer Programs with Compliance Programs

    Many organizations identify the need for a program to help them meet all their compliance obligations. They then procure a computer program (or a suite of them) that claims to be the best solution to their compliance challenges and promises to help them achieve compliance success. After implementation, they may observe that the solution has helped them report on data and metrics, store and manage procedures, keep track of controls, and remind them when to get ready for their next audit. However, many will also discover that their quality, security, safety, sustainability, or environmental outcomes have not improved. They are still just as uncertain as they were before that their compliance efforts are making a difference to what really matters. What they most likely thought they purchased was a compliance program: something that could actually advance outcomes and help them stay ahead of risk. A computer program, while necessary for managing information and perhaps helpful in achieving certification, is not enough for compliance success. If you want to make a qualitative difference to compliance outcomes, you need a compliance program, and preferably one that is operational. Don't confuse computer programs with compliance programs. Make sure you have what you really need for mission and compliance success.

  • We Don’t Live in Models; We Live in Reality

    With all the talk about artificial intelligence it’s easy to get caught up in a world of machine models, digital twins, and virtual reality. It’s important to remember that we don’t live in these worlds; we live in reality. The world we live in is not the “Matrix” nor is it a game we can start over with a push of a button. We have one life to live and one world to live in. The question is how best to live our real lives in the real world. May we have the courage to face and meet the demands of reality rather than escaping to simulated worlds with artificial friendships and artificial lives.

  • Smarter Than Human AI - Still a Long Way to Go?

    The rapidly advancing field of artificial intelligence, particularly large language models (LLMs), is constantly pushing the boundaries of what machines can achieve. However, directly comparing LLMs to human intelligence presents a nuanced challenge. Unlike the singular focus of traditional AI, human cognition encompasses a kaleidoscope of distinct but interconnected abilities, often categorized as "intelligences." Let's take a look at these twelve intelligences compared with the current capabilities of LLMs. Logical-mathematical prowess: Humans effortlessly solve equations, analyze patterns, and navigate complex numerical calculations. While LLMs are trained on vast datasets, their ability to perform these tasks falls short of the intuitive understanding and flexibility we exhibit. Linguistic mastery: We wield language with eloquence, weaving words into narratives, arguments, and expressions of creative genius. LLMs, while capable of generating human-like text, often struggle with context, emotional nuances, and the spark of true creative expression. Bodily-kinesthetic agility: Our ability to move with grace, express ourselves through dance, and manipulate objects with dexterity represents a realm inaccessible to LLMs, limited by their purely digital existence. Spatial intuition: From navigating physical environments to mentally rotating objects, humans excel in spatial reasoning. While LLMs are learning, their understanding of spatial concepts lacks the natural and intuitive edge we possess. Musical understanding: The human capacity to perceive, create, and respond to music with emotional depth remains unmatched. LLMs can compose music, but they lack the deep understanding and emotional connection that fuels our musicality. Interpersonal intelligence: Building relationships, navigating social dynamics, and understanding emotions represent complex human strengths. LLMs, though improving, struggle to grasp the intricacies of human interaction and empathy.
Intrapersonal awareness: Our ability to reflect on ourselves, understand our emotions, and set goals distinguishes us as unique individuals. LLMs lack the self-awareness and introspection necessary for this type of intelligence. Existential contemplation: Pondering life's big questions and seeking meaning are quintessentially human endeavours. LLMs, despite their ability to process information, lack the sentience and consciousness required for such philosophical contemplations. Moral reasoning: Making ethical judgments and navigating right and wrong are hallmarks of human intelligence. LLMs, while trained on moral frameworks, lack the nuanced understanding and ability to adapt these frameworks to new situations that we possess. Naturalistic connection: Our ability to connect with nature, understand ecological systems, and appreciate its beauty lies beyond the reach of LLMs. Their understanding of nature, while informative, lacks the embodied experience and emotional connection that fuels our appreciation. Spiritual exploration: The human yearning for connection with something beyond ourselves represents a deeply personal and subjective experience that LLMs cannot replicate. Creative expression: Humans innovate, imagine new possibilities, and express themselves through various art forms with unmatched originality and emotional depth. LLMs, although capable of creative output within defined parameters, lack the spark of true creativity. LLMs represent powerful tools with rapidly evolving capabilities. However, their intelligence remains distinct from the multifaceted and interconnected nature of human intelligence. Each of our twelve intelligences contributes to the unique tapestry of our being. While LLMs may excel in specific areas, they lack the holistic understanding and unique blend of intelligences that define us as humans. As we explore the future of AI, understanding these differences is crucial. 
    LLMs have a long way to go before they can match the full spectrum of human intelligence, but through collaboration they can enhance and augment our capabilities, not replace them. The journey continues, and further exploration remains essential. What are your thoughts on the comparison between human and machine intelligence? Let's continue the dialogue. Note: The theory of multiple intelligences, while accepted in some fields, is criticized in others. This demonstrates that more research and study is needed in the field of cognitive science, and that claims regarding "Smarter Than Human AI" should be taken with a healthy degree of skepticism.

  • Risk Blindness: A Failure in Risk Perception

    Imagine you're attending a meeting to discuss potential dangers during a company restructuring. You, and everyone else present, understand the importance of identifying and mitigating hazards. But then a key player storms out, claiming there's nothing to worry about. Confused? Welcome to the world of risk management, where narrow definitions often create blind spots and uncertainty. This is exactly what happened to me. We were discussing operational threats stemming from a reorganization, something particularly critical in high-risk industries. The Process Safety Manager (PSM), responsible for traditional hazard assessments, confidently declared he had no business there. "There are no hazards here," he stated before walking out. He was right, but only from a technical standpoint. In his world, "hazards" have a specific meaning. However, he failed to recognize a crucial point: the reorganization itself posed a risk to his ability to manage those very hazards. Changes in roles, responsibilities, systems, and processes could potentially disrupt his established safety protocols. In short, he was overlooking organizational hazards. This incident highlights a critical challenge in risk management: silos and fragmented definitions. Different domains have their own risk vocabularies, often leaving broader threats unseen. What we need is a more holistic approach, something like Total Risk Management (TRM). TRM would act as an umbrella, encompassing all potential sources of uncertainty that can impact an organization's success. It acknowledges that risks go beyond technical hazards and extend into organizational dynamics, reputational concerns, financial vulnerabilities, and more. ISO 31000, from my perspective, recognizes what is well known to many in high-risk industries: that uncertainty is the root cause of all risk. Uncertainty creates the opportunity for risk independent of its effects.
    In fact, several risk-based regulations mandate this approach and further define uncertainty's nature, distinguishing aleatory from epistemic uncertainty, distinctions that help direct which measures to use to handle them. These distinctions have helped manage complexities beyond the de minimis and provide the foundation for a universal definition. We're attempting to implement this philosophy, and while it's an uphill battle, it's one worth fighting. By adopting a broader perspective on both risk and compliance, we can ensure that even during periods of change we can effectively safeguard our people, our operations, and ultimately, our mission. Let's keep the dialogue going! Share your thoughts and experiences with risk and compliance. Together, we can elevate our understanding and create a safer, more resilient future for our organizations.

  • AI in PSM: A Double-Edged Sword for Process Safety Management

    Process safety management (PSM) stands as a vital defence against hazards in high-risk industries. Yet even the most robust systems require constant evaluation and adaptation. Artificial intelligence (AI) has emerged as a transformative force, promising both incredible opportunities and significant challenges for how we manage risk. In this article, we explore seven key areas where AI could reshape PSM, acknowledging both its potential and its limitations: 1. From Reactive to Predictive: Navigating the Data Deluge. AI's ability to analyze vast datasets could revolutionize decision-making. Imagine recommending not just which maintenance project to prioritize, but also predicting potential failures before they occur. However, harnessing this potential requires overcoming data challenges. Integrating disparate data sources and ensuring their quality are crucial steps toward reliable predictions and avoiding the pitfalls of biased or incomplete information. 2. Taming the Change Beast: Balancing Innovation with Risk. Change, planned or unplanned, can disrupt even the most robust safety systems. AI, used intelligently, could analyze the impact of proposed changes on processes, people, and procedures, potentially mitigating risks and fostering informed decision-making. However, over-reliance on AI for risk assessment could create blind spots, neglecting the nuanced human understanding of complex systems and the potential for unforeseen consequences. 3. Bridging the Gap: Real-Time vs. Paper Safety. The chasm between documented procedures and actual practices can pose a significant safety risk. AI-powered real-time monitoring could offer valuable insights into adherence to standards and flag deviations promptly. Nevertheless, concerns about privacy and potential misuse of such data cannot be ignored. Striking a balance between effective monitoring and ethical data collection is essential. 4. Accelerated Learning: Mining Data for Greater Safety, with Caution. Applying deep learning to HAZOPs, PHAs, and risk assessments could uncover patterns and insights not previously discovered. However, relying solely on assisted intelligence could overlook crucial human insights and nuances, potentially missing critical red flags. AI should be seen as a tool to support, not replace, human expertise. 5. Beyond Checklists: Measuring True PSM Effectiveness. Moving beyond simply "following the rules" towards measuring the effectiveness of controls in managing risk remains a core challenge for PSM. While AI can offer valuable data-driven insights into risk profiles, attributing cause and effect and understanding complex system interactions remain complexities that require careful interpretation and human expertise. 6. Breaking the Silo: Integrating PSM into the Business Fabric, Carefully. Integrating safety considerations into business decisions through AI holds immense potential for a holistic approach. At the same time, concerns about unintended consequences and potential conflicts between safety and economic goals must be addressed. Transparency and open communication are essential to ensure safety remains a core value, not a mere metric. 7. The Elusive Question: Proving "Safe Enough". The ultimate challenge? Guaranteeing absolute safety. While AI cannot achieve the impossible, it can offer unparalleled data-driven insights into risk profiles, enabling organizations to continuously improve and confidently move towards a safer state. However, relying solely on AI-driven predictions could mask unforeseen risks and create a false sense of security. True safety demands constant vigilance and a healthy dose of skepticism. AI in PSM presents a fascinating double-edged sword.
By carefully considering its potential and pitfalls, we can usher in a future where intelligent technologies empower us to create a safer, more efficient world, but without losing sight of the human element that will always remain crucial in managing complex risks. What are your thoughts on the role of AI in Process Safety Management (PSM)?

  • What is Operational Compliance?

    When people hear the phrase "Operational Compliance," they often think of it in the same way as "Operational Risk": a siloed function that audits conformance to legal rules and sits apart from, rather than embedded within, the business. However, that describes "Procedural Compliance," which is based on a traditional and reactive model of compliance. "Operational Compliance," by contrast, is based on a holistic and proactive model and defines a state of operability in which all essential compliance functions, behaviours, and interactions exist and perform at levels necessary to create the outcomes of compliance. These outcomes are associated with keeping promises connected with safety, security, sustainability, the environment, quality, regulatory adherence, corporate ethics, responsible AI, and ultimately stakeholder trust. "Operational Compliance" is governed by two fundamental organizational obligations: (1) stay between the lines, and (2) stay ahead of risk. These can only be advanced when compliance is integral to the value chain and when obligations are operationalized, both essential aspects of "Operational Compliance." Elevate your compliance by taking a step away from Procedural Compliance towards Operational Compliance, a more effective way to do compliance. Author's Note (Raimund Laqua): Follow me on LinkedIn or subscribe to Lean Compliance (free) to stay notified about my upcoming book on "Operational Compliance," expected to be published later this year.

  • Prioritizing CI Projects – Mission Impossible?

    Continuous improvement is needed across all business functions, including those responsible for safety, security, sustainability, quality, regulatory, and other stakeholder obligations. Whether you are responsible for maintenance, continuous improvement, or capital projects, there comes a day when you need to answer which projects you should do, and in what order, to improve the probability of mission success. Let's imagine that today you are that person who has to decide. Here is your challenge, should you choose to accept it: Mission Possible. Note: while this scenario is fictitious, it is based on real-world examples I have been involved in over the years. Scenario: You are the CI Officer responsible for continuous improvement across your organization. You have compiled a list of candidate projects that promise improvements to productivity (margin, throughput, costs, waste, etc.) as well as better outcomes for compliance, quality, safety, security, and so on. Some of these projects depend on others, and some may cause significant disruption before benefits are realized. Each has different costs, benefits, and risks associated with it. Some may actually fail, and some are critical to mission success. You need to decide which ones to do, and in what order, so they don't compromise current outcomes or productivity. In other words, improvements can't break the bank or the business. Ideally, changes (on the whole) should generate financial gains sufficient to fund other projects, creating a virtuous cycle of improvement. Problem: Create a self-funding continuous improvement (CI) portfolio providing a rank order of projects that optimizes overall outcomes and productivity while avoiding negative impacts to the business. Assume the first set of projects will receive sufficient capital to get things going.
    This initial set should be optimized to minimize the initial investment while being sufficient to create future gains that fund successive improvements based on the rank ordering of projects. New projects will be added at the end of each year and incorporated into the portfolio. Assumptions and Constraints: Your organization provides highly regulated services to customers. Your organization is organized as functional teams with hierarchical management. Advancing outcomes is preferred over cost reductions. The project portfolio should be self-funding beyond the initial seed investment. Mission-critical projects have the highest priority. No staff reductions; resources freed by improvements will be reallocated to support further improvements. Assume a five-year planning horizon with 20% new projects added each year. Assume that one-third (33%) of the projects are critical to mission success, but with varying degrees of criticality. Methodology and Approach: How would you meet this challenge? What approach would you use? What principles could be applied to categorize and select projects? What additional information do you need to know about the business, the projects, or otherwise? What capabilities are needed to meet the portfolio objectives? How would you ensure improvement benefits are realized? How would you manage and measure progress across the five years? And finally, would you accept this challenge? Why or why not?
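One way to start reasoning about the self-funding constraint is a greedy sketch: fund the highest-priority affordable project whose dependencies are met, bank its gains, and repeat. The project fields, priority rule, and numbers below are all hypothetical simplifications; a real portfolio would also need to model risk of failure, disruption before benefit, and schedule.

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    cost: float            # up-front investment
    annual_gain: float     # expected yearly benefit once delivered
    mission_critical: bool = False
    depends_on: list = field(default_factory=list)

def rank_portfolio(projects, seed_budget):
    """Greedy self-funding ordering: at each step fund the
    highest-priority affordable project whose dependencies are
    done, then add its (gain minus cost) back into the budget.
    A sketch only, not an optimizer."""
    done, order, budget = set(), [], seed_budget
    remaining = list(projects)
    while remaining:
        ready = [p for p in remaining
                 if set(p.depends_on) <= done and p.cost <= budget]
        if not ready:
            break  # nothing affordable yet: wait for gains or more capital
        # Mission-critical projects first, then best gain per dollar.
        ready.sort(key=lambda p: (p.mission_critical,
                                  p.annual_gain / p.cost), reverse=True)
        pick = ready[0]
        budget += pick.annual_gain - pick.cost
        done.add(pick.name)
        order.append(pick.name)
        remaining.remove(pick)
    return order

# Hypothetical three-project portfolio with one dependency.
projects = [
    Project("A", cost=10, annual_gain=15, mission_critical=True),
    Project("B", cost=50, annual_gain=80, depends_on=["A"]),
    Project("C", cost=5, annual_gain=6),
]
print(rank_portfolio(projects, seed_budget=12))
```

Note how "B" is deferred: it depends on "A" and exceeds the budget even after early gains, which is exactly the kind of sequencing constraint the scenario asks you to manage.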

  • The Need For Digital Twin Safety

    Digital twins are virtual counterparts of physical entities that merge real-time data from sensors and IoT devices with sophisticated analytics and simulation, facilitating monitoring, analysis, and optimization of operations and assets. Alongside the benefits of digital twins, the integration of Artificial Intelligence (AI) introduces additional considerations and risks commensurate with how a digital twin is used: as a digital shadow, for decision support, or for autonomous control. Digital Shadow: Digital twins provide real-time representations of physical entities, offering insights without direct interaction. AI algorithms enhance the analysis of data within digital twins, but they also introduce the risk of bias or errors if not carefully trained and validated. Moreover, AI-driven decisions may be opaque, making it challenging to understand their rationale and assess their reliability. Decision Support: Digital twins can serve as decision support tools by providing actionable intelligence through advanced analytics and simulation. AI algorithms within digital twins enable predictive modeling and optimization, but they may also amplify errors or biases present in the data. Additionally, complex AI models may lack interpretability, hindering decision-makers' ability to trust and understand their recommendations. Autonomous Control: Digital twins, in their most advanced state, enable autonomous control by executing decisions based on real-time data and predictive insights. AI algorithms drive autonomous actions within digital twins, enhancing efficiency and responsiveness. However, they also introduce risks of malfunction or adversarial attack, potentially leading to unintended consequences or safety hazards. Additionally, AI-driven autonomous systems face ethical considerations regarding accountability and transparency in decision-making.
While the integration of AI enhances the capabilities of digital twins, it also introduces considerations related to algorithmic bias, interpretability, and system reliability. Addressing these considerations requires rigorous validation, transparency, and ethical oversight to ensure the responsible and effective use of AI within digital twin technologies across diverse applications and industries. Digital Twin Safety In light of the risks associated with digital twins, particularly when integrating Artificial Intelligence, establishing robust safety programs is imperative to protect the public and effectively contend with potential risks. A comprehensive Digital Twin Safety Program should encompass rigorous risk assessment, validation, and continuous monitoring mechanisms. This involves identifying and evaluating potential risks arising from data inaccuracies, algorithmic biases, cybersecurity threats, and system malfunctions. Additionally, the safety program should prioritize transparency and accountability in decision-making processes, ensuring that stakeholders understand the basis of AI-driven actions and can intervene if necessary. Regular audits and evaluations of digital twin systems are essential to identify emerging risks and adapt mitigation strategies accordingly. In addition, collaboration between industry stakeholders, regulatory bodies, and technology developers is crucial to establish standards, guidelines, and best practices for the responsible deployment of digital twin technologies that use machine intelligence capabilities. By implementing robust safety programs, organizations can mitigate risks, safeguard public welfare, and foster trust in the use of digital twins across various domains.
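The three operating modes, and the safety program's call for the ability to intervene, can be illustrated with a small gating sketch. Everything here is hypothetical (the mode names, the confidence floor, the callback shape): the point is only that shadow-mode results never act, decision-support results always route to a human, and autonomous action is gated by an explicit control.

```python
from enum import Enum

class TwinMode(Enum):
    SHADOW = "shadow"             # observe and report only
    DECISION_SUPPORT = "support"  # recommend; a human decides
    AUTONOMOUS = "autonomous"     # may act directly, within guardrails

def handle_prediction(mode, prediction, confidence, act, notify,
                      confidence_floor=0.95):
    """Route an AI-driven recommendation according to the twin's
    operating mode. Returns True only if an autonomous action was
    taken. The confidence floor is one illustrative safety control;
    a real program would layer validation, audit, and override."""
    if mode is TwinMode.SHADOW:
        notify(f"observed: {prediction} (confidence {confidence:.2f})")
        return False
    if mode is TwinMode.DECISION_SUPPORT or confidence < confidence_floor:
        notify(f"recommend: {prediction} -- human approval required")
        return False
    act(prediction)  # AUTONOMOUS and above the confidence floor
    return True
```

Even in autonomous mode, a low-confidence prediction falls back to human review, which is one concrete way to keep the "intervene if necessary" requirement enforceable in code rather than in policy alone.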

  • Two Obligations You Cannot Ignore

    When it comes to compliance, there are two primary obligations that you cannot ignore: stay between the lines and stay ahead of risk. Staying between the lines is focused on keeping risk out and certainty in. We want to operate within the ethical, legal, and beneficial boundaries necessary to maintain mission success. This is accomplished by such things as codes of conduct, rules, limits, guardrails, protocols, guidelines, procedures, and policies. Improvements are triggered by incidents of operating near or outside the lines. Staying ahead of risk is focused on advancing the probability of mission success. This is a dynamic and continuous endeavour to keep the dragons of uncertainty at bay, far enough away that they cannot interfere with our mission. This is accomplished by contending with uncertainty using margins and by buying down risk to levels needed for our strategy to succeed. Improvements are triggered by the presence of uncertainty between us and our objectives.
