  • Leveraging Safety Moments for AI Safety in Critical Infrastructure Domains

    Artificial intelligence (AI) is increasingly becoming an integral part of critical infrastructure such as energy, transportation, healthcare, and finance. While AI offers numerous benefits and opportunities for efficiency and innovation, it also introduces new risks and challenges that need to be addressed. To ensure the safe and secure integration of AI into safety-critical systems and processes, organizations can draw inspiration from the concept of "safety moments" and apply it to AI safety practices. In this article, we explore the practice of safety moments and discuss how it can be extended to enhance AI safety in critical infrastructure domains.

    Understanding Safety Moments

    Safety moments are short, focused discussions or presentations held within organizations to increase awareness and promote safety consciousness among employees. Typically, safety moments occur at the beginning of meetings or shifts and revolve around sharing personal experiences, lessons learned, near misses, or relevant safety topics. The aim is to foster a proactive safety culture, encourage active engagement, and prompt individuals to think critically about potential risks and hazards.

    Extending Safety Moments to AI Safety

    Raising Awareness: Safety moments can be utilized to raise awareness about AI safety in critical infrastructure domains. By sharing real-world examples, case studies, or incidents related to AI systems, employees can gain a better understanding of the potential risks and consequences associated with AI technology. This awareness helps create a culture of vigilance and responsibility towards AI safety.

    Learning from Incidents: Safety moments involve discussing near misses or incidents that have occurred in the workplace. Similarly, in the context of AI safety, organizations can encourage employees to report the equivalent of near misses or incidents related to AI systems. These discussions can provide valuable insights into the vulnerabilities, limitations, and potential failure modes of AI systems, allowing organizations to learn from past mistakes and improve their safety measures.

    Regular Training and Education: Safety moments can serve as a platform for ongoing training and education on AI safety. By dedicating time during safety moments to share updates, best practices, and emerging trends in AI safety, organizations can ensure that employees stay informed and equipped with the knowledge needed to identify potential risks and mitigate them effectively. This continuous learning approach helps build a resilient workforce capable of handling AI-related challenges.

    Encouraging Open Dialogue: Safety moments create a safe space for employees to openly discuss safety, privacy, and security concerns and ideas. Similarly, in the context of AI safety, organizations should foster a culture that encourages open dialogue and the sharing of concerns related to AI systems. This collaborative approach allows for a broader perspective, diverse insights, and the identification of potential blind spots in the deployment and operation of AI technology.

    Multidisciplinary Collaboration: AI safety in critical infrastructure domains requires a multidisciplinary approach involving experts from various fields such as AI, cybersecurity, engineering, and ethics. Safety moments can facilitate cross-functional collaboration by bringing together professionals from different disciplines to discuss AI safety challenges, exchange knowledge, and develop comprehensive strategies to ensure the safe integration of AI into critical infrastructure domains.

    Summary

    As AI continues to be adopted in critical infrastructure domains, ensuring the safety and security of AI systems becomes paramount. By extending the practice of safety moments to AI safety, organizations can create a culture of awareness, collaboration, and continuous learning. This approach empowers employees to actively engage in AI safety practices, identify potential risks, and collectively work towards mitigating them. By incorporating AI safety into safety moments, critical infrastructure domains can harness the transformative power of AI while safeguarding the integrity and resilience of their operations.

  • AI's Wisdom Deficit

    In German, there are two words for knowledge: "wissen" and "kennen." The former refers to knowing about something, while the latter signifies intimate knowledge gained through experience. Although we can roughly equate these to "explicit" and "tacit" knowledge, the English language fails to capture their nuanced meanings the way other languages do. It is in the second form of knowledge that profound insights emerge. According to the DIKW model, wisdom arises from knowledge, particularly knowledge derived from experience and understanding, rather than pure logic. We most often refer to the former as wisdom and the latter as intelligence. Intelligence without wisdom has its problems; it is akin to a child in a candy shop. Having knowledge about everything without the ability to discern what is good or bad, what is beneficial or harmful, is of temporary and limited value.

    Even King Solomon, considered the wisest person in the world, spent his days exploring and experimenting in his pursuit of learning. He devoted himself to knowledge, constructing the greatest temple ever built, accumulating immense wealth, and indulging in his every desire. While he gained vast knowledge, the wisdom to discern between good and evil is what held the most value for him. King Solomon knew that this was something beyond himself, and so he asked his God for this kind of wisdom, and he urges us to do the same. The philosopher David Hume, known for the "is-ought" gap, makes a similar observation. He claims that you cannot deduce an ought (what should be) from what is. In other words, you cannot know what is good from knowledge of what is. That kind of wisdom comes from outside the realm of facts.

    In recent years, progress in artificial intelligence has been staggering. However, AI lacks the knowledge that comes from experience (and most likely always will), along with the wisdom to discern between what is good and what is not. It is this wisdom that should be our ultimate pursuit, better than all the knowledge in the world. As T.S. Eliot aptly said, and it bears repeating: "It is impossible to design a system so perfect that no one needs to be good." And being good is what humans must continually strive to become in all our endeavours.

  • Thoughts about AI

    I was listening to a podcast recently where Mo Gawdat (ex-Google CBO) was interviewed and asked about his thoughts concerning AI. Here are some of the things he said.

    Three facts about AI: AI has happened (the genie is out of the bottle and can't be put back in). AI is already smarter than many of us and will only get smarter. Bad things will happen.

    What is AI (I have paraphrased this)? Before AI, we told the computer how to do what we want - we trained the dog. With generative AI, we tell it what we want and it figures out how to do it - we enjoy the dog. In the future, AI will tell us what it wants and how to do it - the dog trains us.

    Barriers we should never have crossed, but have anyway: Don't put AI on the open internet. Don't teach AI to write code. Don't let AI prompt another AI.

    What is the problem? Mo answers this by saying the problem is not the machines; the problem lies with us. We are the ones doing this (compulsion, greed, novelty, competition, hubris, etc.), and we may soon reach the point where we are no longer in the driver's seat. That is the existential threat that many are concerned about. Who doesn't want a better dog? But what if the dog wants a better human? Before we get there, we will have a really smart dog that is way smarter than us (10 times, 100 times, or even more), one that we will not understand. Guardrails for explainability will amount to AI creating a flowchart of what it is doing (oh, how the tables have turned), one that is incomprehensible to most if not all of us. How many of us can understand string theory or quantum physics, even if we can read the textbooks? Very few of us. Why do we think that we will understand what AI is doing? Sure, AI can dumb it down or AI-splain it to us so we feel better.

    Perhaps we should add another guardrail to Mo's list: 4. Don't let AI connect to the physical world. However, I suspect we have already passed that one as well. Or how about this? 5. Don't do stupid things with AI.

    You can view the podcast on YouTube here:

  • AI Risks Document-Centric Compliance

    For domains where compliance is "document-centric," focused on procedural conformance, the use of AI poses significant risk due to the inappropriate use of AI to create, evaluate, and assess the documentation we use to describe what we do (or should do). Disclosure of AI use will be an important safeguard going forward, but that will not be enough to limit exposure resulting from adverse effects of AI. To contend with uncertainties, organizations must better understand how AI works and how to use it responsibly. To bring the risks into focus, let's consider the use of Large Language Models (LLMs) in applications such as ChatGPT, Bard, Gemini, and others.

    What do LLMs model?

    While it's important to understand what these LLMs do, it's also important to know what they don't do, and what they don't know. First and foremost, LLMs create a representation of language based on a training set of data. LLMs use this representation to predict words and nothing else. LLMs do not create a representation of how the world works (i.e. physics), or of the systems, controls, and processes within your business. They do not model your compliance program, your cybersecurity framework, or any other aspect of your operations. LLMs are very good (and getting better) at predicting words. And so it's easy to imagine that AI systems actually understand the words they digest and the output they generate, but they don't. It may look like AI understands, but it doesn't, and it certainly cannot tell you what you should do.

    Limitations of Using AI to Process Documents

    Let's dial in closer and consider a concrete example. This week the Responsible AI Institute, as part of their work (which I support), released an AI tool that can evaluate your organization's existing RAI policies and procedures to generate a gap analysis based on the National Institute of Standards and Technology (NIST) risk management framework. Sounds wonderful! This application is no doubt well intended and is not the first or the last AI tool to process compliance documentation. However, tools of this kind raise several questions concerning the nature of the gaps that can be discovered and whether a false sense of assurance will be created by using these tools.

    More Knowledge Required

    Tools that use LLMs to generate content, for example remedies to address gaps in conformance with a standard, may produce what look like plausible steps to achieve compliance objectives, or controls to contend with risk. However, and this is worth repeating, LLMs do not understand or have knowledge concerning how controls work, or management systems, or how to contend effectively with uncertainty. They also don't have knowledge of your specific goals, targets, or planned outcomes. LLMs model language to predict words, that's all (the short sketch at the end of this article illustrates the point). This doesn't mean the output from AI is not correct or may not work. However, only you, a human, can make that determination. We also know that AI tools of this kind can at best identify procedural conformance with prescription. They do not (and cannot) evaluate how effective a given policy is at meeting your obligations. Given that many standards consist of a mixture of prescriptive, performance, and outcome-based obligations, this leaves out a sizeable portion of "conformance" from consideration. To evaluate gaps that matter requires an operational knowledge of the compliance functions, behaviours, and interactions necessary to achieve the outcome of compliance, which is something that is not modelled by LLMs and something they don't know. The problem is that many who are responsible for compliance don't know these things either. Lack of operational knowledge is a huge risk. If you don't have operational knowledge of compliance, you will not know if the output from AI is reasonable, safe, or harmful. Not only that, if you are using AI to reduce your complement of compliance experts (analysts, engineers, data scientists, etc.), your situation will be far worse. And you won't know how bad until it happens, when it's too late to do anything about it.

    Not the Only Risk

    As I wrote in a previous article, AI is not an impartial observer in the classical sense. AI systems are self-referencing. The output they generate interferes with the future they are trying to represent. This creates a feedback loop which gives them a measure of agency that is undesirable, and contributes in part to public fear and worry concerning AI. We don't want AI to amplify or attenuate the signal; it should be neutral, free of biases. We don't yet understand well enough the extent to which AI interferes with our systems and processes and, in the case of compliance, the documentation we use to describe them. I raised these concerns during a recent Responsible AI Institute webinar where this interference was acknowledged as a serious risk. Unfortunately, it's not on anyone's radar. While there is discussion that risks exist, there is less conversation about what they are, or how they might be ameliorated. Clearly, AI is still in the experimental stage.

    Not the Last Gap

    When it comes to compliance there are always gaps. Some of these are between what's described in documentation and a given standard. Others include gaps in performance, effectiveness, and gaps in overall assurance. Adopting AI-generated remedies creates another category of gaps, and therefore risk, that needs to be handled. The treatment for this is to elevate your knowledge of AI and its use. You need to understand what AI can and cannot do. You also need to know what it should or shouldn't do. The outputs from AI may look reasonable, the promise of greater efficiencies compelling. But these are not the measures of success. To succeed at compliance requires operational knowledge of what compliance is and how it works. This will help you contend with risks associated with the use of AI, along with how best to meet all your obligations in the presence of uncertainty.
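    To make the "predicting words" point concrete, here is a toy sketch, purely illustrative and not how production LLMs are built (they use neural networks trained on vast corpora), of a next-word predictor learned from a tiny invented corpus. The corpus and the resulting probabilities are made up for illustration; what the sketch shows is that the model learns only which words tend to follow which, nothing about whether the resulting statements are effective controls.

```python
from collections import Counter, defaultdict

# Tiny invented "training corpus" of compliance-sounding text.
corpus = (
    "the policy shall be reviewed annually "
    "the policy shall be approved by management "
    "the procedure shall be reviewed annually"
).split()

# Count which word follows which; these counts are the entire "model".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str):
    """Rank candidate next words by how often they followed `word` in the corpus."""
    counts = following[word]
    total = sum(counts.values())
    return [(w, round(c / total, 2)) for w, c in counts.most_common()]

print(predict_next("shall"))  # [('be', 1.0)]
print(predict_next("be"))     # [('reviewed', 0.67), ('approved', 0.33)]
```

    The model can tell you that "reviewed annually" is a statistically likely continuation of "shall be"; it cannot tell you whether annual review is an adequate control for your obligations. Only operational knowledge of compliance can do that.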

  • Stopping AI from Lying

    Recently, I asked Microsoft's Copilot to describe "Lean Compliance." I knew that information about Lean Compliance used in current foundation models was not up to date and so would need to be merged with real-time information, which is what Copilot attempted to do. However, what it came up with was a mix of accuracy and inaccuracy. It said someone else founded Lean Compliance rather than me. Instead of omitting that aspect of "Lean Compliance," it made it up. I instructed Copilot to make the correction, which it did, at least within the context of my prompt session. It also apologized for making the mistake. While this is just one example, I know my experience with AI chat applications is not unique. Had I not known the information was incorrect, I may have used it in decision-making or disseminated the wrong information to others. Many are fond of attributing human qualities to AI, which is called anthropomorphism. Instead of considering the output as false and in need of correction, many will say that the AI system hallucinated, as if that makes it better. And why did Copilot apologize? This practice muddies the waters and makes it difficult to discuss machine features and properties, such as how to deal with incorrect output. However, if we are going to anthropomorphize, then why not go all the way and say the AI lied? We don't do this because it applies a standard of morality to the AI system. We know that machines are not capable of being ethical. They don't have ethical subroutines to discern between what's right and wrong. This is a quality of humans, not machines. That's why, when it comes to AI systems, we need to stop attributing human qualities to them if we hope to stop the lies and get on with the task of improving output quality.

  • Are AI-Enhanced KPIs Smarter?

    Using Key Performance Indicators (KPIs) to regulate and drive operational functions is table stakes for effective organizations and for those that want to elevate their compliance. In a recent report by MIT Sloan Management Review and Boston Consulting Group (BCG), "The Future of Strategic Management: Enhancing KPIs with AI," the authors provide the results of a global survey of more than 3,000 managers and interviews with 17 executives to examine how managers and leaders use AI to enhance strategic measurement and advance strategic outcomes. More specifically, their study explores how these organizations have adopted KPIs and created new ones using AI.

    In this report the authors categorize AI-enhanced KPIs in the following way: Smart Descriptive KPIs synthesize historical and current data to deliver insights into what happened or what is happening. Smart Predictive KPIs anticipate future performance, producing reliable leading indicators and providing visibility into potential outcomes. Smart Prescriptive KPIs use AI to recommend actions that optimize performance.

    Furthermore, the report identifies that developing smart KPIs requires categorizing variables into three distinct types: Strategic Outcome Variables, well-known overarching targets such as revenue or profit; Operational Drivers, variables that might impact the strategic outcome, such as pricing, consumer reviews, or website traffic; and Contextual Factors, external factors beyond a company's control, typically measured or tracked through external data such as consumer spending forecasts, inter-country freight, or government regulation.

    While there is some evidence that KPIs can be enhanced, the report suggests the need for a shift in mindset and practice with respect to each category of KPIs: from performance tracking to redefining performance; from static benchmarks to dynamic predictors; from judgment-first to algorithmically defined strategic metrics; from KPI management to smart KPI governance and oversight; from keeping an eye on KPIs to KPI dialogues and discussion; and from strategy with KPIs to strategy for and with KPIs.

    To facilitate these transitions (or disruptions) the authors of the report provide several recommendations: realign data governance to enable measurably smarter KPIs; establish KPI governance systems; use digital twins to enhance key performance metrics; prioritize cultural readiness and people-centric approaches; and pursue strategic alignment with smart KPIs.

    My Thoughts

    In general, Key Performance Indicators (KPIs) should by definition have predictive utility, which separates them from the larger set of metrics that one might otherwise measure. The three categories for KPIs outlined in the report suggest how KPIs might be used given their predictive quality. KPIs with low correlation might help describe what's happening but are not good candidates for a KPI compared with those with significant correlation. However, even good KPIs cannot suggest how to effect performance changes. Making system changes relies on knowledge of what measures of effectiveness, performance, conformance, and assurance are targeted, along with an understanding of the underlying concept of operations. Notwithstanding, the use of AI does hold promise to help with lagging indicators by finding new and different correlations. Leading indicators, however, are a different story. Leading indicators are the holy grail of operational performance and require knowledge of what should be rather than only what once was. Data describing this knowledge seldom appears in operational records or logs and would need to be integrated with an AI system. Without controlled experiments, causation should always be treated with a grain of salt. (A small illustrative sketch of checking a candidate leading indicator follows at the end of this article.)

    We need to be mindful that the future is not as deterministic as some may believe. When there is human agency involved, the future is open, not closed or bound to AI predictions. It's helpful to remember that there are other forces at work: You can't turn lagging indicators into leading indicators (Risk Theory). You can't turn an "is," a description of what is, into an "ought," a prescription of what should be (Hume's Law). A system will always regulate away from outcomes you don't specify (Ashby's Cybernetics Law of Ethical Inadequacy). When a measure becomes a target, it ceases to be a good measure (Goodhart's Law).

    What steps should be followed when using AI for KPIs? Instead of considering AI as a solution looking for a problem, first identify the problem that needs solving. Do you have a problem with decision making? Execution or follow-through? Conformance or regulation? Lack of understanding of operational systems, processes, and behaviours? Uncertainty and risk? Insufficient or untapped performance?

    When the problem is a lack of quality KPIs, one might consider establishing a Smarter KPI Program. The report by MIT-BCG makes an important point that is worth repeating. What they suggest is not so much about creating better KPIs as it is about establishing an ongoing set of processes, practices, and a mindset for using algorithmically defined metrics. This requires more than following a procedure. The following questions will help define the context for such a program: What do better KPIs look like? What strategy should we follow to achieve that? What capabilities do we need to support this strategy? What obstacles or opportunities need to be negotiated or exploited? What measures will be used to define success?
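    As a minimal sketch of what predictive utility can look like in practice, the hypothetical example below (synthetic data and invented column names, not taken from the MIT-BCG report) compares same-period and lagged correlation between an operational driver and a strategic outcome. A driver that correlates only in the same period describes the past; one that correlates when lagged is at least a candidate leading indicator, though correlation alone never establishes causation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical monthly data, synthetic and for illustration only: an
# operational driver (say, website traffic) and a strategic outcome
# (say, revenue) that responds to the driver one month later.
n = 36
traffic = rng.normal(100, 10, n)
revenue = 5 * np.concatenate(([100.0], traffic[:-1])) + rng.normal(0, 25, n)
df = pd.DataFrame({"traffic": traffic, "revenue": revenue})

same_period = df["traffic"].corr(df["revenue"])
lagged = df["traffic"].shift(1).corr(df["revenue"])

print(f"same-period correlation: {same_period:.2f}")   # weak here
print(f"one-month lagged correlation: {lagged:.2f}")   # strong here

# A metric that only tracks the outcome in the same period describes what
# happened; one that correlates when lagged is a candidate leading indicator.
# Correlation is still not causation: without controlled experiments, any
# "smart" prescriptive recommendation deserves a grain of salt.
```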

  • Protect your Value Chain from AI Risk

    This year will mark the end of unregulated use of AI for many organizations. This has already happened in the insurance sector (in the State of Colorado), and others are not far behind. AI safety regulations and responsible use guidelines are forthcoming. Organizations must now learn to govern their use of AI across their value chain to protect stakeholders from preventable risk. This will require building Responsible AI and/or AI Safety Programs to deliver on obligations and contend with AI-specific risk. To stay ahead of AI risk, you can no longer wait. Ethical and forward-looking organizations have already started to build out AI Safety and Responsible Use Programs. Don't be left behind. Take steps starting today to protect your value chain.

  • How to Benefit from AI Technology

    "We are really bad at adopting new technology. What we are worse at is exploiting new technology." - Eliyahu Goldratt

    Achieving Breakthrough Benefits

    Artificial Intelligence (AI) holds the promise of improving efficiency along with many other things: some good, some bad, and some good with the bad. Some organizations will adopt AI and receive the incremental benefits associated with increased efficiencies. However, others will not only adopt this technology, they will exploit it and receive multiple benefits that compound over time. Eliyahu Goldratt (father of the Theory of Constraints) offers four questions to help you transform your operations using technology, including AI. The key is first understanding the power the new technology offers.

    Ensuring Responsible Use

    Knowing how to use this technology in a manner that provides benefit while keeping risk below acceptable levels is what is most needed now. And when it comes to risk, waiting until something bad happens before improving is not the best strategy. That's why we recommend organizations consider the following three questions with respect to their use of AI technologies: Is our code of ethics adequate to address the practice of AI technology in our organization? What policies, standards, or guidelines should be established or amended to ensure our responsible use of AI systems? What should we do differently to protect stakeholders from the negative effects of our use of AI technologies? We encourage you to answer these questions carefully and thoughtfully, as they will serve to guide your adoption of AI technologies and systems. Should you need help working through these questions and building out a Responsible AI program for your organization, please reach out to us. Our advanced program is uniquely suited to help you take a proactive and integrative approach to meeting obligations, including those associated with responsible AI.

  • Smarter Than Human AI - Still a Long Way to Go?

    The rapidly advancing field of artificial intelligence, particularly large language models (LLMs), is constantly pushing the boundaries of what machines can achieve. However, directly comparing LLMs to human intelligence presents a nuanced challenge. Unlike the singular focus of traditional AI, human cognition encompasses a kaleidoscope of distinct but interconnected abilities, often categorized as "intelligences." Let's take a look at these twelve intelligences compared with the current capabilities of LLMs.

    Logical-mathematical prowess: Humans effortlessly solve equations, analyze patterns, and navigate complex numerical calculations. While LLMs are trained on vast data sets, their ability to perform these tasks falls short of the intuitive understanding and flexibility we exhibit.

    Linguistic mastery: We wield language with eloquence, weaving words into narratives, arguments, and expressions of creative genius. LLMs, while capable of generating human-like text, often struggle with context, emotional nuances, and the spark of true creative expression.

    Bodily-kinesthetic agility: Our ability to move with grace, express ourselves through dance, and manipulate objects with dexterity represents a realm inaccessible to LLMs, limited by their purely digital existence.

    Spatial intuition: From navigating physical environments to mentally rotating objects, humans excel in spatial reasoning. While LLMs are learning, their understanding of spatial concepts lacks the natural and intuitive edge we possess.

    Musical understanding: The human capacity to perceive, create, and respond to music with emotional depth remains unmatched. LLMs can compose music, but they lack the deep understanding and emotional connection that fuels our musicality.

    Interpersonal intelligence: Building relationships, navigating social dynamics, and understanding emotions represent complex human strengths. LLMs, though improving, struggle to grasp the intricacies of human interaction and empathy.

    Intrapersonal awareness: Our ability to reflect on ourselves, understand our emotions, and set goals distinguishes us as unique individuals. LLMs lack the self-awareness and introspection necessary for this type of intelligence.

    Existential contemplation: Pondering life's big questions and seeking meaning are quintessentially human endeavours. LLMs, despite their ability to process information, lack the sentience and consciousness required for such philosophical contemplations.

    Moral reasoning: Making ethical judgments and navigating right and wrong are hallmarks of human intelligence. LLMs, while trained on moral frameworks, lack the nuanced understanding and ability to adapt these frameworks to new situations that we possess.

    Naturalistic connection: Our ability to connect with nature, understand ecological systems, and appreciate its beauty lies beyond the reach of LLMs. Their understanding of nature, while informative, lacks the embodied experience and emotional connection that fuels our appreciation.

    Spiritual exploration: The human yearning for connection with something beyond ourselves represents a deeply personal and subjective experience that LLMs cannot replicate.

    Creative expression: Humans innovate, imagine new possibilities, and express themselves through various art forms with unmatched originality and emotional depth. LLMs, although capable of creative output within defined parameters, lack the spark of true creativity.

    LLMs represent powerful tools with rapidly evolving capabilities. However, their intelligence remains distinct from the multifaceted and interconnected nature of human intelligence. Each of our twelve intelligences contributes to the unique tapestry of our being. While LLMs may excel in specific areas, they lack the holistic understanding and unique blend of intelligences that define us as humans. As we explore the future of AI, understanding these differences is crucial. LLMs have a long way to go before they can match the full spectrum of human intelligence, but through collaboration, they can enhance and augment our capabilities, not replace them. The journey continues, and further exploration remains essential. What are your thoughts on the comparison between human and machine intelligence? Let's continue the dialogue.

    Note: The theory of multiple intelligences, while accepted in some fields, is criticized in others. This demonstrates that more research and study is needed in the field of cognitive science and that claims regarding "Smarter Than Human AI" should be taken with a healthy degree of skepticism.

  • The Critical Role of Professional Engineers in Canada's AI Landscape

    Rapid advancements in AI technology present a double-edged sword: exciting opportunities alongside significant risks. While Canada is a contributor to the field, it lacks a cohesive national strategy to harness innovation and economic benefits while safeguarding the well-being of Canadians. Federal and provincial governments are crafting legislation and policies, but these efforts are disjointed, slow-moving, and unlikely to address current and emerging risks. Regulations arising from Bill C-27, for example, are expected to take years to implement, falling short of the necessary agility. Proposed strategies often emphasize establishing entirely new AI governance frameworks. Adding a new layer of regulations often creates overlap and confusion, hindering progress. It also overlooks the protections already offered by existing laws, regulatory bodies, and standards organizations. One of the areas being overlooked is the role of Professional Engineers.

    Professional engineering in Canada is uniquely positioned to lead the charge in responsible AI development. With legislative authority, self-governance, and a robust code of ethics, engineers already have the means to ensure responsible AI practices. Professional engineers bring a wealth of benefits to the table. Their deep understanding of technical systems and rigorous training in risk assessment make them ideally suited to design, develop, and implement AI solutions that are safe, reliable, and ethical. Furthermore, their commitment to upholding professional standards fosters public trust in AI technologies. Provincial regulators must act now to elevate engineering's role in the AI landscape. Here are steps that might be considered:

    Provincial engineering regulators should collaborate with federal and provincial governments to ensure existing regulatory frameworks are adapted to address AI-specific risks and opportunities. Professional engineering associations should develop and deliver training programs that equip engineers with the necessary skills and knowledge to develop and implement responsible AI. Engineers should actively participate in the development of AI standards and best practices to ensure responsible development and deployment of AI technologies. Governments and industry should work together to create funding opportunities that support research and development in responsible AI led by professional engineers. Provincial engineering regulators, in collaboration with professional engineering associations and stakeholders, should explore the creation of a specialized AI Engineering practice and develop a licensing framework for this practice. This framework would ensure engineers possess the specialized knowledge and experience required to develop and implement safe and ethical AI solutions.

    By taking these steps, Canada can leverage the expertise of professional engineers right now to ensure responsible AI development and secure its position as a leader in the global AI landscape.

  • AI in PSM: A Double-Edged Sword for Process Safety Management

    Process safety management (PSM) stands as a vital defence against hazards in high-risk industries. Yet even the most robust systems require constant evaluation and adaptation. Artificial intelligence (AI) has emerged as a transformative force, promising both incredible opportunities and significant challenges for how we manage risk. In this article, we explore seven key areas where AI could reshape PSM, acknowledging both its potential and limitations.

    1. From Reactive to Predictive: Navigating the Data Deluge. AI's ability to analyze vast datasets could revolutionize decision-making. Imagine recommending not just which maintenance project to prioritize, but also predicting potential failures before they occur (a small illustrative sketch follows at the end of this article). However, harnessing this potential requires overcoming data challenges. Integrating disparate data sources and ensuring their quality are crucial steps to ensuring reliable predictions and avoiding the pitfalls of biased or incomplete information.

    2. Taming the Change Beast: Balancing Innovation with Risk. Change, planned or unplanned, can disrupt even the most robust safety systems. AI, used intelligently, could analyze the impact of proposed changes on processes, people, and procedures, potentially mitigating risks and fostering informed decision making. However, over-reliance on AI for risk assessment could create blind spots, neglecting the nuanced human understanding of complex systems and the potential for unforeseen consequences.

    3. Bridging the Gap: Real-Time vs. Paper Safety. The chasm between documented procedures and actual practices can pose a significant safety risk. AI-powered real-time monitoring could offer valuable insights into adherence to standards and flag deviations promptly. Not surprisingly, concerns about privacy and potential misuse of such data cannot be ignored. Striking a balance between effective monitoring and ethical data collection is essential.

    4. Accelerated Learning: Mining Data for Greater Safety, with Caution. Applying deep learning to HAZOPs, PHAs, and risk assessments could uncover patterns and insights not previously discovered. However, relying solely on assisted intelligence could overlook crucial human insights and nuances, potentially missing critical red flags. AI should be seen as a tool to support, not replace, human expertise.

    5. Beyond Checklists: Measuring True PSM Effectiveness. Moving beyond simply "following the rules" towards measuring the effectiveness of controls in managing risk remains a core challenge for PSM. While AI can offer valuable data-driven insights into risk profiles, attributing cause and effect and understanding complex system interactions remain complexities that require careful interpretation and human expertise.

    6. Breaking the Silo: Integrating PSM into the Business Fabric, Carefully. Integrating safety considerations into business decisions through AI holds immense potential for a holistic approach. At the same time, concerns about unintended consequences and potential conflicts between safety and economic goals must be addressed. Transparency and open communication are essential to ensure safety remains a core value, not a mere metric.

    7. The Elusive Question: Proving "Safe Enough". The ultimate challenge? Guaranteeing absolute safety. While AI cannot achieve the impossible, it can offer unparalleled data-driven insights into risk profiles, enabling organizations to continuously improve and confidently move towards a safer state. However, relying solely on AI-driven predictions could mask unforeseen risks and create a false sense of security. True safety demands constant vigilance and a healthy dose of skepticism.

    AI in PSM presents a fascinating double-edged sword. By carefully considering its potential and pitfalls, we can usher in a future where intelligent technologies empower us to create a safer, more efficient world, without losing sight of the human element that will always remain crucial in managing complex risks. What are your thoughts on the role of AI in Process Safety Management (PSM)?
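    As a minimal, hedged sketch of the "reactive to predictive" idea in point 1, the snippet below uses synthetic sensor readings and an invented 80 C alarm limit (not real plant data or a qualified process-safety tool) to show how a simple statistical check can flag drift well before a fixed alarm would trip. Any real deployment would require validated data, engineering review, and the human oversight argued for above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Synthetic process data: 200 samples of stable operation around 75 C,
# followed by a slow upward drift that a fixed 80 C alarm only catches late.
stable = rng.normal(75.0, 0.5, 200)
drift = 75.0 + np.linspace(0.0, 6.0, 100) + rng.normal(0.0, 0.5, 100)
readings = pd.Series(np.concatenate([stable, drift]), name="reactor_temp_C")

# Baseline statistics taken from a period of known-good operation.
baseline = readings.iloc[:150]
mu, sigma = baseline.mean(), baseline.std()

# Flag five consecutive samples more than 3 standard deviations from baseline.
z_score = (readings - mu) / sigma
warning_mask = (z_score.abs() > 3).astype(int).rolling(5).sum() == 5

first_warning = readings[warning_mask].index.min() if warning_mask.any() else None
first_alarm = readings[readings > 80.0].index.min() if (readings > 80.0).any() else None

print("first sustained deviation flagged at sample:", first_warning)
print("fixed 80 C alarm first trips at sample:", first_alarm)
# The statistical flag fires earlier than the hard alarm, but it is only a
# prompt for human investigation, not a substitute for engineering judgment.
```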

  • Is AI Sustainable?

    In this article we will explore sustainability and how it relates to AI technologies. To get there we will first consider AI safety and the challenges that exist in designing safe and responsible AI. AI technology such as ChatGPT should be designed to be safe. I don't think many would argue with having this as a goal, particularly professional engineers who have a duty to regard the public welfare as paramount. However, ChatGPT is not designed in the traditional sense. The design of ChatGPT is very much a black box and something we don't understand. And what we don't understand we can't control, and therein lies the rub. How can we make ChatGPT safe when we don't understand how it works? ChatGPT can be described as a technology that learns and, in a sense, designs itself. We feed it data and through reinforcement learning we shape its output, with limited success, to be more of what we want and less of what we don't want. Even guardrails used to improve safety are for the most part blunt and crude instruments with their own vulnerabilities. In an attempt to remove biases, new biases can be introduced. In some cases, guardrails change the output to be what some believe the answer should be rather than what the data reveals. Not only is this a technical challenge but also an ethical dilemma that needs to be addressed.

    The PLUS decision-making model developed by The Ethics Resource Center can help organizations make better decisions with respect to AI: P = Policies - Is it consistent with my organization's policies, procedures and guidelines? L = Legal - Is it acceptable under the applicable laws and regulations? U = Universal - Does it conform to the universal principles/values my organization has adopted? S = Self - Does it satisfy my personal definition of right, good and fair? These questions do not guarantee that ethical decisions are made. They instead help to ensure that ethical factors are considered. However, in the end it comes down to personal responsibility and wanting to behave ethically.

    Some have said that AI safety is dead, or at least a low priority in the race to develop Artificial General Intelligence (AGI). This sounds similar to the ongoing tensions between production and safety, or quality, or security, or any of the other outcomes organizations are expected to achieve. We have always needed to balance what we do in the short term against long-term interests. In fact, this is what it means to be sustainable: "meeting the needs of the present without compromising the ability of future generations to meet their own needs." - United Nations

    This is another test we could add to the PLUS model. S = Sustainability - Does this decision lead to meeting the needs of the present without sacrificing the ability of future generations to meet their own needs? I believe answering that question should be at the top of the questions being considered today. Is our pursuit of AGI sustainable with respect to human flourishing? AI sustainability is perhaps what drives the need for AI safety, security, quality, legal, and ethical considerations. For example, just as sustainability requires balancing present needs with future well-being, prioritizing AI safety safeguards against unforeseen risks and ensures AI technology serves humanity for generations to come. Ultimately, it is sustainability that drives our need for safety. Instead of asking "Is AI safe?", perhaps we should be asking "Is AI sustainable?"
