
  • Stopping AI from Lying

    Recently, I asked Microsoft's Copilot to describe "Lean Compliance." I knew that the information about Lean Compliance in current foundation models was not up to date and would need to be merged with real-time information, which is what Copilot attempted to do. However, what it came up with was a mix of accuracy and inaccuracy. It said someone else founded Lean Compliance rather than me. Instead of omitting that aspect of "Lean Compliance," it made it up. I instructed Copilot to make the correction, which it did, at least within the context of my prompt session. It also apologized for making the mistake. While this is just one example, I know my experience with AI chat applications is not unique. Had I not known the information was incorrect, I might have used it in decision-making or disseminated the wrong information to others. Many are fond of attributing human qualities to AI, a practice called anthropomorphism. Instead of considering output as false and in need of correction, many will say that the AI system hallucinated, as if that makes it better. And why did Copilot apologize? This practice muddies the waters and makes it difficult to discuss machine features and properties, such as how to deal with incorrect output. However, if we are going to anthropomorphize, then why not go all the way and say the AI lied? We don't do this because it applies a standard of morality to the AI system. We know that machines are not capable of being ethical. They don't have ethical subroutines to discern between what's right and wrong. This is a quality of humans, not machines. That's why, when it comes to AI systems, we need to stop attributing human qualities to them if we hope to stop the lies and get on with the task of improving output quality.

  • Are AI-Enhanced KPIs Smarter?

    Using Key Performance Indicators (KPIs) to regulate and drive operational functions is table stakes for effective organizations and for those that want to elevate their compliance. In a recent report by MIT Sloan Management Review and Boston Consulting Group (BCG), "The Future of Strategic Management: Enhancing KPIs with AI," the authors present the results of a global survey of more than 3,000 managers and interviews with 17 executives examining how managers and leaders use AI to enhance strategic measurement and advance strategic outcomes. More specifically, their study explores how these organizations have adopted KPIs and created new ones using AI. In the report the authors categorize AI-enhanced KPIs in the following way: Smart Descriptive KPIs: synthesize historical and current data to deliver insights into what happened or what is happening. Smart Predictive KPIs: anticipate future performance, producing reliable leading indicators and providing visibility into potential outcomes. Smart Prescriptive KPIs: use AI to recommend actions that optimize performance. Furthermore, the report identifies that developing smart KPIs requires categorizing variables into three distinct types: Strategic Outcome Variables: well-known overarching targets, such as revenue or profit. Operational Driver Variables: factors that might impact the strategic outcome, such as pricing, consumer reviews, or website traffic. Contextual Factors: external factors beyond a company's control, typically measured or tracked through external data such as consumer spending forecasts, inter-country freight, or government regulation. While there is some evidence that KPIs can be enhanced, the report suggests the need for a shift in mindset and practice with respect to each category of KPI: From Performance Tracking to Redefining Performance; From Static Benchmarks to Dynamic Predictors; From Judgment-First to Algorithmically Defined Strategic Metrics; From KPI Management to Smart KPI Governance and Oversight; From Keeping an Eye on KPIs to KPI Dialogues and Discussion; From Strategy with KPIs to Strategy for and with KPIs. To facilitate these transitions (or disruptions), the authors provide several recommendations: Realign Data Governance to Enable Measurably Smarter KPIs; Establish KPI Governance Systems; Use Digital Twins to Enhance Key Performance Metrics; Prioritize Cultural Readiness and People-Centric Approaches; Strategic Alignment with Smart KPIs. My Thoughts: In general, Key Performance Indicators (KPIs) should by definition have predictive utility, which separates them from the broader set of metrics one might otherwise measure. The three categories outlined in the report suggest how KPIs might be used given their predictive quality. KPIs with low correlation to outcomes might help describe what's happening, but they are poor candidates compared with those showing significant correlation. However, even good KPIs cannot suggest how to effect performance changes. Making system changes relies on knowing which measures of effectiveness, performance, conformance, and assurance are targeted, along with an understanding of the underlying concept of operations. Notwithstanding, the use of AI does hold promise for lagging indicators, helping to find new and different correlations. Leading indicators are a different story. They are the holy grail of operational performance and require knowledge of what should be rather than only what once was.
Data describing this knowledge seldom appears in operational records or logs and would need to be integrated with an AI system. Without controlled experiments, claims of causation should always be taken with a grain of salt. We need to be mindful that the future is not as deterministic as some may believe. Where human agency is involved, the future is open, not closed or bound to AI predictions. It's helpful to remember that there are other forces at work: You can't turn lagging indicators into leading indicators (Risk Theory). You can't turn an "is," a description of what is, into an "ought," a prescription of what should be (Hume's Law). A system will always regulate away from outcomes you don't specify (Ashby's cybernetic law of ethical inadequacy). When a measure becomes a target, it ceases to be a good measure (Goodhart's Law). What steps should be followed when using AI for KPIs? Instead of treating AI as a solution looking for a problem, first identify the problem that needs solving. Do you have a problem with: Decision making? Execution or follow-through? Conformance or regulation? Lack of understanding of operational systems, processes, and behaviours? Uncertainty and risk? Insufficient or untapped performance? When the problem is a lack of quality KPIs, then one might consider establishing a Smarter KPI Program. The MIT-BCG report makes an important point that is worth repeating: what they suggest is not so much about creating better KPIs as it is about establishing an ongoing set of processes, practices, and mindsets for using algorithmically defined metrics. This requires more than following a procedure. The following questions will help define the context for such a program: What do better KPIs look like? What strategy should we follow to achieve that? What capabilities do we need to support this strategy? What obstacles or opportunities need to be negotiated or exploited? What measures will be used to define success?
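
    To make the idea of predictive utility concrete, here is a minimal sketch, not from the MIT-BCG report, of how one might test whether a candidate operational driver actually leads a strategic outcome. The column names, lag window, and use of pandas are illustrative assumptions.

```python
# Minimal sketch: test whether a candidate driver "leads" a strategic outcome.
# Column names, data source, and lag window are illustrative assumptions.
import pandas as pd

def lead_lag_correlation(df: pd.DataFrame, driver: str, outcome: str, max_lag: int = 8):
    """Correlate driver at time t with outcome at time t+lag, for several lags."""
    results = {}
    for lag in range(1, max_lag + 1):
        # Shift the outcome back so each row pairs driver[t] with outcome[t + lag].
        paired = pd.concat([df[driver], df[outcome].shift(-lag)], axis=1).dropna()
        results[lag] = paired.corr().iloc[0, 1]
    return results

# Usage (hypothetical weekly data): a driver that correlates strongly with the
# outcome several periods ahead is a better leading-indicator candidate than one
# that only tracks the outcome contemporaneously.
# df = pd.read_csv("weekly_metrics.csv")  # e.g. columns "website_traffic", "revenue"
# print(lead_lag_correlation(df, driver="website_traffic", outcome="revenue"))
```

    As noted above, a strong lead-lag correlation is still only a correlation; without controlled experiments it says nothing about causation.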

  • Protect your Value Chain from AI Risk

    This year will mark the end of unregulated use of AI for many organizations. It has already happened in the insurance sector (the State of Colorado), and others are not far behind. AI safety regulations and responsible-use guidelines are forthcoming. Organizations must now learn to govern their use of AI across their value chain to protect stakeholders from preventable risk. This will require building Responsible AI and/or AI Safety Programs to deliver on obligations and contend with AI-specific risk. You can no longer wait to get ahead of AI risk. Ethical and forward-looking organizations have already started to build out AI Safety and Responsible Use Programs. Don't be left behind. Take steps today to protect your value chain.

  • How to Benefit from AI Technology

    "We are really bad at adopting new technology. What we are worse at is exploiting new technology." - Eliyahu Goldratt. Achieving Breakthrough Benefits: Artificial Intelligence (AI) holds the promise of improving efficiency along with many other things: some good, some bad, and some good mixed with the bad. Some organizations will adopt AI and receive incremental benefits associated with increased efficiencies. Others, however, will not only adopt this technology but exploit it, receiving benefits that compound over time. Eliyahu Goldratt (father of the Theory of Constraints) offers four questions to help you transform your operations using technology, including AI. The key is first understanding the power the new technology offers. Ensuring Responsible Use: Knowing how to use this technology in a manner that provides benefit while keeping risk below acceptable levels is what is most needed now. And when it comes to risk, waiting until something bad happens before improving is not the best strategy. That's why we recommend organizations consider the following three questions with respect to their use of AI technologies: Is our code of ethics adequate to address the practice of AI technology in our organization? What policies, standards, or guidelines should be established or amended to ensure our responsible use of AI systems? What should we do differently to protect stakeholders from the negative effects of our use of AI technologies? We encourage you to answer these questions carefully and thoughtfully, as they will serve to guide your adoption of AI technologies and systems. Should you need help working through these questions and building out a Responsible AI program for your organization, please reach out to us. Our advanced program is uniquely suited to help you take a proactive and integrative approach to meeting obligations, including those associated with responsible AI.

  • Smarter Than Human AI - Still a Long Way to Go?

    The rapidly advancing field of artificial intelligence, particularly large language models (LLMs), is constantly pushing the boundaries of what machines can achieve. However, directly comparing LLMs to human intelligence presents a nuanced challenge. Unlike the singular focus of traditional AI, human cognition encompasses a kaleidoscope of distinct but interconnected abilities, often categorized as "intelligences." Let's take a look at these twelve intelligences compared with the current capabilities of LLMs. Logical-mathematical prowess: Humans effortlessly solve equations, analyze patterns, and navigate complex numerical calculations. While LLMs are trained on vast data sets, their ability to perform these tasks falls short of the intuitive understanding and flexibility we exhibit. Linguistic mastery: We wield language with eloquence, weaving words into narratives, arguments, and expressions of creative genius. LLMs, while capable of generating human-like text, often struggle with context, emotional nuances, and the spark of true creative expression. Bodily-kinesthetic agility: Our ability to move with grace, express ourselves through dance, and manipulate objects with dexterity represents a realm inaccessible to LLMs, limited by their purely digital existence. Spatial intuition: From navigating physical environments to mentally rotating objects, humans excel in spatial reasoning. While LLMs are learning, their understanding of spatial concepts lacks the natural and intuitive edge we possess. Musical understanding: The human capacity to perceive, create, and respond to music with emotional depth remains unmatched. LLMs can compose music, but they lack the deep understanding and emotional connection that fuels our musicality. Interpersonal intelligence: Building relationships, navigating social dynamics, and understanding emotions represent complex human strengths. LLMs, though improving, struggle to grasp the intricacies of human interaction and empathy. Intrapersonal awareness: Our ability to reflect on ourselves, understand our emotions, and set goals distinguishes us as unique individuals. LLMs lack the self-awareness and introspection necessary for this type of intelligence. Existential contemplation: Pondering life's big questions and seeking meaning are quintessentially human endeavours. LLMs, despite their ability to process information, lack the sentience and consciousness required for such philosophical contemplations. Moral reasoning: Making ethical judgments and navigating right and wrong are hallmarks of human intelligence. LLMs, while trained on moral frameworks, lack the nuanced understanding and ability to adapt these frameworks to new situations that we possess. Naturalistic connection: Our ability to connect with nature, understand ecological systems, and appreciate its beauty lies beyond the reach of LLMs. Their understanding of nature, while informative, lacks the embodied experience and emotional connection that fuels our appreciation. Spiritual exploration: The human yearning for connection with something beyond ourselves represents a deeply personal and subjective experience that LLMs cannot replicate. Creative expression: Humans innovate, imagine new possibilities, and express themselves through various art forms with unmatched originality and emotional depth. LLMs, although capable of creative output within defined parameters, lack the spark of true creativity. LLMs represent powerful tools with rapidly evolving capabilities.
However, their intelligence remains distinct from the multifaceted and interconnected nature of human intelligence. Each of our twelve intelligences contributes to the unique tapestry of our being. While LLMs may excel in specific areas, they lack the holistic understanding and unique blend of intelligences that define us as humans. As we explore the future of AI, understanding these differences is crucial. LLMs have a long way to go before they can match the full spectrum of human intelligence, but through collaboration they can enhance and augment our capabilities, not replace them. The journey continues, and further exploration remains essential. What are your thoughts on the comparison between human and machine intelligence? Let's continue the dialogue. Note: The theory of multiple intelligences, while accepted in some fields, is criticized in others. This demonstrates that more research and study are needed in cognitive science and that claims regarding "Smarter Than Human AI" should be taken with a healthy degree of skepticism.

  • The Critical Role of Professional Engineers in Canada's AI Landscape

    Rapid advancements in AI technology present a double-edged sword: exciting opportunities alongside significant risks. While Canada is a contributor to the field, it lacks a cohesive national strategy to harness innovation and economic benefits while safeguarding the well-being of Canadians. Federal and provincial governments are crafting legislation and policies, but these efforts are disjointed, slow-moving, and unlikely to address current and emerging risks. Regulations arising from Bill C-27, for example, are expected to take years to implement, falling short of the necessary agility. Proposed strategies often emphasize establishing entirely new AI governance frameworks. Adding a new layer of regulations often creates overlap and confusion, hindering progress. It also overlooks the protections already offered by existing laws, regulatory bodies, and standards organizations. One of the areas being overlooked is the role of Professional Engineers. Professional engineering in Canada is uniquely positioned to lead the charge in responsible AI development. With legislative authority, self-governance, and a robust code of ethics, engineers already have the means to ensure responsible AI practices. Professional engineers bring a wealth of benefits to the table. Their deep understanding of technical systems and rigorous training in risk assessment make them ideally suited to design, develop, and implement AI solutions that are safe, reliable, and ethical. Furthermore, their commitment to upholding professional standards fosters public trust in AI technologies. Provincial regulators must act now to elevate engineering's role in the AI landscape. Here are steps that might be considered: Provincial engineering regulators should collaborate with federal and provincial governments to ensure existing regulatory frameworks are adapted to address AI-specific risks and opportunities. Professional engineering associations should develop and deliver training programs that equip engineers with the necessary skills and knowledge to develop and implement responsible AI. Engineers should actively participate in the development of AI standards and best practices to ensure responsible development and deployment of AI technologies. Governments and industry should work together to create funding opportunities that support research and development in responsible AI led by professional engineers. Provincial engineering regulators, in collaboration with professional engineering associations and stakeholders, should explore the creation of a specialized AI Engineering practice and develop a licensing framework for this practice. This framework would ensure engineers possess the specialized knowledge and experience required to develop and implement safe and ethical AI solutions. By taking these steps, Canada can leverage the expertise of professional engineers right now to ensure responsible AI development and secure its position as a leader in the global AI landscape.

  • AI in PSM: A Double-Edged Sword for Process Safety Management

    Process safety management (PSM) stands as a vital defence against hazards in high-risk industries. Yet even the most robust systems require constant evaluation and adaptation. Artificial intelligence (AI) has emerged as a transformative force, promising both incredible opportunities and significant challenges for how we manage risk. In this article, we explore seven key areas where AI could reshape PSM, acknowledging both its potential and its limitations: 1. From Reactive to Predictive: Navigating the Data Deluge. AI's ability to analyze vast datasets could revolutionize decision-making. Imagine recommending not just which maintenance project to prioritize, but also predicting potential failures before they occur. However, harnessing this potential requires overcoming data challenges. Integrating disparate data sources and ensuring their quality are crucial steps toward reliable predictions and avoiding the pitfalls of biased or incomplete information. 2. Taming the Change Beast: Balancing Innovation with Risk. Change, planned or unplanned, can disrupt even the most robust safety systems. AI, used intelligently, could analyze the impact of proposed changes on processes, people, and procedures, potentially mitigating risks and fostering informed decision making. However, overreliance on AI for risk assessment could create blind spots, neglecting the nuanced human understanding of complex systems and the potential for unforeseen consequences. 3. Bridging the Gap: Real-Time vs. Paper Safety. The chasm between documented procedures and actual practices can pose a significant safety risk. AI-powered real-time monitoring could offer valuable insights into adherence to standards and flag deviations promptly. Not surprisingly, concerns about privacy and potential misuse of such data cannot be ignored. Striking a balance between effective monitoring and ethical data collection is essential. 4. Accelerated Learning: Mining Data for Greater Safety, with Caution. Applying deep learning to HAZOPs, PHAs, and risk assessments could uncover patterns and insights not previously discovered. However, relying solely on assisted intelligence could overlook crucial human insights and nuances, potentially missing critical red flags. AI should be seen as a tool to support, not replace, human expertise. 5. Beyond Checklists: Measuring True PSM Effectiveness. Moving beyond simply "following the rules" towards measuring the effectiveness of controls in managing risk remains a core challenge for PSM. While AI can offer valuable data-driven insights into risk profiles, attributing cause and effect and understanding complex system interactions remain complexities that require careful interpretation and human expertise. 6. Breaking the Silo: Integrating PSM into the Business Fabric, Carefully. Integrating safety considerations into business decisions through AI holds immense potential for a holistic approach. At the same time, concerns about unintended consequences and potential conflicts between safety and economic goals must be addressed. Transparency and open communication are essential to ensure safety remains a core value, not a mere metric. 7. The Elusive Question: Proving "Safe Enough". The ultimate challenge? Guaranteeing absolute safety. While AI cannot achieve the impossible, it can offer unparalleled data-driven insights into risk profiles, enabling organizations to continuously improve and confidently move towards a safer state.
However, relying solely on AI-driven predictions could mask unforeseen risks and create a false sense of security. True safety demands constant vigilance and a healthy dose of skepticism. AI in PSM presents a fascinating double-edged sword. By carefully considering its potential and pitfalls, we can usher in a future where intelligent technologies empower us to create a safer, more efficient world, but without losing sight of the human element that will always remain crucial in managing complex risks. What are your thoughts on the role of AI in Process Safety Management (PSM)?
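
As one illustration of the shift from reactive to predictive described in point 1, here is a minimal sketch of flagging sensor readings that drift outside their recent normal band. The window size, threshold, and data are illustrative assumptions, not a production method.

```python
# Minimal sketch: flag sensor readings that drift outside their recent normal band.
# Window size, threshold, and readings are illustrative assumptions.
from statistics import mean, stdev

def flag_anomalies(readings, window=20, threshold=3.0):
    """Return indices where a reading deviates more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Usage (hypothetical pump vibration data): flagged indices would be reviewed by
# engineers rather than acted on automatically; AI supports, it does not replace, expertise.
# print(flag_anomalies(vibration_readings))
```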

  • Is AI Sustainable?

    In this article we will explore sustainability and how it relates to AI technologies. To get there we will first consider AI safety and the challenges that exist in designing safe and responsible AI. AI technology such as ChatGPT should be designed to be safe. I don't think many would argue with having this as a goal, particularly professional engineers, who have a duty to regard the public welfare as paramount. However, ChatGPT is not designed in the traditional sense. The design of ChatGPT is very much a black box and something we don't understand. And what we don't understand we can't control, and therein lies the rub. How can we make ChatGPT safe when we don't understand how it works? ChatGPT can be described as a technology that learns and, in a sense, designs itself. We feed it data, and through reinforcement learning we shape its output, with limited success, to be more of what we want and less of what we don't want. Even the guard rails used to improve safety are for the most part blunt and crude instruments with their own vulnerabilities. In an attempt to remove biases, new biases can be introduced. In some cases, guard rails change the output to be what some believe the answer should be rather than what the data reveals. Not only is this a technical challenge but also an ethical dilemma that needs to be addressed. The PLUS Decision Making model developed by The Ethics Resource Center can help organizations make better decisions with respect to AI: P = Policies: Is it consistent with my organization's policies, procedures and guidelines? L = Legal: Is it acceptable under the applicable laws and regulations? U = Universal: Does it conform to the universal principles/values my organization has adopted? S = Self: Does it satisfy my personal definition of right, good and fair? These questions do not guarantee ethical decisions are made. They instead help to ensure that ethical factors are considered. In the end, however, it comes down to personal responsibility and wanting to behave ethically. Some have said that AI safety is dead, or at least a low priority, in the race to develop Artificial General Intelligence (AGI). This sounds similar to the ongoing tension between production and safety, or quality, or security, or any of the other outcomes organizations are expected to achieve. We have always needed to balance what we do in the short term against long-term interests. In fact, this is what it means to be sustainable: "meeting the needs of the present without compromising the ability of future generations to meet their own needs." - United Nations. This is another test we could add to the PLUS model. S = Sustainability: Does this decision lead to meeting the needs of the present without sacrificing the ability of future generations to meet their own needs? I believe that question should be at the top of the list of questions being considered today. Is our pursuit of AGI sustainable with respect to human flourishing? AI sustainability is perhaps what drives the need for AI safety, security, quality, legal, and ethical considerations. For example, just as sustainability requires balancing present needs with future well-being, prioritizing AI safety safeguards against unforeseen risks and ensures AI technology serves humanity for generations to come. It is sustainability that drives our need for safety. Instead of asking "Is AI safe?", perhaps we should be asking "Is AI sustainable?"

  • Three Conditions for Responsible and Safe AI Practice

    Many organizations are embracing AI to advance their goals. However, ensuring the public's well-being requires AI practices to meet three critical conditions: Legality: AI development and use must comply with relevant laws and regulations, safeguarding fundamental rights and freedoms. Ethical Alignment: AI practices must adhere to ethical principles and established moral standards. Societal Benefit: AI applications should be demonstrably beneficial, improving the lives of individuals and society as a whole. Failing to satisfy any of these conditions can lead to both mission failure for the organization and negative societal impacts for the public.

  • The AI Gold Rush: When Customers Become Collateral Damage in the Search for Data

    The tech landscape these days is reminiscent of a gold rush, with companies scrambling for a new treasure: customer data. But in this pursuit, the focus on the customer has shifted. Companies are increasingly looking to mine (or perhaps exploit) their customer data to feed expanding AI systems. Instead of striving to deliver exceptional goods and services for their customers, companies are viewing customers as a means to an end: fuel for their AI engines, shiny generative models, and machine learning. The question is, how far will they go to acquire data? This question applies not only to tech giants. Every software company with AI aspirations will face this dilemma. To secure enough data, vendors are now in a frenzy not unlike the gold rush days. They are revising EULAs (End User License Agreements), updating terms and conditions, and in some cases scraping as much data as they can get hold of before regulations possibly close the door. It seems anything goes in the race to acquire access to enough data to build a compelling AI experience. Let's take a look at some recent examples: Zoom: their entanglement in an AI privacy controversy raises red flags (link). Adobe: their recent terms clarification regarding their updated EULA (link). Microsoft: the recent backtracking on their "Recall" feature after privacy concerns surfaced is another example (link). It's important to mention that OpenAI, Microsoft, and Google (to name a few) have already scraped much (if not all) of the internet to train their generative AI models, apparently without consent or respect for copyright laws. And here's the concerning part: with the ubiquity of cloud storage and applications, anything you create or store online within a platform could become fair game for these hungry AI systems. Even content (documents, audio, video, artwork, images, etc.) that is created locally using other tools but stored in these platforms could be used. While companies may claim access to your data is for a better user experience, there is more at stake. It's balancing stakeholder expectations with customer values (social license) and evolving legal rights concerning data privacy and content ownership. The decisions now being made are more than just technical; they're deeply ethical and increasingly legal in nature. The acquisition of data is creating a slippery ethical slope, with customers at risk of becoming collateral damage in the pursuit of an AI advantage. When customers become a means to an end, you will get that end but not any customers. (the cybernetics law of Inevitable Ethical Inadequacy, paraphrased) The goals we set are important to achieving success in business and in life, but it is how we achieve these goals that defines who we are and what we become; it defines our character. When you lose sight of the goal to satisfy customers, you risk not only your integrity and reputation, but also your entire business. "It is impossible to design a system so perfect that no one needs to be good." (T.S. Eliot) Let's not fail to be good in all our endeavours.

  • A Safety Model for AI Systems

    As a framework, I thought Nancy Leveson's Hierarchical Safety Model, which incorporates Rasmussen's risk ladder, offers the right level of analysis to further the discussions regarding responsible and safe AI systems. Leveson is a professor at MIT and the author of what is known as STAMP/STPA, a systems approach to risk management. In a nutshell, instead of thinking about risk only in terms of threats and impacts, she suggests we consider systems as containing hazardous processes that create the conditions for risk to manifest and propagate. This holistic approach is used in aerospace along with other high-risk endeavours. The following diagram is a slightly modified version of her model outlining engineering activities across system design/analysis and system operations. This framework also shows where government, regulators, and corporate policy intersect, which is critical to staying between the lines and ahead of risk. At this level of analysis we are talking about AI systems (i.e., engineered systems), not about systems that use AI technology (embedded AI); however, the framework could be extended to support the latter. A key takeaway is that AI engineering must incorporate and ensure responsible and safe design and practice across the socio-technical system, not just the AI technology. This is where professional AI engineers are most helpful and needed. I'm interested to hear your thoughts on this.
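
    To make the idea of a hierarchical control structure more concrete, here is a minimal sketch, my own illustration rather than Leveson's notation, of how the control layers around an AI system and their potential unsafe control actions might be captured for an STPA-style review. All levels, names, and actions are assumptions.

```python
# Minimal sketch of an STPA-style hierarchical control structure for an AI system.
# Levels, controllers, and unsafe control actions are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Controller:
    name: str
    controls: str                      # the process or controller below it
    control_actions: list[str]
    unsafe_control_actions: list[str] = field(default_factory=list)

control_structure = [
    Controller("Regulator", "Company management",
               ["issue AI regulations", "audit compliance"],
               ["regulation issued too late to address deployed systems"]),
    Controller("Company management", "Engineering / operations",
               ["set AI policy", "allocate resources"],
               ["policy not updated after model retraining"]),
    Controller("Engineering / operations", "AI system",
               ["approve model release", "monitor model behaviour"],
               ["release approved without evaluating failure modes"]),
]

# An STPA-style pass would examine each unsafe control action and the hazardous
# process states under which it could lead to a loss.
for c in control_structure:
    for uca in c.unsafe_control_actions:
        print(f"{c.name} -> {c.controls}: potential unsafe control action: {uca}")
```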

  • Model Convergence: The Erosion of Intellectual Diversity in AI

    As artificial intelligence models strive for greater accuracy, an unexpected phenomenon is emerging: the convergence of responses across different AI platforms. This trend raises concerns about the potential loss of diverse perspectives in AI-generated content. Have you noticed that when posing questions to various generative AI applications like ChatGPT, Gemini, or Claude, you often receive strikingly similar answers? For instance, requesting an outline on a specific topic typically yields nearly identical responses from these different models. Given the vast array of human perspectives on any given subject, one might expect AI responses to reflect this diversity. However, this is increasingly not the case. Model convergence occurs when multiple AI models, despite being developed by different organizations, produce remarkably similar outputs for the same inputs. This phenomenon can be attributed to several factors: Shared training data sources Similar model architectures Evaluation metrics that prioritize factual accuracy and coherence over diversity of thought While consistency and accuracy are crucial in many applications of AI, they may not always be the ideal outcome, particularly in scenarios where users seek to explore a breadth of ideas or conduct research on complex topics. The convergence of AI models towards singular responses could potentially limit the exposure to alternative viewpoints and novel ideas. This trend raises important questions about the future of AI-assisted learning and research: How can we maintain intellectual diversity in AI-generated content? What are the implications of this convergence for critical thinking and innovation? How might we design AI systems that can provide a range of perspectives while maintaining accuracy? As AI continues to play an increasingly significant role in information dissemination and decision-making processes, addressing these questions becomes crucial to ensure that AI enhances rather than constrains our intellectual horizons. What do you think? Have you noticed this behaviour? Do you think model convergence is a problem?
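
    One rough way to examine the convergence described here is to quantify how similar responses from different models are for the same prompt. The sketch below uses a simple token-overlap (Jaccard) score; the model names and responses are placeholders, and a real study would use embeddings or human judgment.

```python
# Minimal sketch: quantify how similar responses from different models are.
# Responses are placeholders; a real study would use embeddings or human review.
from itertools import combinations

def jaccard_similarity(a: str, b: str) -> float:
    """Token-overlap similarity between two responses (0 = disjoint, 1 = identical)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

responses = {
    "model_a": "An outline should begin with an introduction, then key points, then a conclusion.",
    "model_b": "Begin the outline with an introduction, follow with key points, end with a conclusion.",
    "model_c": "Start with context, explore competing perspectives, and close with open questions.",
}

# High pairwise scores across models for the same prompt would be one
# (admittedly crude) signal of the convergence discussed above.
for (name1, r1), (name2, r2) in combinations(responses.items(), 2):
    print(f"{name1} vs {name2}: {jaccard_similarity(r1, r2):.2f}")
```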
