  • Protect your Value Chain from AI Risk

    This year will mark the end of unregulated use of AI for many organizations. It has already happened in the insurance sector (State of Colorado), and other jurisdictions are not far behind. AI safety regulations and responsible-use guidelines are forthcoming. Organizations must now learn to govern their use of AI across their value chain to protect stakeholders from preventable risk. This will require building Responsible AI and/or AI Safety Programs to deliver on obligations and contend with AI-specific risk. To stay ahead of AI risk, you can no longer afford to wait. Ethical and forward-looking organizations have already started to build out AI Safety and Responsible Use Programs. Don't be left behind. Take steps today to protect your value chain.

  • How to Benefit from AI Technology

    "We are really bad at adopting new technology. What we are worse at is exploiting new technology." - Eliyahu Goldratt

    Achieving Breakthrough Benefits

    Artificial Intelligence (AI) holds the promise of improving efficiency, along with many other outcomes: some good, some bad, and some good mixed with bad. Some organizations will adopt AI and receive the incremental benefits associated with increased efficiency. Others, however, will not only adopt this technology but exploit it, receiving benefits that compound over time. Eliyahu Goldratt, the father of the Theory of Constraints, offers four questions to help you transform your operations using technology, including AI. The key is first understanding the power the new technology offers.

    Ensuring Responsible Use

    Knowing how to use this technology in a manner that provides benefit while keeping risk below acceptable levels is what is most needed now. And when it comes to risk, waiting until something bad happens before improving is not the best strategy. That's why we recommend organizations consider the following three questions with respect to their use of AI technologies:

    1. Is our code of ethics adequate to address the practice of AI technology in our organization?
    2. What policies, standards, or guidelines should be established or amended to ensure our responsible use of AI systems?
    3. What should we do differently to protect stakeholders from the negative effects of our use of AI technologies?

    We encourage you to answer these questions carefully and thoughtfully, as they will guide your adoption of AI technologies and systems. Should you need help working through these questions and building out a Responsible AI program for your organization, please reach out to us. Our advanced program is uniquely suited to help you take a proactive and integrative approach to meeting obligations, including those associated with responsible AI.

  • Smarter Than Human AI - Still a Long Way to Go?

    The rapidly advancing field of artificial intelligence, particularly large language models (LLMs), is constantly pushing the boundaries of what machines can achieve. However, directly comparing LLMs to human intelligence presents a nuanced challenge. Unlike the singular focus of traditional AI, human cognition encompasses a kaleidoscope of distinct but interconnected abilities, often categorized as "intelligences." Let's compare these twelve intelligences with the current capabilities of LLMs.

    1. Logical-mathematical prowess: Humans effortlessly solve equations, analyze patterns, and navigate complex numerical calculations. While LLMs are trained on vast data sets, their ability to perform these tasks falls short of the intuitive understanding and flexibility we exhibit.
    2. Linguistic mastery: We wield language with eloquence, weaving words into narratives, arguments, and expressions of creative genius. LLMs, while capable of generating human-like text, often struggle with context, emotional nuance, and the spark of true creative expression.
    3. Bodily-kinesthetic agility: Our ability to move with grace, express ourselves through dance, and manipulate objects with dexterity represents a realm inaccessible to LLMs, limited by their purely digital existence.
    4. Spatial intuition: From navigating physical environments to mentally rotating objects, humans excel in spatial reasoning. While LLMs are learning, their understanding of spatial concepts lacks the natural and intuitive edge we possess.
    5. Musical understanding: The human capacity to perceive, create, and respond to music with emotional depth remains unmatched. LLMs can compose music, but they lack the deep understanding and emotional connection that fuels our musicality.
    6. Interpersonal intelligence: Building relationships, navigating social dynamics, and understanding emotions are complex human strengths. LLMs, though improving, struggle to grasp the intricacies of human interaction and empathy.
    7. Intrapersonal awareness: Our ability to reflect on ourselves, understand our emotions, and set goals distinguishes us as unique individuals. LLMs lack the self-awareness and introspection necessary for this type of intelligence.
    8. Existential contemplation: Pondering life's big questions and seeking meaning are quintessentially human endeavours. LLMs, despite their ability to process information, lack the sentience and consciousness required for such philosophical contemplation.
    9. Moral reasoning: Making ethical judgments and navigating right and wrong are hallmarks of human intelligence. LLMs, while trained on moral frameworks, lack the nuanced understanding and the ability to adapt those frameworks to new situations that we possess.
    10. Naturalistic connection: Our ability to connect with nature, understand ecological systems, and appreciate natural beauty lies beyond the reach of LLMs. Their understanding of nature, while informative, lacks the embodied experience and emotional connection that fuels our appreciation.
    11. Spiritual exploration: The human yearning for connection with something beyond ourselves is a deeply personal and subjective experience that LLMs cannot replicate.
    12. Creative expression: Humans innovate, imagine new possibilities, and express themselves through various art forms with unmatched originality and emotional depth. LLMs, although capable of creative output within defined parameters, lack the spark of true creativity.

    LLMs represent powerful tools with rapidly evolving capabilities. However, their intelligence remains distinct from the multifaceted and interconnected nature of human intelligence. Each of our twelve intelligences contributes to the unique tapestry of our being. While LLMs may excel in specific areas, they lack the holistic understanding and unique blend of intelligences that define us as humans. As we explore the future of AI, understanding these differences is crucial. LLMs have a long way to go before they can match the full spectrum of human intelligence, but through collaboration they can enhance and augment our capabilities, not replace them. The journey continues, and further exploration remains essential. What are your thoughts on the comparison between human and machine intelligence? Let's continue the dialogue.

    Note: The theory of multiple intelligences, while accepted in some fields, is criticized in others. This demonstrates that more research and study is needed in cognitive science, and that claims regarding "Smarter Than Human AI" should be taken with a healthy degree of skepticism.

  • The Critical Role of Professional Engineers in Canada's AI Landscape

    Rapid advancements in AI technology present a double-edged sword: exciting opportunities alongside significant risks. While Canada is a contributor to the field, it lacks a cohesive national strategy to harness innovation and economic benefits while safeguarding the well-being of Canadians. Federal and provincial governments are crafting legislation and policies, but these efforts are disjointed, slow-moving, and unlikely to address current and emerging risks. Regulations arising from Bill C-27, for example, are expected to take years to implement, falling short of the necessary agility. Proposed strategies often emphasize establishing entirely new AI governance frameworks. Adding a new layer of regulations often creates overlap and confusion, hindering progress. It also overlooks the protections already offered by existing laws, regulatory bodies, and standards organizations.

    One of the areas being overlooked is the role of Professional Engineers. Professional engineering in Canada is uniquely positioned to lead the charge in responsible AI development. With legislative authority, self-governance, and a robust code of ethics, engineers already have the means to ensure responsible AI practices. Professional engineers bring a wealth of benefits to the table. Their deep understanding of technical systems and rigorous training in risk assessment make them ideally suited to design, develop, and implement AI solutions that are safe, reliable, and ethical. Furthermore, their commitment to upholding professional standards fosters public trust in AI technologies.

    Provincial regulators must act now to elevate engineering's role in the AI landscape. Here are steps that might be considered:

    • Provincial engineering regulators should collaborate with federal and provincial governments to ensure existing regulatory frameworks are adapted to address AI-specific risks and opportunities.
    • Professional engineering associations should develop and deliver training programs that equip engineers with the necessary skills and knowledge to develop and implement responsible AI.
    • Engineers should actively participate in the development of AI standards and best practices to ensure responsible development and deployment of AI technologies.
    • Governments and industry should work together to create funding opportunities that support research and development in responsible AI led by professional engineers.
    • Provincial engineering regulators, in collaboration with professional engineering associations and stakeholders, should explore the creation of a specialized AI Engineering practice and develop a licensing framework for this practice. Such a framework would ensure engineers possess the specialized knowledge and experience required to develop and implement safe and ethical AI solutions.

    By taking these steps, Canada can leverage the expertise of professional engineers right now to ensure responsible AI development and secure its position as a leader in the global AI landscape.

  • AI in PSM: A Double-Edged Sword for Process Safety Management

    Process safety management (PSM) stands as a vital defence against hazards in high-risk industries. Yet even the most robust systems require constant evaluation and adaptation. Artificial intelligence (AI) has emerged as a transformative force, promising both incredible opportunities and significant challenges for how we manage risk. In this article, we explore seven key areas where AI could reshape PSM, acknowledging both its potential and its limitations.

    1. From Reactive to Predictive: Navigating the Data Deluge. AI's ability to analyze vast datasets could revolutionize decision-making. Imagine not just recommending which maintenance project to prioritize, but predicting potential failures before they occur (a brief illustrative sketch follows this excerpt). However, harnessing this potential requires overcoming data challenges. Integrating disparate data sources and ensuring their quality are crucial steps toward reliable predictions and avoiding the pitfalls of biased or incomplete information.

    2. Taming the Change Beast: Balancing Innovation with Risk. Change, planned or unplanned, can disrupt even the most robust safety systems. AI, used intelligently, could analyze the impact of proposed changes on processes, people, and procedures, potentially mitigating risks and fostering informed decision-making. However, overreliance on AI for risk assessment could create blind spots, neglecting the nuanced human understanding of complex systems and the potential for unforeseen consequences.

    3. Bridging the Gap: Real-Time vs. Paper Safety. The chasm between documented procedures and actual practice can pose a significant safety risk. AI-powered real-time monitoring could offer valuable insights into adherence to standards and flag deviations promptly. Yet concerns about privacy and potential misuse of such data cannot be ignored. Striking a balance between effective monitoring and ethical data collection is essential.

    4. Accelerated Learning: Mining Data for Greater Safety, with Caution. Applying deep learning to HAZOPs, PHAs, and risk assessments could uncover patterns and insights not previously discovered. However, relying solely on assisted intelligence could overlook crucial human insights and nuances, potentially missing critical red flags. AI should be seen as a tool to support, not replace, human expertise.

    5. Beyond Checklists: Measuring True PSM Effectiveness. Moving beyond simply "following the rules" toward measuring the effectiveness of controls in managing risk remains a core challenge for PSM. While AI can offer valuable data-driven insights into risk profiles, attributing cause and effect and understanding complex system interactions remain complexities that require careful interpretation and human expertise.

    6. Breaking the Silo: Integrating PSM into the Business Fabric, Carefully. Integrating safety considerations into business decisions through AI holds immense potential for a holistic approach. At the same time, concerns about unintended consequences and potential conflicts between safety and economic goals must be addressed. Transparency and open communication are essential to ensure safety remains a core value, not a mere metric.

    7. The Elusive Question: Proving "Safe Enough". The ultimate challenge? Guaranteeing absolute safety. While AI cannot achieve the impossible, it can offer unparalleled data-driven insights into risk profiles, enabling organizations to continuously improve and confidently move toward a safer state. However, relying solely on AI-driven predictions could mask unforeseen risks and create a false sense of security. True safety demands constant vigilance and a healthy dose of skepticism.

    AI in PSM presents a fascinating double-edged sword. By carefully considering its potential and pitfalls, we can usher in a future where intelligent technologies empower us to create a safer, more efficient world, without losing sight of the human element that will always remain crucial in managing complex risks. What are your thoughts on the role of AI in Process Safety Management (PSM)?
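    To make the predictive idea in area 1 concrete, here is a minimal, illustrative sketch of one way to flag anomalous process-sensor readings using scikit-learn's IsolationForest. The file name, column names, and contamination rate are assumptions for illustration, not a reference implementation of any particular PSM tool.

```python
# Minimal sketch: flagging unusual process-sensor readings that may
# warrant maintenance attention. All names and values are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical historical readings: one row per timestamp.
readings = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])
features = readings[["temperature", "pressure", "vibration"]]

# Unsupervised anomaly detector; `contamination` is the assumed
# fraction of anomalous samples in the historical data.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(features)

# predict() returns -1 for anomalies, 1 for normal readings.
readings["anomaly"] = detector.predict(features)
flagged = readings[readings["anomaly"] == -1]
print(flagged[["timestamp", "temperature", "pressure", "vibration"]])
```

    As the article stresses, such flags are prompts for human review and expert judgment, not verdicts about safety.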

  • Is AI Sustainable?

    In this article we will explore sustainability and how it relates to AI technologies. To get there, we will first consider AI safety and the challenges of designing safe and responsible AI.

    AI technology such as ChatGPT should be designed to be safe. I don't think many would argue with having this as a goal, particularly professional engineers, who have a duty to regard the public welfare as paramount. However, ChatGPT is not designed in the traditional sense. The design of ChatGPT is very much a black box and something we don't understand. And what we don't understand we can't control, and therein lies the rub. How can we make ChatGPT safe when we don't understand how it works? ChatGPT can be described as a technology that learns and, in a sense, designs itself. We feed it data and, through reinforcement learning, shape its output, with limited success, to be more of what we want and less of what we don't want. Even the guardrails used to improve safety are for the most part blunt and crude instruments with their own vulnerabilities. In an attempt to remove biases, new biases can be introduced. In some cases, guardrails change the output to be what some believe the answer should be rather than what the data reveals. This is not only a technical challenge but also an ethical dilemma that needs to be addressed.

    The PLUS decision-making model developed by The Ethics Resource Center can help organizations make better decisions with respect to AI:

    • P = Policies: Is it consistent with my organization's policies, procedures, and guidelines?
    • L = Legal: Is it acceptable under the applicable laws and regulations?
    • U = Universal: Does it conform to the universal principles/values my organization has adopted?
    • S = Self: Does it satisfy my personal definition of right, good, and fair?

    These questions do not guarantee that ethical decisions are made. They instead help ensure that ethical factors are considered. In the end, it comes down to personal responsibility and wanting to behave ethically.

    Some have said that AI safety is dead, or at least a low priority, in the race to develop Artificial General Intelligence (AGI). This sounds similar to the ongoing tensions between production and safety, quality, security, or any of the other outcomes organizations are expected to achieve. We have always needed to balance what we do in the short term against long-term interests. In fact, this is what it means to be sustainable: "meeting the needs of the present without compromising the ability of future generations to meet their own needs." - United Nations

    This suggests another test we could add to the PLUS model:

    • S = Sustainability: Does this decision meet the needs of the present without sacrificing the ability of future generations to meet their own needs?

    I believe that question should be at the top of the list of questions being considered today. Is our pursuit of AGI sustainable with respect to human flourishing? AI sustainability is perhaps what drives the need for AI safety, security, quality, legal, and ethical considerations. Just as sustainability requires balancing present needs with future well-being, prioritizing AI safety safeguards against unforeseen risks and ensures AI technology serves humanity for generations to come. It is sustainability that drives our need for safety. Instead of asking "Is AI safe?", perhaps we should be asking "Is AI sustainable?"
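    As a small illustration of how the extended PLUS model might be operationalized, here is a minimal sketch encoding the five tests as a review checklist. The structure and function names are hypothetical; only the questions come from the article.

```python
# Minimal sketch: the PLUS model plus the proposed Sustainability test,
# encoded as a decision-review checklist. Purely illustrative.
PLUS_S = {
    "Policies": "Is it consistent with my organization's policies, "
                "procedures, and guidelines?",
    "Legal": "Is it acceptable under the applicable laws and regulations?",
    "Universal": "Does it conform to the universal principles/values "
                 "my organization has adopted?",
    "Self": "Does it satisfy my personal definition of right, good, and fair?",
    "Sustainability": "Does this decision meet the needs of the present "
                      "without sacrificing the ability of future "
                      "generations to meet their own needs?",
}

def failed_tests(answers: dict) -> list:
    """Return the names of tests the proposed decision does not satisfy."""
    return [test for test, passed in answers.items() if not passed]

# Hypothetical review of a proposed AI deployment decision.
review = {"Policies": True, "Legal": True, "Universal": True,
          "Self": True, "Sustainability": False}
print("Revisit before proceeding:", failed_tests(review) or "none")
```

    As the article notes, a checklist like this cannot guarantee ethical decisions; it only helps ensure the ethical factors are considered.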

  • Three Conditions for Responsible and Safe AI Practice

    Many organizations are embracing AI to advance their goals. However, ensuring the public's well-being requires AI practices to meet three critical conditions:

    • Legality: AI development and use must comply with relevant laws and regulations, safeguarding fundamental rights and freedoms.
    • Ethical Alignment: AI practices must adhere to ethical principles and established moral standards.
    • Societal Benefit: AI applications should be demonstrably beneficial, improving the lives of individuals and society as a whole.

    Failing to satisfy any of these conditions can lead to both mission failure for the organization and negative impacts for the public.

  • The AI Gold Rush: When Customers Become Collateral Damage in the Search for Data

    The tech landscape these days is reminiscent of a gold rush, with companies scrambling for a new treasure: customer data. But in this pursuit, the focus on the customer has shifted. Companies are increasingly looking to mine (or perhaps exploit) their customer data to feed expanding AI systems. Instead of striving to deliver exceptional goods and services for their customers, companies are viewing customers as a means to an end: fuel for their AI engines, shiny generative models, and machine learning. The question is, how far will they go to acquire data?

    This question applies not only to tech giants. Every software company with AI aspirations will face this dilemma. To secure enough data, vendors are now in a frenzy not unlike the gold rush days. They are revising EULAs (End User License Agreements), updating terms and conditions, and in some cases scraping as much data as they can get hold of before regulations close the door. It seems anything goes in the race to acquire enough data to build a compelling AI experience. Let's look at some recent examples:

    • Zoom: Their entanglement in an AI privacy controversy raises red flags. (link)
    • Adobe: Their recent clarification regarding their updated EULA. (link)
    • Microsoft: Their backtracking on the "Recall" feature after privacy concerns surfaced. (link)

    It's important to mention that OpenAI, Microsoft, and Google (to name a few) have already scraped much, if not all, of the internet to train their generative AI models, apparently without consent or respect for copyright laws. And here's the concerning part: with the ubiquity of cloud storage and applications, anything you create or store online within a platform could become fair game for these hungry AI systems. Even content (documents, audio, video, artwork, images, etc.) created locally using other tools but stored in these platforms could be used.

    While companies may claim access to your data is for a better user experience, more is at stake: balancing stakeholder expectations with customer values (social license) and evolving legal rights concerning data privacy and content ownership. The decisions now being made are more than just technical; they're deeply ethical and increasingly legal in nature. The acquisition of data is creating a slippery ethical slope, with customers at risk of becoming collateral damage in the pursuit of an AI advantage.

    "When customers become a means to an end, you will get that end but not any customers." – The cybernetics law of Inevitable Ethical Inadequacy (paraphrased)

    The goals we set are important to achieving success in business and in life, but it is how we achieve these goals that defines who we are and what we become; it defines our character. When you lose sight of the goal of satisfying customers, you risk not only your integrity and reputation but also your entire business.

    "It is impossible to design a system so perfect that no one needs to be good" – T.S. Eliot

    Let's not fail to be good in all our endeavours.

  • A Safety Model for AI Systems

    As a framework, I think Nancy Leveson's Hierarchical Safety Model, which incorporates Rasmussen's risk ladder, offers the right level of analysis to further the discussion of responsible and safe AI systems. Leveson is a professor at MIT and the author of what is known as STAMP/STPA, a systems approach to risk management. In a nutshell, instead of thinking about risk only in terms of threats and impacts, she suggests we consider systems as containing hazardous processes that create the conditions for risk to manifest and propagate. This holistic approach is used in aerospace and other high-risk endeavours. The following diagram is a slightly modified version of her model, outlining engineering activities across system design/analysis and system operations. The framework also shows where government, regulators, and corporate policy intersect, which is critical to staying between the lines and ahead of risk. At this level of analysis we are talking about AI systems (i.e., engineered systems), not systems that merely use AI technology (embedded AI), though the model could be extended to support the latter. A key takeaway is that AI engineering must incorporate and ensure responsible and safe design and practice across the socio-technical system, not just the AI technology. This is where professional AI engineers are most helpful and needed. I'm interested to hear your thoughts on this.
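    As a rough illustration of the systems view described above, here is a minimal sketch of a single control loop from a STAMP/STPA-style safety control structure, applied to a hypothetical AI system. All names are invented for illustration; a real STPA analysis is far richer than this.

```python
# Minimal sketch: one control loop in a STAMP/STPA-style safety
# control structure. All names are illustrative, not prescriptive.
from dataclasses import dataclass, field

@dataclass
class ControlLoop:
    controller: str               # issues control actions
    controlled_process: str       # the process being kept safe
    control_actions: list = field(default_factory=list)
    feedback: list = field(default_factory=list)

# Hypothetical loop: a human operator overseeing a deployed AI model.
loop = ControlLoop(
    controller="Human operator",
    controlled_process="Deployed AI model serving predictions",
    control_actions=["approve model update", "roll back model",
                     "adjust decision threshold"],
    feedback=["prediction audit logs", "drift alerts", "incident reports"],
)

# In STPA terms, hazards arise when a control action is unsafe in
# context, e.g., approving a model update without reviewing drift alerts.
print(f"{loop.controller} -> {loop.controlled_process}")
print("Actions:", ", ".join(loop.control_actions))
print("Feedback:", ", ".join(loop.feedback))
```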

  • Model Convergence: The Erosion of Intellectual Diversity in AI

    As artificial intelligence models strive for greater accuracy, an unexpected phenomenon is emerging: the convergence of responses across different AI platforms. This trend raises concerns about the potential loss of diverse perspectives in AI-generated content. Have you noticed that when posing questions to various generative AI applications like ChatGPT, Gemini, or Claude, you often receive strikingly similar answers? For instance, requesting an outline on a specific topic typically yields nearly identical responses from these different models. Given the vast array of human perspectives on any given subject, one might expect AI responses to reflect this diversity. However, this is increasingly not the case.

    Model convergence occurs when multiple AI models, despite being developed by different organizations, produce remarkably similar outputs for the same inputs. This phenomenon can be attributed to several factors:

    • Shared training data sources
    • Similar model architectures
    • Evaluation metrics that prioritize factual accuracy and coherence over diversity of thought

    While consistency and accuracy are crucial in many applications of AI, they may not always be the ideal outcome, particularly where users seek to explore a breadth of ideas or conduct research on complex topics. The convergence of AI models toward singular responses could limit exposure to alternative viewpoints and novel ideas. This trend raises important questions about the future of AI-assisted learning and research:

    • How can we maintain intellectual diversity in AI-generated content?
    • What are the implications of this convergence for critical thinking and innovation?
    • How might we design AI systems that provide a range of perspectives while maintaining accuracy?

    As AI plays an increasingly significant role in information dissemination and decision-making, addressing these questions becomes crucial to ensure that AI enhances rather than constrains our intellectual horizons. What do you think? Have you noticed this behaviour? Do you think model convergence is a problem?
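    As a back-of-the-envelope illustration, here is a minimal sketch of one way to quantify convergence: measure pairwise similarity between responses from different models. The response strings are placeholders, and TF-IDF similarity is a crude lexical proxy; embedding-based measures would be more faithful.

```python
# Minimal sketch: quantifying how similar different models' answers are.
# The responses below are placeholders, not real model outputs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

responses = {
    "model_a": "Outline: 1. Introduction 2. Key risks 3. Governance ...",
    "model_b": "Outline: 1. Introduction 2. Risks 3. Governance steps ...",
    "model_c": "An outline: 1. Intro 2. Major risks 3. Oversight ...",
}

# Represent each response as a TF-IDF vector over a shared vocabulary.
vectors = TfidfVectorizer().fit_transform(responses.values())

# Pairwise cosine similarity: values near 1.0 suggest convergent outputs.
similarity = cosine_similarity(vectors)
names = list(responses)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: {similarity[i, j]:.2f}")
```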

  • Navigating AI Compliance with Integrity

    Artificial Intelligence (AI) is on a trajectory to revolutionize industries from healthcare to finance. Its ability to analyze vast amounts of data and make informed decisions has streamlined processes and improved efficiency. However, the rise of AI also brings ethical considerations that cannot be overlooked. In this article, we delve into ethical considerations in AI compliance and how businesses can navigate this complex landscape with integrity.

    The Rise of Ethical Dilemmas

    As AI systems become more prevalent in our daily lives, questions surrounding privacy, bias, and accountability have come to the forefront. The ethical implications of AI are vast and multifaceted, requiring careful scrutiny and proactive measures to ensure compliance with ethical standards. In a world driven by data, it is imperative for organizations to uphold ethical principles while harnessing the power of AI technologies.

    Navigating the Ethical Tightrope

    When it comes to AI compliance, companies must walk a fine line between innovation and ethical responsibility. Transparency in AI algorithms, data privacy protection, and addressing bias in machine learning models are just a few of the aspects that demand attention. By cultivating a culture of ethics and integrity within their AI initiatives, businesses can build trust with consumers and stakeholders alike.

    The Role of Regulations

    Regulatory bodies are increasingly focusing on AI compliance to safeguard the rights and interests of individuals. Compliance with regulations such as the General Data Protection Regulation (GDPR) and emerging ethical AI frameworks is crucial for upholding ethical standards in AI development and deployment. By adhering to these regulations, organizations demonstrate their commitment to ethical practices and accountability.

    Ethical AI in Action

    One notable example of integrating ethics into AI development is explainable AI (XAI). XAI emphasizes transparency and interpretability in AI systems, ensuring that decisions made by AI models can be explained and understood by humans. This approach not only enhances accountability but also helps mitigate bias and discrimination in AI applications.

    Building a Sustainable Future

    As we navigate the complex terrain of AI compliance, it is essential to keep ethics at the forefront of technological advancement. By embracing ethical considerations and fostering a culture of integrity, businesses can pave the way for a sustainable future where AI innovation coexists harmoniously with ethical principles.

    In conclusion, the path to AI compliance is not free of challenges, but with a steadfast commitment to ethical values and integrity, organizations can navigate this terrain successfully. By prioritizing ethical considerations in AI development and deployment, businesses can not only comply with regulations but also earn the trust and confidence of their customers and stakeholders. Let's walk this ethical tightrope together, with integrity as our guiding light. Join the conversation about ethical considerations in AI compliance and share your thoughts on incorporating integrity into AI initiatives. Together, let's shape a future where AI technologies serve as a force for good, guided by ethical principles and a commitment to transparency and responsibility.
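    To ground the XAI discussion, here is a minimal sketch of one widely used interpretability technique, permutation importance, using scikit-learn. The synthetic dataset stands in for real compliance-relevant data; this illustrates the idea only, not a complete XAI solution.

```python
# Minimal sketch: permutation importance, a basic XAI technique that
# reveals which input features a model's predictions depend on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset (e.g., loan applications).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# larger drops mean the model leans more heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance drop = {importance:.3f}")
```

    Techniques like this support, rather than guarantee, the transparency and accountability the article calls for.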

  • Can AI Rescue Your Project?

    Project teams often find themselves caught in a cycle of constant execution, leaving little time for process improvement. This predicament has led many to seek technological solutions, with artificial intelligence (AI) emerging as the latest panacea for project management challenges. While AI undoubtedly offers significant potential, it's crucial to examine its role critically and understand its limitations in addressing the complex issues that lead to project failure.

    Gartner, a leading research and advisory company, predicts a seismic shift in project management practices. Their forecast suggests that by 2030, AI will manage 80% of project management tasks, leveraging advanced technologies such as big data analytics, machine learning, and natural language processing. This projection has sparked considerable interest and debate within the project management community. According to Gartner's research, AI is poised to transform project management across six key domains:

    1. Enhanced project selection and prioritization: AI algorithms promise to streamline the decision-making process, potentially leading to higher success rates and reduced human bias in project selection.
    2. Augmented PMO support: Automated monitoring and reporting tools are expected to enhance the project management office's ability to anticipate issues and operate more efficiently.
    3. Optimized project planning and reporting: AI-driven systems aim to automate time-consuming tasks, improve risk management, and provide real-time insights through advanced analytics.
    4. Implementation of virtual project assistants: AI-powered chatbots and digital assistants could offer immediate updates, task management support, and context-aware guidance.
    5. Advanced testing capabilities: The proliferation of automated testing facilities may lead to more thorough, efficient, and unbiased evaluation of complex projects.
    6. Evolution of the project manager's role: As AI assumes more administrative responsibilities, project managers will likely need to focus on developing soft skills, strategic thinking, and AI literacy.

    While these advancements present exciting opportunities, it's essential to consider their impact on project success rates. The Standish Group reports that only 35% of projects are deemed successful, despite an annual global investment of approximately $48 trillion in project-based work. This statistic raises a critical question: will the integration of AI truly address the fundamental issues causing project failure? To answer this, we must recognize that while technology can be an enabler of better project outcomes, it primarily enhances productivity rather than effectiveness. For AI to significantly improve project success rates, it must be strategically applied to address key challenges beyond mere efficiency gains. Projects typically fail due to a combination of factors that AI, in its current state, may not fully address:

    1. Inadequate project planning and strategy: AI can assist in data analysis and forecasting, but strategic decision-making still requires human insight and experience.
    2. Poor management of uncertainty and risk: While AI can identify patterns and potential risks, interpreting complex, context-dependent risks often requires human judgment and action.
    3. Insufficient capabilities for deliverable creation: AI tools can enhance productivity, but they cannot replace the specialized skills and innovation often needed to create project deliverables.
    4. Unrealistic expectations: AI may provide more accurate projections, but managing stakeholder expectations remains a human-centric skill.
    5. Ineffective change management: While AI can flag deviations from plans, successfully navigating organizational change requires empathy and leadership that AI cannot yet replicate.

    While AI presents exciting possibilities for project management, it should not be viewed as a silver bullet. To truly leverage AI's potential, organizations must integrate it thoughtfully into their project management practices, addressing both productivity and effectiveness. Project managers of the future will need to become adept at harnessing AI's capabilities while continuing to provide the strategic oversight, stakeholder management, and adaptive leadership that remain crucial to project success. As the project management landscape evolves, the most successful organizations will be those that strike a balance between technological innovation and human expertise, using AI as a powerful tool to augment, rather than replace, the critical thinking and interpersonal skills that drive project success. So, what do you think? Can AI save your project from failure?

© 2017-2025 Lean Compliance™ All rights reserved.