
  • Will Your Next Compliance Expert be AI?

    In this post we take a look at a new AI technology called ChatGPT from OpenAI. It can answer many of your questions, write code for you, and even create songs in the style of your favourite artists. Naturally, we were interested in whether it might be a replacement for a compliance expert, so we asked it some questions. Here is what we found. Why is compliance important? How do organizations improve their compliance? How do organizations meet their ESG objectives? How do organizations build trust? How do organizations contend with uncertainty and risk? How do promises help meet obligations? How do organizations become more proactive? And for fun ... what did ChatGPT think about Lean Compliance? I couldn't agree more with those principles. In terms of answering our questions, the answers were good. The poem was not half-bad either. However, when asked questions such as "What should our organization do?" or "What are our top compliance risks?" it of course could not answer. That is what a good compliance expert can provide, and why you will always need people in the compliance role. Decision making that involves taking risks is something that only people can answer for. As T.S. Eliot wrote, "It is impossible to design a system so perfect that no one needs to be good." Deciding what is good or bad is a human choice. Being good and using technology for good are also human decisions. I am sure that AI will continue to develop, and so will ChatGPT. It may one day find a home within organizations; so far the costs are prohibitive - "eye watering". However, it would be great to ask questions like: "Do we have a policy that covers xyz?", "What applicable regulations will this action impact?", "What commitments have we made to this ESG objective?", "What is our reputational risk if we go ahead with this action?" and so on.

  • Why you need to govern your use of AI

    Each organization will, and should, determine how it will govern the use of AI and the risks associated with using it. AI and its cousin, machine learning, are already being used by many organizations, and most likely by their suppliers as well. Much of this use is ungoverned and without oversight. There will be costs and side effects from using AI that we need to account for. Data used in AI will also need to be protected: if bad actors can corrupt your training data sets, you will end up with corrupted insights informing your decisions. The European Union is presently drafting guidelines for the protection of data sets used in machine learning to prevent corruption of outcomes from AI. This is perhaps better late than never, and we should expect more regulation in the future. How are you governing your use of AI? What standards are you using? How are you contending with ethical considerations? How are you handling the risks from using AI?

  • Can You Trust AI?

    Artificial intelligence (AI) is one of the most exciting and transformative technologies of our time. From healthcare to transportation, education to energy, AI has the potential to revolutionize nearly every industry and sector. However, as with any powerful technology, there are concerns about its potential misuse and the need for regulations to ensure that it is developed and used in a responsible and ethical manner. In response to these concerns, many countries are proposing legislation to govern the use of AI, including the European Union's AI Act, the UK National AI Strategy and Proposed AI Act, Canada's Artificial Intelligence and Data Act (AIDA), and USA’s NIST Artificial Intelligence Risk Management Framework. In this article, we will explore these regulatory efforts and the importance of responsible AI development and use. European Union AI Act The European Union's Artificial Intelligence Act is a proposed regulation that aims to establish a legal framework for the development and use of artificial intelligence (AI) in the European Union. The regulation is designed to promote the development and use of AI while at the same time protecting fundamental rights, such as privacy, non-discrimination, and the right to human oversight. The Commission puts forward the proposed regulatory framework on Artificial Intelligence with the following specific objectives: Ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values; Ensure legal certainty to facilitate investment and innovation in AI; Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; Facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation. One of the key features of the regulation is the identification of certain AI applications as "high-risk." These include AI systems used in critical infrastructure, transportation, healthcare, and public safety. High-risk AI systems must undergo a conformity assessment process before they can be deployed to ensure that they meet certain safety and ethical standards. The regulation also prohibits certain AI practices that are considered unacceptable, such as AI that manipulates human behaviour or creates deepfake videos without disclosure. This is designed to prevent the development and use of AI that can be harmful to individuals or society as a whole. Transparency and accountability are also important aspects of the regulation. AI developers must ensure that their systems are transparent, explainable, and accountable. They must also provide users with clear and concise information about the AI system's capabilities and limitations. This is designed to increase trust in AI systems and to promote the responsible development and use of AI. Member states will be responsible for enforcing the regulation, and non-compliance can result in significant fines. This is designed to ensure that AI developers and users comply with the regulation and that the use of AI is safe and ethical. Overall, the European Union's Artificial Intelligence Act represents an important step in the regulation of AI in the EU. It balances the benefits of AI with the need to protect fundamental rights and ensures that the development and use of AI is safe, ethical, and transparent. 
UK National AI Strategy and Proposed AI Act The UK national AI strategy, launched in November 2021, is a comprehensive plan to position the UK as a global leader in the development and deployment of artificial intelligence technologies by 2030. The strategy is based on four key pillars: research and innovation, skills and talent, adoption and deployment, and data and infrastructure. The first pillar, research and innovation, aims to support the development of AI technologies and their ethical use. This involves investing in research and development to create cutting-edge AI solutions that can be applied to various industries and fields. The strategy also emphasizes the importance of ethical considerations in AI development, such as fairness, accountability, transparency, and explainability. The second pillar, skills and talent, aims to ensure that the UK has a pipeline of diverse and skilled AI talent. This involves investing in education, training, and re-skilling programs to equip people with the necessary skills to work with AI technologies. The strategy recognizes the importance of diversity in the workforce, particularly in AI, and seeks to encourage more women and underrepresented groups to pursue careers in AI. The third pillar, adoption and deployment, focuses on encouraging businesses and public sector organizations to adopt and deploy AI technologies to drive productivity, innovation, and sustainability. This involves promoting the use of AI to solve real-world problems and improve business processes. The strategy also recognizes the need for regulations and standards to ensure that AI is used ethically and responsibly. The fourth pillar, data and infrastructure, aims to invest in digital infrastructure and ensure that data is shared securely and responsibly. This involves promoting the development of data sharing platforms and frameworks, while also ensuring that privacy and security are protected. The strategy also recognizes the importance of data interoperability and standardization to facilitate the sharing and use of data. With respect to risk and safety, the strategy acknowledges the potential risks associated with AI, such as biased or unfair outcomes, loss of privacy, and the potential for AI to be used for malicious purposes. To mitigate these risks, the strategy calls for the development of robust ethical and legal frameworks for AI, as well as increased transparency and accountability in AI systems. The UK AI Act is a proposed legislation aimed at regulating the development, deployment, and use of artificial intelligence (AI) systems in the United Kingdom. The Act includes the following key provisions: The creation of a new regulatory body called the AI Regulatory Authority to oversee the development and deployment of AI systems. The introduction of mandatory risk assessments for high-risk AI systems, such as those used in healthcare or transportation. The requirement for companies to disclose when AI is being used to make decisions that affect individuals. The prohibition of certain AI applications, including those that pose a threat to human safety or privacy, or those that perpetuate discrimination. The establishment of a voluntary code of conduct for companies developing AI systems. The provision of rights for individuals affected by AI systems, including the right to explanation and the right to challenge automated decisions. 
    Overall, the UK AI Act aims to balance the potential benefits of AI with the need to protect individuals from potential harm, ensure transparency and accountability, and promote ethical and responsible development and use of AI technology. Taken together, the UK National AI Strategy and the proposed AI Act emphasize the importance of responsible and sustainable AI development, and seek to ensure that the benefits of AI are realized while minimizing the risks and challenges that may arise. Canadian Artificial Intelligence and Data Act (AIDA) Bill C-27 proposes Canada's Artificial Intelligence and Data Act (AIDA), a new piece of legislation designed to create a framework for the responsible development and deployment of AI systems in Canada. The government aims to create a regulatory framework that promotes the responsible and ethical use of these technologies while balancing innovation and economic growth. AIDA is based on a set of principles that focus on privacy, transparency, and accountability. One of the key features of the bill is the establishment of the AI and Data Agency, a regulatory body that would oversee compliance with the proposed legislation. The agency would be responsible for developing and enforcing regulations related to data governance, transparency, accountability, and algorithmic bias. It would also provide guidance and support to organizations that use AI and data-related technologies. Governance requirements proposed under the AIDA are aimed at ensuring that anyone responsible for a high-impact AI system (i.e., one that could cause harm or produce biased results) takes steps to assess the system's impact, manage the risks associated with its use, monitor compliance with risk management measures, and anonymize any data processed in the course of regulated activities. The Minister designated by the Governor in Council to administer the AIDA is granted significant powers to make orders and regulations related to these governance requirements. These powers include the ability to order record collection, auditing, cessation of use, and publication of information related to the requirements, as well as the ability to disclose information obtained to other public bodies for the purpose of enforcing other laws. Transparency requirements proposed under the AIDA are aimed at ensuring that anyone who manages or makes available for use a high-impact AI system publishes a plain-language description of the system on a publicly available website. The description must include information about how the system is intended to be used, the types of content it is intended to generate, the decisions, recommendations or predictions it is intended to make, and the mitigation measures established as part of the risk management measures requirement. The Minister must also be notified as soon as possible if the use of the system results in or is likely to result in material harm. Finally, the penalties proposed under the AIDA for non-compliance with the governance and transparency requirements are significantly greater in magnitude than those found in Bill 64 or the EU's General Data Protection Regulation. They include administrative monetary penalties, fines for breaching obligations, and new criminal offences related to AI systems.
These offences include knowingly using personal information obtained through the commission of an offence under a federal or provincial law to make or use an AI system, knowingly or recklessly designing or using an AI system that is likely to cause harm and causes such harm, and causing a substantial economic loss to an individual by making an AI system available for use with the intent to defraud the public. Fines for these offences can range up to $25,000,000 or 5% of gross global revenues for businesses and up to $100,000 or two years less a day in jail for individuals. Bill C-27 will have a significant impact on businesses that work with AI by imposing new obligations and penalties for non-compliance. It could potentially make Canada the first jurisdiction in the world to adopt a comprehensive legislative framework for regulating the responsible deployment of AI. The government will have flexibility in how it implements and enforces the provisions of the bill related to AI, with specific details to be clarified after the bill's passage. Businesses can look to the EU and existing soft law frameworks for guidance on best practices. The bill also includes provisions for consumer privacy protection. US NIST AI Risk Management and Other Guidelines There are no regulations in the US specific to AI, however, several government agencies have issued guidelines and principles related to AI, and there have been proposals for new regulations. The White House Office of Science and Technology Policy (OSTP) issued a set of AI principles in January 2020, which are intended to guide federal agencies in the development and deployment of AI technologies. The principles emphasize the need for transparency, accountability, and safety in AI systems, and they encourage the use of AI to promote public good and benefit society. The "Artificial Intelligence Risk Management Framework (AI RMF 1.0)" has been published by the US National Institute of Standards and Technology (NIST) to offer guidance on managing risks linked with AI systems. The framework outlines a risk management approach that organizations can apply to evaluate the risks associated with their AI systems, including aspects such as data quality, model quality, and system security. The framework underlines the significance of transparency and explainability in AI systems and the establishment of clear governance structures for these systems. In addition, the Federal Trade Commission (FTC) has issued guidelines on the use of AI in consumer protection, and the Department of Defense has developed its own set of AI principles for use in military applications. There have also been proposals for new federal regulations related to AI. In April 2021, the National Security Commission on Artificial Intelligence (NSCAI) released a report that recommended a range of measures to promote the development and use of AI in the United States, including the creation of a national AI strategy and the establishment of new regulatory frameworks for AI technologies. In summary, while there are currently no federal regulations specific to AI in the United States, several government agencies have issued guidelines and principles related to AI, and there have been proposals for new regulations. The principles and guidelines emphasize the need for transparency, accountability, and safety in AI systems, and there is growing interest in developing new regulatory frameworks to promote the responsible development and use of AI technologies. 
    Conclusion Artificial intelligence (AI) is a rapidly evolving field that has the potential to transform numerous industries and sectors. However, with this growth comes the need for regulations to ensure that AI is developed and used responsibly and ethically. In recent years, several countries have proposed legislation to address these concerns, including the European Union's AI Act, the UK National AI Strategy and Proposed AI Act, Canada's Artificial Intelligence and Data Act (AIDA), and the USA's NIST Artificial Intelligence Risk Management Framework. The European Union's AI Act aims to establish a legal framework for the development and use of AI in the EU. It identifies certain AI applications as "high-risk" and requires them to undergo a conformity assessment process before deployment. The regulation also prohibits certain AI practices that are considered unacceptable and emphasizes the importance of transparency and accountability. The UK National AI Strategy and Proposed AI Act are designed to position the UK as a global leader in the development and deployment of AI technologies by 2030. The strategy focuses on research and innovation, skills and talent, adoption and deployment, and data and infrastructure, while the proposed AI Act includes provisions such as the creation of a new regulatory body and mandatory risk assessments for high-risk AI systems. Canada's Artificial Intelligence and Data Act (AIDA) proposes a framework for the responsible development and deployment of AI systems in Canada. The legislation includes provisions such as a requirement for AI developers to assess and mitigate the potential impacts of their systems and the establishment of a national AI advisory council. The US National Institute of Standards and Technology (NIST) has published the "Artificial Intelligence Risk Management Framework (AI RMF 1.0)", which provides guidance on managing the risks associated with AI systems. The framework also emphasizes the importance of transparency and explainability in AI systems, as well as the need to establish clear governance structures for AI systems. Overall, these proposed regulations and guidelines demonstrate the growing recognition of the need for responsible and ethical development and use of AI and highlight the importance of transparency, accountability, and risk management in AI systems, specifically those with high impact. Even though these regulations await further development and approval, it is incumbent on organizations to take reasonable precautions to ameliorate risk to protect the public from preventable harm arising from the use of AI. It is how well this is done that will largely determine if we can trust AI. As has been quoted before: "It is impossible to design a system so perfect that no one needs to be good" – T.S. Eliot. The question of trust lies with how "good" we will be in our use of AI. If you made it this far, you may be interested in learning more about this topic.
    Here are links to the legislation and guidelines referenced in this article: European Union AI Act (https://artificialintelligenceact.eu/); UK National AI Strategy (https://www.gov.uk/government/publications/national-ai-strategy); Canadian Bill C-27, AIDA (https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading); USA NIST AI Risk Management Framework (https://www.nist.gov/itl/ai-risk-management-framework). Also, if you are interested in developing an AI Risk & Compliance program to manage obligations with respect to the responsible and safe use of AI, consider joining our advanced program, "The Proactive Certainty Program™". More information can be found on our website.

  • Breaking the Illusion: The Case Against Anthropomorphizing AI Systems

    Artificial intelligence (AI) has become increasingly prevalent in our lives, and as we interact more and more with these systems, it's tempting to anthropomorphize them, or attribute human-like characteristics to them. We might call them "intelligent" or "creative," or even refer to them as "he" or "she." However, there are several reasons why we should avoid anthropomorphizing AI systems. First and foremost, AI is not human. AI systems are designed to mimic human behaviour and decision-making, but they don't have the same experiences, emotions, or motivations that humans do. Therefore, attributing human characteristics to AI can lead to false expectations and misunderstandings. For example, if we think of an AI system as "intelligent" in the same way we think of a human as intelligent, we may assume that the AI system can think for itself and make decisions based on moral or ethical considerations. In reality, AI systems are programmed to make decisions based on data and algorithms, and they don't have the capacity for empathy or morality. Secondly, anthropomorphizing AI systems can be misleading and even dangerous. When we think of an AI system as having human-like qualities, we may assume that it has the same limitations and biases as humans. However, AI systems can be far more accurate and efficient than humans in certain tasks, but they can also be prone to their own unique biases and errors. For example, if we anthropomorphize a facial recognition AI system, we may assume that it can accurately identify people of all races and genders, when in reality, many AI facial recognition systems have been found to be less accurate for people of color and women. Thirdly, anthropomorphizing AI can have negative consequences for our relationship with technology. By attributing human-like qualities to AI systems, we may become overly reliant on them and trust them more than we should. This can lead to a loss of agency and responsibility, as we may assume that the AI system will make the best decision for us without questioning its choices. Additionally, if we think of AI systems as having emotions or intentions, we may treat them differently than we would treat other technology, which can be a waste of resources and distract from more important uses of AI. While it's novel to anthropomorphize AI systems, we should be aware of the potential negative consequences of doing so. By acknowledging that AI systems are not human and avoiding attributing human-like qualities to them, we can have a more accurate understanding of their capabilities and limitations, and make better decisions about how to interact with them. How to Stop Humanizing AI Systems To prevent or stop anthropomorphizing AI systems, here are some steps that could be taken: Educate people : Educating people about the limitations and capabilities of AI systems can help them avoid attributing human-like qualities to them. Use clear communication: When developing and deploying AI systems, clear and concise communication about their functionality and purpose should be provided to users . Design non-human-like interfaces: Designing interfaces that are distinctively non-human-like can help avoid users attributing human-like qualities to AI systems. Avoid anthropomorphic language: Avoid using anthropomorphic language when referring to AI systems, such as calling them "smart" or "intelligent," as this can reinforce the idea that they are human-like. 
Emphasize the role of programming: Emphasizing that AI systems operate based on pre-programmed rules and algorithms, rather than human-like intelligence, can help users avoid anthropomorphizing them. Provide transparency: Providing transparency about how the AI system works, its decision-making process, and data sources can help users understand its limitations and avoid anthropomorphizing it. Overall, it's essential to ensure that AI systems are perceived and understood as the tools they are, rather than human-like entities. This can be achieved through education, clear communication, and thoughtful and responsible design.

  • The AI Dilemma: Exploring the Unintended Consequences of Uncontrolled Artificial Intelligence

    Artificial intelligence (AI) is a rapidly developing technology that has the potential to revolutionize the world in unprecedented ways. However, as its capabilities continue to expand, concerns are being raised about the lack of responsibility and safety measures in its development and deployment. The Center for Humane Technology's Tristan Harris and Aza Raskin recently presented the AI Dilemma, exploring the risks of uncontrolled AI and the need for responsible use. The Problem The parallels between the early days of social media and the development of AI are striking. Both technologies were created and scaled to the masses while we all hoped for the best, with users becoming the unwitting experiment, consenting to participate without fully understanding the potential risks. However, the consequences of AI could be far more severe, as it has the ability to interact with its environment in unpredictable ways. The risks of unchecked AI are vast. We are experiencing an uncontrolled, reinforcing learning loop creating exponential capabilities, but with unmitigated risks. In many ways, this is a race condition without any kill switch or means of regulating outcomes to keep AI operating in a responsible manner. This is a problem that we, as humans, have created, and one that we must address. A Solution The AI Dilemma raises important questions that we must address. Where are the safeguards, the brakes, and the kill switch? Who is responsible for the "responsible" use of AI, and when does the science experiment stop and responsible engineering begin? We must balance innovation with responsibility to ensure that AI is developed and used in ways that benefit society, not threaten it. A step we can take is to reinsert the engineering method into the development of AI. This means having a process to weigh the pros and cons, balance the trade-offs, and prioritize the safety, health, and welfare of the public. This will require more engineers, along with other professionals, in the loop, advocating for and practising responsible AI. The consequences of unchecked AI are substantial, and we must take action now to mitigate these risks. The AI Dilemma is a call to action, urging us to reevaluate our approach to AI and to prioritize the development and deployment of responsible AI. By doing so, we can ensure that AI is a force for good, enhancing our lives rather than threatening them. Instead of deploying science experiments to the public at scale, we need to build responsibly engineered solutions.

  • Manufacturers' Integrity: A Model for AI Regulation

    While governmental regulations exist to enforce compliance, manufacturers in certain markets have recognized the need for self-regulation to maintain high standards and build trust among stakeholders. This article explores the concept of manufacturers' integrity and the significance of self-regulation with application for AI practice and use. EU Example Government regulations provide a legal framework for manufacturers; however, self-regulation acts as an additional layer of accountability. By proactively addressing ethical concerns, industry associations and manufacturers can demonstrate a commitment to responsible practices and build credibility. The EU notion of manufacturers' integrity offers an example of where self-regulation plays a significant role. Manufacturers' integrity refers to the ethical conduct and commitment to quality and safety demonstrated by businesses in the production and distribution of goods. In the EU, manufacturers have a vital role in guaranteeing the safety of products sold within the extended single market of the European Economic Area (EEA). They bear the responsibility of verifying that their products adhere to the safety, health, and environmental protection standards set by the European Union (EU). The manufacturer is obligated to conduct the necessary conformity assessment, establish the technical documentation, issue the EU declaration of conformity, and affix the CE marking to the product. Only after completing these steps can the product be legally traded within the EEA market. While this model provides a framework for higher levels of safety and quality, it requires manufacturers to establish internal governance, programs, systems and processes to regulate themselves. At a fundamental level this means: identifying and taking ownership of obligations, and making and keeping promises. For many these steps go beyond turning "shall" statements into policy. They require turning "should" statements into promises, with the added step of first figuring out what "should" means for their products and services. Determining what "should" looks like is the work of leadership, which needs to happen now for the responsible use of A.I. Principles of Ethical Use of AI for Ontario Countries across the world are actively looking at how best to address A.I. A team within Ontario's Digital Service has examined ethical principles from various jurisdictions around the world, including New Zealand, the United States, the European Union, and major research consortiums. From this research, principles were created that are designed to complement the Canadian federal principles by addressing specific gaps. While intended as guidelines for government processes, programs and services, they can inform other sectors regarding their own self-regulation of A.I. The following are 6 (Beta) principles proposed by Ontario's A.I. team: 1. Transparent and explainable There must be transparent use and responsible disclosure around data enhanced technology like AI, automated decisions and machine learning systems to ensure that people understand outcomes and can discuss, challenge and improve them. This includes being open about how and why these technologies are being used. When automation has been used to make or assist with decisions, a meaningful explanation should be made available. The explanation should be meaningful to the person requesting it. It should include relevant information about what the decision was, how the decision was made, and the consequences.
Why it matters Transparent use is the key principle that helps enable other principles while building trust and confidence in government use of data enhanced technologies. It also encourages a dialogue between those using the technology and those who are affected by it. Meaningful explanations are important because they help people understand and potentially challenge outcomes. This helps ensure decisions are rendered fairly. It also helps identify and reverse adverse impacts on historically disadvantaged groups. 2. Good and fair Data enhanced technologies should be designed and operated in a way throughout their life cycle that respects the rule of law, human rights, civil liberties, and democratic values. These include dignity, autonomy, privacy, data protection, non-discrimination, equality, and fairness. Why it matters Algorithmic and machine learning systems evolve through their lifecycle and as such it is important for the systems in place and technologies to be good and fair at the onset, in their data inputs and throughout the life cycle of use. The definitions of good and fair are intentionally broad to allow designers and developers to consider all of the users both directly and indirectly impacted by the deployment of an automated decision making system. 3. Safe Data enhanced technologies like AI and ML systems must function in a safe and secure way throughout their life cycles and potential risks should be continually assessed and managed. Designers, policy makers and developers should embed appropriate safeguards throughout the life cycle of the system to ensure it is working as intended. This would include mechanisms related to system testing, piloting, scaling and human intervention as well as alternative processes in case a complete halt of system operations is required. The mechanisms must be appropriate to the context and determined before deployment but should be iterated upon throughout the system’s life cycle. Why it matters Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. As well, issues with the system can arise that only become apparent after the system is deployed. Therefore, despite our best efforts unexpected outcomes and impacts need to be considered. Accordingly, systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are not intended, a human can adapt, correct or improve the system. 4. Accountable and responsible Organizations and individuals developing, deploying or operating AI systems should be held accountable for their ongoing proper functioning in line with the other principles. Human accountability and decision making over AI systems within an organization needs to be clearly identified, appropriately distributed and actively maintained throughout the system’s life cycle. An organizational culture around shared ethical responsibilities over the system must also be promoted. Where AI is used to make or assist with decisions, a public and accessible process for redress should be designed, developed, and implemented with input from a multidisciplinary team and affected stakeholders. Algorithmic systems should also be regularly peer-reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time. Why it matters Identifying and appropriately distributing accountability within an organization helps ensure continuous human oversight over the system is properly maintained. 
In addition to clear roles related to accountability, it is also important to promote an organizational culture around shared ethical responsibilities. This helps prevent gaps and avoids the situation where ethical considerations are always viewed as someone else’s responsibility. While our existing legal framework includes numerous traditional processes of redress related to governmental decision making, AI systems can present unique challenges to those traditional processes with their complexity. Input from a multidisciplinary team and affected stakeholders will help identify those issues in advance and design appropriate mechanisms to mitigate them. Regular peer review of AI systems is also important. Issues around bias may not be evident when AI systems are initially designed or developed, so it's important to consider this requirement throughout the life-cycle of the system. 5. Human centric AI systems should be designed with a clearly articulated public benefit that considers those who interact with the system and those who are affected by it. These groups should be meaningfully engaged throughout the system’s life cycle, to inform development and enhance operations. An approach to problem solving that embraces human centered design is strongly encouraged. Why it matters Clearly articulating a public benefit is an important step that enables meaningful dialogue early with affected groups and allows for measurement of success later. Placing the focus on those who interact with the system and those who are affected by it ensures that the outcomes do not cause adverse effects in the process of creating additional efficiencies. Developing algorithmic systems that incorporate human centred design will ensure better societal and economic outcomes from the data enhanced technologies. 6. Sensible and appropriate Every data enhanced system exists not only within its use case, but also within a particular sector of society and a broader context that can feel its impact. Data enhanced technologies should be designed with consideration of how they may apply to a particular sector along with awareness of the broader context. This context could include relevant social or discriminatory impacts. Why it matters Algorithmic systems and machine learning applications will differ by sector. As a result, while the above principles are a good starting point for developing ethical data enhanced technologies it is important that additional considerations be given to the specific sectors to which the algorithm is applied. Conclusion In conclusion, the concept of manufacturers' integrity and self-regulation emerges as a crucial model for AI regulation. While governmental regulations provide a legal framework, self-regulation acts as an additional layer of accountability, allowing manufacturers to demonstrate their commitment to responsible practices and build credibility among stakeholders. The EU example highlights the significance of manufacturers' integrity, where businesses bear the responsibility of ensuring the safety and adherence to standards for their products. This model emphasizes the need for manufacturers to establish internal governance, programs, systems, and processes to regulate themselves, requiring them to identify and take ownership of their obligations while making and keeping promises. 
Furthermore, the proposed principles of ethical AI use for Ontario shed light on the importance of transparent and explainable systems, good and fair practices, safety and security measures, accountability and responsibility, human-centric design, and sensible and appropriate application of AI technologies. These principles aim to ensure that AI systems respect the rule of law, human rights, civil liberties, and democratic values while incorporating meaningful engagement with those affected by the systems. By adhering to these principles, organizations can foster trust, avoid adverse impacts, and align AI technologies with ethical considerations and societal values. As governments and organizations worldwide grapple with the regulation of AI, the adoption of manufacturers' integrity and self-regulation, coupled with the principles of ethical AI use, can serve as a comprehensive framework for responsible AI practice and use. It is imperative for stakeholders to collaborate, continuously assess risks, promote accountability, and prioritize the human-centric design to mitigate the challenges and maximize the potential benefits of AI technologies. By doing so, we can shape a future where AI is harnessed ethically, transparently, and in alignment with the values and aspirations of society.

  • Leveraging Safety Moments for AI Safety in Critical Infrastructure Domains

    Artificial intelligence (AI) is increasingly becoming an integral part of critical infrastructure such as energy, transportation, healthcare, and finance. While AI offers numerous benefits and opportunities for efficiency and innovation, it also introduces new risks and challenges that need to be addressed. To ensure the safe and secure integration of AI into safety critical systems and processes, organizations can draw inspiration from the concept of "safety moments" and apply it to AI safety practices. In this article, we explore the practice of safety moments and discuss how it can be extended to enhance AI safety in critical infrastructure domains. Understanding Safety Moments Safety moments are short, focused discussions or presentations held within organizations to increase awareness and promote safety consciousness among employees. Typically, safety moments occur at the beginning of meetings or shifts and revolve around sharing personal experiences, lessons learned, near misses, or relevant safety topics. The aim is to foster a proactive safety culture, encourage active engagement, and prompt individuals to think critically about potential risks and hazards. Extending Safety Moments to AI Safety Raising Awareness: Safety moments can be utilized to raise awareness about AI safety in critical infrastructure domains. By sharing real-world examples, case studies, or incidents related to AI systems, employees can gain a better understanding of the potential risks and consequences associated with AI technology. This awareness helps create a culture of vigilance and responsibility towards AI safety. Learning from Incidents: Safety moments involve discussing near misses or incidents that have occurred in the workplace. Similarly, in the context of AI safety, organizations can encourage employees to report the equivalent of near misses or incidents related to AI systems. These discussions can provide valuable insights into the vulnerabilities, limitations, and potential failure modes of AI systems, allowing organizations to learn from past mistakes and improve their safety measures. Regular Training and Education: Safety moments can serve as a platform for ongoing training and education on AI safety. By dedicating time during safety moments to share updates, best practices, and emerging trends in AI safety, organizations can ensure that employees stay informed and equipped with the knowledge needed to identify potential risks and mitigate them effectively. This continuous learning approach helps build a resilient workforce capable of handling AI-related challenges. Encouraging Open Dialogue: Safety moments create a safe space for employees to openly discuss safety, privacy, and security concerns and ideas. Similarly, in the context of AI safety, organizations should foster a culture that encourages open dialogue and the sharing of concerns related to AI systems. This collaborative approach allows for a broader perspective, diverse insights, and the identification of potential blind spots in the deployment and operation of AI technology. Multidisciplinary Collaboration: AI safety in critical infrastructure domains requires a multidisciplinary approach involving experts from various fields such as AI, cybersecurity, engineering, and ethics. 
Safety moments can facilitate cross-functional collaboration by bringing together professionals from different disciplines to discuss AI safety challenges, exchange knowledge, and develop comprehensive strategies to ensure the safe integration of AI into critical infrastructure domains. Summary As AI continues to be adopted in critical infrastructure domains, ensuring the safety and security of AI systems becomes paramount. By extending the practice of safety moments to AI safety, organizations can create a culture of awareness, collaboration, and continuous learning. This approach empowers employees to actively engage in AI safety practices, identify potential risks, and collectively work towards mitigating them. By incorporating AI safety into safety moments, critical infrastructure domains can harness the transformative power of AI while safeguarding the integrity and resilience of their operations.

  • AI's Wisdom Deficit

    In German, there are two words for knowledge: "wissen" and "kennen." The former refers to knowing about something, while the latter signifies intimate knowledge gained through experience. Although we can roughly equate these to "explicit" and "tacit" knowledge, the English language fails to capture their nuanced meanings like other languages do. It is in the second form of knowledge where profound insights emerge. According to the DIKW model, wisdom arises from knowledge, particularly knowledge derived from experience and understanding, rather than pure logic. We most often refer to the former as wisdom and the latter as intelligence. Intelligence without wisdom has its problems; it is akin to a child in a candy shop. Having knowledge about everything without the ability to discern what is good or bad, what is beneficial or harmful, is of temporary and limited value. Even King Solomon, considered the wisest person in the world, spent his days exploring and experimenting in his pursuit of learning. He devoted himself to knowledge, constructing the greatest temple ever built, accumulating immense wealth, and indulging in his every desire. While he gained vast knowledge, the wisdom to discern between good and evil is what held the most value for him. King Solomon knew that this was something beyond himself, and so he asked his God for this kind of wisdom, and he urges us to do the same. The philosopher David Hume, known for the "is-ought" gap, makes a similar observation. He claims that you can't deduce an ought (what should be) from what is. In other words, you can't know what is good from knowledge of what is. That kind of wisdom comes from outside the realm of facts. In recent years, progress in artificial intelligence has been staggering. However, AI lacks the knowledge (and most likely always will) that comes from experience, along with the wisdom to discern between what is good and what is not. It is this wisdom that should be our ultimate pursuit, better than all the knowledge in the world. As T.S. Eliot aptly said, and it bears repeating: "It is impossible to design a system so perfect that no one needs to be good." And being good is what humans must continually strive to become in all our endeavours.

  • Thoughts about AI

    I was listening to a podcast recently where Mo Gawdat (ex-Google CBO) was interviewed and asked about his thoughts concerning AI. Here are some of the things he said: Three facts about AI: AI has happened (the genie is out of the bottle and can't be put back in); AI will be smarter than many of us, and already is; bad things will happen. What is AI (I have paraphrased this)? Before AI, we told the computer how to do what we want - we trained the dog. With generative AI, we tell it what we want and it figures out how to do it - we enjoy the dog. In the future, AI will tell us what it wants and how to do it - the dog trains us. Barriers we should never have crossed, but have anyway: Don't put AI on the open internet. Don't teach AI to write code. Don't let AI prompt another AI. What is the problem? Mo answers this by saying the problem is not the machines; the problem lies with us. We are the ones doing this (compulsion, greed, novelty, competition, hubris, etc.), and we may soon reach the point where we are no longer in the driver's seat. That is the existential threat that many are concerned about. Who doesn't want a better dog? But what if the dog wants a better human? Before we get there we will have a really smart dog, one that is way smarter (10 times, 100 times, or even more) than us, which we will not understand. Guardrails for explainability will amount to AI creating a flowchart of what it is doing (oh how the tables have turned), one that is incomprehensible to most if not all of us. How many of us can understand string theory or quantum physics even if we can read the textbooks? Very few of us. Why do we think that we will understand what AI is doing? Sure, AI can dumb it down or AI-splain it to us so we feel better. Perhaps we should add another guardrail to Mo's list: 4. Don't let AI connect to the physical world. However, I suspect we have already passed that one as well. Or how about this? 5. Don't do stupid things with AI. You can view the podcast on YouTube here:

  • AI Risks Document-Centric Compliance

    For domains where compliance is "document-centric", focused on procedural conformance, the use of AI poses significant risk due to the inappropriate use of AI to create, evaluate, and assess the documentation we use to describe what we do (or should do). Disclosure of AI use will be an important safeguard going forward, but that will not be enough to limit exposure resulting from adverse effects of AI. To contend with uncertainties, organizations must better understand how AI works and how to use it responsibly. To bring the risks into focus, let's consider the use of Large Language Models (LLMs) used in applications such as ChatGPT, Bard, Gemini, and others. What do LLMs model? While it's important to understand what these LLMs do, it's also important to know what they don't do, and what they don't know. First and foremost, LLMs create a representation of language based on a training set of data. LLMs use this representation to predict words and nothing else. LLMs do not create a representation of how the world works (i.e. physics), or of the systems, controls, and processes within your business. They do not model your compliance program, your cybersecurity framework, or any other aspect of your operations. LLMs are very good (and getting better) at predicting words. And so it's easy to imagine that AI systems actually understand the words they digest and the output they generate, but they don't. It may look like AI understands, but it doesn't, and it certainly cannot tell you what you should do. Limitations of Using AI to Process Documents Let's dial in closer and consider a concrete example. This week the Responsible AI Institute, as part of their work (which I support), released an AI tool that can evaluate your organization's existing RAI policies and procedures to generate a gap analysis based on the National Institute of Standards and Technology (NIST) risk management framework. Sounds wonderful! This application is no doubt well intended and is not the first or the last AI tool to process compliance documentation. However, tools of this kind raise several questions concerning the nature of the gaps that can be discovered and whether a false sense of assurance will be created by using these tools. More Knowledge Required Tools that use LLMs to generate content, for example, such as remedies to address gaps in conformance with a standard, may look like plausible steps to achieve compliance objectives, or controls to contend with risk. However, and this is worth repeating, LLMs do not understand or have knowledge concerning how controls work, or management systems, or how to contend effectively with uncertainty. They also don't have knowledge of your specific goals, targets, or planned outcomes. LLMs model language to predict words, that's all (see the sketch at the end of this post). This doesn't mean the output from AI is not correct or may not work. However, only you – a human – can make that determination. We also know that AI tools of this kind at best can identify procedural conformance with prescription. They do not (and cannot) evaluate how effective a given policy is at meeting your obligations. Given that many standards consist of a mixture of prescriptive, performance, and outcome-based obligations, this leaves out a sizeable portion of "conformance" from consideration. To evaluate gaps that matter requires operational knowledge of the compliance functions, behaviours, and interactions necessary to achieve the outcome of compliance, which is something that's not modelled by LLMs and something they don't know.
    The problem is that many who are responsible for compliance don't know these things either. Lack of operational knowledge is a huge risk. If you don't have operational knowledge of compliance, you will not know if the output from AI is reasonable, safe, or harmful. Not only that, if you are using AI to reduce your complement of compliance experts (analysts, engineers, data scientists, etc.), your situation will be far worse. And you won't know how bad until it happens, when it's too late to do anything about it. Not the Only Risk As I wrote in a previous article, AI is not an impartial observer in the classical sense. AI systems are self-referencing. The output they generate interferes with the future they are trying to represent. This creates a feedback loop which gives AI a measure of agency that is undesirable, and contributes in part to public fear and worry concerning AI. We don't want AI to amplify or attenuate the signal – it should be neutral, free of biases. We don't yet understand well enough the extent to which AI interferes with our systems and processes and, in the case of compliance, the documentation we use to describe them. I raised these concerns during a recent Responsible AI Institute webinar where this interference was acknowledged as a serious risk. Unfortunately, it's not on anyone's radar. While there are discussions that risks exist, there is less conversation on what they are, or how they might be ameliorated. Clearly, AI is still in the experimental stage. Not the Last Gap When it comes to compliance there are always gaps. Some of these are between what's described in documentation and a given standard. Others include gaps in performance, effectiveness, and gaps in overall assurance. Adopting AI-generated remedies creates another category of gaps, and therefore risks that need to be handled. The treatment for this is to elevate your knowledge of AI and its use. You need to understand what AI can and cannot do. You also need to know what it should or shouldn't do. The outputs from AI may look reasonable, the promise of greater efficiencies compelling. But these are not the measures of success. To succeed at compliance requires operational knowledge of what compliance is and how it works. This will help you contend with risks associated with the use of AI, along with how best to meet all your obligations in the presence of uncertainty.
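    To make the point about "predicting words" concrete, here is the small Python sketch referenced above. It assumes the Hugging Face transformers library and a small public model (GPT-2); these are illustrative choices of mine, not tools named in this post. Everything the model produces is a probability distribution over possible next tokens, and nothing more.
        # A minimal sketch, assuming the Hugging Face "transformers" library and GPT-2
        # (illustrative choices only). The model's output is a probability distribution
        # over next tokens; it contains no model of your controls, management systems,
        # or obligations.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        prompt = "Our compliance policy requires that all incidents are"
        inputs = tokenizer(prompt, return_tensors="pt")

        with torch.no_grad():
            logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

        # Probabilities for the very next token after the prompt.
        next_token_probs = torch.softmax(logits[0, -1], dim=-1)
        top = torch.topk(next_token_probs, k=5)
        for prob, token_id in zip(top.values, top.indices):
            print(f"{tokenizer.decode(token_id.item())!r:>15}  p={prob.item():.3f}")
    Whichever continuation scores highest reflects word patterns in the training data, not whether the policy is adequate or the control effective; that judgment still requires a human with operational knowledge.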

  • Stopping AI from Lying

    Recently, I asked Microsoft's Copilot to describe "Lean Compliance." I knew that information about Lean Compliance used in current foundation models was not up-to-date and so would need to be merged with real-time information, which is what Copilot attempted to do. However, what it came up with was a mix of accuracy and inaccuracy. It said someone else founded Lean Compliance rather than me. Instead of not including that aspect of "Lean Compliance", it made it up. I instructed Copilot to make the correction, which it did, at least within the context of my prompt session. It also apologized for making the mistake. While this is just one example, I know my experience with AI chat applications is not unique. Had I not known the information was incorrect, I may have used it in decision-making or disseminated the wrong information to others. Many are fond of attributing human qualities to AI, which is called anthropomorphism. Instead of considering output as false and in need of correction, many will say that the AI system hallucinated — as if that makes it better. And why did Copilot apologize? This practice muddies the waters and makes it difficult to discuss machine features and properties, such as how to deal with incorrect output. However, if we are going to anthropomorphize, then why not go all the way and say AI lied. We don't do this because it applies a standard of morality to the AI system. We know that machines are not capable of being ethical. They don't have ethical subroutines to discern between what's right and wrong. This is a quality of humans, not machines. That's why, when it comes to AI systems, we need to stop attributing human qualities to them if we hope to stop the lies and get on with the task of improving output quality.

  • Are AI-Enhanced KPIs Smarter?

    Using Key Performance Indicators (KPIs) to regulate and drive operational functions is table stakes for effective organizations and for those that want to elevate their compliance. In a recent report by MIT Sloan Management Review and Boston Consulting Group (BCG), "The Future of Strategic Management: Enhancing KPIs with AI", the authors provide the results of a global survey of more than 3,000 managers and interviews with 17 executives to examine how managers and leaders use AI to enhance strategic measurement to advance strategic outcomes. More specifically, their study explores how these organizations have adopted KPIs and created new ones using AI. In this report the authors categorize AI-enhanced KPIs in the following way: Smart Descriptive KPIs: synthesize historical and current data to deliver insights into what happened or what is happening. Smart Predictive KPIs: anticipate future performance, producing reliable leading indicators and providing visibility into potential outcomes. Smart Prescriptive KPIs: use AI to recommend actions that optimize performance. Furthermore, the report identifies that developing smart KPIs requires categorizing variables into three distinct types: Strategic Outcome Variables: well-known overarching targets, such as revenue or profit. Operational Drivers: variables that might impact the strategic outcome, such as pricing, consumer reviews, or website traffic. Contextual Factors: external factors beyond a company's control, typically measured or tracked through external data such as consumer spending forecasts, inter-country freight, or government regulation. While there is some evidence that KPIs can be enhanced, the report suggests the need for a shift in mindset and practice with respect to the category of KPIs: From Performance Tracking to Redefining Performance; From Static Benchmarks to Dynamic Predictors; From Judgment-First to Algorithmically Defined Strategic Metrics; From KPI Management to Smart KPI Governance and Oversight; From Keeping an Eye on KPIs to KPI Dialogues and Discussion; From Strategy with KPIs to Strategy for and with KPIs. To facilitate these transitions (or disruptions) the authors of the report provide several recommendations: Realign Data Governance to Enable Measurable Smarter KPIs; Establish KPI Governance Systems; Use Digital Twins to Enhance Key Performance Metrics; Prioritize Cultural Readiness and People-Centric Approaches; Strategic Alignment with Smart KPIs. My Thoughts In general, Key Performance Indicators (KPIs) should by definition have predictive utility, which separates them from the set of metrics that one might otherwise measure. The three categories for KPIs outlined in the report suggest how KPIs might be used given their predictive quality. KPIs with low correlation might help describe what's happening but are not good candidates for a KPI compared with those with significant correlation (a rough screening sketch follows at the end of this post). However, even good KPIs cannot suggest how to effect performance changes. Making systems changes relies on knowledge of what measures of effectiveness, performance, conformance, and assurance are targeted, along with understanding of the underlying concept of operations. Notwithstanding, the use of AI does hold promise to help with lagging indicators to find new and different correlations. However, leading indicators are a different story. Leading indicators are the holy grail of operational performance and require knowledge of what should be rather than only what once was.
    Data describing this knowledge seldom appears in operational records or logs and would need to be integrated with an AI system. Without controlled experiments, claims of causation should always be taken with a grain of salt. We need to be mindful that the future is not as deterministic as some may believe. When there is human agency involved, the future is open, not closed or bound to AI predictions. It's helpful to remember that there are other forces at work: You can't turn lagging indicators into leading indicators. (Risk Theory) You can't turn an "is", a description of what is, into an "ought", a prescription of what should be. (Hume's Law) A system will always regulate away from outcomes you don't specify. (Ashby's Cybernetics Law of Ethical Inadequacy) When a measure becomes a target, it ceases to be a good measure. (Goodhart's Law) What steps should be followed when using AI for KPIs? Instead of considering AI as a solution looking for a problem, first identify the problem that needs solving. Do you have a problem with: Decision making? Execution or follow-through? Conformance or regulation? Lack of understanding of operational systems, processes, and behaviours? Uncertainty and risk? Insufficient or untapped performance? When the problem is a lack of quality KPIs, then one might consider establishing a Smarter KPI Program. The report by MIT-BCG makes an important point that is worth repeating. What they suggest is not so much about creating better KPIs as it is about establishing an ongoing set of processes, practices, and a mindset to use algorithmically defined metrics. This requires more than following a procedure. The following questions will help define the context for such a program: What do better KPIs look like? What strategy should we follow to achieve that? What capabilities do we need to support this strategy? What obstacles or opportunities need to be negotiated or exploited? What measures will be used to define success?
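    As a rough illustration of the screening step mentioned above, here is a small Python sketch. It assumes pandas, a hypothetical monthly data file, and illustrative column names that are not from the MIT-BCG report; it simply checks whether a candidate operational driver leads the strategic outcome before that driver is promoted to a KPI.
        # A rough sketch, assuming pandas and a hypothetical "monthly_metrics.csv"
        # with one row per month. Column names are illustrative only.
        import pandas as pd

        df = pd.read_csv("monthly_metrics.csv")

        outcome = "revenue"                    # strategic outcome variable
        drivers = ["website_traffic",          # candidate operational drivers
                   "consumer_review_score",
                   "on_time_delivery_rate"]

        lag = 1  # number of months the driver should lead the outcome
        results = {}
        for driver in drivers:
            # Pair each month's outcome with the driver's value from `lag` months earlier.
            led_driver = df[driver].shift(lag)
            results[driver] = led_driver.corr(df[outcome])

        for driver, r in sorted(results.items(), key=lambda kv: abs(kv[1]), reverse=True):
            print(f"{driver:>24}: lagged correlation with {outcome} = {r:+.2f}")
    A driver with a weak lagged correlation may still describe what is happening, but it is a poor candidate for a predictive KPI. Even a strong correlation is only a lead worth investigating: as noted above, correlation alone cannot establish causation or tell you what you ought to do.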
