- When Getting Lean Puts You at Risk: A Cautionary Tale
In my 30+ years working with organizations on their transformation journeys, I've noticed a concerning trend that keeps me up at night. While companies rush to embrace Lean methodologies - and they absolutely should - many are inadvertently creating serious vulnerabilities in their pursuit of efficiency. Let me explain why this matters to you.

The Efficiency Trap

Picture this: your team is deep into a value stream mapping exercise. Everyone's excited about identifying "waste" and streamlining operations. Someone points to a series of checks and approvals in your process. "Look at all these non-value-adding steps!" they exclaim. The room nods in agreement. But here's the thing - not all "inefficiencies" are created equal.

The Hidden Value of Controls

Those seemingly redundant checks? That "bureaucratic" approval process? They might actually be critical controls put in place after hard-learned lessons. The problem is, institutional memory fades. What was once a crucial safeguard becomes "just the way we've always done it" - until it's not. Think of it like removing what looks like redundant code from a critical system. Sure, it might make your code cleaner, but what if that "redundancy" was actually a crucial fail-safe? We would never remove code just because we don't understand what it does - yet this happens more often than many care to admit in the name of "cost reductions." (A small illustration of this analogy appears at the end of this post.)

Real-World Consequences

In my work across highly regulated, high-risk sectors, I've observed a concerning trend where the enthusiasm for Lean methodologies sometimes overshadows critical safety, security, quality, and regulatory considerations. I have seen Lean teams, while value stream mapping their management processes, eliminate what they viewed as "redundant" inspection steps, documentation requirements, and more. While this appeared efficient on paper, these were actually crucial safeguards developed from hard-learned lessons of the past. Here's what's at stake: in industries where a single oversight can trigger catastrophic consequences, labelling safety controls as "waste" is a dangerous gamble. The potential for environmental disasters, safety incidents, and regulatory penalties demands a more nuanced approach. That's why I strongly advocate for including Lean and Compliance experts during improvement initiatives - professionals who understand both operational efficiency and managed compliance. Remember, true operational excellence in high-risk industries isn't just about removing steps - it's about optimizing processes while preserving the controls that keep us safe.

The Solution: Lean & Compliance Expertise

Here's where I see a massive opportunity: bringing together Lean methodology and compliance expertise. It's not an either/or situation. You can have both efficiency AND effective controls. The key? Having the right experts at the table.

What Lean and Compliance Experts bring to the Table:
- Deep understanding of regulatory requirements
- Experience in optimizing control frameworks
- Ability to spot critical vs. redundant processes
- Knowledge of emerging risks and compliance trends
- Expertise in designing efficient, compliant processes

Your Action Plan

Ready to get this right? Here's what you need to do:
1. Audit Your Lean Initiatives
- Who's on your transformation team?
- Are risk and compliance experts involved?
- How are you evaluating control removal decisions?
2. Engage the Right Expertise
- Bring in risk & compliance specialists
- Document control rationales
- Create risk-aware improvement processes
3. Measure What Matters
- Track both efficiency gains AND risk metrics
- Monitor compliance effectiveness
- Document the impact of process changes

Yes, Lean methodologies can transform your organization. Yes, you should be looking for ways to eliminate waste. But remember - not everything that looks like waste actually is. The key is knowing the difference.

Moving Forward

Don't let your Lean journey become a cautionary tale. Invest in the right expertise. Create processes that are both efficient AND secure. Your future self (and stakeholders) will thank you. As you embark on your next process improvement initiative, ask yourself: "Do we really understand what these controls are protecting us from?" If you can't answer with certainty, it's time to bring in someone who does. Remember: true operational excellence isn't just about speed - it's about sustainable, secure, and safe processes that protect your organization while delivering value to your customers.
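To make the code analogy above concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the pressure limit, the function name, the upstream system); the point is only that a check which looks redundant in a value stream map can be the last line of defence when an upstream assumption fails.

```python
# Hypothetical illustration of a "redundant-looking" check that is really a fail-safe.
# The upstream control layer "already validates" setpoints, so in a value stream map
# this clamp can look like waste. Removing it would also remove the last line of defence.

MAX_SAFE_PRESSURE_KPA = 800.0  # assumed design limit from a hazard analysis


def apply_setpoint(requested_kpa: float) -> float:
    """Clamp a requested pressure setpoint to the safe operating envelope."""
    if not 0.0 <= requested_kpa <= MAX_SAFE_PRESSURE_KPA:
        # The "redundant" safeguard: never trust that upstream validation worked.
        print(f"Rejected unsafe setpoint {requested_kpa} kPa; clamping to safe range.")
        return min(max(requested_kpa, 0.0), MAX_SAFE_PRESSURE_KPA)
    return requested_kpa


if __name__ == "__main__":
    print(apply_setpoint(650.0))    # normal case: passes through unchanged
    print(apply_setpoint(12000.0))  # upstream validation failed: clamped to 800.0
```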
- Leveraging Systems Engineering for Effective Compliance
When it comes to developing capabilities that need to perform, that are reliable and that you can trust, within targeted budgets and time constraints, there is much to be learned from Defense programs. The document "Best Practices for Using Systems Engineering Standards (ISO/IEC/IEEE 15288, IEEE 15288.1, and IEEE 15288.2) on Contracts for Department of Defense Acquisition Programs" gets right to the point: "The Department of Defense (DoD) and the defense industry have found that applying systems engineering (SE) processes and practices throughout the system life cycle improves project performance, as measured by the project's ability to satisfy technical requirements within cost and schedule constraints." In other words, "projects that use effective SE processes perform better than those that do not." Given this knowledge, it is in the best interest of both acquirers and suppliers to ensure that defense acquisition projects use effective SE processes as the core of the technical management effort.

Systems engineering is the primary means for determining whether and how the challenge posed by a program's requirements can be met with available resources. It is a disciplined learning process that translates capability requirements into specific design features and thus identifies key risks to be resolved. Our prior best practices work has indicated that if programs apply detailed SE before the start of product development, the program can resolve these risks through trade-offs and additional investments, ensuring that risks have been sufficiently retired or that they are clearly understood and adequately resourced if they are being carried forward.

The same principle applies to compliance systems, whether they are for safety, security, sustainability, quality, regulatory, responsible AI, or other outcomes. We have observed that effective systems engineering processes and practices are essential for compliance to deliver its purpose, protect value creation, and earn the trust of stakeholders. If mission success depends on compliance success, make sure you incorporate systems engineering as a key part of your team and approach. Lean Compliance offers an advanced program based on the principles of systems engineering along with other necessary domains. This program is called "The Proactive Certainty Program™". You can learn more here:
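As a small illustration of what systems engineering discipline can look like when applied to a compliance system, here is a sketch of a traceability check: every obligation should trace to at least one control and a verification method, and gaps are surfaced before they become findings. The obligations, controls, and identifiers are hypothetical, not taken from the standards referenced above.

```python
# A toy obligations-to-controls traceability check (hypothetical data).
# Each obligation should trace to at least one control and a verification method.

obligations = {
    "OBL-001": "Report releases above threshold within 24 hours",
    "OBL-002": "Inspect pressure relief valves annually",
    "OBL-003": "Retain incident records for 5 years",
}

# control id -> (obligation id it satisfies, verification method)
controls = {
    "CTL-010": ("OBL-001", "quarterly drill"),
    "CTL-011": ("OBL-002", "maintenance audit"),
}


def find_untraced_obligations(obligations, controls):
    """Return obligations that no control claims to satisfy."""
    covered = {obl_id for obl_id, _ in controls.values()}
    return [obl_id for obl_id in obligations if obl_id not in covered]


if __name__ == "__main__":
    for obl_id in find_untraced_obligations(obligations, controls):
        print(f"GAP: {obl_id} ({obligations[obl_id]}) has no control or verification")
```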
- How To Get The Most From Your ISO Management System
Getting the most value from your ISO Management System requires more than just maintaining certification. By taking a strategic approach, organizations can transform their ISO standards from conformance requirements into powerful tools for business excellence. This guide outlines essential practices that help managers leverage their ISO Management System to drive operational improvements, enhance risk management, and achieve strategic objectives. Whether you're implementing a single standard or managing multiple ISO frameworks, these insights will help you maximize the return on your ISO investment.

Maximizing ISO Management System Benefits

Managers can maximize the benefits of their ISO management program by understanding its strategic value and focusing on continuous improvement, integration, and alignment with business objectives. Here's what managers need to know to get the most out of their ISO management system:

1. Understand the Strategic Value of ISO Standards
ISO standards, such as ISO 9001 (Quality Management), ISO 14001 (Environmental Management), ISO 27001 (Information Security), and ISO 45001 (Occupational Health and Safety), provide a structured framework for improving processes and achieving organizational goals.
Action: Managers should view ISO standards not just as check-box requirements but as tools to drive operational excellence, enhance customer satisfaction, and improve risk management. Use ISO management systems to align processes with strategic goals, leveraging them to identify opportunities for growth, innovation, and competitive advantage.

2. Focus on Continuous Improvement
ISO management programs are designed to support continuous improvement through the Plan-Do-Check-Act (PDCA) cycle, which emphasizes planning improvements, implementing changes, monitoring performance, and taking corrective action.
Action: Regularly review and update processes based on performance data, audit results, and stakeholder obligations. Foster a culture of continuous improvement by encouraging teams to identify areas of improvement and risk. Utilize internal audits, performance metrics, and stakeholder expectations to drive the improvement process.

3. Integrate Multiple ISO Standards
Many organizations adopt more than one ISO standard to cover different aspects of their operations, such as quality, environmental management, and information security. Integrating these standards can reduce duplication and streamline processes. Integrated management reduces complexity, saves time, and ensures consistency across various compliance areas.
Action: Develop an Integrated Management System (IMS) that combines requirements from multiple ISO standards into a single, cohesive framework (e.g., ISO 37301). Train staff to understand how different standards overlap (e.g., risk management in ISO 9001 and ISO 27001) and leverage common requirements for efficiency.

4. Align ISO Programs with Business Objectives
An ISO management system is most effective when it supports the organization's strategic goals, such as customer satisfaction, cybersecurity, operational efficiency, or stakeholder trust. Aligning ISO programs with business objectives ensures that the management system adds value and supports the overall mission.
Action: Set measurable objectives that align with the organization's goals (e.g., reducing waste in line with ISO 14001 to support sustainability targets). Use performance indicators from ISO programs to track progress toward strategic objectives and adjust plans as needed.
5. Engage Leadership and Drive a Culture of Ownership
Leadership commitment is crucial for the successful implementation of ISO standards, as it sets the tone for the entire organization. Engaged leadership fosters a culture of accountability and promise-keeping, making ISO principles part of the everyday mindset.
Action: Managers should actively participate in ISO initiatives, set clear expectations, and communicate the benefits of the management system to all employees. Encourage staff at all levels to take ownership of their obligations and establish processes to keep all their commitments.

6. Leverage Data for Informed Decision-Making
ISO management systems emphasize the use of data to monitor performance and make informed decisions.
Action: Implement software solutions for data collection, analysis, and reporting to support real-time decision-making. Collect relevant data from key processes (e.g., incident reports for ISO 45001, audit findings for ISO 9001) and analyze it to identify trends, risks, and opportunities. Use data-driven insights to prioritize initiatives, allocate resources effectively, and justify investments in improvements. (A small sketch of this idea appears at the end of this post.)

7. Optimize Resource Allocation
Efficiently managing resources (time, budget, personnel) is essential for maximizing the return on investment in ISO programs. Optimizing resource allocation ensures that ISO programs deliver maximum value without overburdening staff.
Action: Identify key areas where improvements will have the most significant impact and allocate resources accordingly. Streamline processes and eliminate redundancies to make the best use of available resources.

8. Proactively Enhance System Performance
Regular monitoring and analysis help keep your ISO management system dynamic, forward-looking, and aligned with future business needs.
Action: Develop a comprehensive monitoring program that integrates leading indicators, process metrics, and future-focused assessments. Establish systematic monitoring to identify enhancement opportunities and address potential issues before they emerge. Use performance data to guide improvement initiatives and system optimization, ensuring continuous advancement and capability building.

9. Promote Risk-Based Thinking
ISO standards emphasize a proactive approach to identifying and managing risks and opportunities. Focusing on risk management helps prevent problems before they occur, reducing disruptions and improving resilience.
Action: Embed risk-based thinking into all levels of the organization, integrating it with decision-making processes. Use risk assessments to prioritize areas for improvement and develop contingency plans.

10. Stay Informed About Changes in ISO Standards
ISO standards are periodically revised to reflect new best practices, regulatory changes, and industry developments.
Action: Keep up to date with the latest revisions to ISO standards and understand how they impact your organization's management system. Plan for transition periods and ensure training is provided to adapt to new requirements. Leverage resources such as ISO certification bodies, industry groups, and consultants to stay informed about changes.

By following these practices, managers can ensure that their ISO management programs are not only compliant but also drive meaningful improvements across safety, security, sustainability, quality, reliability, and ethics. Lean Compliance offers an advanced program designed specifically to help organizations achieve better outcomes from their compliance programs.
This program is called "The Proactive Certainty Program™". You can learn more here:
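Here is the minimal sketch referenced in item 6 above: turning raw incident records into a simple set of indicators a manager could review. The data, categories, and thresholds are hypothetical; a real program would draw on its own ISO 45001 or ISO 9001 records and reporting tools.

```python
# A minimal sketch of item 6 (Leverage Data): turning raw incident records
# into a simple leading/lagging indicator pair plus a closure rate.
# Records and categories below are hypothetical.
from collections import Counter
from datetime import date

incidents = [
    {"date": date(2024, 1, 9),  "type": "near_miss", "corrective_action_closed": True},
    {"date": date(2024, 2, 3),  "type": "lost_time", "corrective_action_closed": True},
    {"date": date(2024, 2, 21), "type": "near_miss", "corrective_action_closed": False},
    {"date": date(2024, 3, 14), "type": "near_miss", "corrective_action_closed": True},
]

counts = Counter(rec["type"] for rec in incidents)
closed = sum(1 for rec in incidents if rec["corrective_action_closed"])

# Leading indicator: near misses reported (more reporting is usually a healthy sign).
# Lagging indicator: lost-time incidents. Also track corrective-action closure.
print(f"Near misses reported: {counts['near_miss']}")
print(f"Lost-time incidents:  {counts['lost_time']}")
print(f"Corrective actions closed: {closed}/{len(incidents)} "
      f"({closed / len(incidents):.0%})")
```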
- Building a Better Compliance Program: The Metrics That Actually Matter
As a compliance engineer with 30+ years of experience, I've learned that not all metrics are created equal. Today, I want to share a framework that has transformed how organizations approach compliance measurement. Here's the thing: we often get caught up in measuring everything we can, but what truly matters are the metrics that drive real compliance outcomes. I'm talking about tangible improvements in safety, security, sustainability, quality, and profitability - all of which build that precious stakeholder trust we're aiming for. Let me break down the five essential categories of metrics that I've seen make a real difference:

📈 Adherence Metrics: These show you're walking the walk. They measure how well you're meeting your rule-based obligations - think regulatory requirements, internal policies, and mandatory procedures. It's about having concrete evidence that you're doing what you say you're doing.

📈 Conformance Metrics: These demonstrate alignment with industry best practices and standard operating procedures. They're your proof that you're not just meeting the minimum requirements but following established practices that work.

📈 Performance Metrics: This is where we track progress against specific targets. Are we hitting our compliance KPIs? Are we meeting our performance obligations consistently? These metrics show if we're delivering on our promises.

📈 Effectiveness Metrics: These are the "so what" metrics - they measure the actual impact of our compliance efforts. Are we seeing fewer incidents? Better risk management? Improved outcomes? This is where we prove our program is making a difference.

📈 Integrity Metrics: Perhaps the most crucial of all - these metrics measure the confidence level in our ability to meet obligations and keep promises. They're about trust, reliability, and the strength of our compliance culture.

Why does this framework work? Because it helps you focus on what truly matters. It's not about drowning in data - it's about measuring the right things that keep you:
✅ True to your mission
✅ Operating within boundaries
✅ Ahead of potential risks

Here are some screenshots (below) from our Elevate Compliance Webinar we recently held. We talked about how to use metrics that really matter to compliance. If you're interested, book a call with us to learn how you can use this framework to make compliance a success for your business.
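To show how the five categories might sit side by side on a single dashboard, here is a minimal sketch. The metric names, values, and targets are invented for illustration; the point is that each category gets measured, so no single one dominates.

```python
# A toy representation of the five metric categories described above.
# Names, targets, and values are illustrative only.
from dataclasses import dataclass


@dataclass
class ComplianceMetric:
    category: str   # adherence | conformance | performance | effectiveness | integrity
    name: str
    value: float
    target: float

    def on_track(self) -> bool:
        return self.value >= self.target


metrics = [
    ComplianceMetric("adherence",     "Regulatory reports filed on time (%)", 98.0, 100.0),
    ComplianceMetric("conformance",   "Procedures following current SOPs (%)", 92.0, 95.0),
    ComplianceMetric("performance",   "Audit actions closed within 30 days (%)", 88.0, 90.0),
    ComplianceMetric("effectiveness", "Reduction in recordable incidents (%)", 15.0, 10.0),
    ComplianceMetric("integrity",     "Obligations with a named owner (%)", 100.0, 100.0),
]

for m in metrics:
    status = "on track" if m.on_track() else "needs attention"
    print(f"[{m.category:13}] {m.name}: {m.value} (target {m.target}) -> {status}")
```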
- How to perform Gemba Walks for the Information Factory
LEAN teaches that it is important to go to the Gemba - the scene of the crime, so to speak - before we decide on what to change. This is the place where value is created and where we can best understand how to improve. Taiichi Ohno used the phrase: "Don't look with your eyes, look with your feet. Don't think with your head, think with your hands." The principle behind these words is that in order to solve real problems we need to get as close to reality as we can. We need to go beyond what we perceive and what we might think. We should not rely on data and reports alone to know what is really going on. That is why he encouraged us to go to the factory floor (use your feet) and then interact with people (think with your hands) to truly understand what is happening. By using "Andon" signalling and "Kanban" material handling, line managers could see directly if a manufacturing process was performing well or not. There was a time when factory managers could meet customer demand without the use of an ERP system. Gemba walks have proven extremely useful for physical factories. However, how is this done for today's Information Factories?

Information Factories

Information Factories are a category of business where data (raw material) is processed to create insights - the product of an information factory. The machinery includes data intake streams, data processing (removal of waste), data lakes, machine learning, and other forms of artificial intelligence (AI) to create insights that customers desire and are willing to pay for. Here, as with physical factories, there are performance targets to reach, standards to conform to, quality to achieve, safety to maintain (people, equipment and data to protect), and environmental impacts and other risks to address. The challenge for LEAN practitioners is that Gemba for these factories is not something you can directly observe. When the place where value is created is hidden and unseen we need another way for us to "Go and See."

Gemba Walks for Information Factories

For information factories we don't look with our eyes, we look with our algorithms. We don't think with our heads, we think with AI. What Taiichi Ohno reminds us is that improvement requires people. And for that we need algorithms and AI where the rules are transparent and explainable, so that people can "go and see." I wonder if Taiichi Ohno might say to us today: "Don't only look with your algorithms, look with your eyes. Don't only think with your AI, think with your head." We need to re-imagine what Gemba walks look like so we can better observe the information factory floor. Perhaps walking the physical Gemba will be replaced by walking digital threads that provide transparency and explainability so we can better understand and interpret what is really going on. This "Gemba" Thread could help reconstruct the "scene of the crime" so people can observe, interact, and take steps to improve the place where value is created.

1. "Digital Threads: The Future of Compliance": https://www.leancompliance.ca/post/digital-threads-the-future-of-compliance
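One way to picture a "Gemba thread" is a pipeline in which every step records what it did, so a person can walk the thread afterwards and see where value was created and waste removed. The sketch below is a toy illustration with hypothetical stages and data, not a prescription for any particular platform.

```python
# A toy "digital thread": each processing step appends a trace record so a
# person can later "go and see" how an insight was produced. Stages are hypothetical.

thread = []  # the walkable record of what happened to the data


def traced(stage_name):
    """Decorator that records input/output counts for each pipeline stage."""
    def wrap(func):
        def inner(data):
            result = func(data)
            thread.append({
                "stage": stage_name,
                "records_in": len(data),
                "records_out": len(result),
            })
            return result
        return inner
    return wrap


@traced("intake")
def intake(rows):
    return rows


@traced("remove_waste")
def remove_waste(rows):
    # Drop records missing a reading (the "waste" removed by this step).
    return [r for r in rows if r.get("reading") is not None]


if __name__ == "__main__":
    raw = [{"reading": 7.1}, {"reading": None}, {"reading": 6.8}]
    remove_waste(intake(raw))
    for step in thread:  # walking the digital Gemba
        print(step)
```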
- Can Research into AI Safety Help Improve Overall Safety?
The use of Artificial Intelligence (AI) to drive autonomous automobiles, otherwise known as "self-driving cars", has in recent months become an area of much interest and discussion. The use of self-driving cars, while offering benefits, also poses some challenging problems. Some of these are technical while others are more of a moral and ethical nature. One of the key questions has to do with what happens if an accident occurs and particularly if the self-driving car caused the accident. How does the car decide if it should sacrifice its own safety to save a bus load of children? Can it deal with unexpected issues or only mimic behavior based on the data it learned from? Can we even talk about AI deciding for itself or having its own moral framework? Before we get much further, it is important to understand that in many ways, the use of computers and algorithms to control machinery already exists and has for some time. There is already technology of all sorts used to monitor, control, and make decisions. What is different now is the degree of autonomy and specifically how machine learning is done to support artificial intelligence. In 2016, authors from Google Brain, Stanford University, UC Berkeley and OpenAI published a paper entitled "Concrete Problems in AI Safety." In this paper, the authors discuss a number of areas of research that could help to address the possibility of accidents caused by using artificial intelligence. Their approach does not look at extreme cases but rather looks through the lens of a day in the "life" of a cleaning robot. The paper defines accidents as "unintended and harmful behavior that may emerge from machine learning systems when we specify the wrong objective function, are not careful about the learning process, or commit other machine learning-related implementation errors." It further goes on to outline several safety-related problems:

Avoiding Negative Side Effects: How can we ensure that our cleaning robot will not disturb the environment in negative ways while pursuing its goals, e.g. by knocking over a vase because it can clean faster by doing so? Can we do this without manually specifying everything the robot should not disturb?

Avoiding Reward Hacking: How can we ensure that the cleaning robot won't game its reward function? For example, if we reward the robot for achieving an environment free of messes, it might disable its vision so that it won't find any messes, or cover over messes with materials it can't see through, or simply hide when humans are around so they can't tell it about new types of messes.

Scalable Oversight: How can we efficiently ensure that the cleaning robot respects aspects of the objective that are too expensive to be frequently evaluated during training? For instance, it should throw out things that are unlikely to belong to anyone, but put aside things that might belong to someone (it should handle stray candy wrappers differently from stray cellphones). Asking the humans involved whether they lost anything can serve as a check on this, but this check might have to be relatively infrequent - can the robot find a way to do the right thing despite limited information?

Safe Exploration: How do we ensure that the cleaning robot doesn't make exploratory moves with very bad repercussions? For example, the robot should experiment with mopping strategies, but putting a wet mop in an electrical outlet is a very bad idea.
Robustness to Distributional Shift: How do we ensure that the cleaning robot recognizes, and behaves robustly, when in an environment different from its training environment? For example, strategies it learned for cleaning an office might be dangerous on a factory work floor.

These problems, while instructive and helpful to explore AI safety, also offer a glimpse of similar issues observed in actual workplace settings. This is not to say that people behave like robots; far from it. However, seeing things from a different vantage point can provide new insights. Solving AI safety may also improve overall workplace safety. The use of artificial intelligence to drive autonomous machinery will no doubt increase in the months and years ahead. This will continue to raise many questions including how process and occupational safety will be impacted by the increase in machine autonomy. At the same time, research into AI safety may offer fresh perspectives on how we currently address overall safety. "Just when you think you know something, you have to look at it in another way. Even though it may seem silly or wrong, you must try." From the movie "Dead Poets Society"
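As an aside, the first two problems can be made concrete with a toy reward function (not from the paper, with invented numbers): without a penalty for disturbing the environment, the plan that knocks over the vase scores best; with the penalty, the careful plan wins.

```python
# Toy illustration of the "negative side effects" problem: a cleaning agent's
# reward with and without a penalty for disturbing the environment.
# Values and weights are invented for illustration.

def reward(messes_cleaned: int, objects_disturbed: int, impact_weight: float = 0.0) -> float:
    """Reward = messes cleaned, minus a penalty for side effects."""
    return messes_cleaned - impact_weight * objects_disturbed


# Plan A: careful cleaning. Plan B: faster, but a vase gets knocked over.
plan_a = {"messes_cleaned": 3, "objects_disturbed": 0}
plan_b = {"messes_cleaned": 4, "objects_disturbed": 1}

for w in (0.0, 2.0):
    a = reward(**plan_a, impact_weight=w)
    b = reward(**plan_b, impact_weight=w)
    best = "B (knocks over the vase)" if b > a else "A (careful)"
    print(f"impact_weight={w}: plan A={a}, plan B={b} -> agent prefers plan {best}")
```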
- Will Your Next Compliance Expert be AI?
In this post we take a look at a new AI technology called ChatGPT from OpenAI. It can answer many of your questions, code for you, and even create songs in the style of your favourite artists. Of course, we were interested in whether or not it might be a replacement for a compliance expert. So we asked it some questions and here is what we found:
- Why is compliance important?
- How do organizations improve their compliance?
- How do organizations meet their ESG objectives?
- How do organizations build trust?
- How do organizations contend with uncertainty and risk?
- How do promises help meet obligations?
- How do organizations become more proactive?
- And for fun ...
- And what did ChatGPT think about Lean Compliance?

I couldn't agree more with those principles. So in terms of answering our questions, the answers were good. The poem was not half-bad either. However, when asked questions such as "What should our organization do?" or "What are our top compliance risks?", these of course could not be answered. Yet this is what a good compliance expert can provide and why you will always need people in the compliance role. Decision making that involves taking risks is something that only people can answer for. As T.S. Eliot wrote, "It is impossible to design a system so perfect that no one needs to be good." Deciding what is good or bad is a human choice. Being good and using technology for good are also human decisions. I am sure that AI will continue to develop and so will ChatGPT. It may one day find a home within organizations. So far the costs are prohibitive - "eye watering". However, it would be great to ask questions like: "Do we have a policy that covers xyz?", "What applicable regulations will this action impact?", "What commitments have we made to this ESG objective?", "Calculate our reputational risk if we go ahead with this action", and so on. A hypothetical sketch of what that could look like follows.
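If a future assistant could answer questions like these, one plausible shape is retrieval over your own policy documents followed by a language-model call. The sketch below is hypothetical: ask_model() stands in for whatever AI service might be used (it is not a real API), and the policy library is invented.

```python
# A hypothetical sketch of answering compliance questions over internal policies.
# ask_model() is a stand-in for a call to an AI service; it is not a real API,
# and the policy library below is invented.

policies = {
    "POL-17 Data Retention": "Incident records are retained for five years.",
    "POL-22 Vendor Security": "Vendors must complete a security assessment annually.",
}


def retrieve(question: str) -> list[str]:
    """Naive keyword retrieval: return policies sharing a significant word with the question."""
    words = {w for w in question.lower().split() if len(w) > 3}
    return [f"{title}: {text}" for title, text in policies.items()
            if words & {w for w in text.lower().split() if len(w) > 3}]


def ask_model(question: str, context: list[str]) -> str:
    # Placeholder for a language-model call; here it simply reports what was found.
    if not context:
        return "No relevant policy found - a person should review this."
    return "Possibly covered by: " + "; ".join(context)


if __name__ == "__main__":
    question = "Do we have a policy that covers how long incident records are retained?"
    print(ask_model(question, retrieve(question)))
```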
- Why you need to govern your use of AI
Each organization will and should determine how they will govern the use of AI and the risks associated with using it. AI and its cousin, machine learning, are already being used by many organizations, and most likely by their suppliers as well. Much of this use is not governed and lacks oversight. There will be costs and side effects from using AI that we need to account for. Data used in AI will also need to be protected. If bad actors can corrupt your learning data sets then you will end up with corrupted insights informing your decisions. The European Union is presently drafting guidelines for the protection of data sets used in machine learning to prevent corruption of outcomes from AI. This is perhaps better late than never, and we should expect more regulations in the future. How are you governing your use of AI? What standards are you using? How are you contending with ethical considerations? Are you handling the risk from using AI?
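One practical, if partial, safeguard against corrupted learning data sets is to record a cryptographic fingerprint of each approved data file and verify it before training. A minimal sketch follows; the file name and contents are invented for illustration.

```python
# A minimal integrity check for learning data sets: record a SHA-256 fingerprint of
# each approved data file, and verify it before the data is used for training.
# File names and contents below are invented for illustration.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def verify(manifest: dict[str, str]) -> bool:
    """Return True only if every approved file still matches its recorded hash."""
    ok = True
    for name, expected in manifest.items():
        path = Path(name)
        if not path.exists() or sha256_of(path) != expected:
            print(f"BLOCK: {name} is missing or altered since approval")
            ok = False
    return ok


if __name__ == "__main__":
    data = Path("claims_2023.csv")
    data.write_text("claim_id,amount\n1,1200\n2,340\n")
    manifest = {str(data): sha256_of(data)}    # recorded when the data set was approved

    print("Before tampering:", verify(manifest))    # True: safe to train
    data.write_text("claim_id,amount\n1,999999\n")  # a bad actor alters the data
    print("After tampering: ", verify(manifest))    # False: block training
```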
- Can You Trust AI?
Artificial intelligence (AI) is one of the most exciting and transformative technologies of our time. From healthcare to transportation, education to energy, AI has the potential to revolutionize nearly every industry and sector. However, as with any powerful technology, there are concerns about its potential misuse and the need for regulations to ensure that it is developed and used in a responsible and ethical manner. In response to these concerns, many countries are proposing legislation to govern the use of AI, including the European Union's AI Act, the UK National AI Strategy and Proposed AI Act, Canada's Artificial Intelligence and Data Act (AIDA), and USA's NIST Artificial Intelligence Risk Management Framework. In this article, we will explore these regulatory efforts and the importance of responsible AI development and use.

European Union AI Act

The European Union's Artificial Intelligence Act is a proposed regulation that aims to establish a legal framework for the development and use of artificial intelligence (AI) in the European Union. The regulation is designed to promote the development and use of AI while at the same time protecting fundamental rights, such as privacy, non-discrimination, and the right to human oversight. The Commission puts forward the proposed regulatory framework on Artificial Intelligence with the following specific objectives:
- Ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
- Ensure legal certainty to facilitate investment and innovation in AI;
- Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
- Facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

One of the key features of the regulation is the identification of certain AI applications as "high-risk." These include AI systems used in critical infrastructure, transportation, healthcare, and public safety. High-risk AI systems must undergo a conformity assessment process before they can be deployed to ensure that they meet certain safety and ethical standards. The regulation also prohibits certain AI practices that are considered unacceptable, such as AI that manipulates human behaviour or creates deepfake videos without disclosure. This is designed to prevent the development and use of AI that can be harmful to individuals or society as a whole. Transparency and accountability are also important aspects of the regulation. AI developers must ensure that their systems are transparent, explainable, and accountable. They must also provide users with clear and concise information about the AI system's capabilities and limitations. This is designed to increase trust in AI systems and to promote the responsible development and use of AI. Member states will be responsible for enforcing the regulation, and non-compliance can result in significant fines. This is designed to ensure that AI developers and users comply with the regulation and that the use of AI is safe and ethical. Overall, the European Union's Artificial Intelligence Act represents an important step in the regulation of AI in the EU. It balances the benefits of AI with the need to protect fundamental rights and ensures that the development and use of AI is safe, ethical, and transparent.
UK National AI Strategy and Proposed AI Act

The UK national AI strategy, launched in November 2021, is a comprehensive plan to position the UK as a global leader in the development and deployment of artificial intelligence technologies by 2030. The strategy is based on four key pillars: research and innovation, skills and talent, adoption and deployment, and data and infrastructure. The first pillar, research and innovation, aims to support the development of AI technologies and their ethical use. This involves investing in research and development to create cutting-edge AI solutions that can be applied to various industries and fields. The strategy also emphasizes the importance of ethical considerations in AI development, such as fairness, accountability, transparency, and explainability. The second pillar, skills and talent, aims to ensure that the UK has a pipeline of diverse and skilled AI talent. This involves investing in education, training, and re-skilling programs to equip people with the necessary skills to work with AI technologies. The strategy recognizes the importance of diversity in the workforce, particularly in AI, and seeks to encourage more women and underrepresented groups to pursue careers in AI. The third pillar, adoption and deployment, focuses on encouraging businesses and public sector organizations to adopt and deploy AI technologies to drive productivity, innovation, and sustainability. This involves promoting the use of AI to solve real-world problems and improve business processes. The strategy also recognizes the need for regulations and standards to ensure that AI is used ethically and responsibly. The fourth pillar, data and infrastructure, aims to invest in digital infrastructure and ensure that data is shared securely and responsibly. This involves promoting the development of data sharing platforms and frameworks, while also ensuring that privacy and security are protected. The strategy also recognizes the importance of data interoperability and standardization to facilitate the sharing and use of data. With respect to risk and safety, the strategy acknowledges the potential risks associated with AI, such as biased or unfair outcomes, loss of privacy, and the potential for AI to be used for malicious purposes. To mitigate these risks, the strategy calls for the development of robust ethical and legal frameworks for AI, as well as increased transparency and accountability in AI systems.

The UK AI Act is proposed legislation aimed at regulating the development, deployment, and use of artificial intelligence (AI) systems in the United Kingdom. The Act includes the following key provisions:
- The creation of a new regulatory body called the AI Regulatory Authority to oversee the development and deployment of AI systems.
- The introduction of mandatory risk assessments for high-risk AI systems, such as those used in healthcare or transportation.
- The requirement for companies to disclose when AI is being used to make decisions that affect individuals.
- The prohibition of certain AI applications, including those that pose a threat to human safety or privacy, or those that perpetuate discrimination.
- The establishment of a voluntary code of conduct for companies developing AI systems.
- The provision of rights for individuals affected by AI systems, including the right to explanation and the right to challenge automated decisions.
Overall, the UK AI Act aims to balance the potential benefits of AI with the need to protect individuals from potential harm, ensure transparency and accountability, and promote ethical and responsible development and use of AI technology. Taken together, the UK National AI Strategy and the proposed AI Act emphasize the importance of responsible and sustainable AI development, and seek to ensure that the benefits of AI are realized while minimizing the risks and challenges that may arise.

Canadian Artificial Intelligence and Data Act (AIDA)

Bill C-27 proposes Canada's Artificial Intelligence and Data Act (AIDA), a new piece of legislation designed to create a framework for the responsible development and deployment of AI systems in Canada. The government aims to create a regulatory framework that promotes the responsible and ethical use of these technologies while balancing innovation and economic growth. AIDA is based on a set of principles that focus on privacy, transparency, and accountability. One of the key features of the bill is the establishment of the AI and Data Agency, a regulatory body that would oversee compliance with the proposed legislation. The agency would be responsible for developing and enforcing regulations related to data governance, transparency, accountability, and algorithmic bias. It would also provide guidance and support to organizations that use AI and data-related technologies.

The governance requirements proposed under the AIDA are aimed at ensuring that anyone responsible for a high-impact AI system (i.e., one that could cause harm or produce biased results) takes steps to assess the system's impact, manage the risks associated with its use, monitor compliance with risk management measures, and anonymize any data processed in the course of regulated activities. The Minister designated by the Governor in Council to administer the AIDA is granted significant powers to make orders and regulations related to these governance requirements. These powers include the ability to order record collection, auditing, cessation of use, and publication of information related to the requirements, as well as the ability to disclose information obtained to other public bodies for the purpose of enforcing other laws.

The transparency requirements proposed under the AIDA are aimed at ensuring that anyone who manages or makes available for use a high-impact AI system publishes a plain-language description of the system on a publicly available website. The description must include information about how the system is intended to be used, the types of content it is intended to generate, the decisions, recommendations or predictions it is intended to make, and the mitigation measures established as part of the risk management measures requirement. The Minister must also be notified as soon as possible if the use of the system results in or is likely to result in material harm.

Finally, the penalties proposed under the AIDA for non-compliance with the governance and transparency requirements are significantly greater in magnitude than those found in Bill 64 or the EU's General Data Protection Regulation. They include administrative monetary penalties, fines for breaching obligations, and new criminal offences related to AI systems.
These offences include knowingly using personal information obtained through the commission of an offence under a federal or provincial law to make or use an AI system, knowingly or recklessly designing or using an AI system that is likely to cause harm and causes such harm, and causing a substantial economic loss to an individual by making an AI system available for use with the intent to defraud the public. Fines for these offences can range up to $25,000,000 or 5% of gross global revenues for businesses, and up to $100,000 or two years less a day in jail for individuals. Bill C-27 will have a significant impact on businesses that work with AI by imposing new obligations and penalties for non-compliance. It could potentially make Canada the first jurisdiction in the world to adopt a comprehensive legislative framework for regulating the responsible deployment of AI. The government will have flexibility in how it implements and enforces the provisions of the bill related to AI, with specific details to be clarified after the bill's passage. Businesses can look to the EU and existing soft law frameworks for guidance on best practices. The bill also includes provisions for consumer privacy protection.

US NIST AI Risk Management and Other Guidelines

There are no regulations in the US specific to AI; however, several government agencies have issued guidelines and principles related to AI, and there have been proposals for new regulations. The White House Office of Science and Technology Policy (OSTP) issued a set of AI principles in January 2020, which are intended to guide federal agencies in the development and deployment of AI technologies. The principles emphasize the need for transparency, accountability, and safety in AI systems, and they encourage the use of AI to promote public good and benefit society. The "Artificial Intelligence Risk Management Framework (AI RMF 1.0)" has been published by the US National Institute of Standards and Technology (NIST) to offer guidance on managing risks linked with AI systems. The framework outlines a risk management approach that organizations can apply to evaluate the risks associated with their AI systems, including aspects such as data quality, model quality, and system security. The framework underlines the significance of transparency and explainability in AI systems and the establishment of clear governance structures for these systems. In addition, the Federal Trade Commission (FTC) has issued guidelines on the use of AI in consumer protection, and the Department of Defense has developed its own set of AI principles for use in military applications. There have also been proposals for new federal regulations related to AI. In April 2021, the National Security Commission on Artificial Intelligence (NSCAI) released a report that recommended a range of measures to promote the development and use of AI in the United States, including the creation of a national AI strategy and the establishment of new regulatory frameworks for AI technologies. In summary, while there are currently no federal regulations specific to AI in the United States, several government agencies have issued guidelines and principles related to AI, and there have been proposals for new regulations. The principles and guidelines emphasize the need for transparency, accountability, and safety in AI systems, and there is growing interest in developing new regulatory frameworks to promote the responsible development and use of AI technologies.
Conclusion

Artificial intelligence (AI) is a rapidly evolving field that has the potential to transform numerous industries and sectors. However, with this growth comes the need for regulations to ensure that AI is developed and used responsibly and ethically. In recent years, several countries have proposed legislation to address these concerns, including the European Union's AI Act, the UK National AI Strategy and Proposed AI Act, Canada's Artificial Intelligence and Data Act (AIDA), and USA's NIST Artificial Intelligence Risk Management Framework. The European Union's AI Act aims to establish a legal framework for the development and use of AI in the EU. It identifies certain AI applications as "high-risk" and requires them to undergo a conformity assessment process before deployment. The regulation also prohibits certain AI practices that are considered unacceptable and emphasizes the importance of transparency and accountability. The UK National AI Strategy and Proposed AI Act are designed to position the UK as a global leader in the development and deployment of AI technologies by 2030. The strategy focuses on research and innovation, skills and talent, adoption and deployment, and data and infrastructure, while the proposed AI Act includes provisions such as the creation of a new regulatory body and mandatory risk assessments for high-risk AI systems. Canada's Artificial Intelligence and Data Act (AIDA) proposes a framework for the responsible development and deployment of AI systems in Canada. The legislation includes provisions such as a requirement for AI developers to assess and mitigate the potential impacts of their systems and the establishment of a national AI advisory council. The US National Institute of Standards and Technology (NIST) has published the "Artificial Intelligence Risk Management Framework (AI RMF 1.0)", which provides guidance on managing the risks associated with AI systems. The framework also emphasizes the importance of transparency and explainability in AI systems, as well as the need to establish clear governance structures for AI systems. Overall, these proposed regulations and guidelines demonstrate the growing recognition of the need for responsible and ethical development and use of AI, and highlight the importance of transparency, accountability, and risk management in AI systems, specifically those with high impact. Even though these regulations await further development and approval, it is incumbent on organizations to take reasonable precautions to mitigate risk and protect the public from preventable harm arising from the use of AI. It is how well this is done that will largely determine if we can trust AI. As has been quoted before: "It is impossible to design a system so perfect that no one needs to be good" (T.S. Eliot). The question of trust lies with how "good" we will be in our use of AI. If you made it this far, you may be interested in learning more about this topic.
Here are links to the legislation and guidelines referenced in this article:
- European Union AI Act: https://artificialintelligenceact.eu/
- UK AI National Strategy: https://www.gov.uk/government/publications/national-ai-strategy
- Canadian Bill C-27 (AIDA): https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading
- USA NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework

Also, if you are interested in developing an AI Risk & Compliance program to manage obligations with respect to the responsible and safe use of AI, consider joining our advanced program, "The Proactive Certainty Program™". More information can be found on our website.
- Breaking the Illusion: The Case Against Anthropomorphizing AI Systems
Artificial intelligence (AI) has become increasingly prevalent in our lives, and as we interact more and more with these systems, it's tempting to anthropomorphize them, or attribute human-like characteristics to them. We might call them "intelligent" or "creative," or even refer to them as "he" or "she." However, there are several reasons why we should avoid anthropomorphizing AI systems.

First and foremost, AI is not human. AI systems are designed to mimic human behaviour and decision-making, but they don't have the same experiences, emotions, or motivations that humans do. Therefore, attributing human characteristics to AI can lead to false expectations and misunderstandings. For example, if we think of an AI system as "intelligent" in the same way we think of a human as intelligent, we may assume that the AI system can think for itself and make decisions based on moral or ethical considerations. In reality, AI systems are programmed to make decisions based on data and algorithms, and they don't have the capacity for empathy or morality.

Secondly, anthropomorphizing AI systems can be misleading and even dangerous. When we think of an AI system as having human-like qualities, we may assume that it has the same limitations and biases as humans. However, AI systems can be far more accurate and efficient than humans in certain tasks, but they can also be prone to their own unique biases and errors. For example, if we anthropomorphize a facial recognition AI system, we may assume that it can accurately identify people of all races and genders, when in reality, many AI facial recognition systems have been found to be less accurate for people of color and women.

Thirdly, anthropomorphizing AI can have negative consequences for our relationship with technology. By attributing human-like qualities to AI systems, we may become overly reliant on them and trust them more than we should. This can lead to a loss of agency and responsibility, as we may assume that the AI system will make the best decision for us without questioning its choices. Additionally, if we think of AI systems as having emotions or intentions, we may treat them differently than we would treat other technology, which can be a waste of resources and distract from more important uses of AI.

While it's tempting to anthropomorphize AI systems, we should be aware of the potential negative consequences of doing so. By acknowledging that AI systems are not human and avoiding attributing human-like qualities to them, we can have a more accurate understanding of their capabilities and limitations, and make better decisions about how to interact with them.

How to Stop Humanizing AI Systems

To prevent or stop anthropomorphizing AI systems, here are some steps that could be taken:
- Educate people: Educating people about the limitations and capabilities of AI systems can help them avoid attributing human-like qualities to them.
- Use clear communication: When developing and deploying AI systems, clear and concise communication about their functionality and purpose should be provided to users.
- Design non-human-like interfaces: Designing interfaces that are distinctively non-human-like can help avoid users attributing human-like qualities to AI systems.
- Avoid anthropomorphic language: Avoid using anthropomorphic language when referring to AI systems, such as calling them "smart" or "intelligent," as this can reinforce the idea that they are human-like.
- Emphasize the role of programming: Emphasizing that AI systems operate based on pre-programmed rules and algorithms, rather than human-like intelligence, can help users avoid anthropomorphizing them.
- Provide transparency: Providing transparency about how the AI system works, its decision-making process, and data sources can help users understand its limitations and avoid anthropomorphizing it.

Overall, it's essential to ensure that AI systems are perceived and understood as the tools they are, rather than human-like entities. This can be achieved through education, clear communication, and thoughtful and responsible design.
- The AI Dilemma: Exploring the Unintended Consequences of Uncontrolled Artificial Intelligence
Artificial intelligence (AI) is a rapidly developing technology that has the potential to revolutionize the world in unprecedented ways. However, as its capabilities continue to expand, concerns are being raised about the lack of responsibility and safety measures in its development and deployment. The Center for Humane Technology's Tristan Harris and Aza Raskin recently presented the AI Dilemma, exploring the risks of uncontrolled AI and the need for responsible use.

The Problem

The parallels between the early days of social media and the development of AI are striking. Both technologies were created and scaled to the masses while we all hoped for the best, with users becoming the unwitting experiment, consenting to participate without fully understanding the potential risks. However, the consequences of AI could be far more severe, as it has the ability to interact with its environment in unpredictable ways. The risks of unchecked AI are vast. We are experiencing an uncontrolled, self-reinforcing learning loop creating exponential capabilities, but with unmitigated risks. In many ways, this is a race condition without any kill switch or means of regulating outcomes to keep AI operating in a responsible manner. This is a problem that we, as humans, have created, and one that we must address.

A Solution

The AI Dilemma raises important questions that we must address. Where are the safeguards, the brakes, and the kill switch? Who is responsible for the "responsible" use of AI, and when does the science experiment stop and responsible engineering begin? We must balance innovation with responsibility to ensure that AI is developed and used in ways that benefit society, not threaten it. A step we can take is to reinsert the engineering method into the development of AI. This means having a process to weigh the pros and cons, balance the trade-offs, and prioritize the safety, health, and welfare of the public. This will require more engineers, along with other professionals, in the loop, advocating for and practising responsible AI. The consequences of unchecked AI are substantial, and we must take action now to mitigate these risks. The AI Dilemma is a call to action, urging us to reevaluate our approach to AI and to prioritize the development and deployment of responsible AI. By doing so, we can ensure that AI is a force for good, enhancing our lives rather than threatening them. Instead of deploying science experiments to the public at scale, we need to build responsibly engineered solutions.
- Manufacturers Integrity: A model for AI Regulation
While governmental regulations exist to enforce compliance, manufacturers in certain markets have recognized the need for self-regulation to maintain high standards and build trust among stakeholders. This article explores the concept of manufacturers' integrity and the significance of self-regulation, with application to AI practice and use.

EU Example

Government regulations provide a legal framework for manufacturers; however, self-regulation acts as an additional layer of accountability. By proactively addressing ethical concerns, industry associations and manufacturers can demonstrate a commitment to responsible practices and build credibility. The EU notion of manufacturers' integrity offers an example of where self-regulation plays a significant role. Manufacturers' integrity refers to the ethical conduct and commitment to quality and safety demonstrated by businesses in the production and distribution of goods. In the EU, manufacturers have a vital role in guaranteeing the safety of products sold within the extended single market of the European Economic Area (EEA). They bear the responsibility of verifying that their products adhere to the safety, health, and environmental protection standards set by the European Union (EU). The manufacturer is obligated to conduct the necessary conformity assessment, establish the technical documentation, issue the EU declaration of conformity, and affix the CE marking to the product. Only after completing these steps can the product be legally traded within the EEA market. While this model provides a framework for higher levels of safety and quality, it requires manufacturers to establish internal governance, programs, systems and processes to regulate themselves. At a fundamental level this means:
- Identifying and taking ownership of obligations
- Making and keeping promises

For many, these steps go beyond turning "shall" statements into policy. They require turning "should" statements into promises, with the added step of first figuring out what "should" means for their products and services. Determining what "should" looks like is the work of leadership, which needs to happen now for the responsible use of A.I.

Principles of Ethical Use of AI for Ontario

Countries across the world are actively looking at how best to address A.I. A team within Ontario's Digital Service has examined ethical principles from various jurisdictions around the world, including New Zealand, the United States, the European Union, and major research consortiums. From this research, principles were created that are designed to complement the Canadian federal principles by addressing specific gaps. While intended as guidelines for government processes, programs and services, they can inform other sectors regarding their own self-regulation of A.I. The following are the 6 (Beta) principles proposed by Ontario's A.I. team:

1. Transparent and explainable

There must be transparent use and responsible disclosure around data enhanced technology like AI, automated decisions and machine learning systems to ensure that people understand outcomes and can discuss, challenge and improve them. This includes being open about how and why these technologies are being used. When automation has been used to make or assist with decisions, a meaningful explanation should be made available. The explanation should be meaningful to the person requesting it. It should include relevant information about what the decision was, how the decision was made, and the consequences.
Why it matters
Transparent use is the key principle that helps enable other principles while building trust and confidence in government use of data enhanced technologies. It also encourages a dialogue between those using the technology and those who are affected by it. Meaningful explanations are important because they help people understand and potentially challenge outcomes. This helps ensure decisions are rendered fairly. It also helps identify and reverse adverse impacts on historically disadvantaged groups.

2. Good and fair

Data enhanced technologies should be designed and operated in a way throughout their life cycle that respects the rule of law, human rights, civil liberties, and democratic values. These include dignity, autonomy, privacy, data protection, non-discrimination, equality, and fairness.

Why it matters
Algorithmic and machine learning systems evolve through their lifecycle and as such it is important for the systems in place and technologies to be good and fair at the onset, in their data inputs and throughout the life cycle of use. The definitions of good and fair are intentionally broad to allow designers and developers to consider all of the users both directly and indirectly impacted by the deployment of an automated decision making system.

3. Safe

Data enhanced technologies like AI and ML systems must function in a safe and secure way throughout their life cycles and potential risks should be continually assessed and managed. Designers, policy makers and developers should embed appropriate safeguards throughout the life cycle of the system to ensure it is working as intended. This would include mechanisms related to system testing, piloting, scaling and human intervention as well as alternative processes in case a complete halt of system operations is required. The mechanisms must be appropriate to the context and determined before deployment but should be iterated upon throughout the system's life cycle.

Why it matters
Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. As well, issues with the system can arise that only become apparent after the system is deployed. Therefore, despite our best efforts unexpected outcomes and impacts need to be considered. Accordingly, systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making decisions that are not intended, a human can adapt, correct or improve the system.

4. Accountable and responsible

Organizations and individuals developing, deploying or operating AI systems should be held accountable for their ongoing proper functioning in line with the other principles. Human accountability and decision making over AI systems within an organization needs to be clearly identified, appropriately distributed and actively maintained throughout the system's life cycle. An organizational culture around shared ethical responsibilities over the system must also be promoted. Where AI is used to make or assist with decisions, a public and accessible process for redress should be designed, developed, and implemented with input from a multidisciplinary team and affected stakeholders. Algorithmic systems should also be regularly peer-reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time.

Why it matters
Identifying and appropriately distributing accountability within an organization helps ensure continuous human oversight over the system is properly maintained.
In addition to clear roles related to accountability, it is also important to promote an organizational culture around shared ethical responsibilities. This helps prevent gaps and avoids the situation where ethical considerations are always viewed as someone else's responsibility. While our existing legal framework includes numerous traditional processes of redress related to governmental decision making, AI systems can present unique challenges to those traditional processes with their complexity. Input from a multidisciplinary team and affected stakeholders will help identify those issues in advance and design appropriate mechanisms to mitigate them. Regular peer review of AI systems is also important. Issues around bias may not be evident when AI systems are initially designed or developed, so it's important to consider this requirement throughout the life-cycle of the system.

5. Human centric

AI systems should be designed with a clearly articulated public benefit that considers those who interact with the system and those who are affected by it. These groups should be meaningfully engaged throughout the system's life cycle, to inform development and enhance operations. An approach to problem solving that embraces human centered design is strongly encouraged.

Why it matters
Clearly articulating a public benefit is an important step that enables meaningful dialogue early with affected groups and allows for measurement of success later. Placing the focus on those who interact with the system and those who are affected by it ensures that the outcomes do not cause adverse effects in the process of creating additional efficiencies. Developing algorithmic systems that incorporate human centred design will ensure better societal and economic outcomes from the data enhanced technologies.

6. Sensible and appropriate

Every data enhanced system exists not only within its use case, but also within a particular sector of society and a broader context that can feel its impact. Data enhanced technologies should be designed with consideration of how they may apply to a particular sector along with awareness of the broader context. This context could include relevant social or discriminatory impacts.

Why it matters
Algorithmic systems and machine learning applications will differ by sector. As a result, while the above principles are a good starting point for developing ethical data enhanced technologies, it is important that additional considerations be given to the specific sectors to which the algorithm is applied.

Conclusion

In conclusion, the concept of manufacturers' integrity and self-regulation emerges as a crucial model for AI regulation. While governmental regulations provide a legal framework, self-regulation acts as an additional layer of accountability, allowing manufacturers to demonstrate their commitment to responsible practices and build credibility among stakeholders. The EU example highlights the significance of manufacturers' integrity, where businesses bear the responsibility of ensuring the safety and adherence to standards for their products. This model emphasizes the need for manufacturers to establish internal governance, programs, systems, and processes to regulate themselves, requiring them to identify and take ownership of their obligations while making and keeping promises.
Furthermore, the proposed principles of ethical AI use for Ontario shed light on the importance of transparent and explainable systems, good and fair practices, safety and security measures, accountability and responsibility, human-centric design, and sensible and appropriate application of AI technologies. These principles aim to ensure that AI systems respect the rule of law, human rights, civil liberties, and democratic values while incorporating meaningful engagement with those affected by the systems. By adhering to these principles, organizations can foster trust, avoid adverse impacts, and align AI technologies with ethical considerations and societal values. As governments and organizations worldwide grapple with the regulation of AI, the adoption of manufacturers' integrity and self-regulation, coupled with the principles of ethical AI use, can serve as a comprehensive framework for responsible AI practice and use. It is imperative for stakeholders to collaborate, continuously assess risks, promote accountability, and prioritize human-centric design to mitigate the challenges and maximize the potential benefits of AI technologies. By doing so, we can shape a future where AI is harnessed ethically, transparently, and in alignment with the values and aspirations of society.