Manufacturer's Integrity: A Model for AI Regulation

While governmental regulations exist to enforce compliance, manufacturers in certain markets have recognized the need for self-regulation to maintain high standards and build trust among stakeholders. This article explores the concept of manufacturers' integrity and the significance of self-regulation, with application to AI practice and use.


EU Example


Government regulations provide a legal framework for manufacturers; however, self-regulation acts as an additional layer of accountability. By proactively addressing ethical concerns, industry associations and manufacturers can demonstrate a commitment to responsible practices and build credibility.


The EU notion of manufacturers’ integrity offers an example of where self-regulation plays a significant role. Manufacturers' integrity refers to the ethical conduct and commitment to quality and safety demonstrated by businesses in the production and distribution of goods.


In the EU, manufacturers play a vital role in guaranteeing the safety of products sold within the extended single market of the European Economic Area (EEA). They bear the responsibility of verifying that their products adhere to the safety, health, and environmental protection standards set by the European Union (EU). The manufacturer is obligated to conduct the necessary conformity assessment, establish the technical documentation, issue the EU declaration of conformity, and affix the CE marking to the product. Only after completing these steps can the product be legally traded within the EEA market.
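
A minimal sketch of that gating logic, assuming hypothetical class and field names (illustrative only, not an official EU schema):

```python
# Illustrative sketch: models the EU conformity steps described above as a
# simple gate. Class and field names are hypothetical, not an official API.
from dataclasses import dataclass

@dataclass
class ConformityFile:
    assessment_done: bool = False        # conformity assessment conducted
    technical_docs_ready: bool = False   # technical documentation established
    declaration_issued: bool = False     # EU declaration of conformity issued
    ce_mark_affixed: bool = False        # CE marking affixed to the product

def may_trade_in_eea(product: ConformityFile) -> bool:
    """A product may be placed on the EEA market only after all steps are done."""
    return all([
        product.assessment_done,
        product.technical_docs_ready,
        product.declaration_issued,
        product.ce_mark_affixed,
    ])

# Example: a missing CE marking blocks market placement.
widget = ConformityFile(assessment_done=True, technical_docs_ready=True,
                        declaration_issued=True, ce_mark_affixed=False)
assert may_trade_in_eea(widget) is False
```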


While this model provides a framework for higher levels of safety and quality, it requires manufacturers to establish internal governance, programs, systems, and processes to regulate themselves. At a fundamental level, this means:


  1. Identifying and taking ownership of obligations

  2. Making and keeping promises


For many, these steps go beyond turning "shall" statements into policy. They require turning "should" statements into promises, with the added step of first figuring out what "should" means for their products and services. Determining what "should" looks like is the work of leadership, and it needs to happen now for the responsible use of AI.
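
As an illustration of what those two steps might look like in practice, the sketch below models a simple obligations register. All names, owners, and commitments are hypothetical assumptions, not a prescribed schema:

```python
# Illustrative sketch of an obligations register: "shall" statements map to
# mandated obligations, "should" statements to promises the organization
# chooses to make. All names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Obligation:
    statement: str          # the "shall" or "should" statement
    kind: str               # "shall" (mandated) or "should" (a promise)
    owner: str              # who has taken ownership of it
    commitments: list = field(default_factory=list)  # concrete promises made

register = [
    Obligation("System decisions shall be explainable on request",
               kind="shall", owner="Head of Engineering",
               commitments=["Log model inputs and outputs for every decision"]),
    Obligation("Affected users should be able to challenge outcomes",
               kind="should", owner="Chief Compliance Officer",
               commitments=["Publish a redress process"]),
]

# Step 1: identify and take ownership; step 2: make and keep promises.
unowned = [o for o in register if not o.owner]
unkept = [o for o in register if not o.commitments]
assert not unowned and not unkept
```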


Principles of Ethical Use of AI for Ontario


Countries across the world are actively looking at how best to address AI. A team within Ontario's Digital Service has examined ethical principles from various jurisdictions around the world, including New Zealand, the United States, the European Union, and major research consortiums. From this research, principles were developed to complement the Canadian federal principles by addressing specific gaps.


While intended as guidelines for government processes, programs, and services, they can also inform other sectors' self-regulation of AI.


The following are the six (beta) principles proposed by Ontario's AI team:


1. Transparent and explainable


There must be transparent use and responsible disclosure around data-enhanced technologies such as AI, automated decisions, and machine learning systems to ensure that people understand outcomes and can discuss, challenge, and improve them. This includes being open about how and why these technologies are being used.


When automation has been used to make or assist with a decision, a meaningful explanation should be made available to the person requesting it. The explanation should include relevant information about what the decision was, how the decision was made, and the consequences.
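
One way to picture such an explanation is as a structured record capturing all three elements. The following sketch is illustrative only; the field names are assumptions, not a standard:

```python
# Minimal sketch of a "meaningful explanation" record for an automated
# decision: what was decided, how, and with what consequences.
from dataclasses import dataclass

@dataclass
class DecisionExplanation:
    decision: str      # what the decision was
    rationale: str     # how the decision was made (factors, rules, weights)
    consequences: str  # what the decision means for the person affected
    automated: bool    # whether automation made or assisted with the decision

    def render(self) -> str:
        source = "an automated system" if self.automated else "a human reviewer"
        return (f"Decision: {self.decision}\n"
                f"Made by {source} based on: {self.rationale}\n"
                f"What this means for you: {self.consequences}")

print(DecisionExplanation(
    decision="Benefit application approved",
    rationale="income below threshold; documentation complete",
    consequences="payments begin next month",
    automated=True,
).render())
```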


Why it matters


Transparent use is the key principle that helps enable other principles while building trust and confidence in government use of data-enhanced technologies. It also encourages a dialogue between those using the technology and those who are affected by it.


Meaningful explanations are important because they help people understand and potentially challenge outcomes. This helps ensure decisions are rendered fairly. It also helps identify and reverse adverse impacts on historically disadvantaged groups.


2. Good and fair


Data-enhanced technologies should be designed and operated, throughout their life cycle, in a way that respects the rule of law, human rights, civil liberties, and democratic values. These include dignity, autonomy, privacy, data protection, non-discrimination, equality, and fairness.


Why it matters


Algorithmic and machine learning systems evolve throughout their life cycle, so it is important that these systems and technologies be good and fair at the outset, in their data inputs, and throughout their life cycle of use. The definitions of good and fair are intentionally broad, allowing designers and developers to consider all of the users directly and indirectly impacted by the deployment of an automated decision-making system.


3. Safe


Data-enhanced technologies like AI and ML systems must function in a safe and secure way throughout their life cycles, and potential risks should be continually assessed and managed.


Designers, policymakers, and developers should embed appropriate safeguards throughout the life cycle of the system to ensure it is working as intended. This would include mechanisms related to system testing, piloting, scaling, and human intervention, as well as alternative processes in case a complete halt of system operations is required. The mechanisms must be appropriate to the context and determined before deployment, but should be iterated upon throughout the system's life cycle.
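
A minimal sketch of what such safeguards could look like in code, assuming a hypothetical decision service: a confidence threshold routes uncertain cases to a human, and a halt flag diverts everything to an alternative manual process. The names and threshold are illustrative assumptions:

```python
# Sketch of the safeguards described above: human intervention on
# low-confidence outputs, plus a complete halt with a manual fallback.
HALTED = False           # flipped if a complete halt of operations is required
CONFIDENCE_FLOOR = 0.8   # below this, route the case to a human reviewer

def decide(case, model, human_review, manual_process):
    if HALTED:
        # Alternative process: the automated system is out of the loop.
        return manual_process(case)
    prediction, confidence = model(case)
    if confidence < CONFIDENCE_FLOOR:
        # Human intervention mechanism for uncertain cases.
        return human_review(case, prediction)
    return prediction

# Usage: decide(case, model=my_model, human_review=queue_for_review,
#               manual_process=paper_process), where the callables are
# whatever the deploying organization already operates.
```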


Why it matters


Automated algorithmic decisions can reflect and amplify undesirable patterns in the data they are trained on. As well, issues with the system can arise that only become apparent after the system is deployed.


Therefore, despite our best efforts, unexpected outcomes and impacts need to be anticipated. Accordingly, systems will require ongoing monitoring and mitigation planning to ensure that if the algorithmic system is making unintended decisions, a human can adapt, correct, or improve the system.
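
For example, ongoing monitoring could be as simple as comparing live outcome rates against those observed during testing and alerting a human when they drift. The sketch below assumes hypothetical rates and tolerances:

```python
# Sketch of ongoing monitoring: alert a human when the live approval rate
# drifts beyond a tolerance from the rate observed during testing.
def check_outcome_drift(live_decisions, expected_approval_rate=0.55,
                        tolerance=0.10):
    """live_decisions: list of booleans (True = approved)."""
    if not live_decisions:
        return None
    live_rate = sum(live_decisions) / len(live_decisions)
    drifted = abs(live_rate - expected_approval_rate) > tolerance
    if drifted:
        print(f"ALERT: approval rate {live_rate:.2f} is outside the expected "
              "range; trigger human review and the mitigation plan")
    return drifted
```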


4. Accountable and responsible


Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their ongoing proper functioning, in line with the other principles. Human accountability and decision-making authority over AI systems within an organization need to be clearly identified, appropriately distributed, and actively maintained throughout the system's life cycle. An organizational culture of shared ethical responsibility for the system must also be promoted.


Where AI is used to make or assist with decisions, a public and accessible process for redress should be designed, developed, and implemented with input from a multidisciplinary team and affected stakeholders. Algorithmic systems should also be regularly peer-reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time.
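
One common audit technique, shown here as a hedged sketch, is the "four-fifths" disparate-impact check: each group's selection rate should be at least 80% of the highest group's rate. The threshold and data shape are assumptions for illustration, not Ontario's specification:

```python
# Sketch of a periodic bias audit using the four-fifths disparate-impact
# check on per-group selection rates.
def disparate_impact_audit(rates_by_group: dict, threshold=0.8):
    """rates_by_group: e.g. {"group_a": 0.62, "group_b": 0.45}.
    Returns whether each group passes the four-fifths check."""
    top = max(rates_by_group.values())
    return {group: rate / top >= threshold
            for group, rate in rates_by_group.items()}

# Example: group_b's ratio is 0.45 / 0.62 ≈ 0.73, failing the 0.8 check.
print(disparate_impact_audit({"group_a": 0.62, "group_b": 0.45}))
```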


Why it matters


Identifying and appropriately distributing accountability within an organization helps ensure continuous human oversight over the system is properly maintained. In addition to clear roles related to accountability, it is also important to promote an organizational culture around shared ethical responsibilities. This helps prevent gaps and avoids the situation where ethical considerations are always viewed as someone else’s responsibility.


While our existing legal framework includes numerous traditional processes of redress related to governmental decision making, the complexity of AI systems can present unique challenges to those traditional processes. Input from a multidisciplinary team and affected stakeholders will help identify those issues in advance and design appropriate mechanisms to mitigate them.


Regular peer review of AI systems is also important. Issues around bias may not be evident when AI systems are initially designed or developed, so it's important to consider this requirement throughout the life cycle of the system.


5. Human centric


AI systems should be designed with a clearly articulated public benefit that considers those who interact with the system and those who are affected by it. These groups should be meaningfully engaged throughout the system's life cycle, to inform development and enhance operations. An approach to problem solving that embraces human-centred design is strongly encouraged.


Why it matters


Clearly articulating a public benefit is an important step that enables meaningful dialogue early with affected groups and allows for measurement of success later.


Placing the focus on those who interact with the system and those who are affected by it ensures that the outcomes do not cause adverse effects in the process of creating additional efficiencies.


Developing algorithmic systems that incorporate human-centred design will ensure better societal and economic outcomes from data-enhanced technologies.


6. Sensible and appropriate


Every data-enhanced system exists not only within its use case, but also within a particular sector of society and a broader context that can feel its impact. Data-enhanced technologies should be designed with consideration of how they may apply to a particular sector along with awareness of the broader context. This context could include relevant social or discriminatory impacts.


Why it matters


Algorithmic systems and machine learning applications will differ by sector. As a result, while the above principles are a good starting point for developing ethical data-enhanced technologies, it is important that additional consideration be given to the specific sectors to which the algorithm is applied.


Conclusion


The concept of manufacturers' integrity and self-regulation offers a compelling model for AI regulation. While governmental regulations provide a legal framework, self-regulation acts as an additional layer of accountability, allowing manufacturers to demonstrate their commitment to responsible practices and build credibility among stakeholders. The EU example highlights the significance of manufacturers' integrity: businesses bear the responsibility of ensuring the safety of their products and their adherence to standards. This model requires manufacturers to establish internal governance, programs, systems, and processes to regulate themselves, identifying and taking ownership of their obligations while making and keeping promises.


Furthermore, the proposed principles of ethical AI use for Ontario shed light on the importance of transparent and explainable systems, good and fair practices, safety and security measures, accountability and responsibility, human-centric design, and sensible and appropriate application of AI technologies. These principles aim to ensure that AI systems respect the rule of law, human rights, civil liberties, and democratic values while incorporating meaningful engagement with those affected by the systems. By adhering to these principles, organizations can foster trust, avoid adverse impacts, and align AI technologies with ethical considerations and societal values.


As governments and organizations worldwide grapple with the regulation of AI, the adoption of manufacturers' integrity and self-regulation, coupled with the principles of ethical AI use, can serve as a comprehensive framework for responsible AI practice and use. It is imperative for stakeholders to collaborate, continuously assess risks, promote accountability, and prioritize human-centric design to mitigate the challenges and maximize the potential benefits of AI technologies. By doing so, we can shape a future where AI is harnessed ethically, transparently, and in alignment with the values and aspirations of society.

