
Toasters on Trial: The Slippery Slope of Crediting AI for Discoveries

Raimund Laqua

A thought-provoking statement made recently suggests that artificial intelligence (AI) should receive recognition for the discoveries it helps to facilitate. The comment has sparked an interesting debate, one that highlights a significant contradiction in how we view technology's role in society.


On one side, many argue that technology, including AI, should not be held responsible for its consequences or for how humans choose to use it. This perspective is often illustrated by the "gun metaphor": guns themselves do not kill people; people kill people using guns. On this view, tools and technology are morally neutral, and responsibility for their use lies solely with human users.


On the other side, some now propose that AI should be credited for the discoveries it contributes to, particularly when those discoveries have positive outcomes. This stance attributes to AI systems a level of agency and merit that goes beyond viewing them as mere tools.


This raises an important question: can we logically hold both positions at once? If we accept that AI deserves credit for positive outcomes, consistency demands that we also hold it accountable for negative ones. Doing so would effectively personify technology, turning our machines into entities capable of both heroic and criminal acts.


Taking this logic to its extreme, we might find ourselves assigning blame to everyday appliances for their perceived failures. Before the end of this decade, we could see people trying to sue their toasters for burning their bread.


This scenario, while seemingly absurd, illustrates the potential pitfalls of attributing too much agency to our technological creations. It underscores the need for a nuanced and consistent approach to how we view the role of AI and other technologies in our society, particularly as they become increasingly sophisticated and integrated into our daily lives.


Recommendation: Establish an AI Ethics Committee


To get ahead of these issues, we recommend that organizations create a cross-functional AI Ethics Committee to oversee the ethical implications of AI use. This committee should:


  1. Evaluate AI projects and applications for potential ethical risks

  2. Develop and maintain ethical guidelines for AI development and deployment

  3. Provide guidance on complex AI-related ethical dilemmas

  4. Monitor emerging AI regulations and industry best practices

  5. Collaborate with legal and compliance teams to ensure AI use aligns with regulatory requirements

  6. Conduct regular audits of AI systems to identify and mitigate bias or other ethical concerns (a minimal sketch of one such check follows this list)

  7. Advise on transparency and explainability measures for AI-driven decisions

  8. Foster a culture of responsible AI use throughout the organization
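
Of these duties, item 6 is the most readily automated. As a minimal sketch, assuming Python and using demographic parity (the gap in positive-outcome rates between groups) as one of many possible fairness metrics, an audit script might flag a model whose decisions diverge across groups. The function, sample data, and 0.10 tolerance below are illustrative assumptions, not a prescribed standard:

```python
# Minimal sketch of one automated check an AI audit might run:
# demographic parity difference across groups in a model's decisions.
# All names, data, and thresholds here are illustrative assumptions.

def demographic_parity_difference(outcomes, groups):
    """Return the largest gap in positive-outcome rates across groups.

    outcomes: 0/1 model decisions (1 = favorable outcome)
    groups:   group labels, aligned element-wise with outcomes
    """
    counts = {}
    for outcome, group in zip(outcomes, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + outcome, total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)

if __name__ == "__main__":
    # Hypothetical loan-approval decisions for two applicant groups.
    decisions = [1, 1, 0, 1, 0, 0, 1, 0]
    applicant_groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = demographic_parity_difference(decisions, applicant_groups)
    print(f"Demographic parity difference: {gap:.2f}")
    if gap > 0.10:  # illustrative tolerance set by committee policy
        print("Gap exceeds tolerance - flag for committee review.")
```

No single metric captures fairness on its own; in practice the committee would pair such automated checks with qualitative review and documented remediation.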



 

Lean Compliance now provides an online program designed to teach decision-makers how to make ethical decisions related to AI. This advanced course integrates the PLUS model for ethical decision-making. You can learn more about this program here.






