Artificial intelligence (AI) is a rapidly developing technology with the potential to change the world in unprecedented ways. However, as its capabilities continue to expand, concerns are growing about the lack of responsibility and safety measures in its development and deployment. The Center for Humane Technology's Tristan Harris and Aza Raskin recently presented “The AI Dilemma,” a talk exploring the risks of uncontrolled AI and the need for its responsible use.
The parallels between the early days of social media and the development of AI are striking. Both technologies were created and scaled to the masses while everyone hoped for the best, with users becoming unwitting experiment subjects who consented to participate without fully understanding the risks. The consequences of AI, however, could be far more severe, because it can interact with its environment in unpredictable ways.
The risks of unchecked AI are vast. We are caught in an uncontrolled, self-reinforcing learning loop that is producing exponential capabilities with unmitigated risks. In many ways, this is a runaway race with no kill switch and no means of regulating outcomes to keep AI operating responsibly. This is a problem that we, as humans, have created, and one that we must address.
The AI Dilemma raises important questions that we must address. Where are the safeguards, the brakes, and the kill switch? Who is responsible for the “responsible” use of AI, and when does the science experiment stop, and responsible engineering begin? We must balance innovation with responsibility to ensure that AI is developed and used in ways that benefit society, not threaten it.
One step we can take is to reinsert the engineering method into the development of AI. This means having a process to weigh the pros and cons, balance the trade-offs, and prioritize the safety, health, and welfare of the public. It will require more engineers, along with other professionals, in the loop, advocating for and practicing responsible AI.
The consequences of unchecked AI are substantial, and we must take action now to mitigate these risks. The AI Dilemma is a call to action, urging us to reevaluate our approach to AI and to prioritize the development and deployment of responsible AI. By doing so, we can ensure that AI is a force for good, enhancing our lives rather than threatening them.
Instead of deploying science experiments to the public at scale, we need to build responsibly engineered solutions.