
BLOG POST

The Greatest AI Risk – AI Agency

When it comes to Artificial Intelligence, what worries many is not so much how smart it might become, but what it might do with the intelligence it learns. The “do” part is a function of its agency, and it is perhaps the greatest source of risk and concern facing today’s society.


AI Agency - The Power to Act in the World

Agency is the power to act in the world. In its narrow definition, agency is intentional but need not include moral considerations. However, having agency without moral capacity is a serious problem, and it is where applied ethics (AI Ethics) is needed.


Before we explore the topic of AI Agency, we need to consider the difference between autonomy and agency. Autonomy has more to do with the right to make decisions free from external control or unwarranted interference. Autonomy is the right of self-governance.


In this context, autonomous vehicles are better described as driving agents, as they act on behalf of the driver’s intentions. They do not have the right of self-governance, nor do they act on their own intentions. Nevertheless, when it comes to AI, “agency” and “autonomy” are often used interchangeably, frequently describing the aspirational goals of the creators rather than the AI’s actual capabilities.


Agency is what turns our possibilities into realities, and therein lies the rub.

Agency is what turns descriptions of our world into something we experience. Having smart people is important, but it is what is done with their knowledge that concerns us more. It is the application of knowledge (engineering) that builds our realities.


Without agency:


  • Intelligence is just advice,

  • Information is just data, and

  • Knowledge is just a database.

Having smarter machines is not the problem. It's a means to an end. The question is – to what end?


For humans, informed by knowledge of our past and our future desires, agency turns possibilities into present-day realities. What those realities become depends very much (but not entirely) on one’s intentions.


Agency provides the means to move from a future that merely unfolds – governed by the deterministic laws of nature and stochastic uncertainty – to a future that is chosen through the decisions and actions we take.


Agency gives us the power to choose our future.

That’s why agency without good judgment is not desirable: it creates the opportunity for risk. When it comes to humans, we usually limit the amount of agency based on moral capacity and the level of accountability.


Consider our children: we expect them to behave morally, yet we do not hold them accountable in the same way we do adults. As such, we limit what they can do and the choices they can make.


When we are young, our foolish choices are tolerated and at times even encouraged to provide fodder for learning. As adults, however, foolishness is frowned upon in favour of wisdom, good judgment, and sound choices.


To act in the world brings with it the responsibility to decide between bad and good, useless and useful, and what can harm and what can heal. Ethics provides the framework for these decisions to be made. In many ways, applied ethics is the application of wisdom to the domain of agency.


If AI is to have agency, it must have the capacity to make moral decisions. This requires, at a minimum, ethical subroutines – something that is not currently available. Even if it were, this would need to be accompanied by accountability, and at present we do not attribute accountability to machines.


Agency always brings with it a measure of culpability.

Agency and accountability are two sides of the same coin. Agentic AI must be answerable for the decisions it makes. This in turn will require more than just an explanation of what it has done. AI will need to be held accountable.


Just as humans are more than an embodiment of intelligence, we need another name for artificial intelligence that has agency, is equipped with ethical subroutines, and is accountable for its actions.


We will need different categories to distinguish between each AI capability:


  • AI Machines - AI systems without agency (advisory, decision support, analysis, etc.)

  • AI Agents - AI Machines with agency but without moral capacity and limited culpability

  • AI Ethical Agents - AI Agents with moral capacity and full culpability
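The three categories above could be sketched as a minimal type hierarchy. This is purely illustrative – the field names (has_agency, moral_capacity, culpability) and the gating rule are assumptions made for the sketch, not an established standard:

```python
from dataclasses import dataclass

# Illustrative taxonomy of the three AI capability categories.
# Field names and values are assumptions for this sketch only.

@dataclass(frozen=True)
class AICategory:
    name: str
    has_agency: bool
    moral_capacity: bool
    culpability: str  # "none", "limited", or "full"

AI_MACHINE = AICategory("AI Machine", has_agency=False,
                        moral_capacity=False, culpability="none")
AI_AGENT = AICategory("AI Agent", has_agency=True,
                      moral_capacity=False, culpability="limited")
AI_ETHICAL_AGENT = AICategory("AI Ethical Agent", has_agency=True,
                              moral_capacity=True, culpability="full")

def may_act_unsupervised(category: AICategory) -> bool:
    """A hypothetical gate: only agents with both moral capacity
    and full culpability would qualify to act without oversight."""
    return (category.has_agency
            and category.moral_capacity
            and category.culpability == "full")
```

One design point the sketch makes concrete: agency, moral capacity, and culpability are independent dimensions, so a policy gate must check all three rather than inferring one from another.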


AI Machines can still have agency (self-referencing machines) even if they are unaware.

In theory, machines have a measure of agency to the degree they interact in the world. Machines can be designed to adapt to their environment based on pre-defined rules. However, when it comes to AI Machines the rules themselves can adapt.


These kinds of machines are self-referencing and are not impartial observers in the classical sense. The output generated by AI machines interferes with the future they are trying to represent, forming a feedback loop.


AI in this scenario is better described as an observer-participant, which gives it a greater measure of agency than classical machines. This is agency without purpose or intention, manifesting as a vicious or virtuous cycle toward some unknown end.
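The observer-participant loop described above can be illustrated with a toy simulation, assuming a simple self-reinforcing rule in which the model's published prediction feeds back into the very quantity it predicts. The feedback gain is an invented parameter for the sketch, not a claim about any real system:

```python
# Toy observer-participant loop: the model's prediction alters
# the quantity it is trying to predict. The feedback_gain value
# is invented for illustration.

def simulate(steps: int, feedback_gain: float = 0.5) -> list[float]:
    value = 1.0          # the quantity being observed/predicted
    history = [value]
    for _ in range(steps):
        prediction = value                          # naive model: predict last value
        value = value + feedback_gain * prediction  # prediction changes the world
        history.append(value)
    return history
```

With a positive gain the loop compounds on itself – a virtuous or vicious cycle depending on what the value represents – whereas with a gain of zero the model is an impartial observer and the quantity stays constant.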


Perhaps this is what is meant by autonomous AI: AI machines that no longer act on behalf of their creators, but instead act on their own toward some unknown goal. No wonder this is creating significant angst in the population at large. We have created an open-loop system with the capacity to act in the world and to decide, yet lacking moral capacity.


What should be done?

AI has other risks besides its capacity to act in the world and to decide. However, Agentic AI poses by far the greatest risk to society. Its capacity to act in the world challenges our traditional definitions of machine and human interactions.


Some of the risk factors already exist and others are still in our future. Nonetheless, guidelines and guardrails should be developed to properly regulate AI, proportionate to the level of risk it presents.


However, guardrails will not be enough.


Humans must act ethically during the design, build, and use of AI technologies. This means, among other things, learning how to make ethical decisions and holding ourselves to a higher standard. This is something professional engineers are already trained to do and why they need to be at the table.

