
Stopping AI from Lying

Recently, I asked Microsoft's Copilot to describe "Lean Compliance." I knew that the information about Lean Compliance in current foundation models was not up to date and would need to be merged with real-time information, which is what Copilot attempted to do.
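For the technically minded, here is a minimal sketch of what that merging, often called retrieval-augmented generation, can look like: fetch current text about the topic, then constrain the model to answer from it. Everything in the sketch (the toy corpus, the retrieve and build_prompt helpers) is a hypothetical illustration under my assumptions, not Copilot's actual implementation.

```python
# Toy corpus standing in for real-time retrieval; the "fact" is a placeholder.
CORPUS = {
    "example.com/lean-compliance/about": (
        "Lean Compliance was founded by <founder's name here>."
    ),
}

def retrieve(query: str) -> list[str]:
    """Naive keyword match standing in for a real search index."""
    words = [w.strip("?.,").lower() for w in query.split()]
    return [
        text
        for text in CORPUS.values()
        if any(w in text.lower() for w in words)
    ]

def build_prompt(question: str) -> str:
    """Combine retrieved snippets with the question, and tell the model
    to admit ignorance rather than invent an answer when sources are thin."""
    snippets = retrieve(question)
    context = "\n".join(snippets) if snippets else "NO SOURCES FOUND"
    return (
        "Answer using ONLY the sources below. If the sources do not "
        "contain the answer, say you do not know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    print(build_prompt("Who founded Lean Compliance?"))
```

The point of the sketch is the final instruction in the prompt: a grounded system should say it does not know rather than fill the gap, which is exactly where my session went wrong.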


However, what it came up with was a mix of accuracy and inaccuracy. It said someone else founded Lean Compliance rather than me. Instead of leaving that detail out when it didn't know it, it made one up.



I instructed Copilot to make the correction, which it did, at least within the context of my prompt session. It also apologized for the mistake.


While this is just one example, I know my experience with AI chat applications is not unique. Had I not known the information was incorrect, I might have used it in decision-making or passed the wrong information on to others.


Many are fond of attributing human qualities to AI, a practice called anthropomorphism. Instead of calling the output false and in need of correction, many will say the AI system hallucinated, as if that makes it better. And why did Copilot apologize?


This practice muddies the waters and makes it difficult to discuss machine features and properties, such as how to deal with incorrect output. But if we are going to anthropomorphize, why not go all the way and say:


AI lied.

We don't do this because it would apply a standard of morality to the AI system. We know that machines are not capable of being ethical. They have no ethical subroutines to discern right from wrong. That is a quality of humans, not machines.


That's why, when it comes to AI systems, we need to stop attributing human qualities to them if we hope to stop the lies and get on with the task of improving output quality.


 



