Trustworthiness Is Not a Realistic Goal for AI and Here’s Why
By: Alyssa Blackburn on September 10th, 2024


As someone who works closely with information management and AI, I'm often asked whether we should trust the outputs of generative AI. I've come to the conclusion that trustworthiness is not a realistic goal for AI-generated content. Instead, we should approach everything AI produces with a healthy sense of mistrust and apply critical analysis to its output.

The Importance of Mistrust and Critical Analysis 

When I say that we should approach AI-generated content with mistrust, I don't mean that AI shouldn't be managed ethically. It's entirely possible to have ethical AI that isn't necessarily trustworthy, and what we're looking for here is both: information that is accurate (whether generated by AI or not) and confidence that it has been produced in an ethical way. The focus should be on factors such as the quality and integrity of the data and the output, and on how we trace where that output was generated from.

 

The Subjectivity of Trust 

Trust is a subjective concept, which makes it difficult to regulate or measure. If trust is the outcome you're aiming for, how do you measure it? Either you trust something or you don't, and each person draws that line differently. Given that variability in how people define and perceive trust, it isn't a useful yardstick for evaluating AI.

 

Focusing on Quality, Integrity, and Accuracy 

Instead of focusing on trust, we should be emphasizing the quality, integrity, and accuracy of the data and output. These are the things that give us a greater sense of trust. It’s not a hard and fast measurement, but the better our quality and integrity processes, the more likely we can trust the outputs. We need to ask questions like:

  • Is the source data high quality?
  • Is the output high quality?
  • Does it have integrity?
  • Is it accurate?
  • Is it complete?

These are the factors that should be prioritized when managing and evaluating AI.
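One lightweight way to operationalize these questions is a simple checklist applied to each piece of AI-generated content before it is accepted into a system of record. The sketch below is purely illustrative: the field names and the all-checks-must-pass rule are my assumptions, not an AIIM standard or an AvePoint product feature.

```python
from dataclasses import dataclass

@dataclass
class OutputEvaluation:
    """Checklist for one piece of AI-generated content.

    Field names are illustrative assumptions, not a formal standard.
    """
    source_is_traceable: bool         # can we trace where the content came from?
    facts_verified: bool              # has accuracy been checked against sources?
    no_missing_sections: bool         # is the output complete?
    data_passed_quality_checks: bool  # was the underlying data high quality?

    def failed_checks(self) -> list[str]:
        # Return the names of any checks that did not pass.
        return [name for name, ok in vars(self).items() if not ok]

    def acceptable(self) -> bool:
        # "Acceptable" means every check passed -- a deliberately strict
        # bar, in keeping with starting from a position of mistrust.
        return not self.failed_checks()

# Example: an output with unverified facts fails the checklist.
draft = OutputEvaluation(
    source_is_traceable=True,
    facts_verified=False,
    no_missing_sections=True,
    data_passed_quality_checks=True,
)
print(draft.acceptable())     # False
print(draft.failed_checks())  # ['facts_verified']
```

The point of the strict default is that mistrust is the starting posture: content earns acceptance by passing every quality, integrity, and accuracy check, rather than being trusted until proven wrong.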

 

AIIM's Take on AI Trustworthiness

The Association for Intelligent Information Management (AIIM) takes the position that trustworthiness is not a reasonable expectation to set for AI. As information managers, our focus should be on ensuring the quality, integrity, and accuracy of the data and output.  

By prioritizing these factors and approaching AI-generated content with a sense of mistrust and critical analysis, we can work towards creating ethical AI that serves its intended purpose, even if it may not be entirely trustworthy in the traditional sense. 

 

About Alyssa Blackburn

Alyssa Blackburn is the Information Strategy Lead at AvePoint. With more than 15 years of experience in the information management industry, Alyssa has worked with both public and private sector organizations to deliver guidance for information management success in the digital age. She is responsible for the development of AvePoint’s information and records management solution, AvePoint Opus. A passionate information management professional, Alyssa is actively involved in the industry and is an in-demand speaker at conferences and industry events worldwide.