As a former Jeopardy Champion, I've experienced firsthand the importance of memorizing and recalling vast amounts of information. This skill set has not only served me well on the game show but has also informed my perspective on the relationship between human input and artificial intelligence (AI) in the realm of information management.
In 2011, I watched as IBM's Watson supercomputer competed against two of Jeopardy's most prominent champions, Ken Jennings and Brad Rutter. On one question, all three contestants, including Watson, responded incorrectly, with Watson giving the same wrong answer that one of the other contestants had already given. This moment highlighted a crucial fact: Watson was incapable of learning new things on the fly and was limited to the information provided to it as input.
This realization underscores a fundamental truth about AI: it is only as good as the human and/or digital input it receives. As organizations increasingly invest in AI-based technologies, it is essential to recognize that the technology itself is imperfect, just like humans. To ensure the success of AI system implementations, we must prioritize iteration, testing, and quality control, acknowledging that the value of information and data is ultimately derived from human interpretation.
There is often speculation about AI taking over jobs, but I believe that most AI technologies currently serve as assistive tools rather than complete replacements for human roles. Secretary of Transportation Pete Buttigieg, one of the most articulate speakers and thinkers in government, recently addressed this topic at a transportation technology conference. He discussed assistive technologies used in cars, emphasizing that while these AI-based technologies can aid drivers, they do not eliminate the need for human awareness and control. Just because a car can drive itself doesn’t mean one should take a nap while in the driver’s seat.
The same principle applies to customer service departments that increasingly rely on bots to respond to client challenges. As a user and observer of these technologies, I have developed a love-hate relationship with bots. When customers find themselves stuck in a loop with a bot that provides no meaningful assistance, it becomes clear that the issue lies not just with the technology itself, but with the methodology, development, and project management processes behind it. What works well for 80% of problems becomes increasingly problematic for the remaining 20% that are far more complex to adjudicate.
In my experience, the development process is even more critical than the resulting technology. Focusing on methodology, project management, the system development life cycle, quality control, and repeated testing is essential for creating better products and avoiding complacency and mediocrity.
However, with the rise of generative AI tools, the responsibility for ensuring quality and avoiding complacency falls not only on the developers but also on the consumers. Information management practitioners in government agencies and the private sector must consider their level of responsibility in this regard.
To effectively manage expectations around AI, it is crucial to communicate that these technologies are assistive in nature. Just as a spell checker cannot compensate for a lack of contextual understanding (for example, the difference between their, there, and they're, each spelled correctly but meaning something different depending on the context), AI tools should be viewed as aids rather than complete solutions. Voice recognition technology, likewise, requires proofreading and refinement to ensure accuracy.
When selling the value of AI to stakeholders, it is essential to focus on the operational benefits rather than just the technical details. By understanding the challenges and goals of different departments within an organization, information management professionals can demonstrate how AI can address specific pain points and support key strategies.
Navigating the intersection of human input and technology in the realm of AI and information management requires a nuanced approach. By recognizing the limitations of AI, prioritizing robust development processes, managing expectations, and effectively communicating the value of these technologies, we can harness the power of AI as an assistive tool while maintaining the critical role of human insight and interpretation. As we move forward in this rapidly evolving landscape, it is essential not to lose sight of the fact that AI is only as good as the human input that shapes it.
This blog post is based on an original AIIM OnAir podcast recorded on March 5, 2024. When recording podcasts, AIIM uses AI-enabled transcription in Zoom. We then use that transcription as part of a prompt with Claude Pro, Anthropic’s AI assistant. AIIM staff (aka humans) then edit the output from Claude for accuracy, completeness, and tone. In this way, we use AI to increase the accessibility of our podcast and extend the value of great content.