Governments around the world have started the arduous process of developing regulations and standards for artificial intelligence. In June 2023, AIIM formally responded to a request for comment from the U.S. National Telecommunications and Information Administration (NTIA) on AI accountability. According to the NTIA website, more than 1,400 responses were submitted.
Information management is not an insular profession; it is most successful in an organization when it focuses outward on the needs of stakeholders and business outcomes. As such, AIIM's leadership believes we must take a strong, active stance on how AI tools use and produce information.
AIIM believes it's vital that regulators develop flexible and practical guardrails for how information is treated during AI development and use. Guardrails will empower innovation, boost adoption, and ensure accountability.
The key tenets of AIIM’s position are:
- Not all AI is equal, and public policy should reflect the different levels of risk. Unlike generative AI, some AI is simple, easy to comprehend, and can be audited to determine how the technology made its decisions. Regulators should classify AI into different categories and establish policy accordingly.
- A flexible, universal framework is needed. Stakeholders need a framework to better understand their obligations and ensure compliance. A framework would also encourage further innovation and adoption of AI.
- Accuracy is key to advancing AI accountability. “Trustworthiness” of AI output is unattainable; accuracy is a more worthwhile and attainable ambition. It establishes the credibility, currency, completeness, and chain of control of the information.
- Transparency will ensure accountability. AIIM supports the principle in the U.S. Administration’s AI Bill of Rights that consumers must know when, how, and why AI is being used.
- Responsibility is shared. The developers and organizations who use AI share responsibility and liability for AI output.
- Mandatory auditing of some AI may be implausible. Narrow AI tools are auditable and largely defensible, but generative AI tools are far harder to audit, making mandatory auditing difficult and potentially impossible.
- The volume and quality of AI output should influence regulations. When considering recordkeeping obligations, regulators should keep in mind the enormous volume of data AI can produce and that the data quality may be subpar and not worth retaining.
The reality is that the pace of AI development has exceeded the pace at which regulations and standards are being developed. As AI adoption and use increase within our own organizations, it is our responsibility as information management professionals to protect our organizations and stakeholders.
AIIM has published a complimentary version of its letter to NTIA and encourages information management professionals to use the letter as a tool to help guide conversations, decisions, and policy about using AI in their own organizations.
We thank the authors and editors of AIIM’s letter to NTIA:
Authors
- Jed Cawthorne, MBA, CIP, IG, Principal Evangelist, Shinydocs Corporation
- Tori Miller Liu, MBA, FASAE, CAE, President & CEO, AIIM
- Jennifer Ortega, Ulman Public Policy
- Alan Pelz-Sharpe, Founder, Deep Analysis
Editors
- Ron Cameron, CEO, KnowledgeLake
- Jason Cassidy, CEO, Shinydocs Corporation
- Rikkert Engels, CEO and Founder, Xillio
- Karen Hobert, Future of Work Thought Leadership & Research, Cisco
- Kramer Reeves, Executive Vice President, Work-Relay
AIIM looks forward to continuing to participate in regulatory conversations about information management in the age of AI.