This is the third part of a 3-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon. You can also check out Part 1 and Part 2 from this series.

Part 3: Regulatory Efforts in the U.S. Present a Bleak Perspective

In the United States, governmental efforts to examine AI have made far less progress than those in the E.U. The most recent effort at the federal level, the Algorithmic Accountability Act of 2019 (S.1108), sponsored by Senator Ron Wyden (D-OR) and Senator Cory Booker (D-NJ) (with a parallel House bill, H.R.2231, sponsored by Representative Yvette Clarke (D-NY)), seeks "To direct the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments." The proposed law would require the Federal Trade Commission to enact regulations within the next two years requiring companies that make over $50 million per year or collect data on more than 1 million people to perform an "automated decision system impact assessment." However, unlike the GDPR's transparency requirements (no matter how debatable), the proposed bill would not require those assessments to be made public. Despite this lack of a transparency provision, the bill was quickly endorsed by a number of civil rights groups.
This is the second part of a 3-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon. Part 1 is available here.

Part 2: The Ethical and Legal Challenges of AI

Bias in AI technology, and its potentially unintended consequences, is gaining the attention of policymakers, technology companies, and civil liberties groups. In a recent article based upon an ABA Business Law Section panel, "Examining Technology Bias: Do Algorithms Introduce Ethical & Legal Challenges?", the panelist-authors noted that:
This is the first part of a 3-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon.

Part 1: Bad Things Can Come from Non-neutral Technology

AI technology is becoming pervasive, impacting virtually every facet of our lives. A recent Deloitte report estimates that shipments of devices with embedded AI will increase from 79 million in 2018 to 1.2 billion by 2022: "Increasingly, machines will learn from experiences, adapt to changing situations, and predict outcomes…Some will infer users' needs and desires and even collaborate with other devices by exchanging information, distributing tasks, and coordinating their actions."
No matter where you turn, it seems you can't help but run into discussion about Artificial Intelligence being the future of Intelligent Information Management. In fact, when we surveyed the AIIM Community about it, we found that 81% of organizations reported that Deep Learning and Machine Learning are key to their future technology and business planning.
There's a lot of excitement about Artificial Intelligence and business automation these days, and for good reason. Developments in AI — and its sidekicks "Deep Learning" and "Machine Learning" — bring the promise of transforming work as we know it. Those transformed work processes will operate in a completely different way: fully automated and autonomous, with smart machines doing the work. The vision is to free humans from performing mundane and repetitive business tasks and assist them with better access to better information to better serve customers and the business.
While Artificial Intelligence (AI) has the potential to be a very powerful tool in information management, the topic is so wrapped up in hyperbole and confusion that it can be challenging to cut through all the noise, causing many to fear the complexity of AI. As you may know, AIIM recently launched a new training course titled Practical AI for the Information Professional. The challenge presented to my colleague Kashyap and me was to take a very complex and hyped topic and make it understandable and relevant to the real-world needs of the business -- cutting through the hype, demystifying the technology, and providing sound advice and guidance. What we found in our research was that AI is not as complicated or daunting as most believe. In our dozens of conversations with folks from organizations of all sizes, we discovered three major misunderstandings about the use and value of AI.