The AIIM Blog
Keep your finger on the pulse of Intelligent Information Management with industry news, trends, and best practices.
Artificial Intelligence (AI) | Machine Learning
This is the third part of a 3-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon. You can also check out Part 1 and Part 2 from this series.
Part 3: Regulatory Efforts in the U.S. Present a Bleak Perspective
In the United States, governmental efforts to examine AI have made far less progress than in the E.U. The most recent effort at the federal level, the Algorithmic Accountability Act of 2019 (S.1108), sponsored by Senator Ron Wyden (D-OR) and Senator Cory Booker (D-NJ) (with a parallel House bill, H.R.2231, sponsored by Representative Yvette Clarke (D-NY)), seeks "To direct the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments." The proposed law would require the Federal Trade Commission to enact regulations within two years requiring companies that make over $50 million per year or collect data on more than 1 million people to perform an "automated decision system impact assessment." However, unlike the GDPR's transparency requirements (no matter how debatable), the proposed bill would not require those assessments to be made public. Despite this lack of a transparency provision, the bill was quickly endorsed by a number of civil rights groups.
Artificial Intelligence (AI) | Machine Learning
This is the second part of a 3-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon. You can also check out Part 1 and Part 3 from this series.
Part 2: The Ethical and Legal Challenges of AI
AI technology bias and its potentially unintended consequences are gaining the attention of policymakers, technology companies, and civil liberties groups. In a recent article based upon an ABA Business Law Section panel, "Examining Technology Bias: Do Algorithms Introduce Ethical & Legal Challenges?", the panelist-authors noted that:
Artificial Intelligence (AI) | Machine Learning
This is the first part of a 3-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon. You can also check out Part 2 and Part 3 from this series.
Part 1: Bad Things Can Come from Non-neutral Technology
AI technology is becoming pervasive, impacting virtually every facet of our lives. A recent Deloitte report estimates that shipments of devices with embedded AI will increase from 79 million in 2018 to 1.2 billion by 2022: "Increasingly, machines will learn from experiences, adapt to changing situations, and predict outcomes…Some will infer users' needs and desires and even collaborate with other devices by exchanging information, distributing tasks, and coordinating their actions."
Artificial Intelligence (AI) | Information Security | Privacy
According to a 2019 IDC study, spending on Artificial Intelligence (AI) is estimated to reach $35.8 billion in 2019 and is expected to more than double to $79.2 billion by 2022, representing a compound annual growth rate of 38% for the period 2018-2022. The economic benefits and utility of AI technologies are clear and compelling. No doubt, applications of AI may address some of the most vexing social challenges such as health, the environment, economic empowerment, education, and infrastructure. At the same time, as AI technologies become more pervasive, they may be misused and, in the absence of increased transparency and proactive disclosures, create ethical and legal gaps. Increased regulation may be the only way to address such gaps.
AIIM on Air | Artificial Intelligence (AI)
When I was a kid in grade school, I always hated homework because it often stood in the way of going outside to play with my friends. I can remember joking around with them and saying that we needed to build a robot to do our homework for us. That way, we could spend our after-school time riding bikes and playing together.
Seven (yes, seven!) years ago, AIIM published “The Big Data Balancing Act - Too much yin and not enough yang?” The author of the report was none other than Nuxeo’s David Jones, who worked as a business analyst for AIIM at the time.