The AIIM Blog

Keep your finger on the pulse of Intelligent Information Management with industry news, trends, and best practices.

Andrew Pery

Andrew Pery is a marketing executive with over 25 years of experience in the high technology sector, focusing on content management and business process automation. Currently, Andrew is CMO of Top Image Systems. Andrew holds a Master of Laws degree with Distinction from Northwestern University and is a Certified Information Privacy Professional (CIPP/C) and a Certified Information Professional (CIP/AIIM).

Blog Feature

Artificial Intelligence (AI)  |  Privacy

Could a Mobile App Help Contain COVID-19? Balancing Privacy Rights & Public Interest

As the COVID-19 pandemic continues to accelerate, there are some innovative efforts to minimize its impact. In one such approach, a multidisciplinary group of computer scientists, mathematicians, and epidemiologists at the Big Data Institute at Oxford University has developed a mathematical model, instantiated in a mobile application, that performs contact tracing. Those involved in the project believe it is "…possible to stop the epidemic…if contact tracing is sufficiently fast, sufficiently efficient, and happens at scale." Typically, contact tracing is the most effective way to contain an outbreak. However, with a virus like COVID-19, which is predominantly transmitted by asymptomatic patients, "classical contact tracing will not be enough to achieve the speed and efficiency needed, but it could be achieved by a contact tracing mobile app if used by a sufficiently large proportion of the population."
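The excerpt describes the mechanism only at a high level, so here is a minimal, hypothetical sketch of how an app-based tracer might log anonymous proximity events and work out which recent contacts to notify once a user reports a positive test. It illustrates the general idea only; it is not the Oxford team's model or any deployed tracing protocol, and the class name, identifiers, and 14-day lookback window are assumptions made for the example.

```python
# Hypothetical sketch of app-based contact tracing; not the Oxford model
# or any real protocol. Names, identifiers, and the 14-day window are assumed.
from collections import defaultdict
from datetime import datetime, timedelta


class ContactTracer:
    """Records anonymous proximity events and finds recent contacts of a confirmed case."""

    def __init__(self, lookback_days: int = 14):
        self.lookback = timedelta(days=lookback_days)
        # anonymous device id -> list of (other anonymous id, timestamp) pairs
        self.contacts = defaultdict(list)

    def record_proximity(self, id_a: str, id_b: str, when: datetime) -> None:
        """Log that two devices were near each other at a given time."""
        self.contacts[id_a].append((id_b, when))
        self.contacts[id_b].append((id_a, when))

    def contacts_to_notify(self, case_id: str, diagnosed_at: datetime) -> set:
        """Return anonymous ids seen within the lookback window before diagnosis."""
        cutoff = diagnosed_at - self.lookback
        return {other for other, when in self.contacts[case_id] if when >= cutoff}


if __name__ == "__main__":
    tracer = ContactTracer()
    now = datetime(2020, 4, 1, 12, 0)
    tracer.record_proximity("anon-42", "anon-77", now - timedelta(days=3))
    tracer.record_proximity("anon-42", "anon-99", now - timedelta(days=20))  # outside window
    print(tracer.contacts_to_notify("anon-42", diagnosed_at=now))  # {'anon-77'}
```

The speed advantage the researchers point to comes from this kind of automation: looking up recorded contacts is immediate, whereas classical interview-based tracing can take days.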

Read More

Blog Feature

Artificial Intelligence (AI)  |  Machine Learning

Ethical Use of Data for Training Machine Learning Technology - Part 3

This is the third part of a 3-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon. You can also check out Part 1 and Part 2 from this series.

Part 3: Regulatory Efforts in the U.S. Present a Bleak Perspective

In the United States, governmental efforts to examine AI have made far less progress than those in the E.U. The most recent effort at the federal level, the Algorithmic Accountability Act of 2019 (S.1108), sponsored by Senator Ron Wyden (D-OR) and Senator Cory Booker (D-NJ) (with a parallel House bill, H.R.2231, sponsored by Representative Yvette Clarke (D-NY)), seeks "To direct the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments." The proposed law would require the Federal Trade Commission to enact regulations within two years requiring companies that make over $50 million per year or collect data on more than 1 million people to perform an "automated decision system impact assessment." However, unlike the GDPR's transparency requirements (however debatable), the proposed bill would not require those assessments to be made public. Despite this lack of a transparency provision, the bill was quickly endorsed by a number of civil rights groups.

Read More

Blog Feature

Artificial Intelligence (AI)  |  Machine Learning

Ethical Use of Data for Training Machine Learning Technology - Part 2

This is the second part of a 3-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon. You can also check out Part 1 and Part 3 from this series.

Part 2: The Ethical and Legal Challenges of AI

AI technology bias and its potentially unintended consequences are gaining the attention of policymakers, technology companies, and civil liberties groups. In a recent article based upon an ABA Business Law Section panel, "Examining Technology Bias: Do Algorithms Introduce Ethical & Legal Challenges?", the panelist-authors noted that:

Read More

Blog Feature

Artificial Intelligence (AI)  |  Machine Learning

Ethical Use of Data for Training Machine Learning Technology - Part 1

This is the first part of a 3-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon. You can also check out Part 2 and Part 3 from this series.

Part 1: Bad Things Can Come from Non-neutral Technology

AI technology is becoming pervasive, impacting virtually every facet of our lives. A recent Deloitte report estimates that shipments of devices with embedded AI will increase from 79 million in 2018 to 1.2 billion by 2022: "Increasingly, machines will learn from experiences, adapt to changing situations, and predict outcomes…Some will infer users' needs and desires and even collaborate with other devices by exchanging information, distributing tasks, and coordinating their actions."

Read More

Blog Feature

Artificial Intelligence (AI)  |  Information Security  |  Privacy

Regulation of AI-Based Applications: The Inevitable New Frontier

According to a 2019 IDC study, spending on Artificial Intelligence (AI) is estimated to reach $35.8 billion in 2019 and is expected to double to $79.2 billion by 2022, an annual growth rate of 38% over the 2018-2022 period. The economic benefits and utility of AI technologies are clear and compelling. No doubt, applications of AI may address some of our most vexing social challenges, such as health, the environment, economic empowerment, education, and infrastructure. At the same time, as AI technologies become more pervasive, they may be misused and, in the absence of increased transparency and proactive disclosures, create ethical and legal gaps. Increased regulation may be the only way to address such gaps.

Read More

Blog Feature

GDPR  |  Information Security  |  Privacy

Mitigating Third Party Risks Under GDPR

One of the most vexing problems for organizations is mitigating GDPR compliance risk when dealing with third parties, particularly the nature and extent of the obligations between data controllers and processors. By virtue of the GDPR's accountability principle, organizations are required to adhere to the six fundamental principles safeguarding privacy rights, which govern the collection, processing, and disposition of personally identifiable information. These obligations extend beyond the walls of an organization to third parties that process personally identifiable information. The GDPR also defines processing broadly and imposes stringent requirements on organizations that engage third parties to process personally identifiable information.

Read More