Growing up, my parents taught me that there are some questions that aren’t appropriate to ask. Generally, it’s safe to avoid asking people their age, their salary, their weight, their politics, etc. Some questions can make the people being asked feel uncomfortable and so should be avoided.
As the COVID-19 pandemic continues to accelerate, there are some innovative efforts to minimize its impact. In one such approach, a multidisciplinary group of computer scientists, mathematicians, and epidemiologists at the Big Data Institute at Oxford University has developed a mathematical model instantiated in a mobile contact-tracing application. Those involved in the project believe it is "…possible to stop the epidemic…if contact tracing is sufficiently fast, sufficiently efficient, and happens at scale." Contact tracing is typically the most effective way to contain an outbreak. However, with a virus like COVID-19, which is predominantly transmitted by asymptomatic patients, "classical contact tracing will not be enough to achieve the speed and efficiency needed, but it could be achieved by a contact tracing mobile app if used by a sufficiently large proportion of the population."
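The intuition behind that claim can be illustrated with a toy back-of-the-envelope calculation (our own simplification for illustration, not the Oxford group's actual epidemiological model): an outbreak shrinks once the effective reproduction number falls below 1, and app-based tracing can only avert a transmission when both the infector and the contact have the app installed.

```python
def effective_r(r0: float, app_uptake: float, trace_success: float) -> float:
    """Toy estimate of the effective reproduction number under app-based tracing.

    Assumptions (ours, for illustration only):
    - a transmission is averted only if BOTH infector and contact run the app
      (probability app_uptake ** 2) AND the trace succeeds in time;
    - untraced contacts transmit as usual.
    """
    p_averted = app_uptake ** 2 * trace_success
    return r0 * (1 - p_averted)


if __name__ == "__main__":
    R0 = 3.0  # assumed basic reproduction number, chosen for illustration
    for uptake in (0.3, 0.5, 0.7, 0.9):
        r_eff = effective_r(R0, uptake, trace_success=0.9)
        status = "shrinks" if r_eff < 1 else "grows"
        print(f"uptake={uptake:.0%}: R_eff={r_eff:.2f} -> outbreak {status}")
```

Under these assumed numbers, only very high uptake pushes the effective reproduction number below 1, which is consistent with the article's point that the app must be "used by a sufficiently large proportion of the population."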
This is the third part of a 3-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon. You can also check out Part 1 and Part 2 from this series. Part 3: Regulatory Efforts in the U.S. Present a Bleak Perspective In the United States, governmental efforts to examine AI have made far less progress than in the E.U. The most recent effort at the federal level, the Algorithmic Accountability Act of 2019 (S.1108), sponsored by Senator Ron Wyden (D-OR) and Senator Cory Booker (D-NJ) (with a parallel House bill, H.R.2231, sponsored by Representative Yvette Clarke (D-NY)), seeks "To direct the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments." The proposed law would require the Federal Trade Commission to enact regulations within the next two years requiring companies that make over $50 million per year or collect data on more than 1 million people to perform an "automated decision system impact assessment." However, unlike the GDPR's transparency requirements (no matter how debatable), the proposed bill would not require those assessments to be made public. Despite this lack of a transparency provision, the bill was quickly endorsed by a number of civil rights groups.
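For concreteness, the bill's two coverage thresholds described above can be sketched as a simple predicate (a hypothetical illustration of the tests named in the text, not the statute's actual definitions and not legal advice):

```python
def covered_entity(annual_revenue_usd: float, people_with_data: int) -> bool:
    """Rough sketch of the Algorithmic Accountability Act coverage tests as
    summarized above: revenue over $50 million per year, OR data collected
    on more than 1 million people. (Illustrative only.)"""
    return annual_revenue_usd > 50_000_000 or people_with_data > 1_000_000
```

A company meeting either test alone would fall within scope, so a small startup holding data on millions of users would be covered despite modest revenue.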
This is the second part of a 3-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon. You can also check out Part 1 and Part 3 from this series. Part 2: The Ethical and Legal Challenges of AI AI technology bias and its potentially unintended consequences are gaining the attention of policymakers, technology companies, and civil liberties groups. In a recent article based upon an ABA Business Law Section panel, "Examining Technology Bias: Do Algorithms Introduce Ethical & Legal Challenges?", the panelist-authors noted that:
This is the first part of a 3-part series on the Ethical Use of Data for Training Machine Learning Technology by guest authors Andrew Pery and Michael Simon. You can also check out Part 2 and Part 3 from this series. Part 1: Bad Things Can Come from Non-neutral Technology AI technology is becoming pervasive, impacting virtually every facet of our lives. A recent Deloitte report estimates that shipments of devices with embedded AI will increase from 79 million in 2018 to 1.2 billion by 2022: "Increasingly, machines will learn from experiences, adapt to changing situations, and predict outcomes…Some will infer users' needs and desires and even collaborate with other devices by exchanging information, distributing tasks, and coordinating their actions."
According to a 2019 IDC study, spending on Artificial Intelligence (AI) is estimated to reach $35.8 billion in 2019 and is expected to more than double to $79.2 billion by 2022, representing an annual growth rate of 38% for the period 2018-2022. The economic benefits and utility of AI technologies are clear and compelling. No doubt, applications of AI may address some of the most vexing social challenges, such as health, the environment, economic empowerment, education, and infrastructure. At the same time, as AI technologies become more pervasive, they may be misused and, in the absence of increased transparency and proactive disclosures, create ethical and legal gaps. Increased regulation may be the only way to address such gaps.
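As a quick arithmetic aside (our own calculation, not part of the IDC study), the standard compound-annual-growth-rate formula shows roughly what per-year growth the 2019 and 2022 figures imply:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that carries
    start_value to end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1


if __name__ == "__main__":
    # $35.8B (2019) -> $79.2B (2022) implies roughly 30% per year;
    # the study's 38% figure is quoted over the longer 2018-2022 window.
    rate = cagr(35.8, 79.2, 3)
    print(f"implied CAGR 2019-2022: {rate:.1%}")
```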