Strengthening Privacy Rights with Privacy Enhancing Technologies

By: Andrew Pery on November 5th, 2018




More rigorous privacy regulations, such as the EU GDPR and US initiatives like the recently enacted California Consumer Privacy Act, impose higher standards on data controllers and processors to safeguard privacy rights, including data subject consent management, accommodating data subject requests, data portability, and more onerous accountability obligations for controllers and processors.

Moreover, momentum seems to be developing for a more comprehensive US federal privacy initiative consistent with the GDPR. The current US data privacy regime is fragmented and sectoral in nature, and is perceived as an impediment to competing in the digital economy. Congress appears to be responding to these challenges: several proposed legislative initiatives are under consideration, in particular:

  • The CONSENT Act, which aims to “establish privacy protections for customers of online edge providers,” including a requirement that edge providers notify customers about the collection and use of “sensitive customer proprietary information”; and
  • The Social Media Privacy Protection and Consumer Rights Act, which “would allow users to opt out of tracking, simplify legalese, and require platforms to notify users of a data breach within 72 hours.”

These initiatives are also endorsed by the technology sector, including Google, Facebook, and Microsoft. Google, in particular, has recently published a Framework for Responsible Data Protection Regulation that espouses broader data subject rights, enhanced transparency and accountability, and privacy by design embedded in software so that data subjects can exercise greater control over their privacy settings.




Organizations' sense of urgency to implement technological and organizational measures that mitigate compliance risk is accelerating. A recent AIIM survey found that only 30% of organizations were fully compliant with the GDPR before it came into force; after the deadline, 50% of the same organizations indicated that they are 75% compliant.

The survey surfaced several business-critical areas where potential compliance risks are high:

  • Discoverability of personal information across distributed file shares, email systems, and content repositories (a minimal discovery sketch follows this list);
  • Information classification that reliably identifies personally identifiable information;
  • Inability to respond to data subject requests in a timely manner, consistent with the GDPR requirement to respond within 30 days of the request;
  • Readiness to respond to audit requests and to data breaches in line with the 72-hour breach notification requirement; and
  • Review and remediation of all existing Data Processing Agreements, together with obligations analysis covering consent management and data controller and processor agreements.
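
Pattern-based scanning is a common first step on the discoverability and classification pain points above. The following is a minimal Python sketch of the kind of scan a personal data discovery tool performs across a file share; the regular expressions, scan root, and reporting format are illustrative assumptions, not a production classifier.

```python
# Minimal sketch of pattern-based personal data discovery across a file share.
# The detectors and the scan root below are illustrative assumptions only.
import os
import re

# Simple detectors for two common categories of personal data.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_tree(root):
    """Walk a directory tree and report files containing likely PII."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # unreadable file: skip rather than fail the scan
            for label, pattern in PII_PATTERNS.items():
                hits = pattern.findall(text)
                if hits:
                    findings.append((path, label, len(hits)))
    return findings

if __name__ == "__main__":
    for path, label, count in scan_tree("./file_share"):
        print(f"{path}: {count} {label} match(es)")
```

A real discovery tool would add many more detectors (national identifiers, health data), contextual classification to reduce false positives, and connectors for email systems and content repositories rather than a bare directory walk.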

A recent survey by the IAPP and TrustArc highlighted investment priorities in privacy-enhancing technologies that are consistent with the pain points in the AIIM survey. Investment priorities focus on:

  • Data Mapping;
  • Personal Data Discovery;
  • De-identification;
  • Consent Management (a minimal sketch follows this list); and
  • Content Analytics (Privacy Information Management).
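
To make the consent management item concrete, below is a minimal sketch of a purpose-based consent record store. The class names, fields, and in-memory list are illustrative assumptions rather than any particular product's model; the design point is that withdrawal is recorded as an event rather than a deletion, preserving the audit trail regulators expect.

```python
# Minimal sketch of a purpose-based consent record store; names and the
# in-memory storage are illustrative assumptions, not a vendor's model.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class ConsentRecord:
    subject_id: str                      # pseudonymous data subject identifier
    purpose: str                         # e.g. "marketing_email"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

class ConsentStore:
    """Records grants and withdrawals; answers 'is processing permitted?'"""

    def __init__(self) -> None:
        self._records: List[ConsentRecord] = []

    def grant(self, subject_id: str, purpose: str) -> None:
        self._records.append(
            ConsentRecord(subject_id, purpose, datetime.now(timezone.utc)))

    def withdraw(self, subject_id: str, purpose: str) -> None:
        # Withdrawal is an event, not a deletion: the record is kept for audit.
        for rec in self._records:
            if (rec.subject_id == subject_id and rec.purpose == purpose
                    and rec.withdrawn_at is None):
                rec.withdrawn_at = datetime.now(timezone.utc)

    def is_permitted(self, subject_id: str, purpose: str) -> bool:
        return any(rec.subject_id == subject_id and rec.purpose == purpose
                   and rec.withdrawn_at is None for rec in self._records)

store = ConsentStore()
store.grant("subject-42", "marketing_email")
store.withdraw("subject-42", "marketing_email")
print(store.is_permitted("subject-42", "marketing_email"))  # False
```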

In a previous blog, I covered data mapping and data discovery technologies. One of the more vexing challenges for organizations is striking a balance between the social utility of AI-based content analytics technologies and safeguarding privacy rights, particularly with respect to data subject profiling. Technological innovations such as always-on mobile devices, wearable technologies, rapidly advancing geolocation technology, and predictive analytics create a digital fingerprint of data subjects' preferences and behavior.

The commercial incentive to share data with third parties for secondary uses is compelling, as it promotes innovation. The social utility of personally identifiable information spans health care delivery and public services, so long as the information is properly de-identified. De-identification comprises several classes of algorithms designed to remove personal information. Generally, there are two forms: pseudonymization, in which the association is removed and personal data is replaced by one or more artificial identifiers; and anonymization, which removes the association between the identifying dataset and the data subject in a manner whereby re-identification is not possible. Which alternative applies depends on the intended secondary uses of the data, the sensitivity of the data, and the associated risk of re-identification. A particularly useful guide to de-identification technologies and best practices is the National Institute of Standards and Technology (NIST) report De-Identification of Personal Information.
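
The distinction is easier to see in code. The following minimal Python sketch contrasts the two approaches: a keyed hash yields a stable artificial identifier that the key holder can re-link (pseudonymization), while dropping direct identifiers and generalizing quasi-identifiers severs the association entirely (anonymization). The key, field names, and generalization rules are illustrative assumptions only.

```python
# Minimal sketch contrasting pseudonymization and anonymization; the key
# handling and generalization rules below are illustrative assumptions.
import hmac
import hashlib

SECRET_KEY = b"keep-this-out-of-the-dataset"  # held separately by the controller

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable artificial identifier.
    Whoever holds SECRET_KEY can re-derive the token and re-link records;
    without the key, the association is removed."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

def anonymize(record: dict) -> dict:
    """Drop direct identifiers and generalize quasi-identifiers so the
    association to the data subject cannot be restored."""
    return {
        "age_band": f"{(record['age'] // 10) * 10}s",  # e.g. 34 -> "30s"
        "region": record["postcode"][:2] + "**",       # coarse location only
        "diagnosis": record["diagnosis"],
    }

patient = {"name": "Jane Doe", "age": 34, "postcode": "90210", "diagnosis": "asthma"}
print(pseudonymize(patient["name"]))  # stable token, re-linkable with the key
print(anonymize(patient))             # no identifier retained
```

Note that truly irreversible anonymization is harder than this sketch suggests: generalization must be aggressive enough that quasi-identifiers cannot be combined with outside data to re-identify the subject, which is why the re-identification risk assessment matters.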

There are also emerging efforts to incorporate privacy by design principles into content analytics technologies intended to profile data subjects. The recently held 40th International Conference of Data Protection and Privacy Commissioners released its Declaration on Ethics and Data Protection in Artificial Intelligence, the ambition of which is to ensure that “Artificial intelligence and machine learning technologies should be designed, developed and used in respect of fundamental human rights and in accordance with the fairness principle, in particular by:

  • “Considering individuals’ reasonable expectations by ensuring that the use of artificial intelligence systems remains consistent with their original purposes and that the data are used in a way that is not incompatible with the original purpose of their collection;
  • taking into consideration not only the impact that the use of artificial intelligence may have on the individual but also the collective impact on groups and on society at large; and
  • ensuring that artificial intelligence systems are developed in a way that facilitates human development and does not obstruct or endanger it, thus recognizing the need for delineation and boundaries on certain uses.”

 


About Andrew Pery

Andrew Pery is a marketing executive with over 25 years of experience in the high technology sector, focusing on content management and business process automation. Currently, Andrew is CMO of Top Image Systems. Andrew holds a Masters of Law degree with Distinction from Northwestern University and is a Certified Information Privacy Professional (CIPP/C) and a Certified Information Professional (CIP/AIIM).