
Activate Data Privacy with Privacy Enhancing Technologies

What are PETs and how can they enable privacy within your organization?


Data privacy and security have become top priorities for individuals, companies and regulatory agencies. Businesses are having to rethink their approach to privacy given growing awareness among consumers and escalating scrutiny from regulators. The cost of privacy compliance can be significant, and it is very difficult to address privacy issues retroactively. A proactive, pragmatic approach to privacy is the only way forward, and companies are realizing the need to incorporate privacy by design.


Privacy Enhancing Technologies (PETs) are a broad range of technologies that, in conjunction with changes to policies and business frameworks, make it possible for companies to be data-driven without compromising the privacy of their customers and employees. A recent Pew Research Center survey found that 79% of American adults are concerned about how service providers and applications use their data, and that 52% would go as far as refusing products or services that threaten their privacy. PETs can potentially reshape the data economy and foster relationships of trust between users, corporations and regulatory agencies.


How do Privacy Enhancing Technologies work?

PETs help address privacy and security challenges in numerous ways, enabling anonymity, pseudonymity, unlinkability and unobservability for data subjects. While encryption and data obfuscation are among the most commonly used, PETs include the following technologies:

  • Homomorphic encryption

Typically, encrypted data must be decrypted before it can be processed. Decryption exposes the data to the very threats encryption was meant to safeguard against in the first place. Homomorphic encryption could be the answer to this vulnerability, which is inherent in most other approaches to data protection.

Homomorphic encryption allows computation to be performed on encrypted data without decrypting it. Data is encrypted with a public key, and only an authorized party holding the matching private key can access the unencrypted result. With homomorphic encryption, employees (or a third party) can process encrypted data without ever decrypting it.
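As an illustration, the sketch below uses the open-source python-paillier library (an assumption; the article names no specific scheme) to total salaries that remain encrypted throughout. Paillier is partially homomorphic, supporting addition on ciphertexts, but the encrypt-compute-decrypt workflow is the same idea fully homomorphic schemes generalize.

```python
# A minimal sketch of homomorphic computation using the open-source
# python-paillier library (pip install phe). The salary figures are
# hypothetical examples.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Sensitive values are encrypted with the public key before sharing.
salaries = [52000, 61000, 58500]
encrypted = [public_key.encrypt(s) for s in salaries]

# A third party can compute on the ciphertexts without ever seeing
# the plaintext values.
encrypted_total = sum(encrypted[1:], encrypted[0])

# Only the holder of the private key can read the result.
total = private_key.decrypt(encrypted_total)
print(total)  # 171500
```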

  • Differential Privacy

Differential privacy approaches try to preserve the privacy of an individual within a group when data about the group is shared. This is achieved by injecting a small amount of random perturbation, or noise, into the aggregate dataset: the noise does not alter the statistical characteristics of the aggregate, yet it ensures that only the bare minimum about any single individual can be learned from the group-level data. The Laplace mechanism is a commonly used mathematical technique for injecting this noise. The goal is that an analyst examining the aggregate dataset cannot determine whether any particular individual's data was or was not included in it, thereby protecting that individual's privacy. These techniques have been used widely in the analysis of data in various areas, including census data, recommendation systems, location-based services and social networks.
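As a minimal sketch of the Laplace mechanism, the snippet below adds calibrated noise to a counting query using NumPy; the dataset, predicate and epsilon value are hypothetical stand-ins.

```python
# A minimal sketch of the Laplace mechanism. Parameter names
# (epsilon, sensitivity) follow the standard differential-privacy
# literature; the data is a hypothetical example.
import numpy as np

rng = np.random.default_rng()

def laplace_count(data, predicate, epsilon):
    """Release a count query with epsilon-differential privacy.

    A counting query changes by at most 1 when one individual is
    added or removed, so its sensitivity is 1.
    """
    true_count = sum(1 for row in data if predicate(row))
    sensitivity = 1.0
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [34, 29, 41, 57, 23, 38, 45]
# Noisy answer to "how many people are over 40?"
print(laplace_count(ages, lambda age: age > 40, epsilon=0.5))
```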

  • Federated Learning

Large datasets used for training machine learning algorithms may contain sensitive information that an organization or individual may not be willing to share. Federated learning is a more privacy-friendly approach that enables the training of machine learning algorithms on all available data in such a way that the integrity and privacy of data are protected.

Instead of accumulating all the training data at one centralized point, federated learning trains machine learning models on decentralized edge devices (such as mobile phones) or servers, using the data available on each node or device. Training is thus performed locally: on-device, or within the organization itself. Raw data never leaves the edge device; only the results of the locally trained model are shared, in anonymized form, with a central server that aggregates them into a 'global' model. This approach provides additional safeguards against potential intruders accessing data or infringing on privacy.
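As a rough illustration, the sketch below implements the federated averaging (FedAvg) aggregation step with NumPy. The linear model, client data and hyperparameters are hypothetical stand-ins; production systems would use a framework such as TensorFlow Federated or Flower.

```python
# A minimal sketch of federated averaging: each client trains locally
# on its own data and only model weights travel to the server.
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """Train a linear model on one device's private data.

    The raw data (X, y) never leaves the device; only the updated
    weights are returned to the coordinating server.
    """
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server side: weight each client's update by its dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(3)
clients = [(rng.normal(size=(40, 3)), rng.normal(size=40)) for _ in range(4)]

for _ in range(10):  # federated training rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```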

  • Zero-knowledge Proofs (ZKPs)

First introduced by MIT researchers in 1985, ZKPs employ cryptographic algorithms to verify the veracity of information without exposing the data itself. ZKPs bypass the requirement of sharing personal data to prove one's identity, paving the way for identity authentication systems that carry no risk of a data privacy breach. They work effectively in fraud prevention systems that require users to prove eligibility for a product or service without sharing the underlying sensitive information. ZKPs can be used to protect data privacy in a diverse range of scenarios, including preserving the anonymity of users in blockchain systems, online voting, and demonstrating that an income or age falls within an admissible range.
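To make the idea concrete, here is a toy sketch of the Schnorr identification protocol, one classic interactive zero-knowledge proof (an illustrative choice; the article names no specific construction). The parameters are tiny and provide no real security.

```python
# Toy Schnorr protocol: the prover convinces the verifier that she
# knows the secret x behind y = g^x mod p without revealing x.
# Parameters chosen for readability, not security.
import secrets

p = 2**127 - 1  # illustrative prime modulus (a toy choice)
q = p - 1       # exponent arithmetic works modulo the group order
g = 3           # generator, assumed for this sketch

x = secrets.randbelow(q)   # prover's secret
y = pow(g, x, p)           # public key, shared with the verifier

# 1. Commitment: prover picks a random nonce and sends t = g^r.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: verifier sends a random challenge c.
c = secrets.randbelow(q)

# 3. Response: prover answers with s = r + c*x, which leaks nothing
#    about x because r is uniformly random.
s = (r + c * x) % q

# 4. Verification: g^s must equal t * y^c if the prover knows x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; x was never revealed")
```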

  • Data Masking Techniques

Some data masking techniques also act as privacy enhancing technologies. Data masking aims to create a fake yet realistic version of sensitive data: a version of the original that cannot be deciphered or reverse engineered. Data can be altered using several methods, such as character shuffling, encryption, and character or word substitution.


  • Pseudonymization: Pseudonymization is explicitly mentioned in the European Union's GDPR [Article 4(5)] and can be implemented with techniques such as data masking, encryption and hashing. It is a data management procedure that replaces personally identifiable information (PII) with one or more artificial identifiers; these identifiers can later be mapped back to re-identify the record (see the sketch after this list).

  • Obfuscation: This data masking technique replaces sensitive information with misleading or scrambled data, for example in a log. The most common methods used to obfuscate data are encryption, tokenization and data masking. Obfuscation helps organizations address cyber risks and enables more secure data sharing with third parties.

  • Data minimization: Under Article 5(1)(c) of the GDPR, data controllers must limit the collection, storage and processing of personal data to what is adequate, relevant and necessary for a specific purpose. Data minimization requires organizations to collect only the minimum amount of data needed to provide the key elements of a service. In a privacy-first world, companies should adopt a 'less is more' mindset.
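As a concrete illustration of pseudonymization, the sketch below replaces a direct identifier with a keyed hash (HMAC-SHA256). The keyed-hash approach and the field names are assumptions for illustration; because the pseudonym is stable, an authorized party holding the key (or a securely stored mapping) can re-identify records, which is what distinguishes pseudonymization from full anonymization.

```python
# A minimal pseudonymization sketch using a keyed hash (HMAC-SHA256).
# The record fields and key are hypothetical examples; in practice the
# key would be stored separately from the pseudonymized dataset.
import hmac
import hashlib

SECRET_KEY = b"store-this-key-separately-from-the-data"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable artificial one."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "age": 34}
masked = {
    "subject_id": pseudonymize(record["email"]),  # artificial identifier
    "age": record["age"],                         # non-identifying field kept
}
print(masked)
```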

Why should organizations invest in PETs?


Organizations have compelling reasons to adopt PETs for safeguarding consumer data privacy. Gartner identified privacy enhancing technologies as one of its top strategic technology trends for 2021, and predicts that by 2025, 50% of organizations will adopt PETs for processing data in untrusted environments and for multiparty data analytics use cases.

PETs can help minimize data misuse, maximize data security and let organizations build privacy into their designs. They not only protect sensitive information but also cement privacy practices and promote digital trust. Many of the PETs discussed here are already used by large corporations to protect consumer and institutional data while building ground-breaking products in a world shaped by GDPR, CPRA and the demise of third-party cookies. Prominent examples include Google's RAPPOR (Randomized Aggregatable Privacy-Preserving Ordinal Response), LinkedIn's PriPeARL (Privacy Preserving Analytics and Reporting) framework, and differential privacy deployments by Apple and Microsoft.

PETs are complex and can be difficult to implement, but they reduce the risks associated with handling data and are an important tool in the privacy toolbox. They can also help organizations achieve compliance with key parts of privacy and data protection laws.
