Use Privacy-Preserving Machine Learning Techniques
This practice addresses requirements from the EU guidelines for trustworthy ML.
Intent
Motivation
Applicability
Description
Whenever processing data that can be used to identify individuals or to trace information back to them – such as medical records – it is imperative to use privacy-preserving techniques to protect the individuals’ privacy. Moreover, machine learning models are known to leak information about their training data (see membership inference attacks) and may reveal sensitive details about the records they were trained on.
Examples of privacy-preserving machine learning techniques include anonymisation, pseudonymisation, differential privacy, federated machine learning, and cryptographic techniques such as homomorphic encryption.
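As an illustration of one of these techniques, the core idea of differential privacy can be sketched in a few lines: answer aggregate queries only after adding calibrated random noise, so that the presence or absence of any single record is statistically hidden. The sketch below is a minimal, hypothetical example (the function names and the toy records are ours, not from any particular library) using the Laplace mechanism for a count query.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Epsilon-differentially-private count query.

    A count query has sensitivity 1 (adding or removing one record changes
    the true answer by at most 1), so Laplace noise with scale 1/epsilon
    is enough to guarantee epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical medical records: (age, has_condition)
records = [(34, True), (51, False), (47, True), (29, True), (62, False)]
noisy_answer = dp_count(records, lambda r: r[1], epsilon=0.5)
```

Smaller values of `epsilon` give stronger privacy but noisier answers; production systems also track the cumulative privacy budget across queries, which this sketch omits.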
Tool support for privacy-preserving machine learning is mature, with tools such as Opacus and CrypTen – developed and maintained by Facebook AI Research (FAIR) – and PySyft, maintained by the OpenMined community.
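The intuition behind the federated and cryptographic approaches that such libraries implement at scale can be shown with a toy secure-aggregation sketch, assuming pairwise shared masks between clients (the integer seed below stands in for a real shared secret). Each client's masked update looks random on its own, yet the masks cancel when the server sums them, so only the aggregate is revealed.

```python
import random

def masked_updates(updates):
    """Secure-aggregation sketch: every pair of clients shares a random
    mask; one client adds it, the other subtracts it. An individual
    masked update reveals nothing, but the masks cancel in the sum."""
    n, dim = len(updates), len(updates[0])
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            # Stand-in for a pairwise shared seed negotiated via key exchange.
            rng = random.Random(i * 1000 + j)
            for k in range(dim):
                m = rng.uniform(-1.0, 1.0)
                masked[i][k] += m
                masked[j][k] -= m
    return masked

def aggregate(masked):
    """Server-side averaging over masked updates (FedAvg-style)."""
    n, dim = len(masked), len(masked[0])
    return [sum(u[k] for u in masked) / n for k in range(dim)]

client_updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
average = aggregate(masked_updates(client_updates))  # ≈ [3.0, 4.0]
```

Real protocols such as those in PySyft and CrypTen additionally handle client dropout, malicious parties, and efficient key exchange; this sketch only shows why the server never needs to see a raw update.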
Adoption
Related
Read more