To avoid the unintentional harm that improper development of AI technologies can cause to society, the European Commission, through its High-Level Expert Group on AI, published a series of guidelines and checklists for the development of trustworthy AI.

By analysing the guidelines, we found that they do not specify concrete actions to be taken by those involved in building AI systems. To increase the actionability of the guidelines, we extended our catalogue of practices with new practices that address the EU requirements for trustworthy AI.

In particular, we focused on a major sub-field of AI: machine learning (ML). By analysing the academic and grey literature, we extracted 14 operational practices for trustworthy ML that address the 7 requirements for trustworthy AI from the picture above, as follows:

Human agency and oversight

Technical robustness and safety

Privacy and data governance

Transparency

Diversity, non-discrimination and fairness

Societal and environmental well-being

Accountability