Requirements for trustworthy AI from the High-Level Expert Group on AI set up by the European Commission.
To avoid the unintentional harm that improper development of AI technologies can cause to society, the European Commission, through the High-Level Expert Group on AI, published a series of guidelines and checklists for the development of trustworthy AI.
By analysing the guidelines, we found that they do not specify concrete actions to be taken by those involved in building AI systems. To make the guidelines more actionable, we extended our catalogue of practices with new practices that address the EU requirements for trustworthy AI.
In particular, we focused on a major sub-field of AI: machine learning (ML). By analysing the academic and grey literature, we extracted 14 operational practices for trustworthy ML that address the 7 requirements for trustworthy AI shown in the picture above, as follows:
Human agency and oversight
Technical robustness and safety
Privacy and data governance
- Establish Responsible AI Values
- Inform Users on Machine Learning Usage
- Explain Results and Decisions to Users
- Provide Safe Channels to Raise Concerns
Diversity, non-discrimination and fairness
- Test for Social Bias in Training Data
- Prevent Discriminatory Data Attributes Used As Model Features
- Assess and Manage Subgroup Bias
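As an illustration of the last practice, assessing subgroup bias can start with a simple check that the model's positive-prediction rate does not differ too much between subgroups of a sensitive attribute (a demographic parity check). The sketch below is a minimal, hypothetical example: the data, the group labels, and the 0.1 tolerance are illustrative assumptions, not part of the catalogue.

```python
# Minimal sketch of a subgroup bias check via the demographic parity gap.
# All data and the 0.1 tolerance below are illustrative assumptions.

def positive_rate(predictions):
    """Fraction of positive (1) predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two subgroups."""
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [positive_rate(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

# Toy predictions for two subgroups of a hypothetical sensitive attribute.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # prints "demographic parity gap: 0.50"
if gap > 0.1:  # illustrative tolerance; an acceptable gap is context-dependent
    print("warning: subgroup bias above tolerance")
```

In practice one would compute several complementary metrics (e.g. equalised odds in addition to demographic parity) and investigate, rather than automatically reject, models that exceed the tolerance.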