Provide Safe Channels to Raise Concerns
Practice 44 of 46 • Governance • Ranked as advanced • Addresses requirements from the EU guidelines for trustworthy ML
Intent
Obtain honest feedback from users so that issues can be remediated in a timely manner rather than escalating into conflict.
Motivation
Users can help improve the application by providing feedback.
Applicability
Safe communication channels should be provided for any machine learning application.
Description
Communication channels between users and developers help to discuss issues, dilemmas, or emergent concerns regarding the ethical use of machine learning. Users may, for example, wish to raise concerns about (perceived) bias or inquire about how a machine learning system reached a decision.
To facilitate communication, increase transparency, and obtain feedback on your application, provide safe channels for users to raise concerns.
These channels can be as simple as mailing lists, blogs (e.g., Disqus), or phone numbers. Make sure to include options for anonymisation to protect the users' privacy and to be as inclusive as possible, as illustrated in the sketch below.
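The practice does not prescribe any particular implementation; the following is a minimal sketch, assuming a simple feedback intake with an opt-in anonymity flag. All names here (ConcernReport, submit_concern, the category labels) are hypothetical and only illustrate how contact details could be dropped when a user asks to remain anonymous.

```python
# Hypothetical sketch of an anonymisable feedback channel (standard library only).
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import uuid

@dataclass
class ConcernReport:
    """A single user-raised concern about the ML application."""
    message: str                    # free-text description of the concern
    category: str = "other"         # e.g. "bias", "explanation", "privacy"
    contact: Optional[str] = None   # kept only if the user opts out of anonymity
    report_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    received_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def submit_concern(message: str, category: str = "other",
                   contact: Optional[str] = None,
                   anonymous: bool = True) -> ConcernReport:
    """Record a concern; drop contact details when the user requests anonymity."""
    report = ConcernReport(
        message=message,
        category=category,
        contact=None if anonymous else contact,
    )
    # In a real system this would be forwarded to a ticketing queue or mailing
    # list that the development team monitors and responds to.
    print(f"[{report.received_at}] {report.category}: {report.message}")
    return report

# Example: an anonymous concern about perceived bias.
submit_concern(
    "The loan model seems to reject applicants from my postcode far more often.",
    category="bias",
)
```

Keeping the contact field optional, rather than mandatory, is what makes the channel "safe": users who fear repercussions can still report issues, while those who want a reply can opt in to being contacted.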
Adoption
Related
- Inform Users on Machine Learning Usage
- Explain Results and Decisions to Users
- Test for Social Bias in Training Data
- Provide Audit Trails
Read more
- Ethics Guidelines for Trustworthy AI
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims