Machine learning systems involve highly complex interactions between data, algorithms, and models. As a result, they are often difficult to understand, even for experts.
To increase transparency and align applications with ethics guidelines, it is imperative to inform users of the reasons why a decision was made. For example, the EU's GDPR, as well as credit-scoring regulations in the USA, establish a right to an explanation for automated decision-making systems.
Not only may users be more accepting of decisions made by machine learning systems when they understand what a decision was based on; explanations also help them raise concerns when an explanation is unsatisfactory or, in the extreme case, plainly wrong. This feedback in turn helps machine learning systems make better decisions and provide improved explanations in the future.
- Inform Users on Machine Learning Usage
- Provide Audit Trails
- Employ Interpretable Models When Possible
- Provide Safe Channels to Raise Concerns
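As a minimal sketch of the "Employ Interpretable Models When Possible" recommendation, the example below uses a simple linear scoring model whose per-feature contributions can be reported to the user as the reasons behind a decision. The feature names, weights, and threshold are illustrative assumptions, not part of any real scoring system.

```python
# Illustrative interpretable model: a linear scorer whose decision can be
# decomposed into per-feature contributions. All names and weights are
# hypothetical, chosen only to demonstrate the explanation pattern.

WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.3}
THRESHOLD = 0.5


def decide(applicant: dict) -> tuple[bool, dict]:
    """Return the decision plus each feature's contribution to the score."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions


approved, why = decide({"income": 2.0, "debt": 1.0, "years_employed": 1.0})

# The contributions double as a user-facing explanation, sorted by impact.
for feature, contribution in sorted(why.items(), key=lambda t: -abs(t[1])):
    print(f"{feature}: {contribution:+.2f}")
```

Because the model is linear, the explanation is exact rather than approximated: the listed contributions sum to the decision score, which makes it straightforward for a user to see which factors drove the outcome and to contest any that look wrong.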