The social implications of machine learning and, more generally, of artificial intelligence technologies call for regulation that ensures consumer rights are respected. For example, the EU's GDPR, as well as credit-scoring regulations in the US, grant consumers a right to an explanation of automated decisions made about them.
Post-mortem analysis and traceability of model behaviour are difficult when teams do not design the application for audit trails. A first step towards opening an application to audits is to devise an auditing strategy and make it part of the development life-cycle.
Such a strategy should include plans for storing and preserving audit trails of past decisions, together with data describing each stage of the development life-cycle. For example: define and store design checklists that record why a model was preferred over alternatives; keep records of the training data distribution at the time a model was developed; document the model's known failure modes; and keep production logs that can trace each decision back to a model version and the training data used.
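As a minimal sketch of the last point, each production decision can be written as a structured record linking it to a model version and a hash of the training snapshot. The function name, record fields, and example values below are illustrative assumptions, not a prescribed schema:

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone


def log_prediction(model_id: str, model_version: str,
                   training_data_hash: str, features: dict,
                   prediction) -> dict:
    """Build an audit-trail record that links one decision back to
    the model version and the training data it was built from."""
    return {
        "record_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash of the training snapshot, computed once at training time.
        "training_data_hash": training_data_hash,
        "input": features,
        "prediction": prediction,
    }


# In production this record would be appended to durable, append-only
# storage; here we just build and serialise one example entry.
snapshot_hash = hashlib.sha256(b"training-set-2024-01").hexdigest()
entry = log_prediction("credit-model", "1.4.2", snapshot_hash,
                       {"income": 52000, "age": 31}, "approved")
print(json.dumps(entry, indent=2))
```

With records like this, an auditor can answer "which model, trained on which data, produced this decision?" without reconstructing state after the fact.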
Automating the audit artefacts – such as the automatic generation of reports – will make it easier to adhere to the audit strategy and will facilitate communication across teams.
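One way to automate such an artefact is to summarise the prediction log into a short report, for instance grouped by model version. The report format and the record fields below are illustrative assumptions:

```python
from collections import Counter


def audit_report(records: list) -> str:
    """Summarise prediction-log records into a short plain-text
    audit report, grouped by model version."""
    by_version = Counter(r["model_version"] for r in records)
    lines = ["Audit report", f"Total decisions: {len(records)}", ""]
    for version, count in sorted(by_version.items()):
        lines.append(f"- model version {version}: {count} decisions")
    return "\n".join(lines)


# Hypothetical log entries, as produced by the application's
# prediction-logging code.
records = [
    {"model_version": "1.4.2", "prediction": "approved"},
    {"model_version": "1.4.2", "prediction": "denied"},
    {"model_version": "1.5.0", "prediction": "approved"},
]
report = audit_report(records)
print(report)
```

A report like this can be regenerated on a schedule or on demand, so the audit artefacts stay in sync with production behaviour instead of being assembled by hand.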
- Have Your Application Audited
- Log Production Predictions with the Model's Version and Input Data
- Employ Interpretable Models When Possible
- Ethics Guidelines for Trustworthy AI
- Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims