Abstract

Machine learning systems are becoming increasingly ubiquitous. Their expanding adoption is accelerating the shift towards a more algorithmic society, meaning that algorithmically informed decisions have greater potential for significant social impact. However, most of these accurate decision support systems remain complex black boxes: their internal logic and inner workings are hidden from the user, and even experts cannot fully understand the rationale behind their predictions. Moreover, new regulations and highly regulated domains have made the audit and verifiability of decisions mandatory, increasing the demand for the ability to question, understand, and trust machine learning systems, for which interpretability is indispensable. The research community has recognized this interpretability problem and has focused on developing both interpretable models and explanation methods over the past few years. However, the proliferation of these methods shows that there is no consensus on how to assess explanation quality: what are the most suitable metrics for assessing the quality of an explanation? The aim of this article is to review the current state of the research field of machine learning interpretability, focusing on its societal impact and on the methods and metrics developed so far. Furthermore, a complete literature review is presented in order to identify future directions of work in this field.

Keywords

Interpretability, Computer science, Meaning (existential), Artificial intelligence, Field (mathematics), Audit, Quality (philosophy), Data science, Machine learning, Management science, Knowledge management, Risk analysis (engineering), Engineering, Psychology, Epistemology


Publication Info

Year: 2019
Type: article
Volume: 8
Issue: 8
Pages: 832-832
Citations: 1600
Access: Closed

Citation Metrics

OpenAlex: 1600
Influential: 75
CrossRef: 1258

Cite This

Diogo V. Carvalho, Eduardo M. Pereira, Jaime S. Cardoso (2019). Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics, 8(8), 832-832. https://doi.org/10.3390/electronics8080832

Identifiers

DOI
10.3390/electronics8080832

Data Quality

Data completeness: 81%