Abstract

We propose a possible solution to a public challenge posed by the Fair Isaac Corporation (FICO), which is to provide an explainable model for credit risk assessment. Rather than present a black box model and explain it afterwards, we provide a globally interpretable model that is as accurate as other neural networks. Our "two-layer additive risk model" is decomposable into subscales, where each node in the second layer represents a meaningful subscale, and all of the nonlinearities are transparent. We provide three types of explanations that are simpler than, but consistent with, the global model. One of these explanation methods involves solving a minimum set cover problem to find high-support globally-consistent explanations. We present a new online visualization tool to allow users to explore the global model and its explanations.
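
To make the architecture described in the abstract concrete, here is a minimal sketch of a two-layer additive risk model. It is an illustration under stated assumptions, not the authors' implementation: the subscale grouping, feature indices, and names such as SUBSCALES and predict_risk are hypothetical, and in practice the weights would be learned from data.

```python
import numpy as np

def sigmoid(z):
    # The only nonlinearity in the model, and it is applied transparently.
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical grouping of (binarized) credit features into named subscales;
# in the paper, each node in the second layer represents one such subscale.
SUBSCALES = {
    "payment_history": [0, 1, 2],  # indices into the feature vector x
    "debt_burden":     [3, 4],
    "credit_age":      [5, 6],
}

def subscale_scores(x, w1, b1):
    # First layer: one additive (linear) score per subscale, squashed
    # through a sigmoid so every subscale risk lies in (0, 1).
    return {name: sigmoid(np.dot(w1[name], x[idx]) + b1[name])
            for name, idx in SUBSCALES.items()}

def predict_risk(x, w1, b1, w2, b2):
    # Second layer: a weighted sum of the subscale risks, again passed
    # through a sigmoid, gives the overall predicted probability of default.
    s = subscale_scores(x, w1, b1)
    z = sum(w2[name] * s[name] for name in s) + b2
    return sigmoid(z), s  # overall risk plus the per-subscale breakdown
```

Because every weight and both sigmoids are visible, the per-subscale breakdown returned alongside the prediction is itself a faithful decomposition of the model, which is what distinguishes a globally interpretable model from a black box explained post hoc.

The set-cover step mentioned in the abstract can be illustrated in the same spirit. The paper solves a minimum set cover problem to find high-support globally-consistent explanations; the sketch below instead uses the standard greedy approximation, with a hypothetical framing in which each candidate rule "covers" the conditions it accounts for.

```python
def greedy_set_cover(universe, candidates):
    # Standard greedy approximation to minimum set cover: repeatedly pick
    # the candidate set that covers the most still-uncovered elements.
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(candidates, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            break  # remaining elements cannot be covered by any candidate
        chosen.append(best)
        uncovered -= best
    return chosen

# Hypothetical usage: the universe holds conditions an explanation must
# account for; each candidate is the set of conditions one rule covers.
conditions = {"late_payments", "high_utilization", "short_history"}
rules = [{"late_payments", "high_utilization"}, {"short_history"},
         {"high_utilization"}]
print(greedy_set_cover(conditions, rules))  # a small covering set of rules
```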

Keywords

Cover (algebra), Set (abstract data type), Computer science, Artificial neural network, Node (physics), Black box, Credit risk, Layer (electronics), Risk model, Econometrics, Artificial intelligence, Actuarial science, Mathematics, Economics, Engineering

Publication Info

Year: 2018
Type: Preprint
Citations: 65 (OpenAlex)
Access: Closed

Cite This

Chaofan Chen, Kangcheng Lin, Cynthia Rudin et al. (2018). An Interpretable Model with Globally Consistent Explanations for Credit Risk. arXiv (Cornell University). https://doi.org/10.48550/arxiv.1811.12615

Identifiers

DOI: 10.48550/arxiv.1811.12615