Abstract
As artificial intelligence (AI) systems increasingly shape high-stakes decisions, the demand for transparent and trustworthy behaviour has grown correspondingly. Yet despite extensive research on explainable AI (XAI), the foundational concepts of interpretability and explainability remain ambiguous and are often used inconsistently across disciplines. This conceptual fragmentation limits our ability to formulate rigorous explanation objectives, compare explanation methods, and evaluate their suitability in practical systems. This paper addresses these issues through three contributions. First, we provide a unified conceptual clarification of interpretation (sense-reading) and explanation (sense-giving), drawing on insights from linguistics, philosophy, cognitive science, and knowledge management. These definitions disentangle the cognitive, algorithmic, and communicative aspects of explanations in AI. Second, we introduce the Interpret/Explain Schema (IES), which specifies how interpretation and explanation arise within the data–model–output pipeline of an AI system. Third, building on the IES, we propose the General Framework for Generating Explanations (GFGE), a modular and model-agnostic framework that organises the components required to construct explanations, regardless of model class or explanation technique. We validate GFGE by instantiating it with a broad range of XAI methods, including post-hoc attribution techniques, attribution-driven hybrid methods, counterfactual explanations, surrogate models, prototype and concept-based approaches, and intrinsically interpretable argumentation-based models. These instantiations demonstrate that GFGE captures the structural backbone shared across heterogeneous XAI techniques, offering a unifying and theoretically grounded foundation for designing, analysing, and comparing explanations in AI systems.
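To make the abstract's notion of a modular, model-agnostic framework concrete, here is a minimal Python sketch of how GFGE-style components might fit together. Since only the abstract is available here, every name below (`Explainer`, `OcclusionExplainer`, `Explanation`, the toy credit-risk scorer and its weights) is a hypothetical illustration rather than the authors' actual API; a simple occlusion attribution stands in for the post-hoc attribution techniques the abstract lists.

```python
# Hypothetical sketch of a GFGE-style modular, model-agnostic explanation
# pipeline. All names are illustrative assumptions, not the paper's API.
from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence

# In the IES, interpretation and explanation arise within the
# data -> model -> output pipeline; here a "model" is any scoring function.
Model = Callable[[Sequence[float]], float]


@dataclass
class Explanation:
    """The sense-giving artefact communicated to a stakeholder."""
    target_output: float
    attributions: Dict[str, float]


class Explainer:
    """Model-agnostic slot: any technique implementing `explain`
    (attribution, counterfactual, surrogate, ...) can plug in here."""

    def explain(self, model: Model, instance: Sequence[float],
                feature_names: List[str]) -> Explanation:
        raise NotImplementedError


class OcclusionExplainer(Explainer):
    """One possible instantiation: a simple perturbation-based
    attribution method (occlusion)."""

    def __init__(self, baseline: float = 0.0):
        self.baseline = baseline

    def explain(self, model, instance, feature_names):
        base_score = model(instance)
        attributions = {}
        for i, name in enumerate(feature_names):
            perturbed = list(instance)
            perturbed[i] = self.baseline  # occlude one feature
            # Attribution = drop in output when the feature is removed.
            attributions[name] = base_score - model(perturbed)
        return Explanation(base_score, attributions)


if __name__ == "__main__":
    # Toy linear "credit risk" scorer; weights are made up for illustration.
    weights = [0.6, -0.3, 0.1]
    model: Model = lambda x: sum(w * v for w, v in zip(weights, x))

    explainer: Explainer = OcclusionExplainer()
    result = explainer.explain(model, [1.0, 2.0, 3.0],
                               ["income", "debt", "age"])
    print(result.attributions)  # {'income': 0.6, 'debt': -0.6, 'age': 0.3}
```

Under this reading, the structural point the abstract attributes to GFGE is that a surrogate, counterfactual, or prototype-based method would plug into the same `Explainer` interface without any change to the surrounding data–model–output pipeline.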