Table 5. Essential properties of XAI mapped to measures of explanation effectiveness.

| Essential properties of XAI | | Mental model | User satisfaction | Trust assessment | Task performance | Correctability |
|---|---|---|---|---|---|---|
| Stakeholders of XAI | AI regulators | | | | | |
| | AI developers | | | | | |
| | AI managers | | | | | |
| | AI users | [29] [30] [31] [32] [33] | [32] | [31] [32] [33] | [30] [32] [34] | |
| | Individuals affected by AI-based decisions | [29] | [29] | | | |
| Objectives of XAI | Explainability to evaluate AI | [30] [32] [34] | | | | |
| | Explainability to justify AI | [29] [32] | | | | |
| | Explainability to improve AI | [29] [30] [31] [32] [34] | [31] [32] [33] | | | |
| | Explainability to learn from AI | | | | | |
| | Explainability to manage AI | | | | | |
| Quality of personalized explanations | Fidelity | [33] | [30] [32] [34] | | | |
| | Generalizability | N/A | N/A | N/A | N/A | N/A |
| | Explanatory power | | | | | |
| | Interpretability | | | | | |
| | Comprehensibility | [33] | | | | |
| | Plausibility | [29] [30] [31] [32] [34] | [29] [32] | [32] | | |
| | Effort | [31] | | | | |
| | Privacy | | | | | |
| | Fairness | | | | | |