Table 3. Coverage of the essential properties of XAI (stakeholders, objectives, explanation quality criteria, and measures of explanation effectiveness) in related work [29]–[34].
| Essential properties of XAI | | [29] | [30] | [31] | [32] | [33] | [34] |
|---|---|---|---|---|---|---|---|
| Stakeholders of XAI | AI regulators | | | | | | |
| | AI developers | | | | | | |
| | AI managers | | | | | | |
| | AI users | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Individuals affected by AI-based decisions | ✓ | | | | | |
| Objectives of XAI | Explainability to evaluate AI | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Explainability to justify AI | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Explainability to improve AI | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Explainability to learn from AI | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Explainability to manage AI | | | | | | |
| Quality of personalized explanations | Fidelity | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Generalizability | N/A | N/A | N/A | N/A | N/A | N/A |
| | Explanatory power | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Interpretability | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | Comprehensibility | ✓ | | | | | |
| | Plausibility | ✓ | ✓ | ✓ | ✓ | ✓ | |
| | Effort | ✓ | | | | | |
| | Privacy | | | | | | |
| | Fairness | | | | | | |
| Measures of explanation effectiveness | Mental model | | | | | | |
| | User satisfaction | ✓ | ✓ | ✓ | ✓ | ✓ | |
| | Trust assessment | ✓ | ✓ | | | | |
| | Task performance | ✓ | ✓ | ✓ | | | |
| | Correctability | ✓ | ✓ | ✓ | | | |