Phil. Trans. R. Soc. A. 2021 Aug 16;379(2207):20200363. doi: 10.1098/rsta.2020.0363

Table 2.

Illustrations of explainability requirements for different stakeholders and scenarios.

| dimension/example | regulation | investigation | service | service | decision support | decision support |
|---|---|---|---|---|---|---|
| stakeholders | regulator | accident investigator^a | service provider | end user | expert user | prediction recipient |
| scenario | system approval | investigate accident or incident | system deployment | service use | decision support | decision support |
| purpose of explanation | confidence, compliance | clarity, compliance, continuous improvement | confidence, compliance, (continuous improvement) | challenge, consent and control | confidence, consent and control, challenge | challenge |
| timing of explanations | pre-deployment | post-incident | pre-deployment | same time as decision | same time as decision | same time as decision |
| data explainability | global | local, global | global | n.a. | local, global | local |
| model explainability | global (interpretable models, adversarial examples, influential instances) | global (permutation feature importance, counterfactual explanations, TreeSHAP) | global (interpretable models, adversarial examples, influential instances) | n.a. | global (permutation feature importance, interpretable models) | n.a. |
| prediction explainability | n.a. | local (KernelSHAP, counterfactual explanations) | n.a. | local (KernelSHAP, DeepLIFT, interpretable models) | local (interpretable models, counterfactual explanations) | local (KernelSHAP, DeepLIFT, interpretable models) |

^a A service provider may investigate service 'outages' (incidents), and lawyers/courts may also investigate challenges from decision recipients, using similar methods.