| Method | Advantages | Disadvantages |
| --- | --- | --- |
| Feature interaction and importance | Shows not only which features are important but also their relative importance, supporting clinical interpretation | Numerical weights are often not easily interpretable and can be misinterpreted |
| Attention mechanism | Does not directly give the clinical end user an answer, but highlights the areas of most interest to support decision-making; the user may therefore be more tolerant of imperfect accuracy | Simply presenting this information to a clinical end user may not be useful; information overload and alert fatigue are major issues. Highlighting areas of attention without guidance on what to do with them can be even more confusing if the end user is unsure what to make of a highlighted section (and is also likely to miss nonhighlighted areas that are sometimes crucial) |
| Data dimensionality reduction | Simplifying the data to a small feature subset can make the model's underlying behavior comprehensible; it also tends to yield more robust, regularized models that are less likely to overfit the training data | Risk of missing other features that remain important in individual cases but that the reduced model inadvertently excludes |
| Knowledge distillation and rule extraction | Potentially more robust models with summarized representations of complex data from which clinical end users can naturally infer meaning [87] | If clinical end users cannot intuitively interpret these representations, the representations are likely to make the results even harder to interpret and explain |
| Intrinsically interpretable models | Simple models that are familiar and intuitive to clinical end users; even without understanding how such models are constructed, many medical professionals have at least some familiarity with applying them | If an ensemble of simple models is used to improve accuracy, the clinical end user can no longer interpret the results |

Minimal, illustrative sketches of each of these methods follow below.
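First, feature importance. The sketch below uses scikit-learn's permutation importance on synthetic data; the feature names and the random-forest model are illustrative assumptions, not a clinical pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "systolic_bp", "heart_rate", "lab_value"]  # hypothetical features
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much held-out accuracy
# drops; a larger drop means the model relied more on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Permutation importance is only one way to obtain such weights; the caveat in the table, that numerical weights are easy to misread, applies however they are computed.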
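Next, the attention mechanism. This is a minimal sketch of scaled dot-product attention in plain NumPy; the token list is a hypothetical fragment of a clinical note. The point is that the attention weights, not the model's answer, are what gets surfaced to the end user.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    """Return the attended output and the weights, which can be
    visualized (e.g., as highlighted spans in a clinical note)."""
    d_k = keys.shape[-1]
    scores = query @ keys.T / np.sqrt(d_k)  # similarity of the query to each key
    weights = softmax(scores)               # sums to 1 over the inputs
    return weights @ values, weights

rng = np.random.default_rng(0)
tokens = ["chest", "pain", "radiating", "to", "left", "arm"]  # hypothetical note
keys = rng.normal(size=(len(tokens), 8))
values = rng.normal(size=(len(tokens), 8))
query = rng.normal(size=(8,))

_, weights = attention(query, keys, values)
for tok, w in sorted(zip(tokens, weights), key=lambda t: -t[1]):
    print(f"{tok}: {w:.2f}")  # higher weight = candidate region to highlight
```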
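For data dimensionality reduction, the sketch below uses univariate feature selection with scikit-learn's SelectKBest; the choice of k=5 and the synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))  # 30 candidate features
y = (X[:, 3] - X[:, 7] + rng.normal(scale=0.5, size=400) > 0).astype(int)

# Keep only the k features with the strongest univariate association,
# then fit a regularized linear model on that reduced set.
model = make_pipeline(SelectKBest(f_classif, k=5), LogisticRegression(max_iter=1000))
model.fit(X, y)

selected = model.named_steps["selectkbest"].get_support(indices=True)
print("retained feature indices:", selected)  # the small subset a reviewer can audit
```

The printed subset is exactly what the table's disadvantage warns about: any feature outside it is invisible to the model, even if it matters for an individual patient.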
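For knowledge distillation and rule extraction, one common pattern is to fit a shallow decision tree (the student) to the predictions of a complex model (the teacher) and print its branches as if-then rules. The teacher/student pairing and max_depth=2 below are illustrative choices.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["age", "creatinine", "glucose"]  # hypothetical features
X = rng.normal(size=(600, 3))
y = ((X[:, 0] > 0) & (X[:, 1] > -0.5)).astype(int)

teacher = GradientBoostingClassifier(random_state=0).fit(X, y)

# The student is trained on the teacher's *predictions*, not the true
# labels, so its rules approximate the teacher's behavior.
student = DecisionTreeClassifier(max_depth=2, random_state=0)
student.fit(X, teacher.predict(X))

print(export_text(student, feature_names=feature_names))
agreement = (student.predict(X) == teacher.predict(X)).mean()
print(f"fidelity to teacher: {agreement:.1%}")  # how faithfully the rules summarize it
```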
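Finally, an intrinsically interpretable model: a single logistic regression whose exponentiated coefficients read as odds ratios, a quantity many clinicians already know from risk scores. The feature names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age_decades", "smoker", "systolic_bp_per_10"]  # hypothetical features
X = rng.normal(size=(500, 3))
y = (0.8 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# exp(coefficient) is the multiplicative change in the odds of the
# outcome per unit increase in the feature, the quantity clinicians
# usually quote.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: odds ratio = {np.exp(coef):.2f}")
```

The table's caveat is visible here too: ensembling several such models would likely improve accuracy, but the combined coefficients would no longer have this direct reading.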