Letter

CMAJ. 2020 Mar 16;192(11):E290. doi: 10.1503/cmaj.74722

Artificial intelligence isn’t

David M. Burns

PMCID: PMC7083545; PMID: 32179541

Seeking to compare the reasoning of human and artificial intelligence (AI) in the context of medical diagnosis is an overly optimistic anthropomorphism. The term AI, as used to describe machine learning algorithms employed in this domain, is itself a misnomer. This is apparent when comparing modern machine learning algorithms based on artificial neural networks to nonneural algorithms (e.g., logistic regression). Unfortunately, this comparison was not made by the authors of the CMAJ Analysis article.1

Logistic regression, established in the 1800s, is the machine learning algorithm most commonly applied to structured medical data for diagnostic and prognostic purposes (e.g., the Framingham Risk Score and the Kocher criteria). The same nomenclature of “learning” or “training” applies equally well to this algorithm, which has been in use for well over a century. Simply put, machine learning algorithms are mathematical formulae with free parameters derived retrospectively from clinical data. These formulae are not intelligent according to even the most generous of definitions, and they have no capacity for reasoning.
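As a concrete illustration, the standard logistic regression model for clinical predictors x_1, …, x_k can be written as

\[
\hat{p} = \sigma\left(\beta_0 + \beta_1 x_1 + \dots + \beta_k x_k\right), \qquad \sigma(z) = \frac{1}{1 + e^{-z}},
\]

where \hat{p} is the predicted probability of the outcome and the coefficients \beta_0, …, \beta_k are the free parameters. “Training” the model means nothing more than estimating these coefficients, typically by maximum likelihood, from a retrospective clinical data set.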

It is notable that nonneural machine learning algorithms are still the most accurate for structured clinical data and continue to dominate the field. Neural network algorithms do not provide intelligence, but they do provide the capacity to model more complex unstructured data (i.e., natural language, images and time series) and incorporate this information into predictive tools. This capability for modelling complex inputs is the most compelling advantage of neural network algorithms.

However, the downside of this complexity is that neural network models are typically uninterpretable, meaning humans cannot understand or explain how a prediction is derived from the clinical data. The greatest risk in deploying neural network algorithms, or any machine learning algorithm with limited interpretability, is assigning too much trust to them and ignoring the potential for unknown confounders or biases; an imaging model, for example, may learn to recognize which scanner or ward sicker patients tend to come from rather than the pathology itself. Such confounders and biases can harm patients and are notoriously difficult to identify and correct for. By conflating human and machine intelligence, we further increase this risk.

Fortunately or unfortunately, true AI with the capacity for reasoning remains in the realm of science fiction, and we should not pretend otherwise. Although the technology currently available offers many benefits, it also has real capacity for harm when used inappropriately.

Footnotes

Competing interests: None declared.

Reference

1. Pelaccia T, Forestier G, Wemmert C. Deconstructing the diagnostic reasoning of human versus artificial intelligence. CMAJ 2019;191:E1332–5.
