Dear Sir,
We would like to compliment Finette et al.1 on the progress they have made in developing a frontline health worker mHealth assessment platform for children 2–60 months of age and on their innovative study design. Mobile health platforms such as theirs have tremendous potential to help bridge the shortage of skilled health-care personnel in low- and middle-income countries (LMICs). However, they should be developed using rigorous evidence and a transparent process to engender confidence and foster adoption in LMICs. We are therefore concerned about the lack of reporting transparency and detail regarding the clinical decision process implemented within the platform. The platform uses 42 clinical data points that are interpreted based on the WHO Integrated Management of Childhood Illness and integrated community case management protocols and “other” evidence-based data points. The individual risk assessments are based on “physician-based logic” and “Bayesian weighting and cluster pattern analysis.” No detail or references are provided on the processes used to produce this physician-based logic or on how it was validated. Indeed, the WHO protocols were not written for machine implementation and require a significant degree of interpretation before they can be implemented within computer logic. It is, therefore, important that the logic used to implement these protocols, and the integration of this logic with the physician-based logic, be described. To further confound interpretation, the algorithm also appears to have been updated during the course of the study.
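To illustrate the degree of interpretation required, the minimal sketch below encodes a single IMCI-style fast-breathing rule and a simple weighted combination of rule outputs. The age bands, thresholds, missing-data handling, and weights are our own illustrative assumptions and are not drawn from the platform described by Finette et al.

```python
# Illustrative only: a hypothetical machine encoding of an IMCI-style
# "fast breathing" rule and a made-up weighting step. The age bands,
# thresholds, missing-data handling, and weights are assumptions for
# illustration; they are not the logic of the MEDSINC platform.
from typing import Dict, Optional


def fast_breathing(age_months: float, resp_rate: Optional[float]) -> Optional[bool]:
    """Return True/False for the rule, or None if it cannot be evaluated.

    Even one rule forces choices the published description leaves open:
    exact band boundaries, units, and how to treat a missing measurement.
    """
    if resp_rate is None or age_months < 2:
        return None                     # explicit choice: rule not evaluable
    if age_months < 12:
        return resp_rate >= 50          # breaths/min, assumed 2-11 month band
    return resp_rate >= 40              # breaths/min, assumed 12-59 month band


def weighted_risk(results: Dict[str, Optional[bool]],
                  weights: Dict[str, float]) -> float:
    """Combine rule outputs with hypothetical weights into a 0-1 score.

    How real weights were derived, and how missing rules are handled,
    is exactly the detail that needs to be reported and validated.
    """
    score = total = 0.0
    for name, result in results.items():
        if result is None:
            continue                    # another undocumented but necessary choice
        total += weights[name]
        score += weights[name] * float(result)
    return score / total if total else 0.0


if __name__ == "__main__":
    rules = {"fast_breathing": fast_breathing(9, 54), "chest_indrawing": False}
    print(weighted_risk(rules, {"fast_breathing": 0.6, "chest_indrawing": 0.4}))
```

Even this toy example involves several implementation decisions that sit between a paper protocol and executable logic, each of which should be reported so that others can reproduce and validate the resulting system.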
Patients, providers, and funders are rightly keen to adopt new technologies to overcome limitations such as lack of access and inadequate training. However, the allure of significant positive potential, which has driven the very rapid growth in medical applications, should not trump a rigorous, evidence-based approach to developing and adopting these new technologies. The risks of overpromising without adequate validation are amply evidenced by the recent tragic experience of International Business Machines Watson2 and should be a cautionary tale for us all. We, therefore, need to uphold rigorous processes of development, reporting,3 and regulation. The importance of fidelity and rigor to performance has not gone unnoticed by authorities, including the World Health Organization/International Telecommunication Union4 and the Food and Drug Administration, in a new draft framework for software as a medical device.5 We would also encourage, for clinical algorithms, the open, unbiased, and transparent approach that is being promoted for clinical guidelines.6
We applaud and would strongly support an evidence-based, data-driven, and algorithmic approach to revolutionizing the current WHO protocols and significantly improving the outcomes of children around the world. However, a lack of transparency, and potential bias due to personal and commercial interests, could introduce confusion and mistrust between patients and providers, derailing worthy initiatives by tarnishing their reputation and significantly limiting their impact. Before a platform such as that of Finette and colleagues is integrated into a clinical setting, we would recommend that well-established standards for model development, transparent reporting, and validation (both internal and external) be applied. We will then be able to confidently harness the growing potential of computing power and machine learning to produce useful and safe tools at the bedside.
REFERENCES
1. Finette B, et al., 2019. Development and initial validation of a frontline health worker mHealth assessment platform (MEDSINC®) for children 2–60 months of age. Am J Trop Med Hyg 100: 1556–1565.
2. Strickland E, 2019. IBM Watson, heal thyself: how IBM Watson overpromised and underdelivered on AI health care. IEEE Spectr 56: 24–31.
3. Collins GS, Moons KGM, 2019. Reporting of artificial intelligence prediction models. Lancet 393: 1577–1579.
4. Wiegand T, Krishnamurthy R, Kuglitsch M, Lee N, Pujari S, Salathe M, Wenzel M, Xu S, 2019. WHO and ITU establish benchmarking process for artificial intelligence in health. Lancet 394: 9–11.
5. FDA, 2019. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)—Discussion Paper and Request for Feedback. Available at: https://www.regulations.gov/docket?D=FDA-2019-N-1185. Accessed May 29, 2019.
6. Incze M, Ross JS, 2019. On the need for (only) high-quality clinical practice guidelines. JAMA Intern Med 179: 561.