Dear Editor,
In their commentary entitled “Unveiling the Black Box: Imperative for Explainable AI in Cardiovascular Disease (CVD) Prevention”, Wu et al. rightly emphasise the need for explainability and transparency in artificial intelligence (AI) systems.1 We agree wholeheartedly: these qualities are essential for healthcare workforce buy-in, accountability to patients, and the delivery of equitable care. In our earlier article in this journal, we described how AI is being incorporated into a CVD prevention service at the National University Heart Centre Singapore (NUHCS).2 van Royen et al. have previously proposed key quality criteria for AI-based models in CVD management.3 In this regard, the AI tools used in our healthcare system aim for full transparency, with accessible, comprehensive clinical and social data as the foundation for AI modelling. Aggregated healthcare information is made accessible at the national level, software and code are shared across healthcare institutions, and aggregate population trends are published regularly.2
Our AI models focus on clearly defined clinical use cases, with equally clear outcomes and intended uses, such as initiating the necessary medications, at recommended doses, to reduce cardiovascular risk based on a patient's risk factors.2 Models developed for cardiovascular risk prediction are trained on large, population-level local datasets and undergo rigorous internal and external validation, including benchmarking against existing risk-scoring systems.4
We believe that AI systems built on such a framework empower discovery and insight, provided we are careful not to reject findings simply because we cannot yet explain them. Human Go players have been observed to make more novel, previously unseen moves and higher-quality decisions since the introduction of the deep learning AI program AlphaGo.5 We aspire to explain the novel suggestions and correlations that AI produces.
Explainable AI models must also remain embedded in a strong patient-physician relationship, helping patients better understand their health while freeing physicians to tailor guideline-directed clinical management to the individual circumstances and beliefs of the people we treat.
Declaration of interests
All authors declare no financial or personal relationships with other people or organisations that could inappropriately influence their work.
References
1. Wu Y., Lin C. Unveiling the black box: imperative for explainable AI in cardiovascular disease prevention. Lancet Reg Health West Pac. 2024.
2. Dalakoti M., Wong S., Lee W., et al. Incorporating AI into cardiovascular diseases prevention: insights from Singapore. Lancet Reg Health West Pac. 2024;48. doi: 10.1016/j.lanwpc.2024.101102.
3. van Royen F.S., Asselbergs F.W., Alfonso F., Vardas P., van Smeden M. Five critical quality criteria for artificial intelligence-based prediction models. Eur Heart J. 2023;44(46):4831–4834. doi: 10.1093/eurheartj/ehad727.
4. Lim C., Hilal S., Ma S., et al. Recalibrated Singapore-Modified Framingham Risk Score 2023 (SG-FRS-2023). https://bpb-us-w2.wpmucdn.com/blog.nus.edu.sg/dist/4/6173/files/2023/10/2023_Recalibrated_Singapore-Modified_Framingham_Risk_Score_SG-FRS-2023_report.pdf
5. Shin M., Kim J., van Opheusden B., Griffiths T.L. Superhuman artificial intelligence can improve human decision-making by increasing novelty. Proc Natl Acad Sci USA. 2023;120(12). doi: 10.1073/pnas.2214840120.
