The Lancet Regional Health: Western Pacific

Letter. 2024 Jul 11;48:101146. doi: 10.1016/j.lanwpc.2024.101146

Unveiling the black box: imperative for explainable AI in cardiovascular disease (CVD) prevention–author reply

Mayank Dalakoti a,b,c,e, Scott Wong d,e, Roger Foo a,b
PMCID: PMC11296039  PMID: 39099599

Dear Editor,

In their commentary entitled “Unveiling the Black Box: Imperative for Explainable AI in Cardiovascular Disease (CVD) Prevention”, Wu et al. rightly emphasise the need for explainability and transparency in artificial intelligence (AI) systems.1 We agree wholeheartedly: these qualities are essential for healthcare workforce buy-in, accountability to patients, and the delivery of equitable care. In our earlier article in this journal, we discussed the ways in which AI is being incorporated into a CVD prevention service at the National University Heart Centre Singapore (NUHCS).2 van Royen et al. have previously proposed key quality criteria for AI-based models in CVD management.3 In the same spirit, the AI tools used in our healthcare system aim for full transparency, with accessible, comprehensive clinical and social data as the foundation for AI modelling. Aggregated healthcare information is made accessible at the national level, software and code are shared across healthcare institutions, and aggregate population trends are published regularly.2

Our AI models focus on clearly defined clinical use cases, with similarly clear outcomes and intended usage, such as initiating necessary medications, at the recommended doses, to reduce cardiovascular risk based on a patient's risk factors.2 Any models developed for cardiovascular risk prediction are trained on large, local, population-level datasets and undergo rigorous internal and external validation, including benchmarking against existing risk scoring systems.4
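To make concrete what benchmarking against an existing risk score can look like, the minimal sketch below compares the discrimination (ROC AUC) of a candidate model and a fixed baseline score on held-out data. It is purely illustrative: the data are synthetic, the coefficients hypothetical, and it does not represent the NUHCS pipeline, models, or datasets described above.

```python
# Illustrative sketch: benchmarking a candidate CVD risk model against an
# existing risk score by discrimination (ROC AUC). Synthetic data only;
# all variable names and coefficients are hypothetical assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic cohort: age, systolic BP, LDL cholesterol, smoking status.
n = 5000
X = np.column_stack([
    rng.normal(60, 10, n),    # age (years)
    rng.normal(130, 15, n),   # systolic blood pressure (mmHg)
    rng.normal(3.5, 1.0, n),  # LDL cholesterol (mmol/L)
    rng.integers(0, 2, n),    # current smoker (0/1)
])
# Synthetic event labels driven by the same risk factors.
logit = -12 + 0.08 * X[:, 0] + 0.03 * X[:, 1] + 0.4 * X[:, 2] + 0.7 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Candidate model: a simple logistic regression stands in for the AI model.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
model_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# Stand-in for an existing risk score: a fixed, pre-specified linear
# combination of the same risk factors (hypothetical coefficients).
baseline_score = 0.05 * X_test[:, 0] + 0.02 * X_test[:, 1] + 0.3 * X_test[:, 2]
baseline_auc = roc_auc_score(y_test, baseline_score)

print(f"Candidate model AUC: {model_auc:.3f}")
print(f"Existing score AUC:  {baseline_auc:.3f}")
```

In practice, external validation would use an independent cohort rather than a random split, and benchmarking would extend beyond discrimination to calibration and clinical utility.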

We believe that AI systems built on such a framework empower discovery and insight, provided we remain cautious not to reject findings that we cannot yet explain. Human players of Go have been recorded making more novel, previously unobserved moves and higher-quality decisions after the introduction of the deep learning AI program AlphaGo.5 We aspire similarly to explain the novel suggestions and correlations that AI surfaces.

Explainable AI models must also remain embedded in a strong patient-physician relationship, allowing patients to better understand their health while allowing physicians to focus on tailoring guideline-directed clinical management to the individual circumstances and beliefs of the people we treat.

Declaration of interests

All authors declare no financial or personal relationships with other people or organizations that could inappropriately influence their work.

References

