Biophysics Reviews. 2024 Mar 27;5(1):011304. doi: 10.1063/5.0176850

Deep learning from latent spatiotemporal information of the heart: Identifying advanced bioimaging markers from echocardiograms

Amanda Chang 1, Xiaodong Wu 2, Kan Liu 3,a)
PMCID: PMC10978053  PMID: 38559589

Abstract

A key strength of echocardiography lies in its integration of comprehensive spatiotemporal cardiac imaging data in real time, to aid frontline or bedside patient risk stratification and management. Nonetheless, its acquisition, processing, and interpretation are all known to be subject to heterogeneity arising from their reliance on manual and subjective human tracings, which challenges workflow and protocol standardization and final interpretation accuracy. In the era of advanced computational power, utilization of machine learning algorithms for big data analytics in echocardiography promises reductions in cost, cognitive errors, and intra- and inter-observer variability. Novel spatiotemporal deep learning (DL) models integrate temporal information from unlabeled pixel-level echocardiographic data through adaptive semantic spatiotemporal calibration to construct personalized 4D heart meshes, assess global and regional cardiac function, detect early valve pathology, and differentiate uncommon cardiovascular disorders. Meanwhile, data visualization of spatiotemporal DL prediction models helps extract latent temporal imaging features to develop advanced imaging biomarkers for early disease stages and advances our understanding of pathophysiology to support the development of personalized prevention or treatment strategies. Since portable echocardiograms are increasingly used as point-of-care imaging tools to aid rural care delivery, these new spatiotemporal DL techniques show potential for streamlining echocardiographic acquisition, processing, and data analysis to improve workflow standardization and efficiency, and for providing real-time risk stratification and decision-support tools, prompting the building of new imaging diagnostic networks to enhance rural healthcare engagement.

INTRODUCTION

Echocardiography has become one of the most robust imaging modalities in diagnostic cardiology. A key strength of echocardiography over tomographic imaging lies in its integration of comprehensive spatiotemporal cardiac imaging data in real time, to help frontline or bedside patient risk stratification and management. Being noninvasive and portable, echocardiograms are also increasingly used as point-of-care imaging tools to aid rural patient care.1 Despite the importance of echocardiography in daily practice, its acquisition, processing, and interpretation are all known to be subject to heterogeneity and variance from their reliance on manual and subjective human tracings,2,3 which challenges echocardiographic workflow and protocol standardization and undermines readers' interpretation accuracy. Meanwhile, as echocardiography orders grow briskly in contemporary clinical practice, there is a markedly increased demand for well-trained sonographers and cardiologists to perform, process, and analyze studies in a timely manner, which in turn increases workload pressures, physician burnout, and echocardiography reporting errors.4

The application of machine learning (ML) automation techniques in echocardiography is a burgeoning field with inspiring potential for streamlining echocardiographic workflows and characterizing diagnostic and prognostic imaging information. Novel deep learning (DL) algorithms integrate temporal information from unlabeled pixel-level echocardiographic data through adaptive semantic spatiotemporal calibration to construct personalized 4D heart meshes,5 assess global cardiac function,6 identify regional wall motion changes,7–9 detect early valve pathology,10 and even quantify coronary vessel abnormalities.11 As imaging segmentation accuracy improves,5,12 echocardiographic DL models have been used to diagnose and differentiate individual cardiovascular disorders, including acute myocardial infarction (AMI) and takotsubo syndrome (TTS).7,8 DL models have also been utilized to streamline echocardiographic acquisition, processing, and interpretation to improve workflow efficiency.13 Now that a wearable DL echocardiography patch has been developed to robotically obtain and process raw cardiac images,14 fully automated real-time imaging diagnostic pipelines have come one step closer to real-world clinical practice. This raises the question: can echocardiographic technology automation help us increase the effectiveness and efficiency of regional imaging diagnostic networks and enhance rural patient care engagement?

ML IN BIOIMAGING PRACTICE

ML is a robust computational approach that includes a collection of statistical learning and modeling techniques that learn from established data and make predictions on unseen or new data. As opposed to relying on explicit prior programming, ML improves its own predictive abilities by automatically learning from massive amounts of data and forms various algorithms to undertake its specific task.15,16 For supervised learning, ML algorithms are trained on labeled datasets to map known inputs to outputs. Examples of supervised learning include support vector machines (SVM), naïve Bayes (NB), random forest (RF), and k-nearest neighbors (KNN),17 among others. In contrast, unsupervised learning processes raw, unlabeled datasets to find patterns for classification. Examples of unsupervised learning include hierarchical clustering, k-means clustering, Gaussian mixture models, and others.18 Convolutional neural networks (CNNs) are a type of neural network that extracts specific information from the data through "filters," then "pools" the data into aggregate values, and finally assigns a weight (importance) to each region or object in the dataset to identify patterns, particularly in images.19
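As a toy illustration of the filter, pool, and weight sequence just described, the following sketch (plain NumPy, with made-up numbers rather than a trained network) applies one convolutional filter, one max-pooling step, and one set of region weights to a tiny 4×4 "image":

```python
import numpy as np

# Toy illustration (not a trained model): one convolution "filter", one
# pooling step, and one "weight" per pooled region, mirroring the
# filter -> pool -> weight description above.
image = np.arange(16, dtype=float).reshape(4, 4)  # stand-in for a 4x4 image
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])                  # a 2x2 difference filter

# Convolution (valid padding): slide the filter over every 2x2 patch.
conv = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        conv[i, j] = np.sum(image[i:i + 2, j:j + 2] * kernel)

# 2x2 max pooling (stride 1) aggregates neighboring filter responses.
pooled = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        pooled[i, j] = conv[i:i + 2, j:j + 2].max()

# One weight (importance) per pooled region yields a single score that a
# final classification layer would consume.
weights = np.array([[0.5, -0.2],
                    [0.1, 0.3]])
score = float(np.sum(pooled * weights))  # -3.5 for this toy input
```

In a real CNN the filter and region weights are learned from data; here they are fixed so each stage stays explicit.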

Once an ML algorithm is selected and designed, it must be extensively trained on an expansive training dataset—supervised learning for labeled data and unsupervised learning for raw, unlabeled data—subsequently evaluated on a validation set, and finally tested on a test dataset. For instance, if the goal is for an algorithm to identify a specific dog breed, one can use supervised learning to define an output label for a data input (e.g., “golden retriever” output label for an associated input of a golden retriever). In contrast, if the goal is to find new patterns in a dataset, unsupervised learning can be used. In the case of dog breeds, the dog pictures will be clustered by breed using certain features that are not predetermined (this could be ear shape, color, tail length, or it could be a completely unknown pattern that the ML algorithm is using to categorize the dog breeds). As stated above, there are numerous ways to execute supervised (SVM, NB, RF, etc.) and unsupervised (hierarchical clustering, k-means clustering, Gaussian mixture models, etc.) learning models. Furthermore, semi-supervised learning utilizes partially labeled data while still allowing the ML algorithm to identify novel patterns. Once the learning phase has been completed on the training set, the ML algorithm is then evaluated on a validation set (typically significantly smaller than a training set), and the results are used to refine the ML algorithm. Once the developers are satisfied with the ML model's performance on the validation set, the ML algorithm is tested on the test dataset as a final “gold standard” of the ML model's performance.
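The training/validation/test workflow described above can be sketched in a few lines. This is a minimal, hypothetical example using synthetic 2-D points and a nearest-centroid classifier as a stand-in for a real supervised model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labeled data standing in for an annotated image dataset.
n = 300
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Split into a training set, a (typically much smaller) validation set,
# and a held-out test set used only once as the final check.
idx = rng.permutation(n)
train, val, test = idx[:200], idx[200:250], idx[250:]

# "Training": a nearest-centroid classifier learns one centroid per class.
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])

def predict(points):
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

val_acc = float((predict(X[val]) == y[val]).mean())     # used to refine the model
test_acc = float((predict(X[test]) == y[test]).mean())  # final "gold standard"
```

Only the training rows touch the model's parameters; the validation score guides refinement, and the test score is reported once at the end.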

Often, once an ML model has been validated and tested, the algorithm can be repurposed for a new task via transfer learning. This can be beneficial in many ways, as it saves time and resources spent on coding/debugging, theoretically improves the efficiency of model training, and reduces the need for large datasets, since the repurposed algorithm has presumably already been trained on a similar large dataset. Transfer learning is particularly helpful when the original and new tasks are similar and can greatly improve development efficiency. However, there are potentially important disadvantages when transfer learning is applied incorrectly. For instance, the original model may not be designed specifically for the new task at hand, which may result in suboptimal performance or decreased computational efficiency. Therefore, prudence is required when deciding whether to use transfer learning for a specific task.
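A minimal sketch of the transfer-learning idea, under the assumption that a frozen feature extractor (here a PCA basis learned on a large "source" dataset, standing in for pretrained network layers) is reused on a small "target" task where only a lightweight classifier head is fit:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Source" task with plentiful data: learn a feature extractor. A PCA
# basis stands in for pretrained convolutional layers (hypothetical).
scales = np.array([5.0, 4.0, 3.0] + [1.0] * 7)   # first 3 dims dominate
X_source = rng.normal(size=(500, 10)) * scales
X_source -= X_source.mean(axis=0)
_, _, Vt = np.linalg.svd(X_source, full_matrices=False)
extractor = Vt[:3].T   # frozen: maps 10-D inputs to 3-D features

# "Target" task with only 40 labeled examples: reuse the frozen extractor
# and fit just a tiny classifier head (one centroid per class).
X_target = rng.normal(size=(40, 10))
y_target = (X_target[:, 0] > 0).astype(int)
feats = X_target @ extractor
centroids = np.stack([feats[y_target == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    f = X @ extractor
    d = np.linalg.norm(f[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

train_acc = float((predict(X_target) == y_target).mean())
```

The target task never re-fits the extractor, which is what saves data and compute; the downside noted above appears when the frozen features do not suit the new task.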

Presently, the notable medical applications of ML rely mainly on its ability to make diagnostic and prognostic predictions by integrating features extracted from a vast amount of patient medical records, including demographic information, physical exam findings, laboratory values, and imaging results (Fig. 1).15,20,21 This allows ML to make disease diagnoses and outcome predictions that are potentially as accurate as, or more accurate than, those of trained healthcare professionals.22–24 Meanwhile, ML improves the efficiency of data processing and analysis, contributing to quicker therapeutic decision making and higher-quality patient care.25–27 As a broad generalization, this is achieved by developing CNNs that take echocardiography videos as input, extract specific features (e.g., textures, locations, and/or brightness of pixels) via filters, pool and weigh the extracted features for importance, and classify the image appropriately. As a specific example, to identify echocardiography views, Zhang et al. trained a 13-layer CNN that evaluated 10 randomly selected image frames from 14 035 echocardiography videos and then averaged the probability that each video was in a particular echocardiography view for classification.27 Following view classification, Zhang et al. similarly trained CNN models for image segmentation and disease classification (hypertrophic cardiomyopathy, cardiac amyloid, or pulmonary arterial hypertension). Likewise, similar image-based CNNs have been used to identify cardiac structures and functional patterns on echocardiograms to improve the automation and efficiency of view identification, heart chamber segmentation, and global function assessment.27–30 While these are all very promising applications of ML in still-frame echocardiography, there is likely a staggering amount of untapped diagnostic and prognostic potential in the overlooked temporal information inherently present in echocardiography videos.
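The frame-averaging scheme used for video-level view classification can be sketched as follows; the per-frame probabilities and view names here are illustrative stand-ins for actual CNN outputs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-frame CNN outputs: softmax probabilities over 4 views
# for 10 randomly selected frames of one video (view names illustrative).
views = ["A4C", "A2C", "PLAX", "PSAX"]
frame_logits = rng.normal(size=(10, 4))
frame_probs = np.exp(frame_logits)
frame_probs /= frame_probs.sum(axis=1, keepdims=True)

# Video-level prediction: average the frame probabilities, then take the
# most probable view, per the frame-averaging scheme described above.
video_probs = frame_probs.mean(axis=0)
predicted_view = views[int(video_probs.argmax())]
```

Averaging over frames makes the video-level call more robust than trusting any single frame, though it still discards frame ordering, i.e., the temporal information discussed next.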

FIG. 1.

Current and future AI applications in echocardiography. Traditional ML techniques have focused on quality assurance, image segmentation, view identification, and global function assessment based on still-frame echocardiographic images. In contrast, novel spatiotemporal DL models utilize temporal information and have had success in motion assessment, disease classification, and heart mesh construction. Likely future applications of spatiotemporal DL models include AI-guided image acquisition, wearable echocardiography patches, improved workflow efficiency and standardization, and imaging diagnostic networks. AI: artificial intelligence. ML: machine learning. DL: deep learning. RWM: regional wall motion.

NOVEL SPATIOTEMPORAL DL MODELS

Although DL research in its infancy was performed on still-frame echocardiography images, it was quickly recognized that classifying individual image frames in isolation limits the perception of temporal features during or between cardiac cycles, which compromises effective identification of advanced imaging biomarkers for the diagnosis and prognosis of cardiovascular diseases.6,31,32 For instance, many subclinical cardiovascular disorders involve early stage cardiac structural abnormalities that may not be readily apparent on static images, leaving readers uncertain about radiologic-pathologic correlation. Paradoxically, by the time structural abnormalities become evident, it is often too late to reverse the underlying pathology and myocardial remodeling with treatment. Of note, subvisual predictive information embedded in cardiac motion patterns can be uncovered with advanced learning and computational analysis.7,8,33 Characteristic spatiotemporal features highlight patient heterogeneity and help identify particular patient groups who could benefit from specific therapies during early disease stages.10,33 Recognizing early stage imaging phenotypes identifies opportunities for effective treatment before irreversible cardiac pathology occurs.

Recent technological advancements have empowered DL algorithms to analyze spatiotemporal information within or between consecutive images to recognize subtle myocardial motion/function changes from echocardiographic videos.6 ML algorithms can integrate temporal information for data analysis through either deep convolutional neural networks (DCNNs) or recurrent neural networks (RNNs; Fig. 2). DCNNs are a subtype of DL that extracts specific information from the data through "filters," then "pools" the data into aggregate values, and finally assigns a weight (importance) to each region or object in the dataset;19 DCNNs have typically been used to analyze images. In contrast, RNNs are artificial neural networks in which data nodes have connections to each other and can create a "cycle," thereby simulating temporal dynamics.19 In other words, RNNs effectively serve as a time benchmark for the static images analyzed by CNNs (simulating a "time" variable for analysis), and the combined spatiotemporal information is used for the overall ML classification. Effectively combining and leveraging DCNNs and RNNs in spatiotemporal modeling has helped identify unique early stage cardiac motion patterns through cumulative evaluation of the continuous and multi-dimensional movements of different parts of the myocardium, uncovering a predictive ability that is lost in human interpretation (Fig. 3).7,8,33 Other strategies such as a cardiac shape autoencoder and calculated spatiotemporal correspondence have also been tested, but the majority of the advancements at this time have been through CNNs and RNNs.12,34
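A minimal sketch of the RNN "cycle" over per-frame features, with random weights standing in for a trained model: the hidden state is carried from frame to frame, which is what lets the network encode temporal dynamics that a frame-by-frame CNN alone would miss:

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-frame feature vectors, e.g. the output of a CNN applied to each of
# 16 echo frames (random numbers stand in for real features/weights).
T, d_in, d_h = 16, 8, 5
frames = rng.normal(size=(T, d_in))
W_in = rng.normal(size=(d_in, d_h)) * 0.3   # input-to-hidden weights
W_rec = rng.normal(size=(d_h, d_h)) * 0.3   # recurrent ("cycle") weights

# The hidden state h carries information from frame to frame -- the
# cycle that lets the network represent temporal dynamics.
h = np.zeros(d_h)
for x_t in frames:
    h = np.tanh(x_t @ W_in + h @ W_rec)

# A linear readout of the final hidden state gives a clip-level score
# that a classifier would threshold.
w_out = rng.normal(size=d_h)
score = float(h @ w_out)
```

In a real spatiotemporal model, W_in and W_rec are learned jointly with the CNN feature extractor by backpropagation through time.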

FIG. 2.

Deep convolutional neural network (DCNN) and recurrent neural network (RNN) models. (a) DCNN model: convolution layers detect patterns in subregions of the input, and pooling layers reduce the overall number of parameters for computation by aggregating the data and assigning an importance or "weight." (b) The network architecture of a spatiotemporal DCNN model. (c) RNN model: outputs are generated for each input, and each output is cycled back into the same layer of the network to generate a conclusion. (d) The network architecture of an RNN model.

FIG. 3.

DL segmentation-based echocardiography diagnosis model for disease classification. Consists of two independent steps: deep learning-based segmentation that is not trained with diagnosis data and the deep classification model where the latent radiomic features are analyzed for TTS and AMI differentiation. TTS: takotsubo syndrome. AMI: acute myocardial infarction.

SPATIOTEMPORAL DL NETWORKS IN ECHOCARDIOGRAPHY WORKFLOW

Image acquisition

For both human readers and ML algorithms, the clinical diagnostic power of echocardiography requires a basic standard of image quality; however, image quality can be highly variable depending on the experience of the ultrasound operator. Routine echocardiography acquisition involves multiple views/windows, each of which depicts a different angle of the heart. Early ML techniques enabled automated image segmentation that rapidly integrates information from all views/windows to extract the necessary imaging features.27 CNN-based automated echocardiography quality assessment tools have been developed recently.35,36 A still-frame DCNN algorithm has shown greater accuracy than board-certified sonographers in identifying and classifying standard echocardiographic views while categorizing low-resolution and suboptimal images.28 A fused CNN architecture incorporating temporal information improved view classification accuracy compared to solely spatial information, reinforcing the positive impact of temporal data for ML applications.37

Research has been conducted to assess the feasibility of AI-assisted ultrasound probe guidance to reduce the need for highly trained operators in echocardiography acquisition. A commercial ultrasound artificial intelligence (AI) software package approved by the United States Food and Drug Administration utilizes several interconnected DL algorithms to simultaneously estimate image quality, the distance between the current and desired probe locations, and the corrective probe manipulation needed to improve diagnostic image quality.38 With its help, ultrasound-naïve nurses were able to obtain diagnostic-quality images for left ventricular size and function in 98.8% of patients, right ventricular size in 92.5% of patients, and pericardial effusion in 98.8% of patients.38 A similar study showed that ultrasound-naïve medical students could obtain diagnostic-quality images for left ventricular ejection fraction (LVEF) using the same software in 91% of attempts.39 By removing the experience barrier, these findings may have major implications for scenarios with limited availability of trained echocardiographers, such as rural settings or the COVID-19 pandemic. Recently, a wearable echocardiography patch has been developed to obtain and process raw cardiac images, which may bypass human manipulation altogether.14 AI technology has made echocardiography image acquisition more approachable, thereby increasing access to healthcare for rural patient populations.

Image processing

Integrating DL algorithms into echocardiography studies has enabled automatic quantification of the size, structure, and function of the heart chambers, which contributes to image processing efficiency and accuracy in quantifying chamber volumes, ventricular mass, LVEF, myocardial strain, and mitral valve annulus size.27,40–43 Kusunose et al. trained ten separate DCNNs to identify regional wall motion abnormalities (RWMA) and achieved accuracies comparable to board-certified cardiologists.44 Although their study was limited by a relatively small training dataset (n = 400) and did not undergo external validation, it showed the automatic quantification potential of DL networks.

Subsequently, Huang et al. developed another automated interconnected DL model with: (1) a 3-dimensional (2D+time) CNN for view selection and image quality control, (2) a U-net model for segmentation to annotate left ventricle location, and (3) a 3-dimensional CNN to evaluate RWMA from the four standard echocardiogram views.9 Their model incorporated temporal information for each of the 10 638 echocardiograms, and it achieved an AUC of 0.912 (95% CI 0.896–0.928) and 0.891 (95% CI, 0.834–0.948) for cross-validation and external validation, respectively.

Recently, Ouyang et al. constructed a CNN model with connections and spatiotemporal convolutions across image frames to predict cardiac systolic function.6 This model had variance comparable to humans with greater reproducibility and was able to predict LVEF with a mean absolute error of 4.1% and classify heart failure with reduced ejection fraction with an AUC of 0.97 and 0.96 for internal and external validation, respectively. By integrating temporal information, automating ML analysis, and validating the model externally, ML models may be one step closer to integration into real-world clinical practice. Of note, He et al. also conducted a randomized clinical trial based on their models to test the accuracy of AI vs sonographers for initial assessment of LVEF. AI assessments of LVEF were found to be noninferior to sonographers'; notably, cardiologists could not reliably distinguish which values were determined by AI and which by sonographers.13
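The two evaluation metrics cited here, mean absolute error for LVEF regression and AUC for reduced-EF classification, can be computed directly; the values below are illustrative toy numbers, not data from the cited studies:

```python
import numpy as np

# Toy predicted vs. reference LVEF values (%); illustrative numbers only.
pred = np.array([55.0, 38.0, 62.0, 30.0, 48.0, 58.0])
true = np.array([58.0, 35.0, 60.0, 33.0, 50.0, 55.0])
mae = float(np.mean(np.abs(pred - true)))  # mean absolute error

# AUC for detecting reduced EF (reference < 40%), computed as the
# probability that a random positive case outranks a random negative one.
label = (true < 40).astype(int)   # 1 = reduced EF
score = -pred                     # lower predicted EF -> higher risk score
pos, neg = score[label == 1], score[label == 0]
auc = float(np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg]))
```

The rank-based formulation makes explicit why AUC is insensitive to the choice of decision threshold, which matters when models are compared across cohorts.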

Image interpretation and reporting

Echocardiography image interpretation and diagnosis are heavily reliant on pattern recognition and pathology correlation. In a multi-tasking human mind, the visual cortex cannot process all the diverse information that impinges on vision at once; despite the wealth of spatiotemporal information in echocardiographic videos, human readers are subconsciously wired to prioritize easily identified "important" myocardial regions or motion features and disregard "irrelevant" visual stimuli in order to quickly parse complex scenes in real time, thereby neglecting other vital imaging information.45,46 In other words, human brains are biased toward selective myocardial areas and overt temporal features when identifying pathology: "the eye sees what it expects to see."46 In contrast, ML models systematically analyze every pixel and its corresponding changes and features for data analysis and classification. This provides a more objective means of discriminating cardiac conditions that does not depend on human experience or skillset.

Different CNNs have been developed for echocardiography image interpretation to distinguish visually similar conditions such as pathologic vs physiologic hypertrophic cardiomyopathy, constrictive pericarditis vs restrictive cardiomyopathy, and TTS vs acute myocardial infarction.7,8,27,47,48 Data visualizations of these models have shown the benefits of incorporating temporal information to improve diagnostic performance with comparable or superior accuracy compared to human readers.7,8 DL assistance with automated image interpretation may help improve readers' accuracy and efficiency when determining a diagnosis in clinical practice.

From a pragmatic standpoint, AI-assisted image analysis and interpretation can improve the efficiency and consistency of clinical workflow. Clinical echocardiography services are resource intensive and time-consuming, since they require well-trained sonographers to acquire images and perform measurements, and cardiologists must read a series of still-frame images and echocardiography videos before approving the final reports. With the increasing demand for echocardiograms, AI provides a possible counterbalance to the growing expert workload.13,49 Furthermore, AI-assisted clinical workflows may enhance the standardization of echocardiogram reporting, improving both efficiency and accessibility.13 While further investigations are needed before implementation into clinical practice, these results are promising for the potential benefits of AI-enhanced echocardiogram workflows in clinical care.

SPATIOTEMPORAL DL MODELING IN REAL-WORLD ECHOCARDIOGRAPHY PRACTICE

Diagnostic accuracy

Early spatiotemporal DL models have shown potential for identifying early stage cardiac valvular disorders. For example, rheumatic heart disease (RHD) is prevalent in low-resource settings, and echocardiography remains the gold standard for diagnosis.50 Early detection of RHD can lead to effective intervention and treatment, thereby reducing overall mortality from this disease. Martins et al. developed a spatiotemporal DCNN for echocardiogram videos to screen for RHD, which showed significantly better accuracy than still-frame ML analysis, reinforcing the importance of temporal features for valvular disease diagnosis.33

To further explore the capability of spatiotemporal DL convolution to improve diagnostic accuracy for myocardial disorders, we recently trained and tested a series of spatiotemporal DCNNs on real-time echocardiographic videos of American patients with TTS and AMI, and validated their generalizability on external echocardiograms from eight medical centers in the United States.7 The 3-dimensional (2D+time) DCNN and recurrent neural network (RNN) significantly outperformed 48 human expert readers in differentiating TTS and anterior wall STEMI in both internal and external TTS patients by reducing human readers' erroneous judgment calls on TTS.7 Meanwhile, another spatiotemporal DCNN, established on echocardiograms from European TTS and AMI patients in the world's largest TTS registry (InterTAK Registry), obtained similar diagnostic accuracies.8 These studies have demonstrated the ability of spatiotemporal DL modeling in real-world practice to reduce reliance on human echocardiographic diagnosis, which increases the clinical relevance of AI in assisting non-expert readers with urgently needed disease diagnosis and triage.

Evidence is accumulating that DL modeling can be comparable or superior to humans in risk stratification and triage tasks.7,8,33 DCNNs have repeatedly demonstrated the potential to improve readers' diagnostic accuracy based on single- or limited-view 2D echocardiograms in multiple cardiovascular imaging studies,10,33 including our ongoing studies of acute coronary artery syndromes.7,8 Due to a paucity of automated resources for processing raw imaging data and a lack of consistent reporting of data quality measures, large-scale and standardized training imaging databases are often unavailable for uncommon cardiovascular disorders.51,52 As a consequence, human readers often have to rely on instinct or limited personal experience to make urgently needed diagnosis and management decisions in clinical practice, and inter-observer variations and errors may occur due to inherent subjectivity. A major benefit of direct image interpretation with DCNNs is that predictions can be generated automatically from imaging data alone, avoiding subjectivity in those rare diseases.9,27

Risk stratification/prognostication

Spatiotemporal DCNN applications extend beyond disease diagnosis to encompass disease prognostication and risk stratification. Traditional survival predictions from echocardiography are limited, typically being based on parametric imaging features, such as LVEF. Novel spatiotemporal DCNN models enable automated processing mechanisms to integrate many more echocardiographic temporal features to improve accuracy in predicting long-term prognosis.53–55 For instance, Shad et al. designed a 2-stream 3-dimensional CNN model to predict postoperative right ventricular (RV) failure via preoperative echocardiography videos for early identification of patients with likely RV failure following LV assistance device implantation.54 Meanwhile, Ulloa Cerna et al. trained a CNN using raw pixel data from echocardiographic videos to predict one-year all-cause mortality which outperformed predictions using solely echocardiography-derived measurements or echocardiography-derived measurements with clinical data.55 With improved information of the relevant variables and their intrinsic relationships, more accurate predictions can be made to aid in risk stratification and reduction, so as to support targeted prevention and treatment plans.

Compared to spatial imaging information, temporal imaging information can be robust and less dependent on acquisition skill. Extracting latent temporal imaging features from the heart can be clinically valuable and feasible, particularly for non-standardized intermediate off-axis and continuously rotational and sweeping views in "low" quality 2D echocardiograms.28,37 DL models may evolve to the point where technically suboptimal images are sufficient for first-pass classification and risk stratification to support frontline triage.28,37

Novel disease classification/phenomapping

An emerging application of ML is generating novel categories for disease classification—phenomapping—based on existing clinical, imaging, and laboratory data. In particular, there has been an emphasis on improving the stratification of heart failure with preserved ejection fraction (HFpEF), which traditionally shows variable treatment response due to its high heterogeneity. To date, studies have combined echocardiographic data (manually extracted from echocardiography videos) with other clinical/laboratory data to identify anywhere from two to six distinct phenogroups.56–58 Integrating these complex clinical and diagnostic features allows for more refined disease classification; by factoring in more temporal features, finer-grained classifications may be developed, potentially contributing to specific therapeutic strategies and improved outcomes.
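A minimal sketch of unsupervised phenogroup discovery on a synthetic patient feature table, using a from-scratch k-means (k = 2); the planted group structure and feature values are entirely artificial:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic patient feature table (rows = patients; the 3 columns stand in
# for echo-derived and clinical variables). Two phenogroups are planted.
n = 200
group = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 3)) + group[:, None] * np.array([4.0, -4.0, 3.0])

# Standardize features, then run k-means (k = 2) from scratch.
X = (X - X.mean(axis=0)) / X.std(axis=0)
far = int(np.argmax(np.linalg.norm(X - X[0], axis=1)))
centers = np.stack([X[0], X[far]])          # spread-out initial centers
for _ in range(20):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    centers = np.stack([X[assign == k].mean(axis=0) if np.any(assign == k)
                        else centers[k] for k in (0, 1)])

# Agreement between recovered clusters and the planted phenogroups,
# up to label permutation.
acc = max(float((assign == group).mean()), float((assign != group).mean()))
```

In real phenomapping the number of clusters is itself unknown and is chosen with stability or fit criteria, and the recovered groups must then be checked for clinical meaning.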

Moreover, while many of the previous application examples are based on current understanding and knowledge of cardiovascular diseases, perhaps the most important potential of spatiotemporal DL is its ability to detect hidden myocardial motion patterns within echocardiographic videos that are not yet known. Such discoveries will operationalize the field of phenomapping and enable further discovery of novel disease subclasses. This advancement is especially important in cardiovascular diseases that carry a great deal of complexity in both understanding their physiological attributes as well as their therapeutic options. One example of this need is TTS, which has a similar long-term prognosis as AMI despite its relatively improved cardiac function.59 Studies have shown that underlying pathophysiologic mechanisms in TTS may result in divergent mortality outcomes.60,61 Further spatiotemporal DL research may identify novel classification categories among overall TTS patients, leading to early intervention and distinct management strategies among TTS subgroups to improve long-term outcomes.

CHALLENGES AND TROUBLESHOOTING

Despite promising progress thus far, there are major challenges that need to be surmounted before novel DL models can be effectively transformed and integrated into routine clinical echocardiographic practice.

First, as with all ML technologies, the training phase of spatiotemporal DL model construction is data-hungry: prediction model performance scales with the number of training examples. Creating spatiotemporal DL algorithms requires large-scale training data based on echocardiographic videos in structured formats. ML models are vulnerable to data sampling bias and may develop errors by assimilating biased annotations during training. Uniquely for echocardiography, the dataset is particularly susceptible to poor standardization from low image quality, interoperator variability, or differences in ultrasound machines.38 Well-accepted nomenclature, standardized protocols, reference labels, and uniform echocardiography data storage methods are often lacking across medical centers. Unintended biases can be introduced by differences in echocardiographic acquisition (hardware) and processing (software) systems, and the lack of standardization of equipment and methodologies can produce artifacts that confound a DL model. Additionally, interoperator and inter-interpreter variability, an "Achilles' heel" of traditional echocardiographic techniques, remains a pitfall that undermines the building and application of spatiotemporal DL models in echocardiography practice. Therefore, two components are key to the successful implementation of novel DL models in clinical echocardiographic services: generating large amounts of echocardiography data from individuals of all backgrounds, with the ability to homogenize data and alleviate variability; and improving architecture designs so that algorithms are more compatible with echocardiographic data.
When utilizing integrated echocardiography, tomographic imaging (CT, CMR, and PET), and clinical data to build a comprehensive DL algorithm for clinical diagnosis and prognostication of certain cardiac disorders, there is the added challenge of simultaneously processing different domains/formats of imaging data. With more variables and dimensions, the time needed to process each data point increases accordingly, which may not be pragmatic in a clinical setting or a prospective trial design. Although methodologies have been proposed to address these problems (e.g., attention mechanisms),62,63 novel automated image segmentation techniques are critically needed.64–67 In addition, increased computational power (e.g., quantum computation) may help shorten the analytical process and improve predictive ability.

Second, while some studies have broached mortality prediction with spatiotemporal DCNNs, they are not currently powered to assess long-term outcomes.55,56 More robust patient data with longer follow-up would empower spatiotemporal DL models to generate more accurate mortality and outcome predictions from echocardiography videos to guide treatment decisions, particularly if pathologic disease states are identified early. Accordingly, novel DL models have been increasingly studied for risk stratification. Cheng et al. trained a 3-dimensional CNN to detect impaired LV function and aortic valve regurgitation from apical 4-chamber videos (without Doppler), a distinction not obvious to the human eye, enabling rapid evaluation of disease severity without a comprehensive advanced echocardiographic examination.10 Similarly, Yuan et al. developed a DL model to predict coronary artery calcification (a marker of the extent of coronary artery disease) from echocardiography with performance comparable to computed tomography.11 Additionally, a novel DL model has been shown to construct a heart mesh (3 dimensions plus time) from 2-dimensional echocardiography data; the mesh can support surgical/interventional planning as well as automated extraction of candidate risk stratification factors such as LVEF, myocardial mass, and changes in chamber volumes.5 As more advanced imaging biomarkers are gleaned from echocardiography videos, prognostication and risk stratification in frontline care are likely to improve.
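Once per-frame chamber volumes are available (e.g., from a reconstructed heart mesh as above), deriving LVEF is a direct application of the standard formula EF = (EDV − ESV)/EDV. A minimal sketch with hypothetical volume values:

```python
def ejection_fraction(volumes):
    """LVEF (%) from a cardiac cycle of LV volumes (mL):
    end-diastolic volume is the cycle maximum, end-systolic the minimum."""
    edv, esv = max(volumes), min(volumes)
    return 100.0 * (edv - esv) / edv

# hypothetical per-frame LV volumes extracted from a 4D heart mesh
vols = [120.0, 110.0, 84.0, 60.0, 72.0, 105.0]
ef = ejection_fraction(vols)  # EDV = 120 mL, ESV = 60 mL -> EF = 50%
```

Real pipelines average over several beats and apply quality checks before reporting EF; the single-cycle min/max here is the simplest possible version.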

Third, DL has traditionally been viewed as a "black box": while the inputs and outputs of an algorithm are known, the exact features and reasoning used for classification are not. Understanding the "why" of a decision or classification helps appraise model competency and explore causative pathophysiology, and identifying the key features driving DL classification may also help researchers target specific areas for model improvement. Several methods attempt to decode the "black box," including correlation analysis against known features, saliency maps that highlight the image regions most important for a classification, and generative adversarial networks.68–71 Regardless, improved spatiotemporal DL data visualization and feature identification on echocardiography will assist readers by pointing out the contributing regions of interest and extracting latent temporal imaging features to support subsequent quantitative assessment.
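As one concrete, model-agnostic form of saliency mapping (a sketch with a toy scoring function, not any of the cited methods), occlusion sensitivity masks each region of an image and records how much the classifier's score drops; regions with large drops are the ones the model relied on:

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=4):
    """Occlusion-sensitivity map: zero out each patch in turn and
    record the drop in the model's score for that patch."""
    base = score_fn(image)
    h, w = image.shape
    sal = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0  # occlude one patch
            sal[i:i + patch, j:j + patch] = base - score_fn(masked)
    return sal

# toy "classifier" that responds only to the top-left quadrant,
# standing in for a trained network's class score
score = lambda img: img[:8, :8].mean()
img = np.ones((16, 16))
sal = occlusion_saliency(img, score, patch=8)
# the map correctly localizes the influential quadrant
```

For video, the same idea extends to spatiotemporal patches (occluding a region over a span of frames), which is one way to surface the temporal features that spatiotemporal DCNNs exploit.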

Fourth, the current utilization of AI in echocardiography is characterized by a scarcity of randomized controlled trials. Given the difficulty of blinding a fully automated diagnostic pipeline, prospective trials on the impact of DL echocardiographic models on clinical patient assessment and diagnostic cardiology workflow remain limited.13,72,73 For DL models to be more widely accepted and integrated into echocardiography practice, large-scale prospective trials are needed to validate the real-world benefits of DL modeling on the diagnostic accuracy and efficiency of echocardiography. Ideally, a unified DL model would automate image acquisition, processing, and interpretation, and would itself flag key abnormal or pathologic echocardiograms for rapid triage and appropriate workflows. Timely development of relevant consensus documents among academic AI communities may promote standardization and effective communication between AI researchers and frontline echocardiography performers and readers, support echocardiographic trial design, keep AI-based echocardiography studies from becoming isolated from major ongoing prospective clinical trials, and facilitate appropriate translation from AI research to bedside imaging diagnostic practice.

PERSPECTIVES

Despite the challenges in both the clinical arena and the computer science field, we are nearing a future in which DL will be integrated into daily echocardiography workflows to improve measurement standardization, accuracy, and reproducibility across a patient's serial exams, regardless of who performed the exam or their level of experience (Fig. 4). With continued technologic advancement, new spatiotemporal DL models will help extract latent temporal imaging features and identify advanced imaging biomarkers earlier in a disease's course, some previously unknown or undetectable to the human eye, to guide prevention or treatment and improve prognosis. DL-based technologies will live not only inside new-generation echocardiography machines but also in imaging services outside echocardiography labs. Integrating novel DL models into regional imaging diagnostic networks may prompt a "democratization" of access to echocardiograms, potentially reformatting the traditional echocardiography "lab" operation into a contemporary echocardiography "system" operation, so as to reduce healthcare disparities in low-resource or rural populations.

FIG. 4.

Deep learning echocardiography and regional imaging diagnostic network automation. Patients present to the emergency room or outreach clinics with chest pain, and healthcare providers use AI-assisted POCUS to obtain echocardiographic videos. The data can be sent remotely to tertiary centers for interpretation or be interpreted locally with real-time DCNN analysis, ultimately contributing to efficient and accurate risk stratification that assists triage decisions and fulfills AI's core values. AI: artificial intelligence. POCUS: point-of-care ultrasound. DCNN: deep convolutional neural network. ER: emergency room.

Successful management of cardiovascular diseases frequently requires advanced diagnostic and invasive procedures available only at tertiary medical centers. Rural residents have limited access to specialist care and often face long travel times and other difficulties in seeing the specialty providers needed for their cardiac conditions. These care inequalities produce a divergence in cardiac mortality between rural and urban residents,74 with rural patients experiencing significantly higher cardiovascular disease-associated mortality.75 In rural communities, almost all patients first contact their primary care providers when they have symptoms concerning for cardiac disease.76,77 While any patient with symptoms suggestive of cardiac pathology can be referred for further cardiac evaluation, effective frontline triage and risk stratification are critical for high-value, cost-effective care. Recently, point-of-care ultrasound (POCUS) and handheld ultrasound machines have increasingly been used as added diagnostic tools at primary care locations. As handheld ultrasound and POCUS in remote outreach practice broaden access to echocardiography and create the potential to aid cardiovascular disease screening and triage, a key question is emerging: what is the most effective way to process and analyze these focused echocardiograms to support patient triage and risk stratification? Even though imaging experts may reach a similar diagnosis or risk adjustment based on POCUS, DCNNs may do so far more efficiently. A well-trained and validated DCNN may improve clinic efficiency by providing a near-instant "first-pass" read on the disease and its severity, along with the DCNN's reasoning; frontline healthcare providers can then review the conclusion and use their clinical judgment to accept it or weigh other clinical factors.
Although the definitive diagnosis of selected “high-risk” patients may eventually require comprehensive echocardiography and other advanced tomographic imaging techniques (and interpretive expertise) in well-endowed cardiovascular facilities, changing current echocardiography processing and data analysis sequence and format will likely support building a novel patient-centered and cost-effective diagnostic cardiology workflow/pathway.

Based on handheld and POCUS examinations, spatiotemporal DL models have shown added value for medical robotic arms aimed at automated, remote acquisition and interpretation of echocardiographic images and videos. Combining new spatiotemporal DL techniques with POCUS to support daily decision-making will likely enhance both effective and efficient care delivery. Developing an AI-aided imaging diagnostic network based on spatiotemporal DL technologies will help amplify the real-world benefits of POCUS in rural areas for risk stratification and triage of patients with cardiovascular diseases. This strategy will also likely augment clinical care by increasing workflow efficiency and allowing frontline providers to be more confident in their triage and management decisions. Freed from the intensive time burden of obtaining and interpreting echocardiograms, they will be able to dedicate more time to direct patient interaction and thereby improve quality of care. Improvements in the automation and efficiency of current imaging risk stratification and diagnostic pathways will ultimately help reduce the healthcare disparities present in low-resource and rural populations.

SUMMARY

DL technologies have already been applied to support automated image processing in echocardiography, from view identification and heart chamber segmentation to global and regional function assessment. Novel spatiotemporal DL models for echocardiograms empower the unique roles of temporal imaging features in the diagnosis and prognostication of cardiovascular disorders, and DCNN data visualization helps extract latent information not obvious to the human eye.78,79 In the era of advanced computational power, utilization of spatiotemporal DL algorithms for big data analytics in echocardiography promises to reduce cost, cognitive errors, and intra- and inter-observer variability, to "coach" trainees of various backgrounds and experience levels, and to facilitate imaging training pathways for rare cardiovascular disorders.80

ACKNOWLEDGMENTS

This work was supported in part by NIH under the NHLBI grant R01 HL171624.

AUTHOR DECLARATIONS

Conflict of Interest

The authors have no conflicts to disclose.

Ethics Approval

Not applicable. No human or animal subjects were used in this study.

Author Contributions

Amanda Chang: Writing – original draft (equal); Writing – review & editing (equal). Xiaodong Wu: Supervision (equal); Writing – review & editing (equal). Kan Liu: Conceptualization (equal); Supervision (lead); Validation (lead); Visualization (equal); Writing – original draft (equal); Writing – review & editing (lead).

DATA AVAILABILITY

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

References

  • 1. Moore C. L. and Copel J. A., “ Point-of-care ultrasonography,” N. Engl. J. Med. 364, 749–757 (2011). 10.1056/NEJMra0909487 [DOI] [PubMed] [Google Scholar]
  • 2. Yuan N. et al. , “ Systematic quantification of sources of variation in ejection fraction calculation using deep learning,” JACC Cardiovasc. Imaging 14, 2260–2262 (2021). 10.1016/j.jcmg.2021.06.018 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Pellikka P. A. et al. , “ Variability in ejection fraction measured by echocardiography, gated single-photon emission computed tomography, and cardiac magnetic resonance in patients with coronary artery disease and left ventricular dysfunction,” JAMA Netw. Open 1, e181456 (2018). 10.1001/jamanetworkopen.2018.1456 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Michel J. B., Sangha D. M., and Erwin J. P. III, “ Burnout among cardiologists,” Am. J. Cardiol. 119, 938–940 (2017). 10.1016/j.amjcard.2016.11.052 [DOI] [PubMed] [Google Scholar]
  • 5. Laumer F. et al. , “ Weakly supervised inference of personalized heart meshes based on echocardiography videos,” Med. Image Anal. 83, 102653 (2023). 10.1016/j.media.2022.102653 [DOI] [PubMed] [Google Scholar]
  • 6. Ouyang D. et al. , “ Video-based AI for beat-to-beat assessment of cardiac function,” Nature 580, 252–256 (2020). 10.1038/s41586-020-2145-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Zaman F. et al. , “ Spatio-temporal hybrid neural networks reduce erroneous human “judgement calls” in the diagnosis of takotsubo syndrome,” EClinicalMedicine 40, 101115 (2021). 10.1016/j.eclinm.2021.101115 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Laumer F. et al. , “ Assessment of artificial intelligence in echocardiography diagnostics in differentiating takotsubo syndrome from myocardial infarction,” JAMA Cardiol. 7, 494–503 (2022). 10.1001/jamacardio.2022.0183 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Huang M.-S., Wang C.-S., Chiang J.-H., Liu P.-Y., and Tsai W.-C., “ Automated recognition of regional wall motion abnormalities through deep neural network interpretation of transthoracic echocardiography,” Circulation 142, 1510–1520 (2020). 10.1161/CIRCULATIONAHA.120.047530 [DOI] [PubMed] [Google Scholar]
  • 10. Cheng L. H. et al. , “ Revealing unforeseen diagnostic image features with deep learning by detecting cardiovascular diseases from apical 4-chamber ultrasounds,” J. Am. Heart Assoc. 11, e024168 (2022). 10.1161/JAHA.121.024168 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Yuan N. et al. , “ Prediction of coronary artery calcium using deep learning of echocardiograms,” J. Am. Soc. Echocardiogr. 36(5), 474–481.e3 (2022). 10.1016/j.echo.2022.12.014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Wu H. et al. , “ Semi-supervised segmentation of echocardiography videos via noise-resilient spatiotemporal semantic calibration and fusion,” Med. Image Anal. 78, 102397 (2022). 10.1016/j.media.2022.102397 [DOI] [PubMed] [Google Scholar]
  • 13. He B. et al. , “ Blinded, randomized trial of sonographer versus AI cardiac function assessment,” Nature 616, 520–524 (2023). 10.1038/s41586-023-05947-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. Hu H. et al. , “ A wearable cardiac ultrasound imager,” Nature 613, 667–675 (2023). 10.1038/s41586-022-05498-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15. Cuocolo R., Perillo T., De Rosa E., Ugga L., and Petretta M., “ Current applications of big data and machine learning in cardiology,” J. Geriatr. Cardiol. 16, 601–607 (2019). 10.11909/j.issn.1671-5411.2019.08.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16. Deo R. C., “ Machine learning in medicine,” Circulation 132, 1920–1930 (2015). 10.1161/CIRCULATIONAHA.115.001593 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17. Souza Filho E. M. et al. , “ Artificial intelligence in cardiology: Concepts, tools and challenges—“The Horse is the One Who Runs, You Must Be the Jockey,”” Arq. Bras. Cardiol. 114, 718–725 (2020). 10.36660/abc.20180431 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. Madhulatha T. S., “ An overview on clustering methods,” IOSR J. Eng. 2(4), 719–725 (2012). 10.48550/arXiv.1205.1117 [DOI] [Google Scholar]
  • 19. Shiri F. M., Perumal T., Mustapha N., and Mohamed R., “ A comprehensive overview and comparative analysis on deep learning models: CNN, RNN, LSTM, GRU,” arXiv:2305.17473 (2023).
  • 20. Sevakula R. K. et al. , “ State-of-the-art machine learning techniques aiming to improve patient outcomes pertaining to the cardiovascular system,” J. Am. Heart Assoc. 9, e013924 (2020). 10.1161/JAHA.119.013924 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21. Mortazavi B. J. et al. , “ Analysis of machine learning techniques for heart failure readmissions,” Circ. Cardiovasc. Qual. Outcomes 9, 629–640 (2016). 10.1161/CIRCOUTCOMES.116.003039 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Ribeiro A. H. et al. , “ Automatic diagnosis of the 12-lead ECG using a deep neural network,” Nat. Commun. 11, 1760 (2020). 10.1038/s41467-020-15432-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Kwon J. M. et al. , “ Comparing the performance of artificial intelligence and conventional diagnosis criteria for detecting left ventricular hypertrophy using electrocardiography,” Europace 22, 412–419 (2020). 10.1093/europace/euz324 [DOI] [PubMed] [Google Scholar]
  • 24. Hannun A. Y. et al. , “ Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network,” Nat. Med. 25, 65–69 (2019). 10.1038/s41591-018-0268-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25. Thavendiranathan P. et al. , “ Feasibility, accuracy, and reproducibility of real-time full-volume 3D transthoracic echocardiography to measure LV volumes and systolic function: A fully automated endocardial contouring algorithm in sinus rhythm and atrial fibrillation,” JACC: Cardiovasc. Imaging 5, 239–251 (2012). 10.1016/j.jcmg.2011.12.012 [DOI] [PubMed] [Google Scholar]
  • 26. Gandhi S., Mosleh W., Shen J., and Chow C. M., “ Automation, machine learning, and artificial intelligence in echocardiography: A brave new world,” Echocardiography 35, 1402–1418 (2018). 10.1111/echo.14086 [DOI] [PubMed] [Google Scholar]
  • 27. Zhang J. et al. , “ Fully automated echocardiogram interpretation in clinical practice,” Circulation 138, 1623–1635 (2018). 10.1161/CIRCULATIONAHA.118.034338 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28. Madani A., Arnaout R., Mofrad M., and Arnaout R., “ Fast and accurate view classification of echocardiograms using deep learning,” npj Digital Med. 1, 6 (2018). 10.1038/s41746-017-0013-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29. Ghorbani A. et al. , “ Deep learning interpretation of echocardiograms,” npj Digital Med. 3, 10 (2020). 10.1038/s41746-019-0216-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. Kagiyama N. et al. , “ Machine learning assessment of left ventricular diastolic function based on electrocardiographic features,” J. Am. Coll. Cardiol. 76, 930–941 (2020). 10.1016/j.jacc.2020.06.061 [DOI] [PubMed] [Google Scholar]
  • 31. Litjens G. et al. , “ State-of-the-art deep learning in cardiovascular image analysis,” JACC: Cardiovasc. Imaging 12, 1549–1565 (2019). 10.1016/j.jcmg.2019.06.009 [DOI] [PubMed] [Google Scholar]
  • 32. Howard J. P. et al. , “ Improving ultrasound video classification: An evaluation of novel deep learning methods in echocardiography,” J. Med. Artif. Intell. 3, 4 (2020). 10.21037/jmai.2019.10.03 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33. Martins J. F. B. S. et al. , “ Towards automatic diagnosis of rheumatic heart disease on echocardiographic exams through video-based deep learning,” J. Am. Med. Inf. Assoc. 28, 1834–1842 (2021). 10.1093/jamia/ocab061 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34. Painchaud N., Duchateau N., Bernard O., and Jodoin P.-M., “ Echocardiography segmentation with enforced temporal consistency,” IEEE Trans. Med. Imaging 41, 2867–2878 (2022). 10.1109/TMI.2022.3173669 [DOI] [PubMed] [Google Scholar]
  • 35. Abdi A. H. et al. , “ Automatic quality assessment of echocardiograms using convolutional neural networks: Feasibility on the apical four-chamber view,” IEEE Trans. Med. Imaging 36, 1221–1230 (2017). 10.1109/TMI.2017.2690836 [DOI] [PubMed] [Google Scholar]
  • 36. Luong C. et al. , “ Automated estimation of echocardiogram image quality in hospitalized patients,” Int. J. Cardiovasc. Imaging 37, 229–239 (2021). 10.1007/s10554-020-01981-8 [DOI] [PubMed] [Google Scholar]
  • 37. Gao X., Li W., Loomes M., and Wang L., “ A fused deep learning architecture for viewpoint classification of echocardiography,” Inf. Fusion 36, 103–113 (2017). 10.1016/j.inffus.2016.11.007 [DOI] [Google Scholar]
  • 38. Narang A. et al. , “ Utility of a deep-learning algorithm to guide novices to acquire echocardiograms for limited diagnostic use,” JAMA Cardiol. 6, 624–632 (2021). 10.1001/jamacardio.2021.0185 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39. Schneider M. et al. , “ A machine learning algorithm supports ultrasound-naïve novices in the acquisition of diagnostic echocardiography loops and provides accurate estimation of LVEF,” Int. J. Cardiovasc. Imaging 37, 577–586 (2021). 10.1007/s10554-020-02046-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40. Knackstedt C. et al. , “ Fully automated versus standard tracking of left ventricular ejection fraction and longitudinal strain: The FAST-EFs multicenter study,” J. Am. Coll. Cardiol. 66, 1456–1466 (2015). 10.1016/j.jacc.2015.07.052 [DOI] [PubMed] [Google Scholar]
  • 41. Jeganathan J. et al. , “ Artificial intelligence in mitral valve analysis,” Ann. Card. Anaesth. 20, 129–134 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42. Moghaddasi H. and Nourian S., “ Automatic assessment of mitral regurgitation severity based on extensive textural features on 2D echocardiography videos,” Comput. Biol. Med. 73, 47–55 (2016). 10.1016/j.compbiomed.2016.03.026 [DOI] [PubMed] [Google Scholar]
  • 43. Omar H. A. et al. , in IEEE 15th International Symposium on Biomedical Imaging ( ISBI, 2018), pp. 1195–1198. [Google Scholar]
  • 44. Kusunose K. et al. , “ A deep learning approach for assessment of regional wall motion abnormality from echocardiographic images,” JACC: Cardiovasc. Imaging 13, 374–381 (2020). 10.1016/j.jcmg.2019.02.024 [DOI] [PubMed] [Google Scholar]
  • 45. Boynton G. M., “ Attention and visual perception,” Curr. Opin. Neurobiol. 15, 465–469 (2005). 10.1016/j.conb.2005.06.009 [DOI] [PubMed] [Google Scholar]
  • 46. Lorenz-Spreen P., Mønsted B. M., Hövel P., and Lehmann S., “ Accelerating dynamics of collective attention,” Nat. Commun. 10, 1759 (2019). 10.1038/s41467-019-09311-w [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47. Narula S., Shameer K., Salem Omar A. M., Dudley J. T., and Sengupta P. P., “ Machine-learning algorithms to automate morphological and functional assessments in 2D echocardiography,” J. Am. Coll. Cardiol. 68, 2287–2295 (2016). 10.1016/j.jacc.2016.08.062 [DOI] [PubMed] [Google Scholar]
  • 48. Sengupta P. P. et al. , “ Cognitive machine-learning algorithm for cardiac imaging: A pilot study for differentiating constrictive pericarditis from restrictive cardiomyopathy,” Circ. Cardiovasc. Imaging 9, e004330 (2016). 10.1161/CIRCIMAGING.115.004330 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49. Alsharqi M. et al. , “ Artificial intelligence and echocardiography,” Echo Res. Pract. 5, R115–R125 (2018). 10.1530/ERP-18-0056 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50. Steer A. C., Danchin M. H., and Carapetis J. R., “ Group A streptococcal infections in children,” J. Paediatr. Child Health 43, 203–213 (2007). 10.1111/j.1440-1754.2007.01051.x [DOI] [PubMed] [Google Scholar]
  • 51. Beam A. L. and Kohane I. S., “ Big data and machine learning in health care,” JAMA 319, 1317–1318 (2018). 10.1001/jama.2017.18391 [DOI] [PubMed] [Google Scholar]
  • 52. Obermeyer Z. and Emanuel E. J., “ Predicting the future—Big data, machine learning, and clinical medicine,” N. Engl. J. Med. 375, 1216–1219 (2016). 10.1056/NEJMp1606181 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53. Samad M. D. et al. , “ Predicting survival from large echocardiography and electronic health record datasets: Optimization with machine learning,” JACC: Cardiovasc. Imaging 12, 681–689 (2019). 10.1016/j.jcmg.2018.04.026 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54. Shad R. et al. , “ Predicting post-operative right ventricular failure using video-based deep learning,” Nat. Commun. 12, 5192 (2021). 10.1038/s41467-021-25503-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55. Ulloa Cerna A. E. et al. , “ Deep-learning-assisted analysis of echocardiographic videos improves predictions of all-cause mortality,” Nat. Biomed. Eng. 5, 546–554 (2021). 10.1038/s41551-020-00667-9 [DOI] [PubMed] [Google Scholar]
  • 56. Shah S. J. et al. , “ Phenomapping for novel classification of heart failure with preserved ejection fraction,” Circulation 131, 269–279 (2015). 10.1161/CIRCULATIONAHA.114.010637 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57. Hedman Å. K. et al. , “ Identification of novel pheno-groups in heart failure with preserved ejection fraction using machine learning,” Heart 106, 342–349 (2020). 10.1136/heartjnl-2019-315481 [DOI] [PubMed] [Google Scholar]
  • 58. Katz D. H. et al. , “ Phenomapping for the identification of hypertensive patients with the myocardial substrate for heart failure with preserved ejection fraction,” J. Cardiovasc. Transl. Res. 10, 275–284 (2017). 10.1007/s12265-017-9739-z [DOI] [PubMed] [Google Scholar]
  • 59. Ghadri J. R. et al. , “ Long-term prognosis of patients with takotsubo syndrome,” J. Am. Coll. Cardiol. 72, 874–882 (2018). 10.1016/j.jacc.2018.06.016 [DOI] [PubMed] [Google Scholar]
  • 60. Uribarri A. et al. , “ Short‐ and long-term prognosis of patients with takotsubo syndrome based on different triggers: Importance of the physical nature,” J. Am. Heart Assoc. 8, e013701 (2019). 10.1161/JAHA.119.013701 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61. Chang A. et al. , “ Mortality correlates in patients with takotsubo syndrome during the COVID-19 pandemic,” Mayo Clin. Proc. Innov. Qual Outcomes 5, 1050–1055 (2021). 10.1016/j.mayocpiqo.2021.09.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62. Medina J. R. and Kalita J., in 17th IEEE International Conference on Machine Learning and Applications (ICMLA) ( IEEE, 2018), pp. 547–552. [Google Scholar]
  • 63. Yan S. et al. , “ Image captioning via hierarchical attention mechanism and policy gradient optimization,” Signal Process. 167, 107329 (2020). 10.1016/j.sigpro.2019.107329 [DOI] [Google Scholar]
  • 64. Redmon J., Divvala S., Girshick R., and Farhadi A., in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition ( IEEE, 2016), pp. 779–788. [Google Scholar]
  • 65. Girshick R., “ Fast r-cnn,” arXiv:1504.08083 (2015).
  • 66. Ren S., He K., Girshick R., and Sun J., “ Faster r-cnn: Towards real-time object detection with region proposal networks,” IEEE Trans. Pattern Anal. Mach. Intel. 39, 1137–1149 (2016). 10.1109/TPAMI.2016.2577031 [DOI] [PubMed] [Google Scholar]
  • 67. Turner J., Gupta K., Morris B., and Aha D. W., “ Keypoint density-based region proposal for fine-grained object detection and classification using regions with convolutional neural network features,” arXiv:1603.00502 (2016).
  • 68. Sundararajan M., Taly A., and Yan Q., in Proceedings of the 34th International Conference on Machine Learning, edited by Doina P. and Whye T. Y. ( PMLR, Proceedings of Machine Learning Research, 2017), Vol. 70, pp. 3319–3328. [Google Scholar]
  • 69. Zhu J. Y., Park T., Isola P., and Efros A. A., in IEEE International Conference on Computer Vision (ICCV). ( IEEE, 2017), pp. 2242–2251. [Google Scholar]
  • 70. Singla S., Pollack B., Chen J., and Batmanghelich K., “ Explanation by progressive exaggeration,” arXiv:1911.00483 (2019).
  • 71. Erion G., Janizek J. D., Sturmfels P., Lundberg S. M., and Lee S.-I., “ Improving performance of deep learning models with axiomatic attribution priors and expected gradients,” Nat. Mach. Intell. 3, 620–631 (2021). 10.1038/s42256-021-00343-w [DOI] [Google Scholar]
  • 72. Yao X. et al. , “ Artificial intelligence-enabled electrocardiograms for identification of patients with low ejection fraction: A pragmatic, randomized clinical trial,” Nat. Med. 27, 815–819 (2021). 10.1038/s41591-021-01335-4 [DOI] [PubMed] [Google Scholar]
  • 73. Persell S. D. et al. , “ Effect of home blood pressure monitoring via a smartphone hypertension coaching application or tracking application on adults with uncontrolled hypertension: A randomized clinical trial,” JAMA Netw. Open 3, e200255 (2020). 10.1001/jamanetworkopen.2020.0255 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74. Harrington R. A. et al. , “ Call to action: Rural health: A presidential advisory from the American Heart Association and American Stroke Association,” Circulation 141, e615–e644 (2020). 10.1161/CIR.0000000000000753 [DOI] [PubMed] [Google Scholar]
  • 75. Baldwin L. M. et al. , “ Quality of care for acute myocardial infarction in rural and urban US hospitals,” J. Rural Health 20, 99–108 (2004). 10.1111/j.1748-0361.2004.tb00015.x [DOI] [PubMed] [Google Scholar]
  • 76. Chen X. et al. , “ Differences in rural and urban health information access and use,” J. Rural Health 35, 405–417 (2019). 10.1111/jrh.12335 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77. Barreto T. et al. , “ Distribution of physician specialties by rurality,” J. Rural Health 37, 714–722 (2021). 10.1111/jrh.12548 [DOI] [PubMed] [Google Scholar]
  • 78. Sengupta P. P. et al. , “ Proposed requirements for Cardiovascular Imaging-Related Machine Learning Evaluation (PRIME): A checklist: Reviewed by the American College of Cardiology Healthcare Innovation Council,” JACC Cardiovasc. Imaging 13, 2017–2035 (2020). 10.1016/j.jcmg.2020.07.015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79. Hughes J. W. et al. , “ Deep learning evaluation of biomarkers from echocardiogram videos,” EBioMedicine 73, 103613 (2021). 10.1016/j.ebiom.2021.103613 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80. Citro R. et al. , “ Multimodality imaging in takotsubo syndrome: A joint consensus document of the European Association of Cardiovascular Imaging (EACVI) and the Japanese Society of Echocardiography (JSE),” Eur. Heart J. Cardiovasc. Imaging 21, 1184–1207 (2020). 10.1093/ehjci/jeaa149 [DOI] [PubMed] [Google Scholar]



Articles from Biophysics Reviews are provided here courtesy of American Institute of Physics
