Journal of Personalized Medicine. 2021 Dec 2;11(12):1280. doi: 10.3390/jpm11121280

Artificial Intelligence Evidence-Based Current Status and Potential for Lower Limb Vascular Management

Xenia Butova 1, Sergey Shayakhmetov 2, Maxim Fedin 3, Igor Zolotukhin 1,*, Sergio Gianesini 4,5
Editor: Niels Bergsland
PMCID: PMC8705683  PMID: 34945749

Abstract

Consultation prioritization is fundamental to optimal healthcare management, and it can be supported by artificial intelligence (AI)-dedicated software and by digital medicine in general. The need for remote consultation has been demonstrated not only during pandemic-induced lockdowns but also in rural areas where access to health centers is constantly limited. The term “AI” indicates the use of a computer to simulate human intellectual behavior with minimal human intervention. AI is based on a “machine learning” process or on an artificial neural network. AI provides accurate diagnostic algorithms and personalized treatments in many fields, including oncology, ophthalmology, traumatology, and dermatology. AI can help vascular specialists diagnose peripheral artery disease, cerebrovascular disease, and deep vein thrombosis by analyzing contrast-enhanced magnetic resonance imaging or ultrasound data, and diagnose pulmonary embolism on multi-slice computed tomography angiograms. Automatic methods based on AI may be applied to detect the presence and determine the clinical class of chronic venous disease. Nevertheless, data on using AI in this field are still scarce. In this narrative review, the authors discuss available data on AI implementation in arterial and venous disease diagnostics and care.

Keywords: artificial intelligence, deep machine learning, peripheral artery disease, chronic venous disease, venous thromboembolism

1. Introduction

The need for optimizing healthcare management by guaranteeing both top quality consultation and the rationalization of the available infrastructure resources is of paramount importance, particularly in a post-COVID-19 pandemic world [1]. Not only might remote consultation be helpful in situations such as a pandemic-induced lockdown, but it may also be of use in rural areas with limited access to well-equipped healthcare centers [2]. Diagnostic tools based on artificial intelligence (AI)-dedicated software and digital medicine in general are considered an effective solution for remote consultations [1,3].

The term “AI” is used to indicate the simulation of human intellectual behavior by a computer with minimal human intervention [4]. AI is based on a machine learning process or on an artificial neural network (ANN). Machine learning indicates a set of technologies that automatically detect patterns in data and then use them to predict future data or to support decision making under uncertain conditions. Deep learning is a subset of machine learning built on a special type of artificial neural network that resembles a system of synapses between human neurons. The ANN consists of many basic computing units, i.e., artificial neurons that use a simple classifier model. After weighing the evidence, every neuron of an ANN produces a decision signal. ANNs are trained with learning algorithms such as backpropagation, in which paired input signals and desired output decisions mimic the way the brain analyzes external sensory stimuli to perform different activities depending on the situation. However, exactly how AI-based tools reach their conclusions is unknown, as they operate like a “black box” [5,6,7,8].
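The mechanics described above can be illustrated with a minimal, self-contained sketch: a two-layer network of sigmoid “neurons” trained by backpropagation on a toy problem (XOR). This is purely didactic and is not any of the clinical tools cited in this review; the data, layer sizes, and learning rate are arbitrary choices.

```python
# Toy illustration of an artificial neural network trained by backpropagation
# (NumPy only; illustrative, not a clinical tool).
import numpy as np

rng = np.random.default_rng(0)

# Paired input signals and desired output decisions (XOR problem).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers: 2 inputs -> 4 hidden neurons -> 1 output neuron.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


lr = 1.0
for _ in range(5000):
    # Forward pass: every neuron weighs its inputs and emits a decision signal.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the layers.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_h
    b1 -= lr * err_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # predictions typically approach [0, 1, 1, 0]
```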

Deep learning is particularly compelling in healthcare because of its ability to process and analyze big data and thereby find solutions for effective patient management. Using AI in everyday practice may provide accurate diagnostic algorithms and personalize patient management [9,10].

AI may reduce medical mistake rates and prevent discrepancies in diagnostic data interpretation [7]. Moreover, AI can be used to support cost-effective clinical decision-making, healthcare recommender systems, emotion recognition using physiological signals, and patient monitoring [3,11,12]. It can also significantly reduce the burden on healthcare workers, freeing their time and expertise for optimal patient care [13].

AI has already been used in diagnostics for “object detection” (lesion localization), “object segmentation” (determination of the contours and boundaries of the lesion), and “classification of objects” (malignant or benign) [14,15]. This makes AI useful in radiology, where large datasets are processed [16]. AI can interpret breast cancer images, including the detection of lymph node metastases [17,18,19]. Analyzing chest radiographs with AI helps to detect lung cancer, metastases, tuberculosis, pneumonia, and diffuse lung diseases [20,21]. The automatic detection and segmentation of brain metastases, as well as prostate cancer detection on magnetic resonance imaging (MRI), is another field for AI utilization [22,23]. In ophthalmology, AI helps to diagnose diabetic retinopathy, age-related macular degeneration, glaucoma, and other ophthalmic disorders [24,25,26,27]. In traumatology, neural networks applied to X-ray diagnostics of intertrochanteric fractures of the proximal femur have surpassed the conclusions of orthopedic surgeons, and deep convolutional neural networks have significant potential for fracture screening on plain radiographs when orthopedic surgeons are not available in emergency situations [28]. AI allows detection of early-stage melanoma on skin images [29]. Neural networks are as effective as certified dermatologists in differentiating benign from malignant neoplasms on photographic and dermatoscopic images [30]. AI can be used to evaluate electrocardiograms [31] and ultrasound images [32], and in pathomorphology [33] and genomics [34]. AI is also a promising tool for tracing contacts during a pandemic, improving pneumonia diagnostics [35], and monitoring COVID-19 patients [36].

Two of the areas where AI-based diagnostics seem promising are arterial atherosclerotic imaging and lower limb venous disease. We performed a literature search using the following keywords: “artificial intelligence”, “deep machine learning”, “artificial neural network”, “convolutional neural network”, “telehealth”, “peripheral artery disease”, “abdominal aortic aneurysm”, “deep venous thrombosis”, “pulmonary embolism”, “venous thromboembolism”, “chronic venous disease”, “varicose veins”, “venous ulcer”, “vascular surgery”, “vascular medicine”, “angiology”, and “phlebology”. We then assessed all the articles for eligibility. The reference lists of all articles related to AI in vascular management were searched for additional sources that contributed to the field.

The present narrative review reports the current state of the art of AI in the management of arterial disease, venous thromboembolism (VTE), and chronic venous disease (CVD).

2. Artificial Intelligence in Arterial Disease

AI is still not widely used in vascular medicine, so relatively few publications can be found when searching the literature [37]. However, various AI algorithms are currently being developed in this field. Vascular segmentation is challenging because vessels are highly variable in morphology, size, and curvature. Nevertheless, AI can help in segmentation and pattern recognition, thereby improving diagnostic efficiency and reducing the time spent analyzing data.

AI opens up many opportunities in vascular surgery, including the management and analysis of medical data, and the development of expert systems for prediction and decision making. It can also be used for patient care, in education and training of vascular surgeons, and as a health information and surveillance system for research [38].

Kurugol S. et al. presented a tool to assess aorta morphology and aortic calcium plaques on CT scans. The authors computed the agreement between the proposed algorithm and expert segmentations on 45 CT scans and obtained a closest point mean error of 0.62 ± 0.09 mm and a Dice coefficient of 0.92 ± 0.01 [39].
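The Dice coefficient quoted above is a standard overlap measure between an algorithm’s segmentation and an expert’s reference segmentation. A short sketch of how it is computed (the masks below are synthetic and purely illustrative):

```python
# Dice similarity coefficient between two binary segmentation masks
# (synthetic masks; illustrative only).
import numpy as np


def dice_coefficient(pred_mask, ref_mask):
    """Dice = 2*|A∩B| / (|A|+|B|); 1.0 means perfect overlap."""
    pred = np.asarray(pred_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0


# Example: two slightly shifted 40x40 squares on a 100x100 slice.
a = np.zeros((100, 100), dtype=bool); a[20:60, 20:60] = True
b = np.zeros((100, 100), dtype=bool); b[25:65, 20:60] = True
print(round(dice_coefficient(a, b), 3))  # 0.875
```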

Graffy P.M. et al. used instance segmentation with convolutional neural networks (Mask R-CNN), applied to a dataset of 9914 non-contrast CT scans from 9032 consecutive asymptomatic adults who had undergone colonography screening. The resulting fully automated abdominal aortic calcification scoring tool allows any non-contrast abdominal CT to be assessed for cardiovascular risk [40].

AI can also help with segmentation analysis of ultrasound, CT, and MRI images in patients with carotid artery stenosis [41,42]. Caetano Dos Santos F.L. et al. developed a tool for the segmentation and analysis of atherosclerosis in the extracranial carotid arteries. A dataset of 59 randomly chosen head-and-neck CTA scans was used. The algorithm, based mainly on detection of the carotid arteries, delineation of the vascular wall, and extraction of the atherosclerotic plaque, was successful in 83% of stenoses over 50%. Specificity and sensitivity were 25% and 83%, respectively, with an overall accuracy of 71% [43].
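The figures reported for such tools follow directly from the confusion matrix of the algorithm’s calls against the reference standard. A minimal sketch (the counts are invented for illustration and are not taken from the cited study):

```python
# Sensitivity, specificity, and accuracy from confusion-matrix counts
# (counts invented for illustration only).
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),                # true positives / all positives
        "specificity": tn / (tn + fp),                # true negatives / all negatives
        "accuracy": (tp + tn) / (tp + fp + tn + fn),  # all correct / all cases
    }


print(diagnostic_metrics(tp=25, fp=9, tn=3, fn=5))
```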

Raffort J. et al., while searching the literature on AI tools for abdominal aortic aneurysm management, found several prognostic programs. The potential of AI was confirmed in predicting aneurysm growth and rupture, in-hospital and 30-day mortality, endograft complications, aneurysm evolution, stent graft deployment, and the need for re-intervention after endovascular aneurysm repair. Nevertheless, small datasets were mainly used, while machine learning approaches require large databases for learning and training. The lack of external validation is another pitfall, outlining the need for multicenter registries [44].

Dehmeshki J. et al. presented a computer-aided detection tool that detects arteries based on a 3D region growing method and a fast 3D morphology operation. They also developed a computer-aided measurement system that measures artery diameters from the detected vessel centerline. The system was tested on phantom data and on fifteen CTA datasets of patients with peripheral arterial disease (PAD). An 88% stenosis detection accuracy was achieved, with an 8% measurement error [45].
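The basic building blocks of such a pipeline (3D region growing from a seed point, morphological clean-up, and a distance-based estimate of lumen size) can be sketched with standard scientific Python libraries. This is a simplified illustration on a synthetic volume, not the cited CAD system:

```python
# Sketch of region-growing vessel detection on a CT-like volume
# (NumPy/SciPy/scikit-image; synthetic data, not the cited CAD system).
import numpy as np
from scipy import ndimage
from skimage.segmentation import flood

# Synthetic contrast-enhanced volume (z, y, x): dark background, bright "vessel".
volume = np.random.normal(40, 10, size=(64, 64, 64))
volume[:, 28:36, 28:36] = 300

seed = (32, 32, 32)                               # seed placed inside the lumen
grown = flood(volume, seed, tolerance=100)        # 3D region growing from the seed

# Fast 3D morphology to close small gaps in the grown region.
vessel_mask = ndimage.binary_closing(grown, structure=np.ones((3, 3, 3)))

# Rough lumen radius: distance from each voxel of the mask to the vessel wall.
dist = ndimage.distance_transform_edt(vessel_mask)
print("approximate maximal lumen radius (voxels):", dist.max())
```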

Ross E.G. et al. used machine learning algorithms to analyze electronic health records. Data from 7686 PAD patients at two tertiary hospitals were used to train predictive models. The models confirmed the ability of machine learning algorithms to identify patients with peripheral arterial atherosclerosis, favoring early detection of subjects at risk of serious cardiac and cerebrovascular events [46].

AI has also been confirmed as an extremely useful tool for teaching purposes by simulating different clinical cases. Virtual reality simulators are already available to train young professionals in basic endovascular skills [47,48].

AI is used in apps designed for use by patients themselves; these applications analyze photographic images of the legs and help to identify early signs of diabetic foot syndrome [49]. Ohura N. et al. compared four architectures for building wound segmentation convolutional neural networks (CNNs). The best results were shown by U-Net, which demonstrated an area under the curve of 0.997, a specificity of 0.943, and a sensitivity of 0.993. Such tools may be applied to the diagnostics of arterial and venous leg ulcers as well as pressure ulcers [50].

3. Artificial Intelligence in Venous Thromboembolism

Clinical decisions in patients with pulmonary embolism (PE) can be based on AI-supported analysis of multi-slice computed tomography angiography (MSCT angiography) images of the pulmonary arteries [51,52]. For these patients, timely diagnosis is crucial to save lives, yet in routine practice, PE is still one of the most commonly missed diagnoses [53]. Deep machine learning to detect PE on MSCT angiograms has been demonstrated to be a valuable solution [54].

The first attempts to detect PE using neural networks were made in the early 1990s. Patil S. et al. hypothesized that computerized pattern recognition could accurately estimate the probability of PE based on readily available clinical characteristics. Medical history data, physical examination findings, ECG, chest X-ray scans, and arterial blood gases of patients with suspected acute PE were fed into a backpropagation neural network. Study data were obtained from 1213 patients in a prospective study on PE diagnostics and were divided into training group A (n = 606) and test group B (n = 607); the groups were then swapped, with set B (n = 607) used for training and set A (n = 606) for testing. Performance curves were constructed from the clinical assessments made by specialists and by the neural network in groups A and B. The areas under the corresponding ROC curves were 0.7450, 0.7477, and 0.7324, with no significant differences. Thus, neural networks were able to predict the clinical probability of PE with an accuracy comparable to that of experienced clinicians [54].
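The general approach — a small feed-forward network trained on tabular clinical features and evaluated by the area under the ROC curve — can be sketched as follows. The data, feature count, and network size here are synthetic placeholders, not the study’s variables:

```python
# Sketch: feed-forward network on tabular clinical features, scored by ROC AUC
# (scikit-learn; data and features are synthetic placeholders).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1200                                   # cohort size chosen arbitrarily
X = rng.normal(size=(n, 8))                # stand-ins for history, ECG, blood gases...
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUROC on the held-out half: {auc:.3f}")
```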

Huang S.C. et al. used a deep learning model (a 77-layer 3D convolutional neural network) capable of detecting PE signs on computed tomography pulmonary angiography (CTPA) images of the pulmonary arteries with simultaneous data interpretation. The researchers retrospectively collected 1797 images from 1773 patients and built training (1461 images from 1414 patients), validation (167 images from 162 patients), and hold-out test (169 images from 163 patients) sets. Stratified random sampling was used to create the validation and test sets to ensure an equal number of positive and negative cases, and there was no patient overlap between sets. When tested, the deep learning model achieved an AUROC of 0.84 for automatic detection of PE signs on the test set. Thus, the feasibility of using deep learning to evaluate complex radiographic data from CTPA angiograms to detect PE was confirmed [55].
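The splitting strategy described above — stratified by outcome, with no patient appearing in more than one set — can be approximated by splitting at the patient level on a per-patient label. A sketch with made-up identifiers and labels:

```python
# Patient-level, stratified train/validation/test split with no patient overlap
# (scikit-learn; identifiers and labels are made up for illustration).
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
patient_ids = np.arange(1773)                    # one entry per patient
patient_label = rng.integers(0, 2, size=1773)    # 1 = PE present on CTPA

# Carve off ~18% of patients, stratified by label, then halve it into val/test.
train_ids, rest_ids, _, rest_y = train_test_split(
    patient_ids, patient_label, test_size=0.18,
    stratify=patient_label, random_state=0)
val_ids, test_ids = train_test_split(
    rest_ids, test_size=0.5, stratify=rest_y, random_state=0)

assert set(train_ids).isdisjoint(val_ids) and set(train_ids).isdisjoint(test_ids)
print(len(train_ids), len(val_ids), len(test_ids))
```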

Tajbakhsh N. et al. presented a computer-aided detection system for PE on CTPA images. They noted that, despite acceptable sensitivity, existing computer detection systems generate a large number of false-positive results, which may place an additional burden on the radiologists analyzing them. The ability of convolutional neural networks (the type of neural network specialized for image classification) to eliminate false conclusions was investigated. Developing the “correct” representation of the image is important for the accuracy of AI when analyzing an object in 3D images. For this purpose, a multi-planar image of emboli aligned along the vessels was developed. This representation provides three advantages: (1) compactness, i.e., concise summarization of 3D contextual information around the embolus in two image channels; (2) consistency—automatic alignment of the embolus on two-channel images according to the orientation of the affected vessel; and (3) extensibility—natural support for data augmentation when training neural networks. The method was tested using a set of 121 CTPA angiograms with a total of 326 emboli. The sensitivity reached 83% with two false-positive results [56].

The important role of AI-based tools in the recognition of PE signs on CTPA images was also confirmed by other researchers [57,58,59,60].

Deep learning is used in cancer patients who are at high risk of VTE [61]. Randomized studies have shown that prophylactic doses of anticoagulants reduce VTE rates in cancer patients by about half [62]. However, there is also a potentially high risk of bleeding associated with anticoagulant therapy [63]. The decision to use anticoagulants for the prevention of cancer-related VTE should ideally be based on an effective risk stratification strategy [64].

Pabinger I. et al., from the University of Vienna, developed and tested an AI-based computer predictive model of VTE in outpatients with cancer. They used data from 1737 patients in the CATS study (a prospective single-center observational cohort with a baseline biobank) who had recently been diagnosed with active cancer or with disease progression after complete or partial remission. Patients with any malignancy other than primary brain tumors, or with lymphoma, were selected. Only tumor site, the most heavily weighted component of the Khorana score, and D-dimer were retained for training the model [65].

To test the model’s performance, demographic and laboratory data from a multinational cohort study were used to identify cancer patients at high risk of VTE [66]. With the threshold for the predicted cumulative 6-month risk of VTE set at 10%, model sensitivity was 33% (95% confidence interval (CI) 23–47) and specificity was 84% (95% CI 83–87); the positive predictive value was 12% (95% CI 8–16), and the negative predictive value was 95% (95% CI 94–96). With the threshold set at 15%, sensitivity was 15% (95% CI 8–24) and specificity was 96% (95% CI 95–97); the positive predictive value was 18% (95% CI 9–29), and the negative predictive value was 95% (95% CI 94–96). This shows that the model may help to identify patients eligible for pharmacological thromboprophylaxis [65,67].
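The predictive values above are linked to sensitivity, specificity, and the prevalence of VTE in the tested cohort through Bayes’ rule; the short check below reproduces values close to those reported when an illustrative prevalence of about 6% is assumed (the true cohort prevalence is not restated here):

```python
# Positive and negative predictive values from sensitivity, specificity, and
# prevalence (Bayes' rule). The 6% prevalence is an assumption for illustration.
def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)      # (PPV, NPV)


ppv, npv = predictive_values(sensitivity=0.33, specificity=0.84, prevalence=0.06)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")     # ~0.12 and ~0.95
```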

Huang C. et al. developed a fully automatic method for determining DVT extension, in which AI was used to detect the proximal level of deep vein thrombosis (DVT) on contrast-enhanced MRI images. Images from 58 patients with recently diagnosed lower limb DVT were analyzed. A total of 5388 images were acquired, and thrombotic masses were visible on 2683 of them. The boundaries of the blood clots on the MR images were manually delineated by radiologists, and a deep learning-based neural network was then trained. The method is based on segmentation of the thrombus boundaries, and a deep learning network with an encoder–decoder architecture was designed for DVT segmentation. The model took about 1.5 s to identify the thrombus extension and the vein segments with thrombosis on an MRI image. The mean Dice similarity coefficient (DSC) for the 58 patients was 0.74 ± 0.17, and the overall DSC was 0.79 (range 0–0.91). The results showed that the proposed method is relatively effective and quick; if further improved, it may help clinicians to evaluate DVT quickly and objectively [68].
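The encoder–decoder idea mentioned above (downsample the image to a compact representation, then upsample it back to a per-pixel mask) can be sketched in a few lines. The network below is a deliberately tiny stand-in, not the published model:

```python
# Minimal encoder-decoder segmentation network (PyTorch; a tiny stand-in,
# far smaller than any clinical model).
import torch
import torch.nn as nn


class TinyEncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                       # downsampling path
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(                       # upsampling path
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, x):
        # Per-pixel probability of "thrombus" for the input slice.
        return torch.sigmoid(self.decoder(self.encoder(x)))


model = TinyEncoderDecoder()
fake_slice = torch.randn(1, 1, 128, 128)     # single-channel stand-in for an MR slice
print(model(fake_slice).shape)               # torch.Size([1, 1, 128, 128])
```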

Willan J. et al. confirmed that AI may be applied for risk stratification in patients with suspected DVT [69]. A neural network was trained to stratify the probability of DVT using data from 11,490 consecutive cases of suspected DVT, including 7080 cases for which all data, i.e., Wells score, D-dimer, and duplex ultrasound, were available. The network was able to exclude DVT without the need for ultrasound scanning in more patients than the existing algorithm, with low false-negative rates. After a preliminary, fast, and reliable AI evaluation, patients with symptoms suggestive of DVT can then be sent to the hospital for vascular specialist examination to confirm or exclude thrombosis [70]. Traditionally, the Wells score has been used to assess symptoms and thereby identify patients with a high probability of DVT [71]. AI can become a powerful tool, synergistic with D-dimer assessment, for better prioritization of ultrasound scanning and consequent disease management [72,73,74,75].

Deso S. et al. developed a CNN to detect 23 different types of inferior vena cava (IVC) filters and to diagnose related complications. For each type of cava filter, a database of radiographs and CT scans was collected. A wireframe and storyboard were created, and the software was developed using HTML5/CSS-compliant code [76].

Ni J.C. et al. used deep learning for automated classification of IVC filter types on radiographs. They took 1375 cropped radiographic images of 14 types of IVC filters, with 139 images for a test set. The CNN classification model achieved an F1 score of 0.97 (0.92–0.99) for the test set overall and of 1.00 for 10 of 14 individual filter types. Of the 139 test set images, 4 (2.9%) were misidentified, all mistaken for other filter types that appeared highly similar [77].

4. Artificial Intelligence in Chronic Venous Disease

CVD is a highly prevalent disease [78] that severely impacts quality of life and national healthcare budgets [79]. Being a chronic condition, CVD needs permanent follow-up, which is not easy for many patients, especially in rural regions where access to vascular care is limited. For rural residents, AI-supported diagnostics seems to be a good option [80,81]. Automatic methods based on AI may be applied to detect the presence and determine the clinical class of CVD. Nevertheless, data on using AI in this field are scarce.

Fukaya E. et al. used machine learning to find risk factors for varicose veins in 493,519 people from the UK Biobank. In addition, a genome-wide association study of varicose veins was carried out in 337,536 people, followed by quantitative analysis of loci and expression pathways. They used a gradient boosting machine model, which operates by building decision trees sequentially, with each new tree correcting the prediction errors of the previous ones. The relationship of genotype with the presence of varicose veins was tested using a logistic model. The researchers found that greater body height is associated with a higher risk of developing varicose veins [82].
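The gradient boosting principle — trees added one after another, each correcting the residual errors of the ensemble built so far — can be sketched with scikit-learn. The predictors and outcome below are synthetic stand-ins (height is included only to echo the finding cited above):

```python
# Sketch of a gradient boosting machine for a binary phenotype
# (scikit-learn; synthetic stand-in data, not the cited biobank analysis).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 5000
height_cm = rng.normal(170, 10, n)
age = rng.uniform(30, 80, n)
bmi = rng.normal(26, 4, n)
X = np.column_stack([height_cm, age, bmi])

# Synthetic outcome: taller and older subjects are slightly more likely to be cases.
risk = 0.03 * (height_cm - 170) + 0.02 * (age - 55) + rng.normal(0, 1, n)
y = (risk > np.quantile(risk, 0.8)).astype(int)

gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
auc = cross_val_score(gbm, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUROC: {auc:.2f}")
```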

Another promising area for AI is the estimation of varicose vein recurrence risk after invasive procedures. Bouharati I. et al. analyzed risk factors for recurrence, such as age, sex, obesity, genetic predisposition, inadequate diagnosis, a double trunk of the great saphenous vein, a double trunk of the small saphenous vein, neovascularization, technical failures, and time from the procedure. A CNN system was constructed with the probable causes of varicose recurrence as input variables and the recurrence rate as the output variable. Data on 62 patients who had undergone invasive treatment were used to train the network, but no results on how the system performed were published [83].

Artificial neural networks may predict the healing time of venous ulcers [84], thereby helping health professionals to customize treatment and, at best, improve the patient’s quality of life [85,86,87]. Taylor R.J. et al. retrospectively assessed data on 325 patients with 345 venous ulcers. A simple computer-based ANN was trained with 45 risk factors as input data and ulcer healing time as the output. After training, the ANN accurately predicted the healing time in 68% of cases. The AI also identified the most important risk factors for ulcer healing, including a previous history of venous ulcers, profuse ulcer exudate, high body mass index, a large initial skin defect, age, and male sex. The neural network confirmed its ability to predict which ulcers may be resistant to standardized treatment [84].

Bhavani R. et al. used AI to determine venous ulcer stages on photographic images. They obtained data from 150 patients, taking 5–15 photographs per patient, for a total of 1770 images for training and 810 for testing. The neural network pipeline consisted of four parts. Images were first edited to remove flash light reflections; then, contour segmentation of the ulcer surface was carried out, and the analysis was performed using a multidimensional convolutional neural network. During the feature extraction stage, features such as homogeneity, color, texture, and depth were analyzed. This tool had an average accuracy of 99.55%, with a specificity of 98.06% and a sensitivity of 95.66% [88,89].
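The first two preprocessing steps described above (suppressing flash reflections and extracting the wound contour) can be approximated with standard OpenCV operations. The file name, thresholds, and color criteria below are illustrative assumptions, not the published pipeline:

```python
# Sketch: glare suppression by inpainting, then wound-contour extraction
# (OpenCV; file name, thresholds, and color ranges are illustrative assumptions).
import cv2
import numpy as np

img = cv2.imread("leg_ulcer_photo.jpg")                  # hypothetical input photo

# 1. Flash-reflection removal: very bright, low-saturation pixels are treated as
#    glare and filled in from the surrounding skin.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
glare = cv2.inRange(hsv, (0, 0, 230), (180, 40, 255))
img_clean = cv2.inpaint(img, glare, 5, cv2.INPAINT_TELEA)

# 2. Contour segmentation: rough wound mask from reddish hue, keep largest contour.
hsv_clean = cv2.cvtColor(img_clean, cv2.COLOR_BGR2HSV)
wound = cv2.inRange(hsv_clean, (0, 60, 40), (15, 255, 255))
wound = cv2.morphologyEx(wound, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
contours, _ = cv2.findContours(wound, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
ulcer_contour = max(contours, key=cv2.contourArea)
print("ulcer area (pixels):", cv2.contourArea(ulcer_contour))
```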

The most burdensome venous pathology is primary varicose veins, which affect 27–31% of the general population in rural settlements [90]. AI-based applications seem to be good tools for diagnosing and managing varicose veins in residents of remote areas. To date, few studies have been published assessing automatic AI methods for varicose vein detection.

Shi Q. and co-authors used 221 photographic images of the lower limbs to train a neural network to identify CVD classes according to the CEAP classification. They mapped low-level image features onto middle-level semantic features using a concept classifier, and a multi-scale semantic model was then created and used to represent images with rich semantics. Finally, a scene classifier was trained using an optimized feature subset and was then used to determine the CVD clinical class. The reported accuracy was 90.92% [91].

Hoobi M.M. et al. conducted a similar study based on the analysis of only 100 photographic images (60 with varicose veins, 40 with no CVD). The system used more than one type of distance metric with a probabilistic neural network to produce a CVD diagnostic system with high accuracy; 60 new pictures were used for testing. In this model, the shape, size, and texture of the skin with varicose veins were used, with a reported accuracy of 94% [92].

Most vascular AI tools are based on neural networks trained with a limited number of images (Table 1). It is generally considered that 1000 cases are needed just to build a system, while for practical use the number of training images has to be closer to 100,000 [14]. This is especially true for CVD, which is classified into seven clinical classes, of which varicose veins represent only one. Moreover, even with large training samples, the recognition result strongly depends on the conditions under which the photographs are taken, including the image resolution, the size of the lesion relative to the total image area, the position of the leg, and the amount of hair on the skin.

Table 1. AI tools based on learning with images.

| Authors, Year | Disease | AI Used for | Data Used for AI Learning | Principle of Operation | Number of Images | Performance Metrics | App Available Online |
|---|---|---|---|---|---|---|---|
| Kurugol S. et al., 2015 [39] | PAD | Aorta size calculation, morphology, mural calcification distributions | CT images | Convolutional neural networks (Mask R-CNN) | 2500 | Dice coefficient of 0.92 ± 0.01 | No |
| Caetano Dos Santos F.L. et al., 2019 [43] | Carotid artery stenosis | Segmentation and analysis of atherosclerotic lesions in extracranial carotid arteries | CTA images | Convolutional neural networks | 59 | 71% accuracy | Yes |
| Raffort J. et al., 2020 [44] | Abdominal aortic aneurysm (AAA) | Quantitative analysis and characterization of AAA morphology, geometry, and fluid dynamics | CT images | Convolutional neural networks | 40 | 93% accuracy | No |
| Dehmeshki J. et al., 2014 [45] | PAD | Arterial network, artery centerline detection, and distortion correction | CTA images | Computer-aided detection system | 15 | 88% accuracy | No |
| Huang S.C. et al., 2020 [55] | VTE | PE detection | CTPA images | Convolutional neural network | 1797 | AUROC of 0.84 | No |
| Huang C. et al., 2019 [68] | DVT | Proximal level of DVT detection | Contrast-enhanced MRI images | Convolutional neural network | 5388 | Dice coefficient of 0.79 | No |
| Ni J.C. et al., 2020 [77] | DVT | Identification of different inferior vena cava filters | Radiographic images | Deep-learning convolutional neural network | 1375 | F1 score of 0.97 | Yes |
| Rajathi V., Bhavani R.R., Wiselin Jiji G., 2019 [88] | CVD | Venous ulcer detection | Venous ulcer photos | Region growing, K-means, kNN | 1770 | 94.85% accuracy | No |
| Shi Q. et al., 2018 [91] | CVD | Varicose vein detection | Lower limb photos | Multi-scale semantic model constructed to form the image representation with rich semantics | 221 | 90.92% accuracy | No |
| Hoobi M.M., Qaswaa A., 2017 [92] | CVD | Varicose vein detection | Lower limb photos | Probabilistic neural network | 100 | 94% accuracy | No |

5. Conclusions

AI has been demonstrated to be an extremely helpful tool in healthcare, particularly at a time when healthcare resources must be optimized, such as during a pandemic or in rural settings. The application of AI in healthcare has the potential to bring financial benefits and savings. However, according to this literature search, these services need further validation before they can be used routinely in clinical practice, making this topic of great interest to the healthcare community. This is particularly true for CVD, a condition with a high impact on society and on healthcare costs.

Author Contributions

Conceptualization, X.B., I.Z., S.S. and S.G.; methodology, X.B., I.Z. and S.G.; resources, X.B., S.S. and M.F.; writing—original draft preparation, X.B., S.S. and M.F.; writing—review and editing, I.Z. and S.G.; project administration, S.S., I.Z. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Reisman J., Wexler A. Covid-19: Exposing the Lack of Evidence-Based Practice in Medicine. Hastings Cent Rep. 2020;50:77–78. doi: 10.1002/hast.1144. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Weisgrau S. Issues in rural health: Access, hospitals, and reform. Health Care Financ. Rev. 1995;17:1. [PMC free article] [PubMed] [Google Scholar]
  • 3.World Health Organization Regional Office for Europe. Future of Digital Health Systems: Report on the WHO Symposium on the Future of Digital Health Systems in the European Region: Copenhagen, Denmark, 6–8 February 2019; pp. 5–27. [(accessed on 17 July 2021)]. Available online: https://apps.who.int/iris/bitstream/handle/10665/329032/9789289059992-eng.pdf.
  • 4.Hamet P., Tremblay J. Artificial intelligence in medicine. Metabolism. 2017;69:S36–S40. doi: 10.1016/j.metabol.2017.01.011. [DOI] [PubMed] [Google Scholar]
  • 5.Lee J.G., Jun S., Cho Y.W., Lee H., Kim G.B., Seo J.B., Kim N. Deep Learning in Medical Imaging: General Overview. Korean J. Radiol. 2017;18:570–584. doi: 10.3348/kjr.2017.18.4.570. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Murphy K.P. Machine Learning: A Probabilistic Perspective. 1st ed. The MIT Press; Cambridge, UK: 2012. p. 25. [Google Scholar]
  • 7.Park S.H., Do K.-H., Kim S., Park J.H., Lim Y.-S. What should medical students know about artificial intelligence in medicine? J. Educ. Eval. Health Prof. 2019;16:18. doi: 10.3352/jeehp.2019.16.18. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.London A.J. Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Hastings Cent. Rep. 2019;49:15–21. doi: 10.1002/hast.973. [DOI] [PubMed] [Google Scholar]
  • 9.Cutillo C.M., Sharma K.R., Foschini L., Kundu S., Mackintosh M., Mandl K.D., MI in Healthcare Workshop Working Group Machine intelligence in healthcare-perspectives on trustworthiness, explainability, usability, and transparency. NPJ Digit. Med. 2020;3:47. doi: 10.1038/s41746-020-0254-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Handelman G.S., Kok H.K., Chandra R.V., Razavi A.H., Lee M.J., Asadi H. eDoctor: Machine learning and the future of medicine. J. Intern. Med. 2018;284:603–619. doi: 10.1111/joim.12822. [DOI] [PubMed] [Google Scholar]
  • 11.Shu L., Xie J., Yang M., Li Z., Li Z., Liao D., Xu X., Yang X. A Review of Emotion Recognition Using Physiological Signals. Sensors. 2018;18:2074. doi: 10.3390/s18072074. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Tran T.N.T., Felfernig A., Trattner C., Holzinger A. Recommender systems in the healthcare domain: State-of-the-art and research issues. J. Intell. Inf. Syst. 2021;57:171–201. doi: 10.1007/s10844-020-00633-6. [DOI] [Google Scholar]
  • 13.Topol E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019;25:44–56. doi: 10.1038/s41591-018-0300-7. [DOI] [PubMed] [Google Scholar]
  • 14.Fujita H. AI-based computer-aided diagnosis (AI-CAD): The latest review to read first. Radiol. Phys. Technol. 2020;13:6–19. doi: 10.1007/s12194-019-00552-4. [DOI] [PubMed] [Google Scholar]
  • 15.Currie G.M. Intelligent Imaging: Artificial Intelligence Augmented Nuclear Medicine. J. Nucl. Med. Technol. 2019;47:217–222. doi: 10.2967/jnmt.119.232462. [DOI] [PubMed] [Google Scholar]
  • 16.SFR-IA Group. CERF. French Radiology Community Artificial intelligence and medical imaging 2018: French Radiology Community white paper. Diagn. Interv. Imaging. 2018;99:727–742. doi: 10.1016/j.diii.2018.10.003. [DOI] [PubMed] [Google Scholar]
  • 17.Herent P., Schmauch B., Jehanno P., Dehaeneb O., Saillarda C., Balleyguierc C., Arfi-Rouchec J., Jégoua S. Detection and characterization of MRI breast lesions using deep learning. Diagn. Interv. Imaging. 2019;100:219–225. doi: 10.1016/j.diii.2019.02.008. [DOI] [PubMed] [Google Scholar]
  • 18.Rodriguez-Ruiz A., Lång K., Gubern-Merida A., Broeders M., Gennaro G., Clauser P., Helbich T.H., Chevalier M., Tan T., Mertelmeier T., et al. Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison with 101 Radiologists. J. Natl. Cancer Inst. 2019;111:916–922. doi: 10.1093/jnci/djy222. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Bejnordi B.E., Veta M., van Diest P.J., van Ginneken B., Karssemeijer N., Litjens G., van der Laak J.A.W.M., CAMELYON16 Consortium Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women with Breast Cancer. JAMA. 2017;318:2199–2210. doi: 10.1001/jama.2017.14585. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Gampala S., Vankeshwaram V., Gadula S.S.P. Is Artificial Intelligence the New Friend for Radiologists? A Review Article. Cureus. 2020;12:e11137. doi: 10.7759/cureus.11137. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Liu X., Zhou H., Hu Z., Jin Q., Wang J., Ye B. Clinical Application of Artificial Intelligence Recognition Technology in the Diagnosis of Stage T1 Lung Cancer. Zhongguo Fei Ai Za Zhi. 2019;22:319–323. doi: 10.3779/j.issn.1009-3419.2019.05.09. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Charron O., Lallement A., Jarnet D., Noblet V., Clavier J.-B., Meyer P. Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network. Comput. Biol. Med. 2018;95:43–54. doi: 10.1016/j.compbiomed.2018.02.004. [DOI] [PubMed] [Google Scholar]
  • 23.Hamm C.A., Beetz N.L., Savic L.J., Penzkofer T. Artificial intelligence and radiomics in MRI-based prostate diagnostics. Radiologe. 2020;60:48–55. doi: 10.1007/s00117-019-00613-0. [DOI] [PubMed] [Google Scholar]
  • 24.Shibata N., Tanito M., Mitsuhashi K., Fujino Y., Matsuura M., Murata H., Asaoka R. Development of a deep residual learning algorithm to screen for glaucoma from fundus photography. Sci. Rep. 2018;8:14665. doi: 10.1038/s41598-018-33013-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Oh E., Yoo T.K., Hong S. Artificial Neural Network Approach for Differentiating Open-Angle Glaucoma From Glaucoma Suspect Without a Visual Field Test. Investig. Opthalmology Vis. Sci. 2015;56:3957–3966. doi: 10.1167/iovs.15-16805. [DOI] [PubMed] [Google Scholar]
  • 26.Ting D.S.W., Cheung C.Y., Lim G., Tan G.S.W., Quang N.D., Gan A., Hamzah H., Garcia-Franco R., Yeo I.Y.S., Lee S.Y., et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations with Diabetes. JAMA. 2017;318:2211–2223. doi: 10.1001/jama.2017.18152. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Balyen L., Peto T. Promising Artificial Intelligence-Machine Learning-Deep Learning Algorithms in Ophthalmology. Asia Pac. J. Ophthalmol. 2019;8:264–272. doi: 10.22608/APO.2018479. [DOI] [PubMed] [Google Scholar]
  • 28.Urakawa T., Tanaka Y., Goto S., Matsuzawa H., Watanabe K., Endo N. Detecting intertrochanteric hip fractures with orthopedist-level accuracy using a deep convolutional neural network. Skelet. Radiol. 2018;48:239–244. doi: 10.1007/s00256-018-3016-3. [DOI] [PubMed] [Google Scholar]
  • 29.Petrie T., Samatham R., Witkowski A.M., Esteva A., Leachman S.A. Melanoma Early Detection: Big Data, Bigger Picture. J Investig. Dermatol. 2019;139:25–30. doi: 10.1016/j.jid.2018.06.187. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Esteva A., Kuprel B., Novoa R.A., Ko J., Swetter S.M., Blau H.M., Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542:115–118. doi: 10.1038/nature21056. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Ponomariov V., Chirila L., Apipie F.M., Abate R., Rusu M., Wu Z., Liehn E.A., Bucur I. Artificial Intelligence versus Doctors’ Intelligence: A Glance on Machine Learning Benefaction in Electrocardiography. Discoveries (Craiova) 2017;5:e76. doi: 10.15190/d.2017.6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Al’Aref S.J., Anchouche K., Singh G., Slomka P.J., Kolli K.K., Kumar A., Pandey M., Maliakal G., van Rosendael A.R., Beecy A.N., et al. Clinical applications of machine learning in cardiovascular disease and its relevance to cardiac imaging. Eur. Heart J. 2019;40:1975–1986. doi: 10.1093/eurheartj/ehy404. [DOI] [PubMed] [Google Scholar]
  • 33.Wang S., Yang D.M., Rong R., Zhan X., Xiao G. Pathology Image Analysis Using Segmentation Deep Learning Algorithms. Am. J. Pathol. 2019;189:1686–1698. doi: 10.1016/j.ajpath.2019.05.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Zook J.M., Catoe D., McDaniel J., Vang L., Spies N., Sidow A., Weng Z., Liu Y., Mason C.E., Alexander N., et al. Extensive sequencing of seven human genomes to characterize benchmark reference materials. Sci. Data. 2016;3:160025. doi: 10.1038/sdata.2016.25. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Li M., Lei P., Zeng B., Li Z., Yu P., Fan B., Wang C., Li Z., Zhou J., Hu S., et al. Coronavirus Disease (COVID-19): Spectrum of CT Findings and Temporal Progression of the Disease. Acad. Radiol. 2020;27:603–608. doi: 10.1016/j.acra.2020.03.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Naseem M., Akhund R., Arshad H., Ibrahim M.T. Exploring the Potential of Artificial Intelligence and Machine Learning to Combat COVID-19 and Existing Opportunities for LMIC: A Scoping Review. J. Prim. Care Community Health. 2020;11:2150132720963634. doi: 10.1177/2150132720963634. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Rajasinghe H.A., Miller L.E., Chahwan S.H., Zamora A.J. TOI 2. Underutilization of Artificial Intelligence by Vascular Specialists. J. Vasc. Surg. 2018;68:e148–e149. doi: 10.1016/j.jvs.2018.08.099. [DOI] [Google Scholar]
  • 38.Raffort J., Adam C., Carrier M., Lareyre F. Fundamentals in Artificial Intelligence for Vascular Surgeons. Ann. Vasc. Surg. 2020;65:254–260. doi: 10.1016/j.avsg.2019.11.037. [DOI] [PubMed] [Google Scholar]
  • 39.Kurugol S., Come C.E., Diaz A.A., Ross J.C., Kinney G.L., Black-Shinn J.L., Hokanson J.E., Budoff M.J., Washko G.R., Estepar R.S.J., et al. Automated quantitative 3D analysis of aorta size, morphology, and mural calcification distributions. Med. Phys. 2015;42:5467–5478. doi: 10.1118/1.4924500. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Graffy P.M., Liu J., O’Connor S., Summers R.M., Pickhardt P.J. Automated segmentation and quantification of aortic calcification at abdominal CT: Application of a deep learning-based algorithm to a longitudinal screening cohort. Abdom. Radiol. (NY) 2019;44:2921–2928. doi: 10.1007/s00261-019-02014-2. [DOI] [PubMed] [Google Scholar]
  • 41.Gastounioti A., Kolias V., Golemati S., Tsiaparasa N.N., Matsakoua A., Stoitsisa J.S., Kadoglouc N.P.E., Gkekasc C., Kakisisc J.D., Liapis C.D., et al. CAROTID—A web-based platform for optimal personalized management of atherosclerotic patients. Comput. Methods Programs Biomed. 2014;114:183–193. doi: 10.1016/j.cmpb.2014.02.006. [DOI] [PubMed] [Google Scholar]
  • 42.Kumar P.K., Araki T., Rajan J., Lairdd J.R., Nicolaidese A., Surifg J.S., Fellow AIMBE State-of-the-art review on automated lumen and adventitial border delineation and its measurements in carotid ultrasound. Comput. Methods Programs Biomed. 2018;163:155–168. doi: 10.1016/j.cmpb.2018.05.015. [DOI] [PubMed] [Google Scholar]
  • 43.Dos Santos F.L.C., Kolasa M., Terada M., Salenius J., Eskola H., Paci M. VASIM: An automated tool for the quantification of carotid atherosclerosis by computed tomography angiography. Int. J. Cardiovasc. Imaging. 2019;35:1149–1159. doi: 10.1007/s10554-019-01549-1. [DOI] [PubMed] [Google Scholar]
  • 44.Raffort J., Adam C., Carrier M., Ballaith A., Coscas R., Jean-Baptiste E., Hassen-Khodja R., Chakfé N., Lareyre F. Artificial intelligence in abdominal aortic aneurysm. J. Vasc. Surg. 2020;72:321–333.e1. doi: 10.1016/j.jvs.2019.12.026. [DOI] [PubMed] [Google Scholar]
  • 45.Dehmeshki J., Ion A., Ellis T., Doenz F., Jouannic A.-M., Qanadli S. Computer Aided Detection and measurement of peripheral artery disease. Stud. Health Technol. Inform. 2014;205:1153–1157. [PubMed] [Google Scholar]
  • 46.Ross E.G., Jung K., Dudley J.T., Li L., Leeper N.J., Shah N.H. Predicting Future Cardiovascular Events in Patients With Peripheral Artery Disease Using Electronic Health Record Data. Circ. Cardiovasc. Qual. Outcomes. 2019;12:e004741. doi: 10.1161/CIRCOUTCOMES.118.004741. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Winkler-Schwartz A., Bissonnette V., Mirchi N., Ponnudurai N., Yilmaz R., Ledwos N., Siyar S., Azarnoush H., Karlik B., Del Maestro R.F. Artificial Intelligence in Medical Education: Best Practices Using Machine Learning to Assess Surgical Expertise in Virtual Reality Simulation. J. Surg. Educ. 2019;76:1681–1690. doi: 10.1016/j.jsurg.2019.05.015. [DOI] [PubMed] [Google Scholar]
  • 48.Aeckersberg G., Gkremoutis A., Schmitz-Rixen T., Kaiser E. The relevance of low-fidelity virtual reality simulators compared with other learning methods in basic endovascular skills training. J. Vasc. Surg. 2019;69:227–235. doi: 10.1016/j.jvs.2018.10.047. [DOI] [PubMed] [Google Scholar]
  • 49.Hazenberg C.E.V.B., De Stegge W.B.A., Van Baal S.G., Moll F.L., Bus S.A. Telehealth and telemedicine applications for the diabetic foot: A systematic review. Diabetes Metab. Res. Rev. 2020;36:e3247. doi: 10.1002/dmrr.3247. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Ohura N., Mitsuno R., Sakisaka M., Terabe Y., Morishige Y., Uchiyama A., Okoshi T., Shinji I., Takushima A. Convolutional neural networks for wound detection: The role of artificial intelligence in wound care. J. Wound Care. 2019;28((Suppl. 10)):S13–S24. doi: 10.12968/jowc.2019.28.Sup10.S13. [DOI] [PubMed] [Google Scholar]
  • 51.Heit J.A. The epidemiology of venous thromboembolism in the community. Arter. Thromb. Vasc. Biol. 2008;28:370–372. doi: 10.1161/ATVBAHA.108.162545. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Moore A.J.E., Wachsmann J., Chamarthy M.R., Panjikaran L., Tanabe Y., Rajiah P. Imaging of acute pulmonary embolism: An update. Cardiovasc. Diagn. Ther. 2018;8:225–243. doi: 10.21037/cdt.2017.12.01. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Leung A.N., Bull T.M., Jaeschke R., Lockwood C.J., Boiselle P.M., Hurwitz L.M., James A.H., McCullough L.B., Menda Y., Paidas M.J., et al. An official American Thoracic Society/Society of Thoracic Radiology clinical practice guideline: Evaluation of suspected pulmonary embolism in pregnancy. Am. J. Respir. Crit. Care Med. 2011;184:1200–1208. doi: 10.1164/rccm.201108-1575ST. [DOI] [PubMed] [Google Scholar]
  • 54.Patil S., Henry J.W., Rubenfire M., Stein P.D. Neural network in the clinical diagnosis of acute pulmonary embolism. Chest. 1993;104:1685–1689. doi: 10.1378/chest.104.6.1685. [DOI] [PubMed] [Google Scholar]
  • 55.Huang S.C., Kothari T., Banerjee I., Chute C., Ball R.L., Borus N., Huang A., Patel B.N., Rajpurkar P., Irvin J., et al. PENet-a scalable deep-learning model for automated diagnosis of pulmonary embolism using volumetric CT imaging. NPJ Digit Med. 2020;3:61. doi: 10.1038/s41746-020-0266-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Tajbakhsh N., Gotway M.B., Liang J. Computer-Aided Pulmonary Embolism Detection Using a Novel Vessel-Aligned Multi-planar Image Representation and Convolutional Neural Networks. In: Navab N., Hornegger J., Wells W., Frangi A., editors. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015. MICCAI 2015. Volume 9350. Springer; Cham, Switzerland: 2015. Lecture Notes in Computer Science. [DOI] [Google Scholar]
  • 57.Serpen G., Tekkedil D., Orra M. A knowledge-based artificial neural network classifier for pulmonary embolism diagnosis. Comput. Biol. Med. 2008;38:204–220. doi: 10.1016/j.compbiomed.2007.10.001. [DOI] [PubMed] [Google Scholar]
  • 58.Özkan H., Osman O., Şahin S., Boz A.F. A novel method for pulmonary embolism detection in CTA images. Comput. Methods Programs Biomed. 2014;113:757–766. doi: 10.1016/j.cmpb.2013.12.014. [DOI] [PubMed] [Google Scholar]
  • 59.Park S.C., Chapman B.E., Zheng B. A multistage approach to improve performance of computer-aided detection of pulmonary embolisms depicted on CT images: Preliminary investigation. IEEE Trans. Biomed. Eng. 2011;58:1519–1527. doi: 10.1109/TBME.2010.2063702. [DOI] [PubMed] [Google Scholar]
  • 60.Das M., Mühlenbruch G., Helm A., Bakai A., Salganicoff M., Stanzel S., Liang J., Wolf M., Günther R.W., Wildberger J.E. Computer-aided detection of pulmonary embolism: Influence on radiologists’ detection performance with respect to vessel segments. Eur. Radiol. 2008;18:1350–1355. doi: 10.1007/s00330-008-0889-x. [DOI] [PubMed] [Google Scholar]
  • 61.Ay C., Pabinger I., Cohen A.T. Cancer-associated venous thromboembolism: Burden, mechanisms, and management. Thromb. Haemost. 2017;117:219–230. doi: 10.1160/TH16-08-0615. [DOI] [PubMed] [Google Scholar]
  • 62.Ben Lustig D., Rodriguez R., Wells P.S. Implementation and validation of a risk stratification method at The Ottawa Hospital to guide thromboprophylaxis in ambulatory cancer patients at intermediate-high risk for venous thrombosis. Thromb. Res. 2015;136:1099–1102. doi: 10.1016/j.thromres.2015.08.002. [DOI] [PubMed] [Google Scholar]
  • 63.Prandoni P., Lensing A.W.A., Piccioli A., Bernardi E., Simioni P., Girolami B., Marchiori A., Sabbion P., Prins M.H., Noventa F., et al. Recurrent venous thromboembolism and bleeding complications during anticoagulant treatment in patients with cancer and venous thrombosis. Blood. 2002;100:3484–3488. doi: 10.1182/blood-2002-01-0108. [DOI] [PubMed] [Google Scholar]
  • 64.Ay C., Pabinger I. VTE risk assessment in cancer. Who needs prophylaxis and who does not? Hamostaseologie. 2015;35:319–324. doi: 10.5482/HAMO-14-11-0066. [DOI] [PubMed] [Google Scholar]
  • 65.Pabinger I., van Es N., Heinze G., Posch F., Riedl J., Reitter E.-M., Nisio M.D., Cesarman-Maus G., Kraaijpoel N., Zielinski C.C., et al. A clinical prediction model for cancer-associated venous thromboembolism: A development and validation study in two independent prospective cohorts. Lancet Haematol. 2018;5:e289–e298. doi: 10.1016/S2352-3026(18)30063-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66.van Es N., Di Nisio M., Cesarman G., Kleinjan A., Otten H.-M., Mahé I., Wilts I.T., Twint D.C., Porreca E., Arrieta O., et al. Comparison of risk prediction scores for venous thromboembolism in cancer patients: A prospective cohort study. Haematologica. 2017;102:1494–1501. doi: 10.3324/haematol.2017.169060. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Ferroni P., Roselli M., Zanzotto F.M., Guadagni F. Artificial intelligence for cancer-associated thrombosis risk assessment. Lancet Haematol. 2018;5:e391. doi: 10.1016/S2352-3026(18)30111-X. [DOI] [PubMed] [Google Scholar]
  • 68.Huang C., Tian J., Yuan C., Zeng P., He X., Chen H., Huang Y., Huang B. Fully Automated Segmentation of Lower Extremity Deep Vein Thrombosis Using Convolutional Neural Network. Biomed Res. Int. 2019;2019:3401683. doi: 10.1155/2019/3401683. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 69.Willan J., Katz H., Keeling D. The use of artificial neural network analysis can improve the risk-stratification of patients presenting with suspected deep vein thrombosis. Br. J. Haematol. 2019;185:289–296. doi: 10.1111/bjh.15780. [DOI] [PubMed] [Google Scholar]
  • 70.Willan J., Keeling D. Reducing the need for diagnostic imaging in suspected cases of deep vein thrombosis. Br. J. Haematol. 2019;184:682–684. doi: 10.1111/bjh.15158. [DOI] [PubMed] [Google Scholar]
  • 71.Geersing G.J., Zuithoff N.P.A., Kearon C., Anderson D.R., ten Cate-Hoek A.J., Eif J.L., Bates S.M., Hoes A.W., Kraaijenhagen R.A., Oudega R., et al. Exclusion of deep vein thrombosis using the Wells rule in clinically important subgroups: Individual patient data meta-analysis. BMJ. 2014;348:g1340. doi: 10.1136/bmj.g1340. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Douma R.A., Le Gal G., Söhne M., Righini M., Kamphuisen P.W., Perrier A., Kruip M.J.H.A., Bounameaux H., Büller H.R., Roy P.-M. Potential of an age adjusted D-dimer cut-off value to improve the exclusion of pulmonary embolism in older patients: A retrospective analysis of three large cohorts. BMJ. 2010;340:c1475. doi: 10.1136/bmj.c1475. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Linkins L.-A., Ginsberg J.S., Bates S.M., Kearon C. Use of different D-dimer levels to exclude venous thromboembolism depending on clinical pretest probability. J. Thromb. Haemost. 2004;2:1256–1260. doi: 10.1111/j.1538-7836.2004.00824.x. [DOI] [PubMed] [Google Scholar]
  • 74.Wells P.S., Anderson D.R., Rodger M., Forgie M., Kearon C., Dreyer J., Kovacs G., Mitchell M., Lewandowski B., Kovacs M.J. Evaluation of D-dimer in the diagnosis of suspected deep-vein thrombosis. N. Engl. J. Med. 2003;349:1227–1235. doi: 10.1056/NEJMoa023153. [DOI] [PubMed] [Google Scholar]
  • 75.Wells P.S. Integrated strategies for the diagnosis of venous thromboembolism. J. Thromb. Haemost. 2007;5((Suppl. 1)):41–50. doi: 10.1111/j.1538-7836.2007.02493.x. [DOI] [PubMed] [Google Scholar]
  • 76.Deso S.E., Idakoji I.A., Muelly M.C., Kuo W.T. Creation of an iOS and Android Mobile Application for Inferior Vena Cava (IVC) Filters: A Powerful Tool to Optimize Care of Patients with IVC Filters. Semin. Intervent. Radiol. 2016;33:137–143. doi: 10.1055/s-0036-1583206. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77.Ni J.C., Shpanskaya K., Han M., Lee E.H., Do B.H., Kuo W.T., Yeom K.W., Wang D.S. Deep Learning for Automated Classification of Inferior Vena Cava Filter Types on Radiographs. J. Vasc. Interv. Radiol. 2020;31:66–73. doi: 10.1016/j.jvir.2019.05.026. [DOI] [PubMed] [Google Scholar]
  • 78.Ortega M.A., Fraile-Martínez O., García-Montero C., Álvarez-Mon M.A., Chaowen C., Ruiz-Grande F., Pekarek L., Monserrat J., Asúnsolo A., García-Honduvilla N., et al. Understanding Chronic Venous Disease: A Critical Overview of Its Pathophysiology and Medical Management. J. Clin. Med. 2021;10:3239. doi: 10.3390/jcm10153239. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79.Ma H., O’Donnell T.F., Rosen N.A., Iafrati M.D. The real cost of treating venous ulcers in a contemporary vascular practice. J. Vasc. Surg. Venous Lymphat. Disord. 2014;2:355–361. doi: 10.1016/j.jvsv.2014.04.006. [DOI] [PubMed] [Google Scholar]
  • 80.Drake T.M., Ritchie J.E. The Surgeon Will Skype You Now: Advancements in E-clinic. Ann. Surg. 2016;263:636–637. doi: 10.1097/SLA.0000000000001505. [DOI] [PubMed] [Google Scholar]
  • 81.Korobkova O.K. Problems of improving medical services in the rural areas of the Russian regions. Aktual’niye Problemy Ekonomiki i Prava. 2015;1:179–186. [Google Scholar]
  • 82.Fukaya E., Flores A.M., Lindholm D., Gustafsson S., Zanetti D., Ingelsson E., Leeper N.J. Clinical and Genetic Determinants of Varicose Veins. Circulation. 2018;138:2869–2880. doi: 10.1161/CIRCULATIONAHA.118.035584. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Bouharati I., El-Hachmi S., Babouche F., Khenchouche A., Bouharati K., Bouharati S. Radiology and management of recurrent varicose veins: Risk factors analysis using artificial neural networks. J. Med. Radiol. Pathol. Surg. 2018;5:1–5. doi: 10.15713/ins.jmrps.116. [DOI] [Google Scholar]
  • 84.Taylor R., Taylor A., Smyth J.V. Using an artificial neural network to predict healing times and risk factors for venous leg ulcers. J. Wound Care. 2002;11:101–105. doi: 10.12968/jowc.2002.11.3.26381. [DOI] [PubMed] [Google Scholar]
  • 85.Meulendijks A., De Vries F., Van Dooren A., Schuurmans M., Neumann H. A systematic review on risk factors in developing a first-time Venous Leg Ulcer. J. Eur. Acad. Dermatol. Venereol. 2019;33:1241–1248. doi: 10.1111/jdv.15343. [DOI] [PubMed] [Google Scholar]
  • 86.Tan K.H.M., Luo R., Onida S., Maccatrozzo S., Davies A.H. Venous Leg Ulcer Clinical Practice Guidelines: What is AGREEd? Eur. J. Vasc. Endovasc. Surg. 2019;57:121–129. doi: 10.1016/j.ejvs.2018.08.043. [DOI] [PubMed] [Google Scholar]
  • 87.Wilson E. Prevention and treatment of venous leg ulcers. Health Thends. 1989;21:97. [Google Scholar]
  • 88.Bhavani R., Jiji W. Varicose ulcer(C6) wound image tissue classification using multidimensional convolutional neural networks. Imaging Sci. J. 2019;67:1–11. doi: 10.1080/13682199.2019.1663083. [DOI] [Google Scholar]
  • 89.Bhavani R.R., Jiji G.W. Image registration for varicose ulcer classification using KNN classifier. Int. J. Comput. Appl. 2018;40:88–97. doi: 10.1080/1206212X.2017.1395108. [DOI] [Google Scholar]
  • 90.Zolotukhin I.A., Seliverstov E.I., Shevtsov Y.N., Avakiants I.P., Nikishkov A.S., Tatarintsev A.M., Kirienko A.I. Prevalence and Risk Factors for Chronic Venous Disease in the General Russian Population. Eur. J. Vasc. Endovasc. Surg. 2017;54:752–758. doi: 10.1016/j.ejvs.2017.08.033. [DOI] [PubMed] [Google Scholar]
  • 91.Shi Q., Chen W., Pan Y., Yin S., Fu Y., Mei J., Xue Z. An Automatic Classification Method on Chronic Venous Insufficiency Images. Sci. Rep. 2018;8:17952. doi: 10.1038/s41598-018-36284-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 92.Hoobi M.M., Qaswaa A. Detection System of Varicose Disease using Probabilistic Neural Network. [(accessed on 18 November 2021)];Int. J. Sci. Res. (IJSR) 2017 6:2591–2596. Available online: https://www.ijsr.net/get_abstract.php?paper_id=ART20173435. [Google Scholar]
