World Journal of Gastroenterology
Editorial

2018 Dec 7;24(45):5057–5062. doi: 10.3748/wjg.v24.i45.5057

Methodology to develop machine learning algorithms to improve performance in gastrointestinal endoscopy

Thomas de Lange 1,2, Pål Halvorsen 3,4, Michael Riegler 5,6
PMCID: PMC6288655  PMID: 30568383

Abstract

Assisted diagnosis using artificial intelligence has been a holy grail in medical research for many years, and recent developments in computer hardware have enabled the narrower area of machine learning to equip clinicians with potentially useful tools for computer assisted diagnosis (CAD) systems. However, training and assessing a computer’s ability to diagnose like a human are complex tasks, and successful outcomes depend on various factors. We have focused our work on gastrointestinal (GI) endoscopy because it is a cornerstone for diagnosis and treatment of diseases of the GI tract. About 2.8 million luminal GI (esophageal, stomach, colorectal) cancers are detected globally every year, and although substantial technical improvements in endoscopes have been made over the last 10-15 years, a major limitation of endoscopic examinations remains operator variation. This translates into a substantial inter-observer variation in the detection and assessment of mucosal lesions, causing among other things an average polyp miss-rate of 20% in the colon and thus the subsequent development of a number of post-colonoscopy colorectal cancers. CAD systems might eliminate this variation and lead to more accurate diagnoses. In this editorial, we point out some of the current challenges in the development of efficient computer-based digital assistants. We give examples of proposed tools using various techniques, identify current challenges, and give suggestions for the development and assessment of future CAD systems.

Keywords: Endoscopy, Artificial intelligence, Deep learning, Computer assisted diagnosis, Gastrointestinal


Core tip: Recent developments in computer hardware have enabled the narrower area of artificial intelligence known as machine learning to equip endoscopists with potentially powerful tools for computer assisted diagnosis systems. Success depends on various factors: optimizing the algorithms, ensuring the quality and size of the image database, and comparing performance with existing systems.

INTRODUCTION

Gastrointestinal (GI) endoscopy is a cornerstone for diagnosis and treatment of diseases in the GI tract. About 2.8 million luminal GI cancers (esophageal, stomach, colorectal) are detected globally every year, and many of these might be prevented through improved endoscopic performance and systematic high-quality screening in high incidence areas[1]. These cancers represent a substantial health challenge for society with a mortality rate of about 65%[2], and colorectal cancer is the third most common cause of cancer mortality among both women and men[3]. Despite substantial technical improvements in endoscopes over the last 10-15 years, a major limitation of endoscopic examinations is operator variation. This variation depends on operator skill, perceptual factors, personality characteristics, knowledge, and attitude[4]. This translates into a substantial inter-observer variation in the detection and assessment of mucosal lesions[5,6], leading to an average polyp miss-rate of 20% in the colon[7]. All of these factors can to some extent be alleviated by substantial educational efforts, but they cannot be eliminated entirely[8]. Thus, developing an automated computer-based support system for the detection and characterization of mucosal lesions would be an important contribution to eliminating the current variation in endoscopists’ performance.

Artificial intelligence (AI) is the area of computer science that aims to create intelligent machines that mimic human behavior, and assisted diagnosis using AI has been a holy grail in the field of medicine for many years. Such machines have long been the realm of fiction, but recent developments in computer hardware have enabled the narrower field of machine learning to develop potentially highly accurate computer assisted diagnosis (CAD) systems. At its most basic, machine learning is the practice of using algorithms to parse data, learn from the data, and then make a prediction, and in the medical domain such systems are used to detect or classify a disease. Research and development of such systems is currently under way in many medical domains like retina scans, various cancer screening systems, and skin cancer detection[9-11]. However, there exist methodological issues that need to be addressed both for creating and improving automated diagnosis algorithms.
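To make the parse-learn-predict loop described above concrete, the following minimal sketch trains a classifier on labeled feature vectors and predicts the class of a new sample. It uses scikit-learn with synthetic data as a stand-in for real image features; all values and names are illustrative assumptions, not part of any cited system.

```python
# Minimal parse-learn-predict sketch (illustrative assumptions throughout).
import numpy as np
from sklearn.linear_model import LogisticRegression

# "Parse data": feature vectors extracted from images, with expert labels.
features = np.random.rand(100, 16)            # 100 images, 16 features each
labels = np.random.randint(0, 2, size=100)    # 0 = normal, 1 = finding

# "Learn": fit a model to the annotated examples.
model = LogisticRegression().fit(features, labels)

# "Predict": classify a new, unseen image from its feature vector.
new_image_features = np.random.rand(1, 16)
print(model.predict(new_image_features))      # -> array([0]) or array([1])
```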

MACHINE LEARNING IN ENDOSCOPY

Automated detection of anomalies in the GI tract has been proposed for conditions such as Barrett’s esophagus, gastric cancer, angiectasia, and celiac disease, as well as for polyp detection and characterization, and a number of methods and algorithms have been tested in recent years[12-18]. These range from simpler traditional machine learning methods to more recently developed deep learning approaches[19,20].

An example of a simple system is a search-based system using various global features in the images[21]. It extracts (complex) image features like color histograms and textures and feeds these features into a classifier for determining whether an object is present or not. For example, such a system might determine the presence of an object by calculating the distance of the feature vector from the vectors in the model. An important advantage of systems based on simple methods is that they can be easier to understand and their results can be easier to explain to medical personnel[22-24].
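As an illustration of such a search-based pipeline, the sketch below computes a global color histogram feature and labels a new image by the distance of its feature vector to the vectors of annotated model images. This is a hedged example of the general technique, not the implementation of the cited system[21]; all function names and parameters are assumptions.

```python
# Global-feature, search-based classification sketch (illustrative only).
import numpy as np

def color_histogram(image, bins=8):
    """Global color feature: a joint RGB histogram, normalized to sum to 1."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def classify_by_distance(query, model_vectors, model_labels, k=3):
    """Label the query with the majority label of its k nearest model vectors."""
    distances = np.linalg.norm(model_vectors - query, axis=1)
    nearest = np.argsort(distances)[:k]
    labels = [model_labels[i] for i in nearest]
    return max(set(labels), key=labels.count)

# Example: label a new frame against four annotated model images.
frames = [np.random.rand(64, 64, 3) * 255 for _ in range(4)]
model_vectors = np.stack([color_histogram(f) for f in frames])
model_labels = ["polyp", "polyp", "normal", "normal"]
query = color_histogram(np.random.rand(64, 64, 3) * 255)
print(classify_by_distance(query, model_vectors, model_labels))
```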

The current state-of-the-art and most commonly used methods are based on deep neural networks. These networks work as an interconnected group of nodes, akin to the vast network of neurons in the human brain[25]. Such networks typically consist of an input and an output layer, as well as multiple hidden convolutional, pooling, fully connected, and normalization layers. Typically, each input image passes through the layers, and the network assigns each possible class a probability between 0 and 1. There exist several variations of deep neural networks. For image and video analysis, convolutional neural networks (CNNs) are the most common. CNNs can be used to perform either segmentation (the exact marking of a finding in the image[26]) or classification (a more global point of view on the image, such as a general statement like “this image contains a polyp”[22,27,28]). Another promising method for image analysis is generative adversarial networks (GANs). GANs consist of two neural networks competing with each other in a zero-sum game framework during the training phase. The generator network generates new data instances using an inverse convolutional network by upsampling random noise to an image. The other network, the discriminator, takes the generated image and the training set and checks for authenticity; that is, the discriminator decides whether the data belong to the actual training dataset or not. GANs can also be defined as conditional GANs, which take an image as input instead of random noise and transform this image into another image. This can be used to create, for example, segmentation masks. An example of a GAN-based method is described by Pogorelov et al[22,29]. The approach presented in their papers uses conditional GANs with a normal image from the colon as input, and the algorithm segments the finding in the image. This noisy segmentation is then cleaned in a post-processing step that yields a clear segmentation. Many of these approaches have yielded promising results regarding detection accuracy, with some achieving numbers above 90%, but many run too slowly to be used in a clinically useful system providing real-time feedback. Some comparisons of different approaches are given by Pogorelov et al[22,26] and Riegler et al[30].
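The classification pathway described above can be sketched in a few lines. Below is a minimal, hedged PyTorch example of a CNN that maps an input image through convolutional, pooling, and fully connected layers to a probability between 0 and 1 (e.g., that the image contains a polyp); the architecture, layer sizes, and input resolution are illustrative assumptions, not the networks used in the cited studies.

```python
# Minimal CNN classifier sketch in PyTorch (illustrative architecture).
import torch
import torch.nn as nn

class PolypClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # 112 -> 56
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 1),              # single logit
        )

    def forward(self, x):
        # Returns P(finding present) for each image in the batch.
        return torch.sigmoid(self.classifier(self.features(x)))

model = PolypClassifier()
batch = torch.randn(4, 3, 224, 224)                  # four dummy RGB frames
probabilities = model(batch)                         # values in (0, 1)
```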

IMAGE DATABASE QUALITY

A sufficient amount of data is vital in machine learning, and the creation of algorithms usually relies on large databases. This is especially true for deep learning, which is currently the standard for image analysis[31]. However, the quality of the database is also essential, and it is crucial that all the images and videos are annotated correctly. The computer learns from analyzing the given data, so erroneous learning will lead to incorrect diagnoses. Therefore, when collecting data and building a dataset, the following recommendations should be followed.

There are variations between observers, and to reduce this bias the ground truth assessment should involve at least three observers[32]. However, the required agreement between the observers and the degree of confidence are not known and require further study. The goal regarding diagnostic thresholds for such a technique is to reach a positive predictive value above 90% for correct classification of the lesions[33].
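A common way to pool several observers into a single ground-truth label is a majority vote, in the spirit of the 2 + 1 reader algorithm[32]. The sketch below is a minimal illustration under that assumption; the data structure and function name are hypothetical.

```python
# Majority-vote ground truth from multiple observers (illustrative sketch).
from collections import Counter

def ground_truth(labels_per_observer):
    """Return the majority label and the fraction of observers agreeing."""
    counts = Counter(labels_per_observer)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels_per_observer)

label, agreement = ground_truth(["polyp", "polyp", "normal"])
print(label, agreement)   # -> polyp 0.666... (two of three observers agree)
```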

A potential problem in machine learning is overfitting. Many of the datasets show obvious examples of medical findings, and the similarity of the different images often results in overfitting. Overfitting occurs when the learning algorithm fits the training data too closely and therefore also captures its noise; such a model typically shows low bias but high variance. Too many similar samples should therefore be avoided in order to prevent such “overtraining”. A diverse dataset is recommended to better enable correct disease detection in new data.
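One simple, widely used check for overfitting is to compare accuracy on the training data with accuracy on a held-out validation split: a large gap suggests the model has memorized the training data rather than learned to generalize. The sketch below illustrates this with scikit-learn on synthetic data standing in for endoscopy image features; everything here is an illustrative assumption.

```python
# Detecting overfitting via the train-validation accuracy gap (sketch).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=40, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
gap = model.score(X_train, y_train) - model.score(X_val, y_val)
print(f"train-validation accuracy gap: {gap:.2f}")  # large gap => overfitting
```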

Many datasets are limited in size, and many studies assess their systems using too few samples. Many argue that the dataset should be as large as possible[34], but others have shown that machine learning can also work on smaller datasets using transfer learning[30,35], which has recently found frequent use for medical image problems[36,37]. Note that there is no “one size fits all” answer: the amount of required training data depends on many aspects of the experiment, but a general rule of thumb for deep learning applications is around 1000 images per class. The Kvasir dataset[38], for example, provides at least 1000 images per class for different findings.
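As a hedged illustration of transfer learning, the sketch below reuses an ImageNet-pretrained network and retrains only its final layer for a small target dataset. The backbone choice, the class count, and the frozen-backbone strategy are assumptions for illustration, not the method of the cited works; the weights argument requires a recent torchvision version.

```python
# Transfer learning sketch: reuse a pretrained backbone, retrain the head.
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet (weights API needs torchvision >= 0.13).
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pretrained backbone so its weights are not updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer; 8 classes is an assumption (e.g., Kvasir findings).
model.fc = nn.Linear(model.fc.in_features, 8)
# Training now updates only model.fc on the (small) target dataset.
```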

One general problem is that several of the existing datasets are cumbersome to use in terms of permissions; for example, many of the sets listed in Table 1[38-44] are restricted. To enable subsequent comparisons, it is best to use an open dataset.

Table 1. Some existing image datasets for gastrointestinal endoscopy

| Dataset | Findings | Frames | Usage |
| --- | --- | --- | --- |
| CVC-356[39] | Polyps | 1706 | ©, by request |
| CVC-612[40] | Polyps | 1962 | ©, by request |
| CVC-12k[41] | Polyps | 11954 | ©, by request |
| Kvasir[38] | Polyps, esophagitis, ulcerative colitis, Z-line, pylorus, cecum, dyed polyp, dyed resection margins, stool | 8000 | Open academic |
| Nerthus[41] | Stool (categorization of bowel cleanliness) | 1350 | Open academic |
| GIANA’17[42] | Angiectasia | 600 | ©, by request |
| ASU-Mayo polyp database[43] | Polyps | 18781 | ©, by request |
| CVC-ClinicDB | Polyps | 612 | ©, by request |
| ETIS-Larib Polyp DB | Polyps | 1500 | ©, by request |
| KID[44] | Angiectasia, bleeding, inflammations, polyps | 2500 + 47 videos | Open academic |

The most important take-away message is that clean and complete data are one of the most important parts of a good detection system. Spending the time to create a high-quality database is therefore essential and directly determines the quality of all subsequent steps.

SYSTEM ASSESSMENT

Comparing published research is challenging, and an increasing number of research communities are targeting this problem by creating publicly available datasets and encouraging reproducible experiments. To enable full comparisons, not only should the same datasets be used, but they should also be split into training and test sets in the same way. Furthermore, the more information the better, and one should report as many of the common metrics as possible, as described by Pogorelov et al[38]. For detection accuracy, the raw numbers of true positives, true negatives, false positives, and false negatives are important, and metrics based on these, such as sensitivity (recall), precision, specificity, accuracy, Matthews correlation coefficient, and F1 score, should be calculated. Finally, a metric for processing speed in terms of time per image or frame should be included; although this depends on the hardware used, it indicates whether the system can run in real time.
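All of the metrics listed above can be derived from the four raw counts. The sketch below computes them with the standard formulas; the function and variable names are illustrative.

```python
# Standard detection metrics from raw TP/TN/FP/FN counts (sketch).
import math

def metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)                      # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = ((tp * tn - fp * fn) /
           math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy,
            "F1": f1, "MCC": mcc}

print(metrics(tp=90, tn=80, fp=20, fn=10))
```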

We must also emphasize that there is a difference in how anomaly detection is defined. In the area of computer science, detection per frame or image is the standard, but in the medical domain, reporting a detection per instance (at least once in a sequence of frames of the same finding) is common. If possible, one should include both definitions.
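As a hedged illustration of the difference between the two definitions, the sketch below collapses a sequence of per-frame detections into per-instance detections by counting each run of consecutive positive frames as one event; the representation is an illustrative assumption.

```python
# Per-frame vs. per-instance detection counting (illustrative sketch).
from itertools import groupby

def per_instance(frame_detections):
    """Count runs of consecutive positive frames as single detections."""
    return sum(1 for value, _ in groupby(frame_detections) if value)

frames = [False, True, True, False, True]   # 3 positive frames...
print(sum(frames))            # per-frame count: 3
print(per_instance(frames))   # per-instance count: 2 (two separate findings)
```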

CONCLUSION

Researchers have sought for many years to develop efficient AI tools to assist in medical diagnosis. Enabled by recent hardware developments, several research groups are now working on machine learning-based medical systems and have obtained promising results. Thus, we have observed a rapid increase in publications related to AI in GI endoscopy over the last two years. However, as described above, there are still large variations in the tested datasets, and the metrics used are often insufficient. In order to enable full comparisons between methods, the same datasets should be utilized, and as many of the common metrics as possible should be used[38]. Another limitation is that the lesion characterization systems rely on advanced endoscopic functionality like narrow-band imaging, endocytoscopy, or volumetric laser endomicroscopy, to which most endoscopy units do not have access, especially in low-income countries[45]. Moreover, it has not yet been proven that these techniques improve endoscopy performance, and validation in live endoscopies is still required. Therefore, there is still a long road ahead before such systems can be put into practice, and much research, development, and clinical testing still needs to be performed. To produce the best possible and most comparable results, the recommendations given here should be followed.

Footnotes

Manuscript source: Invited manuscript

Specialty type: Gastroenterology and hepatology

Country of origin: Norway

Peer-review report classification

Grade A (Excellent): 0

Grade B (Very good): 0

Grade C (Good): C, C

Grade D (Fair): 0

Grade E (Poor): 0

Conflict-of-interest statement: de Lange T declares no conflict of interests; Halvorsen P reports grants from the Norwegian Research Council during the conduct of the study and outside the submitted work; Riegler M declares no conflict of interests.

Peer-review started: September 7, 2018

First decision: October 4, 2018

Article in press: November 2, 2018

P-Reviewer: Hashimoto R, Shichijo S; S-Editor: Ma RY; L-Editor: A; E-Editor: Huang Y

Contributor Information

Thomas de Lange, Department of Transplantation, Oslo University Hospital, Oslo 0424, Norway; Institute of Clinical Medicine, University of Oslo, Oslo 0316, Norway. t.d.lange@medisin.uio.no.

Pål Halvorsen, Center for Digital Engineering Simula Metropolitan, Fornebu 1364, Norway; Department for Informatics, University of Oslo, Oslo 0316, Norway.

Michael Riegler, Center for Digital Engineering Simula Metropolitan, Fornebu 1364, Norway; Department for Informatics, University of Oslo, Oslo 0316, Norway.

References

  • 1.Brenner H, Kloor M, Pox CP. Colorectal cancer. Lancet. 2014;383:1490–1502. doi: 10.1016/S0140-6736(13)61649-9. [DOI] [PubMed] [Google Scholar]
  • 2.World Health Organization - International Agency for Research on Cancer. Estimated cancer incidence, mortality and prevalence world-wide in 2012. Available from: http://globocan.iarc.fr/Default.aspx. 2012.
  • 3.Torre LA, Bray F, Siegel RL, Ferlay J, Lortet-Tieulent J, Jemal A. Global cancer statistics, 2012. CA Cancer J Clin. 2015;65:87–108. doi: 10.3322/caac.21262. [DOI] [PubMed] [Google Scholar]
  • 4.Hewett DG, Kahi CJ, Rex DK. Efficacy and effectiveness of colonoscopy: how do we bridge the gap? Gastrointest Endosc Clin N Am. 2010;20:673–684. doi: 10.1016/j.giec.2010.07.011. [DOI] [PubMed] [Google Scholar]
  • 5.Lee SH, Jang BI, Kim KO, Jeon SW, Kwon JG, Kim EY, Jung JT, Park KS, Cho KB, Kim ES, et al. Endoscopic experience improves interobserver agreement in the grading of esophagitis by Los Angeles classification: conventional endoscopy and optimal band image system. Gut Liver. 2014;8:154–159. doi: 10.5009/gnl.2014.8.2.154. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.van Doorn SC, Hazewinkel Y, East JE, van Leerdam ME, Rastogi A, Pellisé M, Sanduleanu-Dascalescu S, Bastiaansen BA, Fockens P, Dekker E. Polyp morphology: an interobserver evaluation for the Paris classification among international experts. Am J Gastroenterol. 2015;110:180–187. doi: 10.1038/ajg.2014.326. [DOI] [PubMed] [Google Scholar]
  • 7.Lanspa SJ, Lynch HT. Quality indicators for colonoscopy and the risk of interval cancer. N Engl J Med. 2010;363:1371; author reply 1373. doi: 10.1056/NEJMc1006842. [DOI] [PubMed] [Google Scholar]
  • 8.Rondonotti E, Soncini M, Girelli CM, Russo A, Ballardini G, Bianchi G, Cantù P, Centenara L, Cesari P, Cortelezzi CC, et al. Can we improve the detection rate and interobserver agreement in capsule endoscopy? Dig Liver Dis. 2012;44:1006–1011. doi: 10.1016/j.dld.2012.06.014. [DOI] [PubMed] [Google Scholar]
  • 9.Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J, et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA. 2016;316:2402–2410. doi: 10.1001/jama.2016.17216. [DOI] [PubMed] [Google Scholar]
  • 10.Ciompi F, Chung K, van Riel SJ, Setio AAA, Gerke PK, Jacobs C, Scholten ET, Schaefer-Prokop C, Wille MMW, Marchianò A, et al. Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Sci Rep. 2017;7:46479. doi: 10.1038/srep46479. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542:115–118. doi: 10.1038/nature21056. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Swager AF, van der Sommen F, Klomp SR, Zinger S, Meijer SL, Schoon EJ, Bergman JJGHM, de With PH, Curvers WL. Computer-aided detection of early Barrett’s neoplasia using volumetric laser endomicroscopy. Gastrointest Endosc. 2017;86:839–846. doi: 10.1016/j.gie.2017.03.011. [DOI] [PubMed] [Google Scholar]
  • 13.Hirasawa T, Aoyama K, Tanimoto T, Ishihara S, Shichijo S, Ozawa T, Ohnishi T, Fujishiro M, Matsuo K, Fujisaki J, et al. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer. 2018;21:653–660. doi: 10.1007/s10120-018-0793-2. [DOI] [PubMed] [Google Scholar]
  • 14.Leenhardt R, Vasseur P, Li C, Saurin JC, Rahmi G, Cholet F, Becq A, Marteau P, Histace A, Dray X; CAD-CAP Database Working Group. A neural network algorithm for detection of GI angiectasia during small-bowel capsule endoscopy. Gastrointest Endosc. 2018 doi: 10.1016/j.gie.2018.06.036. [DOI] [PubMed] [Google Scholar]
  • 15.Mori Y, Kudo SE, Chiu PW, Singh R, Misawa M, Wakamura K, Kudo T, Hayashi T, Katagiri A, Miyachi H, et al. Impact of an automated system for endocytoscopic diagnosis of small colorectal lesions: an international web-based study. Endoscopy. 2016;48:1110–1118. doi: 10.1055/s-0042-113609. [DOI] [PubMed] [Google Scholar]
  • 16.Mori Y, Kudo SE, Misawa M, Saito Y, Ikematsu H, Hotta K, Ohtsuka K, Urushibara F, Kataoka S, Ogawa Y, et al. Real-Time Use of Artificial Intelligence in Identification of Diminutive Polyps During Colonoscopy: A Prospective Study. Ann Intern Med. 2018;169:357–366. doi: 10.7326/M18-0249. [DOI] [PubMed] [Google Scholar]
  • 17.Yuan Y, Meng MQ. Deep learning for polyp recognition in wireless capsule endoscopy images. Med Phys. 2017;44:1379–1389. doi: 10.1002/mp.12147. [DOI] [PubMed] [Google Scholar]
  • 18.Wang P, Xiao X, Glissen Brown JR, Berzin TM, Tu M, Xiong F, Hu X, Liu P, Song Y, Zhang D, et al. Development and validation of a deep-learning algorithm for the detection of polyps during colonoscopy. Nat Biomed Eng. 2018;2:741–748. doi: 10.1038/s41551-018-0301-3. [DOI] [PubMed] [Google Scholar]
  • 19.Zhou T, Han G, Li BN, Lin Z, Ciaccio EJ, Green PH, Qin J. Quantitative analysis of patients with celiac disease by video capsule endoscopy: A deep learning method. Comput Biol Med. 2017;85:1–6. doi: 10.1016/j.compbiomed.2017.03.031. [DOI] [PubMed] [Google Scholar]
  • 20.Yu L, Chen H, Dou Q, Qin J, Heng PA. Integrating Online and Offline Three-Dimensional Deep Learning for Automated Polyp Detection in Colonoscopy Videos. IEEE J Biomed Health Inform. 2017;21:65–75. doi: 10.1109/JBHI.2016.2637004. [DOI] [PubMed] [Google Scholar]
  • 21.Riegler M, Pogorelov K, Halvorsen P, de Lange T, Griwodz C, Johansen D, Schmidt PT, Eskeland SL. Eir - efficient computer aided diagnosis framework for gastrointestinal endoscopies. CBMI. 2016:1–6. [Google Scholar]
  • 22.Pogorelov K, Ostroukhova O, Jeppsson M, Espeland H, Griwodz C, de Lange T, Johansen D, Riegler M, Halvorsen P. Deep learning and hand-crafted feature based approaches for polyp detection in medical videos. IEEE CBMS. 2018 [Google Scholar]
  • 23.Hong D, Tavanapong W, Wong J, Oh J, de Groen PC. 3D Reconstruction of virtual colon structures from colonoscopy images. Comput Med Imaging Graph. 2014;38:22–33. doi: 10.1016/j.compmedimag.2013.10.005. [DOI] [PubMed] [Google Scholar]
  • 24.Riegler M, Larson M, Lux M, Kofler C. How ’how’ reflects what’s what: Content-based exploitation of how users frame social images. ACM MED MER. 2014:397–406. [Google Scholar]
  • 25.LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–444. doi: 10.1038/nature14539. [DOI] [PubMed] [Google Scholar]
  • 26.Pogorelov K, Riegler M, Eskeland SL, de Lange T, Johansen D, Griwodz C, Schmidt PT, Halvorsen P. Efficient disease detection in gastrointestinal videos - global features versus neural networks. Multimed Tools Appl. 2017;76:22493–22525. [Google Scholar]
  • 27.Shin Y, Balasingham I. Automatic polyp frame screening using patch based combined feature and dictionary learning. Comput Med Imaging Graph. 2018;69:33–42. doi: 10.1016/j.compmedimag.2018.08.001. [DOI] [PubMed] [Google Scholar]
  • 28.Alammari A, Islam AR, Oh J, Tavanapong W, Wong J, De Groen PC. Classification of ulcerative colitis severity in colonoscopy videos using CNN. ACM ICIME. 2017:139–144. [Google Scholar]
  • 29.Pogorelov K, Ostroukhova O, Petlund A, Halvorsen P, de Lange T, Espeland H, Kupka T, Griwodz C, Riegler M. Deep learning and handcrafted feature based approaches for automatic detection of angiectasia. IEEE BHI. 2018 [Google Scholar]
  • 30.Riegler M, Pogorelov K, Eskeland SL, Schmidt PT, Albisser Z, Johansen D, Griwodz C, Halvorsen P, de Lange T. From annotation to computer-aided diagnosis: Detailed evaluation of a medical multimedia system. ACM Trans Multimedia Comput Commun. 2017;13:26:1–26:26. [Google Scholar]
  • 31.Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88. doi: 10.1016/j.media.2017.07.005. [DOI] [PubMed] [Google Scholar]
  • 32.Gottlieb K, Hussain F. Voting for image scoring and assessment (VISA)--theory and application of a 2 + 1 reader algorithm to improve accuracy of imaging endpoints in clinical trials. BMC Med Imaging. 2015;15:6. doi: 10.1186/s12880-015-0049-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Rex DK, Kahi C, O’Brien M, Levin TR, Pohl H, Rastogi A, Burgart L, Imperiale T, Ladabaum U, Cohen J, et al. The American Society for Gastrointestinal Endoscopy PIVI (Preservation and Incorporation of Valuable Endoscopic Innovations) on real-time endoscopic assessment of the histology of diminutive colorectal polyps. Gastrointest Endosc. 2011;73:419–422. doi: 10.1016/j.gie.2011.01.023. [DOI] [PubMed] [Google Scholar]
  • 34.Chen XW, Lin X. Big data deep learning: challenges and perspectives. IEEE Access. 2014;2:514–525. [Google Scholar]
  • 35.Riegler M, Lux M, Griwodz C, Spampinato C, de Lange T, Eskeland SL, Pogorelov K, Tavanapong W, Schmidt PT, Gurin C, et al. Multimedia and medicine: Teammates for better disease detection and survival. ACM on Multimedia Conference. 2016:968–977. [Google Scholar]
  • 36.Shin Y, Balasingham I. Comparison of hand-craft feature based SVM and CNN based deep learning framework for automatic polyp classification. Conf Proc IEEE Eng Med Biol Soc. 2017;2017:3277–3280. doi: 10.1109/EMBC.2017.8037556. [DOI] [PubMed] [Google Scholar]
  • 37.Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Yao J, Mollura D, Summers RM. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans Med Imaging. 2016;35:1285–1298. doi: 10.1109/TMI.2016.2528162. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Pogorelov K, Randel KR, Griwodz C, Eskeland SL, de Lange T, Johansen D, Spampinato C, Dang-Nguyen DT, Lux M, Schmidt PT, et al. Kvasir: A multi-class image dataset for computer aided gastrointestinal disease detection. ACM on Multimedia Systems Conference. 2017:164–169. [Google Scholar]
  • 39.Bernal J, Aymeric H. MICCAI endoscopic vision challenge polyp detection and segmentation. Accessed December 2017. Available from: https://endovissub2017-giana.grand-challenge.org/home/ [Google Scholar]
  • 40.Bernal J, Sánchez FJ, Fernández-Esparrach G, Gil D, Rodríguez C, Vilariño F. WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Comput Med Imaging Graph. 2015;43:99–111. doi: 10.1016/j.compmedimag.2015.02.007. [DOI] [PubMed] [Google Scholar]
  • 41.Pogorelov K, Randel KR, de Lange T, Eskeland SL, Griwodz C, Johansen D, Spampinato C, Taschwer M, Lux M, Schmidt PT, et al. Nerthus: A bowel preparation quality video dataset. ACM on Multimedia Systems Conference. 2017:170–174. [Google Scholar]
  • 42.Bernal J, Aymeric H. Gastrointestinal image analysis (GIANA) angiodysplasia D&L challenge. Accessed November 2017. Available from: https://endovissub2017-giana.grand-challenge.org/home/ [Google Scholar]
  • 43.Tajbakhsh N, Gurudu SR, Liang J. Automated Polyp Detection in Colonoscopy Videos Using Shape and Context Information. IEEE Trans Med Imaging. 2016;35:630–644. doi: 10.1109/TMI.2015.2487997. [DOI] [PubMed] [Google Scholar]
  • 44.Koulaouzidis A, Iakovidis DK, Yung DE, Rondonotti E, Kopylov U, Plevris JN, Toth E, Eliakim A, Wurm Johansson G, Marlicz W, et al. KID Project: an internet-based digital video atlas of capsule endoscopy for research purposes. Endosc Int Open. 2017;5:E477–E483. doi: 10.1055/s-0043-105488. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Wang Z, Meng Q, Wang S, Li Z, Bai Y, Wang D. Deep learning-based endoscopic image recognition for detection of early gastric cancer: a Chinese perspective. Gastrointest Endosc. 2018;88:198–199. doi: 10.1016/j.gie.2018.01.029. [DOI] [PubMed] [Google Scholar]
