Summary
Objective: To highlight noteworthy papers that are representative of 2019 developments in the fields of sensors, signals, and imaging informatics.
Method: A broad literature search was conducted in January 2020 using PubMed. Separate predefined queries were created for sensors/signals and imaging informatics using a combination of Medical Subject Headings (MeSH) terms and keywords. Section editors reviewed the titles and abstracts of both sets of results. Papers were assessed by two co-editors on a three-point Likert scale, ranging from 1 (should be included) to 3 (do not include). Papers with an average score of 2 or less were then read by all three section editors, and the group nominated top papers by consensus. These candidate best papers were then rated by at least six external reviewers.
Results: The query related to signals and sensors returned a set of 255 papers from 140 unique journals. The imaging informatics query returned a set of 3,262 papers from 870 unique journals. Based on titles and abstracts, the section co-editors jointly filtered the list down to 50 papers, from which 15 candidate best papers were nominated after discussion. A composite rating after external review determined four best papers, which were then approved by consensus of the International Medical Informatics Association (IMIA) Yearbook editorial board. These best papers represent different international groups and journals.
Conclusions: The four best papers represent state-of-the-art approaches for processing, combining, and analyzing heterogeneous sensor and imaging data. These papers demonstrate the use of advanced machine learning techniques to improve comparisons between images acquired at different time points, fuse information from multiple sensors, and translate images from one modality to another.
Keywords: Sensors, signals, imaging informatics, medical informatics
1 Introduction
The field of sensors, signals, and imaging informatics (SSII) is evolving at a rapid pace, with new technologies such as wearable biosensors and novel imaging modalities that measure a growing number of physiological processes and provide increasingly detailed anatomical and functional information. Developments in sensor and signal informatics include the collection, processing, management, and analysis of data from wearable devices and embedded environmental sensors using technologies such as the Internet-of-Things (IoT) [1, 2]. A number of these developments relate to the integration of data from multiple sensors and the application of advanced signal processing algorithms to enable patient monitoring, health maintenance, rehabilitation, and disease management. The field of imaging informatics relates to the effective use of imaging and imaging-derived information to understand diseases and improve patient outcomes [3, 4]. Recent developments include advancing the computerized reasoning of images by combining data from multiple sources (e.g., other types of images and textual data), improving model training, and focusing on model validation and interpretation [5].
This synopsis aims to identify notable papers and trends from the 2019 literature on sensors, signals, and imaging informatics. We summarize the process used to select the candidate best papers. Emerging trends are described based on a high-level synthesis of the 50 papers retained after the initial screening. Noteworthy topics related to ethics and the COVID-19 pandemic are also discussed. The synopsis concludes with a discussion of the limitations of the selection process and a summary of the findings.
2 Paper Selection Process
Section editors surveyed the SSII literature published in 2019 to nominate candidate best papers that were representative of cutting-edge research, had potential for high clinical impact, and were produced by diverse research groups from around the world.
The process of searching the literature for candidate best papers remains challenging, given the broad nature of the SSII category. The goal of the search process was to generate a manageable list of papers for the section co-editors to review, while capturing the breadth of the work conducted during the year across a diverse collection of journals. In prior years, papers were identified using both PubMed and Web of Science; this year, the search was limited to PubMed. The search was performed using a combination of Medical Subject Headings (MeSH) and relevant keywords that were drawn from queries executed in previous years and updated by the section co-editors. For sensors and signals, terms encompassed computer-assisted processing, physiologic monitoring, and biosensing techniques (e.g., electroretinogram, photoplethysmography). For imaging informatics, terms included imaging modalities (e.g., computed tomography, electrocardiogram), image processing and enhancement (e.g., segmentation, normalization), feature extraction (e.g., natural language processing, radiomics), standards and interoperability, multi-modal data integration (e.g., radiology-pathology correlation), and computer-based reasoning (e.g., machine learning, computer-aided diagnosis). Papers that touched upon the fields of SSII in any clinical domain were included. The query was further restricted to journal articles written in English, describing original research, including an abstract, and published in 2019.
The original search was performed during the first week of January 2020 and returned a set of 3,517 papers across 932 unique journals. Basic metadata associated with each paper (title, authors, journal, abstract, and identifiers) were retrieved. Citations were imported into Google Sheets, which allowed real-time concurrent review of the citations by all section editors.
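To make the mechanics of such a search concrete, the sketch below submits a simplified term combination to the publicly available NCBI E-utilities esearch endpoint and retrieves the matching PubMed identifiers. The terms shown are illustrative placeholders assembled for this example; they approximate, but do not reproduce, the predefined SSII queries described above.

```python
# Minimal sketch of a PubMed search via the NCBI E-utilities API.
# The query terms below are illustrative only and do not reproduce the
# predefined SSII queries used for this survey.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

query = (
    '("Diagnostic Imaging"[MeSH] OR "Image Processing, Computer-Assisted"[MeSH] '
    'OR radiomics[Title/Abstract] OR "computer-aided diagnosis"[Title/Abstract]) '
    'AND english[Language] AND journal article[Publication Type] '
    'AND hasabstract AND 2019[Date - Publication]'
)

params = {
    "db": "pubmed",
    "term": query,
    "retmax": 5000,   # upper bound on the number of returned PMIDs
    "retmode": "json",
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
result = response.json()["esearchresult"]
pmids = result["idlist"]
print(f"{result['count']} citations matched; retrieved {len(pmids)} PMIDs")
```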
The identification of candidate best papers proceeded in three phases. In the first phase, each section co-editor was assigned a set of papers and identified relevant papers based on their titles and/or abstracts. The assignments were made such that each paper was reviewed by two section co-editors. Papers were rated on a Likert scale from 1 to 3, where 3 was assigned to papers that were irrelevant and should be removed from consideration, and 1 was given to papers that were highly relevant and should be considered. Papers with an average score of 2 or below (n=50) were kept. In the second phase, each section co-editor independently reviewed the remaining 50 papers and again assigned a score. The co-editors discussed the papers based on the scoring results and nominated 15 candidate best papers for the SSII section. In the third phase, the candidate best papers were sent to a group of external reviewers, comprising other Yearbook editorial members and representative researchers from the SSII community who were not involved in the production of the IMIA Yearbook. Each paper was considered by at least six reviewers and assigned an overall score based on criteria such as scientific impact, quality of content, originality, and clarity. The section co-editors nominated four best papers, taking the external scores into account while selecting papers representative of the diverse research, institutions, and journals in the field. Finally, the IMIA Yearbook editorial board discussed the nominations, identified any potential overlap with other sections, and settled on four best papers for the SSII section (see Table 1).
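The first-phase screening rule, keeping papers whose two title/abstract ratings average 2 or less, reduces to a simple computation; the following sketch, using fictitious ratings, merely makes that arithmetic explicit.

```python
# Illustration of the first-phase screening rule: each paper receives two
# ratings on a 1 (include) to 3 (exclude) scale, and papers whose mean rating
# is <= 2 move on to full review. The ratings below are fictitious.
first_phase_ratings = {
    "pmid:0000001": (1, 2),  # mean 1.5 -> kept
    "pmid:0000002": (3, 2),  # mean 2.5 -> excluded
    "pmid:0000003": (2, 2),  # mean 2.0 -> kept (the threshold is inclusive)
}

kept = [
    pmid
    for pmid, scores in first_phase_ratings.items()
    if sum(scores) / len(scores) <= 2
]
print(kept)  # ['pmid:0000001', 'pmid:0000003']
```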
Table 1. Best paper selection of articles for the IMIA Yearbook of Medical Informatics 2020 in the section ‘Sensors, Signals, and Imaging Informatics’. The articles are listed in alphabetical order of the first author’s surname.
Section: Sensors, Signals, and Imaging Informatics
▪ Armanious K, Jiang C, Fischer M, Küstner T, Hepp T, Nikolaou K, Gatidis S, Yang B. MedGAN: Medical image translation using GANs. Comput Med Imaging Graph 2020;79:101684.
▪ Chandra BS, Sastry CS, Jana S. Robust heartbeat detection from multimodal data via CNN-based generalizable information fusion. IEEE Trans Biomed Eng 2019;66(3):710-7.
▪ de Vos BD, Berendsen FF, Viergever MA, Sokooti H, Staring M, Išgum I. A deep learning framework for unsupervised affine and deformable image registration. Med Image Anal 2019;52:128-43.
▪ Zhu T, Pimentel MAF, Clifford GD, Clifton DA. Unsupervised Bayesian inference to fuse biosignal sensory estimates for personalizing care. IEEE J Biomed Health Inform 2019;23(1):47-58.
3 Emerging Trends and Noteworthy Topics
Based on a review of the 50 papers retained after screening, three emerging trends are highlighted. Two noteworthy topics, this year's special topic of ethics in health informatics and the role of SSII in the COVID-19 pandemic, are also discussed.
3.1 Emerging Trend 1: Selective Fusion of Multiple Signals/Domains to Improve Classification
As the number of biomedical signals used to characterize the physiological state of a patient grows, effective approaches to fuse this information are needed, given that many of the signals may be irrelevant or prone to uncertainty and measurement variability [6]. Several papers proposed data fusion techniques that identified reliable signals and distinguished the relative importance of different signals. Chandra et al. [7], selected as a best paper, demonstrate an innovative way of fusing signals using convolutional neural networks (CNNs) for robust heartbeat detection. In the monitoring of cardiovascular diseases, especially in critical-care settings, accurate heartbeat detection is key but error-prone when only a single physiological signal is monitored. The authors propose a CNN-based approach that directly fuses information from multiple physiological signals to estimate heartbeat locations without the need for any intermediate detection. Another example is FusionAtt, a deep fusional attention network by Yuan and Jia [8], which combines a unified fusional attention neural network with a multi-view convolutional encoder to learn channel-aware representations of multi-channel biosignals. FusionAtt outperformed baseline approaches on two clinical tasks: seizure detection using data from 23-channel scalp electroencephalography and sleep stage classification using data from 14-channel polysomnography.
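For readers who wish to experiment with this style of fusion, the minimal PyTorch sketch below illustrates the general idea of convolutional fusion across stacked physiological channels with a per-sample sigmoid output. The layer sizes and the two-channel input (e.g., ECG plus arterial blood pressure) are illustrative assumptions and do not reproduce the architecture published by Chandra et al. [7].

```python
# Minimal sketch of multi-signal fusion for beat detection: 1D convolutions
# operate jointly on stacked physiological channels, and a sigmoid head emits
# a per-sample beat probability. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FusionBeatDetector(nn.Module):
    def __init__(self, n_channels: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=11, padding=5),  # fuses channels
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=11, padding=5),
            nn.ReLU(),
        )
        self.head = nn.Conv1d(32, 1, kernel_size=1)  # per-sample logit

    def forward(self, x):                 # x: (batch, n_channels, n_samples)
        logits = self.head(self.features(x))
        return torch.sigmoid(logits)      # beat probability per sample

# Example: 4 synthetic two-channel windows of 1,000 samples each.
signals = torch.randn(4, 2, 1000)
probabilities = FusionBeatDetector()(signals)
print(probabilities.shape)  # torch.Size([4, 1, 1000])
```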
3.2 Emerging Trend 2: Improvements in Model Training
Multiple candidate papers addressed the challenge of obtaining large quantities of reliable and consistent labels for training machine learning algorithms. In a selected best paper, Zhu et al. [9] address the difficulties of reliable, consistent, and real-time labeling of the high data volumes arising from medical sensors used for diagnosis and patient-specific treatments. Their study provides a valuable method for aggregating the labels of several imperfect automated algorithms, generating highly reliable labels to better support and improve decisions in personalized care. Addressing a similar problem in digital pathology, Campanella et al. [10] present a multiple instance learning-based approach to train deep neural networks that generate tile-level features. These features are then fed into a recurrent neural network to generate the diagnostic classification for the whole slide. They showed excellent classification performance across three domains without manual pixel-level annotation. Another strategy for dealing with limited datasets is to improve the quality of existing images or to generate synthetic images in place of missing ones. Armanious et al. [11], selected as a best paper, describe a novel generative adversarial network (GAN) that generates synthetic images using several fully convolutional encoder-decoder networks and modified perceptual and style-transfer losses. They demonstrate the effectiveness of their approach on a variety of applications, including inter-modality translation, image de-noising, and motion-artifact correction.
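As an illustration of the weak-supervision idea underlying the pathology work of Campanella et al. [10], the sketch below implements a deliberately simplified max-instance form of multiple instance learning, in which the highest-scoring tile stands in for the whole slide during training. The tiny tile encoder and all dimensions are assumptions made for brevity; the published pipeline additionally aggregates tile features with a recurrent network.

```python
# Simplified max-instance multiple instance learning for slide classification:
# only the slide-level label is known, so the tile with the highest predicted
# probability represents the slide during training. Sizes are illustrative.
import torch
import torch.nn as nn

tile_encoder = nn.Sequential(          # maps a 3x64x64 tile to a single logit
    nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(tile_encoder.parameters(), lr=1e-3)

# One "slide" = a bag of tiles with a single weak label (1 = tumour present).
tiles = torch.randn(50, 3, 64, 64)     # 50 tiles from one slide
slide_label = torch.tensor([1.0])

tile_logits = tile_encoder(tiles).squeeze(1)   # (50,) per-tile scores
top_logit = tile_logits.max()                  # most suspicious tile stands in for the slide
loss = criterion(top_logit.unsqueeze(0), slide_label)
loss.backward()
optimizer.step()
```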
3.3 Emerging Trend 3: Novel Approaches to Handle Time Series and Longitudinal Data
As the number of data collections containing time series and longitudinal data grows, new methods for managing and analyzing this type of data are needed. Candidate papers touched upon methods for processing and reasoning on complex time series and longitudinal data. Yildirim et al. [12] describe a way to compress the representation of entire Holter monitor recordings using a deep convolutional auto-encoder. The compressed electrocardiogram signals are then fed into long short-term memory classifiers to detect arrhythmia. In experiments using the MIT-BIH arrhythmia database, they were able to effectively compress the signals while achieving high classification accuracy. In medical image analysis, image registration is a frequent task in computer-aided diagnosis, for example when comparing images acquired at different time points. The paper by de Vos et al. [13] describes an entire framework for performing registration tasks using CNNs. They recast conventional intensity-based image registration into a learning problem, which substantially speeds up the registration process once training is complete.
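The "compress, then classify" strategy of Yildirim et al. [12] can be sketched in a few lines: a 1D convolutional auto-encoder shortens an ECG window into a compact code, which is then treated as a sequence by an LSTM classifier. All layer sizes, the compression ratio, and the number of rhythm classes below are illustrative assumptions rather than the published configuration.

```python
# Sketch of "compress, then classify": a 1D convolutional auto-encoder reduces
# an ECG window to a short code, and an LSTM classifier operates on that code.
# Dimensions and the number of classes are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=9, stride=4, padding=4), nn.ReLU(),
    nn.Conv1d(8, 16, kernel_size=9, stride=4, padding=4), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose1d(16, 8, kernel_size=8, stride=4, padding=2), nn.ReLU(),
    nn.ConvTranspose1d(8, 1, kernel_size=8, stride=4, padding=2),
)
lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
classifier = nn.Linear(32, 5)          # e.g., 5 rhythm classes

ecg = torch.randn(4, 1, 1024)          # 4 windows of 1,024 samples
code = encoder(ecg)                    # (4, 16, 64): 16x shorter than the input
reconstruction = decoder(code)         # used for the reconstruction loss
_, (hidden, _) = lstm(code.transpose(1, 2))   # treat the code length as a sequence
logits = classifier(hidden[-1])        # (4, 5) class scores
print(code.shape, reconstruction.shape, logits.shape)
```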
3.4 Noteworthy Topic 1: Ethics in SSII
This year’s special topic of ethics in health informatics is particularly pertinent to the SSII field. Rapid advances in the performance of algorithms and their potential applications in clinical practice have raised multiple ethical concerns. The imaging informatics community, for example, has been grappling with issues surrounding:
The use of patient data to develop and commercialize these models [14];
The (lack of) interpretability and transparency on how an algorithm arrived at its output [15];
The potential sources of bias that may cloud a model’s predictions and reinforce social inequality [16].
The reader is referred to recent commentaries [17, 18], a joint society statement [19], and reviews [20], which comprehensively address relevant ethical issues.
3.5 Noteworthy Topic 2: Developments in SSII and COVID-19 Pandemic
At the time of this writing, the world is grappling with the COVID-19 pandemic, which has led to over 400,000 confirmed deaths worldwide (as of June 2020). The field of SSII is playing an active role in the international response to the outbreak. For example, algorithms that analyze chest x-rays and computed tomography scans are being developed to differentiate between COVID-19 and other forms of pneumonia [21]. Smartphone-embedded sensors are also being used to remotely monitor COVID-19-positive patients and provide a means for contact-tracing applications [22]. Nevertheless, early studies have had significant limitations, including small sample sizes, lack of external validation, and a high risk of bias [23]. The following challenges in responding to COVID-19 highlight opportunities for further advancement in SSII:
Improving interoperability of clinical imaging archives so that relevant clinical and imaging data can be readily shared across institutions and geopolitical boundaries to address public health needs;
Developing reporting standards to encourage transparency and reproducibility of algorithms and build trust with clinical users;
Building a more robust IoT to improve the real-time monitoring and response to new outbreaks;
Ensuring that algorithms that successfully detect COVID-19-positive cases today can be adapted and reused to diagnose similar diseases in the future.
4 Summary and Conclusion
In summary, the SSII field has had a productive year, with many innovative, diverse, and impactful works that make full use of complex, weakly labeled datasets. Deep learning-based approaches have matured, and more emphasis is being placed on how these models are trained, how they can be used to reason on time series and longitudinal data, and how they perform on external datasets and in comparison to human readers. The section co-editors would also like to note some limitations of the selection process: (i) the comparatively low number of results for sensors and signals compared to imaging informatics; (ii) the sole focus on PubMed citations, which may exclude articles published in technical journals; and (iii) the need to further refine the imaging informatics query to reduce the high number of false-positive hits. SSII is playing an important role in current events, particularly given the need to monitor the long-term effects of COVID-19 on patients (e.g., the multisystem inflammation identified in children [24]). Nevertheless, the field must take caution and not sacrifice scientific rigor in a rush to translate these novel developments. The section co-editors look forward to seeing the impact that the field of SSII will make on public health in the coming year.
Acknowledgments
The section editors would like to thank Adrien Ugon for supporting the external review process, and the external reviewers for their input on the candidate best papers.
Appendix: Content Summaries of Best Papers for the ‘Sensors, Signals, and Imaging Informatics’ Section of the 2020 IMIA Yearbook of Medical Informatics.
Armanious K, Jiang C, Fischer M, Küstner T, Hepp T, Nikolaou K, Gatidis S, Yang B
MedGAN: Medical image translation using GANs
Comput Med Imaging Graph 2020;79:101684
Nearly 20 years ago, the European Cross-Language Evaluation Forum (CLEF) campaign started to regard medical images as being analogous to “language”. Given the diversity of imaging modalities, there are many image-based “languages”, and “translation” between them is a frequent task in medical applications. This paper presents a novel approach to this problem, in which generative adversarial networks (GANs) are used to synthetically generate images based on a template in a different “language”. The potential applications for this modeling approach include, but are not limited to, inter-modality translation, image de-noising, and motion-artifact correction. Using the MedGAN framework, all of these tasks can be performed without task-specific training. A new generator architecture, CasNet, is introduced. CasNet concatenates several encoder-decoder pairs (similar to stacked U-nets) and captures both high- and low-frequency components of the desired target modality by combining the adversarial framework with non-adversarial losses. The authors analyzed individual loss functions and quantitatively showed the superiority of MedGAN over existing translation algorithms. The network was applied to three different tasks without any task-specific adaptation: positron emission tomography (PET) to computed tomography (CT) translation, PET image denoising, and magnetic resonance imaging (MRI) motion-artifact correction. Notably, five experienced radiologists confirmed the equivalence of the synthetically generated images to the original image recordings.
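To convey how the adversarial and non-adversarial terms interact, the sketch below assembles a MedGAN-style composite generator loss from an adversarial term, a pixel-wise L1 term, and a perceptual term computed on intermediate features. The miniature generator, discriminator, feature extractor, and loss weights are placeholders chosen for brevity; they are not the published CasNet architecture or its exact losses.

```python
# Conceptual sketch of a composite generator loss for image translation:
# adversarial + pixel-wise L1 + perceptual terms. All modules and weights are
# placeholders, not the published CasNet architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Sequential(             # stands in for the chained encoder-decoders
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1),
)
discriminator = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 1),
)
feature_extractor = nn.Sequential(     # proxy for a pre-trained perceptual network
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
)
bce = nn.BCEWithLogitsLoss()

source = torch.randn(2, 1, 64, 64)     # e.g., PET input
target = torch.randn(2, 1, 64, 64)     # e.g., corresponding CT
translated = generator(source)

adv = bce(discriminator(translated), torch.ones(2, 1))          # fool the critic
pixel = F.l1_loss(translated, target)                           # low-frequency fidelity
perceptual = F.l1_loss(
    feature_extractor(translated), feature_extractor(target)    # high-frequency detail
)
generator_loss = adv + 20.0 * pixel + 10.0 * perceptual
generator_loss.backward()              # discriminator updates are omitted in this sketch
```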
Chandra BS, Sastry CS, Jana S
Robust heartbeat detection from multimodal data via CNN-based generalizable information fusion
IEEE Trans Biomed Eng 2019 Mar;66(3):710-7
This paper demonstrates an innovative way of fusing signals using convolutional neural networks (CNNs) for robust heartbeat detection. In the monitoring of cardiovascular diseases, especially in critical-care situations, accurate heartbeat detection is key but error-prone when only a single physiological signal, e.g., the electrocardiogram (ECG), is monitored. Multi-signal detectors exist but do not systematically exploit inter-signal correlations. To fill this gap, the authors propose a CNN-based approach that directly fuses information from multiple physiological signals to estimate heartbeat locations without the need for any intermediate detection. They employ ECG and blood pressure signals from the PhysioNet 2014 Challenge as well as the MIT-BIH arrhythmia database for network training. Their CNN learns a set of linear filters to extract features, temporally fusing information from multiple signals. A fully connected network with a sigmoid output function maps these features to heartbeat predictions. The trained networks achieve a score of 94% using blood pressure and ECG signals on the PhysioNet 2014 dataset, and 99.92% using two ECG channels on the MIT-BIH arrhythmia database. These results compare well with previously reported database-specific results. In conclusion, the CNN-based approach is generalizable, robust, and efficient in detecting heartbeat locations from multiple signals. Furthermore, the authors suggest their technique as an accurate method to estimate heartbeats on sparse datasets.
de Vos BD, Berendsen FF, Viergever MA, Sokooti H, Staring M, Išgum I
A deep learning framework for unsupervised affine and deformable image registration
Med Image Anal 2019 Feb;52:128-43
Image registration is a frequent task in medical imaging and computer-aided diagnosis. This paper describes an entire framework for performing registration tasks using CNNs. The framework has innovative features, including support for 1) model-based (affine) as well as elastic (deformable) image registration, 2) n-dimensional images, 3) unsupervised training, 4) multi-modal data, and 5) coarse-to-fine hierarchical architectures. The authors performed a comprehensive evaluation, demonstrating a considerable speedup over the processing time of conventional methods while yielding performance similar to state-of-the-art methods. The core contribution is the framework called Deep Learning Image Registration (DLIR). The DLIR training procedure resembles a conventional iterative image registration framework, in which fixed and moving images are compared, transformed, and resampled into the warped image, but adds a CNN that allows unsupervised training. In other words, it recasts conventional intensity-based image registration into a learning problem, which substantially speeds up the registration process once training is complete. The framework is flexible, allowing for a variety of architectures by stacking multiple CNNs into larger compositions. Among others, the authors demonstrate and evaluate a four-dimensional (4D) task using the publicly available DIR-Lab 4D chest CT data.
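A minimal sketch of the underlying idea, unsupervised affine registration in two dimensions, is given below: a small CNN predicts affine parameters from the fixed and moving images, a differentiable resampler warps the moving image, and the image similarity itself serves as the training loss, so no ground-truth transformations are required. The tiny network, 2D setting, and mean-squared-error similarity are simplifying assumptions relative to the DLIR framework.

```python
# Minimal sketch of unsupervised affine registration: a CNN predicts affine
# parameters, the moving image is warped with a differentiable resampler, and
# image similarity is the training loss. Sizes and the 2D setting are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineRegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(32, 6)
        # Initialise to the identity transform so training starts from "no motion".
        nn.init.zeros_(self.fc.weight)
        self.fc.bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, fixed, moving):
        theta = self.fc(self.cnn(torch.cat([fixed, moving], dim=1)))
        theta = theta.view(-1, 2, 3)                       # 2D affine parameters
        grid = F.affine_grid(theta, fixed.shape, align_corners=False)
        return F.grid_sample(moving, grid, align_corners=False)

model = AffineRegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

fixed = torch.rand(1, 1, 128, 128)
moving = torch.rand(1, 1, 128, 128)
warped = model(fixed, moving)
loss = F.mse_loss(warped, fixed)       # unsupervised: image similarity is the loss
loss.backward()
optimizer.step()
```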
Zhu T, Pimentel MAF, Clifford GD, Clifton DA
Unsupervised Bayesian inference to fuse biosignal sensory estimates for personalizing care
IEEE J Biomed Health Inform 2019 Jan;23(1):47-58
This paper addresses the difficulties and challenges of reliable, consistent, and real-time labeling of the high data volumes arising from medical sensors used for diagnosis and patient-specific treatments. The authors present two fully Bayesian approaches to the Bayesian continuous-valued label aggregator (BCLA) that use Gibbs sampling to fuse continuous-valued labels of biosignal sensor data from independent or potentially correlated annotators in an unsupervised manner. They estimate the bias and precision of each annotator to infer the underlying ground truth. One of these models takes into account, for the first time, the correlation between annotators, allowing a grouping based on their decision-making processes. The manuscript is notably detailed, giving deep insight into the methods developed and their proof-of-concept. The authors performed a comprehensive validation on two clinical datasets, comprising QT intervals of ECGs and CapnoBase respiratory rate data, as well as a synthetic QT dataset. The validation convincingly demonstrated that both proposed models outperform existing approaches and are robust to missing values. This study thus provides a valuable method for aggregating the labels of several imperfect automated algorithms, generating highly reliable labels to better support and improve decisions in personalized care.
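To give a flavor of the label-fusion problem, the sketch below implements a deliberately simplified, non-Bayesian variant: each annotator is modeled as the unknown truth plus a systematic bias and Gaussian noise, and the bias, precision, and fused estimates are refined alternately. The full BCLA instead performs Gibbs sampling over a probabilistic model and, in one variant, explicitly models correlations between annotators, which this sketch omits.

```python
# Simplified, non-Bayesian illustration of continuous label fusion: each
# annotator's labels are treated as truth + annotator bias + Gaussian noise,
# and bias/precision estimates and the fused estimate are refined alternately.
# The data below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(400, 20, size=200)            # e.g., simulated QT intervals (ms)
biases = np.array([5.0, -8.0, 0.5])              # per-annotator systematic offsets
noise_sd = np.array([2.0, 10.0, 4.0])            # per-annotator noise levels
labels = truth + biases[:, None] + rng.normal(0, noise_sd[:, None], (3, 200))

fused = labels.mean(axis=0)                      # start from the naive average
for _ in range(20):
    est_bias = (labels - fused).mean(axis=1, keepdims=True)
    est_var = (labels - fused - est_bias).var(axis=1, keepdims=True)
    weights = 1.0 / est_var                      # precision weighting
    fused = (weights * (labels - est_bias)).sum(axis=0) / weights.sum()

print("naive mean error:", np.abs(labels.mean(axis=0) - truth).mean())
print("fused error:     ", np.abs(fused - truth).mean())
```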
References
1. Mavrogiorgou A, Kiourtis A, Perakis K, Pitsios S, Kyriazis D. IoT in healthcare: achieving interoperability of high-quality data acquired by IoT medical devices. Sensors (Basel) 2019;19(9):1978. doi: 10.3390/s19091978.
2. Suresh A, Udendhran R, Balamurgan M, Varatharajan R. A novel internet of things framework integrated with real time monitoring for intelligent healthcare environment. J Med Syst 2019;43(6):165. doi: 10.1007/s10916-019-1302-9.
3. Eby PR. Breast cancer: let imaging be our guide and improving patient outcomes be our goal. Radiology 2019 Aug;292(2):309-10.
4. Gibofsky A, Thiele R. A better look at rheumatoid arthritis: using imaging to improve patient outcomes. Semin Arthritis Rheum 2019;48(4):763. doi: 10.1016/j.semarthrit.2018.12.004.
5. van Ooijen PMA, Nagaraj Y, Olthof A. Medical imaging informatics, more than ‘just’ deep learning. Eur Radiol 2020.
6. Chandrasekaran B, Gangadhar S, Conrad JM. A survey of multisensor fusion techniques, architectures and methodologies. SoutheastCon 2017:1-8.
7. Chandra BS, Sastry CS, Jana S. Robust heartbeat detection from multimodal data via CNN-based generalizable information fusion. IEEE Trans Biomed Eng 2019;66(3):710-7. doi: 10.1109/TBME.2018.2854899.
8. Yuan Y, Jia K. FusionAtt: deep fusional attention networks for multi-channel biomedical signals. Sensors 2019;19(11):2429. doi: 10.3390/s19112429.
9. Zhu T, Pimentel MAF, Clifford GD, Clifton DA. Unsupervised Bayesian inference to fuse biosignal sensory estimates for personalizing care. IEEE J Biomed Health Inform 2019;23(1):47-58. doi: 10.1109/JBHI.2018.2820054.
10. Campanella G, Hanna MG, Geneslaw L, Miraflor A, Werneck Krauss Silva V, Busam KJ, et al. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat Med 2019;25(8):1301-9. doi: 10.1038/s41591-019-0508-1.
11. Armanious K, Jiang C, Fischer M, Küstner T, Hepp T, Nikolaou K, et al. MedGAN: medical image translation using GANs. Comput Med Imaging Graph 2020;79:101684. doi: 10.1016/j.compmedimag.2019.101684.
12. Yildirim O, Baloglu UB, Tan RS, Ciaccio EJ, Acharya UR. A new approach for arrhythmia classification using deep coded features and LSTM networks. Comput Methods Programs Biomed 2019;176:121-33. doi: 10.1016/j.cmpb.2019.05.004.
13. de Vos BD, Berendsen FF, Viergever MA, Sokooti H, Staring M, Išgum I. A deep learning framework for unsupervised affine and deformable image registration. Med Image Anal 2019;52:128-43. doi: 10.1016/j.media.2018.11.010.
14. Larson DB, Magnus DC, Lungren MP, Shah NH, Langlotz CP. Ethics of using and sharing clinical imaging data for artificial intelligence: a proposed framework. Radiology 2020;295(3):675-82. doi: 10.1148/radiol.2020192536.
15. Reyes M, Meier R, Pereira S, Silva CA, Dahlweid FM, von Tengg-Kobligk H, et al. On the interpretability of artificial intelligence in radiology: challenges and opportunities. Radiol Artif Intell 2020;2(3):e190043. doi: 10.1148/ryai.2020190043.
16. Char DS, Shah NH, Magnus D. Implementing machine learning in health care — addressing ethical challenges. N Engl J Med 2018;378(11):981-3. doi: 10.1056/NEJMp1714229.
17. Kohli M, Geis R. Ethics, artificial intelligence, and radiology. J Am Coll Radiol 2018;15(9):1317-9. doi: 10.1016/j.jacr.2018.05.020.
18. Kahn CE. Do the right thing. Radiol Artif Intell 2019;1(2).
19. Geis JR, Brady AP, Wu CC, Spencer J, Ranschaert E, Jaremko JL, et al. Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement. Radiology 2019;293(2):436-40. doi: 10.1148/radiol.2019191586.
20. Safdar NM, Banja JD, Meltzer CC. Ethical considerations in artificial intelligence. Eur J Radiol 2020;122:108768. doi: 10.1016/j.ejrad.2019.108768.
21. Li L, Qin L, Xu Z, Yin Y, Wang X, Kong B, et al. Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology 2020:200905. doi: 10.1148/radiol.2020200905.
22. Yasaka TM, Lehrich BM, Sahyouni R. Peer-to-peer contact tracing: development of a privacy-preserving smartphone app. JMIR Mhealth Uhealth 2020;8(4):e18936. doi: 10.2196/18936.
23. Wynants L, Van Calster B, Bonten MMJ, Collins GS, Debray TPA, De Vos M, et al. Prediction models for diagnosis and prognosis of covid-19 infection: systematic review and critical appraisal. BMJ 2020;369:m1328. doi: 10.1136/bmj.m1328.
24. Viner RM, Whittaker E. Kawasaki-like disease: emerging complication during the COVID-19 pandemic. Lancet 2020;395(10239):1741-3.