Abstract
This study was conducted to determine whether facial photographs obtained simultaneously with radiographs improve radiologists’ detection rate of wrong-patient errors, when they are explicitly asked to include the photographs in their evaluation. Radiograph-photograph combinations were obtained from 28 patients at the time of portable chest radiography imaging. From these, pairs of radiographs were generated. Each unique pair consisted of one new and one old (comparison) radiograph. Twelve pairs of mismatched radiographs (i.e., pairs containing radiographs of different patients) were also generated. In phase 1 of the study, 5 blinded radiologist observers were asked to interpret 20 pairs of radiographs without the photographs. In phase 2, each radiologist interpreted another 20 pairs of radiographs with the photographs. Radiologist observers were not instructed about the purpose of the photographs but were asked to include the photographs in their review. The detection rate of mismatched errors was recorded along with the interpretation time for each session for each observer. The two-tailed Fisher exact test was used to evaluate differences in mismatch detection rates between the two phases. A p value of <0.05 was considered significant. The error detection rates without (0/20 = 0 %) and with (17/18 = 94.4 %) photographs were different (p = 0.0001). The average interpretation times for the set of 20 radiographs were 26.45 (SD 8.69) and 20.55 (SD 3.40) min, for phase 1 and phase 2, respectively (two-tailed Student t test, p = 0.1911). When radiologists include simultaneously obtained photographs in their review of portable chest radiographs, there is a significant improvement in the detection of labeling errors. No statistically significant difference in interpretation time was observed. This may lead to improved patient safety without affecting radiologists’ throughput.
Keywords: Medical errors, Wrong-patient events
Introduction
The Joint Commission reaffirms that “wrong-patient errors occur in virtually all stages of diagnosis and treatment” in its National Patient Safety Goals for 2014 [1]. Thus, the first requirement in its hospital accreditation program is that at least two patient identifiers be used when providing care. These identifiers may include the individual’s name, medical record number, telephone number, or another person-specific identifier.
Intended to reduce and ultimately prevent wrong-patient (patient mislabeling) errors, these identifier requirements have been in effect since 2003, but unfortunately, such errors still occur. In 1999, the Institute of Medicine released its report “To Err is Human,” which estimated the number of hospitalized patients’ deaths related to medical errors at between 44,000 and 98,000 [2]. A more recent study from 2013, examining preventable adverse events (including diagnostic, identification, and communication errors, among others), estimated the number of deaths due to preventable harm in hospitals at between 210,000 and 400,000 per year [3]. With respect to radiology specifically, the Pennsylvania Patient Safety Authority reported 652 serious radiology “wrong-events,” of which patient identification errors accounted for 30 % [4]. These patient identification errors resulted from patients’ inability to engage in the identification process, staff failing to use identifiers, transporting the wrong patient with the right chart or vice versa, or other such mislabeling or classification errors associated with radiographic studies [4].
A better and somewhat more intrinsic method to prevent wrong-patient errors should draw on humans’ natural inclinations and abilities, of which facial recognition is highly specific. Kumar et al. have shown that humans are able to correctly identify other human faces 97.53 % of the time using the “Labeled Faces in the Wild” data set [5, 6]. With such a high rate of correct identification, it would make sense to capitalize on this inherent human ability in the healthcare setting by incorporating patient photographs as identifiers.
We have previously detailed our work on a system to integrate the capturing of photographs with radiographic studies simultaneously and on the use of photographs in helping to identify wrong-patient errors [7–9]. These prior publications showed that obtaining, integrating, and working with photographs as identifiers is practical and can be adopted by radiologists and technologists. Specifically, in [7], an observer study with 10 observers in a single institution demonstrated that the addition of photographs to portable radiographs resulted in an increase in mismatch error detection rates from 12.5 to 64 %. In a larger study involving 90 radiologists with diverse experience, we found that observers who were shown photographs along with radiographs detected 77 % of the mismatch errors whereas those observers who were shown radiographs alone detected only 31 % of the errors [8]. In both of these previous studies, some observers commented that they did not pay attention to the photographs since they were not explicitly asked to include the photographs in their interpretation. Here, using largely the same patient data as used in [7, 8], we seek to further test whether the inclusion of patient photographs can help detect wrong-patient errors by radiologists reading chest radiographs when they are specifically asked to include the photographs in their interpretation.
Materials and Methods
This study was approved by Emory University’s Institutional Review Board, and written informed consent was obtained from patients recruited into the study or from family members authorized to provide such consent. The study was compliant with the Health Insurance Portability and Accountability Act. The data collection methods and study population are similar to those of our previous studies [7, 8], with slight changes, and are summarized here for completeness.
Study Population
Data were collected from August 2011 to November 2011 between 2:00 and 6:00 a.m. in two cardiothoracic surgery intensive care units (ICUs) at Emory University Hospital. These hours were chosen because the majority of portable radiographs are obtained during this time. If a patient was transferred out of the ICU during their hospital stay, data were collected from whichever unit they were on at the time, e.g., a step-down unit or a regular hospital floor.
The study cohort originally consisted of 34 patients. Seven patients were initially excluded: some had an inadequate number of radiograph-photograph combinations (i.e., only one combination, so a pair of new and old studies could not be created), and others were excluded because of technical difficulties in retrieving images from the picture archiving and communication system (PACS). A radiograph-photograph combination from one of these seven patients was used to create an error pair along with a radiograph-photograph combination from another patient. Thus, data from a total of 28 patients were used. The mean age of the cohort was 61 years (SD ±15.16), with a range of 22–89 years, and the cohort consisted of 13 males and 15 females. The most common diagnoses in the cohort were aortic stenosis (n = 10), congestive heart failure (n = 7), mitral regurgitation (n = 3), and coronary artery disease (n = 3). The most common surgeries in this group included aortic valve replacement (n = 9), left ventricular assist device placement (n = 6), mitral valve replacement (n = 3), and coronary artery bypass grafting (n = 3).
Data Acquisition, Set Creation, and Storage
Portable radiographs were obtained as single-view chest or abdomen studies according to the standard protocol, with the technologist confirming the patient’s identity either verbally or by wristband verification. A photograph of the patient’s face including the chest was obtained at the onset of the radiographic study by a single researcher using a 5-megapixel camera on the Apple iPhone 4 (Apple Inc., Cupertino, CA), with the use of a flash. Photos of the patient’s radiograph requisition form were taken prior to taking the patient’s photograph to ensure matching between the radiographic study and the photograph. If no requisition form was available at the time of the radiographic study, it was later obtained using other patient identifiers from the PACS and matched with the photograph.
The photographs were initially stored in Joint Photographic Experts Group (JPEG) format and then converted to Digital Imaging and Communications in Medicine (DICOM) format on a workstation. The DICOM photographs were sized to be approximately one fourth the size of the radiograph and then integrated into the radiograph using custom-developed software.
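The integration step can be sketched in Python as a rough illustration only; the study used custom-developed software whose details are not given here. The grayscale NumPy arrays, nearest-neighbor downscaling, and top-right corner placement below are all assumptions for the sketch, not the actual implementation:

```python
import numpy as np

def embed_photo(radiograph: np.ndarray, photo: np.ndarray) -> np.ndarray:
    """Scale the photo to roughly one quarter of the radiograph's linear
    dimensions (nearest-neighbor) and paste it into the top-right corner
    of a copy of the radiograph. Both inputs are 2-D grayscale arrays."""
    rh, rw = radiograph.shape
    th, tw = max(rh // 4, 1), max(rw // 4, 1)
    # Nearest-neighbor resampling via integer index mapping
    rows = np.arange(th) * photo.shape[0] // th
    cols = np.arange(tw) * photo.shape[1] // tw
    thumb = photo[rows[:, None], cols]
    out = radiograph.copy()
    out[:th, rw - tw:] = thumb  # overlay thumbnail in the corner
    return out
```

In a real deployment the composite would then be written back as a DICOM object so it travels with the study in the PACS.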
Study sets were then created by combining two radiographs from the same patient: one as the new radiograph and the other, the most recent previous radiograph, as the comparison radiograph. If a patient had more than two radiographs, consecutive radiographs were paired without overlap between pairs, i.e., radiographs 1 and 2 and radiographs 3 and 4 formed pairs, but not radiographs 2 and 3. One of the 28 patients had only one radiograph during their stay; that single radiograph was used in the error sets, since no complementary radiograph was available to produce a non-error study set.
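The non-overlapping pairing rule above can be expressed as a short sketch; `consecutive_pairs` is a hypothetical helper, assuming the studies are already in chronological order:

```python
def consecutive_pairs(studies):
    """Pair consecutive studies without overlap: within each pair, the
    earlier study serves as the comparison and the later study as the
    new radiograph. A trailing unpaired study is left out."""
    return [(studies[i], studies[i + 1]) for i in range(0, len(studies) - 1, 2)]

consecutive_pairs([1, 2, 3, 4, 5])  # pairs (1, 2) and (3, 4); study 5 is unpaired
```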
A total of 166 studies were obtained, from which 83 unique pairs of matched radiographs and 12 error sets were created. An ad hoc decision was made to use 12 error sets. Each error set consisted of one radiograph from each of two different patients to form a mismatch.
Observers and Presentation
Five recently trained radiologists served as observers. Four of the observers had received American Board of Radiology (ABR) certification within the 2 years before the study; one of these four had completed residency training in the UK and fellowship training in the USA and was a first-year faculty member. The fifth observer had been residency trained in the UK and was pursuing fellowship training in the USA. All observers were non-cardiothoracic radiologists, either pursuing fellowship training in another subspecialty or serving as a first-year faculty member in another radiologic subspecialty. These observers were specifically informed that any photographs they saw in association with the radiographs were included to aid in the interpretation of the radiographs. They were not, however, informed that some of the radiographic pairs were mismatched or that the photographs were intended to help identify such mismatches.
The study was performed in two phases. In phase 1, a worklist of 20 randomly selected pairs of radiographs without photographs, 4 of which were erroneous pairs, was presented to each observer in random order on a ClearCanvas DICOM viewer (ClearCanvas, Inc., Toronto, Ontario, Canada) running in color on a dual-monitor workstation. Each pair of radiographs was randomly chosen without replacement from the set of 83 matched pairs, and the error pairs were randomly chosen without replacement from the list of 12 previously generated error sets, so that no pair was repeated within a phase. The image viewer provided basic image manipulation capabilities such as windowing and inversion. Because erroneous pairs were randomly assigned, not all observers were shown the same error pairs in phases 1 and 2. Details of the random process used for creating worklists and selecting error pairs are similar to those used in our previous studies [7, 8].
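The worklist assembly can be sketched as follows; `build_worklist` is a hypothetical helper for illustration, and the actual randomization followed the procedures of [7, 8]:

```python
import random

def build_worklist(matched_pairs, error_pairs, n_total=20, n_errors=4, seed=None):
    """Assemble one observer's worklist: sample without replacement so
    no pair repeats within a phase, then shuffle the presentation order."""
    rng = random.Random(seed)
    chosen = (rng.sample(matched_pairs, n_total - n_errors)
              + rng.sample(error_pairs, n_errors))
    rng.shuffle(chosen)  # randomize where the error pairs appear
    return chosen
```

Sampling without replacement (`random.sample`) guarantees the no-repeat property within a phase; shuffling prevents the error pairs from clustering at the end of the list.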
No clinical information or patient identifiers were available to the radiologists. Only dates of study and a randomly generated case number were visible with the photographs. Observers were not informed that erroneous pairs existed. They were asked to evaluate each pair based on the following four questions similar to those used in [7, 8]:
Is the image quality OK or not OK?
Are the lines and tubes in appropriate positions (OK or not OK)?
Has the patient’s status worsened, remained unchanged, or improved?
Do you have any other comments?
The last section labeled “other comments” was included to allow the observers to comment on whether there was a mismatched pair. The interpretation time for the set as a whole was recorded for each observer.
In phase 2 of the study, the same group of five observers, after an approximately 1-h interval, assessed an additional 20 pairs of radiographs, with 2 to 4 mismatched pairs, using the same equipment and form as in the first session. These pairs now included patient pictures with each radiograph. The observers were reminded that the photograph was to be used to aid in the evaluation of the radiograph pairs. They were not informed of any mismatch pairs, or that the radiographs should be used to look for mismatched pairs. The total interpretation time for the entire set of 20 pairs was recorded per observer. A final questionnaire containing the following questions was given to each observer after completing phase 2.
Were the photographs a distraction (yes/no)?
Did you feel you spent more time because of the photos (yes/no)?
Did the photographs help with the interpretation (yes/no)? If yes, how?
If you saw photographs of two different persons, did you go back to check if the radiographs appear to belong to two different individuals (yes/no)?
Statistical Analysis
A two-tailed Fisher exact test was used to evaluate the difference in error detection rates with and without photographs. To compare the time taken by each observer between phase 1 and phase 2, a two-tailed t test was performed. For both tests, a p value of less than 0.05 was considered to indicate statistical significance. Statistical testing was performed using QuickCalcs (GraphPad Software, Inc., La Jolla, CA).
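For illustration, the Fisher exact test on the pooled detection counts reported in the Results can be reproduced with a small self-contained sketch (standard library only; QuickCalcs was the tool actually used):

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher exact test for a 2x2 table: sum the hypergeometric
    probabilities of all tables with the same margins that are no more
    likely than the observed one."""
    (a, b), (c, d) = table
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    denom = comb(n, col1)

    def prob(x):  # P(first row contributes x to column 1)
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Tolerance factor guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Pooled detection counts: [detected, missed] per phase
p = fisher_exact_two_sided([[0, 20],    # phase 1: without photographs
                            [17, 1]])   # phase 2: with photographs
print(p)  # far below the 0.05 significance threshold
```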
Results
Figure 1 shows a sample pair of radiographs with concomitantly obtained photographs used in the study. In phase 1, i.e., without photographs, none of the 20 errors presented to the observers were detected as shown in Table 1. In phase 2, i.e., with photographs, 17 out of 18 mismatched pairs, or 94.4 %, were detected (p = 0.0001). There was, however, one false-positive reading during this phase. One of the 82 correctly paired radiograph-photograph pairs was erroneously flagged by an observer as being a mismatch. The sensitivity increased from 0 to 88.89 % from phase 1 to phase 2, and the specificity decreased from 100 to 98.78 % from phase 1 to phase 2.
Fig. 1.
Sample mismatched pair: the comparison (prior) radiograph on the right shows a 61-year-old white man with a history of bullous emphysema who was status post bilateral lung volume reduction surgery; note the normal heart size and the presence of bilateral chest tubes. The current radiograph on the left, obtained 1 week later, shows a 33-year-old African-American man with congestive heart failure who had a left ventricular assist device placed. The radiograph on the left demonstrates marked cardiomegaly, a left ventricular assist device, and a left implantable defibrillator device; the chest tubes are no longer seen. Despite these differences, this pair was not flagged as erroneous in the absence of photographs by most readers. The photographs (edited to protect patient identity) clearly show race and body habitus differences between the two patients and assisted with the detection of mislabeling
Table 1.
Number of mismatches introduced to each observer and the number they actually identified during both phase 1 and phase 2. The time taken during each phase for each observer is also recorded
| Observer | Mismatches introduced, phase 1 (without photographs) | Mismatches reported, phase 1 | Assessment time, phase 1 (min:s) | Mismatches introduced, phase 2 (with photographs) | Mismatches reported, phase 2 | Assessment time, phase 2 (min:s) |
|---|---|---|---|---|---|---|
| 1 | 4 | 0 | 18:47 | 4 | 4 | 17:18 |
| 2 | 4 | 0 | 28:34 | 3 | 3 | 24:21 |
| 3 | 4 | 0 | 16:09 | 4 | 4 | 16:46 |
| 4 | 4 | 0 | 34:46 | 4 | 3 | 23:04 |
| 5 | 4 | 0 | 34:00 | 3 | 2a | 21:26 |
aAdditionally, this observer had a false positive
The interpretation time for the pairs was recorded for all five observers during both phases and is shown in Table 1. The interpretation time for the 20 pairs decreased in phase 2 with the introduction of photographs, although the decrease was not statistically significant. The average time for radiograph assessment of the 20 radiographic pairs was 26.45 (SD ±8.69) min for phase 1 and 20.55 (SD ±3.40) min for phase 2. The mean time difference between phase 1 and phase 2 was 5.90 (95 % CI −3.629 to 15.434) min (two-tailed t test, p = 0.1911).
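As a cross-check, the unpaired two-tailed Student t statistic can be recomputed from the per-observer times in Table 1 (standard library only; small rounding differences from the published values are expected):

```python
from math import sqrt
from statistics import mean, stdev

def mmss_to_min(t: str) -> float:
    """Convert a 'min:s' string (e.g. '18:47') to decimal minutes."""
    m, s = t.split(":")
    return int(m) + int(s) / 60.0

# Per-observer assessment times from Table 1
phase1 = [mmss_to_min(t) for t in ["18:47", "28:34", "16:09", "34:46", "34:00"]]
phase2 = [mmss_to_min(t) for t in ["17:18", "24:21", "16:46", "23:04", "21:26"]]

# Unpaired two-sample Student t statistic with pooled variance
n1, n2 = len(phase1), len(phase2)
sp2 = ((n1 - 1) * stdev(phase1) ** 2 + (n2 - 1) * stdev(phase2) ** 2) / (n1 + n2 - 2)
t_stat = (mean(phase1) - mean(phase2)) / sqrt(sp2 * (1 / n1 + 1 / n2))

# Two-tailed critical value at alpha = 0.05 with n1 + n2 - 2 = 8 df
T_CRIT = 2.306
print(round(t_stat, 2), "significant" if abs(t_stat) > T_CRIT else "not significant")
```

The statistic falls below the critical value, consistent with the reported non-significant p = 0.1911.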
The survey results show that two of the five observers felt the photographs were a distraction (Table 2). Two different observers felt that they spent more time due to the photographs while the remaining three observers did not think they spent more time reading the films. The majority of observers, four out of the five, felt that the photographs improved their interpretation of the films, with three of the observers stating that the photographs aided in the identification of tubes and lines and two observers stating that the photographs helped with identifying the patient. All five observers stated that if they noticed the photographs were of two different people, they went back to evaluate the radiographs to check if the radiographs appeared to be from different people.
Table 2.
Results for questions posed at end of study
| Questions | Number of respondents answering “yes” | Number of respondents answering “no” |
|---|---|---|
| Were the photos a distraction? | 2 | 3 |
| Do you feel you spent more time because of the photographs? | 2 | 3 |
| Did the photos help with interpretation? | 4 | 1 |
| If you noted mismatched photographs, did you go back and check the radiograph? | 5 | 0 |
Discussion
The results illustrate that the addition of patient photographs that are simultaneously obtained with portable radiographs, specifically in the ICU setting, significantly increases the detection of mismatched radiographs. This may reduce the frequency of wrong-patient errors within radiology.
None of the mismatched pairs without accompanying photographs were identified, whereas with photographs, 17 out of 18 mismatched pairs were identified. The observers’ inability to identify mismatched pairs without accompanying photographs may be attributed to their not specifically looking for mismatches during the study. Although noticing wrong-patient errors may be part of many radiologists’ reading pattern, in this study an observer may have assumed that some variable other than the addition of the photograph was being tested. As a result, the observer may not have been as diligent in detecting a wrong-patient error.
The mechanism by which the inclusion of patient photographs led to an increase in mismatch error detection is still not clearly understood. It may be that the observers used the photographs themselves to identify the errors. On the other hand, it is possible that the presence of the patient photographs led to increased empathy and greater focus among observers, i.e., the photographs may have caused observers to pay more attention to the radiographs. Unfortunately, we did not question observers about empathy in the post-study questionnaire.
While two out of the five observers felt that the photographs were a distraction, the observers in fact identified more mismatched pairs with the photographs (17/18) than without them (0/20). Furthermore, the data reveal no significant increase in interpretation time when photographs were included; in fact, they indicate a decrease in the time needed, although this was not statistically significant. In addition, three out of the five observers felt that the photographs helped to identify tubes and lines, information which can help in determining the status of the patient and in interpreting ambiguous findings on a radiograph. The decrease in interpretation time seen in phase 2, although not statistically significant, may be attributed to the training that readers received in phase 1: they may have become faster in phase 2 because they were more accustomed to the process of answering the questions for each case.
In radiology, with its high penetrance of technology, software to aid the radiologist in identifying patients between studies using facial recognition may be on the horizon. In a recent paper, Facebook detailed its use of “DeepFace,” a facial recognition program that approaches 97.3 % accuracy, on par with human ability, at correctly identifying and discriminating individual human faces [10]. Therefore, widespread adoption of patient photographs as identifiers seems inevitable, especially in fields of medicine, such as radiology, that are highly, if not fully, reliant on computer systems. The introduction of facial recognition technologies may eventually eliminate the need for human involvement in the patient identification workflow.
A question that needs to be answered in the future is where the photographs are to be stored or embedded. In our technology described in [9], the photographs were placed as separate series in the PACS folder. However, it may be more effective to embed the photographs with the radiographs as we did in this study and in the studies in [7, 8].
Relationship to Prior Studies
In a prior study by Tridandapani et al. [7], 10 radiologists were enlisted to view photograph-radiograph pairs in a similar fashion to this study. In that particular study, the observers were not made aware prior to the initiation of the study that a photograph would be shown simultaneously with a radiograph. The observers were not given any specific instructions regarding the photograph or its purpose in the study. As a result, some of the observers physically covered the photograph with their hands while reading the radiograph. In following up with the observers, these observers explained that they believed the photograph to be a purposeful distraction in the study and therefore preferred to cover it. This, of course, defeated the purpose of the study and subsequently led to the specific instructions in the study we report here to use the photograph in the interpretation of the radiograph.
In another study by Tridandapani et al. [8], 90 radiologists each interpreted a unique randomly chosen set of 10 radiographic pairs, containing up to 10 % mismatches. In that study, radiologists were randomly assigned to interpret studies either with or without photographs, i.e., only some of the radiologists interpreted radiographs with photographs. This differs from our current study in that comparisons were made between two groups of radiologists instead of between the same radiologist. Our current study is likely a better indicator of the effects of the photographs’ inclusion in the radiograph because it compares the error detection rate for the same radiologist. Thus, no difference in radiologist ability, especially those skills which cannot be quantified, needs to be controlled for in our current study.
This current study adds further credibility to the effectiveness of adding photographs to portable radiographs. If such a photography technology is to be implemented in a clinical setting, one would assume that radiologists would be aware of the reason for the implementation and trained to include the photographs in their interpretation. Thus, they would pay more attention to the photographs and thereby achieve a greater improvement in the rate of detection of wrong-patient errors.
A recent study [11] analyzing the impact of including patient photographs along with computed tomographic examinations demonstrated no significant differences in the number of incidental findings when radiologists interpreted with photographs and without photographs. Although interpretation time was not evaluated in that study, our findings suggest that the addition of photographs does not increase this time and thus should not make radiologists less efficient.
Limitations
Limitations exist in our study design, particularly regarding generalizability. The generalizability issues are twofold: (1) the specific patients used and (2) the introduced error rates. The patients in this study were all from a cardiothoracic ICU, which implies similar etiologies of disease and disease severity. This population may not accurately reflect the patients seen in a typical radiologist’s worklist. For example, at some institutions a radiologist may read a cardiothoracic ICU patient study, then a pediatric patient study, followed by an outpatient study. In such a diverse worklist, it may be much easier to detect a mislabeled or wrong-patient radiograph than if the studies were all from the same type of patient. Conversely, in institutions where all of the patients in a specific worklist are similar, i.e., all ICU patients or all outpatients, it may be much easier to miss a wrong-patient error.
The second generalizability issue is the enriched wrong-patient error rate we used (up to 20 %). In real-world practice, wrong-patient errors are quite rare, so radiologists may not be sensitized to detecting them. In our study, the detection rate jumped dramatically when a relatively high number of errors were made apparent through mismatched photographs. Although this indicates that photographs included in the interpretation of studies aid in the detection of wrong-patient errors, with a detection rate approaching 100 %, it may inflate the effect, since the radiologists may not have looked as carefully for wrong-patient errors without photographs. Furthermore, the one false positive, in which a correctly paired radiograph-photograph combination was erroneously flagged by an observer as belonging to two different patients, may have occurred because the observer’s sensitivity was heightened after noting other wrong-patient errors with the photographs.
In addition, although our study involved 200 radiograph-pair interpretations, only 5 observers participated. The number of observers could be increased and their backgrounds diversified in a future study. Our small number of relatively junior observers may be less adept at identifying mismatched radiographs than a larger cohort would be. In fact, in a prior study with a slightly different protocol and a larger, more experienced, and more diverse cohort of observers, the ability to detect mismatch errors without photographs was better: those observers identified 31 % of the errors [8].
The interval of approximately 1 h between the two phases of the study is small, and there may have been memory effects that made observers more adept at identifying error pairs in phase 2.
Finally, we could have potentially eliminated any training effects by conducting a cross-over study where some of the observers interpreted with photographs in phase 1 and without photographs in phase 2. We felt that this would bias observers who were initially exposed to photographs as they may realize the reason for the study, viz., to evaluate sensitivity to detection of wrong-patient errors.
Conclusion
The inclusion of photographs obtained simultaneously with portable radiographs can significantly increase the rate of wrong-patient error detection. The addition of photographs to radiographs does not significantly increase the time for interpretation of studies. Obtaining photographs with radiographs and using them in the interpretation of portable radiographic studies could increase patient safety without adding additional interpretation time to studies.
Acknowledgments
We would like to acknowledge and thank the patients and their families who allowed our team to photograph them during their ICU stays. Samuel Galgano obtained the patient data in 2011. Senthil Ramamurthy developed the software for the observer studies and helped conduct the observer studies. We also thank the five radiology observers who participated in this study. Srini Tridandapani was supported in part by the National Institute of Biomedical Imaging and Bioengineering (Award Number K23EB013221) and by the National Center for Advancing Translational Sciences (Award Number UL1TR000454) of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Contributor Information
Srini Tridandapani, Phone: 404-712-1098, Email: stridan@emory.edu.
Kevin Olsen, Email: kevinolsen728@gmail.com.
Pamela Bhatti, Email: pamela.bhatti@ece.gatech.edu.
References
- 1. Joint Commission on Accreditation of Healthcare Organizations: Hospital national patient safety goals. 2014. http://www.jointcommission.org/assets/1/6/OBS_NPSG_Chapter_2014.pdf. Accessed May 13, 2014
- 2. Kohn LT, Corrigan JM, Donaldson MS (Institute of Medicine): To err is human: building a safer health system. Washington, DC: National Academy Press; 2000
- 3. James J: A new evidence-based estimate of patient harms associated with hospital care. J Patient Saf 2013;9(3):122–128. doi: 10.1097/PTS.0b013e3182948a69
- 4. Pennsylvania Patient Safety Authority, ECRI Institute, Institute for Safe Medication Practices: Applying the universal protocol to improve patient safety in radiology. Pa Patient Saf Advis 2011;8:63–69
- 5. Kumar N, Berg AC, Belhumeur PN, Nayar SK: Attribute and simile classifiers for face verification. 12th IEEE International Conference on Computer Vision (ICCV), October 2009, 365–372
- 6. Huang G, Ramesh M, Berg T, Learned-Miller E: Labeled faces in the wild: a database for studying face recognition in unconstrained environments. UMass Amherst Technical Report 07-49, October 2007
- 7. Tridandapani S, Ramamurthy S, Galgano S, Provenzale JM: Increasing rate of detection of wrong-patient radiographs: use of photographs obtained at the time of radiography. Am J Roentgenol 2013;200(4):W345–W352. doi: 10.2214/AJR.12.9521
- 8. Tridandapani S, Ramamurthy S, Provenzale JM, Obuchowski N, Evanoff M, Bhatti P: A multi-reader observer study of the effect of adding point-of-care patient photographs with portable radiographs: a means to reduce wrong-patient errors. Acad Radiol 2014;21(8):1038–1047. doi: 10.1016/j.acra.2014.03.006
- 9. Ramamurthy S, Bhatti PT, Arepalli CD, Salama M, Provenzale JM, Tridandapani S: Integrating patient digital photographs with medical imaging examinations. J Digit Imaging 2013;26:875–885. doi: 10.1007/s10278-013-9579-6
- 10. Taigman Y, Yang M, Ranzato M, Wolf L: DeepFace: closing the gap to human-level performance in face verification. Conference on Computer Vision and Pattern Recognition, March 2014
- 11. Ryan J, Khanda GE, Hibbert R, et al: Is a picture worth a thousand words? The effect of viewing patient photographs on radiologist interpretation of CT studies. J Am Coll Radiol 2015;12(1):104–107. doi: 10.1016/j.jacr.2014.09.028

