Journal of Imaging Informatics in Medicine. 2025 Apr 3;39(1):557–563. doi: 10.1007/s10278-025-01490-x

Evaluating the Impact of a Ki-67 Decision Support Algorithm on Pathology Residents’ Scoring Accuracy

Mine İlayda Şengör Aygün 1, Özben Yalçın 2, Burak Uzel 3, Gamze Kulduk 2, Cem Çomunoğlu 2
PMCID: PMC12920928  PMID: 40180631

Abstract

Ki-67 scoring is of essential importance in the evaluation of breast cancer. We evaluated a Ki-67 algorithm as a decision support tool to improve scoring accuracy for pathology residents. We retrospectively evaluated Ki-67 scores on whole slide images (WSI) obtained from 156 consecutive breast cancer patients. Two senior pathologists determined the 2.1 mm2 hotspot to be evaluated. Ki-67 scores from senior pathologists were compared with results generated by the algorithm, results from 10 pathology residents, and results from pathology residents with the assistance of the algorithm. In addition to numerical results from the algorithm, residents were also presented with a visual representation of nuclei that were counted and excluded. Statistical analysis was performed using Wilcoxon and intra-class correlation (ICC) tests. The mean Ki-67 scores from senior pathologists and the algorithm were 23 ± 18 and 24 ± 18, respectively (ICC, 0.98). Ki-67 scores from the residents were 19 ± 16 and 22 ± 16, without and with input from the algorithm, respectively. With input from the algorithm, residents’ scores were significantly closer to those obtained by senior pathologists (p = 0.008). Residents modified their scores in 53.8% of the cases, and in 74% of the improved scores the modification was an increase over the original score. The results obtained by the Ki-67 algorithm were highly correlated with those assessed by senior pathologists. We demonstrated that the algorithm may serve as a decision support tool helping residents align their results with those of senior pathologists.

Keywords: Algorithm, Decision support, Ki-67, Resident

Introduction

Ki-67 is a proliferation biomarker with prognostic and predictive value, particularly in breast cancer. Its main function is to form the perichromosomal layer and it is expressed at increasing levels from the G1 to M phases of the cell cycle to facilitate duplication [1]. Ki-67 scores are expressed as a percentage of stained tumor cells, ranging from 0 to 100%, and are used as a marker of cellular proliferation [2].

Although Ki-67 is widely evaluated in several neoplasms, there are no clear recommendations due to undefined cut-off values and interobserver variability [3, 4]. Nevertheless, a Ki-67 score of 20% or more is required for adjuvant treatment of breast cancer with ribociclib [5]. To overcome this interobserver variability, several initiatives have sought to standardize Ki-67 scoring [6]. These efforts, however, have primarily focused on experienced pathologists, leaving a gap in the training of residents. It has been shown that computer-assisted approaches to Ki-67 scoring could improve the consistency of Ki-67 scores among junior pathologists [7]. We therefore aimed to evaluate the effectiveness of a commercially available computer-assisted Ki-67 scoring algorithm in improving scoring accuracy and reducing interobserver variability among pathology residents.

Materials and Methods

Slides from 156 consecutive breast cancer patients diagnosed at a tertiary referral hospital were retrieved from the archival collection of formalin-fixed, paraffin-embedded breast tissue samples. These consecutive breast cancer tru-cut biopsy samples were obtained between January and August 2023 and included 13 cases of invasive lobular carcinoma and 143 cases of invasive ductal carcinoma. Tissue sections were processed and stained using a standardized immunohistochemistry (IHC) protocol for Ki-67 detection. The Ki-67 immunohistochemistry slides (Ventana, Sp6) were then scanned using a high-resolution Hamamatsu NanoZoomer S360 scanner, with a resolution of 0.23 μm/pixel, ensuring precise capture of staining patterns and nuclear morphology. The scanned images were anonymized, randomly numbered, and subsequently presented to study participants for evaluation.

Senior pathologists and residents were recruited from the University of Health Sciences Prof. Dr. Cemil Tascioglu City Hospital, with the relevant review board approval. Senior pathologists were selected based on their extensive experience in histopathology. Residents were recruited from the pathology department, and their participation was based on their willingness to engage in the study and their level of experience. As a first step, hotspot areas, defined as 2.1 mm2 regions with a high density of Ki-67-stained tumor cells, were annotated and scored by consensus of two senior pathologists. The minimum size of a hotspot area was defined as an area containing at least 100 Ki-67 positive nuclei to ensure accurate quantification. Hotspot regions were annotated using the Viracenter image management system (Virasoft Corporation, New York). These areas were then scored using the Ki-67 algorithm (Virasoft Algorithm, Virasoft Corporation, New York) to evaluate inter-observer agreement between manual and algorithmic scoring. This Ki-67 analysis algorithm uses a color-based segmentation technique to identify and quantify Ki-67 positive nuclei in tissue samples. The algorithm processes digital images of stained tissue sections, detecting varying intensities of immunostaining corresponding to Ki-67 expression. Negative nuclei are distinguished from positive nuclei based on their staining intensity, with negative nuclei represented as blue dots. Positively stained nuclei are categorized into three intensity levels: yellow (weak staining), brown (moderate staining), and red (intense staining). These segmented regions are then analyzed to calculate the Ki-67 proliferation index, which is derived from the ratio of Ki-67 positive nuclei to the total number of nuclei within the annotated image.
This algorithm has previously demonstrated excellent reliability in detecting Ki-67 scores in WSIs of different formats generated by 4 different scanners with results from 12 pathologists (ICC: 0.907) [8]. The output of the algorithm includes both a visual segmentation of the stained regions and a quantitative proliferation score. Each case was assigned a Ki-67 proliferation index score quantified on a scale of 0 to 100%. Two senior pathologists jointly evaluated each case and assigned a Ki-67 proliferation index score. This consensus score served as the reference for comparison in subsequent resident evaluations and the algorithm.
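The index computation described above reduces to a simple ratio of nuclei counts. As a minimal sketch (hypothetical function and bin names, not the vendor's implementation):

```python
def ki67_index(positive_counts, negative_count):
    """Ki-67 proliferation index: positive nuclei / all nuclei, as a percentage.

    positive_counts: dict of stained-nuclei counts per intensity bin
    (weak/moderate/intense, i.e., the yellow/brown/red categories);
    negative_count: unstained tumor nuclei (the blue dots).
    """
    positive = sum(positive_counts.values())
    total = positive + negative_count
    if total == 0:
        raise ValueError("no nuclei detected in the annotated region")
    return 100.0 * positive / total


# Hypothetical hotspot counts for illustration only:
index = ki67_index({"weak": 40, "moderate": 30, "intense": 30}, negative_count=400)
# 100 positive / 500 total -> 20.0
```

Note that all three positive-intensity bins contribute equally to the index; the intensity levels serve the visual overlay, not a weighted score.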

In the second step, pathology residents first scored the hotspot areas without algorithm assistance, followed by scoring using the algorithm mask and score (Fig. 1). The residents performed the study independently at their workstations, without interfering with each other. All 156 cases were scored by 10 different residents. Each resident was given 2 weeks to score all images. While there was no specific time limit per slide, the overall duration was set to ensure consistent scoring across all cases and to simulate a typical clinical workload scenario, i.e., same-day results. Resident demographics and years of experience are shown in Table 1. The concordance between the residents’ scores, both with and without the Ki-67 algorithm, and the senior pathologists’ scores was then compared. Residents assigned their Ki-67 scores without knowledge of their peers’ scores. All imaging was performed on 24-inch LED monitors at FHD (1920 × 1080, 16:9) resolution using the Viracenter image management system.

Fig. 1.

Fig. 1

Whole slide image of Ki-67 immunohistochemical staining (A Ki-67 × 40, B Ki-67 × 400) and analysis with Ki-67 Decision Support System (C Ki-67 × 40, D Ki-67 × 400)

Table 1.

Resident demographics

Resident (R) Age Gender Experience (years)
R1 28 Female 4
R2 29 Female 4
R3 28 Female 3
R4 30 Female 3
R5 30 Male 3
R6 28 Female 2.5
R7 27 Female 2
R8 27 Male 2
R9 26 Female 1
R10 28 Female 1

Data analysis was conducted using several key statistical tests: the paired t-test to compare differences between paired observations, the Pearson correlation test to assess linear relationships between variables, the Breusch-Pagan test to check for heteroscedasticity, and the intraclass correlation coefficient (ICC) to evaluate inter-rater reliability. An ICC value close to 1 reflects excellent agreement between evaluators, whereas a lower value suggests poor reliability.
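For illustration, one common one-way random-effects form of the ICC can be computed from an ANOVA decomposition. The sketch below assumes the ICC(1) variant for a subjects-by-raters score matrix (the study does not state which ICC variant was used, so this is an assumption):

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1) for an n_subjects x k_raters matrix.

    ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB and MSW are the
    between-subject and within-subject mean squares.
    """
    x = np.asarray(ratings, float)
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)         # between-subject MS
    msw = ((x - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)


# Illustrative scores (not study data): 3 cases rated by 2 raters.
icc_perfect = icc_oneway([[1, 1], [2, 2], [3, 3]])  # identical raters -> 1.0
icc_noisy = icc_oneway([[1, 2], [2, 1], [3, 3]])    # disagreement -> below 1
```

With identical ratings the within-subject mean square vanishes and the ICC is exactly 1; disagreement inflates MSW and pulls the coefficient down.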

Mean absolute error (MAE) and root mean squared error (RMSE) were used to assess the precision and accuracy of the measurements. MAE measures the average size of the errors between the residents’ scores—both with and without the algorithm—compared to the ground truth obtained from their seniors’ consensus, without considering the direction of the errors by taking the absolute value. RMSE calculates the square root of the average squared differences between residents’ scores—both with and without the algorithm—compared to the ground truth obtained from their seniors’ consensus scores, penalizing larger errors more than smaller ones. Statistical significance was set at p < 0.05, and all analyses were performed using SPSS Statistics v20.
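As a concrete illustration of the two error metrics (with made-up scores, not study data), a minimal Python sketch:

```python
import math

def mae(scores, ground_truth):
    """Mean absolute error: average |resident score - consensus score|."""
    return sum(abs(s - g) for s, g in zip(scores, ground_truth)) / len(scores)

def rmse(scores, ground_truth):
    """Root mean squared error: like MAE, but penalizes large errors more."""
    return math.sqrt(sum((s - g) ** 2 for s, g in zip(scores, ground_truth)) / len(scores))


# Illustrative scores for 5 hypothetical cases:
truth    = [5, 10, 20, 35, 60]   # senior pathologists' consensus
resident = [5, 8, 15, 30, 50]    # a resident's scores
print(mae(resident, truth))   # (0+2+5+5+10)/5 = 4.4
print(rmse(resident, truth))  # sqrt((0+4+25+25+100)/5) ≈ 5.55
```

Because RMSE squares each error before averaging, the single 10-point miss dominates it, which is why RMSE exceeds MAE whenever errors are unequal.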

Results

The Ki-67 proliferation index for the 156 cases, as determined by the senior pathologists, ranged from 1 to 80%. The distribution of Ki-67 proliferation index scores among the 156 cases was as follows: 37 cases (23.7%) ranged from 1 to 5%, 20 cases (12.8%) from 6 to 10%, 26 cases (16.7%) from 11 to 20%, 24 cases (15.4%) from 21 to 30%, 24 cases (15.4%) from 31 to 40%, 14 cases (9%) from 41 to 50%, 4 cases (2.6%) from 51 to 60%, 5 cases (3.2%) from 61 to 70%, and 2 cases (1.3%) from 71 to 80%.

The first finding was that the mean Ki-67 scores of the senior pathologists and the algorithm were 23 ± 18 and 24 ± 18, respectively, with a Pearson correlation coefficient of 0.969 (p < 0.05). A Bland–Altman plot was generated to compare the ground truth scores with the algorithm results, which revealed some outliers, especially in the higher scores (Fig. 2). The Bland–Altman plot illustrates the agreement between the algorithm-generated Ki-67 scores and the ground truth (consensus scores from senior pathologists). To investigate the outliers, the Breusch-Pagan test was performed, which yielded a p-value of 0.007, reflecting the heteroscedasticity of the results.

Fig. 2.

Fig. 2

Bland–Altman plot comparing ground truth and algorithm scores. The x-axis represents the mean of the two measurements, while the y-axis shows the difference between them. The green line indicates the mean difference (bias), and the red lines represent the 95% limits of agreement (± 1.96 SD)
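The quantities behind such a plot are straightforward to compute. A minimal sketch (hypothetical function name; each case is plotted at the mean of the two scores on the x-axis and their difference on the y-axis):

```python
import numpy as np

def bland_altman_stats(a, b):
    """Bias (mean difference) and 95% limits of agreement between two raters.

    In the plot, a horizontal line is drawn at the bias and dashed lines at
    the two limits; points outside the limits are the outliers.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)  # +/- 1.96 SD of the differences
    return bias, bias - half_width, bias + half_width


# Illustrative scores (not study data): ground truth vs. algorithm
bias, lower, upper = bland_altman_stats([10, 20, 30], [12, 19, 33])
```

If the spread of differences grows with the magnitude of the scores, the point cloud fans out to the right, which is the heteroscedasticity the Breusch-Pagan test detects.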

The residents’ mean Ki-67 scores were 19 ± 16 without algorithm support and 22 ± 16 with algorithm support. Using the algorithm, residents’ scores increased significantly (p = 0.008) and approached those of the senior pathologists. Residents modified their scores in 57.8% of cases, with 74% of the improved scores resulting from an increase over the original score (Table 2 and Fig. 3). Table 2 presents paired t-test p-values and Pearson correlation coefficients for each resident, comparing their Ki-67 scoring with and without algorithmic assistance. Mean scores did not differ significantly from the ground truth for residents R5, R9, and R10 without the algorithm, and for R3, R4, R7, R9, and R10 with the algorithm. The Pearson correlation coefficient increased for the majority of residents.

Table 2.

Paired t-test p-values and Pearson correlation coefficients

Rater Paired T-test p-value (without algorithm) Paired T-test p-value (with algorithm) Pearson correlation coefficient (without algorithm) Pearson correlation coefficient (with algorithm)
R1 0.00 0.01 0.83 0.92
R2 0.00 0.00 0.84 0.90
R3 0.01 0.57 0.86 0.92
R4 0.00 0.48 0.93 0.94
R5 0.76 0.01 0.87 0.94
R6 0.00 0.00 0.87 0.85
R7 0.00 0.08 0.85 0.88
R8 0.00 0.00 0.94 0.90
R9 0.20 0.46 0.91 0.95
R10 0.07 0.53 0.84 0.92

Fig. 3.

Fig. 3

Boxplot comparison of residents’ Ki-67 scoring with and without algorithmic assistance. This boxplot shows the distribution of the Ki-67 scores, both before and after algorithmic assistance, and the ground truth. The x-axis represents the different raters (R1–R10) and their respective scores with and without the algorithm. The y-axis represents the Ki-67 score. Outliers are shown as dots above or below the whiskers

The inter-resident agreement with the algorithm had an ICC of 0.927 (95% CI, 0.908–0.944), compared with 0.848 (95% CI, 0.814–0.879) without assistance. This indicates somewhat lower agreement between residents when scoring alone and suggests that the algorithm helped residents achieve more consistent scores.

The mean absolute error (MAE), which is the arithmetic mean of the absolute errors, for residents without the algorithm was 6.26, compared with 4.31 for residents using the algorithm (p < 0.01). The root mean squared error (RMSE), which is the quadratic mean of the differences, was 9.44 for the residents without the algorithm, compared with 7.15 for those using the algorithm (p < 0.01). The algorithm appeared to reduce the average error, both in terms of MAE and RMSE, reflecting improved performance and reduced deviation from the senior pathologists (Table 3).

Table 3.

Performance metrics of pathology residents with and without algorithmic assistance

Rater MAE without algorithm RMSE without algorithm MAE with algorithm RMSE with algorithm Improved results Worsened results Ineffective change Total changes
R1 7.14 10.80 4.80 7.55 74 13 2 89
R2 7.78 11.28 5.79 8.94 63 7 1 71
R3 6.95 9.74 4.77 7.19 76 21 5 102
R4 4.26 7.28 3.08 5.98 55 31 6 92
R5 6.07 9.11 4.31 6.57 59 27 2 88
R6 7.42 11.00 5.87 10.04 65 12 1 78
R7 5.60 9.09 3.49 6.27 72 21 5 98
R8 5.99 8.84 3.39 6.15 77 18 4 99
R9 5.14 7.44 3.40 5.71 79 28 2 109
R10 6.24 9.86 4.23 7.06 56 11 1 68

Table 3 shows the mean absolute error (MAE) and root mean squared error (RMSE) of the pathology residents’ Ki-67 scores, both with and without algorithmic assistance. In addition, an analysis of the number of cases where algorithmic assistance resulted in improvement, deterioration, or no change is included. Resident R8 had the highest change in MAE, with a reduction of 2.6 after using the algorithm, indicating a significant improvement in scoring accuracy. Resident R1 had the highest change in RMSE, with a reduction of 3.25, showing the greatest reduction in error variability with algorithmic assistance. R2 had the highest MAE (7.78) and RMSE (11.28) without the algorithm. R4 had the lowest MAE (4.26) and RMSE (7.28) without the algorithm.

Resident R9 had the highest number of decision changes with 109 cases. As a first-year resident, her mean Ki-67 score was 22 ± 18 without the algorithm and 23 ± 17 with the algorithm. The p-values of the paired t-test were 0.20 (without the algorithm) and 0.46 (with the algorithm), indicating no statistically significant difference. However, the Pearson correlation coefficient increased from 0.91 to 0.95, indicating improved agreement with the ground truth.

Resident R10 had the fewest decision changes with 68 cases. Also a first-year resident, her mean Ki-67 score was 22 ± 16 without the algorithm and 23 ± 17 with the algorithm. The p-values of the paired t-test were 0.07 (without the algorithm) and 0.53 (with the algorithm), again showing no statistically significant difference. The Pearson correlation coefficient increased from 0.84 to 0.92, indicating improved agreement with the ground truth.

Residents’ experience in pathology practice ranged from 1 to 4 years. As experience increased, the mean score difference was −0.219, while the standard deviation of scores improved by 0.032; however, this change was not statistically significant (p > 0.05).

Discussion

In our study, algorithm support significantly improved the accuracy of pathology residents’ Ki-67 scoring, as measured by a decrease in the MAE and RMSE, and shifted the mean scores toward the ground truth, to the point that half of the residents’ scores no longer differed significantly from it (paired t-test, p > 0.05). The algorithm also improved reliability, with an increase in the intraclass correlation coefficient (ICC) from 0.848 to 0.927. This improvement occurred because residents modified their scores in 57.8% of cases after using the algorithm, and 74% of these modifications brought their scores closer to those of senior pathologists.

Previous studies, such as those by Acs et al. [9] and Rimm et al. [10], have primarily focused on validating algorithmic tools for Ki-67 scoring against experienced pathologists, often showing high interobserver agreement (e.g., ICC > 0.80) across different laboratories and platforms [10]. In comparison, our study achieved an ICC of 0.98 between senior pathologists and the algorithm, highlighting the high reliability of algorithmic scoring in our dataset. More importantly, our results show that algorithm support significantly improved the scoring accuracy of pathology residents, with mean scores increasing from 19 ± 16 to 22 ± 16, approaching the senior pathologists’ mean score of 23 ± 18.

Previous studies have investigated the benefits of algorithms in reducing interobserver variability among pathologists, but not among residents. Abele et al. (2023) [11] reported an increase in Krippendorff’s alpha from 0.69 to 0.72 with their algorithm. However, the baseline and magnitude of change in interrater agreement differed in our study (0.848 to 0.927), which may reflect inherent discrepancies between senior pathologists and residents in scoring Ki-67 and suggests that residents may particularly benefit from structured algorithmic support to achieve consistency.

In our study, the MAE was 6.26 without algorithm support compared to 4.31 with support (p < 0.01). A previous study reported a similar reduction, from 11.0 to 4.71, among inexperienced pathologists [12]. Another study showed an MAE of 14.9 for manual eyeballing and 6.9 with assistance [13]. The slightly smaller reduction in MAE observed in our study compared to Cai et al. [12] may reflect differences in the level of training of the residents, the specific algorithms used, or the complexity of the cases. Overall, integrating human expertise with automated systems appears to improve accuracy, although the magnitude of the benefit varies across studies. Recent studies, such as Dy et al. (2024), have investigated the impact of AI-based tools on the performance of less experienced pathologists, demonstrating that AI systems can enhance diagnostic accuracy and reduce scoring variability, particularly in junior pathologists [14]. In contrast, the present study specifically examines the convergence between residents and senior pathologists, focusing on how algorithms may help align their scoring patterns. While the influence of AI on the performance of junior pathologists has been explored in the literature, the use of algorithms to improve consistency and accuracy in Ki-67 scoring, particularly between residents and expert pathologists, remains relatively underexplored.

We also observed heteroscedasticity in the distribution of Ki-67 scores when compared to both the ground truth and the algorithm. This phenomenon was noted in the review by Nielsen et al. [15], which described the log-normal distribution of Ki-67 scores. The discrepancy may also be due to differences in counting methods: pathologists tend to count low scores individually (1 by 1) and higher scores in increments (10 by 10), whereas the algorithm consistently counts 1 by 1 across all score intervals.

One limitation of this study is that the agreement between residents and senior pathologists depended on the Ki-67 algorithm being consistent with the senior pathologists’ scoring. Differences in how pathologists interpret Ki-67 staining, which can be influenced by their individual experience and subjective judgment, may lead to discrepancies between the algorithm scores and the pathologists’ scores. Using the Ki-67 algorithm without ensuring alignment with the pathologists’ interpretations could compromise the reliability and accuracy of the scoring process, potentially impacting clinical decision-making and diagnostic outcomes. Therefore, the Ki-67 algorithm must be carefully trained and validated to align with the interpretations of senior pathologists.

Although our results show that the algorithm improves the accuracy of Ki-67 scoring, there are two main concerns: anchoring bias and automation bias. With anchoring bias, the output of the algorithm may anchor residents’ final decisions too closely to the suggested score [16, 17]. With automation bias, residents may become overly reliant on the algorithm’s output and uncritically accept its suggestions [18, 19]. Taken together, these biases can impair critical thinking and decision-making in diagnostic assessment. However, the intended use of the Ki-67 decision support system is to provide real-time feedback on the Ki-67 scores assigned by residents. If residents are aware of these biases, the immediate assessment may help them identify errors and refine their skills, improving their diagnostic accuracy.

Conclusions

In this study, Ki-67 algorithm assistance was found to reduce the mean absolute error of residents’ scores with respect to the senior pathologists’ results. Results generated by the algorithm were closely correlated with those of senior pathologists. The assistance improved residents’ accuracy and interrater agreement. Future studies are needed to address anchoring bias and to verify whether accuracy is sustained when residents subsequently score without algorithmic support.

Author Contribution

Mine Ilayda Sengor Aygun: investigation, data curation, writing—original draft, and writing—review. Ozben Yalcin: methodology, project administration, supervision, investigation, and writing—review. Burak Uzel: conceptualization, methodology, formal analysis, result interpretation, supervision and writing—review. Gamze Kulduk: resources, methodology and investigation. Cem Comunoglu: conceptualization, methodology, investigation, supervision, writing—original draft, and writing—review. All authors read and approved the final manuscript.

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Declarations

Ethics Approval

This study was approved by the Educational Planning Board of Prof. Dr. Cemil Tascioglu City Hospital (E-48670771–770-238421922). No specific grants from public, commercial, or non-profit sectors were received. All procedures performed in this study adhered to the ethical standards of the institutional and/or national research committee and conformed to the 1964 Declaration of Helsinki and subsequent or comparable ethical standards.

Conflict of Interest

The authors declare no competing interests.

Footnotes

The summary of this study was presented at the 20th European Digital Pathology Congress in Vilnius, Lithuania, from June 5 to 8, 2024, titled “Ki-67 Decision Support Algorithm for Pathology Residents”.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Sun X, Kaufman PD. Ki-67: more than a proliferation marker. Chromosoma 2018;127:175. 10.1007/S00412-018-0659-8.
2. Pathmanathan N, Balleine RL. Ki67 and proliferation in breast cancer. J Clin Pathol 2013;66:512–6. 10.1136/jclinpath-2012-201085.
3. De Azambuja E, Cardoso F, De Castro G, Colozza M, Mano MS, Durbecq V, et al. Ki-67 as prognostic marker in early breast cancer: a meta-analysis of published studies involving 12,155 patients. Br J Cancer 2007;96:1504–13. 10.1038/SJ.BJC.6603756.
4. Yerushalmi R, Woods R, Ravdin PM, Hayes MM, Gelmon KA. Ki67 in breast cancer: prognostic and predictive potential. Lancet Oncol 2010;11:174–83. 10.1016/S1470-2045(09)70262-1.
5. Rashmi Kumar N, Schonfeld R, Gradishar WJ, Lurie RH, Moran MS, Abraham J, et al. NCCN Guidelines Version 1.2025 Breast Cancer. 2025.
6. Dowsett M, Nielsen TO, A’Hern R, Bartlett J, Coombes RC, Cuzick J, et al. Assessment of Ki67 in breast cancer: recommendations from the International Ki67 in Breast Cancer working group. J Natl Cancer Inst 2011;103:1656–64. 10.1093/JNCI/DJR393.
7. Li L, Han D, Yu Y, Li J, Liu Y. Artificial intelligence-assisted interpretation of Ki-67 expression and repeatability in breast cancer. Diagn Pathol 2022;17. 10.1186/S13000-022-01196-6.
8. Pathology Visions 2023 Overview. J Pathol Inform 2024:100362. 10.1016/j.jpi.2024.100362.
9. Acs B, Pelekanou V, Bai Y, Martinez-Morilla S, Toki M, Leung SCY, et al. Ki67 reproducibility using digital image analysis: an inter-platform and inter-operator study. Lab Invest 2019;99:107–17. 10.1038/S41374-018-0123-7.
10. Rimm DL, Leung SCY, McShane LM, Bai Y, Bane AL, Bartlett JMS, et al. An international multicenter study to evaluate reproducibility of automated scoring for assessment of Ki67 in breast cancer. Mod Pathol 2019;32:59–69. 10.1038/S41379-018-0109-4.
11. Abele N, Tiemann K, Krech T, Wellmann A, Schaaf C, Länger F, et al. Noninferiority of artificial intelligence-assisted analysis of Ki-67 and estrogen/progesterone receptor in breast cancer routine diagnostics. Mod Pathol 2023;36:100033. 10.1016/J.MODPAT.2022.100033.
12. Cai L, Yan K, Bu H, Yue M, Dong P, Wang X, et al. Improving Ki67 assessment concordance by the use of an artificial intelligence-empowered microscope: a multi-institutional ring study. Histopathology 2021;79:544–55. 10.1111/HIS.14383.
13. Bodén ACS, Molin J, Garvin S, West RA, Lundström C, Treanor D. The human-in-the-loop: an evaluation of pathologists’ interaction with artificial intelligence in clinical practice. Histopathology 2021;79:210–8. 10.1111/HIS.14356.
14. Dy A, Nguyen N-NJ, Meyer J, Dawe M, Shi W, Androutsos D, et al. AI improves accuracy, agreement and efficiency of pathologists for Ki67 assessments in breast cancer. Sci Rep 2024;14:1283. 10.1038/s41598-024-51723-2.
15. Nielsen TO, Leung SCY, Rimm DL, Dodson A, Acs B, Badve S, et al. Assessment of Ki67 in breast cancer: updated recommendations from the International Ki67 in Breast Cancer Working Group. J Natl Cancer Inst 2021;113:808–19. 10.1093/JNCI/DJAA201.
16. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science 1974;185:1124–31. 10.1126/SCIENCE.185.4157.1124.
17. Patel VL, Arocha JF, Kaufman DR. A primer on aspects of cognition for medical informatics. J Am Med Inform Assoc 2001;8:324. 10.1136/JAMIA.2001.0080324.
18. Goddard K, Roudsari A, Wyatt JC. Automation bias: a systematic review of frequency, effect mediators, and mitigators. J Am Med Inform Assoc 2012;19:121–7. 10.1136/AMIAJNL-2011-000089.
19. Langer M, Landers RN. The future of artificial intelligence at work: a review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers. Comput Human Behav 2021;123:106878. 10.1016/J.CHB.2021.106878.
