Abstract
Background
Information from ratings sites increasingly informs patients’ health care decisions, including their selection of physicians.
Objective
The current study sought to determine the validity of online patient ratings of physicians through comparison with physician peer review.
Methods
We extracted 223,715 reviews of 41,104 physicians from 10 of the largest cities in the United States, including 1142 physicians listed as “America’s Top Doctors” through physician peer review. Differences in mean online patient ratings were tested between physicians who were listed and those who were not.
Results
Overall, no differences were found in online patient ratings based upon physician peer review status. However, statistically significant differences were found for four specialties (allergy, family medicine, internal medicine, and pediatrics), with online patient ratings higher for physicians listed as a peer-reviewed “Top Doctor” than for those who were not.
Conclusions
The results of this large-scale study indicate that while online patient ratings are consistent with physician peer review for four nonsurgical, primarily in-office specializations, they are not consistent with physician peer review for specializations such as anesthesiology. This finding indicates that the validity of patient ratings varies by medical specialization.
Keywords: physician review websites, online patient ratings, physician peer review
Introduction
In a 2016 study, the Pew Research Center found that 84% of all adults in the United States use online ratings sites to inform their product or service purchase decisions [1]. The same is true for health care: patients increasingly access online ratings sites to inform their health care decisions, with online ratings emerging as the most influential factor for choosing a physician. In a 2017 study by the National Institutes of Health, 53% of physicians and 39% of patients reported visiting a health care rating website at least once [2]. Overall, physicians indicated that the numerical results from these ratings websites were valid approximately 53% of the time, while patients indicated that they thought the ratings were valid 36% of the time [2].
RateMDs.com, HealthGrades.com, and Vitals.com are three frequently visited health care provider ratings websites, with over 2.6 million, 6.1 million, and 7.8 million reviews, respectively [3-5]. For these three sites, numeric rating scales range from 1 (poor) to 5 (excellent) and cover perceptions of physician knowledge, helpfulness, punctuality, and office staff. Most patients give physicians positive ratings: one study reported that over 90% of all ratings were positive [6], and another reported that as the frequency of ratings increased, the mean rating increased [7].
Extending the findings of the study by the National Institutes of Health, we sought to determine the validity of online patient ratings through comparison with physician peer review, defined in this study through Castle Connolly Medical. Specifically, we tested whether mean online patient ratings, by specialty, are higher for physicians who have been nominated by their peers as one of “America’s Top Doctors,” as reported by Castle Connolly Medical, than for physicians who have not. If online patient ratings were consistent with Castle Connolly Medical, ratings for listed physicians would be higher than for those not listed, thereby supporting the validity of physician online review sites for informing health care-related decisions.
Methods
The basis for physician peer review selected for the current study is Castle Connolly Medical, a private consumer research firm that distinguishes top providers both nationally and regionally through a peer nomination process that involves over 50,000 providers and hospital and health care executives. Castle Connolly Medical receives over 100,000 nominations each year and a physician-led research team awards top providers from these nominations [8]. Lists are generated for each health care specialty as well as most subspecialties.
Several studies have similarly selected physician peer review through Castle Connolly Medical as a basis to assess the validity and role of patient online ratings sites, including an assessment of hand surgeons in the United States [9], as well as a more general correlation of physician attributes and ranking of hospital affiliations with peer review results [10]. Other studies have used alternative, domain-specific objective measures to corroborate online review sites against tangible outcomes, such as comparing restaurant ratings with patron visits [11].
Results
This study examined 223,715 reviews of 41,104 unique (nonduplicated) physicians from 10 of the largest cities in the United States (Atlanta, Boston, Chicago, Dallas, Washington DC, Los Angeles, Miami, New York, Philadelphia, and San Francisco). Reviews were extracted in January 2017. Of these physicians, 1142 were included as “America’s Top Doctors” in the Castle Connolly Medical rankings. The number of ratings and physicians evaluated makes this study the largest-scale evaluation of its kind to date. The profile of the overall sample is provided in Table 1. Specific elements extracted included doctor name, rating (numeric), number of reviews, specialization, source (ratings site), city, and state. To mitigate issues related to “fake” reviews as well as influential observations, we excluded any physician with fewer than three reviews and any specialization with fewer than five reviews. Of the physicians with reviews, 16,525 had fewer than three reviews, leaving a final analyzed sample of 24,579 physicians (Multimedia Appendix 1).
Table 1.
Ratings source | Number of physicians^a | Number of reviews | Average rating (1-5) |
HealthGrades | 17,385 | 113,427 | 3.97 |
RateMDs | 19,631 | 72,228 | 3.83 |
Vitals | 4088 | 38,060 | 4.06 |
Total | 41,104 | 223,715 | 3.91 |
^a Nonduplicated, unique number of physicians.
From Multimedia Appendix 1, four specializations demonstrated differences in average online patient ratings between physicians included in Castle Connolly Medical’s listing of “America’s Top Doctors” and those not listed: allergists, family physicians, internists, and pediatricians. For each of these specializations, physicians with a listing in Castle Connolly Medical received a higher rating than physicians not listed. The remaining specializations exhibited little difference between physicians listed and those not listed. A sketch of this comparison appears below.
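For readers who wish to replicate the exclusion rules and the listed-versus-unlisted comparison described above, the following Python sketch outlines one possible implementation. It is illustrative only: the input file, column names, and the choice of Welch’s t-test are assumptions, as the analysis code and exact statistical test are not published here.

```python
# Illustrative sketch only; not the authors' published analysis code.
# The file name, column names, and Welch's t-test are assumptions.
import pandas as pd
from scipy import stats

# Assumed layout: one row per physician, with name, specialty, mean
# rating, review count, and a boolean Castle Connolly listing flag.
df = pd.read_csv("physician_ratings.csv")  # hypothetical extract

# Exclusion rules described in the Results: drop physicians with fewer
# than three reviews, then specializations with fewer than five reviews.
df = df[df["num_reviews"] >= 3]
df = df[df.groupby("specialty")["num_reviews"].transform("sum") >= 5]

# Compare mean ratings of listed vs unlisted physicians per specialty.
for specialty, group in df.groupby("specialty"):
    listed = group.loc[group["top_doctor"], "rating"]
    unlisted = group.loc[~group["top_doctor"], "rating"]
    if len(listed) >= 2 and len(unlisted) >= 2:
        t_stat, p_val = stats.ttest_ind(listed, unlisted, equal_var=False)
        print(f"{specialty}: listed={listed.mean():.2f}, "
              f"unlisted={unlisted.mean():.2f}, p={p_val:.3f}")
```

Welch’s t-test is used here because the listed and unlisted groups differ substantially in size and may differ in variance; a study replicating this analysis might reasonably choose a different test.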
Discussion
Principal Findings
This study sought to determine the validity of patient ratings for physicians by comparing mean online ratings, by specialty, between physicians who had been nominated by their peers as one of “America’s Top Doctors,” as reported by Castle Connolly Medical, and those who had not. We found that four specializations demonstrated differences in ratings between physicians included in Castle Connolly Medical’s listing and those not listed: allergists, family physicians, internists, and pediatricians. More broadly, our study found that the validity of online patient reviews of physicians varies by specialization. This finding has implications for how patients make choices related to health care.
Physicians have been inundated with mandates for attaining the “triple aim” of reducing costs while improving patient experience and quality [12]. In doing so, many have moved to a model of “patient-centered care,” which seeks to form continuous patient-physician relationships [13]. Some practices have thus begun to direct attention at both the nature of the patient-physician relationship and the quality of each encounter. Given that these efforts appear to be directed primarily at “primary care” and “in-office” settings, our finding that patient reviews are valid for specializations characterized by primarily “in-office” encounters is not unexpected.
Within the context of promoting competition, information transparency needs to be both complete and understood. Our results suggest that online patient ratings accomplish neither of these market objectives. In fact, shopping behavior may negatively influence quality of care: care continuity is associated with many positive health outcomes, including decreased hospitalizations, fewer emergency room visits, lower health care costs, and improved use of preventive care services [14]. Conversely, evidence indicates that patients who experience more fragmented primary care also deviate more from established best practice guidelines and incur higher overall health care costs. Negative reviews could thus promote “doctor shopping” based on incomplete or nonfactual information, leading to more fragmented care continuity and potentially less optimal health outcomes [15,16].
Health systems have called for more holistic approaches to treating patients and for placing measurable value on attributes such as trust and continuity of care [17]. In a recent edition of JAMA Internal Medicine, physicians discussed the influence that standardized quality assessment tools have on care practice and the need to be thoughtful when constructing such measures [18]. Physician rating websites have utility but are imperfect proxies for competence [19,20]. If such questions have arisen about standardized best practice measurement, even greater questions exist about unstandardized, undefined open assessments such as online patient reviews, particularly in specialties where the patient has limited direct experience with their health care provider (eg, anesthesiology).
Limitations
The basis selected for physician peer review in this study, Castle Connolly Medical, is not immune to challenge; while the organization does not accept payments or petitions, physicians have publicly questioned the “lobbying” efforts that some colleagues undertake to be included in its lists. Moreover, no objective ground truth for determining a “good” or “bad” physician has been established. Other studies have explored alternative assessments of physician performance (eg, clinical outcomes, costs to treat, board certifications) and have acknowledged a variety of issues and limitations related to associating reviews with performance [21,22].
The current study incorporated only average numerical ratings for physicians (rather than an individual numeric rating for each review) from the three ratings sources; the text of reviews was not analyzed. While the patterns and general findings would likely not change based upon text analysis, the text may provide additional insights regarding frequently occurring terms or relevant patterns for interested researchers.
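As a starting point for such text analysis, a simple term-frequency pass could surface frequently occurring terms. The sketch below is a minimal illustration, assuming review texts have been collected as a list of strings; the example reviews and stopword list are hypothetical, and our dataset did not include review text.

```python
# Minimal sketch of the term-frequency analysis suggested above.
# Assumes review texts are available as strings, which was not the
# case for the dataset analyzed in this study.
import re
from collections import Counter

reviews = [
    "Dr. Smith was punctual and very helpful.",
    "Helpful staff, but the wait was long.",
]  # hypothetical review texts

STOPWORDS = {"the", "and", "was", "but", "a", "very"}  # illustrative

# Lowercase, tokenize on letters/apostrophes, and drop stopwords.
counts = Counter(
    token
    for text in reviews
    for token in re.findall(r"[a-z']+", text.lower())
    if token not in STOPWORDS
)
print(counts.most_common(5))  # most frequent terms across reviews
```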
We were not able to ascertain details about the individuals providing the ratings. Specifically, this study did not consider patients’ insurance type, which could affect how a patient experiences the service provided relative to perceived value; those with higher out-of-pocket direct costs via copays and/or high deductibles may be more cost sensitive and therefore more likely to “shop” for health care in the face of these payments.
Conclusions
A deceptive review or set of reviews related to a hotel visit is an inconvenience, but decisions based on deceptive or poorly informed patient reviews of a health care provider could have dire consequences for an individual relying on them. Online ratings sites will likely continue to grow and expand across all segments of the economy. The results of this large-scale study indicate that while patient ratings are consistent with physician peer review for specialties such as allergy and pediatrics, they are not consistent with peer review for specializations characterized by less direct patient contact (eg, anesthesiology). This result may indicate that patients are not sufficiently knowledgeable to provide informed physician ratings for some medical specializations, potentially leading information-seekers to less-qualified providers.
Footnotes
Conflicts of Interest: None declared.
References
- 1. Smith A, Anderson M. Online shopping and e-commerce: online reviews. Pew Research Center. 2016 Dec 19. http://www.pewinternet.org/2016/12/19/online-reviews
- 2. Holliday AM, Kachalia A, Meyer GS, Sequist TD. Physician and patient views on public physician rating websites: a cross-sectional study. J Gen Intern Med. 2017 Jun;32(6):626–631. doi: 10.1007/s11606-017-3982-5
- 3. RateMDs. Doctor reviews and ratings. 2017. http://www.ratemds.com
- 4. Vitals. 2017. http://www.vitals.com
- 5. HealthGrades. Review your doctor. 2017. https://www.healthgrades.com/review
- 6. Emmert M, Sander U, Pisch F. Eight questions about physician-rating websites: a systematic review. J Med Internet Res. 2013 Feb 01;15(2):e24. doi: 10.2196/jmir.2360
- 7. Grabner-Kräuter S, Waiguny MKJ. Insights into the impact of online physician reviews on patients' decision making: randomized experiment. J Med Internet Res. 2015 Apr 09;17(4):e93. doi: 10.2196/jmir.3991
- 8. Castle Connolly. Top Doctors. 2018 [accessed 2018-03-26]. https://www.castleconnolly.com
- 9. Trehan SK, DeFrancesco CJ, Nguyen JT, Charalel RA, Daluiski A. Online patient ratings of hand surgeons. J Hand Surg Am. 2016 Jan;41(1):98–103. doi: 10.1016/j.jhsa.2015.10.006
- 10. Wiley MT, Rivas RL, Hristidis V. Provider attributes correlation analysis to their referral frequency and awards. BMC Health Serv Res. 2016 Mar 14;16:90. doi: 10.1186/s12913-016-1338-1
- 11. Gadidov B, Priestley JL. Does Yelp matter? Analyzing (and guide to using) ratings for a quick serve restaurant chain. In: Srinivasan S, editor. Guide to Big Data Applications. New York: Springer International Publishing; 2018. pp. 503–522.
- 12. Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med. 2014;12(6):573–576. doi: 10.1370/afm.1713
- 13. Detsky AS. What patients really want from health care. JAMA. 2011 Dec 14;306(22):2500–2501. doi: 10.1001/jama.2011.1819
- 14. Frandsen BR, Joynt KE, Rebitzer JB, Jha AK. Care fragmentation, quality, and costs among chronically ill patients. Am J Manag Care. 2015 May;21(5):355–362. http://www.ajmc.com/pubMed.php?pii=86101
- 15. Frost A, Newman D. Spending on shoppable services in health care. Health Care Cost Institute. 2018. http://www.healthcostinstitute.org/files/Shoppable%20Services%20IB%203.2.16_0.pdf
- 16. Desai S, Hatfield LA, Hicks AL, Chernew ME, Mehrotra A. Association between availability of a price transparency tool and outpatient spending. JAMA. 2016 May 03;315(17):1874–1881. doi: 10.1001/jama.2016.4288
- 17. Friedberg M, Chen P, Van Busum KR, Aunon F, Pham C, Caloyeras J, Mattke S, Pitchforth E, Quigley DD, Brook RH, Crosson FJ, Tutty M. Factors affecting physician professional satisfaction and their implications for patient care, health systems, and health policy. Rand Health Q. 2014;3(4):1. http://europepmc.org/abstract/MED/28083306
- 18. Goitein L, James B. Standardized best practices and individual craft-based medicine: a conversation about quality. JAMA Intern Med. 2016 Jun 01;176(6):835–838. doi: 10.1001/jamainternmed.2016.1641
- 19. Murphy GP, Awad MA, Osterberg EC, Gaither TW, Chumnarnsongkhroh T, Washington SL, Breyer BN. Web-based physician ratings for California physicians on probation. J Med Internet Res. 2017 Aug 22;19(8):e254. doi: 10.2196/jmir.7488
- 20. Okike K, Peter-Bibb TK, Xie KC, Okike ON. Association between physician online rating and quality of care. J Med Internet Res. 2016 Dec 13;18(12):e324. doi: 10.2196/jmir.6612
- 21. Trzeciak S, Gaughan JP, Bosire J, Mazzarelli AJ. Association between Medicare summary star ratings for patient experience and clinical outcomes in US hospitals. J Patient Exp. 2016 Mar;3(1):6–9. doi: 10.1177/2374373516636681
- 22. Liu J, Matelski J, Cram P, Urbach D, Bell C. Association between online physician ratings and cardiac surgery mortality. Circ Cardiovasc Qual Outcomes. 2016 Nov;9(6):788–791. doi: 10.1161/CIRCOUTCOMES.116.003016
Associated Data
Supplementary Materials
Overall mean ratings by specialization for physicians listed and not listed in Castle Connolly Medical.