American Journal of Rhinology & Allergy. 2020 Sep 11;35(3):341–347. doi: 10.1177/1945892420958366

Academic Rhinologists’ Online Rating and Perception, Scholarly Productivity, and Industry Payments

Khodayar Goshtasbi 1, Brandon M Lehrich 1, Mehdi Abouzari 1, Dariush Bazyani 1, Arash Abiri 1, Peter Papagiannopoulos 2, Bobby A Tajudeen 2, Edward C Kuan 1
PMCID: PMC8258306  PMID: 32915651

Abstract

Introduction

The emergence of popular online rating websites, social media platforms, and public databases of industry payments and scholarly output provides a comprehensive physician online presence, which may guide patient choice and satisfaction.

Methods

Websites of all U.S. otolaryngology academic institutions were queried for fellowship-trained rhinologists. Additional well-known and academically active rhinologists were identified by the senior author. Online ratings and comments were collected from Google, Healthgrades, Vitals, and RateMD websites, and weighted rating scores (RS) were calculated on a 1–5 scale.

Results

A total of 210 rhinologists with 16 ± 9 years of practice were included, and their 6901 online ratings (33 ± 47 per rhinologist) yielded an average RS of 4.3 ± 0.6. RS did not differ by gender (p = 0.58), geographic region (p = 0.48), social media presence (p = 0.41), or attendance at a top-ranked medical school (p = 0.86) or residency program (p = 0.89). Years of practice negatively correlated with RS (R = –0.22, p < 0.01), and academic ranking significantly influenced RS, with professors, associate professors, and assistant professors scoring 4.1 ± 0.6, 4.3 ± 0.4, and 4.4 ± 0.6, respectively (p = 0.03). Of the 3,304 narrative comments analyzed (3.1 ± 11.6 per rhinologist), 76% (positive) and 7% (negative) addressed clinical knowledge/outcomes, 56% (positive) and 7% (negative) addressed communication/bedside manner, and 9% (positive) and 7% (negative) addressed office staff, cost, and wait time. All negative comment categories had moderate negative correlations with RS, while positive comment categories regarding knowledge/competence and bedside manner weakly correlated with higher RS. Number of publications (48 ± 54) positively correlated with 2018 industry payments ($11,384 ± $19,025) among those receiving industry compensation >$300 (n = 113). Attending a top-ranked medical school was associated with higher industry payments (p < 0.01) and H-index (p = 0.02).

Conclusion

Academic rhinologists’ online RS was not associated with gender, geographic location, or attending a top-ranked training program, and their scholarly productivity was significantly correlated with total industry payments.

Keywords: academic, rhinologist, online rating, scholarly, open payment, Google

Introduction

The Internet provides an unprecedented repository of public data constituting a physician’s online profile, which can include patient-reported ratings and comments, 1 scholarly output, 2 industry payments, 3 or malpractice claims. 4 With the advent of various easily accessible online platforms, physicians’ professional and personal information is increasingly available to the public and often not directly controlled by physicians.5,6 These platforms provide a popular avenue for physician branding and perception, and may ultimately influence provider choice and satisfaction as well as future opportunities and collaborations.1,5,7 These publicly available domains and perceptions, which can influence current and future patients, colleagues, and private and public entities, therefore warrant careful investigation.

In contrast to traditional methods of assessing patient satisfaction via in-person and hospital-protected surveys, 8 the continuing merger of the internet and healthcare has paved the way for public online platforms to host patients’ voices and opinions. A 2013 study by Emmert and colleagues reported that 25% of patients viewed their physicians’ online rating profiles, 9 a proportion that has likely increased in more recent years. As such, it is important to investigate physicians’ patient-reported ratings and comments in order to elucidate influential factors and methods for improvement. For instance, recent studies have suggested that bedside manner, clinical proficiency and knowledge, wait time, and time spent with patients directly influence online ratings.10–12 Other factors outside of physicians’ immediate control, such as gender, age, subspecialty, or region of practice, may also play important roles in the online ratings and comments received.13–15

The emerging popularity of various online platforms has also allowed open and convenient access to physicians’ scholarly output and industry payments. Both have possibly increased over the years among academic surgeons,16,17 and industry funding can be associated with greater scholarly impact.18,19 Considering these domains together, this study evaluated fellowship-trained rhinologists from all U.S. academic Otolaryngology Departments with respect to their online rating scores and narrative comments, social media presence, industry payments, and scholarly contributions. This study aimed to elucidate factors that may lead to an improved overall rating score for rhinologists. Additionally, we aimed to identify characteristics associated with greater scholarly productivity and with the number and monetary amount of industry payments. A thorough investigation of these datasets can help physicians gain a better understanding of their online presence and perception, and tailor future adjustments to their practice in an era of patient-driven healthcare.

Methods

The websites of all Otolaryngology academic institutions within the U.S. were queried for fellowship-trained academic rhinologists. Additional senior rhinologists (without formal fellowship training noted on their faculty websites) were added according to the knowledge of the senior author (E.C.K.) and the American Rhinologic Society (ARS) membership portal. The cohort’s state of practice was categorized into West, South, Midwest, and Northeast regions. Online rating scores and narrative comments were collected from Google.com, Healthgrades.com, Vitals.com, and RateMD.com from each website’s inception through December 2019. A cumulative weighted rating score (RS) on a 1–5 scale was calculated using the following formula: [(Google rating × Number of Google votes) + (Healthgrades rating × Number of Healthgrades votes) + (Vitals rating × Number of Vitals votes) + (RateMDs rating × Number of RateMDs votes)]/(Total number of votes across the four platforms). Narrative comments from all four websites were categorized thematically into: 1) professional knowledge and clinical competence/outcomes, 2) bedside manner, communication, and time allocation, and 3) office staff, cost, insurance, wait time, and making appointments. All categorized comments were further subcategorized as positive or negative based on their connotations. The categories were not mutually exclusive; thus, one comment could encompass multiple domains and be counted for each accordingly. For instance, a comment reading “the doctor spent time answering my questions and my surgery went well, but his office wait was too long” was counted as one positive point for professional knowledge and clinical competence/outcomes (surgical outcome specifically), one positive point for bedside manner, communication, and time allocation (time allocation specifically), and one negative point for office staff, cost, insurance, wait time, and making appointments (wait time specifically). All comments were evaluated by one author (D.B.) to ensure consistency, and questionable comments were re-evaluated by a second author (K.G.) for a final decision. In cases of disagreement between the two authors, the senior author (E.C.K.) made the final decision.
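As an illustration of this weighting, the following is a minimal Python sketch of the vote-weighted RS calculation described above (the function name and example numbers are hypothetical, not the authors’ actual code):

```python
def weighted_rating_score(platform_ratings):
    """Compute a cumulative weighted rating score (RS) on a 1-5 scale.

    platform_ratings: list of (mean_rating, num_votes) tuples, one per
    platform (e.g., Google, Healthgrades, Vitals, RateMDs).
    """
    total_votes = sum(votes for _, votes in platform_ratings)
    if total_votes == 0:
        return None  # physician has no ratings on any platform
    weighted_sum = sum(rating * votes for rating, votes in platform_ratings)
    return weighted_sum / total_votes

# Hypothetical example: a rhinologist rated on all four platforms
ratings = [(4.6, 12), (4.1, 35), (4.4, 20), (3.9, 8)]
print(round(weighted_rating_score(ratings), 1))  # vote-weighted mean across platforms
```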

The cohort’s overall number of PubMed-indexed publications was collected using https://www.ncbi.nlm.nih.gov/pubmed through December 2019. H-index was determined using https://www.scopus.com. Medical school ranking was determined using the 2019 U.S. News and World Report Rankings for Research, and residency ranking was determined using the 2019 Doximity Otolaryngology Residency Program Ranking. For the latter, Doximity provides rankings by both “reputation” and “research output”; the top-10 programs used for categorization in this study were as follows: 1) reputation: Johns Hopkins, Massachusetts Eye & Ear, University of Michigan, Vanderbilt, University of Iowa, Icahn School of Medicine at Mt. Sinai, Ohio State University, University of Pittsburgh, University of Washington, and University of Pennsylvania; and 2) research output: Johns Hopkins, University of Pittsburgh, University of California, Los Angeles, University of Pennsylvania, University of Washington, Washington University, University of North Carolina, Massachusetts Eye & Ear, Stanford University, and Oregon Health and Science University.

The 2018 Open Payments Database, which tracks industry payments reported to the Centers for Medicare & Medicaid Services, was searched for the cohort. We set a $300 threshold for inclusion of Open Payments data in the analysis. Social media presence was defined as having a publicly accessible/viewable platform, namely a professional website, Facebook, Instagram, and/or Twitter account. All statistical analyses were performed using PASW Statistics 18 software (SPSS Inc., Chicago, IL), and a p-value < 0.05 was considered significant. We performed independent-samples t-tests and analysis of variance for continuous variables, and chi-squared analysis to compare categorical variables. For Pearson correlations, R-values < 0.2 were defined as weak, R-values of 0.2–0.4 as moderate, and R-values > 0.4 as strong.
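To illustrate how such correlations and strength categories could be computed, here is a minimal sketch using scipy (the variable names and example data are hypothetical, not the study dataset):

```python
from scipy import stats

def correlation_strength(r):
    """Categorize |R| using the thresholds stated in the Methods."""
    r = abs(r)
    if r > 0.4:
        return "strong"
    elif r >= 0.2:
        return "moderate"
    return "weak"

# Hypothetical data: years in practice vs. weighted rating score
years_in_practice = [3, 8, 12, 15, 20, 25, 30, 6, 18, 22]
rating_scores = [4.8, 4.5, 4.4, 4.2, 4.1, 4.0, 3.9, 4.6, 4.3, 4.0]

r, p = stats.pearsonr(years_in_practice, rating_scores)
print(f"R = {r:.2f}, p = {p:.3f}, {correlation_strength(r)} correlation")
```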

Results

A total of 194 fellowship-trained academic rhinologists identified from program faculty websites, plus an additional 16 practicing U.S. rhinologists active in academics and the ARS, were included in this study, with 16.0 ± 9.2 years of practice. The cohort consisted of 44 (21.0%) female physicians, and the overall geographic breakdown was as follows: West (n = 44, 21.0%), Midwest (n = 46, 21.9%), South (n = 72, 34.3%), and Northeast (n = 47, 22.4%). The numbers of physicians with at least one rating score on Vitals, Healthgrades, Google, and RateMD were 161 (76.7%), 158 (75.2%), 97 (46.2%), and 74 (35.2%), respectively, and the sub-categorical mean ratings are presented in Table 1. An average of 32.9 ± 46.8 online ratings (on a 1–5 scale) was recorded per physician, amounting to a total of 6901 ratings and a calculated overall RS of 4.3 ± 0.6 (range = 2.6–5.0). The calculated RS was similar between male and female rhinologists (4.3 ± 0.6 vs 4.3 ± 0.7, p = 0.58), and among rhinologists from the West, Midwest, South, and Northeast (4.2 ± 0.7 vs 4.2 ± 0.7 vs 4.3 ± 0.5 vs 4.3 ± 0.5, p = 0.48). A similar RS was also observed between rhinologists with a social media presence (n = 91, 43.3%) and those without one (4.3 ± 0.5 vs 4.2 ± 0.6, p = 0.41). Years of practice negatively correlated with RS (R = –0.22, p < 0.01); physicians with ≥15 years of experience had an RS of 4.1 ± 0.5 versus 4.4 ± 0.6 for those with <15 years in practice (p < 0.01). Academic ranking also significantly influenced RS, with professors, associate professors, and assistant professors scoring 4.1 ± 0.6, 4.3 ± 0.5, and 4.4 ± 0.7, respectively (p = 0.033).

Table 1.

Academic Rhinologists’ Sub-categorical Rankings in Healthgrades, Vitals, and RateMD Rating Websites.

Criteria (Platform): Mean Score
Physician’s trustworthiness (H): 4.3 ± 0.8
Explaining conditions well (H): 4.3 ± 0.8
Answering questions (H): 4.3 ± 0.7
Time well spent (H): 4.3 ± 0.8
Office scheduling (H): 4.3 ± 0.6
Office environment (H): 4.5 ± 0.6
Staff friendliness (H): 4.4 ± 0.6
Easy appointment (V): 4.1 ± 0.8
Promptness (V): 4.0 ± 0.8
Friendliness (V): 4.2 ± 0.8
Accurate diagnosis (V): 4.2 ± 0.8
Bedside manners (V): 4.2 ± 0.7
Spending adequate time (V): 4.2 ± 0.9
Appropriate follow-up (V): 4.1 ± 0.9
Staff (R): 4.0 ± 1.2
Punctuality (R): 3.9 ± 1.2
Helpfulness (R): 4.1 ± 1.1
Knowledge (R): 3.9 ± 1.3
Wait time* (H): 14.3 ± 11.1
Wait time* (V): 16.3 ± 9.1

All reported scores are on a 1–5 scale, except wait time (designated with *) which is in minutes. H = Healthgrades (n = 146), V = Vitals (n = 135), R = RateMD (n = 67).

With a mean of 17.0 ± 38.9 narrative comments per physician (median = 7.0), a pooled total of 3,304 comments was collected from the four rating websites. All comments were analyzed for thematic content, and the results are summarized in Table 2. Pearson correlation demonstrated that all negative comment categories moderately correlated with a lower RS, while positive comment categories regarding knowledge/competence and bedside manner weakly correlated with a higher RS. The proportions of comment categories were similar between male and female rhinologists (all p > 0.05), but physicians with ≥15 years of experience had more overall positive (p = 0.01) and negative remarks (p < 0.01) compared to those with fewer years of experience.

Table 2.

Thematic Content of the Formally Fellowship-Trained Rhinologists’ 3,304 Comments, and Its Pearson Correlation (R) With the Overall Weighted Rating Score.

Thematic content: positive comments n (%), R (p-value); negative comments n (%), R (p-value)
Knowledge, correct diagnosis, clinical competence and outcomes: positive 2500 (75.7%), R = 0.16 (p = 0.04); negative 214 (6.5%), R = −0.37 (p < 0.01)
Bedside manner, answering questions and spending appropriate time: positive 1838 (55.6%), R = 0.17 (p = 0.03); negative 246 (7.4%), R = −0.39 (p < 0.01)
Wait time, ease of appointment making, office staff, cost, insurance: positive 286 (8.7%), R = 0.15 (p = 0.06); negative 229 (6.9%), R = −0.26 (p < 0.01)

The cohort’s number of PubMed-indexed publications ranged from 2–429, with a mean of 47.6 ± 54.4 publications per physician. The cohort’s mean H-index was 14.0 ± 11.3, which positively correlated with years in practice (R = 0.62, p < 0.01). According to the 2018 Open Payments database, a total of 113 (53.8%) rhinologists received at least $300 in industry payments, with a mean of $11,384 ± $19,025 among recipients (range = $303–$79,423; median = $1,670). Among recipients, industry payment amount and number of publications were positively correlated (R = 0.27, p < 0.01). There was also a positive correlation between industry payments and H-index (R = 0.33, p < 0.01). Rhinologists with no industry payments had fewer publications than those with any payment (36.1 ± 37.5 vs 57.3 ± 64.0, p = 0.01), but years in practice were similar between the two subgroups (15.5 ± 9.7 vs 16.4 ± 8.9, p = 0.48). The relationships between faculty members’ H-index and industry payments, and between years in practice and overall composite score, are shown in Supplementary Figure 1. Lastly, the influence of training at a top-ranked medical school or residency program on the cohort’s publication output, H-index, online RS, and Open Payments amounts is presented in Table 3.

Table 3.

The Influence of Attending Top-25 Medical School (n = 60), Top-50 Medical School (n = 94), Top-10 Residency Program by Reputation (N = 43), and Top-10 Residency Program by Research Output (N = 49), on Publication Output, H-Index, Online RS, and Open Payment Amount of the Formally Fellowship-Trained Rhinologists.

Training at ___ versus not: publication output; H-index; online RS; Open Payment amount
Top 25 medical school: publications 51.2 ± 46.9 vs 42.2 ± 54.7 (p = 0.27); H-index 16.1 ± 11.4 vs 12.0 ± 10.1 (p = 0.02); online RS 4.3 ± 0.5 vs 4.3 ± 0.6 (p = 0.86); payments $17,816 ± $24,637 vs $5,806 ± $9,749 (p < 0.01)
Top 50 medical school: publications 45.7 ± 50.5 vs 43.9 ± 54.6 (p = 0.82); H-index 14.5 ± 11.8 vs 12.1 ± 9.3 (p = 0.11); online RS 4.31 ± 0.5 vs 4.2 ± 0.6 (p = 0.42); payments $13,783 ± $21,564 vs $5,839 ± $9,823 (p = 0.02)
Top 10 residency (reputation): publications 49.6 ± 67.5 vs 43.4 ± 47.5 (p = 0.50); H-index 13.4 ± 11.1 vs 12.9 ± 9.1 (p = 0.77); online RS 4.3 ± 0.5 vs 4.3 ± 0.6 (p = 0.89); payments $10,389 ± $18,339 vs $8,694 ± $13,394 (p = 0.71)
Top 10 residency (research): publications 54.4 ± 49.4 vs 41.6 ± 53.2 (p = 0.14); H-index 12.7 ± 10.6 vs 15.1 ± 10.7 (p = 0.17); online RS 4.2 ± 0.7 vs 4.3 ± 0.6 (p = 0.65); payments $10,237 ± $18,181 vs $10,034 ± $17,400 (p = 0.96)

P-values < 0.05 denote statistical significance.

Discussion

Through examining current academic rhinologists from all U.S. academic otolaryngology institutions, this study provides information regarding patient-reported online ratings and narrative comments, social media presence, scholarly productivity and industry payments, and factors that may be associated with these domains. The majority of rhinologists were rated on at least one rating website, and RS was negatively associated with years of experience but not influenced by gender, region of practice, or social media presence. Many narrative comments were positive, addressing physicians’ knowledge and correct diagnoses, clinical competence and surgical outcomes, bedside manner, and time spent answering questions. Overall research productivity, as measured by publication volume, correlated with industry payments. The cohort’s scholarly output and online RS were not influenced by attending top-ranked medical schools or residency programs, although attending a top-ranked medical school was associated with a higher H-index and industry payments.

Today, the Internet has become a readily accessible platform for information, where patients or colleagues can access highly personal information about almost any provider. Online physician rating websites are rapidly growing in popularity, and a better understanding of which factors influence ratings, and whether ratings correlate with quality of care, is important. 20 Ideally, these ratings would reflect a physician’s clinical prowess, surgical skills and outcomes, and bedside manner. However, the reality can be far from perfect, and ratings often reflect only the immediate mood and whim of the patient. For instance, patients who were extremely satisfied or unsatisfied with a physician may be more likely to rate that physician, as a means to reward or punish them, respectively. This may prevent many moderately satisfied opinions from being voiced, and it is not uncommon to see negative ratings of outstanding clinicians and vice versa. 21 In some ways, categorizing the narrative comments is more practical, as it can shed light on what areas patients consider important when receiving care from rhinologists.

Of note, our calculated RS for rhinologists was in line with those previously reported for neurosurgeons 22 and neurotologists. 14 Patients developed good rapport with both male and female clinicians, and RS was not associated with medical school or residency rankings. In a study by Tsugawa et al., medical school or residency ranking was also shown not to be associated with patient outcomes. 3 We found no association between online RS and having a social media presence. This suggests that, despite efforts at practice promotion and establishing para-social bonds with current and potential patients, there is likely no substitute for face-to-face visits and a direct patient-physician relationship. Analysis of the narrative comments demonstrated that the vast majority of rhinologists were recognized for positive clinical care delivery, which was of utmost value to most patients who provided comments. Of almost equal importance was the quality of the patient-physician relationship and bedside manner, which is an important area regardless of specialty.10,12,23 Interestingly, poor communication can also be a risk factor for medical malpractice, 24 and apologizing to the patient for a poor clinical outcome may help reduce malpractice claims, payment amounts, and settlements. 25 Since patients with poor outcomes tend to have higher rates of litigation, 26 clinicians should focus not only on developing a sound clinical plan but also on communicating it effectively to patients through joint decision making, when appropriate. A minority of comments focused on the environment of the visit itself and cost, which are more challenging to modify. In any busy practice, office staff are expected to work to extremely high standards, and not all patients may be accustomed to the diverse personalities in the office. Moreover, academic centers, which were the practice settings of our entire cohort, are often subject to hospital-regulated visit costs mediated through dedicated billing departments, which are usually beyond the direct control of an individual physician. Furthermore, although shorter wait times and longer face time are both associated with better patient satisfaction, 27 longer waits followed by shorter appointments are not unusual among surgical subspecialists due to high per-capita demand.

In the current study, increased overall academic productivity was associated with increased industry payments in a given year. The H-index is a metric used to quantify academic influence within a field; specifically, it evaluates an author’s citation frequency across his or her publication volume (e.g., an author with at least 5 publications, each cited at least 5 times, has an H-index of 5). Industry payment among clinicians is an increasingly important topic of interest, as it can be associated with certain prescriptions or clinical/research practices among otolaryngologists.28,29 A previous report on academic radiation oncologists demonstrated that industry payments were significantly associated with H-index. 30 Our finding was in line with studies by Eloy et al. and Svider et al., demonstrating positive associations between industry contributions and scholarly impact among academic neurosurgeons and otolaryngologists.18,19 Of note, this association is likely multi-faceted and not necessarily causal. For instance, it is not always clear whether more funding facilitates more research, or whether clinicians with impressive scholarly track records are more likely to receive industry payments. 18 It is also plausible that industry often seeks the expertise of key opinion leaders within the field, who are mostly academically driven by nature and highly productive in scientific endeavors. In addition to H-index, we observed that industry payment was statistically influenced by attending a top-ranked medical school. Likewise, a comprehensive study of approximately 550,000 physicians reported higher industry payments among physicians from top-ranked medical schools or certain surgical specialties. 31 To our knowledge, our analyses of otolaryngology residency ranking according to reputation or research output are novel; neither was associated with higher industry payments, H-index, or research output.
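As a concrete illustration of the definition given above, the following is a minimal Python sketch of an H-index calculation from a list of per-paper citation counts (illustrative only; not tied to Scopus or any particular database):

```python
def h_index(citation_counts):
    """Return the largest h such that the author has at least h papers,
    each cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example from the text: five publications, each cited five times -> h = 5
print(h_index([5, 5, 5, 5, 5]))   # 5
print(h_index([10, 8, 5, 4, 3]))  # 4
```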

There are several limitations to this study. First, not all online rating websites were evaluated, and some of the included ratings/comments may have been redundant or from the same patients. Selection bias may also play an important role in patients’ ratings and narrative comments, as those with exceptionally good or bad experiences may be more motivated to voice their opinions. Ratings may not necessarily be authentic, as there are few restrictions and regulations on posting a rating or comment, and there is no reliable means to validate authenticity. Moreover, physicians with a smaller number of ratings are affected more by outlier scores, and ratings and narrative comments in general may not correlate with physician competence or quality of care.1,32 Additionally, the total number of publications or H-index may not be the best measure of research productivity, since research grants, longitudinal basic science projects, and higher-impact research are also important measures of scholarly productivity but were not accounted for in this manuscript. Furthermore, the H-index is not the sole measure of research productivity, as other indices have been used for similar purposes, namely the m-index, 30 Relative Citation Ratio, 33 and Radicchi index. 34 In addition, Scopus and other sources for H-index (e.g., Web of Knowledge, Google Scholar, and ResearchGate) can provide conflicting results. 35 Lastly, certain established rhinologists may not have been included in the study despite being well involved in industry relationships or scholarly productivity. Despite these limitations, this study highlights several important and publicly accessible domains of academic rhinologists, which can help them gain a better understanding of their online presence, perception, and factors that may influence certain outcomes.

Conclusion

Academic rhinologists’ online presence is multifaceted, with online ratings, scholarly output, and transparency of conflicts of interest potentially affecting future opportunities and patient perception and satisfaction. Patient-reported online ratings were not associated with gender, geographic location, or attending a top-ranked training program. Many comments were positive and concerned physicians’ clinical knowledge and outcomes. Among academic rhinologists, scholarly productivity and total industry payments were positively correlated. Attending top-ranked training programs was not associated with higher H-index or research output.

Supplemental Material

sj-jpg-1-ajr-10.1177_1945892420958366 - Supplemental material for Academic Rhinologists’ Online Rating and Perception, Scholarly Productivity, and Industry Payments

Supplemental material, sj-jpg-1-ajr-10.1177_1945892420958366, for Academic Rhinologists’ Online Rating and Perception, Scholarly Productivity, and Industry Payments by Khodayar Goshtasbi, MS; Brandon M. Lehrich, BS; Mehdi Abouzari, MD, PhD; Dariush Bazyani, BS; Arash Abiri, BS; Peter Papagiannopoulos, MD; Bobby A. Tajudeen, MD; and Edward C. Kuan, MD, MBA, in American Journal of Rhinology & Allergy

Authors’ Note: Portions of this work were accepted as a podium presentation at the American Rhinologic Society at the Combined Otolaryngology Spring Meeting, Atlanta, GA, April 23–24, 2020.

Declaration of Conflicting Interests: The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: ECK is a consultant for Stryker ENT (Kalamazoo, MI).

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

Supplemental Material: Supplemental material for this article is available online.

ORCID iDs

Khodayar Goshtasbi https://orcid.org/0000-0002-0045-2547

Mehdi Abouzari https://orcid.org/0000-0002-3585-698X

Arash Abiri https://orcid.org/0000-0003-2656-1060

Peter Papagiannopoulos https://orcid.org/0000-0002-4209-9980

Edward C. Kuan https://orcid.org/0000-0003-3475-0718

References

1. Hanauer DA, Zheng K, Singer DC, Gebremariam A, Davis MM. Public awareness, perception, and use of online physician rating sites. J Am Med Assoc. 2014;311(7):734–735.
2. Svider PF, Pashkova AA, Choudhry Z, et al. Comparison of scholarly impact among surgical specialties: an examination of 2429 academic surgeons. Laryngoscope. 2013;123(4):884–889.
3. Tsugawa Y, Blumenthal DM, Jha AK, Orav EJ, Jena AB. Association between physician US News & World Report medical school ranking and patient outcomes and costs of care: observational study. Br Med J. 2018;362:k3640.
4. Paik AM, Mady LJ, Sood A, Eloy JA, Lee ES. A look inside the courtroom: an analysis of 292 cosmetic breast surgery medical malpractice cases. Aesthet Surg J. 2014;34(1):79–86.
5. Mostaghimi A, Crotty BH, Landon BE. The availability and nature of physician information on the internet. J Gen Intern Med. 2010;25(11):1152–1156.
6. Kim C, Gupta R, Shah A, Madill E, Prabhu AV, Agarwal N. Digital footprint of neurological surgeons. World Neurosurg. 2018;113:e172–e178.
7. Lagu T, Hannon NS, Rothberg MB, Lindenauer PK. Patients’ evaluations of health care providers in the era of social networking: an analysis of physician-rating websites. J Gen Intern Med. 2010;25(9):942–946.
8. Kennedy GD, Tevis SE, Kent KC. Is there a relationship between patient satisfaction and favorable outcomes? Ann Surg. 2014;260(4):592–598.
9. Emmert M, Meier F, Pisch F, Sander U. Physician choice making and characteristics associated with using physician-rating websites: cross-sectional study. J Med Internet Res. 2013;15(8):e187.
10. Calixto NE, Chiao W, Durr ML, Jiang N. Factors impacting online ratings for otolaryngologists. Ann Otol Rhinol Laryngol. 2018;127(8):521–526.
11. Bakhsh W, Mesfin A. Online ratings of orthopedic surgeons: analysis of 2185 reviews. Am J Orthop. 2014;43(8):359–363.
12. Emmert M, Meier F, Heider A-K, Dürr C, Sander U. What do patients say about their physicians? An analysis of 3000 narrative comments posted on a German physician rating website. Health Policy. 2014;118(1):66–73.
13. Nwachukwu BU, Adjei J, Trehan SK, et al. Rating a sports medicine surgeon’s “quality” in the modern era: an analysis of popular physician online rating websites. HSS J. 2016;12(3):272–277.
14. Goshtasbi K, Lehrich BM, Moshtaghi O, et al. Patients’ online perception and ratings of neurotologists. Otol Neurotol. 2019;40(1):139–143.
15. Sobin L, Goyal P. Trends of online ratings of otolaryngologists: what do your patients really think of you? JAMA Otolaryngol Head Neck Surg. 2014;140(7):635–638.
16. Morse E, Fujiwara RJ, Mehra S. Increasing industry involvement in otolaryngology: insights from 3 years of the open payments database. Otolaryngol Head Neck Surg. 2018;159(3):501–507.
17. Yheulon CG, Balla FM, Ernat JJ, Lin E, Davis SS Jr. Academic inertia: examining changes of scholarly output over time among academic minimally invasive surgeons. Am J Surg. 2019;218(5):813–817.
18. Eloy JA, Kilic S, Yoo NG, et al. Is industry funding associated with greater scholarly impact among academic neurosurgeons? World Neurosurg. 2017;103:517–525.
19. Svider PF, Bobian M, Lin HS, et al. Are industry financial ties associated with greater scholarly impact among academic otolaryngologists? Laryngoscope. 2017;127(1):87–94.
20. Gao GG, McCullough JS, Agarwal R, Jha AK. A changing landscape of physician quality reporting: analysis of patients’ online ratings of their physicians over a 5-year period. J Med Internet Res. 2012;14(1):e38.
21. Okike K, Peter-Bibb TK, Xie KC, Okike ON. Association between physician online rating and quality of care. J Med Internet Res. 2016;18(12):e324.
22. Cloney M, Hopkins B, Shlobin N, Dahdaleh NS. Online ratings of neurosurgeons: an examination of web data and its implications. Neurosurgery. 2018;83(6):1143–1152.
23. Donnally CJ 3rd, Roth ES, Li DJ, et al. Analysis of internet review site comments for spine surgeons: how office staff, physician likeability, and patient outcome are associated with online evaluations. Spine (Phila Pa 1976). 2018;43(24):1725–1730.
24. Huntington B, Kuhn N. Communication gaffes: a root cause of malpractice claims. Proc (Bayl Univ Med Cent). 2003;16(2):157–161.
25. Mastroianni AC, Mello MM, Sommer S, Hardy M, Gallagher TH. The flaws in state ‘apology’ and ‘disclosure’ laws dilute their intended impact on malpractice suits. Health Aff (Millwood). 2010;29(9):1611–1619.
26. Lydiatt DD. Cancer of the oral cavity and medical malpractice. Laryngoscope. 2002;112(5):816–819.
27. Anderson RT, Camacho FT, Balkrishnan R. Willing to wait?: the influence of patient wait time on satisfaction with primary care. BMC Health Serv Res. 2007;7(1):31.
28. Eloy JA, Svider PF, Bobian M, et al. Industry relationships are associated with performing a greater number of sinus balloon dilation procedures. Int Forum Allergy Rhinol. 2017;7:878–883.
29. Morse E, Hanna J, Mehra S. The association between industry payments and brand-name prescriptions in otolaryngologists. Otolaryngol Head Neck Surg. 2019;161(4):605–612.
30. Zaorsky NG, Ahmed AA, Zhu J, et al. Industry funding is correlated with publication productivity of US academic radiation oncologists. J Am Coll Radiol. 2019;16(2):244–251.
31. Inoue K, Blumenthal DM, Elashoff D, Tsugawa Y. Association between physician characteristics and payments from industry in 2015–2017: observational study. BMJ Open. 2019;9(9):e031010.
32. Greaves F, Pape UJ, King D, et al. Associations between web-based patient ratings and objective measures of hospital quality. Arch Intern Med. 2012;172(5):435–436.
33. Reddy V, Gupta A, White MD, et al. Assessment of the NIH-supported relative citation ratio as a measure of research productivity among 1687 academic neurological surgeons. J Neurosurg. 2020;1(aop):1–8.
34. Do TH, Miller C, Low WC, Haines SJ. A proof of concept for applying the Radicchi index (hf) to compare academic productivity and scientific impact among medical specialties. Neurosurgery. 2020;86(4):593–603.
35. Da Silva JAT, Dobránszki J. Multiple versions of the h-index: cautionary use for formal academic purposes. Scientometrics. 2018;115(2):1107–1113.


