American Journal of Pharmaceutical Education
Editorial. 2012 Aug 10;76(6):103. doi: 10.5688/ajpe766103

In Pursuit of Prestige: the Folly of the US News and World Report Survey

Frank J Ascione
PMCID: PMC3425918  PMID: 22919079

“Oh my goodness!” That is a sanitized version of my actual response when I realized in late December 2011 that another US News and World Report (USNWR) survey had arrived and that I had missed my opportunity to “vote.” Somehow, my questionnaire got lost among the large volume of mail that flows through my office. Fortunately, after some frantic and determined communication with USNWR, I was directed to the group running the survey (it is subcontracted to another organization) and was able to submit my opinion.

When I glanced at the ballot, I wondered how administrative voters at the 124 other colleges and schools of pharmacy listed would have informed opinions about the University of Michigan College of Pharmacy. They may know that a required research experience for all our doctor of pharmacy (PharmD) students is a key and highly valued feature of our curriculum because it was described at a 2011 American Association of Colleges of Pharmacy meeting and a few years ago in an American Journal of Pharmaceutical Education editorial.1 They may know about the positive impact our faculty members and alumni have had on the profession of pharmacy, pharmaceutical science, and society since 1868. But would they know, for example, that we have instituted a new PharmD curriculum in the past 2 years that extensively uses active-learning techniques? Would they know the extent to which our academic medical center/health system and college are operationally, educationally, and philosophically integrated? I concluded that only a few of the USNWR survey respondents would know about these defining attributes and most of these individuals would be our graduates. I also concluded that, despite being in academia for 35 years, I knew comparatively little about many of the colleges and schools on the list, especially the newer ones.

I was not always this invested in the periodic USNWR survey. In fact, when I received my first questionnaire as a dean in fall 2007, I casually completed and returned it. The results were published in 2008 and we were ranked fifth. That we were perceived positively by our fellow educators was gratifying, but because of the survey’s limitations (eg, it measured only “reputation,” focused exclusively on our PharmD program, and based its conclusions on a relatively low response rate of 56%), I chose to ignore the results. I assumed everyone else, including our key constituents (university administrators, alumni, and current and prospective students), would either be unaware of the rankings or ignore the findings for the same reasons I had.

I was wrong. Soon after the results were published, I started receiving comments from our PharmD students. They were upset that we had dropped from previous rankings (I think from third to fifth) and that we had tied with (rather than outranked) Ohio State University (spillover from the 2 universities’ storied rivalry in football and other sports). The students wanted to know what I was going to do to improve our standing next time (a tough question). Similar observations also arrived from our alumni. I was troubled but understood that both constituent groups are prone to ascribe disproportionate value to these ranking systems because they are not familiar with our internal assessment processes. Then, to my chagrin, our university administrators started commenting on the rankings. Fortunately, their commentary was positive for us, as our college was one among many highly ranked units on campus, and our alleged superiority provided fodder for playful one-upmanship with my University of Michigan dean colleagues, enlivening otherwise serious meeting topics. However, I also recognized that our administrators were using these rankings to argue for more resources as a means to protect or improve their/our strong reputation, an administrative practice common both nationally and globally.2

The collective interest in these ranking systems was troubling and I was faced with a dilemma: how to acknowledge our positive reputation among our fellow educators while guarding against repercussions from the unpredictable shifts that inevitably occur in reputational surveys. Thus, in the public forum, I acknowledged the results with nonchalance: “While we recognize the limitations of such reputational surveys, we are pleased to be valued so highly by our colleagues.” I also tried to determine if there was a way to defend our image from the caprices of future USNWR surveys. I searched for research that would provide insight, but my findings were not much help. In fact, they seemed to confirm that reputational surveys, such as the one generated by USNWR, are highly variable because of the vague manner in which standards of quality are measured and the erratic methodologies used.2-4 Discouraged, I considered other options (albeit facetiously): bribe my colleagues (unethical); curry favor with every other college of pharmacy administrator I meet (difficult to do and inconsistent with my brusque personality); or populate every college administrative structure with our alumni (impossible in such a short time frame).

I felt increasingly apprehensive about the next survey, secretly hoping that we would not experience a drop in ranking for no logical reason. Hence, my aforementioned reaction when I discovered that I had missed the opportunity to fill out the most recent questionnaire.

As some may know, the University of Michigan did pretty well in the 2012 rankings, thanks to the survey ratings given to us by those who value our contributions to pharmacy education. Nevertheless, our University of Michigan constituency is not likely to be pleased. Despite receiving exactly the same raw score as in the previous survey, we dropped 2 slots to seventh. Unlike last time, I have been proactive in commenting about the survey. I note our success but remind everyone that this was a reputational survey that validates, in part, our more important measures of achievement: student quality, faculty scholarly activity, and alumni success.

Nevertheless, I believe pharmacy deans need to address the USNWR survey in a unified manner. All deans want to be held accountable for the outcomes they can influence. The USNWR ranking system does not fit into this category. How can we avoid being assessed by a commercially driven enterprise more concerned with selling magazines than with measuring achievement by objective quality criteria? I suggest some possible ameliorative actions that college and school of pharmacy deans should consider:

(1) Ignore the rankings and do not participate. This type of boycott has been suggested before.5 US schools of dentistry have successfully refused to participate in the USNWR survey for many years, although other rankings of those schools exist.6,7

(2) Work with USNWR to improve its survey by including quantitative measures. For example, according to the USNWR site, the medical school survey is based on several common indicators, such as student selectivity/admission statistics (Medical College Admission Test scores, grade point average, and acceptance rate); faculty-to-student ratio; National Institutes of Health funding; and the proportion of graduates entering primary-care specialties. “Reputation” comprises 20% or less of the medical school model versus 100% of USNWR’s pharmacy school survey.8 While these quantitative measures were criticized by medical school deans as not accurately reflecting the educational quality of their schools,9 they are better than the exclusive focus on “reputation” in the pharmacy survey.

(3) Try to expand the number of groups conducting the surveys. This approach occurs in the global assessment of universities, in which several diverse rating groups use different methodologies to rank or rate institutional quality.2,4,10,11

(4) Develop our own message to the public and disseminate it consistently and broadly. Other countries have used this approach to rate their institutions of higher education and to allocate government resources.11

My view is that colleges and schools of pharmacy cannot and should not avoid being rated by informed, third-party agencies. USNWR has stated that its rankings “spotlight the country's academically excellent graduate programs and can start you [the student] on the track toward picking the right school for you.” Clearly, the public agrees with this noble purpose and appears willing to pay for it. (One can get complete online access to all USNWR data for an annual fee of $34.95.)12 Although USNWR cautions that “rankings should not be used as the sole criteria in deciding where to go to graduate school,”12 research indicates that many prospective students, for better or worse, are influenced by surveys from USNWR and similar organizations when choosing which school to attend.13 On balance, USNWR rankings are useful to the public as well as to university administrators because they encourage institutional transparency and accountability. As such, they also provide an incentive to improve institutional performance.10

Colleges and schools of pharmacy should work together on recommendations 2 through 4. A great deal of statistical data about our colleges and schools has been collected by many sources within and outside of pharmacy (eg, American Association of Colleges of Pharmacy, Accreditation Council for Pharmacy Education, National Association of Boards of Pharmacy, National Institutes of Health, National Research Council), in addition to our own internal data. These data are used constantly by colleges and schools of pharmacy (and others) to assess their success and quality. However, we have been reluctant to share these data with “outsiders.” We need to overcome this reluctance and work with credible individuals or groups (eg, prospective students, public and university administrators, and alumni/ae) to identify a key set of indicators that can be used to measure a college or school’s processes and impact in a more systematic way, an approach already prevalent globally for assessing overall university quality.2,4,11

Colleges and schools of pharmacy should be judged on student factors such as who is admitted, their satisfaction with the program, and their ability to select the career path of their choice upon graduation. Colleges and schools of pharmacy also should be assessed on the quality of faculty members, specifically the scholarly activities they perform and the recognition (eg, awards, prestigious academic leadership positions) they receive. The accomplishments of alumni should also be considered. Ultimately, however, the success of a college or school should be assessed based on fulfillment of its mission, which in our case is to strive for “excellence in education, service and research, all directed toward enhancing the health and quality of life of the people of the State of Michigan, the nation and the international community.”14

As dean, I can influence these measures at our college and institute action plans to achieve success. However, I cannot do much about the current USNWR ranking system, except treat the results in an ambiguous manner. Thus, until we create a better system of publicly evaluating and promoting our respective institutions, we will all be left wondering where our pharmacy programs will be ranked next time. In the meantime, I will continue to reiterate my standard response: “While we recognize the limitations of such reputational surveys, we are pleased…”

REFERENCES

