Published in final edited form as: J Econ Perspect. 2009 FALL;23(4):119–146. doi: 10.1257/jep.23.4.119

Playing the Admissions Game: Student Reactions to Increasing College Competition

John Bound, Brad Hershbein, and Bridget Terry Long

During the last several decades, it has become increasingly difficult to gain entry into an American four-year college or university. Growing numbers of students compete for admission to such schools: the number of college applicants has doubled since the early 1970s, while school sizes have changed little. This increase is due both to the increasing fraction of high school graduates applying for college and more recently to the increase in the size of the college-aged cohorts. Using data from the Digest of Education Statistics (Snyder, Dillow, and Hoffman, 2009) and various National Center for Education Statistics (NCES) surveys, we summarize these trends in Table 1. The table shows that while the application rate to four-year colleges has steadily increased over the last several decades, the decline in cohort size between 1982 and 1992 left the number of applicants practically unchanged between the two years. From 1992 to 2004, on the other hand, the number of applicants to four-year colleges grew from 1.19 million to 1.71 million students, an increase of 44 percent, as rising application rates and growing cohort size reinforced each other. The pattern was slightly different for selective private and public colleges, which saw the number of applicants rise by 10 to 15 percent over the 1980s despite declining cohort size. While the application rate to selective privates dipped slightly between 1992 and 2004, the number of applicants still grew by 30,000, or 18 percent, due to growing cohort size.

Table 1.

Supply and Demand Trends in College-Going (thousands)

Year 1972 1982 1992 2004
18 year-olds 3,945 4,122 3,347 4,124
High school graduates 3,002 2,995 2,478 3,064
 …who apply to four-year colleges 949 1,184 1,187 1,705
 …who apply to selective four-year private colleges 129 149 172 203
 …who apply to selective four-year public colleges 254 271 304 419

Sources: The number of 18 year-olds is from the National Cancer Institute, and the number of high school graduates is from the 2008 Digest of Education Statistics (Table 104). These latter numbers were multiplied by the percentages of high school graduates who met each benchmark according to the authors’ calculations using data from the NCES longitudinal surveys.

Notes: Data availability limits application behavior to the top two school choices of respondents; while this measure is consistent across years, it does not capture a complete profile of application behavior.

In the face of growing demand, the supply of admission slots at four-year colleges did not keep pace. According to our calculations using data from the Annual Survey of Colleges, a near-census of four-year postsecondary institutions in the United States conducted by the College Board, the top 20 private universities and top 20 liberal arts colleges saw only a 0.7 percent change in average undergraduate enrollment from 1986 to 2003. Those ranked 21 to 50 also experienced relatively little growth (4.9 percent and 6.8 percent at private universities and liberal arts colleges, respectively). In contrast, other private four-year institutions grew nearly 16 percent during the period. Public institutions showed more expansion during this period, with enrollments increasing 15.2 percent at the top 20 public universities, 10.5 percent at public universities ranked 21 to 47, and 12.8 percent at other public institutions. This increase in enrollment at the most selective public institutions appears largely driven by transfer students, many of them assumed to come from public two-year colleges. However, when focusing on the sizes of incoming freshman classes, the change in enrollment at public institutions has been much smaller. Because fewer than 500,000 slots were added in total at four-year schools from 1992 to 2004, supply did not keep pace with demand, and college selectivity increased. High school seniors today are subject to more competition than at any time in the recent past.

The increased overall demand for a college education, presumably, can largely be explained by the dramatic increases in the value of such an education since the 1970s (Heckman, Lochner, and Todd, 2006; Goldin and Katz, 2008). The increased demand for admission to selective schools in particular is plausibly related to the fact that the particular institution a student attends has become increasingly important. Since 1970, the income distribution has widened among college-educated workers, and Hoxby and Long (1999) find that nearly half of the explained growth in this dispersion is due to the increasing concentration of peer and financial resources at more selective colleges and universities relative to other institutions. Other work has also documented this increasing segmentation within higher education (Hoxby, 1997, this issue; Bound, Lovenheim, and Turner, 2008). The spread of information through the advent of the U.S. News and World Report rankings and other ranking systems has also given students, their families, and society more data with which to evaluate college quality. As emphasized by Hoxby in this issue, the college market has shifted from regional in focus to national. Also, as more workers are college educated, employers may view the average college-educated worker as less productive than in the past. Under this type of signaling framework, a degree from an elite college becomes more valuable. All of these factors likely play a role in increasing the number of high school graduates who consider elite colleges.

This paper begins by documenting the trends of increasing competition in higher education, including how these increases have varied across groups, from the perspective of both institutions and students. It then explores the ways in which this phenomenon has influenced student behavior, in terms of academic preparation and high school activities, standardized test-taking, and college application behavior. Evidence from multiple sources suggests that a significant fraction of students are increasingly searching for ways to maximize their likelihood of admittance into a selective institution. As theory would predict, students have been driven to invest more in signals of ability and to raise their qualifications with the hope of increasing their chances of gaining entry into a selective institution. Competition has also driven students to alter their approaches to the college application process. The extent of student reactions has differed along the ability distribution and by region, as the returns to such investments and changes in application approaches also vary by student. Finally, the paper explores whether such student reactions to growing competition have translated into longer-term effects on the amount that students learn. From a theoretical point of view, the increased competition could have induced high school students to work harder and learn more or, alternatively, could have led to the reverse by prompting investments in nonproductive signals. Credible evidence on the net effect of increased competition is, needless to say, difficult to find. However, comparisons across regions of the country where competition is more versus less severe provide little evidence that increased competition has had positive effects on what students learn and even provide some suggestive evidence that the reverse might be true.

Increasing College Selectivity

A natural measure of the selectivity of a college would be the odds that a student with a given set of characteristics would gain admittance to the college in question. Selectivity at the institution rises if the odds of gaining admittance to the institution fall for students who might otherwise have been admitted. Two common measures of selectivity used—the fraction of those applying to an institution who gain admittance and the characteristics of students admitted to (or attending) an institution—are not ideal indicators of selectivity in this sense. If what drives down acceptance rates is the fact that more marginal students are now applying to certain colleges, then declining acceptance rates do not imply increasing selectivity (as defined above). At the same time, if more and more students are applying to a college from all parts of the ability distribution, it is entirely possible that measures of preparedness among attending students could decline while the institution is in fact becoming increasingly selective. As we discuss further below, student-level micro data suggest that we are in this situation.
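One way to state this ideal construct in simple notation (the symbols below are introduced purely for exposition and are not a formal model used in the analysis): let

\[
p_j(x, t) = \Pr(\text{admitted to institution } j \mid \text{student characteristics } x, \text{ application year } t).
\]

Institution j becomes more selective between years t0 and t1 if p_j(x, t1) < p_j(x, t0) for students x who would previously have stood a good chance of admission. Both conventional measures—the acceptance rate and the profile of admitted or attending students—confound changes in p_j with changes in the composition of the applicant pool.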

While the data typically used to examine trends in selectivity have the limitations noted above, they are readily available. What is more, survey data that allow one to get closer to the ideal construct are only available for relatively small samples of high school graduates and, as such, do not allow for detailed examinations of trends in selectivity by institution type. Therefore, this paper uses the more limited, conventional definitions but also attempts to approximate a more accurate measure of selectivity.

Institutional-Level Indicators

Table 2 displays how the percentage of students accepted has changed from 1986 to 2003 using data from the Annual Survey of Colleges (ASC), which contains detailed information on institutional classification, enrollment, applications, student body profiles, and expenditures. The results are broken down by sector and by the ranking of the institution.1

Table 2.

Percentage Accepted by Sector and Institution Ranking

Columns (left to right): Private institutions — “Top 20 private,” top 21–50 universities, top 21–48 liberal arts, other private 4-year; Public institutions — top 20 universities, top 21–47 universities, other public 4-year.

1986 38.58 62.46 59.75 78.13 63.15 75.59 73.75
1991 38.39 67.39 57.95 76.82 56.78 73.47 68.49
1996 37.55 62.41 61.85 78.73 58.98 76.53 72.96
2001 31.49 51.30 52.68 77.41 50.55 71.92 71.85
2002 30.72 49.51 51.26 75.58 48.81 71.23 71.07
2003 29.85 52.35 47.88 74.35 47.72 70.56 69.19
Percentage change −22.63% −16.19% −19.87% −4.84% −24.43% −6.65% −6.18%
Number of schools 38 26 28 419 17 21 208

Source: Annual Survey of Colleges, College Board, 1986–87 to 2003–04.

Notes: The “Top 20 private” category includes the top 20 private universities and top 20 liberal arts colleges. To be included in the sample, institutions must have had at least 16 of the 18 possible years of data.

Three patterns are clear from Table 2. First, all categories—indeed, all individual schools in the sample—reduced the percentage of applicants they accepted during the period. Second, the most dramatic changes occurred among the more highly-ranked institutions; for example, the top 20 private and top 20 public universities both saw the percentage of applicants accepted fall by about a quarter over this time. Third, in terms of timing, the most dramatic reductions in the percentage accepted happened late in this time period at about the same time cohort size began to grow.

Trends in college selectivity also differed by region. In 1986, four-year institutions in New England and the Middle Atlantic States accepted the smallest percentage of applicants in comparison to other regions (67.5 and 69.3 percent, respectively), and by 2003, they remained the regions with the lowest acceptance rates. To some extent, this reflects differences in the mix of colleges across regions, with many of the most selective schools located in the Northeast. But the declines in acceptance rates within region and public/private status are still notable. During the period, the average percentage accepted by private four-year institutions in New England fell from 64.8 to 59 percent, while in the Middle Atlantic States, the rate at public four-year institutions slipped from 64.3 to 55.7 percent. The West also became much more competitive during the last few decades, with only 68.9 percent of applicants accepted by institutions within the region in 2003, compared with 72.7 percent in 1986.

These trends highlight that competitiveness in higher education has become particularly heightened in the northeastern United States and in California. The South, Midwest, and Southwest also experienced reductions in the percentage of students accepted by four-year colleges and universities during this time, but even in 2003, more than seven out of ten students were accepted on average at institutions within their borders. Since most college-going students still attend a college in their home region, these numbers suggest that the difficulty of getting into college for a typical applicant varies geographically.

The ASC also reports information on the distribution of college entrance exam test scores for entering classes. These data show quite clearly that incoming students have higher scores on college entrance exams than they used to. Figure 1 displays trends in the 75th percentile math SAT score of schools’ student bodies; the results are broken down by institutional sector and ranking. Again, all schools in the sample experienced increases in student body test scores at the 75th percentile, but the largest changes occurred among the top-ranked schools. The growth from 1986 to 2003 was particularly large among public universities in the top 20 (52 points on the SAT, or an 8.1 percent change). Among the private institutions, the top 20 universities and liberal arts colleges experienced an average increase of 41 points, to 749, a score much higher than that found at other schools.2 The trends for math SAT scores are similar to those for verbal SAT scores and for the ACT.

Figure 1. Math SAT 75th Percentile by Sector and Institution Ranking.


Source: Annual Survey of Colleges, College Board, 1986–87 to 2003–04.

Notes: The figure displays trends in the 75th percentile math SAT score of schools’ student bodies; the results are broken down by institutional sector and ranking. The “Top 20 privates” category includes the top 20 private universities and top 20 liberal arts colleges. To be included in the sample, institutions must have had at least 16 of the 18 possible years of data.

Student-Level Indicators of College Acceptance

A caveat about the institutional-level acceptance rates above is that the composition of applicants as well as the number of applications per student may be changing, and so a lower share of acceptances may only be partially revealing about increased selectivity. Thus, looking at the student perspective is also important, and another measure of selectivity is whether students are able to attend their first-choice college. According to data from surveys of college freshmen done by the Cooperative Institutional Research Program (CIRP) at the University of California, Los Angeles’s Higher Education Research Institute (HERI), about 22 percent of students reported that they were not able to attend their first choice for college in 1974, while 33 percent reported in 2006 that they were not able to do so. This result is not surprising given, as pointed out earlier, that the availability of slots has remained relatively fixed or increased very slowly, particularly at the most highly-ranked schools.

More detail is available by analyzing data from the National Center for Education Statistics (NCES), which allow us to track application behavior and acceptances over time by student background. The results are weighted to be nationally representative. Table 3 displays the percentage of students who applied to a four-year institution by cohort and sector based on their top-two college choices.3 Reading across the cohorts, it is clear that the percentage of students applying to a four-year institution has increased over time, from 38 percent in 1982 to 53 percent in 2004. The percentage applying to selective public institutions has also grown to 12.8 percent for the high school class of 2004. A smaller share of students apply to selective private four-year schools (6.2 percent in 2004), and recently there has been a small decline in this percentage (from 6.7 percent in 1992). It is worth noting that the fall in institutional-level acceptance rates seen in Table 2 would have been even greater had the application rate at selective privates held steady or even risen.

Table 3.

Percentage Who Applied to Four-Year Institutions, by Cohort and Sector

Columns (left to right): percentage who applied to a 4-year institution (high school cohorts 1982, 1992, 2004); percentage who applied to a private selective 4-year institution (1982, 1992, 2004); percentage who applied to a public selective 4-year institution (1982, 1992, 2004).
U.S. average 38.2 46.5 53.2 4.8 6.7 6.2 8.8 11.9 12.8
Test quintile
 1st 12.3 18.2 23.8 0.1 0.7 0.7 0.7 1.7 2.5
 2nd 19.5 32.9 37.8 0.8 1.7 1.8 2.2 5.3 4.5
 3rd 31.0 47.1 53.9 1.7 3.2 2.7 5.3 8.8 9.5
 4th 51.9 64.3 67.6 4.1 7.2 5.8 12.6 16.0 18.1
 5th 77.8 81.7 84.8 17.0 22.8 20.4 23.8 29.3 30.1
Region
 New England 46.7 59.5 62.9 13.9 19.9 14.4 9.0 12.5 12.3
 Middle Atlantic 40.5 55.9 59.9 9.5 14.9 10.4 7.1 12.9 12.0
 South 35.4 46.0 55.2 2.1 4.3 5.4 7.5 10.6 12.7
 Midwest 40.2 48.7 55.9 1.9 3.2 4.2 11.2 13.9 16.7
 Southwest 37.4 39.1 49.2 1.6 3.1 2.4 6.8 8.7 7.6
 West 31.6 36.7 42.5 5.1 5.8 6.0 9.2 11.3 11.1

Sources: National Center for Education Statistics, National Longitudinal Study of the High School Class of 1972 (NLS72), High School and Beyond (HSB82), National Educational Longitudinal Survey (NELS92), and Educational Longitudinal Survey (ELS04). The cohort year refers to the year on-time students would have graduated high school.

Notes: Data are representative of high school seniors for the cohorts indicated. Application behavior is based on the top two school choices of respondents. Geography is according to the high school of the student. The test quintile comes from a survey-specific cognitive test battery given to the respondents of each survey during the spring of their senior year; by construction, it is normalized by cohort. (The test batteries are similar but not identical across surveys.) See the online Data Appendix available at ⟨http://e-jep.org⟩ for the definitions of selective schools and the regional breakdowns.

Not surprisingly, higher-ability students were more likely to apply to selective institutions, with 20.4 and 30.1 percent of the fifth, or top, quintile of the 2004 graduating class applying to a selective private or public institution, respectively. However, Table 3 also emphasizes increasing proportions of students at all ability levels applying to four-year institutions, including selective schools. The propensity to apply to a four-year institution, particularly a selective one, also differed by region. Students from New England were by far the most likely to apply to a selective private school, although this proportion dropped from 19.9 percent in 1992 to 14.4 percent in 2004. Students from the Middle Atlantic States are the second most likely, and this region shows a drop-off similar to New England’s. At the selective public four-year institutions, students from the Midwest are the most active, followed closely by students in the South and New England.

These changes in application rates over time and by ability and region make analysis of acceptance rates difficult. Ideally, we would like to observe how the same student who applied to college in 1972 or 1982 would fare if that same student had applied in 1992 or 2004, instead. Because this is not possible, we instead construct a counterfactual acceptance rate that controls for the changes in applicants by ability and region that we observe in Table 3. These acceptance rates, shown in Table 4, are fitted probabilities from logistic regressions that use the 1972 high school graduating class as the baseline but allow coefficients to be survey-wave specific. Each number represents the mean conditional probability that a student from 1972 in a given cell would have been admitted to a given college type during the respective survey year. Generally speaking, the likelihood of a student with characteristics from 1972 being accepted by a four-year college has declined. While acceptance rates increased slightly from 1972 to 1982, this trend was reversed thereafter. Over the entire 32-year period, the likelihood fell nearly 9 percent.4 The sharpest reductions occurred for low-ability students. Those among the first and second (lowest) quintiles saw the likelihood of being accepted by a four-year institution fall by 42.5 and 23.3 percent, respectively.
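As a concrete illustration of this counterfactual exercise, the sketch below fits a separate logistic regression of acceptance on ability and region for each survey wave and then averages the predicted probabilities over the fixed 1972 applicant sample. The data frame and column names are assumptions for illustration only; this is not the authors' estimation code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per four-year-college applicant with columns:
# accepted (0/1), test_decile, region, and wave (1972, 1982, 1992, or 2004).
def counterfactual_acceptance(df: pd.DataFrame) -> pd.Series:
    base = df[df["wave"] == 1972]  # fixed "student mix": the 1972 applicants
    rates = {}
    for wave, grp in df.groupby("wave"):
        # Wave-specific logit of acceptance on test-decile and region dummies
        fit = smf.logit("accepted ~ C(test_decile) + C(region)", data=grp).fit(disp=0)
        # Mean predicted probability that the 1972 applicants would have been
        # accepted under this wave's admission standards
        rates[wave] = fit.predict(base).mean()
    return pd.Series(rates).sort_index()
```

Cell-specific entries (by test quintile or region) follow by restricting the 1972 sample before averaging, and repeating the exercise separately for applicants to selective private and selective public schools yields the remaining columns of Table 4.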

Table 4.

Counterfactual Rate of College Acceptance, Conditional on Applying (controlling for student ability and region)

Columns (left to right): acceptance rate among applicants to a 4-year institution (cohorts 1972, 1982, 1992, 2004); among applicants to a private selective 4-year institution (1972, 1982, 1992, 2004); among applicants to a public selective 4-year institution (1972, 1982, 1992, 2004).
U.S. average 94.2 97.6 90.7 85.9 82.5 78.5 71.3 63.9 88.4 87.9 84.9 78.9
Test quintile
 1st 86.4 95.9 69.4 49.7
 2nd 88.9 96.6 80.9 68.2 71.2 71.3 68.5 50.8
 3rd 91.3 97.7 89.4 83.2 76.1 69.2 59.6 39.7 81.7 85.6 78.1 73.1
 4th 93.9 98.3 93.2 89.5 73.2 87.9 67.6 59.8 88.2 86.3 78.8 73.3
 5th 97.9 97.6 94.8 94.0 86.1 78.9 74.7 71.5 92.0 91.7 92.6 87.6
Region
 New England 94.7 97.6 91.8 87.8 83.0 78.8 72.6 65.4 88.9 87.5 85.6 80.0
 Middle Atlantic 94.8 97.7 91.7 87.7 82.3 78.6 70.7 64.1 88.7 88.9 85.9 80.2
 South 93.1 97.4 88.7 82.2 80.9 76.9 69.2 59.2 87.6 87.3 83.8 77.6
 Midwest 94.1 97.5 90.8 86.2 84.7 80.3 73.2 69.2 88.0 87.6 84.4 78.1
 Southwest 93.4 97.5 89.1 83.2 81.2 76.3 68.4 59.7 89.9 88.5 84.5 79.3
 West 94.6 97.6 91.2 86.8 83.1 79.3 73.2 63.7 89.1 88.1 85.6 79.8

Source: National Center for Education Statistics, various longitudinal surveys described in the text.

Notes: We construct a counterfactual acceptance rate that controls for the changes in applicants by ability and region. These acceptance rates shown are fitted probabilities from logistic regressions that use the 1972 high school graduating class as the baseline (data from the National Longitudinal Study of the High School Class of 1972) but allow coefficients to be survey-wave specific. Each number represents the mean conditional probability that a student from 1972 in a given cell would have been admitted to a given college type during the respective survey year. The covariates used for the regression include only test decile dummies and regional dummies; a version based on a more thorough set of covariates is available in the online appendix at ⟨http://e-jep.org⟩. Also see notes under Table 3.

Taking a broad perspective across the entire horizon of 1972 to 2004, the counterfactual conditional acceptance rate at selective private schools fell by 22.5 percent, more than twice the decline at selective public institutions (10.7 percent) or at the typical four-year school (8.8 percent). Among students in the highest test quintile, the reduction in the likelihood of being accepted was relatively small at selective publics and the average four-year school (4.8 and 4.0 percent, respectively) but not at the selective private schools (17.0 percent). Other studies also emphasize this point. For example, McDuff (2007) finds evidence that someone with a combined SAT score of 1500 would have less than a 50 percent chance of getting into a very selective college. Students of median ability in the third quintile also experienced a substantial decline in the likelihood of being accepted by a private selective institution (a 47.8 percent reduction). The pattern of declining acceptances at selective private institutions also holds looking across regions. While most of the regions experience a decline of roughly similar magnitude (except for the Midwest), it is worth noting that the greater share of students applying to selective private schools from New England and the Middle Atlantic states (Table 3) implies a greater number of students would be rejected from these regions under the counterfactual exercise.5
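These percentage declines can be read directly off the Table 4 entries; for the third quintile at selective private institutions, for example, the counterfactual acceptance rate falls from 76.1 to 39.7, and (76.1 − 39.7)/76.1 ≈ 0.478, the 47.8 percent reduction cited above.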

In light of the steep reductions in acceptance rates shown in Table 4, particularly among selective private schools, and the findings of Hoxby (1997) regarding an increasingly national (and international) college market, we used data from the Integrated Postsecondary Education Data System (IPEDS) to examine the fraction of first-year students coming from a different region of the country than the school they attend, as well as the fraction coming from a different country entirely. (The IPEDS is an annual census of postsecondary institutions in the United States collected by the National Center for Education Statistics.) For the period between 1992 and 2004 among selective private schools, the share of first-year students from a different region than their school rose only slightly, from 47.9 percent to 49.0 percent. The fraction of international students increased even less, from 4.2 percent to 4.5 percent. These increases are too small for interregional and foreign students to be playing a significant role in rising college selectivity over this period.6

College applicants (and their parents) from New England and the Middle Atlantic States likely bear the brunt of this increasingly competitive environment. Much of this has to do with the distribution of schools across the country, with far more selective private institutions than selective public institutions located in the Northeast. As a consequence, a typical talented student from the Northeast is more likely to apply to a selective private college or university than a typical talented student from, say, the Midwest. As we documented earlier, while the supply of slots has expanded only slightly at all selective schools, this growth has been greater at selective publics than selective privates. Additionally, a far greater share of students at selective private institutions come from out-of-state or out-of-region than at selective public institutions (49 percent versus 10.5 percent, respectively). All these facts suggest that high-ability students in the Northeast are competing for fewer slots than their peers elsewhere in the country. From Table 3 we know that although the share of students from the New England and Middle Atlantic States applying to college overall has risen, the share applying from these regions to selective schools has declined since 1992, which suggests that students from these regions are not finding the same level of access to selective schools as before. Thus, behaviors that result from the more competitive college admissions environment should tend to be more pronounced among families in the Northeast.

Student Responses to Growing College Competition

Stories abound concerning the increasing pressure that students face to take on activities that will impress university and college admission officers. For example, Williams (2006) reported in the New York Times: “Once, summer for teenagers meant a season of menial jobs and lazy days at the local pool. But for a small but growing number of college-bound students … summer has become a time of résumé-building academic work and all-consuming, often exotic projects to change the world… . There is a growing sense among college-bound seniors and their parents that downtime is wasted time, said Stacy Harvey, the college counselor at Santa Monica High School in California.”

In what follows, we evaluate the available quantitative evidence to examine whether this story is indicative of student responses to growing competition. As the return to attending and graduating from a more selective school increased while entry into such schools became more competitive, one would expect to see students invest more heavily in behaviors that would increase their chances of acceptance. Such investments could include better academic preparation, such as taking more challenging courses or being more involved in activities looked upon fondly by admissions committees. It could also include investing in signals of ability, such as focusing on improving college examination test scores. Changes in application behavior, such as the number of applications submitted or where test scores are sent, might also increase the likelihood of being accepted into a top school. Because the increase in selectivity has varied both across different ability levels of students and regions of the country, one might expect to see changes in student behavior also vary by student test score and geography. We note, however, that we cannot impart a causal interpretation to the change in student behavior, as we are unable to separate the effects of other, secular changes that are unrelated to growing competition.7 The results presented here should thus be viewed as suggestive.

Academic Preparation and High School Activities

Students have had increasing incentive to improve their academic preparation. Table 5 reports three indicators of college-preparatory high school behavior—taking calculus, taking an AP exam, and time spent on homework. Although not all indicators are available for each cohort, they are measured consistently when available. The tabulations represent the national average of all high school seniors at that time and are also shown separately by test battery quintile, geography, and the type of school to which the student applied (students may apply to more than one type of school).

Table 5.

Studying, Course-Taking, and AP Exam-taking Behavior (percentages)

Columns (left to right): took high school calculus (high school cohorts 1982, 1992, 2004); took an AP exam (1992, 2004); spent 10+ hours per week on homework (1982, 1992, 2004).
U.S. average 9.2 10.3 15.2 16.5 30.9 10.2 26.7 20.4
Test quintile
 1st 2.0 0.3 4.0 3.4 13.7 4.6 15.1 10.3
 2nd 1.8 1.2 3.8 6.2 14.6 5.3 20.0 15.4
 3rd 2.4 3.7 5.5 8.6 24.6 7.5 22.5 17.8
 4th 7.9 10.1 13.2 16.9 36.6 11.9 30.1 23.1
 5th 31.5 38.7 49.9 49.0 66.2 21.3 39.4 35.9
Application status
 4-year school 19.7 19.2 23.3 27.5 44.8 18.4 34.7 27.4
 Selective private 43.9 43.6 52.3 60.0 77.9 38.7 49.5 45.2
 Selective public 26.6 29.4 36.8 39.7 60.8 22.7 40.0 33.7
Region
 New England 15.4 15.8 19.3 19.0 31.6 16.8 35.8 23.7
 Middle Atlantic 13.8 13.8 18.2 20.6 31.7 12.1 25.8 18.6
 South 6.4 9.5 15.2 17.4 32.9 7.5 25.4 17.7
 Midwest 8.2 8.9 14.8 13.0 26.9 9.6 25.2 19.4
 Southwest 4.6 10.3 13.2 10.8 31.7 5.3 23.3 16.1
 West 8.0 9.1 13.3 19.5 32.6 12.4 30.8 26.7

Source: National Center for Education Statistics, longitudinal surveys (HSB82, NELS92, ELS04). The cohort year refers to the year on-time students would have graduated high school.

Notes: The universe is high school seniors in the year designated for each cohort, and all figures are weighted to match the population universe. The test quintile comes from a survey-specific cognitive test battery given to the respondents of each survey during the spring of their senior year; by construction, it is normalized by cohort. (The test batteries are similar but not identical across surveys.) See the online Data Appendix available at ⟨http://e-jep.org⟩ for the definitions of selective schools and the regional breakdowns. Application status refers to the types of colleges to which the respondent applied, and it is nonexclusive. Calculus and AP taking are based on students’ self reports in the survey. Homework time is also based on self reports with categorical answers; the categories can consistently be aggregated across survey cohorts to construct a 10+ hour per week measure.

Overall, high school students in 2004 engaged in significantly more behavior associated with college preparation than did their counterparts from 10 and 20 years before. The share taking at least a semester of calculus in high school rose from 9.2 percent to 15.2 percent between 1982 and 2004. In just the 12 years from 1992 to 2004, the fraction of seniors having taken at least one Advanced Placement (AP) exam nearly doubled, from 16.5 to 30.9 percent. Finally, while one in ten high school seniors spent ten or more hours on homework per week in 1982, this ratio had reached one in four by 1992. However, the share with at least 10 hours of homework did drop off from 1992 to 2004.

The recent fall in homework time is somewhat mystifying, as theory predicts that, at least to the extent that homework time is positively correlated with college acceptance, homework time should increase as competition intensifies.8 Yet the trend appears otherwise. CIRP data from The American Freshman annual survey show a similar pattern. In that dataset, the percentage of college freshmen who reported spending six or more hours per week on homework during their senior year of high school declined between the early 1990s and 2004. Furthermore, it appears that the drop in homework time was well underway by the late 1980s, more or less continuing to the present day. The sharp rise in the percentage spending 10 or more hours a week on homework between the 1982 and 1992 NCES cohorts thus likely masks an even more dramatic spike that occurred in the mid 1980s.9

As seen by the example of homework time, this overview of trends in academic preparation can hide subtleties in the timing of the changes, particularly with regard to certain groups of students. Separating the analysis by test quintile shows that the increases in college-preparatory behavior are widespread throughout the ability distribution. This pattern may reflect that much of the rise in anticipated college-going over the past 30 years stems from higher college application rates among those in the lower quintiles (as shown earlier in Table 3). Nonetheless, looking strictly at the changes between 1992 and 2004, when increasing competition was most evident, the top ability quintile shows consistently the most positive movement across each of the behavior measures. This finding is largely corroborated by the behavior of students applying to selective four-year private institutions, who overwhelmingly come from the top ability quintile. Between 1992 and 2004, the decline in homework time is smaller and the growth in calculus-taking is larger for students applying to selective private institutions than for those applying to baccalaureate institutions more generally.

Examining the time trends by region shows that New England and the Middle Atlantic States, and to a somewhat lesser extent the West (especially California), tend to be early leaders among the college preparatory measures, but that the remaining regions tend to exhibit faster growth, if not entire convergence, over time.

Other data provide further support for a trend toward increasing academic preparation. Data from the College Board give a more detailed account of the growth of the AP program. The program began at a few pilot secondary schools in the mid 1950s as a way for superior students to earn college credit while still in high school; by 2007, some 1.4 million students at over 13,000 high schools throughout the country took 2.5 million exams in over 30 subjects. On a per-capita basis, fewer than two out of one hundred 18 year-olds took an AP exam in 1977; 30 years later, this ratio had reached 34 out of 100. While the growth has been remarkably stable—since 1970, both the number of takers and the number of exams have increased at roughly 10 percent per year—it has not been even throughout the nation.
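As a rough consistency check, growing from about 2 per 100 eighteen year-olds in 1977 to 34 per 100 in 2007 implies compound growth of roughly (34/2)^(1/30) − 1 ≈ 10 percent per year, in line with the growth rate noted above.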

Administrative data from the College Board suggest that AP program participation was relatively rare from the 1970s into the early 1980s: among the early leaders—New England, the Middle Atlantic States, and the West (mostly California)—the participation rate was roughly 5 percent, while in the South, Southwest, and Midwest, it was scarcely half that. Over the next 25 years, while all regions exhibited rapid growth, the South and Southwest experienced a meteoric rise, particularly since 1998, allowing them to converge with the early leaders, and in the case of the Southwest, surpass them. As of 2007, only the AP participation rate in the Midwest, a region that has not shown particularly sharp increases in competitive pressure, noticeably lags the other regions. This pattern is also illustrated using the NCES data by the middle columns of Table 5.

Of course, the AP program has two main purposes: it allows students to earn college credit while still in high school (thus reducing the costs of college attendance), and it can serve as a signal of academic ability to prospective colleges. As college costs and competitive pressures have risen, both reasons are likely to have grown more compelling. Additionally, the rising importance of signaling through taking AP exams can be seen by looking at the correlation between the change in participation rates and the change in passing rates. (Each AP exam is scored on a 1 through 5 scale; scores of 3 or higher are considered “passing” and are the minimum that most colleges require for credit.) Using state-level data for the period 1996 through 2007, we find that this correlation is strong and negative; the point estimate from the regression implies that a 10 percent increase in the participation rate is associated with a 2.6 percentage point decline in the pass rate. This finding is consistent with much of the recent growth in participation coming from marginal students who would not have taken an AP exam in the past but have an increased desire to signal ability.
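A sketch of this kind of state-level calculation appears below, under illustrative column names; the underlying panel is the College Board administrative data described above and is not reproduced here, so this is an illustration rather than the authors' exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# states is assumed to have one row per state with AP figures for 1996 and 2007:
#   part_1996, part_2007: AP participation rate (exams per 18 year-old)
#   pass_1996, pass_2007: share of exams scored 3 or higher
def participation_vs_passing(states: pd.DataFrame):
    df = states.assign(
        d_part=lambda s: (s["part_2007"] - s["part_1996"]) / s["part_1996"] * 100,  # percent change
        d_pass=lambda s: (s["pass_2007"] - s["pass_1996"]) * 100,                    # percentage-point change
    )
    # OLS of the change in pass rates on the change in participation rates;
    # a negative slope is consistent with marginal students driving the growth.
    return smf.ols("d_pass ~ d_part", data=df).fit()
```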

Involvement in extracurricular activities may also affect chances for college admission. Data from CIRP’s Freshman Survey show that the percentage of college freshmen who regularly volunteered during their senior year of high school increased rapidly from about 45 percent in 1987 and 1988 up to about 70 percent by 2000, where it has roughly remained since. Similar increases are reported across institutions of different selectivity, although those at highly selective institutions consistently report volunteering at higher rates (77 percent in 2004) than do those at institutions of medium or low selectivity (72 percent and 68 percent, respectively, in 2004). However, other data suggest that participation in school clubs has decreased in recent years. The percentage of students reporting having spent at least six hours per week in a school club fell from 18 percent in the early 1990s to about 14 percent by the mid 2000s. Interestingly, the drop-off is most pronounced among students attending the most selective colleges. This result, like the homework results, deserves further investigation to better understand how time use among students has changed in recent years.

Taking this evidence together, we find mixed support for the hypothesis that high school students are undertaking rational behavioral responses to increased college selectivity. Over the entire period from the early 1980s through the early 2000s, high school seniors increased the time spent on homework and became more likely to take advanced classes like calculus and AP exams. Furthermore, the parts of the country that we identified in the preceding section as having experienced the earliest and most pronounced growth in college admission competition—primarily New England and the Middle Atlantic States—also exhibited significantly higher levels of homework time and calculus course-taking as early as 1982. The same pattern holds for students applying to selective private schools and, to a lesser extent, those at the top of the ability distribution. Within the last 15 years, when competitive pressures were growing the most quickly, AP exam taking and time spent volunteering rose. However, this growth was stronger in the parts of the country outside the Northeast, for students below the top ability quintile, and for those not applying to private selective schools—in other words, the segment of the student population that experienced lesser increases in college competition. Moreover, time spent on homework and on extracurricular and leadership activities actually declined in this period. Perhaps students have substituted some of their time away from homework and extracurriculars and toward AP exams and volunteering. The magnitude of how different factors affect college admissions for students in different positions, and whether such substitution makes sense, remains to be investigated.

Standardized Test-taking and Test Preparation

The share of high school seniors taking either or both the primary college entrance examinations, the SAT and the ACT, has risen in recent decades. The SAT, or the Scholastic Aptitude Test, is the older examination and is more popular on the East and West Coasts. The rival ACT is broader in its coverage of material, more common in the middle of the country, and has grown in popularity in the last 25 years.

The first set of columns of Table 6 shows the fraction of high school seniors, by cohort, that took either of these college entrance examinations. While this proportion increased moderately from 56.1 to 64.6 percent between 1982 and 2004, the increase has not been uniform. As documented earlier, much of the growth has come from marginal students lower in the ability distribution, with growth slowing as participation plateaus among the higher-ability quintiles. In fact, the share of students applying to selective schools who took a college entrance exam, as well as the share coming from the competitive New England and Middle Atlantic regions who took a college entrance exam, actually declined from 1992 to 2004. This counterintuitive trend may in part be a backlash against the stress associated with increased competition: many colleges and universities, including several selective ones, no longer require either test for admission (Bruno, 2006).10

Table 6.

College Exam Test-Taking and Preparation (percentages)

Columns (left to right): took the SAT or ACT (high school cohorts 1982, 1992, 2004); used test preparation in the form of a private class or tutoring (1992, 2004); used any form of test preparation (1992, 2004).
U.S. average 56.1 61.0 64.6 14.1 18.1 59.7 62.6
Test quintile
 1st 29.2 25.2 34.3 18.4 19.2 48.2 48.2
 2nd 38.4 46.3 51.1 11.7 16.0 58.7 58.1
 3rd 50.8 65.6 69.5 12.5 16.7 62.4 66.8
 4th 70.7 80.3 80.3 11.7 18.0 66.9 68.1
 5th 90.5 89.5 88.8 15.2 20.3 65.9 69.7
Application status
 4-year school 91.7 89.8 87.4 17.7 23.1 72.7 76.5
 Selective private 98.1 95.2 90.0 32.8 36.4 80.4 83.0
 Selective public 95.9 92.9 88.6 18.0 27.0 74.0 76.8
Region
 New England 63.7 74.1 68.2 19.1 19.4 60.5 58.7
 Middle Atlantic 57.3 70.0 66.6 19.4 20.9 64.4 67.6
 South 54.6 59.2 68.7 14.2 21.4 61.9 69.3
 Midwest 57.2 64.9 71.1 8.7 13.9 55.0 59.7
 Southwest 54.7 51.9 63.6 17.6 17.2 63.9 62.6
 West 51.4 51.9 50.7 13.4 17.4 57.4 55.0

Source: National Center for Education Statistics, longitudinal surveys (NLS72, HSB82, NELS92, ELS04). The cohort year refers to the year on-time students would have graduated high school.

Notes: The universe is high school seniors in the year designated for each cohort, and all figures are weighted to match the population universe. The test quintile comes from a survey-specific cognitive test battery given to the respondents of each survey during the spring of their senior year; by construction, it is normalized by cohort. (The test batteries are similar but not identical across surveys.) See the online Data Appendix available at ⟨http://e-jep.org⟩ for the definitions of selective schools and the regional breakdowns. Application status refers to the types of colleges to which the respondent applied, and it is nonexclusive. SAT/ACT test taking are (each) based on students’ self reports, as are the test preparation questions; “any form” of test preparation includes private classes/tutoring, classes offered by the high school, and self study using books, video, or computer software.

Of course, the numbers in Table 6 may mask differences between the SAT and ACT. Students in both New England and the Middle Atlantic States use the SAT more heavily; nearly three out of five 18 year-olds have taken the test in these regions in recent years, nearly twice the rate of the rest of the country, based on our tabulations of the NCES surveys. Nonetheless, every region of the country except for the Midwest, where the ACT is most prevalent, has shown a sizable increase in the SAT-taking rate over the past 35 years, with this increase accelerating within the past 10 years. The fastest growth has come from the Southwest and West regions, where participation rates have increased nearly 75 percent since 1972, and over 20 percent just since 1997.

Considering that most colleges require at least one of the entrance examinations but generally accept either, it is useful to look at patterns of students taking both the SAT and ACT. In some cases, students perform better on one than on the other, and the mitigation of risk from taking both tests may exceed the financial and psychic costs. Using the NCES panels to construct snapshots of combined test-taking, we find that about one in eight college-bound seniors took both tests in 1972, and by 2004 this ratio had increased to one in five. Among those applying to private, selective schools, however, both the levels and rate of increase are greater: about 15 percent of these students took both in 1972, and 35 percent took both by 2004. Unfortunately, the NCES data are not able to pinpoint the timing of this increase precisely. Examining aggregate ACT participation rates in predominantly SAT states,11 although a crude proxy, provides higher-frequency data. In just the six years between 2001 and 2007, the New England and Middle Atlantic regions have seen their ACT participation rates nearly double, from about 6 percent to 11 percent. As SAT participation rates have also increased during this period, it seems plausible that the rate of combined test-taking in these areas has increased considerably in recent years.

With the growing importance of the SAT and ACT in admissions, students have had increasing incentive to invest in test preparation services. Table 6 also displays trends in private classes or tutoring and any form of test preparation for the 1992 and 2004 cohorts. Use of some kind of test preparation service is more common among those who are higher ability and those who applied to a four-year institution, particularly a selective private school.

Retaking the test is another way in which students may respond to competitive pressure. According to a College Board report, of the 1.1 million students of the class of 1997 who took the SAT, just under half took the test more than once (Camara and Nathan, 1998). We suspect that retaking of the SAT, as well as the ACT, for which there is even less data, has risen considerably in recent years, but without better data we are unable to test our hypothesis.

Finally, some students (or their parents) might seek an advantage by obtaining special accommodations during the test, such as additional time or a less-crowded room. Abrams (2005) provides suggestive but compelling evidence that the College Board’s decision in 2003 to end “flagging” of tests given under nonstandard conditions disproportionately benefited the savvy and well-to-do, with unprecedented score gains for nonstandard-condition test takers in Washington, D.C. and wealthy communities in California. We extended Abrams’s analysis slightly by examining the fraction of SAT takers under nonstandard conditions in selected (SAT) states in 2003 and 2004, the years bracketing the policy change. In states where the SAT is prevalent and competitive pressures have risen sharply—New York, Massachusetts, Connecticut, New Hampshire, New Jersey, and Maryland—the share of those taking the SAT under nonstandard conditions ranged from 3.4 to 5.2 percent. In a selection of five states where the SAT is also prevalent but competitive pressures have risen less—Indiana, South Carolina, North Carolina, Georgia, and Oregon—the share of those taking the SAT under nonstandard conditions ranged from 0.8 to 2.4 percent.

College Application and Score-Sending Behavior

Applying to a different or larger set of schools may feel to many students and their parents like a relatively easy and inexpensive way to increase the chance of college acceptance. In the mid-1970s, the Common Application began as a near-unified college application form among a consortium of selective, private schools. Membership in the consortium has grown considerably since then: by 2007, nearly 300 institutions participated. Moreover, Internet-based applications began in 1998, and public institutions were invited to join in 2001.12 These innovations, which obviate having to write multiple essays and fill out multiple forms by hand, have almost certainly reduced the cost of applying to a wider variety and greater number of colleges even further.

Figure 2 investigates the trend in the number of college applications with data drawn from the CIRP Freshman Survey. This series exists for successful matriculants at baccalaureate-granting institutions from the late 1960s through 2006, broken down by the selectivity of the institutions for selected years. While 25 percent of students had applied to four or more schools in 1972, more than half had by 2004. Figure 2 shows that the percentage of students applying to seven or more schools rose from about 3 percent in 1972 to 18 percent in 2004. This implies that more than half of the increase among those applying to four or more schools is driven by those applying to seven or more schools; within the last ten years, more than three quarters of the increase is from those applying to seven or more schools. The increase in the number of applications has been widespread throughout the selectivity distribution, with students at highly selective institutions not only sending more applications on average, but also increasing the number of applications sent at a faster pace.

Figure 2. Percentage of Students Reporting Having Applied to 7+ Schools.


Source: The American Freshman, Cooperative Institutional Research Program (CIRP), various years.

Notes: CIRP attempts to make its sample nationally representative by stratifying participating schools by control, highest degree awarded, and selectivity, and then weighting responses to population totals from the Integrated Postsecondary Education Data System (IPEDS). The selectivity metric used is time-varying and based on mean composite SAT score (or ACT equivalent). As a rule of thumb, high selectivity is fairly similar to our combined select private and select publics.

Another proxy for college application behavior is the number of SAT score reports sent to various colleges.13 When taking the SAT, students are allowed to send up to four score reports at no additional marginal cost. However, in recent years, students have been sending far more score reports. For those with scores above 1400 (around the 97th percentile), the median number of reports sent is around eight, which suggests that even students with very high scores do not feel that they can rely on being accepted into a top school. Table 7 illustrates how the number of reports sent varies with score. About three out of 10 students with barely above-average SAT scores, in the 1000 to 1090 range, send six or more score reports; one out of eight in this range of test scores sends eight or more. Throughout the table, the fraction of students sending a given number of reports rises with the test score.

Table 7.

Fraction of Students Who Sent at Least 6, 8, 10, or 15 SAT Score Reports, by Score

Share of test takers sending at least the indicated number of SAT score reports, by SAT score range: 900–990, 1000–1090, 1100–1190, 1200–1290, 1300–1390, 1400–1490, 1500–1600
6+ 0.244 0.295 0.370 0.461 0.570 0.680 0.783
8+ 0.095 0.128 0.178 0.251 0.343 0.450 0.560
10+ 0.032 0.048 0.076 0.117 0.177 0.247 0.318
15+ 0.003 0.004 0.008 0.014 0.023 0.035 0.045

Source: Tabulations by Jesse Rothstein of College Board microdata of SAT takers.

Notes: The sample is restricted to states in which students primarily take the SAT. (See Data Appendix available at ⟨http://e-jep.org⟩ for a definition and list of SAT states.) The data cover SAT I tests taken in the period 1996 through 2001. The test allows up to four score reports to be sent at no charge beyond the test fee.

Has Greater Competition for Higher Education Increased Learning?

The increasingly competitive environment in higher education has increased the level of anxiety that many high school students and their families experience (Lombardi, 2007; Kaufman, 2008). Beyond this, it is natural to wonder whether the increasingly competitive environment has made the typical high school student experience more productive. On one hand, an increasingly competitive environment could induce students to work harder at school and, as a result, to learn more during their high school years; on the other hand, certain mechanisms might lead to the opposite outcome. For example, capable students may spend time on activities that will enhance the chance they obtain admission to selective colleges at the expense of spending time on other activities that might be more productive; Holmstrom and Milgrom (1991) offer the classic formal model of this general phenomenon.

Raising the possibility that increasing competition in higher education may be counterproductive takes us into some profound and difficult questions of education policy. For example, to what extent will an increase in students taking SAT tests or AP exams increase learning? Time spent in an SAT test preparation class, focusing on strategies for more efficient time use or guessing during the exam, may accomplish relatively little to enhance learning. In contrast, one could imagine that students do learn some history or biology while taking high school courses to prepare for the AP tests in these areas. However, the AP tests put a heavy emphasis on memorization of detailed facts because such knowledge is easier to test and to measure. Some students might learn more about softer aspects of various subjects (say, history) and about how to pursue these interests on their own if they were taking classes not focused on the AP exam. One prominent study commissioned by the National Science Foundation and the U.S. Department of Education concluded that AP courses crammed in too much material at the expense of understanding, and that many were taught by teachers who did not have sufficient background in the field (National Research Council, 2002). As a result of these kinds of issues, some highly prestigious private and public high schools are abandoning AP classes (Hu, 2008).

Another major question of education policy is the balance between intrinsic and extrinsic motivation. Kreps (1997) and many others have emphasized that a key value of education is to build an intrinsic motivation for learning. However, the current increasingly competitive high school environment seems to put more emphasis on the extrinsic rewards associated with study, which can result in several problems. For example, even capable students may learn less when under heavy pressure (Ariely, Gneezy, Loewenstein, and Mazar, 2009). This might hold true, for example, if capable students spent a great deal of time worrying about getting into the college of their choice rather than simply focusing on their studies. At the other end of the ability spectrum, less-capable students may effectively give up trying, either because they know their chances of getting into a selective school are small or because they do not want to subject themselves to possible humiliation. Moreover, the experimental literature has found evidence that in certain settings extrinsic motivations and rewards can crowd out intrinsic motivation (Heyman and Ariely, 2004; Gneezy and Rustichini, 2000a,b). Readers who doubt that many students are intrinsically motivated are likely to imagine increased competition causing students to work harder, while those who think that many students are internally motivated will more likely think that increased competition and external pressure will take time away from more productive activities. But many educators would express some trepidation if they believed that colleges are showing an increasing tendency to select those who are externally motivated at the expense of those who are internally motivated.

The efforts that students and their families make to increase their attractiveness to colleges also have a number of undesirable consequences for equity and efficiency. If some families are in a better position than others to invest in valuable signals, either because they are more aware of what colleges are looking for or because they have more resources to make the appropriate investments, these families will have an advantage over other families, with negative consequences for both the efficiency and equity of college selection (Leonhardt, 2004). For instance, when some students take SAT prep courses and others do not, the information value of SAT scores is reduced. The high stakes (or perceived high stakes) involved in college admissions naturally lead parents at least partly to game the system. Such behavior is self-reinforcing: if everyone else in your school is figuring out ways to take the SAT untimed, it would seem foolish not to do so as well. To the extent the admission process is seen as one that can be manipulated, students learn that the appropriate strategy is to make the most of any advantage they may be able to obtain (Rabin, 1993).

Taking all the factors together, has the increasingly competitive environment for higher education improved the learning of high school students, or has it been counterproductive? Persuasive evidence as to which view is right is exceedingly hard to come by. Various authors have found evidence that many U.S. college students are not particularly hard-working or motivated (for example, Sabot and Wakeman-Linn, 1991; Hersh and Merrow, 2005; Nathan, 2006). However, this research does not make comparisons across time, nor does it offer a way to gauge the effect of the increasingly competitive nature of college admissions on the outcomes observed.

We attempt to fill this gap by comparing trends across states in outcomes that we believe are valuable metrics of social welfare. In particular, we examine four indicators: the percentage of 19 year-olds enrolled in college, the percentage of 25 year-olds with at least some college, the percentage of 25 year-olds with at least a bachelor’s degree, and real annual labor earnings of 25 year-olds who were employed and not attending school. Using the Integrated Public Use Microdata Series (IPUMS) for the 1980 Census and the 2005–2007 American Community Surveys, we constructed state-level averages of these variables. The 1980 values capture high school experiences of the 1970s, well before the period of increased competitive pressure to get into college. The 2006 values, on the other hand, correspond to high school experiences during the late 1990s and early 2000s, when, as we have seen, competitive pressure was much higher.
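To make the construction of these state-level averages concrete, the following is a minimal sketch of how they might be computed from an IPUMS person-level extract. It is an illustration rather than the authors’ actual code: the column names follow common IPUMS conventions (BPL, AGE, SCHOOL, EDUC, EMPSTAT, INCWAGE), but the specific category codes and education cutoffs used below are assumptions.

import pandas as pd

def state_indicators(df):
    # df: person-level extract; BPL = state of birth, used as a proxy for
    # state of high school attendance. Codes (SCHOOL == 2 for "attends school",
    # EMPSTAT == 1 for "employed") and the EDUC cutoffs are placeholders.
    age19 = df[df["AGE"] == 19]
    age25 = df[df["AGE"] == 25]
    workers25 = age25[(age25["EMPSTAT"] == 1) & (age25["SCHOOL"] != 2)]

    return pd.DataFrame({
        "pct_enrolled_19":     age19.groupby("BPL")["SCHOOL"].apply(lambda s: 100 * (s == 2).mean()),
        "pct_some_college_25": age25.groupby("BPL")["EDUC"].apply(lambda e: 100 * (e >= 7).mean()),
        "pct_ba_25":           age25.groupby("BPL")["EDUC"].apply(lambda e: 100 * (e >= 10).mean()),
        "mean_earnings_25":    workers25.groupby("BPL")["INCWAGE"].mean(),  # deflate to real dollars upstream
    })

# indicators_1980 = state_indicators(census_1980_extract)
# indicators_2006 = state_indicators(acs_2005_2007_extract)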

In order to compare trends across states with more and less competitive pressure, one must first delineate which states are in which group. To this end, we created a composite index of competitive pressure for each of the 50 states plus the District of Columbia by adding the fractions of students who engaged in each of the following behaviors in 1992: took the PSAT, took an AP exam, spent 10+ hours on homework per week, used private test preparation services, and applied to five or more colleges. The top six “states” from this index, New Jersey, Rhode Island, the District of Columbia, Connecticut, Massachusetts, and New York, align quite closely with our intuition and reading of the popular press. (For the full, rank-ordered list, see Bound, Hershbein, and Long, 2009.) There appears to be a natural gap between the states ranked sixth and seventh, so we group these first six together. The states ranked seventh through eleventh are Delaware, Virginia, California, Colorado, and Georgia. We grouped them together because of their similar scores and a natural gap in the index after the eleventh state. We matched our Census data to the competitive index using an individual’s state of birth as a proxy for state of high school attendance.
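As a concrete illustration, the index construction and grouping can be expressed in a few lines. This is a sketch, not the authors’ code: the state_fractions table of 1992 behavior shares, and the column names it contains, are hypothetical and assumed to have been computed from the survey data beforehand.

import pandas as pd

BEHAVIORS = ["took_psat", "took_ap_exam", "homework_10plus_hrs",
             "used_test_prep", "applied_5plus_colleges"]  # hypothetical column names

def competition_index(state_fractions: pd.DataFrame) -> pd.Series:
    # Each column holds the fraction (between 0 and 1) of 1992 high school
    # students in a state who engaged in that behavior; the composite index
    # is simply the sum of the five fractions, ranked from most to least competitive.
    return state_fractions[BEHAVIORS].sum(axis=1).sort_values(ascending=False)

# ranked = competition_index(state_fractions)
# top_six   = ranked.index[:6]    # NJ, RI, DC, CT, MA, NY in the authors' ranking
# next_five = ranked.index[6:11]  # DE, VA, CA, CO, GA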

Figures 3A–D display scatter plots, one for each of the social indicators, with values in 1980 along the x-axis and values in 2006 along the y-axis. States that are among the six most competitive according to the index are marked with triangles, states ranked seventh through eleventh are squares, and the remaining states are circles. For the first three indicators (Figures 3A–C), these plots show clearly that the states that were subject to more competitive pressure are also those that had a considerable initial advantage in educational attainment. The top six states are generally among those on the far right, and few, if any, of the less competitive states overlap with them. (This pattern is weaker, but still evident, if one uses the top-eleven threshold instead.) If competitive pressure has a positive effect on learning outcomes, then we would expect the lead in educational attainment of the top six states to persist, and perhaps even widen, over time. However, this has not been the case. Over the last quarter century, the gap in educational attainment between the more and less competitive states has narrowed significantly. Competitive states still have higher levels of education in Figures 3A–C, but this advantage is much smaller in 2006 than in 1980. In Figure 3B, several of the less competitive states have converged with or overtaken the top six. For all three indicators, while nearly every state experienced growth, the less competitive states grew faster, on average, than did the top six.

Figure 3. The Effects of Competitive Pressure in High School on College Enrollment, College Attainment, Share with a Bachelor’s Degree, and Earnings, by State.

Source: Integrated Public Use Microdata Series for the 1980 Census and the 2005–2007 American Community Surveys.

Notes: We created a composite index of competitive pressure for each of the 50 states plus the District of Columbia. The index is defined as the sum of the fractions of students who engaged in each of the following behaviors in 1992: took the PSAT, took an AP exam, spent 10+ hours on homework per week, used private test preparation services, and applied to five or more colleges. The top six “states” from this index are New Jersey, Rhode Island, the District of Columbia, Connecticut, Massachusetts, and New York. The states ranked seventh through eleventh are Delaware, Virginia, California, Colorado, and Georgia.

Figure 3D shows the trend in log earnings adjusted for current state of residence, sex, education, and the interaction of sex and education. Unlike in the previous panels, the adjusted earnings data show no initial advantage of the competitive states; they are quite close to average in 1980. Furthermore, their relative position in 2006 has changed little, with no clear differential trend relative to the less competitive states. If increased competitive pressure to get into college caused students to learn more and become more productive, we do not see it in their earnings.
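One way to formalize this adjustment is sketched below; the text does not spell out the exact procedure, so this should be read as an illustration rather than the authors’ specification. Estimate, separately by year,

\ln(\text{earnings}_i) = \gamma_{r(i)} + \beta_1 \,\text{female}_i + \boldsymbol{\beta}_2' \,\text{educ}_i + \boldsymbol{\beta}_3' \,(\text{female}_i \times \text{educ}_i) + \varepsilon_i ,

where \gamma_{r(i)} is a fixed effect for individual i's current state of residence and educ_i is a vector of education-category indicators. The adjusted value for each state of birth b in year t is then the average residual,

\bar{y}_{bt} = \frac{1}{N_{bt}} \sum_{i \in (b,t)} \hat{\varepsilon}_i ,

so that states are compared net of compositional differences in sex, education, and current location.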

Of course, initial conditions differ across states, and so it would be difficult to state with confidence that the implicit differences-in-differences analysis presented here represents causal effects. Nonetheless, we find no clear evidence that greater competition led students to be more productive.
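In notation, the implicit comparison behind this differences-in-differences reading can be written as

\Delta = \bigl(\bar{y}_{\text{top6},\,2006} - \bar{y}_{\text{top6},\,1980}\bigr) - \bigl(\bar{y}_{\text{other},\,2006} - \bar{y}_{\text{other},\,1980}\bigr) ,

where \bar{y}_{gt} denotes the average outcome for state group g in year t. If greater competitive pressure improved outcomes, \Delta should be positive; the patterns in Figure 3 instead suggest \Delta is negative for the attainment measures and close to zero for adjusted earnings. (The grouping into “top6” and “other” here is only for exposition; the figures plot each state separately.)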

In addition to the simple analysis shown here, we investigated other outcomes, such as Ph.D. attainment and medical school matriculation, conducted slightly more sophisticated analyses that attempt to control for other time-varying factors, and experimented with shorter time horizons. In most cases, we still failed to find positive associations between our social outcomes and competitive pressure. In fact, when the correlations are statistically significant at all, they are almost always negative (Bound, Hershbein, and Long, 2009). While we stop short of inferring that competition has had negative effects on these outcomes, we find no support for the view that it has produced positive effects. In conjunction with the psychological and informational costs associated with competitive pressure that we discuss above, these results should raise doubts that the increased competition for college admission has had a net positive effect on what and how students learn.

Conclusion

Higher education in the United States has changed dramatically over the past 30 to 40 years. The overall demand for a college education among high school graduates has grown, and this has resulted in increasingly fierce competition for admission to the more selective colleges. As we have seen, this increase in competition has been particularly large for students who, had they finished high school in an earlier period, would have had a reasonable chance of admission to one of the more selective colleges in the country. But the effects of the increased competition have also been quite pervasive: even students of more average ability have been affected by the changes. In terms of regional differences, competition has grown the most in the Northeast and in California, although other regions have also faced increases. The increased competition that currently exists for admission to a more selective college might have real benefits if it were to increase learning among high school students. However, our analysis suggests that there are reasons to suspect that this congenial outcome might not hold true. Moreover, the increased resources parents and students are able to use to improve their odds of admission at top colleges put low-income students at a disadvantage. Students who attend high schools that typically do not send their graduates to top schools, as well as the children of parents who did not attend selective institutions, are also at a disadvantage (Roderick, Nagaoka, and Allensworth, 2006).

Interventions to reduce the selectivity of institutions seem neither practical nor sensible. However, the difference in resources per student at private and public schools may be worth addressing. Using the same categorization of schools as used here, Bound, Lovenheim, and Turner (2008) estimate that typical per student expenditures are four times as large at selective private colleges and universities as they are at relatively open-access public four-year schools. Winston (2000) reports even larger differences, as does Hoxby (this issue). The resources provided to students attending elite private colleges have been increasing dramatically, while the resources provided to students attending nonflagship public colleges and universities have declined significantly (Hoxby, 1997; Bound, Lovenheim, and Turner, 2008). Policies designed to reduce the gap in resources available to students attending selective private schools versus those attending relatively open-access institutions might reduce the disparities in college experience between those who do and those who do not obtain admission to these schools, and thus also reduce some of the pressure students and their families feel to gain access to a more selective college or university. The huge gap in resources available to students at selective relative to less selective schools seems too large to be justifiable on grounds of either efficiency or equity.

Acknowledgments

We are grateful to Arline Geronimus, Dan Silverman, and Sarah Turner for helpful discussions; to Jesse Rothstein, DeForest McDuff, and Amanda Pallais for providing tabulations from SAT and ACT data on score-sending behavior; and to Sarah Turner for providing tabulations of the Survey of Earned Doctorates. In addition, Bound would like to acknowledge fellowship support from the Center for Advanced Study in the Behavioral Sciences, while Hershbein would like to acknowledge similar support from the National Institute of Child Health and Human Development.

Footnotes

1. See the Data Appendix available with this paper at ⟨http://e-jep.org⟩ for details on the ASC dataset and institutional rankings, including a list of the schools. The bottom row reflects the number of institutions represented by the data. As reflected in the numbers, the ASC is not a complete census of institutions, and so the sample does not include all of the top-ranked schools. The sample of “Other Public Four-years” includes public colleges (as opposed to only universities).

2. These statistics almost surely understate the shifts in competitiveness. Given the increase in the size of the applicant pool, it seems inevitable that the probability that someone scoring at the 75th percentile of the incoming class is admitted has gone down.

3. The numbers in Table 3 do not accord perfectly with those in Table 1 because the latter are conditional on the respondent having graduated from high school, whereas the numbers in Table 3 are conditional on the respondent being a senior in high school. The basic trends between the two tables, however, are the same. We use the top two choices due to data availability.

4. The assertion that students with a given level of qualifications have had a tougher time getting into college is confirmed by more detailed regression analysis that controls for many additional covariates. This is presented in Appendix Table 3, which is available with this paper at ⟨http://www.e-jep.org⟩. This finding may seem at odds with the data Hoxby presents in her paper in this issue, but it is not. Dramatic increases in both the fraction and number of high school graduates applying to college have meant both that the average ability of those attending four-year schools outside the most selective has fallen and that it has become increasingly difficult for high school students to gain admittance to the same institutions as they would have before. Indeed, the NCES data we have analyzed suggest that getting accepted to college has become more difficult not just at the most selective schools, but at all schools.

5. This assumes student characteristics and behaviors do not change. In the online Appendix, we show that once additional student traits are held constant, the conditional acceptance rates are smaller for New England, the Middle Atlantic, and the Western states than the counterfactual rates shown in Table 4 in 1992 and 2004. This suggests students did have a behavioral response—that is, their characteristics changed—which we explore in the next section.

6. In this issue, Hoxby finds convincing evidence of increased college integration over a longer horizon; from our data, it would appear that much of the integration occurred before 1992.

7. For example, the National Commission on Excellence in Education’s release of the seminal report A Nation at Risk in 1983 almost certainly led to major curricular changes at the high school level that had little to do with already tightening college admission standards (Wong, Guthrie, and Harris, 2004). However, it may have affected college demand if it increased the number of students prepared for higher education.

8. In fact, in our own exploratory work we have found that the partial correlation between homework time and conditional college acceptance is stronger in 2004 than in 1992, whether for any four-year college, the selective private institutions, or the selective public institutions.

9. We suspect this spike is due in large part to curricular reform brought about by the release of the A Nation at Risk report in 1983. Unfortunately, data limitations do not allow us to test our suspicion, but it would be a worthy topic for future research. Changes in the availability of alternative activities, such as online pursuits and video games, may also have contributed to the decline in homework time from 1992 to 2004, but research is needed to explore this hypothesis.

10. For a nice treatment of the selection and signaling decisions of students under a test-optional policy at a selective college, see Robinson and Monks (2004). FairTest, an organization critical of standardized testing, maintains a list of “SAT/ACT-optional” schools on its website at ⟨http://www.fairtest.org/university/optional⟩.

11. This set of states is quite consistent over time. See the Appendix available with this paper at ⟨http://e-jep.org⟩ for a list.

12. For more information, including a complete membership list, see ⟨http://www.commonapp.org⟩.

13. This pattern holds true for ACT reports as well. We use SAT score reports here because they have a much higher topcode on the number of scores sent (15) than do the corresponding ACT data (six), allowing a finer and more detailed analysis.

References

  1. Abrams Samuel J. Unflagged SATs. Education Next. 2005 Summer;5(3).
  2. Ariely Dan, Gneezy Uri, Loewenstein George, Mazar Nina. Large Stakes and Big Mistakes. Review of Economic Studies. 2009;76(2):451–469.
  3. Bound John, Hershbein Brad, Long Bridget Terry. Playing the Admissions Game: Student Reactions to Increasing College Competition. 2009. NBER Working Paper No. 15272. doi: 10.1257/jep.23.4.119.
  4. Bound John, Lovenheim Michael, Turner Sarah E. Understanding the Decrease in College Completion Rates and the Increased Time to the Baccalaureate Degree. Population Studies Center Research Report 07-626. University of Michigan; 2007.
  5. Bound John, Lovenheim Michael F., Turner Sarah. Why Have College Completion Rates Declined? The Effects of Changes in Students and Changes in Colleges. 2008. http://www.human.cornell.edu/che/PAM/People/upload/CR_Website.pdf. doi: 10.1257/app.2.3.129.
  6. Bruno Laura. More Universities are Going SAT-optional. USA Today. 2006 April 4. http://www.usatoday.com/news/education/2006-04-04-standardized-tests_x.htm (accessed June 23, 2009).
  7. Camara Wayne, Nathan Julie S. Score Change When Retaking the SAT I: Reasoning Test. Research Notes RN-05. College Board; September 1998.
  8. College Board. Trends in College Pricing. 2008.
  9. Courant Paul N., McPherson Michael, Resch Alexandra M. The Public Role in Higher Education. National Tax Journal. 2006;59(2):291–318.
  10. Finder Alan. In New Twist on College Search, a First Choice, and 20 Backups. New York Times. 2006 March 21. http://www.nytimes.com/2006/03/21/education/21apply.html (accessed June 23, 2009).
  11. Gamerman Ellen, Chung Juliet, Park Sungha, Jackson Candace. How the Schools Stack Up. Wall Street Journal. 2007 December 28.
  12. Gneezy Uri, Rustichini Aldo. Pay Enough or Don’t Pay at All. Quarterly Journal of Economics. 2000a;115(3):791–810.
  13. Gneezy Uri, Rustichini Aldo. A Fine is a Price. Journal of Legal Studies. 2000b;29(1):1–18.
  14. Goldin Claudia, Katz Lawrence F. The Race between Education and Technology. Harvard University Press; Cambridge: 2008.
  15. Heckman James, Lochner Lance, Todd Petra. Earnings Functions, Rates of Return and Treatment Effects: The Mincer Equation and Beyond. In: Handbook of the Economics of Education. Vol. 1. Elsevier; Amsterdam: 2006. pp. 307–457.
  16. Hersh Richard, Merrow John. Declining by Degrees: Higher Education at Risk. Palgrave Macmillan; New York: 2005.
  17. Heyman James, Ariely Dan. Effort for Payment: A Tale of Two Markets. Psychological Science. 2004;15(11):787–793. doi: 10.1111/j.0956-7976.2004.00757.x.
  18. Hoffer Thomas B., Welch Vincent. Time to Degree of U.S. Research Doctorate Recipients. National Science Foundation Info-Brief 06-312. 2006. http://www.nsf.gov/statistics/infbrief/nsf06312/ (accessed January 8, 2009).
  19. Holmstrom Bengt, Milgrom Paul. Multitask Principal-Agent Analyses: Incentive Contracts, Asset Ownership and Job Design. Journal of Law, Economics and Organization. 1991;7(Special Issue):24–52.
  20. Hoxby Caroline M. How the Changing Market Structure of U.S. Higher Education Explains College Tuition. 1997. NBER Working Paper No. 6323.
  21. Hoxby Caroline M., Long Bridget T. Explaining Rising Income and Wage Inequality among the College-Educated. 1999. NBER Working Paper No. 6873.
  22. Hu Winnie. New York Times. 2008 December 6. http://www.nytimes.com/2008/12/07/education/07advanced.html (accessed June 23, 2009).
  23. Kaufman Jonathan. High School’s Worst Year? For Ambitious Teens, 11th Grade Becomes a Marathon of Tests, Stress and Sleepless Nights. Wall Street Journal. 2008 May 24. http://online.wsj.com/article/SB121158515508718929.html (accessed June 23, 2009).
  24. Kreps David. Intrinsic Motivation and Extrinsic Incentives. American Economic Review. 1997;87(2):359–364.
  25. Leonhardt David. As Wealthy Fill Top Colleges, Concerns Grow Over Fairness. New York Times. 2004 April 22. http://www.nytimes.com/2004/04/22/us/as-wealthy-fill-top-colleges-concerns-grow-over-fairness.html (accessed June 23, 2009).
  26. Lombardi Kate Stone. High Anxiety of Getting Into College. New York Times. 2007 April 8. http://www.nytimes.com/2007/04/08/nyregion/nyregionspecial2/08wecol.html (accessed June 23, 2009).
  27. McDuff DeForest. Quality, Tuition, and Applications to In-State Public Colleges. Economics of Education Review. 2007;26(4):433–449.
  28. Nathan Rebekah. My Freshman Year: What a Professor Learned by Becoming a Student. Cornell University Press; Ithaca: 2006.
  29. National Commission on Excellence in Education. A Nation at Risk. U.S. Department of Education; Washington, D.C.: 1983.
  30. National Research Council. Learning and Understanding: Improving Advanced Study of Mathematics and Science in U.S. High Schools. National Academy Press; Washington, D.C.: 2002.
  31. Pryor John, Hurtado Sylvia, Saenz Victor B., Santos Jose Luis, Korn William S. The American Freshman: Forty Year Trends. Higher Education Research Institute, University of California; Los Angeles: 2007.
  32. Rabin Matthew. Incorporating Fairness into Game Theory and Economics. American Economic Review. 1993;83(5):1281–1302.
  33. Robinson Michael, Monks James. Making SAT Scores Optional in Selective College Admissions: A Case Study. Economics of Education Review. 2004;24(4):393–405.
  34. Roderick Melissa, Nagaoka Jenny, Allensworth Elaine M. From High School to the Future: A First Look at Chicago Public School Graduates’ College Enrollment, College Preparation, and Graduation from Four-year Colleges. Consortium on Chicago School Research at the University of Chicago, Chicago Postsecondary Transition Project; Chicago: 2006. http://ccsr.uchicago.edu/publications/Postsecondary.pdf (accessed June 18, 2009).
  35. Ruggles Steven, Sobek Matthew, Alexander Trent, Fitch Catherine A., Goeken Ronald, Hall Patricia Kelly, King Miriam, Ronnander Chad. Integrated Public Use Microdata Series: Version 4.0. Machine-readable database. Minnesota Population Center; 2008. http://usa.ipums.org/usa/
  36. Sabot Richard, Wakeman-Linn John. Grade Inflation and Course Choice. Journal of Economic Perspectives. 1991;5(1):159–170.
  37. Snyder Thomas D., Dillow Sally A., Hoffman Charlene M. Digest of Education Statistics 2008 (NCES 2009-020). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education; Washington, D.C.: 2009.
  38. Gold Medal Schools. U.S. News and World Report. 2007 November 29. http://www.usnews.com/articles/education/high-schools/2007/11/29/gold-medal-schools.html (accessed June 18, 2009).
  39. Vigdor Jacob, Clotfelter Charles. Retaking the SAT. Journal of Human Resources. 2003;38(1):1–33.
  40. Western Interstate Commission for Higher Education. Knocking at the College Door: Projections of High School Graduates by State and Race/Ethnicity, 1996–2012. 6th edition. 1998.
  41. Western Interstate Commission for Higher Education. Knocking at the College Door: Projections of High School Graduates by State and Race/Ethnicity, 1992–2022. 7th edition. 2008.
  42. Williams Alex. The Lost Summer. New York Times. 2006 June 4. http://query.nytimes.com/gst/fullpage.html?res=9E0DEFDE1731F937A35755C0A9609C8B63 (accessed September 23).
  43. Winston Gordon C. Subsidies, Hierarchy and Peers: The Awkward Economics of Higher Education. Journal of Economic Perspectives. 1999;13(1):13–36.
  44. Winston Gordon C. Economic Stratification and Hierarchy in U.S. Colleges and Universities. Williams Project on the Economics of Higher Education. 2000. http://www.williams.edu/wpehe/research.html (accessed June 28, 2009).
  45. Wong Kenneth, Guthrie James, Harris Douglas, editors. A Nation at Risk: A 20-Year Reappraisal. Special Issue of the Peabody Journal of Education. 2004;79(1).
