Author manuscript; available in PMC: 2014 Oct 4.
Published in final edited form as: Educ Res. 2013 May 9;42(5):259–265. doi: 10.3102/0013189X13481382

Is the Sky Falling? Grade Inflation and the Signaling Power of Grades

Evangeleen Pattison 1, Eric Grodsky 2, Chandra Muller 3
PMCID: PMC4185208  NIHMSID: NIHMS578461  PMID: 25288826

Abstract

Grades are the fundamental currency of our educational system; they signal academic achievement and non-cognitive skills to parents, employers, postsecondary gatekeepers, and students themselves. Grade inflation compromises the signaling value of grades, undermining their capacity to serve the functions for which they are intended. We challenge the ‘increases in grade point average’ definition of grade inflation and argue that grade inflation must be understood in terms of the signaling power of grades. Analyzing data from four nationally representative samples, we find that in the decades following 1972: (a) grades have risen at high schools and dropped at four-year colleges, in general, and selective four-year institutions, in particular; and (b) the signaling power of grades has attenuated little, if at all.


Since at least 1894, when a committee at Harvard University cautioned that “Grades A and B are sometimes given too readily--Grade A for work of no very high merit, and Grade B for work not far above mediocrity…” (Report of the Committee on Raising the Standard, Harvard University, 1894; as cited in Kohn, 2008, p. 1), educational researchers have warned about grade inflation. Critics continue to express concerns about “[a] reduction in the capacity of grades to provide true and useful information about students” (Kamber, 2008, p. 47) due to a mismatch between student achievement and the grades students receive (Rojstaczer & Healy, 2012). Despite such concerns, we know little about changes in the relationship between achievement and grades over time, and whether or how these shifts affect the capacity of grades to serve as a signal of student quality.

We challenge the often implicit definition of grade inflation employed by critics, suggesting that the ‘increases in grade point average’ definition is inadequate for understanding grade inflation. Rather, we argue that the signaling power of grades (their ability to provide information to and about students) is more fundamental to the concerns critics raise about inflation. To better understand the grade inflation problem, we address two key research questions:

  1. Have mean secondary and postsecondary grades risen in the decades following 1972?

  2. Has the signaling power of grades declined over this time period?

We move beyond an analysis of change or stability in the mean to consider change or stability in the variance of grades and the covariance of grades with antecedents (student ability and effort) and outputs (educational attainment, occupational prestige, and annual earnings). In addition to considering general trends in grade inflation at high schools and four-year colleges, we investigate grade inflation among selective universities. Our analysis of transcript and survey data from four nationally representative samples of youth who were expected to complete high school in 1972, 1982, 1992, and 2004 leads us to conclude that concerns about grade inflation are overstated if not entirely misplaced.

BACKGROUND

Cognitive skills (reflected by test scores) and non-cognitive skills (such as student effort) largely determine the course grade assigned by a teacher (Farkas, Grobe, Sheehan, & Shuan, 1990; Kelly, 2008). Supporting the construct validity of grades as measures of these underlying traits and behaviors, empirical studies demonstrate that grades are positively associated with educational plans (Rosenbaum, 1980), persistence to degree (Attewell, Heil, & Reisel, 2010; Bowen, Chingos, & McPherson, 2009), occupational prestige (Baird, 1985), and long-term earnings (Gemus, 2010; Jones & Jackson, 1990; Miller, 1998). Despite these findings, some scholars assert that an increase in mean grades has degraded the quality of the signal grades carry over time at both the secondary (ACT, 2005; Carr, 2004; Pope, 2006) and postsecondary levels (Astin, 1998; Babcock & Marks, 2011). Such critics suggest that A’s and B’s are easier to come by now than in the past, contributing to a decline in student effort and an attenuation in the signaling value of grades for postsecondary gatekeepers, employers, and students themselves. While there is certainly evidence that the prevalence of A and B grades has increased in recent decades (Adelman, 2004, 2008), there is little empirical support for a change in the signaling power of grades. Without jointly considering the mean and signaling criteria for grade inflation, we argue, past studies risk mistaking increases in student achievement for a loosening of grading standards.

Consistent with other work on the topic, we argue that grade inflation requires an upward shift in mean grades. Absent other changes in the distribution of grades, however, mean shifts do not themselves imply devaluation. Given a ceiling of grades at 4.0, increases in the mean could lead to a decline in the variance, but the relationship between the mean and variance is not deterministic; other shifts in the distribution (e.g., changes in the maximum GPA or growth in the lower tail of the distribution) could offset mean shifts. Although the signaling power of grades could be attenuated by a decline in the dispersion (or variance) of grades, the only reason to be concerned about changes in the distribution of grades is if those changes are related to the important qualities grades are thought to reflect (student ability and effort) and predict (educational attainment and labor force outcomes). Declines in the association between grades and these important antecedents and outcomes could indicate declining signaling power and would be grounds for concern. Thinking about grade inflation in this way is an important contribution to the grade inflation literature because critics often highlight the central tendency of grades while ignoring the variance component and taking the covariance component as given.
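To make the ceiling mechanism concrete, consider a minimal simulation (ours, purely illustrative; the distributional parameters below are assumptions, not estimates from any of the datasets). When latent achievement is capped at 4.0, pushing the latent mean upward compresses the standard deviation of observed grades even though the underlying dispersion never changes:

```python
import numpy as np

rng = np.random.default_rng(42)

def observed_gpa(latent_mean, sd=0.8, n=100_000):
    """Latent 'deserved' grades, truncated at the 4.0 ceiling (and 0.0 floor)."""
    return np.clip(rng.normal(latent_mean, sd, n), 0.0, 4.0)

for mu in (2.4, 2.8, 3.2, 3.6):
    gpa = observed_gpa(mu)
    print(f"latent mean {mu:.1f} -> observed mean {gpa.mean():.2f}, SD {gpa.std():.2f}")
# The SD shrinks as the mean approaches the ceiling, illustrating why a mean
# shift *can* (but need not) erode dispersion, and with it signaling power.
```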

Grade Inflation in Secondary Education

Research employing the ‘increases in grade point average’ definition of grade inflation documents widespread grade inflation at high schools in the United States (ACT, 2005; Camara, Kimmel, Scheuneman, & Sawtell, 2003; Carr, 2004; Godfrey, 2011; Pope, 2006; Woodruff & Ziomek, 2004; Ziomek & Svec, 1995). For example, Godfrey (2011) uses public high school student records from one state, along with corresponding exam score records from the College Board, to examine shifts in grade point averages compared to shifts in students’ scores on the math and verbal sections of the SAT. In 2006 the average cumulative high school GPA was 2.90, up a quarter of a grade point from 1996, despite relatively stable mean SAT scores (Godfrey, 2011). Studies released by the American College Testing Program (ACT) (ACT, 2005; Woodruff & Ziomek, 2004; Ziomek & Svec, 1995) and the National Center for Education Statistics (NCES) (Perkins, Kleiner, Roey, & Brown, 2004) use school-level data and obtain similar findings—an increase in mean GPA without a concurrent increase in student achievement, as measured by test scores.

Grade Inflation in Postsecondary Education

In contrast to the consistent findings about grade inflation in secondary education, findings are mixed on both the existence and prevalence of grade inflation at the postsecondary level. Some scholars contend that grade inflation at postsecondary institutions is rampant (Juola, 1976; Kamber, 2008; Levine & Cureton, 1999; Rojstaczer & Healy, 2012), while others assert that it is a nonissue (Kohn, 2008; see also Adelman, 2004, 2008; McAllister, Jiang, & Aghazadeh, 2008). Much of the literature supporting claims of grade inflation at postsecondary institutions hinges on temporal increases in mean grades, while the literature challenging those claims questions whether this definition constitutes proof of inflation (Brighouse, 2008; Kohn, 2008).

Juola (1976), one of the earliest researchers to employ a large database to raise empirically-based concerns about grade inflation in postsecondary education, uses data from 134 colleges (28% of his original stratified sample) to illustrate that “grade inflation in higher education is real and conspicuous” (p. 7). Juola highlights that mean grades increased four-tenths of a point from 1965 to 1973, findings corroborated and extended by more recent studies (Kuh & Hu, 1999; Levine & Cureton, 1999; Rojstaczer & Healy, 2012). Kuh and Hu (1999) use student reports of grades to illustrate that college grades increased at every level of institutional selectivity between the mid-1980s and mid-1990s. Similar research documents particularly steep increases in grade point average at selective colleges (relative to less selective universities) over time (Babcock & Marks, 2011; Cote & Allahar, 2007; Johnson, 2003; Rojstaczer & Healy, 2012; Wilson, 1999). For example, the median GPA at Princeton in 1997 was 3.42, an increase of approximately 11 percent since 1973 (Wilson, 1999). Meanwhile, the SAT scores of students enrolled at selective colleges remained stable over this time period (Rojstaczer & Healy, 2012).

Despite numerous studies that report grade inflation at postsecondary institutions, Adelman (2004, 2008) finds no single-direction, nationwide trend in grades between 1972 and 1992. He illustrates that the proportion of “A” grades declined between 1972 and 1982, and then rose between 1982 and 1992. Additionally, he observes an increase in the proportion of withdrawals and no-credit repeats. Such ‘non-penalty’ courses most likely mask sub-par student performance and, if replaced by the grades students would have earned, could presumably lead to lower GPAs. Adelman’s work highlights the importance of examining the distribution of grades and underscores the necessity of using representative, transcript-based data.

Student reports of grades systematically differ from school records, with nearly a third of college students inflating their grades (Kuncel, Credé, & Thomas, 2005). As a result, previous studies based on students’ reports of their grades may have overstated the extent to which grades are actually increasing. Many of the studies finding support for grade inflation at secondary and postsecondary levels are hampered by additional data limitations such as low response rates or biased samples. Results based on truncated samples may reflect compressed distributions of grades or lead to inflated estimates of GPA. In the present study, we address these limitations and examine the important nexus between shifts in mean grades, the distribution of grades, and the power of grades to serve as signals about student antecedents and outcomes. Together, these contributions allow us to develop a more complete understanding of both the existence and magnitude of grade inflation.

DATA AND METHODS

We base our analyses on survey and transcript data from four nationally representative samples of high school students: the National Longitudinal Study of the High School Class of 1972 (NLS72), the High School and Beyond sophomore cohort (HS&B), the National Education Longitudinal Study of 1988 (NELS), and the Education Longitudinal Study of 2002 (ELS). These data are well-suited for this analysis for several reasons: they are nationally representative, include transcript data, and include measures of antecedents and outcomes associated with grades.

Analytic Samples

We restrict our high school analytic sample to respondents with a transcript-based indicator of high school graduation or GED within two years of their expected graduation date. We restrict our college analytic sample to respondents with a transcript-based indicator of ever attending a four-year or selective four-year college.1 Dividing our college sample into two strata allows us to gauge whether grade inflation is particularly problematic at selective universities. We consider colleges to be selective if they are ranked by Barron’s Admissions Competitiveness Index as ‘highly competitive’ or ‘most competitive’ in the years corresponding to each dataset (Schmitt, 2009). We obtain substantively similar results when we use a temporally consistent (1972) selectivity designation for all of the cohorts or examine the signaling power of grades across a wider array of institutional contexts (most/highly competitive, very competitive, competitive, less competitive, and non-competitive). We use listwise deletion for missing data on the independent variables (high school antecedents). We retain 86%, 90%, and 91% of our original analytic sample for the HS&B, NELS, and ELS cohorts, respectively. We obtain similar results for the full sample when we retain respondents by substituting a constant for missing values. We do not impute data for the 1972 cohort because NLS72 respondents are omitted from analyses that include high school antecedents.

Variables

Grade Point Average

We construct weighted high school GPA by weighting core academic course grades (reading, math, science, and social studies) by the number of credits students earned in each course. We obtain similar results when we add a grade point to grades in honors, advanced placement (AP), and international baccalaureate (IB) courses. We construct weighted four-year college GPA by weighting course grades by the number of credits students earned in each course.2
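As a concrete sketch of the credit-weighting step (a minimal illustration under hypothetical column names; the actual NCES transcript processing is more involved, see footnote 2):

```python
import pandas as pd

# Hypothetical transcript rows: one record per course, with grades already
# converted to a four-point scale (see footnote 2).
courses = pd.DataFrame({
    "student_id":   [1, 1, 1, 2, 2],
    "grade_points": [4.0, 3.0, 2.0, 3.0, 4.0],
    "credits":      [1.0, 0.5, 1.0, 1.0, 1.0],
})

# Credit-weighted GPA: sum(grade x credits) / sum(credits), per student.
quality = courses.grade_points * courses.credits
gpa = (quality.groupby(courses.student_id).sum()
       / courses.credits.groupby(courses.student_id).sum())
print(gpa)  # student 1: 3.00, student 2: 3.50
```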

High School Antecedents

The key antecedents we examine are achievement test scores and student reports of effort. Our achievement measures are standardized 10th grade mathematics achievement and reading comprehension scores based on multiple choice tests administered to the HS&B, NELS, and ELS cohorts by NCES. We observe similar patterns when we employ SAT scores (or estimated SAT scores for students taking only the ACT) instead of the test scores included as part of the panel studies. We construct student effort using students’ 10th-grade responses to three questions: “How often do you come to class and find yourself without these things? a) pencil or paper (when needed); b) books (when needed); and c) your homework done (when assigned).” Response categories include: “usually,” “often,” “seldom,” and “never.” Question wording and response categories are consistent across the cohorts.3
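A sketch of how such a scale might be assembled (hypothetical data and coding scheme, ours for illustration; footnote 3 reports the actual reliability values):

```python
import pandas as pd

# Hypothetical responses to the three preparedness items, coded so that
# 1 = "usually" unprepared ... 4 = "never" unprepared (higher = more effort).
items = pd.DataFrame({
    "pencil":   [4, 3, 2, 4, 1],
    "books":    [4, 4, 2, 3, 1],
    "homework": [3, 4, 1, 4, 2],
})
# If a cohort's instrument ran in the opposite direction (as with ELS,
# per footnote 3), reverse-code first: items = 5 - items

effort = items.mean(axis=1)  # one effort score per student

def cronbach_alpha(df):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of totals)."""
    k = df.shape[1]
    return (k / (k - 1)) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))

print(effort.round(2).tolist(), round(cronbach_alpha(items), 2))
```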

Postsecondary Outcomes

The postsecondary outcomes we examine are attending a four-year college within two years of the expected high school graduation date, attending a selective four-year college (as defined above) within two years of expected high school graduation, and baccalaureate degree completion within 8.5 years of the expected high school graduation date (conditional on four-year college attendance). This restriction allows us to examine the same “time to degree” window for the NLS72, HS&B, and NELS cohorts. We observe similar patterns when we do not impose this restriction.

Occupational Outcomes

The occupational outcomes we examine are the occupational prestige of the respondent’s most recent occupation and their logged annual earnings (conditional on employment). Although it would have been ideal to evaluate the association between grades and later adult occupational prestige and earnings, such analyses are not possible with the available data. Occupational prestige is a measure of the stature a particular occupation holds in society and is often used to gauge relative social class positions. We define occupational prestige using the 1989 Nakao-Treas Occupational Prestige Scores, which are based on 1980 Census three-digit occupational codes (Nakao & Treas, 1994).4 For the 1972 cohort, we link the 1970 Census occupational codes to the 1980 Census codes. This allows us to assign a Nakao-Treas Occupational Prestige Score to each occupation in the sample. Because three-digit occupation codes are not available for the HS&B and NELS cohorts, we construct our measure of occupational prestige for these cohorts by averaging the occupational prestige for the exemplar occupations in the 1980 census categories listed on the survey instrument for each dataset. Respondents reporting a military occupation are dropped from all analyses. We define earnings as the log of each respondent’s annual earnings (conditional on reporting an occupation). We add $250 to each respondent’s reported annual income so that respondents who reported no earnings are retained in the analysis.
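The earnings transformation is simple enough to state exactly; a one-line sketch (the function name is ours; the $250 shift is the authors' choice, made so that the log is defined for zero-earners):

```python
import numpy as np

def logged_earnings(annual_earnings):
    """Log annual earnings after a $250 shift; log(0) is undefined,
    log(0 + 250) is not, so zero-earners stay in the sample."""
    return np.log(np.asarray(annual_earnings, dtype=float) + 250.0)

print(logged_earnings([0, 15_000, 42_000]).round(2))  # [ 5.52  9.63 10.65]
```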

Analytic Plan

Our methods are straightforward. To answer our first research question, we present the mean high school and college GPA for each cohort overall and by college selectivity. To answer our second research question, we separately examine two components of secondary and postsecondary grades: their variance and their covariance with relevant antecedents and outputs. In our first step, we examine the variance of high school and college grades for each cohort to see if it has declined over time, which would be consistent with ceiling effects due to increasing mean grades. Next, we estimate a series of correlations to explore possible attenuation in the relationship between high school antecedents and high school grades. Lastly, we estimate two sets of models to explore possible attenuation in the relationship between grades and key outcomes: (1) logistic regression models for college entrance and completion; and (2) ordinary least squares models for occupational prestige and logged annual earnings. We present results from these models as average marginal effects. Because our educational outcomes are discrete, the average marginal effect of grades approximates the change in the probability of achieving a given educational outcome produced by a 1-unit change in GPA (Long & Freese, 2006). For the occupational outcomes, the marginal effect simply equals the relevant slope coefficient, the change in the outcome produced by a 1-unit change in GPA (Cameron & Trivedi, 2009). As an extra step, we also standardized the grades for each cohort and re-estimated the models (not shown); we arrived at conclusions similar to those we report.
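A minimal sketch of the average-marginal-effects step for one discrete outcome (Python with simulated data here; the paper itself works in Stata, and every variable name and coefficient below is invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data: attendance probability rises with GPA.
rng = np.random.default_rng(0)
n = 5_000
hs_gpa = np.clip(rng.normal(2.6, 0.7, n), 0.0, 4.0)
attend = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-3.5 + 1.3 * hs_gpa))))

X = sm.add_constant(pd.DataFrame({"hs_gpa": hs_gpa}))
fit = sm.Logit(attend, X).fit(disp=False)

# Average marginal effect: the derivative of Pr(attend) with respect to GPA,
# averaged over the sample (analogous to Stata's `margins, dydx(*)`).
print(fit.get_margeff(at="overall").summary())
```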

RESULTS

Trends in Mean Grades

Cumulative high school GPA rose steadily among high school seniors between 1982 and 2004, increasing from 2.38 in 1982 to 2.62 in 2004 (an increase of approximately 10%; see Table 1). Conversely, when we look at postsecondary grades, we see that the mean GPA among students who attend a four-year college dropped gradually in the decades following 1972 (from 2.73 in 1972 to 2.50 and 2.33 in 1982 and 1992, respectively). The mean GPA among students who attend a selective four-year college also dropped considerably between 1972 and 1992 (from 3.13 in 1972 to 2.77 in 1982 to 2.67 in 1992), contrary to widely held beliefs that grades at selective institutions have risen in recent decades. Consequently, the answer to our first research question is simple: grades at secondary institutions have risen in the time period we examine; however, postsecondary grades have declined. We now turn to our second research question, which examines the signaling aspect of grades.

Table 1.

Mean and Standard Deviation for GPA Measures, by Cohort

                                         Cohort
                                 1972      1982      1992      2004

High School Graduates
  High School GPA                  -       2.38      2.41      2.62
                                          (0.71)    (0.71)    (0.73)

College Matriculants
  All Baccalaureate GPA          2.73      2.50      2.33        -
                                (0.69)    (0.68)    (0.75)
  Selective Baccalaureate GPA    3.13      2.77      2.67        -
                                (0.68)    (0.50)    (0.56)

Note: Standard deviations in parentheses.

Trends in Grade Variance

As illustrated by Table 1, the standard deviation of high school grades increased slightly over the time period we examine, from 0.71 in 1982 to 0.73 in 2004 (an increase of approximately 3%). The standard deviation of college grades also increased, from 0.69 in 1972 to 0.75 in 1992 (an increase of approximately 9%). Among students attending selective colleges, however, the dispersion of grades declined in the decades following 1972, particularly between 1972 and 1982, when it dropped by approximately 26% (from 0.68 to 0.50). Overall, our preliminary examination of the signaling component of grades illustrates an increase in the dispersion of grades at high schools and four-year colleges. However, we see a tightening of the grade spread at selective four-year colleges, suggesting a potential decline in the capacity of these grades to adequately reflect variation in the academic achievement of students. To develop a more complete understanding of the signaling aspect of grades, we now examine the covariance between grades and what they purportedly represent.

High School Grades

Those who are concerned about grade inflation often assume that an increase in mean grades diminishes the signaling power of grades. As illustrated by Table 2, the associations of both test scores and student effort with high school grades have remained consistently robust for the cohorts of high school seniors we are able to observe (1982, 1992, and 2004). If anything, the relationship between test scores and high school grades may have strengthened over time, particularly between 1982 and 1992. Although we cannot say with certainty that the signaling power of grades has increased, these descriptive results fail to support the thesis that high school grades have lost signaling power in the decades following 1982.

Table 2.

Correlation Coefficients and Average Marginal Effects (and standard errors) for the Relationship between GPA and Selected Measures of Academic Achievement and Employment, by Cohort

                                                        Cohort
                                              1972          1982          1992          2004

A. Correlation between high school GPA and…
  Math achievement                              -           0.52          0.64          0.53
  Reading comprehension                         -           0.46          0.54          0.50
  Student report of effort in high school       -           0.24          0.24          0.24

B. Average marginal effect of high school GPA on…
  Attending a four-year college                 -       0.27 (0.006)  0.32 (0.007)  0.32 (0.004)
  Attending a selective four-year college       -       0.07 (0.009)  0.16 (0.012)  0.22 (0.011)
  Earning a baccalaureate degree                -       0.23 (0.013)  0.26 (0.015)       -
  Nakao-Treas occupational prestige score       -       5.27 (0.266)  5.52 (0.378)       -
  Logged annual earnings                        -       0.15 (0.033)  0.09 (0.041)       -

C. Average marginal effect of college GPA on…
  Earning a baccalaureate degree           0.20 (0.009) 0.26 (0.012)  0.33 (0.007)       -
  Nakao-Treas occupational prestige score  3.63 (0.297) 4.94 (0.367)  4.82 (0.415)       -
  Logged annual earnings                   0.13 (0.021) 0.10 (0.049)  0.12 (0.039)       -

D. Average marginal effect of selective college GPA on…
  Earning a baccalaureate degree           0.17 (0.029) 0.17 (0.033)  0.15 (0.021)       -
  Nakao-Treas occupational prestige score  5.08 (1.206) 4.49 (1.554)  3.96 (1.311)       -
  Logged annual earnings                   0.11 (0.097) 0.09 (0.188)  0.11 (0.154)       -

Note: Standard errors in parentheses.

Consistent with the point estimates discussed above, the average marginal effects of high school GPA on postsecondary outcomes have remained substantial, and perhaps even increased, over time. For example, where each additional high school grade point was associated with a 27 percentage point increase in the probability of attending a baccalaureate college within two years of expected high school graduation among students who completed high school around 1982, the marginal effect of a grade point increased to 32 percentage points in the following decades. The upward trend in the relationship between high school GPA and college attendance is even greater in magnitude when we examine selective four-year college attendance. Where each additional high school grade point was associated with a 7 percentage point increase in the probability of attending a selective baccalaureate college within two years of high school graduation among students who completed high school around 1982, the marginal effect of a grade point more than tripled by 2004 (up from 7 to 22 percentage points). As with college attendance, we see no attenuation in the association between high school GPA and college completion.

Turning now to the average marginal effects of high school GPA on occupational outcomes, we find mixed support regarding the signaling power of high school grades. The association between high school GPA and occupational prestige remained relatively stable in the decades following 1982, while the association between high school GPA and logged earnings attenuated over this time period—although the changes we observe here are not statistically significant at the .05 level.5 In sum, our examination of the relationship between high school grades and what they reflect suggests that the signaling power of high school grades has persisted over time. Below we examine the signaling power of four-year college grades.

College Grades

The average marginal effect of four-year college GPA on baccalaureate degree completion within 8.5 years of expected high school graduation increased steadily over the time period we examine, from 0.20 to 0.26 to 0.33 for the high school classes of 1972, 1982, and 1992, respectively. Turning to occupational outcomes, four-year college GPA is consistently associated with occupational prestige; however, the association between college GPA and earnings declined slightly. Importantly, we find no statistically significant evidence of attenuation in the signaling power of college grades. Our findings suggest that college GPA has retained its signaling power.

Grades at Selective Colleges

As shown in Table 2, the average marginal effect of selective four-year college GPA on degree completion declined slightly between 1982 and 1992 (from 0.17 to 0.15). The less pronounced effects of college GPA on degree completion at selective colleges, relative to the broader universe of baccalaureate institutions, are unsurprising given the markedly higher completion rates (and thus lower variance) among students attending selective institutions (Bowen, Chingos, & McPherson, 2009; Small & Winship, 2007). Turning to our occupational outcomes, we find mixed evidence regarding the ability of grades at selective four-year colleges to serve as signals about students. The association between selective four-year college GPA and occupational prestige declined slightly over the time period we examine, whereas the association between selective four-year college GPA and logged earnings remained relatively stable (declining slightly between 1972 and 1982). Evidence about trends in the signaling power of grades at selective universities must be viewed with some caution, however, as these changes also fail to attain statistical significance.

CONCLUSION

For over a century, critics have warned that rising grades compromise the value of the signals grades carry. We challenge the often implicit definition of grade inflation employed by critics, arguing that a focus solely on upward drift in the mean is inadequate for understanding the existence and pervasiveness of grade inflation in secondary and postsecondary education. Increasing average grades are irrelevant if they are not accompanied by a decline in the signaling power of grades. As such, we consider not just the mean, but also the variance among grade point averages and their relationships with important antecedents and outcomes. Contrary to much of the existing literature, we find virtually no support for the existence of grade inflation in secondary or postsecondary education.

Our study illustrates that mean grades have risen at secondary institutions, as critics have noted, but dropped at postsecondary institutions in the decades following 1972. Furthermore, we do not observe any statistically significant attenuation in the signaling power of grades over this time period. In fact, we find some evidence of an increase in the signaling power of grades. While it is possible that grade inflation had already run its course by the 1970s and that there was a time when grades meant something more than they do today, we may simply find ourselves in the same position Harvard occupied in 1894: bemoaning the fact that grades aren’t what they used to be.

We have demonstrated trends in the signaling power of grades between 1972 and 2004. We have not, however, made any claims about the absolute signaling power of grades. Readers might conclude that the correlations we report are too low and that educators should evaluate students in a way that increases the magnitude of the empirical relationships among grades, academic effort, and labor market success. Our analyses do not, and cannot, purport to identify an ideal relationship among grades and their antecedents and outputs; that is a value judgment. We fully support serious discussions about standards of grading and sympathize (even empathize) with concerns about the standards to which students are held. Perhaps educational institutions should award A’s less readily and hold students to a higher standard. We do not, however, see any reason to believe that our grading standards are any different now than they were forty years ago.

Acknowledgments

This research was supported by grants from the National Science Foundation (DGE-1110007 to Evangeleen Pattison and DUE-0757018 to Chandra Muller, PI) and a grant from the Eunice Kennedy Shriver National Institute of Child Health and Human Development (5 R24 HD042849 to the Population Research Center). Opinions reflect those of the authors and do not necessarily reflect those of the granting agencies. We would also like to thank the Education and Transition to Adulthood group, and especially Andy Halpern-Manners, at the University of Texas at Austin.

Biographies

EVANGELEEN PATTISON is a graduate student in the Department of Sociology, a trainee in the Population Research Center, and holds a National Science Foundation Graduate Research Fellowship at The University of Texas at Austin, 305 E. 23rd Street, Stop G1800, Austin, TX, 78712-1699; epattison@prc.utexas.edu. Her primary research interest is stratification, in particular, the conditions and consequences of educational inequality in higher education.

ERIC GRODSKY, PhD, is an associate professor of sociology and educational policy studies at the University of Wisconsin-Madison; 4412 Sewell Social Sciences Bldg., 1180 Observatory Drive, Madison, WI 53706-1393; egrodsky@ssc.wisc.edu. His research focuses on inequality in access to and persistence through higher education and testing policies in education.

CHANDRA MULLER, PhD, is a professor of sociology and faculty affiliate of the Population Research Center at The University of Texas at Austin, 305 E. 23rd Street, Stop G1800, Austin, TX 78712-1699; cmuller@austin.utexas.edu. Her research focuses on stratification of students in schools and the impact of educational opportunities on outcomes across the life course.

Footnotes

1

The 1972 cohort includes postsecondary but not secondary transcripts. For this reason, NLS72 respondents are omitted from analyses that include high school GPA as an outcome or predictor. The 2002 cohort includes secondary but not postsecondary transcripts. For this reason, ELS respondents are omitted from analyses that include college GPA as an outcome or predictor. Previous research examining the distribution of postsecondary grades restricts the universe of postsecondary students to students who earned more than 10 postsecondary credits (Adelman 2004, 2008). Our substantive results do not change when we impose this restriction.

2

We first convert all grades, whether letter or numerical, to a four-point scale, omitting grades indicating in-process status, audits, no-penalty withdrawals, no-grade pass, etc. Next, we standardize all non-transfer-term credits on a semester metric and create a flag to indicate all courses for which credit was earned. Given the increase in the proportion of grades removed from GPA calculations over the time period we examine (Adelman, 2004), we ran similar analyses that account for the total number of courses students completed for which a course grade was not assigned (withdrawals, no-credit repeats, and “P” grades). Our substantive results were unchanged by this specification.
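A sketch of the conversion step described here (the particular letter-to-points map and non-penalty codes below are conventional assumptions for illustration, not the study's exact crosswalk, and the semester standardization is omitted):

```python
# Conventional four-point map; actual transcripts also carry numeric grades.
LETTER_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
                 "C+": 2.3, "C": 2.0, "C-": 1.7, "D+": 1.3, "D": 1.0, "F": 0.0}
# Statuses excluded from GPA: withdrawals, audits, no-grade pass, in-process.
NON_PENALTY = {"W", "AU", "P", "NG", "IP"}

def to_grade_points(mark):
    """Return grade points on the four-point scale, or None if the mark
    carries no quality points and is omitted from the GPA (per this footnote)."""
    if mark in NON_PENALTY:
        return None
    return LETTER_POINTS[mark]

print([to_grade_points(m) for m in ("A", "B-", "W", "F")])  # [4.0, 2.7, None, 0.0]
```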

3

We reverse code ELS response categories to ensure that valence is uniform across cohorts. Alpha values are 0.72 for HS&B, 0.70 for NELS, and 0.80 for ELS. While it would have been informative to also examine teacher reports of student effort, this is not feasible because the questions asked of teachers for the 1982 cohort differ dramatically from those asked of teachers for the 1992 and 2004 cohorts. Analyses of the 1992 and 2004 cohorts show that student and teacher reports of student effort bear similar associations with high school grades.

4

Nakao and Treas (1994) construct occupational prestige for the 503 detailed occupational categories of the 1980 census classification system using the 740 occupational titles that were evaluated in the 1989 NORC General Social Survey (GSS). GSS respondents were asked to rank the social standing of occupations from “1” for the lowest possible social standing to “9” for the highest. Nakao and Treas converted and scored these ratings in 12.5 point intervals resulting in prestige scores with a range from 0 (lowest) to 100 (highest).
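The interval arithmetic described here is straightforward; a sketch (the function is ours; occupation-level Nakao-Treas scores are then averages of such converted ratings across GSS respondents):

```python
def gss_rating_to_prestige(rating: int) -> float:
    """Map a GSS social-standing rating (1 = lowest ... 9 = highest) onto the
    0-100 prestige metric in 12.5-point steps, as this footnote describes."""
    if not 1 <= rating <= 9:
        raise ValueError("GSS ratings run from 1 to 9")
    return (rating - 1) * 12.5

print([gss_rating_to_prestige(r) for r in (1, 5, 9)])  # [0.0, 50.0, 100.0]
```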

5

We determine statistical significance using the post-estimation “suest” command in Stata.

Contributor Information

Evangeleen Pattison, University of Texas, Austin, Texas.

Eric Grodsky, University of Wisconsin-Madison, Madison, WI.

Chandra Muller, University of Texas, Austin, Texas.

References

  1. ACT. Are High School Grades Inflated? Iowa City, IA: ACT; 2005.
  2. Adelman C. Principal Indicators of Student Academic Histories in Postsecondary Education, 1972–2000. Washington, DC: U.S. Department of Education, Institute of Education Sciences; 2004.
  3. Adelman C. Undergraduate Grades: A More Complex Story than Inflation. In: Hunt LH, editor. Grade Inflation: Academic Standards in Higher Education. Albany: State University of New York Press; 2008.
  4. Astin A. The Changing American College Student: Thirty-Year Trends, 1966–1996. The Review of Higher Education. 1998;21:115–135.
  5. Attewell P, Heil S, Reisel L. Competing Explanations of Undergraduate Noncompletion. American Educational Research Journal. 2010.
  6. Babcock P, Marks M. The Falling Time Cost of College: Evidence from Half a Century of Time Use Data. Review of Economics and Statistics. 2011;93:468–478.
  7. Baird L. Do Grades and Tests Predict Adult Accomplishment? Research in Higher Education. 1985;23(1):3–85.
  8. Bowen WG, Chingos MM, McPherson MS. Crossing the Finish Line: Completing College at America’s Public Universities. Princeton: Princeton University Press; 2009.
  9. Brighouse H. Grade Inflation and Grade Variation: What’s All the Fuss About? In: Hunt LH, editor. Grade Inflation: Academic Standards in Higher Education. Albany: State University of New York Press; 2008. pp. 73–91.
  10. Camara W, Kimmel E, Scheuneman J, Sawtell EA. Whose Grades Are Inflated? College Board Research Report. New York: College Board; 2003.
  11. Cameron C, Trivedi P. Microeconometrics Using Stata. College Station, TX: Stata Press; 2009.
  12. Carr PC. The NAEP High School Transcript Study. Education Statistics Quarterly. 2004;6:4–5.
  13. Cote J, Allahar A. Ivory Tower Blues: A University System in Crisis. Toronto: University of Toronto Press; 2007.
  14. Farkas G, Grobe R, Sheehan D, Shuan Y. Cultural Resources and School Success: Gender, Ethnicity, and Poverty Groups within an Urban School District. American Sociological Review. 1990;55:127–142.
  15. Gemus J. College Achievement and Earnings. Uppsala University Department of Economics Working Paper Series; Uppsala, Sweden: 2010.
  16. Godfrey K. Investigating Grade Inflation and Non-Equivalence. College Board Research Report. New York: College Board; 2011.
  17. Johnson V. Grade Inflation: A Crisis in College Education. New York: Springer; 2003.
  18. Jones EB, Jackson JD. College Grades and Labor Market Rewards. The Journal of Human Resources. 1990;25:253–266.
  19. Juola A. Grade Inflation in Higher Education: What Can or Should We Do? Paper presented at the Annual Meeting of the National Council on Measurement in Education; San Francisco, CA. 1976 Apr.
  20. Kamber R. Understanding Grade Inflation. In: Hunt LH, editor. Grade Inflation: Academic Standards in Higher Education. Albany: State University of New York Press; 2008. pp. 45–71.
  21. Kelly S. What Types of Students’ Effort Are Rewarded with High Marks? Sociology of Education. 2008;81:32–52.
  22. Kohn A. The Dangerous Myth of Grade Inflation. In: Hunt LH, editor. Grade Inflation: Academic Standards in Higher Education. Albany: State University of New York Press; 2008. pp. 1–11.
  23. Kuh G, Hu S. Unraveling the Complexity of the Increase in College Grades from the Mid-1980s to the Mid-1990s. Educational Evaluation and Policy Analysis. 1999;21(3):297–320.
  24. Kuncel N, Credé M, Thomas LL. The Validity of Self-Reported Grade Point Averages, Class Ranks, and Test Scores: A Meta-Analysis and Review of the Literature. Review of Educational Research. 2005;75(1):63–82.
  25. Levine A, Cureton JS. When Hope and Fear Collide: A Portrait of Today’s College Student. San Francisco: Jossey-Bass; 1999.
  26. Long SJ, Freese J. Regression Models for Categorical Dependent Variables Using Stata. 2nd ed. College Station, TX: Stata Press; 2006.
  27. McAllister C, Jiang X, Aghazadeh F. Analysis of Engineering Discipline Grade Trends. Assessment and Evaluation in Higher Education. 2008;33(2):167–178.
  28. Miller SR. Shortcut: High School Grades as a Signal of Human Capital. Educational Evaluation and Policy Analysis. 1998;20:299–311.
  29. Nakao K, Treas J. Updating Occupational Prestige and Socioeconomic Scores: How the New Measures Measure Up. In: Marsden P, editor. Sociological Methodology. Washington, DC: American Sociological Association; 1994. pp. 1–72.
  30. Perkins R, Kleiner B, Roey S, Brown J. The High School Transcript Study: A Decade of Change in Curricula and Achievement, 1990–2000. Education Statistics Quarterly. 2004;6(1–2):1–11.
  31. Pope J. Admissions Boards Face ‘Grade Inflation’. The Washington Post. 2006.
  32. Rojstaczer S, Healy C. Where A Is Ordinary: The Evolution of American College and University Grading, 1940–2009. Teachers College Record. 2012;114.
  33. Rosenbaum JE. Track Misperceptions and Frustrated College Plans: An Analysis of the Effects of Tracks and Track Perceptions in the National Longitudinal Survey. Sociology of Education. 1980;53:74–88.
  34. Schmitt CM. Documentation for the Restricted-Use NCES–Barron’s Admissions Competitiveness Index Data Files: 1972, 1982, 1992, 2004, and 2008. Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education; 2009.
  35. Small ML, Winship C. Black Students’ Graduation from Elite Colleges: Institutional Characteristics and Between-Institution Differences. Social Science Research. 2007;36(3):1257–1275.
  36. Wilson B. The Phenomenon of Grade Inflation in Higher Education. National Forum. 1999;79:38–49.
  37. Woodruff DJ, Ziomek RL. Differential Grading Standards Among High Schools. ACT Research Report Series. Iowa City, IA: ACT; 2004.
  38. Ziomek R, Svec J. High School Grades and Achievement: Evidence of Grade Inflation. ACT Research Report Series. Iowa City, IA: ACT; 1995.
