Proceedings of the National Academy of Sciences of the United States of America. 2018 May 14;115(21):5314–5316. doi: 10.1073/pnas.1805767115

Uncertainty in long-run forecasts of quantities such as per capita gross domestic product

M. Granger Morgan

People make predictions about the future all of the time. Often these predictions take the form of single-value best estimates, with no associated statement of uncertainty. Sometimes that is just fine. For example, if you need to know to within a few meters where Venus or the Earth will be located with respect to the sun at noon Greenwich Mean Time on January 1, 2100, experts in orbital mechanics can tell you, and single-value answers will be more than sufficient. Of course, there is a minuscule probability that some large object from outside the solar system will disrupt those orbits. However, on the timescale of the next century, the probability that something like that will happen is so low that it can be ignored in virtually any human decision making.

In other cases, people routinely make forecasts in the form of single-value or point estimates when in fact uncertainty is very high. Slowly but surely, different communities have come to understand that doing this can result in serious problems and have begun to characterize uncertainty with probability distributions (1). Thanks to the efforts of Richard Moss and Steven Schneider (2), the climate science community has been in the forefront of working to describe the uncertainty in their estimates. However, other fields such as health risk assessment (3–5) and energy forecasting (6–10) have been slower to embrace the characterization of uncertainty in their forecasts.

For this reason the paper published in PNAS by Christensen et al. (11) that provides probabilistic estimates of global per capita gross domestic product (GDP) in 2100 is an important step forward. These authors correctly note that “projections of long-run productivity growth and economic growth are primary inputs into analyses used to support long-term planning and decision-making on many critical national priorities.” They go on to note that long-run growth scenarios are “imbedded in projections of greenhouse gas (GHG) emissions and concentrations as well as projections of temperature and other climatic outcomes” that are the inputs to many climate impact assessments. The physics of the Earth system is well enough understood that, if we know the quantity of various greenhouse gases that will be in the atmosphere over the next century or two, climate scientists can make reasonably reliable predictions about some of the changes that will result. At the same time, just what will happen to ocean circulation, to large ice sheets, to clathrates stored in the tundra, and to several other critical variables is far less certain.

Strengths and Limits of Analytical Forecasting Methods

Christensen et al. (11) adopt two different strategies to make their forecasts of future per capita GDP and then compare the results. Their first approach uses a statistical forecasting method that basically applies a low-pass filter to the time series of past economic growth (so as to remove high-frequency variability that arises from processes such as business cycles) and builds a probability distribution for future growth rates on the basis of the resulting spectrum. Mueller and Watson (12) have shown that such procedures can sometimes yield quite good estimates of the future value of various summary economic variables over time periods of decades. In the case of Christensen et al. (11), such an approach makes the assumption that the underlying economic and social processes that give rise to growth are fairly stationary—that is, that they remain largely unchanged between the past time series that was analyzed and the future interval across which the projection will be made. Fancy mathematical manipulation cannot capture the future impact of fundamental changes that have not yet happened or been anticipated. In that regard, Mueller and Watson (12) caution that while such analysis “accommodates a wide range of low-frequency persistence patterns” it does not “directly accommodate large breaks in volatility such as those evident in the pre- and post-WWII U.S. GDP growth rates.” They go on to observe that such large breaks could be accommodated by “postulating a stochastic process” to describe the changes.
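To make the low-frequency idea concrete, the sketch below shows one heavily simplified version of it. It is not the estimator used by Christensen et al. (11) or by Mueller and Watson (12); it merely projects a demeaned series of annual growth rates onto a handful of low-frequency cosine basis functions (discarding business-cycle and higher frequencies) and, under an illustrative i.i.d. benchmark, forms a Student-t predictive interval for average growth over a long horizon. The simulated series, the number of retained frequencies q, and the horizon are all assumptions chosen for illustration.

```python
import numpy as np
from scipy import stats

def lowfreq_coeffs(g, q=12):
    """Coefficients of the demeaned series on the first q cosine basis
    functions: the below-business-cycle information that a
    Mueller-Watson-style analysis retains."""
    T = len(g)
    s = (np.arange(T) + 0.5) / T                      # grid points in (0, 1)
    Psi = np.sqrt(2.0) * np.cos(np.pi * np.outer(np.arange(1, q + 1), s))
    return Psi @ (g - g.mean()) / T                   # q low-pass coefficients

def long_run_interval(g, horizon, q=12, coverage=0.90):
    """Stylized predictive interval for average annual growth over the
    next `horizon` years, using only the sample mean and the q
    low-frequency coefficients (an i.i.d. benchmark, not the full
    Mueller-Watson machinery)."""
    T = len(g)
    X = lowfreq_coeffs(g, q)
    s2 = T * np.mean(X ** 2)                          # low-frequency variance
    scale = np.sqrt(s2 * (1.0 / T + 1.0 / horizon))   # estimation + future noise
    tcrit = stats.t.ppf(0.5 + coverage / 2, df=q)     # Student-t, q deg. freedom
    return g.mean() - tcrit * scale, g.mean() + tcrit * scale

# Illustration on simulated "1900-2010" growth rates averaging 2%/yr
rng = np.random.default_rng(0)
g = 0.02 + 0.02 * rng.standard_normal(111)
print(long_run_interval(g, horizon=90))               # 90-year-ahead interval
```

Note that the interval is driven entirely by variability present in the historical record; nothing in the procedure can widen it to reflect structural change that has never yet occurred, which is exactly the limitation discussed next.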

Since the time series employed by Christensen et al. (11) runs from 1900 to 2010, it covers periods before and after the ubiquitous adoption of microprocessors and their important role in enhancing productivity described by Jorgenson (13) and Jorgenson and Vu (14). Hence, the low-frequency forecasting procedure presumably captures productivity growth in both the presence and absence of this “general-purpose technology” that has been driven for several decades by Moore’s law (15). What it cannot capture is possibly even higher rates of future technology-induced productivity growth brought about by something like an as-yet-unanticipated combination of the impacts of ubiquitous artificial intelligence and synthetic biology, nor will it capture the impacts of changes resulting from the possible implosion of the Chinese and/or US economy, possible climate-induced hemispheric-scale migrations, or possible catastrophic developments such as a global pandemic or nuclear war. Of course, one might argue that over the course of the coming century such events are so unlikely as not to warrant consideration in human decision making. However, since their occurrence probability is orders of magnitude larger than that of an unexpected disruption of planetary orbits, when making such forecasts it would be wise to be clear about what future developments have and have not been considered.

Strengths and Limits of Judgmental Forecasting Methods

The second forecasting method adopted by Christensen et al. (11) involves asking experts to make probabilistic judgments about the rate of growth of GDP, a procedure often referred to as “expert elicitation.” Such methods were first developed and applied in the field of decision analysis (1, 16). More recently, they have been applied to topics such as assessing uncertainty about climate science and about the likely evolution of the future cost and performance of emerging technologies. Citations for a number of these studies can be found in a perspective piece I published in PNAS in 2014 (17).

At the outset of that paper I argued that “to conduct an expert elicitation, there must be experts whose knowledge can support informed judgment and prediction about the issues of interest. There are many topics about which people have extensive knowledge that provides little or no basis for making informed predictive judgments.” Unfortunately, there is no easy way to draw a clear line between topics where predictive expertise does and does not exist. In my view, the further one moves away from topics in which empirically established physical laws and observational data can underpin the judgments on which a forecast is based, the more one should remember that eminence and expertise are not always the same thing, and should ask whether the conditions necessary for knowledge-informed assessment actually exist.

Before making more use of methods of expert elicitation to assess the likely future evolution of key social and economic variables over timescales of many decades, or even a few centuries, the research community would be well advised to spend some time thinking about who (if anyone) has the necessary expertise to make the requested judgments. As my colleague Baruch Fischhoff has reminded me on many occasions over the years, most people will give you answers to almost any question you ask. That does not mean that the answers are actually useful or informed.

When my colleagues and I have conducted expert elicitations of climate scientists (ref. 17 and references therein) we have been careful to define the question precisely and specify what should and should not be included. We have also prepared rather elaborate literature summaries and other supporting materials to assist the experts as they develop their considered opinion. If we can persuade ourselves that there are people whose knowledge can support informed judgment and prediction about the likely future value of socioeconomic quantities over periods of decades or more, it will be essential to precisely define what is and is not to be included in the question being addressed and to support the people making those judgments with similar supporting materials.

The literature in behavioral social science makes it clear that almost everyone (1, 18, 19), including experts (20), tends to be systematically overconfident when making probabilistic judgments. In light of this, it is interesting to note that the uncertainty bounds on per capita GDP in 2100 assessed by respondents in the Christensen et al. (11) study are wider (more uncertain) than the results obtained from the low-frequency method. If one believes these respondents really have the knowledge necessary to make the predictions they were asked to make, and one assumes that, like virtually all other people, their judgments are overconfident, that suggests that the amount of uncertainty being estimated by the low-frequency forecasting method is much too small.
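A crude numerical illustration of why this matters: one can fit a distribution to an expert’s elicited percentiles and then widen it to reflect the documented tendency of nominal intervals to contain the truth less often than claimed. All of the numbers below are hypothetical, and the lognormal form and the simple inflation rule are an analyst’s choices for illustration, not anything used by Christensen et al. (11).

```python
import numpy as np
from scipy import stats

def lognormal_from_quantiles(q10, q90):
    """Fit a lognormal to elicited 10th and 90th percentiles
    (say, per capita GDP in 2100, in thousands of dollars)."""
    z = stats.norm.ppf(0.90)                  # ~1.2816
    mu = 0.5 * (np.log(q10) + np.log(q90))
    sigma = (np.log(q90) - np.log(q10)) / (2 * z)
    return mu, sigma

def recalibrate(q10, q90, hit_rate=0.60, nominal=0.80):
    """Widen the fitted distribution if the expert's nominal 80%
    intervals have historically contained the truth only 60% of
    the time: a blunt correction for overconfidence."""
    mu, sigma = lognormal_from_quantiles(q10, q90)
    inflate = stats.norm.ppf(0.5 + nominal / 2) / stats.norm.ppf(0.5 + hit_rate / 2)
    return mu, sigma * inflate

# Hypothetical elicited 10th/90th percentiles: $20k and $120k per capita
mu, sigma = recalibrate(20.0, 120.0)
z = stats.norm.ppf(0.90)
print(f"recalibrated 80% interval: "
      f"{np.exp(mu - z * sigma):.0f}k to {np.exp(mu + z * sigma):.0f}k")
```

The same logic applies in reverse to the comparison above: if the elicited bounds are already wider than the low-frequency bounds before any such correction, correcting for overconfidence would widen the gap further.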

Christensen et al. (11) are correct that forecasts of GDP are widely used as inputs in analyses such as assessing climate impacts (21) and computing the social cost of carbon (22, 23). That, of course, raises the question of whether they should be, and of whether GDP is an adequate, or even an unambiguously defined, measure of economic activity, let alone of well-being (24–26). For analyses of energy systems and their environmental impacts, Warr and Ayres (27) have even questioned whether the causal arrow flows from GDP to energy consumption or in the other direction. Beyond noting these potential problems with GDP, however, this is not the place for an extended discussion of the issue.

Resisting Pressure to Forecast

There are understandable pressures to develop ever-more-elaborate analytical procedures and report ever-more-detailed quantitative forecasts about the future values of many different systems and quantities. Sometimes this makes sense, but often these pressures should be resisted.

When we have to make projections about the future evolution of a complex dynamic system it is common practice to build models. The question then arises: What to do when one suspects that the underlying structure of the system being modeled may change over time (i.e., is not stationary)? Just up the hall from my Carnegie Mellon office sit several of the world’s leading Bayesian philosophers and statisticians. Some years ago I posed this problem to one of them, who argued that I should build a list of all of the ways in which the system I was concerned with might change in the future, assess a probability for each of those possible future structures, run each model, and then weight and combine the results using my assessed probabilities. I understand this argument philosophically, but, of course, its implication is that the less I know about how the world is likely to change in the future, the more complicated my forecasting model should become. The practical engineer in me rebelled at this notion. In a paper titled “Mixed levels of uncertainty in complex policy models” (28) Elizabeth Casman, Hadi Dowlatabadi, and I argued that a more sensible strategy is to begin with a fairly detailed model and then over time shift between models, moving to progressively simpler (for examples see refs. 2931) and then order-of-magnitude models, and perhaps ultimately on to simple bounding analysis (32).
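In code, the colleague’s prescription amounts to Bayesian model averaging over hypothesized future structures. The sketch below is a toy version: the scenarios, their probabilities, and the growth distributions are all invented for illustration, and each “model” is reduced to a single compound-growth draw. Its point is the structural one made above: every additional way the world might change adds another component to the mixture, so the less one knows, the larger the model becomes.

```python
import numpy as np

def gdp_2100(growth, gdp_2010=10.0, years=90):
    """Compound per capita GDP (in thousands of dollars) to 2100."""
    return gdp_2010 * (1.0 + growth) ** years

# One toy "model" per hypothesized future structure, with a
# subjective probability assessed for each.  All numbers are
# illustrative, not elicited or estimated values.
structures = {
    "business as usual":    (0.70, lambda rng: gdp_2100(rng.normal(0.020, 0.005))),
    "AI-driven surge":      (0.15, lambda rng: gdp_2100(rng.normal(0.035, 0.010))),
    "prolonged stagnation": (0.10, lambda rng: gdp_2100(rng.normal(0.008, 0.004))),
    "major catastrophe":    (0.05, lambda rng: gdp_2100(rng.normal(-0.005, 0.010))),
}

rng = np.random.default_rng(1)
draws = []
for p, model in structures.values():
    # Sample each structure in proportion to its assessed probability
    draws.extend(model(rng) for _ in range(int(p * 20000)))
print(np.percentile(draws, [5, 50, 95]))  # combined (mixture) forecast
```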

When uncertainty is great, the use of order-of-magnitude methods, bounding analysis, or even simple parametric “what if” analysis may sometimes be more informative—and the best we can do.
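As a concrete example, a bounding “what if” for per capita GDP in 2100 requires nothing more than compound growth under a bracket of sustained rates. The 2010 base value and the bracket below are illustrative only:

```python
# Bounding "what if": per capita GDP in 2100 under sustained growth
# rates bracketing a century of historical experience.  The 2010
# base value ($10k per capita) and the rate bracket are illustrative.
base_2010 = 10.0                                   # thousands of dollars
for g in (0.005, 0.01, 0.02, 0.03):
    print(f"{g:.1%}/yr for 90 yr -> {base_2010 * (1 + g) ** 90:6.0f}k in 2100")
```

Even this back-of-the-envelope calculation makes the central point: plausible sustained growth rates differ by a factor of a few, but ninety years of compounding turns that into an order-of-magnitude spread in outcomes.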

Footnotes

The author declares no conflict of interest.

See companion article on page 5409.

References

1. Morgan MG, Henrion M. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge Univ Press; New York: 1990.
2. Moss R, Schneider SH. Uncertainties in the IPCC TAR: Recommendations to lead authors for more consistent assessment and reporting. In: Pachauri R, Taniguchi T, Tanaka K, editors. Guidance Papers on the Cross Cutting Issues of the Third Assessment Report of the IPCC. Intergovernmental Panel on Climate Change; Geneva: 2000. pp 33–51. Available at www.ipcc.ch/pdf/supporting-material/guidance-papers-3rd-assessment.pdf. Accessed April 19, 2018.
3. Presidential/Congressional Commission on Risk Assessment and Risk Management. Volume 1: Framework for environmental health risk management; Volume 2: Risk assessment and risk management in regulatory decision-making. 1997. Available at https://cfpub.epa.gov/ncea/risk/recordisplay.cfm?deid=55006. Accessed April 19, 2018.
4. Morgan MG. Uncertainty analysis in risk assessment. Hum Ecol Risk Assess. 1998;4:25–39.
5. Goldstein BD. Risk assessment of environmental chemicals: If it ain’t broke... Risk Anal. 2011;31:1356–1362. doi: 10.1111/j.1539-6924.2010.01447.x.
6. Craig PP, Gadgil A, Koomey JG. What can history teach us? A retrospective examination of long-term energy forecasts for the United States. Annu Rev Energy Environ. 2002;27:83–118.
7. Shlyakhter AI, Kammen DM, Broido CL, Wilson R. Quantifying the credibility of energy projections from trends in past data: The US energy sector. Energy Policy. 1994;22:119–130.
8. Morgan MG, Keith DW. Improving the way we think about projecting future energy use and emissions of carbon dioxide. Clim Change. 2008;90:189–215.
9. Kaack LH, Apt J, Morgan MG, McSharry P. Empirical prediction intervals improve energy forecasting. Proc Natl Acad Sci USA. 2017;114:8752–8757. doi: 10.1073/pnas.1619938114.
10. Sherwin ED, Henrion M, Azevedo ML. Estimation of the year-on-year volatility and the unpredictability of the United States energy system. Nature Energy. 2018;3:341–346.
11. Christensen P, Gillingham K, Nordhaus W. Uncertainty in forecasts of long-run economic growth. Proc Natl Acad Sci USA. 2018;115:5409–5414. doi: 10.1073/pnas.1713628115.
12. Mueller U, Watson M. Measuring uncertainty about long-run predictions. Rev Econ Stud. 2016;83:1711–1740.
13. Jorgenson DW. Information technology and the U.S. economy. Am Econ Rev. 2001;91:1–32.
14. Jorgenson DW, Vu K. Information technology and the world growth resurgence. Ger Econ Rev. 2007;8:125–145.
15. Khan HN, Hounshell DA, Fuchs ERH. Science and research policy at the end of Moore’s law. Nature Electronics. 2018;1:14–21.
16. Spetzler CS, Staël von Holstein C-AS. Probability encoding in decision analysis. Manage Sci. 1975;22:340–358.
17. Morgan MG. Use (and abuse) of expert elicitation in support of decision making for public policy. Proc Natl Acad Sci USA. 2014;111:7176–7184. doi: 10.1073/pnas.1319946111.
18. Kahneman D, Slovic P, Tversky A. Judgment Under Uncertainty: Heuristics and Biases. Cambridge Univ Press; New York: 1982.
19. Morgan MG. Theory and Practice in Policy Analysis: Including Applications in Science and Technology. Cambridge Univ Press; New York: 2017.
20. Henrion M, Fischhoff B. Assessing uncertainty in physical constants. Am J Phys. 1986;54:791–798.
21. Field CB, Barros VR, editors. Climate Change 2014: Impacts, Adaptation, and Vulnerability. Vol 1. Cambridge Univ Press; New York: 2014.
22. National Academies. Valuing Climate Damages: Updating Estimation of the Social Cost of Carbon Dioxide. National Academies; Washington, DC: 2017.
23. Morgan MG, Vaishnav P, Dowlatabadi H, Azevedo IL. Rethinking the social cost of carbon dioxide. Issues Sci Technol. 2017;Summer:43–50.
24. The Economist. Measuring economies: The trouble with GDP. April 30, 2016 print edition. Available at https://www.economist.com/news/briefing/21697845-gross-domestic-product-gdp-increasingly-poor-measure-prosperity-it-not-even. Accessed April 19, 2018.
25. Nordhaus WD. Quality change in price indexes. J Econ Perspect. 1998;12:59–68.
26. Ayres RU. Limits to the growth paradigm. Ecol Econ. 1996;19:117–134.
27. Warr BS, Ayres RU. Evidence of causality between the quantity and quality of energy consumption and economic growth. Energy. 2010;35:1688–1693.
28. Casman EA, Morgan MG, Dowlatabadi H. Mixed levels of uncertainty in complex policy models. Risk Anal. 1999;19:33–42.
29. Harte J. Consider a Spherical Cow: A Course in Environmental Problem Solving. University Science Books; Sausalito, CA: 1988.
30. Rubin ES, Small MJ, Bloyd CN, Henrion M. Integrated assessment of acid deposition effects on lake acidification. J Environ Eng. 1992;118:120–134.
31. Nordhaus WD. An optimal transition path for controlling greenhouse gases. Science. 1992;258:1315–1319. doi: 10.1126/science.258.5086.1315.
32. Morgan MG. The neglected art of bounding analysis. Environ Sci Technol. 2001;35:162A–164A. doi: 10.1021/es012311v.
