Proceedings of the National Academy of Sciences of the United States of America
. 2025 Nov 24;122(48):e2520857122. doi: 10.1073/pnas.2520857122

Stochastic responses and marginal valuation

Lars Peter Hansen a,1,2, Panagiotis Souganidis b,2
PMCID: PMC12685021  PMID: 41284873

Significance

Intertemporal marginal valuations are central to economic decision making and policy analysis. They guide decisions that are robustly optimal and, even in nonoptimal settings, offer informative characterizations for policy improvement. Relevant applications include the social impact of climate change and the social value of research and development. There are a wide range of applications in public economics, environmental economics, and macroeconomics. Our formulation disentangles the roles of modeling assumptions and empirical inputs, enabling conceptual clarity and illuminating measurement challenges. Designed for broad applicability, the analysis explicitly accommodates deep uncertainties, including model ambiguity and misspecification. In doing so, it provides a rigorous and transparent foundation for both economic analysis and practical policy evaluation in uncertain and evolving environments.

Keywords: policy assessment, valuation, deep uncertainty, robustness, stochastic differential equations

Abstract

The analysis of policy impacts in a dynamic and uncertain reality is vital to supporting informed economic policy design and implementation. Dynamic, stochastic economic models used in policy evaluation necessarily simplify the world as we know it. This motivates us to explore, refine, and extend tools aimed at producing marginal valuations that shed light on why some policies are optimal and how others, though suboptimal, can be improved. We present representations of these marginal valuations that embrace uncertainty and support robust implementation, even in environments characterized by “deep uncertainties.” These representations offer a more complete understanding of how interactions among multiple state variables, concerns about model misspecification, and uncertainties surrounding potentially long-term implications contribute to the cogent assessment of policies. We argue that these methods are particularly salient for evaluating the global cost of climate change and the global value of research and development with long-term prospects for success.

1. Introduction

Marginal valuations play a central role in economic analyses and serve as tools for both interpretation and measurement. This paper develops such constructs in dynamic settings relevant for investment and policy analyses for which uncertainty is pervasive. We imagine a Markov environment with multiple, interacting, endogenous state variables. Small changes in one of these variables at an initial period have marginal outcome consequences in future dates that we seek to value. As an example, an initial change in some form of capital—induced, for example, by an initial investment—has payoff ramifications for several periods in the future. As another example, a marginal increase in climate change has consequences for economic opportunities in future periods. Finally, proposals such as ref. 1 for using net national product to assess policies in the presence of externalities use marginal calculations to deduce the prices relevant for social valuation.

Uncertainty contributions can arise in multiple forms, including randomness captured by repeated small shocks and infrequent big events. In contrast to more narrowly framed risk analyses, the marginal valuations we explore entertain a broad conception of such uncertainties, rich enough to encapsulate model ambiguities and specification concerns.

While intertemporal marginal assessments are constructed to be local in nature, they complement more global assessments. They suggest where actions or policy changes can have large impacts, and they isolate when such modifications are inconsequential. More formally, they provide insights about global solutions to control problems as the first-order conditions for optimization typically depend on marginal valuations. Intertemporal marginal valuations are commonly used in economics and policy making, including some prominent measurements of the social cost of carbon and marginal abatement costs, and are used in assessing the welfare implications of marginal changes in exposures to uncertainty.* Some applications rely on explicitly specified models of state dynamics measured as partial derivatives of value functions. Other applications are more empirically grounded but abstract from fully explicit state dynamic modeling, including the interactions among state variables.

In many economic settings, uncertainty would seem to be a first-order consideration. However, typical calculations either abstract from uncertainty or focus on risk specifications with full confidence in the probabilistic input. We suggest ways to relax that confidence and show how it impacts valuation. As some researchers and policymakers have suggested, deep uncertainties pervade dynamical systems in ways that should impact the design of prudent climate change and other policies, a view that we embrace. For example, see refs. 8–11 for discussions of such challenges in the economics literature. Moreover, a plethora of research in macroeconomics treats aggregate uncertainty as a second-order consideration despite the limited understanding of some aspects of the underlying dynamics. Finally, we use the jump diffusion model to include infrequent big events, along with the repeated small shocks (Brownian increments). Importantly, the infrequent jumps have endogenous intensities and uncertain dynamics.

Although we build on ideas, described in refs. 12–14 and elsewhere, justifying decision theory approaches that provide a way to confront such challenges, our emphasis is different. Indeed, we focus on the construction of i) uncertainty-adjusted probabilities used in valuation, and ii) revealing representations and decompositions of these valuations.

In addition to decision theory, we borrow and extend ideas from asset pricing to obtain our representation and decomposition of marginal valuation. We show formally how to represent marginal values as asset prices with implicit marginal payoffs. Depending on the application, these payoffs could be economic outputs or social payoffs and are the counterpart to dividends of cash flows from market-based asset pricing. Our construction of marginal payoffs provides us with ways to find revealing additive decompositions of marginal valuations. These decompositions quantify how dynamic interactions across state variables contribute to the overall valuation. In applications with so-called “structural models” of economic dynamics, the methods will help researchers to deconstruct model implications and to assess roles of the alternative “moving parts.” The resulting valuation formulas use stochastic responses to initial perturbations in ways that are similar to sensitivity characterizations of derivative claims in mathematical finance.

Our approach to uncertainty is encapsulated in an uncertainty-adjusted probability measure. We derive this adjustment for alternative characterizations of the underlying uncertainties. The use of a change-of-measure to represent asset values, while conceptually distinct, is reminiscent of the risk-neutral change in probability measure used extensively in derivative claim pricing.

We explore this topic for continuous-time stochastic systems, although many of the insights derived are also applicable to discrete-time formulations. We start with Markov diffusion processes analyzed in the familiar risk environment with full confidence in probabilities. Our results in this baseline case are given in Section 2. In Section 3, we allow for ambiguities and robustness considerations expressed in terms of likelihood and prior uncertainties. Finally, we include infrequent Poisson-type jumps with endogenous intensities and explore their consequences in Section 4.

Many of the basic mathematical ingredients in our analysis were developed and used previously in related forms of parameter sensitivity analyses for continuous-time diffusions. See, for example, refs. 15 and 16, along with the extensive list of references in the latter paper. A substantial focus of this literature is computation, while our motivation is to develop a framework that supports measurement, model interpretation, and deconstruction in the presence of broadly conceived uncertainties.

2. Impulse Responses and Diffusion Dynamics

Impulse responses are a common tool in the study of economic dynamical systems. See ref. 17 for an initial application of this approach. For nonlinear stochastic models, there are a variety of alternative approaches. Consistent with our interest in marginal valuation, we follow the approach of refs. 18 and 19 by which responses are deduced for marginal changes in an initial time period. We refer to the computations as local impulse responses. In what follows, we show the dynamic stochastic evolution of these responses for Markov diffusions.

Consider an N-dimensional state vector whose dynamics are given by the stochastic differential equation (SDE):

dX_t = \mu(X_t)\,dt + \sum_{\ell=1}^{L} \sigma_\ell(X_t)\,dW_t^{\ell}, \qquad X_0 = x, [1]

where W is an L-dimensional, standard Brownian motion with ℓth entry denoted W^ℓ, and μ and the σ_ℓ's are N-dimensional column vectors with the same number of entries as there are states. In addition to the standard assumptions that guarantee the well-posedness of Eq. 1, we restrict the matrix σσ^⊤ to be uniformly elliptic (its determinant is bounded away from zero). The infinitesimal generator \mathcal{A} for the diffusion is written, for sufficiently smooth ϕ : R^N → R, in the familiar form:

\mathcal{A}\phi \overset{\text{def}}{=} \mu \cdot \phi_x + \tfrac{1}{2}\,\mathrm{trace}\left(\sigma^\top \phi_{xx}\, \sigma\right). [2]

Initially, we consider an expected discounted value objective of a decision maker.

V(x) \overset{\text{def}}{=} \delta\, E\left[ \int_0^\infty \exp(-\delta t)\, U(X_t)\, dt \;\middle|\; X_0 = x \right].

This decision maker could be a private investor or a social planner. We adopt this starting point for the sake of pedagogical convenience, although our analysis has broader applicability. Later, we explore two extensions that are central to our analysis.
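To make this object concrete, here is a minimal Monte Carlo sketch, not from the paper: it simulates a hypothetical one-dimensional Ornstein–Uhlenbeck state by Euler–Maruyama and estimates V(x) = δE[∫₀^∞ exp(−δt)U(X_t)dt | X_0 = x]. We choose U(x) = x so that a closed form, δx/(δ + κ), is available for comparison; all parameter values are illustrative assumptions.

```python
import numpy as np

# Hypothetical one-dimensional example (illustration only):
# dX_t = -kappa*X_t dt + sigma dW_t with U(x) = x, for which
# V(x) = delta*x/(delta + kappa) in closed form.
rng = np.random.default_rng(0)
kappa, sigma, delta = 1.0, 0.2, 0.5
x0, T, dt, n_paths = 1.0, 10.0, 0.01, 4000

X = np.full(n_paths, x0)
V_mc = np.zeros(n_paths)   # accumulates delta*exp(-delta t)*U(X_t)*dt per path
for k in range(int(T / dt)):
    t = k * dt
    V_mc += delta * np.exp(-delta * t) * X * dt    # U(x) = x
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X += -kappa * X * dt + sigma * dW              # Euler-Maruyama step

v_estimate = V_mc.mean()
v_exact = delta * x0 / (delta + kappa)             # = 1/3 here
print(v_estimate, v_exact)
```

The horizon is truncated at T = 10, which is innocuous here because the integrand decays at rate δ + κ.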

It is well known that the function V solves a Feynman-Kac (FK) or, equivalently, a resolvent equation of the form:§

0 = \delta U - \delta V + \mathcal{A} V. [3]

Since we imagine a decision maker in the background, we suppose that he or she uses a Markov control law and that the posited μ and σ_ℓ's embed this decision rule in their construction. As we are potentially interested in suboptimal controls or policies, we do not necessarily start from an optimization problem.

2.1. Stochastic Responses.

For decision and policy analyses, it is often of interest to study marginal improvements from potentially suboptimal decision rules. Then it becomes important to consider the marginal valuations V_{x_i} for alternative coordinates i. Depending on the application, these could be measures of the shadow value of capital, the social cost of climate change, or the value of research and development supporting large-scale projects.

We start by characterizing what is called the first variational process.# A straightforward application of the chain rule yields that the Jacobian Yt=DxXt satisfies the SDE:

dY_t = D\mu(X_t)\, Y_t\, dt + \sum_{\ell=1}^{L} D\sigma_\ell(X_t)\, Y_t\, dW_t^{\ell}, \qquad Y_0 = I_N.

For the local impulse response theory, we introduce a stochastic process (called a variational process) Λt=Ytλ, which gives a vector of marginal responses of X to some date zero marginal change λ in the state vector. Given the SDE of Y, it is immediate that Λt evolves according to:

d\Lambda_t = D\mu(X_t)\, \Lambda_t\, dt + \sum_{\ell=1}^{L} D\sigma_\ell(X_t)\, \Lambda_t\, dW_t^{\ell}, \qquad \Lambda_0 = \lambda. [4]

In effect, the Λ evolution replaces the entries of μ(Xt) and σ(Xt) with the dot products between the corresponding gradients evaluated at Xt and Λt. The specification of the initial condition λ will be one of the N coordinate vectors when we are interested in the impact of marginal changes in one of the state variables in the initial time period. Other initial conditions are also of interest, however, as we discuss subsequently.

The previous calculations can also be justified using Malliavin Calculus given the underlying Brownian information structure—we refer to refs. 16, 18, and 19 for an extensive discussion and examples. For the discussion here, there is no need to use Malliavin Calculus. Such methods, however, may help to simplify some of the formulas derived later in concrete examples.

In view of the interactions with the state dynamics, we study (X,Λ) as a vector system with generator \mathcal{B}. We are particularly interested in functions of the form Ψ(x,λ) = ψ(x)·λ, where ψ is an N-dimensional vector of functions of x. Let \mathcal{A} act on the vector of functions ψ componentwise. Then, we find:

\mathcal{B}\Psi = \lambda \cdot \mathcal{A}\psi + \left(D\mu\,\lambda\right) \cdot \psi + \tfrac{1}{2}\,\mathrm{trace}\left( \left[ D\sigma_1 \lambda \;\cdots\; D\sigma_L \lambda \right]^\top \psi_x\, \sigma + \sigma^\top \psi_x^\top \left[ D\sigma_1 \lambda \;\cdots\; D\sigma_L \lambda \right] \right).

It follows that the outcome can be expressed in the form λ·\tilde{ψ}(x) for an N-dimensional vector of functions \tilde{ψ}.

Remark 2.1:

Discrete-time counterparts to linear systems (for example, Eq. 1 with μ linear in the state vector and the σ_ℓ's state independent) are often used as approximations in macroeconomics. Under these restrictions on the diffusion dynamics, Λ can be solved for analytically.
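As a sketch of this remark, using an illustrative 2-D linear system of our own choosing (not from the paper), the following verifies that with μ(x) = Ax and constant σ, Eq. 4 reduces to the deterministic ODE dΛ_t = AΛ_t dt, which can be solved analytically; the off-diagonal entry of A lets a date-zero perturbation of state 2 propagate into state 1.

```python
import numpy as np

# Hypothetical 2-D linear system (illustration only):
# dX_t = A X_t dt + sigma dW_t with constant sigma, so D mu = A and
# D sigma_l = 0.  Eq. 4 then has no diffusion term and Lambda_t solves
# d Lambda_t = A Lambda_t dt.
A = np.array([[-1.0, 0.5],
              [0.0, -0.8]])        # state 2 feeds into state 1's drift
lam = np.array([0.0, 1.0])         # perturb state 2 at date zero
T, dt = 2.0, 0.001

Lam = lam.copy()
for _ in range(int(T / dt)):
    Lam = Lam + A @ Lam * dt       # Euler step of Eq. 4 (deterministic here)

# Closed form for this triangular A: Lambda_2 = exp(-0.8 t), and
# Lambda_1 solves Lambda_1' = -Lambda_1 + 0.5*exp(-0.8 t), Lambda_1(0) = 0.
Lam_exact = np.array([
    0.5 * (np.exp(-0.8 * T) - np.exp(-T)) / (1.0 - 0.8),
    np.exp(-0.8 * T),
])
print(Lam, Lam_exact)
```

The nonzero first entry of Λ_T illustrates the cross-state channel that the decomposition in Section 2.3 later isolates.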

2.2. An Initial Representation.

Going forward, we are interested in the process of stochastic (marginal) responses of future U(X) to an initial change in one of the states. We follow the economic convention of referring to U(x) as the “utility” given state realization x, although we could instead follow a convention in control theory of viewing U(x) as a measure of a current period contribution to costs and obtain analogous results.** The marginal responses are given by the process {U_x(X_t)·Λ_t : t ≥ 0}. Investigating the behavior of this process requires that we study the joint dynamics (X,Λ).

We construct a representation for each entry of the gradient DV, and use this representation to perform an intertemporal decomposition that quantifies the importance of state variable interdependencies in representing marginal values.

For convenience, we study DV·λ, where we are particularly interested in coordinate vectors λ that select particular entries of DV. Using the process (X,Λ), we find the representation:

DV(x) \cdot \lambda = \delta\, E\left[ \int_0^\infty \exp(-\delta t)\, DU(X_t) \cdot \Lambda_t\, dt \;\middle|\; X_0 = x,\; \Lambda_0 = \lambda \right]. [5]

Since the initial state variable perturbation can alter future values of the entire state, we include the entire gradient vector DU(Xt) at date t. The dot product between DU and Λ reflects a version of the chain rule.

Differentiating Eq. 3 with respect to each x_j gives a system of equations for the vector DV of functions. We reduce this system to a scalar equation by taking the dot product with the vector λ. This reduction is of interest precisely because it results in a single FK equation with a revealing substantive interpretation. Specifically, it provides the following scalar FK equation for DV·λ:

0 = \delta\, DU(x) \cdot \lambda - \delta\, DV(x) \cdot \lambda + \mathcal{B}\left[ DV(x) \cdot \lambda \right]. [6]

Formula Eq. 5 gives the stochastic representation of the solution to FK equation Eq. 6.

Eq. 5 opens the door to an “asset pricing” interpretation for the marginal valuation V_{x_i} as a discounted expected value of the “cash flow” DU(X_t)·Λ_t, which is the stochastic utility response to an initial perturbation in the ith component of the state vector when Λ_0 is initialized at the ith coordinate vector.
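A numerical sketch of this representation, for a hypothetical model not from the paper: take an Ornstein–Uhlenbeck state with U(x) = x²/2, simulate (X, Λ) jointly, and estimate DV(x)·λ via Eq. 5. In this linear case the closed form DV(x) = δx/(δ + 2κ) is available as a check; all parameter values are illustrative assumptions.

```python
import numpy as np

# Hypothetical check of representation [5] (illustration only):
# OU state dX_t = -kappa X_t dt + sigma dW_t with U(x) = x^2/2, so DU(x) = x.
# With constant sigma, Eq. 4 gives d Lambda_t = -kappa Lambda_t dt, and the
# closed form for the marginal value is DV(x) = delta*x/(delta + 2*kappa).
rng = np.random.default_rng(1)
kappa, sigma, delta = 1.0, 0.2, 0.5
x0, lam0, T, dt, n_paths = 1.0, 1.0, 10.0, 0.01, 4000

X = np.full(n_paths, x0)
Lam = np.full(n_paths, lam0)
dv_mc = np.zeros(n_paths)
for k in range(int(T / dt)):
    t = k * dt
    dv_mc += delta * np.exp(-delta * t) * X * Lam * dt   # DU(X_t)·Lambda_t flow
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X += -kappa * X * dt + sigma * dW
    Lam += -kappa * Lam * dt       # D mu = -kappa, D sigma = 0 in Eq. 4

dv_estimate = dv_mc.mean()
dv_exact = delta * x0 * lam0 / (delta + 2 * kappa)       # = 0.2 here
print(dv_estimate, dv_exact)
```

The same path-simulation machinery extends directly to nonlinear μ and state-dependent σ, where no closed form is available.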

Remark 2.2:

The study of control problems will sometimes use Hamiltonian methods to deduce first-order conditions for optimization. Early prominent examples of this in economics include ref. 21 in the case of optimal deterministic growth models and ref. 22 in the case of the stochastic counterpart. Specifically, such methods give rise to a costate equation that is included when constructing an optimal solution. There is a known equality between a costate and a derivative of a value function with respect to a corresponding state variable. This costate is often interpreted as a measure of a marginal valuation.

Our characterization provides a way to represent the marginal valuation that is distinct from a costate equation and opens the door to further interpretation. The equation of interest is different from the costate equation system for two reasons.

First, we do not exploit the optimality of the control law, which allows for a more general applicability of our analysis. Instead, we take the Markov control law or the Markov equilibrium conditions as given when deducing our representation of interest. Costate evolutions are typically derived to be solved as part of constructing a solution to the optimal control problem, and the impact of optimization is reflected in the costate dynamics. Of course, we could simply focus on a degenerate case of optimality in which the control law is constrained ex ante to equal a prespecified function of the state. The resulting costate evolution can be viewed as an input into our analysis.

Second, we do not directly represent the solution to this vector system of backward SDEs for DV(X). Instead, we focus on an interpretable, state-dependent linear combination Λ·DV(X). Stochastic response theory gives us the evolution of Λ that supports this reduction.†† We then solve the single backward SDE in Λ·DV(X), which has the form of the Feynman–Kac equation, taking as input the forward SDE for the joint (X,Λ) process. Our reason for doing this is neither mathematical nor computational advantage; rather, it results in a substantively interesting and economically motivated representation. SI Appendix, section 1 gives an illustration of these various points using a linear-quadratic control problem.

Remark 2.3:

Asset prices are often represented in terms of stochastic discount factors. To make this connection precise, consider a utility function U of the form:

U(x) = \tilde{U}\left[ c(x) \right],

where c denotes consumption, which can depend on the Markov state. By the chain rule,

U_x = \frac{d\tilde{U}}{dc}\, c_x.

Thus, the stochastic flow to be valued is the stochastic response of consumption times the marginal utility at the date of the flow. By converting the marginal valuation at date zero into units of consumption at date zero, we obtain the formula:

\frac{DV(x) \cdot \lambda}{\frac{d\tilde{U}}{dc}(C_0)} = \delta\, E\left[ \int_0^\infty S_t\, c_x(X_t) \cdot \Lambda_t\, dt \;\middle|\; X_0 = x,\; \Lambda_0 = \lambda \right], [7]

where Ct denotes the stochastic consumption for date t. This representation includes a date t stochastic discount factor:

S_t \overset{\text{def}}{=} \exp(-\delta t)\, \frac{\frac{d\tilde{U}}{dc}(C_t)}{\frac{d\tilde{U}}{dc}(C_0)}.

The division on the left side of formula Eq. 7 converts the marginal valuation into units of consumption. The stochastic discount factor process {S_t : t ≥ 0} is familiar from asset pricing theory as a measure of the intertemporal marginal rates of substitution expressed in units of consumption at the respective dates.‡‡ For example, see refs. 25–27. It follows from the right side of Eq. 7 that the process {S_t : t ≥ 0} is used to discount the stochastic flow process {c_x(X_t)·Λ_t : t ≥ 0}.
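As a small illustration of this construction, assuming a hypothetical power-utility specification of Ũ that is not from the paper, the following computes S_t along a deterministic consumption-growth path, where it collapses to exp(−(δ + γg)t):

```python
import numpy as np

# Sketch of the stochastic discount factor in Remark 2.3 for a hypothetical
# power utility U~(c) = c**(1 - gamma)/(1 - gamma), so dU~/dc = c**(-gamma)
# and S_t = exp(-delta*t) * (C_t/C_0)**(-gamma).  With deterministic
# consumption growth C_t = C_0 * exp(g*t), this collapses to
# exp(-(delta + gamma*g)*t).  Parameter values are illustrative.
delta, gamma, g, C0 = 0.03, 2.0, 0.02, 1.0
t = np.linspace(0.0, 50.0, 6)
C = C0 * np.exp(g * t)
S = np.exp(-delta * t) * (C / C0) ** (-gamma)
S_closed_form = np.exp(-(delta + gamma * g) * t)
print(S)
```

In stochastic settings, C_t would be simulated along each path and S_t would discount the flow c_x(X_t)·Λ_t path by path, as on the right side of Eq. 7.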

When Λ_0 is initialized at e_i (the ith coordinate vector), we obtain the marginal valuation of state i; for explicit policy analyses, we are interested in other initializations, as illustrated in the following remark.

Remark 2.4:

Let Γ(x) denote a given, possibly suboptimal, control law where:

\mu(x) = \hat{\mu}\left[ x, \Gamma(x) \right], \qquad U(x) = \hat{U}\left[ x, \Gamma(x) \right].

To investigate the impact of a “small” change in Γ, first write Γ+ϵΓ¯. Substitute in the modified control law and differentiate the altered Feynman-Kac equation for an ϵ-dependent value function with respect to ϵ for the given initial value function and its derivatives. By including all of the terms, we obtain a Feynman-Kac equation for the derivative of the value function with respect to ϵ for which the term of interest is the “flow term:”

\delta\, \hat{U}_\gamma \cdot \bar{\Gamma} + V_x \cdot \hat{\mu}_\gamma \bar{\Gamma}. [8]

A solution is a discounted (conditional) expected value of the implied flows. When Γ is chosen optimally, the sum of the two terms in Eq. 8 is zero, achieved by setting marginal costs equal to marginal benefits. When the discounted expected value of the flows implied by Eq. 8 is positive, Γ̄ isolates a direction for policy improvement for the associated state vector. For a variety of applications, we can interpret separately the two terms in Eq. 8 in terms of marginal costs and benefits. Given the second term, we are led to set the initial condition for the Λ process of stochastic responses to:

\Lambda_0 = \hat{\mu}_\gamma\left[ X_0, \Gamma(X_0) \right] \bar{\Gamma}(X_0).

These formulas represent costs and benefits in terms of the utility numeraire. In economic applications, a further division by a current period marginal utility of a consumption numeraire converts the measures in units of that numeraire.

Example 2.5:

SI Appendix, section 6 gives a climate–economics example. In the resulting model, there are three state variables: temperature, a broadly based capital used in producing output, and a measure of knowledge capital that makes new green technology discovery more likely. The controls include a dirty form of energy that emits CO2, an investment in the broadly based capital, and an investment in knowledge capital. Within this model, there are two cost–benefit style computations that are of interest.

First, construct Γ̄ to be a coordinate vector with a one in the entry corresponding to the energy input. This energy input enhances current-period output, but it also induces warming. The marginal social cost of this dirty energy source is

-V_x \cdot \hat{\mu}_\gamma \bar{\Gamma}.

By converting the energy input into units of the implied CO2 emissions, this measurement becomes the commonly discussed social cost of carbon. We refer to it as social because it includes costs that are not internalized by decentralized markets in the absence of targeted taxes. The marginal benefit measures the contribution of the energy input into production, which in turn induces increases in consumption. Formally, the resulting marginal benefit is measured by

\hat{U}_\gamma \cdot \bar{\Gamma}

using the same choice of Γ̄. By design, our representation reveals the important components of the social cost of carbon.

As a second computation with this same model, observe that a larger stock of knowledge makes technology discovery of a new clean energy source more likely. This is the source of a benefit to investment in green research and development. This investment is costly because it reduces the output available for other activities such as consumption or investment.

We now let Γ¯ be a coordinate vector with a one in the entry corresponding to investment in green research and development. In this application,

V_x \cdot \hat{\mu}_\gamma \bar{\Gamma}

is a measure of the marginal benefit of investing in the stock of knowledge, whereas \hat{U}_\gamma \cdot \bar{\Gamma} captures the marginal cost.

In conclusion, our representation provides insights into what contributes intertemporally to the social value of investment in green research and development.

2.3. An Additive Decomposition of the Marginal Values.

The formula for DV·λ opens the door to a decomposition by states. Suppose that we initialize Λ0=λ where λ is a coordinate that isolates the initial state of interest for the marginal valuation. Then Λt measures how a perturbation in this initial state affects the entire state vector at future dates. Each state variable process thus gives a channel by which the change in an initial state impacts valuation. We use these channels to give an additive decomposition of the marginal valuation.

This valuation decomposition is based on an additive decomposition of the stochastic flows from each of the N different states. Write

DU = \begin{bmatrix} U_{x_1} \\ \vdots \\ U_{x_N} \end{bmatrix} \qquad \text{and} \qquad \Lambda_t = \begin{bmatrix} \Lambda_{1,t} \\ \vdots \\ \Lambda_{N,t} \end{bmatrix}.

Then the stochastic flow is decomposed as

DU(X_t) \cdot \Lambda_t = \sum_{j=1}^{N} U_{x_j}(X_t)\, \Lambda_{j,t}.

While the choice of initial condition λ determines the marginal valuation of interest, the subscript j determines the state variable channel that contributes to this marginal valuation.

2.4. Decomposition of Values.

This additive decomposition of flows implies a corresponding decomposition of values.

DV(x) \cdot \lambda = \sum_{j=1}^{N} V_j(x) \cdot \lambda, [9]

where:

V_j(x) \cdot \lambda \overset{\text{def}}{=} \delta\, E\left[ \int_0^\infty \exp(-\delta t)\, U_{x_j}(X_t)\, \Lambda_{j,t}\, dt \;\middle|\; X_0 = x,\; \Lambda_0 = \lambda \right].

The terms in this marginal valuation decomposition reflect the impact of changes in an initial state (specified by the choice of λ) on the alternative states in the future, as they contribute to the valuation.
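This channel decomposition can be sketched numerically on an assumed 2-D linear model of our own choosing (illustrative, not from the paper): with DU constant, each channel value V_j is estimated from the same simulated (X, Λ) paths, and the channel values sum to the total marginal valuation by construction, as in Eq. 9.

```python
import numpy as np

# Hypothetical 2-D linear system (illustration only):
# dX_t = A X_t dt + s dW_t, U(x) = x1 + x2, so DU = (1, 1).  Perturbing
# state 2 (lambda = e2) affects future values of both states because
# A[0, 1] != 0; each state contributes one flow channel to [9].
rng = np.random.default_rng(2)
A = np.array([[-1.0, 0.5],
              [0.0, -0.8]])
s = np.array([0.1, 0.1])
delta = 0.5
x0, lam = np.array([1.0, 1.0]), np.array([0.0, 1.0])
T, dt, n_paths = 10.0, 0.01, 2000

X = np.tile(x0, (n_paths, 1))
Lam = np.tile(lam, (n_paths, 1))
V_channel = np.zeros((n_paths, 2))    # one accumulator per state channel
for k in range(int(T / dt)):
    w = delta * np.exp(-delta * k * dt) * dt
    V_channel += w * Lam              # U_{x_j} = 1, so flow_j = Lambda_{j,t}
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X += X @ A.T * dt + s * dW[:, None]
    Lam += Lam @ A.T * dt             # D mu = A, constant sigma

V_j = V_channel.mean(axis=0)
total = V_j.sum()                     # equals DV(x)·lambda by Eq. 9
print(V_j, total)
```

Here the first entry of V_j quantifies how much of the marginal valuation of state 2 flows through its effect on future values of state 1, the interaction the decomposition is designed to expose.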

As a motivational example, some measurements of the social cost of climate change leave out the stock of knowledge associated with investments in research and development, or possibly other suboptimal government policy responses to future changes in climate. This decomposition helps us understand both the empirical challenges and the potential drawbacks of ignoring some of the states that interact with climate change.

3. Robust Marginal Valuation

So far, the only form of uncertainty that we have considered is a narrowly defined notion of risk. Risk captures unknown outcomes with known probabilities, such as Brownian motion risk. We are influenced by Knight's assertion (ref. 28) that:

  • Uncertainty must be taken in a sense radically distinct from the familiar notion of risk, ... and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating.

Such concerns appear in a variety of discussions of policy challenges, including climate change, with a recent example being ref. 11. However, the ramifications for marginal valuation have not been formally addressed, a topic we consider next.

We describe formally how to incorporate uncertainty adjustments that relax confidence in the baseline specification of the Brownian motion and instead allow for baseline probabilities to be misspecified. Conveniently, we will show that these adjustments are captured by a change in the baseline probability distribution, a mathematical device that is familiar from derivative claims valuation and financial engineering, but the uncertainty-adjusted probability is constructed in an entirely different way.

Although there are a variety of lines of attack that have been suggested to confront “deep uncertainty,” we focus on three interrelated approaches. In all cases, we adopt a recursive formulation that leads to straightforward modifications of the Feynman-Kac equations. Although the approaches differ in terms of the structure that is imposed on the uncertainty, all three of them use recursive versions of variational preferences. (See ref. 29 for an axiomatic defense of these preferences.) We note that, despite the commonality in their mathematical structure that we exploit, there are important conceptual differences among the three.

3.1. Robust Valuation with Model Uncertainty.

We follow refs. 30–32 by exploring changes in the baseline probability specification, under which W is assumed to be a multivariate standard Brownian motion.

Following the well-known Girsanov characterization of positive martingales, we parameterize the local evolution with the process H and consider a martingale M such that

dM_t = M_t\, H_t \cdot dW_t, \qquad M_0 = 1.

Under the resulting probability measure, W satisfies

dW_t = H_t\, dt + dW_t^{H}, [10]

where {WtH:t0} is a standard Brownian motion under the probability specification induced by the martingale.

The decision maker entertains these alternative probabilities subject to a relative entropy penalization. Expressed with the new Brownian motion, the dynamics of the state process are

dX_t = \left[ \mu(X_t) + \sigma(X_t) H_t \right] dt + \sigma(X_t)\, dW_t^{H}.

Since the alternative probability remains locally normally distributed, the local contribution to the log-likelihood ratio between the processes evolves as

d \log M_t = H_t \cdot dW_t - \tfrac{1}{2}\, H_t \cdot H_t\, dt = \tfrac{1}{2}\, H_t \cdot H_t\, dt + H_t \cdot dW_t^{H},

which has a local mean of \tfrac{1}{2} H_t \cdot H_t under the alternative distribution. This gives the local contribution to the relative entropy.
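The martingale and entropy calculations above can be sketched numerically; the constant scalar distortion h below is an illustrative assumption, not from the paper. Under the baseline measure, E[M_T] = 1, and weighting by M_T evaluates expectations under the distorted measure, so E[M_T log M_T] recovers the relative entropy ½h²T.

```python
import numpy as np

# Sketch of the change of measure with a hypothetical constant drift
# distortion H_t = h for a scalar Brownian motion (illustration only):
# M_T = exp(h*W_T - 0.5*h**2*T).
rng = np.random.default_rng(3)
h, T, n_paths = 0.3, 1.0, 200_000

W_T = rng.normal(0.0, np.sqrt(T), n_paths)
M_T = np.exp(h * W_T - 0.5 * h**2 * T)

mean_M = M_T.mean()                   # ~ 1: M is a martingale
# Weighting by M_T evaluates expectations under the distorted measure:
entropy = np.mean(M_T * np.log(M_T))  # ~ 0.5*h**2*T
print(mean_M, entropy)
```

With a state-dependent H_t, M_t would be accumulated along simulated paths via the SDE above rather than in closed form.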

To make a robust adjustment to the valuation, we consider the value function:

V(x) = \min_H\; E\left[ \int_0^\infty \exp(-\delta t)\, M_t \left[ \delta U(X_t) + \frac{\xi}{2}\, H_t \cdot H_t \right] dt \;\middle|\; X_0 = x,\; M_0 = 1 \right], [11]

where the expectation is taken under the original probability distribution with the change in probability measure embedded in the positive martingale M. The positive parameter ξ determines the magnitude of the penalty for deviating from the baseline probability specification. Smaller values of ξ imply more aversion to misspecification uncertainty on the part of the decision maker. The state variable M induces a linear scaling of the value function, which we write as mV. Specifically, mV solves the Hamilton–Jacobi–Bellman (HJB) equation:

0 = \delta m U - \delta m V + m\, \mathcal{A} V + \min_h \left[ m\, DV \cdot \sigma h + m\, \frac{\xi}{2}\, h \cdot h \right].

Since the positive variable m scales the HJB equation proportionately, we instead focus on the equation for V, which also satisfies an HJB equation§§

0 = \delta U - \delta V + \mathcal{A} V + \min_h \left[ DV \cdot \sigma h + \frac{\xi}{2}\, h \cdot h \right]. [12]

Notice that the minimization problem has the quasi-analytic solution:

h^* = -\frac{1}{\xi}\, \sigma^\top DV. [13]

With this in hand, we find that the minimized objective solves the HJB:

0 = \delta U - \delta V + \mathcal{A} V + (\sigma h^*) \cdot DV + \frac{\xi}{2}\, h^* \cdot h^*. [14]
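A small numerical check of this minimizer, with randomly generated σ and DV as illustrative stand-ins (not from the paper): h* = −(1/ξ)σ^⊤DV should attain the minimum of the local objective DV·(σh) + (ξ/2)h·h appearing in the HJB equation.

```python
import numpy as np

# Numerical check of the quasi-analytic minimizer (illustration only):
# the local objective from [12] is strictly convex in h, with unique
# minimizer h* = -(1/xi) * sigma.T @ DV.  sigma and DV are arbitrary
# stand-ins; N states, L Brownian motions.
rng = np.random.default_rng(4)
N, L, xi = 3, 2, 5.0
sigma = rng.normal(size=(N, L))
DV = rng.normal(size=N)

def objective(h):
    return DV @ (sigma @ h) + 0.5 * xi * h @ h

h_star = -(1.0 / xi) * sigma.T @ DV
# Any perturbation of h* should raise the objective:
worse = [objective(h_star + d) for d in 0.1 * rng.normal(size=(20, L))]
print(objective(h_star), min(worse))
```

The gradient σ^⊤DV + ξh vanishes exactly at h*, which is the first-order condition used in both approaches below.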

Next, we propose two alternative ways to apply our previous analysis to characterize DV·λ.

3.1.1. Approach 1.

We show here how to use the Envelope Theorem in conjunction with the first-order condition Eq. 13 to eliminate some of the “indirect terms” involving Dh* when we differentiate the HJB equation. Indeed, it follows from Eq. 13 that

0 = (Dh^*)^\top \left[ \sigma^\top DV + \xi h^* \right] = \left[ \sigma\, Dh^* \right]^\top DV + \xi\, (Dh^*)^\top h^*. [15]

As a consequence, the stochastic flow term is DU·λ, as it was without robustness considerations, and the drifts or local means for X and Λ are respectively:

\mu(x) + \sigma(x)\, h^*(x) \qquad \text{and} \qquad D\mu(x)\, \lambda + \left[ D\sigma_1(x) \lambda \;\cdots\; D\sigma_L(x) \lambda \right] h^*(x).

However, with the simplification from the Envelope Theorem, the resulting Λ process ceases to encode the stochastic responses that provide the most substantively interesting interpretation. By imposing the first-order conditions, we dropped the drift contribution σDh* in the evolution of Λ when imposing h*. As a consequence, the resulting Λ process is different and can no longer be interpreted as encoding the stochastic response vectors of interest.

One possibility is that we do not impose Eq. 15 and include [σDh*] as part of the evolution for Λ. In this case, we would also need to add ξ(Dh*)^⊤h* to the stochastic flow term DU·λ.

As a substantively interesting alternative, we consider a second approach, one that takes advantage of the envelope condition and supports the construction of an uncertainty-adjusted probability distribution.

3.1.2. Approach 2.

Suppose that the decision maker considers drift distortions that are perceived not to depend on the endogenous state variables. In other words, the minimization explores altered probability specifications that are beyond the control of the decision maker. To represent this result, we set H̄_t = h*(X̄_t) and construct an artificial process {X̄_t : t ≥ 0} evolving according to

d\bar{X}_t = \left[ \mu(\bar{X}_t) + \sigma(\bar{X}_t)\, h^*(\bar{X}_t) \right] dt + \sigma(\bar{X}_t)\, dW_t^{\bar{H}}, \qquad \bar{X}_0 = \bar{x}. [16]

The original process {X_t : t ≥ 0} is now replaced by

dX_t = \left[ \mu(X_t) + \sigma(X_t)\, h^*(\bar{X}_t) \right] dt + \sigma(X_t)\, dW_t^{\bar{H}}, \qquad X_0 = x. [17]

This construction captures the idea that the drift distortion h* is specified exogenously and contributes to the marginal valuation only through a change in the probability measure. Although the process H̄ still depends on the outcome of the minimization problem, for the purposes of evaluation, the decision maker does not view it as a function of the endogenous state variables.¶¶ By design, the resulting marginal valuation is the same as in Approach 1, but the interpretation is altered. See SI Appendix, section 3 for a formal elaboration.

With this in mind, the martingale used to represent the change in probability evolves as

d\bar{M}_t = \bar{M}_t\, \bar{H}_t \cdot dW_t, \qquad \bar{M}_0 = 1.

The value function V̄ = V̄(x, x̄) given by

\bar{V}(x, \bar{x}) = E\left[ \int_0^\infty \exp(-\delta t)\, \bar{M}_t \left[ \delta U(X_t) + \frac{\xi}{2}\, h^*(\bar{X}_t) \cdot h^*(\bar{X}_t) \right] dt \;\middle|\; X_0 = x,\; \bar{X}_0 = \bar{x},\; \bar{M}_0 = 1 \right] [18]

solves the modified HJB equation:

0 = \delta U(x) - \delta \bar{V}(x, \bar{x}) + \bar{\mathcal{A}} \bar{V}(x, \bar{x}) + \frac{\xi}{2}\, h^*(\bar{x}) \cdot h^*(\bar{x}), [19]

where \bar{\mathcal{A}} is constructed with composite drift and Brownian exposure matrices given by

\bar{\mu}(x, \bar{x}) = \begin{bmatrix} \mu(x) + \sigma(x)\, h^*(\bar{x}) \\ \mu(\bar{x}) + \sigma(\bar{x})\, h^*(\bar{x}) \end{bmatrix} \qquad \text{and} \qquad \bar{\sigma}(x, \bar{x}) = \begin{bmatrix} \sigma(x) \\ \sigma(\bar{x}) \end{bmatrix}.

Since we are only interested in the stochastic responses associated with X, captured by Λ, and not those of X̄, we modify the stochastic response evolution Eq. 4:

d\Lambda_t = D\mu(X_t)\, \Lambda_t\, dt + \left[ D\sigma_1(X_t) \Lambda_t \;\cdots\; D\sigma_L(X_t) \Lambda_t \right] h^*(\bar{X}_t)\, dt + \left[ D\sigma_1(X_t) \Lambda_t \;\cdots\; D\sigma_L(X_t) \Lambda_t \right] dW_t^{\bar{H}}, [20]

which does not depend on the stochastic responses of X̄, only on the process X̄ itself.

This construction and analysis justifies the robust extension of Eq. 5:

\bar{V}_x(x, x) \cdot \lambda = \delta\, E\left[ \int_0^\infty \bar{M}_t \exp(-\delta t)\, U_x(X_t) \cdot \Lambda_t\, dt \;\middle|\; X_0 = \bar{X}_0 = x,\; \Lambda_0 = \lambda \right] = \delta\, \tilde{E}\left[ \int_0^\infty \exp(-\delta t)\, U_x(X_t) \cdot \Lambda_t\, dt \;\middle|\; X_0 = x,\; \Lambda_0 = \lambda \right], [21]

where Ẽ is an uncertainty-adjusted expectation implied by the joint stochastic evolution given in Eqs. 16 and 20 induced by the martingale M̄, along with the initial condition X̄_0 = x. Notice that we dropped ξ[Dh(x̄)]′h(x̄) from the discounted objective, which is permitted because h(x̄) depends only on x̄ and not on x.

This probability measure does not represent the beliefs of the decision maker, but rather a mathematical construct that is used as a device to support robust decision-making and the corresponding marginal valuation. The outcome of the minimization problem given in Eq. 11 provides the ingredients to construct this probability measure. We refer to this as an uncertainty-adjusted probability measure. The decomposition described in Section 2.4 applies here as well, but with this change in the probability measure.

Remark 3.1:

Although we are not claiming that the state dynamics emerge from an optimal control problem, this development does extend to robust control problems. Here, we appeal to results from ref. 34 on two-player stochastic differential games and the existence of a corresponding dynamic programming principle.## Importantly, we do not impose the simplification from an Envelope Theorem counterpart to the maximization when supporting the resulting representation. Instead, we assume that μ and σ in the state dynamics include the recursive, robustly optimal solution for the maximizing controls.

Remark 3.2:

The analysis described here can be extended to provide a robust interpretation of a well-known family of recursive utility models initiated by Kreps and Porteus (36) and Epstein and Zin (37). They are constructed from a homogeneous-of-degree-one continuation value recursion that includes a risk adjustment to the continuation value of consumption. Our analysis is directly applicable to the case of a unitary elasticity of intertemporal substitution in consumption. In addition, one may impose a unitary risk aversion in conjunction with a preference for robustness, as we describe in this subsection, to reinterpret the risk aversion presumed in the recursive utility literature. This alternative interpretation of risk aversion has an antecedent in the link in control theory between risk-sensitive and robust control, first suggested by Jacobson (30). This reinterpretation carries over to recursive utility specifications in which the elasticity of intertemporal substitution differs from one, but the analysis then no longer starts from a discounted expected utility function. Nevertheless, our approach extends in a straightforward manner to this case by altering δ to be state dependent in the FK equation for DV. See SI Appendix, section 2 for an elaboration.

3.2. Adding Structure to the Analysis.

Building on an approach suggested by Chen and Epstein (38) and Hansen and Sargent (13), we show how to add more structure to the exploration of potential misspecification.

Consider the following relative entropy contribution to the objective Eq. 11:

(δ/2) E( ∫₀^∞ exp(−δt) M_t H_t·H_t dt | X_0 = x ), [22]

where the process {M_t : t ≥ 0} evolves as in Eq. 10, and where we set ξ = δ as a convenient normalization. We refer to the contribution Eq. 22 to the objective as exponentially weighted relative entropy. The limiting version of this as δ goes to zero typically gives a measure of relative entropy pertinent to continuous-time Donsker–Varadhan large deviation theory.
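As a sanity check on this entropy measure: when the drift distortion is a constant vector h, E[M_t] = 1 at every date, so Eq. 22 collapses to (1/2) h·h, up to finite-horizon truncation. A small simulation sketch (all coefficients are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
h = np.array([0.15, -0.10])          # constant drift distortion (hypothetical)
delta, dt, T, n = 0.2, 0.01, 30.0, 20_000
steps = int(T / dt)

logM = np.zeros(n)                   # log of the exponential martingale M_t
acc = np.zeros(n)                    # accumulates (delta/2) e^{-delta t} M_t (h.h) dt
for k in range(steps):
    acc += 0.5 * delta * np.exp(-delta * k * dt) * np.exp(logM) * (h @ h) * dt
    dW = np.sqrt(dt) * rng.standard_normal((n, 2))
    logM += dW @ h - 0.5 * (h @ h) * dt   # d log M = H.dW - (1/2)|H|^2 dt

mc = acc.mean()
assert abs(mc - 0.5 * (h @ h)) < 1e-3     # Eq. 22 reduces to |h|^2 / 2 here
```

With state-dependent H_t = ĥ(X_t), the same accumulation would instead be summarized by the function K that solves the FK equation below.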

For a prespecified H_t = ĥ(X_t), exponentially weighted discounted relative entropy could be computed by solving the FK equation:

0 = (δ/2) ĥ·ĥ − δK + AK + DK·(σĥ). [23]

Instead of an equation that determines the function K, we use K to constrain potential drift distortions. We require that there be at least one ĥ that solves Eq. 23, but there will typically be many such choices. We expand the possibilities by converting this equation into an inequality and construct the constraint set:

S(x) = { s : (1/2) s·s + (1/δ) DK(x)·[σ(x)s] ≤ K(x) − (1/δ) AK(x) }.

Thus, when considering alternative models, we constrain the time derivative of discounted relative entropy. Structure is added to the robust adjustment by using the function K to represent the constraint set. For a given state realization, x, the set of resulting s’s is an ellipsoid centered at −σ(x)′DK(x)/δ and is computationally easy to impose.
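Concretely, completing the square in the defining inequality delivers the center and radius of S(x) in closed form. A sketch with hypothetical values of K(x), (AK)(x), and σ(x)′DK(x) at a single state (none of these numbers come from the text):

```python
import numpy as np

# Illustrative inputs at one state x (hypothetical, not calibrated):
delta = 0.02
K_x, AK_x = 1.5, 0.02          # K(x) and (A K)(x)
b = np.array([0.3, -0.1])      # sigma(x)' DK(x), the loading in the linear term

c = b / delta                  # coefficient on s in the linear term
center = -c                    # completing the square: (1/2)|s + c|^2 <= rhs
rhs = K_x - AK_x / delta + 0.5 * c @ c
radius = np.sqrt(2.0 * rhs)    # S(x) is a ball of this radius around `center`

def in_S(s):
    # Direct check of the defining inequality for S(x)
    return 0.5 * s @ s + c @ s <= K_x - AK_x / delta

u = np.array([1.0, 0.0])
assert in_S(center)                              # the center is feasible
assert in_S(center + 0.999 * radius * u)         # just inside the boundary
assert not in_S(center + 1.001 * radius * u)     # just outside the boundary
```

Because membership reduces to a single quadratic inequality, imposing S(x) inside the HJB minimization below amounts to a projection onto a ball.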

With this extra structure, this robust valuation is reflected in the HJB equation:

0 = δU − δV + AV + min_{s∈S(x)} DV·(σs) = δU − δV + AV + DV·(σs*), [24]

where s* solves the minimization problem. At this juncture, we may imitate the less structured robust control approach developed in Section 3.1 by setting

h(x) = s*(x),

and using the approach in Section 3.1.2 to construct an alternative probability measure that adjusts the marginal valuations for the model uncertainties. There are minor changes in the value function constructions because we are now solving a constrained problem rather than a penalized one. But the underlying argument still applies. The decomposition of Subsection 2.4 continues to hold as well. See SI Appendix, section 4 for a formal elaboration. As is described there, we impose some additional restrictions when establishing the counterpart to the approach in Section 3.1.2.

Since the constraint set S(x) changes as an explicit function of the state, it “separates” over different calendar dates. As a consequence, this is a special case of a more general approach proposed by Chen and Epstein (38). The robust adjustment to marginal valuation applies to other types of state-dependent constraints as well. As in the robustness formulation described in Section 3.1, the resulting separation over time allows the underlying preferences of decision makers to be dynamically consistent. As we have seen, the example here has a direct connection to the approach in the previous subsection through the use of relative entropy or Kullback–Leibler divergences.

3.3. Adding Structure via Learning or Filtering.

We describe another approach to add structure based on learning or filtering. We work with a Markov model with a hidden state vector. We write this system as

dYt=ν(Yt,Zt)dt+ς(Yt)dWt,

where the process Y is observed by the decision maker, the process Z is hidden, and the matrix ςς′ is nonsingular (uniformly elliptic). We assume that W is a Brownian motion adapted to the filtration G generated by the composite process (Y,Z), and we denote by F the smallest filtration generated by the Y process of observable states, which is the filtration pertinent to the decision maker.

The hidden state variable Z adds structure to the robustness adjustments by solving a filtering or learning problem with drift:

ν̂_t = E( ν(Y_t, Z_t) | F_t )

and “innovation” process W^, which is the multivariate Brownian motion adapted to the F filtration given by

dŴ_t = [ς(Y_t)ς(Y_t)′]^{−1/2} ( dY_t − ν̂_t dt ) = [ς(Y_t)ς(Y_t)′]^{−1/2} ( [ν(Y_t,Z_t) − ν̂_t] dt + ς(Y_t) dW_t ).

3.3.1. Simple illustration.

We consider the case of a time-invariant hidden state that takes on a finite number of values to illustrate the robustness adjustment. We may think of this as the case of a finite number of possible models, equivalently, a parameter vector that takes on a finite number of values.

Without loss of generality, we may think of realizations of Z as coordinate vectors with dimension I. Moreover, we may write:

ν(y,z) = κ(y)z.

Note that:

Ẑ_t ≝ E( Z_t | F_t )

is the vector of conditional probabilities for a set of I possible models. By essentially applying a conditional version of least squares, we obtain the following recursive learning equation:

dẐ_t = [ diag(Ẑ_t) − Ẑ_t Ẑ_t′ ] κ(Y_t)′ [ς(Y_t)ς(Y_t)′]^{−1} ( dY_t − κ(Y_t)Ẑ_t dt ),

where diag(ẑ) is the diagonal matrix with the entries of ẑ on the diagonal. The state vector for the decision maker now becomes

X_t = [ Y_t ; Ẑ_t ],

where:

dY_t = κ(Y_t)Ẑ_t dt + [ς(Y_t)ς(Y_t)′]^{1/2} dŴ_t

and the constructed Brownian increment dŴ_t under the filtration F satisfies:

dŴ_t = [ς(Y_t)ς(Y_t)′]^{−1/2} ( κ(Y_t)Z_t dt − κ(Y_t)Ẑ_t dt + ς(Y_t) dW_t ).
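A discrete-time Euler approximation of this recursion makes the mechanics concrete. The sketch below (all coefficients hypothetical) tracks conditional probabilities over I = 3 candidate drift models for a scalar observation process and checks that they concentrate on the data-generating model:

```python
import numpy as np

rng = np.random.default_rng(1)
kappa = np.array([-0.5, 0.0, 0.5])   # drift of Y under each of I = 3 models (hypothetical)
sig = 0.4                            # scalar varsigma, so varsigma varsigma' = sig**2
true_model = 2
dt, n_steps = 0.01, 20_000

z_hat = np.full(3, 1.0 / 3.0)        # flat prior over the candidate models
for _ in range(n_steps):
    dY = kappa[true_model] * dt + sig * np.sqrt(dt) * rng.standard_normal()
    # dZ = [diag(z) - z z'] kappa' (varsigma varsigma')^{-1} (dY - kappa z dt)
    gain = (np.diag(z_hat) - np.outer(z_hat, z_hat)) @ kappa / sig**2
    z_hat = z_hat + gain * (dY - (kappa @ z_hat) * dt)
    z_hat = np.clip(z_hat, 1e-12, None)
    z_hat = z_hat / z_hat.sum()      # guard against discretization drift off the simplex

assert abs(z_hat.sum() - 1.0) < 1e-9
assert z_hat.argmax() == true_model  # probabilities concentrate on the generating model
```

The update responds only to the innovation dY_t − κ(Y_t)Ẑ_t dt, in line with the Ŵ construction above; the final clipping and renormalization are numerical guards for the Euler scheme, not part of the continuous-time recursion.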

Building on the smooth ambiguity decision model of refs. 39–41, we next use robustness considerations in conjunction with variational utility as developed in ref. 29. The value function V = V(x) solves the HJB equation:

0 = δU(y) − δV(x) + AV(x) + M(DV, x),

where A is the infinitesimal generator for the composite X process and

M(p,x) ≝ min_{π_i ≥ 0, Σ_i π_i = 1} [ p·σ(x)[ς(y)ς(y)′]^{−1/2}κ(y)(π − ẑ) + ξ Σ_i (log π_i − log ẑ_i) π_i ]. [25]

Conveniently, the minimization problem on the right side of Eq. 25 has a quasi-analytical solution, which we report in SI Appendix, section 5.1.
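The objective in Eq. 25 is linear in π plus a scaled relative-entropy penalty against ẑ, and a standard calculation (sketched here, not taken from the SI) gives the minimizer as an exponential tilting of the baseline probabilities. A numerical illustration in which the loadings a_i stand in for the term multiplying π and are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
xi = 0.5
a = np.array([0.8, -0.3, 0.1])       # stand-in loadings multiplying pi (hypothetical)
z_hat = np.array([0.5, 0.3, 0.2])    # current conditional model probabilities

def objective(pi):
    # linear term plus xi times relative entropy of pi against z_hat, as in Eq. 25
    return a @ (pi - z_hat) + xi * np.sum(pi * np.log(pi / z_hat))

# Exponential tilting of the baseline probabilities solves the minimization
pi_star = z_hat * np.exp(-a / xi)
pi_star /= pi_star.sum()

# pi_star should (weakly) beat every other point of the probability simplex
for _ in range(1000):
    trial = rng.dirichlet(np.ones(3))
    assert objective(pi_star) <= objective(trial) + 1e-12
```

The closed form follows from the first-order conditions with a multiplier on Σ_i π_i = 1, which is why the minimization is computationally cheap even with many candidate models.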

To relate this to our previous analysis, we introduce the drift distortion:

h(x,π) = [ς(y)ς(y)′]^{−1/2} κ(y)(π − ẑ),

parameterized by π with entries that sum to one, and view:

ξ Σ_i (log π_i − log ẑ_i) π_i [26]

as a divergence measure in π. This perspective allows us to imitate the approach in Section 3.1.2 to construct an alternative probability measure that adjusts the marginal valuations for the model uncertainties. The decomposition outlined in Subsection 2.4 continues to hold as well.

3.3.2. Smooth ambiguity and misspecification.

Learning is absent from the previous two formulations, reflecting the view that future misspecifications may not be reflected in past evidence. The perspective in this subsection is different. In particular, the featured example incorporates a form of learning through the use of realizations ẑ of Ẑ_t in the divergence criterion Eq. 26. The decision maker uses the baseline probability specification to solve the parameter learning problem and then entertains a restricted form of misspecification, again captured by a drift distortion along with a corresponding divergence penalty.

While this example has considerable pedagogical simplicity, it has direct extensions to accommodate a more general parameter space or to allow for time variation as a discrete-state Markov process, as illustrated in SI Appendix, section 5.2. It also has extensions to other hidden-state specifications, including both Kalman-Bucy filters and the Zakai equations.

Remark 3.3:

The smooth ambiguity model takes the log-exp contribution to M(DV,x) as a direct adjustment for ambiguity aversion expressed as a certainty equivalent. It implies a robust adjustment to the learning solution, yielding an uncertainty adjustment to the probabilities that is pertinent for marginal valuation. (See ref. 42 for a complementary approach to linking smooth ambiguity to robustness in statistics.)

Other forms of structured uncertainty with analogous implications are also possible, depending upon the application. Moreover, structured and unstructured valuation adjustments can be combined, as suggested by Hansen and Sargent (13), to form a composite uncertainty-adjusted probability measure.

4. Infrequent Jumps with Endogenous Intensities

We next consider jump processes. We are particularly interested in jumps that occur infrequently and with intensities that depend on potentially endogenous state variables. These jumps could be tipping points in an economic, social, or political system; or they could be dramatic successes in technological advancement induced by research and development that are highly uncertain, with payoffs that could occur many years or decades in the future.

We take the view that the jumps are terminal dates for a diffusion specification, with payoffs, should a jump occur, that depend on postjump continuation values. In applications, these postjump continuation values may themselves be deduced endogenously; but we take them as input into the prejump analysis. In summary, we adopt two conventions:

  • a)

    We focus on representing values in a current jump state, taking continuation value functions for other states as input into our analysis.

  • b)

    We use a baseline probability without jumps and incorporate jump probabilities as a form of state-dependent discounting in the future.

Although we focus on the “current jump state,” we could repeat the analysis for each possible jump state. The second convention allows us to preserve our previous analysis, except that we modify the marginal utility contribution DU based on the marginal jump possibilities.

Example 4.1:

We revisit the climate–economics example in SI Appendix, section 6. The example model specification has two types of Poisson jumps: a damage curve realization jump, for which there are L−1 possibilities, and a technology jump that eliminates the need for carbon emissions. The L−1 damage realization curves are treated as equally likely under the baseline probabilities, given that a damage curve realization jump takes place. All L jumps have intensities that depend on endogenous state variables: either temperature or the stock of knowledge.

4.1. Representing Marginal Valuations Abstracting from Robustness.

For simplicity, we consider a finite number of state-dependent jump intensities, J_ℓ(x) for ℓ = 1, ..., L. Associated with each is a postjump value function, V_ℓ. This specification nests as a special case one for which there is a single jump but a distribution over jump locations. We treat the probability conditioned on a jump times the jump intensity as an intensity for each of the possible jump locations. The FK equation for the jump-diffusion model is

0 = δU − δV + AV + Σ_{ℓ=1}^L J_ℓ (V_ℓ − V). [27]

Arguing as before, we find that DV·λ solves the equation:

0 = δDU·λ − ( δ + Σ_{ℓ=1}^L J_ℓ(x) ) DV·λ + Σ_{ℓ=1}^L [ DJ_ℓ(x)·λ ] [ V_ℓ(x) − V(x) ] + Σ_{ℓ=1}^L J_ℓ(x) DV_ℓ(x)·λ,

and admits the representation:

DV(x)·λ = E( ∫₀^∞ Δ_t Φ_t dt | X_0 = x, Λ_0 = λ ), [28]

with the expectation in Eq. 28 computed under the baseline diffusion dynamics without jumps.

In Eq. 28,

Δ_t ≝ exp( −∫₀^t [ δ + Σ_{ℓ=1}^L J_ℓ(X_u) ] du ),

is the modification of the discount factor process in a potentially state-dependent way to adjust for the jump probabilities. The stochastic flow term Φt has three components:

Φ_t = Φ_t^1 + Φ_t^2 + Φ_t^3,

where, as before, Λ_t = (∂X_t/∂x)λ,

Φ_t^1 ≝ δ Λ_t·DU(X_t),  Φ_t^2 ≝ Λ_t·Σ_{ℓ=1}^L DJ_ℓ(X_t) [ V_ℓ(X_t) − V(X_t) ],  and  Φ_t^3 ≝ Σ_{ℓ=1}^L J_ℓ(X_t) Λ_t·DV_ℓ(X_t).

The expression for Φ_t^1 captures the utility potential, the one for Φ_t^2 the marginal impact of a jump, and the one for Φ_t^3 the marginal valuation should a jump occur. The flow contribution Φ_t^2 comes into play when the jump intensities depend on endogenous state variables. While Φ_t^1 measures the contributions of the current period to the valuation through the contribution of the flow utility to preferences, Φ_t^3 quantifies the future contributions reflected in the postjump value functions.

4.2. Additive Value Decomposition.

The additive flow contribution implies a corresponding additive value decomposition:

DV(x)·λ = [ V^1(x) + V^2(x) + V^3(x) ]·λ,

where:

V^j(x)·λ ≝ E( ∫₀^∞ Δ_t Φ_t^j dt | X_0 = x, Λ_0 = λ ).

Again, the flow terms can be decomposed by alternative state contributions, which can be particularly revealing in dynamic stochastic systems with state interactions. The two new jump contributions to the stochastic flow process {Φ_t : t ≥ 0} can also be decomposed additively by jump types ℓ.
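To illustrate Eqs. 27 and 28 and this decomposition in the simplest possible setting, consider a deterministic scalar toy model (all coefficients hypothetical): dX_t = −aX_t dt, U(x) = x, constant intensities J_ℓ (so Φ² vanishes), and linear postjump values V_ℓ(x) = b_ℓ x, so that both sides can be computed directly:

```python
import numpy as np

# Deterministic scalar toy model (all coefficients hypothetical):
# dX = -a X dt, U(x) = x, constant intensities J_l, postjump values V_l(x) = b_l x.
delta, a = 0.05, 0.10
J = np.array([0.02, 0.01])
b = np.array([0.6, 1.4])

# Guessing V(x) = v x in the FK equation (Eq. 27) pins down the slope v = DV:
v = (delta + J @ b) / (delta + a + J.sum())

# Representation (Eq. 28): DV(x).lambda = integral of Delta_t * Phi_t dt
t = np.linspace(0.0, 400.0, 400_001)
lam = 1.0
Lam = np.exp(-a * t) * lam                 # stochastic response is deterministic here
Delta = np.exp(-(delta + J.sum()) * t)     # jump-adjusted discounting
Phi1 = delta * Lam                         # flow-utility term (DU = 1)
Phi3 = (J @ b) * Lam                       # postjump marginal-value term
integrand = Delta * (Phi1 + Phi3)          # Phi2 = 0: intensities are state-independent
dv = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

assert abs(dv - v) < 1e-4                  # quadrature matches the FK solution
```

In richer models the quadrature is replaced by Monte Carlo over the diffusion paths, and the Φ² term reappears once the intensities depend on endogenous states.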

Remark 4.2:

In applications for which the continuation value functions V_ℓ are computed by solving equations with the same mathematical structure, we can obtain a corresponding representation for the partial derivatives of the continuation value functions.

4.3. Robust Adjustment to the Jump Intensities.

Following ref. 43, we add robustness to the potential misspecification of the jump intensities by entertaining alternative scalings, g_ℓ, of these intensities, where g_ℓ ≥ 0 alters the intensity of type ℓ. Given the additive structure by which the jump intensities enter Eq. 27, we deduce the robust adjustment for each intensity separately by solving, for each ℓ = 1, 2, ..., L, the minimization problem:

min_{g_ℓ} J_ℓ g_ℓ (V_ℓ − V) + ξ J_ℓ (1 − g_ℓ + g_ℓ log g_ℓ). [29]

In Eq. 29, the term:

J_ℓ (1 − g_ℓ + g_ℓ log g_ℓ)

measures the conditional relative entropy of the jump intensity change induced by the scaling. The minimizing g_ℓ is

g_ℓ* = exp( −(1/ξ)(V_ℓ − V) )

and the minimized objective is given by

ξ J_ℓ ( 1 − exp( −(1/ξ)(V_ℓ − V) ) ). [30]

The minimized objective is increasing and concave in the value function difference V_ℓ − V.
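The closed-form minimizer in Eq. 29 and the minimized objective in Eq. 30 can be verified numerically; a sketch with hypothetical values of ξ, J_ℓ, and the value gap V_ℓ − V:

```python
import numpy as np

xi = 1.0
J_l = 0.05                           # baseline intensity of jump type l (hypothetical)
dV = np.linspace(-2.0, 2.0, 9)       # candidate value gaps V_l - V

def robust_objective(g, d):
    # J_l g (V_l - V) + xi J_l (1 - g + g log g), per Eq. 29
    return J_l * g * d + xi * J_l * (1.0 - g + g * np.log(g))

g_grid = np.linspace(1e-4, 10.0, 200_000)
for d in dV:
    g_star = np.exp(-d / xi)                       # closed-form minimizer
    best_on_grid = g_grid[np.argmin(robust_objective(g_grid, d))]
    assert abs(g_star - best_on_grid) < 1e-3       # matches brute-force grid search
    # minimized objective equals xi J_l (1 - exp(-(V_l - V)/xi)), per Eq. 30
    assert abs(robust_objective(g_star, d) - xi * J_l * (1.0 - np.exp(-d / xi))) < 1e-12
```

The brute-force check also makes the monotonicity visible: a larger value gap V_ℓ − V pushes g_ℓ* below one, shading the intensity of unfavorable jumps upward or downward as the minimization dictates.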

In what follows, we are interested in the derivative of the minimized function in Eq. 30 with respect to the state vector given by

J_ℓ g_ℓ* ( DV_ℓ − DV ),

where g_ℓ* is the minimizer used to alter the jump intensity. With these computations, we are led to modify both the discounting process Δ and the stochastic flow process Φ. The modified Δ process, which now is

Δ_t ≝ exp( −∫₀^t [ δ + Σ_{ℓ=1}^L g_ℓ*(X_u) J_ℓ(X_u) ] du ),

accounts for the altered jump probabilities. The second and third contributions to the flow process become the following:

Φ_t^2 ≝ ξ Λ_t·Σ_{ℓ=1}^L DJ_ℓ(X_t) [ 1 − exp( −(1/ξ)[V_ℓ(X_t) − V(X_t)] ) ]  and  Φ_t^3 ≝ Σ_{ℓ=1}^L J_ℓ(X_t) g_ℓ*(X_t) Λ_t·DV_ℓ(X_t).

Reasoning as in our previous investigation of robustness, since the g_ℓ*'s are obtained by minimization, we ignore their partial derivatives when differentiating the intensities with respect to the states. With these changes to Δ, Φ^2, and Φ^3, we obtain the robust counterpart of the decomposition described in Subsection 4.2.

Remark 4.3:

We may also include robustness considerations for the diffusion specification, as described in Section 3.1, in the formulation here.

5. Summary

Our representation of intertemporal marginal valuations has four fundamental components:

  • i)

    stochastic responses that are the marginal responses of future variables of interest to a change in an initial time period;

  • ii)

    an uncertainty-adjusted probability measure that accounts for model ambiguity and misspecification concerns;

  • iii)

    discounting that incorporates prospective probabilities of big events or large uncertain changes in the future;

  • iv)

    marginal implications of big events in the future, contributed by marginal jump intensities and continuation values.

Asset-pricing-type representations support two decompositions of marginal valuation. The first reflects the state interdependencies; the second reflects the three types of stochastic flows, two of which are induced by prospective Poisson jumps with state-dependent intensities.

6. Constructing Prudent Policies

We see multiple uses of the conceptual framework that we have delineated in this paper.

First, we have shown how to operationalize approaches to addressing “deep uncertainty” for computing marginal valuations relevant to alternative decisions. Our perspective, consistent with a variety of approaches in control theory and decision theory, is to use probabilistic formulations as an important starting point but to back off from a full commitment to this practice. Instead, we explore ways to relax this full confidence, consistent with the axiomatic formulations of decision theory. We show how this limited confidence is operationalized through the construction of uncertainty-adjusted probability distributions to use when computing values. Some treatments for confronting deep uncertainties use multiple scenarios. Elsewhere, ref. 44 has discussed the limitations and challenges of this approach within the context of central bank climate policy. To be useful as a policy guide, scenarios need both a dynamic structure, as components of uncertainty will get resolved in the future, and some notion of plausibility when there are a vast number of alternative scenarios to consider.

Second, empirically driven approaches to social valuation are significantly more coherent and credible when accompanied by a conceptual framework within which to interpret the evidence. This paper provides such a framework. Many of the valuation measurements from environmental economics and macroeconomics address policy challenges that are explicitly dynamic and arguably confront broadly conceived uncertainties. Some empiricists may lament our dependence on formal modeling with explicit interactions across states. Although modeling such interactions in a credible and tractable way can be very challenging, understanding how they might operate seems central to the design of prudent policies in an uncertain world. On this point, we are very sympathetic to the call from ref. 45 to capture the interactions among the sources of uncertainty when exploring challenges to climate change. Modular approaches like those discussed in ref. 46 simplify cross-disciplinary inputs into marginal valuations but may be problematic if they ignore interactions among state variables.

Third, our marginal value decompositions are designed to help researchers who build explicit dynamic stochastic models to deconstruct the implications when there are many “moving parts” to consider. By design, one of them features the quantitative impact of state variable interactions over time, and the other assesses the long-term impacts of particular courses of action, such as investment in scientific advances that support technologies of the future. (For instance, see ref. 47 for an application of these decompositions to large-project investments in addressing climate change.) The decompositions are applicable to the evaluation of both private sector behavior in decentralized economies and policy consequences. They are applicable to robustly optimal policy specifications and ad hoc alternatives.

Although we offer no simple solutions, we suggest a coherent framework for measuring and assessing policy impacts, embracing a broad notion of uncertainty. Our aim is to support critical examination of current approaches and to open the door to better quantifications in the future. Part of our motivation stems from government critiques of current methods (for a recent example, see ref. 48) and calls for better approaches (for a second example, see ref. 49). Although we do not offer a detailed commentary on these documents here, our framework provides tools to confront some of the challenges they point to. A principal example is our incorporation of broadly based uncertainty adjustments within the valuations themselves. A typical approach to uncertainty quantification is to perform an uncertainty-motivated sensitivity analysis external to the formal realm of decision making. This external perspective, in which uncertainty is assessed outside formal measurement, contrasts with the tools we discuss and the analysis we provide in Section 3. Our approach is meant to internalize at least some aspects of the uncertainty quantification. The importance of this aspect to valuation will depend on how decision makers are or should be averse to the broadly conceived uncertainties. In our view, this perspective strengthens the conceptual foundation and quantitative rigor of marginal valuations and will provide more salient guides to prudent decision-making.

Supplementary Material

Appendix 01 (PDF)

pnas.2520857122.sapp.pdf (301.3KB, pdf)

Acknowledgments

Fernando Alvarez, William Brock, Dennis Epple, Benjamin Hebert, Ken Miyahara Coello, Darrell Duffie, Greg Kaplan, Kevin Murphy, Aleksei Oskolkov, Diana Petrova, Richard Romano, Tom Sargent, Grace Tsiang, Xunyu Zhou, Bowen Dong, Jessie Liao, and Zhaoyang Xu provided helpful comments.

Author contributions

L.P.H. and P.S. performed research and wrote the paper.

Competing interests

The authors declare no competing interest.

Footnotes

Reviewers: D.E., Carnegie Mellon University; and X.Y.Z., Columbia University.

*There is a wide variety of measurements of the social cost of carbon using different approaches; see refs. 2–4 for examples. Ref. 2’s measurements are based on solutions to a planner’s problem that include explicit risk considerations. Ref. 4 uses a more modular approach to incorporate a wide variety of alternative sources of risk. To the extent that their analysis is marginal, it is relative to a suboptimal dynamic allocation. Ref. 3 compares multiple approaches but abstracts from risk and the “inside the decision problem” uncertainties that we explore here.

Refs. 5 and 6 use intertemporal marginal valuations to measure the importance of aggregate uncertainty. Ref. 5 uses this approach to reveal implications of aversion to model misspecification. Alvarez and Jermann (6) use it to address a quantitative challenge posed by Lucas (7) on the welfare consequences of macroeconomic fluctuations. In this paper, we provide revealing representations of marginal valuation that support interpretation and measurement and open the door to critical assessments of alternative applications.

This restriction is known to be too strong for some applications of interest and can be relaxed in alternative ways depending on more specific aspects of the dynamic, stochastic system. One such condition that can sometimes be verified is what is called partial ellipticity. See, for instance, ref. 16 for a discussion.

§When σσ′ is uniformly elliptic, V is a smooth solution to Eq. 3. If σσ′ is degenerate elliptic, then V may not be classical and should be taken to be a viscosity solution to Eq. 3.

It is often the case that there are controls in Eq. 1, in which case the expected discounted value objective is defined as a sup over all possible controls and Eq. 3 becomes a nonlinear pde that can be analyzed using the theory of viscosity solutions.

#Although the calculations here are straightforward, since we are dealing with a linear uniformly elliptic pde, we present them to familiarize the reader with the approach. For the models later in the paper, however, it becomes necessary to use more sophisticated mathematical arguments coming from the theory of nonlinear pdes.

There is a substantial applied macroeconomics literature that engages in “innovation accounting” to assess the impacts of different initial shocks within dynamic linear systems. See ref. 20 for an initial and prominent contribution to this literature. For such applications, the counterpart for the initial λ is the impact vector of one of the initial shocks on the initial state evolution. See ref. 19 for an elaboration.

**In environments with uncertainty, economic measures of costs typically involve stochastic discounting determined endogenously. Constant discounting is instead used to represent preferences expressed as discounted expected utilities. See Remark 2.3 for an illustration of stochastic discounting.

††See, for instance, ref. 23 for more general discussions of infinite horizon, forward–backward SDE’s.

‡‡For pedagogical simplicity, this illustration presumes decision makers’ preferences that are time separable. There is a corresponding marginal utility adjustment for temporally dependent preferences, such as those suggested by Ryder and Heal (24) and many others studying economic dynamics.

§§For a given H, we need to verify ex post that the implied M is a martingale rather than just a local martingale.

¶¶Dynamic economic modelers might see a very rough analogy here between this construction and that of the big K, little k trick sometimes used in representing equilibrium outcomes. See ref. 33, pages 226, 470, and 489, for a textbook discussion of the big K, little k approach.

##See ref. 35 for a further discussion.

Data, Materials, and Software Availability

There are no data underlying this work.

Supporting Information

References

  • 1.Dasgupta P., Mäler K. G., Net national product, wealth, and social well-being. Environ. Dev. Econ. 5, 69–93 (2001). [Google Scholar]
  • 2.Y. Cai, K. L. Judd, T. S. Lontzek, “The social cost of carbon with climate risk. Economic working paper (2017). https://www.hoover.org/sites/default/files/research/docs/18113-judd1.pdf.
  • 3.Nordhaus W. D., Revisiting the social cost of carbon. Proc. Natl. Acad. Sci. U.S.A. 114, 1518–1523 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Rennert K., et al. , Comprehensive evidence implies a higher social cost of CO2. Nature 610, 687–692 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Hansen L. P., Sargent T. J., Tallarini T. D., Robust permanent income and pricing. Rev. Econ. Stud. 66, 873–907 (1999). [Google Scholar]
  • 6.Alvarez F., Jermann U. J., Using asset prices to measure the welfare cost of business cycles. J. Polit. Econ. 112, 1223–1256 (2004). [Google Scholar]
  • 7.Lucas R. E., Models of Business Cycles (Basil Blackwell, New York, 1987). [Google Scholar]
  • 8.Hallegatte S., Shah A., Brown C., Lempert R., Gill S., “Investment decision making under deep uncertainty-application to climate change” (Tech. Rep. 6193, World Bank Policy Research Working Paper, 2012).
  • 9.Maier H. R., et al. , An uncertain future, deep uncertainty, scenarios, robustness and adaptation: How do they fit together? Environ. Model. Softw. 81, 154–164 (2016).
  • 10.Marchau V. A., Walker W. E., Bloemen P. J., Popper S. W., Decision Making Under Deep Uncertainty: From Theory to Practice (Springer Nature, 2019). [Google Scholar]
  • 11.Rising J., Tedesco M., Piontek F., Stainforth D. A., The missing risks of climate change. Nature 610, 643–651 (2022). [DOI] [PubMed] [Google Scholar]
  • 12.Berger L., Marinacci M., Model uncertainty in climate change economics: A review and proposed framework for future research. Environ. Resour. Econ. 77, 475–501 (2020). [Google Scholar]
  • 13.Hansen L. P., Sargent T. J., Structured ambiguity and model misspecification. J. Econ. Theory 199, 105165 (2022). [Google Scholar]
  • 14.Cerreia-Vioglio S., Hansen L. P., Maccheroni F., Marinacci M., Making decisions under model misspecification. Rev. Econ. Stud. forthcoming 2, rdaf046 (2025). [Google Scholar]
  • 15.Yang J., Kushner H. J., A Monte Carlo method for sensitivity analysis and parametric optimization of nonlinear stochastic systems. SIAM J. Control. Optim. 29, 1216–1249 (1991). [Google Scholar]
  • 16.Gobet E., Munos R., Sensitivity analysis using itô-malliavin calculus and martingales, and application to stochastic optimal control. SIAM J. Control. Optim. 43, 1676–1713 (2005). [Google Scholar]
  • 17.R. Frisch, “Propagation problems and impulse problems in dynamic economics” in Economic Essays in Honour of Gustav Cassel, J. Åkerman, Ed. (Allen and Unwin, 1933), pp. 171–205.
  • 18.Fournie E., Lasry J. M., Lebuchoux J., Lions P. L., Touzi N., Applications of Malliavin calculus to Monte Carlo methods in finance. Finance Stochast. 3, 391–413 (1999). [Google Scholar]
  • 19.Borovička J., Hansen L. P., Scheinkman J. A., Shock elasticities and impulse responses. Math. Financial Econ. 8, w20104 (2014). [Google Scholar]
  • 20.Sims C. A., Macroeconomics and reality. Econometrica 48, 1–48 (1980). [Google Scholar]
  • 21.Dorfman R., An economic interpretation of optimal control theory. Am. Econ. Rev. 59, 817–831 (1969). [Google Scholar]
  • 22.Bismut J. M., Growth and optimal intertemporal allocation of risks. J. Econ. Theory 10, 239–257 (1975). [Google Scholar]
  • 23.Peng S., Shi Y., Infinite horizon forward-backward stochastic differential equations. Stoch. Process. Appl. 85, 75–92 (2000). [Google Scholar]
  • 24.Ryder H. E., Heal G. M., Optimal growth with intertemporally dependent preferences. Rev. Econ. Stud. 40, 1–31 (1973). [Google Scholar]
  • 25.Rubinstein M., The valuation of uncertain income streams and the pricing of options. Bell J. Econ. 7, 407–425 (1976). [Google Scholar]
  • 26.Ross S., A simple approach to valuation of risky streams. J. Bus. 51, 453–475 (1978). [Google Scholar]
  • 27.Hansen L. P., Richard S. F., The role of conditioning information in deducing testable restrictions implied by dynamic asset pricing models. Econometrica 50, 587–614 (1987). [Google Scholar]
  • 28.F. H. Knight, Risk, Uncertainty, and Profit (Houghton Mifflin, 1921).
  • 29.Maccheroni F., Marinacci M., Rustichini A., Dynamic variational preferences. J. Econ. Theory 128, 4–44 (2006). [Google Scholar]
  • 30.Jacobson D. H., Optimal stochastic linear systems with exponential performance criteria and their relation to deterministic differential games. IEEE Trans. Autom. Control. AC–18, 124–131 (1973). [Google Scholar]
  • 31.James M. R., Asymptotic analysis of nonlinear stochastic risk-sensitive control and differential games. Math. Control. Signals Syst. 5, 401–417 (1992). [Google Scholar]
  • 32.Hansen L. P., Sargent T. J., Robust control and model uncertainty. Am. Econ. Rev. 91, 60–66 (2001). [Google Scholar]
  • 33.Ljungqvist L., Sargent T. J., Recursive Macroeconomic Theory (MIT Press, Cambridge, MA, ed. 4, 2018). [Google Scholar]
  • 34.Fleming W. H., Souganidis P. E., On the existence of value function of two-player, zero-sum stochastic differential games. Indiana Univ. Math. J. 38, 293–314 (1989). [Google Scholar]
35. Hansen L. P., Sargent T. J., Turmuhambetova G. A., Williams N., Robust control and model misspecification. J. Econ. Theory 128, 45–90 (2006).
36. Kreps D. M., Porteus E. L., Temporal resolution of uncertainty and dynamic choice. Econometrica 46, 185–200 (1978).
37. Epstein L. G., Zin S. E., Substitution, risk aversion and the temporal behavior of consumption and asset returns: A theoretical framework. Econometrica 57, 937–969 (1989).
38. Chen Z., Epstein L., Ambiguity, risk, and asset returns in continuous time. Econometrica 70, 1403–1443 (2002).
39. Klibanoff P., Marinacci M., Mukerji S., Recursive smooth ambiguity preferences. J. Econ. Theory 144, 930–976 (2009).
40. Hansen L. P., Sargent T. J., Robustness and ambiguity in continuous time. J. Econ. Theory 146, 1195–1223 (2011).
41. Hansen L. P., Miao J., Aversion to ambiguity and model misspecification in dynamic stochastic environments. Proc. Natl. Acad. Sci. U.S.A. 115, 9163–9168 (2018).
42. Cerreia-Vioglio S., Maccheroni F., Marinacci M., Montrucchio L., Ambiguity and robust statistics. J. Econ. Theory 148, 974–1049 (2013).
43. Anderson E. W., Hansen L. P., Sargent T. J., A quartet of semigroups for model specification, robustness, prices of risk, and model detection. J. Eur. Econ. Assoc. 1, 68–123 (2003).
44. Hansen L. P., Central banking challenges posed by uncertain climate change and natural disasters. J. Monet. Econ. 125, 1–15 (2022).
45. Simpson N. P., et al., A framework for complex climate change risk assessment. One Earth 4, 489–501 (2021).
46. National Academies of Sciences, Engineering, and Medicine, Valuing Climate Damages: Updating Estimation of the Social Cost of Carbon Dioxide (The National Academies Press, Washington, DC, 2017).
47. Barnett M., Brock W. A., Hansen L. P., Zhang H., Uncertainty, social valuation, and climate change policy (University of Chicago, Becker Friedman Institute for Economics Working Paper 2024-75, July 3, 2025). 10.2139/ssrn.4872679.
48. Christy J., Curry J., Koonin S., McKitrick R., Spencer R., A Critical Review of Impacts of Greenhouse Gas Emissions on the US Climate (U.S. Department of Energy, Washington, DC, Report, 2025).
49. Office of Management and Budget, "OMB Circular No. A-4, Regulatory Analysis" (Tech. Rep. A-4, Executive Office of the President, 2023).

Associated Data


Supplementary Materials

Appendix 01 (PDF)

pnas.2520857122.sapp.pdf (301.3KB, pdf)

Data Availability Statement

There are no data underlying this work.
