Stochastic Processes and their Applications. 2012 Apr;122(4):1204–1209. doi: 10.1016/j.spa.2011.12.001

A short proof of the Doob–Meyer theorem

Mathias Beiglböck, Walter Schachermayer, Bezirgen Veliyev

Abstract

Every submartingale $S$ of class D has a unique Doob–Meyer decomposition $S = M + A$, where $M$ is a martingale and $A$ is a predictable increasing process starting at $0$.

We provide a short proof of the Doob–Meyer decomposition theorem. Several previously known arguments are included to keep the paper self-contained.

MSC: 60G05, 60G07

Keywords: Doob–Meyer decomposition, Komlós lemma

1. Introduction

Throughout this article we fix a probability space $(\Omega,\mathcal{F},P)$ and a right-continuous complete filtration $(\mathcal{F}_t)_{0\le t\le T}$.

An adapted process $(S_t)_{0\le t\le T}$ is of class D if the family of random variables $S_\tau$, where $\tau$ ranges through all stopping times, is uniformly integrable [9].

The purpose of this paper is to give a short proof of the following.

Theorem 1.1 (Doob–Meyer).

Let $S=(S_t)_{0\le t\le T}$ be a càdlàg submartingale of class D. Then, $S$ can be written in a unique way in the form

$S = M + A$ (1)

where $M$ is a martingale and $A$ is a predictable increasing process starting at $0$.

Doob [4] noticed that in discrete time an integrable process $S=(S_n)_{n=1}^{\infty}$ can be uniquely represented as the sum of a martingale $M$ and a predictable process $A$ starting at $0$; in addition, the process $A$ is increasing iff $S$ is a submartingale. The continuous time analogue, Theorem 1.1, goes back to Meyer [9], [10], who introduced the class D and proved that every submartingale $S=(S_t)_{0\le t\le T}$ of class D can be decomposed in the form (1), where $M$ is a martingale and $A$ is a natural process. The modern formulation is due to Doléans-Dade [2], [3], who obtained that an increasing process is natural iff it is predictable. Further proofs of Theorem 1.1 were given by Rao [11], Bass [1] and Jakubowski [5].

Rao works with the $\sigma(L^1,L^\infty)$-topology and applies the Dunford–Pettis compactness criterion to obtain the continuous time decomposition as a weak-$L^1$ limit from discrete approximations. To obtain that $A$ is predictable, one then invokes the theorem of Doléans-Dade.

Bass [1] gives a more elementary proof based on the dichotomy between predictable and totally inaccessible stopping times.

Jakubowski [5] proceeds as Rao, but notices that predictability of the process $A$ can also be obtained through an application of Komlós' lemma [8].

This is also our starting point. Indeed, the desired decomposition can be obtained from a trivial $L^2$-Komlós lemma, making the Dunford–Pettis criterion obsolete.

2. Proof of Theorem 1.1

The proof of uniqueness is standard and we have nothing to add here; see for instance [7, Lemma 25.11].

For the remainder of this article we work under the assumptions of Theorem 1.1 and fix $T=1$ for simplicity (see footnote 1).

Denote by $D_n$ and $D$ the set of $n$-th resp. all dyadic numbers $j/2^n$ in the interval $[0,1]$. For each $n$, we consider the discrete time Doob decomposition of the sampled process $S^n=(S_t)_{t\in D_n}$, that is, we define $A^n, M^n$ by $A^n_0 := 0$,

$A^n_t - A^n_{t-1/2^n} := E[S_t - S_{t-1/2^n} \mid \mathcal{F}_{t-1/2^n}]$ and (2)
$M^n_t := S_t - A^n_t$, (3)

so that $(M^n_t)_{t\in D_n}$ is a martingale and $(A^n_t)_{t\in D_n}$ is predictable with respect to $(\mathcal{F}_t)_{t\in D_n}$.
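
As an illustration of (2) and (3) (the example and code are ours, not from the paper), the following Python sketch computes the discrete Doob decomposition exactly on the finite probability space of all $2^n$ coin-toss paths, using the toy submartingale $S_k = W_k^2$ for a simple random walk $W$; the integer indices $k = 0,\ldots,n$ play the role of the dyadic times.

```python
# Minimal sketch (ours): discrete Doob decomposition on the space of all 2**n
# equally likely coin-toss paths.  E[ . | F_k] is the average over all paths
# sharing the same first k tosses.
import itertools
import numpy as np

n = 4
paths = np.array(list(itertools.product([-1, 1], repeat=n)))    # all 2**n equally likely outcomes
W = np.hstack([np.zeros((2**n, 1)), np.cumsum(paths, axis=1)])  # walk W_0, ..., W_n on each path
S = W**2                                                        # toy submartingale S_k = W_k**2

def cond_exp(X, k):
    """E[X | F_k]: average X over the paths that share the same first k tosses."""
    out = np.empty_like(X, dtype=float)
    for _, grp in itertools.groupby(range(2**n), key=lambda i: tuple(paths[i, :k])):
        idx = list(grp)
        out[idx] = X[idx].mean()
    return out

A = np.zeros_like(S)
for k in range(1, n + 1):
    # predictable increments as in (2): A_k - A_{k-1} = E[S_k - S_{k-1} | F_{k-1}]
    A[:, k] = A[:, k - 1] + cond_exp(S[:, k] - S[:, k - 1], k - 1)
M = S - A                                                       # martingale part, as in (3)

assert np.all(np.diff(A, axis=1) >= 0)      # A is increasing since S is a submartingale
assert np.allclose(A, np.arange(n + 1))     # here A_k = k, i.e. M_k = W_k**2 - k on every path
```

For this particular submartingale the compensator is deterministic, $A_k = k$, which the final assertion checks path by path.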

The idea of the proof is, of course, to obtain the continuous time decomposition (1) as a limit, or rather, as an accumulation point of the processes $M^n, A^n$, $n\ge1$.

Clearly, in infinite dimensional spaces a (bounded) sequence need not have a convergent subsequence. As a substitute for the Bolzano–Weierstrass Theorem we establish the Komlós-type Lemma 2.1 in Section 2.1.

In order to apply this auxiliary result, we require that the sequence $(M^n_1)_{n\ge1}$ is uniformly integrable. This follows from the class D assumption, as shown by Rao [11]. To keep the paper self-contained, we provide a proof in Section 2.2.

Finally, in Section 2.3, we obtain the desired decomposition by passing to a limit of the discrete time versions. As the Komlós approach guarantees convergence in a strong sense, predictability of the process $A$ follows rather directly from the predictability of the approximating processes. This idea is taken from [5].

2.1. Komlós' lemma

Following Komlós [8] (see footnote 2), it is sometimes possible to obtain an accumulation point of a bounded sequence in an infinite dimensional space if appropriate convex combinations are taken into account.

A particularly simple result of this kind holds true if $(f_n)_{n\ge1}$ is a bounded sequence in a Hilbert space. In this case

$A = \sup_{n\ge1} \inf\{\|g\|_2 : g\in\operatorname{conv}\{f_n, f_{n+1}, \ldots\}\}$

is finite and for each $n$ we may pick some $g_n\in\operatorname{conv}\{f_n, f_{n+1}, \ldots\}$ such that $\|g_n\|_2 \le A + 1/n$. If $n$ is sufficiently large with respect to $\varepsilon>0$, then $\|(g_k+g_m)/2\|_2 > A-\varepsilon$ for all $m,k\ge n$ and hence

$\|g_k-g_m\|_2^2 = 2\|g_k\|_2^2 + 2\|g_m\|_2^2 - \|g_k+g_m\|_2^2 \le 4(A+\tfrac1n)^2 - 4(A-\varepsilon)^2.$

By completeness, $(g_n)_{n\ge1}$ converges in $\|\cdot\|_2$.
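
As a toy illustration (ours, not from the paper): an orthonormal sequence $(e_n)_{n\ge1}$ is bounded but has no norm-convergent subsequence, since $\|e_k-e_m\|_2=\sqrt2$ for $k\ne m$; nevertheless the value $A$ from the display above equals $0$, and equal-weight convex combinations of tails realize it in the limit.

```python
# Equal-weight convex combinations g_n of the orthonormal vectors e_n, ..., e_{2n-1}
# (standard basis vectors of R^{2n}) have norm 1/sqrt(n) -> 0, although the e_n
# themselves stay at mutual distance sqrt(2).
import numpy as np

for n in (1, 10, 100, 1000):
    g = np.zeros(2 * n)
    g[n:] = 1.0 / n                  # weight 1/n on each of e_n, ..., e_{2n-1}
    print(n, np.linalg.norm(g))      # prints 1/sqrt(n)
```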

By a straightforward truncation procedure this Hilbertian Komlós lemma yields an $L^1$-version which we will need subsequently (see footnote 3).

Lemma 2.1

Let $(f_n)_{n\ge1}$ be a uniformly integrable sequence of functions on a probability space $(\Omega,\mathcal{F},P)$. Then there exist functions $g_n\in\operatorname{conv}(f_n, f_{n+1}, \ldots)$ such that $(g_n)_{n\ge1}$ converges in $\|\cdot\|_{L^1(\Omega)}$.

Proof

For $i,n\in\mathbb{N}$ set $f_n^{(i)} := f_n 1_{\{|f_n|\le i\}}$ such that $f_n^{(i)}\in L^2(\Omega)$.

We claim that there exist for every $n$ convex weights $\lambda^n_n,\ldots,\lambda^n_{N_n}$ such that the functions $\lambda^n_n f_n^{(i)} + \cdots + \lambda^n_{N_n} f_{N_n}^{(i)}$ converge in $L^2(\Omega)$ for every $i\in\mathbb{N}$.

To see this, one first uses the Hilbertian lemma to find convex weights $\lambda^n_n,\ldots,\lambda^n_{N_n}$ such that $(\lambda^n_n f_n^{(1)} + \cdots + \lambda^n_{N_n} f_{N_n}^{(1)})_{n\ge1}$ converges. In the second step, one applies the lemma to the sequence $(\lambda^n_n f_n^{(2)} + \cdots + \lambda^n_{N_n} f_{N_n}^{(2)})_{n\ge1}$ to obtain convex weights which work for the first two sequences. Repeating this procedure inductively, we obtain sequences of convex weights which work for the first $m$ sequences. Then a standard diagonalization argument yields the claim.
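
For completeness, one way to spell out the diagonalization (the notation $\mu^{n,m}$ is ours, not from the paper): after the $m$-th application of the Hilbertian lemma one has, for every $n$, composed convex weights $(\mu^{n,m}_j)_{j\ge n}$ (only finitely many of them nonzero) such that

$\big(\sum_{j\ge n} \mu^{n,m}_j f_j^{(i)}\big)_{n\ge1}$ converges in $L^2(\Omega)$ for every $i\le m$.

The diagonal choice $\lambda^n := \mu^{n,n}$ then works for every $i$: for $n\ge i$, composing the weights of steps $i+1,\ldots,n$ exhibits $\sum_{j\ge n}\lambda^n_j f_j^{(i)}$ as a convex combination of members of the tail $\big(\sum_{j\ge k}\mu^{k,i}_j f_j^{(i)}\big)_{k\ge n}$ of an $L^2$-convergent sequence, and such convex combinations converge to the same limit.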

By uniform integrability, $\lim_i \|f_n^{(i)} - f_n\|_1 = 0$, uniformly with respect to $n$. Hence, once again uniformly with respect to $n$,

$\lim_i \big\|\big(\lambda^n_n f_n^{(i)} + \cdots + \lambda^n_{N_n} f_{N_n}^{(i)}\big) - \big(\lambda^n_n f_n + \cdots + \lambda^n_{N_n} f_{N_n}\big)\big\|_1 = 0.$

Thus $(\lambda^n_n f_n + \cdots + \lambda^n_{N_n} f_{N_n})_{n\ge1}$ is a Cauchy sequence in $L^1(\Omega)$. □

2.2. Uniform integrability of the discrete approximations

Lemma 2.2 [11]

The sequence $(M^n_1)_{n\ge1}$ is uniformly integrable.

Proof

Subtracting $E[S_1\mid\mathcal{F}_t]$ from $S_t$ we may assume that $S_1 = 0$ and $S_t\le 0$ for all $0\le t\le 1$. Then $M^n_1 = -A^n_1$, and for every $(\mathcal{F}_t)_{t\in D_n}$-stopping time $\tau$

$S^n_\tau = -E[A^n_1\mid\mathcal{F}_\tau] + A^n_\tau.$ (4)

We claim that $(A^n_1)_{n=1}^{\infty}$ is uniformly integrable. For $c>0$, $n\ge1$ define

$\tau_n(c) = \inf\{(j-1)/2^n : A^n_{j/2^n} > c\} \wedge 1.$

From $A^n_{\tau_n(c)}\le c$ and (4) we obtain $S_{\tau_n(c)} \le -E[A^n_1\mid\mathcal{F}_{\tau_n(c)}] + c$.

Thus,

$\int_{\{A^n_1>c\}} A^n_1\,dP = \int_{\{\tau_n(c)<1\}} E[A^n_1\mid\mathcal{F}_{\tau_n(c)}]\,dP \le c\,P[\tau_n(c)<1] - \int_{\{\tau_n(c)<1\}} S_{\tau_n(c)}\,dP.$

Note $\{\tau_n(c)<1\} \subseteq \{\tau_n(c/2)<1\}$, hence, by (4),

$-\int_{\{\tau_n(c/2)<1\}} S_{\tau_n(c/2)}\,dP = \int_{\{\tau_n(c/2)<1\}} \big(A^n_1 - A^n_{\tau_n(c/2)}\big)\,dP \ge \int_{\{\tau_n(c)<1\}} \big(A^n_1 - A^n_{\tau_n(c/2)}\big)\,dP \ge \tfrac{c}{2}\,P[\tau_n(c)<1].$

Combining the above inequalities we obtain

$\int_{\{A^n_1>c\}} A^n_1\,dP \le -2\int_{\{\tau_n(c/2)<1\}} S_{\tau_n(c/2)}\,dP - \int_{\{\tau_n(c)<1\}} S_{\tau_n(c)}\,dP.$ (5)

On the other hand

$P[\tau_n(c)<1] = P[A^n_1>c] \le E[A^n_1]/c = -E[M^n_1]/c = -E[S_0]/c,$

hence, as $c\to\infty$, $P[\tau_n(c)<1]$ goes to $0$, uniformly in $n$. As $S$ is of class D, (5) implies that the sequence $(A^n_1)_{n\ge1}$ is uniformly integrable and hence $(M^n_1)_{n\ge1} = (S_1 - A^n_1)_{n\ge1}$ is uniformly integrable as well. □
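
To spell out the last step of the above proof (our elaboration): since $S_t\le 0$ after the above reduction, the right-hand side of (5) equals

$2\int_{\{\tau_n(c/2)<1\}} |S_{\tau_n(c/2)}|\,dP + \int_{\{\tau_n(c)<1\}} |S_{\tau_n(c)}|\,dP,$

and the class D property makes the family $\{S_\sigma : \sigma \text{ a stopping time}\}$ uniformly integrable; as $P[\tau_n(c)<1]\le P[\tau_n(c/2)<1]\to 0$ uniformly in $n$ when $c\to\infty$, both integrals therefore tend to $0$ uniformly in $n$, which is the asserted uniform integrability of $(A^n_1)_{n\ge1}$.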

2.3. The limiting procedure

For each $n$, extend $M^n$ to a (càdlàg) martingale on $[0,1]$ by setting $M^n_t := E[M^n_1\mid\mathcal{F}_t]$. By Lemmas 2.1 and 2.2 there exist $M\in L^1(\Omega)$ and for each $n$ convex weights $\lambda^n_n,\ldots,\lambda^n_{N_n}$ such that with

$\tilde M^n := \lambda^n_n M^n + \cdots + \lambda^n_{N_n} M^{N_n}$ (6)

we have $\tilde M^n_1 \to M$ in $L^1(\Omega)$. Then, by Jensen's inequality, $\tilde M^n_t \to M_t := E[M\mid\mathcal{F}_t]$ for all $t\in[0,1]$. For each $n\ge1$ we extend $A^n$ to $[0,1]$ by

$A^n := \sum_{t\in D_n} A^n_t\,1_{(t-1/2^n,\,t]}$ (7)
and set $\tilde A^n := \lambda^n_n A^n + \cdots + \lambda^n_{N_n} A^{N_n}$, (8)

where we use the same convex weights as in (6). Then the càdlàg process

$(A_t)_{0\le t\le1} := (S_t - M_t)_{0\le t\le1}$

satisfies for every $t\in D$

$\tilde A^n_t = (S_t - \tilde M^n_t) \to (S_t - M_t) = A_t$ in $L^1(\Omega)$.

Passing to a subsequence, which we denote again by $n$, we obtain that this convergence also holds almost surely, simultaneously for all $t\in D$. Consequently, $A$ is almost surely increasing on $D$ and, by right continuity, also on $[0,1]$.
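
To make the extension (7) concrete, here is a minimal Python sketch (ours, not from the paper) of the step function it defines; evaluating just to the left and just to the right of a dyadic point illustrates the left-continuity which, combined with adaptedness, yields the predictability used below.

```python
# Extension (7): the path t |-> A^n_t equals the grid value A^n_s on each
# interval (s - 1/2**n, s], s in D_n, and equals 0 at t = 0.
import math

def extend_A(A_vals, n, t):
    """A_vals[j] is the grid value of A^n at j/2**n, j = 0, ..., 2**n (A_vals[0] = 0)."""
    if t == 0.0:
        return A_vals[0]
    j = math.ceil(t * 2**n)   # smallest j with j/2**n >= t, i.e. t in ((j-1)/2**n, j/2**n]
    return A_vals[j]

n = 3
A_vals = [0.1 * j * (j + 1) / 2 for j in range(2**n + 1)]   # some increasing grid values, A_vals[0] = 0
s = 5 / 2**n
print(extend_A(A_vals, n, s - 1e-12), extend_A(A_vals, n, s), extend_A(A_vals, n, s + 1e-12))
# the first two values agree (left-continuity at the dyadic point s); the third jumps ahead
```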

As the processes $A^n$ and $\tilde A^n$ are left-continuous and adapted, they are predictable. To obtain that $A$ is predictable, we show that for a.e. $\omega$ and every $t\in[0,1]$

$\limsup_n \tilde A^n_t(\omega) = A_t(\omega).$ (9)

If $f_n, f:[0,1]\to\mathbb{R}$ are increasing functions such that $f$ is right continuous and $\lim_n f_n(t) = f(t)$ for $t\in D$, then

$\limsup_n f_n(t) \le f(t)$ for all $t\in[0,1]$ and (10)
$\lim_n f_n(t) = f(t)$ if $f$ is continuous at $t$. (11)
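
For instance (the example is ours, not from the paper), take $a\in[0,1]\setminus D$ and set $f = 1_{[a,1]}$, $f_n = 1_{[a+1/n,1]}$; then $f_n(t)\to f(t)$ for every $t\in D$, while

$\limsup_n f_n(a) = 0 < 1 = f(a),$

so equality in (10) may indeed fail at a discontinuity point of the limit.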

Consequently, (9) can only be violated at discontinuity points of $A$. As $A$ is càdlàg, every path of $A$ can have only finitely many jumps larger than $1/k$ for $k\in\mathbb{N}$. It follows that the points of discontinuity of $A$ can be exhausted by a countable sequence of stopping times, and therefore it suffices to prove $\limsup_n \tilde A^n_\tau = A_\tau$ almost surely for every stopping time $\tau$.

To do so, we argue along the lines of [5]. By (10), $\limsup_n \tilde A^n_\tau \le A_\tau$ and as $\tilde A^n_\tau \le \tilde A^n_1 \to A_1$ in $L^1(\Omega)$ we deduce from Fatou's Lemma (see footnote 4)

$\liminf_n E[\tilde A^n_\tau] \le \limsup_n E[\tilde A^n_\tau] \le E[\limsup_n \tilde A^n_\tau] \le E[A_\tau].$

Therefore it is sufficient to show $\lim_n E[\tilde A^n_\tau] = E[A_\tau]$. For $n\ge1$ set

$\sigma_n := \inf\{t\in D_n : t\ge\tau\}.$

Then $A^n_\tau = A^n_{\sigma_n}$ and $\sigma_n\downarrow\tau$. Using that $S$ is of class D, we obtain

$E[A^n_\tau] = E[A^n_{\sigma_n}] = E[S_{\sigma_n}] - E[M^n_0] \to E[S_\tau] - E[M_0] = E[A_\tau],$

and hence also $\lim_n E[\tilde A^n_\tau] = E[A_\tau]$, since $E[\tilde A^n_\tau]$ is a convex combination of the tail $(E[A^k_\tau])_{k\ge n}$ of a convergent sequence. □

Acknowledgments

The first author acknowledges financial support from the Austrian Science Fund (FWF) under grant P21209. The second author gratefully acknowledges financial support from the Austrian Science Fund (FWF) under grant P19456, from the Vienna Science and Technology Fund (WWTF) under grant MA13, from the Christian Doppler Research Association, and from the European Research Council (ERC) under grant FA506041. The third author gratefully acknowledges financial support from the Austrian Science Fund (FWF) under grant P19456.

Footnotes

1

The extension to the infinite horizon case is straightforward; in this case it is appropriate to assume that $S$ is of class DL rather than class D.

2

Indeed, [8] considers Cesàro sums along subsequences rather than arbitrary convex combinations. But for our purposes, the more modest conclusion of Lemma 2.1 is sufficient.

3

Lemma 2.1 is also a trivial consequence of Komlós' original result [8] or other related results that have been established through the years. Cf. [6, Chapter 5.2] for an overview.

4

Strictly speaking, we would like that the sequence $(\tilde A^n_\tau)_{n\ge1}$ is bounded by an integrable random variable to apply Fatou's lemma. In our case we just know that $E[(\tilde A^n_\tau - A_1)^+] \to 0$, but the reader will easily convince herself that this assumption is sufficient.

References

1. Bass R.F. The Doob–Meyer decomposition revisited. Canad. Math. Bull. 1996;39(2):138–150.
2. Doléans-Dade C. Processus croissants naturels et processus croissants très-bien-mesurables. C. R. Acad. Sci. Paris Sér. A-B. 1967;264:874–876.
3. Doléans-Dade C. Existence du processus croissant naturel associé à un potentiel de la classe (D). Z. Wahrscheinlichkeitstheor. Verwandte Geb. 1968;9:309–314.
4. Doob J.L. Stochastic Processes. John Wiley & Sons Inc.; New York: 1953.
5. Jakubowski A. An almost sure approximation for the predictable process in the Doob–Meyer decomposition theorem. In: Séminaire de Probabilités XXXVIII. Lecture Notes in Math., vol. 1857. Springer; Berlin: 2005. pp. 158–164.
6. Kabanov Y., Safarian M. Markets with Transaction Costs: Mathematical Theory. Springer Finance. Springer-Verlag; Berlin: 2009.
7. Kallenberg O. Foundations of Modern Probability. Second ed. Probability and its Applications. Springer-Verlag; New York: 2002.
8. Komlós J. A generalization of a problem of Steinhaus. Acta Math. Acad. Sci. Hungar. 1967;18:217–229.
9. Meyer P.-A. A decomposition theorem for supermartingales. Illinois J. Math. 1962;6:193–205.
10. Meyer P.-A. Decomposition of supermartingales: the uniqueness theorem. Illinois J. Math. 1963;7:1–17.
11. Rao K.M. On decomposition theorems of Meyer. Math. Scand. 1969;24:66–78.
