Author manuscript; available in PMC 2015 Oct 19.
Published in final edited form as: SIAM J Matrix Anal Appl. 2015 Jul 2;36(3):917–941. doi: 10.1137/140987900

SHARP ENTRYWISE PERTURBATION BOUNDS FOR MARKOV CHAINS

ERIK THIEDE 1, BRIAN VAN KOTEN 2,*, JONATHAN WEARE 3
PMCID: PMC4610747  NIHMSID: NIHMS700718  PMID: 26491218

Abstract

For many Markov chains of practical interest, the invariant distribution is extremely sensitive to perturbations of some entries of the transition matrix but insensitive to others; we give an example of such a chain, motivated by a problem in computational statistical physics. We derive perturbation bounds on the relative error of the invariant distribution that reveal these variations in sensitivity.

Our bounds are sharp; we impose no structural assumptions on the transition matrix or on the perturbation, and computing the bounds has the same complexity as computing the invariant distribution or computing other bounds in the literature. Moreover, our bounds have a simple interpretation in terms of hitting times, which can be used to draw intuitive but rigorous conclusions about the sensitivity of a chain to various types of perturbations.

1. Introduction

The invariant distribution of a Markov chain is often extremely sensitive to perturbations of some entries of the transition matrix, but insensitive to others. However, most perturbation estimates bound the error in the invariant distribution by a single condition number times a matrix norm of the perturbation. That is, perturbation estimates usually take the form

$$\lvert \pi(\tilde F) - \pi(F) \rvert \le \kappa(F)\, \lVert \tilde F - F \rVert, \qquad (1)$$

where F and F̃ are the exact and perturbed transition matrices, π(F) and π(F̃) are the invariant distributions of these matrices, ‖·‖ is a matrix norm, |·| is either a vector norm or some measure of the relative error between the two distributions, and κ(F) is a condition number depending on F. For example, see [2, 5–7, 9, 13, 16, 19–21] and the survey given in [3]. No bound of this form can capture wide variations in the sensitivities of different entries of the transition matrix.

Alternatively, one might approximate the difference between π(F) and π(F̃) using the linearization of π at F. (The derivative of π can be computed efficiently using the techniques of [6].) The linearization will reveal variations in sensitivities, but it yields only an approximation of the form

$$\pi(\tilde F) - \pi(F) = \pi'(F)[\tilde F - F] + o(\lVert \tilde F - F \rVert),$$

not an upper bound on the error. That is, unless global bounds on π′(F) can be derived, linearization provides only a local estimate, not a global one.

In this article, we give upper bounds that yield detailed information about the sensitivity of π(F) to perturbations of individual entries of F. Given an irreducible substochastic matrix S, we show that for all stochastic matrices F, F̃ satisfying the entrywise bound F, F̃ ≥ S,

$$\max_i\, \lvert \log \pi_i(\tilde F) - \log \pi_i(F) \rvert \le \sum_{i \ne j} \bigl\lvert \log\bigl(\tilde F_{ij} + Q_{ij}(S) - S_{ij}\bigr) - \log\bigl(F_{ij} + Q_{ij}(S) - S_{ij}\bigr)\bigr\rvert, \qquad (2)$$

where Q_ij(S) is defined in Section 4. As a corollary, we also have

$$\max_i\, \lvert \log \pi_i(\tilde F) - \log \pi_i(F) \rvert \le \sum_{i \ne j} Q_{ij}(S)^{-1}\, \lvert \tilde F_{ij} - F_{ij} \rvert \qquad (3)$$

for all stochastic F, F̃ ≥ S.

The difference of logarithms on the left-hand sides of (2) and (3) measures relative error. Usually, when x̃ ∈ (0, ∞) is computed as an approximation to x ∈ (0, ∞), the error of x̃ relative to x is defined to be either

$$\frac{\lvert \tilde x - x \rvert}{x} \qquad \text{or} \qquad \max\Bigl\{ \frac{\tilde x}{x}, \frac{x}{\tilde x} \Bigr\}. \qquad (4)$$

Instead, we define the relative error to be |log x̃ − log x|. Our definition is closely related to the other two: it is the logarithm of the second definition in (4), and by Taylor expansion,

$$\log \tilde x - \log x = \frac{\tilde x - x}{x} + O\Bigl( \Bigl( \frac{\tilde x - x}{x} \Bigr)^{2} \Bigr),$$

so it is equivalent to the first in the limit of small error. We chose our definition because it allows for simple arguments based on logarithmic derivatives of π(F).

We call the coefficient Q_ij(S)^{-1} in (3) the sensitivity of the ijth entry. Q_ij(S) has a simple probabilistic interpretation in terms of hitting times, which can sometimes be used to draw intuitive conclusions about the sensitivities. In Theorem 4, we show that our coefficients Q_ij(S)^{-1} are within a factor of two of the smallest possible such that a bound of form (3) holds. Thus, our bound is sharp. (We note that our definition of sharp differs slightly from other standard definitions; see Remark 5.) In Theorem 5, we give an algorithm by which the sensitivities may be computed in O(L^3) time, for L the number of states in the chain. Therefore, computing the error bound has the same order of complexity as computing the invariant measure or computing most other perturbation bounds in the literature; see Remark 8.

Since our result takes an unusual form, we now give three examples to illustrate its use. We discuss the examples only briefly here; details follow in Sections 4 and 6. First, suppose that F̃ has been computed as an approximation to an unknown stochastic matrix F and that we have a bound on the error between F̃ and F, for example |F̃_ij − F_ij| ≤ α_ij. In this case, we define S_ij := max{F̃_ij − α_ij, 0}, and we have the estimate

$$\max_i\, \lvert \log \pi_i(\tilde F) - \log \pi_i(F) \rvert \le \sum_{i \ne j} Q_{ij}(S)^{-1}\, \lvert \tilde F_{ij} - F_{ij} \rvert$$

for all F such that |F̃_ij − F_ij| ≤ α_ij. See Remark 7 for a more detailed explanation.

Now suppose instead that F ≥ αP, where 0 < α < 1 and P is the transition matrix of an especially simple Markov chain, for example a symmetric random walk. Then we choose S := αP, and we compute or approximate Q_ij(αP) by easy-to-understand probabilistic arguments. This method can be used to draw intuitive but rigorous conclusions about the sensitivity of a chain to various types of perturbations. See Section 6.5 for details.

Finally, suppose that the transition matrix F has a large number of very small positive entries and that we desire a sparse approximation to F with approximately the same invariant distribution. In this case, we take S to be F with all of its small positive entries set to zero. If the sensitivity Q_ij(S)^{-1} is very large, it is likely that the value of F_ij is important and cannot be set to zero. If Q_ij(S)^{-1} is small, then setting F̃_ij = 0 and F̃_ii = F_ii + F_ij will not have much effect on the invariant distribution.
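To make the sparsification example concrete, the following sketch (all matrices below are hypothetical) drops the small entries of a chain dominated by a strong cycle and compares the resulting bound of form (3), with Q_ij(S) computed via the formula of Lemma 5 below, against the actual change in the invariant distribution:

```python
import numpy as np

def invariant(F):
    # invariant distribution: solve pi^t F = pi^t with sum(pi) = 1
    L = F.shape[0]
    A = np.vstack([F.T - np.eye(L), np.ones(L)])
    b = np.zeros(L + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def Qmat(S):
    # Q_ij(S) for all i != j, via the hitting-probability formula of Lemma 5
    L = S.shape[0]
    Q = np.full((L, L), np.inf)
    for j in range(L):
        Sj = S.copy(); Sj[j, :] = 0.0; Sj[:, j] = 0.0
        col = S[:, j].copy(); col[j] = 0.0
        M = np.linalg.inv(np.eye(L) - Sj)
        Q[:, j] = (M @ col) / np.diag(M)
        Q[j, j] = np.inf
    return Q

# hypothetical chain: a strong directed cycle plus many tiny entries
L, delta = 6, 1e-3
F = np.full((L, L), delta / L)
for k in range(L):
    rest = 1.0 - delta
    fwd = rest * (0.4 if k % 2 else 0.6)
    F[k, (k + 1) % L] += fwd
    F[k, k] += rest - fwd

S = np.where(F > 1e-2, F, 0.0)            # lower bound: small entries set to zero
Ft = S.copy()                              # sparsified chain
Ft[np.arange(L), np.arange(L)] += 1.0 - Ft.sum(axis=1)

Q = Qmat(S)
dropped = (S == 0) & ~np.eye(L, dtype=bool)
bound = np.sum(F[dropped] / Q[dropped])    # bound of form (3) on the relative error
actual = np.max(np.abs(np.log(invariant(Ft)) - np.log(invariant(F))))
assert actual <= bound
```

Only off-diagonal differences enter the sum in (3), so moving the dropped mass onto the diagonal costs nothing in the bound.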

We are aware of two other bounds on relative error in the literature. By [9, Theorem 4.1],

$$\frac{\lvert \pi_i(\tilde F) - \pi_i(F) \rvert}{\pi_i(F)} \le \kappa_i(F)\, \bigl\lVert (\tilde F - F)(I - e_i e_i^t) \bigr\rVert, \qquad (5)$$

where κ_i(F) = ‖(I − F_i)^{-1}‖ for F_i the ith principal submatrix of F. This bound fails to identify the sensitive and insensitive entries of the transition matrix, since the error in the ith component of the invariant distribution is again controlled only by the single condition number κ_i(F). Moreover, computing κ_i(F) for all i is of the same complexity as computing all of our sensitivities Q_ij(S)^{-1}; see Remark 8. Therefore, in many respects, our result provides more detailed information at the same cost as [9, Theorem 4.1]. On the other hand, we observe that [9, Theorem 4.1] holds for all perturbations: one does not have to restrict the admissible perturbations by requiring F, F̃ ≥ S as we do for our result. However, this is not always an advantage, since we anticipate that in many applications bounds on the error in F are available and, as we will see, the benefit from using this information is significant.

In [18, Theorem 1], another bound on the relative error is given. Here, the relative error in the invariant distribution is bounded by the relative error in the transition matrix. Precisely, if F, F̃ ∈ ℝ^{L×L} are irreducible stochastic matrices with F_ij = 0 if and only if F̃_ij = 0, then

$$\max_m \max\Bigl\{ \frac{\tilde\pi_m}{\pi_m}, \frac{\pi_m}{\tilde\pi_m} \Bigr\} \le \Bigl( \max_{i,j} \max\Bigl\{ \frac{\tilde F_{ij}}{F_{ij}}, \frac{F_{ij}}{\tilde F_{ij}} \Bigr\} \Bigr)^{L}. \qquad (6)$$

This surprising result requires no condition number depending on F, but it does require that F and F̃ have the same sparsity pattern, which greatly restricts the admissible perturbations. Our result may be understood as a generalization of (6) which allows perturbations that change the sparsity pattern.

Our result also bears some similarities with the analysis in [2, Section 4], which is based on the results of [9]. In [2], a state m of a Markov chain is said to be centrally located if E_i[τ_m] is small for all states i. (Here, τ_m is the first passage time to state m; see Section 2.) It is shown that if |E_i[τ_m] − E_j[τ_m]| is small, then π_m(F) is insensitive to F_ij in relative error. Therefore, if m is centrally located, π_m(F) is not sensitive to any entry of the transition matrix. Our coefficients Q_ij^{-1} can also be expressed in terms of first passage times, and they provide a better measure of the sensitivity of π_m(F) to F_ij than |E_i[τ_m] − E_j[τ_m]|; see Section 6.3.

Our bounds on derivatives of π(F) in Theorem 2 and our estimates (2) and (3) share some features with structured condition numbers [12, Section 5.3]. The structured condition number of an irreducible, stochastic matrix F is defined to be

$$C^{(p,q)}(F) := \lim_{\varepsilon \to 0} \sup\bigl\{ \varepsilon^{-1}\, \lVert \pi(F + E) - \pi(F) \rVert_p \;:\; F + E \text{ stochastic and } \lVert E \rVert_q < \varepsilon \bigr\}.$$

Structured condition numbers yield approximate bounds valid for small perturbations. These bounds are useful, since for small perturbations, estimates of type (1) are often far too pessimistic. We remark that our results (2) and (3) give the user control over the size of the perturbation through the choice of S. (If S is nearly stochastic, then only small perturbations are allowed.) Therefore, like structured condition numbers, our results are good for small perturbations. In addition, our results are true upper bounds, so they are more robust than approximations derived from structured condition numbers.

Our interest in perturbation bounds for Markov chains arose from a problem in computational statistical physics; we present a drastically simplified version below in Section 6. For this problem, the invariant distribution is extremely sensitive to some entries of the transition matrix, but insensitive to others. We use the problem to illustrate the differences between our result, [18, Theorem 1], [9, Theorem 4.1], and the eight bounds on absolute error surveyed in [3]. Each of the eight bounds has form (1), and we demonstrate that the condition number κ(F) in each bound blows up exponentially with the inverse temperature parameter in our problem. By contrast, many of the sensitivities Q_ij^{-1} from our result remain bounded as the inverse temperature increases. Thus, our result gives a great deal more information about which perturbations can lead to large changes in the invariant distribution.

2. Notation

We fix L ∈ ℕ, and we let X be a discrete-time Markov chain with state space Ω = {1, 2, . . . , L} and irreducible, row-stochastic transition matrix F ∈ ℝ^{L×L}. Since F is irreducible, X has a unique invariant distribution π ∈ ℝ^L satisfying

$$\pi^t F = \pi^t, \qquad \sum_{i=1}^{L} \pi_i = 1, \qquad \text{and} \qquad \pi_i > 0 \ \text{for all}\ i = 1, \dots, L.$$

We let e_i denote the ith standard basis vector in ℝ^L, e denote the vector of ones, and I denote the identity matrix. We treat all vectors, including π, as column vectors (that is, as L × 1 matrices). For S ∈ ℝ^{L×L}, we let S_j : e_j^⊥ → e_j^⊥ be the operator defined by

$$S_j x := (I - e_j e_j^t)\, S x.$$

Instead of defining S_j as above, we could define S_j to be S with the jth row and column set to zero. We could also define S_j to be the jth principal submatrix. We chose our definition to emphasize that we treat S_j as an operator on e_j^⊥. If S and T are matrices of the same dimensions, we say S ≥ T if and only if S_ij ≥ T_ij for all indices i, j. For any v ∈ ℝ^L with v > 0, we define log(v) ∈ ℝ^L by (log v)_i = log(v_i).

For k ∈ {1, 2, . . . , L}, we define 1_k to be the indicator function of the set {k}, and

$$\tau_k := \min\{ s > 0 : X_s = k \}$$

to be the first return time to state k. We also define

$$P_k[A] := P[A \mid X_0 = k] \qquad \text{and} \qquad E_k[Y] := E[Y \mid X_0 = k]$$

to be the probability of the event A conditioned on X_0 = k and the expectation of the random variable Y conditioned on X_0 = k, respectively. Finally, for Y a random variable and B an event, we let

$$E[Y, B] := E[Y \chi_B] = E[Y \mid B]\, P[B],$$

where χB is the indicator function of the event B.

3. Partial derivatives of the invariant distribution

Given an irreducible, stochastic matrix F ∈ ℝ^{L×L}, let π(F) ∈ ℝ^L be the invariant distribution of F; that is, let π(F) be the unique solution of

$$\pi(F)^t F = \pi(F)^t \qquad \text{and} \qquad \pi(F)^t e = 1.$$

We regard π as a function defined on the set of irreducible stochastic matrices, and in Lemma 1, we show that π is differentiable in a certain sense. We give a proof of the lemma in Appendix A.

Lemma 1

The function π admits a continuously differentiable extension π̄ to an open neighborhood V of the set of irreducible stochastic matrices in ℝ^{L×L}. The extension may be chosen so that π̄(G) > 0 for all G ∈ V and so that if Ge = e, then

$$\bar\pi(G)^t G = \bar\pi(G)^t \qquad \text{and} \qquad \bar\pi(G)^t e = 1.$$

Remark 1

The set of stochastic matrices is not a vector space; it is a compact, convex polytope lying in the affine space {G ∈ ℝ^{L×L} : Ge = e} ⊂ ℝ^{L×L}. As a consequence, we need the extension π̄ guaranteed by Lemma 1 to define the derivative of π on the boundary of the polytope, which is the set of all stochastic matrices with at least one zero entry. We introduce π̄ only to resolve this unpleasant technicality, not to define π(F) for matrices which are not stochastic. In fact, all our results are independent of the particular choice of extension, as long as it meets the conditions in the second sentence of the lemma.

Our perturbation bounds are based on partial derivatives of π with respect to entries of F. As usual, the partial derivatives are defined in terms of a coordinate system, and we choose the off-diagonal entries of F as coordinates: Any stochastic F is determined by its off-diagonal entries through the formula

$$F = I + \sum_{\substack{i,j \in \Omega \\ i \ne j}} F_{ij}\, (e_i e_j^t - e_i e_i^t).$$

Accordingly, for i, j ∈ Ω with ij, we define

$$\frac{\partial \pi_m}{\partial F_{ij}}(F) = \frac{\partial}{\partial F_{ij}}\, \pi_m\Bigl( I + \sum_{k \ne l} F_{kl}\, (e_k e_l^t - e_k e_k^t) \Bigr) = \frac{d}{d\varepsilon}\Big|_{\varepsilon = 0} \bar\pi_m\bigl( F + \varepsilon\, (e_i e_j^t - e_i e_i^t) \bigr). \qquad (7)$$

These partial derivatives must be understood as derivatives of the extension π̄ guaranteed by Lemma 1. Otherwise, if F_ij or F_ii were zero, the right-hand side of (7) would be undefined.

Remark 2

We chose to define partial derivatives by (7), since that definition leads to the global bounds on the invariant distribution presented in Section 4. Other choices are reasonable: for example, one might consider derivatives of the form

$$\frac{d}{d\varepsilon}\Big|_{\varepsilon = 0} \bar\pi\bigl( F + \varepsilon\, (e_i e_j^t - e_i e_k^t) \bigr),$$

where k ≠ i and j ≠ i. However, to the best of our knowledge, only definition (7) leads easily to global bounds.

In Theorem 1, we derive a convenient formula for ∂π_m/∂F_ij. Comparable results relating derivatives and perturbations of π to the matrix of mean first passage times were given in [2,8]. A formula for the derivative of the invariant distribution in terms of the group inverse of I − F was given in [15]; a general formula for the derivative of the Perron vector of a nonnegative matrix was given in [4].

Theorem 1

Let F be an irreducible stochastic matrix, and let X be a Markov chain with transition matrix F. Define π(F) and ∂π_m/∂F_ij as above. We have

$$\frac{\partial \pi_m}{\partial F_{ij}}(F) = \pi_i \Bigl( E_j\Bigl[ \sum_{s=0}^{\tau_i - 1} 1_m(X_s) \Bigr] - \pi_m E_j[\tau_i] \Bigr)$$

for all i, j ∈ Ω with ij.

Proof

Define π̄ and V as in Lemma 1, let v(G) := π̄(G)/π̄_i(G) for all G ∈ V, and let G_ε := F + ε(e_i e_j^t − e_i e_i^t). Define

$$\frac{\partial v}{\partial F_{ij}}(F) := \Bigl( \frac{\partial v_1}{\partial F_{ij}}(F), \dots, \frac{\partial v_L}{\partial F_{ij}}(F) \Bigr) = \frac{d}{d\varepsilon}\, v(G_\varepsilon) \Big|_{\varepsilon = 0}.$$

Since G_ε e = e, Lemma 1 implies

$$v(G_\varepsilon)^t (I - G_\varepsilon) = 0 \qquad \text{and} \qquad v_i(G_\varepsilon) = 1 \qquad (8)$$

for all ε sufficiently close to zero.

We derive an equation for ∂v/∂F_ij(F) from (8); differentiating (8) with respect to ε gives

$$\frac{\partial v}{\partial F_{ij}}(F)^t\, (I - F) = e_j^t - e_i^t \qquad \text{and} \qquad \frac{\partial v_i}{\partial F_{ij}}(F) = 0. \qquad (9)$$

Recalling the definition of F_i from Section 2, (9) implies

$$\frac{\partial v}{\partial F_{ij}}(F)^t\, (I - F_i) = e_j^t.$$

Moreover, by [1, Chapter 6, Theorem 4.16],

$$F \ \text{irreducible and stochastic or substochastic} \quad \Longrightarrow \quad (I - F_i)^{-1} = \sum_{s=0}^{\infty} F_i^s. \qquad (10)$$

Therefore, we have

$$\frac{\partial v}{\partial F_{ij}} = e_j^t\, (I - F_i)^{-1} = e_j^t \sum_{k=0}^{\infty} F_i^k. \qquad (11)$$

We now interpret (11) in terms of the Markov chain X with transition matrix F. We observe that for any m ∈ Ω \ {i},

$$e_j^t F_i^k e_m = P_j[X_k = m,\; k < \tau_i],$$

where τ_i := min{t > 0 : X_t = i} is the first passage time to state i. Therefore, for m, j ∈ Ω \ {i}, (11) yields

$$\frac{\partial v_m}{\partial F_{ij}} = \sum_{k=0}^{\infty} P_j[X_k = m,\; k < \tau_i] = E_j\Bigl[ \sum_{s=0}^{\tau_i - 1} 1_m(X_s) \Bigr]. \qquad (12)$$

In fact, this formula also holds for m = i, since we have

$$\frac{\partial v_i}{\partial F_{ij}} = E_j\Bigl[ \sum_{s=0}^{\tau_i - 1} 1_i(X_s) \Bigr] = 0$$

for all j ∈ Ω \ {i}.

Finally, we convert our formula for ∂v_m/∂F_ij into a formula for ∂π_m/∂F_ij. We have

$$\pi_m = \frac{v_m}{\sum_{k=1}^{L} v_k},$$

and so by (12),

$$\frac{\partial \pi_m}{\partial F_{ij}} = \frac{\frac{\partial v_m}{\partial F_{ij}} \sum_{k=1}^{L} v_k - v_m \sum_{k=1}^{L} \frac{\partial v_k}{\partial F_{ij}}}{\bigl( \sum_{k=1}^{L} v_k \bigr)^{2}} = \frac{E_j\bigl[ \sum_{s=0}^{\tau_i - 1} 1_m(X_s) \bigr] - \pi_m E_j[\tau_i]}{\sum_k v_k}. \qquad (13)$$

Now by [17, Theorem 1.7.5], we have

$$v_k(F) = \frac{\pi_k(F)}{\pi_i(F)} = E_i\Bigl[ \sum_{s=0}^{\tau_i - 1} 1_k(X_s) \Bigr] \qquad \text{and} \qquad \frac{1}{\pi_i(F)} = E_i[\tau_i]. \qquad (14)$$

Therefore, (13) implies

$$\frac{\partial \pi_m}{\partial F_{ij}} = \pi_i \Bigl( E_j\Bigl[ \sum_{s=0}^{\tau_i - 1} 1_m(X_s) \Bigr] - \pi_m E_j[\tau_i] \Bigr).$$
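As a numerical sanity check of Theorem 1 (a sketch only; the four-state chain, the `invariant` helper, and the index choices below are hypothetical), the hitting-time formula can be compared with a central finite difference of π_m along the perturbation direction e_i e_j^t − e_i e_i^t:

```python
import numpy as np

def invariant(F):
    # invariant distribution: solve pi^t F = pi^t with sum(pi) = 1
    L = F.shape[0]
    A = np.vstack([F.T - np.eye(L), np.ones(L)])
    b = np.zeros(L + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

rng = np.random.default_rng(0)
F = rng.random((4, 4)); F /= F.sum(axis=1, keepdims=True)  # hypothetical chain
i, j, m = 0, 2, 1
pi = invariant(F)

# occupation quantities: N[j, m] = E_j[# visits to m strictly before tau_i]
Fi = F.copy(); Fi[i, :] = 0.0; Fi[:, i] = 0.0
N = np.linalg.inv(np.eye(4) - Fi)
occ = N[j, m]                     # E_j[sum_{s < tau_i} 1_m(X_s)]
Et = N[j, :].sum()                # E_j[tau_i]
formula = pi[i] * (occ - pi[m] * Et)

# central finite difference of pi_m along F + eps (e_i e_j^t - e_i e_i^t)
eps = 1e-6
E = np.zeros((4, 4)); E[i, j] = 1.0; E[i, i] = -1.0
fd = (invariant(F + eps * E)[m] - invariant(F - eps * E)[m]) / (2 * eps)
assert abs(formula - fd) < 1e-6
```

The matrix N = (I − F_i)^{-1} collects exactly the occupation expectations appearing in the proof, via (10) and (12).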

Our goal is to bound the relative errors of the entries of the invariant measure π(F), where for x̃, x ∈ (0, ∞), we define the relative error between x̃ and x to be

$$\lvert \log \tilde x - \log x \rvert. \qquad (15)$$

Our definition of relative error is unusual, but it is closely related to the common definitions, as we explain in the introduction. In Theorem 2, we derive sharp bounds on the logarithmic partial derivatives of the invariant distribution. We want bounds on logarithmic derivatives, since we will ultimately prove bounds on the relative error in π, with relative error defined by (15).

The following lemma will be used in the proof of Theorem 2.

Lemma 2

We have

$$P_i[\tau_j < \tau_i]\; E_j\Bigl[ \sum_{s=0}^{\tau_i - 1} 1_m(X_s) \Bigr] = E_i\Bigl[ \sum_{s=\tau_j}^{\tau_i - 1} 1_m(X_s) \Bigr],$$

and

$$P_i[\tau_j < \tau_i]\; E_j[\tau_i] = E_i[\tau_i - \tau_j,\; \tau_j < \tau_i].$$

Proof

For n ≥ 0, define Y_n := X_{τ_j + n}. For i ∈ Ω, let τ_i^X and τ_i^Y denote the first return times to i for X and Y, respectively. By the strong Markov property, (a) Y is a Markov process with the same transition matrix as X, (b) the distribution of Y_0 is e_j^t, and (c) conditional on Y_0, Y_n is independent of X_0, X_1, . . . , X_{τ_j}. Therefore,

$$\begin{aligned} E\Bigl[ \sum_{s=\tau_j^X}^{\tau_i^X - 1} 1_m(X_s) \,\Big|\, X_0 = i \Bigr] &= E\Bigl[ \sum_{s=\tau_j^X}^{\tau_i^X - 1} 1_m(X_s),\; \tau_j^X < \tau_i^X \,\Big|\, X_0 = i \Bigr] \\ &= E\Bigl[ \sum_{s=0}^{\tau_i^Y - 1} 1_m(Y_s),\; \tau_j^X < \tau_i^X \,\Big|\, X_0 = i \Bigr] \\ &= E\Bigl[ \sum_{s=0}^{\tau_i^Y - 1} 1_m(Y_s) \,\Big|\, Y_0 = j \Bigr]\, P_i[\tau_j^X < \tau_i^X]. \end{aligned}$$

(The second equality above follows from the definition of Y; the third equality follows from the strong Markov property, since the event τ_j^X < τ_i^X is determined by X_0, X_1, . . . , X_{τ_j}.) This proves the first formula in the statement of the lemma; the second formula follows on summing the first over all m.

Using Lemma 2, we now prove our bounds on the logarithmic derivatives.

Theorem 2

We have

$$\frac{1}{2}\, \frac{1}{P_i[\tau_j < \tau_i]} \le \max_m \Bigl\lvert \frac{\partial \log \pi_m}{\partial F_{ij}} \Bigr\rvert \le \frac{1}{P_i[\tau_j < \tau_i]},$$

and

$$\max_m \frac{\partial \log \pi_m}{\partial F_{ij}} - \min_m \frac{\partial \log \pi_m}{\partial F_{ij}} = \frac{1}{P_i[\tau_j < \tau_i]}.$$

Proof

Using Theorem 1, Lemma 2, and (14), we have

$$\frac{\partial \log \pi_m}{\partial F_{ij}} = \frac{\pi_i}{\pi_m}\, E_j\Bigl[ \sum_{s=0}^{\tau_i - 1} 1_m(X_s) \Bigr] - \pi_i E_j[\tau_i] \qquad (16)$$

$$\begin{aligned} &= \frac{1}{P_i[\tau_j < \tau_i]}\, \frac{\pi_i}{\pi_m}\, E_i\Bigl[ \sum_{s=\tau_j}^{\tau_i - 1} 1_m(X_s) \Bigr] - \pi_i E_j[\tau_i] \\ &= \frac{1}{P_i[\tau_j < \tau_i]}\, \frac{\pi_i}{\pi_m} \Bigl( E_i\Bigl[ \sum_{s=0}^{\tau_i - 1} 1_m(X_s) \Bigr] - E_i\Bigl[ \sum_{s=0}^{\tau_j - 1} 1_m(X_s),\; \tau_j < \tau_i \Bigr] \Bigr) - \pi_i E_j[\tau_i] \\ &= \frac{1}{P_i[\tau_j < \tau_i]} \Bigl( 1 - \frac{\pi_i}{\pi_m}\, E_i\Bigl[ \sum_{s=0}^{\tau_j - 1} 1_m(X_s),\; \tau_j < \tau_i \Bigr] \Bigr) - \pi_i E_j[\tau_i]. \qquad (17) \end{aligned}$$

We observe that the term (π_i/π_m) E_j[Σ_{s=0}^{τ_i−1} 1_m(X_s)] in formula (16) is nonnegative, hence that term attains a minimum value of zero when m = i. Therefore, we have

$$\min_m \frac{\partial \log \pi_m}{\partial F_{ij}} = -\pi_i E_j[\tau_i], \qquad (18)$$

with the minimum attained when m = i, since the other term in formula (16) does not depend on m. By a similar argument using (17),

$$\max_m \frac{\partial \log \pi_m}{\partial F_{ij}} = \frac{1}{P_i[\tau_j < \tau_i]} - \pi_i E_j[\tau_i], \qquad (19)$$

and the maximum is attained when m = j.

Subtracting (18) from (19) gives

$$\max_m \frac{\partial \log \pi_m}{\partial F_{ij}} - \min_m \frac{\partial \log \pi_m}{\partial F_{ij}} = \frac{1}{P_i[\tau_j < \tau_i]},$$

hence, since the larger of the absolute values of the maximum and minimum is at least half of their difference,

$$\frac{1}{2}\, \frac{1}{P_i[\tau_j < \tau_i]} \le \max_m \Bigl\lvert \frac{\partial \log \pi_m}{\partial F_{ij}} \Bigr\rvert.$$

Finally, by Lemma 2 and (14),

$$0 \le \pi_i E_j[\tau_i] = \frac{\pi_i\, E_i[\tau_i - \tau_j,\; \tau_j < \tau_i]}{P_i[\tau_j < \tau_i]} \le \frac{\pi_i E_i[\tau_i]}{P_i[\tau_j < \tau_i]} = \frac{1}{P_i[\tau_j < \tau_i]},$$

so (18) and (19) imply

$$\max_m \Bigl\lvert \frac{\partial \log \pi_m}{\partial F_{ij}} \Bigr\rvert \le \frac{1}{P_i[\tau_j < \tau_i]}.$$
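The sandwich in Theorem 2 is easy to probe numerically. In the sketch below (hypothetical random chain; `p` is P_i[τ_j < τ_i], computed by the occupation-measure linear algebra that Section 5 develops), the logarithmic derivatives are approximated by central differences:

```python
import numpy as np

def invariant(F):
    # invariant distribution: solve pi^t F = pi^t with sum(pi) = 1
    L = F.shape[0]
    A = np.vstack([F.T - np.eye(L), np.ones(L)])
    b = np.zeros(L + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

rng = np.random.default_rng(5)
L = 5
F = rng.random((L, L)); F /= F.sum(axis=1, keepdims=True)  # hypothetical chain
i, j = 1, 3

# p = P_i[tau_j < tau_i]: remove row and column j, then use the hitting formula
Fj = F.copy(); Fj[j, :] = 0.0; Fj[:, j] = 0.0
col = F[:, j].copy(); col[j] = 0.0
M = np.linalg.inv(np.eye(L) - Fj)
p = (M @ col)[i] / M[i, i]

# central-difference logarithmic derivatives d log(pi_m) / d F_ij for all m
eps = 1e-6
E = np.zeros((L, L)); E[i, j] = 1.0; E[i, i] = -1.0
d = (np.log(invariant(F + eps * E)) - np.log(invariant(F - eps * E))) / (2 * eps)

assert abs((d.max() - d.min()) - 1.0 / p) < 1e-3   # the equality in Theorem 2
assert 0.5 / p - 1e-6 <= np.abs(d).max() <= 1.0 / p + 1e-6
```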

Corollary 1 gives a simplified version of the bound in Theorem 2. This estimate can be used to derive [18, Theorem 1], which we have stated in equation (6) above; we omit that derivation.

Corollary 1

Whenever F_ij ≠ 0,

$$\Bigl\lvert \frac{\partial \log \pi_m}{\partial F_{ij}} \Bigr\rvert \le \frac{1}{F_{ij}}.$$

Proof

We have

$$P_i[\tau_j < \tau_i] \ge P_i[X_1 = j] = F_{ij},$$

and so the result follows by Theorem 2.

4. Global perturbation bounds

In this section, we use our bounds on the derivatives of the invariant distribution to prove global perturbation estimates. Our estimates assume that both the exact transition matrix F and the perturbed matrix F̃ are bounded below by some irreducible substochastic matrix S. As a consequence, coefficients Q_ij(S) depending on S arise.

We define Q_ij(S) in terms of a Markov chain with transition matrix depending on S, and our perturbation results are based on comparisons between this chain and other chains with transition matrices G ≥ S. Therefore, to avoid confusion, we let

$$P_i[A](G)$$

denote the probability of the event A for X^G a chain with transition matrix G, conditioned on X_0^G = i. To give a specific example, we intend P_i[τ_j < τ_i](G) to mean the probability that X^G hits j before returning to i, conditional on X_0^G = i.

We now define Q_ij(S).

Definition 1

For S an irreducible and substochastic or stochastic matrix, let X^ω be the Markov chain with state space Ω ∪ {ω} and transition matrix

$$S^\omega := \begin{pmatrix} S & e - Se \\ 0 & 1 \end{pmatrix},$$

where the rows and columns are indexed by Ω followed by ω. We think of X^ω as a chain with transition probabilities S, but augmented by an absorbing state ω to adjust for the fact that S is substochastic. For all i, j ∈ Ω with i ≠ j, we define

$$Q_{ij}(S) := P_i[\tau_j < \min\{\tau_i, \tau_\omega\}](S^\omega).$$

Remark 3

We observe that for F stochastic,

$$Q_{ij}(F) = P_i[\tau_j < \min\{\tau_i, \tau_\omega\}](F^\omega) = P_i[\tau_j < \tau_i](F),$$

since the absorbing state ω does not communicate with the other states Ω when F is stochastic.

We now show that Q_ij(S) is monotone as a function of S. This is the crucial step in deriving global perturbation bounds from the bounds on derivatives in Theorem 2.

Lemma 3

Let S be an irreducible substochastic matrix. If F is a stochastic or substochastic matrix with F ≥ S, then

$$P_i[\tau_j < \tau_i](F^\omega) = Q_{ij}(F) \ge Q_{ij}(S) > 0.$$

In addition, for any substochastic or stochastic matrix S,

$$Q_{ij}(S) \ge S_{ij}.$$

Proof

Let 𝒫_{ij}^M be the set of all walks of length M in Ω which start at i, end at j, and visit i and j only at the endpoints. To be more precise, define

$$\mathcal{P}_{ij}^{M} := \bigl\{ \gamma : \mathbb{Z} \cap [0, M] \to \Omega \;:\; \gamma(0) = i,\; \gamma(M) = j,\; \gamma(k) \notin \{i, j\} \ \text{for}\ 0 < k < M \bigr\}.$$

We observe that since F ≥ S,

$$\begin{aligned} P_i[\tau_j < \tau_i](F^\omega) &= \sum_{M=1}^{\infty} \sum_{\gamma \in \mathcal{P}_{ij}^{M}} P_i[X_k = \gamma(k) \ \text{for}\ k = 1, \dots, M](F^\omega) \\ &\ge \sum_{M=1}^{\infty} \sum_{\gamma \in \mathcal{P}_{ij}^{M}} P_i[X_k = \gamma(k) \ \text{for}\ k = 1, \dots, M](S^\omega) \\ &= P_i[\tau_j < \min\{\tau_i, \tau_\omega\}](S^\omega) = Q_{ij}(S). \end{aligned}$$

Since S is irreducible, we also have that

$$Q_{ij}(S) = P_i[\tau_j < \min\{\tau_i, \tau_\omega\}](S^\omega) > 0.$$

Finally,

$$Q_{ij}(S) = P_i[\tau_j < \min\{\tau_i, \tau_\omega\}](S^\omega) \ge P_i[X_1 = j](S^\omega) = S_{ij},$$

which concludes the proof.

Combining Theorem 2 with Lemma 3 yields our global perturbation estimate.

Theorem 3

Let F, F̃ be stochastic matrices, let S be substochastic and irreducible, and assume that F, F̃ ≥ S. We have

$$\lvert \log \pi_m(\tilde F) - \log \pi_m(F) \rvert \le \sum_{\substack{i,j \in \Omega \\ i \ne j}} \bigl\lvert \log\bigl(\tilde F_{ij} - S_{ij} + Q_{ij}(S)\bigr) - \log\bigl(F_{ij} - S_{ij} + Q_{ij}(S)\bigr)\bigr\rvert.$$

Remark 4

We allow S to be stochastic in Definition 1 and also in the hypotheses of Theorem 3. However, we observe that if S is stochastic, then the conclusion of the theorem is trivial, since S is the unique stochastic matrix F with F ≥ S.

Proof

Let G ∈ {tF̃+(1–t)F : t ∈ [0, 1]}. Since Qij(S) = Pi[τj < min{τi, τω}](Sω), we have

$$\begin{aligned} P_i[\tau_j < \tau_i](G) &= G_{ij} + P_i[1 < \tau_j < \tau_i](G) \\ &\ge G_{ij} + P_i[1 < \tau_j < \min\{\tau_i, \tau_\omega\}](S^\omega) \\ &= G_{ij} + P_i[\tau_j < \min\{\tau_i, \tau_\omega\}](S^\omega) - S_{ij} \\ &= G_{ij} + Q_{ij}(S) - S_{ij}. \end{aligned}$$

Therefore,

$$\begin{aligned} \lvert \log \pi_m(\tilde F) - \log \pi_m(F) \rvert &= \Bigl\lvert \sum_{i \ne j} \int_0^1 \frac{\partial \log \pi_m}{\partial F_{ij}}\bigl( t\tilde F + (1-t)F \bigr)\, (\tilde F_{ij} - F_{ij})\, dt \Bigr\rvert \\ &\le \sum_{i \ne j} \int_0^1 \frac{\lvert \tilde F_{ij} - F_{ij} \rvert}{P_i[\tau_j < \tau_i]\bigl( t\tilde F + (1-t)F \bigr)}\, dt \\ &\le \sum_{i \ne j} \int_0^1 \frac{\lvert \tilde F_{ij} - F_{ij} \rvert}{t \tilde F_{ij} + (1-t) F_{ij} + Q_{ij}(S) - S_{ij}}\, dt \\ &= \sum_{i \ne j} \bigl\lvert \log\bigl(\tilde F_{ij} - S_{ij} + Q_{ij}(S)\bigr) - \log\bigl(F_{ij} - S_{ij} + Q_{ij}(S)\bigr)\bigr\rvert. \end{aligned}$$

The first equality holds since, by Lemma 1, the invariant distribution π is Fréchet differentiable on an open neighborhood of the set of irreducible, stochastic matrices, so directional derivatives can be computed using partial derivatives as above; the first inequality uses Theorem 2. The denominator in the third line is positive since F, F̃ ≥ S implies

$$t \tilde F_{ij} + (1-t) F_{ij} - S_{ij} \ge 0,$$

and by Lemma 3, Qij(S) > 0.

Theorem 3 takes a somewhat complicated form, so in Corollary 2, we present a simplified version. The proof of Corollary 2 shows that the bound in Theorem 3 is always smaller than the bound in Corollary 2.

Corollary 2

Let F, F̃ be stochastic matrices, let S be substochastic and irreducible, and assume that F, F̃ ≥ S. We have

$$\lvert \log \pi_m(\tilde F) - \log \pi_m(F) \rvert \le \sum_{\substack{i,j \in \Omega \\ i \ne j}} \frac{\lvert \tilde F_{ij} - F_{ij} \rvert}{Q_{ij}(S)}.$$

Proof

We have

$$\bigl\lvert \log\bigl(\tilde F_{ij} + Q_{ij}(S) - S_{ij}\bigr) - \log\bigl(F_{ij} + Q_{ij}(S) - S_{ij}\bigr)\bigr\rvert \le \Bigl( \max_{x \in [Q_{ij}(S),\, \infty)} \frac{d \log x}{dx} \Bigr)\, \bigl\lvert (\tilde F_{ij} - S_{ij}) - (F_{ij} - S_{ij}) \bigr\rvert = \frac{\lvert \tilde F_{ij} - F_{ij} \rvert}{Q_{ij}(S)},$$

and so the result follows directly from Theorem 3.
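For a concrete illustration of Corollary 2 (a hypothetical sketch: F is a random chain, S := 0.9F, and F̃ is another stochastic matrix above S), one can compare both sides of the bound, with Q_ij(S) evaluated via the formula of Lemma 5 below:

```python
import numpy as np

def invariant(F):
    # invariant distribution: solve pi^t F = pi^t with sum(pi) = 1
    L = F.shape[0]
    A = np.vstack([F.T - np.eye(L), np.ones(L)])
    b = np.zeros(L + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def Q(S, i, j):
    # Q_ij(S) = e_i^t (I - S_j)^{-1} S_{-j,j} / e_i^t (I - S_j)^{-1} e_i
    L = S.shape[0]
    Sj = S.copy(); Sj[j, :] = 0.0; Sj[:, j] = 0.0
    col = S[:, j].copy(); col[j] = 0.0
    M = np.linalg.inv(np.eye(L) - Sj)
    return (M @ col)[i] / M[i, i]

rng = np.random.default_rng(6)
L = 5
F = rng.random((L, L)); F /= F.sum(axis=1, keepdims=True)   # hypothetical chain
S = 0.9 * F                                                  # substochastic lower bound
Ft = S + 0.1 * rng.dirichlet(np.ones(L), size=L)             # stochastic, Ft >= S

lhs = np.max(np.abs(np.log(invariant(Ft)) - np.log(invariant(F))))
rhs = sum(abs(Ft[i, j] - F[i, j]) / Q(S, i, j)
          for i in range(L) for j in range(L) if i != j)
assert lhs <= rhs
```

Adding a rescaled Dirichlet sample to S is just one convenient way to generate an admissible perturbation: each row of F̃ then sums to 0.9 + 0.1 = 1.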

We show in Theorem 4 that both Theorem 3 and Corollary 2 are sharp. That is, we show that ρ_ij(S) = Q_ij(S) is within a factor of two of the largest value of ρ_ij(S) such that a bound of the form

$$\lvert \log \pi_m(\tilde F) - \log \pi_m(F) \rvert \le \sum_{\substack{i,j \in \Omega \\ i \ne j}} \bigl\lvert \log\bigl(\tilde F_{ij} - S_{ij} + \rho_{ij}(S)\bigr) - \log\bigl(F_{ij} - S_{ij} + \rho_{ij}(S)\bigr)\bigr\rvert$$

holds, and we show that η_ij(S) = Q_ij(S)^{-1} is within a factor of two of the smallest value of η_ij(S) such that a bound of the form

$$\lvert \log \pi_m(\tilde F) - \log \pi_m(F) \rvert \le \sum_{\substack{i,j \in \Omega \\ i \ne j}} \eta_{ij}(S)\, \lvert \tilde F_{ij} - F_{ij} \rvert$$

holds.

Remark 5

We note that some authors call a bound sharp if it is possible for equality to hold. For example, a bound of form (1) may be called sharp if for every stochastic F, there exists a stochastic F̃ so that equality holds. We prefer to call a bound sharp if it is the best bound of a given form, possibly up to a small constant factor. Thus, for bounds of the type which we consider, we require that for every S and every i ≠ j, Q_ij(S)^{-1} be as small as possible.

Theorem 4

Let S be an irreducible substochastic matrix, and let i, j ∈ Ω with i ≠ j. For every ε > 0, there exist stochastic matrices F̃, F with F̃, F ≥ S so that

$$\max_{m \in \Omega}\, \lvert \log \pi_m(\tilde F) - \log \pi_m(F) \rvert \ge \frac{1}{2} \bigl( Q_{ij}(S) + \varepsilon \bigr)^{-1} \lvert \tilde F_{ij} - F_{ij} \rvert \ge \bigl\lvert \log\bigl(\tilde F_{ij} - S_{ij} + 2(Q_{ij}(S) + \varepsilon)\bigr) - \log\bigl(F_{ij} - S_{ij} + 2(Q_{ij}(S) + \varepsilon)\bigr)\bigr\rvert$$

and F̃_kl − F_kl = 0 for all (k, l) except (i, j) and (i, i).

Proof

Define

$$F_{kl} := \begin{cases} S_{kl} & \text{if } l \ne i, \\ 1 - \sum_{m \ne i} S_{km} & \text{if } l = i. \end{cases}$$

F is stochastic, F ≥ S, F_j − F_j e_i e_i^t = S_j − S_j e_i e_i^t, and F_{-j,j} = S_{-j,j}. Therefore,

$$Q_{ij}(S) = Q_{ij}(F) = P_i[\tau_j < \tau_i](F).$$

We now distinguish two cases: Σ_{m∈Ω} S_im = 1 and Σ_{m∈Ω} S_im < 1. In the first case, if F ≥ S and F is stochastic, then F_im = S_im for all m ∈ Ω. Therefore, F̃_ij − F_ij = 0 for all stochastic F, F̃ ≥ S, and so the conclusion of the theorem holds trivially. In the second case, we observe that Σ_{m∈Ω} S_im < 1 implies F_ii > S_ii ≥ 0. Therefore, for any sufficiently small η > 0,

$$F^\eta := F + \eta\, (e_i e_j^t - e_i e_i^t)$$

is stochastic, and F^η ≥ S. By Theorem 2, we have

$$\max_{m \in \Omega} \Bigl\lvert \frac{\partial \log \pi_m}{\partial F_{ij}}(F) \Bigr\rvert \ge \frac{1}{2}\, \frac{1}{P_i[\tau_j < \tau_i]} = \frac{1}{2}\, \frac{1}{Q_{ij}(S)}.$$

It follows that for every ε > 0 there exists an η > 0 with

$$\max_{m \in \Omega}\, \lvert \log \pi_m(F^\eta) - \log \pi_m(F) \rvert \ge \frac{1}{2} \bigl( Q_{ij}(S) + \varepsilon \bigr)^{-1} \lvert F^\eta_{ij} - F_{ij} \rvert \ge \bigl\lvert \log\bigl(F^\eta_{ij} - S_{ij} + 2(Q_{ij}(S) + \varepsilon)\bigr) - \log\bigl(F_{ij} - S_{ij} + 2(Q_{ij}(S) + \varepsilon)\bigr)\bigr\rvert.$$

(The second inequality follows by an argument similar to the proof of Corollary 2.)

Theorem 3 and Corollary 2 take unusual forms, and at first glance the condition F, F̃ ≥ S may seem inconvenient. In Remarks 6 and 7, we explain how to use these estimates in the common case when only one matrix and a bound on the error are known.

Remark 6

For a given application, the best upper bounds are obtained by choosing the largest possible S. This is a consequence of Lemma 3. We apply this principle in Remark 7.

Remark 7

Suppose that F̃ has been computed as an approximation to an unknown stochastic matrix F and that we have some bound on the error between F̃ and F. For example, suppose that for some matrix α ≥ 0,

$$\lvert \tilde F_{ij} - F_{ij} \rvert \le \alpha_{ij} \qquad \text{for all } i, j \in \Omega.$$

In this case, we define

$$S_{ij} := \max\{ \tilde F_{ij} - \alpha_{ij},\; 0 \} \qquad \text{for all } i, j \in \Omega.$$

We observe that this choice of S is the largest possible such that F ≥ S for every F with |F_ij − F̃_ij| ≤ α_ij. Therefore, by Lemma 3, the coefficients Q_ij(S) are as large as possible, giving the best possible upper bounds.

If S is irreducible, we have

$$\max_{k \in \Omega}\, \lvert \log \pi_k(\tilde F) - \log \pi_k(F) \rvert \le \sum_{\substack{i,j \in \Omega \\ i \ne j}} \bigl\lvert \log\bigl(\tilde F_{ij} - S_{ij} + Q_{ij}(S)\bigr) - \log\bigl(F_{ij} - S_{ij} + Q_{ij}(S)\bigr)\bigr\rvert \le \sum_{\substack{i,j \in \Omega \\ i \ne j}} \frac{\lvert \tilde F_{ij} - F_{ij} \rvert}{Q_{ij}(S)}.$$

In general, if S is reducible, then no statement can be made about the error in the invariant distribution. In fact, if S is reducible, then there is a reducible, stochastic F with |F̃_ij − F_ij| ≤ α_ij, and the invariant distribution of such an F need not even be unique.
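A minimal sketch of this construction (the matrices F̃ and α below are hypothetical):

```python
import numpy as np

Ft = np.array([[0.50, 0.50, 0.00],
               [0.30, 0.40, 0.30],
               [0.00, 0.60, 0.40]])   # computed approximation F-tilde
alpha = np.full((3, 3), 0.05)         # entrywise error bound |Ft - F| <= alpha

S = np.maximum(Ft - alpha, 0.0)       # largest S with F >= S for every admissible F

# the bound applies only if S is irreducible; check via reachability:
# (I + A)^(L-1) has no zero entries iff the adjacency pattern A is irreducible
L = S.shape[0]
reach = np.linalg.matrix_power(np.eye(L) + (S > 0), L - 1)
assert np.all(reach > 0), "S is reducible: no error bound is available"
```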

5. An efficient algorithm for computing sensitivities

In Theorem 5 below, we show that the coefficients Q_ij(S) can be computed by inverting an L × L matrix and performing additional operations of cost O(L^2). Therefore, the cost of estimating the error in π using either Theorem 3 or Corollary 2 is comparable to the cost of computing π. Moreover, the cost of computing our bounds is the same as the cost of computing most other bounds in the literature, for example, those based on the group inverse; see Remark 8.

Our first step is to characterize Qij(S) as the solution of a linear equation. We advise the reader that we will make extensive use of the notation introduced in Section 2. In addition, we define

$$S_{-j,j} := (I - e_j e_j^t)\, S e_j \qquad \text{and} \qquad S_{j,-j} := e_j^t S\, (I - e_j e_j^t) \qquad (20)$$

to be the jth column and row of S with the jth entry set to zero, respectively.

Lemma 4

Let S be irreducible and substochastic or stochastic. For i, j ∈ Ω with i ≠ j, let q^{ij}(S) ∈ e_j^⊥ be the vector defined by

$$q_k^{ij}(S) := P_k[\tau_j < \min\{\tau_i, \tau_\omega\}](S^\omega)$$

for all k ∈ Ω \ {j}. Define S_j as in Section 2, and S_{-j,j} by (20). The operator I − S_j + S_j e_i e_i^t is invertible on e_j^⊥, and q^{ij}(S) is the unique solution of the equation

$$(I - S_j + S_j e_i e_i^t)\, q^{ij}(S) = S_{-j,j}. \qquad (21)$$

Proof

Let i, j ∈ Ω with i ≠ j. We have

$$P_k[\tau_j < \min\{\tau_i, \tau_\omega\}] = \sum_{l \in \Omega \cup \{\omega\}} P_k[\tau_j < \min\{\tau_i, \tau_\omega\},\; X_1 = l],$$

and for k ≠ j,

$$P_k[\tau_j < \min\{\tau_i, \tau_\omega\},\; X_1 = l] = \begin{cases} 0 & \text{if } l \in \{i, \omega\}, \\ S_{kj} & \text{if } l = j, \\ P_l[\tau_j < \min\{\tau_i, \tau_\omega\}]\, S_{kl} & \text{if } l \notin \{i, j, \omega\}. \end{cases}$$

Therefore,

$$P_k[\tau_j < \min\{\tau_i, \tau_\omega\}] = \sum_{l \notin \{i, j, \omega\}} S_{kl}\, P_l[\tau_j < \min\{\tau_i, \tau_\omega\}] + S_{kj} \qquad \text{for } k \ne j. \qquad (22)$$

We observe that equation (22) above can be expressed as

$$q^{ij} = (S_j - S_j e_i e_i^t)\, q^{ij} + S_{-j,j}. \qquad (23)$$

We now claim that if S is irreducible, then I − S_j + S_j e_i e_i^t is invertible, which shows that q^{ij} is the unique solution of (23). By (10), (I − S_j)^{-1} = Σ_{m=0}^∞ S_j^m. We now observe that

$$0 \le S_j - S_j e_i e_i^t \le S_j,$$

so Σ_{m=0}^∞ (S_j − S_j e_i e_i^t)^m converges, hence I − S_j + S_j e_i e_i^t is invertible.

The proof of Theorem 5 uses the following lemma.

Lemma 5

For S substochastic and irreducible,

$$Q_{ij}(S) = \frac{e_i^t (I - S_j)^{-1} S_{-j,j}}{e_i^t (I - S_j)^{-1} e_i}.$$

Proof

We recall that

$$(I - S_j + S_j e_i e_i^t)\, q^{ij}(S) = S_{-j,j}.$$

Multiplying both sides by (I − S_j)^{-1} then yields

$$\bigl( I + (I - S_j)^{-1} S_j e_i e_i^t \bigr)\, q^{ij}(S) = (I - S_j)^{-1} S_{-j,j}.$$

Now S_j ≥ 0 is a substochastic matrix, so by (10), (I − S_j)^{-1} = Σ_{m=0}^∞ S_j^m ≥ 0. Therefore, 1 + e_i^t (I − S_j)^{-1} S_j e_i ≥ 1, and by the Sherman–Morrison formula,

$$\bigl( I + (I - S_j)^{-1} S_j e_i e_i^t \bigr)^{-1} = I - \frac{(I - S_j)^{-1} S_j e_i e_i^t}{1 + e_i^t (I - S_j)^{-1} S_j e_i} = I - \frac{(I - S_j)^{-1} S_j e_i e_i^t}{e_i^t (I - S_j)^{-1} e_i}.$$

Thus,

$$Q_{ij}(S) = q_i^{ij}(S) = e_i^t \bigl( I + (I - S_j)^{-1} S_j e_i e_i^t \bigr)^{-1} (I - S_j)^{-1} S_{-j,j} = \frac{e_i^t (I - S_j)^{-1} S_{-j,j}}{e_i^t (I - S_j)^{-1} e_i}. \qquad (24)$$
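Formula (24) can be checked against the linear system (21) directly; the sketch below uses a hypothetical substochastic S = 0.9F:

```python
import numpy as np

rng = np.random.default_rng(7)
L = 5
F = rng.random((L, L)); F /= F.sum(axis=1, keepdims=True)
S = 0.9 * F                             # irreducible substochastic lower bound
i, j = 0, 3

Sj = S.copy(); Sj[j, :] = 0.0; Sj[:, j] = 0.0   # S_j as a full-size operator
col = S[:, j].copy(); col[j] = 0.0              # S_{-j,j}
M = np.linalg.inv(np.eye(L) - Sj)
Q_formula = (M @ col)[i] / M[i, i]              # Lemma 5, formula (24)

# solve (I - S_j + S_j e_i e_i^t) q = S_{-j,j}, i.e. equation (21)
ei = np.zeros(L); ei[i] = 1.0
A = np.eye(L) - Sj + np.outer(Sj @ ei, ei)
q = np.linalg.solve(A, col)
assert np.isclose(q[i], Q_formula)              # q_i^{ij} agrees with (24)
assert S[i, j] <= Q_formula <= 1.0              # consistent with Lemma 3
```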

Theorem 5

Let S ∈ ℝ^{L×L} be irreducible and substochastic. The set of all coefficients Q_ij(S) can be computed by inverting an L × L matrix and then performing additional operations of cost O(L^2).

Proof

We begin with some definitions and notation. Define

$$A(j) := I - S + e_j e_j^t S = \begin{pmatrix} I - S_j & -S_{-j,j} \\ 0 & 1 \end{pmatrix}.$$

(The right-hand side above denotes the block decomposition of A(j) with respect to the decomposition ℝ^L ≅ e_j^⊥ ⊕ ℝe_j. Thus, for example, I − S_j is to be interpreted as an operator on e_j^⊥; cf. the definition of S_j in Section 2.) We observe that A(j) is invertible, since by (10), I − S_j is invertible, so

$$A(j)^{-1} = \begin{pmatrix} (I - S_j)^{-1} & (I - S_j)^{-1} S_{-j,j} \\ 0 & 1 \end{pmatrix}. \qquad (25)$$

In the first step of the algorithm, we compute A(1)^{-1}, which costs O(L^3) operations. Second, we compute SA(1)^{-1}. Given A(1)^{-1}, this can be done in O(L^2) operations using the formula

$$S A(1)^{-1} = \begin{pmatrix} (I - S_1)^{-1} - I & (I - S_1)^{-1} S_{-1,1} \\ S_{1,-1} (I - S_1)^{-1} & S_{11} + S_{1,-1} (I - S_1)^{-1} S_{-1,1} \end{pmatrix}.$$

(The formula is easily proved by direct calculation using (25); every block on the right is assembled from the blocks of A(1)^{-1} with at most one matrix–vector product.) Third, we compute Q_{i1}(S) for all i ≠ 1. By Lemma 5 and (25), we have

$$Q_{i1}(S) = \frac{e_i^t (I - S_1)^{-1} S_{-1,1}}{e_i^t (I - S_1)^{-1} e_i} = \frac{A(1)^{-1}_{i1}}{A(1)^{-1}_{ii}}. \qquad (26)$$

Therefore, once A(1)^{-1} has been computed, it costs O(L) operations to compute Q_{i1}(S) for all i ≠ 1.

We compute the remaining sensitivities Q_ij(S) for j ≠ 1 by a formula analogous to (26), but with A(j)^{-1} in place of A(1)^{-1}. To do so efficiently, we use the Sherman–Morrison–Woodbury identity to derive a formula expressing A(j)^{-1} in terms of A(1)^{-1}:

$$A(j)^{-1} = \bigl( I - S + e_1 e_1^t S + (e_j e_j^t S - e_1 e_1^t S) \bigr)^{-1} = A(1)^{-1} - A(1)^{-1} (e_j - e_1)\, C(j)^{-1} (e_j^t - e_1^t)\, S A(1)^{-1}, \qquad (27)$$

where

$$C(j) := I + (e_j^t - e_1^t)\, S A(1)^{-1} (e_j - e_1).$$

In the fourth step of the algorithm, we loop over all j ≠ 1. For each j, we first compute C(j)–1, which requires a total of O(L) operations. We then compute A(j)–1ej at a cost of O(L) operations using a formula derived from (27):

$$\begin{aligned} A(j)^{-1} e_j &= A(1)^{-1} e_j - A(1)^{-1} (e_j - e_1)\, C(j)^{-1} (e_j^t - e_1^t)\, S A(1)^{-1} e_j \\ &= A(1)^{-1} e_j - \bigl( A(1)^{-1} e_j - A(1)^{-1} e_1 \bigr)\, C(j)^{-1} \bigl( (S A(1)^{-1})_{jj} - (S A(1)^{-1})_{1j} \bigr). \end{aligned}$$

Next, we must compute A(j)^{-1}_{ii} for all i ≠ j. By (27),

$$A(j)^{-1}_{ii} = A(1)^{-1}_{ii} - \bigl( A(1)^{-1}_{ij} - A(1)^{-1}_{i1} \bigr)\, C(j)^{-1} \bigl( (S A(1)^{-1})_{ji} - (S A(1)^{-1})_{1i} \bigr),$$

so the cost of computing A(j)^{-1}_{ii} for all i ≠ j is O(L). Finally, we compute Q_ij(S) for all i ≠ j. By Lemma 5 and (25), we have

$$Q_{ij}(S) = \frac{e_i^t (I - S_j)^{-1} S_{-j,j}}{e_i^t (I - S_j)^{-1} e_i} = \frac{A(j)^{-1}_{ij}}{A(j)^{-1}_{ii}}, \qquad (28)$$

so this last step costs O(L).

The total cost of the algorithm described above is a single L × L matrix inversion plus O(L^2) additional operations.
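For testing purposes, the sensitivities can also be computed by the following plain reference sketch, which inverts A(j) separately for each j (O(L^4) overall, unlike the O(L^3) update scheme in the proof above) and reads off Q_ij(S) = A(j)^{-1}_{ij} / A(j)^{-1}_{ii}; the matrix S is hypothetical:

```python
import numpy as np

def all_sensitivities(S):
    """All Q_ij(S); a straightforward O(L^4) reference sketch, not the
    O(L^3) Sherman-Morrison update scheme of Theorem 5."""
    L = S.shape[0]
    Q = np.full((L, L), np.nan)
    for j in range(L):
        A = np.eye(L) - S
        A[j, :] = 0.0; A[j, j] = 1.0     # row j of A(j) = I - S + e_j e_j^t S is e_j^t
        Ainv = np.linalg.inv(A)
        for i in range(L):
            if i != j:
                Q[i, j] = Ainv[i, j] / Ainv[i, i]
    return Q

rng = np.random.default_rng(8)
L = 6
F = rng.random((L, L)); F /= F.sum(axis=1, keepdims=True)
S = 0.95 * F
Q = all_sensitivities(S)
off = ~np.eye(L, dtype=bool)
assert np.all(Q[off] > 0) and np.all(Q[off] >= S[off])   # consistent with Lemma 3
```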

Remark 8

Most perturbation bounds in the literature have the same computational complexity as our bound. For example, some bounds are based on the group inverse of I − F [13,16]. The cost of computing the group inverse is O(L^3) [6], so our bound has the same complexity as [13,16]. Computing the bound on relative error in [9] requires finding ‖(I − F_j)^{-1}‖ for all j; see (5). This could be done in O(L^3) operations by methods similar to Theorem 5, so we conjecture that our bound and the bound of [9] have the same complexity. On the other hand, the bound on relative error in [18] (see (6)) requires almost no calculation at all.

Remark 9

We give the algorithm above to show that the cost of computing our bounds is comparable to the cost of other bounds, in principle. We do not claim that the algorithm is always reliable, since we have not performed a complete stability analysis. Nonetheless, in many cases, the computation of Qij(S) is stable even when the computation of π(F) is unstable. For example, suppose that S = αF for F a stochastic matrix and α ∈ (0, 1). (This would be a good choice of S if all entries of F were known with relative error α–1; cf. Remark 7.) Let ∥M denote the operator norm of the matrix MRL×L with respect to the -norm

vmaxi=1,,LviforvRL.

It is a standard result that

$$\|M\|_\infty = \max_{i=1,\dots,L} \sum_{j=1}^{L} |M_{ij}|.$$

Therefore, since F is stochastic and 0 ≤ S_j ≤ S = αF, we have ‖S_j‖_∞ ≤ α, and

$$\left\|(I - S_j)^{-1}\right\|_\infty = \left\|\sum_{n=0}^{\infty} S_j^n\right\|_\infty \le \sum_{n=0}^{\infty} \|S_j\|_\infty^n \le \sum_{n=0}^{\infty} \alpha^n = \frac{1}{1-\alpha}.$$

Moreover,

$$\|I - S_j\|_\infty \le 2,$$

and so the condition number for the inversion of ISj satisfies

$$\kappa_\infty(I - S_j) = \|I - S_j\|_\infty \left\|(I - S_j)^{-1}\right\|_\infty \le \frac{2}{1-\alpha}.$$

We conclude that if α is not too close to one, then the algorithm is stable for any F. For example, if the entries of F are known with 2% relative error, then we choose S = 0.98F, and we have κ_∞(I − S_j) ≤ 100.
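This bound is easy to confirm numerically. In the sketch below we take S_j to be S with its jth column zeroed; this is an assumption about the notation, but the argument above uses only ‖S_j‖_∞ ≤ ‖S‖_∞ = α, which holds for any such submatrix:

```python
import numpy as np

# Check kappa_inf(I - S_j) <= 2/(1 - alpha) for S = alpha * F.
# S_j is taken here as S with the j-th column zeroed (a notational
# assumption); zeroing entries can only decrease the inf-norm row sums.
rng = np.random.default_rng(1)
L, alpha = 30, 0.98
F = rng.random((L, L))
F /= F.sum(axis=1, keepdims=True)          # a random stochastic matrix
S = alpha * F

worst = 0.0
for j in range(L):
    Sj = S.copy()
    Sj[:, j] = 0.0                         # ||S_j||_inf <= alpha
    M = np.eye(L) - Sj
    kappa = np.linalg.norm(M, np.inf) * np.linalg.norm(np.linalg.inv(M), np.inf)
    worst = max(worst, kappa)

assert worst <= 2.0 / (1.0 - alpha) + 1e-8   # = 100 for alpha = 0.98
```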

6. The hilly landscape example

In this section, we discuss an example in which the invariant distribution is very sensitive to some entries of the transition matrix, but insensitive to others. The example arose from a problem in computational statistical physics. We will use the example to compare our results with previous work, especially [2,9,18] and the bounds on absolute error summarized in [3].

6.1. Transition matrix and physical interpretation

Our hilly landscape example is a simple analogue of the dynamics of a single particle in contact with a heat bath. Define V : R → R by

$$V(x) = \frac{1}{4\pi}\cos(4\pi x).$$

Take L ∈ N, and let Ω := {1, 2, . . . , L} with periodic boundary conditions; that is, identify Ω with Z/LZ. Given V, we define a probability distribution on Ω by

$$\pi(i) := \frac{\exp\left(-L V\!\left(\frac{i}{L}\right)\right)}{\sum_{k=1}^{L} \exp\left(-L V\!\left(\frac{k}{L}\right)\right)}.$$

The measure π is in detailed balance with the Markov chain X having transition matrix F ∈ R^{Ω×Ω} defined by

$$F_{ii} := \frac12\left(\frac{\pi(i)}{\pi(i-1)+\pi(i)} + \frac{\pi(i)}{\pi(i+1)+\pi(i)}\right),\qquad F_{i,i+1} := \frac12\,\frac{\pi(i+1)}{\pi(i+1)+\pi(i)},\qquad F_{i,i-1} := \frac12\,\frac{\pi(i-1)}{\pi(i-1)+\pi(i)}\quad\text{for all } i\in\Omega,\qquad\text{and}\qquad F_{ij} := 0\quad\text{otherwise}.$$

(In the definition of F, F_{L,L+1} means F_{L,1} and F_{1,0} means F_{1,L}, since we take Ω with periodic boundary conditions.)

We interpret X as the position of a particle moving through the interval (0, 1] with periodic boundary conditions. If V(i/L) > V(j/L), we say that j is downhill from i; when the inequality is reversed, we say that j is uphill from i. Under the dynamics prescribed by F, the particle is more likely to move downhill than uphill. In fact, as L tends to infinity, π becomes more and more concentrated near the minima of V. For large L, the particle spends most of the time near minima of V, and transitions of the particle between minima occur rarely.
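The construction above is straightforward to verify numerically. The following sketch (0-based indices, and a hypothetical helper name `hilly_F`) builds F and checks that it is stochastic and in detailed balance with π:

```python
import numpy as np

def hilly_F(L):
    """Hilly landscape chain on Z/LZ (0-based indices; hypothetical helper)."""
    x = np.arange(L) / L
    V = np.cos(4.0 * np.pi * x) / (4.0 * np.pi)
    pi = np.exp(-L * V)
    pi /= pi.sum()                       # invariant distribution
    F = np.zeros((L, L))
    for i in range(L):
        ip, im = (i + 1) % L, (i - 1) % L
        F[i, ip] = 0.5 * pi[ip] / (pi[ip] + pi[i])
        F[i, im] = 0.5 * pi[im] / (pi[im] + pi[i])
        F[i, i] = 1.0 - F[i, ip] - F[i, im]
    return F, pi

F, pi = hilly_F(40)
assert np.allclose(F.sum(axis=1), 1.0)                    # F is stochastic
assert np.allclose(pi[:, None] * F, (pi[:, None] * F).T)  # detailed balance
```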

6.2. Sensitivities for the hilly landscape transition matrix

In Figure 1, we plot −log Q_ij(αF) versus i and j for F the hilly landscape transition matrix with L = 40 and α ∈ {0.7, 0.8, 0.9, 0.95, 0.98, 1}. The purpose of this section is to give an intuitive explanation of the main features observed in the figure. Recall that the potential V is shaped roughly like a “W” with peaks at 0, ½, and 1 and valleys at ¼ and ¾. When L = 40, the peaks correspond to the indices 0, 20, and 40 in Ω, and the valleys correspond to 10 and 30. (To be precise, 0 and 40 are identical, since we take periodic boundary conditions.)

Figure 1. Sensitivities for the hilly landscape transition matrix.

Now consider the case α = 1. We observe that −log Q_{20,j}(F) is small for all j, so Q_{20,j}(F)^{-1} is small, and π(F) is insensitive to perturbations which change the transition probabilities from the peak to other points. This is as expected, since the probability P_20[τ_j < τ_20](F) = Q_{20,j}(F) of hitting a point j in the valley before returning to the peak should be fairly large. On the other hand, −log Q_{30,10}(F) is enormous, so π(F) is sensitive to the transition probability from the valley to the peak. This is also as expected, since the probability P_30[τ_10 < τ_30](F) of climbing from the valley to the peak without falling back into the valley should be small. To explain the small values of −log Q_ij(F) observed near the diagonal, we observe that for all i ∈ Ω and all L,

$$F_{i,i+1} = \frac12\,\frac{1}{1+\exp\left[L\left(V\!\left(\frac{i+1}{L}\right) - V\!\left(\frac{i}{L}\right)\right)\right]} \ge \frac12\,\frac{1}{1+\exp(\mathrm{Lip}(V))} = \frac12\,\frac{1}{1+\exp(1)},$$

where Lip(V) = 1 is the Lipschitz constant of the potential V. Therefore, by Corollary 1,

$$Q_{i,i+1}(F)^{-1} \le \frac{1}{F_{i,i+1}} \le 2(1+\exp(1)).$$

The same estimate holds for Qi,i–1(F).

The coefficients Q_ij(αF) for α < 1 share many features with Q_ij(F). These coefficients would be relevant if F were known with relative error 1 − α; see Remark 7. The main difference between Q_ij(αF) and Q_ij(F) is that Q_ij(αF) is small whenever the minimum number of time steps required to transition between i and j is large. The reader is directed to Section 6.5 for discussion of a related phenomenon. Observe that this effect grows more dominant as α decreases. We also note that Q_ij(αF)^{-1} is again small near the diagonal. In fact, by Lemma 3, we have

$$Q_{i,i+1}(\alpha F)^{-1} \le \frac{1}{\alpha F_{i,i+1}} \le \frac{2(1+\exp(1))}{\alpha}.$$

The same estimate holds for Qi,i–1(αF).
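The hitting-probability interpretation of Q_ij makes these observations easy to test. The sketch below computes Q_ij(S) = P_i[τ_j < min(τ_i, τ_ω)] by first-step analysis (a slow O(L³)-per-entry route used only for checking, not the fast algorithm of the previous section) and confirms both the near-diagonal bound and the peak/valley asymmetry, with 0-based indices:

```python
import numpy as np

def hilly_F(L):
    """Hilly landscape chain on Z/LZ (0-based indices; hypothetical helper)."""
    x = np.arange(L) / L
    pi = np.exp(-L * np.cos(4.0 * np.pi * x) / (4.0 * np.pi))
    pi /= pi.sum()
    F = np.zeros((L, L))
    for i in range(L):
        ip, im = (i + 1) % L, (i - 1) % L
        F[i, ip] = 0.5 * pi[ip] / (pi[ip] + pi[i])
        F[i, im] = 0.5 * pi[im] / (pi[im] + pi[i])
        F[i, i] = 1.0 - F[i, ip] - F[i, im]
    return F

def Q(S, i, j):
    """P_i[tau_j < min(tau_i, tau_omega)] by first-step analysis."""
    L = S.shape[0]
    T = [k for k in range(L) if k not in (i, j)]
    u = np.linalg.solve(np.eye(L - 2) - S[np.ix_(T, T)], S[T, j])
    return S[i, j] + S[i, T] @ u

L, alpha = 40, 0.9
F = hilly_F(L)

# Near-diagonal entries are insensitive: Q_{i,i+1}(aF) >= a F_{i,i+1}.
for i in range(L):
    assert Q(alpha * F, i, (i + 1) % L) >= alpha * F[i, (i + 1) % L] - 1e-12

# Climbing from a valley (index 30) to a peak (index 20) is much less
# likely than falling from the peak back down to the valley.
assert Q(F, 30, 20) < Q(F, 20, 30)
```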

6.3. Mean first passage times and related relative error bounds

Section 4 of [2] also suggests bounds on relative error in terms of certain first passage times. A comparison is therefore in order. We record a simplified version of these results below.

Theorem 6

[2, Corollaries 4.1,4.2] Let F and F̃ be irreducible stochastic matrices. We have

$$\frac{\pi_m(\tilde F) - \pi_m(F)}{\pi_m(F)} = \sum_{\substack{i,j\in\Omega\\ i\ne j}} \pi_i(\tilde F)\left((1-\delta_{jm})E_j[\tau_m] - (1-\delta_{im})E_i[\tau_m]\right)\left(\tilde F_{ij} - F_{ij}\right),$$

where the expectations are taken for the chain with transition matrix F and δ is the Kronecker delta function. Therefore,

$$\left|\frac{\pi_m(\tilde F) - \pi_m(F)}{\pi_m(F)}\right| \le \sum_{\substack{i,j\in\Omega\\ i\ne j}} \left|(1-\delta_{jm})E_j[\tau_m] - (1-\delta_{im})E_i[\tau_m]\right|\,\left|\tilde F_{ij} - F_{ij}\right|.$$

Taking the maximum over all m in the second estimate of Theorem 6 yields a perturbation result of a form similar to Theorem 3, but with

$$\beta_{ij}(F) := \max_{m\in\Omega}\left|(1-\delta_{jm})E_j[\tau_m] - (1-\delta_{im})E_i[\tau_m]\right|$$

in place of Q_ij(S)^{-1}. In the next paragraph, we show for the hilly landscape transition matrix that for some values of i and j, β_ij grows exponentially with L while Q_ij(S)^{-1} remains bounded. Thus, the results of [2] dramatically overestimate the error due to some perturbations.

To derive the estimate in the second part of Theorem 6 from the exact formula in the first, one discards a factor of π_i(F̃). Therefore, roughly speaking, the estimate is poor when π_i(F̃) is small. To give a specific example, let F be the hilly landscape transition matrix, assume that L is even, and let i = L/2 and j = L/2 + 1. Observe that we have chosen i at a peak of the potential V, so

$$\pi_i(F) \lesssim \exp(-L).$$

(Here, we use the symbols ≲ and ≳ to denote bounds up to multiplicative constants, so in the last line above, we mean that there is some C > 0 so that the left-hand side is bounded above by C exp(−L) for all L.) Let F_ε = F + ε(e_i e_j^t − e_i e_i^t). We have

$$\frac{\partial \log\pi_m}{\partial F_{ij}}(F) = \frac{d}{d\varepsilon}\bigg|_{\varepsilon=0} \log\pi_m(F_\varepsilon) = \pi_i(F)\left((1-\delta_{jm})E_j[\tau_m] - (1-\delta_{im})E_i[\tau_m]\right)$$

by Theorem 6. Therefore, by Theorem 2,

$$\max_{m\in\Omega}\left|\frac{\partial \log\pi_m}{\partial F_{ij}}(F)\right| = \pi_i(F)\,\max_{m\in\Omega}\left|(1-\delta_{jm})E_j[\tau_m] - (1-\delta_{im})E_i[\tau_m]\right| \ge \frac12\,\frac{1}{P_i[\tau_j < \tau_i]},$$

and so

$$\beta_{ij}(F) \ge \frac12\,\frac{1}{\pi_i(F)\,P_i[\tau_j < \tau_i]} \gtrsim \exp(L).$$

Now suppose that the substochastic matrix S appearing in our bound is chosen for each L so that S_ij, for i = L/2 and j = L/2 + 1, is bounded away from zero uniformly as L → ∞. For example, one might choose S to be a multiple of F as in Section 6.2 or a multiple of a simple random walk transition matrix as in Section 6.5. Then by Lemma 3, we have

$$Q_{ij}(S)^{-1} \le \frac{1}{S_{ij}},$$

so Q_ij(S)^{-1} is bounded uniformly in L. Thus, β_ij(F) is a poor estimate of the sensitivity of π(F) to the ijth entry for this problem.
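This comparison can be carried out numerically. The sketch below computes β_ij(F) from mean first passage times (obtained by the standard first-step linear solve) and checks that it already exceeds the bound 1/(0.9 F_ij) ≥ Q_ij(0.9F)^{-1} at L = 40, with 0-based indices:

```python
import numpy as np

def hilly_F(L):
    """Hilly landscape chain on Z/LZ (0-based indices; hypothetical helper)."""
    x = np.arange(L) / L
    pi = np.exp(-L * np.cos(4.0 * np.pi * x) / (4.0 * np.pi))
    pi /= pi.sum()
    F = np.zeros((L, L))
    for i in range(L):
        ip, im = (i + 1) % L, (i - 1) % L
        F[i, ip] = 0.5 * pi[ip] / (pi[ip] + pi[i])
        F[i, im] = 0.5 * pi[im] / (pi[im] + pi[i])
        F[i, i] = 1.0 - F[i, ip] - F[i, im]
    return F

def passage_times(F, m):
    """E_k[tau_m] for all k (entry m is 0), by first-step analysis."""
    L = F.shape[0]
    T = [k for k in range(L) if k != m]
    t = np.zeros(L)
    t[T] = np.linalg.solve(np.eye(L - 1) - F[np.ix_(T, T)], np.ones(L - 1))
    return t

L = 40
F = hilly_F(L)
i, j = L // 2, L // 2 + 1            # i sits at a peak of V

beta = 0.0
for m in range(L):
    t = passage_times(F, m)
    Ej = 0.0 if m == j else t[j]
    Ei = 0.0 if m == i else t[i]
    beta = max(beta, abs(Ej - Ei))

# Q_ij(0.9 F)^{-1} <= 1/(0.9 F_ij) stays bounded; beta_ij blows up with L.
assert beta > 1.0 / (0.9 * F[i, j])
```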

6.4. The spectral gap and related absolute error bounds

The survey article [3] lists eight condition numbers κ_i(F), i = 1, 2, . . . , 8, for which bounds of the form

$$\|\pi(F) - \pi(\tilde F)\|_p \le \kappa_i(F)\,\|F - \tilde F\|_{p'}$$

hold. (The Hölder exponents p and p′ vary with the choice of condition number.) Some of these condition numbers are based on ergodicity coefficients [20,21], some on mean first passage times [2], and some on generalized inverses of the characteristic matrix I − F [5,7,9,16,19]. We prove for the hilly landscape transition matrix F that κ_i(F) increases exponentially with L for all i. By contrast, we have already seen that many of the coefficients Q_ij(αF)^{-1} are bounded as L tends to infinity.

Our proof that the condition numbers increase exponentially is based on an analysis of the spectral gap of F. Let σ(F) denote the spectrum of F. The spectral gap γ is defined to be

$$\gamma := 1 - \max\{|\lambda| : \lambda \in \sigma(F)\setminus\{1\}\}. \tag{29}$$

We use the bottleneck inequality [14, Theorem 7.3] to show that the spectral gap of the hilly landscape transition matrix decreases exponentially with L. For convenience, assume that L is even, and let

$$E := \left\{1, \dots, \tfrac{L}{2}\right\}.$$

The bottleneck ratio [14, Section 7.2] for the partition {E, Ec} is

$$\Phi(E, E^c) = \pi\!\left(\tfrac{L}{2}\right)F_{\frac{L}{2},\frac{L}{2}+1} + \pi(1)F_{1,0} \lesssim \frac12\,\frac{\exp(-L)}{1+\exp\left[L\left(V\!\left(\tfrac{1}{L}\right) - V(0)\right)\right]} \lesssim \exp(-L).$$

(As in the last section, we use the symbols ≲ and ≳ to denote bounds up to multiplicative constants.) Therefore, by the bottleneck inequality, the mixing time tmix [14, Section 4.5] satisfies

$$t_{\mathrm{mix}} \ge \frac{1}{4\,\Phi(E, E^c)} \gtrsim \exp(L).$$

By [14, Theorem 12.3],

$$t_{\mathrm{mix}} \le \frac{1}{\gamma}\log\left(\frac{4}{\min\{\pi(i) : i\in\Omega\}}\right) \le \frac{\log(4) + L}{\gamma}.$$

Therefore,

$$\frac{1}{\gamma} \gtrsim \frac{\exp(L)}{L}, \tag{30}$$

and we see that the spectral gap decreases exponentially in L.
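The exponential decay of the gap is visible numerically even for modest L. The following sketch estimates γ, defined in (29), directly from the eigenvalues of the hilly landscape matrix (0-based indices; exact rates are of course only asymptotic):

```python
import numpy as np

def hilly_F(L):
    """Hilly landscape chain on Z/LZ (0-based indices; hypothetical helper)."""
    x = np.arange(L) / L
    pi = np.exp(-L * np.cos(4.0 * np.pi * x) / (4.0 * np.pi))
    pi /= pi.sum()
    F = np.zeros((L, L))
    for i in range(L):
        ip, im = (i + 1) % L, (i - 1) % L
        F[i, ip] = 0.5 * pi[ip] / (pi[ip] + pi[i])
        F[i, im] = 0.5 * pi[im] / (pi[im] + pi[i])
        F[i, i] = 1.0 - F[i, ip] - F[i, im]
    return F

def gap(F):
    """gamma = 1 - max{|lambda| : lambda in sigma(F) minus {1}}, as in (29)."""
    mods = np.sort(np.abs(np.linalg.eigvals(F)))[::-1]
    return 1.0 - mods[1]          # mods[0] = 1 is the Perron eigenvalue

gaps = [gap(hilly_F(L)) for L in (10, 20, 40)]
assert gaps[0] > gaps[1] > gaps[2] > 0.0   # the gap shrinks as L grows
```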

We now relate the condition numbers to the spectral gap. Using Equation (3.3) and the table at the bottom of page 147 in [3], and also [11, Corollary 2.6], we have

$$\kappa_i \ge \frac{1}{L\,\min\{|1-\lambda| : \lambda\in\sigma(F)\setminus\{1\}\}} \tag{31}$$

for all i = 1, . . . , 8. Now we claim that for sufficiently large L,

$$\gamma = 1 - \max\{|\lambda| : \lambda\in\sigma(F)\setminus\{1\}\} = \min\{|1-\lambda| : \lambda\in\sigma(F)\setminus\{1\}\}, \tag{32}$$

in which case (30) and (31) imply that all condition numbers grow exponentially with L. To see this, we first observe that since F is reversible, its spectrum is real [14, Lemma 12.2]. Moreover,

$$F_{ii} \ge \frac{1}{1+\exp(\mathrm{Lip}(V))} = \frac{1}{1+\exp(1)} > 0 \quad\text{for all } i\in\Omega,$$

where Lip(V) = 1 is the Lipschitz constant of V. Therefore, using the Gershgorin circle theorem we have

$$\lambda \ge \frac{2}{1+\exp(1)} - 1 \tag{33}$$

for all λ ∈ σ(F). Inequality (33) shows that σ(F) is bounded away from −1 uniformly in L, and by (30),

$$\lim_{L\to\infty} \gamma = \lim_{L\to\infty} \left(1 - \max\{|\lambda| : \lambda\in\sigma(F)\setminus\{1\}\}\right) = 0.$$

It follows that for sufficiently large L, max{|λ| : λ ∈ σ(F) \ {1}} is attained at some λ > 0. Thus, equation (32) holds, and we conclude using (30) and (31) that

$$\kappa_i \gtrsim \frac{\exp(L)}{L^2}$$

for all i = 1, . . . , 8.

6.5. Bounds below by a random walk

Let Y be the random walk on Ω with transition matrix

$$P_{ii} = P_{i,i+1} = P_{i,i-1} := \frac13 \quad\text{for all } i\in\Omega, \qquad\text{and}\qquad P_{ij} := 0 \quad\text{otherwise}.$$

(As above, since Ω has periodic boundaries, P_{L,L+1} means P_{L,1}, etc.) In this section, we use Theorem 3 to relate Q_ij(P) to Q_ij(F) for F the hilly landscape transition matrix. First, using the lower bounds on entries of F derived in Section 6.2, we have

$$F \ge \frac{3}{2(1+\exp(1))}\,P.$$

Therefore, by (35) and Lemma 3,

$$Q_{ij}(\alpha F) \ge Q_{ij}\!\left(\frac{3\alpha}{2(1+\exp(1))}\,P\right). \tag{34}$$

Now for any β ∈ (0, 1),

$$Q_{ij}(\beta P) = P_i[\tau_j < \min\{\tau_i, \tau_\omega\}]\left((\beta P)^\omega\right) = \sum_{M=1}^{\infty} P_i\left[\tau_j = M,\ \min\{\tau_i,\tau_\omega\} > M\right]\left((\beta P)^\omega\right).$$

Let |i − j| denote the minimum number of time steps required for the chain to reach state j from state i. Adopting the notation used in the proof of Lemma 3, there is some path γ ∈ P_ij of length |i − j| for which

$$Q_{ij}(\beta P) \ge P_i\left[\tau_j = |i-j|,\ \min\{\tau_i,\tau_\omega\} > |i-j|\right]\left((\beta P)^\omega\right) \ge P_i\left[X_k = \gamma(k)\ \text{for } k=1,\dots,|i-j|\right]\left((\beta P)^\omega\right) = \left(\frac{\beta}{3}\right)^{|i-j|}. \tag{35}$$

Combining (34) and (35) then yields

$$Q_{ij}(\alpha F) \ge Q_{ij}\!\left(\frac{3\alpha}{2(1+\exp(1))}\,P\right) \ge \left(\frac{\alpha}{2(1+\exp(1))}\right)^{|i-j|}.$$
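The geometric lower bound (35) can be checked directly. The sketch below computes Q_ij(βP) for the lazy random walk by first-step analysis (with killing, so the rows of βP are substochastic) and verifies the bound for every pair i ≠ j on a small ring:

```python
import numpy as np

def Q(S, i, j):
    """P_i[tau_j < min(tau_i, tau_omega)] by first-step analysis."""
    L = S.shape[0]
    T = [k for k in range(L) if k not in (i, j)]
    u = np.linalg.solve(np.eye(L - 2) - S[np.ix_(T, T)], S[T, j])
    return S[i, j] + S[i, T] @ u

L, beta = 12, 0.8
P = np.zeros((L, L))
for i in range(L):
    P[i, i] = P[i, (i + 1) % L] = P[i, (i - 1) % L] = 1.0 / 3.0

for i in range(L):
    for j in range(L):
        if i != j:
            d = min((j - i) % L, (i - j) % L)   # graph distance |i - j|
            assert Q(beta * P, i, j) >= (beta / 3.0) ** d - 1e-12
```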

Acknowledgements

This work was funded by the NIH under grant 5 R01 GM109455-02. We would like to thank Aaron Dinner, Jian Ding, Lek-Heng Lim, and Jonathan Mattingly for many very helpful discussions and suggestions.

Appendix A. Proof of Lemma 1

Proof

Let F be irreducible and stochastic. By [6, Equation (3.1)], det(I − F_i) > 0 for all i ∈ Ω, and

$$\pi(F)^t = \frac{1}{\sum_{i=1}^{L} \det(I - F_i)}\left(\det(I-F_1), \det(I-F_2), \dots, \det(I-F_L)\right). \tag{36}$$

The right-hand side of (36) yields the desired extension. To show this, we first observe that there exist an open neighborhood V_F ⊂ R^{L×L} of F and a disc D_F ⊂ C with 1 ∈ D_F such that G ∈ V_F implies (a) Σ_i det(I − G_i) > 0 and (b) G has exactly one eigenvalue in D_F, and that eigenvalue is simple. There exists a neighborhood with property (a), since Σ_i det(I − G_i) is continuous in G and Σ_i det(I − F_i) > 0. The existence of a neighborhood V_F and disc D_F with property (b) follows from standard results in perturbation theory, since F irreducible and stochastic implies that 1 is a simple eigenvalue of F; see [10, Ch. II, Theorem 5.14]. We now let U be the union of the sets V_F over all irreducible, stochastic F, and we define π : U → R^L by extending the formula on the right-hand side of (36). By (a), π is continuously differentiable. By property (b), we know that if G ∈ U with Ge = e, then 1 is a simple eigenvalue of G, and so ker(I − G) = Re. Following the proof of [6, Theorem 3.1], one may then use the identity

$$(\mathrm{adj}(I-G))(I-G) = (I-G)(\mathrm{adj}(I-G)) = 0,$$

where adj(·) denotes the adjugate matrix, to show that π(G)^t G = π(G)^t.
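Formula (36) is also easy to test numerically. The sketch below (assuming, as a reading of the notation, that F_i denotes F with row and column i deleted) compares the determinant formula against the invariant distribution computed from the left eigenproblem for a random stochastic matrix:

```python
import numpy as np

# pi(F)^t proportional to (det(I - F_1), ..., det(I - F_L)), where F_i is
# F with row and column i deleted (a notational assumption), checked
# against the left eigenvector of F for eigenvalue 1.
rng = np.random.default_rng(2)
L = 6
F = rng.random((L, L))
F /= F.sum(axis=1, keepdims=True)   # random irreducible stochastic matrix

d = np.array([np.linalg.det(np.eye(L - 1) - np.delete(np.delete(F, i, 0), i, 1))
              for i in range(L)])
pi_det = d / d.sum()

w, V = np.linalg.eig(F.T)
v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi_eig = v / v.sum()

assert np.allclose(pi_det, pi_eig)
```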

Contributor Information

ERIK THIEDE, The University of Chicago, Department of Chemistry.

BRIAN VAN KOTEN, The University of Chicago, Department of Statistics.

JONATHAN WEARE, The University of Chicago, Department of Statistics and the James Franck Institute.

References

1. Berman A, Plemmons RJ. Nonnegative Matrices in the Mathematical Sciences. Computer Science and Applied Mathematics. Academic Press (Harcourt Brace Jovanovich Publishers), New York, 1979.
2. Cho GE, Meyer CD. Markov chain sensitivity measured by mean first passage times. Linear Algebra Appl. 2000:21–28. (Conference Celebrating the 60th Birthday of Robert J. Plemmons, Winston-Salem, NC, 1999.)
3. Cho GE, Meyer CD. Comparison of perturbation bounds for the stationary distribution of a Markov chain. Linear Algebra Appl. 2001;335:137–150.
4. Deutsch E, Neumann M. On the first and second order derivatives of the Perron vector. Linear Algebra Appl. 1985;71:57–76.
5. Funderlic RE, Meyer CD Jr. Sensitivity of the stationary distribution vector for an ergodic Markov chain. Linear Algebra Appl. 1986;76:1–17.
6. Golub GH, Meyer CD Jr. Using the QR factorization and group inversion to compute, differentiate, and estimate the sensitivity of stationary probabilities for Markov chains. SIAM J. Algebraic Discrete Methods. 1986;7(2):273–281.
7. Haviv M, Van der Heyden L. Perturbation bounds for the stationary probabilities of a finite Markov chain. Adv. in Appl. Probab. 1984;16(4):804–818.
8. Hunter JJ. Stationary distributions and mean first passage times of perturbed Markov chains. Linear Algebra Appl. 2005;410:217–243. (Tenth Special Issue (Part 2) on Linear Algebra and Statistics.)
9. Ipsen ICF, Meyer CD. Uniform stability of Markov chains. SIAM J. Matrix Anal. Appl. 1994;15(4):1061–1074.
10. Kato T. Perturbation Theory for Linear Operators. Classics in Mathematics. Springer-Verlag, Berlin, 1995. Reprint of the 1980 edition.
11. Kirkland S. On a question concerning condition numbers for Markov chains. SIAM J. Matrix Anal. Appl. 2002;23(4):1109–1119.
12. Kirkland S, Neumann M. Group Inverses of M-Matrices and Their Applications. Chapman & Hall/CRC Applied Mathematics and Nonlinear Science Series. CRC Press, Boca Raton, 2013.
13. Kirkland SJ, Neumann M, Shader BL. Applications of Paz's inequality to perturbation bounds for Markov chains. Linear Algebra Appl. 1998;268:183–196.
14. Levin DA, Peres Y, Wilmer EL. Markov Chains and Mixing Times. American Mathematical Society, Providence, RI, 2009. With a chapter by James G. Propp and David B. Wilson.
15. Meyer CD Jr. The role of the group generalized inverse in the theory of finite Markov chains. SIAM Review. 1975;17(3):443–464.
16. Meyer CD Jr. The condition of a finite Markov chain and perturbation bounds for the limiting probabilities. SIAM J. Algebraic Discrete Methods. 1980;1(3):273–283.
17. Norris JR. Markov Chains. Cambridge Series in Statistical and Probabilistic Mathematics, vol. 2. Cambridge University Press, Cambridge, 1998.
18. O'Cinneide CA. Entrywise perturbation theory and error analysis for Markov chains. Numer. Math. 1993;65(1):109–120.
19. Schweitzer PJ. Perturbation theory and finite Markov chains. J. Appl. Probability. 1968;5:401–413.
20. Seneta E. Perturbation of the stationary distribution measured by ergodicity coefficients. Adv. in Appl. Probab. 1988;20(1):228–230.
21. Seneta E. Sensitivity analysis, ergodicity coefficients, and rank-one updates for finite Markov chains. In: Numerical Solution of Markov Chains. Probab. Pure Appl., vol. 8. Dekker, New York, 1991, pp. 121–129.
