Author manuscript; available in PMC: 2019 Aug 8.
Published in final edited form as: Electron. J. Probab. 2019 Jun 28;24:69. doi: 10.1214/19-EJP330

Random walks in a moderately sparse random environment

Dariusz Buraczewski 1, Piotr Dyszewski 2, Alexander Iksanov 3, Alexander Marynych 4, Alexander Roitershtein 5
PMCID: PMC6687397  NIHMSID: NIHMS1037383  PMID: 31396009

Abstract

A random walk in a sparse random environment is a model introduced by Matzavinos et al. [Electron. J. Probab. 21, paper no. 72: 2016] as a generalization of both a simple symmetric random walk and a classical random walk in a random environment. A random walk $(X_n)_{n\in\mathbb{N}\cup\{0\}}$ in a sparse random environment $(S_k,\lambda_k)_{k\in\mathbb{Z}}$ is a nearest-neighbor random walk on $\mathbb{Z}$ that jumps to the left or to the right with probability $1/2$ from every point of $\mathbb{Z}\setminus\{\ldots,S_{-1},S_0=0,S_1,\ldots\}$ and jumps to the right (left) with the random probability $\lambda_{k+1}$ ($1-\lambda_{k+1}$) from the point $S_k$, $k\in\mathbb{Z}$. Assuming that $(S_k-S_{k-1},\lambda_k)_{k\in\mathbb{Z}}$ are independent copies of a random vector $(\xi,\lambda)\in\mathbb{N}\times(0,1)$ and that the mean $\mathbb{E}\xi$ is finite (moderate sparsity), we obtain stable limit laws for $X_n$, properly normalized and centered, as $n\to\infty$. While the case $\xi\le M$ a.s. for some deterministic $M>0$ (weak sparsity) was analyzed by Matzavinos et al., the case $\mathbb{E}\xi=\infty$ (strong sparsity) will be analyzed in a forthcoming paper.

Keywords: branching process in a random environment with immigration, perpetuity, random difference equation, random walk in a random environment

1. Introduction

Simple random walks on $\mathbb{Z}$ (the set of integers) arise in various areas of classical and modern stochastics. However, their intrinsic homogeneity restricts their applicability in some situations. Solomon [36] eliminated this drawback by introducing a random environment which makes the modified random walk spatially inhomogeneous. In the present article we investigate an intermediate model, called a random walk in a sparse random environment (RWSRE), in which the homogeneity of the environment is perturbed only on a sparse subset of $\mathbb{Z}$. Since RWSRE is a particular case of a random walk in a random environment (RWRE), we proceed by recalling the definition of the latter.

Set $\Omega=(0,1)^{\mathbb{Z}}$ and $\mathcal{X}=\mathbb{Z}^{\mathbb{N}_0}$. Let $\mathcal{F}$ be the Borel $\sigma$-algebra of subsets of $\Omega$, $P$ a probability measure on $(\Omega,\mathcal{F})$ and $\mathcal{G}$ the $\sigma$-algebra generated by the cylinder sets in $\mathcal{X}$. A random environment is a random element $\omega=(\omega_n)_{n\in\mathbb{Z}}$ of the measurable space $(\Omega,\mathcal{F})$ distributed according to $P$. A quenched (fixed) environment $\omega$ provides us with a probability measure $P_\omega$ on $\mathcal{X}$ whose transition kernel is given by

$$P_\omega\{X_{n+1}=j\mid X_n=i\}=\begin{cases}\omega_i, & \text{if } j=i+1,\\ 1-\omega_i, & \text{if } j=i-1,\\ 0, & \text{otherwise.}\end{cases}$$

With the initial condition $X_0:=0$ the sequence $X=(X_n)_{n\ge 0}$ is a Markov chain on $\mathbb{Z}$ (under $P_\omega$) which is called a random walk in the random environment $\omega$. Here and hereafter, $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$. It is natural to investigate RWRE from two viewpoints which are different in many respects: under the quenched measure $P_\omega$ for almost all (with respect to $P$) $\omega$, that is, for a typical $\omega$, or under an annealed measure. Formally, the annealed measure $\mathbb{P}$ on $(\Omega\times\mathcal{X},\mathcal{F}\otimes\mathcal{G})$ is defined as a semi-direct product $\mathbb{P}=P\ltimes P_\omega$ via the formula

$$\mathbb{P}\{F\times G\}=\int_F P_\omega\{G\}\,P(\mathrm{d}\omega),\qquad F\in\mathcal{F},\ G\in\mathcal{G}.$$

Note that in general $X$ is no longer a Markov chain under $\mathbb{P}$. Usually one assumes that the environment $\omega$ forms a stationary and ergodic sequence or even a sequence of iid (independent and identically distributed) random variables. In this setting RWRE has attracted a fair amount of attention in the probabilistic community, resulting in quenched and annealed limit theorems [3, 11, 12, 25, 26, 35, 37] and large deviations [5, 7, 9, 15, 19, 33, 34, 38, 39]. This list of references is far from complete.

We aim at establishing annealed limit theorems for $X$ (that is, under $\mathbb{P}$) in a so-called sparse random environment, which corresponds to a particular choice of $P$ specified as follows. Let $((\xi_k,\lambda_k))_{k\in\mathbb{Z}}$ be a sequence of independent copies of a random vector $(\xi,\lambda)$ which satisfies $\lambda\in(0,1)$ and $\xi\in\mathbb{N}$ a.s. For $n\in\mathbb{Z}$, set

$$S_n=\begin{cases}\sum_{k=1}^n \xi_k, & \text{if } n>0,\\ 0, & \text{if } n=0,\\ -\sum_{k=n+1}^0 \xi_k, & \text{if } n<0.\end{cases}$$

The sparse random environment $\omega=(\omega_n)_{n\in\mathbb{Z}}$ is defined by

$$\omega_n=\begin{cases}\lambda_{k+1}, & \text{if } n=S_k \text{ for some } k\in\mathbb{Z},\\ 1/2, & \text{otherwise.}\end{cases}\qquad (1.1)$$

The model (with $\lambda_k$ in (1.1) replacing $\lambda_{k+1}$) was introduced by Matzavinos, Roitershtein and Seol [30]. These authors obtained various results including a recurrence/transience criterion, a strong law of large numbers and limit theorems. However, many results in [30] were proved under quite restrictive conditions including boundedness of $\xi$, a strong ellipticity condition for the distribution of $\lambda$ and independence of $\xi$ and $\lambda$. In this setting some essential properties of $X$ remain hidden. Our main purpose is to relax the aforementioned assumptions substantially, thereby establishing limit theorems in full generality, and to find out how distributional properties of the vector $(\xi,\lambda)$ affect the asymptotic behavior of $X$. It turns out that the asymptotics of $X$ is regulated by the tail behaviors of $\xi$ and $\rho:=(1-\lambda)/\lambda$, which determine the sparsity of the environment and the local drift of the environment, respectively. In this paper we investigate the case where $\mathbb{E}\xi<\infty$. We call the corresponding environment 'moderately sparse', whereas in the opposite case $\mathbb{E}\xi=\infty$ we say that the environment is 'strongly sparse'. The analysis of $X$ in a strongly sparse environment requires completely different techniques and will be carried out in a companion paper [6].
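The environment (1.1) and the quenched walk can be simulated directly. The following minimal Python sketch (an illustration, not part of the paper; the samplers for $\xi$ and $\lambda$ are arbitrary toy choices, and the environment to the left of the origin is approximated by fair-coin sites) builds $\omega$ on $\{0,1,\dots\}$ and runs the walk up to a first passage time.

```python
import random

def sparse_environment(n_sites, xi_sampler, lam_sampler, rng):
    """Build omega on {0, ..., n_sites - 1}: omega equals lambda_{k+1} at the
    marked points S_k and 1/2 everywhere else, as in (1.1)."""
    omega = [0.5] * n_sites
    s = 0
    while s < n_sites:                  # S_0 = 0, S_{k+1} = S_k + xi_{k+1}
        omega[s] = lam_sampler(rng)
        s += xi_sampler(rng)
    return omega

def first_passage(omega, target, rng, max_steps=10**6):
    """Quenched walk started at 0; return the first passage time T_target
    (None if the step budget is exhausted first)."""
    x, t = 0, 0
    while t < max_steps:
        if x == target:
            return t
        p_right = omega[x] if 0 <= x < len(omega) else 0.5  # fair coin off-grid
        x += 1 if rng.random() < p_right else -1
        t += 1
    return None

rng = random.Random(1)
omega = sparse_environment(200, lambda r: 1 + r.randrange(3),  # xi in {1,2,3}
                           lambda r: 0.7, rng)                 # lambda = 0.7
T = first_passage(omega, 50, rng)
```

With $\lambda\equiv 0.7$ one has $\rho=3/7$, so $\mathbb{E}\log\rho<0$ and the transience condition (2.1) below holds; the walk drifts to the right and $T_{50}$ is finite a.s.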

The present article is organized as follows. In Section 2 we formulate our limit theorems for $X$ and for the first passage times of $X$. In Section 3.1 we describe our approach and define a branching process $Z$ in a random environment which is used to analyze the random walk $X$. In Section 3.2 we introduce the necessary notation related to the process $Z$. In Section 4 we explain the heuristics behind our proof and present a number of important estimates and decompositions used throughout the paper. Among other things, we demonstrate in this section how to reduce the initial problem to the asymptotic analysis of sums of certain iid random variables. The tail behavior of these variables is discussed in Section 5. Section 6 is devoted to the analysis of a particular critical Galton–Watson process with immigration which naturally arises in the context of random walks in a sparse random environment. The proofs of the main results are given in Sections 7.1, 7.2 and 7.3. The proofs of auxiliary lemmas can be found in Section 7.4 and the Appendix.

2. Main results

We focus on the case when $X$ is $\mathbb{P}$-a.s. transient to $+\infty$ and the environment is moderately sparse, that is, $\mathbb{E}\xi<\infty$. Recall the notation

$$\rho=\frac{1-\lambda}{\lambda}.$$

According to Theorem 3.1 in [30], $X$ is $\mathbb{P}$-a.s. transient to $+\infty$ if

$$\mathbb{E}\log\rho\in[-\infty,0)\quad\text{and}\quad\mathbb{E}\log\xi<\infty.\qquad (2.1)$$

The first inequality excludes the degenerate case $\rho=1$ a.s., in which $X$ becomes a simple random walk. The second inequality always holds for a moderately sparse environment. We note right away that our standing assumptions $\mathbb{E}\log\rho\in[-\infty,0)$ and $\mathbb{E}\xi<\infty$ hold under the conditions of our main results, Theorems 2.2 and 2.6.

The sequence $(T_n)_{n\in\mathbb{N}}$ of first passage times defined by

$$T_n=\inf\{k\ge 0:X_k=n\},\qquad n\in\mathbb{N},$$

is of crucial importance for our arguments. Of course, the observation that the asymptotics of $X$ can be derived from that of $(T_n)$ is not new and has been exploited in many earlier papers in the area of random walks in random environments. Assuming only transience to the right, it is shown on p. 12 in [30] that

$$\lim_{n\to\infty}\frac{T_{S_n}}{n}=\mathbb{E}T_{S_1}\quad\text{a.s.}$$

This in combination with Lemma 4.4 in [30] leads to the conclusion that

$$\lim_{n\to\infty}\frac{X_n}{n}=\frac{\mathbb{E}\xi}{\mathbb{E}T_{S_1}}=:\nu\quad\text{and}\quad\lim_{n\to\infty}\frac{T_n}{n}=\frac{1}{\nu}\quad\text{a.s.}\qquad (2.2)$$

whenever the environment is moderately sparse. Furthermore, under the additional assumption that $\xi$ and $\lambda$ are independent, Theorem 3.3 in [30] states that

$$\nu=\frac{(1-\mathbb{E}\rho)\mathbb{E}\xi}{(1-\mathbb{E}\rho)\mathbb{E}\xi^2+2\mathbb{E}\rho(\mathbb{E}\xi)^2}\qquad (2.3)$$

provided that $\mathbb{E}\rho<1$ and $\mathbb{E}\xi^2<\infty$, and $\nu=0$ otherwise.

In Proposition 2.1 we give an explicit formula for $\nu$ when $\xi$ and $\lambda$ are allowed to be dependent.

Proposition 2.1.

Assume that $\mathbb{E}\log\rho\in[-\infty,0)$ and $\mathbb{E}\xi<\infty$. Then

$$\nu=\frac{(1-\mathbb{E}\rho)\mathbb{E}\xi}{(1-\mathbb{E}\rho)\mathbb{E}\xi^2+2\,\mathbb{E}\xi\,\mathbb{E}\rho\xi},\qquad \frac{1}{\nu}=\frac{1}{\mathbb{E}\xi}\Big(\mathbb{E}\xi^2+\frac{2\,\mathbb{E}\xi\,\mathbb{E}\rho\xi}{1-\mathbb{E}\rho}\Big),\qquad (2.4)$$

provided that $\mathbb{E}\rho<1$, $\mathbb{E}\rho\xi<\infty$ and $\mathbb{E}\xi^2<\infty$, and $\nu=0$ ($1/\nu=\infty$) otherwise.
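Formula (2.4) lends itself to a quick numerical sanity check (this simulation is an illustration, not part of the paper). Take deterministic $\xi\equiv 2$ and $\lambda\equiv 0.7$, so that $\rho=3/7$, $\mathbb{E}\xi=2$, $\mathbb{E}\xi^2=4$ and $\mathbb{E}\rho\xi=6/7$; formula (2.4) then gives $\nu=0.2$ exactly, and by (2.2) the empirical speed $X_n/n$ should be close to $0.2$ for large $n$.

```python
import random

def speed_estimate(n_steps, xi, lam, rng):
    """Simulate n_steps of the walk in the sparse environment with
    deterministic gaps (S_k = k * xi, for all k in Z) and marked-site
    probability lam; return the empirical speed X_n / n."""
    x = 0
    for _ in range(n_steps):
        p = lam if x % xi == 0 else 0.5  # x is a marked point S_k iff xi | x
        x += 1 if rng.random() < p else -1
    return x / n_steps

rng = random.Random(5)
v = speed_estimate(100_000, 2, 0.7, rng)  # (2.4) predicts nu = 0.2 here
```

Note that Python's modulo makes the marked-point test `x % xi == 0` correct for negative sites as well, so the environment is marked on all of $\{2k:k\in\mathbb{Z}\}$.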

Turning to weak convergence results we first formulate our assumptions on the distribution of ρ. Two different sets of conditions will be used:

(P1) for some $\alpha\in(0,2]$,

$$\mathbb{E}\rho^{\alpha}=1,\qquad \mathbb{E}\rho^{\alpha}\log^+\rho<\infty,$$

and the distribution of $\log\rho$ is nonarithmetic, where $\log^+x:=\max(0,\log x)$;

(P2) there exists an open interval $I\subset(0,\infty)$ such that $\mathbb{E}\rho^{x}<1$ for all $x\in I$.

Assuming that (P1) holds for some $\alpha>0$, we further distinguish two cases pertaining to the distribution of $\xi$:

(Ξ1) $\mathbb{E}\xi^{2\alpha\vee 1}<\infty$, where $x\vee y:=\max(x,y)$;

(Ξ2) there exists a slowly varying function $\ell$ such that

$$\mathbb{P}\{\xi>t\}\sim t^{-\beta}\ell(t),\qquad t\to\infty,\qquad (2.5)$$

for some $\beta\in(1,2\alpha]$, and $\mathbb{E}\xi^{2\alpha}=\infty$ if $\beta=2\alpha$.

Finally, if (P2) holds for some open interval $I$, we assume that either (Ξ1) holds for some $\alpha\in I$ or the regular variation assumption in (Ξ2) holds for some $\beta$ satisfying $\beta/2\in I$.

We summarize our results in Table 1, with an emphasis on which component of the environment dominates.

Table 1:

Influence of the environment and limit theorems for $T_n$.

Under (P1) and (Ξ1): apply Thm. 2.2 (A1) ($\rho$ dominates).

Under (P1) and (Ξ2):
  • if $\beta<2\alpha$, apply (P2) with $\alpha=\beta/2$;
  • if $\beta=2\alpha$ and $\lim_{t\to\infty}\ell(t)=0$, apply Thm. 2.2 (A2) ($\rho$ dominates);
  • if $\beta=2\alpha$ and $\lim_{t\to\infty}\ell(t)=C\in(0,\infty)$, apply Thm. 2.2 (A3) (contributions of $\rho$ and $\xi$ are comparable);
  • if $\beta=2\alpha$ and $\lim_{t\to\infty}\ell(t)=+\infty$, apply Thm. 2.6 (B1) ($\xi$ dominates);
  • if $\beta>2\alpha$, apply (P1) and (Ξ1) (because (Ξ2) with $\beta>2\alpha$ implies (Ξ1)).

Under (P2) and (Ξ1): if $2\in I$, apply Prop. 2.9 (contributions of $\rho$ and $\xi$ are comparable).

Under (P2) and (Ξ2): if $\beta\in(1,4)$ and $\beta/2\in I$, apply Thm. 2.6 (B2) ($\xi$ dominates).

In what follows, for $\alpha\in(0,2)$, we denote by $S_\alpha$ a random variable with an $\alpha$-stable distribution defined as follows:

$$\log\mathbb{E}\exp(-uS_\alpha)=-\Gamma(1-\alpha)u^{\alpha},\qquad u\ge 0,$$

where $\Gamma(\cdot)$ is the gamma function, if $\alpha\in(0,1)$;

$$\log\mathbb{E}\exp(\mathrm{i}uS_1)=-(\pi/2)|u|-\mathrm{i}u\log|u|,\qquad u\in\mathbb{R},$$

if $\alpha=1$; and

$$\log\mathbb{E}\exp(\mathrm{i}uS_\alpha)=-|u|^{\alpha}\frac{\Gamma(2-\alpha)}{\alpha-1}\big(\cos(\pi\alpha/2)-\mathrm{i}\sin(\pi\alpha/2)\,\mathrm{sign}\,u\big),\qquad u\in\mathbb{R},$$

if $\alpha\in(1,2)$. Note that $S_\alpha$ is a positive random variable when $\alpha\in(0,1)$ and has a spectrally positive $\alpha$-stable distribution when $\alpha\in[1,2)$. Throughout the paper, $\overset{\mathbb{P}}{\to}$ and $\overset{d}{\to}$ denote convergence in probability and convergence in distribution, respectively.

In Theorem 2.2 and Corollary 2.4 we treat the case (P1).

Theorem 2.2.

Assume that one of the following sets of assumptions is satisfied:

(A1) (P1) holds for some $\alpha\in(0,2]$, (Ξ1) holds and $\mathbb{E}(\rho\xi)^{\alpha}<\infty$;

(A2) (P1) holds for some $\alpha\in(1/2,2]$, (Ξ2) holds with $\beta=2\alpha$ and $\lim_{t\to\infty}\ell(t)=0$, and $\mathbb{E}(\rho\xi)^{\alpha}<\infty$;

(A3) (P1) holds for some $\alpha\in(1/2,2)$, (Ξ2) holds with $\beta=2\alpha$ and $\lim_{t\to\infty}\ell(t)=C\in(0,\infty)$, and $\mathbb{E}\rho^{\alpha+\varepsilon}<\infty$ and $\mathbb{E}\rho^{\alpha}\xi^{\alpha+\varepsilon}<\infty$ for some $\varepsilon>0$.

Then there exist absolute constants $A_\alpha$, $B_\alpha$ and $C_1$ such that the following limit relations hold as $n\to\infty$.

  • If $\alpha\in(0,1)$, then $\dfrac{T_n}{B_\alpha n^{1/\alpha}}\overset{d}{\to}S_\alpha$.

  • If $\alpha=1$, then $\dfrac{T_n-A_1a(n)}{B_1n}\overset{d}{\to}C_1+S_1$, where $a(n)\sim n\log n$.

  • If $\alpha\in(1,2)$, then $\dfrac{T_n-A_\alpha n}{B_\alpha n^{1/\alpha}}\overset{d}{\to}S_\alpha$.

  • If $\alpha=2$, then $\dfrac{T_n-A_2n}{B_2(n\log n)^{1/2}}\overset{d}{\to}\mathcal{N}(0,1)$, where $\mathcal{N}(0,1)$ denotes a standard normal random variable.

Remark 2.3.

See (7.11), (7.12) and (7.14) for the explicit forms of the constants $A_\alpha$, $B_\alpha$ and $C_1$. In Theorem 2.2 we do not specify the constants for two reasons. First, they involve characteristics of random variables that have not been introduced so far. Second, some of these constants are essentially implicit in the sense that they cannot be calculated explicitly.

From Theorem 2.2 we deduce the following corollary.

Corollary 2.4.

Under the assumptions and notation of Theorem 2.2 the following limit relations hold as $k\to\infty$.

  • If $\alpha\in(0,1)$, then $\dfrac{X_k}{B_\alpha^{-\alpha}k^{\alpha}}\overset{d}{\to}S_\alpha^{-\alpha}$.

  • If $\alpha=1$, then $\dfrac{X_k-A_1^{-1}\hat a(k)}{A_1^{-2}B_1k(\log k)^{-2}}\overset{d}{\to}-(C_1+S_1)$, where $\hat a(k)\sim k(\log k)^{-1}$.

  • If $\alpha\in(1,2)$, then $\dfrac{X_k-A_\alpha^{-1}k}{A_\alpha^{-(1+1/\alpha)}B_\alpha k^{1/\alpha}}\overset{d}{\to}-S_\alpha$.

  • If $\alpha=2$, then $\dfrac{X_k-A_2^{-1}k}{A_2^{-3/2}B_2(k\log k)^{1/2}}\overset{d}{\to}\mathcal{N}(0,1)$.

Remark 2.5.

When $\alpha\in(0,1)$ the distribution of $S_\alpha^{-\alpha}$ is called the Mittag-Leffler distribution with parameter $\alpha$. The term stems from the facts that

$$\mathbb{E}\exp\big(u\Gamma(1-\alpha)S_\alpha^{-\alpha}\big)=\sum_{n\ge 0}\frac{u^{n}}{\Gamma(1+n\alpha)},\qquad u\in\mathbb{R},$$

and that the right-hand side defines the Mittag-Leffler function with parameter $\alpha$.

Our next theorem treats the weak convergence of $T_n$ in the cases where $\xi$ plays the dominant role.

Theorem 2.6.

Assume that one of the following sets of assumptions is satisfied:

(B1) (P1) holds for some $\alpha\in(1/2,2]$, (Ξ2) holds with $\beta=2\alpha$ and $\lim_{t\to\infty}\ell(t)=+\infty$, and $\mathbb{E}(\rho\xi)^{\alpha}<\infty$;

(B2) (P2) holds, and (Ξ2) holds with $\beta\in(1,4)$ such that $\beta/2\in I$ and $\mathbb{E}(\rho\xi)^{\beta/2+\varepsilon}<\infty$ for some $\varepsilon>0$.

In the case (B2) put $\alpha:=\beta/2$. Then there exist functions $c_\alpha(t)$ for $\alpha\in(1/2,2)$, $q_1(t)$ and $r_2(t)$, regularly varying at $\infty$ with indices $1/\alpha$, $1$ and $1/2$, respectively, and absolute constants $A_\alpha^{*}$ and $B_\alpha^{*}$ for $\alpha\in(1/2,2]$ such that the following limit relations hold as $n\to\infty$.

  • If $\alpha\in(1/2,1)$, then $\dfrac{T_n}{B_\alpha^{*}c_\alpha(n)}\overset{d}{\to}S_\alpha$.

  • If $\alpha=1$, then $\dfrac{T_n-nq_1(A_1^{*}n)}{B_1^{*}c_1(n)}\overset{d}{\to}S_1$.

  • If $\alpha\in(1,2)$, then $\dfrac{T_n-A_\alpha^{*}n}{B_\alpha^{*}c_\alpha(n)}\overset{d}{\to}S_\alpha$.

  • If $\alpha=2$, then $\dfrac{T_n-A_2^{*}n}{B_2^{*}r_2(n)}\overset{d}{\to}\mathcal{N}(0,1)$.

Remark 2.7.

This is a counterpart of Remark 2.3. The explicit forms of the normalizing and centering sequences in Theorem 2.6 and in Corollary 2.8 given below can be found in (7.16)–(7.19) and in (7.20)–(7.23), respectively.

Before formulating the corresponding limit theorems for $X_k$ we need to introduce more notation. For $\alpha\in(1/2,1)$, denote by $c_\alpha^{\leftarrow}(t)$ any positive function satisfying $c_\alpha(c_\alpha^{\leftarrow}(t))\sim c_\alpha^{\leftarrow}(c_\alpha(t))\sim t$ as $t\to\infty$. Since $c_\alpha(t)$ is regularly varying at $\infty$, such functions $c_\alpha^{\leftarrow}(t)$ do exist by Theorem 1.5.12 in [2].

Corollary 2.8.

Under the assumptions and notation of Theorem 2.6 the following limit relations hold as k → ∞.

  • If $\alpha\in(1/2,1)$, then $\dfrac{X_k}{(B_\alpha^{*})^{-\alpha}c_\alpha^{\leftarrow}(k)}\overset{d}{\to}S_\alpha^{-\alpha}$.

  • If $\alpha=1$, then $\dfrac{X_k-s(k)}{t(k)}\overset{d}{\to}-S_1$ for appropriate sequences $s(k)$ and $t(k)$, which are specified in formula (7.21).

  • If $\alpha\in(1,2)$, then $\dfrac{X_k-(A_\alpha^{*})^{-1}k}{(A_\alpha^{*})^{-(1+1/\alpha)}B_\alpha^{*}c_\alpha(k)}\overset{d}{\to}-S_\alpha$.

  • If $\alpha=2$, then $\dfrac{X_k-(A_2^{*})^{-1}k}{(A_2^{*})^{-3/2}B_2^{*}r_2(k)}\overset{d}{\to}\mathcal{N}(0,1)$.

The last result of this section is given for completeness only. It can be derived from a general central limit theorem (Theorem 2.2.1 in [40]) for random walks in a stationary and ergodic random environment. Since the sparse random environment is not stationary in general, to apply this theorem one has to pass to a stationary and ergodic environment. In Theorem 2.1 in [30] it is shown that such a passage is possible whenever $\mathbb{E}\xi<\infty$.

Proposition 2.9.

Assume that (P2) and (Ξ1) hold for some $\alpha\ge 2$. Then there exists $\sigma_0\in(0,\infty)$ such that, as $n\to\infty$,

$$\frac{T_n-\nu^{-1}n}{\sigma_0 n^{1/2}}\overset{d}{\to}\mathcal{N}(0,1)$$

and

$$\frac{X_n-\nu n}{\sigma_0\nu^{3/2}n^{1/2}}\overset{d}{\to}\mathcal{N}(0,1),$$

where $\nu$ is given in (2.4).

3. Branching processes in random environment with immigration

The connection between a random walk and a branching process with immigration dates back to Harris [22]. In the context of a random walk in a random environment this connection was successfully used by Kozlov [29] and Kesten, Kozlov and Spitzer [26]. In particular, these authors have shown that the asymptotic behavior of RWRE can be obtained from that of the total progeny of the aforementioned branching process. Since we are going to exploit the same idea we first recall a construction of the latter process. Most of the material in Section 3.1 can be found in [26].

3.1. Branching process with immigration

Throughout the paper the fact that $X_n\to\infty$ $\mathbb{P}$-a.s. plays a crucial role. Let $U_i^{(n)}$ be the number of steps of the process $X$ from $i$ to $i-1$ during the time interval $[0,T_n)$, that is,

$$U_i^{(n)}=\#\{k<T_n:X_k=i,\ X_{k+1}=i-1\},\qquad i\le n.$$

Since $X_{T_n}=n$ and $X_0=0$ we have, for $n\in\mathbb{N}$,

$$T_n=\#\{\text{steps during }[0,T_n)\}=\#\{\text{steps to the right during }[0,T_n)\}+\#\{\text{steps to the left during }[0,T_n)\}=n+2\,\#\{\text{steps to the left during }[0,T_n)\}=n+2\sum_{i\le n}U_i^{(n)}.$$

Recalling that the random walk $X$ is transient to the right, we infer

$$\sum_{i<0}U_i^{(n)}\le\text{total time spent by }X\text{ in }(-\infty,0)<\infty\quad\text{a.s.}\qquad (3.1)$$

In particular, for any $\gamma>0$,

$$n^{-\gamma}\sum_{i<0}U_i^{(n)}\overset{\mathbb{P}}{\to}0,\qquad n\to\infty.$$

Thus, the asymptotics of $T_n$ as $n\to\infty$ is regulated by that of $n+2\sum_{i=0}^{n}U_i^{(n)}$.
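The path-wise identity $T_n=n+2\sum_{i\le n}U_i^{(n)}$ can be checked mechanically: each left step must be compensated by an extra right step before the walk reaches $n$. The short Python sketch below (illustrative only; a homogeneous drift-to-the-right environment is used for simplicity) counts the left steps along a trajectory and verifies the identity.

```python
import random

def first_passage_with_left_count(target, omega_fn, rng, max_steps=10**6):
    """Run the walk from 0 until it first hits `target`; return the
    first-passage time T and the number of left steps taken before T."""
    x, t, left = 0, 0, 0
    while x != target and t < max_steps:
        if rng.random() < omega_fn(x):
            x += 1
        else:
            x -= 1
            left += 1
        t += 1
    return t, left

rng = random.Random(7)
T, L = first_passage_with_left_count(40, lambda x: 0.6, rng)
# path-wise: T_n = n + 2 * (number of left steps before T_n)
```

The identity is deterministic: the terminal position $40$ equals (right steps) minus (left steps), while $T$ equals their sum.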

In what follows, we write ${\rm Geom}(p)$ for the geometric distribution with success probability $p$, that is,

$${\rm Geom}(p)\{m\}=p(1-p)^{m},\qquad m\in\mathbb{N}_0.$$

Claim. Let $\omega$ and $n\in\mathbb{N}$ be fixed. Then, for $0\le j\le n$, $U_{n-j}^{(n)}$ is equal to the size of the $j$th generation (excluding the immigrant) of an inhomogeneous branching process with one immigrant in each generation. Under $P_\omega$, the offspring distribution of the immigrant and of the other particles in the $(j-1)$st generation is ${\rm Geom}(\omega_{n-j})$.

Proof of the claim. First note that $U_n^{(n)}=0$ because $X$ cannot reach $n$ before time $T_n$. Further, $U_{n-1}^{(n)}=V_0^{(n-1)}$, where $V_0^{(n-1)}$ is the number of excursions to the left of $n-1$ made by $X$ before time $T_n$. Transience of $X$ entails that the $P_\omega$-distribution of $V_0^{(n-1)}$ is ${\rm Geom}(\omega_{n-1})$. Finally, for $2\le j\le n-1$, we have

$$U_{n-j}^{(n)}=\sum_{k=1}^{U_{n-j+1}^{(n)}}V_k^{(n-j)}+V_0^{(n-j)}\quad\text{a.s.},$$

where $V_0^{(n-j)}$ denotes the number of excursions to the left from $n-j$ before the first excursion to the left from $n-j+1$ (that is, before the time $T_{n-j+1}$) and $V_k^{(n-j)}$ denotes the number of excursions to the left from $n-j$ during the $k$th excursion to the left from $n-j+1$. Under $P_\omega$, the random variables $(V_k^{(n-j)})_{k\ge 0}$ are iid with distribution ${\rm Geom}(\omega_{n-j})$ and also independent of $U_{n-j+1}^{(n)}$. The proof of the claim is complete.

Reversing the order of indices leads to a branching process $Z=(Z_k)_{k\ge 0}$ in a random environment (BPRE) with one immigrant entering the system in each generation. From the very beginning we stress that the immigrants in our model are 'artificial': even though they reproduce, they do not belong to any generation and, as such, they are not counted. The evolution of $Z$ can be described as follows. An immigrant enters the $0$th generation, which is originally empty, that is, $Z_0=0$. She gives birth to a random number of offspring with $P_\omega$-distribution ${\rm Geom}(\omega_1)$, which form the first generation. For $n\in\mathbb{N}$, an immigrant enters the $n$th generation. She and the particles of the $n$th generation, independently of each other and of the particles in the previous generations, give birth to random numbers of offspring with $P_\omega$-distribution ${\rm Geom}(\omega_{n+1})$. The number of these newborn particles, which form the $(n+1)$st generation, is given by

$$Z_{n+1}=\sum_{k=0}^{Z_n}G_k^{(n)},\qquad n\in\mathbb{N}_0,$$

where $G_0^{(n)}$ is the number of offspring of the $(n+1)$st immigrant and, for $k\in\mathbb{N}$, $G_k^{(n)}$ is the number of offspring of the $k$th particle in the $n$th generation (we set $G_k^{(n)}=0$ if the $k$th particle in the $n$th generation does not exist). Observe that, under $P_\omega$, for each $n\in\mathbb{N}_0$, the random variables $(G_k^{(n)})_{k\ge 0}$ are iid with distribution ${\rm Geom}(\omega_{n+1})$ and also independent of $Z_n$.
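The recursion $Z_{n+1}=\sum_{k=0}^{Z_n}G_k^{(n)}$ translates directly into a simulation of the quenched process. The Python sketch below is illustrative only (not from the paper); the list `omega` holds the success probabilities applied generation by generation, one immigrant entering each generation.

```python
import random

def geom_offspring(p, rng):
    """Sample Geom(p) on {0, 1, 2, ...}: failures before the first success."""
    n = 0
    while rng.random() >= p:
        n += 1
    return n

def bpre_with_immigration(omega, rng):
    """Generation sizes Z_1, Z_2, ... of the branching process Z for a fixed
    environment: the current generation plus one immigrant each produce an
    independent Geom(w) number of offspring."""
    z, history = 0, []
    for w in omega:
        z = sum(geom_offspring(w, rng) for _ in range(z + 1))  # +1 immigrant
        history.append(z)
    return history
```

With $\omega\equiv 1/2$ the offspring mean is $1$, so $\mathbb{E}Z_n=n$: the process is the critical Galton–Watson process with unit immigration that reappears in Section 6.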

Note that when the random environment is sparse (see (1.1)) and fixed, the branching process $Z$ behaves most of the time like a critical Galton–Watson process with one immigrant per generation and ${\rm Geom}(1/2)$ offspring distribution. Only the particles of the generation $S_i-1$ for $i\in\mathbb{N}$, as well as the immigrant arriving in this generation, reproduce according to a ${\rm Geom}(\lambda_i)$ distribution. Averaging over $\omega$ and taking into account the structure of the environment we obtain

$$\sum_{j=0}^{S_n}U_j^{(S_n)}\overset{d}{=}\sum_{k=1}^{S_n}Z_k\quad\text{and}\quad S_n+\sum_{j=0}^{S_n}U_j^{(S_n)}\overset{d}{=}S_n+\sum_{k=1}^{S_n}Z_k,\qquad n\in\mathbb{N},\qquad (3.2)$$

under the annealed probability $\mathbb{P}$. This leads to the most important conclusion of the present section:

$$T_{S_n}\overset{d}{=}S_n+2\sum_{k=1}^{S_n}Z_k+O_{\mathbb{P}}(1),\qquad n\to\infty,\qquad (3.3)$$

where $O_{\mathbb{P}}(1)$ denotes a term which is bounded in probability. The distributional equality (3.3) will prove useful on many occasions.

3.2. Notation

Before we explain the strategy of our proof, some more notation has to be introduced. Denote by $Z(k,n)$ the number of progeny residing in the $n$th generation of the $k$th immigrant. In particular, $Z(k,k)$ is the number of offspring of this immigrant. Then

$$Z_n=\sum_{k=1}^{n}Z(k,n).$$

For $n\in\mathbb{N}$ and $1\le i\le n$, let $Y(i,n)$ denote the number of progeny in the generations $i,i+1,\dots,n$ of the $i$th immigrant, that is,

$$Y(i,n)=\sum_{k=i}^{n}Z(i,k).$$

Similarly, for $i\in\mathbb{N}$, we denote by $Y_i$ the total progeny of the $i$th immigrant, that is,

$$Y_i=Y(i,\infty)=\sum_{k\ge i}Z(i,k).$$

We also define $W_n$ to be the total population size in the first $n$ generations, that is,

$$W_n=\sum_{j=1}^{n}Z_j,\qquad n\in\mathbb{N}.$$

Motivated by the structure of the environment we shall often divide the population into blocks which comprise the generations $1,\dots,S_1$; $S_1+1,\dots,S_2$; and so on. As a preparation, we write

$$\Theta_n=Z_{S_n},\qquad n\in\mathbb{N},$$

for the number of particles in the generation $S_n$,

$$\mathcal{W}_n=W_{S_n}-W_{S_{n-1}}=\sum_{j=S_{n-1}+1}^{S_n}Z_j,\qquad n\in\mathbb{N},$$

for the total population in the generations $S_{n-1}+1,\dots,S_n$, and

$$\mathcal{Y}_n=\sum_{j=S_{n-1}+1}^{S_n}Y_j,\qquad n\in\mathbb{N},$$

for the total progeny of the immigrants arriving in the generations $S_{n-1},\dots,S_n-1$.

3.3. Analysis of the environment

The asymptotic behavior of the branching process Z depends heavily upon the environment. At the end of this section we specify qualitatively two aspects of this dependence. A random difference equation which arises naturally in the course of our discussion, as well as in [26] and many other papers on RWRE, plays an important role in the subsequent arguments.

We proceed by recalling the definitions of random difference equations and perpetuities. Let $(A_n,B_n)_{n\in\mathbb{N}}$ be a sequence of independent copies of an $\mathbb{R}^2$-valued random vector $(A,B)$. Further, let $R_0$ be a random variable which is independent of $(A_n,B_n)_{n\in\mathbb{N}}$. The sequence $(R_k)_{k\in\mathbb{N}_0}$, recursively defined by the random difference equation

$$R_k:=B_k+A_kR_{k-1},\qquad k\in\mathbb{N},$$

forms a Markov chain which is very well known and well understood. Assuming that $R_0=0$ and reversing the indices in the equivalent representation $R_k=A_k\cdots A_2B_1+A_k\cdots A_3B_2+\dots+B_k$ leads to the random variable $R_k^{*}:=B_1+A_1B_2+\dots+A_1\cdots A_{k-1}B_k$ satisfying $R_k^{*}\overset{d}{=}R_k$ for all $k\in\mathbb{N}$. Whenever

$$\text{the series }\sum_{j\ge 1}B_j\prod_{l=1}^{j-1}A_l\ \text{converges a.s.},\qquad (3.4)$$

its sum $R^{*}:=\sum_{j\ge 1}B_j\prod_{l=1}^{j-1}A_l$ is called a perpetuity because of a possible actuarial application. The study of random difference equations and perpetuities has a long history going back to Kesten [24] and Grincevičius [17]. We refer the reader to the recent monographs [4, 23], which contain a comprehensive bibliography on the subject.

It is well known that the conditions $\mathbb{E}\log|A|\in[-\infty,0)$ and $\mathbb{E}\log^+|B|<\infty$ are sufficient for (3.4) and for the distributional convergence $R_k\overset{d}{\to}R^{*}$ as $k\to\infty$. There are numerous results in the literature concerning the tail behavior of $R^{*}$. The first assertion of this flavor is the celebrated theorem by Kesten [24] (see also Goldie [16] and Grincevičius [18]), to be referred to as the Kesten–Grincevičius–Goldie theorem. It states that the distribution of $R^{*}$ has a heavy right tail under the assumptions $A>0$ a.s., $\mathbb{E}A^{s}=1$ for some $s>0$ and some additional conditions; see formula (7.39) below for more details in the particular case $(A,B)=(\rho,\xi)$. The tail behavior of $R^{*}$ is also well understood in some other cases, in particular, when $\mathbb{P}\{|B|>x\}$ is regularly varying at $\infty$ (see, for instance, [18], [20] and [8]).
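Condition (3.4) also suggests how to approximate $R^{*}$ numerically: truncate the a.s. convergent series after finitely many terms. The following Python sketch is illustrative only; the law of $(A,B)$ is an arbitrary toy choice with $\mathbb{E}\log A<0$, not one taken from the paper.

```python
import random

def perpetuity_sample(n_terms, ab_sampler, rng):
    """Approximate R* = sum_{j>=1} B_j * A_1 * ... * A_{j-1} by truncating
    the series (3.4) after n_terms terms."""
    total, prod = 0.0, 1.0
    for _ in range(n_terms):
        a, b = ab_sampler(rng)
        total += prod * b       # add B_j times the accumulated product
        prod *= a               # extend the product A_1 * ... * A_j
    return total

rng = random.Random(3)
# toy law: A uniform on (0, 0.9), B = 1, so E log A < 0 and 1 <= R* < 10
samples = [perpetuity_sample(200, lambda r: (0.9 * r.random(), 1.0), rng)
           for _ in range(100)]
```

With $B\equiv 1$ and $A\in(0,0.9)$ the series is bounded between $1$ and the geometric bound $\sum_{j\ge 0}0.9^{j}=10$, which makes the truncation error negligible after a few hundred terms.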

Now we switch attention from general random difference equations to the particular one which features in the analysis of the BPRE $Z$. Using the branching property one easily obtains the recurrence

$$\bar R_0:=\mathbb{E}_\omega\Theta_0=0,\qquad \bar R_k:=\mathbb{E}_\omega\Theta_k=\mathbb{E}_\omega Z_{S_k}=\rho_k\xi_k+\rho_k\mathbb{E}_\omega Z_{S_{k-1}}=\rho_k\xi_k+\rho_k\bar R_{k-1},\qquad k\in\mathbb{N}.$$

This shows, among other things, that the Markov chain $(\bar R_k)_{k\in\mathbb{N}_0}$ is an instance of the random difference equation corresponding to $(A,B)=(\rho,\rho\xi)$. Asymptotic distributional properties of the particular perpetuity corresponding to $(A,B)=(\rho,\xi)$ are essentially used in the proof of Lemma 7.2.

4. Proof strategy

A weak convergence result for $T_n$, properly normalized and centered, will be derived from the corresponding result for $T_{S_n}$, again properly normalized and centered. In view of (3.3), the latter may in principle be affected by the asymptotic behavior of $S_n$, of $W_{S_n}$, or of both. Fortunately, the contribution of $S_n$ is degenerate in the limit: it is regulated by the law of large numbers only, and fluctuations of $S_n$ around its mean do not come into play. Summarizing, the analysis of the asymptotics of $W_{S_n}$ is our main task.

While dealing with $W_{S_n}$ our main arguments follow the strategy invented by Kesten et al. [26]. Namely, for large $n$ we decompose $W_{S_n}$ into a sum of random variables which are iid under the annealed probability $\mathbb{P}$. For this purpose we define the extinction times

$$\tau_0:=0,\qquad \tau_k:=\min\{j>\tau_{k-1}:\Theta_j=0\},\qquad k\in\mathbb{N}.\qquad (4.1)$$

Let us emphasize that extinctions of $Z$ in the generations other than $S_1,S_2,\dots$ are ignored. Set

$$\bar W_{\tau_n}:=W_{S_{\tau_n}}-W_{S_{\tau_{n-1}}},\qquad n\in\mathbb{N},$$

and note that $(\bar W_{\tau_n},\tau_n-\tau_{n-1})_{n\in\mathbb{N}}$ are iid random vectors. We have

$$\sum_{k=1}^{\tau_n^{*}}\bar W_{\tau_k}\le\sum_{k=1}^{S_n}Z_k\le\sum_{k=1}^{\tau_n^{*}+1}\bar W_{\tau_k},\qquad (4.2)$$

where $\tau_n^{*}$ is the number of extinctions of $Z$ in the generations $S_0,\dots,S_n$, that is,

$$\tau_n^{*}:=\max\{k\in\mathbb{N}_0:\tau_k\le n\},\qquad n\in\mathbb{N}.$$

It turns out that the extinctions occur relatively often as the following lemma confirms.

Lemma 4.1.

Assume that $\mathbb{E}\log\rho\in[-\infty,0)$ and $\mathbb{E}\log\xi<\infty$. Then $\mathbb{E}\tau_1<\infty$. If additionally $\mathbb{E}\rho^{\varepsilon}<\infty$ and $\mathbb{E}\xi^{\varepsilon}<\infty$ for some $\varepsilon>0$, then $\mathbb{E}\exp(\gamma\tau_1)<\infty$ for some $\gamma>0$.

The proof of Lemma 4.1 is given in the Appendix.

Under the assumptions of our main results, $\mu:=\mathbb{E}\tau_1<\infty$ by Lemma 4.1. The strong law of large numbers for renewal processes makes it plausible that, for large $n$, the behavior of $W_{S_n}$ is comparable to that of the sum $\sum_{k=1}^{\lfloor\mu^{-1}n\rfloor}\bar W_{\tau_k}$. The latter, properly centered and normalized, converges in distribution if and only if the distribution of $\bar W_{\tau_1}$ belongs to the domain of attraction of a stable law. To check the latter, for $i\in\mathbb{N}$, we divide the particles residing in the generations $S_{i-1}+1,\dots,S_i$ into groups:

  • $P_{1,i}$ – the progeny residing in the generations $S_{i-1}+1,\dots,S_i-1$ of the immigrants arriving in the generations $S_{i-1},\dots,S_i-2$, the number of these being
    $$\mathcal{W}_i^{0}:=\sum_{j=S_{i-1}+1}^{S_i-1}\sum_{k=j}^{S_i-1}Z(j,k);$$
  • $P_{2,i}$ – the progeny residing in the generations $S_{i-1}+1,\dots,S_i-1$ of the immigrants arriving in the generations $0,1,\dots,S_{i-1}-1$, the number of these being
    $$\mathcal{W}_i':=\sum_{j=1}^{S_{i-1}}\sum_{k=S_{i-1}+1}^{S_i-1}Z(j,k);$$
  • $P_{3,i}$ – the particles of the generation $S_i$, the number of these being $\Theta_i$.

The aforementioned partition of the population, which is depicted in Figure 1, induces the decompositions

$$\mathcal{W}_i=\mathcal{W}_i^{0}+\mathcal{W}_i'+\Theta_i,\qquad i\in\mathbb{N},\quad\text{a.s.}$$

and

$$\bar W_{\tau_1}=\sum_{i=1}^{\tau_1}\mathcal{W}_i^{0}+\sum_{i=1}^{\tau_1}\mathcal{W}_i'+\sum_{i=1}^{\tau_1}\Theta_i\quad\text{a.s.},$$

which are of primary importance for what follows.

Figure 1.

The generations 0 through $S_3$ of the BPRE $Z$ and the partition of the corresponding population into the parts $P_{i,j}$, $i,j=1,2,3$. The bold horizontal lines represent the particles in the generations $S_1$, $S_2$ and $S_3$, that is, those comprising the groups $P_{3,i}$, $i=1,2,3$. By definition, $P_{2,1}=\varnothing$.

Depending on the assumptions (P1), (P2), (Ξ1) or (Ξ2), the random variables $\sum_{i=1}^{\tau_1}\mathcal{W}_i^{0}$, $\sum_{i=1}^{\tau_1}\mathcal{W}_i'$ and $\sum_{i=1}^{\tau_1}\Theta_i$ may exhibit different tail behaviors. Often one of them dominates the others, thereby determining the tail behavior of the whole sum $\bar W_{\tau_1}$.

5. Tail behavior of $\bar W_{\tau_1}$

In this section we do not assume that $\mathbb{E}\xi<\infty$.

We first analyze the tail behavior of $\sum_{i=1}^{\tau_1}\mathcal{W}_i^{0}$. Note that by construction $(\mathcal{W}_i^{0})_{i\in\mathbb{N}}$ are iid and that the random variable $\tau_1$ does not depend on the future of the sequence $(\mathcal{W}_i^{0})_{i\in\mathbb{N}}$ in the sense of the definition given by Denisov, Foss and Korshunov on p. 987 in [10]. The latter means that, for each $n\in\mathbb{N}$, the collections of random variables $((\mathcal{W}_k^{0})_{k\le n},\mathbb{1}_{\{\tau_1\le n\}})$ and $(\mathcal{W}_k^{0})_{k>n}$ are independent. This observation in combination with Corollary 3 in [10] and Theorem 1 in [28] yields the following lemma, which will be used many times throughout the paper.

Lemma 5.1.

Assume that $\mathbb{P}\{\mathcal{W}_1^{0}>x\}$ is regularly varying at infinity and that $\tau_1$ has a finite exponential moment. Then

$$\mathbb{P}\Big\{\sum_{i=1}^{\tau_1}\mathcal{W}_i^{0}>x\Big\}\sim(\mathbb{E}\tau_1)\,\mathbb{P}\{\mathcal{W}_1^{0}>x\},\qquad x\to\infty.\qquad (5.1)$$

Proof. If $\mathbb{E}\mathcal{W}_1^{0}<\infty$, the claim follows from Corollary 3 in [10]. If $\mathbb{E}\mathcal{W}_1^{0}=\infty$, we use Theorem 1 in [28] to conclude that, as $t\to\infty$,

$$\int_0^t\mathbb{P}\Big\{\sum_{i=1}^{\tau_1}\mathcal{W}_i^{0}>x\Big\}\,\mathrm{d}x=\mathbb{E}\Big[\Big(\sum_{i=1}^{\tau_1}\mathcal{W}_i^{0}\Big)\wedge t\Big]\sim(\mathbb{E}\tau_1)\,\mathbb{E}[\mathcal{W}_1^{0}\wedge t]=(\mathbb{E}\tau_1)\int_0^t\mathbb{P}\{\mathcal{W}_1^{0}>x\}\,\mathrm{d}x.$$

By the monotone density theorem, see Theorem 1.7.2 in [2], the last formula entails (5.1).

Lemma 5.2.

Assume that (2.5) holds with some $\beta>0$. Then

$$\mathbb{P}\{\mathcal{W}_1^{0}>x\}\sim\mathbb{E}\vartheta^{\beta/2}\,x^{-\beta/2}\ell(x^{1/2}),\qquad x\to\infty,$$

where $\vartheta$ is a random variable with Laplace transform

$$\mathbb{E}e^{-s\vartheta}=1/\cosh(s^{1/2}),\qquad s\ge 0.\qquad (5.2)$$

The proof of Lemma 5.2 is given in Section 6. In the next two lemmas we provide moment estimates for the two other summands, $\sum_{i=1}^{\tau_1}\mathcal{W}_i'$ and $\sum_{i=1}^{\tau_1}\Theta_i$.

Lemma 5.3.

Assume that $\mathbb{E}\log\rho\in[-\infty,0)$ and that, for some $\kappa\le 2$, $\mathbb{E}(\rho\xi)^{\kappa}$ and $\mathbb{E}\xi^{\kappa}$ are finite. Then $\mathbb{E}\Theta_1^{\kappa}<\infty$ and there exists a positive constant $C$ such that, for all $n\in\mathbb{N}$,

$$\mathbb{E}\Theta_n^{\kappa}\le\begin{cases}C, & \text{if }\mathbb{E}\rho^{\kappa}<1,\\ Cn, & \text{if }\mathbb{E}\rho^{\kappa}=1,\\ C\gamma^{n}, & \text{if }\mathbb{E}\rho^{\kappa}>1,\end{cases}\qquad (5.3)$$

for some $\gamma>1$. If additionally $\mathbb{E}\xi^{2\kappa}<\infty$, then

$$\mathbb{E}(\mathcal{W}_1')^{\kappa}<\infty.\qquad (5.4)$$

Remark 5.4.

Since $\xi\ge 1$ a.s., the assumption $\mathbb{E}(\rho\xi)^{\kappa}<\infty$ entails $\mathbb{E}\rho^{\kappa}<\infty$. This explains the absence of the latter condition in Lemma 5.3.

Lemma 5.5.

Assume that, for some $\kappa\le 2$, $\mathbb{E}\rho^{\kappa}<1$ and $\mathbb{E}(\rho\xi)^{\kappa}$ and $\mathbb{E}\xi^{\kappa}$ are finite. Then, for all $\kappa_0\in(0,\kappa)$,

$$\mathbb{E}\Big(\sum_{i=1}^{\tau_1}\Theta_i\Big)^{\kappa_0}<\infty.\qquad (5.5)$$

If additionally $\mathbb{E}\xi^{3\kappa/2}<\infty$, then

$$\mathbb{E}\Big(\sum_{i=1}^{\tau_1}\mathcal{W}_i'\Big)^{\kappa_0}<\infty.\qquad (5.6)$$

Lemma 5.6 states that, under the assumption (P1), the distribution of $\sum_{k=1}^{\tau_1}(\Theta_k+\mathcal{W}_k')$ has a power tail.

Lemma 5.6.

Assume that (P1) holds for some $\alpha\in(0,2]$, $\mathbb{E}\xi^{3\alpha/2}<\infty$ and $\mathbb{E}(\rho\xi)^{\alpha}<\infty$. Then

$$\mathbb{P}\Big\{\sum_{k=1}^{\tau_1}(\Theta_k+\mathcal{W}_k')>x\Big\}\sim C_2(\alpha)x^{-\alpha},\qquad x\to\infty,$$

for a positive constant $C_2(\alpha)$.

Lemma 5.7 describes the tail behavior of $\bar W_{\tau_1}$ in the situation where the slowly varying factor in (Ξ2) has a positive constant limit.

Lemma 5.7.

Assume that (P1) holds for some $\alpha\in(0,2)$, (Ξ2) holds with $\beta=2\alpha$ and $\ell$ such that $\lim_{t\to\infty}\ell(t)=C>0$, and $\mathbb{E}\rho^{\alpha+\varepsilon}<\infty$ and $\mathbb{E}\rho^{\alpha}\xi^{\alpha+\varepsilon}<\infty$ for some $\varepsilon>0$. Then

$$\mathbb{P}\{\bar W_{\tau_1}>x\}\sim\big((\mathbb{E}\tau_1)(\mathbb{E}\vartheta^{\alpha})C+C_2(\alpha)\big)x^{-\alpha},\qquad x\to\infty,$$

where $C_2(\alpha)$ is the same constant as in Lemma 5.6.

The proofs of Lemmas 5.3 through 5.7 are postponed until Section 7.4.

For the ease of reference the tail behavior of W¯τ1 is summarized in the following proposition.

Proposition 5.8.

The following asymptotic relations hold.

(C1) If (P1) holds for some $\alpha\in(0,2]$, either $\mathbb{E}\xi^{2\alpha}<\infty$ or (Ξ2) holds with $\beta=2\alpha$ and $\lim_{t\to\infty}\ell(t)=0$, and $\mathbb{E}(\rho\xi)^{\alpha}<\infty$, then

$$\mathbb{P}\{\bar W_{\tau_1}>x\}\sim C_2(\alpha)x^{-\alpha},\qquad x\to\infty,$$

where $C_2(\alpha)$ is the same constant as in Lemma 5.6.

(C2) If (P1) holds for some $\alpha\in(0,2)$, (Ξ2) holds with $\beta=2\alpha$ and $\lim_{t\to\infty}\ell(t)=C\in(0,\infty)$, and $\mathbb{E}\rho^{\alpha+\varepsilon}<\infty$ and $\mathbb{E}\rho^{\alpha}\xi^{\alpha+\varepsilon}<\infty$ for some $\varepsilon>0$, then

$$\mathbb{P}\{\bar W_{\tau_1}>x\}\sim\big((\mathbb{E}\tau_1)(\mathbb{E}\vartheta^{\alpha})C+C_2(\alpha)\big)x^{-\alpha},\qquad x\to\infty.$$

(C3) If (P1) holds for some $\alpha\in(0,2]$, (Ξ2) holds with $\beta=2\alpha$ and $\lim_{t\to\infty}\ell(t)=\infty$, and $\mathbb{E}(\rho\xi)^{\alpha}<\infty$, then

$$\mathbb{P}\{\bar W_{\tau_1}>x\}\sim(\mathbb{E}\tau_1)(\mathbb{E}\vartheta^{\alpha})x^{-\alpha}\ell(x^{1/2}),\qquad x\to\infty.$$

(C4) If (P2) holds, (Ξ2) holds for some $\beta\in(0,4)$ such that $\beta/2\in I$, and $\mathbb{E}(\rho\xi)^{\beta/2+\varepsilon}<\infty$ for some $\varepsilon>0$, then

$$\mathbb{P}\{\bar W_{\tau_1}>x\}\sim(\mathbb{E}\tau_1)(\mathbb{E}\vartheta^{\beta/2})x^{-\beta/2}\ell(x^{1/2}),\qquad x\to\infty.$$

Proof. Under each of the assumptions (Ci), $i=1,2,3,4$, $\tau_1$ has a finite exponential moment by Lemma 4.1. This fact combined with Lemma 5.1 ensures (5.1) whenever the right tail of $\mathcal{W}_1^{0}$ is regularly varying.

Proof of (C1). Each of $\mathbb{E}\xi^{2\alpha}<\infty$ and (Ξ2) with $\beta=2\alpha$ implies $\mathbb{E}\xi^{3\alpha/2}<\infty$. Therefore, in view of Lemma 5.6 it is enough to show that

$$\mathbb{P}\Big\{\sum_{i=1}^{\tau_1}\mathcal{W}_i^{0}>x\Big\}=o(x^{-\alpha}),\qquad x\to\infty.\qquad (5.7)$$

If (Ξ2) holds with $\beta=2\alpha$, then according to Lemma 5.2

$$\mathbb{P}\{\mathcal{W}_1^{0}>x\}\sim\mathbb{E}\vartheta^{\alpha}\,x^{-\alpha}\ell(x^{1/2}),\qquad x\to\infty.$$

This, in combination with $\lim_{t\to\infty}\ell(t)=0$, which holds by assumption, and (5.1), proves (5.7).

Assuming that $\mathbb{E}\xi^{2\alpha}<\infty$, we intend to show that

$$\mathbb{E}\Big[\sum_{i=1}^{\tau_1}\mathcal{W}_i^{0}\Big]^{\alpha}<\infty,\qquad (5.8)$$

which, of course, entails (5.7). The proof of (5.8) utilizes two technical lemmas whose formulations and proofs are postponed until later. Since $\tau_1$ does not depend on the future of the sequence $(\mathcal{W}_i^{0})_{i\in\mathbb{N}}$, by Lemma A.1 it is enough to show that $\mathbb{E}[\mathcal{W}_1^{0}]^{\alpha}<\infty$. At the beginning of Section 6 we show that $\mathcal{W}_1^{0}$ has the same distribution as the total progeny of a critical Galton–Watson process with unit immigration and ${\rm Geom}(1/2)$ offspring distribution, stopped at the random time $\xi_1-1$. The conclusion $\mathbb{E}[\mathcal{W}_1^{0}]^{\alpha}<\infty$ then follows from Lemma 6.3.

Proof of (C2). This is just Lemma 5.7.

Proof of (C3). This follows from Lemma 5.2 in conjunction with (5.1) and Lemma 5.6, because (Ξ2) with $\beta=2\alpha$ entails $\mathbb{E}\xi^{3\alpha/2}<\infty$.

Proof of (C4). Since the interval $I$ is open, there exists $\varepsilon_1>0$ such that $\beta/2+\varepsilon_1\in(0,2]$, $\mathbb{E}\rho^{\beta/2+\varepsilon_1}<1$, $\mathbb{E}\xi^{3\beta/4+3\varepsilon_1/2}<\infty$ and $\mathbb{E}(\rho\xi)^{\beta/2+\varepsilon_1}<\infty$. In view of this, Lemma 5.5 applies with $\kappa=\beta/2+\varepsilon_1$ and $\kappa_0=\beta/2+\varepsilon_1/2$, which gives $\mathbb{E}\big(\sum_{i=1}^{\tau_1}\Theta_i\big)^{\beta/2+\varepsilon_1/2}<\infty$ and $\mathbb{E}\big(\sum_{i=1}^{\tau_1}\mathcal{W}_i'\big)^{\beta/2+\varepsilon_1/2}<\infty$. An appeal to Lemma 5.2 in combination with (5.1) does the rest.

6. Critical Galton–Watson process with immigration

As has already been mentioned in Section 3, $(Z_n)_{0\le n\le\xi_1-1}\overset{d}{=}(Z_n^{\rm crit})_{0\le n\le\xi_1-1}$, where $\xi_1$ is assumed independent of $(Z_n^{\rm crit})_{n\in\mathbb{N}_0}$, a critical Galton–Watson process with unit immigration and ${\rm Geom}(1/2)$ offspring distribution. In this section we collect some known properties of $(Z_n^{\rm crit})_{n\in\mathbb{N}_0}$ and prove several auxiliary results which, to our knowledge, are not available in the literature. The evolution of $(Z_n^{\rm crit})_{n\in\mathbb{N}_0}$ is the same as that of the BPRE $Z$ with $\omega_n\equiv 1/2$ for all $n$; see Section 3.1.

For $n\in\mathbb{N}$, let $W_n^{\rm crit}:=\sum_{k=1}^{n}Z_k^{\rm crit}$ denote the total progeny in the first $n$ generations. Further, for $n\in\mathbb{N}$ and $1\le k\le n$, write $Z^{\rm crit}(k,n)$ for the number of the $n$th-generation progeny of the $k$th immigrant and $Y^{\rm crit}(k,n)$ for the number of progeny of the $k$th immigrant residing in the generations $k$ through $n$, that is,

$$Y^{\rm crit}(k,n)=\sum_{j=k}^{n}Z^{\rm crit}(k,j).$$

Here is the main result of this section, of which Lemma 5.2 is an immediate consequence because $\mathcal{W}_1^{0}\overset{d}{=}W_{\xi_1-1}^{\rm crit}$, where $\xi_1$ is assumed independent of $(W_k^{\rm crit})_{k\in\mathbb{N}}$.

Proposition 6.1.

Let $\varsigma$ be an integer-valued random variable independent of $(W_n^{\rm crit})_{n\in\mathbb{N}_0}$ and such that

$$\mathbb{P}\{\varsigma>x\}\sim x^{-2\alpha}\ell(x),\qquad x\to\infty,$$

for some $\alpha>0$ and some $\ell$ slowly varying at $\infty$. Then

$$\mathbb{P}\{W_\varsigma^{\rm crit}>x\}\sim\mathbb{E}\vartheta^{\alpha}\,\mathbb{P}\{\varsigma>x^{1/2}\}\sim\mathbb{E}\vartheta^{\alpha}\,x^{-\alpha}\ell(x^{1/2}),\qquad x\to\infty,$$

where ϑ is a random variable with Laplace transform (5.2).

Remark 6.2. For fixed n, EWncrit=n(n+1)2 and the distribution of Wncrit inherits an exponential tail from Geom(1/2) offspring distribution. Thus, for ς which has distribution with a heavy tail and is independent of (Wncrit)n it is natural to expect that

Wςcritconstς2.

Proposition 6.1 makes this intuition precise.
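As a sanity check on Remark 6.2, the critical Galton–Watson process with unit immigration and Geom(1/2) offspring distribution is easy to simulate. The sketch below (illustrative only, not part of the proof; the function name and all parameter values are our own choices) estimates EWncrit by Monte Carlo and compares it with n(n+1)/2.

```python
import numpy as np

def total_progeny(n, rng):
    """Simulate W_n = Z_1 + ... + Z_n for the critical Galton-Watson
    process with one immigrant per generation and Geom(1/2) offspring
    distribution on {0, 1, 2, ...} (mean 1, variance 2)."""
    z, w = 0, 0
    for _ in range(n):
        # each of the z particles reproduces, then one immigrant arrives
        z = int((rng.geometric(0.5, size=z) - 1).sum()) + 1
        w += z
    return w

rng = np.random.default_rng(0)
n = 10
samples = np.array([total_progeny(n, rng) for _ in range(20000)])
print(samples.mean())  # close to n(n+1)/2 = 55
```

Since one immigrant arrives in every generation, each sample is at least n; the sample mean settles near 55 while individual samples fluctuate on the scale n², in line with the n² normalization in Lemma 6.4.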

Lemma 6.3 given next is used in the proof of Proposition 5.8, part (C1).

Lemma 6.3.

Let ς be an integer-valued random variable independent of (Wncrit)n0 and such that Eς2α< for some α > 0. Then E[Wςcrit]α<.

To prove Proposition 6.1 and Lemma 6.3 we need some auxiliary lemmas. The first one is due to Pakes [32, Theorem 5].

Lemma 6.4.

We have

n2Wncritdϑ,n, (6.1)

where ϑ is a random variable with Laplace transform (5.2).

In the cited article Pakes investigates Galton–Watson processes with general, not necessarily unit, immigration. One of the standing assumptions of that paper is that the probability of having no immigrants is positive. However, a perusal of the proof of Theorem 5 in [32] reveals that the result still holds without this assumption.

With some additional effort one can prove the convergence of all moments in (6.1).

Lemma 6.5.

For each s > 0,

limnE(n2Wncrit)s=Eϑs. (6.2)

Proof. Suppose for the moment that we have verified that

supnn0Eexp(βn2Wncrit)< (6.3)

for some β > 0 and some n0. Then in view of

supnn0E(n2Wncrit)sC(s)supnn0Eexp(βn2Wncrit)<

for all s > 0 and some constant C(s), the de la Vallée Poussin criterion for uniform integrability (see e.g. Theorem T22 in [31]) in combination with (6.1) ensures (6.2).

Left with the proof of (6.3), observe that, for fixed k, the process initiated by the kth immigrant (Zcrit(k, n))nk is a Galton–Watson process with Geom(1/2) offspring distribution. Moreover, the processes started by different immigrants are iid. Therefore, writing

Wncrit=k=1nZkcrit=k=1nj=1kZcrit(j,k)=j=1n(k=jnZcrit(j,k))=j=1nYcrit(j,n)a.s.

we obtain a representation of Wncrit as the sum of independent random variables. This formula entails

Eexp(xWncrit)=j=1naj(x),x0 (6.4)

(the case that both sides of (6.4) are infinite for some x > 0 is not excluded), where

aj(x):=Eexp(xYcrit(nj+1,n))=Eexp(xYcrit(1,j)),1jn,x0.

We have a0(x) = 1 for all x ≥ 0 and

a1(x) = Eexp(xZcrit(1,1)) = ∑k≥0 e^{kx}2^{−k−1} = (2 − e^x)^{−1}

for x ∈ [0, log 2). Using a decomposition

Ycrit(1,j)=m=1Zcrit(1,1)Ymcrit(1,j1)+Zcrit(1,1),j2a.s., (6.5)

where (Ymcrit(1,j1))m are independent copies of Ycrit(1,j1) which are also independent of Zcrit(1,1) we infer

aj(x) = (2 − e^x aj−1(x))^{−1}, j ≥ 1.

In particular, for every fixed j ≥ 0, aj(x) < ∞ for all x in some right neighborhood of the origin.

Set bj(x) = e^x aj(x) for j ≥ 0 and x ≥ 0, so that

bj(x) = e^x/(2 − bj−1(x)).

For technical reasons, it is more convenient to work with bj rather than with aj. We intend to show that, for every γ ∈ (0, 1/4), there exist K = K(γ) > 1 and x0(γ) > 0 such that

bj(x) ≤ 1 + Kx(j + 1) (6.6)

for j ≥ 0 and x > 0 satisfying j(1 + j)x ≤ γ and x < x0(γ).

Given γ ∈ (0, 1/4) pick K > 1 such that K − K²γ > 1. This is possible because the largest root of the quadratic equation γx² − x + 1 = 0 is larger than one. There exists x0(γ) > 0 such that

e^x ≤ 1 + (K − K²γ)x, x ∈ (0, x0(γ)).

Moreover, since we assume j(1 + j)x ≤ γ we have

e^x ≤ 1 + Kx − K²x²j(j+1) = (1 − Kxj)(1 + Kx(j+1)).

Now (6.6) follows by mathematical induction. For j = 0 we obtain

b0(x) = e^x ≤ 1 + (K − K²γ)x ≤ 1 + Kx, x ∈ (0, x0(γ)),

and the induction step works as follows:

bj(x) = e^x/(2 − bj−1(x)) ≤ e^x/(1 − Kjx) ≤ 1 + Kx(j+1)

for x ∈ (0, x0(γ)) and j(j + 1)x ≤ γ. The proof of (6.6) is complete.
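The induction above can also be checked numerically. The following sketch (an illustrative check; the concrete values γ = 0.2, K = 1.5 and x = 10⁻³ are our own choices satisfying K − K²γ = 1.05 > 1) iterates the recursion bj(x) = e^x/(2 − bj−1(x)) starting from b0(x) = e^x and verifies the bound (6.6) for every j in the admissible range.

```python
import math

gamma, K = 0.2, 1.5   # K - K**2 * gamma = 1.05 > 1
x = 1e-3              # small x, below any reasonable x0(gamma)
b = math.exp(x)       # b_0(x) = e^x
j = 0
# iterate while the constraint j(j+1)x <= gamma holds for the next index too
while (j + 1) * (j + 2) * x <= gamma:
    assert b <= 1 + K * x * (j + 1), (j, b)   # bound (6.6) for b_j
    b = math.exp(x) / (2 - b)                 # b_{j+1}(x) = e^x / (2 - b_j(x))
    j += 1
print(j, b)  # bound (6.6) held for all checked j
```

The recursion has no real fixed point for x > 0, so bj(x) slowly increases and eventually blows up; the constraint j(j + 1)x ≤ γ is exactly what keeps the iteration in the regime where (6.6) is valid.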

Armed with (6.6) we can deduce (6.3). Given β ∈ (0, 1/4) take γ ∈ (β, 1/4) and pick n0 such that βn−2 < x0(γ) and (n + 1)β ≤ γn for n ≥ n0. Such a choice ensures that j(j + 1)βn−2 ≤ γ for integer 0 ≤ j ≤ n whenever n ≥ n0. Using (6.4) and then (6.6) we arrive at

Eexp(βn2Wncrit)=j=0naj(βn2)j=0nbj(βn2)j=0n(1+Kβn2(j+1)),nn0

for β ∈ (0, 1/4). It remains to note that

supnn0j=0n(1+Kβn2(j+1))exp(3Kβ)<,

thereby finishing the proof of (6.3).

We are now ready to prove Proposition 6.1 and Lemma 6.3.

Proof of Proposition 6.1.

By virtue of (6.1) we infer Wncrit → ∞ in probability and then Wncrit → ∞ a.s. by monotonicity. Therefore,

vx := inf{k: Wkcrit > x} ∈ [1, ∞) a.s. for x > 1.

For x > 1 we have

{Wςcrit>x}={ςvx}=Eh(vx),

where h(y):={ςy}. In this notation, we have to prove that

limxEh(vx)h(x1/2)=Eϑα. (6.7)

By a standard inversion technique à la Feller (see Theorem 7 in [13]) (6.1) entails

vxx1/2dϑ1/2,x. (6.8)

We claim that the latter implies further that

h(vx)h(x1/2)dϑα,x. (6.9)

The simplest way to see it is to pass in (6.8) to versions which converge a.s., that is,

limxx1/2vx*=(ϑ*)1/2a.s.

and then exploit the fact that

limxh(y(x)x1/2)/h(x1/2)=y−2α whenever limxy(x)=y∈(0,∞)

(see Theorem 1.5.2 in [2]). This gives

limxh((x1/2vx*)x1/2)h(x1/2)=(ϑ*)αa.s.

because ϑ* > 0 a.s.

With (6.9) at hand, relation (6.7) follows if we can show that the family (h(vx)/h(x1/2))xx0 is uniformly integrable for some x0 > 0. By Potter’s bound for regularly varying functions (Theorem 1.5.6 (iii) in [2]), given A > 1 and δ > 0 there exists n1 such that

h(vx)1{vx>n1}h(x1/2)Amax((x1/2vx)2αδ,(x1/2vx)2α+δ)a.s.

whenever xn12. Further, by monotonicity of h,

h(vx)1{vxn1}h(x1/2)h(1)h(x1/2)1{vxn1}a.s.

Thus, for uniform integrability of (h(vx)/h(x1/2))xx0 it suffices to check two things: first,

supx4xβ/2Evxβ< (6.10)

for some β > 2α and, second,

supxx0(h(1)h(x1/2))γ{vxn1}< (6.11)

for some γ > 1.

From the proof of Lemma 6.5 we know that Eexp(sWn1crit)< for some s > 0, whence

{vxn1}={Wn1crit>x}=O(esx),x

which proves (6.11).

Now we intend to show that (6.10) holds for all β > 0. We have for x ≥ 4

Evxβ=01{vxβ>y}dy=β1{vxz}zβ1dzβk2{vxk}(k1)β1=βk=2[x1/2]{Wkcrit>x}(k1)β1+βk[x1/2]+1{Wkcrit>x}(k1)β1βk=2[x1/2]E(Wkcrit)βxβ(k1)β+1+βk[x1/2]+11(k1)β+1constxβk=1[x1/2]kβ1+O(xβ/2)=O(xβ/2),

where the last and penultimate inequalities follow from Lemma 6.5 and Markov’s inequality, respectively. The proof of Proposition 6.1 is complete.

Proof of Lemma 6.3.

By Lemma 6.5, E[n2Wncrit]αC for all n and some C > 0. This entails

E[Wςcrit]α=n1E[n2Wncrit]αn2α{ς=n}CEς2α<.

The proof of Lemma 6.3 is complete.

For later use, we note that, for n,

EZcrit(1,n)=1,VarZcrit(1,n)=2n,EYcrit(1,n)=n,VarYcrit(1,n)=n(n+1)(2n+1)3. (6.12)

The first three of these equalities follow by an elementary calculation. The fourth one can be derived with the help of (6.5) and mathematical induction.
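The induction behind the fourth equality in (6.12) reduces, via (6.5) and the conditional variance formula (our own reduction sketch, with Zcrit(1,1) having mean 1 and variance 2), to the recursion Var Ycrit(1, j) = Var Ycrit(1, j − 1) + 2j². A quick check that this recursion reproduces the closed form n(n+1)(2n+1)/3:

```python
# Verify Var Y^crit(1, n) = n(n+1)(2n+1)/3 via the recursion
# v_j = v_{j-1} + 2*j**2 obtained from decomposition (6.5):
# conditioning on Z = Z^crit(1,1) (mean 1, variance 2) gives
# Var Y(1,j) = E[Z]*Var Y(1,j-1) + Var(Z)*(E Y(1,j-1) + 1)^2
#            = Var Y(1,j-1) + 2*j**2,   using E Y(1,j-1) = j-1.
v = 0
for j in range(1, 51):
    v += 2 * j * j
    assert v == j * (j + 1) * (2 * j + 1) // 3
print(v)  # Var Y^crit(1, 50) = 50*51*101/3 = 85850
```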

7. Proofs

7.1. Proof of Proposition 2.1

Recalling that v=Eξ/ETS1, it suffices to show that

ETS1={Eξ2+2EξEρξ1Eρ,ifEρ<1,Eρξ<,Eξ2<;,otherwise.

Using (3.3) yields

TSn/n → ETS1, n→∞ ⟺ (1/n)∑j=1Sn Zj = WSn/n → (1/2)(ETS1 − Eξ), n→∞.

Let us prove the latter convergence in probability. According to Lemma 4.1, we have Eτ1< whenever Elogρ[,0) and Elog+ξ<. Recalling from (4.2) that

1nk=1τn*W¯τkWSnn1nk=1τn*+1W¯τk

we conclude by the strong law of large numbers that

limnWSnn=1Eτ1EW¯τ1a.s.

Hence,

ETS1=Eξ+2(Eτ1)−1EW¯τ1.

Left with identifying EW¯τ1, we recall that, for k, Yk denotes the total progeny of immigrants arriving in the generations Sk−1, … , Sk − 1, that is,

Yk=j=Sk1+1SkY(j,).

Since Y1, Y2, … are identically distributed and, for k, Yk is independent of {τ1k}={ZS1>0,,ZSk1>0} we infer

EW¯τ1=Ek=1τ1Yk=k1EYk1{τ1k}=k1EYk{τ1k}=EY1Eτ1

(if EY1=∞, the formula just says that EW¯τ1=∞). To calculate EY1 we note that

EωY(j,)1{jξ1}=(ξ1j+k2ξki=1k1ρi)1{jξ1}a.s.,

whence

EωY1=ξ1(ξ11)2+ξ1ρ1k2ξki=1k1ρia.s.,

where the a.s. convergence of the last series is secured by our assumptions Elogρ[,0) and Eξ<. Taking the expectation yields

EY1={12Eξ(ξ1)+EξEρξ1Eρ,ifEρ<1,Eρξ<,Eξ2<;,otherwise.

The proof of Proposition 2.1 is complete.
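To illustrate Proposition 2.1, take a deterministic environment ξ ≡ m and λ fixed, so that ρ = (1 − λ)/λ is constant and the formula for ETS1 reduces to m²(1 + ρ)/(1 − ρ), giving the speed v = Eξ/ETS1 = (1 − ρ)/(m(1 + ρ)); for m = 1 this recovers Solomon's formula. The simulation below (our own illustration; the values m = 3, λ = 0.7 and all variable names are our choices) runs the walk that jumps right with probability λ from multiples of m and with probability 1/2 elsewhere, and compares Xn/n with v.

```python
import random

# deterministic sparse environment: xi = m, lambda fixed
m, lam = 3, 0.7
rho = (1 - lam) / lam                 # rho = 3/7
v = (1 - rho) / (m * (1 + rho))       # predicted speed, 2/15 ~ 0.1333

random.seed(1)
x, n = 0, 200_000
for _ in range(n):
    # jump right with prob lam at marked points (multiples of m), else 1/2
    p_right = lam if x % m == 0 else 0.5
    x += 1 if random.random() < p_right else -1
print(x / n, v)  # empirical speed vs predicted speed
```

Note that Python's modulo operator returns a value in {0, …, m − 1} for negative x as well, so the marked set here is all integer multiples of m, matching the doubly infinite sequence of marked points (Sk)k.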

7.2. Proof of Theorem 2.2 and Corollary 2.4

The assumptions of Theorem 2.2 ensure that Eξ< and that μ:=Eτ1 and s2:=Varτ1 are finite (for the latter use Lemma 4.1). It is also clear that the distribution of τ1 is nondegenerate, whence s2 > 0.

From Proposition 5.8 (parts (C1) and (C2)) we know that

{W¯τ1>x}~Cxα,x,

where C = C2(α) in the cases (A1) and (A2) and C=(Eτ1)(Eϑα)C+C2(α) in the case (A3). Therefore, the distribution of W¯τ1 belongs to the domain of attraction of an α-stable distribution. This means that

k=1nW¯τka(n)b(n)dSα,n (7.1)

for some a(t) and b(t), where S2=dN(0,1). To find a(t) and b(t) explicitly we use Theorem 3 on p. 580 and formula (8.15) on p. 315 in [14]:

b(t)=(Ct)1/αanda(t)=0ifα(0,1);
b(t)=Ctanda(t)=t0Ct{W¯τ1>x}dxifα=1;
b(t)=(Ct)1/αanda(t)=(EW¯τ1)tifα(1,2);
b(t)=(Ctlogt)1/2anda(t)=(EW¯τ1)tifα=2.

Our subsequent proof will be based on representation (3.3). In view of this we first analyze the asymptotics of WSn.

Step 1. Limit theorems for WSn. We claim that

WSna(μ1n)b(μ1n)dSα,n. (7.2)

In view of (4.2) relation (7.2) follows once we have checked that (7.1) entails

k=1τn*W¯τka(μ1n)b(μ1n)dSαandk=1τn*+1W¯τka(μ1n)b(μ1n)dSα,n. (7.3)

According to the central limit theorem for renewal processes

τn*μ1nsμ3/2ndN(0,1),n.
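The renewal central limit theorem invoked here is easy to confirm empirically. In the sketch below we take, purely for illustration, interarrival times distributed as Geom(1/2) on {1, 2, …} (so μ = 2 and s² = 2; this particular law is our assumption, not one from the paper) and check that (τn* − μ⁻¹n)/(sμ^(−3/2)√n) has approximately standard normal mean and variance.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, s2 = 2.0, 2.0      # mean and variance of the interarrival law
n, reps = 1000, 4000
# interarrival times: geometric on {1, 2, ...} with success probability 1/2;
# n draws per path make the running sum exceed time n with near certainty
gaps = rng.geometric(0.5, size=(reps, n))
times = gaps.cumsum(axis=1)
counts = (times <= n).sum(axis=1)        # tau*_n: number of renewals by time n
z = (counts - n / mu) / (np.sqrt(s2) * mu ** -1.5 * np.sqrt(n))
print(z.mean(), z.std())  # approximately 0 and 1
```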

This implies that, for ε > 0 small enough, we can pick z = z(ε) so large that

{τn*≥tn}≥1−ε,

where tn:=[μ1nsμ3/2zn]. Note that n=μtn+O(tn1/2) and that

limna(tn)a(tn+O(tn1/2))b(tn+O(tn1/2))=0andlimnb(tn+O(tn1/2))b(tn)=1. (7.4)

These are easily checked, with the exception of the case α = 1, in which a proof of the first relation is needed: for any r ∈ (1, 2],

a(tn+O(tn1/r))a(tn)b(tn)=tnCtnCtn+O(tn1/r){W¯τ1>x}dx+O(tn1/r)0Ctn+O(tn1/r){W¯τ1>x}dxCtnO(tn1/r)logtntn=o(1),n. (7.5)

Motivated by our later needs, we have proved this in a slightly more general form, with r in place of 2.

To prove the first relation in (7.3) we write, for x,

{k=1τn*W¯τka(μ1n)b(μ1n)x}ε+{k=1tnW¯τka(μ1n)b(μ1n)x}=ε+{k=1tnW¯τka(tn+O(tn1/2))b(tn+O(tn1/2))x}.

Sending n → ∞ in the last inequality and using (7.1) and (7.4) we obtain

limsupn{k=1τn*W¯τka(μ1n)b(μ1n)x}ε+{Sαx}.

Letting now ε → 0+ yields

limsupn{k=1τn*W¯τka(μ1n)b(μ1n)x}{Sαx}.

A symmetric argument leads to

liminfn{k=1τn*W¯τka(μ1n)b(μ1n)x}{Sαx}.

The second relation in (7.3) follows in a similar manner.

Step 2. Limit theorems for TSn.

Case α > 1. Since Eξ2< and n1/2=o(b(μ1n)) we infer

Sn(Eξ)nb(μ1n)0,n

by the central limit theorem. Now

TSn(Eξ+2μ1EW¯τ1)nb(μ1n)d2Sα,n (7.6)

follows from (7.2) and (3.3) written in an equivalent form

TSn=d(Sn(Eξ)n)+(Eξ)n+2WSn+O(1),n.

Case α = 1. Using the weak law of large numbers and (7.2) we arrive at

TSn2a(μ1n)Cμ1ndμEξC+2S1,n. (7.7)

Case α < 1. Since n = o(b(µ−1n)) we conclude that Snb(μ1n)0 as n → ∞ by the weak law of large numbers. This in combination with (7.2) and (3.3) proves

TSn(Cμ1n)1/αd2Sα,n. (7.8)

Step 3. Limit theorem for Tn. At this step we are going to deduce limit theorems for Tn from the corresponding results for TSn proved at the previous step. Set

ν(t)=inf{k:Sk>t},t0,

so that (ν(t))t0 is the first-passage time process associated with the random walk (Sk)k0. The introduction of ν(t) is justified by the sandwich inequality

TSν(n)1TnTSν(n),n. (7.9)

Case α ≥ 1. Fix any r ∈ (1, 2). Then Eξr< and thereupon

ν(t)(Eξ)1t=o(t1/r),ta.s. (7.10)

by Theorem 4.4 on p. 89 in [21].

Subcase α = 1. Using (7.9) and (7.10) we obtain, for any x and ε > 0,

{Tn2a((μEξ)1n)C(μEξ)1n>x}{TSν(n)2a((μEξ)1n)C(μEξ)1n>x}{ν(n)>(Eξ)1n+εn1/r}+{TS[(Eξ)1n+εn1/r]2a([(μEξ)1n+εn1/r])C(μEξ)1n+2a([(μEξ)1n+εn1/r])2a((μEξ)1n)C(μEξ)1n>x}.

Letting n → ∞ yields, for x,

limsupn{Tn2a((μEξ)1n)C(μEξ)1n>x}{μEξC+2S1>x}

having utilized (7.5), (7.7) and (7.10). Arguing similarly we get the converse inequality for the lower limit, thereby proving that

Tn2a((μEξ)1n)C(μEξ)1ndμEξC+2S1,n. (7.11)

Subcase α > 1. An analogous but simpler argument enables us to show that (7.6) entails

Tn(1+2(μEξ)1EW¯τ1)nb((μEξ)1n)d2Sα,n. (7.12)

Case α < 1. The proof given for the case α ≥ 1 does not work in the case (A1) when α ≤ 1/2 because it is then not necessarily true that Eξr< for some r > 1. In view of this we use the weak law of large numbers

ν(t)t1μ,t (7.13)

rather than the Marcinkiewicz-Zygmund strong law (7.10).

Another appeal to (7.9) gives, for any x and ε > 0,

{Tn(C(μEξ)1n)1/α>x}{TSν(n)(C(μEξ)1n)1/α>x}{ν(n)>((Eξ)1+ε)n}+{TS[((Eξ)1+ε)n](C(μEξ)1n)1/α>x}.

Sending n → ∞ we obtain with the help of (7.8) and (7.13)

limsupn{Tn(C(μEξ)1n)1/α>x}{2Sα>x(1+εEξ)1/α}.

Letting ε → 0+ and using continuity of the distribution of Sα yields

limsupn{Tn(C(μEξ)1n)1/α>x}{2Sα>x}.

The converse inequality for the lower limit can be derived analogously. Thus,

Tn(C(μEξ)1n)1/αd2Sα,n. (7.14)

The proof of Theorem 2.2 is complete.

Proof of Corollary 2.4.

The forms of limit relations for Tn in our Theorem 2.2 and Theorem on pp. 146–148 in [26] are the same, only the values of constants differ. In view of this the limit relations for Xk in our setting are obtained by copying the corresponding limit relations from the aforementioned theorem in [26].

7.3. Proof of Theorem 2.6 and Corollary 2.8

The proof goes along the same path as that of Theorem 2.2. However, the appearance of nontrivial slowly varying factors leads to minor technical complications. We shall only give the weak convergence results explicitly (recall that in the formulation of Theorem 2.6 the normalizing and centering functions were not specified). Also, we shall check several claims in detail wherever we feel it necessary.

According to Proposition 5.8 (parts (C3) and (C4)),

{W¯τ1>x}~Eτ1Eϑαxα(x1/2),x,

where α = β/2 in case (B2). Therefore, limit relation (7.1) holds with some a(t) and b(t). To identify them we need more notation. For α ∈ (1/2, 2), let cα(t) be any positive function satisfying limtt{W¯τ1>cα(t)}=1. Further, assuming that α = 2, let r2(t) be any positive function satisfying limt[0,r2(t)]x2d{W¯τ1x}/(r2(t))2=1. By Lemma 6.1.3 in [23], cα(t) and r2(t) are regularly varying at ∞ of indices 1/α and 1/2, respectively. For the latter, the fact is also needed that the function t[0,r2(t)]x2d{W¯τ1x} is slowly varying at ∞. Observe that the case α = 2 only arises under the assumptions (B1) which then ensure that Eξ2=. This in combination with the aforementioned lemma yields

limtt1/2/r2(t)=0. (7.15)

Using again Theorem 3 on p. 580 and formula (8.15) on p. 315 in [14] we obtain

b(t) = cα(t) and a(t) = 0 if α ∈ (1/2, 1);

b(t) = c1(t) and a(t)=t0c1(t){W¯τ1>x}dx if α = 1;

b(t) = cα(t) and a(t)=(EW¯τ1)t if α ∈ (1, 2);

b(t) = r2(t) and a(t)=(EW¯τ1)t if α = 2.

Case α ∈ (1/2, 1). Repeating verbatim the proof of Theorem 2.2 for the case α ∈ (0, 1) we obtain

Tn(μEξ)1/αcα(n)d2Sα,n. (7.16)

Case α = 1. We need an analogue of relation (7.5): for r ∈ (1, 2], as n → ∞,

a(tn+O(tn1/r))a(tn)b(tn)=tnc1(tn)c1(tn+O(tn1/r)){W¯τ1>x}dx+O(tn1/r)0c1(tn+O(tn1/r)){W¯τ1>x}dxc1(tn)tn{W¯τ1>c1(tn)}(c1(tn+O(tn1/r))c1(tn))c1(tn)+O(tn1/r)0c1(tn+O(tn1/r)){W¯τ1>x}dxc1(tn)=o(1).

The first summand tends to zero in view of two facts: limntn{W¯τ1>c1(tn)}=1 by the definition of c1(t) and limn(c1(tn+O(tn1/r))c1(tn))/c1(tn)=0 which is a consequence of regular variation of c1(t). The second summand tends to zero because 0c1(t){W¯τ1>x}dx is slowly varying at ∞ as a composition of a slowly varying and a regularly varying function.

For Step 2 in the proof of Theorem 2.2 we need the following modified argument. In view of (ξ2) the function {ξ>t} is regularly varying at ∞ of index −2 and Eξ2 can be finite or infinite. Therefore, Sn satisfies the central limit theorem with a normalizing sequence which is regularly varying at ∞ of index 1/2. Since c1(t) is regularly varying at ∞ of index 1 we infer

Sn(Eξ)nc1(n)0,n

and thereupon

TSn(Eξ)n2a(μ1n)μ1nc1(n)d2S1,n.

To pass from this limit relation to the final result

Tnn2a((μEξ)1n)(μEξ)1c1(n)d2S1,n, (7.17)

that is, to realize Step 3 in the proof of Theorem 2.2, one can mimic the proof of Theorem 2.2.

Case α ∈ (1, 2]. While implementing Step 2 of the previous result in the case α = 2 one uses the fact that, according to (7.15), b(t) = r2(t) satisfies n1/2=o(r2(μ1n)) as n → ∞. Since the other parts of the proof of Theorem 2.2 do not require essential changes we arrive at

Tn(1+2(μEξ)1EW¯τ1)n(μEξ)1/αcα(n)d2Sα,n, (7.18)

when α ∈ (1, 2), and

Tn(1+2(μEξ)1EW¯τ1)n(μEξ)1/2r2(n)d2N(0,1),n, (7.19)

when α = 2. The proof of Theorem 2.6 is complete.

Proof of Corollary 2.8.

Since (Tn)n0 is an ‘inverse’ sequence for (Xk)k0 we can use a standard inversion technique (see, for instance, the proof of Theorem 7 in [13]) to pass from the distributional convergence of Tn, properly centered and normalized, as n → ∞ to that of Xk, again properly centered and normalized, as k → ∞. Additional complications arising in the case α = 1 can be handled with the help of arguments given in Section 3 of [1].

Here are the limit relations for Xk, properly normalized and centered, as k → ∞ which correspond to (7.16), (7.17), (7.18) and (7.19):

if α ∈ (1/2, 1), then

{W¯τ1>k}XkdμEξ(2Sα)α; (7.20)

if α = 1, then

Xks(k)t(k)dS1,

where, with m(t):=0t{W¯τ1>x}dx for t > 0 and b:=(μEξ)1,

s(k):=k1+2bm(c1(bk/(1+2bm(bk)))),k

and

t(k):=c1(k/m(k))1+2bm(k),k

(we do not write 2bm(k) instead of 1 + 2bm(k) because the case limtm(t)=EW¯τ1< is not excluded); if α ∈ (1, 2), then

Xk(1+2(μEξ)1EW¯τ1)1kcα(k)d2(μEξ)1/2(1+2(μEξ)1EW¯τ1)(1+1/α)Sα; (7.22)

if α = 2, then

Xk(1+2(μEξ)1EW¯τ1)1kr2(k)d2(μEξ)1/2(1+2(μEξ)1EW¯τ1)3/2N(0,1). (7.23)

The proof of Corollary 2.8 is complete.

7.4. Proof of auxiliary Lemmas 5.3, 5.5, 5.6 and 5.7

7.4.1. Proof of Lemma 5.3

Proof of Lemma 5.3. To prove (5.3) we first represent ZSn−1 as a sum of independent random variables

ZSn1=j=1n1Vj(n)+V˜(n),na.s., (7.24)

where Vj(n) is the number of progeny residing in the generation Sn − 1 of the jth particle in the generation Sn−1 and V˜(n) is the number of progeny residing in the generation Sn − 1 of the immigrants arriving in the generations Sn−1, … , Sn − 2. For later use, we note that, under ω,

Vj(n)=dZcrit(1,ξn1)andV˜(n)=dZξn1crit,n, (7.25)

where ω is assumed independent of (Zkcrit)k0, a Galton–Watson process with unit immigration and Geom(1/2) offspring distribution.

With the help of (7.24) we now write a standard decomposition for the number of particles in the generation Sn over the particles comprising the generation Sn−1 and their offspring

n=j=1n1i=1Vj(n)Ui,j(n)+i=1V˜(n)U˜i(n)+U0(n)=:j=1n1Vj(n)+V˜(n)+U0(n),na.s. (7.26)

Here, the notation Ui,j(n), U˜i(n), U0(n) is self-explanatory, but for clarity we provide explicit definitions. The variable Ui,j(n) is the number of offspring of the ith particle in the generation Sn–1, i=1,…,Vj(n). The variable U˜i(n) is the number of particles in the generation Sn which are the progeny of the immigrants arriving in the generations Sn−1 through Sn−1. Finally, U0(n) is the number of offspring of the immigrant arriving in the generation Sn − 1. Observe that, under ω, (Ui,j(n))i,j, (U˜i(n))i and U0(n) are independent with distribution Geom(λn). In what follows, for simplicity we omit the superscripts (n): for instance, we write Vj for Vj(n) and similarly for the other variables. The following formulas play an important role in the subsequent proof:

Eω[U0|n1]=EωU0=ρn, Eω[U02|n1]=EωU02=2ρn2+ρn, Eω[Vj|n1]=EωVj·ρn=ρn, Eω[V˜|n1]=(ξn1)ρn. (7.27)

The two cases κ ∈ (0, 1] and κ ∈ (1, 2] should be treated separately.

Case κ ≤ 1. By Jensen’s inequality and subadditivity of the function s ↦ sκ on [0, ∞)

Eω[nκ|n1](Eω[n|n1])κ=[Eω[j=1n1Vj+V˜+U0|n1]]κ(n1ρn+(ξn1)ρn+ρn)κn1κρnκ+ξnκρnκ.

Taking the expectations we obtain

EnκγEn1κ+E(ρξ)κ

which entails (5.3).

Case κ ∈ (1, 2]. An application of conditional Jensen’s inequality yields

Eωnκ=Eω[Eω[nκ|n1]]Eω[(Eω[n2|n1])κ/2]. (7.28)

To estimate the conditional second moment we represent it as follows

Eω[n2|n1]=Eω[(j=1n1Vj+V˜+U0)2|n1].

Appealing now to (7.27) we conclude that

Eω[n2|n1]n12ρn2+n1EωV12+2n1ξnρn2+EωV˜2+2ρn2+ρn+2ξnρn2. (7.29)

Plugging the last inequality into (7.28) and using subadditivity once again we obtain

EnκγEn1κ+(En1κ/2E[(EωV12)κ/2]+2En1κ/2Eξκ/2ρk+E[(EωV˜2)κ/2]+2γ+Eρκ/2+2Eξκ/2ρk). (7.30)

Next, we check that

E[(EωV12)κ/2]<andE[(EωV˜2)κ/2]<. (7.31)

With the help of

EωVi=1andVarωVi=2(ξn1)

which is a consequence of (7.25) and (6.12) we infer

E[(EωV12)κ/2]=E[(Eω(j=1V1U1,j)2)κ/2]=E[(Eω[1jlV1U1,jU1,l+j=1V1U1,j2])κ/2]E(ρn2EωV12+(2ρn2+ρn)EωV1)κ/22κ/2Eξκ/2ρk+γ+Eρκ/2<.

A similar argument in combination with EωV˜=ξn1 leads to the conclusion

E[(EωV˜2)κ/2]=E(ρn2EωV˜2+(ρn2+ρn)EωV˜)κ/2E[(ρn2EωV˜2)κ/2]+Eξκ/2ρk+E(ρξ)κ/2.

Left with the proof of finiteness of the first term on the right-hand side we represent V˜ as a sum of independent random variables

V˜=V˜(n)=i=1ξn1V˜i(n),na.s.,

where, for 1iξn1,V˜i(n) is the number of progeny residing in the generation Sn –1 of the immigrant arriving in the generation Sn –i. Under ω, V˜i(n)=dZcirt(i,ξn1), where ω is assumed independent of (Zcirt(i,k))ki. With this at hand, an appeal to (6.12) yields

EωV˜i2=Eω[Zcirt(i,ξn1)]2=Eω[Zcirt(i,ξn1)]2=2(ξni)+12ξn

and EωV˜i=1. Here and hereafter, to ease the notation we write V˜i for V˜i(n). Finally,

E[(ρn2EωV˜2)κ/2]=Eρnκ(Eω(i=1ξn1V˜i)2)κ/2=Eρnκ(i=1ξn1EωV˜i2+1ij<ξnEωV˜iEωV˜j)κ/2(5/2)κ/2E(ρξ)κ<

which finishes the proof of (7.31).

Turning to the asymptotic behavior of En1κ/2 which appears on the right-hand side of (7.30) we consider yet another two cases.

Case γ ≤ 1 in which Eρκ/2<1. To see this, observe that when γ = 1 the inequality Eρκ/2<γ1/2 is strict because the assumption Elogρ[,0) implies that the distribution of ρ is nondegenerate at 1. By the already proved inequality (5.3) for powers ≤ 1

supnEnκ/2<

which in combination with (7.31) shows that the expression in the parentheses in (7.30) is bounded. This ensures (5.3).

Case γ > 1. By the already proved inequality (5.3) for powers ≤ 1

Enκ/2<Can,n,

where an = 1 or = n or =[Eρκ/2]n depending on whether Eρκ/2<1 or Eρκ/2=1 or Eρκ/2>1. Since in any event anγn/2 for n, (7.30) entails

EnκγEn1κ+C1γn/2,n

for some C1 > 0. Iterating this yields EnκC2γn for some C2 > 1 and all n, thereby finishing the proof of (5.3) in the case γ > 1 and in general.

To prove (5.4) we use the decomposition W1=Wξ11+1 a.s. Inequality (5.3) shows that we are left with checking that

EWξ11κ<.

Since, under ω, Wξ11=dWξ11crit, where ω is assumed independent of (Wncrit)n0, an application of Lemma 6.5 yields

E[Wξ11κ]=j0E[1{ξ1=j+1}(Wjcrit)κ]Cj0{ξ=j+1}j2κ=CE(ξ1)2κ<

for a positive constant C. The proof of Lemma 5.3 is complete.

7.4.2. Proof of Lemma 5.5

Proof of Lemma 5.5. We start by proving (5.5). Pick κ0 ∈ (0, κ), put p = κ/κ0 and choose q such that 1/p + 1/q = 1. Recall that Eρκ<1. Hence, according to Lemma 5.3,

EnκC,n (7.32)

for a positive constant C, whence

E(i=1ni)κCmax(nκ,n),n

by subadditivity (convexity) of x ↦ xκ when κ ∈ (0, 1] (κ ∈ (1, 2]). By Lemma 4.1, {τ1=n}C1eC2n for all n and some positive constants C1 and C2. With these at hand, an application of Hölder’s inequality yields

E[(i=1τ1i)κ0]=n1E[(i=1τ1i)κ01{τ1=n}]n1(E(i=1ni)κ)1/p{τ1=n}1/qC1/pC1n1max(nκ/p,n1/p)eC2n/q<.

The proof of (5.5) is complete.

Turning to the proof of (5.6) we shall only show that

E(Wn)κC,n2 (7.33)

for a positive constant C. Formula (5.6) then follows with the help of the same argument (involving Hölder’s inequality) that we used while proving (5.5).

For i ≥ 2 and 1ji1=ZSi1, denote by Uj(i) the number of progeny in the generations Si−1 + 1, … , Si − 1 of the jth particle in the generation Si−1, so that

Wi=j=1i1Uj(i),i2.

Under ω, Uj(i)=dYcrit(1,ξi1) for i ≥ 2, where we set Y crit(1, 0) = 0 and ω is assumed independent of (Ycrit(1,k))k. In particular, according to (6.12)

EωUj(i)=ξi1 and Eω[Uj(i)]23ξi3. (7.34)

We shall treat the cases κ ∈ (0, 1] and κ ∈ (1, 2] separately.

Case κ ∈ (0, 1]. Under ω, for 1ji1, Uj(i) is independent of i1. This in combination with (7.34) proves that

Eω[Wi|i1]=i1(ξi1),i2.

Therefore, we obtain

Eω(Wi)κE[[Eω(Wi|i1)]κ]EξκEi1κC,i2

having utilized Jensen’s inequality, (7.32) and the fact that ξi and i1 are independent.

Case κ ∈ (1, 2]. Another application of Jensen’s inequality in combination with (7.34), (7.32) and subadditivity of x ↦ xκ/2 on [0, ∞) yields, for i ≥ 2,

E(Wi)κ=E[Eω[(j=1i1Uj(i))κ|i1]]E(Eω[(j=1i1Uj(i))2|i1])κ/2=E[(1lji1EωUl(i)EωUj(i)+j=1i1Eω[Uj(i)]2)κ/2]Ei1κEξκ+3Ei1κ/2Eξ3κ/2C

for a positive constant C. The proof of (7.33) is complete. □

7.4.3. Proof of Lemma 5.6

We follow the method developed by Kesten et al. [26]. While some parts of the proofs given in [26] can be transferred directly to our setting, others require additional work. We do not present all the details of the proof, focusing instead on the main differences.

We begin with a brief overview of the arguments leading to the claim of Lemma 5.6. Given a large positive constant A, put

σ=σ(A):=min{i:Zj>A for some jSi}.

Thus, we observe the process (Zn)n0 up to the first time j when it exceeds the level A and then put σ = i for the smallest index i satisfying Sij. The following decomposition holds

k=1τ1(k+Wk)=k=1τ1(k+Wk)1{στ1}+(k=1σ1(k+Wk)+Sσ+i=σ+1τ1Yk)1{στ1}a.s.,

where Sσ is the number of particles in the generation Sσ plus their total progeny, and, for i, Yi is the total progeny in the generations Si + 1, Si + 2, … of the immigrants arriving in the generations Si−1, … , Si − 1.

We intend to prove that the first, second and fourth summands on the right-hand side of this decomposition are negligible in a sense to be made precise, so that

k=1τ1(k+Wk)Sσ1{σ<τ1}.

In view of the definition of Sσ and the fact that σ=SσA for A as above one can expect that Sσ1{σ<τ1}σEω[Y(Sσ,)]1{σ<τ1}. We shall demonstrate that the variable Eω[Y(Sσ,)] is related to a random difference equation whose tail behavior determines that of Sσ.

To realize the programme just outlined we need two auxiliary results.

Lemma 7.1.

Assume that the assumptions of Lemma 5.6 hold. Then, for any A > 0, as x → ∞,

{k=1τ1(k+Wk)>x,στ1}+{k=1σ1(k+Wk)>x,στ1}=o(xα). (7.35)

Proof. We only give a proof for the first summand in (7.35). The second summand can be treated along similar lines.

The random variable τ1 has a finite exponential moment by Lemma 4.1. Furthermore, τ1 does not depend on the future of the sequence (ξi)i. Therefore, the assumption Eξ3α/2< ensures that

E[Sτ1]3α/2< (7.36)

by Lemma A.1.

Write, for x > 0,

{k=1τ1(k+Wk)>x,στ1}{k=1τ11(k+Wk)>x/2,στ1}+{Wτ1>x/2,σ=τ1}{ASτ1>x/2}+{τ1A,Wτ1>x/2}

and observe that, in view of (7.36), the first summand on the right-hand side is o(x3α/2) as x → ∞. To estimate the second term we use a decomposition

Wτ1=i=1τ11Via.s.,

where, for 1iτ11, Vi is the number of progeny in the generations Sτ1−1 + 1, … , Sτ1 − 1 of the ith particle in the generation Sτ1−1. We claim that

EV1α<. (7.37)

For the proof, note that V1=dYcrit(1,ξτ11), where ξτ1 is assumed independent of (Ycrit(1,n))n. Consequently, we obtain with the help of Jensen’s inequality and the inequality E[Ycrit(1,n)]23n3 for n which is a consequence of (6.12)

EV1α=E[Ycrit(1,ξτ11)]α=k0E[Ycrit(1,k)]α{ξτ11=k}k0(E[Ycrit(1,k)]2)α/2{ξτ11=k}3k0k3α/2{ξτ11=k}=3E[ξτ11]3α/23E[Sτ1]3α/2<,

where the last inequality is secured by (7.36).

With (7.37) at hand, we immediately conclude that

{τ11A,Wτ1>x/2}{i=1[A]Vi>x/2}=o(xα),x

because V1, V2, … are identically distributed. The proof of Lemma 7.1 is complete.

Before formulating another auxiliary result we recall from Section 3.2 the notation Y1=i1Z(1,i), where Z(1, i) is the number of progeny residing in the ith generation of the first immigrant, so that Y1 is the total progeny of the first immigrant.

Lemma 7.2.

Suppose that the assumptions of Lemma 5.6 hold. Let (Yj*)j be a sequence of ω-independent copies of Y1. Then there exists a constant C > 0 such that

{j=1NYj*>x}CNαxα,N.

Proof. For k, put

R˜k=ξk+ρkξk+1+ρkρk+1ξk+2+⋯. (7.38)

Recall from Section 3.3 that the random variable so defined is called a perpetuity. The Kesten–Grincevičius–Goldie theorem says that if (P1) holds and Eξα<, then, for all k,

{R˜k>x}~Cxα,x (7.39)

for some positive constant C which does not depend on k.

Put Z(1, 0) := 1. For i0, denote by Z1(1, i), Z2(1, i), … ω-independent copies of Z(1, i). Recall that Sk = Sk−1 + ξk and write

Yj*=i1Zj(1,i)=k1i=Sk1Sk1Zj(1,i)=k1(i=Sk1Sk1(Zj(1,i)Zj(1,Sk1)+ξkZj(1,Sk1)).

Our proof will be based on the following decomposition which holds a.s.

j=1NYj*=j=1Nk1ξkZj(1,Sk1)+j=1Nk1Sk1Sk1(Zj(1,i)Zj(1,Sk1))=:U1+U2.

Formula (7.38) implies that, for k, ξk=R˜kρkR˜k+1, whence

U1=j=1Nk1ξkZj(1,Sk1)=j=1Nk1Zj(1,Sk1)(R˜kρkR˜k+1)=k1(j=1N(Zj(1,Sk)ρkZj(1,Sk1)))R˜k+1+NR˜1.

Since

k121k2=π2/12<1, (7.40)

and R˜k+1 and (Zj(1, Sk), Zj(1, Sk−1), ρk) are independent for each j we obtain with the help of (7.39), for x > 0,

{U1>x}k1{|j=1N(Zj(1,Sk)ρkZj(1,Sk1))|R˜k+1>x/(4k2)}+{NR˜1>x/2}k1[0,){|j=1N(Zj(1,Sk)ρkZj(1,Sk1))|ds}{R˜k+1>x/(4sk2)}+{NR˜1>x/2}constxα(k1k2αE|j=1N(Zj(1,Sk)ρkZj(1,Sk1))|α+Nα).

Here and hereafter, const denotes a constant whose value may change from one appearance to another. To estimate the last term observe that the equality

EωZ(1,Si)=ρ1ρi,i

implies that, under ω, j=1N(Zj(1,Sk)ρkZj(1,Sk1)) is the sum of iid centered random variables. In particular, conditioning on the environment,

Eω(j=1N(Zj(1,Sk)ρkZj(1,Sk1)))2=NEω(Z1(1,Sk)ρkZ1(1,Sk1))2.

With this at hand an application of conditional Jensen’s inequality yields, for k,

E|j=1N(Zj(1,Sk)ρkZj(1,Sk1))|αE[Eω(j=1N(Zj(1,Sk)ρkZj(1,Sk1)))2]α/2=Nα/2E(Eω(Z(1,Sk)ρkZ(1,Sk1))2)α/2.
To estimate the last expectation we use the branching decomposition

Z(1,Sk)=i=1Z(1,Sk1)Vi(k),ka.s.,

and, under ω, V1(k),V2(k), are independent copies of Z(Sk−1, Sk) which are also independent of Z(1, Sk−1). Hence,

Eω[(Z(1,Sk)ρkZ(1,Sk1))2|Z(1,Sk1)]=Z(1,Sk1)Varω(V1(k)),k. (7.41)

Observe that, under ω,

V1(k)=dm=1Zcrit(Sk1,Sk1)Um(k),k,

where U1(k),U2(k), are ω-independent random variables with Geom(λk) distribution, and ω is assumed independent of (Zcrit(i,j))ji1. This in combination with Zcrit(i,j)=dZcrit(1,ji+1) for fixed ji ≥ 1 and (6.12) gives, for k,

Varω(V1(k))=EωZcrit(Sk1,Sk1)Varω(U1(k))+(EωU1(k))2VarωZcrit(Sk1,Sk1)=(ρk+ρk2)+2ρk2(ξk1).

Equality (7.41) together with the last formula and subadditivity of x ↦ xα/2 on [0, ∞) enables us to conclude that

{U1>x}constxα(k1k2αNα/2E[(EωZ(1,Sk1))α/2(ρkα/2+ρkα+ρkα2α(ξk1)α/2)]+Nα)constxα(k1k2αNα/2(Eρα/2)k1+Nα)=constNαxα.

To obtain the last inequality we have utilized E(ραξα/2)< which is secured by the assumption E(ρξ)α< and the inequality Eρα/2<1 which is a consequence of (P1).

To estimate U2 we proceed similarly but use additionally Markov’s inequality

{U2>x}={j=1Nk1(i=Sk1Sk1(Zj(1,i)Zj(1,Sk1)))>x}=k1{|j=1N(i=Sk1Sk1(Zj(1,i)Zj(1,Sk1)))|>x/(2k2)}constxαk1k2αE|j=1N(i=Sk1Sk1(Zj(1,i)Zj(1,Sk1)))|αconstxαk1k2αE(Eω(j=1Ni=Sk1Sk1(Zj(1,i)Zj(1,Sk1)))2)α/2,x>0.

For k and 1 ≤ iZ(1, Sk−1), take the ith particle among the progeny in the generation Sk−1 of the first immigrant and denote by Vi(k) the number of progeny residing in the generation Sk of the chosen particle. Then

i=Sk1Sk1(Z(1,i)Z(1,Sk1))=r=1Z(1,Sk1)(Wr(k)(ξk1)),ka.s.

Furthermore, under ω, W1(k),W2(k), are independent random variables which are independent of Z(1, Sk−1) and have the same distribution as Ycrit(1,ξk1). Here, as usual, ω is assumed independent of (Ycrit(1,n))n. Invoking (6.12) we infer Varω(Wr(k))2ξk3 and further

{U2>x}constxαk1k2αNα/2E[(EωZ(1,Sk1)Varω(W1(k)))α/2]constxαk1k2αNα/2(Eρα/2)kEξ3α/2constNα/2xα,x>0.

The proof of Lemma 7.2 is complete.

Proof of Lemma 5.6. Lemma 7.1 implies that the contribution of particles residing in the generations 1, 2, … , Sσ − 1 is negligible in the sense that

{k=1τ1(k+Wk)>x}={Sσ+i=σ+1τ1Yi>x,σ<τ1}+o(xα),x. (7.42)

Next we prove that

limAlimsupxxα{i=σ(A)+1τ1Yi>x,σ(A)<τ1}=0. (7.43)

This means that the contribution of the total progeny of immigrants arriving in the generations Sσ(A),Sσ(A)+1, is negligible whenever A is sufficiently large.

The random variables Y1,Y2, are identically distributed and, for each i, the random variables 1{σ<iτ1}=1{σ<i}(11{τ1<i}) and Yi are independent. Therefore,

{i=σ(A)+1τ1Yi>x,σ(A)<τ1}i1{1{σ(A)<iτ1}Yi>x/(2i2)}=i1{σ(A)<iτ1}{Y1>x/(2i2)} (7.44)

having utilized (7.40). Further, observe that Y1 is the sum of 1 ω-independent copies of Y1 = Y(1, ∞) which are also ω-independent of 1. Hence, using Lemma 7.2 yields

{Y1>x}CE1αxα,x>0

for some positive constant C. The assumptions Eξ3α/2< and E(ρξ)α< guarantee E1α< by Lemma 5.3. Continuing (7.44) we obtain

{i=σ(A)+1τ1Yi>x,σ(A)<τ1}CE1αxαi1i2α{σ(A)<iτ1}C1ExαEτ12α+11{σ(A)<τ1}

for a positive constant C1, and (7.43) follows on letting A → ∞ and recalling that Eτ12α+1< by Lemma 4.1.

Summarizing, it remains to show that {Sσ(A)>x,σ(A)<τ1}~C2(α)xα, x → ∞, where C2(α) does not depend on A. This can be accomplished by comparing Sσ(A) on the event {σ(A)<τ1} with σ(A)R˜σ(A)+1 along the lines of Lemmas 4 and 6 in [26]. We omit the details.

7.4.4. Proof of Lemma 5.7

Proof of Lemma 5.7. Recall that

W¯τ1=WSτ1=k=1τ1Wk0+k=1τ1(k+Wk)a.s.

According to Lemma 5.6,

ℙ{∑_{k=1}^{τ_1} (ℐ_k + W_k) > x} ~ C_2(α)·x^{−α},  x → ∞.

By the same reasoning as in the proof of Proposition 5.8 (part (C1)), Lemma 5.2 in combination with Lemma 4.1 and Lemma 5.1 entails

ℙ{∑_{k=1}^{τ_1} W_k^0 > x} ~ (Eτ_1)(Eϑ^α)·C·x^{−α},  x → ∞.

Thus to prove the lemma it suffices to check that

ℙ{∑_{k=1}^{τ_1} W_k^0 > x, ∑_{k=1}^{τ_1} (ℐ_k + W_k) > x} = o(x^{−α}),  x → ∞, (7.45)

see, for example, Lemma B.6.1 in [4].

For the proof of (7.45) we need a number of auxiliary limit relations. First, according to Lemma 4.1 there exists a constant C1 > 0 such that

ℙ{τ_1 > C_1 log x} = o(x^{−α}),  x → ∞. (7.46)

Further, we claim that, for any δ ∈ (0, 1) and large enough x, the following inequalities hold uniformly in k ∈ ℕ:

ℙ{W_k^0 > x/(C_1 log x), ξ_k^2 ≤ x^{1−δ}} ≤ const·x^{−(α+ε_1)}; (7.47)
ℙ{ξ_k^2 > x^{1−δ}, ∑_{j=1}^{(k−1)∧τ_1} (ℐ_j + W_j) > x/2} ≤ const·x^{−(α+ε_1)}; (7.48)
ℙ{ξ_k^2 > x^{1−δ}, 𝒵_{k−1} > x^{2δ}} ≤ const·x^{−(α+ε_1)}, (7.49)

where u ∧ v := min(u, v) and ε_1 := (α(1 − δ)) ∧ (αδ/2) > 0.

Proof of (7.47). Fix any s > 0 that satisfies δs > α + ε_1. Recall that, under ℙ_ω, W_k^0 =_d W^{crit}_{ξ_k−1}, where ω is assumed independent of (W_n^{crit})_{n≥0}. This in combination with Markov's inequality yields

ℙ{W_k^0 > x/(C_1 log x), ξ_k^2 ≤ x^{1−δ}} = ℙ{W^{crit}_{ξ_k−1} > x/(C_1 log x), ξ_k^2 ≤ x^{1−δ}} ≤ ℙ{W^{crit}_{[x^{(1−δ)/2}]} > x/(C_1 log x)} ≤ E(W^{crit}_{[x^{(1−δ)/2}]})^s·[x^{(1−δ)/2}]^{−2s}·(C_1 log x)^s·x^{−δs} ≤ const·x^{−(α+ε_1)},

having utilized boundedness of E(n^{−2}W_n^{crit})^s in n ∈ ℕ, see Lemma 6.5.

Proof of (7.48). For fixed k, ξ_k is independent of ∑_{j=1}^{(k−1)∧τ_1} (ℐ_j + W_j). Using this, Lemma 5.6 and the assumptions of Lemma 5.7 we conclude that

ℙ{ξ_k^2 > x^{1−δ}, ∑_{j=1}^{(k−1)∧τ_1} (ℐ_j + W_j) > x/2} ≤ ℙ{ξ^2 > x^{1−δ}} ℙ{∑_{j=1}^{τ_1} (ℐ_j + W_j) > x/2} ~ 2^α C C_2(α)·x^{−α(1−δ)}·x^{−α} ≤ const·x^{−(α+ε_1)}.

Proof of (7.49). Observing that, for every fixed k, ξ_k is independent of 𝒵_{k−1} and invoking Lemma 5.3 with κ = 3α/4, we obtain with the help of Markov's inequality

ℙ{ξ_k^2 > x^{1−δ}, 𝒵_{k−1} > x^{2δ}} = ℙ{ξ_k^2 > x^{1−δ}} ℙ{𝒵_{k−1} > x^{2δ}} ≤ const·x^{−α(1−δ)}·x^{−(3/2)αδ} ≤ const·x^{−(α+ε_1)}.

Combining (7.46), (7.47), (7.48) and (7.49) yields, for any δ ∈ (0, 1),

ℙ{∑_{k=1}^{τ_1} W_k^0 > x, ∑_{j=1}^{τ_1} (ℐ_j + W_j) > x}

≤ ℙ{∑_{k=1}^{τ_1} W_k^0 > x, ∑_{j=1}^{τ_1} (ℐ_j + W_j) > x, τ_1 ≤ C_1 log x} + o(x^{−α})   (by (7.46))

≤ ∑_{k≤C_1 log x} ℙ{W_k^0 > x/(C_1 log x), ∑_{j=1}^{τ_1} (ℐ_j + W_j) > x, τ_1 ≤ C_1 log x} + o(x^{−α})

≤ ∑_{k≤C_1 log x} ℙ{ξ_k^2 > x^{1−δ}, ∑_{j=1}^{τ_1} (ℐ_j + W_j) > x, τ_1 ≤ C_1 log x} + o(x^{−α})   (by (7.47))

≤ ∑_{k≤C_1 log x} ℙ{ξ_k^2 > x^{1−δ}, ∑_{j=k}^{τ_1} (ℐ_j + W_j) > x/2, k ≤ τ_1, τ_1 ≤ C_1 log x} + o(x^{−α})   (by (7.48))

≤ ∑_{k≤C_1 log x} ℙ{ξ_k^2 > x^{1−δ}, ∑_{j=k}^{τ_1} (ℐ_j + W_j) > x/2, k ≤ τ_1, 𝒵_{k−1} ≤ x^{2δ}} + o(x^{−α}).   (by (7.49))

Now (7.45) follows if we can show that, for some δ ∈ (0, 1), the inequality

ℙ{ξ_k^2 > x^{1−δ}, ℐ_k + W_k > x/4, 𝒵_{k−1} ≤ x^{2δ}} ≤ const·x^{−(α+ε_2)} (7.50)

holds uniformly in k ∈ ℕ for large enough x and some ε_2 > 0 to be specified below, and that

∑_{k≤C_1 log x} ℙ{ξ_k^2 > x^{1−δ}, ∑_{j=k+1}^{τ_1} (ℐ_j + W_j) > x/4} = o(x^{−α}),  x → ∞. (7.51)

Proof of (7.50). Observe that

ℐ_k + W_k = ∑_{i=1}^{𝒵_{k−1}} V_i^{(k)}  a.s.,

where, for k ∈ ℕ and 1 ≤ i ≤ 𝒵_{k−1}, V_i^{(k)} denotes the number of progeny residing in the generations S_{k−1} + 1 through S_k of the ith particle in the generation S_{k−1}. Clearly, for fixed k, V_1^{(k)}, …, V_{𝒵_{k−1}}^{(k)} are independent of 𝒵_{k−1} and have the same distribution as

Y^{crit}(1, ξ_k − 1) + ∑_{j=1}^{Z^{crit}(1, ξ_k−1)} U_j^{(k)},

where (Y^{crit}(1, n))_{n∈ℕ} and (Z^{crit}(1, n))_{n∈ℕ} are assumed independent of (ξ_k, ρ_k); the random variables U_1^{(k)}, U_2^{(k)}, … have the Geom(λ_k) distribution and, given (ξ_k, ρ_k), they are independent of Z^{crit}(1, ξ_k − 1). In particular, E(V_1^{(k)} | (ξ_k, ρ_k)) = ξ_k − 1 + ρ_k in view of (6.12). With this at hand we obtain

ℙ{ξ_k^2 > x^{1−δ}, ℐ_k + W_k > x/4, 𝒵_{k−1} ≤ x^{2δ}}
= E[1_{𝒵_{k−1}≤x^{2δ}} ℙ{ξ_k^2 > x^{1−δ}, ∑_{i=1}^{𝒵_{k−1}} V_i^{(k)} > x/4 | 𝒵_{k−1}}]
≤ E[𝒵_{k−1} 1_{𝒵_{k−1}≤x^{2δ}} ℙ{ξ_k^2 > x^{1−δ}, V_1^{(k)} > x/(4𝒵_{k−1}) | 𝒵_{k−1}}]
≤ x^{2δ} ℙ{ξ_k^2 > x^{1−δ}, V_1^{(k)} > x^{1−2δ}/4}
≤ const·x^{2δ} E[1_{ξ_k^2>x^{1−δ}} E[(V_1^{(k)})^r x^{−r(1−2δ)} | (ξ_k, ρ_k)]]
≤ const·x^{2δ−r(1−2δ)} E[1_{ξ_k^2>x^{1−δ}} (E[V_1^{(k)} | (ξ_k, ρ_k)])^r]
≤ const·x^{2δ−r(1−2δ)} E[1_{ξ_k^2>x^{1−δ}} (ξ_k + ρ_k)^r]

for k ∈ ℕ, large enough x and any r ∈ (0, 1], having utilized the conditional Jensen inequality for the penultimate step. By assumption, Eρ^γ < ∞ and Eξ^γ < ∞ for some γ ∈ (α, 2α). Taking r ∈ (0, γ) and applying Hölder's inequality with parameters γ/(γ − r) and γ/r we arrive at

ℙ{ξ_k^2 > x^{1−δ}, ℐ_k + W_k > x/4, 𝒵_{k−1} ≤ x^{2δ}} ≤ const·(Eξ_k^γ + Eρ_k^γ)^{r/γ}·x^{2δ−r(1−2δ)−(1−δ)α(1−r/γ)}.

Pick any ρ ∈ (0, (1 − α/γ)/(2 + α)) and then any r ∈ (0, γ ∧ ((1 − α/γ − ρ(2 + α))/(ρ(2 − α/γ)))). Setting now δ = ρr (so that δ ∈ (0, 1)) we obtain (7.50) with ε_2 := −α − 2δ + r(1 − 2δ) + (1 − δ)α(1 − r/γ) > 0. Throughout the rest of the proof, δ always denotes the number chosen above.
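The identity E(V_1^{(k)} | (ξ_k, ρ_k)) = ξ_k − 1 + ρ_k used above can be checked by a small simulation, assuming critical Geometric(1/2) reproduction inside a block followed by one Geom(λ_k) reproduction step; the helper names below are illustrative, not from the paper:

```python
import random

def geom(p, rng):
    # number of failures before the first success: P{k} = (1-p)^k * p
    k = 0
    while rng.random() > p:
        k += 1
    return k

def sample_V(xi, lam, rng):
    # critical Geometric(1/2) branching for xi - 1 generations, then one
    # final Geom(lam) reproduction step for the surviving particles
    z, total = 1, 0
    for _ in range(xi - 1):
        z = sum(geom(0.5, rng) for _ in range(z))
        total += z
    total += sum(geom(lam, rng) for _ in range(z))
    return total

rng = random.Random(2019)
xi, lam = 4, 0.5             # rho = (1 - lam)/lam = 1, so E V = xi - 1 + rho = 4
n = 100_000
est = sum(sample_V(xi, lam, rng) for _ in range(n)) / n
assert abs(est - (xi - 1 + 1.0)) < 0.15
print("ok")
```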

Proof of (7.51). For k and 1ik, denote by Yi(k) the total progeny of the ith particle in the generation Sk. Further, for k and jk + 2, denote by Wj(k) the number of progeny in the generations Sj−1, Sj−1 + 1, … , Sj − 1 of the immigrants arriving in the generations Sk, Sk + 1, … , Sj−1 − 1. Then

∑_{j=k+1}^{τ_1} (ℐ_j + W_j) = ∑_{i=1}^{𝒵_k} Y_i^{(k)} + ∑_{j=k+2}^{τ_1} W_j^{(k)}  a.s.

and thereupon, for x > 0,

ℙ{ξ_k^2 > x^{1−δ}, ∑_{j=k+1}^{τ_1} (ℐ_j + W_j) > x/4} ≤ ℙ{ξ_k^2 > x^{1−δ}, ∑_{i=1}^{𝒵_k} Y_i^{(k)} > x/8} + ℙ{ξ_k^2 > x^{1−δ}, ∑_{j=k+2}^{τ_1} W_j^{(k)} > x/8} =: I_1(x) + I_2(x).

Since, for fixed k, ∑_{i=k+2}^{τ_1} W_i^{(k)} is independent of ξ_k, we obtain with the help of the crude estimate

∑_{i=k+2}^{τ_1} W_i^{(k)} ≤ ∑_{i=1}^{τ_1} (ℐ_i + W_i),  k ∈ ℕ, a.s.,

and Lemma 5.6

I_2(x) ≤ ℙ{ξ_k^2 > x^{1−δ}} ℙ{∑_{i=1}^{τ_1} (ℐ_i + W_i) > x/8} ≤ const·x^{−α(1−δ)}·x^{−α}

for large enough x. Of course, this entails ∑_{k≤C_1 log x} I_2(x) = o(x^{−α}) as x → ∞.

To estimate I_1(x) we note that, for fixed k, under ℙ{· | ω, 𝒵_k}, the random variables Y_1^{(k)}, …, Y_{𝒵_k}^{(k)} are independent copies of Y(1, ∞). Furthermore, these random variables are ℙ-independent of 𝒵_k and ξ_k. Invoking Lemma 7.2 and the conditional Jensen inequality yields

ℙ{ξ_k^2 > x^{1−δ}, ∑_{i=1}^{𝒵_k} Y_i^{(k)} > x/8} = E[1_{ξ_k^2>x^{1−δ}} ℙ{∑_{i=1}^{𝒵_k} Y_i^{(k)} > x/8 | ξ_k, 𝒵_k}] ≤ const·x^{−α} E[1_{ξ_k^2>x^{1−δ}} 𝒵_k^α] = const·x^{−α} E[1_{ξ_k^2>x^{1−δ}} E_ω[𝒵_k^α | 𝒵_{k−1}]] ≤ const·x^{−α} E[1_{ξ_k^2>x^{1−δ}} (E_ω[𝒵_k^2 | 𝒵_{k−1}])^{α/2}].

Inequality (7.29) was obtained in the proof of Lemma 5.3 under the assumption κ ∈ (1, 2]. However, by the same reasoning it also holds for κ ∈ (0, 2]. Using (7.29) in combination with the fact that ξ ≥ 1 a.s. and subadditivity of x ↦ x^{α/2} we infer

(E_ω[𝒵_k^2 | 𝒵_{k−1}])^{α/2} ≤ const·(𝒵_{k−1}^α (ρ_kξ_k)^α + 𝒵_{k−1}^{α/2}((ρ_kξ_k)^α + (ρ_kξ_k)^{α/2}) + (ρ_kξ_k)^α + (ρ_kξ_k)^{α/2})

and thereupon

E[1_{ξ_k^2>x^{1−δ}} (E_ω[𝒵_k^2 | 𝒵_{k−1}])^{α/2}] ≤ const·(k·E[(ρξ)^α 1_{ξ^2>x^{1−δ}}] + E[(ρξ)^{α/2} 1_{ξ^2>x^{1−δ}}]) ≤ const·x^{−ε(1−δ)/2}(k·Eρ^αξ^{α+ε} + Eρ^{α/2}ξ^{α/2+ε}) ≤ const·k·x^{−ε(1−δ)/2}

by Lemma 5.3 and the assumption Eρ^αξ^{α+ε} < ∞ for some ε > 0. The latter entails

∑_{k≤C_1 log x} I_1(x) = o(x^{−α}),  x → ∞.

The proof of Lemma 5.7 is complete.

Acknowledgment

We thank the two anonymous referees for a number of useful suggestions and Vitali Wachtel for bringing the article [28] to our attention. D. Buraczewski and P. Dyszewski were partially supported by the National Science Center, Poland (Sonata Bis, grant number DEC-2014/14/E/ST1/00588). A. Marynych was partially supported by the Return Fellowship of the Alexander von Humboldt Foundation. A part of this work was done while A. Iksanov and A. Marynych were visiting Wroclaw in February 2018. They gratefully acknowledge hospitality and the financial support.

A. Appendix

Lemma A.1 is an important ingredient in the proof of Proposition 5.8, part (C1). In its formulation we use the notion of a random variable which does not depend on the future of a sequence of random variables. The corresponding definition can be found at the beginning of Section 5.

Lemma A.1.

Let (θ_i)_{i∈ℕ} be a sequence of iid nonnegative random variables and T a nonnegative integer-valued random variable which does not depend on the future of the sequence (θ_i)_{i∈ℕ}. Assume that Eθ_1^s < ∞ for some s > 0 and that Ee^{λT} < ∞ for some λ > 0. Then E(∑_{i=1}^{T} θ_i)^s < ∞.

Proof. Set R_0 := 0 and R_i := θ_1 + ⋯ + θ_i for i ∈ ℕ. By assumption, for fixed i ∈ ℕ, θ_i is independent of (R_{i−1}, 1_{T≥i}).

The result is trivial when s ∈ (0, 1]. Indeed, we use subadditivity of x ↦ x^s on [0, ∞) together with the aforementioned independence to conclude that

E(∑_{i=1}^{T} θ_i)^s ≤ ∑_{i≥1} E[θ_i^s 1_{T≥i}] = Eθ_1^s·ET < ∞.

Assume now that s > 1. Invoking the inequality

(x + y)^s ≤ x^s + sy(x + y)^{s−1},  x, y ≥ 0,

which is secured by the mean value theorem for differentiable functions we obtain

R_{T∧i}^s ≤ R_{T∧(i−1)}^s + sθ_i R_i^{s−1} 1_{T≥i},  i ∈ ℕ.

Iterating this yields

R_{T∧n}^s ≤ s ∑_{i=1}^{n} θ_i R_i^{s−1} 1_{T≥i},  n ∈ ℕ.

Therefore, it is enough to check that

A := E ∑_{i≥1} θ_i R_i^{s−1} 1_{T≥i} < ∞.

Using once again the aforementioned independence together with the inequality

(x + y)^{s−1} ≤ C_s(x^{s−1} + y^{s−1}),  x, y ≥ 0,

where C_s := max(2^{s−2}, 1), we infer

A ≤ C_s E ∑_{i≥1} θ_i (R_{i−1}^{s−1} + θ_i^{s−1}) 1_{T≥i} = C_s Eθ_1 ∑_{i≥1} E[R_{i−1}^{s−1} 1_{T≥i}] + C_s Eθ_1^s·ET.

Left with checking convergence of the series, we appeal to Hölder's inequality in conjunction with convexity of x ↦ x^s on [0, ∞) to get

E[R_{i−1}^{s−1} 1_{T≥i}] ≤ [ER_{i−1}^s]^{(s−1)/s} [ℙ{T ≥ i}]^{1/s} ≤ i^{s−1} [Eθ_1^s]^{(s−1)/s} [ℙ{T ≥ i}]^{1/s}.

Since [ℙ{T ≥ i}]^{1/s} decreases at least exponentially in i, E[R_{i−1}^{s−1} 1_{T≥i}] is the general term of a convergent series. The proof of Lemma A.1 is complete.
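The two elementary inequalities invoked in the proof, the mean-value-theorem bound (x + y)^s ≤ x^s + sy(x + y)^{s−1} for s > 1 and (x + y)^{s−1} ≤ C_s(x^{s−1} + y^{s−1}) with C_s = max(2^{s−2}, 1), are easy to sanity-check numerically (a check, of course, not a proof):

```python
EPS = 1e-9  # tolerance for floating-point rounding

grid = [i / 4 for i in range(21)]  # x, y in [0, 5]
for s in (1.2, 2.0, 3.5):
    Cs = max(2 ** (s - 2), 1.0)
    for x in grid:
        for y in grid:
            # mean value theorem bound, valid for s > 1
            assert (x + y) ** s <= x ** s + s * y * (x + y) ** (s - 1) + EPS
            # convexity/subadditivity bound with C_s = max(2^(s-2), 1)
            assert (x + y) ** (s - 1) <= Cs * (x ** (s - 1) + y ** (s - 1)) + EPS
print("ok")
```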

The remaining part of the Appendix is concerned with the proof of Lemma 4.1. In essence, the lemma follows from the arguments presented by Key [27], who considered a model very similar to ours. For n ∈ ℕ and 1 ≤ k ≤ n, set

ℐ(k, n) = ∑_{j=S_{k−1}+1}^{S_k} Z(j, S_n)

and observe that, under ℙ_ω, ℐ(1, n), …, ℐ(n, n) are independent. The following representation holds:

𝒥_0 = 0,  𝒥_n = ∑_{k=1}^{n−1} ℐ(k, n) + ℐ(n, n),  n ∈ ℕ,

which shows that (𝒥_n)_{n≥0} is a branching process in a random environment with the random number ℐ(k, k) of immigrants in the kth generation. The basic observation for what follows is that (𝒥_n)_{n≥0} has a structure similar to that of the branching process investigated by Key [27]. The main difference manifests itself in the term ℐ(n, n), which is absent from Key's model. It is curious that the branching process in [27] is similar to our (𝒥_n)_{n≥0} in that the immigrants arriving in the generation n only affect the system by their offspring residing in the generation n + 1. In particular, neither Key's process nor our (𝒥_n)_{n≥0} counts the immigrants themselves.

Even though (n)n0 and Key’s process are slightly different it is natural to expect that sufficient conditions ensuring finiteness of power and exponential moments of the first extinction time should be similar. While demonstrating that this is indeed the case we shall only point out principal changes with respect to Key’s arguments.

Denote by

p(n, k) = ℙ_ω{ℐ(1, n) = k | ℐ(1, n−1) = 1},  n ≥ 2, k ∈ ℕ_0,

and

a(n, k) = ℙ_ω{ℐ(n, n) = k},  n ∈ ℕ, k ∈ ℕ_0,

the quenched reproduction and immigration distribution in the generation n, respectively. It can be checked that the mean of the quenched reproduction distribution is

M(n) = ∑_{k≥0} k·p(n, k) = E_ω[ℐ(1, n) | ℐ(1, n−1) = 1] = ρ_n,  n ≥ 2,

and that the quenched expected number of immigrants is

I(n) = ∑_{k≥0} k·a(n, k) = E_ω[ℐ(n, n)] = ρ_nξ_n,  n ∈ ℕ.

Lemma A.2 is a counterpart of Theorem 3.3 in [27].

Lemma A.2.

Assume that Elog ρ ∈ [−∞, 0) and Elog^+ ξ < ∞. Then, for k ∈ ℕ_0, π(k) = lim_{n→∞} ℙ{𝒥_n = k} exists and defines a probability distribution on ℕ_0. If additionally

ℙ{p(2, 0) > 0, a(2, 0) > 0} > 0, (A.1)

then π(0) > 0.

Sketch of proof. As far as the first claim is concerned, the proofs of Lemmas 2.1, 2.2, 3.1, 3.2 in [27] only require inessential changes concerning the range of summation. The second claim follows after a minor alteration, namely the term q(n, k) appearing in the proof of Theorem 3.3 in [27] should be changed to

q(n, k) = ℙ_ω{𝒥_{n+1} = 0 | 𝒥_n = k} = p(n+1, 0)^k·a(n+1, 0),  n ∈ ℕ, k ∈ ℕ_0.

The sequence (q(1, k))_{k≥0} must be positive, which justifies condition (A.1). The corresponding condition in [27] is slightly different.

We are ready to prove Lemma 4.1.

Proof of Lemma 4.1.

The present proof is very similar to that of Theorem 4.2 in [27]. Put

v(n) := ℙ{τ_1 > n},  n ∈ ℕ_0,

and

V(x) := ∑_{n≥1} v(n)x^n,  x ≥ 0,

which may be finite or infinite. While finiteness of Eτ1 is equivalent to V (1) < ∞, finiteness of some exponential moment of τ1 is equivalent to V (x) < ∞ for some x > 1.

For n ∈ ℕ, put

h(k, n) := ℙ{ℐ(k, n) > 0, ∑_{j=k+1}^{n} ℐ(j, n) = 0},  1 ≤ k ≤ n

(with the usual convention that h(n, n) = ℙ{ℐ(n, n) > 0}), and note that h(k, n) = h(1, n − k + 1) for 1 ≤ k ≤ n. Now we use the decomposition

v(n) = ℙ{τ_1 > n, 𝒥_n > 0} = ℙ{τ_1 > n, ∑_{k=1}^{n} ℐ(k, n) > 0} = ∑_{k=1}^{n−1} ℙ{τ_1 > n, ℐ(k, n) > 0, ∑_{j=k+1}^{n} ℐ(j, n) = 0} + ℙ{τ_1 > n, ℐ(n, n) > 0}

in combination with

ℙ{τ_1 > n, ℐ(k, n) > 0, ∑_{j=k+1}^{n} ℐ(j, n) = 0} = ℙ{τ_1 > k − 1, ℐ(k, n) > 0, ∑_{j=k+1}^{n} ℐ(j, n) = 0} = ℙ{τ_1 > k − 1}·ℙ{ℐ(k, n) > 0, ∑_{j=k+1}^{n} ℐ(j, n) = 0} = v(k − 1)h(k, n) = v(k − 1)h(1, n − k + 1),

which holds for 1 ≤ k ≤ n, to obtain

v(n) = ∑_{k=0}^{n−1} v(k)h(1, n − k),  n ∈ ℕ.

This convolution equation is equivalent to

V(x) = H(x)/(1 − H(x)),  x ≥ 0

(the possibility that both sides are infinite is not excluded), where

H(x) = ∑_{j≥1} h(1, j)x^j,  x ≥ 0.

Now Eτ_1 < ∞ follows from

H(1) = ∑_{j≥1} h(1, j) = lim_{n→∞} ℙ{𝒥_n > 0} = 1 − π(0)

once we can show that π(0) > 0. To this end, we recall that (Z_n)_{n≥0} is governed by a geometric distribution, whence

p(n, 0) ≥ λ_n 1_{ξ_n=1} + 2^{−1} 1_{ξ_n>1} ≥ λ_n ∧ (1/2),  n ≥ 2,

and

a(n, 0) = ∑_{j≥1} (λ_n/(j − (j − 1)λ_n)) 1_{ξ_n=j} ≥ ∑_{j≥1} (λ_n/j) 1_{ξ_n=j},  n ∈ ℕ.

These inequalities ensure (A.1) and thereupon π(0) > 0 by Lemma A.2.
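For background, the quenched extinction probabilities behind these estimates come from iterating generating functions. Assuming the critical Geometric(1/2) offspring law suggested by the factor 2^{−1} above, with generating function f(s) = 1/(2 − s), the m-fold iterate satisfies f_m(0) = m/(m + 1), the probability of extinction within m generations. A quick check of this classical formula:

```python
def f(s):
    # generating function of the critical Geometric(1/2) offspring law
    return 1.0 / (2.0 - s)

s = 0.0
for m in range(1, 50):
    s = f(s)                          # s = f_m(0) after the m-th iteration
    assert abs(s - m / (m + 1)) < 1e-12
print("ok")
```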

To prove finiteness of some exponential moment pick δ ∈ (0, 1) such that

E(ρξ)^δ < ∞  and  r := Eρ^δ < 1.

Existence of such a δ is justified by assumptions and the Cauchy-Schwarz inequality. In view of

h(1, j) ≤ ℙ{ℐ(1, j) ≥ 1} ≤ E(E_ω ℐ(1, j))^δ = E(ρξ)^δ·r^{j−1}

we infer that the radius of convergence of H is greater than one. This in combination with H(1) < 1 implies that H(x) < 1 and thereupon V (x) < ∞ for some x > 1.
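The equivalence between the convolution equation and V(x) = H(x)/(1 − H(x)) is easy to verify numerically for a toy coefficient sequence; the geometric choice of h below is an arbitrary stand-in for h(1, j), not a quantity from the model:

```python
# generate v(n) from the convolution v(n) = sum_{k=0}^{n-1} v(k) h(n-k)
# with v(0) = 1, then compare truncated power series of V and H/(1-H)
N = 400
h = [0.0] + [0.3 * 0.5 ** (j - 1) for j in range(1, N)]  # h[j] ~ h(1, j)
v = [1.0] + [0.0] * (N - 1)
for n in range(1, N):
    v[n] = sum(v[k] * h[n - k] for k in range(n))

x = 0.9                                # a point with H(x) < 1
V = sum(v[n] * x ** n for n in range(1, N))
H = sum(h[j] * x ** j for j in range(1, N))
assert abs(V - H / (1.0 - H)) < 1e-9   # truncation error is negligible here
print("ok")
```

The same computation with any x inside the radius of convergence of H and with H(x) < 1 gives the same agreement, which is exactly how finiteness of V(x) for some x > 1 is extracted in the proof.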

Footnotes

1. In some cases we also need additional technical assumptions concerning the joint distribution of ρ and ξ, for instance, E(ρξ)^α < ∞. These will be stated explicitly in the corresponding theorems.

Contributor Information

Dariusz Buraczewski, Mathematical Institute, University of Wroclaw, 50-384 Wroclaw, Poland.

Piotr Dyszewski, Mathematical Institute, University of Wroclaw, 50-384 Wroclaw, Poland.

Alexander Iksanov, Faculty of Computer Science and Cybernetics, Taras Shevchenko National University of Kyiv, 01601 Kyiv, Ukraine.

Alexander Marynych, Faculty of Computer Science and Cybernetics, Taras Shevchenko National University of Kyiv, 01601 Kyiv, Ukraine.

Alexander Roitershtein, Department of Mathematics, Iowa State University, Ames, IA 50011, USA.

References

  • [1] Anderson KK and Athreya KB. A note on conjugate Π-variation and a weak limit theorem for the number of renewals. Statist. Probab. Lett., 6:151–154, 1988.
  • [2] Bingham NH, Goldie CM and Teugels JL. Regular variation. Cambridge University Press, 1989.
  • [3] Bouchet É, Sabot C and dos Santos RS. A quenched functional central limit theorem for random walks in random environments under (T)_γ. Stoch. Proc. Appl., 126(4):1206–1225, 2016.
  • [4] Buraczewski D, Damek E and Mikosch T. Stochastic models with power-law tails. The equation X = AX + B. Springer Series in Operations Research and Financial Engineering. Springer, 2016.
  • [5] Buraczewski D and Dyszewski P. Precise large deviations for random walk in random environment. Electron. J. Probab., 23(114):1–26, 2018.
  • [6] Buraczewski D, Dyszewski P, Iksanov A and Marynych A. Random walks in a strongly sparse random environment. arXiv preprint 1903.02972, 2019.
  • [7] Comets F, Gantert N and Zeitouni O. Quenched, annealed and functional large deviations for one-dimensional random walk in random environment. Probab. Theory Related Fields, 118(1):65–114, 2000.
  • [8] Damek E and Kolodziejek B. A renewal theorem and supremum of a perturbed random walk. Electron. Commun. Probab., 23(82):1–13, 2018.
  • [9] Dembo A, Peres Y and Zeitouni O. Tail estimates for one-dimensional random walk in random environment. Comm. Math. Phys., 181(3):667–683, 1996.
  • [10] Denisov D, Foss S and Korshunov D. Asymptotics of randomly stopped sums in the presence of heavy tails. Bernoulli, 16(4):971–994, 2010.
  • [11] Dolgopyat D and Goldsheid I. Quenched limit theorems for nearest neighbour random walks in 1D random environment. Comm. Math. Phys., 315(1):241–277, 2012.
  • [12] Enriquez N, Sabot C and Zindy O. Limit laws for transient random walks in random environment on ℤ. Annales de l'institut Fourier, 59:2469–2508, 2009.
  • [13] Feller W. Fluctuation theory of recurrent events. Trans. Amer. Math. Soc., 67(1):98–119, 1949.
  • [14] Feller W. An introduction to probability theory and its applications. 2nd edition. Wiley, 1971.
  • [15] Gantert N and Zeitouni O. Quenched sub-exponential tail estimates for one-dimensional random walk in random environment. Comm. Math. Phys., 194(1):177–190, 1998.
  • [16] Goldie CM. Implicit renewal theory and tails of solutions of random equations. Ann. Appl. Probab., 1(1):126–166, 1991.
  • [17] Grincevičius AK. The continuity of the distribution of a certain sum of dependent variables that is connected with independent walks on lines. Teor. Verojatnost. i Primenen., 19:163–168, 1974.
  • [18] Grincevičius AK. On a limit distribution for a random walk on lines. Litovsk. Mat. Sb., 15(4):79–91, 1975.
  • [19] Greven A and den Hollander F. Large deviations for a random walk in random environment. Ann. Probab., 22(3):1381–1428, 1994.
  • [20] Grey DR. Regular variation in the tail behaviour of solutions of random difference equations. Ann. Appl. Probab., 4(1):169–183, 1994.
  • [21] Gut A. Stopped random walks: limit theorems and applications. 2nd edition. Springer, 2009.
  • [22] Harris TE. First passage and recurrence distributions. Trans. Amer. Math. Soc., 73(3):471–486, 1952.
  • [23] Iksanov A. Renewal theory for perturbed random walks and similar processes. Birkhäuser, 2016.
  • [24] Kesten H. Random difference equations and renewal theory for products of random matrices. Acta Math., 131:207–248, 1973.
  • [25] Kesten H. The limit distribution of Sinaĭ's random walk in random environment. Phys. A, 138(1–2):299–309, 1986.
  • [26] Kesten H, Kozlov MV and Spitzer F. A limit law for random walk in a random environment. Compositio Math., 30:145–168, 1975.
  • [27] Key ES. Limiting distributions and regeneration times for multitype branching processes with immigration in a random environment. Ann. Probab., 15(1):344–353, 1987.
  • [28] Korshunov DA. An analog of Wald's identity for random walks with infinite mean. Siberian Math. J., 50(4):663–666, 2009.
  • [29] Kozlov MV. Random walk in a one-dimensional random medium. Theory Probab. Appl., 18(2):387–388, 1974.
  • [30] Matzavinos A, Roitershtein A and Seol Y. Random walks in a sparse random environment. Electron. J. Probab., 21, paper no. 72, 2016.
  • [31] Meyer P-A. Probability and potentials. Blaisdell Publishing Co., Ginn and Co., Waltham, Mass.-Toronto, Ont.-London, 1966.
  • [32] Pakes AG. Further results on the critical Galton–Watson process with immigration. J. Austral. Math. Soc., 13:277–290, 1972.
  • [33] Pisztora A and Povel T. Large deviation principle for random walk in a quenched random environment in the low speed regime. Ann. Probab., 27(3):1389–1413, 1999.
  • [34] Pisztora A, Povel T and Zeitouni O. Precise large deviation estimates for a one-dimensional random walk in a random environment. Probab. Theory Related Fields, 113(2):191–219, 1999.
  • [35] Sinaĭ Ya. G. The limit behavior of a one-dimensional random walk in a random environment. Teor. Veroyatnost. i Primenen., 27(2):247–258, 1982.
  • [36] Solomon F. Random walks in a random environment. Ann. Probab., 3:1–31, 1975.
  • [37] Sznitman A and Zerner M. A law of large numbers for random walks in random environment. Ann. Probab., 27(4):1851–1869, 1999.
  • [38] Varadhan SRS. Large deviations for random walks in a random environment. Comm. Pure Appl. Math., 56(8):1222–1245, 2003.
  • [39] Zerner MPW. Lyapounov exponents and quenched large deviations for multidimensional random walk in random environment. Ann. Probab., 26(4):1446–1476, 1998.
  • [40] Zeitouni O. Random walks in random environment. XXXI Summer School in Probability (St. Flour, 2001). Lecture Notes in Math., 1837, Springer, 193–312, 2004.
