Abstract
A random walk in a sparse random environment is a model introduced by Matzavinos et al. [Electron. J. Probab. 21, paper no. 72: 2016] as a generalization of both a simple symmetric random walk and a classical random walk in a random environment. It is a nearest neighbor random walk on ℤ that jumps to the left or to the right with probability 1/2 from every point outside the marked set {S0, S1, S2, …} and jumps to the right (left) with the random probability λk+1 (1 − λk+1) from the point Sk. Assuming that the gaps between consecutive marked points and the marking probabilities are independent copies of a random vector (ξ, λ) and that the mean Eξ is finite (moderate sparsity), we obtain stable limit laws for Xn, properly normalized and centered, as n → ∞. While the case ξ ≤ M a.s. for some deterministic M > 0 (weak sparsity) was analyzed by Matzavinos et al., the case Eξ = ∞ (strong sparsity) will be analyzed in a forthcoming paper.
Keywords: branching process in a random environment with immigration, perpetuity, random difference equation, random walk in a random environment
1. Introduction
Simple random walks on ℤ (the set of integers) arise in various areas of classical and modern stochastics. However, their intrinsic homogeneity reduces, in some situations, the applicability of simple random walks. Solomon [36] eliminated this drawback by introducing a random environment which made a modified random walk space-inhomogeneous. In the present article we investigate an intermediate model, called a random walk in a sparse random environment (RWSRE), in which the homogeneity of an environment is only perturbed on a sparse subset of ℤ. Since a RWSRE is a particular case of a random walk in a random environment (RWRE) we proceed by recalling the definition of the latter.
Set and . Let be the Borel σ-algebra of subsets of Ω, P a probability measure on and the σ-algebra generated by the cylinder sets in . A random environment is a random element of the measurable space distributed according to P. A quenched (fixed) environment ω provides us with a probability measure on whose transition kernel is given by
With the initial condition X0 := 0 the sequence is a Markov chain on (under ) which is called random walk in the random environment ω. Here and hereafter, . It is natural to investigate RWRE from two viewpoints which are different in many aspects: under the quenched measure for almost all (with respect to P) ω, that is, for a typical ω or under an annealed measure. Formally, the annealed measure on is defined as a semi-direct product via the formula
Note that in general X is no longer a Markov chain under . Usually one assumes that an environment ω forms a stationary and ergodic sequence or even a sequence of iid (independent and identically distributed) random variables. In this setting RWRE has attracted a fair amount of attention in the probability community, resulting in quenched and annealed limit theorems [3, 11, 12, 25, 26, 35, 37] and large deviations [5, 7, 9, 15, 19, 33, 34, 38, 39]. This list of references is far from complete.
We aim at establishing annealed limit theorems for X (that is, under ) in a so-called sparse random environment, which corresponds to a particular choice of P specified as follows. Let ((ξk, λk))k∈ℕ be a sequence of independent copies of a random vector (ξ, λ) which satisfies λ ∈ (0, 1) and ξ ∈ ℕ a.s. For k ∈ ℕ, set Sk := ξ1 + … + ξk and S0 := 0.
The sparse random environment is defined by
| ωn = λk+1, if n = Sk for some k; ωn = 1/2, otherwise | (1.1) |
The model (with λk in (1.1) replacing λk+1) was introduced by Matzavinos, Roitershtein and Seol [30]. These authors obtained various results including a recurrence/transience criterion, a strong law of large numbers and limit theorems. However, many results in [30] were proved under quite restrictive conditions including boundedness of ξ, a strong ellipticity condition for the distribution of λ and independence of ξ and λ. In this setting some essential properties of X remain hidden. Our main purpose is to relax the aforementioned assumptions substantially, thereby establishing limit theorems in full generality, and to find out how the distributional properties of the vector (ξ, λ) affect the asymptotic behavior of X. It turns out that the asymptotics of X is regulated by the tail behaviors of ξ and ρ := (1 − λ)/λ, which determine the sparsity of the environment and its local drift, respectively. In this paper we investigate the case where Eξ < ∞. We call the corresponding environment ‘moderately sparse’, whereas in the opposite case Eξ = ∞ we say that the environment is ‘strongly sparse’. The analysis of X in a strongly sparse environment requires completely different techniques and will be carried out in a companion paper [6].
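Before moving on, it may help to see the environment (1.1) concretely. The following Python sketch (the function name `sparse_environment` and the toy gaps and marks are ours, and only the stretch {0, …, Sn − 1} is built) fills ωm = 1/2 everywhere except at the marked points Sk, where it places λk+1:

```python
import numpy as np

def sparse_environment(xi, lam):
    """Environment (1.1) on {0, ..., S_n - 1}: omega_m = 1/2 off the marked
    set {S_0, S_1, ...} and omega_{S_k} = lambda_{k+1} at the marked points.

    xi  -- the gaps xi_1, ..., xi_n (positive integers), S_k = xi_1 + ... + xi_k
    lam -- the marking probabilities lambda_1, ..., lambda_n, each in (0, 1)
    """
    S = np.concatenate(([0], np.cumsum(xi)))  # S_0 = 0, S_1, ..., S_n
    omega = np.full(int(S[-1]), 0.5)          # homogeneous background 1/2
    omega[S[:-1]] = lam                       # omega_{S_k} = lambda_{k+1}
    return omega

# toy example with deterministic gaps and marks: S_0 = 0, S_1 = 3, S_2 = 5
omega = sparse_environment(xi=[3, 2, 4], lam=[0.7, 0.6, 0.8])
```

Replacing the deterministic lists by iid samples of (ξ, λ) produces a realization of the sparse random environment on the positive half-axis.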
The present article is organized as follows. In Section 2 we formulate our limit theorems for X and the first passage times of X. In Section 3.1 we describe our approach and define a branching process Z in a random environment which is used to analyze the random walk X. In Section 3.2 we introduce necessary notation related to the process Z. In Section 4 we explain a heuristic behind our proof and present a number of important estimates and decompositions used throughout the paper. Among other things, we demonstrate in this section how to reduce the initial problem to the asymptotic analysis of sums of certain iid random variables. The tail behavior of these variables is discussed in Section 5. Section 6 is devoted to the analysis of a particular critical Galton–Watson process with immigration which naturally arises in the context of random walks in the sparse random environment. The proofs of the main results are given in Sections 7.1, 7.2 and 7.3. The proofs of auxiliary lemmas can be found in Section 7.4 and the Appendix.
2. Main results
We focus on the case when X is -a.s. transient to +∞ and the environment is moderately sparse, that is, Eξ < ∞. Recall the notation
According to Theorem 3.1 in [30], X is -a.s. transient to +∞ if
| (2.1) |
The first inequality excludes the degenerate case ρ = 1 a.s. in which X becomes a simple random walk. The second inequality is always true for the moderately sparse environment. We note right away that our standing assumptions and hold under the conditions of our main results, Theorems 2.2 and 2.6.
The sequence (Tn)n∈ℕ of the first passage times defined by Tn := inf{k ∈ ℕ0 : Xk = n}, n ∈ ℕ,
is of crucial importance for our arguments. Of course, the observation that the asymptotics of X can be derived from that of (Tn) is not new and has been exploited in many earlier papers in the area of random walks in random environments. Assuming only transience to the right it is shown on p. 12 in [30] that
This in combination with Lemma 4.4 in [30] leads to the conclusion that
| (2.2) |
whenever the environment is moderately sparse. Furthermore, under the additional assumption that ξ and λ are independent, Theorem 3.3 in [30] states that
| (2.3) |
provided that and , and v = 0, otherwise.
In Proposition 2.1 we give an explicit formula for v when ξ and λ are allowed to be dependent.
Proposition 2.1.
Assume that and . Then
| (2.4) |
provided that , and , and v = 0 (1/v = ∞), otherwise.
Turning to weak convergence results we first formulate our assumptions on the distribution of ρ. Two different sets of conditions will be used:
(P1) for some α ∈ (0, 2], and the distribution of log ρ is nonarithmetic, where log+ x := max(0, log x);
(P2) there exists an open interval such that for all .
Assuming that (P1) holds for some α > 0 we further distinguish two cases pertaining to the distribution of ξ:
(Ξ1) , where x ∨ y := max(x, y);
(Ξ2) there exists a slowly varying function ℓ such that
| (2.5) |
for some β ∈ (1, 2α], and if β = 2α.
Finally, if (P2) holds for some open interval we assume that either (Ξ1) holds for some or the regular variation assumption in (Ξ2) holds for some β satisfying .
We summarize our results in Table 1 with an emphasis on which component of the environment dominates.
Table 1:
Influence of the environment and limit theorems for Tn.
| | (Ξ1) | (Ξ2) |
|---|---|---|
| (P1) | Apply Thm. 2.2 (A1) (ρ dominates) | In case β < 2α apply (P2) with α = β/2. In case β = 2α and apply Thm. 2.2 (A2) (ρ dominates). In case β = 2α and apply Thm. 2.2 (A3) (contributions of ρ and ξ are comparable). In case β = 2α and apply Thm. 2.6 (B1) (ξ dominates). In case β > 2α apply (P1) and (Ξ1) (because (Ξ2) with β > 2α implies (Ξ1)). |
| (P2) | In case apply Prop. 2.9 (contributions of ρ and ξ are comparable) | In case β ∈ (1, 4) and β/2 ∈ I apply Thm. 2.6 (B2) (ξ dominates) |
In what follows, for α ∈ (0, 2), we denote by a random variable with an α-stable distribution defined by
where Γ(·) is the gamma function, if α ∈ (0, 1);
if α ∈ (1, 2). Note that is a positive random variable when α ∈ (0, 1) and it has a spectrally positive α-stable distribution when α ∈ [1, 2). Throughout the paper and will mean convergence in probability and convergence in distribution, respectively.
In Theorem 2.2 and Corollary 2.4 we treat the case (P1).
Theorem 2.2.
Assume that one of the following sets of assumptions is satisfied:
(A1) (P1) holds for some α ∈ (0, 2], (Ξ1) holds and ;
(A2) (P1) holds for some α ∈ (1/2, 2] and (Ξ2) holds with β = 2α and , and ;
(A3) (P1) holds for some α ∈ (1/2, 2), (Ξ2) holds with β = 2α and , and for some ε > 0.
Then there exist absolute constants Aα, Bα and C1 such that the following limit relations hold as n → ∞.
If α ∈ (0, 1), then .
If α = 1, then , where a(n) ∼ n log n.
If α ∈ (1, 2), then .
If α = 2, then where is a standard normal random variable.
Remark 2.3.
See (7.11), (7.12) and (7.14) for explicit forms of the constants Aα, Bα and C1. In Theorem 2.2 we do not specify the constants for two reasons. First, they involve characteristics of random variables that have not been introduced so far. Second, some of these constants are essentially implicit in the sense that they cannot be calculated explicitly.
From Theorem 2.2 we deduce the following corollary.
Corollary 2.4.
Under the assumptions and notation of Theorem 2.2 the following limit relations hold as k → ∞.
If α ∈ (0, 1), then .
If α = 1, then , where .
If α ∈ (1, 2), then .
If α = 2, then .
Remark 2.5.
When α ∈ (0, 1) the distribution of is called the Mittag-Leffler distribution with parameter α. The term stems from the facts that
and that the right-hand side defines the Mittag-Leffler function with parameter α.
Our next theorem treats weak convergence of Tn in cases where ξ plays a dominant role.
Theorem 2.6.
Assume that one of the following sets of assumptions is satisfied:
(B1) (P1) holds for some α ∈ (1/2, 2], (Ξ2) holds with β = 2α and , and ;
(B2) (P2) holds and (Ξ2) holds with β ∈ (1, 4) such that and for some ε > 0.
In the case (B2) put α := β/2. Then there exist functions cα(t) for α ∈ (1/2, 2), q1(t) and r2(t), regularly varying at ∞ with indices 1/α, 1 and 1/2, respectively, and absolute constants and for α ∈ (1/2, 2] such that the following limit relations hold as n → ∞.
If , then .
If α = 1, then .
If α ∈ (1, 2), then .
If α = 2, then .
Remark 2.7.
This is a counterpart of Remark 2.3. Explicit forms of the normalizing and centering sequences in Theorem 2.6 and Corollary 2.8 given below can be found in (7.16), (7.17), (7.18) and (7.19), and (7.20), (7.21), (7.22) and (7.23), respectively.
Before formulating the corresponding limit theorems for Xk we need to introduce more notation. For α ∈ (1/2, 1), denote by any positive function satisfying as t → ∞. Since cα(t) is regularly varying at ∞, such functions do exist by Theorem 1.5.12 in [2].
Corollary 2.8.
Under the assumptions and notation of Theorem 2.6 the following limit relations hold as k → ∞.
If , then .
If α = 1, then for appropriate sequences s(k) and t(k) which are specified in formula (7.21).
If α ∈ (1, 2), then .
If α = 2, then .
The last result of this section is given for completeness only. It can be derived from a general central limit theorem (Theorem 2.2.1 in [40]) for random walks in a stationary and ergodic random environment. Since the sparse random environment is not stationary in general, to apply this theorem one has to pass to a stationary and ergodic environment. In Theorem 2.1 in [30] it is shown that such a passage is possible whenever .
Proposition 2.9.
Assume that (P2) and (Ξ1) hold for some α ≥ 2. Then there exists σ0 ∈ (0, ∞) such that, as n → ∞,
and
where v is given in (2.4).
3. Branching processes in random environment with immigration
The connection between a random walk and a branching process with immigration dates back to Harris [22]. In the context of a random walk in a random environment this connection was successfully used by Kozlov [29] and Kesten, Kozlov and Spitzer [26]. In particular, these authors have shown that the asymptotic behavior of RWRE can be obtained from that of the total progeny of the aforementioned branching process. Since we are going to exploit the same idea we first recall a construction of the latter process. Most of the material in Section 3.1 can be found in [26].
3.1. Branching process with immigration
Throughout the paper the fact that Xn → ∞ -a.s. plays a crucial role. Let be the number of steps of the process X from i to i − 1 during the time interval [0, Tn), that is,
Since and X0 = 0 we have, for ,
Recalling that the random walk X is transient to the right we infer
| (3.1) |
In particular, for any γ > 0,
Thus, the asymptotics of Tn as n → ∞ is regulated by that of .
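The reduction rests on a path-by-path counting identity: of the Tn steps made before the first visit to n, exactly n right steps are uncompensated, and every left step is matched by one extra right step, so that Tn = n + 2 · (number of left steps on [0, Tn)). A minimal sketch checking the identity on a simulated path (the homogeneous drifted environment ωi ≡ 0.7 and the level n = 10 are toy choices of ours):

```python
import random

def first_passage(omega_right, n, seed=0):
    """Run the nearest-neighbor walk started at 0 with
    P(step right | currently at i) = omega_right(i), until the first visit
    to n; return the passage time T_n and the number of left steps."""
    rng = random.Random(seed)
    x, t, left = 0, 0, 0
    while x != n:
        if rng.random() < omega_right(x):
            x += 1
        else:
            x -= 1
            left += 1
        t += 1
    return t, left

Tn, left = first_passage(lambda i: 0.7, n=10)
assert Tn == 10 + 2 * left  # right steps minus left steps = 10, total = Tn
```

The identity holds for every realization, whatever the environment, which is why the study of Tn reduces to that of the left-step counts.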
In what follows, we write Geom(p) for a geometric distribution with success probability p, that is, the distribution on ℕ0 with mass function p(1 − p)^k, k ∈ ℕ0.
Claim. Let ω and n be fixed. Then, for 0 ≤ j ≤ n, is equal to the size of the jth generation (excluding the immigrant) of an inhomogeneous branching process with one immigrant in each generation. Under , the offspring distribution of the immigrant and the other particles in the (j − 1)st generation is Geom(ωn−j).
Proof of the claim. First note that because X cannot reach n before time Tn. Further, , where is the number of excursions to the left of n − 1 made by X before time Tn. Transience of X entails that the -distribution of is Geom(ωn−1). Finally, for 2 ≤ j ≤ n − 1, we have
where denotes the number of excursions to the left from n − j before the first excursion to the left from n − j + 1 (that is, before the time Tn−j+1) and denotes the number of excursions to the left from n − j during the kth excursion to the left from n − j + 1. Under , the random variables are iid with distribution Geom(ωn−j) and also independent of . The proof of the claim is complete.
Reversing the order of indices leads to a branching process Z = (Zk)k≥0 in a random environment (BPRE) with one immigrant entering the system in each generation. We stress from the outset that immigrants in our model are ‘artificial’: even though they reproduce, they do not belong to any generation and, as such, they are not counted. The evolution of Z can be described as follows. An immigrant enters the 0th generation which is originally empty, that is, Z0 = 0. She gives birth to a random number of offspring with -distribution Geom(ω1) which form the first generation. For , an immigrant enters the nth generation. She and the particles of the nth generation, independently of each other and of the particles in the previous generations, give birth to random numbers of offspring with distribution Geom(ωn+1). The number of these newborn particles, which form the (n + 1)st generation, is given by
where is the number of offspring of the (n + 1)st immigrant and, for , is the number of offspring of the kth particle in the nth generation (we set if the kth particle in the nth generation does not exist). Observe that, under , for each , the random variables are iid with distribution Geom(ωn) and also independent of Zn.
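The recursion for Z is easy to simulate. A sketch under a fixed environment (the function name is ours; note that numpy's geometric law is supported on {1, 2, …} with mass p(1 − p)^{k−1}, so one unit is subtracted per parent to get offspring counts on {0, 1, 2, …}):

```python
import numpy as np

def bpre_path(omega, seed=0):
    """Generations Z_0, ..., Z_n of the branching process Z in the fixed
    environment omega = (omega_1, ..., omega_n): in generation k, each of
    the Z_k particles together with the fresh immigrant produces a
    Geom(omega_{k+1}) number of offspring on {0, 1, 2, ...}."""
    rng = np.random.default_rng(seed)
    Z = [0]                                  # Z_0 = 0: generation 0 is empty
    for w in omega:
        parents = Z[-1] + 1                  # current generation + 1 immigrant
        offspring = rng.geometric(w, size=parents) - 1  # shift to {0, 1, ...}
        Z.append(int(offspring.sum()))
    return Z

Z = bpre_path([0.5] * 30, seed=1)            # critical environment omega = 1/2
```

In a sparse environment one would simply replace selected entries 1/2 of `omega` by the marks λi, in accordance with (1.1).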
Note that when the random environment is sparse (see (1.1)) and fixed, most of the time the branching process Z behaves like a critical Galton–Watson process with one immigrant and Geom(1/2) offspring distribution. Only the particles of generation Si − 1 for , as well as the immigrants arriving in this generation, reproduce according to the Geom(λi) distribution. Averaging over ω and taking into account the structure of the environment we obtain
| (3.2) |
under the annealed probability . This leads to the most important conclusion of the present section
| (3.3) |
where is a term which is bounded in probability. Distributional equality (3.3) will prove useful on many occasions.
3.2. Notation
Before we explain the strategy of our proof, some more notation has to be introduced. Denote by Z(k, n) the number of progeny residing in the nth generation of the kth immigrant. In particular, Z(k, k) is the number of offspring of this immigrant. Then
For and 1 ≤ i ≤ n, let Y(i, n) denote the number of progeny in the generations i, i + 1, … , n of the ith immigrant, that is,
Similarly, for , we denote by Yi the total progeny of the ith immigrant, that is,
We also define Wn to be the total population size in the first n generations, that is,
Motivated by the structure of the environment we shall often divide the population into blocks which include generations 1, … , S1; S1 + 1, … , S2 and so on. As a preparation, we write
for the number of particles in the generation Sn,
for the total population in the generations Sn−1 + 1, … , Sn and
for the total progeny of immigrants arriving in the generations Sn−1, … , Sn − 1.
3.3. Analysis of the environment
The asymptotic behavior of the branching process Z depends heavily upon the environment. At the end of this section we specify qualitatively two aspects of this dependence. A random difference equation which arises naturally in the course of our discussion, as well as in [26] and many other papers on RWRE, plays an important role in the subsequent arguments.
We proceed by recalling the definitions of random difference equations and perpetuities. Let be a sequence of independent copies of an -valued random vector (A, B). Further, let R0 be a random variable which is independent of . The sequence , recursively defined by the random difference equation
forms a Markov chain which is very well known and well understood. Assuming that R0 = 0 and reversing the indices in an equivalent representation Rk = A1·…·Ak−1B1+A2·…·Ak−1B2+…+Bk leads to the random variable satisfying for all . Whenever
| (3.4) |
its infinite version is called a perpetuity because of a possible actuarial application. The study of random difference equations and perpetuities has a long history going back to Kesten [24] and Grincevičius [17]. We refer the reader to the recent monographs [4, 23] containing a comprehensive bibliography on the subject.
It is well-known that conditions and are sufficient for (3.4) and the distributional convergence as k → ∞. There are numerous results in the literature concerning the tail behavior of . The first assertion of this flavor is the celebrated theorem by Kesten [24] (see also Goldie [16] and Grincevičius [18]), to be referred to as the Kesten-Grincevičius-Goldie theorem. It states that the distribution of has a heavy right tail under the assumptions A > 0 a.s., for some s > 0 and some additional conditions, see formula (7.39) below for more details in the particular case (A, B) = (ρ, ξ). The tail behavior of is also well understood in some other cases, in particular, when is regularly varying at ∞ (see, for instance, [18], [20] and [8]).
Now we switch attention from the general random difference equations to a particular one which features in the analysis of BPRE Z. Using the branching property one easily obtains the following recurrence
This shows, among other things, that the Markov chain is an instance of the random difference equation which corresponds to (A, B) = (ρ, ρξ). Asymptotic distributional properties of a particular perpetuity which corresponds to (A, B) = (ρ, ξ) are used in an essential way in the proof of Lemma 7.2.
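The recursion Rk = Ak Rk−1 + Bk is a one-line iteration; here is a sketch (the helper name is ours) together with a toy deterministic check in which the perpetuity limit is the explicit geometric series 1 + 1/2 + 1/4 + … = 2:

```python
def rde_path(pairs, r0=0.0):
    """Iterate the random difference equation R_k = A_k * R_{k-1} + B_k.

    pairs -- the sequence (A_1, B_1), (A_2, B_2), ...; in the text the
    relevant choices are (A, B) = (rho, rho * xi) for the chain above and
    (A, B) = (rho, xi) for the perpetuity used in the proof of Lemma 7.2.
    """
    r, out = r0, [r0]
    for a, b in pairs:
        r = a * r + b
        out.append(r)
    return out

# deterministic sanity check: (A, B) = (1/2, 1) gives R_k = 2 * (1 - 2**(-k))
path = rde_path([(0.5, 1.0)] * 30)
```

With iid random pairs in place of the constants, the classical conditions E log A < 0 and E log+ |B| < ∞ guarantee the a.s. convergence of the perpetuity series.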
4. Proof strategy
A weak convergence result for Tn, properly normalized and centered, will be derived from the corresponding result for , again properly normalized and centered. In view of (3.3), the latter may in principle be affected by the asymptotic behavior of Sn, or both. Fortunately, the contribution of Sn is degenerate in the limit: it is regulated by the law of large numbers only, and fluctuations of Sn around its mean do not come into play. Summarizing, the analysis of the asymptotics of is our dominating task.
While dealing with , our main arguments follow the strategy invented by Kesten et al. [26]. Namely, for large n we decompose as a sum of random variables which are iid under the annealed probability . For this purpose we define the extinction times
| (4.1) |
Let us emphasize that the extinctions of Z are ignored in the generations other than S1, S2, … Set
and note that are iid random vectors. We have
| (4.2) |
where is the number of extinctions of Z in the generations S0, … , Sn, that is,
It turns out that the extinctions occur relatively often as the following lemma confirms.
Lemma 4.1.
Assume that and . Then . If additionally and for some ε > 0, then for some γ > 0.
The proof of Lemma 4.1 is given in the Appendix.
Under the assumptions of our main results by Lemma 4.1. The strong law of large numbers for renewal processes makes it plausible that, for large n, the behavior of is comparable with that of the sum . The latter, properly centered and normalized, converges in distribution if and only if the distribution of belongs to the domain of attraction of a stable law. To check the latter, for , we divide the particles residing in the generations Si−1 + 1, … , Si into three groups:
- – the progeny residing in the generations Si−1 + 1, … , Si − 1 of the immigrants arriving in the generations Si−1, … , Si − 2, the number of these being
- – the progeny residing in the generations Si−1 + 1, … , Si − 1 of the immigrants arriving in the generations 0, 1, … , Si−1 − 1, the number of these being
- – particles of the generation Si, the number of these being .
The aforementioned partition of the population which is depicted on Figure 1 induces the following decompositions
and
which are of primary importance for what follows.
Figure 1. The generations 0 through S3 of the BPRE Z and the partition of the corresponding population into parts , i, j = 1, 2, 3. The bold horizontal lines represent particles in the generations S1, S2 and S3, that is, those comprising the groups , i = 1, 2, 3. By definition, .
Depending on the assumptions (P1), (P2), (Ξ1) or (Ξ2), the random variables , and may exhibit different tail behaviors. Often one of the random variables dominates the others, thereby determining the tail behavior of the whole sum .
5. Tail behavior of
In this section we do not assume that .
We first analyze the tail behavior of . Note that by construction are iid and the random variable τ1 does not depend on the future of the sequence in the sense of the definition given by Denisov, Foss and Korshunov on p. 987 in [10]. The latter means that, for each , the collections of random variables and are independent. This observation in combination with Corollary 3 in [10] and Theorem 1 in [28] yields the following lemma, which will be used many times throughout the paper.
Lemma 5.1.
Assume that is regularly varying at infinity and τ1 has a finite exponential moment. Then
| (5.1) |
Proof. If , the claim follows from Corollary 3 in [10]. If we use Theorem 1 in [28] to conclude that, as t → ∞,
By the monotone density theorem, see Theorem 1.7.2 in [2], the last formula entails (5.1).
Lemma 5.2.
Assume that (2.5) holds with some β > 0. Then
where ϑ is a random variable with Laplace transform
| (5.2) |
The proof of Lemma 5.2 is given in Section 6. In the next two lemmas we provide moment estimates for the two other summands and .
Lemma 5.3.
Assume that and that, for some k ≤ 2, and are finite. Then and there exists a positive constant C such that, for all ,
| (5.3) |
If additionally , then
| (5.4) |
Remark 5.4.
Since ξ ≥ 1 a.s., the assumption entails . This explains the absence of the latter condition in Lemma 5.3.
Lemma 5.5.
Assume that, for some κ ≤ 2, , and are finite. Then, for all κ0 ∈ (0, κ),
| (5.5) |
If additionally , then
| (5.6) |
Lemma 5.6 states that under the assumption (P1) the distribution of has a power tail.
Lemma 5.6.
Assume that (P1) holds for some α ∈ (0,2], and . Then
for a positive constant C2(α).
Lemma 5.7 points out the tail behavior of in the situation where the slowly varying factor in (Ξ2) is a constant.
Lemma 5.7.
Assume that (P1) holds for some α ∈ (0, 2), (Ξ2) holds with β = 2α and ℓ such that , and for some ε > 0. Then
where C2(α) is the same constant as in Lemma 5.6.
The proofs of Lemmas 5.3 through 5.7 are postponed until Section 7.4.
For the ease of reference the tail behavior of is summarized in the following proposition.
Proposition 5.8.
The following asymptotic relations hold.
(C1) If (P1) holds for some α ∈ (0, 2], either or (Ξ2) holds with β = 2α, , and , then
where C2(α) is the same constant as in Lemma 5.6.
(C2) If (P1) holds for some α ∈ (0, 2), (Ξ2) holds with β = 2α and , and for some ε > 0, then
(C3) If (P1) holds for some α ∈ (0, 2], (Ξ2) holds with β = 2α and , and , then
(C4) If (P2) holds, (Ξ2) holds for some β ∈ (0, 4) such that and for some ε > 0, then
Proof. Under the assumptions (Ci), i = 1, 2, 3, 4, τ1 has some finite exponential moment by Lemma 4.1. This fact combined with Lemma 5.1 ensures (5.1) whenever the right tail of is regularly varying.
Proof of (C1). Each of and (Ξ2) with β = 2α implies . Therefore, in view of Lemma 5.6 it is enough to show that
| (5.7) |
If (Ξ2) holds with β = 2α, then according to Lemma 5.2
This in combination with which holds by assumption and (5.1) proves (5.7).
Assuming that we intend to show that
| (5.8) |
which, of course, entails (5.7). The proof of (5.8) utilizes two technical lemmas whose formulations and proofs are postponed until later. Since τ1 does not depend on the future of the sequence , by Lemma A.1 it is enough to show that . At the beginning of Section 6 we show that has the same distribution as the total progeny of a critical Galton–Watson process with unit immigration and Geom(1/2) offspring distribution stopped at random time ξ1 − 1. The conclusion then follows from Lemma 6.3.
Proof of (C2). This is just Lemma 5.7.
Proof of (C3). This follows from Lemma 5.2 in conjunction with (5.1) and Lemma 5.6 because (Ξ2) with β = 2α entails .
Proof of (C4). Since the interval is open, there exists ε1 > 0 such that β/2 + ε1 ∈ (0, 2], , and . In view of this, Lemma 5.5 applies and gives and . An appeal to Lemma 5.2 in combination with (5.1) does the rest.
6. Critical Galton–Watson process with immigration
As has already been mentioned in Section 3, , where ξ1 is assumed independent of a critical Galton–Watson process with unit immigration and Geom(1/2) offspring distribution. In this section we collect some known properties of and prove several auxiliary results which, to our knowledge, are not available in the literature. The evolution of is the same as that of the BPRE Z with ωn ≡ 1/2 for all , see Section 3.1.
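Because the Geom(1/2) offspring law on {0, 1, 2, …} has mean 1, the recursion E Zcrit_{n+1} = E Zcrit_n + 1 yields E Zcrit_n = n for all n. The following simulation sketch is consistent with this identity (the function name, seed and sample sizes are our toy choices):

```python
import numpy as np

def zcrit_path(n, rng):
    """One path Z^crit_0, ..., Z^crit_n of the critical Galton-Watson process
    with one immigrant per generation and Geom(1/2) offspring on {0, 1, 2, ...}
    (offspring mean 1, hence E Z^crit_k = k)."""
    z, path = 0, [0]
    for _ in range(n):
        parents = z + 1                      # current generation + 1 immigrant
        z = int((rng.geometric(0.5, size=parents) - 1).sum())
        path.append(z)
    return path

rng = np.random.default_rng(7)
n, trials = 20, 4000
mean_end = np.mean([zcrit_path(n, rng)[-1] for _ in range(trials)])  # close to n
```

The same routine can be used to illustrate the convergence of Zcrit_n/n to the random variable ϑ of Lemma 6.4, e.g. by inspecting the empirical distribution of `zcrit_path(n, rng)[-1] / n` for large n.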
For , let denote the total progeny in the first n generations. Further, for and , write Zcrit(k, n) for the number of the nth generation progeny of the kth immigrant and Ycrit(k, n) for the number of progeny of the kth immigrant which reside in generations k through n, that is,
Here is the main result of this section of which Lemma 5.2 is an immediate consequence because , where ξ1 is assumed independent of .
Proposition 6.1.
Let ς be an integer-valued random variable independent of and such that
for some α > 0 and some ℓ slowly varying at ∞. Then
where ϑ is a random variable with Laplace transform (5.2).
Remark 6.2. For fixed , and the distribution of inherits an exponential tail from the Geom(1/2) offspring distribution. Thus, for ς which has a heavy-tailed distribution and is independent of , it is natural to expect that
Proposition 6.1 makes this intuition precise.
Lemma 6.3 given next is used in the proof of Proposition 5.8, part (C1).
Lemma 6.3.
Let ς be an integer-valued random variable independent of and such that for some α > 0. Then .
To prove Proposition 6.1 and Lemma 6.3 we need some auxiliary lemmas. The first one is due to Pakes [32, Theorem 5].
Lemma 6.4.
We have
| (6.1) |
where ϑ is a random variable with Laplace transform (5.2).
In the cited article Pakes investigates Galton–Watson processes with general, not necessarily unit, immigration. One of the standing assumptions of that paper is that the probability of having no immigrants is positive. However, a perusal of the proof of Theorem 5 in [32] reveals that the result still holds without this assumption.
With some additional effort one can prove the convergence of all moments in (6.1).
Lemma 6.5.
For each s > 0,
| (6.2) |
Proof. Suppose for the moment that we have verified that
| (6.3) |
for some β > 0 and some . Then in view of
for all s > 0 and some constant C(s), the de la Vallée Poussin criterion for uniform integrability (see e.g. Theorem T22 in [31]) in combination with (6.1) ensures (6.2).
Left with the proof of (6.3) observe that, for fixed , the process initiated by the kth immigrant (Zcrit(k, n))n≥k is a Galton–Watson process with Geom(1/2) offspring distribution. Moreover, the processes started by different immigrants are iid. Therefore, writing
we obtain a representation of as the sum of independent random variables. This formula entails
| (6.4) |
(the case that both sides of (6.4) are infinite for some x > 0 is not excluded), where
We have a0(x) = 1 for all x ≥ 0 and
for x ∈ [0, log 2). Using a decomposition
| (6.5) |
where are independent copies of which are also independent of we infer
In particular, for every fixed , aj(x) < ∞ for all x from some right vicinity of the origin.
Set bj(x) = exaj(x) for and x ≥ 0, so that
For technical reasons, it is more convenient to work with bj rather than aj. We intend to show that, for every γ ∈ (0, 1/4), there exist K = K(γ) > 1 and x0(γ) > 0 such that
| (6.6) |
for and x > 0 satisfying j(1 + j)x ≤ γ and x < x0(γ).
Given γ ∈ (0, 1/4) pick K > 1 such that K − K²γ > 1. This is possible because the largest root of the quadratic equation γx² − x + 1 = 0 is larger than one. There exists x0(γ) > 0 such that
Moreover, since we assume j(1 + j)x ≤ γ we have
Now (6.6) follows by mathematical induction. For j = 0 we obtain
and the induction step works as follows
for x ∈ (0, x0(γ)) and j(j + 1)x ≤ γ. The proof of (6.6) is complete.
Armed with (6.6) we can deduce (6.3). Given β ∈ (0, 1/4) take γ ∈ (β, 1/4) and pick n0 ∈ ℕ such that β/n² < x0(γ) and (n + 1)β ≤ nγ for n ≥ n0. Such a choice ensures that j(j + 1)β/n² ≤ γ for all integer 0 ≤ j ≤ n whenever n ≥ n0. Using (6.4) and then (6.6) we arrive at
for β ∈ (0, 1/4). It remains to note that
thereby finishing the proof of (6.3).
We are now ready to prove Proposition 6.1 and Lemma 6.3.
Proof of Proposition 6.1.
By virtue of (6.1) we infer in probability and then a.s. by monotonicity. Therefore,
For x > 1 we have
where . Under the introduced notation, we have to prove that
| (6.7) |
By a standard inversion technique à la Feller (see Theorem 7 in [13]), (6.1) entails
| (6.8) |
We claim that the latter implies further that
| (6.9) |
The simplest way to see this is to pass in (6.8) to versions which converge a.s., that is,
and then exploit the fact that
(see Theorem 1.5.2 in [2]). This gives
because ϑ* > 0 a.s.
With (6.9) at hand, relation (6.7) follows if we can show that the family is uniformly integrable for some x0 > 0. By Potter’s bound for regularly varying functions (Theorem 1.5.6 (iii) in [2]), given A > 1 and δ > 0 there exists such that
whenever . Further, by monotonicity of h,
Thus, for uniform integrability of it suffices to check two things: first,
| (6.10) |
for some β > 2α and second
| (6.11) |
for some γ > 1.
From the proof of Lemma 6.5 we know that for some s > 0, whence
which proves (6.11).
Now we intend to show that (6.10) holds for all β > 0. We have for x ≥ 4
where the last and penultimate inequalities follow from Lemma 6.5 and Markov’s inequality, respectively. The proof of Proposition 6.1 is complete.
Proof of Lemma 6.3.
By Lemma 6.5, for all and some C > 0. This entails
The proof of Lemma 6.3 is complete.
For later use, we note that, for ,
| (6.12) |
The first three of these equalities follow by an elementary calculation. The fourth one can be derived with the help of (6.5) and mathematical induction.
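The identities in (6.12) concern the critical Galton–Watson process with Geom(1/2) offspring. As a sketch of the kind of computation involved (our own illustration, not part of the proof), the following snippet iterates the offspring probability generating function f(s) = 1/(2 − s) numerically and recovers the classical formula P(Zn = 0) = n/(n + 1) for this process:

```python
def extinction_probs(n_max):
    """P(Z_n = 0) for n = 0..n_max, for the critical Galton-Watson process
    started from one particle with Geom(1/2) offspring distribution.
    The offspring pgf is f(s) = sum_{k>=0} (1/2)^{k+1} s^k = 1/(2 - s),
    and P(Z_n = 0) = f_n(0), the n-fold iterate of f evaluated at 0."""
    probs = [0.0]            # one initial particle, so P(Z_0 = 0) = 0
    s = 0.0
    for _ in range(n_max):
        s = 1.0 / (2.0 - s)  # f_{n+1}(0) = f(f_n(0))
        probs.append(s)
    return probs
```

Running `extinction_probs(50)` reproduces n/(n + 1) for every n up to floating-point accuracy; equivalently, the survival probability P(Zn > 0) = 1/(n + 1) decays like 1/n, the hallmark of criticality.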
7. Proofs
7.1. Proof of Proposition 2.1
Recalling that it suffices to show that
Using (3.3) yields
Let us prove the latter convergence in probability. According to Lemma 4.1, we have whenever and . Recalling from (4.2) that
we conclude by the strong law of large numbers that
Hence,
Left with identifying we recall that, for , denotes the total progeny of immigrants arriving in the generations Sk−1, … , Sk − 1, that is,
Since , , … are identically distributed and, for , is independent of we infer
(if , the formula just says that ). To calculate we note that
whence
where the a.s. convergence of the last series is secured by our assumptions and . Taking the expectation with respect to yields
The proof of Proposition 2.1 is complete.
7.2. Proof of Theorem 2.2 and Corollary 2.4
The assumptions of Theorem 2.2 ensure that and that and are finite (for the latter use Lemma 4.1). It is also clear that the distribution of τ1 is nondegenerate, whence s2 > 0.
From Proposition 5.8 (parts (C1) and (C2)) we know that
where C = C2(α) in the cases (A1) and (A2) and in the case (A3). Therefore, the distribution of belongs to the domain of attraction of an α-stable distribution. This means that
| (7.1) |
for some a(t) and b(t), where . To find a(t) and b(t) explicitly we use Theorem 3 on p. 580 and formula (8.15) on p. 315 in [14]:
Our subsequent proof will be based on representation (3.3). In view of this we first analyze the asymptotics of .
Step 1. Limit theorems for . We claim that
| (7.2) |
In view of (4.2) relation (7.2) follows once we have checked that (7.1) entails
| (7.3) |
According to the central limit theorem for renewal processes
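The display omitted here is the renewal central limit theorem; in generic notation that we introduce only for this aside (a renewal counting process N(t) built from iid increments θi with mean m ∈ (0, ∞) and variance v ∈ (0, ∞)), it reads:

```latex
% Renewal CLT: with N(t) := \#\{k \ge 1 : \theta_1 + \dots + \theta_k \le t\},
\frac{N(t) - t/m}{\sqrt{\,v\, m^{-3}\, t\,}} \;\xrightarrow{\ d\ }\; \mathcal{N}(0, 1),
\qquad t \to \infty .
```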
This implies that, for ε > 0 small enough, we can pick z = z(ε) so large that
where . Note that and that
| (7.4) |
These can be easily checked with the exception of the case α = 1 in which a proof of the first relation is needed: for any r ∈ (1, 2],
| (7.5) |
Motivated by our later needs we have proved this in a slightly extended form with r instead of 2.
To prove the first relation in (7.3) we write, for ,
Sending n → ∞ in the last inequality and using (7.1) and (7.4) we obtain
Letting now ε → 0+ yields
A symmetric argument leads to
The second relation in (7.3) follows in a similar manner.
Step 2. Limit theorems for .
Case α > 1. Since and we infer
by the central limit theorem. Now
| (7.6) |
follows from (7.2) and (3.3) written in an equivalent form
Case α = 1. Using the weak law of large numbers and (7.2) we arrive at
| (7.7) |
Case α < 1. Since n = o(b(µ−1n)) we conclude that as n → ∞ by the weak law of large numbers. This in combination with (7.2) and (3.3) proves
| (7.8) |
Step 3. Limit theorem for Tn. At this step we are going to deduce limit theorems for Tn from the corresponding results for proved at the previous step. Set
so that is the first passage time process associated with the random walk . The introduction of this process is justified by
| (7.9) |
Case α ≥ 1. Fix any r ∈ (1, 2). Then and thereupon
| (7.10) |
by Theorem 4.4 on p. 89 in [21].
Subcase α = 1. Using (7.9) and (7.10) we obtain, for any and ε > 0,
Letting yields, for ,
having utilized (7.5), (7.7) and (7.10). Arguing similarly we get the converse inequality for the lower limit, thereby proving that
| (7.11) |
Subcase α > 1. An analogous but simpler argument enables us to show that (7.6) entails
| (7.12) |
Case α < 1. The proof given for the case α ≥ 1 does not work in the case (A1) when α ≤ 1/2 because it is then not necessarily true that for some r > 1. In view of this we use the weak law of large numbers
| (7.13) |
rather than the Marcinkiewicz-Zygmund strong law (7.10).
Another appeal to (7.9) gives, for any and ε > 0,
Sending n → ∞ we obtain with the help of (7.8) and (7.13)
Letting ε → 0+ and using continuity of the distribution of Sα yields
The converse inequality for the lower limit can be derived analogously. Thus,
| (7.14) |
The proof of Theorem 2.2 is complete.
Proof of Corollary 2.4.
The forms of the limit relations for Tn in our Theorem 2.2 and in the theorem on pp. 146–148 of [26] are the same; only the values of the constants differ. In view of this, the limit relations for Xk in our setting are obtained by copying the corresponding limit relations from the aforementioned theorem in [26].
7.3. Proof of Theorem 2.6 and Corollary 2.8
The proof follows the same path as that of Theorem 2.2. However, the appearance of nontrivial slowly varying factors leads to minor technical complications. We shall only give the weak convergence results explicitly (recall that in the formulation of Theorem 2.6 the normalizing and centering functions were not specified). Also, we shall check several claims in detail wherever we feel it is necessary.
According to Proposition 5.8 (parts (C3) and (C4)),
where α = β/2 in case (B2). Therefore, limit relation (7.1) holds with some a(t) and b(t). To identify them we need more notation. For α ∈ (1/2, 2), let cα(t) be any positive function satisfying . Further, assuming that α = 2 let r2(t) be any positive function satisfying . By Lemma 6.1.3 in [23], cα(t) and r2(t) are regularly varying at ∞ of indices 1/α and 1/2, respectively. For the latter, the fact is also needed that the function is slowly varying at ∞. Observe that the case α = 2 only arises under the assumptions (B1) which then ensure that . This in combination with the aforementioned lemma yields
| (7.15) |
Using again Theorem 3 on p. 580 and formula (8.15) on p. 315 in [14] we obtain
b(t) = cα(t) and a(t) = 0 if α ∈ (1/2, 1);
b(t) = c1(t) and if α = 1;
b(t) = cα(t) and if α ∈ (1, 2);
b(t) = r2(t) and if α = 2.
Case α ∈ (1/2, 1). Repeating verbatim the proof of Theorem 2.2 for the case α ∈ (0, 1) we obtain
| (7.16) |
Case α = 1. We need an analogue of relation (7.5): for r ∈ (1, 2], as n → ∞,
The first summand tends to zero in view of two facts: by the definition of c1(t) and which is a consequence of regular variation of c1(t). The second summand tends to zero because is slowly varying at ∞ as a composition of a slowly varying and a regularly varying function.
For Step 2 in the proof of Theorem 2.2 we need the following modified argument. In view of (ξ2) the function is regularly varying at ∞ of index −2 and can be finite or infinite. Therefore, Sn satisfies the central limit theorem with normalization sequence which is regularly varying at ∞ of index 1/2. Since c1(t) is regularly varying at ∞ of index 1 we infer
and thereupon
To pass from this limit relation to the final result
| (7.17) |
that is, to realize Step 3 in the proof of Theorem 2.2, one can mimic the proof of Theorem 2.2.
Case α ∈ (1, 2]. When implementing Step 2 in the case α = 2 one uses the fact that, according to (7.15), b(t) = r2(t) satisfies as n → ∞. Since the other parts of the proof of Theorem 2.2 do not require essential changes we arrive at
| (7.18) |
when α ∈ (1, 2), and
| (7.19) |
when α = 2. The proof of Theorem 2.6 is complete.
Proof of Corollary 2.8.
Since is an ‘inverse’ sequence for we can use a standard inversion technique (see, for instance, the proof of Theorem 7 in [13]) to pass from the distributional convergence of Tn, properly centered and normalized, as n → ∞ to that of Xk, again properly centered and normalized, as k → ∞. Additional complications arising in the case α = 1 can be handled with the help of arguments given in Section 3 of [1].
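The duality underlying this inversion technique can be recorded explicitly. Since the walk is nearest-neighbor, the first passage time Tk of the level k satisfies, pathwise,

```latex
\{ T_k \le n \} \;=\; \Bigl\{ \max_{0 \le m \le n} X_m \ge k \Bigr\},
\qquad k, n \in \mathbb{N},
```

so limit theorems for Tk, properly centered and normalized, translate into limit theorems for the running maximum of X, and hence for Xk itself once the gap between the two is controlled (as is done in [26] and, for α = 1, in [1]).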
Here are the limit relations for Xk, properly normalized and centered, as k → ∞ which correspond to (7.16), (7.17), (7.18) and (7.19):
if α ∈ (1/2, 1), then
| (7.20) |
if α = 1, then
| (7.21) |
where, with for t > 0 and ,
and
(we do not write 2bm(k) instead of 1 + 2bm(k) because the case is not excluded); if α ∈ (1, 2), then
| (7.22) |
if α = 2, then
| (7.23) |
The proof of Corollary 2.8 is complete.
7.4. Proof of auxiliary Lemmas 5.3, 5.5, 5.6 and 5.7
7.4.1. Proof of Lemma 5.3
Proof of Lemma 5.3. To prove (5.3) we first represent ZSn−1 as a sum of independent random variables
| (7.24) |
where is the number of progeny residing in the generation Sn − 1 of the jth particle in the generation Sn−1 and is the number of progeny residing in the generation Sn − 1 of the immigrants arriving in the generations Sn−1, … , Sn − 2. For later use, we note that, under ,
| (7.25) |
where ω is assumed independent of a Galton–Watson process with unit immigration and Geom(1/2) offspring distribution.
With the help of (7.24) we now write a standard decomposition for the number of particles in the generation Sn over the particles comprising the generation Sn−1 and their offspring
| (7.26) |
Here, the notation , , is self-explanatory, but for clarity we provide explicit definitions. The variable is the number of offspring of the ith particle in the generation Sn−1, . The variable is the number of particles in the generation Sn which are the progeny of the immigrants arriving in the generations Sn−1 through Sn − 2. Finally, is the number of offspring of the immigrant arriving in the generation Sn − 1. Observe that, under , , and are independent with distribution Geom(λn). In what follows, for simplicity we omit the superscripts (n): for instance, we write for and similarly for the other variables. The following formulas play an important role in the subsequent proof:
| (7.27) |
The two cases κ ∈ (0, 1] and κ ∈ (1, 2] should be treated separately.
Case κ ≤ 1. By Jensen’s inequality and subadditivity of the function on [0, ∞)
Taking the expectations we obtain
which entails (5.3).
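The subadditivity property just used is elementary but worth recording, since it reappears several times below:

```latex
% For \kappa \in (0, 1] the map t \mapsto t^{\kappa} is concave on [0, \infty)
% with value 0 at 0, hence subadditive:
(a + b)^{\kappa} \;\le\; a^{\kappa} + b^{\kappa}, \qquad a, b \ge 0,
% and, by induction, \bigl( \sum_{i} a_i \bigr)^{\kappa} \le \sum_{i} a_i^{\kappa}
% for any nonnegative a_i.
```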
Case κ ∈ (1, 2]. An application of conditional Jensen’s inequality yields
To estimate the conditional second moment we represent it as follows
Appealing now to (7.27) we conclude that
| (7.29) |
Plugging the last inequality into (7.28) and using subadditivity once again we obtain
| (7.30) |
Next, we check that
| (7.31) |
With the help of
which is a consequence of (7.25) and (6.12) we infer
A similar argument in combination with leads to the conclusion
Left with the proof of finiteness of the first term on the right-hand side we represent as a sum of independent random variables
where, for , is the number of progeny residing in the generation Sn − 1 of the immigrant arriving in the generation Sn − i. Under , , where ω is assumed independent of . With this at hand, an appeal to (6.12) yields
and . Here and hereafter, to ease the notation we write for . Finally,
which finishes the proof of (7.31).
Turning to the asymptotic behavior of which appears on the right-hand side of (7.30) we consider yet another two cases.
Case γ ≤ 1 in which . To see it, observe that when γ = 1 the inequality is strict because the assumption implies that the distribution of ρ is nondegenerate at 1. By the already proved inequality (5.3) for powers ≤ 1
which in combination with (7.31) shows that the expression in the parentheses in (7.30) is bounded. This ensures (5.3).
Case γ > 1. By the already proved inequality (5.3) for powers ≤ 1
where an = 1 or = n or depending on whether or or . Since in any event for , (7.30) entails
for some C1 > 0. Iterating this yields for some C2 > 1 and all , thereby finishing the proof of (5.3) in the case γ > 1 and in general.
To prove (5.4) we use a decomposition a.s. Inequality (5.3) tells us that we are left with checking that
Since, under , , where ω is assumed independent of , an application of Lemma 6.5 yields
for a positive constant C. The proof of Lemma 5.3 is complete.
7.4.2. Proof of Lemma 5.5
Proof of Lemma 5.5. We start by proving (5.5). Pick κ0 ∈ (0, κ), put p = κ/κ0 and choose q such that 1/p + 1/q = 1. Recall that . Hence, according to Lemma 5.3,
| (7.32) |
for a positive constant C, whence
by subadditivity (convexity) of when κ ∈ (0, 1] (κ ∈ (1, 2]). By Lemma 4.1, for all and positive constants C1 and C2. With these at hand, an application of Hölder’s inequality yields
The proof of (5.5) is complete.
Turning to the proof of (5.6) we shall only show that
| (7.33) |
for a positive constant C. Formula (5.6) then follows with the help of the same argument (involving Hölder’s inequality) that we used while proving (5.5).
For i ≥ 2 and , denote by the number of progeny in the generations Si−1 + 1, … , Si − 1 of the jth particle in the generation Si−1, so that
Under , for i ≥ 2, where we set Y crit(1, 0) = 0 and ω is assumed independent of . In particular, according to (6.12)
We shall treat the cases κ ∈ (0, 1] and κ ∈ (1, 2] separately.
Case κ ∈ (0, 1]. Under , for , is independent of . This in combination with (7.34) proves that
Therefore, we obtain
having utilized Jensen’s inequality, (7.32) and the fact that ξi and are independent.
Case κ ∈ (1, 2]. Another application of Jensen’s inequality in combination with (7.34), (7.32) and subadditivity of on [0, ∞) yields, for i ≥ 2,
for a positive constant C. The proof of (7.33) is complete. □
7.4.3. Proof of Lemma 5.6
We follow the method invented by Kesten et al. [26]. While some parts of the proofs given in [26] can be transferred directly to our setting, the others require additional work. We do not present all the details of the proof, focusing instead on the main differences.
We begin with a brief overview of the arguments leading to the claim of Lemma 5.6. Given a large positive constant A, put
Thus, we observe the process up to the first time j when it exceeds the level A and then put σ = i for the smallest index i satisfying Si ≥ j. The following decomposition holds
where is the number of particles in the generation Sσ plus their total progeny, and, for , is the total progeny in the generations Si + 1, Si + 2, … of the immigrants arriving in the generations Si−1, … , Si − 1.
We intend to prove that the first, second and fourth summands on the right-hand side of this decomposition are negligible in a sense to be made precise, so that
In view of the definition of Sσ and the fact that for A as above one can expect that . We shall demonstrate that the variable is related to a random difference equation whose tail behavior determines that of .
To realize the programme just outlined we need two auxiliary results.
Lemma 7.1.
Assume that the assumptions of Lemma 5.6 hold. Then, for any A > 0, as x → ∞,
| (7.35) |
Proof. We only give a proof for the first summand in (7.35). The second summand can be treated along similar lines.
The random variable τ1 has a finite exponential moment by Lemma 4.1. Furthermore, τ1 does not depend on the future of the sequence . Therefore, the assumption ensures that
| (7.36) |
by Lemma A.1.
Write, for x > 0,
and observe that, in view of (7.36), the first summand on the right-hand side is as x → ∞. To estimate the second term we use a decomposition
where, for , Vi is the number of progeny in the generations Sτ1−1 + 1, … , Sτ1 − 1 of the ith particle in the generation Sτ1−1. We claim that
| (7.37) |
For the proof, note that , where is assumed independent of . Consequently, we obtain with the help of Jensen’s inequality and the inequality for which is a consequence of (6.12)
where the last inequality is secured by (7.36).
With (7.37) at hand, we immediately conclude that
because V1, V2, … are identically distributed. The proof of Lemma 7.1 is complete.
Before formulating another auxiliary result we recall from Section 3.2 the notation , where Z(1, i) is the number of progeny residing in the ith generation of the first immigrant, so that Y1 is the total progeny of the first immigrant.
Lemma 7.2.
Suppose that the assumptions of Lemma 5.6 hold. Let be a sequence of -independent copies of Y1. Then there exists a constant C > 0 such that
Proof. For , put
Recall from Section 3.3 that the random variable so defined is called a perpetuity. The Kesten-Grincevičius-Goldie theorem says that if (P1) holds and , then, for all ,
for some positive constant C which does not depend on k.
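For the reader’s convenience, here is a standard textbook formulation of the Kesten–Grincevičius–Goldie theorem (cf. [16, 17, 24] and Chapter 2 of [4]); the hypotheses below are only indicative of the condition denoted (P1) in the text, which is not reproduced here:

```latex
% Let (A, B) be a random vector with A > 0 a.s. and suppose that, for some \alpha > 0,
% \mathbb{E} A^{\alpha} = 1, \quad \mathbb{E} A^{\alpha} \log^{+} A < \infty,
% \quad \mathbb{E} |B|^{\alpha} < \infty,
% and that the law of \log A is nonarithmetic.  Then the perpetuity
X \;=\; \sum_{k \ge 1} A_1 \cdots A_{k-1} B_k
% (with (A_k, B_k) iid copies of (A, B)) satisfies
\mathbb{P}(X > x) \;\sim\; C x^{-\alpha}, \qquad x \to \infty,
% for some constant C \ge 0, positive under mild nondegeneracy assumptions.
```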
Put Z(1, 0) := 1. For , denote by Z1(1, i), Z2(1, i), … -independent copies of Z(1, i). Recall that Sk = Sk−1 + ξk and write
Our proof will be based on the following decomposition which holds a.s.
Formula (7.38) implies that, for , , whence
Since
and and (Zj(1, Sk), Zj(1, Sk−1), ρk) are independent for each we obtain with the help of (7.39), for x > 0,
Here and hereafter, const denotes a constant which may differ from one appearance to another. To estimate the last term, observe that the equality
implies that, under , is the sum of iid centered random variables. In particular, conditioning on the environment,
With this at hand an application of conditional Jensen’s inequality yields, for ,
and, under , are independent copies of Z(Sk−1, Sk) which are also independent of Z(1, Sk−1). Hence,
Observe that, under ,
where are -independent random variables with Geom(λk) distribution, and ω is assumed independent of . This in combination with for fixed j ≥ i ≥ 1 and (6.12) gives, for ,
Equality (7.41) together with the last formula and subadditivity of on [0, ∞) enables us to conclude that
To obtain the last inequality we have utilized which is secured by the assumption and the inequality which is a consequence of (P1).
To estimate we proceed similarly but use additionally Markov’s inequality
For and 1 ≤ i ≤ Z(1, Sk−1), take the ith particle among the progeny in the generation Sk−1 of the first immigrant and denote by the number of progeny residing in the generation Sk of the chosen particle. Then
Furthermore, under , are independent random variables which are independent of Z(1, Sk−1) and have the same distribution as . Here, as usual, ω is assumed independent of . Invoking (6.12) we infer and further
The proof of Lemma 7.2 is complete.
Proof of Lemma 5.6. Lemma 7.1 implies that the contribution of particles residing in the generations 1, 2, … , Sσ − 1 is negligible in the sense that
| (7.42) |
Next we prove that
| (7.43) |
This means that the contribution of the total progeny of immigrants arriving in the generations is negligible whenever A is sufficiently large.
The random variables are identically distributed and, for each , the random variables and are independent. Therefore,
| (7.44) |
having utilized (7.40). Further, observe that is the sum of -independent copies of Y1 = Y (1, ∞) which are also -independent of . Hence, using Lemma 7.2 yields
for some positive constant C. The assumptions and guarantee by Lemma 5.3. Continuing (7.44) we obtain
for a positive constant C1, and (7.43) follows on letting A → ∞ and recalling that by Lemma 4.1.
Summarizing, it remains to show that , x → ∞, where C2(α) does not depend on A. This can be accomplished by comparing on the event with along the lines of Lemmas 4 and 6 in [26]. We omit the details.
7.4.4. Proof of Lemma 5.7
Proof of Lemma 5.7. Recall that
According to Lemma 5.6,
By the same reasoning as in the proof of Proposition 5.8 (part (C1)), Lemma 5.2 in combination with Lemma 4.1 and Lemma 5.1 entails
Thus to prove the lemma it suffices to check that
| (7.45) |
see, for example, Lemma B.6.1 in [4].
For the proof of (7.45) we need a number of auxiliary limit relations. First, according to Lemma 4.1 there exists a constant C1 > 0 such that
| (7.46) |
Further, we claim that for any δ ∈ (0, 1) and large enough x the following inequalities hold uniformly in
| (7.47) |
| (7.48) |
| (7.49) |
where u ∧ v := min(u, v) and ε1 := (α(1 − δ)) ∧ (αδ/2) > 0.
Proof of (7.47). Fix any s > 0 that satisfies δs > α + ε1. Recall that, under , , where ω is assumed independent of . This in combination with Markov’s inequality yields
having utilized boundedness of for , see Lemma 6.5.
Proof of (7.48). For fixed , ξk is independent of . Using this, Lemma 5.6 and the assumptions of Lemma 5.7 we conclude that
Proof of (7.49). Observing that, for every fixed , is independent of and invoking Lemma 5.3 with κ = 3α/4 we obtain with the help of Markov’s inequality
Combining (7.46), (7.47), (7.48) and (7.49) yields, for any δ ∈ (0, 1),
Now (7.45) follows if we can show that for some δ ∈ (0, 1) the following inequality holds uniformly in k
for large enough x and some ε2 > 0 to be specified below, and that
Proof of (7.50). Observe that
where, for and , denotes the number of progeny residing in the generations Sk−1 + 1 through Sk of the ith particle in the generation Sk−1. Clearly, for fixed , are independent of and have the same distribution as
where and are assumed independent of have Geom(λk) distribution and, given (ξk, ρk), they are independent of . In particular, in view of (6.12). With this at hand we obtain
for , large enough x and any r ∈ (0, 1], having utilized conditional Jensen’s inequality for the penultimate step. By assumption and for some γ ∈ (α, 2α). Taking r ∈ (0, γ) and applying Hölder’s inequality with parameters γ/(γ − r) and γ/r we arrive at
Pick any ρ ∈ (0, (1 − α/γ)/(2 + α)) and then any r ∈ (0, γ ∧ ((1 − α/γ − ρ(2 + α))/(ρ(2 − α/γ)))). Setting now δ = ρr (so that δ ∈ (0, 1)) we obtain (7.50) with ε2 := −α − 2δ + r(1 − 2δ) + (1 − δ)α(1 − r/γ). Throughout the rest of the proof δ always denotes the number chosen above.
Proof of (7.51). For and , denote by the total progeny of the ith particle in the generation Sk. Further, for and j ≥ k + 2, denote by the number of progeny in the generations Sj−1, Sj−1 + 1, … , Sj − 1 of the immigrants arriving in the generations Sk, Sk + 1, … , Sj−1 − 1. Then
and thereupon, for x > 0,
Since, for fixed , is independent of ξk we obtain with the help of a crude estimate
and Lemma 5.6
for large enough x. Of course, this entails as x → ∞.
To estimate I1(x) we note that, for fixed , under , are independent copies of Y (1, ∞). Furthermore, these random variables are -independent of and ξk. Invoking Lemma 7.2 and conditional Jensen’s inequality yields
Inequality (7.29) was obtained in the proof of Lemma 5.3 under the assumption κ ∈ (1, 2]. However, by the same reasoning it also holds for κ ∈ (0, 2]. Using (7.29) in combination with the fact that ξ ≥ 1 a.s. and subadditivity of we infer
and thereupon
by Lemma 5.3 and the assumption for some ε > 0. The latter entails
The proof of Lemma 5.7 is complete.
Acknowledgment
We thank the two anonymous referees for a number of useful suggestions and Vitali Wachtel for bringing the article [28] to our attention. D. Buraczewski and P. Dyszewski were partially supported by the National Science Center, Poland (Sonata Bis, grant number DEC-2014/14/E/ST1/00588). A. Marynych was partially supported by the Return Fellowship of the Alexander von Humboldt Foundation. A part of this work was done while A. Iksanov and A. Marynych were visiting Wroclaw in February 2018. They gratefully acknowledge the hospitality and financial support.
A. Appendix
Lemma A.1 is an important ingredient in the proof of Proposition 5.8, part (C1). In its formulation we use the notion of a random variable which does not depend on the future of a sequence of random variables. The corresponding definition can be found at the beginning of Section 5.
Lemma A.1.
Let be a sequence of iid nonnegative random variables and T a nonnegative integer-valued random variable which does not depend on the future of the sequence . Assume that for some s > 0 and that for some λ > 0. Then .
Proof. Set R0 := 0 and Ri := θ1 + … + θi for . By assumption, for fixed , θi is independent of .
The result is trivial when s ∈ (0, 1]. Indeed, we use subadditivity of on [0, ∞) together with the aforementioned independence to conclude that
Assume now that s > 1. Invoking the inequality
which is secured by the mean value theorem for differentiable functions we obtain
Iterating this yields
Therefore, it is enough to check that
Using once again the aforementioned independence together with the inequality
where Cs := max(2s−2, 1), we infer
Left with checking convergence of the series we appeal to Hölder’s inequality in conjunction with convexity of on [0, ∞) to get
Since decreases at least exponentially in i, is the general term of a convergent series. The proof of Lemma A.1 is complete.
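As an illustration of Lemma A.1 (our own numerical sketch, in the simplest special case where T is fully independent of the sequence (θi)), take θi iid Bernoulli(1/2) and T geometric on {0, 1, 2, …} with parameter p. Then E exp(sRT) = Σi≥0 p(1 − p)^i m(s)^i = p/(1 − (1 − p)m(s)), where m(s) = E exp(sθ1) = (1 + e^s)/2, and this is finite whenever (1 − p)m(s) < 1; the names m, p, s below are ours:

```python
import math

def theta_mgf(s):
    """mgf of theta_1 ~ Bernoulli(1/2): E exp(s*theta_1) = (1 + e^s)/2."""
    return (1.0 + math.exp(s)) / 2.0

def stopped_sum_mgf(s, p, n_terms=2000):
    """E exp(s*R_T) for T ~ Geometric(p) on {0, 1, 2, ...}, independent of
    (theta_i), computed by truncating the series over the events {T = i}."""
    m = theta_mgf(s)
    if (1.0 - p) * m >= 1.0:
        raise ValueError("series diverges: no finite exponential moment at this s")
    return sum(p * (1.0 - p) ** i * m ** i for i in range(n_terms))

def stopped_sum_mgf_closed(s, p):
    """Closed form p / (1 - (1-p) m(s)) of the same quantity."""
    m = theta_mgf(s)
    return p / (1.0 - (1.0 - p) * theta_mgf(s))
```

The truncated series and the closed form agree to high accuracy, confirming that the randomly stopped sum RT inherits a finite exponential moment from the exponential moment of θ1 and the exponential tail of T, exactly as Lemma A.1 asserts in greater generality.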
The remaining part of the Appendix is concerned with the proof of Lemma 4.1. In essence the lemma follows from the arguments presented by Key [27] who considered a model very similar to ours. For and 1 ≤ k ≤ n, set
and observe that, under , are independent. The following representation holds
which shows that is a branching process in a random environment with a random number of immigrants in the kth generation. The basic observation for what follows is that has a structure similar to that of the branching process investigated by Key [27]. The main difference manifests itself in the term , which is absent from Key’s model. It is curious that the branching process in [27] is similar to ours in that the immigrants arriving in the generation n only affect the system by their offspring residing in the generation n + 1. In particular, neither Key’s process nor ours counts immigrants, whereas does.
Even though and Key’s process are slightly different, it is natural to expect that sufficient conditions ensuring finiteness of power and exponential moments of the first extinction time should be similar. While demonstrating that this is indeed the case we shall only point out the principal changes with respect to Key’s arguments.
Denote by
and
the quenched reproduction and immigration distribution in the generation n, respectively. It can be checked that the mean of the quenched reproduction distribution is
and that the quenched expected number of immigrants is
Lemma A.2.
Assume that and . Then, for , exists and defines a probability distribution on . If additionally
| (A.1) |
then π(0) > 0.
Sketch of proof. As far as the first claim is concerned, the proofs of Lemmas 2.1, 2.2, 3.1, 3.2 in [27] only require inessential changes concerning the range of summation. The second claim follows after a minor alteration, namely the term q(n, k) appearing in the proof of Theorem 3.3 in [27] should be changed to
The sequence must be positive, which justifies condition (A.1). The corresponding condition in [27] is slightly different.
We are ready to prove Lemma 4.1.
Proof of Lemma 4.1.
The present proof is very similar to that of Theorem 4.2 in [27]. Put
and
which may be finite or infinite. While finiteness of is equivalent to V (1) < ∞, finiteness of some exponential moment of τ1 is equivalent to V (x) < ∞ for some x > 1.
For , put
(with the usual convention that ) and note that h(k, n) = h(1, n − k + 1) for 1 ≤ k ≤ n. Now we use a decomposition
in combination with
which holds for 1 ≤ k ≤ n to obtain
This convolution equation is equivalent to
(the possibility that both sides are infinite is not excluded), where
Now follows from
once we can show that π(0) > 0. To this end, we recall that is governed by a geometric distribution, whence
and
These inequalities ensure (A.1) and thereupon π(0) > 0 by Lemma A.2.
To prove finiteness of some exponential moment pick δ ∈ (0, 1) such that
Existence of such a δ is justified by assumptions and the Cauchy-Schwarz inequality. In view of
we infer that the radius of convergence of H is greater than one. This in combination with H(1) < 1 implies that H(x) < 1 and thereupon V (x) < ∞ for some x > 1.
Footnotes
In some cases we also need additional technical assumptions concerning the joint distribution of ρ and ξ, for instance, . These will be stated explicitly in the corresponding theorems.
Contributor Information
Dariusz Buraczewski, Mathematical Institute, University of Wroclaw, 50-384 Wroclaw, Poland.
Piotr Dyszewski, Mathematical Institute, University of Wroclaw, 50-384 Wroclaw, Poland.
Alexander Iksanov, Faculty of Computer Science and Cybernetics, Taras Shevchenko National University of Kyiv, 01601 Kyiv, Ukraine.
Alexander Marynych, Faculty of Computer Science and Cybernetics, Taras Shevchenko National University of Kyiv, 01601 Kyiv, Ukraine.
Alexander Roitershtein, Department of Mathematics, Iowa State University, Ames, IA 50011, USA.
References
- [1] Anderson KK and Athreya KB. A note on conjugate Π-variation and a weak limit theorem for the number of renewals. Statist. Probab. Lett., 6:151–154, 1988.
- [2] Bingham NH, Goldie CM and Teugels JL. Regular variation. Cambridge University Press, 1989.
- [3] Bouchet É, Sabot C and dos Santos RS. A quenched functional central limit theorem for random walks in random environments under (T)γ. Stoch. Proc. Appl., 126(4):1206–1225, 2016.
- [4] Buraczewski D, Damek E and Mikosch T. Stochastic models with power-law tails. The equation X = AX + B. Springer Series in Operations Research and Financial Engineering. Springer, 2016.
- [5] Buraczewski D and Dyszewski P. Precise large deviations for random walk in random environment. Electron. J. Probab., 23(114):1–26, 2018.
- [6] Buraczewski D, Dyszewski P, Iksanov A and Marynych A. Random walks in a strongly sparse random environment. arXiv preprint 1903.02972, 2019.
- [7] Comets F, Gantert N and Zeitouni O. Quenched, annealed and functional large deviations for one-dimensional random walk in random environment. Probab. Theory Related Fields, 118(1):65–114, 2000.
- [8] Damek E and Kolodziejek B. A renewal theorem and supremum of a perturbed random walk. Electron. Commun. Probab., 23(82):1–13, 2018.
- [9] Dembo A, Peres Y and Zeitouni O. Tail estimates for one-dimensional random walk in random environment. Comm. Math. Phys., 181(3):667–683, 1996.
- [10] Denisov D, Foss S and Korshunov D. Asymptotics of randomly stopped sums in the presence of heavy tails. Bernoulli, 16(4):971–994, 2010.
- [11] Dolgopyat D and Goldsheid I. Quenched limit theorems for nearest neighbour random walks in 1D random environment. Comm. Math. Phys., 315(1):241–277, 2012.
- [12] Enriquez N, Sabot C and Zindy O. Limit laws for transient random walks in random environment on Z. Annales de l’institut Fourier, 59:2469–2508, 2009.
- [13] Feller W. Fluctuation theory of recurrent events. Trans. Amer. Math. Soc., 67(1):98–119, 1949.
- [14] Feller W. An introduction to probability theory and its applications, Vol. II. 2nd edition. Wiley, 1971.
- [15] Gantert N and Zeitouni O. Quenched sub-exponential tail estimates for one-dimensional random walk in random environment. Comm. Math. Phys., 194(1):177–190, 1998.
- [16] Goldie CM. Implicit renewal theory and tails of solutions of random equations. Ann. Appl. Probab., 1(1):126–166, 1991.
- [17] Grincevičius AK. The continuity of the distribution of a certain sum of dependent variables that is connected with independent walks on lines. Teor. Verojatnost. i Primenen., 19:163–168, 1974.
- [18] Grincevičius AK. On a limit distribution for a random walk on lines. Litovsk. Mat. Sb., 15(4):79–91, 1975.
- [19] Greven A and den Hollander F. Large deviations for a random walk in random environment. Ann. Probab., 22(3):1381–1428, 1994.
- [20] Grey DR. Regular variation in the tail behaviour of solutions of random difference equations. Ann. Appl. Probab., 4(1):169–183, 1994.
- [21] Gut A. Stopped random walks: limit theorems and applications. 2nd edition. Springer, 2009.
- [22] Harris TE. First passage and recurrence distributions. Trans. Amer. Math. Soc., 73(3):471–486, 1952.
- [23] Iksanov A. Renewal theory for perturbed random walks and similar processes. Birkhäuser, 2016.
- [24] Kesten H. Random difference equations and renewal theory for products of random matrices. Acta Math., 131:207–248, 1973.
- [25] Kesten H. The limit distribution of Sinaĭ’s random walk in random environment. Phys. A, 138(1–2):299–309, 1986.
- [26] Kesten H, Kozlov MV and Spitzer F. A limit law for random walk in a random environment. Compositio Math., 30:145–168, 1975.
- [27] Key ES. Limiting distributions and regeneration times for multitype branching processes with immigration in a random environment. Ann. Probab., 15(1):344–353, 1987.
- [28] Korshunov DA. An analog of Wald’s identity for random walks with infinite mean. Siberian Math. J., 50(4):663–666, 2009.
- [29] Kozlov MV. Random walk in a one-dimensional random medium. Theory Probab. Appl., 18(2):387–388, 1974.
- [30] Matzavinos A, Roitershtein A and Seol Y. Random walks in a sparse random environment. Electron. J. Probab., 21, paper no. 72, 2016.
- [31] Meyer P-A. Probability and potentials. Blaisdell Publishing Co. Ginn and Co., Waltham, Mass.-Toronto, Ont.-London, 1966.
- [32] Pakes AG. Further results on the critical Galton–Watson process with immigration. J. Austral. Math. Soc., 13:277–290, 1972.
- [33] Pisztora A and Povel T. Large deviation principle for random walk in a quenched random environment in the low speed regime. Ann. Probab., 27(3):1389–1413, 1999.
- [34] Pisztora A, Povel T and Zeitouni O. Precise large deviation estimates for a one-dimensional random walk in a random environment. Probab. Theory Related Fields, 113(2):191–219, 1999.
- [35] Sinaĭ YaG. The limit behavior of a one-dimensional random walk in a random environment. Teor. Veroyatnost. i Primenen., 27(2):247–258, 1982.
- [36] Solomon F. Random walks in a random environment. Ann. Probab., 3:1–31, 1975.
- [37] Sznitman A and Zerner M. A law of large numbers for random walks in random environment. Ann. Probab., 27(4):1851–1869, 1999.
- [38] Varadhan SRS. Large deviations for random walks in a random environment. Comm. Pure Appl. Math., 56(8):1222–1245, 2003.
- [39] Zerner MPW. Lyapounov exponents and quenched large deviations for multidimensional random walk in random environment. Ann. Probab., 26(4):1446–1476, 1998.
- [40].Zeitouni O Random Walks in Random Environment. XXXI Summer School in Probability, (St. Flour, 2001). Lecture Notes in Math., 1837, Springer, 193–312, 2004. [Google Scholar]

