Abstract
Baum and Katz (Trans. Am. Math. Soc. 120:108-123, 1965) obtained convergence rates in the Marcinkiewicz-Zygmund law of large numbers. Their result has been extended to short-range dependent linear processes by many authors. In this paper, we extend the result of Baum and Katz to long-range dependent linear processes. As a corollary, we obtain convergence rates in the Marcinkiewicz-Zygmund law of large numbers for short-range dependent linear processes.
Keywords: linear process, convergence rate, Marcinkiewicz-Zygmund law of large numbers
Introduction
There is an extensive literature on convergence rates in the Marcinkiewicz-Zygmund law of large numbers; see, for example, Alf [2], Alsmeyer [3], Baum and Katz [1], Heyde and Rohatgi [4], Hu and Weber [5], and Rohatgi [6].
Baum and Katz [1] obtained the following convergence rates in the Marcinkiewicz-Zygmund law of large numbers.
Theorem 1.1
Baum and Katz [1]
Let $\alpha > 1/2$, $\alpha p \ge 1$, and let $\{X_n, n \ge 1\}$ be a sequence of independent and identically distributed (i.i.d.) random variables. Then $E|X_1|^p < \infty$ and $EX_1 = 0$ imply
$$\sum_{n=1}^{\infty} n^{\alpha p - 2} P\Bigl( \max_{1 \le k \le n} \Bigl| \sum_{i=1}^{k} X_i \Bigr| > \epsilon n^{\alpha} \Bigr) < \infty \quad \text{for all } \epsilon > 0.$$
When $\alpha = 1$, the cases $p = 2$ and $p \ge 1$ had already been proved by Hsu and Robbins [7] and Katz [8], respectively.
Let $\{\varepsilon_i, i \in \mathbb{Z}\}$ be a sequence of i.i.d. random variables and $\{a_i, i \in \mathbb{Z}\}$ be a sequence of real numbers. Here and in the following, $\mathbb{Z}$ denotes the set of all integers. Then $\{X_n, n \ge 1\}$ is called a linear process or an infinite order moving average process if it is defined by
$$X_n = \sum_{i=-\infty}^{\infty} a_i \varepsilon_{n-i}, \quad n \ge 1. \tag{1.1}$$
If $\sum_{i=-\infty}^{\infty} |a_i| < \infty$, then $\{X_n\}$ has short memory or is short-range dependent. If $\sum_{i=-\infty}^{\infty} |a_i| = \infty$, then $\{X_n\}$ has long memory or is long-range dependent (see Chapter 3 in Giraitis et al. [9]).
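As a computational illustration of the definition above, the sketch below simulates a (truncated) two-sided linear process $X_t = \sum_i a_i \varepsilon_{t-i}$. The geometric and polynomial coefficient sequences used here are illustrative assumptions, not the coefficient sequences studied later in the paper.

```python
import random

def linear_process(coeffs, n, rng, trunc=500):
    """Simulate X_1, ..., X_n for the two-sided linear process
    X_t = sum_i a_i * eps_{t-i}, truncating the sum at |i| <= trunc.
    `coeffs(i)` returns the coefficient a_i; innovations are i.i.d. N(0, 1)."""
    # Innovations eps_t for t = 1 - trunc, ..., n + trunc.
    eps = [rng.gauss(0.0, 1.0) for _ in range(n + 2 * trunc + 1)]
    def e(t):
        return eps[t - (1 - trunc)]
    return [sum(coeffs(i) * e(t - i) for i in range(-trunc, trunc + 1))
            for t in range(1, n + 1)]

# Short-range example: a_i = 2^{-|i|}, so sum |a_i| < infinity.
short = lambda i: 2.0 ** (-abs(i))
# Long-range example: a_i = (1 + |i|)^{-0.7}, so sum |a_i| = infinity.
long_range = lambda i: (1.0 + abs(i)) ** (-0.7)

rng = random.Random(0)
xs_short = linear_process(short, 200, rng)
xs_long = linear_process(long_range, 200, rng)
```

The truncation level `trunc` is a simulation device only; the process itself is an infinite series.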
In the short-range dependent case, Koopmans [10] showed that if $\varepsilon_1$ has a moment generating function, then the strong law of large numbers for the linear process holds with an exponential convergence rate. Hanson and Koopmans [11] generalized this result to a class of linear processes of independent but non-identically distributed random variables and to arbitrary subsequences of . Li et al. [12] extended the theorem of Katz [8] to the setting of short-range dependent linear processes.
Theorem 1.2
Li et al. [12]
Let . Let be an absolutely summable sequence of real numbers. Suppose that is the linear process of a sequence of i.i.d. random variables with mean zero and . Then
Note that Theorem 1.2 corresponds to Theorem 1.1 with . Zhang [13] extended Theorem 1.1 with to the short-range dependent linear process of a sequence of identically distributed φ-mixing random variables. Since independent random variables are also φ-mixing, it follows from Zhang's [13] theorem that Theorem 1.2 also holds for .
In this paper, we obtain convergence rates in the Marcinkiewicz-Zygmund law of large numbers for long-range dependent linear processes of i.i.d. random variables. For convenience of notation, let
where . In the long-range dependent case, Characiejus and Račkauskas [14] obtained a convergence rate in the Marcinkiewicz-Zygmund law of large numbers for a linear process slightly different from (1.1), defined by
| 1.2 |
where if .
Theorem 1.3
Characiejus and Račkauskas [14]
Let be defined as above and . Let be a sequence of real numbers such that
where if . Assume that
If and , then
| 1.3 |
The above theorem shows a convergence rate in the Marcinkiewicz-Zygmund weak law of large numbers with the norming sequence .
We now compare Theorem 1.3 with Theorem 1.1. Since Theorem 1.3 deals only with the case , it is interesting to prove that Theorem 1.3 holds for the case . When , Theorem 1.1 requires a finite pth moment condition, but Theorem 1.3 requires more than a finite pth moment. To apply Theorem 1.3, it is necessary to estimate . If is an absolutely summable sequence, then we have, by the result of Burton and Dehling [15] (see also Lemma 2.4), that for any
and hence (1.3) holds with replaced by . However, for the long-range dependent case, it is not easy to estimate .
In this paper, we extend Theorem 1.1 to the long-range dependent linear processes. As a corollary, we obtain a long-range dependent setting of Theorem 1.2. Further, we propose a method to estimate for the long-range dependent case.
Throughout this paper, C denotes a positive constant which may vary at each occurrence. For events A and B, denotes the indicator function of the event A, and .
Convergence of long-range dependent linear processes
In this section, we extend Theorem 1.1 to the long-range dependent linear processes. To prove the main results, we need the following lemmas. The first is the von Bahr-Esseen inequality (see von Bahr and Esseen [16]). The second is known as the Fuk-Nagaev inequality (see Corollary 1.8 in Nagaev [17]).
Lemma 2.1
Let be a sequence of independent random variables with and for some . Then, for all ,
where is a positive constant depending only on t.
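The displayed bound of Lemma 2.1 is not reproduced in this extraction. In its classical form, the von Bahr-Esseen inequality bounds $E|\sum_{i=1}^{n} X_i|^t$ by $2\sum_{i=1}^{n} E|X_i|^t$ for independent mean-zero $X_i$ and $1 \le t \le 2$. A minimal Monte Carlo sanity check of that classical bound, with i.i.d. Uniform(-1, 1) variables as an illustrative choice:

```python
import random

def vbe_check(n=20, t=1.5, reps=20000, seed=1):
    """Monte Carlo sanity check of E|S_n|^t <= 2 * sum_i E|X_i|^t for
    independent mean-zero X_i and 1 <= t <= 2 (classical von Bahr-Esseen
    constant).  The X_i here are i.i.d. Uniform(-1, 1)."""
    rng = random.Random(seed)
    lhs_acc = 0.0   # accumulates |S_n|^t
    one_acc = 0.0   # accumulates |X_1|^t
    for _ in range(reps):
        xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        lhs_acc += abs(sum(xs)) ** t
        one_acc += abs(xs[0]) ** t
    lhs = lhs_acc / reps               # estimate of E|S_n|^t
    rhs = 2.0 * n * (one_acc / reps)   # estimate of 2 * n * E|X_1|^t
    return lhs, rhs

lhs, rhs = vbe_check()
```

With these parameters the left-hand side is several times smaller than the right-hand side, as the inequality predicts.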
Lemma 2.2
Let be a sequence of independent random variables with . Then, for any and ,
The following lemma is well known and can be easily proved by using a standard method.
Lemma 2.3
Let and ζ be a random variable. Then the following statements hold.
- (i) If , then .
- (ii) If , then .
- (iii) If , then .
- (iv) If , then .
The following lemma is useful for estimating when the sequence is absolutely summable. However, it is not applicable to the long-range dependent case.
Lemma 2.4
Burton and Dehling [15]
Let be an absolutely convergent series of real numbers with . Then, for any ,
where .
We now state and prove our main results. The first theorem treats the case .
Theorem 2.1
Let and . Let be a sequence of real numbers with
Suppose that is the linear process of a sequence of i.i.d. random variables with mean zero and . Furthermore, assume that one of the following conditions holds.
- If , then
- If , then
and
Then
Proof
(1) For each , we have
and hence,
| 2.1 |
By the Markov inequality, Lemmas 2.1 and 2.3, we have
Thus the first series on the right-hand side of (2.1) converges.
Similarly, by the Markov inequality, Lemmas 2.1 and 2.3, we have
Hence the second series on the right-hand side of (2.1) also converges.
(2) For each , we have
and hence,
| 2.2 |
By the Markov inequality, Lemmas 2.1 and 2.3, we have
Thus the first series on the right-hand side of (2.2) converges.
We next prove that the second series on the right-hand side of (2.2) converges. We have by Lemma 2.2 that for ,
| 2.3 |
Hence it is enough to show that the two series on the right-hand side of (2.3) converge.
If we take , then we have by Lemma 2.3 that
Hence the first series on the right-hand side of (2.3) converges.
Finally, we show that the second series on the right-hand side of (2.3) converges. Since , we have that
which implies that
□
The next theorem treats the case .
Theorem 2.2
Let . Let be a sequence of real numbers with
Suppose that is the linear process of a sequence of i.i.d. random variables with mean zero and . Furthermore, assume that
and
Then
Proof
The proof is similar to that of Theorem 2.1(1). We treat the two cases and separately.
For the case , we have by Lemmas 2.1 and 2.3 that
As in the proof of Theorem 2.1(1), we have that
For the case , we rewrite as
If , then . It follows by Lemma 2.4 that
as . Hence
The rest of the proof is the same as that of the previous case and is omitted. □
The following corollary extends Theorem 1.1 to short-range dependent linear processes.
Corollary 2.1
Let , , and . Let be an absolutely summable sequence of real numbers. Suppose that is the linear process of a sequence of i.i.d. random variables with mean zero and . Then
Proof
We first note that
If , then we take θ such that . Then
By Lemma 2.4, for any , there exist positive constants and independent of n such that
Then all conditions on in Theorems 2.1 and 2.2 are easily satisfied. Hence the proof follows from Theorems 2.1 and 2.2. □
Remark 2.1
In Corollary 2.1, the case (i.e., and ) is not considered. In fact, Corollary 2.1 does not hold for this case (see Sung [18]).
An estimation of for the long-range dependent case
As we have seen in Sections 1 and 2, it is easy to estimate in the short-range dependent case, but not when the sequence is not absolutely summable. In this section, we propose a method for estimating in the long-range dependent case. For simplicity, we consider non-increasing sequences of positive numbers. To guarantee the finiteness of , we may assume, without loss of generality, that if and .
Lemma 3.1
Let . Let be a non-increasing sequence of positive real numbers satisfying if and . Then
Proof
Since if and , we get that
Similarly,
Thus the proof is completed. □
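The displayed bounds in Lemma 3.1 and its proof are elided in this extraction; the proof rests on comparing partial sums of a non-increasing positive sequence with integrals. That standard comparison, $\int_1^{n+1} x^{-s}\,dx \le \sum_{i=1}^{n} i^{-s} \le 1 + \int_1^{n} x^{-s}\,dx$, can be checked numerically for the illustrative choice $a_i = i^{-s}$ (an assumption, not the lemma's sequence):

```python
def partial_sum(s, n):
    """Partial sum of the non-increasing positive sequence a_i = i**(-s)."""
    return sum(i ** (-s) for i in range(1, n + 1))

def power_integral(s, a, b):
    """Integral of x**(-s) over [a, b], assuming s != 1."""
    return (b ** (1.0 - s) - a ** (1.0 - s)) / (1.0 - s)

s, n = 0.7, 1000
lower = power_integral(s, 1, n + 1)    # integral from 1 to n+1
upper = 1.0 + power_integral(s, 1, n)  # a_1 plus integral from 1 to n
total = partial_sum(s, n)
```

The same monotonicity argument works for any non-increasing positive sequence; the power sequence is chosen only because its integral has a closed form.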
The following lemma can be found in Martikainen [19].
Lemma 3.2
Martikainen [19]
Let be a non-decreasing sequence of positive real numbers. Then
Similarly, we can obtain a counterpart of Lemma 3.2.
Lemma 3.3
Let be a non-decreasing sequence of positive real numbers. Then
Proof
The proof is similar to that of Lemma 3.2 and is omitted. □
Lemma 3.4
Let and let be a sequence of positive real numbers satisfying , , and
Then the following statements hold:
- (i) .
- (ii) .
Proof
The proof of (i) follows from Lemma 3.2. The proof of (ii) follows from Lemma 3.3. □
Now we present a method to estimate for the long-range dependent case.
Theorem 3.1
Let , and let be a sequence of positive real numbers satisfying the same conditions as in Lemma 3.4. Then there exist positive constants and independent of n such that
where if .
Proof
By the condition , we have , which implies . The upper bound of follows by Lemmas 3.1 and 3.4. For the lower bound, we have by that
It follows that for all large n
Since ,
Hence the lower bound follows from Lemma 3.1. □
Finally, we give two examples of long-range dependent linear processes.
Example 3.1
Let if and if . Then the series diverges, but converges if . Observe that
If , then
By Lemma 3.1, for any , there exist positive constants and independent of n such that
Let be the long-range dependent linear process of a sequence of i.i.d. random variables with mean zero and , where and . Then all conditions of Theorem 2.1 are easily satisfied. By Theorem 2.1,
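The coefficients in Example 3.1 are elided in this extraction. A natural sequence with the stated behavior (the series diverges, while the q-th power series converges for q > 1) is the hypothetical choice $a_i = 1/i$ for $i \ge 1$; the sketch below illustrates the slow, logarithmic growth of the partial sums alongside the bounded square series:

```python
def harmonic(n):
    """Partial sum of the hypothetical coefficients a_i = 1/i, i >= 1:
    non-summable, but sum a_i**q converges for every q > 1."""
    return sum(1.0 / i for i in range(1, n + 1))

h = harmonic(10000)                              # grows like log n
sq = sum(1.0 / i ** 2 for i in range(1, 10001))  # bounded (tends to pi^2/6)
```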
Example 3.2
Let . Let if and if , where . Then the series diverges, but converges if . Since , we have by Theorem 3.1 that
Let be the long-range dependent linear process of a sequence of i.i.d. random variables with mean zero and . Take θ such that . Then all conditions of Theorem 2.2 are easily satisfied. By Theorem 2.2,
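The coefficients in Example 3.2 are likewise not reproduced. Under the hypothetical choice $a_i = i^{-s}$ with $1/2 < s < 1$, the partial sums grow polynomially, like $n^{1-s}/(1-s)$, which is the kind of two-sided polynomial bound Theorem 3.1 provides; a numeric check of that growth rate:

```python
def power_partial_sum(s, n):
    """Partial sum of the hypothetical coefficients a_i = i**(-s); for
    1/2 < s < 1 the series diverges while sum a_i**q converges for q > 1/s."""
    return sum(i ** (-s) for i in range(1, n + 1))

s, n = 0.75, 100000
predicted = n ** (1.0 - s) / (1.0 - s)  # predicted growth n^{1-s}/(1-s)
exact = power_partial_sum(s, n)
ratio = exact / predicted               # should be close to 1 for large n
```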
Acknowledgements
The research of Pingyan Chen is supported by the National Natural Science Foundation of China (No. 11271161). The research of Soo Hak Sung is supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2017R1D1A1B03029898).
Authors’ contributions
All authors read and approved the manuscript.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Baum LE, Katz M. Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 1965;120:108–123. doi: 10.1090/S0002-9947-1965-0198524-1.
- 2.Alf C. Rates of convergence for the laws of large numbers for independent Banach-valued random variables. J. Multivar. Anal. 1975;5:322–329. doi: 10.1016/0047-259X(75)90051-2.
- 3.Alsmeyer G. Convergence rates in the law of large numbers for martingales. Stoch. Process. Appl. 1990;36:181–194. doi: 10.1016/0304-4149(90)90090-F.
- 4.Heyde CC, Rohatgi VK. A pair of complementary theorems on convergence rates in the law of large numbers. Proc. Camb. Philos. Soc. 1967;63:73–82. doi: 10.1017/S0305004100040901.
- 5.Hu T-C, Weber NC. On the rate of convergence in the strong law of large numbers for arrays. Bull. Aust. Math. Soc. 1992;45:479–482. doi: 10.1017/S0004972700030379.
- 6.Rohatgi VK. Convergence rates in the law of large numbers II. Proc. Camb. Philos. Soc. 1968;64:485–488. doi: 10.1017/S0305004100043103.
- 7.Hsu PL, Robbins H. Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA. 1947;33:25–31. doi: 10.1073/pnas.33.2.25.
- 8.Katz M. The probability in the tail of a distribution. Ann. Math. Stat. 1963;34:312–318. doi: 10.1214/aoms/1177704268.
- 9.Giraitis L, Koul HL, Surgailis D. Large Sample Inference for Long Memory Processes. London: Imperial College Press; 2012.
- 10.Koopmans LH. An exponential bound on the strong law of large numbers for linear stochastic processes with absolutely convergent coefficients. Ann. Math. Stat. 1961;32:583–586. doi: 10.1214/aoms/1177705063.
- 11.Hanson DL, Koopmans LH. On the convergence rate of the law of large numbers for linear combinations of independent random variables. Ann. Math. Stat. 1965;36:559–564. doi: 10.1214/aoms/1177700167.
- 12.Li D, Rao MB, Wang X. Complete convergence of moving average processes. Stat. Probab. Lett. 1992;14:111–114. doi: 10.1016/0167-7152(92)90073-E.
- 13.Zhang L. Complete convergence of moving average processes under dependence assumptions. Stat. Probab. Lett. 1996;30:165–170. doi: 10.1016/0167-7152(95)00215-4.
- 14.Characiejus V, Račkauskas A. Weak law of large numbers for linear processes. Acta Math. Hung. 2016;149:215–232. doi: 10.1007/s10474-016-0603-4.
- 15.Burton RM, Dehling H. Large deviations for some weakly dependent random processes. Stat. Probab. Lett. 1990;9:397–401. doi: 10.1016/0167-7152(90)90031-2.
- 16.von Bahr B, Esseen CG. Inequalities for the rth absolute moment of a sum of random variables, 1 ≤ r ≤ 2. Ann. Math. Stat. 1965;36:299–303. doi: 10.1214/aoms/1177700291.
- 17.Nagaev SV. Large deviations of sums of independent random variables. Ann. Probab. 1979;7:745–789. doi: 10.1214/aop/1176994938.
- 18.Sung SH. A note on the complete convergence of moving average processes. Stat. Probab. Lett. 2009;79:1387–1390. doi: 10.1016/j.spl.2009.03.001.
- 19.Martikainen AI. Criteria for strong convergence of normalized sums of independent random variables and their applications. Theory Probab. Appl. 1985;29:519–533. doi: 10.1137/1129065.
