J. Inequal. Appl. 2018 Jun 1;2018(1):121. doi: 10.1186/s13660-018-1710-2

On complete convergence and complete moment convergence for weighted sums of ρ-mixing random variables

Pingyan Chen, Soo Hak Sung
PMCID: PMC5982509  PMID: 29901025

Abstract

Let $r\ge 1$, $1\le p<2$, and $\alpha,\beta>0$ with $1/\alpha+1/\beta=1/p$. Let $\{a_{nk}, 1\le k\le n, n\ge 1\}$ be an array of constants satisfying $\sup_{n\ge 1} n^{-1}\sum_{k=1}^{n}|a_{nk}|^{\alpha}<\infty$, and let $\{X_n, n\ge 1\}$ be a sequence of identically distributed $\rho$-mixing random variables. For each of the three cases $\alpha<rp$, $\alpha=rp$, and $\alpha>rp$, we provide moment conditions under which

$$\sum_{n=1}^{\infty} n^{r-2} P\Big\{\max_{1\le m\le n}\Big|\sum_{k=1}^{m} a_{nk}X_k\Big|>\varepsilon n^{1/p}\Big\}<\infty,\quad \forall\varepsilon>0.$$

We also provide moment conditions under which

$$\sum_{n=1}^{\infty} n^{r-2-q/p} E\Big(\max_{1\le m\le n}\Big|\sum_{k=1}^{m} a_{nk}X_k\Big|-\varepsilon n^{1/p}\Big)_{+}^{q}<\infty,\quad \forall\varepsilon>0,$$

where $q>0$. Our results improve and generalize those of Sung (Discrete Dyn. Nat. Soc. 2010:630608, 2010) and Wu et al. (Stat. Probab. Lett. 127:56–66, 2017).

Keywords: ρ-mixing random variables, Complete convergence, Complete moment convergence, Weighted sum

Introduction

Weighted sums of random variables arise naturally in the estimation of least-squares regression coefficients in linear regression and in nonparametric curve estimation, so it is interesting and meaningful to study their limit behavior.

We recall the concept of ρ-mixing random variables.

Definition 1.1

Let $\{X_n, n\ge 1\}$ be a sequence of random variables defined on a probability space $(\Omega,\mathcal{F},P)$. For any $S\subset \mathbb{N}=\{1,2,\ldots\}$, define $\mathcal{F}_S=\sigma(X_i, i\in S)$. Given two σ-algebras $\mathcal{A}$ and $\mathcal{B}$ in $\mathcal{F}$, put

$$\rho(\mathcal{A},\mathcal{B})=\sup\Big\{\frac{EXY-EX\,EY}{\sqrt{E(X-EX)^{2}\,E(Y-EY)^{2}}}: X\in L_2(\mathcal{A}),\ Y\in L_2(\mathcal{B})\Big\}.$$

Define the $\rho$-mixing coefficients by

$$\rho_n=\sup\{\rho(\mathcal{F}_S,\mathcal{F}_T): S,T\subset\mathbb{N}\ \text{with}\ \operatorname{dist}(S,T)\ge n\},$$

where $\operatorname{dist}(S,T)=\inf\{|s-t|: s\in S,\ t\in T\}$. Obviously, $0\le\rho_{n+1}\le\rho_n\le\rho_0=1$. The sequence $\{X_n,n\ge1\}$ is called $\rho$-mixing if there exists $k\in\mathbb{N}$ such that $\rho_k<1$.

A number of limit results for ρ-mixing sequences of random variables have been established by many authors. We refer to Bradley [3] for the central limit theorem, Bryc and Smolenski [4], Peligrad and Gut [5], and Utev and Peligrad [6] for the moment inequalities, and Sung [1] for the complete convergence of weighted sums.

Special cases of weighted sums have been studied by Bai and Cheng [7], Chen et al. [8], Choi and Sung [9], Chow [10], Cuzick [11], Sung [12], Thrum [13], and others. In this paper, we focus on arrays of real weights $\{a_{nk}, 1\le k\le n, n\ge1\}$ satisfying

$$\sup_{n\ge1} n^{-1}\sum_{k=1}^{n}|a_{nk}|^{\alpha}<\infty \tag{1.1}$$

for some $\alpha>0$. Under condition (1.1), many authors have studied the limit behavior of weighted sums of random variables.
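As a quick numeric illustration (not from the paper): condition (1.1) asks that the row averages $n^{-1}\sum_{k}|a_{nk}|^{\alpha}$ stay bounded in $n$. The weights $a_{nk}=(k/n)^{1/\alpha}$ and the helper `weight_norm` below are assumed examples chosen only for illustration.

```python
# Check condition (1.1) numerically for an assumed triangular array of weights.
def weight_norm(a_row, alpha):
    """Return n^{-1} * sum_{k=1}^{n} |a_nk|^alpha for one row of the array."""
    n = len(a_row)
    return sum(abs(a) ** alpha for a in a_row) / n

alpha = 2.0
# Row n of the array: a_nk = (k/n)^{1/alpha}, k = 1, ..., n.
norms = [weight_norm([(k / n) ** (1 / alpha) for k in range(1, n + 1)], alpha)
         for n in range(1, 200)]
# Here n^{-1} * sum_k |a_nk|^alpha = (n + 1) / (2n) <= 1, so the supremum in (1.1) is finite.
print(max(norms))  # → 1.0 (attained at n = 1; the row norms decrease toward 1/2)
```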

Let $\{X, X_n, n\ge1\}$ be a sequence of independent and identically distributed random variables. When $\alpha=2$, Chow [10] showed that the Kolmogorov strong law of large numbers

$$n^{-1}\sum_{k=1}^{n} a_{nk}X_k\to 0\quad\text{a.s.} \tag{1.2}$$

holds if $EX=0$ and $EX^{2}<\infty$. Cuzick [11] generalized Chow's result by showing that (1.2) also holds if $EX=0$ and $E|X|^{\beta}<\infty$ for $\beta>0$ with $1/\alpha+1/\beta=1$. Bai and Cheng [7] proved that the Marcinkiewicz–Zygmund strong law of large numbers

$$n^{-1/p}\sum_{k=1}^{n} a_{nk}X_k\to 0\quad\text{a.s.} \tag{1.3}$$

holds if $EX=0$ and $E|X|^{\beta}<\infty$, where $1\le p<2$ and $1/\alpha+1/\beta=1/p$. Chen and Gan [14] showed that if $0<p<1$ and $E|X|^{\beta}<\infty$, then (1.3) still holds without the independence assumption.

Under condition (1.1), the convergence rate in the strong law of large numbers has also been discussed. Chen [15] showed that

$$\sum_{n=1}^{\infty} n^{r-2} P\Big\{\max_{1\le m\le n}\Big|\sum_{k=1}^{m}a_{nk}X_k\Big|>\varepsilon n^{1/p}\Big\}<\infty,\quad\forall\varepsilon>0, \tag{1.4}$$

if $\{X,X_n,n\ge1\}$ is a sequence of identically distributed negatively associated (NA) random variables with $EX=0$ and $E|X|^{(r-1)\beta}<\infty$, where $r>1$, $1\le p<2$, $1/\alpha+1/\beta=1/p$, and $\alpha<rp$. The main tool used in Chen [15] is the exponential inequality for NA random variables (see Theorem 3 in Shao [16]). Sung [1] proved (1.4) for a sequence of identically distributed $\rho$-mixing random variables with $EX=0$ and $E|X|^{rp}<\infty$, where $\alpha>rp$, by using the Rosenthal moment inequality. Since the Rosenthal moment inequality for NA random variables has been established by Shao [16], Sung's result also holds in the NA case. However, for $\rho$-mixing random variables it is not known whether the corresponding exponential inequality holds, so the method of Chen [15] does not work for $\rho$-mixing random variables. On the other hand, the method of Sung [1] is complex and not applicable to the case $\alpha\le rp$.

In this paper, we show that (1.4) holds for a sequence of identically distributed ρ-mixing random variables with suitable moment conditions. The moment conditions for the cases α<rp and α>rp are optimal. The moment conditions for α=rp are nearly optimal. Although the main tool is the Rosenthal moment inequality for ρ-mixing random variables, our method is simpler than that of Sung [1] even in the case α>rp.

We also extend (1.4) to complete moment convergence, that is, we provide moment conditions under which

$$\sum_{n=1}^{\infty} n^{r-2-q/p} E\Big(\max_{1\le m\le n}\Big|\sum_{k=1}^{m}a_{nk}X_k\Big|-\varepsilon n^{1/p}\Big)_{+}^{q}<\infty,\quad\forall\varepsilon>0, \tag{1.5}$$

where $q>0$.

Note that if (1.5) holds for some q>0, then (1.4) also holds. The proof is well known.
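For completeness, a sketch of that well-known implication: by the Markov inequality applied to the positive part,

```latex
% If (1.5) holds for some q > 0, then for every eps > 0,
\begin{aligned}
P\Bigl\{\max_{1\le m\le n}\Bigl|\sum_{k=1}^{m}a_{nk}X_k\Bigr|>2\varepsilon n^{1/p}\Bigr\}
&\le P\Bigl\{\Bigl(\max_{1\le m\le n}\Bigl|\sum_{k=1}^{m}a_{nk}X_k\Bigr|-\varepsilon n^{1/p}\Bigr)_{+}>\varepsilon n^{1/p}\Bigr\}\\
&\le \varepsilon^{-q}n^{-q/p}\,E\Bigl(\max_{1\le m\le n}\Bigl|\sum_{k=1}^{m}a_{nk}X_k\Bigr|-\varepsilon n^{1/p}\Bigr)_{+}^{q},
\end{aligned}
% multiplying by n^{r-2} and summing over n yields (1.4) with 2*eps in place of eps.
```

Since $\varepsilon>0$ is arbitrary, (1.4) follows.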

Throughout this paper, $C$ denotes a positive constant whose value may differ from one appearance to the next. For events $A$ and $B$, we write $I(A,B)=I(A\cap B)$, where $I(A)$ is the indicator function of the event $A$.

Preliminary lemmas

To prove the main results, we need the following lemmas. The first is due to Utev and Peligrad [6].

Lemma 2.1

Let $q\ge2$, and let $\{X_n,n\ge1\}$ be a sequence of $\rho$-mixing random variables with $EX_n=0$ and $E|X_n|^{q}<\infty$ for every $n\ge1$. Then for all $n\ge1$,

$$E\max_{1\le m\le n}\Big|\sum_{k=1}^{m}X_k\Big|^{q}\le C_q\Big\{\sum_{k=1}^{n}E|X_k|^{q}+\Big(\sum_{k=1}^{n}E|X_k|^{2}\Big)^{q/2}\Big\},$$

where $C_q>0$ depends only on $q$ and the $\rho$-mixing coefficients.

Remark 2.1

By the Hölder inequality, (1.1) implies that

$$\sup_{n\ge1} n^{-1}\sum_{k=1}^{n}|a_{nk}|^{s}<\infty$$

for any $0<s\le\alpha$, and

$$\sup_{n\ge1} n^{-q/\alpha}\sum_{k=1}^{n}|a_{nk}|^{q}<\infty$$

for any $q>\alpha$. These properties will be used in the proofs of the following lemmas and main results.
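A sketch of the two bounds, with $A^{\alpha}$ denoting the supremum in (1.1) (so that $|a_{nk}|\le An^{1/\alpha}$):

```latex
% For 0 < s <= alpha, Hoelder's inequality with exponents alpha/s and alpha/(alpha-s) gives
n^{-1}\sum_{k=1}^{n}|a_{nk}|^{s}
\le\Bigl(n^{-1}\sum_{k=1}^{n}|a_{nk}|^{\alpha}\Bigr)^{s/\alpha}\Bigl(n^{-1}\sum_{k=1}^{n}1\Bigr)^{(\alpha-s)/\alpha}
\le A^{s}.
% For q > alpha, using |a_{nk}| <= A n^{1/alpha}:
n^{-q/\alpha}\sum_{k=1}^{n}|a_{nk}|^{q}
\le n^{-q/\alpha}\bigl(An^{1/\alpha}\bigr)^{q-\alpha}\sum_{k=1}^{n}|a_{nk}|^{\alpha}
= A^{q-\alpha}\,n^{-1}\sum_{k=1}^{n}|a_{nk}|^{\alpha}\le A^{q}.
```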

Lemma 2.2

Let $r\ge1$, $0<p<2$, $\alpha>0$, $\beta>0$ with $1/\alpha+1/\beta=1/p$, and let $X$ be a random variable. Let $\{a_{nk},1\le k\le n,n\ge1\}$ be an array of constants satisfying (1.1). Then

$$\sum_{n=1}^{\infty}n^{r-2}\sum_{k=1}^{n}P\{|a_{nk}X|>n^{1/p}\}\le\begin{cases}CE|X|^{(r-1)\beta}&\text{if }\alpha<rp,\\ CE|X|^{(r-1)\beta}\log(1+|X|)&\text{if }\alpha=rp,\\ CE|X|^{rp}&\text{if }\alpha>rp.\end{cases}\tag{2.1}$$

Proof

Case 1: $\alpha\le rp$. We observe by the Markov inequality that, for any $s>0$,

$$\begin{aligned}P\{|a_{nk}X|>n^{1/p}\}&=P\{|a_{nk}X|>n^{1/p},|X|>n^{1/\beta}\}+P\{|a_{nk}X|>n^{1/p},|X|\le n^{1/\beta}\}\\&\le n^{-\alpha/p}|a_{nk}|^{\alpha}E|X|^{\alpha}I(|X|>n^{1/\beta})+n^{-s/p}|a_{nk}|^{s}E|X|^{s}I(|X|\le n^{1/\beta}).\end{aligned}\tag{2.2}$$

It is easy to show that

$$\begin{aligned}\sum_{n=1}^{\infty}n^{r-2}n^{-\alpha/p}\Big(\sum_{k=1}^{n}|a_{nk}|^{\alpha}\Big)E|X|^{\alpha}I(|X|>n^{1/\beta})&\le C\sum_{n=1}^{\infty}n^{r-1-\alpha/p}E|X|^{\alpha}I(|X|>n^{1/\beta})\\&\le\begin{cases}CE|X|^{(r-1)\beta}&\text{if }\alpha<rp,\\CE|X|^{(r-1)\beta}\log(1+|X|)&\text{if }\alpha=rp.\end{cases}\end{aligned}\tag{2.3}$$

Taking $s>\max\{\alpha,(r-1)\beta\}$, we have that

$$\begin{aligned}\sum_{n=1}^{\infty}n^{r-2}n^{-s/p}\Big(\sum_{k=1}^{n}|a_{nk}|^{s}\Big)E|X|^{s}I(|X|\le n^{1/\beta})&\le C\sum_{n=1}^{\infty}n^{r-2-s/p+s/\alpha}E|X|^{s}I(|X|\le n^{1/\beta})\\&=C\sum_{n=1}^{\infty}n^{r-2-s/\beta}E|X|^{s}I(|X|\le n^{1/\beta})\le CE|X|^{(r-1)\beta},\end{aligned}\tag{2.4}$$

since $s>(r-1)\beta$. Then (2.1) holds by (2.2)–(2.4).

Case 2: $\alpha>rp$. The proof is similar to that of Case 1, but we use a different truncation for $X$. We observe by the Markov inequality that, for any $t>0$,

$$\begin{aligned}P\{|a_{nk}X|>n^{1/p}\}&=P\{|a_{nk}X|>n^{1/p},|X|>n^{1/p}\}+P\{|a_{nk}X|>n^{1/p},|X|\le n^{1/p}\}\\&\le n^{-t/p}|a_{nk}|^{t}E|X|^{t}I(|X|>n^{1/p})+n^{-\alpha/p}|a_{nk}|^{\alpha}E|X|^{\alpha}I(|X|\le n^{1/p}).\end{aligned}\tag{2.5}$$

Taking $0<t<rp$, we have that

$$\sum_{n=1}^{\infty}n^{r-2}n^{-t/p}\Big(\sum_{k=1}^{n}|a_{nk}|^{t}\Big)E|X|^{t}I(|X|>n^{1/p})\le C\sum_{n=1}^{\infty}n^{r-1-t/p}E|X|^{t}I(|X|>n^{1/p})\le CE|X|^{rp}.\tag{2.6}$$

It is easy to show that

$$\sum_{n=1}^{\infty}n^{r-2}n^{-\alpha/p}\Big(\sum_{k=1}^{n}|a_{nk}|^{\alpha}\Big)E|X|^{\alpha}I(|X|\le n^{1/p})\le C\sum_{n=1}^{\infty}n^{r-1-\alpha/p}E|X|^{\alpha}I(|X|\le n^{1/p})\le CE|X|^{rp},\tag{2.7}$$

since $\alpha>rp$. Then (2.1) holds by (2.5)–(2.7). □
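The final bound in (2.3) (and its analogues below) rests on the standard slicing argument; a sketch for the case $\alpha<rp$, using $\alpha/p=1+\alpha/\beta$ and $r-\alpha/p>0$:

```latex
\begin{aligned}
\sum_{n=1}^{\infty}n^{r-1-\alpha/p}E|X|^{\alpha}I\bigl(|X|>n^{1/\beta}\bigr)
&=\sum_{i=1}^{\infty}E|X|^{\alpha}I\bigl(i^{1/\beta}<|X|\le(i+1)^{1/\beta}\bigr)\sum_{n=1}^{i}n^{r-1-\alpha/p}\\
&\le C\sum_{i=1}^{\infty}i^{\,r-1-\alpha/\beta}E|X|^{\alpha}I\bigl(i^{1/\beta}<|X|\le(i+1)^{1/\beta}\bigr)
\le CE|X|^{(r-1)\beta},
\end{aligned}
% since i^{r-1-alpha/beta} = (i^{1/beta})^{(r-1)beta-alpha} <= |X|^{(r-1)beta-alpha}
% on the i-th slice, where (r-1)beta - alpha > 0 because alpha < rp < (r-1)beta.
```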

Lemma 2.3

Let $r\ge1$, $0<p<2$, $\alpha>0$, $\beta>0$ with $1/\alpha+1/\beta=1/p$, and let $X$ be a random variable. Let $\{a_{nk},1\le k\le n,n\ge1\}$ be an array of constants satisfying (1.1). Then, for any $s>\max\{\alpha,(r-1)\beta\}$,

$$\sum_{n=1}^{\infty}n^{r-2-s/p}\sum_{k=1}^{n}E|a_{nk}X|^{s}I(|a_{nk}X|\le n^{1/p})\le\begin{cases}CE|X|^{(r-1)\beta}&\text{if }\alpha<rp,\\CE|X|^{(r-1)\beta}\log(1+|X|)&\text{if }\alpha=rp,\\CE|X|^{rp}&\text{if }\alpha>rp.\end{cases}\tag{2.8}$$

Proof

Case 1: $\alpha\le rp$. By (2.3) and (2.4) we get that

$$\begin{aligned}&\sum_{n=1}^{\infty}n^{r-2-s/p}\sum_{k=1}^{n}E|a_{nk}X|^{s}I(|a_{nk}X|\le n^{1/p})\\&\quad=\sum_{n=1}^{\infty}n^{r-2-s/p}\sum_{k=1}^{n}E|a_{nk}X|^{s}I(|a_{nk}X|\le n^{1/p},|X|>n^{1/\beta})+\sum_{n=1}^{\infty}n^{r-2-s/p}\sum_{k=1}^{n}E|a_{nk}X|^{s}I(|a_{nk}X|\le n^{1/p},|X|\le n^{1/\beta})\\&\quad\le\sum_{n=1}^{\infty}n^{r-2-s/p}n^{(s-\alpha)/p}\sum_{k=1}^{n}E|a_{nk}X|^{\alpha}I(|X|>n^{1/\beta})+\sum_{n=1}^{\infty}n^{r-2-s/p}\sum_{k=1}^{n}E|a_{nk}X|^{s}I(|X|\le n^{1/\beta})\\&\quad\le\begin{cases}CE|X|^{(r-1)\beta}&\text{if }\alpha<rp,\\CE|X|^{(r-1)\beta}\log(1+|X|)&\text{if }\alpha=rp.\end{cases}\end{aligned}$$

Case 2: $\alpha>rp$. Taking $0<t<rp$, we have by (2.6) and (2.7) that

$$\begin{aligned}&\sum_{n=1}^{\infty}n^{r-2-s/p}\sum_{k=1}^{n}E|a_{nk}X|^{s}I(|a_{nk}X|\le n^{1/p})\\&\quad=\sum_{n=1}^{\infty}n^{r-2-s/p}\sum_{k=1}^{n}E|a_{nk}X|^{s}I(|a_{nk}X|\le n^{1/p},|X|>n^{1/p})+\sum_{n=1}^{\infty}n^{r-2-s/p}\sum_{k=1}^{n}E|a_{nk}X|^{s}I(|a_{nk}X|\le n^{1/p},|X|\le n^{1/p})\\&\quad\le\sum_{n=1}^{\infty}n^{r-2-s/p}n^{(s-t)/p}\sum_{k=1}^{n}E|a_{nk}X|^{t}I(|X|>n^{1/p})+\sum_{n=1}^{\infty}n^{r-2-s/p}n^{(s-\alpha)/p}\sum_{k=1}^{n}E|a_{nk}X|^{\alpha}I(|X|\le n^{1/p})\\&\quad\le CE|X|^{rp}.\end{aligned}$$

Therefore (2.8) holds. □

The following lemma is a counterpart of Lemma 2.3; the truncation for $|a_{nk}X|$ is reversed.

Lemma 2.4

Let $q>0$, $r\ge1$, $0<p<2$, $\alpha>0$, $\beta>0$ with $1/\alpha+1/\beta=1/p$, and let $X$ be a random variable. Let $\{a_{nk},1\le k\le n,n\ge1\}$ be an array of constants satisfying (1.1). Then the following statements hold.

  1. If $\alpha<rp$, then
     $$\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|a_{nk}X|>n^{1/p})\le\begin{cases}CE|X|^{(r-1)\beta}&\text{if }q<(r-1)\beta,\\CE|X|^{(r-1)\beta}\log(1+|X|)&\text{if }q=(r-1)\beta,\\CE|X|^{q}&\text{if }q>(r-1)\beta.\end{cases}\tag{2.9}$$
  2. If $\alpha=rp$, then
     $$\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|a_{nk}X|>n^{1/p})\le\begin{cases}CE|X|^{(r-1)\beta}\log(1+|X|)&\text{if }q\le\alpha=rp,\\CE|X|^{q}&\text{if }q>\alpha=rp.\end{cases}\tag{2.10}$$
  3. If $\alpha>rp$, then
     $$\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|a_{nk}X|>n^{1/p})\le\begin{cases}CE|X|^{rp}&\text{if }q<rp,\\CE|X|^{rp}\log(1+|X|)&\text{if }q=rp,\\CE|X|^{q}&\text{if }q>rp.\end{cases}\tag{2.11}$$

Proof

Without loss of generality, we may assume that $n^{-1}\sum_{k=1}^{n}|a_{nk}|^{\alpha}\le1$ for all $n\ge1$. From this we have that $|a_{nk}|\le n^{1/\alpha}$ for all $1\le k\le n$ and $n\ge1$.

(1) In this case, we have that $\alpha<rp<(r-1)\beta$. If $0<q<\alpha$, then

$$\begin{aligned}\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|a_{nk}X|>n^{1/p})&\le\sum_{n=1}^{\infty}n^{r-2-q/p}n^{-(\alpha-q)/p}\sum_{k=1}^{n}E|a_{nk}X|^{\alpha}I(|a_{nk}X|>n^{1/p})\\&\le\sum_{n=1}^{\infty}n^{r-2-q/p}n^{-(\alpha-q)/p}\sum_{k=1}^{n}E|a_{nk}X|^{\alpha}I(n^{1/\alpha}|X|>n^{1/p})\\&\le C\sum_{n=1}^{\infty}n^{r-1-\alpha/p}E|X|^{\alpha}I(|X|>n^{1/\beta})\\&=C\sum_{i=1}^{\infty}E|X|^{\alpha}I(i^{1/\beta}<|X|\le(i+1)^{1/\beta})\sum_{n=1}^{i}n^{r-1-\alpha/p}\\&\le CE|X|^{(r-1)\beta}.\end{aligned}\tag{2.12}$$

If $q\ge\alpha$, then

$$\begin{aligned}\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|a_{nk}X|>n^{1/p})&\le\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(n^{1/\alpha}|X|>n^{1/p})\\&=\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|X|>n^{1/\beta})\\&\le C\sum_{n=1}^{\infty}n^{r-2-q/\beta}E|X|^{q}I(|X|>n^{1/\beta})\\&=C\sum_{i=1}^{\infty}E|X|^{q}I(i^{1/\beta}<|X|\le(i+1)^{1/\beta})\sum_{n=1}^{i}n^{r-2-q/\beta}\\&\le\begin{cases}CE|X|^{(r-1)\beta}&\text{if }\alpha\le q<(r-1)\beta,\\CE|X|^{(r-1)\beta}\log(1+|X|)&\text{if }q=(r-1)\beta,\\CE|X|^{q}&\text{if }q>(r-1)\beta.\end{cases}\end{aligned}\tag{2.13}$$

Combining (2.12) and (2.13) gives (2.9).

(2) In this case, we have that $\alpha=rp=(r-1)\beta$. If $q\le\alpha=rp=(r-1)\beta$, then

$$\begin{aligned}\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|a_{nk}X|>n^{1/p})&\le\sum_{n=1}^{\infty}n^{r-2-q/p}n^{-(\alpha-q)/p}\sum_{k=1}^{n}E|a_{nk}X|^{\alpha}I(n^{1/\alpha}|X|>n^{1/p})\\&\le C\sum_{n=1}^{\infty}n^{r-1-\alpha/p}E|X|^{\alpha}I(|X|>n^{1/\beta})\\&=C\sum_{n=1}^{\infty}n^{-1}E|X|^{\alpha}I(|X|>n^{1/\beta})\\&=C\sum_{i=1}^{\infty}E|X|^{\alpha}I(i^{1/\beta}<|X|\le(i+1)^{1/\beta})\sum_{n=1}^{i}n^{-1}\\&\le CE|X|^{(r-1)\beta}\log(1+|X|).\end{aligned}\tag{2.14}$$

If $q>\alpha=rp=(r-1)\beta$, then

$$\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|a_{nk}X|>n^{1/p})\le C\sum_{n=1}^{\infty}n^{r-2-q/p}n^{q/\alpha}E|X|^{q}\le CE|X|^{q}.\tag{2.15}$$

Combining (2.14) and (2.15) gives (2.10).

(3) In this case, we have that $(r-1)\beta<rp<\alpha$. If $q\le rp$, then

$$\begin{aligned}&\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|a_{nk}X|>n^{1/p})\\&\quad=\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|a_{nk}X|>n^{1/p},|X|>n^{1/p})+\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|a_{nk}X|>n^{1/p},|X|\le n^{1/p})\\&\quad\le\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|X|>n^{1/p})+\sum_{n=1}^{\infty}n^{r-2-q/p}n^{-(\alpha-q)/p}\sum_{k=1}^{n}E|a_{nk}X|^{\alpha}I(|X|\le n^{1/p})\\&\quad\le C\sum_{n=1}^{\infty}n^{r-1-q/p}E|X|^{q}I(|X|>n^{1/p})+C\sum_{n=1}^{\infty}n^{r-1-\alpha/p}E|X|^{\alpha}I(|X|\le n^{1/p})\\&\quad=C\sum_{i=1}^{\infty}E|X|^{q}I(i^{1/p}<|X|\le(i+1)^{1/p})\sum_{n=1}^{i}n^{r-1-q/p}+C\sum_{i=1}^{\infty}E|X|^{\alpha}I((i-1)^{1/p}<|X|\le i^{1/p})\sum_{n=i}^{\infty}n^{r-1-\alpha/p}\\&\quad\le\begin{cases}CE|X|^{rp}&\text{if }q<rp,\\CE|X|^{rp}\log(1+|X|)&\text{if }q=rp.\end{cases}\end{aligned}\tag{2.16}$$

If $rp<q<\alpha$, then

$$\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|a_{nk}X|>n^{1/p})\le C\sum_{n=1}^{\infty}n^{r-1-q/p}E|X|^{q}\le CE|X|^{q}.\tag{2.17}$$

If $q\ge\alpha$, then

$$\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X|^{q}I(|a_{nk}X|>n^{1/p})\le C\sum_{n=1}^{\infty}n^{r-2-q/p}n^{q/\alpha}E|X|^{q}\le CE|X|^{q},\tag{2.18}$$

since $q\ge\alpha>(r-1)\beta$.

Combining (2.16)–(2.18) gives (2.11). □

Lemma 2.5

Let $1\le p<2$, $\alpha>0$, $\beta>0$ with $1/\alpha+1/\beta=1/p$, and let $X$ be a random variable. Let $\{a_{nk},1\le k\le n,n\ge1\}$ be an array of constants satisfying (1.1). If $E|X|^{p}<\infty$, then

$$n^{-1/p}\sum_{k=1}^{n}E|a_{nk}X|I(|a_{nk}X|>n^{1/p})\to0\tag{2.19}$$

as $n\to\infty$, and hence, if in addition $EX=0$, then

$$n^{-1/p}\max_{1\le m\le n}\Big|\sum_{k=1}^{m}a_{nk}EXI(|a_{nk}X|\le n^{1/p})\Big|\to0\tag{2.20}$$

as $n\to\infty$.

Proof

Denote $A^{\alpha}=\sup_{n\ge1}n^{-1}\sum_{k=1}^{n}|a_{nk}|^{\alpha}$. Then $|a_{nk}|\le An^{1/\alpha}$ for all $1\le k\le n$ and $n\ge1$. It follows that

$$\begin{aligned}n^{-1/p}\sum_{k=1}^{n}E|a_{nk}X|I(|a_{nk}X|>n^{1/p})&\le n^{-1}\sum_{k=1}^{n}E|a_{nk}X|^{p}I(|a_{nk}X|>n^{1/p})\\&\le n^{-1}\Big(\sum_{k=1}^{n}|a_{nk}|^{p}\Big)E|X|^{p}I(A|X|>n^{1/\beta})\\&\le CE|X|^{p}I(A|X|>n^{1/\beta})\to0\end{aligned}\tag{2.21}$$

as $n\to\infty$. Hence (2.19) holds.

If, in addition, $EX=0$, then we get by (2.21) that

$$n^{-1/p}\max_{1\le m\le n}\Big|\sum_{k=1}^{m}a_{nk}EXI(|a_{nk}X|\le n^{1/p})\Big|=n^{-1/p}\max_{1\le m\le n}\Big|\sum_{k=1}^{m}a_{nk}EXI(|a_{nk}X|>n^{1/p})\Big|\le n^{-1/p}\sum_{k=1}^{n}E|a_{nk}X|I(|a_{nk}X|>n^{1/p})\to0$$

as $n\to\infty$. Hence (2.20) holds. □
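The convergence in (2.19) can be made concrete with an assumed example (ours, not the paper's): take $a_{nk}\equiv1$, $p=1$, and $X$ Pareto with density $2x^{-3}$ on $[1,\infty)$, so that $E|X|=2<\infty$ and $E|X|I(|X|>t)=2/t$ for $t\ge1$; the quantity in (2.19) then equals $2/n\to0$.

```python
# Exact evaluation of the quantity in (2.19) for the assumed example:
# a_nk = 1, p = 1, X Pareto with density 2 x^{-3} on [1, inf),
# for which E|X| I(|X| > t) = 2 / t when t >= 1.
def lemma25_quantity(n):
    """n^{-1/p} * sum_{k=1}^{n} E|a_nk X| I(|a_nk X| > n^{1/p}) with the choices above."""
    tail_mean = 2.0 / n                # E|X| I(|X| > n), exact for this Pareto density
    return (1.0 / n) * n * tail_mean   # n identical terms, scaled by n^{-1/p} = 1/n

values = [lemma25_quantity(n) for n in (1, 10, 100, 1000)]
print(values)  # → [2.0, 0.2, 0.02, 0.002], decreasing to 0 as Lemma 2.5 asserts
```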

The following lemma shows that if 0<p<1, then (2.20) holds without the condition EX=0.

Lemma 2.6

Let $0<p<1$, $\alpha>0$, $\beta>0$ with $1/\alpha+1/\beta=1/p$, and let $X$ be a random variable. Let $\{a_{nk},1\le k\le n,n\ge1\}$ be an array of constants satisfying (1.1). If $E|X|^{p}<\infty$, then

$$n^{-1/p}\sum_{k=1}^{n}E|a_{nk}X|I(|a_{nk}X|\le n^{1/p})\to0$$

as $n\to\infty$, and hence (2.20) holds.

Proof

Note that

$$\begin{aligned}n^{-1/p}\sum_{k=1}^{n}E|a_{nk}X|I(|a_{nk}X|\le n^{1/p})&=n^{-1/p}\sum_{k=1}^{n}E|a_{nk}X|I(|a_{nk}X|\le n^{1/p},|X|>n^{1/\beta})+n^{-1/p}\sum_{k=1}^{n}E|a_{nk}X|I(|a_{nk}X|\le n^{1/p},|X|\le n^{1/\beta})\\&\le n^{-1/p}n^{(1-p)/p}\sum_{k=1}^{n}E|a_{nk}X|^{p}I(|X|>n^{1/\beta})+n^{-1/p}\sum_{k=1}^{n}E|a_{nk}X|I(|X|\le n^{1/\beta})\\&\le CE|X|^{p}I(|X|>n^{1/\beta})+Cn^{-1/p+1/(\alpha\wedge1)}E|X|I(|X|\le n^{1/\beta})\\&\le CE|X|^{p}I(|X|>n^{1/\beta})+Cn^{-1/p+1/(\alpha\wedge1)+(1-p)/\beta}E|X|^{p}\to0\end{aligned}$$

as $n\to\infty$, since $-1/p+1/(\alpha\wedge1)+(1-p)/\beta=-p/\beta$ if $\alpha\le1$ and $-1/p+1/(\alpha\wedge1)+(1-p)/\beta=-(1-p)/\alpha$ if $\alpha>1$. □

Main results

We first present complete convergence for weighted sums of ρ-mixing random variables.

Theorem 3.1

Let $r\ge1$, $1\le p<2$, $\alpha>0$, $\beta>0$ with $1/\alpha+1/\beta=1/p$. Let $\{a_{nk},1\le k\le n,n\ge1\}$ be an array of constants satisfying (1.1), and let $\{X,X_n,n\ge1\}$ be a sequence of identically distributed $\rho$-mixing random variables. If

$$EX=0,\qquad\begin{cases}E|X|^{(r-1)\beta}<\infty&\text{if }\alpha<rp,\\E|X|^{(r-1)\beta}\log(1+|X|)<\infty&\text{if }\alpha=rp,\\E|X|^{rp}<\infty&\text{if }\alpha>rp,\end{cases}\tag{3.1}$$

then (1.4) holds.

Conversely, if (1.4) holds for any array $\{a_{nk},1\le k\le n,n\ge1\}$ satisfying (1.1) for some $\alpha>p$, then $EX=0$, $E|X|^{rp}<\infty$, and $E|X|^{(r-1)\beta}<\infty$.

Remark 3.1

When 0<p<1, (3.1) without the condition EX=0 implies (1.4). The proof is the same as that of Theorem 3.1 except that Lemma 2.5 is replaced by Lemma 2.6.

Remark 3.2

The case α>rp (r>1) of Theorem 3.1 corresponds to Theorem 2.2 of Sung [1], and the proof is much simpler than that of Sung [1]. Hence Theorem 3.1 generalizes the result of Sung [1].

Remark 3.3

Suppose that $r\ge1$, $1\le p<2$, $\alpha>0$, $\beta>0$ with $1/\alpha+1/\beta=1/p$. Then the case $\alpha<rp$ is equivalent to the case $rp<(r-1)\beta$, and in this case $\alpha<rp<(r-1)\beta$. The case $\alpha=rp$ is equivalent to the case $rp=(r-1)\beta$, and in this case $\alpha=rp=(r-1)\beta$. The case $\alpha>rp$ is equivalent to the case $rp>(r-1)\beta$, and in this case $\alpha>rp>(r-1)\beta$.

Remark 3.4

In the two cases $\alpha<rp$ and $\alpha>rp$, the moment conditions are necessary and sufficient, but in the case $\alpha=rp$ the moment condition $E|X|^{(r-1)\beta}\log(1+|X|)=E|X|^{rp}\log(1+|X|)<\infty$ is only sufficient for (1.4). It may be difficult to prove (1.4) under the necessary moment condition $E|X|^{rp}<\infty$. An and Yuan [17] proved (1.4) under the moment condition $E|X|^{rp}<\infty$ and the condition

$$\sup_{n\ge1}n^{-\delta}\sum_{k=1}^{n}|a_{nk}|^{rp}<\infty$$

for some $\delta\in(0,1)$. However, their result is not an extension of the classical one and is a particular case of Sung [1]. In fact, if we set $\alpha=rp/\delta$, then $\alpha>rp$, and (1.1) holds.
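A sketch of the last claim: writing $M=\sup_{n\ge1}n^{-\delta}\sum_{k=1}^{n}|a_{nk}|^{rp}$, so that $|a_{nk}|\le M^{1/(rp)}n^{\delta/(rp)}$, the choice $\alpha=rp/\delta$ gives

```latex
\sum_{k=1}^{n}|a_{nk}|^{rp/\delta}
=\sum_{k=1}^{n}|a_{nk}|^{rp}\,|a_{nk}|^{rp(1/\delta-1)}
\le Mn^{\delta}\cdot\bigl(M^{1/(rp)}n^{\delta/(rp)}\bigr)^{rp(1/\delta-1)}
=M^{1/\delta}\,n^{\delta+\delta(1/\delta-1)}
=M^{1/\delta}\,n,
% hence sup_n n^{-1} sum_k |a_nk|^{rp/delta} <= M^{1/delta},
% i.e., (1.1) holds with alpha = rp/delta > rp.
```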

Proof of Theorem 3.1

Sufficiency. For any $1\le k\le n$ and $n\ge1$, set

$$X_{nk}=a_{nk}X_{k}I(|a_{nk}X_{k}|\le n^{1/p}).$$

Note that

$$\Big\{\max_{1\le m\le n}\Big|\sum_{k=1}^{m}a_{nk}X_{k}\Big|>\varepsilon n^{1/p}\Big\}\subset\bigcup_{k=1}^{n}\{|a_{nk}X_{k}|>n^{1/p}\}\cup\Big\{\max_{1\le m\le n}\Big|\sum_{k=1}^{m}X_{nk}\Big|>\varepsilon n^{1/p}\Big\}.$$

Then by Lemmas 2.2 and 2.5, to prove (1.4), it suffices to prove that

$$\sum_{n=1}^{\infty}n^{r-2}P\Big\{\max_{1\le m\le n}\Big|\sum_{k=1}^{m}(X_{nk}-EX_{nk})\Big|>\varepsilon n^{1/p}\Big\}<\infty,\quad\forall\varepsilon>0.\tag{3.2}$$

When $r>1$, set $s\in(p,\min\{2,\alpha\})$ if $\alpha\le rp$ and $s\in(p,\min\{2,rp\})$ if $\alpha>rp$. (Note that, when $r=1$, we cannot choose such an $s$, since $\alpha>p=rp$ in that case.) Then $p<s<\min\{2,\alpha\}$, and $E|X|^{s}<\infty$ by Remark 3.3. Taking $q>\max\{2,\alpha,(r-1)\beta,2p(r-1)/(s-p)\}$, we have by the Markov inequality and Lemma 2.1 that

$$P\Big\{\max_{1\le m\le n}\Big|\sum_{k=1}^{m}(X_{nk}-EX_{nk})\Big|>\varepsilon n^{1/p}\Big\}\le Cn^{-q/p}\Big(\sum_{k=1}^{n}E(X_{nk}-EX_{nk})^{2}\Big)^{q/2}+Cn^{-q/p}\sum_{k=1}^{n}E|X_{nk}-EX_{nk}|^{q}.\tag{3.3}$$

Since $q>2p(r-1)/(s-p)$, we have that $r-2+q(1-s/p)/2<-1$. It follows that

$$\begin{aligned}\sum_{n=1}^{\infty}n^{r-2}n^{-q/p}\Big(\sum_{k=1}^{n}E(X_{nk}-EX_{nk})^{2}\Big)^{q/2}&\le\sum_{n=1}^{\infty}n^{r-2}n^{-q/p}\Big(\sum_{k=1}^{n}E|a_{nk}X_{k}|^{2}I(|a_{nk}X_{k}|\le n^{1/p})\Big)^{q/2}\\&\le\sum_{n=1}^{\infty}n^{r-2}n^{-q/p}\Big(n^{(2-s)/p}\sum_{k=1}^{n}E|a_{nk}X_{k}|^{s}I(|a_{nk}X_{k}|\le n^{1/p})\Big)^{q/2}\\&\le\sum_{n=1}^{\infty}n^{r-2}n^{-q/p}\Big(n^{(2-s)/p}\sum_{k=1}^{n}|a_{nk}|^{s}E|X|^{s}\Big)^{q/2}\\&\le C\sum_{n=1}^{\infty}n^{r-2+q(1-s/p)/2}<\infty.\end{aligned}\tag{3.4}$$

By Lemma 2.3 we have

$$\sum_{n=1}^{\infty}n^{r-2}n^{-q/p}\sum_{k=1}^{n}E|X_{nk}-EX_{nk}|^{q}\le C\sum_{n=1}^{\infty}n^{r-2}n^{-q/p}\sum_{k=1}^{n}E|a_{nk}X_{k}|^{q}I(|a_{nk}X_{k}|\le n^{1/p})<\infty.\tag{3.5}$$

Hence (3.2) holds by (3.3)–(3.5).

When $r=1$, we always have that $\alpha>p=rp$. If (1.1) holds for some $\alpha>0$, then (1.1) also holds for any $\alpha'$ with $0<\alpha'\le\alpha$ by Remark 2.1. Thus we may assume that $p<\alpha<2$. Taking $q=2$, we have by the Markov inequality and Lemmas 2.1 and 2.3 that

$$\sum_{n=1}^{\infty}n^{r-2}P\Big\{\max_{1\le m\le n}\Big|\sum_{k=1}^{m}(X_{nk}-EX_{nk})\Big|>\varepsilon n^{1/p}\Big\}\le C\sum_{n=1}^{\infty}n^{r-2}n^{-2/p}\sum_{k=1}^{n}E|a_{nk}X_{k}|^{2}I(|a_{nk}X_{k}|\le n^{1/p})<\infty.$$

Necessity. Set $a_{nk}=1$ for all $1\le k\le n$ and $n\ge1$. Then (1.4) can be rewritten as

$$\sum_{n=1}^{\infty}n^{r-2}P\Big\{\max_{1\le m\le n}\Big|\sum_{k=1}^{m}X_{k}\Big|>\varepsilon n^{1/p}\Big\}<\infty,\quad\forall\varepsilon>0,$$

which implies that $EX=0$ and $E|X|^{rp}<\infty$ (see Theorem 2 in Peligrad and Gut [5]). Now set $a_{nk}=0$ if $1\le k\le n-1$ and $a_{nn}=n^{1/\alpha}$. Then (1.4) can be rewritten as

$$\sum_{n=1}^{\infty}n^{r-2}P\{n^{1/\alpha}|X_{n}|>\varepsilon n^{1/p}\}<\infty,\quad\forall\varepsilon>0,$$

which is equivalent to $E|X|^{(r-1)\beta}<\infty$. The proof is completed. □

Now we extend Theorem 3.1 to complete moment convergence.

Theorem 3.2

Let $q>0$, $r\ge1$, $1\le p<2$, $\alpha>0$, $\beta>0$ with $1/\alpha+1/\beta=1/p$. Let $\{a_{nk},1\le k\le n,n\ge1\}$ be an array of constants satisfying (1.1), and let $\{X,X_n,n\ge1\}$ be a sequence of identically distributed $\rho$-mixing random variables. Assume that one of the following conditions holds.

  1. If $\alpha<rp$, then
     $$EX=0,\qquad\begin{cases}E|X|^{(r-1)\beta}<\infty&\text{if }q<(r-1)\beta,\\E|X|^{(r-1)\beta}\log(1+|X|)<\infty&\text{if }q=(r-1)\beta,\\E|X|^{q}<\infty&\text{if }q>(r-1)\beta.\end{cases}\tag{3.6}$$
  2. If $\alpha=rp$, then
     $$EX=0,\qquad\begin{cases}E|X|^{(r-1)\beta}\log(1+|X|)<\infty&\text{if }q\le\alpha=rp,\\E|X|^{q}<\infty&\text{if }q>\alpha=rp.\end{cases}\tag{3.7}$$
  3. If $\alpha>rp$, then
     $$EX=0,\qquad\begin{cases}E|X|^{rp}<\infty&\text{if }q<rp,\\E|X|^{rp}\log(1+|X|)<\infty&\text{if }q=rp,\\E|X|^{q}<\infty&\text{if }q>rp.\end{cases}\tag{3.8}$$

Then (1.5) holds.

Remark 3.5

As stated in the Introduction, if (1.5) holds for some $q>0$, then (1.4) also holds. If $\alpha<rp$, $EX=0$, and $E|X|^{(r-1)\beta}<\infty$, then (3.6) holds for some $0<q<(r-1)\beta$. If $\alpha=rp$, $EX=0$, and $E|X|^{(r-1)\beta}\log(1+|X|)<\infty$, then (3.7) holds for some $0<q\le\alpha$. If $\alpha>rp$, $EX=0$, and $E|X|^{rp}<\infty$, then (3.8) holds for some $0<q<rp$. Therefore the sufficiency part of Theorem 3.1 follows from Theorem 3.2.

Remark 3.6

The case $\alpha>rp$ of Theorem 3.2 corresponds to Theorems 3.1 and 3.2 of Wu et al. [2] combined. The condition on the weights $\{a_{nk}\}$ in Wu et al. [2] is

$$\sup_{n\ge1}n^{-1}\sum_{k=1}^{n}|a_{nk}|^{t}<\infty\quad\text{for some }t>\max\{rp,q\},$$

which is stronger than (1.1) with α>rp. Hence Theorem 3.2 generalizes and improves the results of Wu et al. [2].

Remark 3.7

In this paper, the ρ-mixing condition is only used in Lemma 2.1. Therefore our main results (Theorems 3.1 and 3.2) also hold for random variables satisfying Lemma 2.1.

Proof of Theorem 3.2

We apply Theorems 2.1 and 2.2 in Sung [18] with $X_{nk}=a_{nk}X_{k}$, $b_{n}=n^{r-2}$, and $a_{n}=n^{1/p}$. When the second moment of $X$ does not exist, we apply Theorem 2.1 in Sung [18]; it is easy to check that this theorem still holds for $0<q<1$. When the second moment of $X$ exists, we apply Theorem 2.2 in Sung [18].

(1) If $\alpha<rp$, then $\alpha<rp<(r-1)\beta$ by Remark 3.3. We first consider the case $q<(r-1)\beta$. In this case, the moment conditions are $EX=0$ and $E|X|^{(r-1)\beta}<\infty$. When $q<(r-1)\beta<2$, we prove (1.5) by using Theorem 2.1 in Sung [18]. To apply Theorem 2.1 in Sung [18], we take $s=2$. By Lemma 2.1,

$$E\max_{1\le m\le n}\Big|\sum_{k=1}^{m}(X_{nk}(x)-EX_{nk}(x))\Big|^{2}\le C\sum_{k=1}^{n}E|X_{nk}(x)|^{2},\quad\forall n\ge1,\ \forall x>0,$$

where $X_{nk}(x)=a_{nk}X_{k}I(|a_{nk}X_{k}|\le x^{1/q})+x^{1/q}I(a_{nk}X_{k}>x^{1/q})-x^{1/q}I(a_{nk}X_{k}<-x^{1/q})$. By Lemma 2.3,

$$\sum_{n=1}^{\infty}n^{r-2-s/p}\sum_{k=1}^{n}E|a_{nk}X_{k}|^{s}I(|a_{nk}X_{k}|\le n^{1/p})\le CE|X|^{(r-1)\beta}<\infty.\tag{3.9}$$

By Lemma 2.4,

$$\sum_{n=1}^{\infty}n^{r-2-q/p}\sum_{k=1}^{n}E|a_{nk}X_{k}|^{q}I(|a_{nk}X_{k}|>n^{1/p})\le CE|X|^{(r-1)\beta}<\infty.\tag{3.10}$$

By Lemma 2.5 (note that $E|X|^{p}<\infty$, since $p\le rp<(r-1)\beta$),

$$n^{-1/p}\sum_{k=1}^{n}E|a_{nk}X_{k}|I(|a_{nk}X_{k}|>n^{1/p})\to0.\tag{3.11}$$

Hence all conditions of Theorem 2.1 in Sung [18] are satisfied. Therefore (1.5) holds by Theorem 2.1 in Sung [18].

When $q<(r-1)\beta$ and $(r-1)\beta\ge2$, we prove (1.5) by using Theorem 2.2 in Sung [18]. To apply Theorem 2.2 in Sung [18], we take $s$ such that $s>\max\{2,q,\alpha,(r-1)\beta,(r-1)p(\alpha\wedge2)/((\alpha\wedge2)-p)\}$. By Lemma 2.1,

$$E\max_{1\le m\le n}\Big|\sum_{k=1}^{m}(X_{nk}(x)-EX_{nk}(x))\Big|^{s}\le C\Big\{\sum_{k=1}^{n}E|X_{nk}(x)|^{s}+\Big(\sum_{k=1}^{n}E|X_{nk}(x)|^{2}\Big)^{s/2}\Big\},\quad\forall n\ge1,\ \forall x>0.$$

Since $s>\max\{\alpha,(r-1)\beta\}$, (3.9) holds. Also, (3.10) and (3.11) hold. Since $E|X|^{2}<\infty$ and $s>(r-1)p(\alpha\wedge2)/((\alpha\wedge2)-p)$, we have that

$$\sum_{n=1}^{\infty}n^{r-2}\Big(n^{-2/p}\sum_{k=1}^{n}E|a_{nk}X_{k}|^{2}\Big)^{s/2}\le C\sum_{n=1}^{\infty}n^{r-2}\big(n^{-2/p}n^{2/(\alpha\wedge2)}\big)^{s/2}<\infty.$$

Hence all conditions of Theorem 2.2 in Sung [18] are satisfied. Therefore (1.5) holds by Theorem 2.2 in Sung [18].

For the cases $q=(r-1)\beta$ and $q>(r-1)\beta$, the proofs are similar to that of the previous case and are omitted.

The proofs of (2) and (3) are similar to that of (1) and are omitted. □

Acknowledgements

The research of Pingyan Chen is supported by the National Natural Science Foundation of China [grant number 71471075]. The research of Soo Hak Sung is supported by the research grant of Pai Chai University in 2018.

Authors’ contributions

Both authors read and approved the manuscript.

Competing interests

The authors declare that they have no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Sung, S.H.: Complete convergence for weighted sums of ρ-mixing random variables. Discrete Dyn. Nat. Soc. 2010, 630608 (2010). doi:10.1155/2010/630608
2. Wu, Y., Wang, X., Hu, S.: Complete moment convergence for weighted sums of weakly dependent random variables and its application in nonparametric regression model. Stat. Probab. Lett. 127, 56–66 (2017). doi:10.1016/j.spl.2017.03.027
3. Bradley, R.C.: On the spectral density and asymptotic normality of weakly dependent random fields. J. Theor. Probab. 5, 355–373 (1992). doi:10.1007/BF01046741
4. Bryc, W., Smolenski, W.: Moment conditions for almost sure convergence of weakly correlated random variables. Proc. Am. Math. Soc. 119, 629–635 (1993). doi:10.1090/S0002-9939-1993-1149969-7
5. Peligrad, M., Gut, A.: Almost-sure results for a class of dependent random variables. J. Theor. Probab. 12, 87–104 (1999). doi:10.1023/A:1021744626773
6. Utev, S., Peligrad, M.: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. 16, 101–115 (2003). doi:10.1023/A:1022278404634
7. Bai, Z.D., Cheng, P.E.: Marcinkiewicz strong laws for linear statistics. Stat. Probab. Lett. 46, 105–112 (2000). doi:10.1016/S0167-7152(99)00093-0
8. Chen, P., Ma, X., Sung, S.H.: On complete convergence and strong law for weighted sums of i.i.d. random variables. Abstr. Appl. Anal. 2014, 251435 (2014)
9. Choi, B.D., Sung, S.H.: Almost sure convergence theorems of weighted sums of random variables. Stoch. Anal. Appl. 5, 365–377 (1987). doi:10.1080/07362998708809124
10. Chow, Y.S.: Some convergence theorems for independent random variables. Ann. Math. Stat. 37, 1482–1493 (1966). doi:10.1214/aoms/1177699140
11. Cuzick, J.: A strong law for weighted sums of i.i.d. random variables. J. Theor. Probab. 8, 625–641 (1995). doi:10.1007/BF02218047
12. Sung, S.H.: Complete convergence for weighted sums of random variables. Stat. Probab. Lett. 77, 303–311 (2007). doi:10.1016/j.spl.2006.07.010
13. Thrum, R.: A remark on almost sure convergence of weighted sums. Probab. Theory Relat. Fields 75, 425–430 (1987). doi:10.1007/BF00318709
14. Chen, P., Gan, S.: Limiting behavior of weighted sums of i.i.d. random variables. Stat. Probab. Lett. 77, 1589–1599 (2007). doi:10.1016/j.spl.2007.03.038
15. Chen, P.: Limiting behavior of weighted sums of negatively associated random variables. Acta Math. Sin. 25A(4), 489–495 (2005)
16. Shao, Q.M.: A comparison theorem on moment inequalities between negatively associated and independent random variables. J. Theor. Probab. 13, 343–356 (2000). doi:10.1023/A:1007849609234
17. An, J., Yuan, D.: Complete convergence of weighted sums for ρ-mixing sequences of random variables. Stat. Probab. Lett. 78, 1466–1472 (2008). doi:10.1016/j.spl.2007.12.020
18. Sung, S.H.: Complete qth moment convergence for arrays of random variables. J. Inequal. Appl. 2013, 24 (2013). doi:10.1186/1029-242X-2013-24

Articles from Journal of Inequalities and Applications are provided here courtesy of Springer
