J Inequal Appl. 2018 Sep 14;2018(1):242. doi: 10.1186/s13660-018-1835-3

Almost sure central limit theorem for self-normalized products of some partial sums of ρ⁻-mixing sequences

Xili Tan, Wei Liu
PMCID: PMC6154046  PMID: 30839646

Abstract

Let $\{X, X_n\}_{n\in\mathbb{N}}$ be a strictly stationary ρ⁻-mixing sequence of positive random variables. Under suitable conditions we obtain the almost sure central limit theorem for the self-normalized products of some partial sums $\bigl(\prod_{i=1}^{k}S_{k,i}/((k-1)^{k}\mu^{k})\bigr)^{\mu/(\beta V_{k})}$, where $\beta>0$ is a constant, $E(X)=\mu$, $S_{k,i}=\sum_{j=1}^{k}X_{j}-X_{i}$, $1\le i\le k$, and $V_{k}^{2}=\sum_{i=1}^{k}(X_{i}-\mu)^{2}$.

Keywords: Almost sure central limit theorem, ρ⁻-mixing sequence, Self-normalized, Products of some partial sums

Introduction and main result

In 1988, Brosamler [1] and Schatte [2] proposed the almost sure central limit theorem (ASCLT) for sequences of i.i.d. random variables. Still in the i.i.d. setting, Khurelbaatar and Grzegorz [3] obtained the ASCLT for products of some partial sums of random variables. In 2008, Miao [4] gave a new form of the ASCLT for products of some partial sums.

Theorem A

([4])

Let $\{X, X_n\}_{n\in\mathbb{N}}$ be a sequence of i.i.d. positive square integrable random variables with $E(X_{1})=\mu$, $\operatorname{Var}(X_{1})=\sigma^{2}>0$ and coefficient of variation $\gamma=\sigma/\mu$. Denote $S_{n,k}=\sum_{j=1}^{n}X_{j}-X_{k}$, $1\le k\le n$. Then, for $x\in\mathbb{R}$,

$$\lim_{N\to\infty}\frac{1}{\log N}\sum_{n=1}^{N}\frac{1}{n}\,I\biggl[\biggl(\frac{\prod_{k=1}^{n}S_{n,k}}{(n-1)^{n}\mu^{n}}\biggr)^{\frac{1}{\gamma\sqrt{n}}}\le x\biggr]=F(x)\quad\text{a.s.},$$

where $F(\cdot)$ is the distribution function of the random variable $e^{\mathcal{N}}$ and $\mathcal{N}$ is a standard normal random variable.
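For orientation, the appearance of the limit $e^{\mathcal{N}}$ can be traced through a short telescoping computation (recorded here for the reader; it is the standard argument behind results of this type). Since $S_{n,k}=S_{n}-X_{k}$ with $S_{n}=\sum_{j=1}^{n}X_{j}$,

$$\sum_{k=1}^{n}\Bigl(\frac{S_{n,k}}{(n-1)\mu}-1\Bigr)=\frac{nS_{n}-S_{n}-n(n-1)\mu}{(n-1)\mu}=\frac{S_{n}-n\mu}{\mu},$$

so after taking logarithms and normalizing by $\gamma\sqrt{n}=\sigma\sqrt{n}/\mu$, the statistic in Theorem A is driven by $(S_{n}-n\mu)/(\sigma\sqrt{n})$, whose limit law is standard normal.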

For random variables $X$, $Y$, define

$$\rho^{-}(X,Y)=0\vee\sup\frac{\operatorname{Cov}(f(X),g(Y))}{(\operatorname{Var}f(X))^{\frac12}(\operatorname{Var}g(Y))^{\frac12}},$$

where the supremum is taken over all $f,g\in\mathscr{C}$ such that $E(f(X))^{2}<\infty$ and $E(g(Y))^{2}<\infty$, and $\mathscr{C}$ is the class of coordinatewise increasing functions.

Definition

([5])

A sequence $\{X, X_n\}_{n\in\mathbb{N}}$ is called ρ⁻-mixing if

$$\rho^{-}(s)=\sup\bigl\{\rho^{-}(S,T):S,T\subset\mathbb{N},\ \operatorname{dist}(S,T)\ge s\bigr\}\to 0,\quad s\to\infty,$$

where

$$\rho^{-}(S,T)=0\vee\sup\biggl\{\frac{\operatorname{Cov}\{f(X_{i},i\in S),g(X_{j},j\in T)\}}{\sqrt{\operatorname{Var}\{f(X_{i},i\in S)\}}\sqrt{\operatorname{Var}\{g(X_{j},j\in T)\}}}:f,g\in\mathscr{C}\biggr\},$$

and $\mathscr{C}$ is the class of coordinatewise increasing functions.
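A simple check, recorded here for convenience, connects this definition with negative association (NA): for an NA sequence one has $\operatorname{Cov}(f(X_{i},i\in S),g(X_{j},j\in T))\le 0$ for all disjoint $S,T$ and all coordinatewise increasing $f,g$, so the supremum above is nonpositive and

$$\rho^{-}(S,T)=0,\qquad \rho^{-}(s)\equiv 0;$$

hence every NA sequence is ρ⁻-mixing, which is the inclusion used below.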

The precise definition of ρ⁻-mixing random variables was first introduced by Zhang and Wang [5] in 1999. The class of ρ⁻-mixing random variables contains NA and ρ*-mixing random variables; it has many applications, its limit properties have attracted wide interest recently, and many results have been obtained. In 2005, Zhou [6] proved the almost sure central limit theorem for ρ⁻-mixing sequences. The almost sure central limit theorem for products of partial sums of ρ⁻-mixing sequences was given by Tan [7] in 2012. Because the denominator of a self-normalized partial sum itself contains the random variables, the study of self-normalized limit theorems for ρ⁻-mixing sequences is more difficult, and at present there are very few results of this kind. In this paper we extend Theorem A and obtain the almost sure central limit theorem for self-normalized products of some partial sums of ρ⁻-mixing sequences.

Throughout this paper, $a_{n}\sim b_{n}$ means $\lim_{n\to\infty}a_{n}/b_{n}=1$, $C$ denotes a positive constant which may take different values in different expressions, and $\log x=\ln(x\vee e)$. We assume $\{X, X_n\}_{n\in\mathbb{N}}$ is a strictly stationary sequence of ρ⁻-mixing random variables, and we write $Y_{i}=X_{i}-\mu$.

For every $1\le i\le k\le n$, define

$$\bar{Y}_{ni}=-\sqrt{n}\,I(Y_{i}<-\sqrt{n})+Y_{i}I(|Y_{i}|\le\sqrt{n})+\sqrt{n}\,I(Y_{i}>\sqrt{n}),\qquad T_{k,n}=\sum_{i=1}^{k}\bar{Y}_{ni},$$

$$V_{n}^{2}=\sum_{i=1}^{n}Y_{i}^{2},\qquad \bar{V}_{n}^{2}=\sum_{i=1}^{n}\bar{Y}_{ni}^{2},\qquad \bar{V}_{n,1}^{2}=\sum_{i=1}^{n}\bar{Y}_{ni}^{2}I(Y_{i}\ge 0),\qquad \bar{V}_{n,2}^{2}=\sum_{i=1}^{n}\bar{Y}_{ni}^{2}I(Y_{i}<0),$$

$$\sigma_{n}^{2}=\operatorname{Var}(T_{n,n}),\qquad \delta_{n}^{2}=E(\bar{Y}_{n1}^{2}),\qquad \delta_{n,1}^{2}=E\bar{Y}_{n1}^{2}I(Y_{1}\ge 0),\qquad \delta_{n,2}^{2}=E\bar{Y}_{n1}^{2}I(Y_{1}<0).$$

Clearly $\delta_{n}^{2}=\delta_{n,1}^{2}+\delta_{n,2}^{2}$ and $E(\bar{V}_{n}^{2})=n\delta_{n}^{2}=n\delta_{n,1}^{2}+n\delta_{n,2}^{2}$.
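Two elementary properties of this truncation are used repeatedly below (spelled out here for convenience; they are implicit in the original):

$$|\bar{Y}_{ni}|\le\min\bigl(\sqrt{n},|Y_{i}|\bigr),\qquad \bar{Y}_{ni}=Y_{i}\ \text{on}\ \{|Y_{i}|\le\sqrt{n}\},$$

and $\bar{Y}_{ni}$ is a nondecreasing function of $Y_{i}$, so $\{\bar{Y}_{ni}\}$ inherits the ρ⁻-mixing property from $\{Y_{i}\}$.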

Our main theorem is as follows.

Theorem 1

Let $\{X, X_n\}_{n\in\mathbb{N}}$ be a strictly stationary ρ⁻-mixing sequence of positive random variables with $EX=\mu>0$ and $0<E|X|^{r}<\infty$ for some $r>2$. Denote $S_{k,i}=\sum_{j=1}^{k}X_{j}-X_{i}$, $1\le i\le k$, and $Y=X-\mu$. Suppose that

(a1)

$E(Y^{2}I(Y\ge 0))>0$, $E(Y^{2}I(Y<0))>0$;

(a2)

$\sigma_{1}^{2}=EX_{1}^{2}+2\sum_{k=2}^{\infty}\operatorname{Cov}(X_{1},X_{k})>0$, $\sum_{k=2}^{\infty}|\operatorname{Cov}(X_{1},X_{k})|<\infty$;

(a3)

$\sigma_{k}^{2}\sim\beta^{2}k\delta_{k}^{2}$ for some $\beta>0$;

(a4)

$\rho^{-}(n)=O(\log^{-\delta}n)$ for some $\delta>1$.

Suppose $0\le\alpha<\frac12$ and let

$$d_{k}=\frac{\exp(\log^{\alpha}k)}{k},\qquad D_{n}=\sum_{k=1}^{n}d_{k}. \tag{1}$$

Then, for $x\in\mathbb{R}$, we have

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,I\biggl[\biggl(\frac{\prod_{i=1}^{k}S_{k,i}}{(k-1)^{k}\mu^{k}}\biggr)^{\frac{\mu}{\beta V_{k}}}\le x\biggr]=F(x)\quad\text{a.s.}, \tag{2}$$

where $F(\cdot)$ is the distribution function of the random variable $e^{\mathcal{N}}$ and $\mathcal{N}$ is a standard normal random variable.
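A small remark on the weights in (1) (an elementary observation, not part of the theorem): since $\log k=\ln(k\vee e)\ge 1$, we have $d_{k}\ge 1/k$ and therefore

$$D_{n}\ge\sum_{k=1}^{n}\frac{1}{k}\sim\log n\to\infty;$$

for $\alpha=0$ the weights reduce to $d_{k}=e/k$, so (2) is then the classical logarithmic averaging.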

Corollary 1

By [8], (2) remains valid if we replace the weight sequence $\{d_{k},k\ge 1\}$ by any $\{d_{k}^{*},k\ge 1\}$ such that $0\le d_{k}^{*}\le d_{k}$ and $\sum_{k=1}^{\infty}d_{k}^{*}=\infty$.

Corollary 2

If $\{X_{n},n\ge 1\}$ is a sequence of strictly stationary independent positive random variables, then (a3) holds with $\beta=1$.
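For readers who would like a quick numerical sanity check of (2), the following minimal Python sketch simulates an i.i.d. Exp(1) sequence, so that $\mu=1$ and, by Corollary 2, (a3) holds with $\beta=1$, and uses $F(x)=P(e^{\mathcal{N}}\le x)=\Phi(\log x)$ for $x>0$. The sample size, the weight exponent $\alpha=0$ and the evaluation point $x$ are arbitrary choices made for this illustration, and since the averaging in (2) is of logarithmic type, only rough agreement should be expected.

```python
import numpy as np
from math import erf, log, sqrt

rng = np.random.default_rng(0)
n, alpha, x = 5000, 0.0, 1.5            # illustrative choices, not from the paper

X = rng.exponential(scale=1.0, size=n)  # i.i.d. Exp(1): mu = 1, beta = 1
mu = 1.0
S = np.cumsum(X)                        # S_k = X_1 + ... + X_k
V2 = np.cumsum((X - mu) ** 2)           # V_k^2 = sum_{i<=k} (X_i - mu)^2

ks = np.arange(2, n + 1)                # k >= 2, so that k - 1 > 0
log_stat = np.empty(ks.size)
for j, k in enumerate(ks):
    Ski = S[k - 1] - X[:k]              # S_{k,i} = S_k - X_i, 1 <= i <= k
    # log of ( prod_i S_{k,i} / ((k-1)^k mu^k) )^( mu / (beta V_k) ), beta = 1
    log_stat[j] = (mu / sqrt(V2[k - 1])) * np.sum(np.log(Ski / ((k - 1) * mu)))

logk = np.maximum(np.log(ks), 1.0)      # log x = ln(x v e), as in the paper
d = np.exp(logk ** alpha) / ks          # weights d_k from (1)
lhs = float(np.sum(d * (log_stat <= log(x))) / np.sum(d))


def Phi(z):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))


print("weighted frequency:", lhs)          # empirical left-hand side of (2)
print("F(x) = Phi(log x) :", Phi(log(x)))  # the limit claimed in Theorem 1
```

The code works on the logarithmic scale: the event in (2) is rewritten as $\frac{\mu}{\beta V_{k}}\sum_{i=1}^{k}\log\frac{S_{k,i}}{(k-1)\mu}\le\log x$, which is exactly the form used in the proof of Theorem 1 below.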

Some lemmas

We will need the following lemmas.

Lemma 2.1

([7])

Let $\{X, X_n\}_{n\in\mathbb{N}}$ be a strictly stationary sequence of ρ⁻-mixing random variables with $EX_{1}=0$, $0<EX_{1}^{2}<\infty$, $\sigma_{1}^{2}=EX_{1}^{2}+2\sum_{k=2}^{\infty}\operatorname{Cov}(X_{1},X_{k})>0$ and $\sum_{k=2}^{\infty}|\operatorname{Cov}(X_{1},X_{k})|<\infty$. Then, for $0<p<2$, we have

$$\frac{S_{n}}{n^{1/p}}\to 0\quad\text{a.s.},\quad n\to\infty.$$

Lemma 2.2

([9])

Let $\{X, X_n\}_{n\in\mathbb{N}}$ be a sequence of ρ⁻-mixing random variables with

$$EX_{n}=0,\qquad E|X_{n}|^{q}<\infty,\quad n\ge 1,\ q\ge 2.$$

Then there is a positive constant $C=C(q,\rho^{-}(\cdot))$, depending only on $q$ and $\rho^{-}(\cdot)$, such that

$$E\Bigl(\max_{1\le j\le n}|S_{j}|^{q}\Bigr)\le C\Biggl\{\sum_{i=1}^{n}E|X_{i}|^{q}+\Biggl(\sum_{i=1}^{n}EX_{i}^{2}\Biggr)^{q/2}\Biggr\}.$$
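In this paper Lemma 2.2 is invoked mainly with $q=2$ (for the variance bounds) and with $q=r$ (for the $\|\cdot\|_{2,1}$ estimates). The $q=2$ case, written out as a specialization here, reads simply

$$E\Bigl(\max_{1\le j\le n}S_{j}^{2}\Bigr)\le C\sum_{i=1}^{n}EX_{i}^{2}.$$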

Lemma 2.3

([10])

Suppose that $f_{1}(x)$ and $f_{2}(y)$ are real, bounded, absolutely continuous functions on $\mathbb{R}$ with $|f_{1}'(x)|\le C_{1}$ and $|f_{2}'(y)|\le C_{2}$. Then, for any random variables $X$ and $Y$,

$$\bigl|\operatorname{Cov}\bigl(f_{1}(X),f_{2}(Y)\bigr)\bigr|\le C_{1}C_{2}\bigl\{\operatorname{Cov}(X,Y)+8\rho^{-}(X,Y)\|X\|_{2,1}\|Y\|_{2,1}\bigr\},$$

where $\|X\|_{2,1}=\int_{0}^{\infty}(P(|X|>x))^{\frac12}\,dx$.
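For the estimates leading to (9) and (12) below it matters that $\|\cdot\|_{2,1}$ is finite under a moment of order $r>2$. The inequality $\|X\|_{2,1}\le\frac{r}{r-2}\|X\|_{r}$ quoted there can be checked directly via the Markov inequality (a short computation, supplied here rather than quoted from [10] or [13]):

$$\|X\|_{2,1}=\int_{0}^{\infty}\bigl(P(|X|>x)\bigr)^{\frac12}\,dx\le\int_{0}^{\|X\|_{r}}1\,dx+\int_{\|X\|_{r}}^{\infty}\frac{\|X\|_{r}^{r/2}}{x^{r/2}}\,dx=\|X\|_{r}+\frac{2\|X\|_{r}}{r-2}=\frac{r}{r-2}\|X\|_{r}.$$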

Lemma 2.4

Let $\{\xi,\xi_{n}\}_{n\in\mathbb{N}}$ be a sequence of uniformly bounded random variables. If $\delta>1$, $\rho^{-}(n)=O(\log^{-\delta}n)$, and there exist constants $C>0$ and $\varepsilon>0$ such that

$$|E\xi_{k}\xi_{l}|\le C\biggl(\rho^{-}(k)+\Bigl(\frac{k}{l}\Bigr)^{\varepsilon}\biggr),\qquad 1\le 2k<l, \tag{3}$$

then

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\xi_{k}=0\quad\text{a.s.}$$

Proof

See the proof of Theorem 1 in [7]. □

Lemma 2.5

If the assumptions of Theorem 1 hold, then

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,I\biggl[\frac{T_{k,k}-E(T_{k,k})}{\beta\delta_{k}\sqrt{k}}\le x\biggr]=\Phi(x)\quad\text{a.s.},\ x\in\mathbb{R}, \tag{4}$$

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\biggl[f\Bigl(\frac{\bar{V}_{k,l}^{2}}{k\delta_{k,l}^{2}}\Bigr)-Ef\Bigl(\frac{\bar{V}_{k,l}^{2}}{k\delta_{k,l}^{2}}\Bigr)\biggr]=0\quad\text{a.s.},\ l=1,2, \tag{5}$$

where $d_{k}$ and $D_{n}$ are defined in (1) and $f$ is a real, bounded, absolutely continuous function on $\mathbb{R}$.

Proof

Firstly, we prove (4). By the properties of ρ⁻-mixing sequences, $\{\bar{Y}_{ni}\}_{n\ge 1,\,i\le n}$ is also a ρ⁻-mixing array; using Lemma 2.1 in [7], conditions (a2) and (a3), $\beta>0$, and $\delta_{k}^{2}\to EY^{2}>0$, it follows that

$$\frac{T_{k,k}-E(T_{k,k})}{\beta\delta_{k}\sqrt{k}}\xrightarrow{d}\mathcal{N},\quad k\to\infty;$$

hence, for any bounded function $g(x)$ with bounded continuous derivative, we have

$$Eg\biggl(\frac{T_{k,k}-E(T_{k,k})}{\beta\delta_{k}\sqrt{k}}\biggr)\to Eg(\mathcal{N}),\quad k\to\infty;$$

by the Toeplitz lemma, we get

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,E\biggl[g\biggl(\frac{T_{k,k}-E(T_{k,k})}{\beta\delta_{k}\sqrt{k}}\biggr)\biggr]=E(g(\mathcal{N})).$$
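The form of the Toeplitz lemma used here, and again several times below, is the standard one (recalled for convenience): if $x_{k}\to x$, $d_{k}\ge 0$ and $D_{n}=\sum_{k=1}^{n}d_{k}\to\infty$, then

$$\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}x_{k}\to x,\quad n\to\infty.$$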

On the other hand, from Theorem 7.1 of [11] and Sect. 2 of [12], we know that (4) is equivalent to

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,g\biggl(\frac{T_{k,k}-E(T_{k,k})}{\beta\delta_{k}\sqrt{k}}\biggr)=E(g(\mathcal{N}))\quad\text{a.s.};$$

hence, to prove (4), it suffices to prove

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\biggl[g\biggl(\frac{T_{k,k}-E(T_{k,k})}{\beta\delta_{k}\sqrt{k}}\biggr)-E\,g\biggl(\frac{T_{k,k}-E(T_{k,k})}{\beta\delta_{k}\sqrt{k}}\biggr)\biggr]=0\quad\text{a.s.} \tag{6}$$

Set

$$\xi_{k}=g\biggl(\frac{T_{k,k}-E(T_{k,k})}{\beta\delta_{k}\sqrt{k}}\biggr)-E\biggl[g\biggl(\frac{T_{k,k}-E(T_{k,k})}{\beta\delta_{k}\sqrt{k}}\biggr)\biggr];$$

then, for every $1\le 2k<l$, we have

$$\begin{aligned}
|E\xi_{k}\xi_{l}|&=\biggl|\operatorname{Cov}\biggl(g\Bigl(\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}}\Bigr),g\Bigl(\frac{T_{l,l}-ET_{l,l}}{\beta\delta_{l}\sqrt{l}}\Bigr)\biggr)\biggr|\\
&\le\biggl|\operatorname{Cov}\biggl(g\Bigl(\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}}\Bigr),g\Bigl(\frac{T_{l,l}-ET_{l,l}}{\beta\delta_{l}\sqrt{l}}\Bigr)-g\Bigl(\frac{T_{l,l}-ET_{l,l}-(T_{2k,l}-ET_{2k,l})}{\beta\delta_{l}\sqrt{l}}\Bigr)\biggr)\biggr|\\
&\quad+\biggl|\operatorname{Cov}\biggl(g\Bigl(\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}}\Bigr),g\Bigl(\frac{T_{l,l}-ET_{l,l}-(T_{2k,l}-ET_{2k,l})}{\beta\delta_{l}\sqrt{l}}\Bigr)\biggr)\biggr|\\
&=:I_{1}+I_{2}. \tag{7}
\end{aligned}$$
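The decomposition in (7) is the usual blocking step: by the definition of $T_{k,n}$,

$$T_{2k,l}=\sum_{i=1}^{2k}\bar{Y}_{li},\qquad T_{l,l}-T_{2k,l}=\sum_{i=2k+1}^{l}\bar{Y}_{li},$$

so the second covariance in (7) involves the index blocks $\{1,\dots,k\}$ and $\{2k+1,\dots,l\}$, which are at distance greater than $k$; this is where the mixing coefficient $\rho^{-}(k)$ enters the bound for $I_{2}$.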

First we estimate $I_{1}$. Since $g$ is a bounded Lipschitz function, there exists a constant $C$ such that

$$|g(x)-g(y)|\le C|x-y|$$

for any $x,y\in\mathbb{R}$. Since $\{\bar{Y}_{ni}\}_{n\ge 1,\,i\le n}$ is also ρ⁻-mixing, using $\delta_{l}^{2}\to E(Y^{2})<\infty$, $l\to\infty$, and Lemma 2.2, we get

$$I_{1}\le\frac{C\,E|T_{2k,l}-ET_{2k,l}|}{\sqrt{l}}\le\frac{C\sqrt{E(T_{2k,l}-ET_{2k,l})^{2}}}{\sqrt{l}}\le\frac{C}{\sqrt{l}}\Biggl(\sum_{i=1}^{2k}E\bar{Y}_{l,i}^{2}\Biggr)^{\frac12}\le\frac{C}{\sqrt{l}}\Biggl(\sum_{i=1}^{2k}EY^{2}\Biggr)^{\frac12}\le C\Bigl(\frac{k}{l}\Bigr)^{\frac12}. \tag{8}$$

Next we estimate $I_{2}$. By Lemma 2.2, we have

$$\operatorname{Var}\biggl(\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}}\biggr)\le\frac{C}{k}\operatorname{Var}(T_{k,k}-ET_{k,k})\le\frac{C}{k}\sum_{i=1}^{k}E(\bar{Y}_{ki}-E\bar{Y}_{ki})^{2}\le\frac{C}{k}\sum_{i=1}^{k}E(\bar{Y}_{ki})^{2}\le\frac{C}{k}\cdot k\le C$$

and

$$\operatorname{Var}\biggl(\frac{T_{l,l}-ET_{l,l}-(T_{2k,l}-ET_{2k,l})}{\beta\delta_{l}\sqrt{l}}\biggr)\le\frac{C}{l}\operatorname{Var}\bigl(T_{l,l}-ET_{l,l}-(T_{2k,l}-ET_{2k,l})\bigr)\le\frac{C}{l}\sum_{i=2k+1}^{l}E(\bar{Y}_{li}-E\bar{Y}_{li})^{2}\le\frac{C}{l}\Biggl(\sum_{i=1}^{l}E\bar{Y}_{li}^{2}\Biggr)\le\frac{C}{l}\cdot l\le C.$$

By the definition of a ρ⁻-mixing sequence, $EY^{2}<\infty$, and Lemma 2.3, we have

$$\begin{aligned}
I_{2}&\le C\biggl\{\operatorname{Cov}\biggl(\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}},\frac{T_{l,l}-ET_{l,l}-(T_{2k,l}-ET_{2k,l})}{\beta\delta_{l}\sqrt{l}}\biggr)\\
&\qquad+8\rho^{-}\biggl(\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}},\frac{T_{l,l}-ET_{l,l}-(T_{2k,l}-ET_{2k,l})}{\beta\delta_{l}\sqrt{l}}\biggr)\biggl\|\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}}\biggr\|_{2,1}\biggl\|\frac{T_{l,l}-ET_{l,l}-(T_{2k,l}-ET_{2k,l})}{\beta\delta_{l}\sqrt{l}}\biggr\|_{2,1}\biggr\}\\
&\le C\rho^{-}(k)\biggl(\operatorname{Var}\biggl(\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}}\biggr)\biggr)^{\frac12}\biggl(\operatorname{Var}\biggl(\frac{T_{l,l}-ET_{l,l}-(T_{2k,l}-ET_{2k,l})}{\beta\delta_{l}\sqrt{l}}\biggr)\biggr)^{\frac12}\\
&\qquad+C\rho^{-}(k)\biggl\|\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}}\biggr\|_{2,1}\biggl\|\frac{T_{l,l}-ET_{l,l}-(T_{2k,l}-ET_{2k,l})}{\beta\delta_{l}\sqrt{l}}\biggr\|_{2,1}.
\end{aligned}$$

By $\|X\|_{2,1}\le\frac{r}{r-2}\|X\|_{r}$, $r>2$ (see p. 254 of [10] or p. 251 of [13]), the Minkowski inequality, Lemma 2.2, and the Hölder inequality, we get

$$\biggl\|\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}}\biggr\|_{2,1}\le\frac{r}{r-2}\biggl\|\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}}\biggr\|_{r}=\frac{r}{r-2}\cdot\frac{1}{\beta\delta_{k}\sqrt{k}}\bigl(E|T_{k,k}-ET_{k,k}|^{r}\bigr)^{\frac1r}\le\frac{C}{\sqrt{k}}\Biggl(\sum_{i=1}^{k}E|\bar{Y}_{ki}|^{r}+\Biggl(\sum_{i=1}^{k}E\bar{Y}_{ki}^{2}\Biggr)^{r/2}\Biggr)^{1/r}\le\frac{C}{\sqrt{k}}\bigl(k+k^{r/2}\bigr)^{1/r}\le C;$$

similarly,

$$\biggl\|\frac{T_{l,l}-ET_{l,l}-(T_{2k,l}-ET_{2k,l})}{\beta\delta_{l}\sqrt{l}}\biggr\|_{2,1}\le C.$$

Hence

$$I_{2}\le C\rho^{-}(k). \tag{9}$$

Combining (7)–(9), condition (3) holds for $\{\xi_{k}\}$; by (a4) and Lemma 2.4, (6) holds, and hence (4) is true.

Secondly, we prove (5); it suffices to treat $l=1$, the case $l=2$ being analogous. For $k\ge 1$, set $\eta_{k}=f(\bar{V}_{k,1}^{2}/(k\delta_{k,1}^{2}))-E[f(\bar{V}_{k,1}^{2}/(k\delta_{k,1}^{2}))]$. For every $1\le 2k<l$ we have

$$\begin{aligned}
|E\eta_{k}\eta_{l}|&=\biggl|\operatorname{Cov}\biggl(f\Bigl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\Bigr),f\Bigl(\frac{\bar{V}_{l,1}^{2}}{l\delta_{l,1}^{2}}\Bigr)\biggr)\biggr|\\
&\le\biggl|\operatorname{Cov}\biggl(f\Bigl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\Bigr),f\Bigl(\frac{\bar{V}_{l,1}^{2}}{l\delta_{l,1}^{2}}\Bigr)-f\Bigl(\frac{\sum_{i=2k+1}^{l}\bar{Y}_{l,i}^{2}I(Y_{i}\ge 0)}{l\delta_{l,1}^{2}}\Bigr)\biggr)\biggr|+\biggl|\operatorname{Cov}\biggl(f\Bigl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\Bigr),f\Bigl(\frac{\sum_{i=2k+1}^{l}\bar{Y}_{l,i}^{2}I(Y_{i}\ge 0)}{l\delta_{l,1}^{2}}\Bigr)\biggr)\biggr|\\
&=:J_{1}+J_{2}; \tag{10}
\end{aligned}$$

by the Lipschitz property of $f$, we know

$$J_{1}\le C\,E\Biggl(\sum_{i=1}^{2k}\bar{Y}_{li}^{2}I(Y_{i}\ge 0)\Biggr)\Big/l\le C\Bigl(\frac{k}{l}\Bigr). \tag{11}$$

Now we estimate $J_{2}$. First,

$$\begin{aligned}
\operatorname{Var}\biggl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\biggr)&=\operatorname{Var}\biggl(\frac{\sum_{i=1}^{k}\bar{Y}_{ki}^{2}I(Y_{i}\ge 0)}{k\delta_{k,1}^{2}}\biggr)\le\frac{C}{k^{2}}E\Biggl(\sum_{i=1}^{k}\bar{Y}_{ki}^{2}I(Y_{i}\ge 0)\Biggr)^{2}\\
&=\frac{C}{k^{2}}E\Biggl(\sum_{i=1}^{k}\bar{Y}_{ki}^{2}I(Y_{i}\ge 0)-E\Biggl(\sum_{i=1}^{k}\bar{Y}_{ki}^{2}I(Y_{i}\ge 0)\Biggr)+E\Biggl(\sum_{i=1}^{k}\bar{Y}_{ki}^{2}I(Y_{i}\ge 0)\Biggr)\Biggr)^{2}\\
&\le\frac{C}{k^{2}}E\Biggl(\sum_{i=1}^{k}\bigl(\bar{Y}_{ki}^{2}I(Y_{i}\ge 0)-E(\bar{Y}_{ki}^{2}I(Y_{i}\ge 0))\bigr)\Biggr)^{2}+\frac{C}{k^{2}}\Biggl(\sum_{i=1}^{k}E(\bar{Y}_{ki}^{2}I(Y_{i}\ge 0))\Biggr)^{2}\\
&\le\frac{C}{k^{2}}\sum_{i=1}^{k}E\bar{Y}_{ki}^{4}I(Y_{i}\ge 0)+\frac{C}{k^{2}}\bigl(kE(\bar{Y}_{k1}^{2}I(Y_{1}\ge 0))\bigr)^{2}\le\frac{C}{k^{2}}\sum_{i=1}^{k}E\bigl(kY_{i}^{2}\bigr)+C\le C,
\end{aligned}$$

and similarly $\operatorname{Var}\bigl(\sum_{i=2k+1}^{l}\bar{Y}_{li}^{2}I(Y_{i}\ge 0)/(l\delta_{l,1}^{2})\bigr)\le C$. On the other hand, we have

$$\begin{aligned}
\biggl\|\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\biggr\|_{2,1}&\le\frac{r}{r-2}\cdot\frac{C}{k}\bigl(E|\bar{V}_{k,1}^{2}|^{r}\bigr)^{1/r}\\
&\le\frac{C}{k}\Biggl(E\Biggl|\sum_{i=1}^{k}\bigl(\bar{Y}_{ki}^{2}I(Y_{i}\ge 0)-E(\bar{Y}_{ki}^{2}I(Y_{i}\ge 0))\bigr)\Biggr|^{r}+\Biggl|\sum_{i=1}^{k}E(\bar{Y}_{ki}^{2}I(Y_{i}\ge 0))\Biggr|^{r}\Biggr)^{1/r}\\
&\le\frac{C}{k}\Biggl(\sum_{i=1}^{k}E\bigl|\bar{Y}_{ki}^{2}I(Y_{i}\ge 0)-E(\bar{Y}_{ki}^{2}I(Y_{i}\ge 0))\bigr|^{r}+\Biggl(\sum_{i=1}^{k}E\bigl(\bar{Y}_{ki}^{2}I(Y_{i}\ge 0)-E(\bar{Y}_{ki}^{2}I(Y_{i}\ge 0))\bigr)^{2}\Biggr)^{r/2}\Biggr)^{1/r}+\frac{C}{k}\Biggl|\sum_{i=1}^{k}E(\bar{Y}_{ki}^{2}I(Y_{i}\ge 0))\Biggr|\\
&\le\frac{C}{k}\Biggl(\sum_{i=1}^{k}E\bigl|\bar{Y}_{ki}^{2}I(Y_{i}\ge 0)\bigr|^{r}+\Biggl(\sum_{i=1}^{k}E\bigl|\bar{Y}_{ki}^{2}I(Y_{i}\ge 0)\bigr|^{2}\Biggr)^{r/2}\Biggr)^{1/r}+\frac{C}{k}\bigl|kE(\bar{Y}_{k1}^{2}I(Y_{1}\ge 0))\bigr|\\
&\le\frac{C}{k}\Biggl(\sum_{i=1}^{k}E|\sqrt{k}\,Y_{i}|^{r}+\Biggl(\sum_{i=1}^{k}E|\sqrt{k}\,Y_{i}|^{2}\Biggr)^{r/2}\Biggr)^{1/r}+C_{1}\le\frac{C}{k}\bigl(k^{1+r/2}+k^{r}\bigr)^{1/r}+C_{1}\le C;
\end{aligned}$$

similarly,

$$\Biggl\|\sum_{i=2k+1}^{l}\bar{Y}_{li}^{2}I(Y_{i}\ge 0)\Big/\bigl(l\delta_{l,1}^{2}\bigr)\Biggr\|_{2,1}\le C.$$

Thus, by Lemma 2.3, we have

$$\begin{aligned}
J_{2}&\le C\Biggl\{\operatorname{Cov}\biggl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}},\frac{\sum_{i=2k+1}^{l}\bar{Y}_{li}^{2}I(Y_{i}\ge 0)}{l\delta_{l,1}^{2}}\biggr)+8\rho^{-}\biggl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}},\frac{\sum_{i=2k+1}^{l}\bar{Y}_{li}^{2}I(Y_{i}\ge 0)}{l\delta_{l,1}^{2}}\biggr)\biggl\|\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\biggr\|_{2,1}\biggl\|\frac{\sum_{i=2k+1}^{l}\bar{Y}_{li}^{2}I(Y_{i}\ge 0)}{l\delta_{l,1}^{2}}\biggr\|_{2,1}\Biggr\}\\
&\le C\Biggl\{\rho^{-}(k)\biggl(\operatorname{Var}\biggl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\biggr)\biggr)^{1/2}\biggl(\operatorname{Var}\biggl(\frac{\sum_{i=2k+1}^{l}\bar{Y}_{li}^{2}I(Y_{i}\ge 0)}{l\delta_{l,1}^{2}}\biggr)\biggr)^{1/2}+\rho^{-}(k)\biggl\|\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\biggr\|_{2,1}\biggl\|\frac{\sum_{i=2k+1}^{l}\bar{Y}_{li}^{2}I(Y_{i}\ge 0)}{l\delta_{l,1}^{2}}\biggr\|_{2,1}\Biggr\}\\
&\le C\rho^{-}(k); \tag{12}
\end{aligned}$$

hence, combining (11) and (12), condition (3) holds for $\{\eta_{k}\}$, and by Lemma 2.4, (5) holds. □

Proof of Theorem 1

Let $C_{k,i}=\frac{S_{k,i}}{(k-1)\mu}$; then (2) is equivalent to

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,I\biggl(\frac{\mu}{\beta V_{k}}\sum_{i=1}^{k}\log C_{k,i}\le x\biggr)=\Phi(x)\quad\text{a.s.} \tag{13}$$

So we only need to prove (13). For each fixed $i$ and every $\varepsilon>0$ we have

$$\lim_{k\to\infty}P\Biggl\{\bigcup_{m=k}^{\infty}\biggl(\Bigl|\frac{X_{i}}{m}\Bigr|\ge\varepsilon\biggr)\Biggr\}=\lim_{k\to\infty}P\biggl\{\Bigl|\frac{X_{i}}{k}\Bigr|\ge\varepsilon\biggr\}=\lim_{k\to\infty}P\bigl\{|X_{1}|\ge\varepsilon k\bigr\}=0;$$

therefore, by Theorem 1.5.2 in [14],

$$\frac{X_{i}}{k}\to 0\quad\text{a.s.},\ k\to\infty,$$

uniformly in $i$.

By Lemma 2.1, for some $\frac43<p<2$ and all sufficiently large $k$, we have

$$\sup_{1\le i\le k}|C_{k,i}-1|\le\Biggl|\frac{\sum_{j=1}^{k}(X_{j}-\mu)}{(k-1)\mu}\Biggr|+\sup_{1\le i\le k}\Biggl|\frac{X_{i}}{(k-1)\mu}\Biggr|+\frac{1}{k-1}\le\Biggl|\frac{S_{k}-k\mu}{k^{\frac1p}}\cdot\frac{k^{\frac1p}}{(k-1)\mu}\Biggr|+\sup_{1\le i\le k}\Biggl|\frac{X_{i}}{(k-1)\mu}\Biggr|+\frac{1}{k-1}\le Ck^{\frac1p-1};$$

by $\log(1+x)=x+O(x^{2})$, $x\to 0$, we get

$$\Biggl|\frac{\mu}{\beta\delta_{k}\sqrt{(1\pm\varepsilon)k}}\sum_{i=1}^{k}\ln C_{k,i}-\frac{\mu}{\beta\delta_{k}\sqrt{(1\pm\varepsilon)k}}\sum_{i=1}^{k}(C_{k,i}-1)\Biggr|\le\frac{C\mu}{\beta\delta_{k}\sqrt{(1\pm\varepsilon)k}}\sum_{i=1}^{k}(C_{k,i}-1)^{2}\le\frac{C}{\sqrt{k}}\,k^{\frac2p-1}\to 0\quad\text{a.s.},\ k\to\infty.$$
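The exponent arithmetic behind the last step also explains the requirement $p>\frac43$ (an elementary bookkeeping detail, recorded here):

$$\sum_{i=1}^{k}(C_{k,i}-1)^{2}\le k\bigl(Ck^{\frac1p-1}\bigr)^{2}=Ck^{\frac2p-1},\qquad \frac{1}{\sqrt{k}}\,k^{\frac2p-1}=k^{\frac2p-\frac32}\to 0\iff p>\frac43.$$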

Then, for $\delta>0$ and every $\omega$, there exists $k_{0}=k_{0}(\omega,\delta,x)$ such that, for $k>k_{0}$, we have

$$I\Biggl\{\frac{\mu}{\beta\delta_{k}\sqrt{(1\pm\varepsilon)k}}\sum_{i=1}^{k}(C_{k,i}-1)\le x-\delta\Biggr\}\le I\Biggl\{\frac{\mu}{\beta\delta_{k}\sqrt{(1\pm\varepsilon)k}}\sum_{i=1}^{k}\log C_{k,i}\le x\Biggr\}\le I\Biggl\{\frac{\mu}{\beta\delta_{k}\sqrt{(1\pm\varepsilon)k}}\sum_{i=1}^{k}(C_{k,i}-1)\le x+\delta\Biggr\}. \tag{14}$$

Under the condition $|X_{i}-\mu|\le\sqrt{k}$, $1\le i\le k$, we have

$$\mu\sum_{i=1}^{k}(C_{k,i}-1)=\frac{\sum_{i=1}^{k}S_{k,i}-k(k-1)\mu}{k-1}=\sum_{i=1}^{k}Y_{i}=\sum_{i=1}^{k}\bar{Y}_{ki}=T_{k,k}. \tag{15}$$

Furthermore, by (14) and (15), for any given $0<\varepsilon<1$ and $\delta>0$, when $k>k_{0}$ we obtain

$$\begin{aligned}
I\biggl(\frac{\mu}{\beta V_{k}}\sum_{i=1}^{k}\log C_{k,i}\le x\biggr)&\le I\biggl(\frac{T_{k,k}}{\delta_{k}\beta\sqrt{k(1+\varepsilon)}}\le x+\delta\biggr)+I\bigl(\bar{V}_{k}^{2}>(1+\varepsilon)k\delta_{k}^{2}\bigr)+I\Biggl(\bigcup_{i=1}^{k}\bigl(|X_{i}-\mu|>\sqrt{k}\bigr)\Biggr),\quad x\ge 0,\\
I\biggl(\frac{\mu}{\beta V_{k}}\sum_{i=1}^{k}\log C_{k,i}\le x\biggr)&\le I\biggl(\frac{T_{k,k}}{\delta_{k}\beta\sqrt{k(1-\varepsilon)}}\le x+\delta\biggr)+I\bigl(\bar{V}_{k}^{2}<(1-\varepsilon)k\delta_{k}^{2}\bigr)+I\Biggl(\bigcup_{i=1}^{k}\bigl(|X_{i}-\mu|>\sqrt{k}\bigr)\Biggr),\quad x<0,\\
I\biggl(\frac{\mu}{\beta V_{k}}\sum_{i=1}^{k}\log C_{k,i}\le x\biggr)&\ge I\biggl(\frac{T_{k,k}}{\delta_{k}\beta\sqrt{k(1-\varepsilon)}}\le x-\delta\biggr)-I\bigl(\bar{V}_{k}^{2}<(1-\varepsilon)k\delta_{k}^{2}\bigr)-I\Biggl(\bigcup_{i=1}^{k}\bigl(|X_{i}-\mu|>\sqrt{k}\bigr)\Biggr),\quad x\ge 0,\\
I\biggl(\frac{\mu}{\beta V_{k}}\sum_{i=1}^{k}\log C_{k,i}\le x\biggr)&\ge I\biggl(\frac{T_{k,k}}{\delta_{k}\beta\sqrt{k(1+\varepsilon)}}\le x-\delta\biggr)-I\bigl(\bar{V}_{k}^{2}>(1+\varepsilon)k\delta_{k}^{2}\bigr)-I\Biggl(\bigcup_{i=1}^{k}\bigl(|X_{i}-\mu|>\sqrt{k}\bigr)\Biggr),\quad x<0.
\end{aligned}$$

Therefore, to prove (13), for any $0<\varepsilon<1$ and $\delta_{1}>0$ it suffices to prove

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,I\biggl(\frac{T_{k,k}}{\beta\delta_{k}\sqrt{k}}\le\sqrt{1\pm\varepsilon}\,x\pm\delta_{1}\biggr)=\Phi\bigl(\sqrt{1\pm\varepsilon}\,x\pm\delta_{1}\bigr)\quad\text{a.s.}, \tag{16}$$

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,I\Biggl(\bigcup_{i=1}^{k}\bigl(|X_{i}-\mu|>\sqrt{k}\bigr)\Biggr)=0\quad\text{a.s.}, \tag{17}$$

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,I\bigl(\bar{V}_{k}^{2}>(1+\varepsilon)k\delta_{k}^{2}\bigr)=0\quad\text{a.s.}, \tag{18}$$

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,I\bigl(\bar{V}_{k}^{2}<(1-\varepsilon)k\delta_{k}^{2}\bigr)=0\quad\text{a.s.} \tag{19}$$

Firstly, we prove (16). By $E(Y^{2})<\infty$, we know $\lim_{x\to\infty}x^{2}P(|Y|>x)=0$, and by $E(Y)=0$ it follows that

$$|E(T_{k,k})|=\Biggl|E\Biggl(\sum_{i=1}^{k}\bar{Y}_{ki}\Biggr)\Biggr|=k|E\bar{Y}_{k1}|\le k\bigl|E\bigl(YI(|Y|>\sqrt{k})\bigr)\bigr|+k^{\frac32}E\bigl(I(|Y|>\sqrt{k})\bigr)\le\sqrt{k}\,E\bigl(Y^{2}I(|Y|>\sqrt{k})\bigr)+k^{\frac32}P\bigl(|Y|>\sqrt{k}\bigr)=o(\sqrt{k}).$$
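The first inequality above uses $E(Y)=0$ through the truncated mean; written out as an elementary intermediate step,

$$E\bar{Y}_{k1}=E\bigl(YI(|Y|\le\sqrt{k})\bigr)+\sqrt{k}\,P(Y>\sqrt{k})-\sqrt{k}\,P(Y<-\sqrt{k}),\qquad \bigl|E\bigl(YI(|Y|\le\sqrt{k})\bigr)\bigr|=\bigl|E\bigl(YI(|Y|>\sqrt{k})\bigr)\bigr|\le\frac{E\bigl(Y^{2}I(|Y|>\sqrt{k})\bigr)}{\sqrt{k}}.$$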

So, combining this with $\delta_{k}^{2}\to E(Y^{2})<\infty$, for any $\alpha_{1}>0$ and all sufficiently large $k$ we have

$$I\biggl(\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}}\le\sqrt{1\pm\varepsilon}\,x\pm\delta_{1}-\alpha_{1}\biggr)\le I\biggl(\frac{T_{k,k}}{\beta\delta_{k}\sqrt{k}}\le\sqrt{1\pm\varepsilon}\,x\pm\delta_{1}\biggr)\le I\biggl(\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}}\le\sqrt{1\pm\varepsilon}\,x\pm\delta_{1}+\alpha_{1}\biggr);$$

thus, by (4), we get

$$\liminf_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,I\biggl(\frac{T_{k,k}}{\beta\delta_{k}\sqrt{k}}\le\sqrt{1\pm\varepsilon}\,x\pm\delta_{1}\biggr)\ge\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,I\biggl(\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}}\le\sqrt{1\pm\varepsilon}\,x\pm\delta_{1}-\alpha_{1}\biggr)=\Phi\bigl(\sqrt{1\pm\varepsilon}\,x\pm\delta_{1}-\alpha_{1}\bigr)\quad\text{a.s.}, \tag{20}$$

$$\limsup_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,I\biggl(\frac{T_{k,k}}{\beta\delta_{k}\sqrt{k}}\le\sqrt{1\pm\varepsilon}\,x\pm\delta_{1}\biggr)\le\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,I\biggl(\frac{T_{k,k}-ET_{k,k}}{\beta\delta_{k}\sqrt{k}}\le\sqrt{1\pm\varepsilon}\,x\pm\delta_{1}+\alpha_{1}\biggr)=\Phi\bigl(\sqrt{1\pm\varepsilon}\,x\pm\delta_{1}+\alpha_{1}\bigr)\quad\text{a.s.} \tag{21}$$

Letting $\alpha_{1}\to 0$ in (20) and (21), (16) holds.

Now we prove (17). By $E(Y^{2})<\infty$, we know $\lim_{x\to\infty}x^{2}P(|Y|>x)=0$, so that

$$E\,I\Biggl(\bigcup_{i=1}^{k}\bigl(|Y_{i}|>\sqrt{k}\bigr)\Biggr)\le\sum_{i=1}^{k}P\bigl(|Y_{i}|>\sqrt{k}\bigr)\le kP\bigl(|Y|>\sqrt{k}\bigr)\to 0,\quad k\to\infty;$$

by the Toeplitz lemma, we get

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,E\,I\Biggl(\bigcup_{i=1}^{k}\bigl(|Y_{i}|>\sqrt{k}\bigr)\Biggr)=0; \tag{22}$$

hence, to prove (17), it suffices to prove

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\Biggl(I\Biggl(\bigcup_{i=1}^{k}\bigl(|Y_{i}|>\sqrt{k}\bigr)\Biggr)-E\Biggl[I\Biggl(\bigcup_{i=1}^{k}\bigl(|Y_{i}|>\sqrt{k}\bigr)\Biggr)\Biggr]\Biggr)=0\quad\text{a.s.} \tag{23}$$

Writing

$$Z_{k}=I\Biggl(\bigcup_{i=1}^{k}\bigl(|Y_{i}|>\sqrt{k}\bigr)\Biggr)-E\Biggl[I\Biggl(\bigcup_{i=1}^{k}\bigl(|Y_{i}|>\sqrt{k}\bigr)\Biggr)\Biggr],$$

for every $1\le 2k<l$, by the definition of a ρ⁻-mixing sequence we have

$$\begin{aligned}
|EZ_{k}Z_{l}|&=\Biggl|\operatorname{Cov}\Biggl(I\Biggl(\bigcup_{i=1}^{k}\bigl(|Y_{i}|>\sqrt{k}\bigr)\Biggr),I\Biggl(\bigcup_{i=1}^{l}\bigl(|Y_{i}|>\sqrt{l}\bigr)\Biggr)\Biggr)\Biggr|\\
&\le\Biggl|\operatorname{Cov}\Biggl(I\Biggl(\bigcup_{i=1}^{k}\bigl(|Y_{i}|>\sqrt{k}\bigr)\Biggr),I\Biggl(\bigcup_{i=1}^{l}\bigl(|Y_{i}|>\sqrt{l}\bigr)\Biggr)-I\Biggl(\bigcup_{i=2k+1}^{l}\bigl(|Y_{i}|>\sqrt{l}\bigr)\Biggr)\Biggr)\Biggr|+\Biggl|\operatorname{Cov}\Biggl(I\Biggl(\bigcup_{i=1}^{k}\bigl(|Y_{i}|>\sqrt{k}\bigr)\Biggr),I\Biggl(\bigcup_{i=2k+1}^{l}\bigl(|Y_{i}|>\sqrt{l}\bigr)\Biggr)\Biggr)\Biggr|\\
&\le E\Biggl|I\Biggl(\bigcup_{i=1}^{l}\bigl(|Y_{i}|>\sqrt{l}\bigr)\Biggr)-I\Biggl(\bigcup_{i=2k+1}^{l}\bigl(|Y_{i}|>\sqrt{l}\bigr)\Biggr)\Biggr|+\rho^{-}(k)\sqrt{\operatorname{Var}\Biggl(I\Biggl(\bigcup_{i=1}^{k}\bigl(|Y_{i}|>\sqrt{k}\bigr)\Biggr)\Biggr)}\sqrt{\operatorname{Var}\Biggl(I\Biggl(\bigcup_{i=2k+1}^{l}\bigl(|Y_{i}|>\sqrt{l}\bigr)\Biggr)\Biggr)}\\
&\le E\Biggl[I\Biggl(\bigcup_{i=1}^{2k}\bigl(|Y_{i}|>\sqrt{l}\bigr)\Biggr)\Biggr]+C\rho^{-}(k)\le\sum_{i=1}^{2k}P\bigl(|Y_{i}|>\sqrt{l}\bigr)+C\rho^{-}(k)\le 2kP\bigl(|Y|>\sqrt{l}\bigr)+C\rho^{-}(k)\le C\Bigl(\frac{k}{l}+\rho^{-}(k)\Bigr);
\end{aligned}$$

so, by Lemma 2.4, (23) holds. Combining this with (22), we conclude that (17) holds.

Next we prove (18). By $E(\bar{V}_{k}^{2})=k\delta_{k}^{2}$, $\bar{V}_{k}^{2}=\bar{V}_{k,1}^{2}+\bar{V}_{k,2}^{2}$, $E(\bar{V}_{k,l}^{2})=k\delta_{k,l}^{2}$ and $\delta_{k,l}^{2}\le\delta_{k}^{2}$, $l=1,2$, we have

$$I\bigl(\bar{V}_{k}^{2}>(1+\varepsilon)k\delta_{k}^{2}\bigr)=I\bigl(\bar{V}_{k}^{2}-E(\bar{V}_{k}^{2})>\varepsilon k\delta_{k}^{2}\bigr)\le I\bigl(\bar{V}_{k,1}^{2}-E(\bar{V}_{k,1}^{2})>\varepsilon k\delta_{k}^{2}/2\bigr)+I\bigl(\bar{V}_{k,2}^{2}-E(\bar{V}_{k,2}^{2})>\varepsilon k\delta_{k}^{2}/2\bigr)\le I\Bigl(\bar{V}_{k,1}^{2}>\Bigl(1+\frac{\varepsilon}{2}\Bigr)k\delta_{k,1}^{2}\Bigr)+I\Bigl(\bar{V}_{k,2}^{2}>\Bigl(1+\frac{\varepsilon}{2}\Bigr)k\delta_{k,2}^{2}\Bigr);$$

therefore, since $\varepsilon>0$ is arbitrary, to prove (18) it suffices to prove

$$\lim_{n\to\infty}\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,I\bigl(\bar{V}_{k,l}^{2}>(1+\varepsilon)k\delta_{k,l}^{2}\bigr)=0\quad\text{a.s.},\quad l=1,2. \tag{24}$$

When $l=1$, for given $\varepsilon>0$, let $f$ be a bounded function with bounded continuous derivative such that

$$I(x>1+\varepsilon)\le f(x)\le I\Bigl(x>1+\frac{\varepsilon}{2}\Bigr). \tag{25}$$

Under the conditions

$$E(\bar{V}_{k,1}^{2})=k\delta_{k,1}^{2},\qquad E(Y^{2})<\infty,\qquad E\bigl(Y^{2}I(Y\ge 0)\bigr)>0,$$

by the Markov inequality and Lemma 2.2, we get

$$P\Bigl(\bar{V}_{k,1}^{2}>\Bigl(1+\frac{\varepsilon}{2}\Bigr)k\delta_{k,1}^{2}\Bigr)=P\Bigl(\bar{V}_{k,1}^{2}-E(\bar{V}_{k,1}^{2})>\frac{\varepsilon}{2}k\delta_{k,1}^{2}\Bigr)\le\frac{C\,E(\bar{V}_{k,1}^{2}-E(\bar{V}_{k,1}^{2}))^{2}}{k^{2}}\le\frac{C\sum_{i=1}^{k}E(\bar{Y}_{ki}^{2}I(\bar{Y}_{ki}\ge 0))^{2}}{k^{2}}\le\frac{C\,E\bar{Y}_{k1}^{4}I(\bar{Y}_{k1}\ge 0)}{k}\le\frac{C\{EY^{4}I(0\le Y\le\sqrt{k})+k^{2}P(Y>\sqrt{k})\}}{k}. \tag{26}$$

Because $E(Y^{2})<\infty$ implies $\lim_{x\to\infty}x^{2}P(|Y|>x)=0$, we have

$$EY^{4}I(0\le Y\le\sqrt{k})=\int_{0}^{\infty}P\bigl(|Y|I(0\le Y\le\sqrt{k})\ge t\bigr)4t^{3}\,dt\le C\int_{0}^{\sqrt{k}}P\bigl(|Y|\ge t\bigr)t^{3}\,dt=\int_{0}^{\sqrt{k}}o(1)\,t\,dt=o(1)k;$$

thus, combining this with (26),

$$P\Bigl(\bar{V}_{k,1}^{2}>\Bigl(1+\frac{\varepsilon}{2}\Bigr)k\delta_{k,1}^{2}\Bigr)\to 0,\quad k\to\infty.$$

Therefore, from (5), (25), and the Toeplitz lemma,

$$\begin{aligned}
0&\le\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,I\bigl(\bar{V}_{k,1}^{2}>(1+\varepsilon)k\delta_{k,1}^{2}\bigr)\le\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,f\Bigl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\Bigr)\\
&=\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,E\Bigl(f\Bigl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\Bigr)\Bigr)+\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\Bigl(f\Bigl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\Bigr)-E\Bigl(f\Bigl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\Bigr)\Bigr)\Bigr)\\
&\le\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,E\Bigl(I\Bigl(\bar{V}_{k,1}^{2}>\Bigl(1+\frac{\varepsilon}{2}\Bigr)k\delta_{k,1}^{2}\Bigr)\Bigr)+\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\Bigl(f\Bigl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\Bigr)-E\Bigl(f\Bigl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\Bigr)\Bigr)\Bigr)\\
&=\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\,P\Bigl(\bar{V}_{k,1}^{2}>\Bigl(1+\frac{\varepsilon}{2}\Bigr)k\delta_{k,1}^{2}\Bigr)+\frac{1}{D_{n}}\sum_{k=1}^{n}d_{k}\Bigl(f\Bigl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\Bigr)-E\Bigl(f\Bigl(\frac{\bar{V}_{k,1}^{2}}{k\delta_{k,1}^{2}}\Bigr)\Bigr)\Bigr)\to 0\quad\text{a.s.},\ n\to\infty;
\end{aligned}$$

hence (24) holds for $l=1$. Similarly we can prove (24) for $l=2$, so (18) is true. By the same method used to prove (18), we can prove (19). This completes the proof of Theorem 1. □

Authors’ information

Xili Tan, Professor, Ph.D., works in the field of probability and statistics. Wei Liu, Master, works in the field of probability and statistics.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (11171003), the Foundation of Jilin Educational Committee of China (2015-155) and the Innovation Talent Training Program of Science and Technology of Jilin Province of China (20180519011JH).

Competing interests

The authors declare that there is no conflict of interest regarding the publication of this paper. We confirm that the received funding mentioned in the “Acknowledgment” section did not lead to any conflict of interests regarding the publication of this manuscript. We declare that we do not have any commercial or associated interest that represents a conflict of interest in connection with the work submitted.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Xili Tan, Email: tanxl0832@sina.com.

Wei Liu, Email: 120671554@qq.com.

References

1. Brosamler G.A. An almost everywhere central limit theorem. Math. Proc. Camb. Philos. Soc. 1988;104(3):561–574. doi: 10.1017/S0305004100065750.
2. Schatte P. On strong versions of the central limit theorem. Math. Nachr. 1988;137(4):249–256. doi: 10.1002/mana.19881370117.
3. Khurelbaatar G. A note on the almost sure central limit theorem for the product of partial sums. IMA Preprint Series 1968, University of Minnesota, Minnesota; 2004.
4. Miao Y. Central limit theorem and almost sure central limit theorem for the product of some partial sums. Proc. Indian Acad. Sci. Math. Sci. 2008;118(2):289–294. doi: 10.1007/s12044-008-0021-9.
5. Zhang L.X., Wang X.Y. Convergence rates in the strong laws of asymptotically negatively associated random fields. Appl. Math. J. Chin. Univ. Ser. B. 1999;14(4):406–416. doi: 10.1007/s11766-999-0070-6.
6. Zhou H. A note on the almost sure central limit theorem of the mixed sequences. J. Zhejiang Univ. Sci. Ed. 2005;32(5):503–505.
7. Tan X.L., Zhang Y. An almost sure central limit theorem for products of partial sums for ρ⁻-mixing sequences. J. Inequal. Appl. 2012;2012:51. doi: 10.1186/1029-242X-2012-51.
8. Chandrasekharan K., Minakshisundaram S. Typical Means. Oxford: Oxford University Press; 1952.
9. Wang J.F., Lu F.B. Inequalities of maximum of partial sums and weak convergence for a class of weak dependent random variables. Acta Math. Sin. 2006;22(3):693–700. doi: 10.1007/s10114-005-0601-x.
10. Zhang L.X. Central limit theorems for asymptotically negatively associated random fields. Acta Math. Sin. 2000;16(4):691–710. doi: 10.1007/s101140000084.
11. Peligrad M., Shao Q.M. A note on the almost sure central limit theorem for weakly dependent random variables. Stat. Probab. Lett. 1995;22:131–136. doi: 10.1016/0167-7152(94)00059-H.
12. Billingsley P. Convergence of Probability Measures. New York: Wiley; 1968.
13. Ledoux M., Talagrand M. Probability in Banach Spaces. New York: Springer; 1991.
14. Wu Q. Probability Limit Theorems of Mixing Sequences. Beijing: Science Press; 2006.

