Abstract
Consider the linear regression model
y_i = x_i^T β + e_i, where the e_i are errors with general dependence. Bahadur representations of M-estimators of the parameter β are given, by which the theory of M-estimation in linear regression models is asymptotically unified. As applications, asymptotic normality and rates of strong convergence are investigated when the errors are m-dependent, martingale differences, or weakly dependent.
Keywords: Linear regression models, M-estimate, Bahadur representation, Normal distribution, Rate of strong convergence
Introduction
Consider the following linear regression model:
y_i = x_i^T β + e_i,  i = 1, 2, …, n, | 1.1 |
where β is an unknown parameter vector, x_i denotes the ith row of the design matrix X, and the e_i are stationary dependent errors with a common distribution.
An M-estimate of β is defined as any value of β minimizing
∑_{i=1}^n ρ(y_i − x_i^T β) | 1.2 |
for a suitable choice of the function ρ, or any solution for β of the estimating equation
∑_{i=1}^n x_i ψ(y_i − x_i^T β) = 0 | 1.3 |
for a suitable choice of ψ.
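For concreteness, the following minimal sketch computes an M-estimate by numerically minimizing (1.2) with the Huber ρ, whose derivative ψ makes (1.3) the first-order condition. The simulated data, the tuning constant c = 1.345, and the use of scipy's BFGS optimizer are illustrative assumptions, not part of the paper.

```python
# A minimal sketch of M-estimation via (1.2) with the Huber rho; the data,
# the tuning constant c, and the optimizer are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p, c = 200, 3, 1.345
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.standard_t(df=3, size=n)   # heavy-tailed errors

def rho(u):
    """Huber loss: quadratic near 0, linear in the tails."""
    return np.where(np.abs(u) <= c, 0.5 * u**2, c * np.abs(u) - 0.5 * c**2)

def objective(beta):
    return rho(y - X @ beta).sum()                 # criterion (1.2)

beta0 = np.linalg.lstsq(X, y, rcond=None)[0]       # least squares start
beta_hat = minimize(objective, beta0, method="BFGS").x
print(beta_hat)                                    # close to beta_true
```

Since this ρ is convex and differentiable, any minimizer of (1.2) also solves the estimating equation (1.3).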
There is a large body of statistical literature on linear regression models with independent and identically distributed (i.i.d.) random errors; see, e.g., Babu [1], Bai et al. [2], Chen [7], Chen and Zhao [8], He and Shao [24], Gervini and Yohai [23], Huber and Ronchetti [28], Xiong and Joseph [50], Salibian-Barrera et al. [44]. Recently, linear regression models with serially correlated errors have attracted increasing attention from statisticians; see, for example, Li [33], Wu [49], Maller [38], Pere [41], Hu [25, 26]. Over the last 40 years, M-estimators in linear regression models have been investigated by many authors. Koul [30] discussed the asymptotic behavior of a class of M-estimators in the model (1.1) with long-range dependent errors. Wu [49] and Zhou and Shao [52] discussed the model (1.1) with errors generated from i.i.d. innovations and derived strong Bahadur representations of M-estimators and a central limit theorem. Zhou and Wu [53] considered the model (1.1) with long-memory and heavy-tailed errors and obtained asymptotic results including consistency of robust estimates. Fan et al. [20] investigated the model (1.1) with dependent errors and established moderate deviations and strong Bahadur representations for M-estimators. Wu [47] discussed strong consistency of an M-estimator in the model (1.1) for negatively associated samples. Fan [19] considered the model (1.1) with φ-mixing errors and established moderate deviations for the M-estimators. In addition, Berlinet et al. [4], Boente and Fraiman [5], Chen et al. [6], Cheng and Van Ness [9], Gannaz [22], Lô and Ronchetti [37], Valdora and Yohai [45] and Yang [51] have studied asymptotic properties of M-estimators in nonlinear models. However, a unified theory of M-estimation in linear regression models with more general errors has not yet been developed.
In this paper, we assume that
e_i = G(…, ε_{i−1}, ε_i),  i ∈ Z, | 1.4 |
where G is a measurable function such that e_i is a proper random variable, and the innovations ε_i, i ∈ Z (where Z is the set of integers), are very general random variables, which may be m-dependent, martingale differences, weakly dependent, and so on.
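As a concrete instance of (1.4), the sketch below generates errors through an AR(1)-type causal shift, so that G(…, ε_{i−1}, ε_i) = ∑_{j≥0} 0.5^j ε_{i−j}; the coefficient 0.5, the Gaussian innovations, and the burn-in length are illustrative assumptions.

```python
# A minimal sketch of errors of the form (1.4): an AR(1)-type causal shift,
# e_i = 0.5 * e_{i-1} + eps_i = sum_{j>=0} 0.5**j * eps_{i-j}.
# The coefficient, innovation law, and burn-in length are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, burn = 500, 200
eps = rng.normal(size=n + burn)      # i.i.d. here; the paper allows more general eps

e = np.zeros(n + burn)
for i in range(1, n + burn):
    e[i] = 0.5 * e[i - 1] + eps[i]   # one evaluation of G per time point
e = e[burn:]                         # drop burn-in, approximately stationary
```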
We aim to develop a unified theory of M-estimation in the linear regression model. We use the idea of Wu [49] to study the Bahadur representation of the M-estimator, and we extend some of his results to general errors. The paper is organized as follows. In Sect. 2, weak and strong linear representations of an M-estimate of the vector regression parameter β in the model (1.1) are presented. Section 3 contains applications of our results to m-dependent, weakly dependent, and martingale difference errors. In Sect. 4, proofs of the main results are given.
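For orientation only: in the classical i.i.d. setting, a Bahadur (linear) representation of the M-estimator takes the following well-known form (cf. Rao and Zhao [42]; He and Shao [24]). The notation below is the standard one and is shown as an assumed reference point, not as the exact statement of Theorem 2.1, which generalizes this type of expansion to the dependent errors (1.4).

```latex
% Classical i.i.d. Bahadur representation, shown only as a reference point;
% Theorem 2.1 below extends this type of expansion to dependent errors.
\[
  \hat{\beta}_n - \beta
    = \frac{1}{\lambda}\, S_n^{-1} \sum_{i=1}^{n} x_i\, \psi(e_i) + R_n,
  \qquad
  \lambda = \frac{d}{dt}\, E\,\psi(e_1 + t)\Big|_{t=0},
  \quad
  S_n = \sum_{i=1}^{n} x_i x_i^{\top},
\]
% with a remainder R_n that is negligible at a suitable rate.
```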
Main results
In this section, we investigate weak and strong linear representations of an M-estimate of the vector regression parameter β in the model (1.1). Without loss of generality, we assume that the true parameter is β = 0. We start with some notation and assumptions.
For a vector v, let |v| denote its Euclidean norm. A random vector V is said to be in 𝓛^q if E|V|^q < ∞. Let S_n = ∑_{i=1}^n x_i x_i^T, and assume that S_n is positive definite for large enough n. Let z_i = S_n^{−1/2} x_i and θ = S_n^{1/2} β. Then the model (1.1) can be written as
y_i = z_i^T θ + e_i,  i = 1, 2, …, n, | 2.1 |
with ∑_{i=1}^n z_i z_i^T = I_p, where I_p is the identity matrix of order p. Assume that ρ has derivative ψ. For l ≥ 0 and a function f, write f ∈ C^l if f has derivatives up to the lth order and f^(l) is continuous. Define the function
| 2.2 |
where , let (ε_i′) be an i.i.d. copy of (ε_i), and .
Throughout the paper, we use the following assumptions.
(A1) ρ is a convex function, .
(A2) has a strictly positive derivative at .
(A3) is continuous at .
(A4) .
(A5) There exists a such that
| 2.3 |
(A6)–(A7) Let . For some and ,
| 2.4 |
| 2.5 |
| 2.6 |
Remark 1
Conditions (A1)–(A5) and (A6) are imposed in the theory of M-estimation for linear regression models with dependent errors (Wu [49]; Zhou and Shao [52]). Condition (2.6) is similar to (7) of Wu [49]. The quantity measures the difference between the contribution of and that of its copy in predicting , whereas measures the contribution of in predicting given the copy of : .
If the ε_i are i.i.d., then (A6) and (A7) hold. In other settings, (A6) and (A7) are also easily satisfied. The following proposition provides some sufficient conditions for (A6) and (A7).
Proposition 2.1
Let and be the conditional distribution and density function of at u given , respectively. Let and be the density function of and , respectively.
Let , and . If , then (A6) holds.
- Let
and . If and , then assumption (A7) holds.
Proof
(1) By the conditions of (1), we have
| 2.7 |
Namely (A6) holds.
(2) (A7) follows from
and
Hence, the proposition is proved. □
Define the M-processes
where
Theorem 2.1
Let be a sequence of positive numbers such that and . If (A1)–(A5) hold, and (A6) and (A7) hold with , then
| 2.8 |
where
Corollary 2.1
Assume that (A1)–(A5) hold, and (A6) and (A7) hold with . If as , , then, for ,
| 2.9 |
Moreover, if, as , for some , then
| 2.10 |
Remark 2
If the errors e_i are i.i.d., then follows from (3.2) of Rao and Zhao [42]. If the innovations ε_i are i.i.d., then follows from Theorem 1 of Wu [49] and Zhou and Shao [52]. If , where the function satisfies some conditions and the ε_i are i.i.d., then follows from Theorem 2.2 of Fan et al. [20]. If the errors are negatively associated (NA), then follows from Theorem 1 of Wu [47]. Therefore the condition is not restrictive, and we do not discuss it further in this paper.
Theorem 2.2
Assume that (A1)–(A3) and (A5) hold, and (A6) and (A7) hold with . Let be the minimum eigenvalue of , , , and . If and , then
where , and
Corollary 2.2
Assume that and as , and . Under the conditions of Theorem 2.2, we have:
;
,
where is the minimizer of (1.2).
Remark 3
From the above results, we easily obtain the corresponding conclusions of Wu [49].
From the corollary below, we derive only convergence rates of . Unfortunately, we cannot give a law of the iterated logarithm for , which remains an open problem.
Corollary 2.3
Under the conditions of Corollary 2.2, we have
Proof
Note that and as ; we have
and
By Corollary 2.2, we have
| 2.11 |
and
| 2.12 |
Applications
In the following three subsections, we investigate some applications of our results. In Sect. 3.1 we consider m-dependent errors, in Sect. 3.2 weakly dependent errors, and in Sect. 3.3 martingale difference errors.
m-dependent process
In this subsection, we first show that an m-dependent sequence satisfies conditions (A6) and (A7), and then obtain asymptotic normality and strong convergence rates for M-estimators of the parameter. Koul [30] discussed the asymptotic behavior of a class of M-estimators in the model (1.1) with long-range dependent errors , where the innovations are i.i.d. Here we assume that is an m-dependent sequence, whose definition is given in Example 2.8.1 of Lehmann [32]. For m-dependent sequences and processes, there are many results (e.g., see Hu et al. [27], Romano and Wolf [43] and Valk [46]). A simple way to generate such a sequence is sketched below.
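A minimal sketch, assuming the standard moving-average construction: a window of m + 1 consecutive i.i.d. innovations yields an m-dependent stationary sequence, since terms more than m apart share no innovation. The weights and m = 2 are illustrative choices.

```python
# A minimal sketch of an m-dependent sequence: an MA(m) of i.i.d. innovations.
# Terms more than m apart involve disjoint innovations, hence are independent.
import numpy as np

rng = np.random.default_rng(2)
n, m = 1000, 2
a = np.array([1.0, 0.6, 0.3])            # m + 1 illustrative weights
eps = rng.normal(size=n + m)
e = np.convolve(eps, a, mode="valid")    # e_i = sum_{j=0}^{m} a[j] * eps_{i+m-j}
assert e.shape == (n,)
```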
Proposition 3.1
Let in (1.4) be an m-dependent sequence. Then (A6) and (A7) hold.
Proof
Since is an m-dependent sequence, we have
| 3.1 |
and
| 3.2 |
Corollary 3.1
Assume that (A1)–(A5) hold. If and for some as , and , then
In order to prove Corollary 3.1, we give the following lemmas.
Lemma 3.1
(Lehmann [32])
Let be a stationary m-dependent sequence of random variables with and , and . Then
where .
Using the argument of Lemma 3.1, we easily obtain the following result. Here we omit the proof.
Lemma 3.2
Let be a stationary m-dependent sequence of random variables with and , and . Then
where .
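Lemma 3.1 is the classical central limit theorem for m-dependent sequences. In its standard form (Lehmann [32]), the limiting variance is γ(0) + 2∑_{k=1}^{m} γ(k), where γ(k) denotes the lag-k autocovariance. A quick Monte Carlo sanity check of that form, under that assumption and reusing the illustrative MA(2) construction from the sketch in Sect. 3.1:

```python
# Monte Carlo check of the m-dependent CLT variance sigma^2 = gamma(0)
# + 2 * sum_{k=1}^{m} gamma(k), assuming the classical form of Lemma 3.1
# and the illustrative MA(2) weights from the earlier sketch.
import numpy as np

rng = np.random.default_rng(5)
n, m, reps = 5000, 2, 2000
a = np.array([1.0, 0.6, 0.3])

# Autocovariances of the MA(2) with unit-variance innovations:
# gamma(k) = sum_j a[j] * a[j + k].
gamma = [float(np.dot(a[: len(a) - k], a[k:])) for k in range(m + 1)]
sigma2 = gamma[0] + 2.0 * sum(gamma[1:])

sums = np.empty(reps)
for r in range(reps):
    eps = rng.normal(size=n + m)
    e = np.convolve(eps, a, mode="valid")
    sums[r] = e.sum() / np.sqrt(n)
print(np.var(sums), sigma2)    # the two values should nearly agree
```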
Proof of Corollary 3.1
By (2.10), we have
| 3.3 |
Since is a stationary m-dependent sequence, so is . Let , . Then and
Therefore, by and , we have
| 3.4 |
Thus the corollary follows from Lemma 3.2, (3.3) and (3.4). □
Corollary 3.2
Assume that (A1)–(A5) hold. If and as , and , then
Proof
The corollary follows from Proposition 3.1 and Corollary 2.2. □
Weakly dependent processes
In this subsection, we assume that the errors are weakly dependent in the sense of Doukhan and Louhichi [14] (see also Dedecker et al. [11]). In 1999, Doukhan and Louhichi proposed a new notion of weak dependence which focuses on covariances rather than on the total variation distance between joint distributions and the product of the corresponding marginals. It has been shown that this concept is more general than mixing and includes, under natural conditions on the process parameters, essentially all classes of processes of interest in statistics. Therefore, many researchers have studied weakly dependent and related processes and obtained many sharp results; see, for example, Doukhan and Louhichi [14], Dedecker and Doukhan [10], Dedecker and Prieur [12], Doukhan and Neumann [16], Doukhan and Wintenberger [17], Bardet et al. [3], Doukhan and Wintenberger [18], and Doukhan et al. [13]. However, only a few authors (Hwang and Shin [29]; Nze et al. [40]) have investigated regression models with weakly dependent errors, and robust estimation for regression models with weakly dependent errors has not been studied. To give the definition of weak dependence, let us consider a process with values in a Banach space . For , , we define the Lipschitz modulus of h,
| 3.5 |
where we have the -norm, i.e., .
Definition 1
(Doukhan and Louhichi [14])
A process with values in is called a weakly dependent process if, for some classes of functions :
as .
According to this definition, mixing sequences, associated sequences (positively or negatively associated), Gaussian sequences, Bernoulli shifts, and Markovian models or time series bootstrap processes with discrete innovations are weakly dependent (Doukhan et al. [15]); an empirical illustration of the defining covariance decay is sketched below.
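A rough empirical illustration of the definition, under illustrative assumptions rather than as a proof: for a Gaussian AR(1) Bernoulli shift, the covariance between bounded Lipschitz functions of a "past" block and a "future" block shrinks as the gap r between the blocks grows. The AR coefficient, block length, and the choice f = g = tanh of the block mean are all illustrative.

```python
# Empirical illustration of covariance-based weak dependence: for an AR(1)
# shift, Cov(f(past block), g(future block)) decays as the gap r grows.
# The process, block length u, and f = g = tanh(block mean) are illustrative.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(3)
n = 100_000
eps = rng.normal(size=n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.5 * x[i - 1] + eps[i]

u = 5                                   # block length

def f(blocks):                          # bounded Lipschitz function of a block
    return np.tanh(blocks.mean(axis=-1))

half = n // 2
past = f(sliding_window_view(x[:half], u))           # blocks x_i, ..., x_{i+u-1}
for r in (1, 5, 10, 20):
    future = f(sliding_window_view(x[u + r : half + u + r], u))
    print(r, np.cov(past, future)[0, 1])             # shrinks toward 0 in r
```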
From now on, assume that the classes of functions contain functions bounded by 1. Distinct choices of the function Ψ yield η- and λ-weak dependence coefficients as follows (Doukhan et al. [15]):
| 3.6 |
In Corollary 3.3, we consider only λ- and η-weak dependence. Let be λ- or η-weakly dependent, and assume that g satisfies: for each , if satisfy for each index
| 3.7 |
Lemma 3.3
(Dedecker et al. [11])
Assume that g satisfies the condition (3.7) with and some sequence such that . Assume that with for some . Then:
(1) If the process is λ-weakly dependent with coefficients , then is λ-weakly dependent with coefficients
| 3.8 |
(2) If the process is η-weakly dependent with coefficients , then is η-weakly dependent and there exists a constant such that
Lemma 3.4
(Bardet et al. [3])
Let be a sequence of -valued random variables. Assume that there exists some constant such that . Let h be a function from to R such that and for , there exist a in and such that
| 3.9 |
Now we define the sequence by . Then:
(1) If the process is λ-weakly dependent with coefficients , then is also with coefficients
| 3.10 |
(2) If the process is ζ-weakly dependent with coefficients , so is with coefficients .
Lemma 3.5
(Dedecker et al. [11])
Let be a centered and stationary real-valued sequence with , , and . If for , then as .
Corollary 3.3
Let be λ-weakly dependent with coefficients for some , and for some . Assume that , and, for , there exists a constant such that
| 3.11 |
Under the conditions of Corollary 2.1, we have
| 3.12 |
where .
Proof
Note that is λ-weakly dependent. By Lemma 3.3, we find that is λ-weakly dependent with coefficients
| 3.13 |
from (3.8) and Proposition 3.1 in Chap. 3 (Dedecker et al. [11]).
Let , , and . Then . Choose , in (3.9), and by (3.11), we have
| 3.14 |
for and . Therefore, by Lemma 3.4, is λ-weakly dependent with coefficients
| 3.15 |
By Corollary 2.1, we have
| 3.16 |
By (3.13) and (3.15), there exist and for some such that
| 3.17 |
for large enough r and with .
By Lemma 3.5 and (3.16)–(3.17), we have
where . Using the Cramér–Wold device, we complete the proof of Corollary 3.3. □
Lemma 3.6
(Dedecker et al. [11])
Suppose that are stationary real-valued random variables with and for all . Let be one of the following functions:
| 3.18 |
for some . We assume that there exist constants and a nonincreasing sequence of real coefficients such that, for all u-tuples and all v-tuples with the following inequality is fulfilled:
| 3.19 |
where
| 3.20 |
Let and . If , then
| 3.21 |
Corollary 3.4
Let be η-weakly dependent with coefficients for some , and for some . Assume that and (3.11) hold. Under the conditions of Corollary 2.2 with replaced by , and , we have:
for ;
for .
Proof
Let . Then for as
| 3.22 |
Therefore, there exists some such that
| 3.23 |
Similar to the proofs of (3.13) and (3.15), we easily obtain
| 3.24 |
where
| 3.25 |
| 3.26 |
Let and
| 3.27 |
Thus (3.19) holds. Since , there exist and for some and such that
| 3.28 |
Thus
| 3.29 |
| 3.30 |
By Lemma 3.6 and Corollary 2.3, we have
| 3.31 |
Therefore, by Corollary 2.3, (3.23) and (3.31), we complete the proof of Corollary 3.4. □
Linear martingale difference processes
In this subsection, we investigate martingale difference errors . We shall provide some sufficient conditions for (A6) and (A7) and give a central limit theorem and strong convergence rates.
Let be a martingale difference sequence, and let be real numbers such that exists. It is well known that the theory of martingales provides a natural unified method for dealing with limit theorems; consequently, martingale difference models have attracted great interest. Liang and Jing [34] considered the partially linear model with errors formed as linear combinations of martingale differences and obtained asymptotic normality of the least squares estimator of the parameter. Nelson [39] gave conditions for the pointwise consistency of weighted least squares estimators in multivariate regression models with martingale difference errors. Lai [31] investigated stochastic regression models with martingale difference errors and obtained strong consistency and asymptotic normality of the least squares estimate of the parameter. A simulation sketch of this error model is given below.
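A minimal sketch of this error model under illustrative assumptions: an ARCH(1)-type martingale difference sequence ξ_i = ε_i (0.5 + 0.3 ξ_{i−1}²)^{1/2}, which satisfies E[ξ_i | past] = 0, is fed into a truncated linear filter with summable coefficients a_j = 0.7^j. All parameter choices are illustrative.

```python
# A minimal sketch of errors that are a linear process in martingale
# differences: xi_i = eps_i * sqrt(0.5 + 0.3 * xi_{i-1}**2) is an ARCH(1)-type
# martingale difference (E[xi_i | past] = 0), and e_i = sum_j a_j * xi_{i-j}
# with truncated coefficients a_j = 0.7**j. All choices are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n, lag = 2000, 30
eps = rng.normal(size=n + lag)

xi = np.zeros(n + lag)                    # martingale difference sequence
for i in range(1, n + lag):
    xi[i] = eps[i] * np.sqrt(0.5 + 0.3 * xi[i - 1] ** 2)

a = 0.7 ** np.arange(lag + 1)             # absolutely summable coefficients
e = np.convolve(xi, a, mode="valid")      # linear process errors, length n
assert e.shape == (n,)
```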
Let be the distribution function of and let be its density.
Proposition 3.2
Suppose that , and , where . If , then and .
Proof
Let , and
| 3.32 |
where . By the Schwarz inequality, we have
| 3.33 |
Note that
| 3.34 |
and
| 3.35 |
Let . By the Schwarz inequality, we have
| 3.36 |
By and Chatterji’s inequality (Lin and Bai [35]), we have
| 3.37 |
By (3.33)–(3.37) and the Schwarz inequality, we have
| 3.38 |
Note that implies and ; by (3.33) and (3.38), we have
| 3.39 |
The general case follows similarly. Similar to the proof of (3.39), we can easily prove the other results. □
From Propositions 2.1 and 3.2, (A6) and (A7) hold. Hence we can obtain the following two corollaries from Corollaries 2.1 and 2.2. In order to prove them, we first give the following lemma.
Lemma 3.7
(Liptser and Shiryayev [36])
Let be a strictly stationary sequence on a probability space , and be a σ-algebra of invariant sets of the sequence ξ and . For a certain , let and , where . Then
where the random variable Z has the characteristic function , and .
Corollary 3.5
Assume that (A1)–(A5) hold, and for some as , . Under the conditions of Proposition 3.2, and , we have
| 3.40 |
where the random variable Z has the characteristic function , and .
Proof
By Proposition 2.1, Proposition 3.2 and Corollary 2.1, we have
| 3.41 |
By and , we have
| 3.42 |
| 3.43 |
and
This completes the proof of Corollary 3.5. □
By Proposition 2.1, Proposition 3.2 and Corollary 2.2, we easily obtain the following result. Here we omit the proof.
Corollary 3.6
Assume that (A1)–(A5) hold, and as , . Under the conditions of Proposition 3.2, we have
Proofs of the main results
For the proofs of Theorems 2.1 and 2.2, we need the following lemmas.
Lemma 4.1
(Freedman [21])
Let τ be a stopping time, and K a positive real number. Suppose that , where are measurable random variables and . Then, for all positive real numbers a and b,
Lemma 4.2
Let
| 4.1 |
Assume that (A5) and (A6) hold. Then
| 4.2 |
Proof
Since , and , we have . For any positive sequence , let
and
By the monotonicity of ψ and , we have
| 4.3 |
By (4.3), the -inequality and (A3), we have
Thus
| 4.4 |
By the Chebyshev inequality,
| 4.5 |
Similarly,
| 4.6 |
Let . For , define
| 4.7 |
Since , it suffices to prove that Lemma 4.2 holds with replaced by .
Let and
| 4.8 |
Note that
| 4.9 |
By (4.9), for large enough n, we have
| 4.10 |
Let the projections . Since
| 4.11 |
Note that are bounded martingale differences. By Lemma 4.1 and (4.10), for , we have
| 4.12 |
Let and . Then , where the symbol # denotes the number of elements of the set . It is easy to show
| 4.13 |
By (4.12) and (4.13), for , we have
| 4.14 |
By (4.5), (4.6) and (4.14), we have
| 4.15 |
For a, let and . For a vector , let .
By (A5), for and large n, we have
Let . By condition (A5), the Markov inequality and , we have
| 4.16 |
Note that , which implies . Thus
| 4.17 |
Without loss of generality, assume that in the following proof.
Let . Then and . Since ψ is nondecreasing,
Note that
Namely
Therefore
| 4.18 |
| 4.19 |
Since , (4.2) immediately follows from (4.15) and (4.19). □
Lemma 4.3
Assume that the processes . Let . Then
| 4.20 |
where .
Proof
Since
we have
| 4.21 |
By the Jensen inequality, we have
| 4.22 |
That is,
| 4.23 |
Note that
| 4.24 |
and
| 4.25 |
By (4.24), (4.25) and the Jensen inequality, we have
| 4.26 |
□
Remark 4
If the innovations are i.i.d., then . In this case, the above lemma reduces to Theorem 1 of Wu [48].
Lemma 4.4
Let be a sequence of positive numbers such that and . If (A6)–(A7) hold, then
| 4.27 |
where
Proof
Let be a nonempty set and , and , with vector . Write
In the following, we will prove that
| 4.28 |
uniformly over .
In fact, let
and
Then , and are martingale differences. By the orthogonality of martingale differences and the stationarity of , and Lemma 4.3, we have
| 4.29 |
By Lemma 4.3, and the -inequality, for , we have
| 4.30 |
where
Since , we have
| 4.31 |
By the conditions (A6), (A7) and (4.29)–(4.31), we have
Let . By , we note that and . By (4.28), we have
| 4.32 |
Since
| 4.33 |
Lemma 4.5
Let be a sequence of bounded positive numbers, and suppose that there exists a constant such that holds for all large n. Let and . Assume that (A5) and hold. Then, as ,
where .
Proof
Let
and
Since and , by the argument of Lemma 4.2 and the Borel–Cantelli lemma, we have
| 4.34 |
Similar to the proof of (4.12), we have
| 4.35 |
Let and . Then . By (4.34) and (4.35), for , we have
| 4.36 |
Therefore,
| 4.37 |
Since and in (4.17) can be replaced by , the lemma follows from .
Lemma 4.6
Let be a sequence of bounded positive numbers, and suppose that there exists a constant such that and hold for all large n. Let . Assume that (A6), (A7) and hold. Then
| 4.38 |
and, as , for any ,
| 4.39 |
where .
Proof
Let , and
| 4.40 |
Note that
| 4.41 |
It is easy to see that the argument in the proof of Lemma 4.4 implies that there exists a positive constant such that
| 4.42 |
holds uniformly over . Therefore (4.38) holds.
Let , where
| 4.43 |
For a positive integer , write its dyadic expansion , where , and . By the Schwarz inequality, we have
| 4.44 |
Thus
| 4.45 |
Since and , (4.42) implies that
| 4.46 |
Lemma 4.7
Under the conditions of Theorem 2.2, we have:
;
for and , where .
Proof
Observe that . Since , part (1) follows from Lemmas 4.5 and 4.6. As in the argument for (4.29), we have ; this proves part (2). □
Proof of Theorem 2.1
Observe that
| 4.47 |
By (4.47), Lemma 4.2 and Lemma 4.4, we have
| 4.48 |
This completes the proof of Theorem 2.1. □
Proof of Corollary 2.1
Take an arbitrary sequence , which satisfies the assumption of Theorem 2.1. Note that
| 4.49 |
and
| 4.50 |
for . By Theorem 2.1 and (4.49), we have
| 4.51 |
| 4.52 |
By (4.52), as , and , we have
and
Namely
| 4.53 |
By for some , we have
| 4.54 |
Then it follows from (4.53) and (4.54) that
| 4.55 |
for any , which implies
| 4.56 |
□
Proof of Theorem 2.2
Proof of Corollary 2.2
(1) By Lemma 4.7, we have
| 4.57 |
where . Let
| 4.58 |
and
| 4.59 |
Note that
| 4.60 |
| 4.61 |
It is easy to show that . By , we have
| 4.62 |
By as , we have . Thus
| 4.63 |
By the convexity of the function , we have
| 4.64 |
Therefore the minimizer satisfies .
(2) Let . By a Taylor expansion, we have
| 4.65 |
Therefore (2) follows from Theorem 2.2 and (1). □
Acknowledgements
The author’s work was supported by the National Natural Science Foundation of China (No. 11471105, 11471223), and the Natural Science Foundation of Hubei Province (No. 2016CFB526).
Authors’ contributions
The author organized and wrote this paper. Furthermore, he examined all the steps of the proofs. The author read and approved the final manuscript.
Competing interests
The author declares to have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Babu G.J. Strong representations for LAD estimators in linear models. Probab. Theory Relat. Fields. 1989;83:547–558. doi: 10.1007/BF01845702.
2. Bai Z.D., Rao C.R., Wu Y. M-estimation of multivariate linear regression parameters under a convex discrepancy function. Stat. Sin. 1992;2:237–254.
3. Bardet J., Doukhan P., Lang G., Ragache N. Dependent Lindeberg central limit theorem and some applications. ESAIM Probab. Stat. 2008;12:154–172. doi: 10.1051/ps:2007053.
4. Berlinet A., Liese F., Vaida I. Necessary and sufficient conditions for consistency of M-estimates in regression models with general errors. J. Stat. Plan. Inference. 2000;89:243–267. doi: 10.1016/S0378-3758(99)00218-9.
5. Boente G., Fraiman R. Robust nonparametric regression estimation for dependent observations. Ann. Stat. 1989;17(3):1242–1256. doi: 10.1214/aos/1176347266.
6. Chen J., Li D.G., Zhang L.X. Bahadur representation of nonparametric M-estimators for spatial processes. Acta Math. Sin. Engl. Ser. 2008;24(11):1871–1882. doi: 10.1007/s10114-008-6589-2.
7. Chen X. Linear representation of parametric M-estimators in linear models. Sci. China Ser. A. 1993;23(12):1264–1275.
8. Chen X., Zhao L. M-methods in Linear Model. Shanghai: Shanghai Scientific & Technical Publishers; 1996.
9. Cheng C.L., Van Ness J.W. Generalized M-estimators for errors-in-variables regression. Ann. Stat. 1992;20(1):385–397. doi: 10.1214/aos/1176348528.
10. Dedecker J., Doukhan P. A new covariance inequality and applications. Stoch. Process. Appl. 2003;106:63–80. doi: 10.1016/S0304-4149(03)00040-1.
11. Dedecker J., Doukhan P., Lang G., Leon J.R., Louhichi S., Prieur C. Weak Dependence: With Examples and Applications. New York: Springer; 2007.
12. Dedecker J., Prieur C. New dependence coefficients, examples and applications to statistics. Probab. Theory Relat. Fields. 2005;132:203–236. doi: 10.1007/s00440-004-0394-3.
13. Doukhan P., Klesov O., Lang G. Rates of convergence in some SLLN under weak dependence conditions. Acta Sci. Math. (Szeged). 2010;76:683–695.
14. Doukhan P., Louhichi S. A new weak dependence condition and applications to moment inequalities. Stoch. Process. Appl. 1999;84:313–342. doi: 10.1016/S0304-4149(99)00055-1.
15. Doukhan P., Mayo N., Truquet L. Weak dependence, models and some applications. Metrika. 2009;69:199–225. doi: 10.1007/s00184-008-0216-1.
16. Doukhan P., Neumann M.H. Probability and moment inequalities for sums of weakly dependent random variables with applications. Stoch. Process. Appl. 2007;117:878–903. doi: 10.1016/j.spa.2006.10.011.
17. Doukhan P., Wintenberger O. An invariance principle for weakly dependent stationary general models. Probab. Math. Stat. 2007;27(1).
18. Doukhan P., Wintenberger O. Weakly dependent chains with infinite memory. Stoch. Process. Appl. 2008;118:1997–2013. doi: 10.1016/j.spa.2007.12.004.
19. Fan J. Moderate deviations for M-estimators in linear models with ϕ-mixing errors. Acta Math. Sin. Engl. Ser. 2012;28(6):1275–1294. doi: 10.1007/s10114-011-9188-6.
20. Fan J., Yan A., Xiu N. Asymptotic properties for M-estimators in linear models with dependent random errors. J. Stat. Plan. Inference. 2014;148:49–66. doi: 10.1016/j.jspi.2013.12.005.
21. Freedman D.A. On tail probabilities for martingales. Ann. Probab. 1975;3(1):100–118. doi: 10.1214/aop/1176996452.
22. Gannaz I. Robust estimation and wavelet thresholding in partially linear models. Stat. Comput. 2007;17:293–310. doi: 10.1007/s11222-007-9019-x.
23. Gervini D., Yohai V.J. A class of robust and fully efficient regression estimators. Ann. Stat. 2002;30(2):583–616. doi: 10.1214/aos/1021379866.
24. He X., Shao Q. A general Bahadur representation of M-estimators and its application to linear regression with nonstochastic designs. Ann. Stat. 1996;24(8):2608–2630. doi: 10.1214/aos/1032181172.
25. Hu H.C. QML estimators in linear regression models with functional coefficient autoregressive processes. Math. Probl. Eng. 2010;2010:956907. doi: 10.1155/2010/956907.
26. Hu H.C. Asymptotic normality of Huber–Dutter estimators in a linear model with AR(1) processes. J. Stat. Plan. Inference. 2013;143(3):548–562. doi: 10.1016/j.jspi.2012.08.012.
27. Hu Y., Ming R., Yang W. Large deviations and moderate deviations for m-negatively associated random variables. Acta Math. Sci. 2007;27B(4):886–896.
28. Huber P.J., Ronchetti E.M. Robust Statistics. 2nd ed. New Jersey: John Wiley & Sons; 2009.
29. Hwang E., Shin D. Semiparametric estimation for partially linear regression models with ψ-weak dependent errors. J. Korean Stat. Soc. 2011;40:411–424. doi: 10.1016/j.jkss.2011.01.002.
30. Koul H.L. M-estimators in linear regression models with long range dependent errors. Stat. Probab. Lett. 1992;14:153–164. doi: 10.1016/0167-7152(92)90079-K.
31. Lai T.L. Asymptotic properties of nonlinear least squares estimates in stochastic regression models. Ann. Stat. 1994;22(4):1917–1930. doi: 10.1214/aos/1176325764.
32. Lehmann E.L. Elements of Large-Sample Theory. New York: Springer; 1998.
33. Li I. On Koul’s minimum distance estimators in the regression models with long memory moving averages. Stoch. Process. Appl. 2003;105:257–269. doi: 10.1016/S0304-4149(02)00266-1.
34. Liang H., Jing B. Asymptotic normality in partial linear models based on dependent errors. J. Stat. Plan. Inference. 2009;139:1357–1371. doi: 10.1016/j.jspi.2008.08.005.
35. Lin Z., Bai Z. Probability Inequalities. Beijing: Science Press; 2010.
36. Liptser R.S., Shiryayev A.N. Theory of Martingales. London: Kluwer Academic Publishers; 1989.
37. Lô S.N., Ronchetti E. Robust and accurate inference for generalized linear models. J. Multivar. Anal. 2009;100:2126–2136. doi: 10.1016/j.jmva.2009.06.012.
38. Maller R.A. Asymptotics of regressions with stationary and nonstationary residuals. Stoch. Process. Appl. 2003;105:33–67. doi: 10.1016/S0304-4149(02)00263-6.
39. Nelson P.I. A note on strong consistency of least squares estimators in regression models with martingale difference errors. Ann. Stat. 1980;8(5):1057–1064. doi: 10.1214/aos/1176345142.
40. Nze P.A., Bühlmann P., Doukhan P. Weak dependence beyond mixing and asymptotics for nonparametric regression. Ann. Stat. 2002;30(2):397–430. doi: 10.1214/aos/1021379859.
41. Pere P. Adjusted estimates and Wald statistics for the AR(1) model with constant. J. Econom. 2000;98:335–363. doi: 10.1016/S0304-4076(00)00023-3.
42. Rao C.R., Zhao L.C. Linear representation of M-estimates in linear models. Can. J. Stat. 1992;20(4):359–368. doi: 10.2307/3315607.
43. Romano J.P., Wolf M. A more general central limit theorem for m-dependent random variables with unbounded m. Stat. Probab. Lett. 2000;47:115–124. doi: 10.1016/S0167-7152(99)00146-7.
44. Salibian-Barrera M., Aelst S.V., Yohai V.J. Robust tests for linear regression models based on τ-estimates. Comput. Stat. Data Anal. 2016;93:436–455. doi: 10.1016/j.csda.2014.09.012.
45. Valdora M., Yohai V.J. Robust estimators for generalized linear models. J. Stat. Plan. Inference. 2014;146:31–48. doi: 10.1016/j.jspi.2013.09.016.
46. Valk V.D. Hilbert space representations of m-dependent processes. Ann. Probab. 1993;21(3):1550–1570. doi: 10.1214/aop/1176989130.
47. Wu Q. Strong consistency of M estimator in linear model for negatively associated samples. J. Syst. Sci. Complex. 2006;19:592–600. doi: 10.1007/s11424-006-0592-4.
48. Wu W.B. Nonlinear system theory: another look at dependence. Proc. Natl. Acad. Sci. USA. 2005;102(40):14150–14154. doi: 10.1073/pnas.0506715102.
49. Wu W.B. M-estimation of linear models with dependent errors. Ann. Stat. 2007;35(2):495–521. doi: 10.1214/009053606000001406.
50. Xiong S., Joseph V.R. Regression with outlier shrinkage. J. Stat. Plan. Inference. 2013;143:1988–2001. doi: 10.1016/j.jspi.2013.06.007.
51. Yang Y. Asymptotics of M-estimation in non-linear regression. Acta Math. Sin. Engl. Ser. 2004;20(4):749–760. doi: 10.1007/s10114-004-0378-3.
52. Zhou Z., Shao X. Inference for linear models with dependent errors. J. R. Stat. Soc. Ser. B. 2013;75(2):323–343. doi: 10.1111/j.1467-9868.2012.01044.x.
53. Zhou Z., Wu W.B. On linear models with long memory and heavy-tailed errors. J. Multivar. Anal. 2011;102:349–362. doi: 10.1016/j.jmva.2010.09.009.
