Abstract
Although robust divergence, such as the density power divergence and the $\gamma$-divergence, is helpful for robust statistical inference in the presence of outliers, the tuning parameter that controls the degree of robustness is typically chosen in a rule-of-thumb manner, which may lead to inefficient inference. We here propose a selection criterion based on an asymptotic approximation of the Hyvärinen score applied to an unnormalized model defined by robust divergence. The proposed selection criterion only requires the first- and second-order partial derivatives of an assumed density function with respect to observations, which can be easily computed regardless of the number of parameters. We demonstrate the usefulness of the proposed method via numerical studies using normal distributions and regularized linear regression.
Keywords: efficiency, Hyvärinen score, outlier, unnormalized model
1. Introduction
Data with outliers naturally arise in diverse areas. In the analysis of data containing outliers, statistical models based on robust divergence are known to be powerful and have been used regularly. In particular, the density power divergence [1] and the $\gamma$-divergence [2] have been routinely used in this context due to their robustness properties, although others now exist. In these studies, the theoretical robustness properties of robust divergence against outliers are also clarified through the analysis of influence functions. For interesting applications, see, for example, [3,4] and the references therein. Robust divergence generally involves a tuning parameter that controls robustness under model misspecification or contamination. Ref. [1] noted that there is a trade-off between estimation efficiency and strength of robustness; thereby, a suitable choice of the tuning parameter is crucial in practice. However, well-known selection strategies such as cross-validation are not straightforward under contamination, so one often has to rely on trial and error to find a reasonable value of the tuning parameter.
To select the tuning parameter, we here propose a simple but novel selection criterion using an asymptotic approximation of the Hyvärinen score [5,6] applied to unnormalized models based on robust divergence. Typical existing methods [7,8] choose the tuning parameter based on an asymptotic approximation of the mean squared error, but they have the drawback of requiring a pilot estimator and an analytical expression of the asymptotic variance. In addition, those works are essentially limited to the simple normal distribution and simple linear regression. Our proposed method has the following advantages over the existing studies.
Our method does not require an explicit representation of the asymptotic variance. Therefore, our method can be applied to rather complex statistical models, such as multivariate models, which seem difficult to handle with the previous methods;
In the existing studies, it is necessary to fix a certain value as a pilot estimate in order to optimize the tuning parameter, so the resulting estimates may depend strongly on the pilot estimate. In contrast, our method does not require a pilot estimate and is stable and statistically efficient;
Although our proposed method is based on a simple asymptotic expansion, it is statistically more meaningful and its results are easier to interpret than those of existing methods, because it rests on the theory of parameter estimation for unnormalized statistical models.
Through numerical studies under simple settings, we show that the existing methods can be sensitive to the pilot estimate and tend to select an unnecessarily large value of the tuning parameter, leading to a loss of efficiency compared with the proposed method. Moreover, the proposed selection method can still be applied to estimation procedures in which the asymptotic variance is difficult to compute. As an illustrative example of such a case, we consider robust linear regression with $\gamma$-divergence and $\ell_1$-regularization, where the existing approach is infeasible.
As related work, there are two information criteria using the Hyvärinen score. Ref. [9] proposed AIC-type information criteria for unnormalized models by deriving an asymptotically unbiased estimator of the Hyvärinen score, but it does not allow unnormalized models whose normalizing constants do not exist; hence, the criterion cannot be applied to the current situation. On the other hand, ref. [10] proposed an information criterion via a Laplace approximation of the marginal likelihood in which the potential function is constructed by the Hyvärinen score. Although [10] covers unnormalized models with possibly diverging normalizing constants, the estimator used in the criterion is entirely different from the one defined as the maximizer of robust divergence; thereby, that criterion does not apply to the tuning parameter selection of robust divergence either. Moreover, ref. [11] developed a robust sequential Monte Carlo sampler based on robust divergence in which the tuning parameter is adaptively selected; however, it does not provide selection of the tuning parameter in a frequentist framework.
The rest of the paper is organized as follows. Section 2 introduces a new selection criterion based on the Hyvärinen score. We then provide concrete expressions of the proposed criterion under the density power divergence and the $\gamma$-divergence in Section 3. We numerically illustrate the proposed method in several situations in Section 4. Concluding remarks are given in Section 5.
2. Tuning Parameter Selection of Robust Divergence
Suppose we observe $y_1,\ldots,y_n$ as realizations from a true distribution or data-generating process $G$, and we want to fit a statistical model $\{f_\theta : \theta \in \Theta\}$ for some parameter space $\Theta$. Furthermore, assume that the density of $G$ is expressed as $g = (1-\omega) f_{\theta_0} + \omega \delta$, where $\delta$ is a contaminated distribution that produces outliers in observations and $\omega$ is the contamination ratio. Our goal is to make statistical inference on $\theta_0$ by successfully eliminating the information of outliers. To this end, robust divergence such as the density power divergence [1] and the $\gamma$-divergence [2] is typically used for robust inference on $\theta_0$. Let $Y = (y_1,\ldots,y_n)$ be the vector of observations and $L_\gamma(Y;\theta)$ be a (negative) robust divergence with a tuning parameter $\gamma$. We assume that the robust divergence has an additive form, namely, $L_\gamma(Y;\theta) = \sum_{i=1}^n l_\gamma(y_i;\theta)$. This constraint is necessary when using the H-score, but it is not a strong constraint in the context of this study because the well-known robust divergences presented in Section 3 satisfy this property.
For selecting the tuning parameter $\gamma$, our main idea is to regard $\exp\{L_\gamma(Y;\theta)\}$ as an unnormalized statistical model whose normalizing constant may not exist. Recently, ref. [10] pointed out that the role of such unnormalized models can be recognized in terms of relative probability. For such a model, we employ the Hyvärinen score (H-score) in terms of Bayesian model selection [5,6], defined as

$$ H(Y;\gamma) = \sum_{i=1}^n \left[ 2\, \frac{\partial^2}{\partial y_i^2} \log p_\gamma(Y) + \left\{ \frac{\partial}{\partial y_i} \log p_\gamma(Y) \right\}^2 \right], \qquad (1) $$

where $p_\gamma(Y)$ is the marginal likelihood given by

$$ p_\gamma(Y) = \int_\Theta \exp\{L_\gamma(Y;\theta)\}\, \pi(\theta)\, d\theta, \qquad (2) $$

with some prior distribution $\pi(\theta)$. We consider an asymptotic approximation of the H-score (1) under large sample sizes. Under regularity conditions [12], the Laplace approximation of (2) is

$$ \log p_\gamma(Y) \approx L_\gamma(Y;\widehat{\theta}_\gamma) + \frac{p}{2} \log \frac{2\pi}{n} - \frac{1}{2} \log\left|J_\gamma(\widehat{\theta}_\gamma)\right| + \log \pi(\widehat{\theta}_\gamma), \qquad (3) $$

where $\widehat{\theta}_\gamma = \operatorname{argmax}_{\theta} L_\gamma(Y;\theta)$ is the M-estimator, $J_\gamma(\widehat{\theta}_\gamma) = -n^{-1}\, \partial^2 L_\gamma(Y;\theta)/\partial\theta\,\partial\theta^\top |_{\theta=\widehat{\theta}_\gamma}$ is the Hessian matrix at $\widehat{\theta}_\gamma$, and $p$ is the dimension of $\theta$. Then, we have the following approximation, where the proof is deferred to Appendix A.
Proposition 1.
Under some regularity conditions, it follows that

$$ \frac{\partial}{\partial y_i} \log p_\gamma(Y) = l'_\gamma(y_i;\widehat{\theta}_\gamma) + O_p(n^{-1}), \qquad \frac{\partial^2}{\partial y_i^2} \log p_\gamma(Y) = l''_\gamma(y_i;\widehat{\theta}_\gamma) + O_p(n^{-1}), $$

where $l'_\gamma(y;\theta) = \partial l_\gamma(y;\theta)/\partial y$ and $l''_\gamma(y;\theta) = \partial^2 l_\gamma(y;\theta)/\partial y^2$.
The above results give the following approximation of the original H-score:

$$ H_n(Y;\gamma) = \sum_{i=1}^n \left[ 2\, l''_\gamma(y_i;\widehat{\theta}_\gamma) + \left\{ l'_\gamma(y_i;\widehat{\theta}_\gamma) \right\}^2 \right], \qquad (4) $$

which satisfies $n^{-1}\{H(Y;\gamma) - H_n(Y;\gamma)\} \to 0$ in probability under $n \to \infty$. We then define the optimal $\gamma$ as

$$ \gamma_{\mathrm{opt}} = \operatorname*{argmin}_{\gamma}\, H_n(Y;\gamma). $$

Let $H_\infty(\gamma)$ be the quantity that $n^{-1} H_n(Y;\gamma)$ converges to. Then, from the argument given in [5,10], the criterion (4) would converge to the Fisher divergence between the unnormalized model and the true data-generating process. Although the unnormalized model is not a valid statistical model in the sense that it does not have a finite normalizing constant, the Fisher divergence can be recognized as a distance in terms of relative probabilities [10].
Existing selection strategies for $\gamma$ mostly use the asymptotic variance of $\widehat{\theta}_\gamma$. For example, under the density power divergence, refs. [7,8] suggested using an asymptotic approximation of the mean squared error of $\widehat{\theta}_\gamma$. However, computation of the asymptotic variance is not straightforward, especially when an additional penalty function is incorporated into the objective function or the dimension of $\theta$ is large. On the other hand, the proposed criterion (4) does not require the computation of the asymptotic variance but only needs the derivatives of the robust divergence with respect to the observations. Furthermore, it should be noted that the proposed criterion (4) can be applied to a wide variety of robust divergences.
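Since the criterion (4) only involves observation-wise first and second derivatives of the per-observation loss, it can be evaluated generically by finite differences when closed forms are tedious. The following Python sketch is our own illustration, not code from the paper; the names `h_score` and `per_obs_loss` are hypothetical. It approximates criterion (4) for any additive robust loss evaluated at a given estimate:

```python
import numpy as np

def h_score(per_obs_loss, y, theta_hat, eps=1e-5):
    """Approximate H_n(Y; gamma) = sum_i [ 2 l''(y_i) + l'(y_i)^2 ], where
    l(y) = per_obs_loss(y, theta_hat) is the per-observation term of the robust
    divergence and derivatives are taken with respect to the observation
    via central finite differences."""
    total = 0.0
    for yi in np.asarray(y, dtype=float):
        f0 = per_obs_loss(yi, theta_hat)
        fp = per_obs_loss(yi + eps, theta_hat)
        fm = per_obs_loss(yi - eps, theta_hat)
        d1 = (fp - fm) / (2.0 * eps)           # approximates l'(y_i)
        d2 = (fp - 2.0 * f0 + fm) / eps**2     # approximates l''(y_i)
        total += 2.0 * d2 + d1**2
    return total

# Sanity check with the log-likelihood of N(0, 1): l(y) = -y^2/2 - log(2*pi)/2,
# so l'(y) = -y and l''(y) = -1, giving 2*(-1) + y^2 per observation.
loglik = lambda y, theta: -0.5 * y**2 - 0.5 * np.log(2.0 * np.pi)
print(h_score(loglik, [0.0, 1.0], None))  # close to (-2 + 0) + (-2 + 1) = -3
```

In practice, this routine would be called on a grid of $\gamma$ values, refitting $\widehat{\theta}_\gamma$ at each one, and the minimizer taken as $\gamma_{\mathrm{opt}}$.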
3. Possible Robust Divergences to Consider
We here provide detailed expressions for the proposed criterion (4) under some robust divergences. For simplicity, we focus on two robust divergences that can be empirically estimated from the data. Still, the proposed method could be applied to other divergences such as the Hellinger divergence [13] or the $\alpha\beta$-divergence [14]. In what follows, we shall use the notations $f'_\theta(y) = \partial f_\theta(y)/\partial y$ and $f''_\theta(y) = \partial^2 f_\theta(y)/\partial y^2$.
3.1. Density Power Divergence
The density power divergence [1] for a statistical model $f_\theta$ is

$$ L_\gamma(Y;\theta) = \frac{1}{\gamma} \sum_{i=1}^n f_\theta(y_i)^\gamma - \frac{n}{1+\gamma} \int f_\theta(t)^{1+\gamma}\, dt. $$

It can be seen that $L_\gamma(Y;\theta)$ converges to the log-likelihood (up to constants) as $\gamma \to 0$, so the above function can be regarded as an extension of the standard log-likelihood. Then, a straightforward calculation leads to the expression of (4), given by

$$ H_n(Y;\gamma) = \sum_{i=1}^n \left[ 2 f_{\widehat{\theta}_\gamma}(y_i)^{\gamma-2} \left\{ (\gamma-1) f'_{\widehat{\theta}_\gamma}(y_i)^2 + f_{\widehat{\theta}_\gamma}(y_i)\, f''_{\widehat{\theta}_\gamma}(y_i) \right\} + f_{\widehat{\theta}_\gamma}(y_i)^{2(\gamma-1)} f'_{\widehat{\theta}_\gamma}(y_i)^2 \right]. $$
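To illustrate how the density power divergence downweights outliers, the following Python sketch (our own illustration, not code from the paper; the names `dpd_objective` and `fit_dpd_normal` are hypothetical) maximizes the DPD objective for a normal model, using the closed-form integral term $\int \phi^{1+\gamma} = (2\pi\sigma^2)^{-\gamma/2}(1+\gamma)^{-1/2}$, and compares the resulting location estimate with the non-robust sample mean on data containing one gross outlier:

```python
import numpy as np
from scipy.optimize import minimize

def dpd_objective(params, y, gamma):
    """Negative density power divergence for the N(mu, sigma^2) model:
    -(1/gamma) sum_i phi(y_i)^gamma + n (1+gamma)^{-3/2} (2 pi sigma^2)^{-gamma/2}."""
    mu, log_sigma = params
    sigma2 = np.exp(2.0 * log_sigma)
    phi = np.exp(-0.5 * (y - mu) ** 2 / sigma2) / np.sqrt(2.0 * np.pi * sigma2)
    term1 = np.sum(phi ** gamma) / gamma
    term2 = len(y) * (1.0 + gamma) ** (-1.5) * (2.0 * np.pi * sigma2) ** (-gamma / 2.0)
    return -(term1 - term2)

def fit_dpd_normal(y, gamma):
    # Start from robust initial values (median and MAD) and use Nelder-Mead.
    mad = np.median(np.abs(y - np.median(y))) + 1e-6
    start = np.array([np.median(y), np.log(mad)])
    res = minimize(dpd_objective, start, args=(y, gamma), method="Nelder-Mead")
    mu, log_sigma = res.x
    return mu, np.exp(2.0 * log_sigma)

y = np.array([-1.2, -0.5, 0.1, 0.4, 0.9, 1.1, -0.3, 0.2, 50.0])  # one gross outlier
mu_mle = y.mean()                       # non-robust: dragged toward the outlier
mu_dpd, s2_dpd = fit_dpd_normal(y, gamma=0.5)  # robust: stays near the inliers
```

The outlier contributes $f_\theta(y_i)^\gamma \approx 0$ to the objective, so it is effectively ignored for moderate $\gamma$, whereas the sample mean is pulled far from the bulk of the data.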
3.2. γ-Divergence
The original form of the $\gamma$-divergence [2] for a statistical model $f_\theta$ is given by

$$ \frac{1}{\gamma} \log\left\{ \frac{1}{n} \sum_{i=1}^n f_\theta(y_i)^\gamma \right\} - \frac{1}{1+\gamma} \log \int f_\theta(t)^{1+\gamma}\, dt, $$

which is not of an additive form. However, the maximization of the above function with respect to $\theta$ is equivalent to the maximization of the transformed version of the $\gamma$-divergence, $L_\gamma(Y;\theta) = \sum_{i=1}^n l_\gamma(y_i;\theta)$, where

$$ l_\gamma(y;\theta) = \frac{1}{\gamma}\, \frac{f_\theta(y)^\gamma}{C_\gamma(\theta)}, \qquad C_\gamma(\theta) = \left\{ \int f_\theta(t)^{1+\gamma}\, dt \right\}^{\gamma/(1+\gamma)}. $$

Then, we have

$$ H_n(Y;\gamma) = \sum_{i=1}^n \left[ \frac{2 f_{\widehat{\theta}_\gamma}(y_i)^{\gamma-2}}{C_\gamma(\widehat{\theta}_\gamma)} \left\{ (\gamma-1) f'_{\widehat{\theta}_\gamma}(y_i)^2 + f_{\widehat{\theta}_\gamma}(y_i)\, f''_{\widehat{\theta}_\gamma}(y_i) \right\} + \frac{f_{\widehat{\theta}_\gamma}(y_i)^{2(\gamma-1)}}{C_\gamma(\widehat{\theta}_\gamma)^2}\, f'_{\widehat{\theta}_\gamma}(y_i)^2 \right], $$

where $C_\gamma(\theta)$ is defined above.
4. Numerical Examples
4.1. Normal Distribution with Density Power Divergence
We first consider a simple example of robust estimation of the normal population mean under unknown variance. Let $y_1,\ldots,y_n$ be sampled observations, and we fit the model $N(\mu, \sigma^2)$ to the data. The density power divergence [1] of the model is given by

$$ L_\gamma(Y;\mu,\sigma^2) = \frac{1}{\gamma} \sum_{i=1}^n \phi(y_i;\mu,\sigma^2)^\gamma - \frac{n}{(1+\gamma)^{3/2}}\, (2\pi\sigma^2)^{-\gamma/2}, $$

where $\phi(y;\mu,\sigma^2)$ is the density function of $N(\mu,\sigma^2)$. In this case, the criterion (4) is expressed as

$$ H_n(Y;\gamma) = \sum_{i=1}^n \left[ 2\, \phi(y_i;\widehat{\mu}_\gamma,\widehat{\sigma}_\gamma^2)^\gamma \left\{ \frac{\gamma (y_i-\widehat{\mu}_\gamma)^2}{\widehat{\sigma}_\gamma^4} - \frac{1}{\widehat{\sigma}_\gamma^2} \right\} + \phi(y_i;\widehat{\mu}_\gamma,\widehat{\sigma}_\gamma^2)^{2\gamma}\, \frac{(y_i-\widehat{\mu}_\gamma)^2}{\widehat{\sigma}_\gamma^4} \right], $$

where $\widehat{\mu}_\gamma$ and $\widehat{\sigma}_\gamma^2$ are the estimators based on the density power divergence.
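A minimal sketch of this criterion in code (our own illustration; `hn_normal_dpd` is a hypothetical name): the closed-form derivatives follow from differentiating $\phi(y;\mu,\sigma^2)^\gamma/\gamma$ with respect to $y$, namely $l'_\gamma(y) = -\phi^\gamma (y-\mu)/\sigma^2$ and $l''_\gamma(y) = \phi^\gamma\{\gamma(y-\mu)^2/\sigma^4 - 1/\sigma^2\}$, and are cross-checked against finite differences:

```python
import numpy as np

def hn_normal_dpd(y, mu, sigma2, gamma):
    """Criterion (4) for the N(mu, sigma^2) model under the density power
    divergence, using the closed-form observation-wise derivatives."""
    r = np.asarray(y, dtype=float) - mu
    phi_g = (np.exp(-0.5 * r**2 / sigma2) / np.sqrt(2.0 * np.pi * sigma2)) ** gamma
    d1 = -phi_g * r / sigma2                                  # l'_gamma(y_i)
    d2 = phi_g * (gamma * r**2 / sigma2**2 - 1.0 / sigma2)    # l''_gamma(y_i)
    return np.sum(2.0 * d2 + d1**2)

# Finite-difference cross-check of the derivatives at one point.
def l_gamma(y, mu, sigma2, gamma):
    phi = np.exp(-0.5 * (y - mu) ** 2 / sigma2) / np.sqrt(2.0 * np.pi * sigma2)
    return phi ** gamma / gamma  # y-dependent part of the DPD loss

eps, y0, mu0, s20, g0 = 1e-5, 1.3, 0.2, 1.5, 0.4
num_d1 = (l_gamma(y0 + eps, mu0, s20, g0) - l_gamma(y0 - eps, mu0, s20, g0)) / (2 * eps)
num_d2 = (l_gamma(y0 + eps, mu0, s20, g0) - 2 * l_gamma(y0, mu0, s20, g0)
          + l_gamma(y0 - eps, mu0, s20, g0)) / eps**2
```

The selection step then amounts to computing `hn_normal_dpd` at $(\widehat{\mu}_\gamma, \widehat{\sigma}_\gamma^2)$ over a grid of $\gamma$ and taking the minimizer.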
We first demonstrate the proposed selection strategy through simulation studies. We simulated $n$ observations from the normal distribution with true parameters $\mu_0$ and $\sigma_0^2$, and then replaced the first $\omega n$ observations by outliers. We adopted four settings for the contamination ratio, $\omega \in \{0, 0.05, 0.1, 0.15\}$. Using the simulated dataset, the optimal $\gamma$ is selected among a grid of candidate values through the criterion $H_n(Y;\gamma)$, and we obtain the adaptive estimator $\widehat{\mu}_{\gamma}$. For comparison, we also employed two selection methods, OWJ [7] and IWJ [8], in which the optimal value of $\gamma$ is selected via an asymptotic approximation of the mean squared error of the estimator. We fixed a value of $\gamma$ to compute the pilot estimator that must be specified in the two methods. Furthermore, we also computed the estimator under three fixed values of $\gamma$. Using an estimator of the asymptotic variance of $\widehat{\mu}_\gamma$ given in [8], we also computed the Wald-type confidence interval of $\mu$. Based on 5000 simulated datasets, we obtained the square root of the mean squared error (RMSE) of the point estimator, as well as the coverage probability (CP) and average length (AL) of the interval estimation. The results are reported in Table 1. It is observed that the use of a small fixed $\gamma$ may lead to unsatisfactory results when the contamination is heavy. It can also be seen that, with a relatively large fixed $\gamma$, the estimation results can be inefficient. On the other hand, the proposed method can adaptively select a suitable value of $\gamma$, since the average of the selected values of $\gamma$ increases with the contamination ratio $\omega$, as confirmed in Table 2. This would be the main reason that the proposed method provides reasonable performance in all the scenarios.
Table 1.
RMSE of the point estimation and CP and AL of interval estimation of the normal population mean.
| | ω | HS | OWJ | IWJ | Fixed γ | Fixed γ | Fixed γ |
|---|---|---|---|---|---|---|---|
| RMSE | 0 | 10.3 | 10.6 | 10.3 | 10.2 | 10.5 | 11.0 |
| | 0.05 | 10.7 | 10.9 | 10.7 | 14.4 | 10.8 | 11.3 |
| | 0.1 | 11.0 | 11.1 | 11.0 | 44.7 | 11.1 | 11.5 |
| | 0.15 | 11.4 | 11.4 | 11.4 | 82.6 | 11.5 | 11.8 |
| CP | 0 | 94.8 | 93.8 | 94.2 | 94.6 | 94.5 | 94.4 |
| | 0.05 | 94.7 | 93.9 | 94.1 | 93.2 | 94.2 | 94.1 |
| | 0.1 | 94.3 | 94.1 | 94.2 | 36.7 | 94.2 | 94.4 |
| | 0.15 | 94.1 | 93.7 | 93.8 | 0.1 | 93.6 | 94.1 |
| AL | 0 | 40.6 | 40.1 | 39.8 | 40.4 | 40.7 | 42.6 |
| | 0.05 | 41.7 | 41.0 | 40.9 | 50.4 | 41.2 | 43.3 |
| | 0.1 | 42.5 | 41.9 | 41.8 | 79.5 | 42.0 | 44.1 |
| | 0.15 | 43.4 | 42.9 | 42.9 | 100.4 | 43.1 | 45.1 |
Table 2.
Average values of the selected $\gamma$ in the three methods in the simulation studies with the normal distribution.
| ω | HS | OWJ | IWJ |
|---|---|---|---|
| 0 | 0.088 | 0.212 | 0.158 |
| 0.05 | 0.169 | 0.260 | 0.230 |
| 0.1 | 0.217 | 0.284 | 0.267 |
| 0.15 | 0.252 | 0.302 | 0.294 |
We next apply the proposed method to Simon Newcomb’s measurements of the speed-of-light data, motivated by applications in Basu et al. [1], Basak et al. [8], and Stigler [15]. We searched for the optimal $\gamma$ among a grid of candidate values, and the H-scores are shown in the left panel of Figure 1. The obtained optimal value is substantially smaller than the value selected by the existing methods, as reported in [8]. Since the method proposed in [8] requires a pilot estimate and the estimation results depend significantly on it, we believe that our estimation results are more reasonable. In fact, it is unlikely that such a large value of $\gamma$ is needed for a dataset that contains only two outliers. As shown in the right panel of Figure 1, the estimated density functions under the two selected values of $\gamma$ are almost the same; however, if the estimates are almost identical, it would be preferable to adopt the smaller value of $\gamma$ in terms of statistical efficiency.
Figure 1.
H-scores for each $\gamma$ (left) and the estimated normal density functions with the optimal $\gamma$ selected via the H-score and IWJ methods (right) in the analysis of the Newcomb data.
4.2. Gamma Distribution with Density Power Divergence
We next consider robust estimation under the gamma distribution. Let $y_1,\ldots,y_n$ be sampled observations, and we fit the gamma distribution $\mathrm{Ga}(\alpha,\beta)$ to the data, where $\alpha$ is a shape parameter and $\beta$ is a rate parameter, so that the expectation of the gamma distribution is $\alpha/\beta$. The density power divergence of the model is given by

$$ L_\gamma(Y;\alpha,\beta) = \frac{1}{\gamma} \sum_{i=1}^n f(y_i;\alpha,\beta)^\gamma - \frac{n}{1+\gamma} \int_0^\infty f(t;\alpha,\beta)^{1+\gamma}\, dt, $$

where $f(y;\alpha,\beta)$ is the density function of $\mathrm{Ga}(\alpha,\beta)$. In this case, the criterion is the one given in Section 3.1, in which the first- and second-order derivatives of the density with respect to $y$ are given as

$$ f'(y;\alpha,\beta) = f(y;\alpha,\beta)\left( \frac{\alpha-1}{y} - \beta \right), \qquad f''(y;\alpha,\beta) = f(y;\alpha,\beta)\left\{ \left( \frac{\alpha-1}{y} - \beta \right)^2 - \frac{\alpha-1}{y^2} \right\}. $$
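These derivative formulas are easy to check numerically; the sketch below (our own illustration, using `scipy.stats.gamma` in the rate parameterization via `scale = 1/beta`) compares them with finite differences of the density:

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def gamma_density_derivs(y, alpha, beta):
    """First and second y-derivatives of the Ga(alpha, beta) density
    (rate parameterization), using f'/f = (alpha-1)/y - beta and
    f''/f = ((alpha-1)/y - beta)^2 - (alpha-1)/y^2."""
    f = gamma_dist.pdf(y, a=alpha, scale=1.0 / beta)
    u = (alpha - 1.0) / y - beta
    return f * u, f * (u**2 - (alpha - 1.0) / y**2)

# Finite-difference check at one point.
y0, a0, b0, eps = 1.7, 2.5, 1.2, 1e-5
pdf = lambda t: gamma_dist.pdf(t, a=a0, scale=1.0 / b0)
num_d1 = (pdf(y0 + eps) - pdf(y0 - eps)) / (2 * eps)
num_d2 = (pdf(y0 + eps) - 2 * pdf(y0) + pdf(y0 - eps)) / eps**2
```

These derivatives plug directly into the Section 3.1 expression of the criterion, with $f_{\widehat{\theta}_\gamma}$ evaluated at the DPD estimates of $(\alpha, \beta)$.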
We demonstrate the proposed selection criterion through simulation studies. We simulated $n$ observations from the gamma distribution with true parameters $\alpha_0$ and $\beta_0$, and then replaced the first $\omega n$ observations by outliers. We adopted three settings for $\omega$ and two scenarios for the sample size, $n = 100$ and $n = 200$. Using the simulated dataset, the optimal $\gamma$ is selected among a grid of candidate values through the HS criterion $H_n(Y;\gamma)$, and we obtain the adaptive estimators $\widehat{\alpha}_\gamma$ and $\widehat{\beta}_\gamma$. For comparison, we applied the standard maximum likelihood (ML) estimator, as well as the robust estimator with fixed $\gamma$. In this study, we do not include the OWJ or IWJ methods, since the asymptotic variance formula in this case has not been investigated before and its derivation would require tedious algebraic calculation.
Based on 5000 simulated datasets, we obtained the square root of the mean squared error (RMSE) of the point estimators, with the results shown in Table 3. We also provide the average values of the selected $\gamma$ in Table 4. It is observed that the (non-robust) ML method and the robust method using a small fixed $\gamma$ lead to unsatisfactory results when the data are contaminated. It can also be confirmed that a large fixed $\gamma$ does not achieve reasonable accuracy when the contamination ratio is small or zero, which indicates that a suitable selection step substantially affects the estimation results. The proposed method, however, has an adaptive property. When there is no contamination, its estimation performance is almost identical to that of the ML estimator, which is the most efficient in this scenario, since a small value (sometimes exactly zero) of $\gamma$ is selected, as shown in Table 4. On the other hand, the estimation performance is still better than that of the other methods when the data are contaminated, by successfully finding a suitable value of $\gamma$ that increases with $\omega$.
Table 3.
RMSE of the point estimation in the gamma distribution.
| n | ω | ML | HS | Fixed γ | Fixed γ | ML | HS | Fixed γ | Fixed γ |
|---|---|---|---|---|---|---|---|---|---|
| 100 | 0 | 0.28 | 0.29 | 0.38 | 1.25 | 0.65 | 0.73 | 0.91 | 3.63 |
| 100 | | 0.91 | 0.37 | 0.70 | 1.29 | 2.50 | 1.00 | 1.98 | 3.74 |
| 100 | | 1.13 | 0.40 | 0.99 | 1.34 | 3.10 | 1.09 | 2.81 | 3.86 |
| 200 | 0 | 0.19 | 0.20 | 0.29 | 1.14 | 0.44 | 0.49 | 0.70 | 3.37 |
| 200 | | 0.92 | 0.28 | 0.69 | 1.18 | 2.54 | 0.75 | 1.98 | 3.47 |
| 200 | | 1.14 | 0.28 | 1.01 | 1.21 | 3.13 | 0.78 | 2.86 | 3.53 |
Table 4.
Average values of $\gamma$ selected by the proposed criterion in the gamma distribution.
| n | selected γ (three settings of ω, increasing) | | |
|---|---|---|---|
| 100 | 0.036 | 0.137 | 0.164 |
| 200 | 0.025 | 0.133 | 0.161 |
4.3. Regularized Linear Regression with γ-Divergence
Note that the proposed criterion can be used when regularization terms are introduced in the objective function, whereas the existing methods requiring the asymptotic variance of the estimator are not simply applicable. We demonstrate this advantage through regularized linear regression with the $\gamma$-divergence [16]. Let $y_i$ and $x_i$ be a response variable and a $p$-dimensional vector of covariates, respectively, for $i = 1,\ldots,n$. The model is $y_i = x_i^\top \beta + \varepsilon_i$ with $\varepsilon_i \sim N(0, \sigma^2)$. Then, the transformed $\gamma$-divergence is $L_\gamma(Y;\beta,\sigma^2) = \sum_{i=1}^n l_\gamma(y_i; x_i^\top\beta, \sigma^2)$ with $l_\gamma$ as given in Section 3.2 under the normal density, and the H-score is expressed as in Section 3.2 with $f_{\widehat{\theta}_\gamma}(y_i) = \phi(y_i; x_i^\top \widehat{\beta}_\gamma, \widehat{\sigma}_\gamma^2)$.

Here, $\beta$ and $\sigma^2$ are estimated as the minimizer of the following regularized $\gamma$-divergence:

$$ -L_\gamma(Y;\beta,\sigma^2) + \lambda \sum_{j=1}^p |\beta_j|, $$

where $\lambda$ is an additional tuning parameter that can be optimized via 10-fold cross-validation. We use the R package gamreg [16] to estimate $\beta$ and $\sigma^2$ under a given $\gamma$.
We apply the aforementioned method to the well-known Boston housing dataset [17]. In this analysis, we included the original 13 covariates and 12 quadratic terms of the covariates (excluding the one binary covariate), resulting in 25 covariates in total. We searched for the optimal $\gamma$ among a grid of candidate values, and the estimated H-scores are shown in the left panel of Figure 2, together with the selected optimal value of $\gamma$. For comparison, we estimated the regression coefficients under two other fixed values of $\gamma$; note that $\gamma = 0$ reduces to the (non-robust) standard regularized linear regression. The scatter plots of the estimated standardized coefficients under the optimal $\gamma$ against those under the two fixed choices of $\gamma$ are shown in the right panel of Figure 2. It is confirmed that the estimates under the optimal $\gamma$ and the robust fixed $\gamma$ are comparable, while there are substantial differences between the estimates under the optimal $\gamma$ and $\gamma = 0$, indicating that a certain amount of robustness is required for this dataset.
Figure 2.
H-scores for each $\gamma$ (left) and the estimated regression coefficients with three choices of $\gamma$ (right).
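Although the fitting in this section relies on the R package gamreg, the H-score itself is straightforward to compute once estimates are available. The following Python sketch is our own illustration (the name `h_score_gamma_regression` is hypothetical); it evaluates criterion (4) for the regression model at given estimates of $\beta$ and $\sigma^2$, using the fact that the observation-wise derivatives have the same form as in the normal case, divided by the normalizer $C = \{(2\pi\sigma^2)^{-\gamma/2}(1+\gamma)^{-1/2}\}^{\gamma/(1+\gamma)}$ from Section 3.2:

```python
import numpy as np

def h_score_gamma_regression(y, X, beta, sigma2, gamma):
    """Criterion (4) for y_i ~ N(x_i' beta, sigma^2) under the transformed
    gamma-divergence, using closed-form derivatives of the per-observation loss
    l(y) = phi(y; x'beta, sigma^2)^gamma / (gamma * C) with respect to y."""
    r = y - X @ beta
    C = ((2.0 * np.pi * sigma2) ** (-gamma / 2.0)
         * (1.0 + gamma) ** (-0.5)) ** (gamma / (1.0 + gamma))
    phi_g = (np.exp(-0.5 * r**2 / sigma2) / np.sqrt(2.0 * np.pi * sigma2)) ** gamma
    d1 = -phi_g * r / sigma2 / C                                  # l'(y_i)
    d2 = phi_g * (gamma * r**2 / sigma2**2 - 1.0 / sigma2) / C    # l''(y_i)
    return np.sum(2.0 * d2 + d1**2)
```

In practice, one would evaluate this over a grid of $\gamma$, refitting $(\widehat{\beta}_\gamma, \widehat{\sigma}_\gamma^2)$ at each value (e.g., with gamreg), and select the minimizer; the regularization term does not enter the criterion because it does not depend on the observations.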
5. Concluding Remarks
We proposed a new criterion for selecting the optimal tuning parameter in robust divergence, using the Hyvärinen score for unnormalized models defined by robust divergence. The proposed criterion does not require the asymptotic variance formula of the estimator that is needed in the existing selection methods. Although we focused on univariate and continuous situations, the proposed criterion can also be applied to multivariate or discrete distributions, where finite differences should replace derivatives under discrete distributions. Applications of the proposed score to such cases would also be helpful, and we leave them to future work.
Acknowledgments
We are grateful to the two referees for several useful comments that helped us to improve the content of the paper.
Appendix A. Proof of Proposition 1
We first assume the standard regularity conditions of M-estimation theory [18] for the objective function $L_\gamma(Y;\theta)$. We also assume that $L_\gamma(Y;\theta)$ is twice continuously differentiable with respect to $(y_i, \theta)$, that the prior $\pi(\theta)$ is continuously differentiable, and that the derivative of $\log\pi(\theta)$ is bounded.
We first note that $\widehat{\theta}_\gamma$ is a solution of the following estimating equation:

$$ \frac{\partial}{\partial \theta} L_\gamma(Y;\theta)\Big|_{\theta=\widehat{\theta}_\gamma} = 0. $$

From the implicit function theorem, it follows that

$$ \frac{\partial \widehat{\theta}_\gamma}{\partial y_i} = \frac{1}{n}\, J_\gamma(\widehat{\theta}_\gamma)^{-1}\, \frac{\partial^2 L_\gamma(Y;\theta)}{\partial \theta\, \partial y_i}\Big|_{\theta=\widehat{\theta}_\gamma}, $$

where we defined $J_\gamma(\theta) = -n^{-1}\, \partial^2 L_\gamma(Y;\theta)/\partial\theta\,\partial\theta^\top$. Note that $\partial\widehat{\theta}_\gamma/\partial y_i = O_p(n^{-1})$ under large $n$. From (3), the first-order partial derivative of the marginal log-likelihood can be approximated as

$$ \frac{\partial}{\partial y_i} \log p_\gamma(Y) \approx \frac{\partial}{\partial y_i} L_\gamma(Y;\widehat{\theta}_\gamma) - \frac{1}{2}\, \frac{\partial}{\partial y_i} \log\left|J_\gamma(\widehat{\theta}_\gamma)\right| + \frac{\partial}{\partial y_i} \log \pi(\widehat{\theta}_\gamma). \qquad \text{(A1)} $$

Under the regularity conditions for $\pi(\theta)$, it follows that

$$ \frac{\partial}{\partial y_i} \log \pi(\widehat{\theta}_\gamma) = \left\{ \frac{\partial \log \pi(\theta)}{\partial \theta}\Big|_{\theta=\widehat{\theta}_\gamma} \right\}^{\!\top} \frac{\partial \widehat{\theta}_\gamma}{\partial y_i} = O_p(n^{-1}) $$

under large $n$. From the same argument, we can also show that $\partial \log|J_\gamma(\widehat{\theta}_\gamma)|/\partial y_i = O_p(n^{-1})$. Regarding the first term in (A1), we have

$$ \frac{\partial}{\partial y_i} L_\gamma(Y;\widehat{\theta}_\gamma) = l'_\gamma(y_i;\widehat{\theta}_\gamma) + \left\{ \frac{\partial L_\gamma(Y;\theta)}{\partial \theta}\Big|_{\theta=\widehat{\theta}_\gamma} \right\}^{\!\top} \frac{\partial \widehat{\theta}_\gamma}{\partial y_i} = l'_\gamma(y_i;\widehat{\theta}_\gamma), \qquad \text{(A2)} $$

since $\partial L_\gamma(Y;\theta)/\partial\theta$ is a score function and vanishes at $\theta = \widehat{\theta}_\gamma$.
Using the expression of the first-order derivative (A2), it holds that

$$ \frac{\partial^2}{\partial y_i^2} \log p_\gamma(Y) \approx l''_\gamma(y_i;\widehat{\theta}_\gamma) + \left\{ \frac{\partial l'_\gamma(y_i;\theta)}{\partial \theta}\Big|_{\theta=\widehat{\theta}_\gamma} \right\}^{\!\top} \frac{\partial \widehat{\theta}_\gamma}{\partial y_i} - \frac{1}{2}\, \frac{\partial^2}{\partial y_i^2} \log\left|J_\gamma(\widehat{\theta}_\gamma)\right| + \frac{\partial^2}{\partial y_i^2} \log \pi(\widehat{\theta}_\gamma). \qquad \text{(A3)} $$

Note that

$$ \frac{\partial^2}{\partial y_i^2} \log \pi(\widehat{\theta}_\gamma) = \frac{\partial}{\partial y_i} \left[ \left\{ \frac{\partial \log \pi(\theta)}{\partial \theta}\Big|_{\theta=\widehat{\theta}_\gamma} \right\}^{\!\top} \frac{\partial \widehat{\theta}_\gamma}{\partial y_i} \right] = O_p(n^{-1}). $$

By applying the same formula to $\log|J_\gamma(\widehat{\theta}_\gamma)|$, we can confirm that the third and fourth terms in (A3) are $O_p(n^{-1})$. Regarding the second term in (A3), we have

$$ \left\{ \frac{\partial l'_\gamma(y_i;\theta)}{\partial \theta}\Big|_{\theta=\widehat{\theta}_\gamma} \right\}^{\!\top} \frac{\partial \widehat{\theta}_\gamma}{\partial y_i} = O_p(1)\, O_p(n^{-1}) = O_p(n^{-1}), $$

which shows that the second term in (A3) is $O_p(n^{-1})$, so the proof is completed.
Author Contributions
Conceptualization, S.S. and S.Y.; Methodology, S.S.; Writing—original draft, S.S. and S.Y. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by Japan Society for Promotion of Science (KAKENHI) under grant numbers 21H00699 and 21K17713.
Conflicts of Interest
The authors declare no conflict of interest.
Footnotes
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Basu A., Harris I.R., Hjort N.L., Jones M. Robust and efficient estimation by minimising a density power divergence. Biometrika 1998;85:549–559. doi: 10.1093/biomet/85.3.549.
- 2. Fujisawa H., Eguchi S. Robust parameter estimation with a small bias against heavy contamination. J. Multivar. Anal. 2008;99:2053–2081. doi: 10.1016/j.jmva.2008.02.004.
- 3. Hua X., Ono Y., Peng L., Cheng Y., Wang H. Target detection within nonhomogeneous clutter via total Bregman divergence-based matrix information geometry detectors. IEEE Trans. Signal Process. 2021;69:4326–4340. doi: 10.1109/TSP.2021.3095725.
- 4. Liu M., Vemuri B.C., Amari S.-i., Nielsen F. Shape retrieval using hierarchical total Bregman soft clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2012;34:2407–2419. doi: 10.1109/TPAMI.2012.44.
- 5. Shao S., Jacob P.E., Ding J., Tarokh V. Bayesian model comparison with the Hyvärinen score: Computation and consistency. J. Am. Stat. Assoc. 2019;114:1826–1837. doi: 10.1080/01621459.2018.1518237.
- 6. Dawid A.P., Musio M. Bayesian model selection based on proper scoring rules. Bayesian Anal. 2015;10:479–499. doi: 10.1214/15-BA942.
- 7. Warwick J., Jones M. Choosing a robustness tuning parameter. J. Stat. Comput. Simul. 2005;75:581–588. doi: 10.1080/00949650412331299120.
- 8. Basak S., Basu A., Jones M. On the ‘optimal’ density power divergence tuning parameter. J. Appl. Stat. 2021;48:536–556. doi: 10.1080/02664763.2020.1736524.
- 9. Matsuda T., Uehara M., Hyvärinen A. Information criteria for non-normalized models. arXiv 2019, arXiv:1905.05976.
- 10. Jewson J., Rossell D. General Bayesian loss function selection and the use of improper models. arXiv 2021, arXiv:2106.01214.
- 11. Yonekura S., Sugasawa S. Adaptation of the tuning parameter in general Bayesian inference with robust divergence. arXiv 2021, arXiv:2106.06902. doi: 10.3390/e23091147.
- 12. Geisser S., Hodges J., Press S., Zellner A. The validity of posterior expansions based on Laplace’s method. Bayesian Likelihood Methods Stat. Econom. 1990;7:473.
- 13. Devroye L., Györfi L. Nonparametric Density Estimation: The L1 View. John Wiley: Hoboken, NJ, USA, 1985.
- 14. Cichocki A., Cruces S., Amari S.-i. Generalized alpha-beta divergences and their application to robust nonnegative matrix factorization. Entropy 2011;13:134–170. doi: 10.3390/e13010134.
- 15. Stigler S.M. Do robust estimators work with real data? Ann. Stat. 1977;5:1055–1098. doi: 10.1214/aos/1176343997.
- 16. Kawashima T., Fujisawa H. Robust and sparse regression via γ-divergence. Entropy 2017;19:608. doi: 10.3390/e19110608.
- 17. Harrison D., Jr., Rubinfeld D.L. Hedonic housing prices and the demand for clean air. J. Environ. Econ. Manag. 1978;5:81–102. doi: 10.1016/0095-0696(78)90006-2.
- 18. Van der Vaart A.W. Asymptotic Statistics. Volume 3; Cambridge University Press: Cambridge, UK, 2000.


