Computational Intelligence and Neuroscience
. 2022 Aug 29;2022:1200611. doi: 10.1155/2022/1200611

Ridge Regression Method and Bayesian Estimators under Composite LINEX Loss Function to Estimate the Shape Parameter in Lomax Distribution

Mansour F Yassen 1, Fuad S Al-Duais 1, Mohammed M A Almazah 2
PMCID: PMC9444360  PMID: 36072714

Abstract

In this paper, the Ridge Regression method is employed to estimate the shape parameter of the Lomax distribution (LD). In addition, both classical and Bayesian approaches are considered, with several loss functions: the squared error loss function (SELF), the Linear Exponential loss function (LLF), and the Composite Linear Exponential loss function (CLLF). For the Bayesian estimators, informative and noninformative priors are used to estimate the shape parameter. To examine the performance of the Ridge Regression method, we compared it with the classical estimators, which included the Maximum Likelihood, Ordinary Least Squares, Uniformly Minimum Variance Unbiased, and Median estimators, as well as with the Bayesian estimators. A Monte Carlo simulation compares these estimators with respect to the mean square error (MSE) criterion. The simulation results indicate that the Ridge Regression method is promising and can be used in practice, as it revealed better performance than the Ordinary Least Squares method for estimating the shape parameter.

1. Introduction

Ridge Regression is a popular parameter estimation method for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates are unbiased, but their variances are large, so the Ordinary Least Squares (OLS) estimator can fall far from the true value. Ridge Regression decreases the standard errors by adding a degree of bias to the regression estimates. Several authors have addressed Ridge Regression, as in [1–10].

The Lomax distribution (LD) is a special case of the Pareto distribution of type II; it has been used to obtain good models for biomedical problems and is an important model for failure times. The LD was used as a stochastic model with a decreasing failure rate for the operating times of the electronic vehicles under study, and it has also been used in studies of income and of city sizes. It is likewise a useful model in queuing theory and in the analysis of biostatistical data.

Many theoretical and applied statistical studies have devoted great interest to estimating the parameters and survival analysis of the LD.

Al-Noor and Alwan [11] compared non-Bayesian, Bayesian, and empirical Bayes estimates of the parameters of the LD under symmetric and asymmetric loss functions. Al-Duais and Hmood [12] compared Bayesian and classical estimators of the parameter and survival function based on record values under SELF, LLF, and WLLF. Ellah [13] estimated the parameter, reliability, and hazard function of the LD by applying Bayesian and classical estimators based on record values under SELF and LLF. Asl et al. [14] applied Bayesian and classical estimation and prediction for the unknown parameters of an LD based on a progressively type-I hybrid censoring scheme. Mohie El-Din et al. [15] studied classical and Bayesian estimation of the LD based on progressively type-II censored samples under symmetric (SELF) and asymmetric (LLF and GELF) loss functions. Okasha [16] estimated the parameters and survival functions of the LD by applying E-Bayesian and Bayesian estimators under type-II censored data and the balanced squared error loss function. Liu and Zhang [17] studied Bayesian and E-Bayesian estimation of the unknown parameter and reliability function of the LD based on the generalized type-I hybrid censoring scheme, considering SELF and LLF. Al-Bossly [18] developed a compound LINEX loss function (CLLF) and used the Bayes and E-Bayes estimation methods to estimate the shape parameter of the LD.

In the current study, Ridge Regression was employed to estimate the shape parameter of the LD and was compared with the classical estimators (the Maximum Likelihood, Ordinary Least Squares, Uniformly Minimum Variance Unbiased, and Median estimators) as well as with Bayesian estimators. The uniqueness of this work comes from the fact that, to date, no attempt has been made to estimate the shape parameter of the LD using the Ridge Regression method.

The pdf of LD is given as follows [19]:

f(x; ϑ, δ) = (ϑ/δ)(1 + x/δ)^{−(ϑ+1)} for x ≥ 0; ϑ, δ > 0, and f(x; ϑ, δ) = 0 otherwise, (1)

where x is a random variable, and δ > 0, ϑ > 0 are the scale and shape parameters, respectively.

The CDF and reliability function R(t) of (1) are given by the following equation:

F(x; ϑ, δ) = 1 − (1 + x/δ)^{−ϑ};  x ≥ 0; ϑ, δ > 0, (2)
R(t) = (1 + t/δ)^{−ϑ};  t ≥ 0; ϑ, δ > 0. (3)

2. Classical Methods of Estimation of Lomax Shape Parameter

The Classical methods selected for the comparative study are (i) Maximum Likelihood Estimator (MLE), (ii) Ordinary Least Squares Method (OLS), (iii) Ridge Regression method, (iv) Uniformly Minimum Variance Unbiased Estimator (UMVUE), and (v) Median Method (M.M).

2.1. MLE's of the Shape Parameter ϑ

Suppose that x̄ = (x₁, x₂,…, xₙ) is a random sample from the LD in (1); then the likelihood L(x̄ | ϑ) for the sample observations will be as follows:

L(x̄ | ϑ) = ∏_{i=1}^{n} (ϑ/δ)(1 + x_i/δ)^{−(ϑ+1)} = (ϑ/δ)^n exp(−T(ϑ + 1)), (4)

where T = Σ_{i=1}^{n} ln(1 + x_i/δ).

The log-likelihood function is

ln L(ϑ, δ) = n ln ϑ − n ln δ − (ϑ + 1) Σ_{i=1}^{n} ln(1 + x_i/δ). (5)

The MLE of ϑ, denoted by ϑ̂_MLE, is given as follows:

ϑ̂_MLE = n / Σ_{i=1}^{n} ln(1 + x_i/δ) = n/T. (6)
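As a concrete illustration of equation (6), the following sketch (our own minimal Python illustration, not code from the paper; the function names are ours) draws a Lomax sample by inverting the CDF (2) and evaluates the MLE for a known scale δ:

```python
import math
import random

def lomax_sample(n, delta, theta, rng):
    """Draw n Lomax(theta, delta) variates by inverting the CDF (2):
    x = delta * ((1 - U)^(-1/theta) - 1), with U ~ Uniform(0, 1)."""
    return [delta * ((1 - rng.random()) ** (-1 / theta) - 1) for _ in range(n)]

def lomax_mle_shape(xs, delta):
    """MLE of the shape parameter for known scale delta, equation (6):
    theta_hat = n / T, where T = sum of ln(1 + x_i / delta)."""
    T = sum(math.log(1 + x / delta) for x in xs)
    return len(xs) / T

rng = random.Random(42)
xs = lomax_sample(5000, delta=2.0, theta=1.5, rng=rng)
print(lomax_mle_shape(xs, delta=2.0))  # should be close to 1.5 for this large sample
```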

2.2. Ordinary Least Squares Method (OLS)

The CDF in equation (2) satisfies

ln(1 − F(x)) = −ϑ ln(1 + x/δ) = −ϑ ln(δ + x) + ϑ ln δ. (7)

Now, suppose that X₁, X₂,…, Xₙ form a random sample from the LD defined by (1), and that X₍₁₎ < X₍₂₎ < … < X₍ₙ₎ are the corresponding order statistics. For the observed ordered observations x₍₁₎ < x₍₂₎ < … < x₍ₙ₎, (2) gives the following equation:

ln(1 − F(x₍ᵢ₎)) = −ϑ ln(1 + x₍ᵢ₎/δ) = −ϑ ln(δ + x₍ᵢ₎) + ϑ ln δ. (8)

Equation (8) represents a simple linear regression model corresponding to F(x₍ᵢ₎):

Y_i = α + βX_i + ε_i, (9)

where Y_i = ln(1 − F̂_i) and F̂_i is a point estimator of F(x₍ᵢ₎); many estimators for F̂_i are in use. Examples are the median rank estimator F̂_i = (i − 0.3)/(n + 0.4) or F̂_i = (i − 3/8)/(n + 0.25), and the mean rank estimator F̂_i = i/(n + 1), where i denotes the rank of the ith smallest value among x₍₁₎, x₍₂₎,…, x₍ₙ₎, i = 1, 2,…, n. Here ε_i is the random error with expected value E(ε_i) = 0, X_i = ln(δ + x₍ᵢ₎), β = −ϑ, and α = ϑ ln δ.

The estimates α̂ and β̂ of the regression parameters α and β minimize the function

Q(α, β) = Σ_{i=1}^{n} [Y_i − α − β ln(δ + x₍ᵢ₎)]². (10)

Therefore, the estimates β̂_OLS and α̂_OLS of the parameters β and α are given by the following equations:

β̂_OLS = [n Σ ln(δ + x₍ᵢ₎) ln(1 − F̂_i) − Σ ln(δ + x₍ᵢ₎) Σ ln(1 − F̂_i)] / [n Σ ln²(δ + x₍ᵢ₎) − (Σ ln(δ + x₍ᵢ₎))²],
α̂_OLS = (1/n) Σ ln(1 − F̂_i) − β̂_OLS (1/n) Σ ln(δ + x₍ᵢ₎). (11)

The estimate ϑ̂_OLS of the parameter ϑ is given by the following equation:

ϑ̂_OLS = −β̂_OLS. (12)

2.3. Ridge Regression Method

Ridge Regression estimates can be obtained by minimizing the function

Q(α, β) = Σ_{i=1}^{n} [ln(1 − F̂_i) − α − β ln(δ + x₍ᵢ₎)]² (13)

subject to the following constraint:

α² + β² = ϕ, (14)

where ϕ is a positive constant.

The Lagrange multiplier method requires that we derive the following:

L = Σ_{i=1}^{n} [ln(1 − F̂_i) − α − β ln(δ + x₍ᵢ₎)]² + λ(α² + β² − ϕ),
∂L/∂α = −2 Σ_{i=1}^{n} [ln(1 − F̂_i) − α − β ln(δ + x₍ᵢ₎)] + 2λα = 0,
∂L/∂β = −2 Σ_{i=1}^{n} [ln(1 − F̂_i) − α − β ln(δ + x₍ᵢ₎)] ln(δ + x₍ᵢ₎) + 2λβ = 0. (15)

Therefore, the estimates α̂_Rid and β̂_Rid of the parameters α and β are given by the following equations:

β̂_Rid = [(n + λ) Σ ln(δ + x₍ᵢ₎) ln(1 − F̂_i) − Σ ln(δ + x₍ᵢ₎) Σ ln(1 − F̂_i)] / [(n + λ) Σ ln²(δ + x₍ᵢ₎) − (Σ ln(δ + x₍ᵢ₎))²],
α̂_Rid = [Σ ln(1 − F̂_i) − β̂_Rid Σ ln(δ + x₍ᵢ₎)] / (n + λ), (16)

and

λ = ρσ²/(β′β);  0 < λ < 1, (17)

where ρ represents the number of parameters of the distribution and β′β is the sum of the squared regression coefficients.

Note that when λ = 0, we recover the OLS estimates.

The Ridge Regression estimate of ϑ, denoted by ϑ̂_Rid, is given as follows:

ϑ̂_Rid = −β̂_Rid. (18)

2.4. UMVUE Estimator of the Shape Parameter ϑ

The pdf of the LD belongs to the exponential family. Therefore, T = Σ_{i=1}^{n} ln(1 + x_i/δ) is a complete sufficient statistic for ϑ. Then, by the Lehmann–Scheffé theorem [20], the uniformly minimum variance unbiased estimator ϑ̂_UMVUE of ϑ may be given by the following equation:

ϑ̂_UMVUE = (n − 1) / Σ_{i=1}^{n} ln(1 + x_i/δ) = (n − 1)/T. (19)

2.5. Median Method (M.M)

This method is based on the fact that the median divides the data into two equal parts:

F(x_med) = 0.5. (20)

By substituting into the cumulative distribution function defined by (2), the equation becomes

1 − (1 + x_med/δ)^{−ϑ} = 0.5. (21)

Therefore, the estimate ϑ̂_Med of ϑ can be obtained as follows:

ϑ̂_Med = −ln(0.5) / ln(1 + x_med/δ), (22)

where xmed is the median of the data.
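Equation (22) is a one-line computation once the sample median is known; a minimal sketch (ours, not the paper's code):

```python
import math
import statistics

def lomax_median_shape(xs, delta):
    """Median-method estimate, equation (22): solving F(x_med) = 0.5
    gives theta_hat = -ln(0.5) / ln(1 + x_med / delta)."""
    x_med = statistics.median(xs)
    return -math.log(0.5) / math.log(1 + x_med / delta)
```

As a sanity check, the theoretical Lomax median is x_med = δ(2^{1/ϑ} − 1), for which the formula returns ϑ exactly.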

3. Prior and Posterior Density Functions

3.1. Prior Distribution

Bayes estimators demand an appropriate selection of prior for the parameter. If we do not have sufficient knowledge about the parameter, noninformative priors are the better choice; otherwise, informative priors are desirable. In this research, we study both types of priors: informative and noninformative.

3.1.1. Non-Informative Prior

Let us assume that ϑ has a noninformative prior density defined by the extended Jeffreys prior h₁(ϑ), which is given by the following equation:

h₁(ϑ) ∝ [I(ϑ)]^c;  c > 0, (23)

where I(ϑ) is the Fisher information, defined as follows:

I(ϑ) = −nE[∂² log f(x; ϑ, δ)/∂ϑ²],  h₁(ϑ) = 1/ϑ^{2c};  c > 0. (24)

3.1.2. Informative Priors (The Natural Conjugate Prior)

In this work, three informative prior distributions were used to study the effect of different priors on the Bayesian estimates of ϑ.

  • (a)
    Chi-squared prior:
    h₂(ϑ) = (d^{k/2} / (2^{k/2} Γ(k/2))) ϑ^{k/2−1} exp(−dϑ/2);  ϑ > 0; k, d > 0. (25)
  • (b)
    Inverted Levy prior:
    h₃(ϑ) = √(k/2π) ϑ^{−1/2} exp(−dϑ/2);  ϑ > 0; k > 0, (26)
  • (c)
    Gamma prior:
    h₄(ϑ) = (d^k/Γ(k)) ϑ^{k−1} exp(−dϑ);  ϑ > 0; k, d > 0. (27)

3.2. Posterior Density Functions

The posterior distribution for the shape parameter ϑ can be expressed as follows:

π(ϑ | x̄) = L(ϑ, δ | x̄) h(ϑ) / ∫₀^∞ L(ϑ, δ | x̄) h(ϑ) dϑ. (28)

Combining the likelihood L(x̄ | ϑ) in (4) with the extended Jeffreys prior (24), the chi-square prior (25), the inverted Levy prior (26), and the gamma prior (27), the posterior densities of ϑ can be found, respectively, as follows:

π₁(ϑ | x̄) = (T^{n−2c+1}/Γ(n − 2c + 1)) ϑ^{n−2c} exp(−Tϑ);  ϑ > 0; c > 0,
π₂(ϑ | x̄) = ((T + d/2)^{n+k/2}/Γ(n + k/2)) ϑ^{n+k/2−1} exp(−(T + d/2)ϑ);  ϑ > 0; k, d > 0,
π₃(ϑ | x̄) = ((T + d/2)^{n+1/2}/Γ(n + 1/2)) ϑ^{n−1/2} exp(−(T + d/2)ϑ);  ϑ > 0; d > 0,
π₄(ϑ | x̄) = ((T + d)^{n+k}/Γ(n + k)) ϑ^{n+k−1} exp(−(T + d)ϑ);  ϑ > 0; k, d > 0. (29)

4. Loss Functions

In Bayes estimation, we will consider three types of loss functions: SELF, LLF, and CLLF.

4.1. Squared Error Loss Function

The SELF is defined as follows [21]:

L(ϑ̂, ϑ) = (ϑ̂ − ϑ)². (30)

The Bayes estimator of ϑ relative to SELF, signified by ϑ̂_BSE, is

ϑ̂_BSE = E_h(ϑ | x̄). (31)

4.2. LINEX Loss Function

The LINEX loss function for ϑ can be written as follows [22, 23]:

L(ϑ̂, ϑ) ∝ exp(a(ϑ̂ − ϑ)) − a(ϑ̂ − ϑ) − 1;  a ≠ 0. (32)

The Bayes estimator of ϑ relative to LLF, denoted by ϑ̂_BL, is

ϑ̂_BL = −(1/a) ln E_ϑ[exp(−aϑ)];  a ≠ 0, (33)

provided that E_ϑ[exp(−aϑ)] exists and is finite, where E_ϑ denotes the expectation with respect to the posterior distribution of ϑ.

4.3. Composite LINEX Loss Function

CLLF is given by the following formula [24]:

L(ϑ̂, ϑ) = L_a(ϑ̂, ϑ) + L_{−a}(ϑ̂, ϑ) = exp(a(ϑ̂ − ϑ)) + exp(−a(ϑ̂ − ϑ)) − 2;  a > 0. (34)

The Bayes estimator of ϑ relative to CLLF, denoted by ϑ̂_BCL, is

ϑ̂_BCL = (1/2a) ln{E_ϑ[exp(aϑ) | x̄] / E_ϑ[exp(−aϑ) | x̄]}, (35)

provided that E_ϑ[exp(aϑ) | x̄] and E_ϑ[exp(−aϑ) | x̄] exist and are finite.

5. Bayes Estimator

In this part, we estimate ϑ using three loss functions (SELF, LLF, and CLLF) and four prior distributions for ϑ: the extended Jeffreys, chi-square, inverted Levy, and gamma priors [25–28].

5.1. Bayesian Estimator of ϑ under SELF

The Bayes estimate of ϑ relative to SELF based on π₁(ϑ | x̄), signified as ϑ̂_BSE1, can be acquired by using equations (29) and (31):

ϑ̂_BSE1 = E(ϑ | x̄) = ∫₀^∞ ϑ π₁(ϑ | x̄) dϑ = ∫₀^∞ (T^{n−2c+1}/Γ(n − 2c + 1)) ϑ^{n−2c+1} exp(−Tϑ) dϑ = (n − 2c + 1)/T. (36)

Likewise, we can obtain the Bayes estimates of ϑ relative to SELF based on π₂(ϑ | x̄), π₃(ϑ | x̄), and π₄(ϑ | x̄), signified as ϑ̂_BSE2, ϑ̂_BSE3, and ϑ̂_BSE4, by using equations (29) and (31):

ϑ̂_BSE2 = ∫₀^∞ ϑ π₂(ϑ | x̄) dϑ = (n + k/2)/(T + d/2),
ϑ̂_BSE3 = ∫₀^∞ ϑ π₃(ϑ | x̄) dϑ = Γ(n + 3/2)/[Γ(n + 1/2)(T + d/2)] = (n + 1/2)/(T + d/2), (37)

and

ϑ̂_BSE4 = ∫₀^∞ ϑ π₄(ϑ | x̄) dϑ = (n + k)/(T + d). (38)
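The four SELF estimates above are simple closed forms in T. The sketch below (ours; the default hyperparameter values c = 0.5 and (k, d) = (0.6, 0.2) follow the simulation settings) evaluates equations (36)–(38):

```python
import math

def bayes_self_estimates(xs, delta, c=0.5, k=0.6, d=0.2):
    """Closed-form Bayes estimates of the shape under SELF, equations
    (36)-(38); for the inverted Levy prior, Gamma(n + 3/2)/Gamma(n + 1/2)
    simplifies to n + 1/2."""
    n = len(xs)
    T = sum(math.log(1 + x / delta) for x in xs)
    return {
        "jeffreys": (n - 2 * c + 1) / T,           # equation (36)
        "chi_square": (n + k / 2) / (T + d / 2),   # equation (37)
        "inverted_levy": (n + 0.5) / (T + d / 2),  # equation (37)
        "gamma": (n + k) / (T + d),                # equation (38)
    }
```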

5.2. Bayesian Estimator of ϑ under LLF

We can obtain the Bayes estimator of ϑ under the LLF based on π₁(ϑ | x̄), signified as ϑ̂_BL1, by using equations (29) and (33) as follows:

ϑ̂_BL1 = −(1/a) ln ∫₀^∞ exp(−aϑ) π₁(ϑ | x̄) dϑ = −(1/a) ln ∫₀^∞ exp(−aϑ) (T^{n−2c+1}/Γ(n − 2c + 1)) ϑ^{n−2c} exp(−Tϑ) dϑ = ((n − 2c + 1)/a) ln(1 + a/T). (39)

In the same way, the Bayes estimates of ϑ relative to LLF based on π₂(ϑ | x̄), π₃(ϑ | x̄), and π₄(ϑ | x̄), signified as ϑ̂_BL2, ϑ̂_BL3, and ϑ̂_BL4, are obtained from equations (29) and (33):

ϑ̂_BL2 = −(1/a) ln ∫₀^∞ exp(−aϑ) π₂(ϑ | x̄) dϑ = ((n + k/2)/a) ln(1 + a/(T + d/2)),
ϑ̂_BL3 = −(1/a) ln ∫₀^∞ exp(−aϑ) π₃(ϑ | x̄) dϑ = ((n + 1/2)/a) ln(1 + a/(T + d/2)), (40)

and

ϑ̂_BL4 = −(1/a) ln ∫₀^∞ exp(−aϑ) π₄(ϑ | x̄) dϑ = ((n + k)/a) ln(1 + a/(T + d)). (41)
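Equations (39)–(41) can be evaluated directly; as a → 0, each expression tends to the corresponding SELF estimate, which provides a convenient sanity check. A sketch (ours; hyperparameter defaults follow the simulation settings):

```python
import math

def bayes_linex_estimates(xs, delta, a=1.0, c=0.5, k=0.6, d=0.2):
    """Closed-form Bayes estimates of the shape under LLF, equations
    (39)-(41). Each has the form (m / a) * ln(1 + a / s), where m is the
    posterior shape and s the posterior rate."""
    n = len(xs)
    T = sum(math.log(1 + x / delta) for x in xs)
    return {
        "jeffreys": (n - 2 * c + 1) / a * math.log(1 + a / T),          # (39)
        "chi_square": (n + k / 2) / a * math.log(1 + a / (T + d / 2)),  # (40)
        "inverted_levy": (n + 0.5) / a * math.log(1 + a / (T + d / 2)), # (40)
        "gamma": (n + k) / a * math.log(1 + a / (T + d)),               # (41)
    }
```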

5.3. Bayesian Estimation of ϑ under CLLF

The Bayes estimate of ϑ under CLLF based on π₁(ϑ | x̄), signified as ϑ̂_BCL1, is obtained by using equations (29) and (35):

ϑ̂_BCL1 = (1/2a) ln{E_ϑ[exp(aϑ) | x̄] / E_ϑ[exp(−aϑ) | x̄]} = (1/2a) ln(I₁/I₂), (42)

where

I₁ = E_ϑ[exp(aϑ) | x̄] = ∫₀^∞ exp(aϑ) (T^{n−2c+1}/Γ(n − 2c + 1)) ϑ^{n−2c} exp(−Tϑ) dϑ = (T/(T − a))^{n−2c+1}, (43)

and

I₂ = E_ϑ[exp(−aϑ) | x̄] = ∫₀^∞ exp(−aϑ) (T^{n−2c+1}/Γ(n − 2c + 1)) ϑ^{n−2c} exp(−Tϑ) dϑ = (T/(T + a))^{n−2c+1}. (44)

Therefore, the Bayes estimate of the parameter ϑ is

ϑ̂_BCL1 = (1/2a) ln[(T/(T − a))^{n−2c+1} / (T/(T + a))^{n−2c+1}] = ((n − 2c + 1)/2a) ln((T + a)/(T − a)). (45)

Similarly, the Bayes estimates of ϑ under CLLF based on π₂(ϑ | x̄), π₃(ϑ | x̄), and π₄(ϑ | x̄), signified as ϑ̂_BCL2, ϑ̂_BCL3, and ϑ̂_BCL4, are obtained by using equations (29) and (35):

ϑ̂_BCL2 = (1/2a) ln{E_ϑ[exp(aϑ) | x̄] / E_ϑ[exp(−aϑ) | x̄]} = (1/2a) ln(I₃/I₄), (46)

where

I₃ = ∫₀^∞ exp(aϑ) π₂(ϑ | x̄) dϑ = ((T + d/2)/(T + d/2 − a))^{n+k/2}, (47)

and

I₄ = ∫₀^∞ exp(−aϑ) π₂(ϑ | x̄) dϑ = ((T + d/2)/(T + d/2 + a))^{n+k/2}, (48)

so

ϑ̂_BCL2 = ((n + k/2)/2a) ln((T + d/2 + a)/(T + d/2 − a)), (49)

and

ϑ̂_BCL3 = (1/2a) ln{E_ϑ[exp(aϑ) | x̄] / E_ϑ[exp(−aϑ) | x̄]} = (1/2a) ln(I₅/I₆), (50)

where

I₅ = ∫₀^∞ exp(aϑ) π₃(ϑ | x̄) dϑ = ((T + d/2)/(T + d/2 − a))^{n+1/2}, (51)

and

I₆ = ∫₀^∞ exp(−aϑ) π₃(ϑ | x̄) dϑ = ((T + d/2)/(T + d/2 + a))^{n+1/2}, (52)

so

ϑ̂_BCL3 = ((n + 1/2)/2a) ln((T + d/2 + a)/(T + d/2 − a)),  ϑ̂_BCL4 = (1/2a) ln(I₇/I₈), (53)

where

I₇ = ∫₀^∞ exp(aϑ) π₄(ϑ | x̄) dϑ = ((T + d)/(T + d − a))^{n+k}, (54)

and

I₈ = ∫₀^∞ exp(−aϑ) π₄(ϑ | x̄) dϑ = ((T + d)/(T + d + a))^{n+k}. (55)

Therefore, the Bayes estimate of the parameter ϑ is

ϑ̂_BCL4 = ((n + k)/2a) ln((T + d + a)/(T + d − a)). (56)
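All four CLLF estimates share the form (m/2a) ln((s + a)/(s − a)), where m is the posterior shape and s the posterior rate, and each is finite only when s > a. A sketch (ours; hyperparameter defaults follow the simulation settings) evaluating equations (45), (49), (53), and (56):

```python
import math

def bayes_cllf_estimates(xs, delta, a=1.0, c=0.5, k=0.6, d=0.2):
    """Closed-form Bayes estimates of the shape under CLLF, equations
    (45), (49), (53), and (56). Requires the posterior rate s > a so that
    E[exp(a * theta)] is finite."""
    n = len(xs)
    T = sum(math.log(1 + x / delta) for x in xs)

    def estimate(m, s):
        # generic form (m / 2a) * ln((s + a) / (s - a))
        if s <= a:
            raise ValueError("posterior rate must exceed a")
        return m / (2 * a) * math.log((s + a) / (s - a))

    return {
        "jeffreys": estimate(n - 2 * c + 1, T),        # (45)
        "chi_square": estimate(n + k / 2, T + d / 2),  # (49)
        "inverted_levy": estimate(n + 0.5, T + d / 2), # (53)
        "gamma": estimate(n + k, T + d),               # (56)
    }
```

Since (1/2a) ln((s + a)/(s − a)) > 1/s for 0 < a < s, each CLLF estimate slightly exceeds its SELF counterpart.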

6. Simulation Study and Results

In this part, a simulation study has been conducted to assess and examine the behavior of the classical methods and Bayes estimators for the shape parameter of the LD under different cases. The simulation proceeds in the following steps:

  • (1)

    Set the true values of the LD parameters, varied over three cases to observe their effect on the estimates when δ > ϑ, δ = ϑ, and δ < ϑ: case I (δ = 2, ϑ = 1.5), case II (δ = 2, ϑ = 2), and case III (δ = 2, ϑ = 2.5)

  • (2)

    Determine the sample sizes n = 20, 40, 60, 80, and 100

  • (3)

    Determine the values λ = 0.75, c = 0.5, (k, d) = (0.6, 0.2), and a = 0.5, 1, and 1.5

  • (4)

    For a given sample size n, generate x₁, x₂,…, xₙ using the inverse-CDF formula x_i = δ[(1 − U_i)^{−1/ϑ} − 1], i = 1,…, n, where U_i is uniform(0, 1)

  • (5)

    The classical estimates ϑ̂_MLE, ϑ̂_OLS, ϑ̂_Rid, ϑ̂_UMVUE, and ϑ̂_Med of ϑ are computed from equations (6), (12), (18), (19), and (22), respectively

  • (6)

    Under SELF and based on the h₁(ϑ), h₂(ϑ), h₃(ϑ), and h₄(ϑ) priors, the Bayesian estimates ϑ̂_BSE1, ϑ̂_BSE2, ϑ̂_BSE3, and ϑ̂_BSE4 of ϑ are computed from equations (36)–(38)

  • (7)

    Under LLF and based on the h₁(ϑ), h₂(ϑ), h₃(ϑ), and h₄(ϑ) priors, the Bayesian estimates ϑ̂_BL1, ϑ̂_BL2, ϑ̂_BL3, and ϑ̂_BL4 of ϑ are computed from equations (39)–(41)

  • (8)

    Under CLLF and based on the h₁(ϑ), h₂(ϑ), h₃(ϑ), and h₄(ϑ) priors, the Bayesian estimates ϑ̂_BCL1, ϑ̂_BCL2, ϑ̂_BCL3, and ϑ̂_BCL4 of ϑ are calculated from equations (45), (49), (53), and (56), respectively

  • (9)
    Steps 4 to 8 are replicated 10,000 times, and the MSE for each estimate of the parameter ϑ is obtained, where
    MSE(ϑ̂) = (1/10000) Σ_{i=1}^{10000} (ϑ̂_i − ϑ)². (57)
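The steps above can be sketched in code. The following reduced Monte Carlo illustration (ours, not the paper's code) covers steps 1–5 and 9 for the classical estimators only, with a smaller replication count than the paper's 10,000 for speed, and assumes the median-rank estimator for the regression-based methods:

```python
import math
import random
import statistics

def simulate_mse(n=40, delta=2.0, theta=1.5, lam=0.75, reps=2000, seed=1):
    """Monte Carlo MSE comparison of the classical estimators of the
    Lomax shape, following simulation steps 1-5 and 9."""
    rng = random.Random(seed)
    sq = {"MLE": 0.0, "OLS": 0.0, "Ridge": 0.0, "UMVUE": 0.0, "Median": 0.0}
    for _ in range(reps):
        # step 4: inverse-CDF sampling
        xs = sorted(delta * ((1 - rng.random()) ** (-1 / theta) - 1)
                    for _ in range(n))
        T = sum(math.log(1 + x / delta) for x in xs)
        X = [math.log(delta + x) for x in xs]
        Y = [math.log(1 - (i - 0.3) / (n + 0.4)) for i in range(1, n + 1)]
        sx, sy = sum(X), sum(Y)
        sxx = sum(v * v for v in X)
        sxy = sum(u * v for u, v in zip(X, Y))
        ests = {
            "MLE": n / T,                                            # (6)
            "OLS": -(n * sxy - sx * sy) / (n * sxx - sx * sx),       # (12)
            "Ridge": -((n + lam) * sxy - sx * sy)
                     / ((n + lam) * sxx - sx * sx),                  # (18)
            "UMVUE": (n - 1) / T,                                    # (19)
            "Median": -math.log(0.5)
                      / math.log(1 + statistics.median(xs) / delta), # (22)
        }
        for name, est in ests.items():
            sq[name] += (est - theta) ** 2
    # step 9: average squared error, equation (57)
    return {name: s / reps for name, s in sq.items()}
```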

The results are displayed in Tables 1–5.

Table 1.

MSE values for non-Bayes estimators of ϑ.

Cases n ϑ^MLE ϑ^OLS ϑ^Rid ϑ^UMVUE ϑ^Med
I 20 0.1457 0.2021 0.1350 0.1249 0.3042
40 0.0645 0.1005 0.0870 0.0599 0.1440
60 0.0408 0.0689 0.0631 0.0388 0.0873
80 0.0302 0.0531 0.0501 0.0291 0.0665
100 0.0237 0.0417 0.0399 0.0229 0.0507

II 20 0.2551 0.3585 0.2434 0.2196 0.5406
40 0.1126 0.1814 0.1521 0.1047 0.2412
60 0.0702 0.1224 0.1105 0.0667 0.1528
80 0.0526 0.0942 0.0882 0.0507 0.1148
100 0.0412 0.0770 0.0732 0.0399 0.0878

III 20 0.3995 0.5743 0.4357 0.3462 0.7967
40 0.1743 0.2818 0.2455 0.1621 0.3770
60 0.1116 0.1864 0.1735 0.1062 0.2475
80 0.0807 0.1478 0.1416 0.0778 0.1775
100 0.0668 0.1215 0.1175 0.0649 0.1440

Table 2.

MSE values for Bayes estimators of ϑ with the extended Jeffreys prior.

Cases n ϑ^BSE1 ϑ^BL1 ϑ^BCL1
a=0.5 a=1 a=1.5 a=0.5 a=1 a=1.5
I 20 0.1457 0.1298 0.1177 0.1081 0.1464 0.1476 0.1511
40 0.0645 0.0611 0.0560 0.0532 0.0645 0.0621 0.0614
60 0.0408 0.0393 0.0371 0.0371 0.0408 0.0397 0.0408
80 0.0302 0.0294 0.0292 0.0275 0.0303 0.0307 0.0296
100 0.0237 0.0232 0.0230 0.0223 0.0237 0.0239 0.0237

II 20 0.2551 0.2200 0.1961 0.1803 0.2574 0.2628 0.2776
40 0.1126 0.1050 0.0967 0.0952 0.1128 0.1104 0.1151
60 0.0702 0.0670 0.0655 0.0632 0.0703 0.0716 0.0708
80 0.0526 0.0508 0.0504 0.0480 0.0526 0.0536 0.0526
100 0.0412 0.0400 0.0397 0.0388 0.0412 0.0419 0.0415

III 20 0.3995 0.3347 0.2922 0.2727 0.4051 0.4189 0.4535
40 0.1743 0.1601 0.1517 0.1441 0.1748 0.1780 0.1794
60 0.1116 0.1054 0.1012 0.0985 0.1117 0.1126 0.1121
80 0.0807 0.0774 0.0729 0.0723 0.0808 0.0786 0.0805
100 0.0668 0.0646 0.0600 0.0596 0.0668 0.0636 0.0646

Table 3.

MSE values for Bayes estimators of ϑ with the chi-square prior.

Cases n ϑ^BSE2 ϑ^BL2 ϑ^BCL2
a=0.5 a=1 a=1.5 a=0.5 a=1 a=1.5
I 20 0.1468 0.1305 0.1177 0.1077 0.1476 0.1486 0.1522
40 0.0648 0.0613 0.0561 0.0531 0.0649 0.0625 0.0617
60 0.0410 0.0395 0.0371 0.0370 0.0410 0.0399 0.0410
80 0.0304 0.0295 0.0292 0.0275 0.0304 0.0309 0.0297
100 0.0238 0.0232 0.0231 0.0223 0.0238 0.0240 0.0238

II 20 0.2529 0.2177 0.1933 0.1771 0.2552 0.2602 0.2746
40 0.1124 0.1045 0.0961 0.0944 0.1126 0.1101 0.1149
60 0.0702 0.0668 0.0653 0.0629 0.0702 0.0715 0.0708
80 0.0526 0.0507 0.0502 0.0479 0.0526 0.0536 0.0526
100 0.0412 0.0400 0.0397 0.0386 0.0412 0.0419 0.0415

III 20 0.3895 0.3265 0.2849 0.2658 0.3948 0.4077 0.4405
40 0.1726 0.1584 0.1499 0.1423 0.1731 0.1762 0.1776
60 0.1109 0.1047 0.1005 0.0977 0.1110 0.1119 0.1114
80 0.0804 0.0770 0.0725 0.0719 0.0804 0.0783 0.0801
100 0.0666 0.0643 0.0598 0.0593 0.0666 0.0634 0.0643

Table 4.

MSE values for Bayes estimators of ϑ with the inverted Levy prior.

Cases n ϑ^BSE3 ϑ^BL3 ϑ^BCL3
a=0.5 a=1 a=1.5 a=0.5 a=1 a=1.5
I 20 0.1528 0.1352 0.1210 0.1100 0.1536 0.1544 0.1583
40 0.0662 0.0624 0.0569 0.0536 0.0663 0.0638 0.0630
60 0.0416 0.0400 0.0375 0.0373 0.0416 0.0405 0.0416
80 0.0307 0.0298 0.0294 0.0277 0.0307 0.0312 0.0300
100 0.0240 0.0234 0.0232 0.0224 0.0240 0.0242 0.0240

II 20 0.2630 0.2248 0.1974 0.1793 0.2654 0.2703 0.2858
40 0.1146 0.1062 0.0972 0.0951 0.1149 0.1124 0.1174
60 0.0712 0.0676 0.0658 0.0631 0.0712 0.0726 0.0717
80 0.0531 0.0512 0.0505 0.0481 0.0532 0.0541 0.0532
100 0.0416 0.0403 0.0399 0.0387 0.0416 0.0422 0.0418

III 20 0.4043 0.3358 0.2892 0.2662 0.4099 0.4234 0.4581
40 0.1760 0.1607 0.1511 0.1425 0.1765 0.1798 0.1813
60 0.1124 0.1057 0.1011 0.0977 0.1126 0.1135 0.1128
80 0.0812 0.0776 0.0728 0.0720 0.0813 0.0791 0.0811
100 0.0671 0.0647 0.0599 0.0594 0.0671 0.0639 0.0649

Table 5.

MSE values for Bayes estimators of ϑ with the gamma prior.

Cases n ϑ^BSE4 ϑ^BL4 ϑ^BCL4
a=0.5 a=1 a=1.5 a=0.5 a=1 a=1.5
I 20 0.1482 0.1314 0.1179 0.1075 0.1489 0.1498 0.1534
40 0.0652 0.0615 0.0562 0.0530 0.0653 0.0629 0.0621
60 0.0412 0.0396 0.0372 0.0370 0.0412 0.0401 0.0412
80 0.0305 0.0296 0.0293 0.0275 0.0305 0.0310 0.0298
100 0.0239 0.0233 0.0231 0.0223 0.0239 0.0240 0.0239

II 20 0.2510 0.2156 0.1907 0.1742 0.2531 0.2578 0.2720
40 0.1121 0.1041 0.0956 0.0937 0.1123 0.1099 0.1147
60 0.0701 0.0667 0.0651 0.0626 0.0702 0.0715 0.0707
80 0.0526 0.0507 0.0501 0.0477 0.0526 0.0535 0.0526
100 0.0412 0.0400 0.0396 0.0385 0.0412 0.0419 0.0415

III 20 0.3801 0.3187 0.2779 0.2591 0.3851 0.3971 0.4284
40 0.1709 0.1567 0.1482 0.1405 0.1714 0.1745 0.1759
60 0.1102 0.1040 0.0997 0.0969 0.1104 0.1113 0.1107
80 0.0800 0.0767 0.0721 0.0714 0.0801 0.0780 0.0798
100 0.0663 0.0641 0.0595 0.0591 0.0664 0.0632 0.0641

7. Conclusions and Recommendations

In this paper, the Ridge Regression method was employed to estimate the shape parameter of the LD. In addition, a Monte Carlo simulation was performed to test the performance of the Ridge Regression method and to compare the Ridge Regression estimator with the other estimators, including the MLE, OLS, UMVUE, and M.M estimators and the Bayesian estimators based on SELF, LLF, and CLLF. The major observations are identified in the following points:

  1. Among the classical estimators (Table 1), the UMVUE performed better than the MLE, OLS, Ridge, and M.M estimators in all cases and for all sample sizes, whereas Ridge Regression performed better than the MLE estimator for the smallest sample size (n = 20) in cases I and II. Meanwhile, the results showed that the Ridge estimator outperformed the OLS estimator in all cases and for all sample sizes.

  2. Among the Bayes estimators, the gamma prior appeared as the best prior under LLF and CLLF for all cases and all sample sizes. The same holds under SELF when δ = ϑ and δ < ϑ, while the extended Jeffreys prior was the best prior under SELF when δ > ϑ.

  3. The MSE values associated with each classical and Bayes estimate (for each prior and each loss function) decrease as the sample size increases. The results also show convergence among most of the estimators as the sample size grows, which conforms to statistical theory.

  4. For all cases and all sample sizes, LLF (a = 1.5) appeared as the best loss function among the Bayes estimates corresponding to the gamma prior.

  5. According to the results, the MSE values of all classical and Bayes estimators of the shape parameter increase as the shape parameter value increases.

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under grant number RGP.2/4/43.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  • 1. Fan P., Deng R., Qiu J., Zhao Z., Wu S. Well logging curve reconstruction based on kernel ridge regression. Arabian Journal of Geosciences. 2021;14(16):1559–1610. doi: 10.1007/s12517-021-07792-y.
  • 2. Dorugade A. V. New ridge parameters for ridge regression. Journal of the Association of Arab Universities for Basic and Applied Sciences. 2014;15(1):94–99. doi: 10.1016/j.jaubas.2013.03.005.
  • 3. Malthouse E. C. Ridge regression and direct marketing scoring models. Journal of Interactive Marketing. 1999;13(4):10–23. doi: 10.1002/(sici)1520-6653(199923)13:4<10::aid-dir2>3.0.co;2-3.
  • 4. Özkale M. R., Lemeshow S., Sturdivant R. Logistic regression diagnostics in ridge regression. Computational Statistics. 2018;33(2):563–593. doi: 10.1007/s00180-017-0755-x.
  • 5. Assaf A. G., Tsionas M., Tasiopoulos A. Diagnosing and correcting the effects of multicollinearity: Bayesian implications of ridge regression. Tourism Management. 2019;71:1–8. doi: 10.1016/j.tourman.2018.09.008.
  • 6. Ullah M. I., Aslam M., Altaf S. lmridge: a comprehensive R package for ridge regression. The R Journal. 2019;10(2):326. doi: 10.32614/rj-2018-060.
  • 7. Sevinç V., Göktaş A. A comparison of different ridge parameters under multicollinearity and heteroscedasticity. Süleyman Demirel Üniversitesi Fen Bilimleri Enstitüsü Dergisi. 2019;23(2):381–389. doi: 10.19113/sdufenbed.484275.
  • 8. Rajan M. P. An efficient ridge regression algorithm with parameter estimation for data analysis in machine learning. SN Computer Science. 2022;3(2):171–216. doi: 10.1007/s42979-022-01051-x.
  • 9. Sami F., Amin M., Butt M. M. On the ridge estimation of the Conway–Maxwell Poisson regression model with multicollinearity: methods and applications. Concurrency and Computation: Practice and Experience. 2022;34(1):e6477. doi: 10.1002/cpe.6477.
  • 10. Amin M., Akram M. N., Ramzan Q. Bayesian estimation of ridge parameter under different loss functions. Communications in Statistics - Theory and Methods. 2022;51(12):4055–4071. doi: 10.1080/03610926.2020.1809675.
  • 11. Al-Noor N. H., Alwan S. S. Non-Bayes, Bayes and empirical Bayes estimators for the shape parameter of Lomax distribution. Mathematical Theory and Modeling. 2015;5(2):17–28.
  • 12. Al-Duais F. S., Hmood M. Y. Bayesian and non-Bayesian estimation of the Lomax model based on upper record values under weighted LINEX loss function. Periodicals of Engineering and Natural Sciences. 2020;8(3):1786–1794.
  • 13. Ellah H. Comparison of estimates using record statistics from Lomax model: Bayesian and non-Bayesian approaches. Journal of Statistical Research of Iran. 2007;3(2):139–158. doi: 10.18869/acadpub.jsri.3.2.139.
  • 14. Asl M. N., Belaghi R. A., Bevrani H. Classical and Bayesian inferential approaches using Lomax model under progressively type-I hybrid censoring. Journal of Computational and Applied Mathematics. 2018;343:397–412. doi: 10.1016/j.cam.2018.04.028.
  • 15. Mohie El-Din M. M., Okasha H. M., Al-Zahrani B. Empirical Bayes estimators of reliability performances using progressive type-II censoring from Lomax model. Journal of Advanced Research in Applied Mathematics. 2013;5(1):74–83. doi: 10.5373/jaram.1564.092912.
  • 16. Okasha H. M. E-Bayesian estimation for the Lomax distribution based on type-II censored data. Journal of the Egyptian Mathematical Society. 2014;22(3):489–495. doi: 10.1016/j.joems.2013.12.009.
  • 17. Liu K., Zhang Y. The E-Bayesian estimation for Lomax distribution based on generalized type-I hybrid censoring scheme. Mathematical Problems in Engineering. 2021;2021:1–19. doi: 10.1155/2021/5570320.
  • 18. Al-Bossly A. E-Bayesian and Bayesian estimation for the Lomax distribution under weighted composite LINEX loss function. Computational Intelligence and Neuroscience. 2021;2021:2101972. doi: 10.1155/2021/2101972.
  • 19. Mahmoud M. A. W. MCMC technique to study the Bayesian estimation using record values from the Lomax distribution. International Journal of Computer Applications. 2013;73(5):8–14. doi: 10.5120/12735-9617.
  • 20. Howlader H., Hossain A. M. Bayesian survival estimation of Pareto distribution of the second kind based on failure-censored data. Computational Statistics & Data Analysis. 2002;38(3):301–314. doi: 10.1016/s0167-9473(01)00039-1.
  • 21. Al-Duais F. S. Bayesian estimations under the weighted LINEX loss function based on upper record values. Complexity. 2021;2021:9982916. doi: 10.1155/2021/9982916.
  • 22. Al-Duais F. S., Alhagyan M. Nonlinear programming to determine best weighted coefficient of balanced LINEX loss function based on lower record values. Complexity. 2021;2021:5273191. doi: 10.1155/2021/5273191.
  • 23. Al-Duais F. S. Bayesian analysis of record statistic from the Inverse Weibull distribution under balanced loss function. Mathematical Problems in Engineering. 2021;2021:6648462. doi: 10.1155/2021/6648462.
  • 24. Wei S., Wang C., Li Z. Bayes estimation of Lomax distribution parameter in the composite LINEX loss of symmetry. Journal of Interdisciplinary Mathematics. 2017;20(5):1277–1287. doi: 10.1080/09720502.2017.1311043.
  • 25. Xue G., Lin F., Li S., Liu H. Adaptive dynamic surface control for finite-time tracking of uncertain nonlinear systems with dead-zone inputs and actuator faults. International Journal of Control, Automation and Systems. 2021;19(8):2797–2811. doi: 10.1007/s12555-020-0441-6.
  • 26. Taloba A. I., Riad M. R., Soliman T. H. A. Developing an efficient spectral clustering algorithm on large scale graphs in spark. In: Proceedings of the 2017 Eighth International Conference on Intelligent Computing and Information Systems (ICICIS); December 2017; Cairo, Egypt. IEEE; pp. 292–298.
  • 27. Rayan A., Taloba A. I., Abd El-Aziz R. M., Amr A. IoT enabled secured fog-based cloud server management using task prioritization strategies. International Journal of Advanced Research in Engineering & Technology. 2020;11(9).
  • 28. Ha S., Chen L., Liu H., Zhang S. Command filtered adaptive fuzzy control of fractional-order nonlinear systems. European Journal of Control. 2022;63:48–60. doi: 10.1016/j.ejcon.2021.08.002.


