Entropy. 2021 Nov 26;23(12):1578. doi: 10.3390/e23121578

Bayesian and Classical Inference under Type-II Censored Samples of the Extended Inverse Gompertz Distribution with Engineering Applications

Ahmed Elshahhat 1, Hassan M Aljohani 2, Ahmed Z Afify 3,*
Editor: Sotiris Kotsiantis
PMCID: PMC8700446  PMID: 34945883

Abstract

In this article, we introduce a new three-parameter distribution called the extended inverse-Gompertz (EIGo) distribution. The additional shape parameter provides extra flexibility that improves the fit in several applications. The EIGo distribution can be seen as an extension of the inverted exponential, inverse Gompertz, and generalized inverted exponential distributions. Its failure rate function has an upside-down bathtub shape. Various statistical and reliability properties of the EIGo distribution are discussed. The model parameters are estimated by the maximum-likelihood and Bayesian methods under Type-II censored samples, where the parameters are assigned gamma priors. The performance of the proposed approaches is examined using simulation results. Finally, two real-life engineering data sets are analyzed to illustrate the applicability of the EIGo distribution, showing that it provides better fits than competing inverted models such as inverse-Gompertz, inverse-Weibull, inverse-gamma, generalized inverse-Weibull, exponentiated inverted-Weibull, generalized inverted half-logistic, inverted-Kumaraswamy, inverted Nadarajah–Haghighi, and alpha-power inverse-Weibull distributions.

Keywords: Bayesian estimation, inverse-Gompertz distribution, entropies, moments, stress-strength reliability, maximum likelihood estimation, Type-II censored data, MCMC

1. Introduction

The two-parameter Gompertz (Go) distribution is very important in modeling actuarial tables and human mortality. It was historically introduced by [1], after which many authors have contributed to its statistical methodology and characterization. Several studies have shown that the Go distribution is not flexible enough for modeling various phenomena because it has only an increasing hazard rate (HR) shape; hence, many extensions have been proposed, for example, the generalized-Go [2], beta-Go [3], transmuted-Go [4], McDonald-Go [5], exponentiated generalized Weibull-Go [6], unit-Go [7], power-Go [8], skew reflected-Go [9], Topp-Leone Go [10], and alpha-power Go [11] distributions.

Furthermore, Wu et al. [12] estimated the parameters of the Go distribution using the least-squares approach. Soliman et al. [13] estimated the parameters of the Go distribution using the maximum likelihood (ML) and Bayes methods under progressive first-failure censored samples. Dey et al. [14] studied the properties and different methods of estimation for the Go distribution.

Recently, many authors have constructed inverted models and studied their applications in several applied fields; examples include the inverse Nakagami-m [15], inverse weighted-Lindley [16], and logarithmic transformed inverse-Weibull [17] distributions.

Eliwa et al. [18] proposed the two-parameter inverse Go (IGo) distribution with an upside-down bathtub shape HR function. The non-negative random variable (rv) X is said to have an IGo distribution if its cumulative distribution function (CDF) is specified (for x>0) by

R(x;β,θ) = \exp\left[-\frac{β}{θ}\left(e^{θ/x}-1\right)\right], \quad β,θ>0, (1)

where β and θ denote the shape and scale parameters, respectively.

The first objective of this article is to present a new lifetime model called the EIGo distribution and explore some of its useful properties. Specifically, the EIGo model is constructed based on the extended-R (E-R) family [19] by adding another shape parameter that addresses the lack of fit of the IGo distribution for modeling real-life data that exhibit non-monotone failure rates. We are motivated to construct the EIGo distribution because (i) it is capable of modeling a unimodal HR shape, which provides a good fit for many real data sets; (ii) the EIGo model contains some well-known distributions as special cases; (iii) the EIGo model can be considered a good alternative to the IGo model and other competing inverted models for fitting positive data with a long right tail; and (iv) the EIGo distribution outperforms some competing inverted distributions on two real engineering data sets. One of the important advantages of the EIGo model is its ability to provide an improved fit with respect to its competing inverted models.

The second objective is to address and evaluate the behavior of classical and Bayesian estimators for the unknown parameters of the proposed EIGo distribution under Type-II censored samples. We compare the performances of these estimators by conducting extensive simulations in terms of their root mean squared errors (RMSEs) and relative absolute biases (RABs).

The paper is organized into seven sections. In Section 2, the EIGo distribution is introduced together with its special cases and expansions. Some of its useful properties are addressed in Section 3. In Section 4, the maximum likelihood and Bayesian methods are discussed under Type-II censored samples. In Section 5, the performances of the maximum likelihood and Bayesian approaches are explored via simulation results. In Section 6, the applicability of the EIGo model is demonstrated using two real-life engineering datasets. Finally, some concluding remarks are presented in Section 7.

2. The EIGo Distribution

In this section, we introduce the three-parameter EIGo distribution and some of its sub-models. The CDF of the E-R family, with a shape parameter α>0, has the form

F(x;α) = 1 - \left[1-R(x)\right]^{α}, \quad x ∈ ℝ. (2)

A lifetime rv X is said to have the EIGo distribution if its CDF has the form

F(x) = 1 - \left[1-\exp\left(-\frac{β}{θ}\left(e^{θ/x}-1\right)\right)\right]^{α}, \quad x>0,\ α,β,θ>0, (3)

where α and β denote the shape parameters and θ denotes the scale parameter. The first advantage of the EIGo distribution is that it has a closed form for its CDF (3).

The corresponding probability density function (PDF) of (3) becomes

f(x) = αβ\,x^{-2}\exp\left[\frac{θ}{x}-\frac{β}{θ}\left(e^{θ/x}-1\right)\right]\left[1-\exp\left(-\frac{β}{θ}\left(e^{θ/x}-1\right)\right)\right]^{α-1}, \quad x>0,\ α,β,θ>0. (4)

A rv X with the PDF (4) is denoted by X ∼ EIGo(α,β,θ). The EIGo distribution involves three well-known lifetime sub-models as follows.

  • The generalized inverted-exponential (GIE) distribution [20] follows when θ → 0.

  • The IGo distribution [18] is derived for α = 1.

  • The inverse-exponential (IE) distribution [21] with one parameter β follows when θ → 0 and α = 1.

Using some specific parameter values, the shapes of the EIGo PDF (4) are displayed in Figure 1. The figure shows that the PDF of the EIGo distribution can be unimodal and right-skewed with a heavy right tail.
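As a quick numerical illustration, the closed forms (3) and (4) can be evaluated directly in R (the software used later in Sections 5 and 6). The short sketch below is ours and not part of the original analysis; the function names deigo and peigo and the chosen parameter values are illustrative only.

# EIGo density (4) and CDF (3); the names deigo/peigo and parameter values are illustrative
deigo <- function(x, alpha, beta, theta) {
  z <- (beta / theta) * (exp(theta / x) - 1)          # (beta/theta)(e^{theta/x} - 1)
  alpha * beta * x^(-2) * exp(theta / x - z) * (1 - exp(-z))^(alpha - 1)
}
peigo <- function(x, alpha, beta, theta) {
  z <- (beta / theta) * (exp(theta / x) - 1)
  1 - (1 - exp(-z))^alpha
}
curve(deigo(x, alpha = 2, beta = 1, theta = 0.5), from = 0.01, to = 5)  # unimodal, right-skewed shape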

Figure 1. Plots of the PDF of the EIGo distribution for some specific parameter values.

The corresponding survival, S(x), and HR, h(x), functions of the EIGo distribution have the forms

S(x) = \left[1-\exp\left(-\frac{β}{θ}\left(e^{θ/x}-1\right)\right)\right]^{α}, \quad x>0 (5)

and

h(x) = αβ\,x^{-2}\exp\left[\frac{θ}{x}-\frac{β}{θ}\left(e^{θ/x}-1\right)\right]\left[1-\exp\left(-\frac{β}{θ}\left(e^{θ/x}-1\right)\right)\right]^{-1}, \quad x>0. (6)

Figure 2 provides graphical representations of the HR function (HRF) of the EIGo distribution with various values of its parameters. It shows that the HRF of the EIGo distribution has an upside-down bathtub shape.

Figure 2. Plots of the HRF of the EIGo distribution for some specific parameter values.

The cumulative HRF, H(x), of the EIGo distribution has the form

H(x) = -\log S(x) = -α\log\left[1-\exp\left(-\frac{β}{θ}\left(e^{θ/x}-1\right)\right)\right], \quad x>0.

The reversed HRF, r(·), of the EIGo distribution is

r(x) = \frac{αβ\,x^{-2}\exp\left[\frac{θ}{x}-\frac{β}{θ}\left(e^{θ/x}-1\right)\right]\left[1-\exp\left(-\frac{β}{θ}\left(e^{θ/x}-1\right)\right)\right]^{α-1}}{1-\left[1-\exp\left(-\frac{β}{θ}\left(e^{θ/x}-1\right)\right)\right]^{α}}, \quad x>0.

By dividing the CDF (3) by the survival function (5), the corresponding odds function, O(x), follows as

O(x) = \frac{1-\left[1-\exp\left(-\frac{β}{θ}\left(e^{θ/x}-1\right)\right)\right]^{α}}{\left[1-\exp\left(-\frac{β}{θ}\left(e^{θ/x}-1\right)\right)\right]^{α}}, \quad x>0.
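Under the same illustrative naming convention, the survival function (5) and HRF (6) can be coded as follows; plotting heigo over a grid of x values reproduces the upside-down bathtub shape of Figure 2. This is a sketch under our own naming assumptions, not the authors' code.

# EIGo survival (5) and hazard rate (6); seigo/heigo are illustrative names
seigo <- function(x, alpha, beta, theta) {
  z <- (beta / theta) * (exp(theta / x) - 1)
  (1 - exp(-z))^alpha
}
heigo <- function(x, alpha, beta, theta) {
  z <- (beta / theta) * (exp(theta / x) - 1)
  alpha * beta * x^(-2) * exp(theta / x - z) / (1 - exp(-z))
}
x <- seq(0.05, 10, by = 0.05)
plot(x, heigo(x, alpha = 2, beta = 1, theta = 0.5), type = "l")  # rises and then decays: upside-down bathtub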

Expansions

Let a > 0 be a real number; we consider the following general binomial series

(1-x)^{a} = \sum_{i=0}^{\infty}\frac{(-1)^{i}\,a!}{i!\,(a-i)!}\,x^{i}, (7)

which is valid for |x|<1. By expanding the CDF (3) by (7), we get

F(x) = 1 - \sum_{i=0}^{\infty}ϑ_i(α)\,G^{*}(x; iβ, θ), (8)

where G*(x;iβ,θ) is the CDF of the IGo distribution with parameters iβ and θ, and the coefficient ϑi(α) is given by

ϑ_i(α) = \frac{(-1)^{i}\,Γ(α+1)}{i!\,Γ(α-i+1)}.

Similarly, expanding the PDF (4) by (7), we obtain

f(x) = α\sum_{i=0}^{\infty}ω_i(α)\,g^{*}\big(x; (i+1)β, θ\big), (9)

where g^{*}\big(x;(i+1)β,θ\big) is the PDF of the IGo model with parameters (i+1)β and θ, and ω_i(α) = (-1)^{i}\,Γ(α)/[Γ(α-i)\,i!].

Clearly, Equation (9) shows that the EIGo model is a linear combination of IGo densities. Thus, some structural properties of the EIGo model can be obtained from those of the IGo distribution.

3. Statistical and Reliability Characteristics

This section is devoted to determining several statistical and reliability characteristics of the EIGo distribution.

3.1. Quantile and Mode

To simulate random samples from the EIGo distribution, its quantile function (QF), xq, follows as

x_q = \frac{θ}{\ln\left[1-\frac{θ}{β}\ln\left(1-(1-q)^{1/α}\right)\right]}, \quad 0<q<1. (10)

Substituting q=0.5 into (10), the median, Med(x), of the EIGo distribution can conveniently be derived. Similarly, substituting q=0.25 and q=0.75 into (10), the first and the third quartiles of the EIGo distribution can be easily obtained.
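Since (10) is available in closed form, random samples from the EIGo distribution can be generated by the inverse-transform method. The following R sketch (the names qeigo and reigo and the parameter values are ours, for illustration only) implements this idea.

# Quantile function (10) and inverse-transform sampling; qeigo/reigo are illustrative names
qeigo <- function(q, alpha, beta, theta) {
  theta / log(1 - (theta / beta) * log(1 - (1 - q)^(1 / alpha)))
}
reigo <- function(n, alpha, beta, theta) qeigo(runif(n), alpha, beta, theta)

set.seed(1)
x <- reigo(1000, alpha = 2, beta = 1, theta = 0.5)
c(theoretical_median = qeigo(0.5, 2, 1, 0.5), sample_median = median(x))  # should be close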

The mode, x_0, of the EIGo distribution follows by differentiating the logarithm of the PDF (4) with respect to x and equating the result to zero. After some algebraic manipulations, the mode is determined by solving the following non-linear equation in x_0:

β e^{θ/x_0} - 2x_0 - θ - (α-1)\,β\,\exp\left[\frac{θ}{x_0}-\frac{β}{θ}\left(e^{θ/x_0}-1\right)\right]\left[1-\exp\left(-\frac{β}{θ}\left(e^{θ/x_0}-1\right)\right)\right]^{-1} = 0. (11)

The unique mode of the EIGo distribution cannot be obtained analytically; hence, it is obtained numerically.
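For completeness, the mode can be located numerically by maximizing the log-density over x; a minimal R sketch (with illustrative parameter values and a finite search interval chosen by us) is given below.

# Numerical mode of the EIGo density: maximize the log of (4) over a search interval
mode_eigo <- function(alpha, beta, theta, upper = 100) {
  ld <- function(x) {                       # log-density of (4), up to the additive constant log(alpha*beta)
    z <- (beta / theta) * (exp(theta / x) - 1)
    -2 * log(x) + theta / x - z + (alpha - 1) * log(1 - exp(-z))
  }
  optimize(ld, interval = c(0.01, upper), maximum = TRUE)$maximum
}
mode_eigo(alpha = 2, beta = 1, theta = 0.5)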

3.2. Mean Residual Life

The mean-residual-life (MRL) function is the average remaining lifespan of a component that has survived up to time t. It is a useful measure in reliability studies for describing the aging process.

Theorem 1.

If X has the EIGo(α,β,θ) distribution, then the MRL of the lifetime rv X, say m_R(·), takes the form

m_R(t) = \frac{1}{S(t)}\left[μ_1' - \sum_{i,s,m=0}^{\infty}\sum_{j=0}^{s}\frac{α!\,(s-j)^{m}\,θ^{m-s}\,i^{s}\,β^{s}\,t^{1-m}}{(α-i)!\,(s-j)!\,i!\,j!\,m!\,(1-m)}\,(-1)^{i+s+j}\right], \quad t>0.

Proof. 

Suppose that X is a lifetime rv with CDF (3); then the MRL is defined as [22]

m_R(t) = E(X-t \mid X>t) = \frac{1}{P(X>t)}\int_{t}^{\infty}(x-t)\,dF(x), \quad t>0. (12)

However, the MRL in (12) is equivalent to

m_R(t) = \frac{1}{S(t)}\int_{t}^{\infty}S(x)\,dx = \frac{1}{S(t)}\left[μ_1' - \int_{0}^{t}S(x)\,dx\right], \quad t>0, (13)

where μ_1' = E(X) is the mean of X, which is equal to the MRL at t = 0.

Using (5), one gets

m_R(t) = \frac{1}{S(t)}\left[μ_1' - \sum_{i,s,m=0}^{\infty}\sum_{j=0}^{s}ϑ^{*}_{i,s,j,m}(α,β,θ)\,t^{1-m}\right], (14)

where

ϑ^{*}_{i,s,j,m}(α,β,θ) = \frac{α!\,(s-j)^{m}\,θ^{m-s}\,i^{s}\,β^{s}}{(α-i)!\,(s-j)!\,i!\,j!\,m!\,(1-m)}\,(-1)^{i+s+j}.

3.3. Mean Inactivity Time

The mean inactivity time (MIT) function is useful in reliability and survival analysis. The MIT, m_I(t), of X is defined as

m_I(t) = E(t-X \mid X \le t) = \frac{\int_{0}^{t}F(x)\,dx}{F(t)}. (15)

If X ∼ EIGo(α,β,θ), then, using (8), we have

\int_{0}^{t}F(x)\,dx = t - \sum_{i,s,m=0}^{\infty}\sum_{j=0}^{s}ϑ^{*}_{i,s,j,m}(α,β,θ)\,t^{1-m}. (16)

The MIT of the EIGo distribution follows simply by substituting (3) and (16) in Equation (15). Moreover, the CDF (3) of the EIGo distribution follows from the MIT by the following formula

F(x) = \exp\left[-\int_{x}^{\infty}\frac{1-m_I'(t)}{m_I(t)}\,dt\right],

where mI(t) is differentiable.

3.4. Strong MIT

The strong-MIT (SMIT) function is another useful reliability measure, which was introduced by [23]. They showed that the SMIT has several properties that can be adopted in different applications in reliability and survival analysis. It can be used to predict the actual time at which the failure of a component or device occurs.

The SMIT, m_S(t), has the form

m_S(t) = \frac{1}{F(t)}\int_{0}^{t}2x\,F(x)\,dx. (17)

Hence, the SMIT of the EIGo distribution follows as

m_S(t) = t^{2} - \frac{2}{F(t)}\sum_{i,s,m=0}^{\infty}\sum_{j=0}^{s}ϑ^{*}_{i,s,j,m}(α,β,θ)\,\frac{(1-m)}{(2-m)}\,t^{2-m}.

3.5. Stress–Strength Reliability

The stress–strength model describes the life of a component or system that has a random strength X and is subjected to a random stress Z; the component fails whenever the applied stress exceeds its strength. Hence, R = Pr(X > Z) is a measure of component reliability.

Suppose X and Z are independent rvs following the EIGo(α,β_1,θ_1) and EIGo(α,β_2,θ_2) distributions, respectively, with the same shape parameter α. Then, the stress–strength measure is [24]

R = \int_{0}^{\infty} f_1(x)\,F_2(x)\,dx. (18)

Using (9) and (8), the PDF of X and the CDF of Z are expressed as

f_1(x) = α\sum_{i,s=0}^{\infty}\sum_{j=0}^{s}\frac{(α-1)!\,(i+1)^{s}\,β_1^{s+1}\,θ_1^{-s}}{(α-i-1)!\,(s-j)!\,i!\,j!}\,(-1)^{i+s+j}\,x^{-2}\,e^{(j-s-1)θ_1/x} (19)

and

F_2(x) = 1 - \sum_{u,w=0}^{\infty}\sum_{v=0}^{w}\frac{α!\,u^{w}\,β_2^{w}\,θ_2^{-w}}{(α-u)!\,(w-v)!\,u!\,v!}\,(-1)^{u+w+v}\,e^{(w-v)θ_2/x}, (20)

respectively.

Substituting (19) and (20) in (18), the R measure reduces to

R = α\left[\sum_{i,s=0}^{\infty}\sum_{j=0}^{s}\frac{ψ^{1}_{i,s,j}(α,β_1,θ_1)}{(j-s-1)θ_1} - \sum_{i,s,u,w=0}^{\infty}\sum_{j=0}^{s}\sum_{v=0}^{w}\frac{ψ^{1}_{i,s,j}(α,β_1,θ_1)\,ψ^{2}_{u,w,v}(α,β_2,θ_2)}{(j-s-1)θ_1-(w-v)θ_2}\right],

where

ψ^{1}_{i,s,j}(α,β_1,θ_1) = \frac{(α-1)!\,(i+1)^{s}\,β_1^{s+1}\,θ_1^{-s}}{(α-i-1)!\,(s-j)!\,i!\,j!}\,(-1)^{i+s+j}

and

ψ^{2}_{u,w,v}(α,β_2,θ_2) = \frac{α!\,u^{w}\,β_2^{w}}{(α-u)!\,(w-v)!\,u!\,v!\,θ_2^{w}}\,(-1)^{u+w+v}.
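Instead of truncating the double series, R = Pr(X > Z) in (18) can also be evaluated by one-dimensional numerical integration of f_1(x)F_2(x); the R sketch below (with inline helper functions and arbitrary illustrative parameter values) is ours and is only meant as a cross-check of the series representation.

# Stress-strength reliability R = P(X > Z) by numerically integrating (18)
deigo <- function(x, alpha, beta, theta) {
  z <- (beta / theta) * (exp(theta / x) - 1)
  alpha * beta * x^(-2) * exp(theta / x - z) * (1 - exp(-z))^(alpha - 1)
}
peigo <- function(x, alpha, beta, theta) {
  z <- (beta / theta) * (exp(theta / x) - 1)
  1 - (1 - exp(-z))^alpha
}
ss_reliability <- function(alpha, beta1, theta1, beta2, theta2)
  integrate(function(x) deigo(x, alpha, beta1, theta1) * peigo(x, alpha, beta2, theta2),
            lower = 1e-8, upper = Inf)$value            # small lower bound avoids x = 0
ss_reliability(alpha = 2, beta1 = 1, theta1 = 0.5, beta2 = 0.8, theta2 = 0.5)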

3.6. Probability Weighted Moments

The probability-weighted moments (PWMs), say ρ_{r,k}, can be adopted to estimate the unknown parameters of distributions whose inverse form cannot be expressed explicitly.

Theorem 2.

If X has the EIGo(α,β,θ) distribution, then the (r,k)-th PWM of the rv X is

ρ_{r,k} = α\sum_{v,i,s,m=0}^{\infty}\sum_{j=0}^{s}\frac{(-1)^{v+i+s+j}\,Γ(k+1)\,Γ(α(v+1))\,Γ(m-r+1)\,(i+1)^{s}\,β^{s+1}\,θ^{r-s-1}\,(j-s)^{r-m-1}}{Γ(k-v+1)\,Γ(α(v+1)-i)\,Γ(s-j+1)\,Γ(v+1)\,Γ(i+1)\,Γ(s+1)\,Γ(j+1)}. (21)

Proof. 

The (r,k)th PWM of non-negative X following a continuous CDF, F(.), is defined by [25]

ρ_{r,k} = E\left[X^{r}F^{k}(X)\right] = \int_{0}^{\infty}x^{r}F^{k}(x)\,f(x)\,dx. (22)

Substituting (3) and (4) into (22), the ρr,k can be written as

ρ_{r,k} = \int_{0}^{\infty}x^{r}F^{k}(x)f(x)\,dx = αβ\sum_{v,i=0}^{\infty}\frac{(-1)^{v+i}\,Γ(k+1)\,Γ(α(v+1))}{Γ(i+1)\,Γ(k-v+1)\,Γ(α(v+1)-i)\,Γ(v+1)}\int_{0}^{\infty}x^{r-2}e^{θ/x}\exp\left[-(i+1)\frac{β}{θ}\left(e^{θ/x}-1\right)\right]dx. (23)

The following term can be expanded by Taylor’s series as

\exp\left[-(i+1)\frac{β}{θ}\left(e^{θ/x}-1\right)\right] = \sum_{s=0}^{\infty}\frac{(-1)^{s}\,(i+1)^{s}\,β^{s}}{θ^{s}\,Γ(s+1)}\left(e^{θ/x}-1\right)^{s}.

Hence, Equation (23) reduces to

ρ_{r,k} = α\sum_{v,i,s=0}^{\infty}\frac{(-1)^{v+i+s}\,Γ(k+1)\,Γ(α(v+1))\,(i+1)^{s}\,β^{s+1}}{Γ(k-v+1)\,Γ(α(v+1)-i)\,Γ(v+1)\,Γ(i+1)\,Γ(s+1)\,θ^{s}}\int_{0}^{\infty}x^{r-2}e^{θ/x}\left(e^{θ/x}-1\right)^{s}dx. (24)

Hence, from (24), the ρ_{r,k} of X takes the form

ρ_{r,k} = α\sum_{v,i,s,m=0}^{\infty}\sum_{j=0}^{s}υ_{v,i,s,m,j}(α,β,θ)\,Γ(m-r+1),

where

υ_{v,i,s,m,j}(α,β,θ) = \frac{(-1)^{v+i+s+j}\,Γ(k+1)\,Γ(α(v+1))\,(i+1)^{s}\,β^{s+1}\,θ^{r-s-1}\,(j-s)^{r-m-1}}{Γ(j+1)\,Γ(k-v+1)\,Γ(α(v+1)-i)\,Γ(s-j+1)\,Γ(v+1)\,Γ(i+1)\,Γ(s+1)}.

3.7. Moments

Moments are used to describe the characteristics of the probability distribution, so they are important in any statistical analysis.

By definition, the r-th moment of any rv X with PDF, f(x), is

μ_r' = E(X^{r}) = \int_{0}^{\infty}x^{r}f(x)\,dx. (25)

By substituting (9) in (25), the rth moment of the EIGo(α,β,θ) distribution reduces to

μ_r' = α\sum_{i,s,m=0}^{\infty}\sum_{j=0}^{s}\frac{(α-1)!\,(i+1)^{s}\,β^{s+1}\,θ^{m-s}}{(α-i-1)!\,(s-j)!\,i!\,j!\,m!}\,(-1)^{i+s+j}\int_{0}^{\infty}x^{r-m-2}e^{(j-s)θ/x}dx = α\sum_{i,s,m=0}^{\infty}\sum_{j=0}^{s}ϖ_{i,s,m,j}(α,β,θ)\,Γ(m-r+1), (26)

where

ϖ_{i,s,m,j}(α,β,θ) = \frac{(α-1)!\,(i+1)^{s}\,(j-s)^{r-m-1}\,β^{s+1}\,θ^{r-s-1}}{(α-i-1)!\,(s-j)!\,i!\,j!\,m!}\,(-1)^{i+s+j},

and Γ(a) = \int_{0}^{\infty}t^{a-1}e^{-t}dt, a>0, is the gamma (Ga) function.

From (26), the corresponding mean of the EIGo distribution is simply obtained by setting r = 1, and the corresponding variance can also be obtained using r = 1 and r = 2.
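The series form (26) can be cross-checked by direct numerical integration of (25). Note that S(x) decays like (β/x)^α as x → ∞, so E(X^r) is finite only for r < α; the illustrative sketch below therefore uses α = 3 so that both the mean and the variance exist. The helper name moment_eigo and the parameter values are ours.

# r-th raw moment (25) by numerical integration; moments of order r exist only for r < alpha
deigo <- function(x, alpha, beta, theta) {
  z <- (beta / theta) * (exp(theta / x) - 1)
  alpha * beta * x^(-2) * exp(theta / x - z) * (1 - exp(-z))^(alpha - 1)
}
moment_eigo <- function(r, alpha, beta, theta)
  integrate(function(x) x^r * deigo(x, alpha, beta, theta), lower = 1e-8, upper = Inf)$value
m1 <- moment_eigo(1, alpha = 3, beta = 1, theta = 0.5)
m2 <- moment_eigo(2, alpha = 3, beta = 1, theta = 0.5)
c(mean = m1, variance = m2 - m1^2)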

3.8. Entropies

Entropy is a useful concept to measure the uncertainty related to a rv X. It is adopted in many fields of science such as econometrics and computer science.

The Rényi entropy of order δ, say ρ_δ(X), is defined (for δ>0 and δ≠1) as [26]

ρ_δ(X) = \frac{1}{1-δ}\log\left[\int f^{δ}(x)\,dx\right].

So, if X ∼ EIGo(α,β,θ), then we have

\int_{0}^{\infty}f^{δ}(x)\,dx = α^{δ}\sum_{i,s=0}^{\infty}\sum_{j=0}^{s}\frac{(δ(α-1))!\,(i+δ)^{s}\,β^{s+δ}\,θ^{-s}}{(δ(α-1)-i-1)!\,i!\,s!\,j!}\,(-1)^{i+s+j}\int_{0}^{\infty}x^{-2δ}e^{(j-s-δ)θ/x}dx = α^{δ}\sum_{i,s=0}^{\infty}\sum_{j=0}^{s}ω^{ρ_δ}_{i,s,j}(α,β,θ)\,Γ(2δ-1), (27)

where

ω^{ρ_δ}_{i,s,j}(α,β,θ) = \frac{(δ(α-1))!\,(i+δ)^{s}\,(j-s-δ)^{1-2δ}\,β^{s+δ}\,θ^{1-s-2δ}}{(δ(α-1)-i-1)!\,i!\,s!\,j!}\,(-1)^{i+s+j}.

Hence, the ρδ(X) of X becomes

ρ_δ(X) = \frac{1}{1-δ}\log\left[α^{δ}\sum_{i,s=0}^{\infty}\sum_{j=0}^{s}ω^{ρ_δ}_{i,s,j}(α,β,θ)\,Γ(2δ-1)\right], \quad δ>0,\ δ≠1.

The δ-entropy, denoted by I_δ(X), of the EIGo distribution has the form (for δ>0 and δ≠1)

I_δ(X) = \frac{1}{δ-1}\log\left[1-\int_{0}^{\infty}f^{δ}(x)\,dx\right].

It follows directly using (27).

3.9. Order Statistics

Consider the order statistics (OS) of a random sample of size n, say X_{(1)} ≤ X_{(2)} ≤ ⋯ ≤ X_{(n)}. Then, the PDF of the r-th order statistic X_{(r)}, r=1,2,…,n, is [27]

f_{(r)}(x) = C_r^{-1}\sum_{q=0}^{n-r}(-1)^{q}\binom{n-r}{q}\,f(x)\,F(x)^{r+q-1}, (28)

where C_r = B(r, n-r+1).

The corresponding CDF of X(r) reduces to

F_{(r)}(x) = \sum_{d=r}^{n}\sum_{q=0}^{n-d}(-1)^{q}\binom{n}{d}\binom{n-d}{q}\,F(x)^{d+q}. (29)

Using (3) and (4) of the EIGo distribution, the PDF (28) follows as

f_{(r)}(x) = \sum_{q=0}^{n-r}\sum_{v,s=0}^{\infty}ξ^{(r)}_{q,v,s}(α,β,θ)\,g^{*}_{r}\big((s+1)β,θ;x\big), (30)

where

ξ^{(r)}_{q,v,s}(α,β,θ) = \frac{α\,(n-r)!\,(r+q-1)!\,(α(v+1)-1)!\,(-1)^{q+v+s}}{C_r\,(s+1)\,(n-r-q)!\,(r+q-v-1)!\,(α(v+1)-s-1)!\,q!\,v!\,s!},

and gr*((s+1)β,θ;x) is the PDF of the IGo model with parameters (s+1)β and θ. Thus, the PDF of the EIGo distribution OS is a linear mixture of the IGo densities.

Similarly, from using (3) and (4) of the EIGo distribution, the CDF (29) reduces to

F_{(r)}(x) = \sum_{d=r}^{n}\sum_{q=0}^{n-d}\sum_{v,s=0}^{\infty}ζ^{(r)}_{d,q,v,s}(α,β,θ)\,G^{*}_{r}\big(sβ,θ;x\big), (31)

where

ζ^{(r)}_{d,q,v,s}(α,β,θ) = \frac{n!\,(n-d)!\,(d+q)!\,(αv)!\,(-1)^{q+v+s}}{(n-d-q)!\,(n-d)!\,(d+q-v)!\,(αv-s)!\,d!\,q!\,v!\,s!}.

3.10. Stochastic Ordering

The following theorem shows that the EIGo distribution is ordered with respect to the likelihood ratio order, X ≤_{lr} Y.

Theorem 3.

Let X ∼ EIGo(α_1,β,θ) and Y ∼ EIGo(α_2,β,θ). If α_1 ≥ α_2, then X ≤_{lr} Y.

Proof. 

The likelihood ratio function ξx has the form (see [28])

ξ(x) = \frac{f_X(x)}{f_Y(x)}. (32)

By substituting (4) into (32), one gets

ξ(x) = \frac{α_1\left[1-\exp\left(-\frac{β}{θ}\left(e^{θ/x}-1\right)\right)\right]^{α_1-1}}{α_2\left[1-\exp\left(-\frac{β}{θ}\left(e^{θ/x}-1\right)\right)\right]^{α_2-1}}. (33)

Taking the natural logarithm of (33) and differentiating the result with respect to x, we obtain

\frac{d}{dx}\log ξ(x) = -(α_1-α_2)\,\frac{β}{x^{2}}\,\exp\left[\frac{θ}{x}-\frac{β}{θ}\left(e^{θ/x}-1\right)\right]\left[1-\exp\left(-\frac{β}{θ}\left(e^{θ/x}-1\right)\right)\right]^{-1} < 0.

Thus, ξ(x) is decreasing in x for α_1 ≥ α_2; i.e., X ≤_{lr} Y. The proof is completed. □

4. Parameter Estimation under Type-II Censoring

In this section, we discuss the estimation of the EIGo parameters, Θ = (α,β,θ)^T, using the ML and Bayesian estimators under the Type-II censoring scheme, in which the life-test is terminated after a specified number of failures, say m (< n), out of the n test units have occurred.

4.1. Maximum Likelihood Estimators

Suppose that n independent items are taken from the EIGo model with CDF (3) and are placed on a test at time 0. Hence, the likelihood function (LF), say L(Θ|x), under the Type-II censored sample x_{(i)}, i=1,2,…,m, takes the form (see [29])

L(Θ|x) = \frac{n!}{(n-m)!}\prod_{i=1}^{m}f\big(x_{(i)};Θ\big)\left[1-F\big(x_{(m)};Θ\big)\right]^{n-m}. (34)

By substituting (3) and (4) into (34), Equation (34) reduces to

L(Θ|x) = \frac{n!\,(αβ)^{m}}{(n-m)!}\exp\left(θ\sum_{i=1}^{m}x_{(i)}^{-1}\right)\prod_{i=1}^{m}x_{(i)}^{-2}\left[1-\exp\left(-ζ(x_{(i)};β,θ)\right)\right]^{-1}\times\exp\left\{α\left[(n-m)\log\left(1-\exp\left(-ζ(x_{(m)};β,θ)\right)\right)+\sum_{i=1}^{m}\log\left(1-\exp\left(-ζ(x_{(i)};β,θ)\right)\right)\right]\right\}, (35)

where ζ(x_{(i)};β,θ) = \frac{β}{θ}\left(e^{θ/x_{(i)}}-1\right) and ζ(x_{(m)};β,θ) = \frac{β}{θ}\left(e^{θ/x_{(m)}}-1\right). Clearly, the LF of the complete sample follows as a special case of (35) by setting m = n. The associated log-LF of (35), say ℓ(Θ), becomes

ℓ(α,β,θ|x) ∝ m\log(αβ) + θ\sum_{i=1}^{m}x_{(i)}^{-1} + (α-1)\sum_{i=1}^{m}\log\left[1-\exp\left(-ζ(x_{(i)};β,θ)\right)\right] + α(n-m)\log\left[1-\exp\left(-ζ(x_{(m)};β,θ)\right)\right]. (36)

Differentiating (36) partially with respect to α, β, and θ, we obtain

\frac{∂ℓ(Θ)}{∂α} = \frac{m}{α} + (n-m)\log\left[1-\exp\left(-ζ(x_{(m)};β,θ)\right)\right] + \sum_{i=1}^{m}\log\left[1-\exp\left(-ζ(x_{(i)};β,θ)\right)\right], (37)
\frac{∂ℓ(Θ)}{∂β} = \frac{m}{β} + α(n-m)\frac{ζ_{β}(x_{(m)};β,θ)\exp\left(-ζ(x_{(m)};β,θ)\right)}{1-\exp\left(-ζ(x_{(m)};β,θ)\right)} + (α-1)\sum_{i=1}^{m}\frac{ζ_{β}(x_{(i)};β,θ)\exp\left(-ζ(x_{(i)};β,θ)\right)}{1-\exp\left(-ζ(x_{(i)};β,θ)\right)} (38)

and

\frac{∂ℓ(Θ)}{∂θ} = \sum_{i=1}^{m}x_{(i)}^{-1} + α(n-m)\frac{ζ_{θ}(x_{(m)};β,θ)\exp\left(-ζ(x_{(m)};β,θ)\right)}{1-\exp\left(-ζ(x_{(m)};β,θ)\right)} + (α-1)\sum_{i=1}^{m}\frac{ζ_{θ}(x_{(i)};β,θ)\exp\left(-ζ(x_{(i)};β,θ)\right)}{1-\exp\left(-ζ(x_{(i)};β,θ)\right)}, (39)

where ζ_{ϕ}(·) denotes the first partial derivative of ζ(·) with respect to ϕ, ζ_{β}(x_{(i)};β,θ) = ζ(x_{(i)};β,θ)/β, ζ_{β}(x_{(m)};β,θ) = ζ(x_{(m)};β,θ)/β, ζ_{θ}(x_{(i)};β,θ) = \frac{β}{θ^{2}}\left[e^{θ/x_{(i)}}\left(\frac{θ}{x_{(i)}}-1\right)+1\right], and ζ_{θ}(x_{(m)};β,θ) = \frac{β}{θ^{2}}\left[e^{θ/x_{(m)}}\left(\frac{θ}{x_{(m)}}-1\right)+1\right].

Equating the three Equations (37)–(39) to zero and solving them simultaneously provides the ML estimators (MLEs) of the EIGo parameters. Clearly, the MLEs cannot be determined in closed forms, but they can be calculated numerically using suitable iterative techniques such as the Newton–Raphson method. To construct the confidence intervals (CIs) of the model parameters, the observed information matrix, I_{ij}(·), is required, and it takes the form

I_{ij}(Θ) = -E\left[\frac{∂^{2}ℓ(Θ|x)}{∂Θ_{i}\,∂Θ_{j}}\right], \quad i,j=1,2,3. (40)

Practically, by dropping the expectation operator given in (40) and replacing Θ_ by their MLEs Θ^_, the approximate asymptotic variance–covariance matrix, I1(Θ^_), for the MLEs Θ^_=(α^,β^,θ^)T, becomes

I^{-1}(\hat{Θ}) = \begin{pmatrix} L_{αα} & L_{αβ} & L_{αθ}\\ L_{βα} & L_{ββ} & L_{βθ}\\ L_{θα} & L_{θβ} & L_{θθ} \end{pmatrix}^{-1}_{\,Θ=\hat{Θ}} = \begin{pmatrix} \hat{σ}_{\hat{α}\hat{α}} & \hat{σ}_{\hat{α}\hat{β}} & \hat{σ}_{\hat{α}\hat{θ}}\\ \hat{σ}_{\hat{β}\hat{α}} & \hat{σ}_{\hat{β}\hat{β}} & \hat{σ}_{\hat{β}\hat{θ}}\\ \hat{σ}_{\hat{θ}\hat{α}} & \hat{σ}_{\hat{θ}\hat{β}} & \hat{σ}_{\hat{θ}\hat{θ}} \end{pmatrix}. (41)

Taking the second partial derivatives of (36) with respect to α, β, and θ, the observed Fisher elements L_{ij} in (41) are obtained; they are available from the authors upon request. Under some mild regularity conditions, the MLEs Θ̂ are approximately distributed as a multivariate normal (No) distribution with mean Θ and variance–covariance matrix I^{-1}(Θ̂) [29]. Hence, for large samples, the 100(1-ε)% CIs for the model parameters α, β, and θ are

\hat{α} \pm z_{ε/2}\sqrt{\hat{σ}_{\hat{α}\hat{α}}}, \quad \hat{β} \pm z_{ε/2}\sqrt{\hat{σ}_{\hat{β}\hat{β}}} \quad \text{and} \quad \hat{θ} \pm z_{ε/2}\sqrt{\hat{σ}_{\hat{θ}\hat{θ}}},

respectively, where \hat{σ}_{\hat{α}\hat{α}}, \hat{σ}_{\hat{β}\hat{β}}, and \hat{σ}_{\hat{θ}\hat{θ}} are the diagonal elements of (41) and z_{ε/2} is the upper (ε/2)-th percentile of the standard No distribution.
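In practice, the MLEs and the asymptotic CIs can be computed with a general-purpose optimizer. The R sketch below is ours, not the authors' code: it builds the Type-II censored log-likelihood directly from the density (4) and survival function (5) (dropping the constant n!/(n-m)!), maximizes it with optim, and uses the numerically obtained Hessian for the standard errors. For illustration it treats the m = 20 smallest times of the second (repairable equipment) data set of Section 6 as a Type-II censored sample with n = 30; the starting values are arbitrary.

# MLEs and asymptotic CIs under Type-II censoring; par = (alpha, beta, theta)
negloglik <- function(par, xm, n) {                 # xm: the m smallest ordered failure times
  a <- par[1]; b <- par[2]; t <- par[3]
  if (any(par <= 0)) return(1e10)
  m  <- length(xm)
  z  <- (b / t) * (exp(t / xm) - 1)
  ll <- m * log(a * b) - 2 * sum(log(xm)) + sum(t / xm - z) +
        (a - 1) * sum(log(1 - exp(-z))) +
        a * (n - m) * log(1 - exp(-z[m]))           # censoring term uses x_(m), the largest observed time
  if (!is.finite(ll)) return(1e10)
  -ll
}
xm  <- c(0.11, 0.30, 0.40, 0.45, 0.59, 0.63, 0.70, 0.71, 0.74, 0.77,
         0.94, 1.06, 1.17, 1.23, 1.23, 1.24, 1.43, 1.46, 1.49, 1.74)
fit <- optim(c(1, 1, 1), negloglik, xm = xm, n = 30, hessian = TRUE)
se  <- sqrt(diag(solve(fit$hessian)))               # observed-information standard errors
cbind(MLE = fit$par, lower = fit$par - 1.96 * se, upper = fit$par + 1.96 * se)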

4.2. Bayes Estimators

This subsection discusses the Bayes estimators (BEs) and the Bayes credible intervals (BCIs) of the unknown parameters of the EIGo model. The squared error loss (SEL) function, as a symmetric loss function, is adopted to obtain the BEs; it is defined by

l(Θ,\tilde{Θ}) = (\tilde{Θ}-Θ)^{2}, (42)

where Θ˜ is an estimate of Θ.

The gamma (Ga) priors of the EIGo parameters can be applied to develop the BEs due to their flexibility in covering a wide variety of prior beliefs of the experimenter (see [30,31]). Hence, the unknown parameters α, β, and θ are assumed to have independent Ga prior PDFs, i.e., α ∼ Ga(a_1,b_1), β ∼ Ga(a_2,b_2), and θ ∼ Ga(a_3,b_3). The hyper-parameters a_i, b_i > 0, i=1,2,3, represent the prior knowledge about the three parameters, and they are assumed to be non-negative and known. The hyper-parameters are fixed by using the mean and the variance of the Ga distribution (a_i = Θ_i^2/τ^2 and b_i = Θ_i/τ^2, so that the prior mean equals Θ_i); here, τ^2 = 1 is used and Θ_i denotes the initial value. Hence, the joint prior PDF of Θ becomes

π(α,β,θ) ∝ α^{a_1-1}β^{a_2-1}θ^{a_3-1}\exp\left[-(b_1α + b_2β + b_3θ)\right], \quad α,β,θ>0. (43)

Combining (43) with (35), the joint posterior distribution of α, β and θ becomes

π(α,β,θ|x) = A^{-1}\,α^{m+a_1-1}e^{-αb_1^{*}(β,θ)}\,β^{m+a_2-1}e^{-b_2β}\,θ^{a_3-1}\exp\left[-θ\left(b_3-\sum_{i=1}^{m}x_{(i)}^{-1}\right)\right]\times\prod_{i=1}^{m}\left[1-\exp\left(-ζ(x_{(i)};β,θ)\right)\right]^{-1}, (44)

where

b_1^{*}(β,θ) = b_1 - (n-m)\log\left[1-\exp\left(-ζ(x_{(m)};β,θ)\right)\right] - \sum_{i=1}^{m}\log\left[1-\exp\left(-ζ(x_{(i)};β,θ)\right)\right]

and A = \int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}L(α,β,θ|x)\,π(α,β,θ)\,dα\,dβ\,dθ denotes the normalizing constant of (44).

Then, the BE of any function of α, β, and θ, say φ(α,β,θ), under the SEL function is the posterior expectation of φ(α,β,θ), which has the form

\tilde{φ}(α,β,θ) = E\left[φ(α,β,θ)\mid x\right] = \int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}φ(α,β,θ)\,π(α,β,θ|x)\,dα\,dβ\,dθ. (45)

Based on (45), the BEs cannot be obtained in closed form. Hence, Markov chain Monte Carlo (MCMC) techniques are adopted to approximate the BEs and to construct the BCIs from (45). The Metropolis–Hastings (M-H) algorithm is a general member of the family of Markov chain (MC) simulation methods and is the most commonly used MCMC technique for drawing samples from the posterior distribution (PD) in order to calculate the Bayesian estimates of interest. Several applications of MCMC algorithms can be found in [31,32].

From the joint posterior (44), the full conditional posterior distributions (CPDs) of α, β, and θ are obtained as

π_1(α|β,θ,x) ∝ α^{m+a_1-1}e^{-αb_1^{*}(β,θ)}, (46)
π_2(β|α,θ,x) ∝ β^{m+a_2-1}e^{-b_2β}\prod_{i=1}^{m}\left[1-\exp\left(-ζ(x_{(i)};β,θ)\right)\right]^{-1} (47)

and

π_3(θ|α,β,x) ∝ θ^{a_3-1}\exp\left[-θ\left(b_3-\sum_{i=1}^{m}x_{(i)}^{-1}\right)\right]\prod_{i=1}^{m}\left[1-\exp\left(-ζ(x_{(i)};β,θ)\right)\right]^{-1}. (48)

Thus, from (46), the unknown parameter α has a Ga density with shape parameter m + a_1 and parameter b_1^{*}(β,θ), so samples of α can be generated easily using any Ga-generating routine. In addition, from (47) and (48), it can be seen that the CPDs of β and θ do not reduce to well-known distributions, so they cannot be sampled directly by standard methods. To solve this problem, Tierney [33] proposed a hybrid MCMC algorithm that combines the M-H sampler with the Gibbs sampling scheme, using normal proposal distributions. Here, this hybrid algorithm is termed M-H within Gibbs sampling: the unknown parameter α is updated using a Gibbs step, and the unknown parameters β and θ are updated using M-H steps, in order to calculate the BEs and construct the BCIs of α, β, and θ. The proposed hybrid algorithm can be carried out using the following steps.

Step 1: Start with initial values α^{(0)} = α̂, β^{(0)} = β̂, and θ^{(0)} = θ̂.

Step 2: Set J=1.

Step 3: Generate α(J) from Ga(m+a1,b1*(β,θ)).

Step 4: Generate β^{(J)} and θ^{(J)} from π_2(β|α^{(J)},θ^{(J-1)},x) and π_3(θ|α^{(J)},β^{(J)},x), respectively, using the M-H algorithm with normal proposal densities:

(a) Generate β* and θ* from N(β(J1),σβ2) and N(θ(J1),σθ2), respectively.

(b) Obtain the acceptance probabilities:

φ_1\big(β^{(J-1)},β^{*}\big) = \min\left\{1,\ \frac{π_2\big(β^{*}\mid α^{(J)},θ^{(J-1)},x\big)}{π_2\big(β^{(J-1)}\mid α^{(J)},θ^{(J-1)},x\big)}\right\} and

φ_2\big(θ^{(J-1)},θ^{*}\big) = \min\left\{1,\ \frac{π_3\big(θ^{*}\mid α^{(J)},β^{(J)},x\big)}{π_3\big(θ^{(J-1)}\mid α^{(J)},β^{(J)},x\big)}\right\}.

(c) Generate samples u1 and u2 from U(0,1).

(d) If u_1 ≤ φ_1, then set β^{(J)} = β^{*}; otherwise, set β^{(J)} = β^{(J-1)}. Similarly, if u_2 ≤ φ_2, then set θ^{(J)} = θ^{*}; otherwise, set θ^{(J)} = θ^{(J-1)}.

Step 5: Put J=J+1.

Step 6: Repeat Steps 3–5 M times to get

φ^{(J)} = \big(α^{(J)},β^{(J)},θ^{(J)}\big), \quad J=1,2,…,M.

At the beginning of the analysis, we discard the first M_0 simulated values (the burn-in period) to remove the effect of the initial values and to guarantee convergence of the sampler; the remaining samples are used to compute the BEs, with an optimal acceptance rate of about 23.4% [34]. Then, for sufficiently large M, the drawn MCMC samples of the parameters α, β, and θ, namely φ^{(J)}, J = M_0+1,…,M, can be adopted to develop the BEs. Thus, the approximate BE of φ under the SEL function takes the form

\tilde{φ} = \frac{1}{M-M_0}\sum_{J=M_0+1}^{M}φ^{(J)}.

To construct the 100(1-γ)% two-sided Bayes credible intervals (BCIs) of α, β, and θ, we order the simulated MCMC samples of φ after the burn-in as φ_{(1)} ≤ φ_{(2)} ≤ ⋯ ≤ φ_{(M-M_0)}. Hence, the 100(1-γ)% two-sided BCI of φ reduces to

\left(φ_{\left((M-M_0)γ/2\right)},\ φ_{\left((M-M_0)(1-γ/2)\right)}\right).
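For concreteness, the whole M-H within Gibbs scheme can be written compactly in R. The sketch below is our own illustration, not the authors' implementation: α is drawn from its Ga full conditional (cf. (46)), while β and θ are updated with random-walk M-H steps on the joint log-posterior assembled directly from the density (4), the survival function (5), and the Ga priors; the hyper-parameters shown correspond to prior (1) of Section 5, and the data, starting values, and proposal standard deviations are placeholders.

# M-H within Gibbs sampler for (alpha, beta, theta) under Type-II censoring and Ga priors
mh_gibbs <- function(xm, n, M = 12000, burn = 2000,
                     hyper = c(a1 = 2, b1 = 2, a2 = 1, b2 = 2, a3 = 1, b3 = 2),
                     init = c(1, 1, 1), sd_prop = c(0.1, 0.1)) {
  m <- length(xm)
  logcond_bt <- function(a, b, t) {                 # log-posterior terms that involve beta and theta
    if (b <= 0 || t <= 0) return(-Inf)
    z <- (b / t) * (exp(t / xm) - 1)
    val <- m * log(b) + sum(t / xm - z) + (a - 1) * sum(log(1 - exp(-z))) +
           a * (n - m) * log(1 - exp(-z[m])) +
           (hyper["a2"] - 1) * log(b) - hyper["b2"] * b +
           (hyper["a3"] - 1) * log(t) - hyper["b3"] * t
    if (is.finite(val)) val else -Inf
  }
  out <- matrix(NA, M, 3, dimnames = list(NULL, c("alpha", "beta", "theta")))
  a <- init[1]; b <- init[2]; t <- init[3]
  for (J in 1:M) {
    z   <- (b / t) * (exp(t / xm) - 1)
    b1s <- hyper["b1"] - (n - m) * log(1 - exp(-z[m])) - sum(log(1 - exp(-z)))
    a   <- rgamma(1, shape = m + hyper["a1"], rate = b1s)    # Gibbs step for alpha, cf. (46)
    b_new <- rnorm(1, b, sd_prop[1])                         # M-H step for beta
    if (log(runif(1)) < logcond_bt(a, b_new, t) - logcond_bt(a, b, t)) b <- b_new
    t_new <- rnorm(1, t, sd_prop[2])                         # M-H step for theta
    if (log(runif(1)) < logcond_bt(a, b, t_new) - logcond_bt(a, b, t)) t <- t_new
    out[J, ] <- c(a, b, t)
  }
  out[-(1:burn), ]                                           # discard the burn-in draws
}
draws <- mh_gibbs(xm = c(0.11, 0.30, 0.40, 0.45, 0.59, 0.63, 0.70, 0.71, 0.74, 0.77), n = 30)
colMeans(draws)                                              # approximate BEs under the SEL function
apply(draws, 2, quantile, probs = c(0.025, 0.975))           # 95% BCIs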

5. Simulation Results

To evaluate the behavior of the point and interval estimators of the EIGo parameters, we conduct a Monte Carlo simulation study. Using three sets of parameter values, namely (α,β,θ)=(0.5,0.1,0.1), (α,β,θ)=(1.0,0.5,0.5), and (α,β,θ)=(3,3,3), we simulate 1000 samples for each combination of n (total sample size) and m (effective sample size), with n=40, 60, and 80, where m is taken as a failure proportion (m/n)100% = 50, 75, and 100% for each n. Clearly, the Type-II censored samples generated with (m/n)=100% represent complete samples. In the Bayesian paradigm, the choice of the hyper-parameter values is a crucial issue. If proper prior information (PI) is not available for α, β, and θ, i.e., a_i = b_i = 0, i=1,2,3, then the joint posterior distribution (44) is proportional to the likelihood function (35). Hence, if one does not have PI on the unknown parameters, it is better to adopt the MLEs instead of the BEs because the latter are computationally much more expensive.

Here, we adopt two informative priors for each set of α, β, and θ: prior (1): (a_1,a_2,a_3)=(1.0,0.2,0.2), b_i=2, i=1,2,3, and prior (2): (a_1,a_2,a_3)=(2.5,0.5,0.5), b_i=5, i=1,2,3, when (α,β,θ)=(0.5,0.1,0.1), as well as prior (1): (a_1,a_2,a_3)=(2,1,1), b_i=2, i=1,2,3, and prior (2): (a_1,a_2,a_3)=(5.0,2.5,2.5), b_i=5, i=1,2,3, when (α,β,θ)=(1.0,0.5,0.5). The values of the hyper-parameters of α, β, and θ are determined so that the prior mean equals the corresponding true parameter value [30]. The hybrid MCMC algorithm described in Section 4.2 is adopted to generate 12,000 MCMC samples, and the first 2000 values are discarded as burn-in. Accordingly, the average Bayes MCMC estimates and 95% two-sided BCIs are calculated based on the remaining 10,000 MCMC samples.

For each setting, we compute the average estimates, φ^¯k, with their root mean squared errors (RMSEs) and RABs using the following formulae.

\bar{\hat{φ}}_k = \frac{1}{N}\sum_{i=1}^{N}\hat{φ}_k^{(i)}, \quad \mathrm{RMSE}(\hat{φ}_k) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{φ}_k^{(i)}-φ_k\right)^{2}} \quad \text{and} \quad \mathrm{RAB}(\hat{φ}_k) = \frac{1}{N}\sum_{i=1}^{N}\frac{\left|\hat{φ}_k^{(i)}-φ_k\right|}{φ_k}, \quad k=1,2,3,

where N is the number of replicates, φ^ is an estimate of φ, φ1=α, φ2=β, and φ3=θ.
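A single replicate of this design can be generated by drawing n values through the quantile function (10) and retaining the m smallest order statistics; the metrics above are then simple averages over the N replicates. The short R sketch below is illustrative only, and the estimator to be plugged in is left as a placeholder.

# One Type-II censored replicate and the summary metrics used in the tables
qeigo <- function(q, alpha, beta, theta)
  theta / log(1 - (theta / beta) * log(1 - (1 - q)^(1 / alpha)))
one_replicate <- function(n, m, alpha, beta, theta)
  sort(qeigo(runif(n), alpha, beta, theta))[1:m]         # keep the m smallest failure times
rmse <- function(est, true) sqrt(mean((est - true)^2))   # est: vector of N estimates of one parameter
rab  <- function(est, true) mean(abs(est - true) / true)
# e.g. rmse(replicate(1000, my_estimator(one_replicate(40, 30, 0.5, 0.1, 0.1))), true = 0.5),
# where my_estimator() is a placeholder for the MLE or MCMC estimator of alpha described above.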

The numerical computations are performed using the R software. The average estimates of α, β, and θ, together with their RMSEs and RABs, are reported in Table 1, Table 2 and Table 3. In addition, the average confidence lengths (ACLs) of the 95% asymptotic CIs and BCIs of α, β, and θ are summarized in Table 4.

Table 1.

The average estimates of α and their respective RMSEs and RABs in parentheses.

(α,β,θ) n m MLE MCMC
Prior (1) Prior (2)
(0.5,0.1,0.1) 40 40 0.5175 (0.1327,0.1993) 0.5013 (0.0856,0.1338) 0.4884 (0.0728,0.1164)
30 0.4516 (0.1910,0.3075) 0.4096 (0.1456,0.2435) 0.4150 (0.1356,0.2265)
20 0.4109 (0.3225,0.4847) 0.2941 (0.2414,0.4326) 0.3505 (0.2135,0.3660)
60 60 0.5149 (0.1099,0.1693) 0.4888 (0.0618,0.0922) 0.4954 (0.0607,0.0973)
45 0.4356 (0.1776,0.2799) 0.4034 (0.1391,0.2321) 0.4112 (0.1343,0.2214)
30 0.4010 (0.2607,0.4204) 0.2959 (0.2366,0.4219) 0.3217 (0.2214,0.3847)
80 80 0.5086 (0.0892,0.1373) 0.5117 (0.0592,0.0919) 0.5048 (0.0572,0.0901)
60 0.4369 (0.1671,0.2445) 0.4015 (0.1368,0.2264) 0.4034 (0.1346,0.2221)
40 0.3924 (0.2393,0.3845) 0.3129 (0.2263,0.3957) 0.3232 (0.2198,0.3791)
(1.0,0.5,0.5) 40 40 1.0778 (0.3812,0.2744) 1.0787 (0.1936,0.1471) 1.0251 (0.1533,0.1191)
30 1.0040 (0.6560,0.4307) 0.7944 (0.2908,0.2449) 0.8300 (0.2613,0.2162)
20 1.2427 (1.8647,0.7604) 0.7024 (0.4310,0.3690) 0.6739 (0.4149,0.3573)
60 60 1.0423 (0.2756,0.2103) 1.0358 (0.1384,0.1074) 1.0081 (0.1224,0.0963)
45 0.9583 (0.4883,0.3602) 0.8395 (0.2659,0.2175) 0.8252 (0.2587,0.2117)
30 0.9968 (0.7368,0.5038) 0.6790 (0.4301,0.3682) 0.6804 (0.4167,0.3547)
80 80 1.0356 (0.2400,0.1773) 1.0170 (0.1138,0.0890) 1.0178 (0.1098,0.0864)
60 0.9068 (0.3824,0.2994) 0.8297 (0.2577,0.2099) 0.8247 (0.2544,0.2073)
40 0.9375 (0.5979,0.4590) 0.6725 (0.4279,0.3646) 0.6710 (0.4196,0.3572)

Table 2.

The average estimates of β and their respective RMSEs and RABs in parentheses.

(α,β,θ) n m MLE MCMC
Prior (1) Prior (2)
(0.5,0.1,0.1) 40 40 0.1032 (0.0576,0.4315) 0.0918 (0.0167,0.1389) 0.0894 (0.0127,0.1077)
30 0.1236 (0.0869,0.6335) 0.0951 (0.0171,0.1404) 0.1050 (0.0138,0.1178)
20 0.1651 (0.1664,1.1435) 0.0922 (0.0267,0.2333) 0.1143 (0.0230,0.1884)
60 60 0.1032 (0.0463,0.3600) 0.0908 (0.0094,0.0918) 0.0935 (0.0069,0.0653)
45 0.1205 (0.0759,0.5598) 0.0997 (0.0116,0.0927) 0.1034 (0.0106,0.0772)
30 0.1576 (0.1349,0.9860) 0.0899 (0.0185,0.1541) 0.1070 (0.0131,0.1052)
80 80 0.0999 (0.0377,0.2949) 0.1018 (0.0034,0.0248) 0.0997 (0.0054,0.0465)
60 0.1173 (0.0645,0.5038) 0.1002 (0.0073,0.0582) 0.1026 (0.0060,0.0510)
40 0.1573 (0.1213,0.9165) 0.1064 (0.0146,0.1099) 0.1096 (0.0111,0.0964)
(1.0,0.5,0.5) 40 40 0.5255 (0.2675,0.4154) 0.5263 (0.0329,0.0532) 0.4938(0.0204,0.0365)
30 0.6253 (0.4660,0.6616) 0.4578 (0.0459,0.0843) 0.5156 (0.0234,0.0350)
20 0.9423 (0.9660,1.2917) 0.5326 (0.0518,0.0764) 0.5254 (0.0333,0.0551)
60 60 0.5158 (0.2204,0.3440) 0.5124 (0.0136,0.0248) 0.4939 (0.0109,0.0161)
45 0.6043 (0.3689,0.5509) 0.5225 (0.0323,0.0517) 0.5193 (0.0223,0.0402)
30 0.8500 (0.7219,1.0446) 0.5405 (0.0454,0.0810) 0.5331 (0.0352,0.0663)
80 80 0.5118 (0.1879,0.2942) 0.5059 (0.0063,0.0119) 0.5047 (0.0061,0.0109)
60 0.5871 (0.3089,0.4833) 0.5219 (0.0241,0.0438) 0.5180 (0.0197,0.0360)
40 0.8212 (0.6443,0.9580) 0.5429 (0.0451,0.0857) 0.5406 (0.0436,0.0812)

Table 3.

The average estimates of θ and their respective RMSEs and RABs in parentheses.

(α,β,θ) n m MLE MCMC
Prior (1) Prior (2)
(0.5,0.1,0.1) 40 40 0.1260 (0.0830,0.6041) 0.1195 (0.0260,0.2072) 0.0920 (0.0098,0.0809)
30 0.1275 (0.1167,0.7961) 0.1358 (0.0417,0.3606) 0.0940 (0.0111,0.0903)
20 0.1464 (0.2073,1.3049) 0.0845 (0.0480,0.4256) 0.1240 (0.0284,0.2452)
60 60 0.1164 (0.0616,0.4733) 0.0942 (0.0064,0.0583) 0.1013 (0.0026,0.0220)
45 0.1154 (0.0866,0.6312) 0.1070 (0.0142,0.1008) 0.1081 (0.0118,0.0953)
30 0.1224 (0.1576,1.0183) 0.1009 (0.0217,0.1749) 0.0915 (0.0189,0.1452)
80 80 0.1142 (0.0512,0.3945) 0.1045 (0.0053,0.0454) 0.1007 (0.0023,0.0192)
60 0.1124 (0.0826,0.5780) 0.1067 (0.0089,0.0767) 0.0951 (0.0073,0.0567)
40 0.1047 (0.1233,0.8588) 0.0908 (0.0195,0.1624) 0.1073 (0.0117,0.0864)
(1.0,0.5,0.5) 40 40 0.5831 (0.3268,0.4871) 0.5626 (0.0647,0.1253) 0.5420 (0.0438,0.0840)
30 0.6044 (0.5502,0.7128) 0.5832 (0.0863,0.1663) 0.4607 (0.0577,0.0917)
20 0.5877 (0.8856,1.1044) 0.5958 (0.1005,0.1916) 0.4621 (0.0628,0.1061)
60 60 0.5564 (0.2585,0.3954) 0.5169 (0.0176,0.0338) 0.5057 (0.0117,0.0170)
45 0.5493 (0.3978,0.5602) 0.5270 (0.0405,0.0649) 0.4786 (0.0273,0.0466)
30 0.5250 (0.6820,0.8798) 0.5609 (0.0642,0.1217) 0.5461 (0.0494,0.0923)
80 80 0.5427 (0.2162,0.3345) 0.4927 (0.0082,0.0147) 0.5049 (0.0059,0.0100)
60 0.5247 (0.3322,0.4831) 0.5110 (0.0141,0.0237) 0.4892 (0.0131,0.0221)
40 0.4866 (0.5702,0.7752) 0.5479 (0.0517,0.0959) 0.5171 (0.0192,0.0343)

Table 4.

The ACLs for 95% ACIs/BCIs of α, β, and θ.

(α,β,θ) n m ACI: α ACI: β ACI: θ BCI: α Prior (1) BCI: α Prior (2) BCI: β Prior (1) BCI: β Prior (2) BCI: θ Prior (1) BCI: θ Prior (2)
(0.5,0.1,0.1) 40 40 0.3485 0.1039 0.0039 0.3352 0.2808 0.0594 0.0215 0.0639 0.0173
30 0.3807 0.1115 0.0031 0.4424 0.4216 0.0625 0.0467 0.0783 0.0333
20 0.4208 0.1444 0.0118 0.4760 0.5772 0.0774 0.0696 0.1437 0.0581
60 60 0.2646 0.0577 0.0019 0.2387 0.2347 0.0092 0.0077 0.0082 0.0088
45 0.3091 0.1168 0.0033 0.3912 0.3890 0.0463 0.0400 0.0475 0.0321
30 0.3393 0.1537 0.0053 0.4458 0.4907 0.0632 0.0452 0.0828 0.0695
80 80 0.2213 0.0294 0.0008 0.2269 0.2228 0.0117 0.0174 0.0103 0.0087
60 0.2572 0.0918 0.0023 0.3736 0.3674 0.0284 0.0201 0.0229 0.0201
40 0.2689 0.1219 0.0054 0.4756 0.4753 0.0526 0.0207 0.0676 0.0309
(1.0,0.5,0.5) 40 40 0.6913 0.3121 0.0153 0.6838 0.5868 0.0766 0.0673 0.0639 0.0506
30 0.9376 0.5906 0.0391 0.8035 0.7754 0.0657 0.0670 0.0931 0.1271
20 1.5049 0.7474 0.0476 1.1633 0.9696 0.1527 0.0868 0.1097 0.1546
60 60 0.5241 0.1494 0.0051 0.5250 0.4800 0.0202 0.0296 0.0177 0.0361
45 0.7699 0.5038 0.0213 0.8334 0.7452 0.0887 0.0457 0.0818 0.0631
30 0.8646 0.5170 0.0182 1.0647 0.9860 0.0740 0.0481 0.0791 0.0753
80 80 0.4425 0.1082 0.0041 0.4426 0.4213 0.0089 0.0148 0.0114 0.0106
60 0.5357 0.3158 0.0094 0.7461 0.7131 0.0371 0.0248 0.0292 0.0243
40 0.7310 0.4141 0.0098 1.0136 0.9521 0.0512 0.0514 0.0608 0.0342

From Table 1, Table 2 and Table 3, it can be seen that the proposed estimates of the parameters α, β, and θ perform very well in terms of their RMSEs and RABs. Further, as n or m increases, the performance of the estimates improves. Moreover, the point estimates become even better as the failure proportion m/n increases. Finally, the Bayes MCMC estimates based on the informative Ga priors perform better than the frequentist estimates in terms of their RMSEs and RABs because they incorporate prior information. Generally, we conclude that the BEs based on prior (2) performed better than those based on prior (1) in terms of minimum RABs, RMSEs, and ACLs. This is due to the fact that the variance of prior (2) is lower than the variance of prior (1), and both are more informative than the improper prior with a_i = b_i = 0, i=1,2,3.

Furthermore, the ACLs of the asymptotic CIs narrow as n and m increase. In addition, the BCIs perform better than the asymptotic intervals, in the sense of having the shortest ACLs, due to the Ga prior information. Moreover, when the true values of α, β, and θ increase, the associated RMSEs, RABs, and ACLs of all proposed estimates increase. Finally, we recommend the Bayesian MCMC estimation of the parameters of the EIGo distribution using the hybrid Gibbs within M-H sampler.

6. Real-Life Applications

The importance and flexibility of the EIGo model are illustrated empirically by analyzing two real data sets from engineering science. The first data set consists of the numbers of cycles to failure of 25 yarn specimens (100 cm long) tested at a certain strain level [29,35]. The data are: 20, 15, 61, 38, 98, 42, 86, 76, 146, 121, 157, 149, 175, 180, 176, 180, 220, 198, 224, 264, 251, 282, 325, 321, 653. The second data set gives the times between failures for repairable mechanical equipment items [36]. The data are: 0.11, 0.30, 0.40, 0.45, 0.59, 0.63, 0.70, 0.71, 0.74, 0.77, 0.94, 1.06, 1.17, 1.23, 1.23, 1.24, 1.43, 1.46, 1.49, 1.74, 1.82, 1.86, 1.97, 2.23, 2.37, 2.46, 2.63, 3.46, 4.36, 4.73.

The EIGo distribution is compared with some competing distributions such as the IGo, IE [21], GIE [20], inverse-Weibull (IW) [37], inverse gamma (IGa) [38], generalized inverse-Weibull (GIW) [39], exponentiated inverted-Weibull (EIW) [40], generalized inverted half-logistic (GIHL) [41], inverted-Kumaraswamy (IK) [42], inverted Nadarajah–Haghighi (INH) [43], and alpha-power inverse-Weibull (APIW) [44] distributions. The corresponding PDFs of the competing models (for x>0) are written in Table 5.

Table 5.

Some competing inverted models of the EIGo distribution.

Model PDF
IE f(x) = θx^{-2}\exp(-θ/x)
IW f(x) = βθx^{-(β+1)}\exp(-θx^{-β})
GIE f(x) = βθx^{-2}\exp(-θ/x)\left[1-\exp(-θ/x)\right]^{β-1}
IGa f(x) = \frac{θ^{β}}{Γ(β)}x^{-(β+1)}\exp(-θ/x)
GIW f(x) = αβθ^{β}x^{-(β+1)}\exp\left(-α(θ/x)^{β}\right)
EIW f(x) = βθx^{-(β+1)}\left[\exp(-x^{-β})\right]^{θ}
GIHL f(x) = \frac{2β}{θx^{2}}e^{-(θx)^{-1}}\left[1-e^{-(θx)^{-1}}\right]^{β-1}\left[1+e^{-(θx)^{-1}}\right]^{-(β+1)}
IK f(x) = αβ(1+x)^{-(β+1)}\left[1-(1+x)^{-β}\right]^{α-1}
INH f(x) = βθx^{-2}\left(1+θ/x\right)^{β-1}\exp\left[1-\left(1+θ/x\right)^{β}\right]
APIW f(x) = \frac{αβ\log(θ)}{θ-1}x^{-(α+1)}\exp\left(-βx^{-α}\right)θ^{\exp\left(-βx^{-α}\right)}

Moreover, to check the validity of the EIGo model along with other competing models, we employed several goodness-of-fit measures as listed in Table 6.

Table 6.

Some useful criteria for model selection.

Measure or Criterion (C) Abbreviation
negative log-likelihood NCL
Akaike information AIC
consistent Akaike information CAIC
Bayesian information BIC
Hannan-Quinn information HQIC
Kolmogorov-Smirnov K-S
Anderson-Darling A-D
Cramér von Mises CvM
K-S p-value p-value

The R software and ML approach are adopted to estimate the parameters of the considered distributions and also to evaluate the goodness-of-fit measures. The calculated values of the ML estimates of the model parameters with their standard errors (SEs) and corresponding selection measures, for both data sets, are provided in Table 7 and Table 8, respectively. Moreover, Figure 3 and Figure 4 show graphically the quantile–quantile (Q–Q) plots of all competitive distributions for both datasets.
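To indicate how such a comparison can be reproduced, the following R sketch fits the EIGo model to the first (yarn) data set by maximum likelihood and computes the NCL, AIC, BIC, and K-S statistic. It is our own illustration: the helper names, starting values, and the direct use of (3) and (4) are assumptions, and the figures reported in Table 7 and Table 8 are those obtained by the authors.

# Fitting the EIGo model to the yarn data and computing a few selection criteria
yarn <- c(20, 15, 61, 38, 98, 42, 86, 76, 146, 121, 157, 149, 175, 180, 176, 180,
          220, 198, 224, 264, 251, 282, 325, 321, 653)
peigo <- function(x, a, b, t) { z <- (b / t) * (exp(t / x) - 1); 1 - (1 - exp(-z))^a }
nll <- function(p, x) {                       # complete-sample negative log-likelihood from (4)
  a <- p[1]; b <- p[2]; t <- p[3]
  if (any(p <= 0)) return(1e10)
  z  <- (b / t) * (exp(t / x) - 1)
  ll <- sum(log(a * b) - 2 * log(x) + t / x - z + (a - 1) * log(1 - exp(-z)))
  if (is.finite(ll)) -ll else 1e10
}
fit <- optim(c(5, 400, 70), nll, x = yarn)    # starting values near the Table 7 estimates
k <- 3; n <- length(yarn)
c(NCL = fit$value, AIC = 2 * k + 2 * fit$value, BIC = k * log(n) + 2 * fit$value)
ks.test(yarn, function(q) peigo(q, fit$par[1], fit$par[2], fit$par[3]))  # ties in the data trigger a warning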

Table 7.

The estimates, SEs, and selection measures of the EIGo distribution and other competing models for first data.

Model Estimates (SEs) Statistics
α β θ NCL AIC CAIC BIC HQIC K-S (p-Value) A-D CvM
EIGo 5.8897 (3.3256) 418.743 (151.19) 70.375 (25.031) 155.330 316.659 317.802 320.316 317.674 0.114 (0.901) 0.4141 0.0696
GIW 1.0110 (0.1412) 6.8390 (92.567) 12.370 (169.25) 158.579 323.158 324.301 326.815 324.172 0.211 (0.215) 1.5866 0.2803
APIW 39.651 (76.857) 1.3072 (0.1886) 115.74 (104.40) 156.569 319.137 320.280 322.794 320.151 0.189 (0.336) 1.2512 0.2226
IE - - 82.841 (16.568) 158.582 319.841 319.338 320.383 319.205 0.207 (0.234) 1.5774 0.2787
IK 1.0282 (0.1472) 94.040 (57.964) - 158.484 320.967 321.513 323.405 321.643 0.213 (0.208) 1.5734 0.2780
IW - 1.0118 (0.1414) 86.718 (50.775) 158.579 321.158 321.703 323.596 321.834 0.211 (0.215) 1.5873 0.2804
IGa - 1.2166 (0.3081) 100.65 (31.349) 158.294 320.588 321.133 323.025 321.264 0.241 (0.110) 1.5801 0.2791
IGo - 101.08 (26.705) 14.535 (14.283) 158.001 320.003 320.548 322.440 320.679 0.222 (0.169) 1.2406 0.2203
INH - 0.7552 (0.2303) 134.21 (76.920) 158.208 320.416 320.962 322.854 321.093 0.218 (0.186) 1.3574 0.2400
GIE - 1.3462 (0.3991) 100.68 (26.199) 158.090 320.181 320.726 322.618 320.857 0.249 (0.091) 1.5393 0.2720
EIW - 86.324 (50.423) 1.0108 (0.1411) 158.579 321.158 321.703 323.596 321.834 0.211 (0.216) 1.5864 0.2802
GIHL - 0.9758 (0.2592) 0.0089 (0.0020) 160.087 324.175 324.720 326.612 324.851 0.263 (0.062) 1.8631 0.3307

Table 8.

The estimates, SEs, and selection measures of the EIGo distribution and other competing models for second data.

Model Estimates (SEs) Statistics
α β θ NCL AIC CAIC BIC HQIC K-S (p-Value) A-D CvM
EIGo 3.5359 (1.4251) 2.3986 (0.7152) 0.3749 (0.1605) 40.768 87.536 88.459 91.739 88.880 0.089 (0.971) 0.1762 0.0286
GIW 1.0730 (0.1314) 0.0761 (0.8851) 11.920 (148.78) 46.376 98.751 99.674 102.95 100.10 0.134(0.656) 1.0815 0.1634
APIW 99.979 (157.11) 1.4079 (0.1745) 0.1922 (0.0751) 43.188 92.376 93.300 96.580 93.721 0.113 (0.836) 0.6445 0.0977
IE - - 0.7932 (0.1448) 46.533 95.066 95.209 96.467 95.514 0.160 (0.423) 1.0040 0.1509
IK 2.4609 (0.4214) 4.1716 (1.2783) - 41.238 86.476 86.921 89.279 87.373 0.111 (0.852) 0.3324 0.0495
IW - 1.0730 (0.1314) 0.7518 (0.1570) 46.376 96.751 97.196 99.554 97.648 0.134 (0.656) 1.0814 0.1634
IGa - 1.4211 (0.3327) 1.1272 (0.3153) 45.507 95.015 95.459 97.817 95.911 0.158 (0.445) 1.0080 0.1516
IGo - 0.9290 (0.2107) 0.1091 (0.1068) 45.920 95.839 96.284 98.642 96.737 0.192 (0.216) 0.6357 0.0956
INH - 0.8517 (0.2346) 1.0344 (0.5127) 46.370 95.740 97.185 99.543 97.637 0.179 (0.295) 0.8525 0.1270
GIE - 1.6681 (0.4724) 1.0975 (0.2480) 44.966 93.931 94.376 96.734 94.828 0.163 (0.401) 0.9099 0.1361
EIW - 0.7518 (0.1570) 1.0730 (0.1314) 46.376 96.751 97.196 99.554 97.648 0.134 (0.656) 1.0814 0.1634
GIHL - 1.2154 (0.3164) 0.7959 (0.1626) 46.828 97.657 98.101 100.46 98.553 0.179 (0.291) 1.1995 0.1855

Figure 3. The Q–Q plots of the EIGo distribution and its competing models for the first data set.

Figure 4. The Q–Q plots of the EIGo distribution and its competing models for the second data set.

Among all fitted competitive models, Table 7 and Table 8 show that the EIGo distribution has the lowest values of NCL, AIC, CAIC, BIC, HQIC, K-S, A-D, and CvM and the highest K-S p-value. Consequently, the EIGo distribution provides a better fit for the given data sets than the IGo and the other inverted distributions. Furthermore, the relative histograms of both data sets with the fitted densities, as well as the plots of the fitted and empirical survival functions (SFs), are displayed in Figure 5 and Figure 6, respectively. The graphical presentations in Figure 3, Figure 4, Figure 5 and Figure 6 support the numerical findings.

Figure 5. The relative histogram and fitted densities of competing models (left) and the fitted and empirical SFs (right) for the first data set.

Figure 6. The relative histogram and fitted densities of competing models (a) and the fitted and empirical SFs (b) for the second data set.

7. Conclusions

In this paper, we have proposed a new three-parameter model called the extended inverse-Gompertz (EIGo) distribution. The EIGo model generalizes some well-known models such as the inverted-exponential, generalized inverted-exponential, and inverse-Gompertz distributions. Various statistical and reliability properties of the EIGo distribution have been addressed. The EIGo parameters have been estimated by the maximum-likelihood and Bayesian approaches under Type-II censoring. The performances of the maximum likelihood and Bayesian estimators have been examined by detailed simulation results. Based on our study, we recommend the Bayesian MCMC estimation of the parameters of the EIGo distribution using the hybrid Gibbs within M-H algorithm sampler. Finally, two real-life engineering data sets have been analyzed to illustrate the applicability of the EIGo distribution as compared with other competing models. The EIGo model provides an adequate and improved fit with respect to its competing inverted models. The failure rate of the EIGo model can only be upside-down-bathtub-shaped. Hence, for future works, the authors suggest that other extensions of the inverse-Gompertz distribution be proposed that may provide all important shapes for the hazard rate including increasing, bathtub, decreasing, and unimodal shapes.

Acknowledgments

The authors would like to thank the Editorial Board and the anonymous reviewers for their constructive comments and suggestions that improved the final version of the article.

Author Contributions

Conceptualization, A.E. and A.Z.A.; Funding acquisition, H.M.A.; Investigation, A.E.; Methodology, A.E., H.M.A. and A.Z.A.; Project administration, A.Z.A.; Resources, H.M.A.; Software, A.E.; Writing—original draft, A.E.; Writing—review and editing, H.M.A. and A.Z.A. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by Taif University Researchers Supporting Project number (TURSP-2020/279), Taif University, Taif, Saudi Arabia.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Gompertz B. On the nature of the function expressive of the law of human mortality, and on a new mode of determining the value of life contingencies. In A letter to Francis Baily, Esq. FRS & C. Philos. Trans. R. Soc. Lond. 1825;115:513–583. doi: 10.1098/rstb.2014.0379. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.El-Gohary A., Alshamrani A., Al-Otaibi A.N. The generalized Gompertz distribution. Appl. Math. Model. 2013;37:13–24. doi: 10.1016/j.apm.2011.05.017. [DOI] [Google Scholar]
  • 3.Jafari A.A., Tahmasebi S., Alizadeh M. The beta-Gompertz distribution. Rev. Colomb. Estadística. 2014;37:141–158. doi: 10.15446/rce.v37n1.44363. [DOI] [Google Scholar]
  • 4.Khan M.S., Robert K., Irene L.H. Transmuted Gompertz distribution: Properties and estimation. Pak. J. Stat. 2016;32:161–182. [Google Scholar]
  • 5.Roozegar R., Tahmasebi S., Jafari A.A. The McDonald Gompertz distribution: Properties and applications. Commun. Stat.-Simul. Comput. 2017;46:3341–3355. doi: 10.1080/03610918.2015.1088024. [DOI] [Google Scholar]
  • 6.El-Bassiouny A.H., El-Damcese M., Mustafa A., Eliwa M.S. Exponentiated generalized Weibull-Gompertz distribution with application in survival analysis. J. Stat. Appl. Probab. 2017;6:7–16. doi: 10.18576/jsap/060102. [DOI] [Google Scholar]
  • 7.Mazucheli J., Menezes A.F., Dey S. Unit-Gompertz distribution with applications. Statistica. 2019;79:25–43. [Google Scholar]
  • 8.Ieren T.G., Kromtit F.M., Agbor B.U., Eraikhuemen I.B., Koleoso P.O. A power Gompertz distribution: Model, properties and application to bladder cancer data. Asian Res. J. Math. 2019;15:1–14. doi: 10.9734/arjom/2019/v15i230146. [DOI] [Google Scholar]
  • 9.Hoseinzadeh A., Maleki M., Khodadadi Z., Contreras-Reyes J.E. The skew-reflected-Gompertz distribution for analyzing symmetric and asymmetric data. J. Comput. Appl. Math. 2019;349:132–141. doi: 10.1016/j.cam.2018.09.011. [DOI] [Google Scholar]
  • 10.Nzei L.C., Eghwerido J.T., Ekhosuehi N. Topp-Leone Gompertz Distribution: Properties and Applications. J. Data Sci. 2020;18:782–794. doi: 10.6339/JDS.202010_18(4).0012. [DOI] [Google Scholar]
  • 11.Eghwerido J.T., Nzei L.C., Agu F.I. The alpha power Gompertz distribution: Characterization, properties, and applications. Sankhya A. 2021;83:449–475. doi: 10.1007/s13171-020-00198-0. [DOI] [Google Scholar]
  • 12.Wu J.W., Hung W.L., Tsai C.H. Estimation of parameters of the Gompertz distribution using the least squares method. Appl. Math. Comput. 2004;158:133–147. doi: 10.1016/j.amc.2003.08.086. [DOI] [Google Scholar]
  • 13.Soliman A.A., Abd-Ellah A.H., Abou-Elheggag N.A., Abd-Elmougod G.A. Estimation of the parameters of life for Gompertz distribution using progressive first-failure censored data. Comput. Stat. Data Anal. 2012;56:2471–2485. doi: 10.1016/j.csda.2012.01.025. [DOI] [Google Scholar]
  • 14.Dey S., Moala F.A., Kumar D. Statistical properties and different methods of estimation of Gompertz distribution with application. J. Stat. Manag. Syst. 2018;21:839–876. doi: 10.1080/09720510.2018.1450197. [DOI] [Google Scholar]
  • 15.Louzada F., Ramos P.L., Nascimento D. The inverse Nakagami-m distribution: A novel approach in reliability. IEEE Trans. Reliab. 2018;67:1030–1042. doi: 10.1109/TR.2018.2829721. [DOI] [Google Scholar]
  • 16.Ramos P.L., Louzada F., Shimizu T.K., Luiz A.O. The inverse weighted Lindley distribution: Properties, estimation, and an application on a failure time data. Commun. Stat.-Theory Methods. 2019;48:2372–2389. doi: 10.1080/03610926.2018.1465084. [DOI] [Google Scholar]
  • 17.Afify A.Z., Ahmed S., Nassar M. A new inverse Weibull distribution: Properties, classical and Bayesian estimation with applications. Kuwait J. Sci. 2021;48:1–10. doi: 10.48129/kjs.v48i3.9896. [DOI] [Google Scholar]
  • 18.Eliwa M.S., El-Morshedy M., Ibrahim M. Inverse Gompertz distribution: Properties and different estimation methods with application to complete and censored data. Ann. Data Sci. 2019;6:321–339. doi: 10.1007/s40745-018-0173-0. [DOI] [Google Scholar]
  • 19.Nadarajah S., Kotz S. The exponentiated type distributions. Acta Appl. Math. 2006;92:97–111. doi: 10.1007/s10440-006-9055-0. [DOI] [Google Scholar]
  • 20.Abouammoh A.M., Alshingiti A.M. Reliability estimation of generalized inverted exponential distribution. J. Stat. Comput. Simul. 2009;79:1301–1315. doi: 10.1080/00949650802261095. [DOI] [Google Scholar]
  • 21.Keller A.Z., Kamath A.R.R., Perera U.D. Reliability analysis of CNC machine tools. Reliab. Eng. 1982;3:449–473. doi: 10.1016/0143-8174(82)90036-1. [DOI] [Google Scholar]
  • 22.Jeong J.H. Statistical Inference on Residual Life. Springer; New York, NY, USA: 2014. [Google Scholar]
  • 23.Kayid M., Izadkhah S. Mean inactivity time function, associated orderings, and classes of life distributions. IEEE Trans. Reliab. 2014;63:593–602. doi: 10.1109/TR.2014.2315954. [DOI] [Google Scholar]
  • 24.Johnson R.A. Stress-Strength Models for Reliability. In: Krishnaiah P.R., Rao C.R., editors. Handbook of Statistics. Volume 7. Elsevier; Amsterdam, The Netherlands: 1988. pp. 27–54. [Google Scholar]
  • 25.Greenwood J.A., Landwehr J.M., Matalas N.C., Wallis J.R. Probability weighted moments: Definition and relation to parameters of several distributions expressable in inverse form. Water Resour. Res. 1979;15:1049–1054. doi: 10.1029/WR015i005p01049. [DOI] [Google Scholar]
  • 26.Lazo A.V., Rathie P. On the entropy of continuous probability distributions. IEEE Trans. Inf. Theory. 1978;24:120–122. doi: 10.1109/TIT.1978.1055832. [DOI] [Google Scholar]
  • 27.David H.A., Nagaraja H.N. Order Statistics. John Wiley & Sons; Hoboken, NJ, USA: 2004. [Google Scholar]
  • 28.Shaked M., Shanthikumar J.G. Stochastic Orders and Their Applications. Academic Press; Boston, MA, USA: 1994. [Google Scholar]
  • 29.Lawless J.F. Statistical Models and Methods For Lifetime Data. 2nd ed. John Wiley & Sons; Hoboken, NJ, USA: 2003. [Google Scholar]
  • 30.Kundu D. Bayesian inference and life testing plan for the Weibull distribution in presence of progressive censoring. Technometrics. 2008;50:144–154. doi: 10.1198/004017008000000217. [DOI] [Google Scholar]
  • 31.Gelman A., Carlin J.B., Stern H.S., Dunson D.B., Vehtari A., Rubin D.B. Bayesian Data Analysis. 2nd ed. Chapman and Hall/CRC; Boca Raton, FL, USA: 2004. [Google Scholar]
  • 32.Lynch S.M. Introduction to Applied Bayesian Statistics and Estimation for Social Scientists. Springer; New York, NY, USA: 2007. [Google Scholar]
  • 33.Tierney L. Markov chains for exploring posterior distributions. Ann. Stat. 1994;22:1701–1728. doi: 10.1214/aos/1176325750. [DOI] [Google Scholar]
  • 34.Roberts G.O., Gelman A., Gilks W.R. Weak convergence and optimal scaling of random walk Metropolis algorithms. Ann. Appl. Probab. 1997;7:110–120. [Google Scholar]
  • 35.Elshahhat A., Elemary B.R. Analysis for Xgamma parameters of life under Type-II adaptive progressively hybrid censoring with applications in engineering and chemistry. Symmetry. 2021;13:2112. doi: 10.3390/sym13112112. [DOI] [Google Scholar]
  • 36.Murthy D.N.P., Xie M., Jiang R. Weibull Models. Wiley; Hoboken, NJ, USA: 2004. (Wiley Series in Probability and Statistics). [Google Scholar]
  • 37.Keller A.Z., Goblin M.T., Farnworth N.R. Reliability analysis of commercial vehicle engines. Reliab. Eng. 1985;10:15–25. doi: 10.1016/0143-8174(85)90039-3. [DOI] [Google Scholar]
  • 38.Glen A.G. On the inverse gamma as a survival distribution. J. Qual. Technol. 2011;43:158–166. doi: 10.1080/00224065.2011.11917853. [DOI] [Google Scholar]
  • 39.Gusmão F.R., Ortega E.M., Cordeiro G.M. The generalized inverse Weibull distribution. Stat. Pap. 2011;52:591–619. doi: 10.1007/s00362-009-0271-3. [DOI] [Google Scholar]
  • 40.Flaih A., Elsalloukh H., Mendi E., Milanova M. The exponentiated inverted Weibull distribution. Appl. Math. Inf. Sci. 2012;6:167–171. [Google Scholar]
  • 41.Potdar K.G., Shirke D.T. Inference for the parameters of generalized inverted family of distributions. ProbStat Forum. 2013;6:18–28. [Google Scholar]
  • 42.Abd AL-Fattah A.M., El-Helbawy A.A., Al-Dayian G.R. Inverted Kumaraswamy distribution: Properties and estimation. Pak. J. Stat. 2017;33:37–61. [Google Scholar]
  • 43.Tahir M.H., Cordeiro G.M., Ali S., Dey S., Manzoor A. The inverted Nadarajah–Haghighi distribution: Estimation methods and applications. J. Stat. Comput. Simul. 2018;88:2775–2798. doi: 10.1080/00949655.2018.1487441. [DOI] [Google Scholar]
  • 44.Basheer A.M. Alpha power inverse Weibull distribution with reliability application. J. Taibah Univ. Sci. 2019;13:423–432. doi: 10.1080/16583655.2019.1588488. [DOI] [Google Scholar]
