ABSTRACT
This paper deals with the statistical inference of the unknown parameters of the three-parameter exponentiated power Lindley distribution under adaptive progressive type-II censored samples. The maximum likelihood estimators (MLEs) cannot be expressed explicitly, hence approximate MLEs are obtained using the Newton–Raphson method. Bayesian estimation is studied, and the Markov Chain Monte Carlo method is used for computing the Bayes estimates. For Bayesian estimation, we consider two loss functions, namely the squared error and linear exponential (LINEX) loss functions; furthermore, we construct asymptotic confidence intervals and credible intervals for the unknown parameters. A comparison between the Bayes estimates and the MLEs is carried out using a simulation analysis, and we apply an optimality criterion to some suggested censoring schemes by minimizing the bias and mean square error of the point estimates of the parameters. Finally, a real data example is used to illustrate the goodness of fit of this model.
KEYWORDS: Exponentiated power Lindley, adaptive progressive type-II censoring, Bayesian estimation, maximum likelihood estimator, Markov chain Monte Carlo, simulation
1. Introduction
The Lindley distribution has recently been used in many lifetime data analyses. It has many fields of application, including biology, engineering, and medicine [13]. The Lindley distribution is a mixture of the exponential distribution with parameter θ and the gamma distribution with parameters 2 and θ. Although the survival function of the gamma distribution cannot in general be expressed in closed form, and the hazard rate of the exponential distribution is constant, the Lindley distribution has the advantage that its hazard rate is increasing.
The Lindley distribution has several generalizations, and their applications in engineering, reliability and other fields of science have been introduced by several authors; one may refer to [1,2,10,14,18,21,25,29–31] and the references therein. Several authors have used the Lindley distribution to model lifetime data, see, e.g. [24], who showed that this distribution is preferable when studying reliability and stress–strength models. However, there are situations in real lifetime modeling where the Lindley distribution may not serve well or can be inappropriate from either a theoretical or a practical point of view; for example, when data are skewed to the left, symmetric, or have a unimodal shape, the Lindley distribution cannot be used to model them. Also, since the hazard rate of the Lindley distribution is increasing, it does not fit bathtub-shaped failure rate data. Hence, it is necessary to have new and modified distributions that are more flexible and can be used for data modeling in such cases. One important and efficient generalization is the exponentiated power Lindley distribution (EPLD), which was introduced by Ashour and Eltehiwy [6]. They reported that EPLD is important as it includes many sub-models as special cases. It also offers more flexibility for analyzing complex data sets that cannot be explained in a simple algebraic way; such data may arise from periodic or recurrent phenomena and may contain extreme values. The EPLD has a flexible failure rate and behaves better than many other generalizations of the Lindley distribution. For this reason, EPLD was our choice for the lifetime of components under the adaptive progressive type-II censoring scheme considered in this paper.
The probability density function (pdf) of EPLD is given by
f(x; \theta, \beta, \alpha) = \frac{\alpha\beta\theta^{2}}{\theta+1}\,(1+x^{\beta})\,x^{\beta-1}e^{-\theta x^{\beta}}\left[1-\left(1+\frac{\theta x^{\beta}}{\theta+1}\right)e^{-\theta x^{\beta}}\right]^{\alpha-1},   (1)
where x > 0. The scale parameter is θ and the shape parameters are β and α, with θ, β, α > 0. The cumulative distribution function (cdf) of EPLD is given by
F(x; \theta, \beta, \alpha) = \left[1-\left(1+\frac{\theta x^{\beta}}{\theta+1}\right)e^{-\theta x^{\beta}}\right]^{\alpha}, \qquad x > 0.   (2)
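As a concrete illustration, a minimal R sketch of the density in Equation (1) and the cdf in Equation (2) is given below; the function names depld and pepld are our own labels and are not part of any existing package.

```r
# EPLD density of Equation (1): theta is the scale parameter,
# beta and alpha are the shape parameters (all positive)
depld <- function(x, theta, beta, alpha) {
  xb <- x^beta
  G  <- 1 - (1 + theta * xb / (theta + 1)) * exp(-theta * xb)  # power Lindley cdf
  alpha * beta * theta^2 / (theta + 1) * (1 + xb) * x^(beta - 1) *
    exp(-theta * xb) * G^(alpha - 1)
}

# EPLD cumulative distribution function of Equation (2)
pepld <- function(x, theta, beta, alpha) {
  xb <- x^beta
  (1 - (1 + theta * xb / (theta + 1)) * exp(-theta * xb))^alpha
}

# quick sanity check: the density should integrate to approximately one
integrate(depld, 0, Inf, theta = 0.3, beta = 5, alpha = 2)
```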
The progressive type-II censoring scheme is a generalization of the well-known type-II right censoring scheme, and over the last 20 years it has received considerable attention, see [6,7,12,15,17,26–28,32,33]. Recently, a mixture of type-I and progressive type-II censoring was introduced; the resulting scheme is called adaptive progressive type-II censoring (APC), see [27], and we may also refer to the work of many authors, see, e.g. [3,4,8]. The idea of this process is to select each censoring number with consideration of both the previous censoring numbers and the previous failure times. APC schemes have proved useful in balancing estimation efficiency against the total experimental time. For a typical progressive type-II censoring scheme, suppose that X_1, X_2, …, X_n are the lifetimes of n units that are independent and identically distributed (i.i.d.) and placed on a life test. Suppose that R_1, R_2, …, R_m are pre-fixed non-negative integers such that m + R_1 + R_2 + ⋯ + R_m = n. Only m units will be observed to fail and the remaining n − m units will be censored progressively according to the censoring scheme (R_1, R_2, …, R_m). The censoring proceeds progressively in m steps or stages, and at these m stages the failure times X_{1:m:n} < X_{2:m:n} < ⋯ < X_{m:m:n} of the m observed units are recorded. At the time of the first failure X_{1:m:n}, we remove R_1 units at random from the n − 1 surviving units. Continuing, at the time of the second failure X_{2:m:n}, we remove R_2 units at random from the n − 2 − R_1 surviving units. Finally, at the time of the m-th failure X_{m:m:n}, all the remaining units, i.e. R_m = n − m − R_1 − ⋯ − R_{m−1}, are withdrawn from the experiment.
Suppose T is a fixed total experimental time. If the m-th progressively censored failure occurs before time T, that is X_{m:m:n} < T, then the experiment stops at time X_{m:m:n}. Otherwise, if the experiment passes time T (i.e. X_{m:m:n} > T) but the number of observed failures has not yet reached m, we should stop the experiment as early as possible, see Figure 1.
Figure 1.
Experiment terminates before time T.
This framework can be viewed as a design by which the experimenter is assured of obtaining m observed failure times, which guarantees efficient statistical inference, together with a total testing time that is expected to be not too far from the ideal time T. From the main properties of order statistics, we know that the fewer operating items are withdrawn, the smaller the expected total experimental time, see [9,22,24]. Hence, in order to terminate the experiment as early as possible for a given value of m, we should keep as many surviving items in the experiment as we can. Assume I is the number of failed units that have been observed before time T, i.e.
X_{I:m:n} < T < X_{I+1:m:n}, where X_{0:m:n} ≡ 0 and X_{m+1:m:n} ≡ ∞. By the stochastic ordering of first-order statistics from different sample sizes, keeping more items on test shortens the waiting time for the next failure; hence, after the experiment passes time T, we set R_{I+1} = ⋯ = R_{m−1} = 0 and R_m = n − m − Σ_{i=1}^{I} R_i. This setting ensures that the experiment terminates as early as possible when a failure time larger than T occurs and I + 1 < m. This situation is represented in Figure 2. The value of T plays an important role in determining the values of R_{I+1}, …, R_m and acts as a trade-off between a shorter experimental time and a higher chance of observing extreme failure times. If T → ∞, time is not important for the experiment, and we obtain the usual progressive type-II censoring scheme with (R_1, …, R_m) as the censoring procedure. The other extreme case is obtained if T = 0, which corresponds to terminating the experiment as early as possible; then R_1 = ⋯ = R_{m−1} = 0 and R_m = n − m, which reduces to the usual type-II censored sample.
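As a small illustrative example with our own numbers: if n = 10, m = 5, the planned scheme is (1, 1, 1, 1, 1) and only I = 2 failures occur before T, then the adjusted scheme becomes R_3 = R_4 = 0 and R_5 = 10 − 5 − (1 + 1) = 3, so all three remaining survivors are withdrawn at the fifth observed failure.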
Figure 2.
Experiment terminates after time T.
If the failure times of the n units under test are continuously distributed with cdf F(x) and pdf f(x), then the likelihood function is given by
L(\theta,\beta,\alpha \mid \underline{x}) = C \prod_{i=1}^{m} f(x_{i:m:n}) \prod_{i=1}^{I}\left[1-F(x_{i:m:n})\right]^{R_i}\left[1-F(x_{m:m:n})\right]^{R_m^{*}},   (3)

where R_m^{*} = n - m - \sum_{i=1}^{I} R_i and C is a normalizing constant that does not depend on the parameters. For more details, see Ng et al. [27].
Although many researchers have worked on non-Bayesian statistical inference for the unknown parameters of different lifetime models based on progressively censored samples, not much work has been done on the Bayesian inference. In this paper, we mainly pursue two goals. First, we compute the maximum likelihood estimator (MLE) and the approximate confidence intervals for the EPLD parameters θ, β, and α under the adaptive progressive type-II censored sampling scheme. The Bayes estimation method has been used extensively in the literature for different lifetime distributions, see, e.g. [2,6,14,34–36]. In this article, the proposed priors are assumed to be gamma priors, and estimation is carried out under quadratic and LINEX loss functions. Gibbs sampling and Markov Chain Monte Carlo (MCMC) methods are useful in evaluating the Bayes estimates, and we also construct the Bayes credible intervals using the MCMC method. Second, we use a simulation analysis in R to obtain an optimal censoring scheme and to compare the efficiency of the two estimation methods. Numerical analysis of real data and a simulation example are used to select the optimal censoring scheme.
The rest of the paper is organized as follows. In Section 2, we obtain the MLEs for the EPLD parameters together with the approximate confidence intervals. The Bayesian estimation for the EPLD parameters, with the corresponding credible intervals, is obtained in Section 3. In Section 4, we introduce real lifetime data to assess the goodness of fit of EPLD. Simulation and numerical analyses are performed in Section 5 in order to compare the proposed methods of estimation. Conclusions are drawn in Section 6.
2. Maximum likelihood estimation (MLE)
In this section, we use the classical method of maximum likelihood for point estimation of the three unknown parameters of EPLD under the APC scheme. Let x_{1:m:n} < x_{2:m:n} < ⋯ < x_{m:m:n}, with censoring scheme (R_1, …, R_m), be the observed adaptive progressive type-II censored sample of size m from a sample of size n with EPLD lifetimes, where the pdf and cdf are as given in Equations (1) and (2), respectively.
The likelihood function based on the APC sample is given in Equation (3); hence, letting Θ = (θ, β, α) be the vector of parameters and writing x_i for x_{i:m:n}, the likelihood function under EPLD is
L(\Theta \mid \underline{x}) \propto \prod_{i=1}^{m} \frac{\alpha\beta\theta^{2}}{\theta+1}(1+x_i^{\beta})\,x_i^{\beta-1} e^{-\theta x_i^{\beta}}\,G_i^{\alpha-1}\ \prod_{i=1}^{I}\left(1-G_i^{\alpha}\right)^{R_i}\left(1-G_m^{\alpha}\right)^{R_m^{*}},   (4)

where G_i = 1-\left(1+\frac{\theta x_i^{\beta}}{\theta+1}\right)e^{-\theta x_i^{\beta}}.
The logarithmic likelihood function of EPLD, up to an additive constant, is

\ell(\Theta) = m\log(\alpha\beta) + 2m\log\theta - m\log(\theta+1) + \sum_{i=1}^{m}\left[\log(1+x_i^{\beta}) + (\beta-1)\log x_i - \theta x_i^{\beta}\right] + (\alpha-1)\sum_{i=1}^{m}\log G_i + \sum_{i=1}^{I} R_i\log\left(1-G_i^{\alpha}\right) + R_m^{*}\log\left(1-G_m^{\alpha}\right).
From this log-likelihood, we compute the first partial derivatives with respect to the parameters θ, β and α, and the MLEs are obtained by solving the following normal equations, found by equating these derivatives to zero:
\frac{\partial\ell}{\partial\theta} = \frac{2m}{\theta} - \frac{m}{\theta+1} - \sum_{i=1}^{m} x_i^{\beta} + (\alpha-1)\sum_{i=1}^{m}\frac{\dot G_i^{\theta}}{G_i} - \alpha\sum_{i=1}^{I} R_i\frac{G_i^{\alpha-1}\dot G_i^{\theta}}{1-G_i^{\alpha}} - \alpha R_m^{*}\frac{G_m^{\alpha-1}\dot G_m^{\theta}}{1-G_m^{\alpha}} = 0,

\frac{\partial\ell}{\partial\beta} = \frac{m}{\beta} + \sum_{i=1}^{m}\left[\frac{x_i^{\beta}\log x_i}{1+x_i^{\beta}} + \log x_i - \theta x_i^{\beta}\log x_i\right] + (\alpha-1)\sum_{i=1}^{m}\frac{\dot G_i^{\beta}}{G_i} - \alpha\sum_{i=1}^{I} R_i\frac{G_i^{\alpha-1}\dot G_i^{\beta}}{1-G_i^{\alpha}} - \alpha R_m^{*}\frac{G_m^{\alpha-1}\dot G_m^{\beta}}{1-G_m^{\alpha}} = 0,

\frac{\partial\ell}{\partial\alpha} = \frac{m}{\alpha} + \sum_{i=1}^{m}\log G_i - \sum_{i=1}^{I} R_i\frac{G_i^{\alpha}\log G_i}{1-G_i^{\alpha}} - R_m^{*}\frac{G_m^{\alpha}\log G_m}{1-G_m^{\alpha}} = 0,   (5)

where \dot G_i^{\theta} = \partial G_i/\partial\theta = \theta x_i^{\beta}\left[\theta+2+(\theta+1)x_i^{\beta}\right]e^{-\theta x_i^{\beta}}/(\theta+1)^{2} and \dot G_i^{\beta} = \partial G_i/\partial\beta = \theta^{2}(1+x_i^{\beta})\,x_i^{\beta}\log(x_i)\,e^{-\theta x_i^{\beta}}/(\theta+1).
Since an explicit solution of the above equations cannot be obtained algebraically, we need to use numerical methods to obtain the MLEs of the parameters θ, β and α. Many numerical methods have been used in the literature to solve systems of nonlinear equations. In this paper, we use the Newton–Raphson (N–R) method, which is one of the most widely used; it relies on the following iterative equation:
\Theta^{(k+1)} = \Theta^{(k)} - \left[\frac{\partial^{2}\ell(\Theta)}{\partial\Theta\,\partial\Theta^{\top}}\right]^{-1}_{\Theta=\Theta^{(k)}}\left[\frac{\partial\ell(\Theta)}{\partial\Theta}\right]_{\Theta=\Theta^{(k)}}.

The iteration depends on the choice of the initial point; to solve these equations we use R to update the values of θ, β and α from their initial values until they converge to the MLEs. The second partial derivatives are obtained and included in the Appendix.
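As a minimal sketch (not the authors' code), the MLEs can also be obtained by maximizing the log-likelihood numerically; the example below reuses depld() and pepld() from Section 1 and assumes that x holds the m ordered observed failure times and R the numbers of units actually removed at each failure (zeros after time T, with all remaining survivors absorbed at the last failure). A direct numerical maximization via optim is used here in place of hand-coded Newton–Raphson iterations.

```r
# negative log-likelihood of EPLD under an adaptive progressive type-II
# censored sample: x = ordered observed failure times, R = removals actually
# applied at each failure
negloglik <- function(par, x, R) {
  theta <- par[1]; beta <- par[2]; alpha <- par[3]
  if (any(par <= 0)) return(1e10)               # keep the search in the valid region
  logf <- log(depld(x, theta, beta, alpha))     # contribution of observed failures
  logS <- log(1 - pepld(x, theta, beta, alpha)) # log-survival for removed units
  -(sum(logf) + sum(R * logS))
}

# numerical maximization from rough starting values; the Newton-Raphson
# iterations described above could be coded analogously using the score
# equations in (5) and the second derivatives from the Appendix
fit <- optim(par = c(theta = 0.5, beta = 1, alpha = 1), fn = negloglik,
             x = x, R = R, hessian = TRUE)
fit$par   # approximate MLEs of theta, beta and alpha
```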
2.1. Fisher information
The normal approximation of the MLE of the vector parameter Θ = (θ, β, α) can be useful in constructing approximate confidence intervals and in testing hypotheses about the parameters θ, β and α. From the asymptotic properties of the MLE we have that \hat\Theta is approximately distributed as N(\Theta, I^{-1}(\Theta)), where I(\Theta) is the Fisher information matrix, i.e.

I(\Theta) = -E\left[\frac{\partial^{2}\ell(\Theta)}{\partial\Theta\,\partial\Theta^{\top}}\right].
The expected values of the second partial derivatives are obtained numerically using R. The variances of the MLEs follow from the asymptotic properties of the MLE, so that var(\hat\theta) \approx I^{11}, var(\hat\beta) \approx I^{22} and var(\hat\alpha) \approx I^{33}, where I^{jj} is the j-th diagonal element of I^{-1}(\hat\Theta), each being a cofactor divided by the determinant of the information matrix I. The 100(1-\gamma)\% confidence intervals for θ, β and α are given as

\hat\theta \pm z_{1-\gamma/2}\sqrt{\operatorname{var}(\hat\theta)}, \qquad \hat\beta \pm z_{1-\gamma/2}\sqrt{\operatorname{var}(\hat\beta)}, \qquad \hat\alpha \pm z_{1-\gamma/2}\sqrt{\operatorname{var}(\hat\alpha)},

respectively, where z_{q} is the q-th quantile (lower percentile) of the standard normal distribution.
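Continuing the sketch above (so the fit object and its numerically evaluated Hessian are assumed to exist), approximate 95% Wald-type intervals can be computed as follows; note that the observed information is used here in place of the expected information.

```r
# asymptotic 95% confidence intervals from the observed information matrix
V  <- solve(fit$hessian)          # inverse Hessian of the negative log-likelihood
se <- sqrt(diag(V))               # standard errors of theta, beta, alpha
z  <- qnorm(0.975)
cbind(lower = fit$par - z * se, upper = fit$par + z * se)
```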
3. Bayes inference
Prior information is not always available for experimental data; hence selecting the prior distribution is an important step in Bayesian estimation (BE) of the parameters. Our choices for the prior distributions of θ, β and α are the independent gamma distributions Gamma(a_1, b_1), Gamma(a_2, b_2) and Gamma(a_3, b_3), respectively. The reason for choosing this prior is that the gamma prior is flexible in nature and can accommodate a non-informative setting; moreover, the gamma family combines conveniently with the likelihood function of this generalization of the Lindley distribution. The suggested gamma distributions have the following densities:
\pi_1(\theta) = \frac{b_1^{a_1}}{\Gamma(a_1)}\theta^{a_1-1}e^{-b_1\theta}, \qquad \pi_2(\beta) = \frac{b_2^{a_2}}{\Gamma(a_2)}\beta^{a_2-1}e^{-b_2\beta}, \qquad \pi_3(\alpha) = \frac{b_3^{a_3}}{\Gamma(a_3)}\alpha^{a_3-1}e^{-b_3\alpha}, \qquad \theta, \beta, \alpha > 0,   (6)
where a_i and b_i, i = 1, 2, 3, are the hyperparameters of the prior distributions and all are positive real constants.
The joint prior of θ, β and α is

\pi(\theta,\beta,\alpha) = \pi_1(\theta)\,\pi_2(\beta)\,\pi_3(\alpha) \propto \theta^{a_1-1}\beta^{a_2-1}\alpha^{a_3-1}e^{-(b_1\theta+b_2\beta+b_3\alpha)}.
The joint posterior of θ, β and α is
\pi^{*}(\theta,\beta,\alpha \mid \underline{x}) = \frac{L(\underline{x}\mid\theta,\beta,\alpha)\,\pi(\theta,\beta,\alpha)}{\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty} L(\underline{x}\mid\theta,\beta,\alpha)\,\pi(\theta,\beta,\alpha)\,d\theta\,d\beta\,d\alpha},   (7)
where L(\underline{x}\mid\theta,\beta,\alpha) is the likelihood function of EPLD under adaptive progressively censored samples as in Equation (4). Substituting Equations (4) and (6) into (7), the joint posterior density for EPLD under the adaptive progressive censoring scheme can be written, up to a normalizing constant, as

\pi^{*}(\theta,\beta,\alpha\mid\underline{x}) \propto \alpha^{m+a_3-1}\,\beta^{m+a_2-1}\,\frac{\theta^{2m+a_1-1}}{(\theta+1)^{m}}\, e^{-\theta\left(b_1+\sum_{i=1}^{m}x_i^{\beta}\right)-b_2\beta-b_3\alpha} \prod_{i=1}^{m}(1+x_i^{\beta})\,x_i^{\beta-1}\,G_i^{\alpha-1} \prod_{i=1}^{I}\left(1-G_i^{\alpha}\right)^{R_i}\left(1-G_m^{\alpha}\right)^{R_m^{*}},

where G_i is as defined after Equation (4); note that the factors in θ and in α have the form of gamma density kernels with data-dependent parameters.
In the case of the quadratic (squared error) loss function, the Bayes estimate is the posterior mean. Therefore, the Bayes estimate of any function of θ, β and α, say g(θ, β, α), under the quadratic loss function is

\hat g_{SEL} = E\left[g(\theta,\beta,\alpha)\mid\underline{x}\right] = \int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty} g(\theta,\beta,\alpha)\,\pi^{*}(\theta,\beta,\alpha\mid\underline{x})\,d\theta\,d\beta\,d\alpha.

This posterior mean is not easy to obtain in closed form unless we use numerical approximation methods.
In the literature, many approximation methods are available to solve this kind of problem. Here we consider the MCMC approximation method, see Haj Ahmad and Awad [19]. This method approximates the required posterior integrals by averages over simulated draws and produces a specific numerical result.
The Bayes estimates of the unknown parameters θ, β and α under the LINEX loss function can be calculated through the following equation:

\hat g_{LINEX} = -\frac{1}{v}\log\left\{E\left[e^{-v\,g(\theta,\beta,\alpha)}\mid\underline{x}\right]\right\} \approx -\frac{1}{v}\log\left\{\frac{1}{L-M}\sum_{j=M+1}^{L} e^{-v\,g(\theta^{(j)},\beta^{(j)},\alpha^{(j)})}\right\},

where v ≠ 0 reflects the direction and degree of asymmetry, L is the number of iterations of the MCMC process and M is the burn-in period.
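For example, a one-line R helper (our own naming) that turns a vector of post-burn-in MCMC draws of a parameter into its LINEX Bayes estimate could look as follows.

```r
# LINEX Bayes estimate from a numeric vector of post-burn-in MCMC draws;
# v != 0 controls the direction and degree of asymmetry of the loss
linex_estimate <- function(draws, v) -log(mean(exp(-v * draws))) / v

# e.g. linex_estimate(theta_draws, v = 0.5) for draws generated as in Section 3.1
```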
3.1. Bayes estimation under MCMC
We use the MCMC approximation method to evaluate the Bayes estimates (BEs) of the unknown parameters θ, β and α. We start by generating a sample from the posterior distribution using the Metropolis–Hastings algorithm, after which we compute the Bayes estimators of the EPLD parameters; for more details see Hastings [20].
The Gibbs sampling method will be used to generate a sample from the posterior density function and to compute the Bayes estimates. For the purpose of generating the needed sample, it is assumed that the prior densities are as described in Equation (6). The full conditional posterior densities of θ, β and α given the data are
\pi_1^{*}(\theta\mid\beta,\alpha,\underline{x}) \propto \frac{\theta^{2m+a_1-1}}{(\theta+1)^{m}}\, e^{-\theta\left(b_1+\sum_{i=1}^{m}x_i^{\beta}\right)}\prod_{i=1}^{m}G_i^{\alpha-1}\prod_{i=1}^{I}\left(1-G_i^{\alpha}\right)^{R_i}\left(1-G_m^{\alpha}\right)^{R_m^{*}},

\pi_2^{*}(\beta\mid\theta,\alpha,\underline{x}) \propto \beta^{m+a_2-1}\, e^{-b_2\beta}\prod_{i=1}^{m}(1+x_i^{\beta})\,x_i^{\beta-1}e^{-\theta x_i^{\beta}}G_i^{\alpha-1}\prod_{i=1}^{I}\left(1-G_i^{\alpha}\right)^{R_i}\left(1-G_m^{\alpha}\right)^{R_m^{*}},

\pi_3^{*}(\alpha\mid\theta,\beta,\underline{x}) \propto \alpha^{m+a_3-1}\, e^{-\alpha\left(b_3-\sum_{i=1}^{m}\log G_i\right)}\prod_{i=1}^{I}\left(1-G_i^{\alpha}\right)^{R_i}\left(1-G_m^{\alpha}\right)^{R_m^{*}}.   (8)
Since the densities in Equation (8) cannot be written as standard known densities, generating θ, β and α directly from them by the usual methods is not possible, so we generate the unknown parameters by using the Metropolis–Hastings (M–H) algorithm; for more details see [16,20]. The idea in the M–H algorithm is to choose a proposal that keeps the percentage of rejections as low as possible; here a normal proposal distribution is selected in order to find the BEs and to construct the credible intervals for the required parameters. The Gibbs technique with M–H steps can be summarized as follows:
Start with initial values (\theta^{(0)}, \beta^{(0)}, \alpha^{(0)}).
Generate a posterior draw of θ, β and α by applying the M–H algorithm to the full conditionals in Equation (8).
Repeat step 2 L times to obtain the sample (\theta^{(j)}, \beta^{(j)}, \alpha^{(j)}), j = 1, …, L.
- The Bayes estimators of θ, β and α with respect to the quadratic loss function are given by

\hat\theta_{SEL} = \frac{1}{L-M}\sum_{j=M+1}^{L}\theta^{(j)}, \qquad \hat\beta_{SEL} = \frac{1}{L-M}\sum_{j=M+1}^{L}\beta^{(j)}, \qquad \hat\alpha_{SEL} = \frac{1}{L-M}\sum_{j=M+1}^{L}\alpha^{(j)},   (9)

where M is the burn-in period of the Markov chain.
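The following R sketch illustrates the M–H-within-Gibbs sampler just described; it is only an outline under our own naming, reusing negloglik() from Section 2. The hyperparameter values, proposal standard deviation, chain length and burn-in period below are illustrative assumptions rather than the values used by the authors.

```r
# log posterior = log-likelihood + independent Gamma(a_j, b_j) log-priors
logpost <- function(par, x, R, hyp) {
  if (any(par <= 0)) return(-Inf)
  -negloglik(par, x, R) + sum((hyp$a - 1) * log(par) - hyp$b * par)
}

run_mcmc <- function(x, R, hyp, init = c(0.5, 1, 1), L = 10000, sd_prop = 0.1) {
  out <- matrix(NA, L, 3, dimnames = list(NULL, c("theta", "beta", "alpha")))
  cur <- init
  lp_cur <- logpost(cur, x, R, hyp)
  for (j in 1:L) {
    for (k in 1:3) {                            # one normal random-walk M-H step per parameter
      prop <- cur
      prop[k] <- rnorm(1, cur[k], sd_prop)
      lp_prop <- logpost(prop, x, R, hyp)
      if (log(runif(1)) < lp_prop - lp_cur) {   # accept or reject the proposal
        cur <- prop
        lp_cur <- lp_prop
      }
    }
    out[j, ] <- cur
  }
  out
}

hyp   <- list(a = c(1, 1, 1), b = c(0.1, 0.1, 0.1))  # illustrative hyper-parameters
chain <- run_mcmc(x, R, hyp)
post  <- chain[-(1:2000), ]                          # discard burn-in draws
colMeans(post)                                       # Bayes estimates under squared error loss
```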
Figure 3 shows trace plots for the parameters θ, β and α with 10,000 iterations. The histograms of the parameters θ, β and α are drawn in Figure 4.
Figure 3.
MCMC trace plots for the parameters θ, β and α.
Figure 4.
Frequency histograms of θ, β and α.
3.2. Credible intervals
In this section, we apply the MCMC technique to construct Bayes credible intervals for the parameters θ, β and α. The algorithm can be summarized as follows:
Arrange the post-burn-in draws \theta^{(j)}, \beta^{(j)} and \alpha^{(j)}, j = M+1, …, L, in ascending order as \theta_{(1)} \le \cdots \le \theta_{(L-M)}, \beta_{(1)} \le \cdots \le \beta_{(L-M)} and \alpha_{(1)} \le \cdots \le \alpha_{(L-M)}.
- The 100(1-\gamma)\% credible intervals for the unknown parameters θ, β and α are given by

\left(\theta_{((L-M)\gamma/2)},\ \theta_{((L-M)(1-\gamma/2))}\right), \qquad \left(\beta_{((L-M)\gamma/2)},\ \beta_{((L-M)(1-\gamma/2))}\right), \qquad \left(\alpha_{((L-M)\gamma/2)},\ \alpha_{((L-M)(1-\gamma/2))}\right),

respectively.
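Using the post matrix of post-burn-in draws from the sketch in Section 3.1, the intervals above amount to taking empirical quantiles, for example:

```r
# 95% Bayes credible intervals for theta, beta and alpha from the MCMC draws
apply(post, 2, quantile, probs = c(0.025, 0.975))
```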
4. Data analysis
A real data set is analyzed to evaluate the goodness of fit of the EPLD model under the APC scheme and to examine how the new model behaves in practice. The data consist of 50 hole-diameter readings taken from metal sheets; the hole diameter and the sheet thickness are 12 mm and 3.15 mm, respectively. The readings are taken on jobs concerning one hole, selected and fixed as per a predetermined orientation, see Table 1. The data have been introduced by [11]. We fit the EPLD to the real data set and compare it with other lifetime models. The first suggested distribution is the three-parameter modified Weibull (MW) distribution.
Second is the power Lindley (PL) distribution, which is a special case of EPLD with α = 1; the generalized Lindley (GL) distribution can be obtained from EPLD with β = 1; and finally the Lindley (L) distribution is the sub-model of EPLD with α = β = 1. The comparison of the suggested distributions is conducted using the Kolmogorov–Smirnov (KS) statistic and its p-value, the Akaike information criterion AIC = 2p − 2 log L, where p represents the number of unknown parameters in the specified model and L is the maximum value of the likelihood function, the corrected (consistent) Akaike information criterion CAIC = AIC + 2p(p + 1)/(n − p − 1), and the Hannan–Quinn information criterion HQIC = −2 log L + 2p log(log n); we also use other measures, namely the Cramér–von Mises (W) and Anderson–Darling (A) statistics. For a set of suggested models for the real data, the ideal model is the one with the smallest values of these measures; therefore, we choose the model with the minimum AIC, CAIC, HQIC, W and A values but with the maximum p-value. The MLEs of θ, β and α with their standard errors (in brackets) are computed numerically using the optim function in the R statistical package. The values of the KS statistic with p-values, AIC, CAIC, HQIC, W and A are reported in Table 2. When comparing the values of KS between the EPLD and the other distributions MW, PL, GL and L, we obtain the minimum values of all measures for EPLD, along with the highest related p-value of 0.6947. Therefore, the EPLD fits this data set well and better than the other distributions. This also indicates the need for new distributions in handling some real data sets, so we can say that the new distribution is superior to the other models, see Table 2.
Table 1.
Hole diameters from 50 metal sheets.
| 0.04 | 0.02 | 0.06 | 0.12 | 0.14 | 0.08 | 0.22 | 0.12 | 0.08 | 0.26 |
| 0.24 | 0.04 | 0.14 | 0.16 | 0.08 | 0.26 | 0.32 | 0.28 | 0.14 | 0.16 |
| 0.24 | 0.22 | 0.12 | 0.18 | 0.24 | 0.32 | 0.16 | 0.14 | 0.08 | 0.16 |
| 0.24 | 0.16 | 0.32 | 0.18 | 0.24 | 0.22 | 0.16 | 0.12 | 0.24 | 0.06 |
| 0.02 | 0.18 | 0.22 | 0.14 | 0.06 | 0.04 | 0.14 | 0.26 | 0.18 | 0.16 |
Table 2.
Estimated parameters (with standard errors) of the EPLD, PL, GL, L and MW models with comparison criteria.
| Model | θ (SE) | β (SE) | α (SE) | K-S | p-value | AIC | CAIC | HQIC | W | A |
|---|---|---|---|---|---|---|---|---|---|---|
| MW | 0.1158 (0.0946) | 9.9982 (0.0128) | 6.1274 (1.665) | 0.2806 | 0.0007 | −75.2779 | −74.7562 | −73.0936 | 0.1831 | 1.0986 |
| L | 6.9031 (0.8777) | 1 | 1 | 0.2775 | 0.0009 | −79.8891 | −79.8058 | −79.1610 | 0.1804 | 1.0828 |
| PL | 37.1510 (14.49) | 2.1183 (0.2468) | 1 | 0.1099 | 0.5815 | −107.7915 | −107.5362 | −106.3353 | 0.1052 | 0.6431 |
| GL | 12.1585 (1.578) | 1 | 3.1446 (0.7024) | 0.1644 | 0.1340 | −100.7174 | −100.4621 | −99.2612 | 0.2099 | 1.2511 |
| EPLD | 0.2998 (0.0864) | 4.9517(0.8917) | 788.0935 (703.288) | 0.1004 | 0.6947 | −109.0677 | −108.5460 | −106.8834 | 0.0726 | 0.4341 |
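A small R sketch of how the selection measures in Table 2 can be computed is given below; it is an illustration under our own helper names, reusing pepld() from Section 1, with diam denoting the vector of hole-diameter readings from Table 1 and the EPLD estimates taken from Table 2.

```r
# information criteria for a fitted model with p parameters,
# maximized log-likelihood logL and sample size n
ic <- function(logL, p, n) {
  AIC  <- 2 * p - 2 * logL
  CAIC <- AIC + 2 * p * (p + 1) / (n - p - 1)   # small-sample corrected AIC
  HQIC <- -2 * logL + 2 * p * log(log(n))       # Hannan-Quinn criterion
  c(AIC = AIC, CAIC = CAIC, HQIC = HQIC)
}

# Kolmogorov-Smirnov test of the fitted EPLD against the observed data
ks.test(diam, pepld, theta = 0.2998, beta = 4.9517, alpha = 788.0935)
```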
5. Simulation study
A simulation study is conducted to examine the efficiency and performance of the BEs compared with the classical estimators obtained by the MLE approach under APC samples. For BE, we use the squared error loss function (SEL) and the LINEX loss function (LIN). We compute the biases and the mean square errors (MSEs) of the BEs and MLEs based on 10,000 iterations using R, see Tables 3–9. Under APC data, we also find the point estimates and the 95% confidence intervals for the EPLD parameters, see Table 10. We suggest three censoring schemes, referred to below as schemes 1, 2 and 3.
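To make the data-generation step concrete, the following R sketch shows one common way to simulate an adaptive progressive type-II censored sample from EPLD: first generate an ordinary progressive sample, then regenerate the failures beyond T from the left-truncated distribution with no further withdrawals. It reuses pepld() from Section 1; the function names, the particular removal pattern and the parameter values in the example are our own illustrative assumptions, not necessarily the schemes used in Tables 3–10.

```r
# quantile function of EPLD by numerical inversion of the cdf in Equation (2)
qepld <- function(p, theta, beta, alpha) {
  sapply(p, function(pp)
    uniroot(function(q) pepld(q, theta, beta, alpha) - pp,
            lower = 1e-10, upper = 1e6, tol = 1e-10)$root)
}

# one adaptive progressive type-II censored sample of size m out of n,
# with planned removal scheme Rplan (length m) and threshold time Tlim
rapc_epld <- function(n, m, Rplan, Tlim, theta, beta, alpha) {
  # ordinary progressive type-II sample via the uniform-transformation algorithm
  W <- runif(m)
  V <- W^(1 / ((1:m) + cumsum(rev(Rplan))))
  U <- 1 - cumprod(rev(V))                      # ordered uniforms of the progressive sample
  x <- qepld(U, theta, beta, alpha)
  if (x[m] <= Tlim) return(list(x = x, R = Rplan))  # experiment finished before Tlim
  I <- sum(x <= Tlim)                           # failures observed before Tlim
  x <- x[seq_len(I + 1)]                        # keep the first I + 1 failures
  k <- m - I - 1                                # failures still to be observed
  if (k > 0) {
    n_rem <- n - (I + 1) - sum(Rplan[seq_len(I)])   # survivors kept on test after Tlim
    # remaining failures: smallest k lifetimes among the n_rem survivors,
    # drawn from EPLD left-truncated at the (I + 1)-th observed failure
    u_new <- runif(n_rem, min = pepld(x[I + 1], theta, beta, alpha), max = 1)
    x_new <- sort(qepld(u_new, theta, beta, alpha))[seq_len(k)]
  } else x_new <- numeric(0)
  R_app <- c(Rplan[seq_len(I)], rep(0, m - 1 - I), n - m - sum(Rplan[seq_len(I)]))
  list(x = c(x, x_new), R = R_app)              # observed times and removals applied
}

# example: all planned removals at the last failure, n = 100, m = 40, T = 0.85
sim <- rapc_epld(100, 40, c(rep(0, 39), 60), 0.85, theta = 0.3, beta = 5, alpha = 2)
```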
Table 3.
Biases and MSEs (first and second columns, respectively) for the MLEs and the Bayesian estimators of θ, β and α for scheme 1, with n = 100 and m = 40.
| MLE | Bayes (SEL) | Bayes (LIN v = 0.5) | Bayes (LIN v = −0.5) | |||||
|---|---|---|---|---|---|---|---|---|
| Parameters | ||||||||
| 0.1883 | 0.6535 | 0.0548 | 0.1562 | 0.0794 | 0.1690 | 3.0848 | 1.5361 | |
| 0.0760 | 0.1456 | 0.0601 | 0.0588 | 0.0696 | 0.0630 | 1.4675 | 0.9218 | |
| 0.0243 | 0.1951 | −0.0118 | 0.0686 | −0.0005 | 0.0684 | 1.7308 | 1.0269 | |
| 0.0558 | 0.1949 | 0.0391 | 0.0718 | 0.0485 | 0.0743 | 1.7186 | 1.0400 | |
| 0.0488 | 0.0436 | 0.0173 | 0.0203 | 0.0206 | 0.0207 | 0.7967 | 0.5550 | |
| −0.0001 | 0.0253 | 0.0101 | 0.0105 | 0.0117 | 0.0107 | 0.6240 | 0.4010 | |
| 0.1739 | 1.0184 | 0.0320 | 0.0640 | 0.0408 | 0.0663 | 3.9006 | 0.9845 | |
| 0.0498 | 0.2808 | 0.0093 | 0.0374 | 0.0158 | 0.0383 | 2.0701 | 0.7581 | |
| 0.0841 | 0.2617 | 0.0051 | 0.0291 | 0.0098 | 0.0292 | 1.9799 | 0.6690 | |
| 0.3244 | 1.5620 | 0.0065 | 0.0941 | 0.0178 | 0.0957 | 4.7360 | 1.2033 | |
| 0.1081 | 0.2834 | 0.0321 | 0.0393 | 0.0379 | 0.0405 | 2.0454 | 0.7681 | |
| 0.0371 | 0.2409 | −0.0060 | 0.0281 | −0.0020 | 0.0280 | 1.9206 | 0.6575 | |
| 0.2361 | 1.4607 | −0.0117 | 0.0843 | −0.0011 | 0.0854 | 4.6511 | 1.1383 | |
| 0.1586 | 0.3467 | 0.0377 | 0.0368 | 0.0441 | 0.0380 | 2.2252 | 0.7376 | |
| −0.0054 | 0.2942 | −0.0146 | 0.0343 | −0.0094 | 0.0343 | 2.1281 | 0.7250 | |
Table 9.
Biases and MSEs (first and second columns respectively) for MLEs and the Bayesian estimators of θ, β and α for scheme 3, with T = 1.5, n = 100, m = 60.
| MLE | Bayes (SEL) | Bayes (LIN v = 0.5) | Bayes (LIN v = −0.5) | |||||
|---|---|---|---|---|---|---|---|---|
| Parameters | ||||||||
| 0.1486 | 0.9025 | 0.0034 | 0.1099 | 0.0197 | 0.1145 | 0.1061 | ||
| 0.1965 | 0.3455 | 0.0732 | 0.0623 | 0.0827 | 0.0661 | 0.0637 | 0.0588 | |
| 0.2578 | 0.0497 | 0.0494 | 0.0501 | |||||
| 2.0467 | 5.2673 | 0.1064 | 0.1075 | 0.1320 | 0.1233 | 0.0815 | 0.0944 | |
| 0.9162 | 0.4879 | 0.4782 | 0.4968 | |||||
| 0.7157 | 0.5783 | 0.1383 | 0.0350 | 0.1440 | 0.0372 | 0.1326 | 0.0329 | |
| 0.1181 | 0.6568 | 0.0369 | 0.1149 | 0.0522 | 0.1222 | 0.0217 | 0.1084 | |
| 0.2044 | 0.4861 | 0.0398 | 0.0809 | 0.0517 | 0.0846 | 0.0279 | 0.0776 | |
| 0.2147 | 0.0076 | 0.0533 | 0.0154 | 0.0539 | 0.0529 | |||
| 0.1763 | 1.3350 | 0.1441 | 0.0016 | 0.1483 | 0.1410 | |||
| 0.2229 | 0.5245 | 0.0636 | 0.0749 | 0.0753 | 0.0800 | 0.0520 | 0.0702 | |
| 0.2280 | 0.0381 | 0.0379 | 0.0384 | |||||
| 0.0182 | 0.8454 | 0.1409 | 0.0140 | 0.1443 | 0.1387 | |||
| 0.2864 | 0.5405 | 0.0724 | 0.0819 | 0.0856 | 0.0878 | 0.0593 | 0.0765 | |
| 0.2239 | 0.0561 | 0.0557 | 0.0567 | |||||
Table 10.
95% average confidence interval lengths (AIL) for θ, β and α.
| MLE | Bayes (SE) | Bayes (LIN 0.5) | Bayes (LIN-0.5) | |||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Scheme | α | β | θ | α | β | θ | α | β | θ | α | β | θ |
| 1: n = 100, m = 40 | 5.604 | 2.698 | 2.315 | 1.866 | 1.160 | 1.018 | 1.916 | 1.185 | 1.019 | 1.816 | 1.136 | 1.016 |
| 1: n = 100, m = 60 | 4.807 | 2.285 | 1.985 | 1.524 | 0.968 | 0.850 | 1.506 | 0.948 | 0.851 | 1.542 | 0.989 | 0.849 |
| 1: n = 100, m = 80 | 4.736 | 2.045 | 1.921 | 1.203 | 0.768 | 0.658 | 1.195 | 0.760 | 0.658 | 1.212 | 0.776 | 0.657 |
| 2: n = 100, m = 40, T = 1.5 | 4.942 | 3.094 | 2.051 | 1.751 | 1.303 | 0.917 | 1.713 | 1.261 | 0.919 | 1.790 | 1.346 | 0.916 |
| 2: n = 100, m = 40, T = 0.85 | 5.256 | 1.451 | 1.315 | 1.889 | 1.321 | 1.105 | 1.824 | 1.299 | 1.101 | 1.956 | 1.344 | 1.103 |
| 3: n = 100, m = 40, T = 0.85 | 2.741 | 0.401 | 0.517 | 1.803 | 0.632 | 0.719 | 1.723 | 0.631 | 0.716 | 1.882 | 0.630 | 0.721 |
| 3: n = 100, m = 40, T = 1.5 | 4.481 | 2.704 | 1.874 | 1.488 | 1.044 | 0.765 | 1.466 | 1.019 | 0.767 | 1.511 | 1.070 | 0.764 |
We also compare the suggested schemes to obtain the optimal censoring scheme based on the values of the biases and MSEs with fixed parameter values. The results from Tables 3–9 can be summarized in the points below:
For estimating α under scheme 1, we observe that the minimum MSE is obtained for Bayesian estimation under SEL as in Table 3, and the minimum MSE is obtained for Bayesian estimation under the LINEX loss function as reported in Tables 4 and 5. As a result, we can say that Bayes estimation is better than the MLE method for estimating α. This result also holds for censoring schemes 2 and 3, where Bayesian estimation under the LINEX loss performs better than the MLE method for estimating α, see Tables 6–9, for the different experimental times T.
For estimating the shape parameter β under censoring scheme 1, it is observed that Bayes estimation under SEL has the minimum MSE and therefore performs better than the MLE method, see Table 3, while in Tables 4 and 5 we recommend Bayesian estimation under the LINEX loss since it attains the minimum MSE. Under censoring schemes 2 and 3, it is clear that Bayesian estimation under LIN with v = 0.5 and v = −0.5 is better than MLE with respect to MSE, and this holds for the different values of the experimental time T, see Tables 6–9.
For estimating the scale parameter θ, Bayes estimation under LIN with v = 0.5 and v = −0.5, and sometimes under SEL, performs better than MLE for all censoring schemes under consideration and all suggested experimental times T, see Tables 3–9.
In terms of biases for estimating α, β and θ, we find that Bayes estimation under SEL and LIN is superior to MLE for all censoring schemes, except for one case under censoring scheme 3 with T = 0.85, n = 100 and m = 60, since in this special case MLE performs better than the Bayes method when estimating θ, see Table 8.
Table 4.
Biases and MSEs (first and second columns, respectively) for the MLEs and the Bayesian estimators of θ, β and α for scheme 1, with n = 100 and m = 60.
| MLE | Bayes (SEL) | Bayes (LIN v = 0.5) | Bayes (LIN v = −0.5) | |||||
|---|---|---|---|---|---|---|---|---|
| Parameters | ||||||||
| 0.2100 | 1.0879 | 0.0065 | 0.1071 | 0.0217 | 0.1109 | −0.0087 | 0.1037 | |
| 0.1188 | 0.2430 | 0.0741 | 0.0503 | 0.0819 | 0.0533 | 0.0663 | 0.0475 | |
| 0.0407 | 0.3072 | −0.0249 | 0.0557 | −0.0164 | 0.0553 | −0.0334 | 0.0563 | |
| 0.0082 | 0.2338 | 0.0894 | 0.1236 | 0.1070 | 0.1319 | 0.0720 | 0.1163 | |
| 0.0462 | 0.0627 | −0.0009 | 0.0305 | 0.0046 | 0.0313 | −0.0065 | 0.0299 | |
| 0.0095 | 0.0333 | 0.0316 | 0.0173 | 0.0345 | 0.0177 | 0.0288 | 0.0169 | |
| 0.3351 | 1.1203 | 0.0135 | 0.1088 | 0.0293 | 0.1148 | −0.0022 | 0.1034 | |
| 0.1838 | 0.4532 | 0.0516 | 0.0768 | 0.0636 | 0.0815 | 0.0397 | 0.0725 | |
| −0.0213 | 0.2772 | −0.0222 | 0.0546 | −0.0137 | 0.0547 | −0.0306 | 0.0546 | |
| 0.2534 | 1.5652 | −0.0251 | 0.1515 | −0.0048 | 0.1545 | −0.0453 | 0.1494 | |
| 0.1435 | 0.3596 | 0.0580 | 0.0642 | 0.0682 | 0.0681 | 0.0478 | 0.0607 | |
| 0.0132 | 0.2561 | −0.0212 | 0.0474 | −0.0143 | 0.0471 | −0.0282 | 0.0478 | |
| 0.3670 | 1.6371 | 0.0106 | 0.1433 | 0.0295 | 0.1469 | −0.0082 | 0.1404 | |
| 0.1350 | 0.4132 | 0.0580 | 0.0632 | 0.0688 | 0.0671 | 0.0472 | 0.0597 | |
| 0.0567 | 0.3260 | −0.0073 | 0.0478 | 0.0014 | 0.0472 | −0.0160 | 0.0485 | |
Table 5.
Biases and MSEs (first and second columns respectively) for MLEs and the Bayesian estimators of θ, β and α for scheme 1 with n = 100 and m = 80.
| MLE | Bayes (SEL) | Bayes (LIN v = 0.5) | Bayes (LIN v = −0.5) | |||||
|---|---|---|---|---|---|---|---|---|
| Parameters | ||||||||
| 0.1883 | 0.6535 | 0.0548 | 0.1562 | 0.0794 | 0.1690 | 0.0307 | 0.1457 | |
| 0.0760 | 0.1456 | 0.0601 | 0.0588 | 0.0696 | 0.0630 | 0.0507 | 0.0550 | |
| 0.0243 | 0.1951 | −0.0118 | 0.0686 | −0.0005 | 0.0684 | −0.0230 | 0.0691 | |
| 0.0558 | 0.1949 | 0.0391 | 0.0718 | 0.0485 | 0.0743 | 0.0297 | 0.0694 | |
| 0.0488 | 0.0436 | 0.0173 | 0.0203 | 0.0206 | 0.0207 | 0.0139 | 0.0200 | |
| −0.0001 | 0.0253 | 0.0101 | 0.0105 | 0.0117 | 0.0107 | 0.0085 | 0.0104 | |
| 0.1739 | 1.0184 | 0.0320 | 0.0640 | 0.0408 | 0.0663 | 0.0232 | 0.0619 | |
| 0.0498 | 0.2808 | 0.0093 | 0.0374 | 0.0158 | 0.0383 | 0.0027 | 0.0366 | |
| 0.0841 | 0.2617 | 0.0051 | 0.0291 | 0.0098 | 0.0292 | 0.0003 | 0.0290 | |
| 0.3244 | 1.5620 | 0.0065 | 0.0941 | 0.0178 | 0.0957 | −0.0047 | 0.0927 | |
| 0.1081 | 0.2834 | 0.0321 | 0.0393 | 0.0379 | 0.0405 | 0.0263 | 0.0382 | |
| 0.0371 | 0.2409 | −0.0060 | 0.0281 | −0.0020 | 0.0280 | −0.0100 | 0.0282 | |
| 0.2361 | 1.4607 | −0.0117 | 0.0843 | −0.0011 | 0.0854 | −0.0223 | 0.0835 | |
| 0.1586 | 0.3467 | 0.0377 | 0.0368 | 0.0441 | 0.0380 | 0.0312 | 0.0356 | |
| −0.0054 | 0.2942 | −0.0146 | 0.0343 | −0.0094 | 0.0343 | −0.0199 | 0.0345 | |
Table 6.
Biases and MSEs (first and second columns, respectively) for the MLEs and the Bayesian estimators of θ, β and α for scheme 2, with T = 1.5, n = 100 and m = 40.
| MLE | Bayes (SEL) | Bayes (LIN v = 0.5) | Bayes (LIN v = −0.5) | |||||
|---|---|---|---|---|---|---|---|---|
| Parameters | ||||||||
| 0.2734 | 1.2222 | 0.0388 | 0.1695 | 0.0645 | 0.1852 | 0.0137 | 0.1564 | |
| 0.2336 | 0.6391 | 0.0798 | 0.1059 | 0.0964 | 0.1173 | 0.0634 | 0.0957 | |
| 0.0424 | 0.3080 | −0.0007 | 0.0709 | 0.0114 | 0.0712 | −0.0128 | 0.0708 | |
| 0.6610 | 0.5901 | 0.1806 | 0.2044 | 0.2162 | 0.2387 | 0.1456 | 0.1744 | |
| −0.5885 | 0.3660 | −0.4334 | 0.2220 | −0.4263 | 0.2162 | −0.4404 | 0.2277 | |
| 0.2776 | 0.0902 | 0.1026 | 0.0344 | 0.1082 | 0.0364 | 0.0970 | 0.0326 | |
| 0.2390 | 1.4126 | 0.0849 | 0.2056 | 0.1107 | 0.2267 | 0.0594 | 0.1869 | |
| 0.2902 | 0.7485 | 0.0718 | 0.1438 | 0.0944 | 0.1560 | 0.0494 | 0.1333 | |
| 0.0185 | 0.3172 | 0.0220 | 0.0852 | 0.0343 | 0.0866 | 0.0096 | 0.0841 | |
| 0.2832 | 1.6666 | −0.0040 | 0.1991 | 0.0260 | 0.2088 | −0.0338 | 0.1917 | |
| 0.2347 | 0.6769 | 0.0938 | 0.1190 | 0.1131 | 0.1304 | 0.0748 | 0.1089 | |
| 0.0295 | 0.2741 | −0.0159 | 0.0549 | −0.0063 | 0.0546 | −0.0254 | 0.0555 | |
| 0.3873 | 2.1154 | −0.0311 | 0.2253 | −0.0015 | 0.2347 | −0.0602 | 0.2184 | |
| 0.3188 | 1.0140 | 0.1480 | 0.1753 | 0.1715 | 0.1962 | 0.1244 | 0.1562 | |
| 0.0762 | 0.3686 | −0.0132 | 0.0830 | 0.0008 | 0.0829 | −0.0273 | 0.0835 | |
Table 7.
Biases and MSEs (first and second columns respectively) for MLEs and the Bayesian estimators of θ, β and α for scheme 2, with T = 0.85, n = 100, m = 40.
| MLE | Bayes (SEL) | Bayes (LIN v = 0.5) | Bayes (LIN v = −0.5) | |||||
|---|---|---|---|---|---|---|---|---|
| Parameters | ||||||||
| 1.9298 | 5.1409 | 0.2271 | 0.2520 | 0.2620 | 0.2903 | 0.1923 | 0.2173 | |
| 0.4838 | 0.1641 | 0.1609 | 0.1675 | |||||
| 0.5473 | 0.4618 | 0.0747 | 0.0736 | 0.0760 | ||||
| 0.7324 | 0.8149 | 0.1574 | 0.2025 | 0.1914 | 0.2329 | 0.1232 | 0.1754 | |
| 0.5038 | 0.3104 | 0.3027 | 0.3178 | |||||
| 0.2906 | 0.1005 | 0.0857 | 0.0319 | 0.0914 | 0.0337 | 0.0799 | 0.0302 | |
| 2.2915 | 7.3360 | 0.1721 | 0.1935 | 0.2056 | 0.2249 | 0.1393 | 0.1667 | |
| 0.9831 | 0.2691 | 0.2579 | 0.2803 | |||||
| 0.6050 | 0.5338 | 0.0769 | 0.0747 | 0.0794 | ||||
| 2.2608 | 6.9052 | 0.1339 | 0.2497 | 0.1699 | 0.2775 | 0.0982 | 0.2257 | |
| 0.7486 | 0.2185 | 0.2130 | 0.2245 | |||||
| 0.4606 | 0.3244 | 0.1201 | 0.1152 | 0.1252 | ||||
| 2.2608 | 6.9052 | 0.1339 | 0.2497 | 0.1699 | 0.2775 | 0.0982 | 0.2257 | |
| 0.7486 | 0.2185 | 0.2130 | 0.2245 | |||||
| 0.4606 | 0.3244 | 0.1201 | 0.1152 | 0.1252 | ||||
Table 8.
Biases and MSEs (first and second columns respectively) for MLEs and the Bayesian estimators of θ, β and α for scheme 3, with T = 0.85, n = 100, m = 60.
| MLE | Bayes (SEL) | Bayes (LIN v = 0.5) | Bayes (LIN v = −0.5) | |||||
|---|---|---|---|---|---|---|---|---|
| Parameters | ||||||||
| 1.2869 | 3.0498 | 0.2187 | 0.2395 | 0.2516 | 0.2743 | 0.1862 | 0.2083 | |
| 0.5739 | 0.3180 | 0.3157 | 0.3200 | |||||
| 0.2935 | 0.2304 | 0.0825 | 0.0817 | 0.0836 | ||||
| 2.0111 | 5.0975 | 0.2731 | 0.2174 | 0.3142 | 0.2607 | 0.2321 | 0.1789 | |
| 0.9144 | 0.5357 | 0.5259 | 0.5446 | |||||
| 0.6945 | 0.5705 | 0.1979 | 0.0584 | 0.2053 | 0.0622 | 0.1903 | 0.0548 | |
| 2.2311 | 6.2521 | 0.2574 | 0.2152 | 0.2966 | 0.2588 | 0.2194 | 0.1788 | |
| 1.5458 | 0.8018 | 0.7804 | 0.8206 | |||||
| 0.4557 | 0.2679 | 0.1084 | 0.1051 | 0.1120 | ||||
| 2.4480 | 6.4806 | 0.1961 | 0.2495 | 0.2358 | 0.2856 | 0.1564 | 0.2173 | |
| 1.7114 | 1.0039 | 0.9786 | 1.0256 | |||||
| 0.4251 | 0.1981 | 0.0889 | 0.0857 | 0.0923 | ||||
| 1.0645 | 2.9192 | 0.1005 | 0.2482 | 0.1349 | 0.2744 | 0.0664 | 0.2263 | |
| 0.8245 | 0.3976 | 0.3942 | 0.4003 | |||||
| 0.1331 | 0.1509 | 0.1409 | 0.1381 | 0.1440 | ||||
An optimal censoring scheme is vitally important since it may save time and cost in a real lifetime experiment. Many optimality criteria have been discussed in the literature; here we use an optimality criterion based on achieving the minimum biases and MSEs of the estimated parameters of EPLD. We therefore start with fixed values of the parameters and look for the minimum bias or MSE that can be achieved across Tables 3–9, which cover the three types of censoring schemes.
For fixed parameter values, and considering Bayes estimation under SEL for ease of comparison, the minimum bias is obtained under censoring scheme 1; this means that the simple type-II right censoring scheme is relatively efficient. The same conclusion is reached based on the MSE. One may apply different censoring schemes and use the above optimality criterion to obtain an optimal censoring scheme.
For the confidence interval estimation of the EPLD parameters θ, β and α under the APC scheme, we find 95% confidence intervals under the three censoring schemes mentioned previously. The intervals considered in the simulation analysis are (i) asymptotic confidence intervals based on the MLE and (ii) credible intervals of the Bayesian method under (a) SEL, (b) the LINEX loss function with v = 0.5 and (c) the LINEX loss function with v = −0.5. The comparisons are based on the average interval length (AIL); the smaller the AIL, the better the interval estimate. The results are reported in Table 10.
From Table 10, for the different censoring schemes and different values of m and T, we notice that the AILs of the Bayesian credible intervals are smaller than those of the MLE intervals for all parameters θ, β and α. Among the Bayesian credible intervals, those under the LINEX loss function with v = 0.5 and v = −0.5 perform better than the credible intervals under SEL since they have smaller AIL values.
6. Conclusion
In this paper, we studied the exponentiated power Lindley distribution (EPLD) under the APC scheme. We considered the problem of estimation using Bayesian and classical (MLE) methods. Point and interval estimates for the unknown parameters of EPLD were obtained; for the intervals we considered credible intervals and the intervals obtained from the asymptotic properties of the MLEs. We discussed the analysis of APC data taken from a practical experiment and compared the performance of the EPLD model with other related models. Different goodness-of-fit measures and nonparametric statistical methods were used, and they indicated that the EPLD under APC fits the real data better than the other related models, as reported in Table 2. A comprehensive simulation study was performed using R to assess and compare the performance of the suggested estimation methods; we examined different sample sizes and different censoring schemes. As a result, Bayesian estimation was found to be superior to the MLE method; hence we recommend it for constructing point and interval estimates for all parameters of EPLD. Finally, we applied an optimality criterion based on achieving the minimum biases and minimum MSEs of the estimated parameters of EPLD. For further work, we recommend improving the lifetime distribution under test and obtaining more powerful generalizations of the Lindley distribution, through which real applications from different fields of science may be fitted. One may also select different censoring schemes and search for an optimal censoring scheme, or consider other estimation methods and compare them to obtain the best point and interval estimates of the parameters.
Acknowledgments
The authors extend their appreciation to the Deanship of Scientific Research at Majmaah University for funding this work under project number No. RGP-2019-2.
Appendix.
From the log-likelihood equation, we compute the second partial derivatives with respect to the parameters θ, β and α as follows:
Funding Statement
The authors extend their appreciation to the Deanship of Scientific Research at Majmaah University for funding this work under project number No. RGP-2019-2.
Disclosure statement
No potential conflict of interest was reported by the author(s).
References
- 1. Abouammoh A.M., Alshangiti A.M. and Ragab I.E., A new generalized Lindley distribution, J. Stat. Comput. Simul. 85 (2015), pp. 3662–3678.
- 2. Ali S., On the Bayesian estimation of the weighted Lindley distribution, J. Stat. Comput. Simul. 85 (2015), pp. 855–880.
- 3. Alkarni S., Extended power Lindley distribution: a new statistical model for non-monotone survival data, Eur. J. Stat. Prob. 3 (2015), pp. 19–34.
- 4. Almetwally E.M. and Almongy H.M., Estimation of the Marshall–Olkin extended Weibull distribution parameters under adaptive censoring schemes, Int. J. Math. Arch. 9 (2018), pp. 95–102.
- 5. Almetwally E.M., Almongy H.M. and ElSherpieny E.A., Adaptive Type-II progressive censoring schemes based on maximum product spacing with application of generalized Rayleigh distribution, J. Data Sci. 17 (2019), pp. 802–831.
- 6. Ashour S.K. and Eltehiwy M.A., Exponentiated power Lindley distribution, J. Adv. Res. 6 (2015), pp. 895–905.
- 7. Aslam M., Nawaz S. and Ali S., Bayesian estimation for the mixture of exponentiated inverted Weibull distribution, J. Natl. Sci. Found. Sri Lanka 46 (2018), pp. 587–604. DOI: 10.4038/jnsfsr.v46i4.8488.
- 8. Balakrishnan N., Progressive censoring methodology: An appraisal (with discussions), Test 16 (2007), pp. 211–296.
- 9. Balakrishnan N. and Aggarwala R., Progressive Censoring: Methods and Applications, Birkhäuser, Boston, 2000.
- 10. Cramer E. and Iliopoulos G., Adaptive progressive Type-II censoring, Test 19 (2010), pp. 342–358.
- 11. Dasgupta R., On the distribution of Burr with applications, Sankhya B 73 (2011), pp. 1–19.
- 12. David H.A. and Nagaraja H.N., Order Statistics, 3rd ed., Wiley, New York, 2003.
- 13. Deniz E. and Ojeda E., The discrete Lindley distribution: properties and applications, J. Stat. Comput. Simul. 81 (2011), pp. 1405–1416.
- 14. Dey S., Ali S. and Kumar D., Weighted inverted Weibull distribution: Properties and estimation, J. Stat. Manag. Syst. 23 (2020), pp. 843–885. DOI: 10.1080/09720510.2019.1669344.
- 15. El-Sherpieny E.S.A., Almetwally E.M. and Muhammed H.Z., Progressive Type-II hybrid censored schemes based on maximum product spacing with application to power Lomax distribution, Physica A Stat. Mech. Appl. 553 (2020), pp. 124–251.
- 16. Ghitany M., Al-Mutairi D., Balakrishnan N. and Al-Enezi I., Power Lindley distribution and associated inference, Comput. Stat. Data Anal. 64 (2013), pp. 20–33.
- 17. Ghitany M., Atieh B. and Nadarajah S., Lindley distribution and its applications, Math. Comput. Simul. 78 (2008), pp. 493–506.
- 18. Haj Ahmad H. and Awad A., Optimal two-stage progressive Type-II censoring with exponential lifetimes, Dirasat 36 (2009).
- 19. Haj Ahmad H. and Awad A., Optimality criterion for progressive Type-II right censoring based on Awad sup-entropy measures under Pareto lifetimes, J. Stat. 16 (2010), pp. 12–27.
- 20. Hastings W.K., Monte Carlo sampling methods using Markov chains and their applications, Biometrika 57 (1970), pp. 97–109.
- 21. Hussain E., The non-linear functions of order statistics and their properties in selected probability models, Ph.D. thesis, Department of Statistics, University of Karachi, Pakistan, 2006.
- 22. Karandikar R.L., On Markov chain Monte Carlo (MCMC) method, Sadhana 31 (2006), pp. 81–104.
- 23. Metropolis N., Rosenbluth A.W., Rosenbluth M.N., Teller A.H. and Teller E., Equations of state calculations by fast computing machines, J. Chem. Phys. 21 (1953), pp. 1087–1092.
- 24. Nadarajah S., Bakouch H.S. and Tahmasbi R., A generalized Lindley distribution, Sankhya Ser. B 73 (2011), pp. 331–359.
- 25. Nassar M. and Abo-Kasem O.E., Estimation of the inverse Weibull parameters under adaptive type-II progressive hybrid censoring scheme, J. Comput. Appl. Math. 315 (2017), pp. 228–239.
- 26. Ng H.K.T. and Chan P.S., Comments on: Progressive censoring methodology: An appraisal, Test 16 (2007), pp. 287–289.
- 27. Ng H.K.T., Kundu D. and Chan P.S., Statistical analysis of exponential lifetimes under adaptive type-II progressive censoring scheme, Nav. Res. Logist. 56 (2010), pp. 687–698.
- 28. Oluyede B. and Yang T., A new class of generalized Lindley distributions with applications, J. Stat. Comput. Simul. 85 (2015), pp. 2072–2100.
- 29. Pradhan B. and Kundu D., On progressively censored generalized exponential distribution, Test 18 (2009), pp. 497–515.
- 30. Salah M., Parameter estimation of the Marshall–Olkin exponential distribution under Type-II hybrid censoring schemes and its applications, J. Stat. Appl. Prob. 5 (2016), pp. 1–8. DOI: 10.18576/jsap/050301.
- 31. Salah M., Bayesian estimation of the scale parameter of the Marshall–Olkin exponential distribution under progressively Type-II censored samples, J. Stat. Theory Appl. 17 (2018), pp. 1–14. DOI: 10.2991/jsta.2018.17.1.1.
- 32. Salah M., On progressive Type-II censored samples from alpha power exponential distribution, J. Math. 2020 (2020), pp. 1–8. Article ID 2584184. DOI: 10.1155/2020/2584184.
- 33. Salah M., El-Morshedy M., Eliwa M.S. and Yousof H.M., Expanded Fréchet model: Mathematical properties, copula, different estimation methods, applications and validation testing, Mathematics 8 (2020), pp. 1–29. DOI: 10.3390/math8111949.
- 34. Tahir M., Abid M., Aslam M. and Ali S., Bayesian estimation of the mixture of Burr Type-XII distributions using doubly censored data, J. King Saud Univ. Sci. 31 (2019), pp. 1137–1150. DOI: 10.1016/j.jksus.2019.04.003.
- 35. Yousaf R., Aslam M. and Ali S., Bayesian estimation of the transmuted Fréchet distribution, Iran. J. Sci. Technol. Trans. A: Sci. 43 (2019), pp. 1629–1641. DOI: 10.1007/s40995-018-0581-1.
- 36. Zakerzadeh H. and Dolati A., Generalized Lindley distribution, J. Math. Exten. 3 (2009), pp. 13–25.




