Abstract
A Bayesian method in which the likelihood is raised to a learning rate parameter η is called a generalized Bayesian method. In this study, joint hybrid censored type I and type II samples from k exponential populations were examined to determine the influence of η on the estimation results. To investigate the joint effect of the learning rate and the loss parameters on the estimation results, we considered two loss functions in the Bayesian approach: the linear-exponential (Linex) and the general entropy loss functions. We then compared the generalized Bayesian method with the traditional Bayesian method. We performed Monte Carlo simulations to compare the performance of the estimators under both losses for different values of η. The effects of the different losses and of the learning rate parameter are also examined using an illustrative example.
Keywords: Generalized Bayes, Learning rate parameter, Exponential distribution, Joint hybrid censoring, Linex loss, General entropy loss
1. Introduction
A Bayesian analysis based on a learning rate parameter (η > 0) is called generalized Bayes (GB). For η = 1, the classical Bayesian framework is recovered; in general, the learning rate enters as a fractional power on the likelihood function of the parameter θ. In other words, if π(θ) is the prior distribution of the parameter θ, then
$$\pi_{\eta}(\theta \mid x) \propto \left[L(\theta \mid x)\right]^{\eta}\, \pi(\theta) \tag{1}$$
is the GB posterior distribution of θ. For more information on the GB approach and how to select the value of the learning rate parameter, see Refs. [[1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13]]. In Refs. [[2], [3], [4], [5]], the Safe Bayes algorithm, based on the minimization of a sequential risk measure, was used to study learning rate selection. A second technique for learning rate selection, using two different information adaptation methods, was presented in Refs. [6,7]. Using different values of the learning rate parameter, the authors in Ref. [11] investigated generalized Bayes estimation (GBE) based on a joint type-II censored sample of k exponential populations. In Ref. [12], on the other hand, a joint type-II censored sample of several exponential populations served as the basis for GB prediction. A study using a jointly type-II censored sample of two exponential populations for Bayes estimation and prediction was reported in Ref. [13]. We choose a range of values for the learning rate parameter to obtain the best estimators for the parameters of the corresponding distributions, and we then compare GB with the traditional Bayesian method. Exact likelihood inference under joint type-II censoring for two populations with two-parameter exponential distributions was studied in Ref. [14]. In Ref. [15], two exponential populations with joint progressive hybrid type-I censoring were studied using both classical and Bayesian estimates. Exact likelihood inference for multiple exponential populations under joint type-II and joint progressive type-II censoring was studied in Refs. [16,17]. Numerous variants of hybrid censoring have been described in the literature; for example, Refs. [18,19] investigated parametric inference using dependent competing risks data with partially observed causes of failure from the MOBK distribution under unified hybrid censoring. The type-I hybrid censoring scheme (HCS-I) terminates the experiment either at a prefixed time T or after a prefixed number of observations r, whichever occurs first. This type of censoring saves time, but HCS-I has the drawback that few (possibly no) failures may be observed before the fixed time T. To circumvent this shortcoming, the type-II hybrid censoring scheme (HCS-II) was presented in Ref. [20], wherein the experiment is concluded at the prefixed time T or upon reaching r observations, whichever occurs last; that is, a minimum of r observations is assured by the conclusion of the experiment. Under HCS-II the experiment continues at least until T; thus, if r observations occur before T, more observations than r may be included in the data, while the experiment continues until r observations are available if they do not occur before T [21]. Let the ordered lifetimes of the experimental units be denoted by W_1 ≤ W_2 ≤ ⋯, with observed values w_1, w_2, …. HCS-I stops the experiment at T_1* = min(W_r, T), where r and T are predetermined. As a result, there are two cases under HCS-I: the first case occurs when W_r ≤ T, so the experiment stops at W_r with exactly r observed failures; the second case occurs when W_r > T, so the experiment stops at T with D observed failures, where D is a random variable less than r. HCS-II stops the experiment at T_2* = max(W_r, T), which indicates that at least r failures are observed by the conclusion of the experiment. HCS-II likewise provides two cases: the first occurs when W_r ≤ T, so the experiment stops at T with D observed failures, where D is a random variable satisfying D ≥ r; the second occurs when W_r > T, so the experiment stops at W_r with exactly r observed failures.
Let us assume that the goods are manufactured by one and the same company on k different production lines. A life test was performed simultaneously on k independent samples of size selected from these lines. To reduce the cost or shorten the duration of the experiment, the experimenter ended the lifetime test experiment at T. In this case, an estimate of the average lifetime of the units produced by these k lines, either as a point or interval estimate, would be of interest (for more information on this topic, see Refs. [[11], [12], [13], [14], [15], [16]]).
Suppose X_{i1}, X_{i2}, …, X_{in_i}, i = 1, …, k, are the k samples, where X_{ij} denotes the lifetime of the j-th specimen of product line i; these are independent and identically distributed (iid) random variables from a population with cumulative distribution function (cdf) F_i(x) and probability density function (pdf) f_i(x). Furthermore, let N = n_1 + ⋯ + n_k denote the total sample size and D denote the total number of observed failures. Then, under the joint hybrid censoring scheme for the k samples, the observable data consist of (W, Z), where W = (W_1, …, W_D) with W_1 ≤ ⋯ ≤ W_D the ordered observed lifetimes, and Z associated with W is defined by
$$Z_{ij} = \begin{cases} 1, & \text{if the } j\text{-th ordered failure } W_j \text{ comes from sample } i,\\ 0, & \text{otherwise,} \end{cases} \qquad i = 1, \dots, k,\; j = 1, \dots, D. \tag{2}$$
Let D_i = Σ_{j=1}^{D} Z_{ij} denote the number of failures observed from sample i, where D = Σ_{i=1}^{k} D_i and 0 ≤ D_i ≤ n_i. Then the joint density function of (W, Z) is given by
$$f(\mathbf{w}, \mathbf{z}) = \prod_{i=1}^{k} \frac{n_i!}{(n_i - D_i)!} \; \prod_{j=1}^{D} \prod_{i=1}^{k} \left[f_i(w_j)\right]^{z_{ij}} \; \prod_{i=1}^{k} \left[\bar{F}_i(T^{*})\right]^{n_i - D_i}, \tag{3}$$

where \(\bar{F}_i = 1 - F_i\) is the survival function of population i, i = 1, …, k, and T* is the time at which the experiment terminates.
In addition, let D be the number of failures up to time T, distributed with the probability mass function (pmf)
$$P(D = d) = \sum_{d_1 + \cdots + d_k = d} \; \prod_{i=1}^{k} \binom{n_i}{d_i} \left[F_i(T)\right]^{d_i} \left[\bar{F}_i(T)\right]^{n_i - d_i}, \qquad d = 0, 1, \dots, N, \tag{4}$$

where the sum runs over all nonnegative integers d_1, …, d_k with d_1 + ⋯ + d_k = d and d_i ≤ n_i.
For HCS-I: the experiment terminates at T_1* = min(W_r, T), and the number of observed failures is min(r, D).

For HCS-II: the experiment terminates at T_2* = max(W_r, T), and the number of observed failures is max(r, D).
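To make the two stopping rules concrete, the following minimal Python sketch simulates one joint hybrid censored sample from k exponential lines. It is an illustration under our own naming (`simulate_joint_hcs`, `scheme`), not the authors' code, and it treats the θ_i as rate parameters, as in Section 2.

```python
import numpy as np

def simulate_joint_hcs(ns, thetas, r, T, scheme="I", rng=None):
    """One joint hybrid censored sample from k exponential lines.

    ns, thetas : sample sizes (n_1,...,n_k) and rates (theta_1,...,theta_k)
    r, T       : prefixed number of failures and prefixed time
    scheme     : "I" stops at min(W_r, T); "II" stops at max(W_r, T)
    """
    rng = np.random.default_rng(rng)
    # pool the k samples, remembering which line each unit came from
    times = np.concatenate([rng.exponential(1.0 / th, n)
                            for n, th in zip(ns, thetas)])
    labels = np.concatenate([np.full(n, i) for i, n in enumerate(ns)])
    order = np.argsort(times)
    w, z = times[order], labels[order]
    w_r = w[r - 1]                          # r-th ordered failure W_r
    t_star = min(w_r, T) if scheme == "I" else max(w_r, T)
    keep = w <= t_star                      # failures seen by the stopping time
    return w[keep], z[keep], t_star

# HCS-I observes min(r, D) failures; HCS-II observes max(r, D)
w, z, t_star = simulate_joint_hcs((10, 10, 10), (0.2, 0.5, 0.9), r=20, T=2.0)
print(len(w), t_star)
```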
The major goal of this study was to determine how the learning rate parameter and the loss parameters, used in conjunction with the joint hybrid censoring schemes HCS-I and HCS-II, affect the estimation results for k exponential populations when censoring is applied to the k combined samples. An earlier version of this article appeared as a preprint [22].
The rest of this article is organized as follows. Section 2 presents the maximum likelihood estimators (MLEs) and the generalized Bayes estimators (GBEs) of the population parameters, using the Linex and general entropy loss functions in the GB method. Section 3 contains a numerical study of the results of Section 2. Finally, Section 4 concludes the paper.
2. Estimation of the parameters
In this section, we consider k exponential distributions under the joint hybrid censoring schemes HCS-I and HCS-II, where censoring is performed on the combined k samples. We then study the MLE and the GBE with learning rate parameter η, using the Linex and general entropy loss functions.
The populations studied here are exponential with pdf and cdf, respectively,
$$f_i(x) = \theta_i e^{-\theta_i x}, \qquad F_i(x) = 1 - e^{-\theta_i x}, \qquad x > 0,\; \theta_i > 0,\; i = 1, \dots, k. \tag{5}$$
Substituting (5) into (3), we obtain the likelihood function,
$$L(\theta_1, \dots, \theta_k \mid \mathbf{w}, \mathbf{z}) \propto \prod_{i=1}^{k} \theta_i^{D_i}\, e^{-\theta_i U_i}, \tag{6}$$

where \(U_i = \sum_{j=1}^{D} z_{ij} w_j + (n_i - D_i)\, T^{*}\), i = 1, …, k, and T* (= T_1* under HCS-I, = T_2* under HCS-II) is the termination time of the experiment.
The log-likelihood function is given by

$$\ell(\theta_1, \dots, \theta_k) = \text{constant} + \sum_{i=1}^{k} \left( D_i \ln \theta_i - \theta_i U_i \right).$$
2.1. Maximum likelihood estimation
Differentiating the log-likelihood function with respect to θ_i and equating it to zero, the MLE of θ_i, for i = 1, …, k, under HCS-I is given by
$$\hat{\theta}_i = \frac{D_i}{U_i}, \qquad U_i = \sum_{j=1}^{D} z_{ij} w_j + (n_i - D_i)\, T_1^{*}, \qquad T_1^{*} = \min(w_r, T). \tag{7}$$
The MLE of θ_i, for i = 1, …, k, under HCS-II is given by
$$\hat{\theta}_i = \frac{D_i}{U_i}, \qquad U_i = \sum_{j=1}^{D} z_{ij} w_j + (n_i - D_i)\, T_2^{*}, \qquad T_2^{*} = \max(w_r, T). \tag{8}$$
Remark 1. The MLEs of θ_1, …, θ_k exist only if at least k failures are observed, with at least one failure observed from each sample; that is, D_i ≥ 1 for every i = 1, …, k.
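Because the exponential MLEs (7)–(8) reduce to the ratios D_i/U_i, they can be computed in a few lines from a sample produced by the sketch above; the function name `joint_hcs_mle` is ours, and the snippet assumes the output format of `simulate_joint_hcs`.

```python
import numpy as np

def joint_hcs_mle(w, z, ns, t_star):
    """MLEs theta_hat_i = D_i / U_i from a joint hybrid censored sample,
    with U_i = (observed failure times from line i) + (n_i - D_i) * t_star."""
    theta_hat = []
    for i, n_i in enumerate(ns):
        d_i = np.sum(z == i)                 # failures observed from line i
        u_i = w[z == i].sum() + (n_i - d_i) * t_star
        theta_hat.append(d_i / u_i)          # undefined if d_i = 0 (Remark 1)
    return np.array(theta_hat)

theta_hat = joint_hcs_mle(w, z, (10, 10, 10), t_star)
```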
As described in Section 3, we computed the MLEs to compare their results with those of the Bayesian estimators for different values of the learning rate parameter and of the loss function parameters.
2.2. Generalized Bayes estimation
Since the parameters θ_1, …, θ_k are assumed unknown, we take their conjugate priors to be independent gamma distributions, θ_i ∼ Gamma(a_i, b_i). Thus, we obtain the joint prior distribution of θ = (θ_1, …, θ_k) as
$$\pi(\boldsymbol{\theta}) = \prod_{i=1}^{k} g_i(\theta_i), \tag{9}$$
where
$$g_i(\theta_i) = \frac{b_i^{a_i}}{\Gamma(a_i)}\, \theta_i^{a_i - 1} e^{-b_i \theta_i}, \qquad \theta_i > 0,\; a_i, b_i > 0, \tag{10}$$
and Γ(·) denotes the complete gamma function.
After raising (6) to the fractional power η and combining it with (9), the generalized Bayes posterior distribution of θ is
$$\pi_{\eta}(\boldsymbol{\theta} \mid \mathbf{w}, \mathbf{z}) = \prod_{i=1}^{k} \frac{(b_i + \eta U_i)^{a_i + \eta D_i}}{\Gamma(a_i + \eta D_i)}\, \theta_i^{\,a_i + \eta D_i - 1}\, e^{-(b_i + \eta U_i)\theta_i}; \tag{11}$$

that is, a posteriori the θ_i are independent with θ_i ∼ Gamma(a_i + ηD_i, b_i + ηU_i).
Notice that the generalized posterior density remains a product of gamma densities because the gamma prior is conjugate for the exponential likelihood.
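A small numerical illustration of (11): with a gamma prior, the GB posterior merely tempers the sufficient statistics by η, so the posterior mean shrinks toward the prior mean a_i/b_i as η → 0. The helper name `gb_posterior` and the numbers below are ours, chosen only for illustration.

```python
def gb_posterior(a, b, d, u, eta):
    """Parameters of the GB posterior (11) for one population:
    Gamma(a + eta*d, b + eta*u); eta = 1 recovers the classical posterior."""
    return a + eta * d, b + eta * u

# prior mean a/b = 0.2; the posterior mean moves toward it as eta decreases
for eta in (0.1, 0.5, 1.0):
    alpha, beta = gb_posterior(a=2.0, b=10.0, d=5, u=20.0, eta=eta)
    print(eta, alpha / beta)    # GB posterior mean of theta_i
```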
We consider two loss functions, namely the Linex and the general entropy loss functions, to investigate the influence of the learning rate together with the loss parameters on the estimation results.
(i) The Linex loss function, which is asymmetric, is given by

$$L_1(\hat{\theta}, \theta) = e^{c(\hat{\theta} - \theta)} - c(\hat{\theta} - \theta) - 1, \qquad c \neq 0.$$

The Linex loss function, introduced in Ref. [23], gives differing weights to overestimation and underestimation through the sign and magnitude of its shape parameter c.
(ii) The general entropy (GE) loss function is

$$L_2(\hat{\theta}, \theta) = \left(\frac{\hat{\theta}}{\theta}\right)^{q} - q \ln\!\left(\frac{\hat{\theta}}{\theta}\right) - 1, \qquad q \neq 0.$$

This loss function was used in Refs. [24,25]; being expressed in terms of the ratio θ̂/θ, it is well suited to practical situations. Under the Linex loss function, the GB estimators of θ_i are given by
$$\hat{\theta}_{i,\mathrm{L}} = -\frac{1}{c} \ln E_{\eta}\!\left[e^{-c\theta_i}\right] = \frac{a_i + \eta D_i}{c} \ln\!\left(1 + \frac{c}{b_i + \eta U_i}\right), \qquad i = 1, \dots, k, \tag{12}$$

where E_η denotes expectation with respect to the GB posterior (11).
Under the general entropy (GE) loss function, the GB estimators of θ_i are given by
$$\hat{\theta}_{i,\mathrm{E}} = \left(E_{\eta}\!\left[\theta_i^{-q}\right]\right)^{-1/q} = \frac{1}{b_i + \eta U_i} \left[\frac{\Gamma(a_i + \eta D_i)}{\Gamma(a_i + \eta D_i - q)}\right]^{1/q}, \qquad a_i + \eta D_i > q. \tag{13}$$
The GE loss function includes some notable special cases: substituting q = 1 and q = −1 into (13) yields the Bayes estimators of θ_i under the weighted squared error loss function and the squared error loss function (the posterior mean), respectively.
Moreover, if Jeffreys' non-informative priors are used (a_i = b_i = 0), the GB posterior (11) reduces to Gamma(ηD_i, ηU_i), and substituting q = −1 into (13) immediately recovers the MLEs in (7) and (8), whatever the value of η.
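The closed forms (12) and (13) are cheap to evaluate; the following hedged sketch (function names are ours) computes both GBEs for one population from (a_i, b_i, D_i, U_i), with c and q the Linex and GE loss parameters. The last line checks the remark above: with a_i = b_i = 0 and q = −1, the GE estimator equals the MLE D_i/U_i for any η.

```python
import math

def gbe_linex(a, b, d, u, eta, c):
    """GB estimator under Linex loss, eq. (12); requires c > -(b + eta*u)."""
    alpha, beta = a + eta * d, b + eta * u
    return (alpha / c) * math.log(1.0 + c / beta)

def gbe_entropy(a, b, d, u, eta, q):
    """GB estimator under general entropy loss, eq. (13);
    requires a + eta*d > q."""
    alpha, beta = a + eta * d, b + eta * u
    return (1.0 / beta) * (math.gamma(alpha) / math.gamma(alpha - q)) ** (1.0 / q)

print(gbe_entropy(a=0.0, b=0.0, d=5, u=20.0, eta=0.7, q=-1))  # 0.25 = 5/20
```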
In the GB approach, the estimators under HCS-I are obtained by setting T* = w_r in U_i for case one (w_r ≤ T) and T* = T for case two (w_r > T), respectively; under HCS-II they are obtained by setting T* = T for case one (w_r ≤ T) and T* = w_r for case two (w_r > T).
3. Numerical study
The performance of the estimation methods derived in the previous section is evaluated using Monte Carlo simulations; this section reports the simulation results, together with an example illustrating the different methods.
3.1. Simulation study
The simulation study is designed and carried out as follows.
- Generate three samples of sizes n_1, n_2, n_3 from different exponential distributions, taking the assumed parameter values as the means of the prior distributions of the respective parameters; then combine the three samples into one ordered sample.
- Repeat the generation of the combined sample B times (samples that do not fulfil the condition in Remark 1 are discarded, so that the number of retained replications becomes B*).
- Fix two values: the number of observed failures r from the combined sample and the time T.
- Under HCS-I the experiment is terminated at T_1* = min(w_r, T), so the number of observations is d_1 = min(r, D); under HCS-II the experiment is terminated at T_2* = max(w_r, T), so the number of observations is d_2 = max(r, D), where D, d_1 and d_2 are random.
- Under HCS-I, R_T, D̄ and (D̄_1, D̄_2, D̄_3) are computed, where R_T is the proportion of replications stopped at T, D̄ is the mean number of observed failures up to T_1*, and (D̄_1, D̄_2, D̄_3) are the average numbers of observed failures from the three samples over both cases of HCS-I. With the numbers of observations from the three samples determined, the MLEs θ̂_i, i = 1, 2, 3, are computed from (7) by averaging the results of the B* replications, and their estimated risk (Er) is obtained as Er_i = (1/B*) Σ (θ̂_i − θ_i)².
- R_T, D̄, (D̄_1, D̄_2, D̄_3) and the MLEs are computed analogously for HCS-II, where R_T is the proportion of replications stopped at T, D̄ is the mean number of observed failures up to T_2*, and the MLEs θ̂_i, i = 1, 2, 3, are computed from (8).
- Compute the GBEs under the Linex and GE loss functions from (12) and (13), using different values of η and of the loss parameters, for both HCS-I and HCS-II (a runnable sketch of this loop follows the list; see also the Appendix).
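A hedged Python condensation of this loop, building on the `simulate_joint_hcs` sketch above (the function name `mc_mle` and the seed handling are ours, not the authors' code), shows how the Table 1 and Table 2 summaries R_T, D̄, the averaged MLEs and Er could be produced:

```python
import numpy as np

def mc_mle(ns, thetas, r, T, scheme="I", B=10_000, seed=1):
    """Monte Carlo summaries of the MLEs: R_T, mean D, averaged MLEs, Er."""
    rng = np.random.default_rng(seed)
    thetas = np.asarray(thetas)
    mles, d_tot, stopped_at_T = [], [], 0
    for _ in range(B):
        w, z, t_star = simulate_joint_hcs(ns, thetas, r, T, scheme, rng)
        d = np.array([np.sum(z == i) for i in range(len(ns))])
        if np.any(d == 0):              # Remark 1: need a failure per line
            continue
        stopped_at_T += (t_star == T)   # for the ratio R_T
        d_tot.append(len(w))
        u = np.array([w[z == i].sum() + (n - di) * t_star
                      for i, (n, di) in enumerate(zip(ns, d))])
        mles.append(d / u)              # theta_hat_i = D_i / U_i, eq. (7)/(8)
    mles = np.asarray(mles)
    return (stopped_at_T / len(mles),               # R_T
            np.mean(d_tot),                         # mean observed failures
            mles.mean(axis=0),                      # averaged MLEs
            np.mean((mles - thetas) ** 2, axis=0))  # estimated risk Er

print(mc_mle((10, 10, 10), (0.2, 0.5, 0.9), r=20, T=2.0, scheme="I"))
```

Running this with the settings of the first row of Table 1 should give values of roughly the same magnitude as those reported there.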
We chose the exponential parameters to be (0.2, 0.5, 0.9), with the hyperparameters obtained by matching the means of the gamma priors in (10) to these values; for the Monte Carlo simulations we used B replications. For the MLEs we considered several options for the sample sizes (n_1, n_2, n_3) of the three populations, with (r, T) = (20, 2), (20, 3), (25, 4), (25, 5) when N = 30 and (r, T) = (40, 2), (40, 3), (50, 4), (50, 5) when N = 60. For the Bayes study, the sample sizes are (10, 10, 10), with (r, T) = (20, 2), (20, 3), (25, 4), (25, 5). The MLE results are shown in Table 1 for HCS-I and Table 2 for HCS-II.
Table 1.
MLEs under HCS-I.

| (n₁, n₂, n₃) | (r, T) | R_T | D̄ | (D̄₁, D̄₂, D̄₃) | (θ̂₁, θ̂₂, θ̂₃) | Er |
|---|---|---|---|---|---|---|
| (10,10,10) | (20, 2) | 0.73 | 16.9 | (3.3,6.2,8.3) | (0.345, 0.625, 1.070) | (0.460, 0.300, 0.443) |
| (20, 3) | 0.17 | 18.2 | (3.8,7,8.9) | (0.286, 0.604, 1.046) | (0.235, 0.274, 0.408) | |
| (25, 4) | 0.62 | 22.7 | (5.3,8.5,9.7) | (0.256, 0.590, 1.024) | (0.177, 0.246, 0.385) | |
| (25, 5) | 0.29 | 23.2 | (5.8,8.9,9.8) | (0.250, 0.583, 1.022) | (0.123, 0.231, 0.368) | |
| (8, 9,13) | (20, 2) | 0.55 | 17.4 | (2.6,5.5,10.6) | (0.427, 0.643, 1.022) | (1.126, 0.336, 0.352) |
| (20, 3) | 0.06 | 18.5 | (2.8,5.9,11.2) | (0.372, 0.620, 1.013) | (1.464, 0.316, 0.349) | |
| (25, 4) | 0.41 | 23.1 | (4.1,7.6,12.5) | (0.280, 0.602, 1.003) | (0.192, 0.261, 0.318) | |
| (25, 5) | 0.15 | 23.5 | (4.4,7.8,12.6) | (0.271, 0.595, 0.996) | (0.218, 0.255, 0.316) | |
| (12,11,7) | (20, 2) | 0.86 | 16.1 | (3.9,6.9,5.8) | (0.299, 0.613, 1.155) | (0.670, 0.269, 0.618) |
| (20, 3) | 0.33 | 17.9 | (4.9,8.1,6.3) | (0.263, 0.593, 1.115) | (0.219, 0.245, 0.556) | |
| (25, 4) | 0.77 | 22.1 | (6.5,9.5,6.8) | (0.243, 0.583, 1.088) | (0.108, 0.220, 0.518) | |
| (25, 5) | 0.46 | 22.9 | (7.2,9.9,6.9) | (0.239, 0.578, 1.063) | (0.103, 0.212, 0.488) | |
| (20,20,20) | (40, 2) | 0.85 | 35 | (6.6,12.6,16.7) | (0.244, 0.557, 0.985) | (0.106, 0.171, 0.261) |
| (40, 3) | 0.12 | 37.9 | (7.7,14.2,17.8) | (0.232, 0.547, 0.978) | (0.094, 0.155, 0.254) | |
| (50, 4) | 0.73 | 46.6 | (10.9,17.2,19.4) | (0.226, 0.546, 0.968) | (0.074, 0.144, 0.236) | |
| (50, 5) | 0.30 | 47.8 | (11.8,17.9,19.6) | (0.222, 0.541, 0.959) | (0.070, 0.141, 0.231) | |
| (16,18,26) | (40, 2) | 0.63 | 36.4 | (5.1,11.1,21.4) | (0.262, 0.562, 0.963) | (0.269, 0.183, 0.224) |
| (40, 3) | 0.02 | 38.2 | (5.6,12,22.4) | (0.248, 0.557, 0.957) | (0.122, 0.179, 0.218) | |
| (50, 4) | 0.47 | 47.4 | (8.5,15.3,25.1) | (0.232, 0.549, 0.952) | (0.088, 0.154, 0.205) | |
| (50, 5) | 0.11 | 48.3 | (8.9,15.6,25.3) | (0.229, 0.546, 0.949) | (0.086, 0.152, 0.200) | |
| (24,22,14) | (40, 2) | 0.96 | 33.1 | (7.9,13.9,11.7) | (0.233, 0.554, 1.023) | (0.091, 0.162, 0.336) |
| (40, 3) | 0.32 | 37.3 | (10,16.4,12.8) | (0.225, 0.544, 1.010) | (0.076, 0.146, 0.316) | |
| (50, 4) | 0.89 | 45.2 | (13.1,19,13.6) | (0.220, 0.541, 0.996) | (0.063, 0.133, 0.304) | |
| (50, 5) | 0.54 | 47.2 | (14.7,20,13.8) | (0.216, 0.537, 0.984) | (0.059, 0.129, 0.293) |
Table 2.
MLEs under HCS-II.

| (n₁, n₂, n₃) | (r, T) | R_T | D̄ | (D̄₁, D̄₂, D̄₃) | (θ̂₁, θ̂₂, θ̂₃) | Er |
|---|---|---|---|---|---|---|
| (10,10,10) | (20, 2) | 0.27 | 21 | (4,7.3,9) | (0.282, 0.599, 1.043) | (0.397, 0.267, 0.406) |
| (20, 3) | 0.83 | 22.3 | (4.6,7.9,9.4) | (0.266, 0.598, 1.041) | (0.154, 0.246, 0.390) | |
| (25, 4) | 0.38 | 25.8 | (6.2,9.2,9.9) | (0.246, 0.578, 1.009) | (0.124, 0.225, 0.361) | |
| (25, 5) | 0.71 | 26.3 | (6.7,9.4,9.9) | (0.245, 0.581, 1.004) | (0.114, 0.222, 0.355) | |
| (8, 9,13) | (20, 2) | 0.45 | 21.3 | (3,6.2,11.4) | (0.336, 0.625, 1.017) | (0.487, 0.310, 0.352) |
| (20, 3) | 0.94 | 23 | (3.6,7,12.2) | (0.302, 0.610, 1.009) | (0.272, 0.273, 0.320) | |
| (25, 4) | 0.59 | 26.1 | (4.8,8.1,12.8) | (0.264, 0.596, 0.990) | (0.155, 0.252, 0.302) | |
| (25, 5) | 0.85 | 26.7 | (5.2,8.4,12.9) | (0.261, 0.591, 0.987) | (0.145, 0.242, 0.306) | |
| (12,11,7) | (20, 2) | 0.14 | 20.8 | (5.2,8.4,6.5) | (0.254, 0.583, 1.106) | (0.135, 0.239, 0.531) |
| (20, 3) | 0.67 | 21.8 | (5.7,8.8,6.6) | (0.250, 0.587, 1.095) | (0.122, 0.230, 0.517) | |
| (25, 4) | 0.23 | 25.9 | (8,10.3,6.9) | (0.233, 0.570, 1.063) | (0.097, 0.210, 0.507) | |
| (25, 5) | 0.54 | 26.1 | (8.2,10.4,7) | (0.234, 0.567, 1.058) | (0.096, 0.199, 0.502) | |
| (20,20,20) | (40, 2) | 0.15 | 41.3 | (7.9,14.5,17.9) | (0.234, 0.548, 0.976) | (0.095, 0.161, 0.256) |
| (40, 3) | 0.88 | 43.9 | (9.1,15.6,18.7) | (0.230, 0.550, 0.973) | (0.084, 0.152, 0.243) | |
| (50, 4) | 0.27 | 51.2 | (12.4,18.2,19.7) | (0.221, 0.540, 0.959) | (0.069, 0.139, 0.233) | |
| (50, 5) | 0.70 | 52.1 | (13.1,18.6,19.8) | (0.221, 0.541, 0.956) | (0.068, 0.136, 0.228) | |
| (16,18,26) | (40, 2) | 0.37 | 41.8 | (5.8,12.2,22.6) | (0.248, 0.557, 0.960) | (0.117, 0.177, 0.219) |
| (40, 3) | 0.98 | 44.7 | (7.3,14,24.3) | (0.240, 0.556, 0.958) | (0.100, 0.165, 0.207) | |
| (50, 4) | 0.53 | 51.6 | (9.3,16,25.5) | (0.229, 0.545, 0.949) | (0.084, 0.148, 0.199) | |
| (50, 5) | 0.89 | 52.9 | (10.2,16.6,25.7) | (0.228, 0.545, 0.946) | (0.079, 0.147, 0.194) | |
| (24,22,14) | (40, 2) | 0.04 | 41 | (10.4,16.7,12.9) | (0.224, 0.538, 1.005) | (0.076, 0.145, 0.302) |
| (40, 3) | 0.68 | 42.7 | (11.2,17.5,13.1) | (0.224, 0.545, 1.001) | (0.073, 0.144, 0.309) | |
| (50, 4) | 0.11 | 50.9 | (15.7,20.5,13.9) | (0.214, 0.535, 0.977) | (0.058, 0.128, 0.287) | |
| (50, 5) | 0.46 | 51.5 | (16.1,20.6,13.9) | (0.215, 0.535, 0.977) | (0.059, 0.129, 0.291) |
Three values of the learning rate parameter η were used. Note that, for the first value of η, the GE loss parameter took the values q = −1.5, −1, −0.85, −0.75 and the Linex loss parameter the values c = −0.5, −0.1, 0.3, 0.5; for the second, q = −1, −0.5, −0.25, 0.1 and c = −0.1, 0.7, 1, 4; and finally, for the third, q = −1, 0.1, 0.65, 1 and c = 0.5, 1.5, 3.5, 8.5. The results of the Bayesian estimators of θ_1, θ_2 and θ_3 for HCS-I and HCS-II are shown in Table 3, Table 4, Table 5, Table 6.
Table 3.
GB estimators under HCS-I, GE loss.
| (n₁, n₂, n₃) | (r, T) | Estimates | Er | Estimates | Er |
|---|---|---|---|---|---|
| (10,10,10) | (20, 2) | (0.249, 0.608, 1.014) | (0.058, 0.125, 0.150) | (0.214, 0.531, 0.931) | (0.029, 0.079, 0.093) |
| (20, 3) | (0.245, 0.598, 0.999) | (0.052, 0.122, 0.153) | (0.210, 0.524, 0.926) | (0.029, 0.080, 0.098) | |
| (25, 4) | (0.244, 0.586, 1.001) | (0.049, 0.125, 0.137) | (0.212, 0.526, 0.922) | (0.030, 0.080, 0.097) | |
| (25, 5) | (0.239, 0.581, 0.988) | (0.052, 0.123, 0.151) | (0.210, 0.525, 0.921) | (0.031, 0.080, 0.096) | |
| (20, 2) | (0.204, 0.505, 0.905) | (0.024, 0.075, 0.095) | (0.193, 0.483, 0.878) | (0.024, 0.071, 0.094) | |
| (20, 3) | (0.202, 0.502, 0.898) | (0.025, 0.075, 0.094) | (0.192, 0.484, 0.873) | (0.028, 0.074, 0.095) | |
| (25, 4) | (0.202, 0.501, 0.893) | (0.028, 0.077, 0.093) | (0.194, 0.486, 0.873) | (0.028, 0.074, 0.094) | |
| (25, 5) | (0.202, 0.509, 0.893) | (0.028, 0.075, 0.094) | (0.191, 0.486, 0.871) | (0.028, 0.075, 0.096) | |
| (20, 2) | (0.240, 0.564, 0.976) | (0.075, 0.175, 0.228) | (0.217, 0.534, 0.940) | (0.064, 0.155, 0.211) | |
| (20, 3) | (0.233, 0.557, 0.959) | (0.073, 0.164, 0.227) | (0.211, 0.520, 0.932) | (0.064, 0.148, 0.207) | |
| (25, 4) | (0.229, 0.556, 0.952) | (0.070, 0.157, 0.214) | (0.212, 0.534, 0.925) | (0.061, 0.144, 0.196) | |
| (25, 5) | (0.225, 0.552, 0.959) | (0.068, 0.153, 0.213) | (0.207, 0.515, 0.924) | (0.060, 0.139, 0.196) | |
| (20, 2) | (0.203, 0.509, 0.917) | (0.060, 0.147, 0.205) | (0.186, 0.479, 0.878) | (0.062, 0.144, 0.199) | |
| (20, 3) | (0.200, 0.510, 0.917) | (0.061, 0.146, 0.200) | (0.185, 0.473, 0.872) | (0.062, 0.139, 0.193) | |
| (25, 4) | (0.203, 0.503, 0.901) | (0.061, 0.136, 0.191) | (0.187, 0.496, 0.865) | (0.057, 0.133, 0.187) | |
| (25, 5) | (0.199, 0.505, 0.890) | (0.058, 0.134, 0.190) | (0.191, 0.495, 0.967) | (0.057, 0.132, 0.189) | |
| (20, 2) | (0.254, 0.589, 1.012) | (0.106, 0.216, 0.304) | (0.220, 0.528, 0.951) | (0.087, 0.187, 0.267) | |
| (20, 3) | (0.197, 0.498, 0.999) | (0.101, 0.203, 0.283) | (0.214, 0.531, 0.937) | (0.086, 0.184, 0.257) | |
| (25, 4) | (0.238, 0.561, 0.979) | (0.090, 0.187, 0.269) | (0.215, 0.526, 0.936) | (0.079, 0.167, 0.246) | |
| (25, 5) | (0.235, 0.565, 0.988) | (0.088, 0.183, 0.270) | (0.210, 0.522, 0.913) | (0.075, 0.161, 0.235) | |
| (20, 2) | (0.198, 0.510, 0.913) | (0.084, 0.180, 0.259) | (0.183, 0.490, 0.899) | (0.083, 0.179, 0.254) | |
| (20, 3) | (0.196, 0.497, 0.921) | (0.081, 0.173, 0.251) | (0.185, 0.496, 0.888) | (0.080, 0.169, 0.247) | |
| (25, 4) | (0.197, 0.522, 0.901) | (0.073, 0.161, 0.237) | (0.193, 0.494, 0.876) | (0.074, 0.159, 0.227) | |
| (25, 5) | (0.200, 0.503, 0.899) | (0.074, 0.155, 0.240) | (0.191, 0.492, 0.873) | (0.071, 0.153, 0.232) | |
| (20, 2) | (0.222, 0.543, 0.961) | (0.010, 0.043, 0.090) | (0.194, 0.497, 0.904) | (0.009, 0.037, 0.081) | |
| (20, 3) | (0.242, 0.559, 0.991) | (0.012, 0.045, 0.093) | (0.214, 0.536, 0.935) | (0.009, 0.037, 0.077) | |
| (25, 4) | (0.219, 0.549, 0.961) | (0.008, 0.035, 0.078) | (0.200, 0.510, 0.936) | (0.007, 0.030, 0.073) | |
| (25, 5) | (0.220, 0.551, 0.968) | (0.008, 0.035, 0.078) | (0.212, 0.518, 0.923) | (0.007, 0.029, 0.066) | |
| (20, 2) | (0.181, 0.487, 0.892) | (0.009, 0.038, 0.076) | (0.168, 0.462, 0.855) | (0.009, 0.036, 0.070) | |
| (20, 3) | (0.202, 0.504, 0.920) | (0.008, 0.034, 0.073) | (0.194, 0.495, 0.916) | (0.008, 0.034, 0.072) | |
| (25, 4) | (0.187, 0.505, 0.894) | (0.006, 0.029, 0.065) | (0.181, 0.499, 0.886) | (0.006, 0.029, 0.064) | |
| (25, 5) | (0.204, 0.504, 0.924) | (0.006, 0.028, 0.067) | (0.192, 0.490, 0.885) | (0.006, 0.026, 0.060) | |
Table 4.
GB estimators under HCS-II, GE loss.
| (n₁, n₂, n₃) | (r, T) | Estimates | Er | Estimates | Er |
|---|---|---|---|---|---|
| (10,10,10) | (20, 2) | (0.245, 0.599, 1.008) | (0.051, 0.120, 0.141) | (0.210, 0.523, 0.927) | (0.031, 0.085, 0.098) |
| (20, 3) | (0.246, 0.594, 1.009) | (0.053, 0.127, 0.137) | (0.212, 0.525, 0.927) | (0.032, 0.084, 0.097) | |
| (25, 4) | (0.238, 0.584, 0.988) | (0.052, 0.119, 0.145) | (0.207, 0.520, 0.920) | (0.033, 0.084, 0.097) | |
| (25, 5) | (0.238, 0.585, 0.988) | (0.052, 0.116, 0.146) | (0.210, 0.524, 0.914) | (0.033, 0.080, 0.099) | |
| (20, 2) | (0.201, 0.501, 0.903) | (0.027, 0.077, 0.095) | (0.191, 0.479, 0.872) | (0.029, 0.075, 0.095) | |
| (20, 3) | (0.204, 0.502, 0.903) | (0.028, 0.078, 0.092) | (0.194, 0.484, 0.873) | (0.028, 0.076, 0.092) | |
| (25, 4) | (0.201, 0.501, 0.895) | (0.031, 0.076, 0.095) | (0.190, 0.487, 0.872) | (0.031, 0.077, 0.097) | |
| (25, 5) | (0.201, 0.500, 0.886) | (0.031, 0.075, 0.095) | (0.192, 0.483, 0.867) | (0.031, 0.075, 0.096) | |
| (20, 2) | (0.236, 0.561, 0.978) | (0.076, 0.165, 0.223) | (0.212, 0.524, 0.930) | (0.067, 0.151, 0.204) | |
| (20, 3) | (0.232, 0.555, 0.947) | (0.075, 0.164, 0.217) | (0.214, 0.536, 0.920) | (0.065, 0.150, 0.199) | |
| (25, 4) | (0.226, 0.540, 0.958) | (0.070, 0.152, 0.210) | (0.209, 0.493, 0.925) | (0.063, 0.138, 0.196) | |
| (25, 5) | (0.224, 0.555, 0.942) | (0.070, 0.151, 0.212) | (0.212, 0.513, 0.908) | (0.062, 0.135, 0.192) | |
| (20, 2) | (0.201, 0.501, 0.905) | (0.063, 0.143, 0.198) | (0.186, 0.483, 0.870) | (0.063, 0.144, 0.197) | |
| (20, 3) | (0.205, 0.514, 0.901) | (0.063, 0.144, 0.196) | (0.190, 0.492, 0.874) | (0.063, 0.140, 0.190) | |
| (25, 4) | (0.201, 0.507, 0.902) | (0.062, 0.134, 0.194) | (0.188, 0.497, 0.890) | (0.059, 0.130, 0.189) | |
| (25, 5) | (0.203, 0.502, 0.886) | (0.060, 0.131, 0.192) | (0.193, 0.483, 0.923) | (0.060, 0.127, 0.191) | |
| (20, 2) | (0.245, 0.569, 0.994) | (0.101, 0.199, 0.288) | (0.208, 0.533, 0.948) | (0.085, 0.182, 0.260) | |
| (20, 3) | (0.246, 0.580, 0.984) | (0.098, 0.196, 0.276) | (0.214, 0.540, 0.931) | (0.083, 0.177, 0.252) | |
| (25, 4) | (0.238, 0.569, 0.964) | (0.088, 0.184, 0.260) | (0.213, 0.526, 0.908) | (0.077, 0.162, 0.239) | |
| (25, 5) | (0.236, 0.551, 0.991) | (0.086, 0.175, 0.264) | (0.209, 0.514, 0.930) | (0.075, 0.157, 0.242) | |
| (20, 2) | (0.198, 0.506, 0.911) | (0.082, 0.173, 0.245) | (0.183, 0.489, 0.885) | (0.081, 0.170, 0.245) | |
| (20, 3) | (0.199, 0.507, 0.916) | (0.080, 0.162, 0.242) | (0.196, 0.494, 0.885) | (0.081, 0.164, 0.239) | |
| (25, 4) | (0.200, 0.504, 0.899) | (0.073, 0.154, 0.235) | (0.191, 0.490, 0.875) | (0.071, 0.150, 0.231) | |
| (25, 5) | (0.202, 0.507, 0.898) | (0.073, 0.153, 0.232) | (0.196, 0.493, 0.871) | (0.071, 0.150, 0.231) | |
| (20, 2) | (0.236, 0.554, 0.977) | (0.009, 0.037, 0.087) | (0.214, 0.530, 0.934) | (0.007, 0.033, 0.074) | |
| (20, 3) | (0.213, 0.529, 0.984) | (0.006, 0.030, 0.081) | (0.197, 0.505, 0.916) | (0.006, 0.027, 0.069) | |
| (25, 4) | (0.222, 0.562, 0.971) | (0.005, 0.032, 0.079) | (0.209, 0.523, 0.930) | (0.005, 0.027, 0.067) | |
| (25, 5) | (0.222, 0.543, 0.960) | (0.005, 0.030, 0.074) | (0.207, 0.524, 0.929) | (0.005, 0.027, 0.068) | |
| (20, 2) | (0.196, 0.508, 0.917) | (0.006, 0.029, 0.069) | (0.192, 0.488, 0.905) | (0.006, 0.029, 0.069) | |
| (20, 3) | (0.183, 0.489, 0.896) | (0.005, 0.027, 0.064) | (0.176, 0.475, 0.880) | (0.005, 0.026, 0.064) | |
| (25, 4) | (0.198, 0.506, 0.915) | (0.005, 0.025, 0.062) | (0.193, 0.503, 0.880) | (0.004, 0.024, 0.058) | |
| (25, 5) | (0.197, 0.505, 0.911) | (0.004, 0.025, 0.065) | (0.187, 0.492, 0.877) | (0.004, 0.024, 0.059) | |
Table 5.
GB estimators under HCS-I, Linex loss.
| (n₁, n₂, n₃) | (r, T) | Estimates | Er | Estimates | Er |
|---|---|---|---|---|---|
| (10,10,10) | (20, 2) | (0.222, 0.584, 1.026) | (0.037, 0.115, 0.166) | (0.215, 0.539, 0.947) | (0.031, 0.086, 0.110) |
| (20, 3) | (0.219, 0.570, 1.018) | (0.035, 0.112, 0.158) | (0.213, 0.535, 0.940) | (0.028, 0.084, 0.109) | |
| (25, 4) | (0.219, 0.568, 1.004) | (0.036, 0.110, 0.160) | (0.214, 0.542, 0.935) | (0.031, 0.082, 0.105) | |
| (25, 5) | (0.217, 0.562, 1.006) | (0.036, 0.110, 0.153) | (0.211, 0.528, 0.933) | (0.032, 0.086, 0.105) | |
| (20, 2) | (0.209, 0.501, 0.882) | (0.025, 0.069, 0.090) | (0.206, 0.491, 0.856) | (0.023, 0.067, 0.092) | |
| (20, 3) | (0.207, 0.499, 0.877) | (0.026, 0.071, 0.090) | (0.204, 0.486, 0.853) | (0.025, 0.069, 0.096) | |
| (25, 4) | (0.207, 0.509, 0.875) | (0.029, 0.072, 0.089) | (0.204, 0.496, 0.855) | (0.027, 0.072, 0.098) | |
| (25, 5) | (0.207, 0.499, 0.880) | (0.028, 0.072, 0.094) | (0.203, 0.490, 0.852) | (0.028, 0.070, 0.098) | |
| (10,10,10) | (20, 2) | (0.243, 0.574, 0.988) | (0.077, 0.178, 0.237) | (0.230, 0.537, 0.922) | (0.067, 0.150, 0.193) |
| (20, 3) | (0.234, 0.563, 0.970) | (0.074, 0.168, 0.228) | (0.225, 0.532, 0.898) | (0.067, 0.144, 0.187) | |
| (25, 4) | (0.232, 0.558, 0.958) | (0.070, 0.159, 0.220) | (0.223, 0.534, 0.907) | (0.065, 0.138, 0.184) | |
| (25, 5) | (0.227, 0.554, 0.967) | (0.068, 0.155, 0.216) | (0.222, 0.530, 0.899) | (0.064, 0.137, 0.182) | |
| (20, 2) | (0.227, 0.525, 0.895) | (0.067, 0.140, 0.184) | (0.199, 0.440, 0.724) | (0.050, 0.122, 0.217) | |
| (20, 3) | (0.222, 0.516, 0.882) | (0.065, 0.138, 0.181) | (0.198, 0.442, 0.727) | (0.051, 0.119, 0.215) | |
| (25, 4) | (0.221, 0.526, 0.885) | (0.063, 0.132, 0.173) | (0.201, 0.448, 0.728) | (0.052, 0.113, 0.208) | |
| (25, 5) | (0.219, 0.518, 0.883) | (0.063, 0.131, 0.175) | (0.199, 0.451, 0.728) | (0.051, 0.114, 0.209) | |
| (10,10,10) | (20, 2) | (0.257, 0.568, 0.985) | (0.103, 0.201, 0.275) | (0.244, 0.551, 0.907) | (0.081, 0.181, 0.238) |
| (20, 3) | (0.241, 0.571, 0.950) | (0.096, 0.194, 0.266) | (0.233, 0.541, 0.914) | (0.091, 0.175, 0.229) | |
| (25, 4) | (0.238, 0.568, 0.970) | (0.090, 0.180, 0.253) | (0.230, 0.541, 0.914) | (0.083, 0.163, 0.219) | |
| (25, 5) | (0.230, 0.551, 0.950) | (0.085, 0.170, 0.249) | (0.228, 0.530, 0.904) | (0.080, 0.157, 0.217) | |
| (20, 2) | (0.230, 0.504, 0.846) | (0.082, 0.152, 0.209) | (0.200, 0.424, 0.692) | (0.065, 0.140, 0.255) | |
| (20, 3) | (0.221, 0.491, 0.835) | (0.080, 0.147, 0.207) | (0.199, 0.431, 0.693) | (0.066, 0.138, 0.251) | |
| (25, 4) | (0.216, 0.502, 0.844) | (0.075, 0.139, 0.197) | (0.199, 0.441, 0.698) | (0.062, 0.124, 0.237) | |
| (25, 5) | (0.218, 0.508, 0.820) | (0.072, 0.136, 0.193) | (0.199, 0.433, 0.699) | (0.061, 0.120, 0.246) | |
| (10,10,10) | (20, 2) | (0.219, 0.526, 0.939) | (0.010, 0.041, 0.079) | (0.214, 0.509, 0.896) | (0.009, 0.034, 0.067) |
| (20, 3) | (0.236, 0.558, 0.978) | (0.011, 0.042, 0.083) | (0.232, 0.530, 0.918) | (0.010, 0.035, 0.064) | |
| (25, 4) | (0.216, 0.535, 0.958) | (0.007, 0.032, 0.072) | (0.213, 0.517, 0.915) | (0.007, 0.027, 0.060) | |
| (25, 5) | (0.222, 0.545, 0.952) | (0.008, 0.033, 0.072) | (0.223, 0.528, 0.918) | (0.007, 0.028, 0.057) | |
| (20, 2) | (0.200, 0.467, 0.827) | (0.007, 0.027, 0.055) | (0.180, 0.414, 0.705) | (0.006, 0.026, 0.072) | |
| (20, 3) | (0.222, 0.507, 0.854) | (0.008, 0.027, 0.049) | (0.197, 0.442, 0.718) | (0.006, 0.021, 0.055) | |
| (25, 4) | (0.208, 0.491, 0.844) | (0.006, 0.023, 0.047) | (0.192, 0.446, 0.731) | (0.005, 0.019, 0.058) | |
| (25, 5) | (0.212, 0.500, 0.865) | (0.006, 0.021, 0.047) | (0.198, 0.444, 0.728) | (0.005, 0.017, 0.053) | |
Table 6.
GB estimators under HCS-II, Linex loss.
| (n₁, n₂, n₃) | (r, T) | Estimates | Er | Estimates | Er |
|---|---|---|---|---|---|
| (10,10,10) | (20, 2) | (0.219, 0.569, 1.012) | (0.036, 0.114, 0.167) | (0.212, 0.531, 0.942) | (0.030, 0.088, 0.107) |
| (20, 3) | (0.221, 0.569, 1.014) | (0.036, 0.115, 0.163) | (0.213, 0.530, 0.941) | (0.032, 0.092, 0.108) | |
| (25, 4) | (0.217, 0.560, 1.005) | (0.037, 0.110, 0.153) | (0.209, 0.531, 0.930) | (0.034, 0.085, 0.107) | |
| (25, 5) | (0.217, 0.561, 0.999) | (0.038, 0.108, 0.151) | (0.214, 0.527, 0.931) | (0.033, 0.085, 0.105) | |
| (20, 2) | (0.206, 0.499, 0.883) | (0.027, 0.074, 0.093) | (0.202, 0.486, 0.853) | (0.027, 0.071, 0.097) | |
| (20, 3) | (0.206, 0.510, 0.877) | (0.030, 0.073, 0.088) | (0.205, 0.491, 0.850) | (0.027, 0.071, 0.089) | |
| (25, 4) | (0.206, 0.504, 0.882) | (0.031, 0.073, 0.089) | (0.202, 0.490, 0.853) | (0.030, 0.072, 0.098) | |
| (25, 5) | (0.207, 0.499, 0.869) | (0.032, 0.072, 0.090) | (0.203, 0.489, 0.848) | (0.030, 0.069, 0.099) | |
| (10,10,10) | (20, 2) | (0.232, 0.569, 0.982) | (0.075, 0.170, 0.230) | (0.224, 0.536, 0.904) | (0.069, 0.146, 0.189) |
| (20, 3) | (0.233, 0.564, 0.984) | (0.075, 0.165, 0.226) | (0.225, 0.538, 0.916) | (0.068, 0.142, 0.185) | |
| (25, 4) | (0.228, 0.547, 0.959) | (0.070, 0.153, 0.214) | (0.217, 0.526, 0.900) | (0.065, 0.134, 0.182) | |
| (25, 5) | (0.222, 0.550, 0.956) | (0.070, 0.152, 0.212) | (0.221, 0.528, 0.898) | (0.066, 0.133, 0.180) | |
| (20, 2) | (0.222, 0.522, 0.884) | (0.067, 0.141, 0.180) | (0.197, 0.441, 0.729) | (0.054, 0.122, 0.220) | |
| (20, 3) | (0.221, 0.518, 0.883) | (0.066, 0.137, 0.176) | (0.198, 0.449, 0.730) | (0.054, 0.118, 0.211) | |
| (25, 4) | (0.217, 0.528, 0.867) | (0.064, 0.131, 0.173) | (0.198, 0.439, 0.724) | (0.054, 0.108, 0.215) | |
| (25, 5) | (0.216, 0.522, 0.879) | (0.064, 0.127, 0.173) | (0.200, 0.455, 0.728) | (0.053, 0.113, 0.214) | |
| (10,10,10) | (20, 2) | (0.241, 0.561, 0.984) | (0.099, 0.191, 0.267) | (0.235, 0.546, 0.916) | (0.092, 0.174, 0.233) |
| (20, 3) | (0.243, 0.557, 0.980) | (0.094, 0.184, 0.257) | (0.234, 0.528, 0.920) | (0.089, 0.166, 0.223) | |
| (25, 4) | (0.225, 0.541, 0.943) | (0.084, 0.168, 0.248) | (0.225, 0.538, 0.907) | (0.081, 0.158, 0.219) | |
| (25, 5) | (0.230, 0.543, 0.927) | (0.084, 0.168, 0.237) | (0.225, 0.522, 0.913) | (0.080, 0.154, 0.220) | |
| (20, 2) | (0.217, 0.501, 0.834) | (0.080, 0.149, 0.204) | (0.198, 0.435, 0.699) | (0.067, 0.140, 0.254) | |
| (20, 3) | (0.224, 0.491, 0.829) | (0.080, 0.145, 0.197) | (0.200, 0.449, 0.700) | (0.067, 0.137, 0.244) | |
| (25, 4) | (0.216, 0.504, 0.824) | (0.074, 0.137, 0.197) | (0.197, 0.435, 0.697) | (0.062, 0.122, 0.246) | |
| (25, 5) | (0.214, 0.504, 0.827) | (0.074, 0.133, 0.197) | (0.200, 0.441, 0.697) | (0.063, 0.122, 0.244) | |
| (10,10,10) | (20, 2) | (0.241, 0.552, 0.964) | (0.009, 0.033, 0.074) | (0.228, 0.525, 0.929) | (0.007, 0.028, 0.061) |
| (20, 3) | (0.212, 0.533, 0.948) | (0.006, 0.030, 0.070) | (0.211, 0.516, 0.918) | (0.006, 0.026, 0.061) | |
| (25, 4) | (0.218, 0.550, 0.967) | (0.005, 0.029, 0.070) | (0.214, 0.525, 0.915) | (0.005, 0.024, 0.055) | |
| (25, 5) | (0.211, 0.536, 0.948) | (0.005, 0.029, 0.066) | (0.213, 0.529, 0.908) | (0.005, 0.025, 0.056) | |
| (20, 2) | (0.217, 0.498, 0.861) | (0.006, 0.023, 0.047) | (0.197, 0.443, 0.725) | (0.005, 0.019, 0.055) | |
| (20, 3) | (0.201, 0.481, 0.841) | (0.005, 0.021, 0.047) | (0.184, 0.433, 0.727) | (0.004, 0.020, 0.062) | |
| (25, 4) | (0.214, 0.504, 0.835) | (0.005, 0.020, 0.042) | (0.196, 0.445, 0.723) | (0.004, 0.015, 0.052) | |
| (25, 5) | (0.205, 0.495, 0.847) | (0.004, 0.019, 0.045) | (0.193, 0.445, 0.739) | (0.003, 0.016, 0.059) | |
3.2. Illustrative example
To demonstrate the usefulness of the results derived in the previous sections, we selected three samples of size 10 (groups 1, 4 and 5) from Nelson's data [see Ref. [26], p. 462] on the times (in minutes) to breakdown of an insulating fluid subjected to a high voltage stress. Table 7 lists these failure times (denoted as samples X_1, X_2 and X_3) together with their order statistics (w, j_i).
Table 7.
Samples X_1, X_2 and X_3, and their ordered values (w, j_i), where j_i = 1, 2, 3 indicates the sample from which each ordered observation w comes.
| Sample | Data |
|---|---|
| X1 | 1.89, 4.03, 1.54, 0.31, 0.66, 1.7, 2.17, 1.82, 9.99, 2.24 |
| X2 | 1.17, 3.87, 2.8, 0.7, 3.82, 0.02, 0.5, 3.72, 0.06, 3.57 |
| X3 | 8.11, 3.17, 5.55, 0.80, 0.20, 1.13, 6.63, 1.08, 2.44, 0.78 |
| Ordered data (w, ji) | |
| (0.02,2), (0.06,2), (0.20,3), (0.31,1), (0.50,2), (0.66,1), (0.70,2), (0.78,3), (0.80,3), (1.08,3), (1.13,3), (1.17,2), (1.54,1), (1.70,1), (1.82,1), (1.89,1), (2.17,1), (2.24,1), (2.44,3), (2.80,2), (3.17,3), (3.57,2), (3.72,2), (3.82,2), (3.87,2), (4.03,1), (5.55,3), (6.63,3), (8.11,3), (9.99,1) | |
For the GB study, there is no prior information, so a noninformative prior should be used; we therefore take the hyperparameters a_i = b_i = 0 (with q = −1 this reproduces the MLEs, as noted after (13)). Several values of the learning rate parameter η and of the loss parameters q and c were then chosen for the computations (a small sketch reproducing the first row of Table 8 is given below).
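The entries of Tables 8 and 9 can be reproduced from the ordered data in Table 7 with the closed forms above; the following hedged sketch (array names ours) computes the MLEs for the first HCS-I setting of Table 8, and the GBEs then follow from (12) and (13) with a_i = b_i = 0.

```python
import numpy as np

# ordered combined failure times and source labels (1, 2, 3) from Table 7
w_all = np.array([0.02, 0.06, 0.20, 0.31, 0.50, 0.66, 0.70, 0.78, 0.80, 1.08,
                  1.13, 1.17, 1.54, 1.70, 1.82, 1.89, 2.17, 2.24, 2.44, 2.80,
                  3.17, 3.57, 3.72, 3.82, 3.87, 4.03, 5.55, 6.63, 8.11, 9.99])
j_all = np.array([2, 2, 3, 1, 2, 1, 2, 3, 3, 3, 3, 2, 1, 1, 1,
                  1, 1, 1, 3, 2, 3, 2, 2, 2, 2, 1, 3, 3, 3, 1])

r, T, ns = 20, 2.0, (10, 10, 10)      # first HCS-I setting in Table 8
t_star = min(w_all[r - 1], T)         # HCS-I: stop at min(w_r, T) = 2
obs = w_all <= t_star
w, j = w_all[obs], j_all[obs]
for i in (1, 2, 3):
    d_i = np.sum(j == i)
    u_i = w[j == i].sum() + (ns[i - 1] - d_i) * t_star
    print(i, d_i, round(d_i / u_i, 3))
```

With these data the script gives (D_1, D_2, D_3) = (6, 5, 5) and MLEs (0.377, 0.402, 0.357), in agreement with the first row of Table 8.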
Table 8 shows the MLE and Bayesian estimation of the parameters for HCS-I, while Table 9 shows the results for HCS-II.
Table 8.
ML and GB estimators under HCS-I.

| r | (D₁, D₂, D₃) | T | Method | Estimates | Estimates |
|---|---|---|---|---|---|
| 20 | (6,5,5) | 2 | MLE | (0.377, 0.402, 0.357) | |
| GB | |||||
| (0.377, 0.402, 0.357) | (0.377, 0.402, 0.357) | ||||
| (0.329, 0.343, 0.305) | (0.362, 0.383, 0.341) | ||||
| (0.208, 0.198, 0.176) | (0.324, 0.336, 0.300) | ||||
| (0.390, 0.419, 0.371) | (0.380, 0.405, 0.360) | ||||
| (0.346, 0.360, 0.324) | (0.368, 0.390, 0.348) | ||||
| (0.293, 0.295, 0.270) | (0.350, 0.366, 0.329) | ||||
| 20 | (8,6,6) | 3 | MLE | (0.476, 0.365, 0.371) | |
| GB | |||||
| (0.476, 0.365, 0.371) | (0.476, 0.365, 0.371) | ||||
| (0.427, 0.319, 0.340) | (0.462, 0.351, 0.356) | ||||
| (0.374, 0.285, 0.289) | (0.426, 0.340, 0.345) | ||||
| (0.491, 0.377, 0.383) | (0.480, 0.368, 0.374) | ||||
| (0.438, 0.335, 0.340) | (0.466, 0.357, 0.362) | ||||
| (0.374, 0.285, 0.289) | (0.444, 0.340, 0.348) | | | | |
| 25 | (8,5,6) | 2.5 | MLE | (0.462, 0.334, 0.365) | |
| GB | |||||
| (0.462, 0.334, 0.365) | (0.462, 0.334, 0.365) | ||||
| (0.415, 0.286, 0.319) | (0.448, 0.319, 0.351) | ||||
| (0.295, 0.165, 0.202) | (0.413, 0.280, 0.340) | ||||
| (0.476, 0.347, 0.377) | (0.465, 0.337, 0.368) | ||||
| (0.426, 0.305, 0.335) | (0.452, 0.326, 0.357) | ||||
| (0.364, 0.257, 0.286) | (0.431, 0.310, 0.340) | ||||
| 25 | (8,10,7) | 4 | MLE | (0.476, 0.494, 0.366) | |
| GB | |||||
| (0.476, 0.494, 0.366) | (0.476, 0.494, 0.366) | ||||
| (0.427, 0.452, 0.325) | (0.462, 0.482, 0.354) | ||||
| (0.304, 0.345, 0.220) | (0.426, 0.452, 0.323) | ||||
| (0.491, 0.507, 0.376) | (0.480, 0.498, 0.369) | ||||
| (0.438, 0.461, 0.340) | (0.466, 0.485, 0.359) | ||||
| (0.373, 0.402, 0.294) | (0.444, 0.466, 0.344) | ||||
Table 9.
ML and GB estimators under HCS-II.

| r | (D₁, D₂, D₃) | T | Method | Estimates | Estimates |
|---|---|---|---|---|---|
| 20 | (8,6,6) | 2 | MLE | (0.476, 0.365, 0.371) | |
| GB | |||||
| (0.476, 0.365, 0.371) | (0.476, 0.365, 0.371) | ||||
| (0.428, 0.319, 0.324) | (0.462, 0.351, 0.356) | ||||
| (0.304, 0.202, 0.208) | (0.426, 0.314, 0.319) | ||||
| (0.491, 0.377, 0.383) | (0.479, 0.368 0.374) | ||||
| (0.438, 0.335, 0.340) | (0.466, 0.357, 0.362) | ||||
| (0.374, 0.285, 0.289) | (0.444, 0.340, 0.345) | ||||
| 20 | (8,8,7) | 3.8 | MLE | (0.401, 0.397, 0.333) | |
| GB | |||||
| (0.401, 0.397, 0.333) | (0.401, 0.397, 0.333) | ||||
| (0.361, 0.357, 0.297) | (0.389, 0.385, 0.322) | | | | |
| (0.256, 0.254, 0.199) | (0.359, 0.355, 0.293) | ||||
| (0.412, 0.408, 0.342) | (0.404, 0.399, 0.335) | ||||
| (0.374, 0.371, 0.312) | (0.394, 0.390, 0.327) | ||||
| (0.325, 0.323, 0.273) | (0.378, 0.374, 0.315) | ||||
| 25 | (8,10,7) | 4 | MLE | (0.394, 0.494, 0.324) | |
| GB | |||||
| (0.394, 0.494, 0.324) | (0.394, 0.494, 0.324) | ||||
| (0.354, 0.452, 0.288) | (0.382, 0.482, 0.313) | ||||
| (0.251, 0.345, 0.194) | (0.352, 0.452, 0.285) | ||||
| (0.404, 0.507, 0.332) | (0.396, 0.497, 0.326) | ||||
| (0.367, 0.461, 0.303) | (0.386, 0.485, 0.318) | ||||
| (0.320, 0.402, 0.266) | (0.371, 0.466, 0.307) | ||||
| 25 | (9,10,10) | 9 | MLE | (0.355, 0.494, 0.335) | |
| GB | |||||
| (0.355, 0.494, 0.335) | (0.355, 0.494, 0.335) | ||||
| (0.322, 0.452, 0.306) | (0.345, 0.482, 0.326) | ||||
| (0.238, 0.345, 0.233) | (0.322, 0.452, 0.306) | ||||
| (0.362, 0.507, 0.340) | (0.356, 0.497, 0.336) | ||||
| (0.336, 0.461, 0.319) | (0.349, 0.485, 0.331) | ||||
| (0.299, 0.402, 0.289) | (0.338, 0.466, 0.321) | ||||
4. Discussion and conclusion
In this study, we examined HCS-I and HCS-II when the lifetimes of the three populations follow exponential distributions. Using different values of the learning rate parameter η, together with the GE and Linex loss parameters, we derived the MLEs and the generalized Bayes estimates of the parameters in a simulation study and an example. In the example, as in the simulation study, the GBEs outperformed the MLEs. We therefore discuss the Bayesian results, based on the estimator values and their Er, in detail below:
• For the first value of η, the results are overestimated for some values of the loss parameter but underestimated for the others, so the values in between lead to better estimation results.
• For the second value of η, there is likewise overestimation over part of the loss-parameter range and underestimation over the rest, so intermediate values again lead to better estimation results.
• For the third value of η, the best results are obtained at particular loss-parameter values.
• Thus, for each value of η, the majority of the GB results for the parameters θ_1, θ_2 and θ_3 under HCS-I and HCS-II show overestimation over part of the loss-parameter range and underestimation over the remainder.
From the simulation study, we can conclude the following:

i. Owing to the chosen values of (r, T), the results under HCS-II are slightly better than those under HCS-I.

ii. The best results are obtained at particular combinations of η and the loss parameters, as shown in the tables.

iii. The results are affected by the value of η: smaller values of the learning rate parameter perform better, so the GBE improves on the traditional Bayes estimator (η = 1).
As for the illustrative example, under HCS-II with (r, T) = (25, 9) in Table 9, the number of observations is 29, nearly the complete sample, which gives the best results for the MLEs and GBEs.
Regarding the effect of the learning rate parameter on the estimation results, it would be interesting to investigate GB for other distributions under different types of censoring schemes.
Funding
Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Data Availability Statement
The data used to support the findings of this study are included in the article.
CRediT authorship contribution statement
Yahia Abdel-Aty: Project administration, Methodology, Investigation. Mohamed Kayid: Writing – original draft, Formal analysis, Data curation, Conceptualization. Ghadah Alomani: Writing – review & editing, Supervision, Software, Resources, Funding acquisition.
Declaration of competing interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Ghadah Alomani reports financial support was provided by Princess Nourah bint Abdulrahman University. Ghadah Alomani reports a relationship with Princess Nourah bint Abdulrahman University that includes: employment. If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
The authors would like to thank the three anonymous reviewers for their thorough review of our article and their numerous comments and recommendations. The authors extend their sincere appreciation to Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Appendix.
Algorithm 1

Calculating the MLEs under HCS-I [HCS-II]

Step 1 [Enter the values]: Read B (the number of iterations), the sample sizes (n_1, n_2, n_3), the number of observed failures r, the prefixed time T, and the assumed values (θ_1, θ_2, θ_3) of the exponential parameters.

Step 2 [Initialize the variables].

Step 3: For l = 1 to B:

Step 4 [Generate samples]: Generate three samples of sizes n_1, n_2, n_3 from the exponential distributions with parameters θ_1, θ_2, θ_3.

Step 5 [Combine the generated samples into one ordered sample].

Step 6 [Determine the termination time of the experiment]: T_1* = min(w_r, T) (HCS-I) [T_2* = max(w_r, T) (HCS-II)].

Step 7 [Compute the number of observations D up to the termination time].

Step 8 [Record the observations w_1 ≤ … ≤ w_D and their sample indicators].

Step 9 [Compute D_1, D_2, D_3].

Step 10: If min(D_1, D_2, D_3) = 0, go to Step 3. #[Only replications satisfying Remark 1 are retained; their number is B*.]

Step 11 [Compute U_1, U_2, U_3].

Step 12 [Compute the MLEs θ̂_i = D_i/U_i from (7) [(8)]].

Step 13 [Count the replications terminated at T, for the ratio R_T]:
If w_r > T [w_r ≤ T] then increment the counter.
Else continue.
End If.

Step 14 [Stop the for loop]: End for.

Step 15 [Compute the averaged MLEs over the B* replications, the estimated risk Er, the mean D̄ of the number of observed failures, and the ratio R_T].

Step 16: Print (θ̂_1, θ̂_2, θ̂_3, Er, D̄, R_T). Stop.
Algorithm 2

Calculating the GBEs under HCS-I [HCS-II]

Step 1 [Enter the values]: Read B, (n_1, n_2, n_3), r, T, the hyperparameters (a_i, b_i), the learning rate η, and the loss parameters c and q.

Step 2 [Generate the joint hybrid censored samples and compute D_i and U_i as in Steps 3–11 of Algorithm 1].

Step 12 [Compute the GBEs under the Linex loss from (12)].

Step 13 [Compute the GBEs under the GE loss from (13)].

… … … … … … … … … ….

Step 16: Print (the GBEs and their estimated risks under the Linex loss).
Print (the GBEs and their estimated risks under the GE loss).
Stop.
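For completeness, a hedged runnable condensation of Algorithm 2, reusing the `simulate_joint_hcs`, `gbe_linex` and `gbe_entropy` sketches from the earlier sections (the loop structure and names are ours, not the authors' code):

```python
import numpy as np

def mc_gbe(ns, thetas, r, T, scheme, B, eta, a, b, c, q, seed=1):
    """Averaged GBEs under the Linex and GE losses, as in Tables 3-6 (sketch)."""
    rng = np.random.default_rng(seed)
    linex, ge = [], []
    for _ in range(B):
        w, z, t_star = simulate_joint_hcs(ns, thetas, r, T, scheme, rng)
        d = np.array([np.sum(z == i) for i in range(len(ns))])
        if np.any(d == 0):               # Remark 1: discard this replication
            continue
        u = np.array([w[z == i].sum() + (n - di) * t_star
                      for i, (n, di) in enumerate(zip(ns, d))])
        linex.append([gbe_linex(a[i], b[i], d[i], u[i], eta, c)
                      for i in range(len(ns))])
        ge.append([gbe_entropy(a[i], b[i], d[i], u[i], eta, q)
                   for i in range(len(ns))])
    return np.mean(linex, axis=0), np.mean(ge, axis=0)
```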
References
- 1. Miller J.W., Dunson D.B. Robust Bayesian inference via coarsening. J. Am. Stat. Assoc. 2019;114(527):1113–1125. doi: 10.1080/01621459.2018.1469995.
- 2. Grünwald P. The safe Bayesian: learning the learning rate via the mixability gap. In: Algorithmic Learning Theory. Lecture Notes in Computer Science, vol. 7568. Springer, Heidelberg; 2012:169–183.
- 3. Grünwald P., van Ommen T. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it. Bayesian Anal. 2017;12(4):1069–1103.
- 4. Grünwald P. Safe probability. J. Stat. Plann. Inference. 2018:47–63.
- 5. De Heide R., Kirichenko A., Grünwald P., Mehta N. Safe-Bayesian generalized linear regression. In: International Conference on Artificial Intelligence and Statistics. 2020:2623–2633.
- 6. Holmes C.C., Walker S.G. Assigning a value to a power likelihood in a general Bayesian model. Biometrika. 2017:497–503.
- 7. Lyddon S.P., Holmes C.C., Walker S.G. General Bayesian updating and the loss-likelihood bootstrap. Biometrika. 2019:465–478.
- 8. Martin R. Invited comment on the article by van der Pas, Szabó, and van der Vaart. Bayesian Anal. 2017:1254–1258.
- 9. Martin R., Ning B. Empirical priors and coverage of posterior credible sets in a sparse normal mean model. Sankhyā Ser. A. 2020:477–498 (Special Issue in Memory of Jayanta K. Ghosh).
- 10. Wu P.S., Martin R. A comparison of learning rate selection methods in generalized Bayesian inference. Bayesian Anal. 2023;18(1):105–132. doi: 10.1214/21-BA1302.
- 11. Abdel-Aty Y., Kayid M., Alomani G. Generalized Bayes estimation based on a joint type-II censored sample from k-exponential populations. Mathematics. 2023;11:2190. doi: 10.3390/math11092190.
- 12. Abdel-Aty Y., Kayid M., Alomani G. Generalized Bayes prediction study based on joint type-II censoring. Axioms. 2023;12:716. doi: 10.3390/axioms12070716.
- 13. Shafay A.R., Balakrishnan N., Abdel-Aty Y. Bayesian inference based on a jointly type-II censored sample from two exponential populations. J. Stat. Comput. Simulat. 2014:2427–2440.
- 14. Abdel-Aty Y. Exact likelihood inference for two populations from two-parameter exponential distributions under joint type-II censoring. Commun. Stat. Theor. Methods. 2017:9026–9041.
- 15. Abo-Kasem O.E., Nassar M., Dey S., Rasouli A. Classical and Bayesian estimation for two exponential populations based on joint type-I progressive hybrid censoring scheme. Am. J. Math. Manag. Sci. 2019;38(4):373–385. doi: 10.1080/01966324.2019.1570407.
- 16. Balakrishnan N., Su F. Exact likelihood inference for k exponential populations under joint type-II censoring. Commun. Stat. Simulat. Comput. 2015;44(3):591–613.
- 17. Balakrishnan N., Su F., Liu K.-Y. Exact likelihood inference for k exponential populations under joint progressive type-II censoring. Commun. Stat. Simulat. Comput. 2015;44(3):902–923.
- 18. Balakrishnan N., Kundu D. Hybrid censoring: models, inferential results and applications. Comput. Stat. Data Anal. 2013;57(1):166–209.
- 19. Dutta S., Lio Y., Kayal S. Parametric inferences using dependent competing risks data with partially observed failure causes from MOBK distribution under unified hybrid censoring. J. Stat. Comput. Simulat. 2024;94(2):376–399. doi: 10.1080/00949655.2023.2249165.
- 20. Childs A., Chandrasekar B., Balakrishnan N., Kundu D. Exact likelihood inference based on type-I and type-II hybrid censored samples from the exponential distribution. Ann. Inst. Stat. Math. 2003;55:319–330.
- 21. Dutta S., Lio Y., Kayal S. Parametric inferences using dependent competing risks data with partially observed failure causes from MOBK distribution under unified hybrid censoring. J. Stat. Comput. Simulat. 2024;94(2):376–399.
- 22. Abdel-Aty Y., Kayid M., Alomani G. Bayesian estimation based on learning rate parameter under the joint hybrid censoring scheme for k exponential populations. Preprint; 2023.
- 23. Varian H.R. A Bayesian approach to real estate assessment. North-Holland, Amsterdam; 1975.
- 24. Dey D.K., Ghosh M., Srinivasan C. Simultaneous estimation of parameters under entropy loss. J. Stat. Plann. Inference. 1987;15:347–363.
- 25. Dey D.K., Liao P.L. On comparison of estimators in a generalized life model. Microelectron. Reliab. 1992;32:207–221.
- 26. Nelson W. Applied Life Data Analysis. Wiley, New York; 1982.