Heliyon. 2024 Jul 6;10(13):e34087. doi: 10.1016/j.heliyon.2024.e34087

Selection effect of learning rate parameter on estimators of k exponential populations under the joint hybrid censoring

Yahia Abdel-Aty a,b, Mohamed Kayid c, Ghadah Alomani d

Abstract

A Bayesian method based on the learning rate parameter η is called a generalized Bayesian method. In this study, joint hybrid censored type-I and type-II samples from k exponential populations were examined to determine the influence of the parameter η on the estimation results. To investigate the selection effects of the learning rate and the loss parameters on the estimation results, we considered two additional loss functions in the Bayesian approach: the Linex and the general entropy loss functions. We then compared the generalized Bayesian method with the traditional Bayesian method. We performed Monte Carlo simulations to compare the performance of the estimators under the two loss functions for different values of η. The effects of the two losses with different loss-parameter values and learning rate parameters are also examined using an example.

Keywords: Generalized Bayes, Learning rate parameter, Exponential distribution, Joint hybrid censoring, Linex loss, General entropy loss

1. Introduction

A Bayesian analysis based on a learning rate parameter η (0<η<1) is called a generalized Bayes (GB) analysis; for η=1, the classical Bayesian framework is recovered. The learning rate parameter enters as a fractional power on the likelihood function L(θ) ≡ L(θ; data) for the parameter θ ∈ Θ. In other words, if π(θ) is the prior distribution of the parameter θ, then

$$\pi^{*}(\theta\mid\mathrm{data})\propto L^{\eta}(\theta)\,\pi(\theta),\qquad \theta\in\Theta,\ 0<\eta<1, \tag{1}$$

is the GB posterior distribution of θ. For more information on the GB approach and on how to select the value of the learning rate parameter, see Refs. [1–13]. In Refs. [2–5], the Safe Bayes algorithm, based on the minimization of a sequential risk measure, was used to study learning rate selection. A second technique for learning rate selection, using two different information-matching methods, was presented in Refs. [6,7]. Using different values of the learning rate parameter, the authors of Ref. [11] investigated generalized Bayes estimation (GBE) based on a joint type-II censored sample from k exponential populations, while in Ref. [12] a joint type-II censored sample from several exponential populations served as the basis for GB prediction. Bayes estimation and prediction based on a joint type-II censored sample from two exponential populations were studied in Ref. [13]. We choose a range of values of the learning rate parameter to obtain the best estimators of the parameters of the corresponding distributions, and we then compare GB with the traditional Bayesian method. Exact likelihood inference under joint type-II censoring for two populations with two-parameter exponential distributions was studied in Ref. [14]. In Ref. [15], two exponential populations under joint progressive hybrid type-I censoring were studied using both classical and Bayesian estimation. Exact likelihood inference for multiple exponential populations under joint type-II and joint progressive type-II censoring was studied in Refs. [16,17].

Numerous variants of hybrid censoring have been described in the literature; for example, Refs. [18,19] investigated parametric inference using dependent competing risks data with partially observed causes of failure from the MOBK distribution under unified hybrid censoring. The type-I hybrid censoring scheme (HCS-I) terminates the experiment either at a prefixed time T1 or after a prefixed number of observations r, whichever occurs first. This type of censoring saves time, but it has the drawback that very few failures, possibly none, may be observed before the fixed time T1. To circumvent this shortcoming, HCS-II was presented in Ref. [20]: the experiment is concluded at a prefixed time T2 or upon reaching the prefixed number of observations r, whichever occurs later, so that a minimum of r observations is assured by the conclusion of the experiment. The experiment continues until T2, so if r failures occur before T2, more than r observations may be included in the data; if they do not, the experiment continues beyond T2 until r observations are available [21].

Let the ordered lifetimes of the N experimental units be denoted by w1 < ... < wN, and let the observations be w1 < ... < wD, 0 ≤ D ≤ N. HCS-I occurs when the experiment is stopped at T = min(wr, T1), where r and T1 are predetermined. As a result, there are two cases under HCS-I: the first case occurs when T = T1 and wD < T1 < wD+1, where D = R1 ∈ {0, 1, ..., r−1} is a random variable less than r; the second case occurs when wr < T1, so that T = wr and D = r. HCS-II emerges if the experiment is stopped at T = max(wr, T2), which guarantees that at least r failures are observed by the conclusion of the experiment. HCS-II likewise provides two cases: the first occurs when T2 < wr, so that T = wr and D = r; the second occurs when T = T2 and wD < T2 < wD+1, where D = R2 is a random variable satisfying r ≤ D ≤ N.

Let us assume that the goods are manufactured by one and the same company on k different production lines. A life test is performed simultaneously on k independent samples of sizes nj, 1 ≤ j ≤ k, selected from these lines. To reduce the cost or shorten the duration of the experiment, the experimenter ends the lifetime test at T. In this case, an estimate of the average lifetime of the units produced by these k lines, either as a point or an interval estimate, would be of interest (for more information on this topic, see Refs. [11–16]).

Suppose {Xjnj, j=1,...,k} are the k samples, where Xjnj = {Xj1, Xj2, ..., Xjnj} denotes the lifetimes of nj specimens of product Aj, assumed to be independent and identically distributed (iid) random variables from a population with cumulative distribution function (cdf) Fj(x) and probability density function (pdf) fj(x). Furthermore, let N = n1 + ... + nk denote the total sample size and D = D1 + ... + Dk denote the total number of observed failures. Then, under the joint hybrid censoring scheme for the k samples, the observable data consist of (δ, w), where w = (w1, ..., wD), each wi being one of the lifetimes Xjinji, i=1,...,D, ji=1,...,k, and δ = (δ1j, ..., δDj), associated with (j1, ..., jD), is defined by

$$\delta_{ij}=\begin{cases}1, & \text{if } j=j_i,\\ 0, & \text{otherwise}.\end{cases} \tag{2}$$

Let $D_j=\sum_{i=1}^{D}\delta_{ij}$ denote the number of failures from the jth sample in w and $D=\sum_{j=1}^{k}D_j$, where D ≤ N and Dj ≤ nj for all j; then the joint density function of (δ, w) is given by

$$f(\delta,w)=C_D\prod_{i=1}^{D}\prod_{j=1}^{k}\bigl(f_j(w_i)\bigr)^{\delta_{ij}}\cdot\prod_{j=1}^{k}\bigl(\bar{F}_j(T)\bigr)^{n_j-D_j}, \tag{3}$$

where $\bar{F}_j=1-F_j$ is the survival function of the jth population and $C_D=\prod_{j=1}^{k}\frac{n_j!}{(n_j-D_j)!}$.

In addition, let M be the number of failures up to time T, with probability mass function (pmf)

$$P(M=m)=\sum_{l}\prod_{j=1}^{k}\binom{n_j}{l_j}p_j^{\,l_j}q_j^{\,n_j-l_j},\qquad m=0,1,\dots,N, \tag{4}$$

where the sum is over all l = (l1, ..., lk) with m = l1 + ... + lk and 0 ≤ lj ≤ nj, and where pj = Fj(T) and qj = F̄j(T).

For HCS-I:

$$D=\begin{cases} R_1, & T=T_1,\ M=0,1,\dots,r-1 \quad(\text{Case 1}),\\ r, & T=w_r,\ M=r,\dots,N \quad(\text{Case 2}).\end{cases}$$

For HCS-II:

$$D=\begin{cases} r, & T=w_r,\ M=0,1,\dots,r-1 \quad(\text{Case 1}),\\ R_2, & T=T_2,\ M=r,\dots,N \quad(\text{Case 2}).\end{cases}$$
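The two stopping rules are straightforward to simulate. The following Python sketch is our own illustration (the function name and interface are not from the paper): it pools k exponential samples, orders them, and returns the stopping time T and the observed failures under either scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_hybrid_sample(thetas, ns, r, T_fixed, scheme="I"):
    """Draw one jointly hybrid censored sample from k exponential lines.

    thetas  -- rate parameters theta_j of the k exponential populations
    ns      -- sample sizes n_j
    r       -- prefixed number of failures
    T_fixed -- prefixed time T1 (HCS-I) or T2 (HCS-II)
    Returns the stopping time T, the observed failure times w_1 < ... < w_D,
    and the population label j_i of each observed failure.
    """
    # pool the k samples and order them, remembering which line each came from
    times = np.concatenate([rng.exponential(1.0 / th, n)
                            for th, n in zip(thetas, ns)])
    labels = np.concatenate([np.full(n, j) for j, n in enumerate(ns)])
    order = np.argsort(times)
    w, lab = times[order], labels[order]

    w_r = w[r - 1]                                   # r-th ordered failure
    T = min(w_r, T_fixed) if scheme == "I" else max(w_r, T_fixed)
    keep = w <= T                                    # the D observed failures
    return T, w[keep], lab[keep]

T, w_obs, j_obs = joint_hybrid_sample([0.2, 0.5, 0.9], [10, 10, 10],
                                      r=20, T_fixed=2.0, scheme="I")
print(f"stopped at T = {T:.3f} with D = {len(w_obs)} observed failures")
```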

The major goal of this study was to determine how the learning rate parameter and the loss parameters, used in conjunction with the joint hybrid censoring schemes (HCS-I and HCS-II), affect the estimation results for k exponential populations when censoring is applied to the k combined samples. A preprint of this article has been published [22].

The rest of this article is organized as follows. To estimate the population parameters, Section 2 presents the maximum likelihood estimators (MLE) and (GBE) using the Linex and general entropy loss functions in the GB method. Section 3 contains a numerical analysis of the results in Section 2. Finally, Section 4 concludes the paper.

2. Estimation of the parameters

In this section, we consider k exponential distributions under the joint hybrid censoring scheme (HCS-I and HCS-II), with censoring performed on the combined k samples. We then study the MLE and the GBE with learning rate parameters using the Linex and general entropy loss functions.

The populations studied here are exponential with pdf and cdf, respectively,

$$f_j(x)=\theta_j\exp(-\theta_j x),\qquad F_j(x)=1-\exp(-\theta_j x),\qquad x>0,\ \theta_j>0,\ 1\le j\le k. \tag{5}$$

Substituting (5) into (3), we obtain the likelihood function,

$$L(\Theta;\delta,w)=C_D\prod_{i=1}^{D}\prod_{j=1}^{k}\bigl\{\theta_j\exp(-\theta_j w_i)\bigr\}^{\delta_{ij}}\prod_{j=1}^{k}\bigl\{\exp(-\theta_j T)\bigr\}^{n_j-D_j}=C_D\prod_{j=1}^{k}\theta_j^{D_j}\exp\{-\theta_j u_j\}, \tag{6}$$

where Θ = (θ1, ..., θk) and $u_j=\sum_{i=1}^{D}w_i\delta_{ij}+T(n_j-D_j)$.

The log-likelihood function is given by,

$$\ln L(\Theta;\delta,w)=\ln C_D+\sum_{j=1}^{k}\bigl(D_j\ln\theta_j-\theta_j u_j\bigr).$$

2.1. Maximum likelihood estimation

Differentiating the log-likelihood function with respect to θj and equating to zero, the MLE of θj, 1 ≤ j ≤ k, under HCS-I is given by

$$\hat{\theta}_{jM}=\begin{cases}\dfrac{R_{1j}}{u_j}, & T=T_1,\ M=0,1,\dots,r-1\quad(\text{Case 1}),\\[6pt] \dfrac{r_j}{u_j}, & T=w_r,\ M=r,\dots,N\quad(\text{Case 2}),\end{cases} \tag{7}$$

where R1j and rj denote the numbers of failures observed from the jth sample in the two cases.

The MLE of θj for 1jk, under HCS-II is given by

$$\hat{\theta}_{jM}=\begin{cases}\dfrac{r_j}{u_j}, & T=w_r,\ M=0,1,\dots,r-1\quad(\text{Case 1}),\\[6pt] \dfrac{R_{2j}}{u_j}, & T=T_2,\ M=r,\dots,N\quad(\text{Case 2}).\end{cases} \tag{8}$$

Remark 1. The MLEs of θj exist if there are at least k failures (D ≥ k) such that at least one failure is observed from each sample; that is, Dj ≥ 1 for every j, or equivalently 1 ≤ Dj ≤ D − k + 1.
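As a concrete illustration of (7) and (8), the following sketch (a hypothetical helper building on the generator above) computes Dj, uj and the MLEs θ̂jM = Dj/uj from one censored sample, returning None for any line without an observed failure, in line with Remark 1.

```python
import numpy as np

def mle_exponential(T, w_obs, j_obs, ns):
    """MLEs theta_hat_jM = D_j / u_j from one joint hybrid censored sample,
    with u_j = (sum of observed failure times from line j)
             + T * (number of line-j units still running at T).
    Returns None for a line with no observed failure (cf. Remark 1)."""
    theta_hat = []
    for j, nj in enumerate(ns):
        mask = (j_obs == j)
        Dj = int(mask.sum())                          # failures from line j
        uj = float(w_obs[mask].sum()) + T * (nj - Dj)
        theta_hat.append(Dj / uj if Dj > 0 else None)
    return theta_hat

print(mle_exponential(T, w_obs, j_obs, [10, 10, 10]))
```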

As described in Section 3, we computed the MLEs in order to compare their results with those of Bayesian estimation using different values of the learning rate parameter and of the loss function parameters.

2.2. Generalized Bayes estimation

Since the parameters Θ are unknown, we assign them independent conjugate gamma priors, θj ~ Gam(aj, bj). The joint prior distribution of Θ is then

$$\pi(\Theta)=\prod_{j=1}^{k}\pi_j(\theta_j), \tag{9}$$

where

$$\pi_j(\theta_j)=\frac{b_j^{a_j}}{\Gamma(a_j)}\,\theta_j^{a_j-1}e^{-b_j\theta_j}, \tag{10}$$

and Γ(·) denotes the complete gamma function.

Raising (6) to the fractional power η and combining it with (9), the generalized Bayes posterior distribution of Θ is

$$\pi^{*}(\Theta\mid\mathrm{data})=\prod_{j=1}^{k}\frac{(u_j\eta+b_j)^{D_j\eta+a_j}}{\Gamma(D_j\eta+a_j)}\,\theta_j^{D_j\eta+a_j-1}\exp\bigl\{-\theta_j(u_j\eta+b_j)\bigr\}. \tag{11}$$

Notice that the generalized posterior density of each θj is the gamma distribution Gam(Djη + aj, ujη + bj), because πj is a conjugate prior.

We consider two loss functions, namely the Linex and the general entropy loss functions, to investigate the influence of the learning rate together with the loss parameters on the estimation results.

  • (i) The Linex loss function, which is asymmetric, is given by

$$L_L(\phi^{*},\phi)\propto e^{\nu(\phi^{*}-\phi)}-\nu(\phi^{*}-\phi)-1,\qquad \nu\neq0.$$

The Linex loss function, introduced in Ref. [23], gives differing weights to overestimation and underestimation.

  • (ii) The general entropy (GE) loss function is

$$L_{GE}(\phi^{*},\phi)\propto(\phi^{*}/\phi)^{c}-c\ln(\phi^{*}/\phi)-1,\qquad c\neq0.$$

This loss function was used in Refs. [24,25]; it is expressed in terms of the ratio φ*/φ, which makes it more realistic in practical situations. The estimators of θj under the Linex loss function are given by

$$\hat{\theta}_{jL}=-\frac{1}{\nu}\ln\bigl[E\bigl(e^{-\nu\theta_j}\bigr)\bigr]=\frac{D_j\eta+a_j}{\nu}\ln\Bigl(1+\frac{\nu}{u_j\eta+b_j}\Bigr),\qquad \nu\neq0,\ 1\le j\le k. \tag{12}$$

Under the general entropy (GE) loss function, the Bayes estimators of θj are given by

$$\hat{\theta}_{jE}=\bigl\{E\bigl(\theta_j^{-c}\bigr)\bigr\}^{-1/c}=\left(\frac{\Gamma(D_j\eta+a_j-c)}{\Gamma(D_j\eta+a_j)}\right)^{-1/c}\frac{1}{u_j\eta+b_j},\qquad 1\le j\le k. \tag{13}$$

Remark 2. The GE loss function includes some special cases: the Bayes estimators of θj under the weighted squared error loss function and the squared error loss function are obtained by substituting c = 1 and c = −1, respectively, into (13).

Remark 3. Substituting aj = bj = 0 into (11) yields the generalized posterior under Jeffreys' non-informative prior $\pi_J\propto\prod_{j=1}^{k}\frac{1}{\theta_j}$; the resulting estimators θ̂jJ are Bayes estimators of θj under this prior, and with c = −1 in (13) they reduce to the MLEs θ̂jM = Dj/uj.

The estimators θ̂j under HCS-I in the GB approach are obtained by setting Dj = R1j for case one and Dj = rj for case two, respectively; under HCS-II, θ̂j is obtained by setting Dj = rj for case one and Dj = R2j for case two.
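Because the generalized posterior is a gamma distribution, Eqs. (12) and (13) can be evaluated in closed form from its shape Djη + aj and rate ujη + bj. The sketch below is a minimal illustration (assuming SciPy is available; gb_estimators is our own name): gammaln keeps (13) numerically stable, and (13) requires Djη + aj − c > 0.

```python
import numpy as np
from scipy.special import gammaln

def gb_estimators(Dj, uj, a, b, eta, nu, c):
    """Generalized Bayes estimators of theta_j from the generalized posterior
    Gam(Dj*eta + a, uj*eta + b): Linex estimator (12) and GE estimator (13).
    Requires nu != 0, c != 0 and Dj*eta + a - c > 0."""
    A, B = Dj * eta + a, uj * eta + b                 # posterior shape, rate
    theta_linex = (A / nu) * np.log1p(nu / B)                    # eq. (12)
    theta_ge = np.exp(-(gammaln(A - c) - gammaln(A)) / c) / B    # eq. (13)
    return theta_linex, theta_ge

# e.g. Dj = 5 failures, uj = 20, Gam(1, 5) prior, eta = 0.1, nu = 0.3, c = -0.85
print(gb_estimators(5, 20.0, 1.0, 5.0, 0.1, 0.3, -0.85))
```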

3. Numerical study

This section evaluates the performance of the estimation techniques derived in the previous section by means of Monte Carlo simulation and illustrates them with an example.

3.1. Simulation study

The simulation study is designed and carried out as follows.

  • Generate three samples of sizes (n1,n2,n3) from different exponential distributions, taking the parameter values to be the means of the prior distributions of the respective parameters; then combine the three samples into one ordered sample.

  • Repeat the generation of the combined sample B times (discard samples that do not fulfill the condition in Remark 1; the number of retained replications is denoted B* ≤ B).

  • Set two fixed values: the number of observed failures from the combined sample, r, and the time T1 (T2).

  • Under HCS-I the experiment is terminated at T = min(wr, T1), so the number of observations is D = min(r, R1); under HCS-II the experiment is terminated at T = max(wr, T2), so the number of observations is D = max(r, R2), where R1 and R2 are random.

  • Under HCS-I, p1, R1 and (r1,r2,r3) are computed, where p1 is the proportion of replications stopped at T1, R1 is the mean number of observed values up to T1 (R1 ≤ r), and (r1,r2,r3) are the average numbers of observed failures from the three samples over both cases of T. Having determined the numbers of observations from the three samples, the MLEs θ̂jM, j=1,2,3, are computed using (7) by averaging over the B* retained replicates, θ̂jM = (1/B*)∑i θ̂ji, and their estimated risk (Er) is obtained from Er = (1/B*)∑i (θj − θ̂ji)², j=1,2,3.

  • Similarly, p2, R2, (r1,r2,r3) and the MLEs are computed for HCS-II, where p2 is the proportion of replications stopped at T2, R2 is the mean number of observed values up to T2 (R2 ≥ r), and the MLEs θ̂jM, j=1,2,3, are computed using (8).

  • Compute the GBEs θ̂jL, θ̂jE under the Linex and GE loss functions from (12), (13) using different values of ν, c and η for HCS-I and HCS-II (see the Appendix); a condensed Monte Carlo driver is sketched after this list.
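The steps above can be put together as follows. This is a sketch under this section's settings, not the authors' code; it reuses the joint_hybrid_sample and gb_estimators helpers sketched earlier and shows the HCS-I/GE case.

```python
import numpy as np

# settings from this section (HCS-I, GE loss shown; Linex is analogous)
thetas, ns = [0.2, 0.5, 0.9], [10, 10, 10]
a, b = np.array([1.0, 1.0, 1.8]), np.array([5.0, 2.0, 2.0])
r, T1, eta, nu, c, B = 20, 2.0, 0.1, 0.3, -0.85, 10_000

est = []
for _ in range(B):
    T, w_obs, j_obs = joint_hybrid_sample(thetas, ns, r, T1, scheme="I")
    Dj = np.array([(j_obs == j).sum() for j in range(3)])
    if np.any(Dj == 0):                    # discard, per Remark 1
        continue
    uj = np.array([w_obs[j_obs == j].sum() + T * (ns[j] - Dj[j])
                   for j in range(3)])
    est.append([gb_estimators(Dj[j], uj[j], a[j], b[j], eta, nu, c)[1]
                for j in range(3)])

est = np.array(est)                        # B* retained replications
print("retained replications:", len(est))
print("mean GE estimates    :", est.mean(axis=0))
print("estimated risk       :", ((est - np.array(thetas)) ** 2).mean(axis=0))
```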

We chose the exponential parameters (θ1,θ2,θ3) = (0.2, 0.5, 0.9) based on the hyperparameters (a1,b1; a2,b2; a3,b3) = (1,5; 1,2; 1.8,2), so that (θ1,θ2,θ3) are the means of the gamma distributions in (10); for the Monte Carlo simulations we used B = 10,000 replicates. For the MLEs we considered different options for the sample sizes (n1,n2,n3) of the three populations and for r, T1 and T2. For the Bayes study, the sample sizes are (n1,n2,n3) = (10,10,10), with T1 or T2 = 2, 3 for r = 20 and T1 or T2 = 4, 5 for r = 25. The MLE results are shown in Table 1 for HCS-I and Table 2 for HCS-II.

Table 1.

MLEs under HCS-I.

(n1,n2,n3) (r,T1) p1 R1 (r1,r2,r3) (θˆ1M,θˆ2M,θˆ3M) Er(θˆ1M,θˆ2M,θˆ3M)
(10,10,10) (20, 2) 0.73 16.9 (3.3,6.2,8.3) (0.345, 0.625, 1.070) (0.460, 0.300, 0.443)
(20, 3) 0.17 18.2 (3.8,7,8.9) (0.286, 0.604, 1.046) (0.235, 0.274, 0.408)
(25, 4) 0.62 22.7 (5.3,8.5,9.7) (0.256, 0.590, 1.024) (0.177, 0.246, 0.385)
(25, 5) 0.29 23.2 (5.8,8.9,9.8) (0.250, 0.583, 1.022) (0.123, 0.231, 0.368)
(8, 9,13) (20, 2) 0.55 17.4 (2.6,5.5,10.6) (0.427, 0.643, 1.022) (1.126, 0.336, 0.352)
(20, 3) 0.06 18.5 (2.8,5.9,11.2) (0.372, 0.620, 1.013) (1.464, 0.316, 0.349)
(25, 4) 0.41 23.1 (4.1,7.6,12.5) (0.280, 0.602, 1.003) (0.192, 0.261, 0.318)
(25, 5) 0.15 23.5 (4.4,7.8,12.6) (0.271, 0.595, 0.996) (0.218, 0.255, 0.316)
(12,11,7) (20, 2) 0.86 16.1 (3.9,6.9,5.8) (0.299, 0.613, 1.155) (0.670, 0.269, 0.618)
(20, 3) 0.33 17.9 (4.9,8.1,6.3) (0.263, 0.593, 1.115) (0.219, 0.245, 0.556)
(25, 4) 0.77 22.1 (6.5,9.5,6.8) (0.243, 0.583, 1.088) (0.108, 0.220, 0.518)
(25, 5) 0.46 22.9 (7.2,9.9,6.9) (0.239, 0.578, 1.063) (0.103, 0.212, 0.488)
(20,20,20) (40, 2) 0.85 35 (6.6,12.6,16.7) (0.244, 0.557, 0.985) (0.106, 0.171, 0.261)
(40, 3) 0.12 37.9 (7.7,14.2,17.8) (0.232, 0.547, 0.978) (0.094, 0.155, 0.254)
(50, 4) 0.73 46.6 (10.9,17.2,19.4) (0.226, 0.546, 0.968) (0.074, 0.144, 0.236)
(50, 5) 0.30 47.8 (11.8,17.9,19.6) (0.222, 0.541, 0.959) (0.070, 0.141, 0.231)
(16,18,26) (40, 2) 0.63 36.4 (5.1,11.1,21.4) (0.262, 0.562, 0.963) (0.269, 0.183, 0.224)
(40, 3) 0.02 38.2 (5.6,12,22.4) (0.248, 0.557, 0.957) (0.122, 0.179, 0.218)
(50, 4) 0.47 47.4 (8.5,15.3,25.1) (0.232, 0.549, 0.952) (0.088, 0.154, 0.205)
(50, 5) 0.11 48.3 (8.9,15.6,25.3) (0.229, 0.546, 0.949) (0.086, 0.152, 0.200)
(24,22,14) (40, 2) 0.96 33.1 (7.9,13.9,11.7) (0.233, 0.554, 1.023) (0.091, 0.162, 0.336)
(40, 3) 0.32 37.3 (10,16.4,12.8) (0.225, 0.544, 1.010) (0.076, 0.146, 0.316)
(50, 4) 0.89 45.2 (13.1,19,13.6) (0.220, 0.541, 0.996) (0.063, 0.133, 0.304)
(50, 5) 0.54 47.2 (14.7,20,13.8) (0.216, 0.537, 0.984) (0.059, 0.129, 0.293)

Table 2.

MLEs under HCS-II.

(n1,n2,n3) (r,T2) p2 R2 (r1,r2,r3) (θˆ1M,θˆ2M,θˆ3M) Er(θˆ1M,θˆ2M,θˆ3M)
(10,10,10) (20, 2) 0.27 21 (4,7.3,9) (0.282, 0.599, 1.043) (0.397, 0.267, 0.406)
(20, 3) 0.83 22.3 (4.6,7.9,9.4) (0.266, 0.598, 1.041) (0.154, 0.246, 0.390)
(25, 4) 0.38 25.8 (6.2,9.2,9.9) (0.246, 0.578, 1.009) (0.124, 0.225, 0.361)
(25, 5) 0.71 26.3 (6.7,9.4,9.9) (0.245, 0.581, 1.004) (0.114, 0.222, 0.355)
(8, 9,13) (20, 2) 0.45 21.3 (3,6.2,11.4) (0.336, 0.625, 1.017) (0.487, 0.310, 0.352)
(20, 3) 0.94 23 (3.6,7,12.2) (0.302, 0.610, 1.009) (0.272, 0.273, 0.320)
(25, 4) 0.59 26.1 (4.8,8.1,12.8) (0.264, 0.596, 0.990) (0.155, 0.252, 0.302)
(25, 5) 0.85 26.7 (5.2,8.4,12.9) (0.261, 0.591, 0.987) (0.145, 0.242, 0.306)
(12,11,7) (20, 2) 0.14 20.8 (5.2,8.4,6.5) (0.254, 0.583, 1.106) (0.135, 0.239, 0.531)
(20, 3) 0.67 21.8 (5.7,8.8,6.6) (0.250, 0.587, 1.095) (0.122, 0.230, 0.517)
(25, 4) 0.23 25.9 (8,10.3,6.9) (0.233, 0.570, 1.063) (0.097, 0.210, 0.507)
(25, 5) 0.54 26.1 (8.2,10.4,7) (0.234, 0.567, 1.058) (0.096, 0.199, 0.502)
(20,20,20) (40, 2) 0.15 41.3 (7.9,14.5,17.9) (0.234, 0.548, 0.976) (0.095, 0.161, 0.256)
(40, 3) 0.88 43.9 (9.1,15.6,18.7) (0.230, 0.550, 0.973) (0.084, 0.152, 0.243)
(50, 4) 0.27 51.2 (12.4,18.2,19.7) (0.221, 0.540, 0.959) (0.069, 0.139, 0.233)
(50, 5) 0.70 52.1 (13.1,18.6,19.8) (0.221, 0.541, 0.956) (0.068, 0.136, 0.228)
(16,18,26) (40, 2) 0.37 41.8 (5.8,12.2,22.6) (0.248, 0.557, 0.960) (0.117, 0.177, 0.219)
(40, 3) 0.98 44.7 (7.3,14,24.3) (0.240, 0.556, 0.958) (0.100, 0.165, 0.207)
(50, 4) 0.53 51.6 (9.3,16,25.5) (0.229, 0.545, 0.949) (0.084, 0.148, 0.199)
(50, 5) 0.89 52.9 (10.2,16.6,25.7) (0.228, 0.545, 0.946) (0.079, 0.147, 0.194)
(24,22,14) (40, 2) 0.04 41 (10.4,16.7,12.9) (0.224, 0.538, 1.005) (0.076, 0.145, 0.302)
(40, 3) 0.68 42.7 (11.2,17.5,13.1) (0.224, 0.545, 1.001) (0.073, 0.144, 0.309)
(50, 4) 0.11 50.9 (15.7,20.5,13.9) (0.214, 0.535, 0.977) (0.058, 0.128, 0.287)
(50, 5) 0.46 51.5 (16.1,20.6,13.9) (0.215, 0.535, 0.977) (0.059, 0.129, 0.291)

The values of the learning rate parameter were η = 0.1, 0.4, 0.8 and 1. Note that for η = 0.1: c = −1.5, −1, −0.85, −0.75 and ν = −0.5, −0.1, 0.3, 0.5; for η = 0.4: c = −1, −0.5, −0.25, 0.1 and ν = −0.1, 0.7, 1, 4; and finally for η = 0.8, 1: c = −1, 0.1, 0.65, 1 and ν = 0.5, 1.5, 3.5, 8.5. The results of the Bayesian estimators of θ1, θ2 and θ3 for HCS-I and HCS-II are shown in Table 3, Table 4, Table 5, Table 6.

Table 3.

GB estimators under HCS-I, GE loss.

(n1,n2,n3) = (10,10,10); rows indexed by (r,T1). For each c, the cell gives (θ̂1GE, θ̂2GE, θ̂3GE) followed by Er(θ̂1GE, θ̂2GE, θ̂3GE).

η=0.1: c=−1.5 | c=−1
(20, 2) (0.249, 0.608, 1.014) (0.058, 0.125, 0.150) | (0.214, 0.531, 0.931) (0.029, 0.079, 0.093)
(20, 3) (0.245, 0.598, 0.999) (0.052, 0.122, 0.153) | (0.210, 0.524, 0.926) (0.029, 0.080, 0.098)
(25, 4) (0.244, 0.586, 1.001) (0.049, 0.125, 0.137) | (0.212, 0.526, 0.922) (0.030, 0.080, 0.097)
(25, 5) (0.239, 0.581, 0.988) (0.052, 0.123, 0.151) | (0.210, 0.525, 0.921) (0.031, 0.080, 0.096)

η=0.1: c=−0.85 | c=−0.75
(20, 2) (0.204, 0.505, 0.905) (0.024, 0.075, 0.095) | (0.193, 0.483, 0.878) (0.024, 0.071, 0.094)
(20, 3) (0.202, 0.502, 0.898) (0.025, 0.075, 0.094) | (0.192, 0.484, 0.873) (0.028, 0.074, 0.095)
(25, 4) (0.202, 0.501, 0.893) (0.028, 0.077, 0.093) | (0.194, 0.486, 0.873) (0.028, 0.074, 0.094)
(25, 5) (0.202, 0.509, 0.893) (0.028, 0.075, 0.094) | (0.191, 0.486, 0.871) (0.028, 0.075, 0.096)

η=0.4: c=−1 | c=−0.5
(20, 2) (0.240, 0.564, 0.976) (0.075, 0.175, 0.228) | (0.217, 0.534, 0.940) (0.064, 0.155, 0.211)
(20, 3) (0.233, 0.557, 0.959) (0.073, 0.164, 0.227) | (0.211, 0.520, 0.932) (0.064, 0.148, 0.207)
(25, 4) (0.229, 0.556, 0.952) (0.070, 0.157, 0.214) | (0.212, 0.534, 0.925) (0.061, 0.144, 0.196)
(25, 5) (0.225, 0.552, 0.959) (0.068, 0.153, 0.213) | (0.207, 0.515, 0.924) (0.060, 0.139, 0.196)

η=0.4: c=−0.25 | c=0.1
(20, 2) (0.203, 0.509, 0.917) (0.060, 0.147, 0.205) | (0.186, 0.479, 0.878) (0.062, 0.144, 0.199)
(20, 3) (0.200, 0.510, 0.917) (0.061, 0.146, 0.200) | (0.185, 0.473, 0.872) (0.062, 0.139, 0.193)
(25, 4) (0.203, 0.503, 0.901) (0.061, 0.136, 0.191) | (0.187, 0.496, 0.865) (0.057, 0.133, 0.187)
(25, 5) (0.199, 0.505, 0.890) (0.058, 0.134, 0.190) | (0.191, 0.495, 0.967) (0.057, 0.132, 0.189)

η=0.8: c=−1 | c=0.1
(20, 2) (0.254, 0.589, 1.012) (0.106, 0.216, 0.304) | (0.220, 0.528, 0.951) (0.087, 0.187, 0.267)
(20, 3) (0.197, 0.498, 0.999) (0.101, 0.203, 0.283) | (0.214, 0.531, 0.937) (0.086, 0.184, 0.257)
(25, 4) (0.238, 0.561, 0.979) (0.090, 0.187, 0.269) | (0.215, 0.526, 0.936) (0.079, 0.167, 0.246)
(25, 5) (0.235, 0.565, 0.988) (0.088, 0.183, 0.270) | (0.210, 0.522, 0.913) (0.075, 0.161, 0.235)

η=0.8: c=0.65 | c=1
(20, 2) (0.198, 0.510, 0.913) (0.084, 0.180, 0.259) | (0.183, 0.490, 0.899) (0.083, 0.179, 0.254)
(20, 3) (0.196, 0.497, 0.921) (0.081, 0.173, 0.251) | (0.185, 0.496, 0.888) (0.080, 0.169, 0.247)
(25, 4) (0.197, 0.522, 0.901) (0.073, 0.161, 0.237) | (0.193, 0.494, 0.876) (0.074, 0.159, 0.227)
(25, 5) (0.200, 0.503, 0.899) (0.074, 0.155, 0.240) | (0.191, 0.492, 0.873) (0.071, 0.153, 0.232)

η=1: c=−1 | c=0.1
(20, 2) (0.222, 0.543, 0.961) (0.010, 0.043, 0.090) | (0.194, 0.497, 0.904) (0.009, 0.037, 0.081)
(20, 3) (0.242, 0.559, 0.991) (0.012, 0.045, 0.093) | (0.214, 0.536, 0.935) (0.009, 0.037, 0.077)
(25, 4) (0.219, 0.549, 0.961) (0.008, 0.035, 0.078) | (0.200, 0.510, 0.936) (0.007, 0.030, 0.073)
(25, 5) (0.220, 0.551, 0.968) (0.008, 0.035, 0.078) | (0.212, 0.518, 0.923) (0.007, 0.029, 0.066)

η=1: c=0.65 | c=1
(20, 2) (0.181, 0.487, 0.892) (0.009, 0.038, 0.076) | (0.168, 0.462, 0.855) (0.009, 0.036, 0.070)
(20, 3) (0.202, 0.504, 0.920) (0.008, 0.034, 0.073) | (0.194, 0.495, 0.916) (0.008, 0.034, 0.072)
(25, 4) (0.187, 0.505, 0.894) (0.006, 0.029, 0.065) | (0.181, 0.499, 0.886) (0.006, 0.029, 0.064)
(25, 5) (0.204, 0.504, 0.924) (0.006, 0.028, 0.067) | (0.192, 0.490, 0.885) (0.006, 0.026, 0.060)

Table 4.

GB estimators under HCS-II, GE loss.

(n1,n2,n3) = (10,10,10); rows indexed by (r,T2). For each c, the cell gives (θ̂1GE, θ̂2GE, θ̂3GE) followed by Er(θ̂1GE, θ̂2GE, θ̂3GE).

η=0.1: c=−1.5 | c=−1
(20, 2) (0.245, 0.599, 1.008) (0.051, 0.120, 0.141) | (0.210, 0.523, 0.927) (0.031, 0.085, 0.098)
(20, 3) (0.246, 0.594, 1.009) (0.053, 0.127, 0.137) | (0.212, 0.525, 0.927) (0.032, 0.084, 0.097)
(25, 4) (0.238, 0.584, 0.988) (0.052, 0.119, 0.145) | (0.207, 0.520, 0.920) (0.033, 0.084, 0.097)
(25, 5) (0.238, 0.585, 0.988) (0.052, 0.116, 0.146) | (0.210, 0.524, 0.914) (0.033, 0.080, 0.099)

η=0.1: c=−0.85 | c=−0.75
(20, 2) (0.201, 0.501, 0.903) (0.027, 0.077, 0.095) | (0.191, 0.479, 0.872) (0.029, 0.075, 0.095)
(20, 3) (0.204, 0.502, 0.903) (0.028, 0.078, 0.092) | (0.194, 0.484, 0.873) (0.028, 0.076, 0.092)
(25, 4) (0.201, 0.501, 0.895) (0.031, 0.076, 0.095) | (0.190, 0.487, 0.872) (0.031, 0.077, 0.097)
(25, 5) (0.201, 0.500, 0.886) (0.031, 0.075, 0.095) | (0.192, 0.483, 0.867) (0.031, 0.075, 0.096)

η=0.4: c=−1 | c=−0.5
(20, 2) (0.236, 0.561, 0.978) (0.076, 0.165, 0.223) | (0.212, 0.524, 0.930) (0.067, 0.151, 0.204)
(20, 3) (0.232, 0.555, 0.947) (0.075, 0.164, 0.217) | (0.214, 0.536, 0.920) (0.065, 0.150, 0.199)
(25, 4) (0.226, 0.540, 0.958) (0.070, 0.152, 0.210) | (0.209, 0.493, 0.925) (0.063, 0.138, 0.196)
(25, 5) (0.224, 0.555, 0.942) (0.070, 0.151, 0.212) | (0.212, 0.513, 0.908) (0.062, 0.135, 0.192)

η=0.4: c=−0.25 | c=0.1
(20, 2) (0.201, 0.501, 0.905) (0.063, 0.143, 0.198) | (0.186, 0.483, 0.870) (0.063, 0.144, 0.197)
(20, 3) (0.205, 0.514, 0.901) (0.063, 0.144, 0.196) | (0.190, 0.492, 0.874) (0.063, 0.140, 0.190)
(25, 4) (0.201, 0.507, 0.902) (0.062, 0.134, 0.194) | (0.188, 0.497, 0.890) (0.059, 0.130, 0.189)
(25, 5) (0.203, 0.502, 0.886) (0.060, 0.131, 0.192) | (0.193, 0.483, 0.923) (0.060, 0.127, 0.191)

η=0.8: c=−1 | c=0.1
(20, 2) (0.245, 0.569, 0.994) (0.101, 0.199, 0.288) | (0.208, 0.533, 0.948) (0.085, 0.182, 0.260)
(20, 3) (0.246, 0.580, 0.984) (0.098, 0.196, 0.276) | (0.214, 0.540, 0.931) (0.083, 0.177, 0.252)
(25, 4) (0.238, 0.569, 0.964) (0.088, 0.184, 0.260) | (0.213, 0.526, 0.908) (0.077, 0.162, 0.239)
(25, 5) (0.236, 0.551, 0.991) (0.086, 0.175, 0.264) | (0.209, 0.514, 0.930) (0.075, 0.157, 0.242)

η=0.8: c=0.65 | c=1
(20, 2) (0.198, 0.506, 0.911) (0.082, 0.173, 0.245) | (0.183, 0.489, 0.885) (0.081, 0.170, 0.245)
(20, 3) (0.199, 0.507, 0.916) (0.080, 0.162, 0.242) | (0.196, 0.494, 0.885) (0.081, 0.164, 0.239)
(25, 4) (0.200, 0.504, 0.899) (0.073, 0.154, 0.235) | (0.191, 0.490, 0.875) (0.071, 0.150, 0.231)
(25, 5) (0.202, 0.507, 0.898) (0.073, 0.153, 0.232) | (0.196, 0.493, 0.871) (0.071, 0.150, 0.231)

η=1: c=−1 | c=0.1
(20, 2) (0.236, 0.554, 0.977) (0.009, 0.037, 0.087) | (0.214, 0.530, 0.934) (0.007, 0.033, 0.074)
(20, 3) (0.213, 0.529, 0.984) (0.006, 0.030, 0.081) | (0.197, 0.505, 0.916) (0.006, 0.027, 0.069)
(25, 4) (0.222, 0.562, 0.971) (0.005, 0.032, 0.079) | (0.209, 0.523, 0.930) (0.005, 0.027, 0.067)
(25, 5) (0.222, 0.543, 0.960) (0.005, 0.030, 0.074) | (0.207, 0.524, 0.929) (0.005, 0.027, 0.068)

η=1: c=0.65 | c=1
(20, 2) (0.196, 0.508, 0.917) (0.006, 0.029, 0.069) | (0.192, 0.488, 0.905) (0.006, 0.029, 0.069)
(20, 3) (0.183, 0.489, 0.896) (0.005, 0.027, 0.064) | (0.176, 0.475, 0.880) (0.005, 0.026, 0.064)
(25, 4) (0.198, 0.506, 0.915) (0.005, 0.025, 0.062) | (0.193, 0.503, 0.880) (0.004, 0.024, 0.058)
(25, 5) (0.197, 0.505, 0.911) (0.004, 0.025, 0.065) | (0.187, 0.492, 0.877) (0.004, 0.024, 0.059)

Table 5.

GB estimators under HCS-I, Linex loss.

(n1,n2,n3) = (10,10,10); rows indexed by (r,T1). For each ν, the cell gives (θ̂1L, θ̂2L, θ̂3L) followed by Er(θ̂1L, θ̂2L, θ̂3L).

η=0.1: ν=−0.5 | ν=−0.1
(20, 2) (0.222, 0.584, 1.026) (0.037, 0.115, 0.166) | (0.215, 0.539, 0.947) (0.031, 0.086, 0.110)
(20, 3) (0.219, 0.570, 1.018) (0.035, 0.112, 0.158) | (0.213, 0.535, 0.940) (0.028, 0.084, 0.109)
(25, 4) (0.219, 0.568, 1.004) (0.036, 0.110, 0.160) | (0.214, 0.542, 0.935) (0.031, 0.082, 0.105)
(25, 5) (0.217, 0.562, 1.006) (0.036, 0.110, 0.153) | (0.211, 0.528, 0.933) (0.032, 0.086, 0.105)

η=0.1: ν=0.3 | ν=0.5
(20, 2) (0.209, 0.501, 0.882) (0.025, 0.069, 0.090) | (0.206, 0.491, 0.856) (0.023, 0.067, 0.092)
(20, 3) (0.207, 0.499, 0.877) (0.026, 0.071, 0.090) | (0.204, 0.486, 0.853) (0.025, 0.069, 0.096)
(25, 4) (0.207, 0.509, 0.875) (0.029, 0.072, 0.089) | (0.204, 0.496, 0.855) (0.027, 0.072, 0.098)
(25, 5) (0.207, 0.499, 0.880) (0.028, 0.072, 0.094) | (0.203, 0.490, 0.852) (0.028, 0.070, 0.098)

η=0.4: ν=−0.1 | ν=0.7
(20, 2) (0.243, 0.574, 0.988) (0.077, 0.178, 0.237) | (0.230, 0.537, 0.922) (0.067, 0.150, 0.193)
(20, 3) (0.234, 0.563, 0.970) (0.074, 0.168, 0.228) | (0.225, 0.532, 0.898) (0.067, 0.144, 0.187)
(25, 4) (0.232, 0.558, 0.958) (0.070, 0.159, 0.220) | (0.223, 0.534, 0.907) (0.065, 0.138, 0.184)
(25, 5) (0.227, 0.554, 0.967) (0.068, 0.155, 0.216) | (0.222, 0.530, 0.899) (0.064, 0.137, 0.182)

η=0.4: ν=1 | ν=4
(20, 2) (0.227, 0.525, 0.895) (0.067, 0.140, 0.184) | (0.199, 0.440, 0.724) (0.050, 0.122, 0.217)
(20, 3) (0.222, 0.516, 0.882) (0.065, 0.138, 0.181) | (0.198, 0.442, 0.727) (0.051, 0.119, 0.215)
(25, 4) (0.221, 0.526, 0.885) (0.063, 0.132, 0.173) | (0.201, 0.448, 0.728) (0.052, 0.113, 0.208)
(25, 5) (0.219, 0.518, 0.883) (0.063, 0.131, 0.175) | (0.199, 0.451, 0.728) (0.051, 0.114, 0.209)

η=0.8: ν=0.5 | ν=1.5
(20, 2) (0.257, 0.568, 0.985) (0.103, 0.201, 0.275) | (0.244, 0.551, 0.907) (0.081, 0.181, 0.238)
(20, 3) (0.241, 0.571, 0.950) (0.096, 0.194, 0.266) | (0.233, 0.541, 0.914) (0.091, 0.175, 0.229)
(25, 4) (0.238, 0.568, 0.970) (0.090, 0.180, 0.253) | (0.230, 0.541, 0.914) (0.083, 0.163, 0.219)
(25, 5) (0.230, 0.551, 0.950) (0.085, 0.170, 0.249) | (0.228, 0.530, 0.904) (0.080, 0.157, 0.217)

η=0.8: ν=3.5 | ν=8.5
(20, 2) (0.230, 0.504, 0.846) (0.082, 0.152, 0.209) | (0.200, 0.424, 0.692) (0.065, 0.140, 0.255)
(20, 3) (0.221, 0.491, 0.835) (0.080, 0.147, 0.207) | (0.199, 0.431, 0.693) (0.066, 0.138, 0.251)
(25, 4) (0.216, 0.502, 0.844) (0.075, 0.139, 0.197) | (0.199, 0.441, 0.698) (0.062, 0.124, 0.237)
(25, 5) (0.218, 0.508, 0.820) (0.072, 0.136, 0.193) | (0.199, 0.433, 0.699) (0.061, 0.120, 0.246)

η=1: ν=0.5 | ν=1.5
(20, 2) (0.219, 0.526, 0.939) (0.010, 0.041, 0.079) | (0.214, 0.509, 0.896) (0.009, 0.034, 0.067)
(20, 3) (0.236, 0.558, 0.978) (0.011, 0.042, 0.083) | (0.232, 0.530, 0.918) (0.010, 0.035, 0.064)
(25, 4) (0.216, 0.535, 0.958) (0.007, 0.032, 0.072) | (0.213, 0.517, 0.915) (0.007, 0.027, 0.060)
(25, 5) (0.222, 0.545, 0.952) (0.008, 0.033, 0.072) | (0.223, 0.528, 0.918) (0.007, 0.028, 0.057)

η=1: ν=3.5 | ν=8.5
(20, 2) (0.200, 0.467, 0.827) (0.007, 0.027, 0.055) | (0.180, 0.414, 0.705) (0.006, 0.026, 0.072)
(20, 3) (0.222, 0.507, 0.854) (0.008, 0.027, 0.049) | (0.197, 0.442, 0.718) (0.006, 0.021, 0.055)
(25, 4) (0.208, 0.491, 0.844) (0.006, 0.023, 0.047) | (0.192, 0.446, 0.731) (0.005, 0.019, 0.058)
(25, 5) (0.212, 0.500, 0.865) (0.006, 0.021, 0.047) | (0.198, 0.444, 0.728) (0.005, 0.017, 0.053)

Table 6.

GB estimators under HCS-II, Linex loss.

(n1,n2,n3) = (10,10,10); rows indexed by (r,T2). For each ν, the cell gives (θ̂1L, θ̂2L, θ̂3L) followed by Er(θ̂1L, θ̂2L, θ̂3L).

η=0.1: ν=−0.5 | ν=−0.1
(20, 2) (0.219, 0.569, 1.012) (0.036, 0.114, 0.167) | (0.212, 0.531, 0.942) (0.030, 0.088, 0.107)
(20, 3) (0.221, 0.569, 1.014) (0.036, 0.115, 0.163) | (0.213, 0.530, 0.941) (0.032, 0.092, 0.108)
(25, 4) (0.217, 0.560, 1.005) (0.037, 0.110, 0.153) | (0.209, 0.531, 0.930) (0.034, 0.085, 0.107)
(25, 5) (0.217, 0.561, 0.999) (0.038, 0.108, 0.151) | (0.214, 0.527, 0.931) (0.033, 0.085, 0.105)

η=0.1: ν=0.3 | ν=0.5
(20, 2) (0.206, 0.499, 0.883) (0.027, 0.074, 0.093) | (0.202, 0.486, 0.853) (0.027, 0.071, 0.097)
(20, 3) (0.206, 0.510, 0.877) (0.030, 0.073, 0.088) | (0.205, 0.491, 0.850) (0.027, 0.071, 0.089)
(25, 4) (0.206, 0.504, 0.882) (0.031, 0.073, 0.089) | (0.202, 0.490, 0.853) (0.030, 0.072, 0.098)
(25, 5) (0.207, 0.499, 0.869) (0.032, 0.072, 0.090) | (0.203, 0.489, 0.848) (0.030, 0.069, 0.099)

η=0.4: ν=−0.1 | ν=0.7
(20, 2) (0.232, 0.569, 0.982) (0.075, 0.170, 0.230) | (0.224, 0.536, 0.904) (0.069, 0.146, 0.189)
(20, 3) (0.233, 0.564, 0.984) (0.075, 0.165, 0.226) | (0.225, 0.538, 0.916) (0.068, 0.142, 0.185)
(25, 4) (0.228, 0.547, 0.959) (0.070, 0.153, 0.214) | (0.217, 0.526, 0.900) (0.065, 0.134, 0.182)
(25, 5) (0.222, 0.550, 0.956) (0.070, 0.152, 0.212) | (0.221, 0.528, 0.898) (0.066, 0.133, 0.180)

η=0.4: ν=1 | ν=4
(20, 2) (0.222, 0.522, 0.884) (0.067, 0.141, 0.180) | (0.197, 0.441, 0.729) (0.054, 0.122, 0.220)
(20, 3) (0.221, 0.518, 0.883) (0.066, 0.137, 0.176) | (0.198, 0.449, 0.730) (0.054, 0.118, 0.211)
(25, 4) (0.217, 0.528, 0.867) (0.064, 0.131, 0.173) | (0.198, 0.439, 0.724) (0.054, 0.108, 0.215)
(25, 5) (0.216, 0.522, 0.879) (0.064, 0.127, 0.173) | (0.200, 0.455, 0.728) (0.053, 0.113, 0.214)

η=0.8: ν=0.5 | ν=1.5
(20, 2) (0.241, 0.561, 0.984) (0.099, 0.191, 0.267) | (0.235, 0.546, 0.916) (0.092, 0.174, 0.233)
(20, 3) (0.243, 0.557, 0.980) (0.094, 0.184, 0.257) | (0.234, 0.528, 0.920) (0.089, 0.166, 0.223)
(25, 4) (0.225, 0.541, 0.943) (0.084, 0.168, 0.248) | (0.225, 0.538, 0.907) (0.081, 0.158, 0.219)
(25, 5) (0.230, 0.543, 0.927) (0.084, 0.168, 0.237) | (0.225, 0.522, 0.913) (0.080, 0.154, 0.220)

η=0.8: ν=3.5 | ν=8.5
(20, 2) (0.217, 0.501, 0.834) (0.080, 0.149, 0.204) | (0.198, 0.435, 0.699) (0.067, 0.140, 0.254)
(20, 3) (0.224, 0.491, 0.829) (0.080, 0.145, 0.197) | (0.200, 0.449, 0.700) (0.067, 0.137, 0.244)
(25, 4) (0.216, 0.504, 0.824) (0.074, 0.137, 0.197) | (0.197, 0.435, 0.697) (0.062, 0.122, 0.246)
(25, 5) (0.214, 0.504, 0.827) (0.074, 0.133, 0.197) | (0.200, 0.441, 0.697) (0.063, 0.122, 0.244)

η=1: ν=0.5 | ν=1.5
(20, 2) (0.241, 0.552, 0.964) (0.009, 0.033, 0.074) | (0.228, 0.525, 0.929) (0.007, 0.028, 0.061)
(20, 3) (0.212, 0.533, 0.948) (0.006, 0.030, 0.070) | (0.211, 0.516, 0.918) (0.006, 0.026, 0.061)
(25, 4) (0.218, 0.550, 0.967) (0.005, 0.029, 0.070) | (0.214, 0.525, 0.915) (0.005, 0.024, 0.055)
(25, 5) (0.211, 0.536, 0.948) (0.005, 0.029, 0.066) | (0.213, 0.529, 0.908) (0.005, 0.025, 0.056)

η=1: ν=3.5 | ν=8.5
(20, 2) (0.217, 0.498, 0.861) (0.006, 0.023, 0.047) | (0.197, 0.443, 0.725) (0.005, 0.019, 0.055)
(20, 3) (0.201, 0.481, 0.841) (0.005, 0.021, 0.047) | (0.184, 0.433, 0.727) (0.004, 0.020, 0.062)
(25, 4) (0.214, 0.504, 0.835) (0.005, 0.020, 0.042) | (0.196, 0.445, 0.723) (0.004, 0.015, 0.052)
(25, 5) (0.205, 0.495, 0.847) (0.004, 0.019, 0.045) | (0.193, 0.445, 0.739) (0.003, 0.016, 0.059)

3.2. Illustrative example

To demonstrate the usefulness of the results derived in the previous sections, we selected from Nelson's data [see Ref. [26], p. 462] three samples of size n1 = n2 = n3 = 10 (groups 1, 4 and 5), corresponding to times to failure (in minutes) of an insulating liquid subjected to high stress. Table 7 lists these failure times (denoted as samples Xi, i=1,2,3) together with their order statistics with respect to (W, ji).

Table 7.

Sample X1, X2 and X3, and their order (w, ji), where δji=1.

Sample Data
X1 1.89, 4.03, 1.54, 0.31, 0.66, 1.7, 2.17, 1.82, 9.99, 2.24
X2 1.17, 3.87, 2.8, 0.7, 3.82, 0.02, 0.5, 3.72, 0.06, 3.57
X3 8.11, 3.17, 5.55, 0.80, 0.20, 1.13, 6.63, 1.08, 2.44, 0.78
Ordered data (w, ji)
(0.02,2), (0.06,2), (0.20,3), (0.31,1), (0.50,2), (0.66,1), (0.70,2), (0.78,3), (0.80,3), (1.08,3), (1.13,3), (1.17,2), (1.54,1), (1.70,1), (1.82,1), (1.89,1), (2.17,1), (2.24,1), (2.44,3), (2.80,2), (3.17,3), (3.57,2), (3.72,2), (3.82,2), (3.87,2), (4.03,1), (5.55,3), (6.63,3), (8.11,3), (9.99,1)

For the GB study, there is no prior information, so a noninformative prior should be used (which gives the same results as the MLEs for c = −1 or as ν tends to zero); we therefore set the hyperparameters to aj = bj = 0.0001, j=1,2,3. We chose the values c = −1, −0.8, −0.3; ν = 0.1, 0.3, 1; T1 = 2, 3 for r = 20 and T1 = 2.5, 4 for r = 25; T2 = 2, 3.8 for r = 20 and T2 = 4, 9 for r = 25; and η = 0.1, 0.4.
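Under these choices, the computation reduces to ordering the pooled data of Table 7, truncating at T, and applying (12) and (13). The sketch below (reusing gb_estimators from the earlier sketch; shown for the HCS-I case r = 20, T1 = 2 of Table 8) is one hypothetical way to reproduce a row.

```python
import numpy as np

x1 = [1.89, 4.03, 1.54, 0.31, 0.66, 1.7, 2.17, 1.82, 9.99, 2.24]
x2 = [1.17, 3.87, 2.8, 0.7, 3.82, 0.02, 0.5, 3.72, 0.06, 3.57]
x3 = [8.11, 3.17, 5.55, 0.80, 0.20, 1.13, 6.63, 1.08, 2.44, 0.78]

times = np.concatenate([x1, x2, x3])
labels = np.repeat([0, 1, 2], 10)          # which sample each time came from
order = np.argsort(times)
w, lab = times[order], labels[order]

r, T1 = 20, 2.0                            # HCS-I: stop at T = min(w_r, T1)
T = min(w[r - 1], T1)
keep = w <= T
w_obs, j_obs = w[keep], lab[keep]

a = b = 1e-4                               # near-noninformative hyperparameters
for j in range(3):
    Dj = int((j_obs == j).sum())
    uj = w_obs[j_obs == j].sum() + T * (10 - Dj)
    print(f"line {j+1}:", gb_estimators(Dj, uj, a, b, eta=0.1, nu=0.3, c=-0.8))
```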

Table 8 shows the MLE and Bayesian estimation of the parameters for HCS-I, while Table 9 shows the results for HCS-II.

Table 8.

ML and GB estimators under HCS-I

r (r1,r2,r3) T1 (θˆ1,θˆ2,θˆ3)
20 (6,5,5) 2 MLE (0.377, 0.402, 0.357)
GB η=0.1 η=0.4
c=−1 (0.377, 0.402, 0.357) (0.377, 0.402, 0.357)
c=−0.8 (0.329, 0.343, 0.305) (0.362, 0.383, 0.341)
c=−0.3 (0.208, 0.198, 0.176) (0.324, 0.336, 0.300)
υ=0.1 (0.390, 0.419, 0.371) (0.380, 0.405, 0.360)
υ=0.3 (0.346, 0.360, 0.324) (0.368, 0.390, 0.348)
υ=1 (0.293, 0.295, 0.270) (0.350, 0.366, 0.329)
20 (8,6,6) 3 MLE (0.476, 0.365, 0.371)
GB η=0.1 η=0.4
c=−1 (0.476, 0.365, 0.371) (0.476, 0.365, 0.371)
c=−0.8 (0.427, 0.319, 0.324) (0.462, 0.351, 0.356)
c=−0.3 (0.304, 0.202, 0.208) (0.426, 0.314, 0.319)
υ=0.1 (0.491, 0.377, 0.383) (0.480, 0.368, 0.374)
υ=0.3 (0.438, 0.335, 0.340) (0.466, 0.357, 0.362)
υ=1 (0.374, 0.285, 0.289) (0.444, 0.340, 0.348)
25 (8,5,6) 2.5 MLE (0.462, 0.334, 0.365)
GB η=0.1 η=0.4
c=−1 (0.462, 0.334, 0.365) (0.462, 0.334, 0.365)
c=−0.8 (0.415, 0.286, 0.319) (0.448, 0.319, 0.351)
c=−0.3 (0.295, 0.165, 0.202) (0.413, 0.280, 0.340)
υ=0.1 (0.476, 0.347, 0.377) (0.465, 0.337, 0.368)
υ=0.3 (0.426, 0.305, 0.335) (0.452, 0.326, 0.357)
υ=1 (0.364, 0.257, 0.286) (0.431, 0.310, 0.340)
25 (8,10,7) 4 MLE (0.476, 0.494, 0.366)
GB η=0.1 η=0.4
c=−1 (0.476, 0.494, 0.366) (0.476, 0.494, 0.366)
c=−0.8 (0.427, 0.452, 0.325) (0.462, 0.482, 0.354)
c=−0.3 (0.304, 0.345, 0.220) (0.426, 0.452, 0.323)
υ=0.1 (0.491, 0.507, 0.376) (0.480, 0.498, 0.369)
υ=0.3 (0.438, 0.461, 0.340) (0.466, 0.485, 0.359)
υ=1 (0.373, 0.402, 0.294) (0.444, 0.466, 0.344)

Table 9.

ML and GB estimators under HCS-II

r (r1,r2,r3) T2 (θˆ1,θˆ2,θˆ3)
20 (8,6,6) 2 MLE (0.476, 0.365, 0.371)
GB η=0.1 η=0.4
c=−1 (0.476, 0.365, 0.371) (0.476, 0.365, 0.371)
c=−0.8 (0.428, 0.319, 0.324) (0.462, 0.351, 0.356)
c=−0.3 (0.304, 0.202, 0.208) (0.426, 0.314, 0.319)
υ=0.1 (0.491, 0.377, 0.383) (0.479, 0.368, 0.374)
υ=0.3 (0.438, 0.335, 0.340) (0.466, 0.357, 0.362)
υ=1 (0.374, 0.285, 0.289) (0.444, 0.340, 0.345)
20 (8,8,7) 3.8 MLE (0.401, 0.397, 0.333)
GB η=0.1 η=0.4
c=−1 (0.401, 0.397, 0.333) (0.401, 0.397, 0.333)
c=−0.8 (0.361, 0.357, 0.297) (0.389, 0.385, 0.322)
c=−0.3 (0.256, 0.254, 0.199) (0.359, 0.355, 0.293)
υ=0.1 (0.412, 0.408, 0.342) (0.404, 0.399, 0.335)
υ=0.3 (0.374, 0.371, 0.312) (0.394, 0.390, 0.327)
υ=1 (0.325, 0.323, 0.273) (0.378, 0.374, 0.315)
25 (8,10,7) 4 MLE (0.394, 0.494, 0.324)
GB η=0.1 η=0.4
c=−1 (0.394, 0.494, 0.324) (0.394, 0.494, 0.324)
c=−0.8 (0.354, 0.452, 0.288) (0.382, 0.482, 0.313)
c=−0.3 (0.251, 0.345, 0.194) (0.352, 0.452, 0.285)
υ=0.1 (0.404, 0.507, 0.332) (0.396, 0.497, 0.326)
υ=0.3 (0.367, 0.461, 0.303) (0.386, 0.485, 0.318)
υ=1 (0.320, 0.402, 0.266) (0.371, 0.466, 0.307)
25 (9,10,10) 9 MLE (0.355, 0.494, 0.335)
GB η=0.1 η=0.4
c=−1 (0.355, 0.494, 0.335) (0.355, 0.494, 0.335)
c=−0.8 (0.322, 0.452, 0.306) (0.345, 0.482, 0.326)
c=−0.3 (0.238, 0.345, 0.233) (0.322, 0.452, 0.306)
υ=0.1 (0.362, 0.507, 0.340) (0.356, 0.497, 0.336)
υ=0.3 (0.336, 0.461, 0.319) (0.349, 0.485, 0.331)
υ=1 (0.299, 0.402, 0.289) (0.338, 0.466, 0.321)

4. Discussion and conclusion

In this study, we examined HCS-I and HCS-II when the lifetimes of the three populations have exponential distributions. Using different values of the learning rate parameter η and of the GE and Linex loss parameters, in a simulation study and an example, we derived the MLEs and the Bayesian estimates of the parameters. In both the simulation study and the example, the GBEs outperformed the MLEs. We therefore discuss the Bayesian results, based on the estimator values and their estimated risks (Er), in detail below:

  • For η = 0.1, the results are overestimated for c = −1.5, −1; ν = −0.5, −0.1 but underestimated for c = −0.75; ν = 0.5, so that c = −0.85; ν = 0.3 leads to the best estimation results.

  • For η = 0.4, there is overestimation for c = −1, −0.5; ν = −0.1, 0.7 but underestimation for c = 0.1; ν = 4, so c = −0.25; ν = 1 leads to the best estimation results.

  • For η = 0.8, 1, the best results are obtained for c = 0.65; ν = 3.5.

  • This means that, under HCS-I and HCS-II, the majority of the GB results for θ1, θ2 and θ3 at η = 0.1 are overestimates for c < −0.85; ν < 0.3 but underestimates for c > −0.85; ν > 0.3; the GB results at η = 0.4 are overestimates for c < −0.25; ν < 1 but underestimates for c > −0.25; ν > 1; and the GB results at η = 0.8, 1 are overestimates for c < 0.65; ν < 3.5 but underestimates for c > 0.65; ν > 3.5.

From the simulation study, we can conclude the following.

  • i.

    Due to the chosen values of T1 and T2, the results under HCS-II are slightly better than those under HCS-I.

  • ii.

    The best results are obtained for η = 0.1, c = −0.85; ν = 0.3.

  • iii.

    The results are affected by the different values of η, c and ν; the learning rate parameter performs best at η = 0.1, which means that small values of η give better results; therefore, the GBE is better than the traditional Bayes estimator.

As for the illustrative example, under HCS-II with T2 = 9 in Table 9, the number of observations is 29, nearly the complete sample, which gives the best results for the MLEs and the GBEs at η = 0.1, c = −0.8; ν = 0.3, and also at η = 0.4, c = −0.3; ν = 1, as underlined in Table 9.

Regarding the effect of the learning rate parameter on the estimation results, it may be interesting to investigate GB for different distributions under different types of censoring schemes.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The data used to support the findings of this study are included in the article.

CRediT authorship contribution statement

Yahia Abdel-Aty: Project administration, Methodology, Investigation. Mohamed Kayid: Writing – original draft, Formal analysis, Data curation, Conceptualization. Ghadah Alomani: Writing – review & editing, Supervision, Software, Resources, Funding acquisition.

Declaration of competing interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Ghadah Alomani reports financial support was provided by Princess Nourah bint Abdulrahman University. Ghadah Alomani reports a relationship with Princess Nourah bint Abdulrahman University that includes: employment. If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

The authors would like to thank the three anonymous reviewers for their thorough review of our article and their numerous comments and recommendations. The authors extend their sincere appreciation to Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R226), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Appendix.

Algorithm1

Calculating the MLEs under HCS-I [HCS-II]

Step 1

[Enter the values]

  • [B = number of iterations, nj = sample sizes, r = prefixed number of observations, T1, T2 = prefixed times, θj = assumed values of the exponential parameters]

Read (B,r,T1[T2],nj);j=1,2,3.

Step 2

[Initialize the variables]

B*=0; d1j=0; d2j=0; d3j=0; M=0; L=0

Step 3

For I=1 to B

Step 4

[Generating samples]

Generate 3 samples Xj[I] with sizes nj from EXP(θj).

Step 5

[Combine the generated samples in one ordered sample]

W[I]=sort(X1[I],X2[I],X3[I])

Step 6

[Determine the termination time of the experiment (HCS-I)]

T[I]=min(wr[I],T1)

[For HCS-II, the termination time is T[I]=max(wr[I],T2).]

Step 7

[Compute the number of observations, where R1, R2 are the numbers of observations up to times T1, T2]

D[I]=min(r,R1) (HCS-I)

[D[I]=max(r,R2) (HCS-II)]

Step 8

[Observations up to time T[I]]

Yj[I]=[Xj[I] ≤ T[I]].

Step 9

Dj[I]=#[Yj[I]].

Step 10

[If any sample has no observed failure, discard the replication; B* counts the retained replications with Dj ≥ 1 for all j]

If Dj[I]=0 for any j, Go To Step 3.

B*=B*+1

Step 11

uj[I]=sum(Yj[I])+T[I](nj−Dj[I]).

Step 12

[Computing MLEs]

θ̂jM[I]=Dj[I]/uj[I]
Vj[I]=(θj−θ̂jM[I])²
d1j=d1j+θ̂jM[I]
d2j=d2j+Vj[I]

Step 13

[Compute the number of replications with D[I]=r, and accumulate the sums of rj and R1]

If D[I]=r then M=M+1; d3j=Dj[I]+d3j.

Else L=R1[I]+L.

End If.

Step 14

[ Stop for loop]

End for.

Step 15

[Compute the MLEs, the estimated risks, the means of rj and R1, and the ratio p1]

θ̂jM=d1j/B*; ERj=d2j/B*; rj=d3j/M; R1=L/(B*−M); p1=1−M/B*

Step 16

Print (p1, R1, r1, r2, r3, θ̂1M, θ̂2M, θ̂3M, ER1, ER2, ER3). Stop.

Algorithm2

Calculating the GBEs under HCS-I [HCS-II]

Step 1: [Enter the values]

Read (B, c, ν, η, r, T1 [T2], nj, aj, bj); j=1,2,3.

Step 2: [Set θj to the mean of Gam(aj,bj)]

θj=aj/bj

Step 12: [Compute the GBEs under the Linex loss]

θ̂jL[I]=((Dj[I]η+aj)/ν)·ln(1+ν/(uj[I]η+bj))

Step 13: [Compute the GBEs under the GE loss]

θ̂jE[I]=(Γ(Dj[I]η+aj−c)/Γ(Dj[I]η+aj))^(−1/c)·1/(uj[I]η+bj)

[The remaining steps proceed as in Algorithm 1.]

Step 16: Print (θˆ1L,θˆ2L,θˆ3L,ER1L,ER2L,ER3L)

Print (θˆ1E,θˆ2E,θˆ3E,ER1E,ER2E,ER3E)

Stop.

References

  • 1. Miller J.W., Dunson D.B. Robust Bayesian inference via coarsening. J. Am. Stat. Assoc. 2019;114(527):1113–1125. doi: 10.1080/01621459.2018.1469995.
  • 2. Grünwald P. The safe Bayesian: learning the learning rate via the mixability gap. In: Algorithmic Learning Theory. Lecture Notes in Computer Science, vol. 7568. Springer, Heidelberg; 2012:169–183.
  • 3. Grünwald P., van Ommen T. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it. Bayesian Anal. 2017;12(4):1069–1103.
  • 4. Grünwald P. Safe probability. J. Stat. Plann. Inference. 2018:47–63.
  • 5. De Heide R., Kirichenko A., Grünwald P., Mehta N. Safe-Bayesian generalized linear regression. International Conference on Artificial Intelligence and Statistics. 2020;106(113):2623–2633.
  • 6. Holmes C.C., Walker S.G. Assigning a value to a power likelihood in a general Bayesian model. Biometrika. 2017:497–503.
  • 7. Lyddon S.P., Holmes C.C., Walker S.G. General Bayesian updating and the loss-likelihood bootstrap. Biometrika. 2019:465–478.
  • 8. Martin R. Invited comment on the article by van der Pas, Szabó, and van der Vaart. Bayesian Anal. 2017:1254–1258.
  • 9. Martin R., Ning B. Empirical priors and coverage of posterior credible sets in a sparse normal mean model. Sankhyā Ser. A (Special Issue in Memory of Jayanta K. Ghosh). 2020:477–498.
  • 10. Wu P.S., Martin R. A comparison of learning rate selection methods in generalized Bayesian inference. Bayesian Anal. 2023;18(1):105–132. doi: 10.1214/21-BA1302.
  • 11. Abdel-Aty Y., Kayid M., Alomani G. Generalized Bayes estimation based on a joint type-II censored sample from k-exponential populations. Mathematics. 2023;11:2190. doi: 10.3390/math11092190.
  • 12. Abdel-Aty Y., Kayid M., Alomani G. Generalized Bayes prediction study based on joint type-II censoring. Axioms. 2023;12:716. doi: 10.3390/axioms12070716.
  • 13. Shafay A.R., Balakrishnan N., Abdel-Aty Y. Bayesian inference based on a jointly type-II censored sample from two exponential populations. J. Stat. Comput. Simulat. 2014:2427–2440.
  • 14. Abdel-Aty Y. Exact likelihood inference for two populations from two-parameter exponential distributions under joint Type-II censoring. Commun. Stat. Theor. Methods. 2017:9026–9041.
  • 15. Abo-Kasem O.E., Nassar M., Dey S., Rasouli A. Classical and Bayesian estimation for two exponential populations based on joint type-I progressive hybrid censoring scheme. Am. J. Math. Manag. Sci. 2019;38(4):373–385. doi: 10.1080/01966324.2019.1570407.
  • 16. Balakrishnan N., Feng S. Exact likelihood inference for k exponential populations under joint type-II censoring. Commun. Stat. Simulat. Comput. 2015;44(3):591–613.
  • 17. Balakrishnan N., Feng S., Kinyat L. Exact likelihood inference for k exponential populations under joint progressive type-II censoring. Commun. Stat. Simulat. Comput. 2015;44(3):902–923.
  • 18. Balakrishnan N., Kundu D. Hybrid censoring: models, inferential results and applications. Comput. Stat. Data Anal. 2013;57(1):166–209.
  • 19. Dutta S., Lio Y., Kayal S. Parametric inferences using dependent competing risks data with partially observed failure causes from MOBK distribution under unified hybrid censoring. J. Stat. Comput. Simulat. 2024;94(2):376–399. doi: 10.1080/00949655.2023.2249165.
  • 20. Childs A., Chandrasekar B., Balakrishnan N., Kundu D. Exact likelihood inference based on Type-I and Type-II hybrid censored samples from the exponential distribution. Ann. Inst. Stat. Math. 2003;55:319–330.
  • 21. Dutta S., Lio Y., Kayal S. Parametric inferences using dependent competing risks data with partially observed failure causes from MOBK distribution under unified hybrid censoring. J. Stat. Comput. Simulat. 2024;94(2):376–399.
  • 22. Abdel-Aty Y., Kayid M., Alomani G. Bayesian estimation based on learning rate parameter under the joint hybrid censoring scheme for k exponential populations. Preprint; 2023.
  • 23. Varian H.R. A Bayesian approach to real estate assessment. North-Holland; Amsterdam: 1975.
  • 24. Dey D.K., Ghosh M., Srinivasan C. Simultaneous estimation of parameters under entropy loss. J. Stat. Plann. Inference. 1987;15:347–363.
  • 25. Dey D.K., Liao P.L. On comparison of estimators in a generalized life model. Microelectron. Reliab. 1992;32:207–221.
  • 26. Nelson W. Applied Life Data Analysis. Wiley; New York, NY, USA: 1982.
