Abstract
An estimator for the load share parameters in an equal load-share model is derived based on observing k-component parallel systems of identical components that have a continuous distribution function F (·) and failure rate r(·). In an equal load share model, after the first of k components fails, failure rates for the remaining components change from r(t) to γ1 r(t), then to γ2 r(t) after the next failure, and so on. On the basis of observations on n independent and identical systems, a semiparametric estimator of the component baseline cumulative hazard function R = − log(1 − F) is presented, and its asymptotic limit process is established to be a Gaussian process. The effect of estimation of the load-share parameters is considered in the derivation of the limiting process. Potential applications can be found in diverse areas, including materials testing, software reliability and power plant safety assessment.
Keywords: Dependent systems, Nelson-Aalen estimator, proportional hazards
1 Introduction
Most reliability methods are intended for components that operate independently within a system. It is more realistic, however, to develop models that incorporate stochastic dependencies among the system’s components. In many systems, the performance of a functioning component will be affected by how the other components within the system are operating or not operating (cf., Hollander and Peña, 1995). Statistical methods for analyzing systems with dependent components are not yet well developed. Real examples of dependent systems include fiber composites, software and hardware systems, power plants, automobiles, and materials subject to failure due to crack growth, to name just a few.
In the nuclear power industry, for example, components are added redundantly to systems to safeguard against core meltdown. If the failure of one back-up system adversely affects the operation of another, the probability of core meltdown can increase significantly. Where four to eight motor-operated valves are employed to ensure the circulation of cooling water around the reactor, the failure of one or two valves can induce a higher rate of failure of the remaining valves due to increased water pressure, thus diminishing the effect of the component redundancy.
Unfortunately, analysts have few options for modeling dependent systems. Existing methods for such systems studied in engineering and the physical sciences are typically based on two classes of models: shock models and load-share models. Shock models, such as Marshall and Olkin’s (1967) bivariate exponential model, enable the user to model component dependencies by incorporating latent variables to allow simultaneous component failures.
Load share models dictate that component failure rates depend on the operating status of the other system components and the effective system structure function. Daniels (1945) originally adopted this model to describe how the strain on yarn fibers increases as individual fibers within a bundle break. Freund (1961) formalized the probability theory for a bivariate exponential load share model. In most applications, the shock model provides an easier avenue for multivariate modeling of system component lifetimes. However, dynamic models such as the load-share model are deemed more realistic in environments where a component’s performance can change once another component in the system fails or degrades.
Perhaps the most important element of the load-share model is the rule that governs how failure rates change after some components in the system fail. This rule depends on the reliability application and how the components within the system interact, i.e., through the structure function. For researchers in the textile industry who deal with the reliability of composite materials, a bundle of fibers can be considered as a parallel system subject to a steady tensile load. The rate of failure for individual fibers depends on how the unbroken fibers within the bundle share the load of this overall stress. The load share rule of such a system depends on the physical properties of the fiber composite. Yarn bundles or untwisted cables tend to spread the stress load uniformly after individual failures. This leads to an equal load-share rule, which implies the existence of a constant system load that is distributed equally among the working components.
In more complex settings, a bonding matrix joins the individual fibers as a composite material, and an individual fiber failure affects the load of certain surviving fibers (e.g., neighbors) more than others. This characterizes a local load sharing rule, where a failed component’s load is transferred to adjacent components; the proportion of the load the surviving components inherit depends on their ‘distance’ to the failed component. A more general monotone load sharing rule assumes only that the load on any individual component is nondecreasing as other items fail. Lynch (1999) characterized some relationships between the failure rate and the load-share rule based on a monotone load share rule. Relationships for some specific load share rules are studied in Durham and Lynch (2000).
Past research has stressed reliability estimation based on known load share rules. Statistical methods are relatively undeveloped for characterizing systems with dependent components by estimating unknown parameters of the load-share rule. In this paper, we consider estimating the component baseline lifetime distribution based on observing dynamic systems of identical components. Dependence between system components is modeled through a load-share framework, with the load-sharing rule containing unknown parameters. Of primary interest in the model is the baseline distribution, but the parameters of the load-sharing rule may also be of importance, such as when an estimate of the system reliability is desired, or they could simply be viewed as nuisance parameters. We focus on the equal load-share rule, where the failure rate of the remaining functioning components within the system changes uniformly after each component failure, but with the magnitudes of change being unknown.
2 Examples of Load-Share Systems
The load share rule has obvious potential for application in modeling systems with interdependent components, as described in the preceding section. The load-sharing framework also applies to problems of detecting members of a finite population. Suppose the resources allocated toward finding a finite set of items are defined globally, rather than assigned individually. Once items are detected, resources can be redistributed for the problem of detecting the remaining items, and this action gives rise to a load sharing model. In most cases, the items are identical to the observer, and an equal load-share rule is appropriate for characterizing the system dependence.
Unlike load-share models for fiber strength, these more general models give no indication of how load share parameters might change as other components fail. In this case, inference based on known load share parameters seems unrealistic and the problem of estimating those parameters becomes crucial.
We have already discussed two examples for which the load-share rule might apply: risk assessment in power plants, and the study of fiber strength in relation to fiber composites in textile engineering. Other important examples include the following:
Software Reliability
The load-sharing model generalizes the dynamic model suggested by Jelinski and Moranda (1972), among others, for software reliability. The most basic problem is to assume that an unknown number of faults exist in the system (i.e., software). After a fixed time, some number of faults are found, and the number of remaining faults is to be estimated. The load-share model represents a more flexible and realistic method of predicting the detection of faults by acknowledging the dynamic nature of fault detection when some faults have already been found. For instance, in problems where the number of software bugs is relatively small, the discovery of a major defect can help conceal or reveal other existing bugs in the software.
Civil Engineering
With a large structure supported by welded joints, the structure fails only after a series of supporting joints fail. The failure of one or two welded joints in a bridge support, for instance, might cause the stress on remaining joints to increase, thus causing earlier subsequent failures. Static reliability models fail to consider the changing stress in this setting, which constitutes a load-sharing model.
Materials Testing
Fatigue and material degradation are often characterized by crack growth, especially in large structures such as an airplane engine turbine or a commercial airplane fuselage. At the microscopic level, these materials have an intractable number of cracks, with only a few becoming large enough to be measurable, usually at stress centers such as edges, rivets, etc. It is known that the largest crack in a (predefined) local area will inherit much of the test stress, and thus will grow at a faster rate than the other measurable cracks; see Carlson and Kardomateas (1996) for instance. This provides a platform for extending the load-share model to degradation data. Certainly, the interdependence between crack growths cannot be modeled using simple physical principles, thus a nonparametric load-share model has potential application.
A similar approach, used in modeling the incubation period for the Human Immunodeficiency Virus (HIV) in Jewell and Kalbfleisch (1996), is based on marker processes. Rate changes can be incorporated into the model via time-dependent stochastic markers that carry covariate information. Marker processes are based on the shock model approach to describing component dependence, but are closely related to load-sharing models. As an illustrative reliability example, a car’s odometer serves as an obvious marker for the car’s chronological lifetime. This approach serves as a natural one for modeling crack growth in materials using observed degradation (e.g., crack size) as a stochastic marker.
Population Sampling
In wildlife studies, population sizes are estimated from relatively small samples. Capture/recapture methods can be used for these estimation methods, and involve finding previously tagged animals in order to deduce the sample’s size relative to the larger population. In some cases, the detection of a tagged animal may affect the detection rate of the remaining sample. When recapture probabilities are significantly nonzero, the load-share framework allows the experimenter to modify the detection model after a recapture occurs.
Combat Modeling
The attrition of military hardware and personnel in combat situations is highly dynamic, and the loss of one component in combat can easily change the success rate (or death rate) of the remaining components in the field; see Kvam and Day (2001). Specific load share models could be used to model the natural dependence between components within the system as well as their relative status within the group (e.g., even with combat machines, the components are not generally identical in effectiveness or constitution).
3 Estimation of Load-Share Model Parameters
Consider a system with k identical components for which stochastic component dependencies are induced via a load-sharing model. Suppose we observe n independent and identical systems over an observation period [0, τ], where τ is possibly random and could be the time of the last component failure among all nk components. We monitor the times of component failures of these systems. For i = 1, 2, …, n, let Si,1 < Si,2 < … be the successive component failure times for the ith system whose values are less than or equal to τ, so that Si,j is the jth smallest component failure time for the ith system. Denote by F the baseline component failure time distribution. The hazard function (or cumulative hazard rate) corresponding to F is R(x) = − log(1 − F(x)), and the hazard rate is r(x) = f(x)/[1 − F(x)], where f(x) is the density of F. Thus, the hazard function can be expressed as R(x) = ∫0x r(w) dw.
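As a concrete instance (anticipating the baseline used in the simulations of Section 5), if F is Weibull with shape parameter 2 and scale parameter 1, then F(x) = 1 − exp(−x²), so that R(x) = x² and r(x) = 2x.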
Inter-component dependencies are due to the fact that the system's environment can become more or less harsh on the remaining functioning components upon failure of other components. This framework is based on applications for which failure rates or detection rates of all items within the system are equal, but the change in rate after a component failure depends on the set of functioning components in the system. Note that upon a component failure, the effective system structure function also changes (cf., Hollander and Peña, 1995). For the specific model considered in the present paper, until the first component failure, the failure rate of each of the k components in the system equals the baseline rate r(x). Upon the first failure within a system, the failure rates of the k − 1 remaining components jump to γ1 r(x), and remain at that rate until the next component failure. After this failure, the failure rates of the k − 2 surviving components jump to γ2 r(x), and so on. The failure rate of the last remaining component is γk−1 r(x). The (equal) load share rule can be characterized by the k − 1 unknown parameters γ1, γ2, …, γk−1 and the unknown baseline distribution or hazard function. For example, a system with a constant load would assign γj = k/(k − j), j = 1, …, k − 1. In the sequel, we let γ = (γ0 ≡ 1, γ1, …, γk−1)′. Estimating the underlying baseline functions F or R may be of primary interest. In some situations, such as when estimation of the system reliability is desired, estimation of the load share parameters γ will also be of interest; otherwise it may be viewed as a vector of nuisance parameters.
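As a quick check of the constant-load example, with γj = k/(k − j) each of the k − j surviving components fails at rate [k/(k − j)] r(x) after j failures, so the total system intensity remains (k − j) · [k/(k − j)] r(x) = k r(x) at every stage; the total load on the system never changes.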
We approach the problem using point process theory. This allows the establishment of asymptotic properties of the estimators of R(·) and F(·) in a broader framework.
For notation, we sometimes write γ[j] for γj. Furthermore, for a function h, we define h(w−) = lim a↓0 h(w − a) and h(w+) = lim a↓0 h(w + a). We also let I(A) denote the indicator function of event A, so that I(A) = 1 if event A occurs, otherwise it equals zero. Define the counting processes

Ni(t) = #{j: Si,j ≤ t}, i = 1, …, n, 0 ≤ t ≤ τ.

Ni(t) represents the number of component failures for the ith system that occurred on or before time t. To express the likelihood in terms of stochastic processes, we also define the at-risk processes

Yi(t) = k − Ni(t−), i = 1, …, n,  (1)

so that Yi(t) is the number of components in the ith system still functioning just before time t.
Let Fit = σ{(Ni(w), Yi(w+)): w ≤ t} be the filtration generated by the ith system up to time t, and let Ft = F1t ∨ … ∨ Fnt be the filtration generated by all n systems. The load-share model can be described by specifying the intensities of the Ni(·)'s to be

λi(t) = γ[Ni(t−)] Yi(t) r(t), i = 1, …, n.  (2)
If we denote by

Ai(t) = ∫0t γ[Ni(w−)] Yi(w) dR(w) = ∫0t γ[Ni(w−)] Yi(w) r(w) dw, i = 1, …, n,  (3)
then M = {(Mi(t) = Ni(t) − Ai(t), 0 ≤ t ≤ τ), i = 1, …, n} is a vector of orthogonal square-integrable zero-mean martingales (cf., Andersen, et al., 1993). Following Jacod (1975), the full likelihood associated with the observed data {(Ni(w), Yi(w)): 0 ≤ w ≤ τ, i = 1, 2, …, n} is given by the expression
L(γ, R) = Πi=1n { Π0<w≤τ [γ[Ni(w−)] Yi(w) dR(w)]^ΔNi(w) } Π0<w≤τ [1 − γ[Ni(w−)] Yi(w) dR(w)]^(1−ΔNi(w)),  (4)

where ΔNi(w) = Ni(w) − Ni(w−) and the second product in (4) denotes a product-integral.
A standard approach to obtaining a semiparametric estimator of R(·) from (4) is to first fix γ, and then to obtain an 'estimator' of R denoted by R̂(·; γ). This is plugged into (4) to obtain the profile likelihood Lp(γ) for γ, which is then maximized in γ to obtain the estimator γ̂. The semiparametric estimator of R(·) is then R̂(·) = R̂(·; γ̂). To implement this estimation procedure, we first introduce the process J(w) = I{Σi=1n Yi(w) > 0}. In particular, J(w) = 0 indicates that all nk components have already failed at time w−. Note also that J(·) is a predictable and bounded process. If γ is known, by using the zero-mean property of the martingale and analogously to the derivation of the Nelson-Aalen estimator (Aalen, 1978), we immediately obtain the estimator of R given by

R̂(s; γ) = ∫0s J(w) [Σi=1n γ[Ni(w−)] Yi(w)]−1 d{Σi=1n Ni(w)}.  (5)
The estimator in (5), which is a generalized Nelson-Aalen estimator, is similar in structure to the hazard function estimator for tensile strengths derived by Rydén (1999). To obtain the estimator of R(·) for the more general case where γ is unknown, we first obtain the profile likelihood for γ by plugging R̂(·; γ) given in (5) into the likelihood function in (4). From (4) and (5) we obtain this profile likelihood to be

Lp(γ) = Πi=1n Π0<w≤τ [ γ[Ni(w−)] Yi(w) / Σl=1n γ[Nl(w−)] Yl(w) ]^ΔNi(w).  (6)
This profile likelihood may also be viewed as a partial likelihood process. It is maximized with respect to γ to obtain γ̂, which is then plugged into R̂(·; γ) to obtain the semiparametric estimator of R given by

R̂(s) = R̂(s; γ̂) = ∫0s J(w) [Σi=1n γ̂[Ni(w−)] Yi(w)]−1 d{Σi=1n Ni(w)}.  (7)
By virtue of the product representation of the survivor function 1 − F in terms of R, we then obtain an estimator of 1 − F via

1 − F̂(s) = Π0<w≤s [1 − ΔR̂(w)],  (8)

where ΔR̂(w) = R̂(w) − R̂(w−).
To facilitate the presentation of the asymptotic properties of the estimators γ̂ and R̂(·), we introduce the following notation:
Qi,j(t) = Yi(t) I (Ni (t−) = j), 1≤i≤n,0≤j≤k − 1;
Qi(t) = (Qi,0(t),...,Qi,k−1(t))′, 1≤i≤n;
δi(t) = (δi,0(t),...,δi,k−1 (t))′, with δi,j(t) = I (Qi,j(t)>0), 1≤i≤n;
γ−1 ≡ (1/γ0,...,1/γk−1);
q(s) = (q0(s),...,qk−1(s))′, where, by invoking the assumed iid property of the n systems, we have

qj(s) = E[Q1,j(s)] = (k − j) Pr{N1(s−) = j}, j = 0, 1, …, k − 1.

Here, * represents component-by-component multiplication of vectors. With the aforementioned notation, R̂(s; γ) becomes

R̂(s; γ) = ∫0s J(w) [Σi=1n γ′Qi(w)]−1 d{Σi=1n Ni(w)},
while the profile log-likelihood process becomes

ℓp(s; γ) = Σi=1n ∫0s { log[γ′Qi(w)] − log[Σl=1n γ′Ql(w)] } dNi(w).
The corresponding profile (partial) score process for γ is

Up(s; γ) = Σi=1n ∫0s { Qi(w)/[γ′Qi(w)] − [Σl=1n Ql(w)]/[Σl=1n γ′Ql(w)] } dNi(w),  (9)
and the profile information matrix process is

Ip(s; γ) = Σi=1n ∫0s { Qi(w)Qi(w)′/[γ′Qi(w)]² − [Σl Ql(w)][Σl Ql(w)]′/[Σl γ′Ql(w)]² } dNi(w).  (10)
If we ignore differentiation with respect to the known constant γ0 = 1, Up(s; γ) is a vector of length k − 1, and Ip(s; γ) is a (k − 1) × (k − 1) matrix. Using the notation defined earlier, (9) and (10) can be further simplified by noting that, because Qi,j(w) > 0 implies Qi,j′(w) = 0 for j′ ≠ j,

Qi(w)/[γ′Qi(w)] = γ−1 * δi(w).

We let the symbol D(η) represent a diagonal matrix with diagonal elements η. By the same observation, the first term in the integrand in (10) can be written as D(γ−1)D(δi(w))D(γ−1), and the second term depends on the data only through the aggregated process Σl Ql(w). Equations (9) and (10) then take correspondingly simplified forms.
Solving the set of k − 1 nonlinear equations

Up(τ; γ) = 0  (11)
does not lead to a closed-form solution for the MLE of γ. However, solving the set of equations is not a difficult numerical problem. For instance, a Newton-Raphson method could be implemented, which has the iterations γ̂(m+1) = γ̂(m) + [Ip(τ; γ̂(m))]−1 Up(τ; γ̂(m)). In our computer implementation in the R language, which we used in the computer simulation studies, the R function optim was invoked as a preliminary step to obtain good seed values for the Newton-Raphson procedure. This two-step approach led to a more efficient computational implementation, and also led to convergence in almost all cases considered in the simulations. Other approaches could also be used to solve (11); a similar set of equations arises in Kvam and Samaniego (1993) for solving the likelihood equations in an exponential factorial model, and as in that paper, applying Theorem 2.1 of Mäkeläinen, Schmidt and Styan (1981) establishes that there exists a unique solution γ̂ satisfying Up(τ; γ̂) = 0. In Kvam and Samaniego (1993), a nonlinear Gauss-Seidel iterative method (see Ortega and Rheinboldt (1970), for example) was applied to solve the set of equations.
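As an illustration, here is a minimal R sketch (not the authors' implementation) of the profile log-likelihood implied by (6) and its maximization with optim; the data layout (an n × k matrix S of ordered failure times) and the function names profile_negloglik and fit_gamma are assumptions made for the example.

```r
## Negative profile log-likelihood from (6), assuming no tied failure times.
## log_gamma: log(gamma_1), ..., log(gamma_{k-1}); S: n x k matrix of
## ordered component failure times, one row per system.
profile_negloglik <- function(log_gamma, S) {
  gam <- c(1, exp(log_gamma))                 # gamma_0 = 1; positivity enforced
  n <- nrow(S); k <- ncol(S)
  ll <- 0
  for (t in sort(as.vector(S))) {             # loop over pooled failure times
    Nminus <- rowSums(S < t)                  # N_i(t-)
    Y <- k - Nminus                           # components still at risk
    risk <- sum(gam[pmin(Nminus, k - 1) + 1] * Y)
    i <- which(rowSums(S == t) > 0)[1]        # system in which the failure occurs
    ll <- ll + log(gam[Nminus[i] + 1] * Y[i]) - log(risk)
  }
  -ll
}

## Maximize over log(gamma_j); the result can seed the Newton-Raphson steps.
fit_gamma <- function(S) {
  fit <- optim(rep(0, ncol(S) - 1), profile_negloglik, S = S, method = "BFGS")
  exp(fit$par)
}
```

Working on the log scale enforces the positivity of the γj's without constrained optimization, mirroring the role of optim as a seed-finding step described above.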
4 Asymptotic Properties
For purposes of examining the properties of the estimators, note that if we define the 'alternative' score process

U(s; γ) = Σi=1n ∫0s { γ−1 * δi(w) − [Σl=1n Ql(w)] / [Σl=1n γ′Ql(w)] } dNi(w),  (12)
then the estimator γ̂ is also the solution of the equation U(τ; γ) = 0. To obtain the asymptotic properties of the estimator which solves the preceding equation, we re-express the martingale M in terms of {Qi, Q}, where Q = Σi=1n Qi. First, note that from (3), the compensator of Ni is

Ai(t) = ∫0t γ′Qi(w) dR(w),  (13)
so the quadratic variation process of Mi is ⟨Mi⟩(t) = Ai(t) = ∫0t γ′Qi(w) dR(w).
Lemma 1 The process U(·; γ) in (12) admits the martingale representation

U(s; γ) = Σi=1n ∫0s { γ−1 * δi(w) − Q(w)/[γ′Q(w)] } dMi(w).
The proof of this result is presented in the Appendix. This simplification leads us to the following asymptotic properties for the alternative score process. The proofs of these results are also relegated to the Appendix.
Lemma 2 If {(Ni(·), Yi(·)), i = 1, …, n} are iid, and if γ′q(w) > 0 for all w ∈ [0, τ], then the alternative score function in (12) is a square-integrable martingale with quadratic variation process ⟨U(·; γ)⟩(s). Furthermore,

n−1⟨U(·; γ)⟩(s) →p ϒ(s; γ), 0 ≤ s ≤ τ,

and n−½U(·; γ) converges weakly to a zero-mean Gaussian process with covariance matrix function ϒ(·; γ).
We now state the major asymptotic properties of the estimators γ̂ and R̂(·).
Theorem 1 Under the conditions of Lemma 2, γ̂ →p γ, and n½(γ̂ − γ) converges in distribution to a N(0, Σ(γ)) random vector, where Σ(γ) = ϒ(τ; γ)−1.
Theorem 2 If the conditions of Lemma 2 hold, and if τ satisfies γ′q(τ) > 0, then

Wn(s) ≡ n½[R̂(s) − R(s)], 0 ≤ s ≤ τ,

converges weakly to a zero-mean Gaussian process with variance function Ξ(s; γ), which is the sum of two terms: the variance that would obtain were γ known, plus a term arising from the estimation of γ.
Corollary 1 Under the conditions of Theorem 2, n½[F̂(s) − F(s)], 0 ≤ s ≤ τ, converges weakly to a zero-mean Gaussian process whose variance function is [1 − F(s)]² Ξ(s; γ).
We now attempt to provide an explicit expression for the limiting variance function. To do so, an expression for Pr{N1(w−) = j} is needed in order to obtain an expression for q(·) when τ is fixed. Observe that, with the convention S0 ≡ 0,

Pr{N1(w−) = j} = Pr{Sj+1 ≥ w} − Pr{Sj ≥ w}.
Invoking Theorem 5.1 of Hollander and Peña (1995), we obtain an expression for Pr{Sj ≥ w}. Introducing appropriate product notation for i ≤ j and i, j ∈ {0, 1, 2, …, k − 1}, with the convention adopted in Hollander and Peña (1995), we have

(14)
Consequently, for j = 0, 1, …, k − 1, qj(w) = (k − j) Pr{N1(w−) = j} may be computed explicitly from (14).
Unfortunately, this does not yield a simple expression for Ξ(s; γ). For instance, the first term of this limiting variance function is given by

∫0s [γ′q(w)]−1 dR(w).

Note that this expression is at most equal to

∫0s [q0(w)]−1 dR(w),

which is the asymptotic variance function of the Nelson-Aalen estimator of R(s) that utilizes only the first component failure for each system. This estimator is given by

R̂1(s) = ∫0s [Σi=1n Qi,0(w)]−1 d{Σi=1n I(Si,1 ≤ w)}.  (15)
This particular result demonstrates that if γ is known, then the estimator R̂(·; γ) is more efficient than the estimator R̂1, certainly not a surprising result. However, since γ is not known and is estimated to form the estimator R̂, the second term in Ξ(s; γ), which is the effect of the estimation of γ by γ̂, must be taken into account in comparing the asymptotic variances of R̂ and R̂1.
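For comparison, the first-failure estimator (15) has a particularly simple form; here is a minimal R sketch (again with the illustrative data layout S assumed earlier):

```r
## First-failure Nelson-Aalen estimator (15): only S_{i,1} is used, with
## k components at risk in each system until its first failure.
na_first <- function(S) {
  k <- ncol(S)
  t1 <- sort(S[, 1])                 # ordered first failure times
  atrisk <- k * (length(t1):1)       # k x (number of systems yet to fail)
  data.frame(time = t1, Rhat1 = cumsum(1 / atrisk))
}
```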
We now prove that, for two-component parallel systems (i.e., k = 2), the estimator R̂ indeed improves on the estimator R̂1, by showing that the asymptotic variance of the former is at most that of the latter. We have as yet been unsuccessful in establishing whether this domination result also holds for the case k > 2.
For notation, let us define Δ(s; τ) to be the difference between the asymptotic variance functions of R̂1 and R̂; it then follows that

Δ(s; τ) = ∫0s [q0(w)]−1 dR(w) − Ξ(s; γ).  (16)
Theorem 3 For k = 2, Δ(s; τ) ≥ 0 for s ≤ τ, implying that the estimator R̂ is asymptotically never less efficient than the estimator R̂1.
Proof: First we note the simplified forms that the above quantities take when k = 2; in this case γ = (1, γ1)′ and only q0 and q1 appear in the variance expressions.
But by the Cauchy-Schwarz inequality, we have, for positive functions f and g and measure ν,

[∫ f g dν]² ≤ [∫ f² dν] [∫ g² dν].
Applying this result to the terms of Δ(s; τ), it follows that Δ(s; τ) ≥ 0, thereby completing the proof of the theorem. ||
For practical purposes, we need consistent estimators of the variance functions of these limiting processes. An obvious estimator of ϒ(τ; γ) is its empirical counterpart ϒ̂(τ; γ̂), obtained by replacing q(·) by its empirical version and γ by γ̂. To estimate the limiting covariance matrix of n½(γ̂ − γ), we can use Σ̂ = ϒ̂(τ; γ̂)−1. An estimator of the limiting variance function of n½[R̂(·) − R(·)] is provided by
(17)
where all components are evaluated at γ̂. Finally, an estimator of the limiting variance function of n½[F̂(·) − F(·)] is given by [1 − F̂(s)]² Ξ̂(s).
The results in Theorem 2 and Corollary 1 are analogous to the asymptotic results in Andersen and Gill (1982), which consider the estimation of the baseline hazard and distribution functions in the multiplicative intensity model. The Andersen and Gill model subsumes the Cox (1972) proportional hazards model. The difference between the load-share problem and the regular setup is that the data structure and the stochastic model under load sharing are more complicated: they arise from observing several components in a system, with the evolution of the failure rates of the components governed by the component histories.
5 Simulations and Examples
5.1 A Simulation Study
To examine the small-sample properties of the estimator γ̂ of γ and of the baseline survivor function estimator 1 − F̂, a modest simulation study was undertaken to determine the biases, standard errors, and root-mean-squared errors (rmses) of the estimators. Two values of k, the number of components, were chosen: (i) k = 2 with γ = (1, 1.25); and (ii) k = 4 with γ = (1, 1.25, 1.5, 1.75). Three sample sizes were chosen: n ∈ {30, 50, 100}. The baseline distribution was chosen to be a Weibull with shape parameter of 2 and scale parameter of 1. For each combination of k and n, 1000 replications of the simulation were performed. The empirical bias and standard error of γ̂ were then computed, as were the empirical bias, standard error, and rmse curves of the estimator 1 − F̂ at pre-specified values of time. Table 1 contains a summary of the bias and standard errors of γ̂, and Figure 1 contains the bias, standard error, and rmse curves of 1 − F̂ for the six simulation cases.
Table 1.
Bias and standard error of γ̂ based on 1000 replications, when the true baseline survivor function is Weibull with shape parameter of 2 and scale parameter of 1. For k = 2, γ1 = 1.25; for k = 4, (γ1, γ2, γ3) = (1.25, 1.5, 1.75).

| n | k = 2: Bias | k = 2: StdErr | k = 4: Bias | k = 4: StdErr |
|---|---|---|---|---|
| 30 | .0281 | .4033 | (.0103, −.0059, .0005) | (.3723, .4951, .6571) |
| 50 | .0355 | .3163 | (.0171, −.0134, −.0005) | (.2770, .3767, .4912) |
| 100 | −.0046 | .2062 | (.0118, .0094, .0165) | (.2007, .2725, .3554) |
Figure 1.
Bias, standard error, and RMSE curves of 1 − F̂ based on 1000 replications, when the true baseline survivor function is Weibull with shape and scale parameters of 2 and 1, respectively. The red (solid) curve is the bias curve, the blue (dash) curve is the standard error curve, and the green (dot-dash) curve is the rmse curve.
Examining Table 1, we note that as the sample size increases, the standard errors of γ̂ decrease steadily, and the biases are negligible when n = 100. It is not clear whether the estimators γ̂j are positively biased, since some of the empirical biases are negative. We also observe that when k > 2 the standard errors of γ̂j increase in j. This is expected because fewer components are on test (providing less information) when inference is made on these latter load-share parameters. Furthermore, comparing the standard errors of γ̂1 for the cases k = 2 and k = 4, we note that the latter has smaller standard errors for all n. This could be explained by the fact that the effective sample size for k = 4 is greater than that for k = 2. Examining Figure 1, the shapes of the bias, standard error, and rmse curves for 1 − F̂ are generally similar in all the cases considered: negative bias in the middle portion of the distribution and positive bias in the tail. Overall, the standard error dominates the bias in forming the rmse. As the sample size increases, the bias, standard error, and rmse curves all approach the zero horizontal line, as is to be expected.
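For reference, data of this kind can be generated with a short R sketch like the following (an illustration, not the authors' simulation code). It uses the fact that, on the cumulative-hazard scale, the gaps between successive failures within a system are exponential, because the total system intensity after j failures is (k − j)γj r(t); the function name sim_system is an assumption for the example.

```r
## Simulate one system under the equal load-share model with a Weibull
## baseline (shape 2, scale 1, as in the study). On the R(t) scale the
## (j+1)-st gap is exponential with rate (k - j) * gamma_j.
sim_system <- function(k, gam, shape = 2, scale = 1) {
  ## gam = (gamma_0 = 1, gamma_1, ..., gamma_{k-1})
  gaps <- rexp(k, rate = (k - 0:(k - 1)) * gam)  # total intensity (k - j) * gamma_j
  Rcum <- cumsum(gaps)                           # R(S_1) < ... < R(S_k)
  scale * Rcum^(1 / shape)                       # invert R(t) = (t / scale)^shape
}
S <- t(replicate(50, sim_system(4, c(1, 1.25, 1.5, 1.75))))  # n = 50, k = 4
```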
5.2 Some Applications
As an example of load-sharing in manufacturing, we consider life testing of light displays such as Plasma Display Devices (PDPs). In product tests, degradation is measured in luminosity (measured in candela) and PDP failure is declared when luminosity decreases below a given failure threshold. While some units degrade slowly, sudden pixel failures are also a problem in test items. A similar problem with laser degradation is discussed in Example 13.5 of Meeker and Escobar (1998).
A PDP manufacturer has conducted accelerated degradation tests on PDPs with multiple measurements at key locations on the PDP's surface. While it is clear that different areas of the PDP surface can degrade at different rates, it is not known how sudden pixel failures indicated by one sensor would affect the degradation at other parts of the PDP (whether triggered by stress changes or by common causes that affect different areas of the test surface). See Bae and Kvam (2003) for further discussion of statistical modeling for PDPs. With k sensors spaced evenly across the test device, the failure times can be modeled using load sharing. PDP failure data are not available for this analysis; in initial tests, most tests were stopped after the first indication of failure, and the remaining lifetime measurements were censored. Here, we present a simulated example to illustrate the load-share model characterizing the test data. We assume n = 20 test items are tested with k = 3 sensors, and lifetimes are recorded for the three sensors on each of the twenty test items. The sample sizes and failure probabilities are consistent with those in Bae and Kvam (2003); Table 2 contains failure-time measurements (in hours) generated from the random coefficients model from that paper, which is analogous to some longitudinal data models in Vonesh and Chinchilli (1997).
Table 2.
Time (in hours) until failure for n = 20 plasma display devices using k = 3 luminosity sensors.
| Si,1 | Si,2 | Si,3 | Si,1 | Si,2 | Si,3 |
|---|---|---|---|---|---|
1286.54 | 1647.9 | 1763.22 | 1100.4 | 1412.26 | 1664.37 |
860.441 | 1345.05 | 1751.84 | 825.547 | 1125.41 | 1417.22 |
1194.75 | 1617.76 | 2719.27 | 427.758 | 1004.59 | 2181.77 |
350.698 | 782.61 | 1926.68 | 1768.23 | 1796.08 | 2727.01 |
169.722 | 766.904 | 988.569 | 904.204 | 1335.02 | 1803.59 |
732.044 | 1911.16 | 2593.06 | 315.753 | 732.8 | 1283.59 |
337.713 | 803.275 | 994.759 | 650.034 | 954.343 | 3415.51 |
472.796 | 531.578 | 788.641 | 562.689 | 772.21 | 1232.09 |
747.868 | 824.309 | 1806.99 | 53.7681 | 1405.06 | 2357.47 |
915.552 | 1849.6 | 1872.03 | 1376.24 | 1879.17 | 2150.99 |
The estimated distribution of failure time is plotted in Figure 2, and Figure 3 contains confidence regions for the load-share parameter estimates (γ̂1, γ̂2). Although the data indicate an increased failure frequency after the first observed failure, the evidence is not overly strong (the 90% confidence region contains the point (1, 1)).
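Hypothetically, with the sketches from Section 3 in scope, the analysis of the Table 2 data could proceed as follows (the function names are the illustrative ones defined earlier; only the first two rows of the table are shown for brevity):

```r
## Assemble the n = 20 by k = 3 matrix of failure times from Table 2.
S_pdp <- matrix(c(1286.54, 1647.90, 1763.22,
                  1100.40, 1412.26, 1664.37), ncol = 3, byrow = TRUE)
gam_hat <- fit_gamma(S_pdp)                  # estimates of (gamma_1, gamma_2)
est <- baseline_est(S_pdp, c(1, gam_hat))    # R-hat and survivor estimates
plot(est$time, est$Fbar, type = "s",
     xlab = "Hours", ylab = "Estimated baseline survivor function")
```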
Figure 2.
Estimated Cumulative Distribution Function for PDP lifetime.
Figure 3.
PDP Example: Confidence Regions (50%, 90%, 95%) for (γ1, γ2)
Examples in other fields of application can be analyzed and illustrated in the same manner. One fundamental hypothesis would be that the system is under constant load, or H0: γi = k/(k − i), i = 1, …, k − 1, which can be checked with a sketch like the one given below. In other applications, 1 ≤ γ1 ≤ γ2 ≤ … ≤ γk−1 might be a reasonable assumption. This is the monotone load share rule mentioned in Section 1. Kim and Kvam (2004) consider the simple case in which failure data follow an exponential distribution, and order-restricted inference is used to estimate the load share rule under the monotone load-share restriction.
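As a sketch of how the constant-load hypothesis might be checked with the profile likelihood above (a heuristic large-sample approximation, not a procedure developed in the paper), one could compare the constrained and unconstrained fits:

```r
## Approximate likelihood-ratio check of H0: gamma_j = k / (k - j),
## reusing the hypothetical profile_negloglik() defined earlier.
lr_test_constant_load <- function(S) {
  k <- ncol(S)
  fit <- optim(rep(0, k - 1), profile_negloglik, S = S, method = "BFGS")
  ll_alt <- -fit$value                          # unrestricted maximum
  ll_null <- -profile_negloglik(log(k / (k - (1:(k - 1)))), S)
  stat <- 2 * (ll_alt - ll_null)                # approx. chi-squared, k - 1 df
  pchisq(stat, df = k - 1, lower.tail = FALSE)  # approximate p-value
}
```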
Because of the flexibility offered by a nonparametric estimator with dynamic failure rate changes, this load-share model may be found to fit dependent lifetime data adequately even if no "true" load-share quality is exhibited between system failures. As an example, the model can be fit to the Danish twins survival data from Anderson, et al. (1992), where lifetimes for 111 female monozygotic twins (born between 1870 and 1880) obtained from the Danish Twin Registry were analyzed with time-dependent measures of association. With a fitted load-share model, the γ parameter is estimated with a significance value less than 0.10 for the test H0: γ ≤ 1 versus H1: γ > 1. While the dependence reflected in γ̂ is not spurious, it tends to ignore the age dependence of the bivariate survival distributions worked out in Anderson, et al. (1992).
Acknowledgments
The authors thank Samsung SDI Co., LTD for providing degradation data from the Samsung Plasma Display Device Division. The authors also wish to thank the reviewers for their very careful reading of the manuscript which unearthed some inaccuracies, and for their suggestions and criticisms.
A Appendix: Some Technicalities
Proof of Lemma 1: The alternative score function in (12) can be decomposed into two parts:

U(s; γ) = Σi=1n ∫0s { γ−1 * δi(w) − Q(w)/[γ′Q(w)] } dMi(w) + ∫0s { Σi=1n [γ−1 * δi(w)] γ′Qi(w) − Q(w) } dR(w),  (18)

where Q(w) = Σl=1n Ql(w).
It turns out that the second term in (18) is identically zero. This is true because Qi,j(w) > 0 implies Qi,j′(w) = 0 for j′ ≠ j, so that

Σi=1n [γ−1 * δi(w)] γ′Qi(w) = Σi=1n Qi(w) = Q(w).

But then the integrand of the second term reduces to Q(w) − Q(w) = 0, which establishes the lemma. ||
Proof of Lemma 2: By stochastic integration theory, the score process {U(s; γ): 0 ≤ s ≤ τ} is clearly a square-integrable martingale with quadratic variation process
It follows from a Glivenko-Cantelli type strong law of large numbers that if {(Ni(w); 0 ≤ w ≤ τ), i = 1, …, n} are independent and identically distributed, then for j = 0, …, k − 1,

sup0≤w≤τ | n−1 Σi=1n Qi,j(w) − qj(w) | → 0, almost surely.
Therefore, provided that γ′q(w) > 0 for all w ∈ [0, τ], n−1⟨U(·; γ)⟩(s) converges in probability to ϒ(s; γ) for each s. By Rebolledo's martingale central limit theorem (see Andersen, et al. (1993), Theorem II.5.1), it follows that n−½U(·; γ) converges weakly to a zero-mean Gaussian process with covariance matrix function ϒ(·; γ). ||
Proof of Theorem 1: The establishment of the consistency of γ̂ follows the usual route of consistency proofs for partial likelihood MLEs. We therefore refer the reader to such standard proofs in Andersen, et al. (1993). With consistency of γ̂ established, observe first that ϒ(τ; γ) has full rank and that, from (12), U(τ; γ̂) = 0. A first-order Taylor series expansion of U(τ; γ̂) about γ then yields a representation of n½(γ̂ − γ) as the product of an inverted matrix, evaluated at a point ξ lying in the line segment connecting γ̂ and γ, and n−½U(τ; γ). Since γ̂ →p γ, and by continuity considerations, this matrix converges to its value at γ.
The matrix whose inverse is taken converges to ϒ(τ; γ) by the convergence results established in the proof of Lemma 2.
This result establishes Theorem 1.||
Proof of Theorem 2: Recall that

R̂(s) = R̂(s; γ̂) = ∫0s J(w) [Σi=1n γ̂′Qi(w)]−1 d{Σi=1n Ni(w)}.  (19)
We seek a representation of n½[R̂(s) − R(s)] by expanding R̂(s; γ̂) around γ using a first-order Taylor series. It follows that, for some ξ between γ and γ̂,

R̂(s; γ̂) = R̂(s; γ) + (γ̂ − γ)′ ∇γR̂(s; γ)|γ=ξ.  (20)
We have the decomposition

n½[R̂(s) − R(s)] = n½∫0s [J(w) − 1] dR(w) + n½[R̂(s; γ̂) − R̂(s; γ)] + n½[R̂(s; γ) − ∫0s J(w) dR(w)].  (21)
From (19) and because of (20), this decomposition is valid. The first term in (21) goes to zero in probability. Using the above representation of n½(γ̂ − γ), the second term in (21) can be expressed asymptotically as a linear functional of n−½U(τ; γ).
To further simplify our notation, let us denote the three resulting terms by Ψ1(s; γ), Ψ2(s; γ), and Ψ3(s; γ), respectively. Then, in terms of these processes, n½[R̂(s) − R(s)] = Ψ1(s; γ) + Ψ2(s; γ) + Ψ3(s; γ).
We now describe the limit of Wn(·) in terms of the limits of Ψi, i = 1, 2, 3.
From the proof of Theorem 1, we also have that n½(γ̂ − γ) is asymptotically normal; and by Rebolledo's martingale central limit theorem, if γ′q(w) > 0, then Ψ3(s; γ) converges to a zero-mean Gaussian process on [0, τ], with a variance function obtained from the martingale representation in (18).
Because Ψ1 is asymptotically negligible, it follows that Wn is asymptotically equivalent to Ψ2 + Ψ3, which converges to a Gaussian process by virtue of the Gaussian process limits of Ψ2 and Ψ3. The limiting variance function of Wn now follows immediately from the above representation by observing the covariance process between Ψ2 and Ψ3.
Collecting these terms yields the variance function Ξ(s; γ) stated in the theorem.
This fact completes the proof of Theorem 2. ||
Proof of Corollary 1: This follows by applying the functional delta-method and invoking the asymptotic result in Theorem 2. Since 1 − F = φ(R), where φ is the product-integral mapping φ(R)(s) = Π0<w≤s [1 − dR(w)], we have 1 − F̂ = φ(R̂). By the functional delta-method (cf., Andersen, et al. (1993)), it follows that the limiting process is dφ(R) · W where, with W being the Gaussian limiting process in Theorem 2, dφ(R) · W(s) = −[1 − F(s)] W(s), whence the variance function in the corollary. ||
References
1. Aalen, O. (1978), "Nonparametric inference for a family of counting processes," Annals of Statistics, 6, 701–726.
2. Andersen, P. K., and Gill, R. D. (1982), "Cox's regression model for counting processes: A large sample study," Annals of Statistics, 10, 1100–1120.
3. Anderson, J. E., Louis, T. A., Holm, N. V., and Harvald, B. (1992), "Time-dependent association measures for bivariate survival distributions," Journal of the American Statistical Association, 87, 641–650.
4. Andersen, P. K., Borgan, Ø., Gill, R. D., and Keiding, N. (1993), Statistical Models Based on Counting Processes, Springer-Verlag, New York.
5. Bae, S. J., and Kvam, P. H. (2003), "A nonlinear random coefficients model for degradation testing," Georgia Institute of Technology ISyE Technical Report 2003-41.
6. Carlson, R. L., and Kardomateas, G. A. (1996), An Introduction to Fatigue in Metals and Composites, Chapman & Hall, London.
7. Cox, D. R. (1972), "Regression models and life tables (with discussion)," Journal of the Royal Statistical Society, Series B, 34, 187–220.
8. Daniels, H. E. (1945), "The statistical theory of the strength of bundles of threads," Proceedings of the Royal Society of London, Series A, 183, 405–435.
9. Durham, S. D., and Lynch, J. D. (2000), "A threshold representation for the strength distribution of a complex load sharing system," Journal of Statistical Planning and Inference, 83, 25–46.
10. Freund, J. E. (1961), "A bivariate extension of the exponential distribution," Journal of the American Statistical Association, 56, 971–977.
11. Hollander, M., and Peña, E. (1995), "Dynamic reliability models with conditional proportional hazards," Lifetime Data Analysis, 1, 377–401.
12. Jacod, J. (1975), "Multivariate point processes: Predictable projection, Radon-Nikodym derivatives, representation of martingales," Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 31, 235–253.
13. Jelinski, Z., and Moranda, P. (1972), "Software reliability research," in Statistical Computer Performance Evaluation, pp. 465–484.
14. Jewell, N. P., and Kalbfleisch, J. D. (1996), "Marker processes in survival analysis," Lifetime Data Analysis, 2, 15–19.
15. Kim, H. T., and Kvam, P. H. (2004), "Reliability estimation based on system data with an unknown load share rule," Lifetime Data Analysis, to appear.
16. Kvam, P. H., and Samaniego, F. J. (1993), "Life testing in variably scaled environments," Technometrics, 35, 306–314.
17. Kvam, P. H., and Day, D. (2001), "The multivariate Polya distribution in combat modeling," Naval Research Logistics, 48, 1–17.
18. Lynch, J. D. (1999), "On the joint distribution of component failures for monotone load sharing systems," Journal of Statistical Planning and Inference, 78, 13–21.
19. Mäkeläinen, T., Schmidt, K., and Styan, G. P. H. (1981), "On the existence and uniqueness of the maximum likelihood estimate of a vector-valued parameter in fixed-size samples," Annals of Statistics, 9, 758–767.
20. Marshall, A. W., and Olkin, I. (1967), "A multivariate exponential distribution," Journal of the American Statistical Association, 62, 30–44.
21. Meeker, W. Q., and Escobar, L. A. (1998), Statistical Methods for Reliability Data, Wiley, New York.
22. Ortega, J. M., and Rheinboldt, W. C. (1970), Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York.
23. Rydén, P. (1999), Estimation of the Reliability of Systems Described by the Daniels Load-Sharing Model, Licentiate Thesis, Department of Mathematical Statistics, Umeå University, Sweden.
24. Vonesh, E. F., and Chinchilli, V. M. (1997), Linear and Nonlinear Models for the Analysis of Repeated Measurements, Marcel Dekker, New York.