Abstract
We study and compare three estimators of a discrete monotone distribution: (a) the (raw) empirical estimator; (b) the “method of rearrangements” estimator; and (c) the maximum likelihood estimator. We show that the maximum likelihood estimator strictly dominates both the rearrangement and empirical estimators in cases when the distribution has intervals of constancy. For example, when the distribution is uniform on {0, … , y}, the asymptotic risk of the method of rearrangements estimator (in squared ℓ2 norm) is y/(y + 1), while the asymptotic risk of the MLE is of order (log y)/(y + 1). For strictly decreasing distributions, the estimators are asymptotically equivalent.
Keywords: Maximum likelihood, monotone mass function, rearrangement, rate of convergence, limit distributions, nonparametric estimation, shape restriction, Grenander estimator
1. Introduction
This paper is motivated in large part by the recent surge of activity concerning “method of rearrangement” estimators for nonparametric estimation of monotone functions: see, for example, Fougères (1997), Dette and Pilz (2006), Dette et al. (2006), Chernozhukov et al. (2009) and Anevski and Fougères (2007). Most of these authors study continuous settings and often start with a kernel type estimator of the density, which involves choices of a kernel and of a bandwidth. Our goal here is to investigate method of rearrangement estimators and compare them to natural alternatives (including the maximum likelihood estimators with and without the assumption of monotonicity) in a setting in which there is less ambiguity in the choice of an initial or “basic” estimator, namely the setting of estimation of a monotone decreasing mass function on the non-negative integers ℕ = {0, 1, 2, …}.
Suppose that p = {px}x∈ℕ is a probability mass function; i.e. px ≥ 0 for all x ∈ ℕ and Σx∈ℕ px = 1. Our primary interest here is in the situation in which p is monotone decreasing: px ≥ px+1 for all x ∈ ℕ. The three estimators of p we study are:
the (raw) empirical estimator, p̂n;
the method of rearrangement estimator, p̂nR;
the maximum likelihood estimator, p̂nG.
Notice that the empirical estimator is also the maximum likelihood estimator when no shape assumption is made on the true probability mass function.
Much as in the continuous case our considerations here carry over to the case of estimation of unimodal mass functions with a known (fixed) mode; see e.g. Fougères (1997), Birgé (1987), and Alamatsaz (1993). For two recent papers discussing connections and trade-offs between discrete and continuous models in a related problem involving nonparametric estimation of a monotone function, see Banerjee et al. (2009) and Maathuis and Hudgens (2009).
Distributions from the monotone decreasing family satisfy Δpx ≡ px+1 − px ≤ 0 for all x ∈ ℕ, and may be written as mixtures of uniform mass functions
px = Σy≥x qy/(y + 1), x ∈ ℕ. (1.1)
Here, the mixing distribution q may be recovered via
qx = (x + 1)(px − px+1) = −(x + 1)Δpx (1.2)
for any x ∈ ℕ.
Remark 1.1
From the form of the mass function, it follows that px ≤ 1/(x+1) for all x ≥ 0.
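As a quick numerical illustration of the mixture representation (this check is ours and not part of the original text; the truncation point and tolerances are arbitrary choices), formulas (1.1) and (1.2) can be verified for a geometric mass function as follows.

```python
import numpy as np

# Numerical check of (1.1)-(1.2) for a geometric mass function p_x = (1 - theta) * theta^x,
# truncated at a point where the remaining mass is negligible (illustrative choice).
theta = 0.75
x = np.arange(200)
p = (1 - theta) * theta ** x                          # monotone decreasing mass function

# (1.2): q_x = (x + 1) * (p_x - p_{x+1}); beyond the truncation point p is treated as 0.
q = (x + 1) * (p - np.append(p[1:], 0.0))
assert np.all(q >= 0) and abs(q.sum() - 1.0) < 1e-8   # q is (numerically) a mass function

# (1.1): p_x = sum_{y >= x} q_y / (y + 1) recovers p.
p_back = np.array([np.sum(q[k:] / (x[k:] + 1.0)) for k in x])
assert np.allclose(p_back, p)
```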
Suppose then that we observe X1, X2, … , Xn i.i.d. random variables with values in ℕ and with a monotone decreasing mass function p. For x ∈ ℕ, let
p̂n,x = (1/n) #{i ≤ n : Xi = x}
denote the (unconstrained) empirical estimator of the probabilities px. Clearly, there is no guarantee that this estimator will also be monotone decreasing, especially for small sample size. We next consider two estimators which do satisfy this property: the rearrangement estimator and the maximum likelihood estimator (MLE).
For a vector w = {w0, … , wk}, let rear(w) denote the reverse-ordered vector, so that w′ = rear(w) satisfies w′0 ≥ w′1 ≥ ⋯ ≥ w′k. The rearrangement estimator is then simply defined as
p̂nR = rear(p̂n),
where p̂n denotes the vector of empirical proportions {p̂n,0, … , p̂n,X(n)} and X(n) = max{X1, … , Xn}.
To define the MLE we again need some additional notation. For a vector w = {w0, … , wk}, let gren(w) be the operator which returns the vector of the k + 1 slopes of the least concave majorant of the points
{(j, w0 + ⋯ + wj) : j = −1, 0, … , k}.
Here, we assume that the empty sum at j = −1 equals zero, so that the point (−1, 0) is always included. The MLE, also known as the Grenander estimator, is then defined as
p̂nG = gren(p̂n).
Thus, (p̂nG)x is the left derivative at x of the least concave majorant (LCM) of the empirical distribution function F̂n (where we include the point (−1, 0) to find the left derivative at x = 0). Therefore, by definition, the MLE is a vector of local averages over a partition of {0, … , max{X1, … , Xn}}. This partition is determined by the touchpoints of the LCM with F̂n. It is easily checked that p̂nG corresponds to the isotonic estimator for multinomial data as described in Robertson et al. (1988), pages 7–8 and 38–39.
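The three estimators are straightforward to compute. The following Python sketch (ours; the paper's own implementation is the R code mentioned later in this Introduction) uses the fact that the gren operator coincides with the equal-weight antitonic regression, computed by the pool-adjacent-violators algorithm; the function names empirical_pmf, rear and gren are our own.

```python
import numpy as np

def empirical_pmf(sample):
    """Raw empirical estimator of p on {0, ..., max(sample)}."""
    counts = np.bincount(np.asarray(sample, dtype=int))
    return counts / counts.sum()

def rear(w):
    """Monotone rearrangement: the values of w sorted in decreasing order."""
    return np.sort(np.asarray(w, dtype=float))[::-1]

def gren(w):
    """Left slopes of the least concave majorant of the cumulative sums of w
    (including the point (-1, 0)); computed by the pool-adjacent-violators
    algorithm for the equal-weight antitonic (decreasing) regression."""
    vals, sizes = [], []
    for v in np.asarray(w, dtype=float):
        vals.append(v)
        sizes.append(1)
        # pool adjacent violators: block means must be non-increasing
        while len(vals) > 1 and vals[-2] < vals[-1]:
            s = sizes[-2] + sizes[-1]
            m = (vals[-2] * sizes[-2] + vals[-1] * sizes[-1]) / s
            vals[-2:], sizes[-2:] = [m], [s]
    return np.repeat(vals, sizes)

# Example: n = 100 draws from the uniform distribution on {0, ..., 5}.
rng = np.random.default_rng(0)
sample = rng.integers(0, 6, size=100)
p_hat = empirical_pmf(sample)    # empirical estimator
p_hat_R = rear(p_hat)            # rearrangement estimator
p_hat_G = gren(p_hat)            # MLE (Grenander estimator)
```

By construction, rear(p_hat) and gren(p_hat) are monotone decreasing and sum to one whenever p_hat does.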
We begin our discussion with two examples: in the first, p is the uniform distribution, and in the second p is strictly monotone decreasing. To compare the three estimators, we consider several metrics: the ℓk norm for 1 ≤ k ≤ ∞ and the Hellinger distance. Recall that the Hellinger distance between two mass functions p and q is given by
H(p, q) = {(1/2) Σx (√px − √qx)²}^{1/2},
while the ℓk metrics are defined as
∥p − q∥k = {Σx |px − qx|^k}^{1/k} for 1 ≤ k < ∞, and ∥p − q∥∞ = supx |px − qx|.
In the examples, we compare the Hellinger norm and the ℓ1 and ℓ2 metrics, as the behavior of these differs the most.
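For reference, these metrics can be computed as follows (a minimal sketch; the zero-padding helper _pad is our convenience for comparing vectors of different lengths, and the Hellinger distance uses the 1/2 normalization above).

```python
import numpy as np

def _pad(p, q):
    """Zero-pad the shorter vector so that both live on a common support."""
    m = max(len(p), len(q))
    p = np.pad(np.asarray(p, dtype=float), (0, m - len(p)))
    q = np.pad(np.asarray(q, dtype=float), (0, m - len(q)))
    return p, q

def hellinger(p, q):
    """H(p, q) = { (1/2) * sum_x (sqrt(p_x) - sqrt(q_x))^2 }^(1/2)."""
    p, q = _pad(p, q)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def ell_k(p, q, k):
    """The l_k distance; k = np.inf gives the supremum norm."""
    p, q = _pad(p, q)
    d = np.abs(p - q)
    return d.max() if np.isinf(k) else np.sum(d ** k) ** (1.0 / k)
```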
Example 1. Suppose that p is the uniform distribution on {0, … , 5}. For n = 100 independent draws from this distribution, the resulting empirical proportions, rearrangement estimator and MLE are illustrated in Figure 1 (left). The distances of the estimators from the true mass function p are given in Table 1 (left). The maximum likelihood estimator is superior in all three metrics shown. To explore this relationship further, we repeated the estimation procedure for 1000 Monte Carlo samples of size n = 100 from the uniform distribution. Figure 2 (left) shows boxplots of the metrics for the three estimators. The figure shows that here the rearrangement and empirical estimators have the same behavior; a relationship which we establish rigorously in Theorem 2.1.
Fig 1.
Illustration of MLE and monotone rearrangement estimators: empirical proportions (black dots), monotone rearrangement estimator (dashed line), MLE (solid line), and the true mass function (grey line). Left: the true distribution is the discrete uniform; and right: the true distribution is the geometric distribution with θ = 0.75. In both cases a sample size of n = 100 was observed.
Table 1. Distances between the true p and the estimators.

| Estimator | Ex. 1: H | Ex. 1: ℓ2 | Ex. 1: ℓ1 | Ex. 2: H | Ex. 2: ℓ2 | Ex. 2: ℓ1 |
|---|---|---|---|---|---|---|
| p̂n (empirical) | 0.08043 | 0.09129 | 0.2 | 0.1641 | 0.07425 | 0.2299 |
| p̂nR (rearrangement) | 0.08043 | 0.09129 | 0.2 | 0.1290 | 0.06115 | 0.1821 |
| p̂nG (MLE) | 0.03048 | 0.03651 | 0.06667 | 0.09553 | 0.06302 | 0.1887 |
Fig 2.
Monte Carlo comparison of the estimators: boxplots of m = 1000 distances of the estimators p̂n (white), p̂nR (light grey) and p̂nG (dark grey) from the truth for a sample size of n = 100. Left: the true distribution is the discrete uniform; and right: the true distribution is the geometric distribution with θ = 0.75.
Example 2. Suppose that p is the geometric distribution with px = (1 − θ)θx for x ∈ ℕ and with θ = 0.75. For n = 100 draws from this distribution, the three estimators are shown in Figure 1 (right). The distances of the estimators from the true mass function p are given in Table 1 (right). Here, p̂n is outperformed by p̂nR and p̂nG in all the metrics, with p̂nR performing better than p̂nG in the ℓ1 and ℓ2 metrics, but not in the Hellinger distance. These relationships appear to hold true in general; see Figure 2 (right) for boxplots of the metrics obtained through Monte Carlo simulation.
The above examples illustrate our main conclusion: the MLE performs better when the true distribution p has intervals of constancy, while the MLE and rearrangement estimators are competitive when p is strictly monotone. Asymptotically, it turns out that the MLE is superior if p has any periods of constancy, while the empirical and rearrangement estimators are equivalent. However, if p is strictly monotone, then all three estimators have the same asymptotic behavior.
Both the MLE and monotone rearrangement estimators have been considered in the literature for the decreasing probability density function. The MLE, or Grenander estimator, has been studied extensively, and much is known about its behavior. In particular, if the true density is locally strictly decreasing, then the estimator converges at a rate of n1/3, and if the true density is locally flat, then the estimator converges at a rate of n1/2, cf. Prakasa Rao (1969); Carolan and Dykstra (1999), and the references therein for a further history of the problem. In both cases the limiting distribution is characterized via the LCM of a Gaussian process.
The monotone rearrangement estimator for a continuous density was introduced by Fougères (1997) (see also Dette and Pilz (2006)). It is found by calculating the monotone rearrangement of a kernel density estimator (see e.g. Lieb and Loss (1997)). Fougères (1997) shows that this estimator also converges at the n1/3 rate if the true density is locally strictly decreasing, and shows through Monte Carlo simulations that it has better behavior than the MLE for small sample sizes. The latter comparison is based on the L1 metric for different strictly decreasing densities. Unlike our Example 2, the Hellinger distance is not considered.
The outline of this paper is as follows. In Section 2 we show that all three estimators are consistent. We also establish some small sample size relationships between the estimators. Section 3 is dedicated to the limiting distributions of the estimators, where we show that the rate of convergence is n1/2 for all three estimators. Unlike the continuous case, the local behavior of the MLE is equivalent to that of the empirical estimator when the true mass function is strictly decreasing. In Section 4 we consider the limiting behavior of the ℓk and Hellinger distances of the estimators. In Section 5, we consider the estimation of the mixing distribution q. Proofs and some technical results are given in Section 6. R code to calculate the maximum likelihood estimator (i.e. p̂nG) is available from the website of the first author: www.math.yorku.ca/~hkj/Software/.
2. Some inequalities and consistency results
We begin by establishing several relationships between the three different estimators.
Theorem 2.1
(i) Suppose that p is monotone decreasing. Then
H(p̂nR, p) ≤ H(p̂n, p) and H(p̂nG, p) ≤ H(p̂n, p), (2.1)
∥p̂nR − p∥k ≤ ∥p̂n − p∥k and ∥p̂nG − p∥k ≤ ∥p̂n − p∥k for all 1 ≤ k ≤ ∞. (2.2)
(ii) If p is the uniform distribution on {0, … , y} for some integer y, then ∥p̂nR − p∥k = ∥p̂n − p∥k for all 1 ≤ k ≤ ∞, and also H(p̂nR, p) = H(p̂n, p).
(iii) If p̂n is monotone then p̂n = p̂nR = p̂nG. Under the discrete uniform distribution on {0, … , y}, this occurs with probability converging to 1/(y + 1)! as n → ∞.
(iv) If p is strictly monotone with the support of p equal to {0, … , y} where y < ∞, then P(p̂n = p̂nR = p̂nG) → 1 as n → ∞.
Let D denote the collection of all decreasing mass functions on ℕ. For any estimator p̃n of p ∈ D and k ≥ 1, let the loss function Lk be defined by Lk(p̃n, p) = ∥p̃n − p∥k², with ∥·∥k the ℓk norm defined above. The risk of p̃n at p is then defined as
Rn(p̃n, p) = n Ep[Lk(p̃n, p)] = n Ep∥p̃n − p∥k². (2.3)
Corollary 2.2
When k = 2, and for any sample size n, it holds that
supp∈D Rn(p̂n, p) = supp∈D Rn(p̂nR, p) = 1.
Based on these results, we now make the following remarks.
It is always better to use a monotone estimator (either p̂nR or p̂nG) to estimate a monotone mass function.
If the true distribution is uniform, then clearly the MLE is the better choice.
If the true mass function is strictly monotone, then the estimators p̂nR and p̂nG should be asymptotically equivalent. We make this statement more precise in Sections 3 and 4. Figure 2 (right) shows that in this case p̂nR and p̂nG have about the same performance for n = 100.
When only the monotonicity constraint is known about the true p, then, by Corollary 2.2, p̂nG is a better choice of estimator than p̂nR.
Remark 2.3
In continuous density estimation one of the most popular measures of distance is the L1 norm, which corresponds to the ℓ1 norm on mass functions. However, for discrete mass functions, it is more natural to consider the ℓ2 norm. One of the reasons is made clear in the following sections (cf. Theorem 3.8, Corollaries 4.1 and 4.2, and Remark 4.4). The ℓ2 space is the smallest space in which we obtain convergence results, without additional assumptions on the true distribution p.
To examine more closely the case when the true distribution p is neither uniform nor strictly monotone we turn to Monte Carlo simulations. Let pU(y) denote the uniform mass function on {0, … , y}. Figure 3 shows boxplots of m = 1000 samples of the estimators for three distributions:
(top) p = 0.2pU(3) + 0.8pU(7)
(center) p = 0.15pU(3) + 0.1pU(7) + 0.75pU(11)
(bottom) p = 0.25pU(1) + 0.2pU(3) + 0.15pU(5) + 0.4pU(7)
On the left we have a small sample size of n = 20, while on the right n = 100. For each distribution and sample size, we calculate the three estimators (the estimators p̂n, p̂nR and p̂nG are shown in white, light grey and dark grey, respectively) and compute their distance functions from the truth (Hellinger, ℓ1, and ℓ2). Note that the MLE outperforms the other estimators in all three metrics, even for small sample sizes. It appears also that the more regions of constancy the true mass function has, the better the relative performance of the MLE, even for small sample size (see also Figure 2). By considering the asymptotic behavior of the estimators, we are able to make this statement more precise in Section 4.
Fig 3.
Comparison of the estimators p̂n (white), p̂nR (light grey) and p̂nG (dark grey).
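A simulation of the kind summarized in Figures 2 and 3 can be sketched as follows for the mixture p = 0.2pU(3) + 0.8pU(7); this is our own illustrative code (the number of replications, the seed, and the restriction to the ℓ2 distance are arbitrary choices), repeated here in self-contained form.

```python
import numpy as np

def gren(w):
    """LCM slopes via pool-adjacent-violators (antitonic regression)."""
    vals, sizes = [], []
    for v in np.asarray(w, dtype=float):
        vals.append(v); sizes.append(1)
        while len(vals) > 1 and vals[-2] < vals[-1]:
            s = sizes[-2] + sizes[-1]
            m = (vals[-2] * sizes[-2] + vals[-1] * sizes[-1]) / s
            vals[-2:], sizes[-2:] = [m], [s]
    return np.repeat(vals, sizes)

def ell2(a, b):
    m = max(len(a), len(b))
    a = np.pad(np.asarray(a, float), (0, m - len(a)))
    b = np.pad(np.asarray(b, float), (0, m - len(b)))
    return np.sqrt(np.sum((a - b) ** 2))

# p = 0.2 * uniform{0..3} + 0.8 * uniform{0..7}
p = 0.2 * np.pad(np.full(4, 1 / 4), (0, 4)) + 0.8 * np.full(8, 1 / 8)

rng = np.random.default_rng(0)
n, m = 100, 1000
dist = {"empirical": [], "rearrangement": [], "MLE": []}
for _ in range(m):
    sample = rng.choice(len(p), size=n, p=p)
    p_hat = np.bincount(sample, minlength=len(p)) / n
    dist["empirical"].append(ell2(p_hat, p))
    dist["rearrangement"].append(ell2(np.sort(p_hat)[::-1], p))
    dist["MLE"].append(ell2(gren(p_hat), p))

for name, values in dist.items():
    print(name, np.mean(values))   # the MLE shows the smallest average l2 distance
```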
All three estimators are consistent estimators of the true distribution, regardless of their relative performance.
Theorem 2.4
Suppose that p is monotone decreasing. Then all three estimators p̂n, p̂nR and p̂nG are consistent estimators of p in the sense that
d(p̃n, p) → 0
almost surely as n → ∞ for p̃n = p̂n, p̂nR and p̂nG, whenever d = ∥·∥k with 1 ≤ k ≤ ∞ or d = H.
As a corollary, we obtain the following Glivenko-Cantelli type result.
Corollary 2.5
Let F̂nR(x) = Σz≤x (p̂nR)z and F̂nG(x) = Σz≤x (p̂nG)z, with F the true distribution function of p. Then
supx |F̂nR(x) − F(x)| → 0 and supx |F̂nG(x) − F(x)| → 0
almost surely.
3. Limiting distributions
Next, we consider the large sample behavior of p̂n, p̂nR and p̂nG. To do this, define the fluctuation processes Yn, YnR, and YnG as
Yn = √n(p̂n − p), YnR = √n(p̂nR − p), YnG = √n(p̂nG − p).
Regardless of the shape of p, the limiting distribution of Yn is well-known. In what follows we use the notation Yn,x →d Yx to denote weak convergence of random variables in ℝ (we also use this notation for random vectors), and Yn ⇒ Y to denote that the process Yn converges weakly to the process Y. Let Y be a Gaussian process on the Hilbert space ℓ2 with mean zero and covariance operator S such that 〈S e(x), e(x’)〉 = pxδx,x’ − pxpx’, where e(x) denotes a sequence which is one at location x, and zero everywhere else. The process is well-defined, since
Σx 〈S e(x), e(x)〉 = Σx px(1 − px) ≤ Σx px = 1 < ∞.
For background on Gaussian processes on Hilbert spaces we refer to Parthasarathy (1967).
Theorem 3.1
For any mass function p, the process Yn satisfies Yn ⇒ Y in ℓ2.
Remark 3.2
We assume that Y is defined only on the support of the mass function p. That is, let κ = sup{x : px > 0}. If κ < ∞, then Y = (Y0, … , Yκ).
3.1. Local behavior
At a fixed point x there are only two possibilities for the true mass function p: either x belongs to a flat region for p (i.e. pr = ⋯ = px = ⋯ = ps for some r ≤ x ≤ s), or p is strictly decreasing at x: px−1 > px > px+1. In the first case the three estimators exhibit different limiting behavior, while in the latter all three have the same limiting distribution. In some sense, this result is not surprising. Suppose that x is such that px−1 > px > px+1. Then asymptotically the same inequalities hold for the empirical proportions, i.e. p̂n,x−1 > p̂n,x > p̂n,x+1 for sufficiently large n. Therefore, in p̂nR the rearrangement of the values at x will always stay the same, i.e. (p̂nR)x = p̂n,x. Similarly, the empirical distribution function will also be locally concave at x, and therefore both x and x − 1 will be touchpoints of F̂n with its LCM. This implies that (p̂nG)x = p̂n,x.
On the other hand, suppose that x is such that px−1 = px = px+1. Then asymptotically the empirical density will have random order near x, and therefore both re-orderings (either via rearrangement or via the LCM) will be necessary to obtain p̂nR and p̂nG.
3.1.1. When p is flat at x
We begin with some notation. Let q = {qx}x≥0 be a sequence, and let r ≤ s be non-negative integers. We define q(r,s) = {qr, qr+1, … , qs−1, qs} to be the r through s elements of q.
Proposition 3.3
Suppose that for some r ≤ s in ℕ with s − r ≥ 1 the probability mass function p satisfies pr−1 > pr = ⋯ = ps > ps+1. Then
(Yn)(r,s) →d Y(r,s), (YnR)(r,s) →d rear(Y(r,s)), and (YnG)(r,s) →d gren(Y(r,s)).
The last statement of the above proposition is the discrete version of the same result in the continuous case due to Carolan and Dykstra (1999) for a density with locally flat regions. Thus, both the discrete and continuous settings have similar behavior in this situation. Figure 4 shows the exact and limiting cumulative distribution functions when p = 0.2pU(3) + 0.8pU(7) (same as in Figure 3, top) at locations x = 4 and x = 7. Note the significantly “more discrete” behavior of the empirical and rearrangement estimators in comparison with the MLE. Also note the lack of accuracy in the approximation at x = 4 when n = 100 (top left), which is more prominent for the rearrangement estimator. This occurs because x = 4 is a boundary point, in the sense that p3 > p4, and is therefore least resilient to any global changes in the empirical proportions. Lastly, note that at x = 4 the distribution functions of (YR)4 and (YG)4 lie below that of Y4, while at x = 7 those of (YR)7 and (YG)7 lie above that of Y7. It is not difficult to see that these relationships must hold from the definition of (YR)(4,7) = rear(Y(4,7)) and (YG)(4,7) = gren(Y(4,7)).
Fig 4.
The limiting distributions at x = 4 (left) and at x = 7 (right) when p = 0.2pU(3) + 0.8pU(7): the limiting distributions are shown (dashed) along with the exact distributions (solid) of Yn, YnR and YnG, for n = 100 (top) and n = 1000 (bottom).
Proposition 3.4
Let θ = pr = ⋯ = ps, and let denote a multivariate normal vector with mean zero and variance matrix where
for α−1 = s − r + 1. Let Z be a standard normal random variable independent of , and let τ = s − r + 1. Then
Note that the behavior of gren(Y(r,s)) and rear(Y(r,s)) will be quite different from that of Y(r,s), since both are monotone decreasing almost surely, but the same is not true for Y(r,s).
Remark 3.5
To match the notation of Carolan and Dykstra (1999), note that the limit appearing in Proposition 3.4 is equivalent to the left slopes at the points {1, … , τ}/τ of the least concave majorant of the standard Brownian bridge at the points {0, 1, … , τ}/τ. This random vector most closely matches the left derivative of the least concave majorant of the Brownian bridge on [0, 1], which is the process that shows up in the limit for the continuous case.
3.1.2. When p is strictly monotone at x
In this situation, the three estimators , and have the same asymptotic behavior. This is considerably different than what happens for continuous densities, and occurs because of the inherent discreteness of the problem for probability mass functions.
Proposition 3.6
Suppose that for some r ≤ s in ℕ with s − r ≥ 0 the probability mass function p satisfies pr−1 > pr > ⋯ > ps > ps+1. Then
(Yn)(r,s), (YnR)(r,s) and (YnG)(r,s) all converge in distribution to Y(r,s).
Remark 3.7
We note that the convergence results of Propositions 3.3 and 3.6 also hold jointly. That is, the convergence of the three processes may also be proved jointly.
3.2. Convergence of the process
We now strengthen these results to obtain convergence of the processes and in ℓ2. Note that the limit of Yn has already been stated in Theorem 3.1.
Theorem 3.8
Let Y be the Gaussian process defined in Theorem 3.1, with p a monotone decreasing distribution. Define YR and YG as the processes obtained by the following transforms of Y: for all periods of constancy of p, i.e. for all s ≥ r with s − r ≥ 1 such that pr−1 > pr = ⋯ = px = ⋯ = ps > ps+1, let
(YR)(r,s) = rear(Y(r,s)) and (YG)(r,s) = gren(Y(r,s)),
and let (YR)x = (YG)x = Yx for all x outside such periods of constancy. Then Yn ⇒ Y, YnR ⇒ YR and YnG ⇒ YG in ℓ2.
The two extreme cases, p strictly monotone decreasing and p equal to the uniform distribution, may now be considered as corollaries. By studying the uniform case, we also study the behavior of YG (via Proposition 3.4), and therefore we consider this case in detail.
Corollary 3.9
Suppose that p is strictly monotone decreasing. That is, suppose that px > px+1 for all x ≥ 0. Then YnR ⇒ Y and YnG ⇒ Y in ℓ2.
3.2.1. The uniform distribution
Here, the limiting distribution Y is a vector of length y+1 having a multivariate normal distribution with E[Yx] = 0 and cov(Yx, Yz) = (y + 1)−1δx,z − (y + 1)−2.
Corollary 3.10
Suppose that p is the uniform probability mass function on {0, … , y}, where y ∈ ℕ. Then YnR ⇒ rear(Y) and YnG ⇒ gren(Y).
The limiting process gren(Y) may also be described as follows. Let B denote the standard Brownian bridge process on [0, 1], and write Uk = B((k + 1)/(y + 1)) for k = −1, … , y. Then we have equality in distribution of
(Y0 + Y1 + ⋯ + Yk)k=−1,…,y and (Uk)k=−1,…,y.
In particular we have that U−1 = Uy = 0. Thus, the process U is a discrete analogue of the Brownian bridge, and gren(Y) is the vector of (left) derivatives of the least concave majorant of {(j, Uj) : j = −1, … , y}. Figure 5 illustrates two different realizations of the processes Y and gren(Y).
Fig 5.
The relationship between the limiting process Y and the least concave majorant of its partial sums for the uniform distribution on {0, … , 5}. Left: the slopes of the lines L1,L2 and L3 give the values gren(Y)0, gren(Y)1 = ⋯ = gren(Y)4 and gren(Y)5, respectively. Right: the discrete Brownian bridge lies entirely below zero. Therefore, its LCM is zero, and also gren(Y) ≡ 0. This event occurs with positive probability (see also Figure 6).
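Since the limit in the uniform case is an explicit function of y + 1 centered Gaussians, it is easy to simulate. The following sketch (ours, with an arbitrary replication count) draws the increments of the discrete Brownian bridge, applies gren, and estimates the probability that gren(Y) ≡ 0, which Figure 6 reports as approximately 0.0999 for y = 9.

```python
import numpy as np

def gren(w):
    """LCM slopes via pool-adjacent-violators (antitonic regression)."""
    vals, sizes = [], []
    for v in np.asarray(w, dtype=float):
        vals.append(v); sizes.append(1)
        while len(vals) > 1 and vals[-2] < vals[-1]:
            s = sizes[-2] + sizes[-1]
            m = (vals[-2] * sizes[-2] + vals[-1] * sizes[-1]) / s
            vals[-2:], sizes[-2:] = [m], [s]
    return np.repeat(vals, sizes)

def sample_Y_uniform(y, size, rng):
    """Draws from the limit Y for the uniform distribution on {0, ..., y}:
    increments of the Brownian bridge over intervals of length 1/(y+1),
    i.e. centered i.i.d. N(0, 1/(y+1)) variables (each row sums to zero)."""
    Z = rng.normal(scale=np.sqrt(1.0 / (y + 1)), size=(size, y + 1))
    return Z - Z.mean(axis=1, keepdims=True)

rng = np.random.default_rng(1)
y, m = 9, 50_000
Y = sample_Y_uniform(y, m, rng)
YG = np.array([gren(row) for row in Y])

# Probability that the discrete bridge stays below zero, so that gren(Y) is identically 0;
# the text reports approximately 0.0999 when y = 9.
print(np.mean(np.all(np.abs(YG) < 1e-12, axis=1)))
# Monte Carlo estimate of E||gren(Y)||_2^2, which is of order (log y)/(y + 1) (Section 4).
print(np.mean(np.sum(YG ** 2, axis=1)))
```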
Remark 3.11
Note that if the discrete Brownian bridge is itself concave, then the limits Y, rear(Y) and gren(Y) will be equivalent. This occurs with probability 1/(y + 1)!.
The result matches that in part (iii) of Theorem 2.1.
Figure 6 examines the behavior of the limiting distribution of the MLE for several values of x. Since this is found via the LCM of the discrete Brownian bridge, it maintains the monotonicity property in the limit: that is, gren(Y)x ≥ gren(Y)x+1. This can easily be seen by examining the marginal distributions of gren(Y) for different values of x (Figure 6, left). For each x, there is a positive probability that gren(Y)x = 0. This occurs if the discrete Brownian bridge lies entirely below zero and then the least concave majorant is identically zero, in which case gren(Y)x = 0 for all x = 0, … , y (as in Figure 5, right). The probability of this event may be calculated exactly using the distribution function of the multivariate normal. Figure 6 (right), shows several values for different y.
Fig 6.
Limiting distribution of the MLE for the uniform case with y = 9: marginal cumulative distribution functions at x = 0, 4, 9 (left). The probability that gren(Y) ≡ 0 is plotted for different values of y (right). For y = 9, it is equal to 0.0999.
4. Limiting distributions for the metrics
In the previous section we obtained asymptotic distribution results for the three estimators. To compare the estimators, we need to also consider convergence of the Hellinger and ℓk metrics. Our results show that and are asymptotically equivalent (in the sense that the metrics have the same limit). The MLE is also asymptotically equivalent, but if and only if p is strictly monotone. If p has any periods of constancy, then the MLE has better asymptotic behavior. Heuristically, this happens because, by definition, YG is a sequence of local averages of Y, and averages have smaller variability. Furthermore, the more and larger the periods of constancy, the better the MLE performs, see, in particular, Proposition 4.5 below. These results quantify, for large sample size, the observations of Figure 3.
The rate of convergence of the ℓ2 metric is an immediate consequence of Theorem 3.8. Below, the notation Z1 ≤S Z2 denotes stochastic ordering: i.e. P(Z1 > x) ≤ P(Z2 > x) for all x ∈ ℝ (the ordering is strict if both inequalities are replaced with strict inequalities).
Corollary 4.1
Suppose that p is a monotone decreasing distribution. Then, for any 2 ≤ k ≤ ∞,
∥Yn∥k →d ∥Y∥k, ∥YnR∥k →d ∥YR∥k = ∥Y∥k, and ∥YnG∥k →d ∥YG∥k ≤S ∥Y∥k.
If p is not strictly monotone, then ≤S may be replaced with <S. The above convergence also holds in expectation (that is, E∥Yn∥k → E∥Y∥k, and so forth). Furthermore,
E∥YG∥k ≤ E∥YR∥k = E∥Y∥k,
with equality if and only if p is strictly monotone.
Convergence of the other two metrics is not as immediate, and depends on the tail behavior of the distribution p.
Corollary 4.2
Suppose that p is such that Σx √px < ∞. Then
∥Yn∥1 →d ∥Y∥1, ∥YnR∥1 →d ∥YR∥1 = ∥Y∥1, and ∥YnG∥1 →d ∥YG∥1 ≤S ∥Y∥1.
If p is not strictly monotone, then ≤S may be replaced with <S. The above convergence also holds in expectation, and
E∥YG∥1 ≤ E∥YR∥1 = E∥Y∥1,
with equality if and only if p is strictly monotone.
Convergence of the Hellinger distance requires an even more stringent condition.
Corollary 4.3
Suppose that κ = sup{x : px > 0} < ∞. Then
8nH²(p̂n, p) →d Σx≤κ Yx²/px, 8nH²(p̂nR, p) →d Σx≤κ (YR)x²/px = Σx≤κ Yx²/px, and 8nH²(p̂nG, p) →d Σx≤κ (YG)x²/px ≤S Σx≤κ Yx²/px.
If p is not strictly monotone, then ≤S may be replaced with <S. The distribution of Σx≤κ Yx²/px is chi-squared with κ degrees of freedom. The above convergence also holds in expectation, and
E[Σx≤κ (YG)x²/px] ≤ E[Σx≤κ Yx²/px] = κ,
with equality if and only if p is strictly monotone.
Remark 4.4
We note that if Σx √px = ∞, then ∥Y∥1 = ∞ almost surely, and if κ = ∞, then Σx Yx²/px is also infinite almost surely. This implies that for the empirical and rearrangement estimators, the conditions in Corollaries 4.2 and 4.3 are also necessary for convergence. The same is true for the Grenander estimator, when the true distribution is strictly decreasing.
Proposition 4.5
Let p be a decreasing distribution, and write it in terms of its intervals of constancy. That is, let
px = Σi≥1 θi 1{x ∈ Ci},
where θi > θi+1 for all i = 1, 2, …, and where {Ci}i≥1 forms a partition of ℕ. Then
E∥YG∥2² = Σi≥1 θi (1 + 1/2 + ⋯ + 1/|Ci|) − Σx px².
Also, if κ = sup{x : px > 0} < ∞, then
E[Σx≤κ (YG)x²/px] = Σi≥1 (1 + 1/2 + ⋯ + 1/|Ci|) − 1.
This result allows us to calculate explicitly exactly how much “better” the performance of the MLE is, in comparison to Y and YR. With ℝ-valued random variables, it is standard to compare the asymptotic variance to evaluate the relative efficiency of two estimators. We, on the other hand, are dealing with ℓ2-valued processes. Consider some process W = {Wx}x≥0, and let ΣW denote its covariance matrix (of size (κ + 1) × (κ + 1), possibly infinite). Then the trace norm of ΣW is equal to the expected squared ℓ2 norm of W,
E∥W∥2² = trace(ΣW) = Σi≥1 λi,
where {λi}i≥1 denotes the eigenvalues of ΣW. Therefore, Corollary 4.1 tells us that, asymptotically, YG is more efficient than YR and Y, in the sense that
trace(ΣYG) ≤ trace(ΣYR) = trace(ΣY),
with equality if and only if p is strictly decreasing. Furthermore, Proposition 4.5 allows us to calculate exactly how much more efficient YG is for any given mass function p.
Suppose that p has exactly one period of constancy on r ≤ x ≤ s, and let τ = s − r + 1 ≥ 2. Further, suppose that px = θ* for r ≤ x ≤ s. Then
E∥Y∥2² − E∥YG∥2² = θ* (τ − {1 + 1/2 + ⋯ + 1/τ}).
In particular, if p is the uniform distribution on {0, … , y}, then we find that E∥Y∥2² = E∥YR∥2² = y/(y + 1), whereas E∥YG∥2² behaves like (log y)/(y + 1), and is much smaller.
Note that if p is strictly monotone, then we obtain
E∥YG∥2² = Σx px(1 − px) = E∥Y∥2²,
as required. Also, if p is the uniform probability mass function on {0, … , y}, we conclude that
E∥YG∥2² = (Hy+1 − 1)/(y + 1),
where Hm = 1 + 1/2 + ⋯ + 1/m denotes the m-th harmonic number.
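For the uniform distribution, the value E∥Y∥2² = y/(y + 1) can be checked directly from the covariance given in Section 3.2.1; the short computation below is ours.

```latex
\mathbb{E}\|Y\|_2^2
  = \sum_{x=0}^{y} \operatorname{var}(Y_x)
  = \sum_{x=0}^{y}\left\{\frac{1}{y+1}-\frac{1}{(y+1)^2}\right\}
  = 1-\frac{1}{y+1}
  = \frac{y}{y+1}.
```

Since Hy+1 grows like log y, the ratio of the two limiting risks is of order (log y)/y, quantifying the advantage of the MLE for flat distributions.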
Lastly, consider a distribution with bounded support, and fix r < s where p is strictly monotone on {r, … , s}. That is, we have that pr−1 > pr > ⋯ > ps > ps+1. Next define p̃ by p̃x = px for x < r and x > s, and p̃x = (pr + ⋯ + ps)/τ for x ∈ {r, … , s}. Then the difference in the expected Hellinger metrics under the two distributions is proportional to
τ − (1 + 1/2 + ⋯ + 1/τ),
where τ = s − r + 1. Therefore, the longer the intervals of constancy in a distribution, the better the performance of the MLE.
Remark 4.6
From Theorem 1.6.2 of Robertson et al. (1988) it follows that for any x ≥ 0,
E[(YG)x²] ≤ E[Yx²].
This result may also be proved using the method used to show Proposition 4.5. Note that this pointwise inequality does not hold in general for YG replaced with YR.
Corollaries 4.1 and 4.2 then translate into statements concerning the limiting risks of the three estimators p̂n, p̂nR, and p̂nG as follows, where the risk was defined in (2.3). In particular, we see that, asymptotically, both p̂n and p̂nR are inadmissible, and are dominated by the maximum likelihood estimator p̂nG.
Corollary 4.7
For any 2 ≤ k ≤ ∞, and any p ∈ D, the class of decreasing probability mass functions on ℕ,
limn→∞ Rn(p̂n, p) = limn→∞ Rn(p̂nR, p),
limn→∞ Rn(p̂nG, p) ≤ limn→∞ Rn(p̂nR, p).
The inequality in the last line is strict if p is not strictly monotone. The statements also hold for k = 1 under the additional hypothesis that Σx √px < ∞.
5. Estimating the mixing distribution
Here, we consider the problem of estimating the mixing distribution q in (1.1). This may be done directly via the estimators of p and the formula (1.2). Define the estimators of the mixing distribution as follows:
q̂n,x = (x + 1)(p̂n,x − p̂n,x+1), (q̂nR)x = (x + 1)((p̂nR)x − (p̂nR)x+1), (q̂nG)x = (x + 1)((p̂nG)x − (p̂nG)x+1).
Each of these estimators sums to one by definition; however, q̂n is not guaranteed to be positive. The main results of this section are consistency and √n-rate of convergence of these estimators.
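In code, any of the three plug-in estimators is obtained by applying (1.2) to the corresponding estimate of p; the helper below (our notation) makes the computation explicit.

```python
import numpy as np

def mixing_estimate(p_hat):
    """Plug-in estimator of the mixing distribution via (1.2):
    q_x = (x + 1) * (p_x - p_{x+1}), with p set to 0 beyond the support of p_hat.
    The result always sums to one, but has negative entries whenever p_hat is
    not monotone (e.g. for the raw empirical estimator)."""
    p_hat = np.asarray(p_hat, dtype=float)
    diffs = p_hat - np.append(p_hat[1:], 0.0)
    return (np.arange(len(p_hat)) + 1) * diffs
```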
Theorem 5.1
Suppose that p is monotone decreasing and satisfies Ep[X] < ∞. Then all three estimators q̂n, q̂nR and q̂nG are consistent estimators of q in the sense that
d(q̃n, q) → 0
almost surely as n → ∞ for q̃n = q̂n, q̂nR and q̂nG, whenever d = ∥·∥k with 1 ≤ k ≤ ∞ or d = H.
To study the rates of convergence we define the fluctuation processes Zn, ZnR, and ZnG as
Zn,x = √n(q̂n,x − qx), (ZnR)x = √n((q̂nR)x − qx), (ZnG)x = √n((q̂nG)x − qx),
with limiting processes defined as
Zx = (x + 1)(Yx − Yx+1), (ZR)x = (x + 1)((YR)x − (YR)x+1), (ZG)x = (x + 1)((YG)x − (YG)x+1).
Theorem 5.2
Suppose that p is such that κ = sup{x ≥ 0 : px > 0} < ∞. Then Zn ⇒ Z, ZnR ⇒ ZR and ZnG ⇒ ZG. Furthermore, ∥Zn∥k →d ∥Z∥k, ∥ZnR∥k →d ∥ZR∥k and ∥ZnG∥k →d ∥ZG∥k for any k ≥ 1. These convergences also hold in expectation. Also, the corresponding results hold for the Hellinger distance, and these again also hold in expectation.
As before, we have asymptotic equivalence of all three estimators if p is strictly decreasing (cf. Corollary 3.9). To determine the relative behavior of the estimators q̂nR and q̂nG we turn to simulations. Since q̂n is not guaranteed to be a probability mass function (unlike the other two estimators), we exclude it from further consideration.
In Figure 7, we show boxplots of m = 1000 samples of the distances H, ℓ1 and ℓ2 for q̂nR (light grey) and q̂nG (dark grey) with n = 20 (left), n = 100 (center) and n = 1000 (right). From top to bottom the true distributions are
p = pU(5),
p = 0.2pU(3) + 0.8pU(7),
p = 0.25pU(1) + 0.2pU(3) + 0.15pU(5) + 0.4pU(7), and
p is geometric with θ = 0.75.
We can see that q̂nG has better performance in all metrics, except for the case of the strictly decreasing distribution. As before, the flatter the true distribution is, the better the relative performance of q̂nG. Notice that by Corollary 3.9 and Theorem 5.2 the asymptotic behavior (i.e. rate of convergence and limiting distributions) of the ℓ2 norm of ZnR and ZnG should be the same if p is strictly decreasing.
Fig 7.
Monte Carlo comparison of the estimators q̂nR (light grey) and q̂nG (dark grey).
Remark 5.3
For κ = ∞, the process Zn is known to converge weakly in ℓ2, and in ℓ1, under appropriate summability conditions on p; see e.g. Araujo and Giné (1980, Exercise 3.8.14, page 205). We therefore conjecture that ZnR and ZnG converge weakly to ZR and ZG in ℓ2 (resp. ℓ1) under the same conditions.
6. Proofs
Proof of Remark 1.1
This bound follows directly from the definition of p, since
px = Σy≥x qy/(y + 1) ≤ (1/(x + 1)) Σy≥x qy ≤ 1/(x + 1).
In the next lemma, we prove several useful properties of both the rearrangement and Grenander operators.
Lemma 6.1
Consider two sequences p and q with support S, and let ϕ(·) denote either the Grenander or rearrangement operator. That is, ϕ(p) = gren(p) or ϕ(p) = rear(p).
(i) For any increasing function f : S → ℝ,
Σx∈S f(x)ϕ(p)x ≤ Σx∈S f(x)px. (6.1)
(ii) Suppose that Ψ : ℝ → ℝ is a non-negative convex function such that Ψ(0) = 0, and that q is decreasing. Then,
Σx∈S Ψ(ϕ(p)x − qx) ≤ Σx∈S Ψ(px − qx). (6.2)
(iii) Suppose that |S| is finite. Then ϕ(p) is a continuous function of p.
Proof
- Suppose that S = {s1, … , s2}, where it is possible that s2 = ∞. Then it is clear from the properties of the rearrangement and Grenander operators that
for y ∈ S. These inequalities immediately imply (6.1), since, by summation by parts,
and f is an increasing function. - For the Grenander estimator this is simply Theorem 1.6.1 in Robertson, Wright and Dykstra (1988). For the rearrangement estimator, we adapt the proof from Theorem 3.5 in Lieb and Loss (1997). We first write Ψ = Ψ+ + Ψ−, where Ψ+(x) = Ψ(x) for x ≥ 0 and Ψ−(x) = Ψ(x) for x ≤ 0. Now, since Ψ+ is convex, there exists an increasing function such that . Now,
Applying Fubini’s theorem, we have that
Now, the function is an increasing function of x, and for ϕ(p) = rear(p), for each fixed s we have that , since is an increasing function. Therefore, applying (6.1), we find that the last display above is bounded below by
The proof for Ψ− is the same, except that here we use the corresponding identity for Ψ−. For part (iii), since |S| is finite, we know that p is a finite vector, and therefore it is enough to prove continuity at any point x ∈ S. For ϕ = rear this is a well-known fact. Next, note that if pn → p, then the partial sums of pn also converge to the partial sums of p. From Lemma 2.2 of Durot and Tocquet (2003), it follows that the least concave majorant of the partial sums of pn converges to the least concave majorant of the partial sums of p, and hence, so do their differences. Thus ϕ(pn)x → ϕ(p)x.
6.1. Some inequalities and consistency results: Proofs
Proof of Theorem 2.1
- Choosing Ψ(t) = |t|k in (6.2) of Lemma 6.1 proves (2.2). To prove (2.1) recall that
By Hardy et al. (1952), Theorem 368, page 261, (or Theorem 3.4 in Lieb and Loss (1997)) it follows that
which proves the result for the rearrangement estimator. It remains to prove the same for the MLE. Let {Bi}i≥1 denote a partition of . By definition,
for some partition. Jensen’s inequality now implies that
which completes the proof of part (i). Part (ii) is obvious.
The second statement is obvious in light of (2.2) with k = ∞. To see that the probability of monotonicity of p̂n converges to 1/(y+1)! under the uniform distribution, note that the event in question is the same as the event that the components of the vector √n(p̂n − p) are decreasing. This vector converges in distribution to Z ~ Ny+1(0, Σ) where Σ = diag(1/(y + 1)) − (y + 1)−211T, and the probability P (Z1 ≥ Z2 ≥ ⋯ ≥ Zy+1) = 1/(y + 1)! since the components of Z are exchangeable.
Proof of Corollary 2.2
For any , we have that
Plugging in the discrete uniform distribution on {0, … , κ}, and applying part (ii) of Theorem 2.1, we find that
Thus, for any ε > 0, there exists a , such that
Since the upper bound on both risks is one, the result follows.
Proof of Theorem 2.4
The results of this theorem are quite standard, and we provide a proof only for completeness. Let denote the empirical distribution function and F the cumulative distribution function of the true distribution p. For any K (large), we have that for any x > K ,
Fix ε > 0, and choose K large enough so that (1 − F(K)) < ε/6. Next, there exists an n0 sufficiently large so that and for all n ≥ n0 almost surely. Therefore for n ≥ n0
This shows that ∥p̂n − p∥k → 0 almost surely for k = ∞. A similar approach proves the result for any 1 ≤ k < ∞. Convergence of the Hellinger distance follows since H²(p, q) ≤ (1/2)∥p − q∥1 for mass functions (see e.g. Le Cam, (1969), page 35). Consistency of the other estimators p̂nR and p̂nG now follows from the inequalities of Theorem 2.1.
Proof of Corollary 2.5
Note that by virtue of the definitions of the estimators, we have that F̂nR(x) ≥ F̂n(x) and F̂nG(x) ≥ F̂n(x) for all x ≥ 0. Now, fix ε > 0. Then there exists a K such that 1 − F(K) < ε. By the Glivenko-Cantelli lemma, there exists an n0 such that for all n ≥ n0
almost surely. Furthermore, by Theorem 2.4, n0 can be chosen large enough so that for all n ≥ n0
almost surely. Therefore, for all n ≥ n0, we have that
The proof for the rearrangement estimator is identical.
6.2. Limiting distributions: Proofs
Lemma 6.2
Let Wn be a sequence of processes in ℓk with 1 ≤ k < ∞. Suppose that
supn E|Wn,x|k < ∞ for each x ∈ ℕ,
limm→∞ supn Σx≥m E|Wn,x|k = 0.
Then Wn is tight in ℓk.
Proof
Note that for k < ∞, compact sets K are subsets of ℓk such that there exists a sequence of real numbers Ax for x ∈ ℕ and a sequence λm → 0 such that
|wx| ≤ Ax for all x ∈ ℕ,
Σx≥m |wx|k ≤ λm for all m,
for all elements w ∈ K. Clearly, if the conditions of the lemma are satisfied, then for each ε > 0, we have that
for all n. Thus, Wn is tight in ℓk.
Proof of Theorem 3.1
Convergence of the finite dimensional distributions is standard. It remains to prove tightness in ℓ2. By Lemma 6.2 this is straightforward, since E[Yn,x²] ≤ px for each x, and
Σx≥m E[Yn,x²] = Σx≥m px(1 − px) ≤ Σx≥m px → 0 as m → ∞.
Throughout the remainder of this section we make extensive use of a set equality for the least concave majorant known as the “switching relation”. Let
(6.3) |
denote the first time that the process reaches its maximum. Then the following holds
(6.4) |
For more background (as well as a proof) of this fact see, for example, Balabdaoui et al. (2009).
Proof of Proposition 3.3
Let F denote the cumulative distribution function for the function p. For fixed it follows from (6.4) that
(6.5) |
where . Note that for any constant c, argmaxL(Zn(y)) = argmaxL(Zn(y) + c), and therefore we instead take
where
Let denote the standard Brownian bridge on [0, 1]. It is well-known that . Also, Wn(y) → ∞ for y ∉ {r − 1, … , s}, and it is identically zero otherwise. It follows that the limit of (6.5) is
for any x ∈ {r, … , s}. Note that the process
and therefore the probability above is equal to
for x ∈ {r, … , s}. Since the half-open intervals [a, b) are convergence determining, this proves pointwise convergence of to gren(Y)x.
To show convergence of the rearrangement estimator fluctuation process, note that for sufficiently large n we have that for all x ∈ {r, … , s} and k ≥ 1. Therefore, and furthermore, since px is constant here, . The result now follows from the continuous mapping theorem.
Proof of Proposition 3.4
To simplify notation, let for m = 0, … , s − r + 1. Also, let θ = pr = ⋯ = ps and then Gm = F(m − r + 1) − F(r − 1) = θm. Write
where s̄ = s − r + 1. Let . Then and some calculation shows that and
Also, . Let Z be a standard normal random variable independent of the standard Brownian bridge . We have shown that
Next, let for m = 1, … , s̄. The vector is multivariate normal with mean zero and . To finish the proof, note that for any constant c.
Proof of Proposition 3.6
The claim for the rearrangement estimator follows directly from Theorem 2.4 for k = ∞. To prove the second claim, we will show that . To do this, we again use the switching relation (6.4).
Fix ε > 0. Then
(6.6) |
where . Since for any constant c, , we instead take
where
Let denote the standard Brownian bridge on [0, 1]. It is well-known that and . Also Wn(y) = 0 at y = −1, 0 and Wn(y) → ∞ for y ∉ {−1, 0}. Define
and notice that . It follows that the limit of (6.6) is
since . A similar argument proves that
showing that and completing the proof.
Proof of Theorem 3.8
Let ϕ denote an operator on sequences in ℓ2. Specifically, we take ϕ = gren or ϕ = rear. Also, for a fixed mass function p, let −1 = τ1 < τ2 < ⋯ denote the right endpoints of the successive periods of constancy of p, so that p is constant on each block {τi + 1, … , τi+1}. Next, define ϕp to be the local version of the ϕ operator. That is, for each i ≥ 1, ϕp(q)x = ϕ(q(τi+1,τi+1))x for all τi + 1 ≤ x ≤ τi+1.
Fix ε > 0, and suppose that qn → q in ℓ2. Then there exists a and an n0 such that . By Lemma 6.1, ϕp is continuous on finite blocks, and therefore it is continuous on {0, … , K}. Hence, there exists a such that for all
Applying (6.2), we find that for all .
which shows that ϕp is continuous on ℓ2. Since Yn ⇒ Y in ℓ2, it follows, by the continuous mapping theorem, that . However, both and are of the form . To complete the proof of the theorem it is enough to show that
converges to zero in L1; that is, we will show that .
By Skorokhod’s theorem, there exists a probability triple and random processes Y and , such that Yn → Y almost surely in ℓ2. Fix ε > 0 and find such that .
Next, let , and let . Then, there exists an n0 such that for all n ≥ n0
(6.7) |
(6.8) |
almost surely (see Corollary 2.5).
Now, consider any . It follows that any such m is also a touchpoint of the operator ϕ on . Here, by touchpoint we mean that . From (6.7), it follows that
which implies that m is a touchpoint for the rearrangement estimator. For the Grenander estimator, we require (6.8). Here,
Therefore, the slope of changes from m to m + 1, which implies that m is a touchpoint almost surely. Let . An important property of the ϕ operator is if m < m’ are two touchpoints of ϕ applied to , then for all m+1 ≤ x ≤ m’, . Now, since p takes constant values between the touchpoints , it follows that , for all x ≤ K.
Therefore, for all n ≥ n0
almost surely. It follows that
and hence
Since , with , we may apply Fatou’s lemma so that
Letting ε → 0 completes the proof.
Corollaries 3.9 and 3.10 are obvious consequences of Theorem 3.8. Remark 3.11 is proved in the following section.
6.3. Limiting distributions for metrics: Proofs
Proof of Corollary 4.1
We provide the details only in the k = 2 setting. The cases when k > 2 follow in a similar manner, since here ∥x∥k ≤ ∥x∥2 for x ∈ ℓ2.
Convergence of ∥Yn∥2, ∥YnR∥2 and ∥YnG∥2 follows from Theorems 3.1 and 3.8 by the continuous mapping theorem. That ∥Y∥2 = ∥YR∥2 is obvious from the definition of YR. That ∥YG∥2 ≤ ∥Y∥2 follows from Jensen’s inequality and the definition of the gren(·) operator, since for any r < s, gren(Y(r,s))x is equal to the average of Yy over some subset of {r, ⋯ , s} containing the point x. If p is not strictly decreasing, then there exists a region, which we denote again by {r, ⋯ , s}, where it is constant. Then there is positive probability that (YG)(r,s) is different from Y(r,s). In this case, we have that ∥(YG)(r,s)∥2 < ∥Y(r,s)∥2,
which finishes the proof of the stochastic ordering in the third statement. Convergence in expectation is immediate since
and the same results for ∥YnR∥2 and ∥YnG∥2 follow by the dominated convergence theorem and the bounds in Theorem 2.1 (i). Lastly, the bound with equality if and only if p is strictly monotone follows from the stochastic ordering.
Proof of Corollary 4.2
The result of the corollary for the empirical estimator is essentially the Borisov-Durst theorem (see e.g. Dudley (1999), Theorem 7.3.1, page 244), which states that Yn ⇒ Y in ℓ1 if Σx √px < ∞ (note that this condition means that the sequences Yn and Y are absolutely summable almost surely). However, the result may also be proved by noting that the sequence Yn is tight in ℓ1 using Lemma 6.2, since
Σx≥m E|Yn,x| ≤ Σx≥m √(px(1 − px)) ≤ Σx≥m √px → 0
as m → ∞ under the assumption Σx √px < ∞. The proof that YnR ⇒ YR and YnG ⇒ YG in ℓ1 is identical to the proof of Theorem 3.8, and we omit the details. Convergence of expectations follows since ∥Yn∥1 is uniformly integrable, as
E[∥Yn∥1²] = Σx Σz E[|Yn,x||Yn,z|] ≤ (Σx √px)² < ∞
by the Cauchy-Schwarz inequality. All other details follow as in the proof of Corollary 4.1.
Proof of Corollary 4.3
If κ < ∞, then we have that
which converges to
(6.9) |
by Theorem 3.1 and Theorem 2.4 for k = ∞. That this has a chi-squared distribution with κ degrees of freedom is standard, and is shown for example, in Ferguson (1996), Theorem 9. Convergence of means follows by the dominated convergence theorem from the bound H²(p, q) ≤ (1/2)∥p − q∥1 (see e.g. Le Cam (1969), page 35) and Corollary 4.2. All other details follow as in the proof of Corollary 4.1.
Proof of Remark 4.4
Suppose first that . Define P to be the probability measure and let W be the mean zero Gaussian field on ℓ2 such that E[WxWx’] = pxδx,x’. Then we may write , where .
Now, since , by the Borel-Cantelli lemma we have that almost surely. Since
and is finite almost surely, it follows that almost surely as well. That is, if , then the random variable ∥Y∥1 simply does not exist.
A similar argument works for the Hellinger norm. Assume that κ = ∞. Then
and the Borel-Cantelli lemma shows that is infinite almost surely.
Lemma 6.3
Let Z1, … , Zk be i.i.d. N(0,1) random variables, and let W1, … , Wk denote the left slopes of the least concave majorant of the graph of the cumulative sums Z1 + ⋯ + Zj with j = 0, … , k. Let T denote the number of times that the LCM touches the cumulative sums (excluding the point zero, but including the point k). Then
E[W1² + ⋯ + Wk²] = E[T].
Proof
Since the submission of this paper, it has come to our attention that this result follows from the Bohnenblust-Spitzer lemma as exposited by Steele (2002); taking f(k, y) = y2/k in the development on pages 240-241 of Steele (2002) gives the result. We give a direct argument below.
It is instructive to first consider some of the simple cases. When k = 1, the result is obvious. Suppose then that k = 2. We have

| T | if |
|---|---|
| 2 | Z1 > Z2 |
| 1 | Z1 < Z2 |
Note that we ignore all equalities, since these occur with probability zero. It follows that
where, by exchangeability it follows that
On the other hand, we also have that
since the random variables Z̄ = (Z1 + Z2)/2 and Z1 − Z̄ are independent. The result follows.
Next, suppose that k = 3. Then we have the following.
| T | (a) | (b) |
|---|---|---|
| 3 | Z1 > Z2 > Z3 | |
| 2 | (Z1 + Z2)/2 > Z3 | Z1 < Z2 |
| 2 | Z1 > (Z2 + Z3)/2 | Z2 < Z3 |
| 1 | | Z1 < (Z2 + Z3)/2 and (Z1 + Z2)/2 < Z3 |
The choice of splitting the conditions between columns (a) and (b) is key to our argument. Note that the LCM creates a partition of the space {1, … , k}, where within each subset the slope of the LCM is constant. The number of partitions is equal to T. Here, column (a) describes the necessary conditions on the order of the slopes on the partitions, while column (b) describes the necessary conditions that must hold within each partition.
In the first row of the table, we find by permuting across all orderings of (123) that
Next consider T = 2. Here, by permuting (123) to (312), we find that
Note that the permutation (123) to (312) may be re-written as ({12}{3}) to ({3}{12}) which is really a permutation on the partitions formed by the LCM. Now,
where in the penultimate line we use the fact that Z3, (Z1 + Z2)/2 and Z1 − (Z1 + Z2)/2 are independent.
Lastly,
as the variables Z̄ = (Z1 + Z2 + Z3)/3 and {Z1 − Z̄, Z2 − Z̄, Z3 − Z̄} are independent.
The key to the general proof is the combination of two actions:
Permutations of subgroups (column (a)), and
- independence of column (b) from the random variables and the indicator functions in column (a). Note that for any k > j ≥ 1, letting
which is independent of Z̄ for any choice of j < k.
To write down the proof for any k we must first introduce some notation.
For any 1 ≤ m ≤ k, we may create a collection of partitions of {1, … , k} such that the total number of elements in each partition is m. For example, when k = 4 and m = 2, then the elements of are the partitions ({1}{234}), ({12}{34}) and ({123}{4}). Furthermore, for each partition, we may write down the number of elements in each subset of the partition. Here the sizes of the partitions are 1, 3 then 2, 2 and 3, 1. These partitions may be grouped further by placing together all partitions such that their sizes are unique up to order. Thus, in the above example we would put together 1, 3 and 3, 1 as one group, and the second group would be made up of 2, 2. From each subgroup we wish to choose a representative member, and the collection of these representatives will be denoted as τ(m). We assume that the representative τ is chosen in such a way that the sizes of the partitions are given in increasing order. Let r1 denote the number of subgroups with size 1, and so on. Thus, for τ = ({1}{234}), we have r1 = 1, r2 = 0, r3 = 1, ⋯ , rk = 0.
- Next, from τ(m) we wish to recreate the entire collection . To do this, it is sufficient to take each τ and recreate all of the partitions which had the same sizes. Let σmτ denote the resulting collection for a fixed partition τ. Thus, is equal to the union of σmτ over all τ ∈ τ(m). Note that the number of elements in σmτ is given by
We also use the notation with R0 = 0. Note that Rk = m. For each partition σ, we write σ1, …, σm to denote the individual subsets of the partition. Thus, for σ = ({1}{234}), we would have σ1 = {1} and σ2 = {2, 3, 4}
- For each σj as defined above, we let
where denotes σj with its last l elements removed.
We are now ready to calculate . By considering all possible partitions, this is equal to the sum over all τ ∈ τ(m) of the following terms
By permuting each σ ∈ σmτ, and appealing to the exchangeability of the Zi’s, this is equal to
by independence of each AVσjZ and each Zi − AVσjZ for i ∈ σj. Notice that the permutations of σ ∈ σmτ do not account for permutations across all groups with equal “size”. By considering furthermore all permutations between groups of equal size, we further obtain that the last display above is equal to
Lastly, we collect terms to find that is equal to m times
which concludes the proof.
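A small Monte Carlo check of Lemma 6.3, and of the harmonic-sum formula for E[T] used in the proof of Proposition 4.5 below, can be run as follows (this sketch is ours; k and the number of replications are arbitrary choices).

```python
import numpy as np

def lcm_slopes_and_touchpoints(z):
    """Left slopes of the least concave majorant of the cumulative sums of z
    (pool-adjacent-violators), together with the number of touchpoints T,
    i.e. the number of pooled blocks."""
    vals, sizes = [], []
    for v in z:
        vals.append(float(v)); sizes.append(1)
        while len(vals) > 1 and vals[-2] < vals[-1]:
            s = sizes[-2] + sizes[-1]
            m = (vals[-2] * sizes[-2] + vals[-1] * sizes[-1]) / s
            vals[-2:], sizes[-2:] = [m], [s]
    return np.repeat(vals, sizes), len(vals)

rng = np.random.default_rng(2)
k, m = 6, 100_000
sum_sq, touch = 0.0, 0.0
for _ in range(m):
    w, t = lcm_slopes_and_touchpoints(rng.normal(size=k))
    sum_sq += np.sum(w ** 2)
    touch += t
harmonic = np.sum(1.0 / np.arange(1, k + 1))
print(sum_sq / m, touch / m, harmonic)   # all three should be close (H_6 = 2.45)
```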
Proof of Proposition 4.5
In light of Proposition 3.4 and the definition of YG (along with some simple calculations), it is sufficient to prove that
(6.10) |
using the notation of the Proposition 3.4. Without loss of generality we may assume that r = 0, and for simplicity we write for .
Let k = s + 1, and let Z1, … , Zk denote k i.i.d. N(0,1) random variables, let Z̄ denote their average, and let (which is independent of Z̄). We then have that
Therefore, by Lemma 6.3, to prove (6.10), it is sufficient to show that
E[T] = 1 + 1/2 + ⋯ + 1/k,
where T denotes the number of touchpoints of the LCM with the cumulative sums of the Zi.
To do this, we use the results of Sparre Andersen (1954). He considers exchangeable random variables X1, X2, … and their partial sums Si = X1 + ⋯ + Xi, and shows that the number Hn of values i ∈ {1, … , n−1} for which Si coincides with the least concave majorant (equivalently the greatest convex minorant) of the sequence S0, … , Sn has mean given by
E[Hn] = 1/2 + 1/3 + ⋯ + 1/n,
as long as the random variables X1, … , Xn are symmetrically dependent and ties among the partial-sum slopes occur with probability zero.
The vector X1, … , Xn is symmetrically dependent if its joint cumulative distribution function P(Xi ≤ xi, i = 1, … , n) is a symmetric function of x1, … , xn. This result is Theorem 5 in Sparre Andersen (1954). Clearly, we have that E[T − 1] = E[Hk], for X1 = Z1, … , Xn = Zk, which are exchangeable, and satisfy the required conditions. The result follows.
Proof of Remark 3.11
To prove this result we continue with the notation of the previous proof. Equality of gren(Y) with Y holds if and only if the above partition T = {0, … , y}. By Theorem 5 of Sparre Andersen (1954), this occurs with probability 1/(y + 1)!.
Proof of Remark 4.6
By Proposition 3.4 (and using the notation defined there), it is enough to prove that
where for simplicity we write . Let be i.i.d. normal random variables with mean zero and variance 1/τ, and let . Then , and also . Notice also that and Z̄ are independent. We therefore find that
the latter inequality following directly from Theorem 1.6.2 of Robertson, Wright and Dykstra (1988), since the elements of are independent.
6.4. Estimating the mixing distribution: Proofs
Proof of Theorem 5.1
Since ∥q̃n − q∥k ≤ ∥q̃n − q∥1 for all k ≥ 1 and H²(q̃n, q) ≤ (1/2)∥q̃n − q∥1, it is sufficient to only consider convergence in the ℓ1 norm. Note that
and therefore we may further reduce the problem to showing that converges to zero.
For , we have that for any large K
and since Ep[X] exists by assumption, it follows from the law of large numbers that for any K,
almost surely. The proof now proceeds as in the proof of Theorem 2.4.
For the rearrangement estimator and the MLE, we may use the same approach. The key is to note that Σx>K x p̃n,x ≤ Σx>K x p̂n,x, for any K and for both p̃n = p̂nR and p̃n = p̂nG. This holds since f(x) = x 1{x > K} is an increasing function and therefore (6.1) of Lemma 6.1 applies.
Proof of Theorem 5.2
Since κ < ∞ by assumption, the theorem follows directly from the results of Sections 3 and 4, as well as Theorem 5.1.
Acknowledgements
We owe thanks to Jim Pitman for suggesting the relevance of the Bohnenblust-Spitzer algorithm and for pointers to the literature.
Footnotes
AMS 2000 subject classifications: Primary 62E20, 62F12; secondary 62G07, 62G30, 62C15, 62F20.
Contributor Information
Hanna K. Jankowski, Department of Mathematics and Statistics, York University, hkj@mathstat.yorku.ca.
Jon A. Wellner, Department of Statistics, University of Washington, jaw@stat.washington.edu.
References
- Alamatsaz MH. On discrete α-unimodal distributions. Statist. Neerlandica. 1993;47:245–252.
- Anevski D, Fougères A-L. Limit properties of the monotone rearrangement for density and regression function estimation. Tech. rep., arXiv.org; 2007.
- Araujo A, Giné E. The central limit theorem for real and Banach valued random variables. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons; New York–Chichester–Brisbane: 1980.
- Balabdaoui F, Jankowski HK, Pavlides M, Seregin A, Wellner JA. On the Grenander estimator at zero. Statistica Sinica. 2009; to appear. doi: 10.5705/ss.2011.038a.
- Banerjee M, Kosorok M, Tang R. Asymptotics for current status data with different observation time schemes. Tech. rep., University of Michigan; 2009.
- Birgé L. Estimating a density under order restrictions: nonasymptotic minimax risk. Ann. Statist. 1987;15:995–1012.
- Carolan C, Dykstra R. Asymptotic behavior of the Grenander estimator at density flat regions. The Canadian Journal of Statistics. 1999;27:557–566.
- Chernozhukov V, Fernandez-Val I, Galichon A. Improving point and interval estimators of monotone functions by rearrangement. Biometrika. 2009;96:559–575.
- Dette H, Neumeyer N, Pilz KF. A simple nonparametric estimator of a strictly monotone regression function. Bernoulli. 2006;12:469–490.
- Dette H, Pilz KF. A comparative study of monotone nonparametric kernel estimates. Journal of Statistical Computation and Simulation. 2006;76:41–56.
- Dudley RM. Uniform Central Limit Theorems. Vol. 63 of Cambridge Studies in Advanced Mathematics. Cambridge University Press; Cambridge: 1999.
- Durot C, Tocquet A-S. On the distance between the empirical process and its concave majorant in a monotone regression framework. Ann. Inst. H. Poincaré Probab. Statist. 2003;39:217–240.
- Ferguson TS. A Course in Large Sample Theory. Texts in Statistical Science Series. Chapman & Hall; London: 1996.
- Fougères A-L. Estimation de densités unimodales. The Canadian Journal of Statistics. 1997;25:375–387.
- Hardy GH, Littlewood JE, Pólya G. Inequalities. 2nd ed. Cambridge University Press; Cambridge: 1952.
- Le Cam LM. Théorie asymptotique de la décision statistique. Séminaire de Mathématiques Supérieures, No. 33 (Été, 1968). Les Presses de l’Université de Montréal; Montreal, Que.: 1969.
- Lieb EH, Loss M. Analysis. Vol. 14 of Graduate Studies in Mathematics. American Mathematical Society; Providence, RI: 1997.
- Maathuis MH, Hudgens MG. Nonparametric inference for competing risks current status data with continuous, discrete or grouped observation times. Tech. rep., arXiv.org; 2009. doi: 10.1093/biomet/asq083.
- Parthasarathy KR. Probability measures on metric spaces. Probability and Mathematical Statistics, No. 3. Academic Press Inc.; New York: 1967.
- Prakasa Rao BLS. Estimation of a unimodal density. Sankhyā Series A. 1969;31:23–36.
- Robertson T, Wright FT, Dykstra RL. Order Restricted Statistical Inference. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons Ltd.; Chichester: 1988.
- Sparre Andersen E. On the fluctuations of sums of random variables. II. Mathematica Scandinavica. 1954;2:195–223.
- Steele JM. The Bohnenblust-Spitzer algorithm and its applications. J. Comput. Appl. Math. 2002;142:235–249. Probabilistic methods in combinatorics and combinatorial optimization.