Published in final edited form as: Electron J Stat. 2009;3:1567–1605. doi: 10.1214/09-EJS526

Estimation of a discrete monotone distribution

Hanna K Jankowski 1,*, Jon A Wellner 2,

Abstract

We study and compare three estimators of a discrete monotone distribution: (a) the (raw) empirical estimator; (b) the “method of rearrangements” estimator; and (c) the maximum likelihood estimator. We show that the maximum likelihood estimator strictly dominates both the rearrangement and empirical estimators in cases when the distribution has intervals of constancy. For example, when the distribution is uniform on {0, … , y}, the asymptotic risk of the method of rearrangements estimator (in squared $\ell_2$ norm) is y/(y + 1), while the asymptotic risk of the MLE is of order (log y)/(y + 1). For strictly decreasing distributions, the estimators are asymptotically equivalent.

Keywords: Maximum likelihood, monotone mass function, rearrangement, rate of convergence, limit distributions, nonparametric estimation, shape restriction, Grenander estimator

1. Introduction

This paper is motivated in large part by the recent surge of activity concerning “method of rearrangement” estimators for nonparametric estimation of monotone functions: see, for example, Fougères (1997), Dette and Pilz (2006), Dette et al. (2006), Chernozhukov et al. (2009) and Anevski and Fougères (2007). Most of these authors study continuous settings and often start with a kernel type estimator of the density, which involves choices of a kernel and of a bandwidth. Our goal here is to investigate method of rearrangement estimators and compare them to natural alternatives (including the maximum likelihood estimators with and without the assumption of monotonicity) in a setting in which there is less ambiguity in the choice of an initial or “basic” estimator, namely the setting of estimation of a monotone decreasing mass function on the non-negative integers N={0,1,2,}.

Suppose that $p = \{p_x\}_{x \in \mathbb{N}}$ is a probability mass function; i.e. $p_x \ge 0$ for all $x \in \mathbb{N}$ and $\sum_{x \in \mathbb{N}} p_x = 1$. Our primary interest here is in the situation in which p is monotone decreasing: $p_x \ge p_{x+1}$ for all $x \in \mathbb{N}$. The three estimators of p we study are:

  1. the (raw) empirical estimator,

  2. the method of rearrangement estimator,

  3. the maximum likelihood estimator.

Notice that the empirical estimator is also the maximum likelihood estimator when no shape assumption is made on the true probability mass function.

Much as in the continuous case our considerations here carry over to the case of estimation of unimodal mass functions with a known (fixed) mode; see e.g. Fougères (1997), Birgé (1987), and Alamatsaz (1993). For two recent papers discussing connections and trade-offs between discrete and continuous models in a related problem involving nonparametric estimation of a monotone function, see Banerjee et al. (2009) and Maathuis and Hudgens (2009).

Distributions from the monotone decreasing family satisfy $\Delta p_x \equiv p_{x+1} - p_x \le 0$ for all $x \in \mathbb{N}$, and may be written as mixtures of uniform mass functions:

$p_x = \sum_{y \ge 0}\frac{1}{y+1}\,\mathbf{1}_{\{0,\dots,y\}}(x)\,q_y. \quad (1.1)$

Here, the mixing distribution q may be recovered via

$q_x = -(x+1)\,\Delta p_x, \quad (1.2)$

for any $x \in \mathbb{N}$.
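As a quick numerical illustration of (1.1) and (1.2) (not part of the original paper), the following Python sketch builds p from a mixing distribution q and recovers q again; the helper names `p_from_q` and `q_from_p` are ours and purely illustrative.

```python
import numpy as np

def p_from_q(q):
    """Mixture representation (1.1): p_x = sum_{y >= x} q_y / (y + 1)."""
    q = np.asarray(q, dtype=float)
    return np.array([np.sum(q[x:] / (np.arange(x, len(q)) + 1.0))
                     for x in range(len(q))])

def q_from_p(p):
    """Recovery formula (1.2): q_x = -(x + 1) * (p_{x+1} - p_x)."""
    p = np.asarray(p, dtype=float)
    p_next = np.append(p[1:], 0.0)            # p_{x+1}, taking p = 0 beyond the support
    return (np.arange(len(p)) + 1.0) * (p - p_next)

q = np.array([0.1, 0.0, 0.3, 0.0, 0.0, 0.6])  # an arbitrary mixing distribution
p = p_from_q(q)                                # monotone decreasing by construction
print(np.all(np.diff(p) <= 0), np.allclose(q_from_p(p), q))   # True True
```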

Remark 1.1

From the form of the mass function, it follows that px ≤ 1/(x+1) for all x ≥ 0.

Suppose then that we observe X1, X2, … , Xn i.i.d. random variables with values in N and with a monotone decreasing mass function p. For xN, let

$\hat p_{n,x} \equiv n^{-1}\sum_{i=1}^{n}\mathbf{1}_{\{x\}}(X_i)$

denote the (unconstrained) empirical estimator of the probabilities px. Clearly, there is no guarantee that this estimator will also be monotone decreasing, especially for small sample size. We next consider two estimators which do satisfy this property: the rearrangement estimator and the maximum likelihood estimator (MLE).

For a vector $w = \{w_0, \dots, w_k\}$, let rear(w) denote the reverse-ordered vector, so that $w' = \mathrm{rear}(w)$ satisfies $w'_0 \ge w'_1 \ge \dots \ge w'_k$. The rearrangement estimator is then simply defined as

$\hat p^R_n = \mathrm{rear}(\hat p_n).$

We can also write $\hat p^R_{n,x} = \sup\{u : Q_n(u) > x\}$, where $Q_n(u) \equiv \#\{x : \hat p_{n,x} \ge u\}$.

To define the MLE we again need some additional notation. For a vector w = {w0, … , wk}, let gren(w) be the operator which returns the vector of the k + 1 slopes of the least concave majorant of the points

$\big\{\big(j, \textstyle\sum_{i=0}^{j} w_i\big) : j = -1, 0, \dots, k\big\}.$

Here, we assume that $\sum_{j=0}^{-1} w_j = 0$ (an empty sum). The MLE, also known as the Grenander estimator, is then defined as

$\hat p^G_n = \mathrm{gren}(\hat p_n).$

Thus, $\hat p^G_{n,x}$ is the left derivative at x of the least concave majorant (LCM) of the empirical distribution function $\mathbb{F}_n(x) = n^{-1}\sum_{i=1}^{n}\mathbf{1}_{[0,x]}(X_i)$ (where we include the point (−1, 0) to find the left derivative at x = 0). Therefore, by definition, the MLE is a vector of local averages over a partition of {0, … , max{X1, … , Xn}}. This partition is determined by the touchpoints of the LCM with $\mathbb{F}_n$. It is easily checked that $\hat p^G_n$ corresponds to the isotonic estimator for multinomial data as described in Robertson et al. (1988), pages 7–8 and 38–39.
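The following Python sketch (ours, not the authors' R implementation) computes the three estimators from a sample of counts; the pool-adjacent-violators loop returns the block averages that form the left slopes of the LCM of $\mathbb{F}_n$, i.e. the Grenander estimator. Applied to the empirical frequencies of Example 1 below it reproduces the values reported there.

```python
import numpy as np

def empirical_pmf(sample, support_size=None):
    """Raw empirical estimator on {0, ..., max(sample)} (or a prescribed support size)."""
    sample = np.asarray(sample)
    m = sample.max() + 1 if support_size is None else support_size
    return np.bincount(sample, minlength=m) / len(sample)

def rearrangement(p_hat):
    """Rearrangement estimator: the empirical probabilities sorted in decreasing order."""
    return np.sort(np.asarray(p_hat, dtype=float))[::-1]

def grenander(p_hat):
    """Grenander estimator / MLE: left slopes of the least concave majorant of the
    cumulative sums, obtained by pooling adjacent violators into block averages."""
    levels = list(np.asarray(p_hat, dtype=float))
    weights = [1] * len(levels)
    i = 0
    while i < len(levels) - 1:
        if levels[i] < levels[i + 1]:          # an increase violates monotonicity: pool
            w = weights[i] + weights[i + 1]
            levels[i] = (weights[i] * levels[i] + weights[i + 1] * levels[i + 1]) / w
            weights[i] = w
            del levels[i + 1], weights[i + 1]
            i = max(i - 1, 0)                  # pooled block may now violate with its left neighbour
        else:
            i += 1
    return np.repeat(levels, weights)

# Empirical frequencies of Example 1 below: reproduces the estimates quoted there.
p_hat = np.array([0.20, 0.14, 0.11, 0.22, 0.15, 0.18])
print(rearrangement(p_hat))   # [0.22 0.20 0.18 0.15 0.14 0.11]
print(grenander(p_hat))       # [0.20 0.16 0.16 0.16 0.16 0.16]
```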

We begin our discussion with two examples: in the first, p is the uniform distribution, and in the second p is strictly monotone decreasing. To compare the three estimators, we consider several metrics: the $\ell_k$ norm for 1 ≤ k ≤ ∞ and the Hellinger distance. Recall that the Hellinger distance between two mass functions is given by

$H^2(p, \tilde p) = \frac12\int\big[\sqrt{p} - \sqrt{\tilde p}\big]^2\,d\mu = \frac12\sum_{x\ge0}\big[\sqrt{p_x} - \sqrt{\tilde p_x}\big]^2,$

while the $\ell_k$ metrics are defined as

$\|p - \tilde p\|_k = \begin{cases}\big(\sum_{x\ge0}|p_x - \tilde p_x|^k\big)^{1/k} & 1 \le k < \infty,\\[2pt] \sup_{x\ge0}|p_x - \tilde p_x| & k = \infty.\end{cases}$

In the examples, we compare the Hellinger distance and the $\ell_1$ and $\ell_2$ metrics, as the behavior of these differs the most.
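For completeness, these distances can be computed as in the short sketch below (again illustrative, with hypothetical helper names); both functions expect mass functions padded to a common support.

```python
import numpy as np

def hellinger(p, q):
    """H(p, q) = sqrt( (1/2) * sum_x (sqrt(p_x) - sqrt(q_x))^2 )."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def lk_distance(p, q, k=2):
    """ell_k distance between mass functions; k = np.inf gives the sup norm."""
    d = np.abs(np.asarray(p, float) - np.asarray(q, float))
    return d.max() if np.isinf(k) else float(np.sum(d ** k) ** (1.0 / k))
```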

Example 1. Suppose that p is the uniform distribution on {0, … , 5}. For n = 100 independent draws from this distribution we observe $\hat p_n = (0.20, 0.14, 0.11, 0.22, 0.15, 0.18)$. Then $\hat p^R_n = (0.22, 0.20, 0.18, 0.15, 0.14, 0.11)$, and the MLE may be calculated as $\hat p^G_n = (0.20, 0.16, 0.16, 0.16, 0.16, 0.16)$. The estimators are illustrated in Figure 1 (left). The distances of the estimators from the true mass function p are given in Table 1 (left). The maximum likelihood estimator $\hat p^G_n$ is superior in all three metrics shown. To explore this relationship further, we repeated the estimation procedure for 1000 Monte Carlo samples of size n = 100 from the uniform distribution. Figure 2 (left) shows boxplots of the metrics for the three estimators. The figure shows that here the rearrangement and empirical estimators have the same behavior, a relationship which we establish rigorously in Theorem 2.1.

Fig 1.

Illustration of MLE and monotone rearrangement estimators: empirical proportions (black dots), monotone rearrangement estimator (dashed line), MLE (solid line), and the true mass function (grey line). Left: the true distribution is the discrete uniform; and right: the true distribution is the geometric distribution with θ = 0.75. In both cases a sample size of n = 100 was observed.

Table 1. Distances between true p and estimators.

              Example 1                                 Example 2
              H(p~, p)   ||p~ - p||_2   ||p~ - p||_1    H(p~, p)   ||p~ - p||_2   ||p~ - p||_1
p~ = p̂_n      0.08043    0.09129        0.2             0.1641     0.07425        0.2299
p~ = p̂_n^R    0.08043    0.09129        0.2             0.1290     0.06115        0.1821
p~ = p̂_n^G    0.03048    0.03651        0.06667         0.09553    0.06302        0.1887

Fig 2.

Monte Carlo comparison of the estimators: boxplots of m = 1000 distances of the estimators p^n (white), p^nR (light grey) and p^nG (dark grey) from the truth for a sample size of n = 100. Left: the true distribution is the discrete uniform; and right: the true distribution is the geometric distribution with θ = 0.75.

Example 2. Suppose that p is the geometric distribution with $p_x = (1-\theta)\theta^x$ for $x \in \mathbb{N}$ and with θ = 0.75. For n = 100 draws from this distribution we observe $\hat p_n$, $\hat p^R_n$ and $\hat p^G_n$ as shown in Figure 1 (right). The distances of the estimators from the true mass function p are given in Table 1 (right). Here, $\hat p_n$ is outperformed by $\hat p^G_n$ and $\hat p^R_n$ in all the metrics, with $\hat p^R_n$ performing better in the $\ell_1$ and $\ell_2$ metrics, but not in the Hellinger distance. These relationships appear to hold more generally; see Figure 2 (right) for boxplots of the metrics obtained through Monte Carlo simulation.

The above examples illustrate our main conclusion: the MLE performs better when the true distribution p has intervals of constancy, while the MLE and rearrangement estimators are competitive when p is strictly monotone. Asymptotically, it turns out that the MLE is superior if p has any periods of constancy, while the empirical and rearrangement estimators are equivalent. However, if p is strictly monotone, then all three estimators have the same asymptotic behavior.

Both the MLE and the monotone rearrangement estimator have been considered in the literature for decreasing probability density functions. The MLE, or Grenander estimator, has been studied extensively, and much is known about its behavior. In particular, if the true density is locally strictly decreasing, then the estimator converges at rate $n^{1/3}$, and if the true density is locally flat, then the estimator converges at rate $n^{1/2}$; cf. Prakasa Rao (1969), Carolan and Dykstra (1999), and the references therein for a further history of the problem. In both cases the limiting distribution is characterized via the LCM of a Gaussian process.

The monotone rearrangement estimator for a continuous density was introduced by Fougères (1997) (see also Dette and Pilz (2006)). It is found by calculating the monotone rearrangement of a kernel density estimator (see e.g. Lieb and Loss (1997)). Fougères (1997) shows that this estimator also converges at the $n^{1/3}$ rate if the true density is locally strictly decreasing, and Monte Carlo simulations there suggest that it has better behavior than the MLE for small sample sizes. The latter comparison is done using the $L_1$ metric for several strictly decreasing densities; unlike our Example 2, the Hellinger distance is not considered.

The outline of this paper is as follows. In Section 2 we show that all three estimators are consistent. We also establish some small-sample relationships between the estimators. Section 3 is dedicated to the limiting distributions of the estimators, where we show that the rate of convergence is $n^{1/2}$ for all three estimators. Unlike the continuous case, the local behavior of the MLE is equivalent to that of the empirical estimator when the true mass function is strictly decreasing. In Section 4 we consider the limiting behavior of the $\ell_k$ and Hellinger distances of the estimators. In Section 5, we consider the estimation of the mixing distribution q. Proofs and some technical results are given in Section 6. R code to calculate the maximum likelihood estimator (i.e. $\mathrm{gren}(\hat p_n)$) is available from the website of the first author: www.math.yorku.ca/~hkj/Software/.

2. Some inequalities and consistency results

We begin by establishing several relationships between the three different estimators.

Theorem 2.1

  1. Suppose that p is monotone decreasing. Then
    $\max\{H(\hat p^G_n, p),\, H(\hat p^R_n, p)\} \le H(\hat p_n, p),$ (2.1)
    $\max\{\|\hat p^G_n - p\|_k,\, \|\hat p^R_n - p\|_k\} \le \|\hat p_n - p\|_k, \quad 1 \le k \le \infty.$ (2.2)
  2. If p is the uniform distribution on {0, … , y} for some integer y, then
    $H(\hat p_n, p) = H(\hat p^R_n, p), \qquad \|\hat p^R_n - p\|_k = \|\hat p_n - p\|_k, \quad 1 \le k \le \infty.$
  3. If $\hat p_n$ is monotone then $\hat p^G_n = \hat p^R_n = \hat p_n$. Under the discrete uniform distribution on {0, … , y}, this occurs with probability
    $P(\hat p_{n,0} \ge \hat p_{n,1} \ge \dots \ge \hat p_{n,y}) \to \frac{1}{(y+1)!} \quad \text{as } n \to \infty.$
    If p is strictly monotone with the support of p equal to {0, … , y} where $y \in \mathbb{N}$, then
    $P(\hat p_{n,0} \ge \hat p_{n,1} \ge \dots \ge \hat p_{n,y}) \to 1,$
    as n → ∞.

Let $\mathcal{P}$ denote the collection of all decreasing mass functions on $\mathbb{N}$. For any estimator $\tilde p_n$ of $p \in \mathcal{P}$ and $k \ge 1$ let the loss function $L_k$ be defined by $L_k(p, \tilde p_n) = \sum_{x\ge0}|\tilde p_{n,x} - p_x|^k$, with $L_\infty(p, \tilde p_n) = \sup_{x\ge0}|\tilde p_{n,x} - p_x|$. The risk of $\tilde p_n$ at p is then defined as

$R_k(p, \tilde p_n) = E_p\Big[\sum_{x\ge0}|\tilde p_{n,x} - p_x|^k\Big]. \quad (2.3)$

Corollary 2.2

When k = 2, and for any sample size n, it holds that

$\sup_{\mathcal P} R_2(p, \hat p^G_n) \le \sup_{\mathcal P} R_2(p, \hat p^R_n) = \sup_{\mathcal P} R_2(p, \hat p_n).$

Based on these results, we now make the following remarks.

  1. It is always better to use a monotone estimator (either p^nR or p^nG) to estimate a monotone mass function.

  2. If the true distribution is uniform, then clearly the MLE is the better choice.

  3. If the true mass function is strictly monotone, then the estimators p^nR and p^nG should be asymptotically equivalent. We make this statement more precise in Sections 3 and 4. Figure 2 (right) shows that in this case p^nR and p^nG have about the same performance for n = 100.

  4. When only the monotonicity constraint is known about the true p, then, by Corollary 2.2, p^nG is a better choice of estimator than p^nR.

Remark 2.3

In continuous density estimation one of the most popular measures of distance is the $L_1$ norm, which corresponds to the $\ell_1$ norm on mass functions. However, for discrete mass functions, it is more natural to consider the $\ell_2$ norm. One of the reasons is made clear in the following sections (cf. Theorem 3.8, Corollaries 4.1 and 4.2, and Remark 4.4): the $\ell_2$ space is the smallest space in which we obtain convergence results without additional assumptions on the true distribution p.

To examine more closely the case when the true distribution p is neither uniform nor strictly monotone we turn to Monte Carlo simulations. Let pU(y) denote the uniform mass function on {0, … , y}. Figure 3 shows boxplots of m = 1000 samples of the estimators for three distributions:

  1. (top) p = 0.2pU(3) + 0.8pU(7)

  2. (center) p = 0.15pU(3) + 0.1pU(7) + 0.75pU(11)

  3. (bottom) p = 0.25pU(1) + 0.2pU(3) + 0.15pU(5) + 0.4pU(7)

On the left we have a small sample size of n = 20, while on the right n = 100. For each distribution and sample size, we calculate the three estimators ($\hat p_n$, $\hat p^R_n$ and $\hat p^G_n$ are shown in white, light grey and dark grey, respectively) and compute their distances from the truth (Hellinger, $\ell_1$, and $\ell_2$). Note that the MLE outperforms the other estimators in all three metrics, even for small sample sizes. It appears also that the more regions of constancy the true mass function has, the better the relative performance of the MLE, even for small sample size (see also Figure 2). By considering the asymptotic behavior of the estimators, we are able to make this statement more precise in Section 4. A minimal Monte Carlo loop of this kind is sketched after Figure 3 below.

Fig 3.

Comparison of the estimators p^n (white), p^nR (light grey) and p^nG (dark grey).
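A minimal version of such a Monte Carlo comparison might look as follows (an illustrative sketch reusing the hypothetical helpers `empirical_pmf`, `rearrangement`, `grenander` and `lk_distance` defined in the sketches above; the target is the first of the three mixtures listed).

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = np.where(np.arange(8) < 4, 0.2 / 4 + 0.8 / 8, 0.8 / 8)   # 0.2 pU(3) + 0.8 pU(7)

n, m = 100, 1000
losses = {"empirical": [], "rearrangement": [], "MLE": []}
for _ in range(m):
    sample = rng.choice(8, size=n, p=p_true)
    p_hat = empirical_pmf(sample, support_size=8)
    losses["empirical"].append(lk_distance(p_hat, p_true, 2))
    losses["rearrangement"].append(lk_distance(rearrangement(p_hat), p_true, 2))
    losses["MLE"].append(lk_distance(grenander(p_hat), p_true, 2))

for name, vals in losses.items():
    print(name, round(float(np.mean(vals)), 4))   # average ell_2 loss of each estimator
```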

All three estimators are consistent estimators of the true distribution, regardless of their relative performance.

Theorem 2.4

Suppose that p is monotone decreasing. Then all three estimators p^n, p^nG and p^nR are consistent estimators of p in the sense that

$\rho(\tilde p_n, p) \to 0$

almost surely as n → ∞ for $\tilde p_n = \hat p_n, \hat p^G_n$ and $\hat p^R_n$, whenever $\rho(\tilde p, p) = H(\tilde p, p)$ or $\rho(\tilde p, p) = \|\tilde p - p\|_k$, $1 \le k \le \infty$.

As a corollary, we obtain the following Glivenko-Cantelli type result.

Corollary 2.5

Let $\hat F^R_n(x) = \sum_{y=0}^{x}\hat p^R_{n,y}$ and $\hat F^G_n(x) = \sum_{y=0}^{x}\hat p^G_{n,y}$, with $F(x) = \sum_{y=0}^{x}p_y$. Then

$\sup_{x\ge0}|\hat F^R_n(x) - F(x)| \to 0 \quad\text{and}\quad \sup_{x\ge0}|\hat F^G_n(x) - F(x)| \to 0,$

almost surely.

3. Limiting distributions

Next, we consider the large sample behavior of p^n, p^nR and p^nG. To do this, define the fluctuation processes Yn, YnR, and YnG as

$Y_{n,x} = \sqrt{n}\,(\hat p_{n,x} - p_x), \qquad Y^R_{n,x} = \sqrt{n}\,(\hat p^R_{n,x} - p_x), \qquad Y^G_{n,x} = \sqrt{n}\,(\hat p^G_{n,x} - p_x).$

Regardless of the shape of p, the limiting distribution of $Y_n$ is well-known. In what follows we use the notation $Y_{n,x} \to_d Y_x$ to denote weak convergence of random variables in $\mathbb{R}$ (we also use this notation for $\mathbb{R}^d$), and $Y_n \Rightarrow Y$ to denote that the process $Y_n$ converges weakly to the process Y. Let $Y = \{Y_x\}_{x\in\mathbb{N}}$ be a Gaussian process on the Hilbert space $\ell_2$ with mean zero and covariance operator S such that $\langle S e(x), e(x')\rangle = p_x\delta_{x,x'} - p_x p_{x'}$, where e(x) denotes a sequence which is one at location x, and zero everywhere else. The process is well-defined, since

$\mathrm{trace}\,S = E[\|Y\|_2^2] = \sum_{x\ge0} p_x(1-p_x) < \infty.$

For background on Gaussian processes on Hilbert spaces we refer to Parthasarathy (1967).

Theorem 3.1

For any mass function p, the process $Y_n$ satisfies $Y_n \Rightarrow Y$ in $\ell_2$.

Remark 3.2

We assume that Y is defined only on the support of the mass function p. That is, let $\kappa = \sup\{x : p_x > 0\}$. If $\kappa < \infty$ then $Y = \{Y_x\}_{x=0}^{\kappa}$.

3.1. Local behavior

At a fixed point x there are only two possibilities for the true mass function p: either x belongs to a flat region of p (i.e. $p_r = \dots = p_x = \dots = p_s$ for some $r \le x \le s$), or p is strictly decreasing at x: $p_{x-1} > p_x > p_{x+1}$. In the first case the three estimators exhibit different limiting behavior, while in the latter all three have the same limiting distribution. In some sense, this result is not surprising. Suppose that x is such that $p_{x-1} > p_x > p_{x+1}$. Then asymptotically this will hold also for $\hat p_n$: $\hat p_{n,x-k} > \hat p_{n,x} > \hat p_{n,x+k}$ for $k \ge 1$ and for sufficiently large n. Therefore, in the rearrangement of $\hat p_n$ the value at x will always stay the same, i.e. $\hat p^R_{n,x} = \hat p_{n,x}$. Similarly, the empirical distribution function $\mathbb{F}_n$ will also be locally concave at x, and therefore both x and x − 1 will be touchpoints of $\mathbb{F}_n$ with its LCM. This implies that $\hat p^G_{n,x} = \hat p_{n,x}$.

On the other hand, suppose that x is such that px−1 = px = px+1. Then asymptotically the empirical density will have random order near x, and therefore both re-orderings (either via rearrangement or via the LCM) will be necessary to obtain p^n,xR and p^n,xG.

3.1.1. When p is flat at x

We begin with some notation. Let $q = \{q_x\}_{x\in\mathbb{N}}$ be a sequence, and let $r \le s$ be positive integers. We define $q^{(r,s)} = \{q_r, q_{r+1}, \dots, q_{s-1}, q_s\}$ to be the r through s elements of q.

Proposition 3.3

Suppose that for some $r, s \in \mathbb{N}$ with $s - r \ge 1$ the probability mass function p satisfies $p_{r-1} > p_r = \dots = p_s > p_{s+1}$. Then

$(Y^R_n)^{(r,s)} \to_d \mathrm{rear}(Y^{(r,s)}), \qquad (Y^G_n)^{(r,s)} \to_d \mathrm{gren}(Y^{(r,s)}).$

The last statement of the above proposition is the discrete version of the same result in the continuous case due to Carolan and Dykstra (1999) for a density with locally flat regions. Thus, both the discrete and continuous settings have similar behavior in this situation. Figure 4 shows the exact and limiting cumulative distribution functions when p = 0.2pU(3) + 0.8pU(7) (the same distribution as in Figure 3, top) at locations x = 4 and x = 7. Note the significantly “more discrete” behavior of the empirical and rearrangement estimators in comparison with the MLE. Also note the lack of accuracy in the approximation at x = 4 when n = 100 (top left), which is more prominent for the rearrangement estimator. This occurs because x = 4 is a boundary point, in the sense that $p_3 > p_4$, and is therefore least resilient to any global changes in $\hat p_n$. Lastly, note that the distribution functions satisfy $F_{Y_4} > F_{Y_4^G} > F_{Y_4^R}$ at x = 4, while at x = 7, $F_{Y_7^R} > F_{Y_7^G} > F_{Y_7}$. It is not difficult to see that the relationships $Y_4^R \ge Y_4^G \ge Y_4$ and $Y_7^R \le Y_7^G \le Y_7$ must hold from the definition of $(Y^R)^{(4,7)} = \mathrm{rear}(Y^{(4,7)})$ and $(Y^G)^{(4,7)} = \mathrm{gren}(Y^{(4,7)})$.

Fig 4.

The limiting distributions at x = 4 (left) and at x = 7 (right) when p = 0.2pU(3) + 0.8pU(7) : the limiting distributions are shown (dashed) along with the exact distributions (solid) of Yn, YnR, YnG for n = 100 (top) and n = 1000 (bottom).

Proposition 3.4

Let $\theta = p_r = \dots = p_s$, and let $\tilde Y^{(r,s)}$ denote a multivariate normal vector with mean zero and covariance matrix $\{\sigma_{i,j}\}_{i,j=r}^{s}$ where

$\sigma_{i,j} = \alpha\,\delta_{i,j} - \alpha^2,$

for $\alpha^{-1} = s - r + 1$. Let Z be a standard normal random variable independent of $\tilde Y^{(r,s)}$, and let $\tau = s - r + 1$. Then

$\mathrm{gren}(Y^{(r,s)}) =_d \sqrt{\frac{\theta}{\tau}}\Big(\sqrt{1 - \theta\tau}\; Z + \tau\,\mathrm{gren}(\tilde Y^{(r,s)})\Big).$

Note that the behavior of $\mathrm{gren}(Y^{(r,s)})$ and $\mathrm{gren}(\tilde Y^{(r,s)})$ will be quite different since $\sum_{x=r}^{s}\tilde Y^{(r,s)}_x = 0$ almost surely, but the same is not true for $Y^{(r,s)}$.

Remark 3.5

To match the notation of Carolan and Dykstra (1999), note that τ gren(Y~(r,s)) is equivalent to the left slopes at the points {1, … , τ}/τ of the least concave majorant of standard Brownian bridge at the points {0, 1, … , τ}/τ. This random vector most closely matches the left derivative of the least concave majorant of the Brownian bridge on [0, 1], which is the process that shows up in the limit for the continuous case.

3.1.2. When p is strictly monotone at x

In this situation, the three estimators p^n,x, p^n,xR and p^n,xG have the same asymptotic behavior. This is considerably different than what happens for continuous densities, and occurs because of the inherent discreteness of the problem for probability mass functions.

Proposition 3.6

Suppose that for some $r, s \in \mathbb{N}$ with $s - r \ge 0$ the probability mass function p satisfies $p_{r-1} > p_r > \dots > p_s > p_{s+1}$. Then

$(Y^R_n)^{(r,s)} \to_d Y^{(r,s)} \ \text{in } \mathbb{R}^{s-r+1}, \qquad (Y^G_n)^{(r,s)} \to_d Y^{(r,s)} \ \text{in } \mathbb{R}^{s-r+1}.$

Remark 3.7

We note that the convergence results of Propositions 3.3 and 3.6 also hold jointly. That is, convergence of the three processes $(Y_n^{(r,s)}, (Y^R_n)^{(r,s)}, (Y^G_n)^{(r,s)})$ may also be proved jointly in $\mathbb{R}^{3(s-r+1)}$.

3.2. Convergence of the process

We now strengthen these results to obtain convergence of the processes YnR and YnG in 2. Note that the limit of Yn has already been stated in Theorem 3.1.

Theorem 3.8

Let Y be the Gaussian process defined in Theorem 3.1, with p a monotone decreasing distribution. Define $Y^R$ and $Y^G$ as the processes obtained by the following transforms of Y: for all periods of constancy of p, i.e. for all $s \ge r$ with $s - r \ge 1$ such that $p_{r-1} > p_r = \dots = p_x = \dots = p_s > p_{s+1}$, let

$(Y^R)^{(r,s)} = \mathrm{rear}(Y^{(r,s)}), \qquad (Y^G)^{(r,s)} = \mathrm{gren}(Y^{(r,s)}),$

and set $Y^R_x = Y^G_x = Y_x$ at all points x where p is strictly decreasing. Then $Y^R_n \Rightarrow Y^R$ and $Y^G_n \Rightarrow Y^G$ in $\ell_2$.

The two extreme cases, p strictly monotone decreasing and p equal to the uniform distribution, may now be considered as corollaries. By studying the uniform case, we also study the behavior of YG (via Proposition 3.4), and therefore we consider this case in detail.

Corollary 3.9

Suppose that p is strictly monotone decreasing, that is, $p_x > p_{x+1}$ for all $x \ge 0$. Then $Y^R_n \Rightarrow Y$ and $Y^G_n \Rightarrow Y$ in $\ell_2$.

3.2.1. The uniform distribution

Here, the limiting distribution Y is a vector of length y + 1 having a multivariate normal distribution with $E[Y_x] = 0$ and $\mathrm{cov}(Y_x, Y_z) = (y+1)^{-1}\delta_{x,z} - (y+1)^{-2}$.

Corollary 3.10

Suppose that p is the uniform probability mass function on {0, … , y}, where $y \in \mathbb{N}$. Then $Y^R_n \to_d \mathrm{rear}(Y)$ and $Y^G_n \to_d \mathrm{gren}(Y)$.

The limiting process gren(Y) may also be described as follows. Let $U(\cdot)$ denote the standard Brownian bridge process on [0, 1], and write $\mathbb{U}_k = \sum_{j=0}^{k} Y_j$ for k = −1, … , y. Then we have equality in distribution of

$\mathbb{U} = \{\mathbb{U}_{-1}, \mathbb{U}_0, \dots, \mathbb{U}_{y-1}, \mathbb{U}_y\} =_d \Big\{ U\!\Big(\frac{k+1}{y+1}\Big) : k = -1, \dots, y \Big\}.$

In particular we have that $\mathbb{U}_{-1} = \mathbb{U}_y = \sum_{j=0}^{y} Y_j = 0$. Thus, the process $\mathbb{U}$ is a discrete analogue of the Brownian bridge, and gren(Y) is the vector of (left) derivatives of the least concave majorant of $\{(j, \mathbb{U}_j) : j = -1, \dots, y\}$. Figure 5 illustrates two different realizations of the processes Y and gren(Y).

Fig 5.

The relationship between the limiting process Y and the least concave majorant of its partial sums for the uniform distribution on {0, … , 5}. Left: the slopes of the lines L1,L2 and L3 give the values gren(Y)0, gren(Y)1 = ⋯ = gren(Y)4 and gren(Y)5, respectively. Right: the discrete Brownian bridge lies entirely below zero. Therefore, its LCM is zero, and also gren(Y) ≡ 0. This event occurs with positive probability (see also Figure 6).

Remark 3.11

Note that if the discrete Brownian bridge is itself concave, then the limits Y, rear(Y) and gren(Y) coincide. This occurs with probability

$P\big(Y \equiv \mathrm{rear}(Y) \equiv \mathrm{gren}(Y)\big) = \frac{1}{(y+1)!}.$

The result matches that in part (iii) of Theorem 2.1.
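The limit for the uniform case is straightforward to simulate. In the sketch below (illustrative only, reusing the `grenander` pooling function from the earlier sketch), Y is generated via the representation $Y_x = W_x - p_x\sum_z W_z$ with independent $W_x \sim N(0, p_x)$, which has exactly the covariance above; gren(Y) is then the block-average transform of Y, and the empirical frequency of the event {Y decreasing} can be compared with $1/(y+1)!$ from Remark 3.11.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
y = 5                                      # uniform distribution on {0, ..., 5}, as in Figure 5
p = np.full(y + 1, 1.0 / (y + 1))

def draw_Y():
    # Y_x = W_x - p_x * sum(W), with independent W_x ~ N(0, p_x), has covariance
    # p_x * delta_{x,z} - p_x * p_z, i.e. the limiting covariance of the process Y.
    W = rng.normal(0.0, np.sqrt(p))
    return W - p * W.sum()

Y = draw_Y()
print(np.round(Y, 3))
print(np.round(grenander(Y), 3))           # gren(Y): slopes of the LCM of the partial sums
print(np.round(np.sort(Y)[::-1], 3))       # rear(Y)

# Monte Carlo check of Remark 3.11: P(Y = rear(Y) = gren(Y)) = 1/(y+1)!
hits = np.mean([np.all(np.diff(draw_Y()) <= 0) for _ in range(100000)])
print(hits, 1.0 / math.factorial(y + 1))   # both approximately 0.00139
```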

Figure 6 examines the behavior of the limiting distribution of the MLE for several values of x. Since this is found via the LCM of the discrete Brownian bridge, it maintains the monotonicity property in the limit: that is, gren(Y)x ≥ gren(Y)x+1. This can easily be seen by examining the marginal distributions of gren(Y) for different values of x (Figure 6, left). For each x, there is a positive probability that gren(Y)x = 0. This occurs if the discrete Brownian bridge lies entirely below zero and then the least concave majorant is identically zero, in which case gren(Y)x = 0 for all x = 0, … , y (as in Figure 5, right). The probability of this event may be calculated exactly using the distribution function of the multivariate normal. Figure 6 (right), shows several values for different y.

Fig 6.

Limiting distribution of the MLE for the uniform case with y = 9: marginal cumulative distribution functions at x = 0, 4, 9 (left). The probability that gren(Y) ≡ 0 is plotted for different values of y (right). For y = 9, it is equal to 0.0999.

4. Limiting distributions for the metrics

In the previous section we obtained asymptotic distribution results for the three estimators. To compare the estimators, we need to also consider convergence of the Hellinger and k metrics. Our results show that p^nR and p^n are asymptotically equivalent (in the sense that the metrics have the same limit). The MLE is also asymptotically equivalent, but if and only if p is strictly monotone. If p has any periods of constancy, then the MLE has better asymptotic behavior. Heuristically, this happens because, by definition, YG is a sequence of local averages of Y, and averages have smaller variability. Furthermore, the more and larger the periods of constancy, the better the MLE performs, see, in particular, Proposition 4.5 below. These results quantify, for large sample size, the observations of Figure 3.

The rate of convergence of the $\ell_2$ metric is an immediate consequence of Theorem 3.8. Below, the notation $Z_1 \le_S Z_2$ denotes stochastic ordering: i.e. $P(Z_1 > x) \le P(Z_2 > x)$ for all $x \in \mathbb{R}$ (the ordering is strict if both inequalities are replaced with strict inequalities).

Corollary 4.1

Suppose that p is a monotone decreasing distribution. Then, for any 2 ≤ k ≤ ∞,

$\sqrt{n}\,\|\hat p_n - p\|_k = \|Y_n\|_k \to_d \|Y\|_k, \qquad \sqrt{n}\,\|\hat p^R_n - p\|_k = \|Y^R_n\|_k \to_d \|Y\|_k, \qquad \sqrt{n}\,\|\hat p^G_n - p\|_k = \|Y^G_n\|_k \to_d \|Y^G\|_k \le_S \|Y\|_k.$

If p is not strictly monotone, then $\le_S$ may be replaced with $<_S$. The above convergence also holds in expectation (that is, $E[\|Y_n\|_k^k] \to E[\|Y\|_k^k]$ and so forth). Furthermore,

$E[\|Y^G\|_2^2] \le E[\|Y\|_2^2] = \sum_{x\ge0} p_x(1-p_x),$

with equality if and only if p is strictly monotone.

Convergence of the other two metrics is not as immediate, and depends on the tail behavior of the distribution p.

Corollary 4.2

Suppose that p is such that $\sum_{x\ge0}\sqrt{p_x} < \infty$. Then

$\sqrt{n}\,\|\hat p_n - p\|_1 = \|Y_n\|_1 \to_d \|Y\|_1, \qquad \sqrt{n}\,\|\hat p^R_n - p\|_1 = \|Y^R_n\|_1 \to_d \|Y\|_1, \qquad \sqrt{n}\,\|\hat p^G_n - p\|_1 = \|Y^G_n\|_1 \to_d \|Y^G\|_1 \le_S \|Y\|_1.$

If p is not strictly monotone, then $\le_S$ may be replaced with $<_S$. The above convergence also holds in expectation, and

$E[\|Y^G\|_1] \le E[\|Y\|_1] = \sqrt{\frac{2}{\pi}}\sum_{x\ge0}\sqrt{p_x(1-p_x)},$

with equality if and only if p is strictly monotone.

Convergence of the Hellinger distance requires an even more stringent condition.

Corollary 4.3

Suppose that $\kappa = \sup\{x : p_x > 0\} < \infty$. Then

$n H^2(\hat p_n, p) \to_d \frac18\sum_{x=0}^{\kappa}\frac{Y_x^2}{p_x}, \qquad n H^2(\hat p^R_n, p) \to_d \frac18\sum_{x=0}^{\kappa}\frac{Y_x^2}{p_x}, \qquad n H^2(\hat p^G_n, p) \to_d \frac18\sum_{x=0}^{\kappa}\frac{(Y^G_x)^2}{p_x} \le_S \frac18\sum_{x=0}^{\kappa}\frac{Y_x^2}{p_x}.$

If p is not strictly monotone, then $\le_S$ may be replaced with $<_S$. The distribution of $\sum_{x=0}^{\kappa} Y_x^2/p_x$ is chi-squared with κ degrees of freedom. The above convergence also holds in expectation, and

$E\Big[\sum_{x=0}^{\kappa}\frac{(Y^G_x)^2}{p_x}\Big] \le E\Big[\sum_{x=0}^{\kappa}\frac{Y_x^2}{p_x}\Big] = \kappa,$

with equality if and only if p is strictly monotone.

Remark 4.4

We note that if $\sum_{x\ge0}\sqrt{p_x} = \infty$, then $\sum_{x\ge0}|Y_x| = \infty$ almost surely, and if κ = ∞, then $\sum_{x\ge0} Y_x^2/p_x$ is also infinite almost surely. This implies that for the empirical and rearrangement estimators, the conditions in Corollaries 4.2 and 4.3 are also necessary for convergence. The same is true for the Grenander estimator when the true distribution is strictly decreasing.

Proposition 4.5

Let p be a decreasing distribution, and write it in terms of its intervals of constancy. That is, let

$p_x = \theta_i \quad \text{if } x \in C_i,$

where $\theta_i > \theta_{i+1}$ for all i = 1, 2, …, and where $\{C_i\}_{i\ge1}$ forms a partition of $\mathbb{N}$. Then

$E\Big[\sum_{x\ge0}(Y^G_x)^2\Big] = \sum_{i\ge1}\sum_{j=1}^{|C_i|}\theta_i\Big(\frac1j - \theta_i\Big).$

Also, if $\kappa = \sup\{x : p_x > 0\} < \infty$, then

$E\Big[\sum_{x=0}^{\kappa}\frac{(Y^G_x)^2}{p_x}\Big] = \sum_{i\ge1}\sum_{j=1}^{|C_i|}\Big(\frac1j - \theta_i\Big).$

This result allows us to calculate explicitly how much “better” the performance of the MLE is, in comparison to Y and $Y^R$. For $\mathbb{R}$–valued random variables, it is standard to compare asymptotic variances to evaluate the relative efficiency of two estimators. We, on the other hand, are dealing with $\mathbb{R}^{\mathbb{N}}$–valued processes. Consider some process $W \in \mathbb{R}^{\mathbb{N}}$, and let $\Sigma_W$ denote its covariance operator. Then the trace norm of $\Sigma_W$ is equal to the expected squared $\ell_2$ norm of W,

$E[\|W\|_2^2] = \|\Sigma_W\|_{\mathrm{trace}} = \sum_{i\ge1}\lambda_i,$

where {λi}i≥1 denotes the eigenvalues of ΣW. Therefore, Corollary 4.1 tells us that, asymptotically, YG is more efficient than YR and Y , in the sense that

$\|\Sigma_{Y^G}\|_{\mathrm{trace}} \le \|\Sigma_{Y^R}\|_{\mathrm{trace}} = \|\Sigma_{Y}\|_{\mathrm{trace}},$

with equality if and only if p is strictly decreasing. Furthermore, Proposition 4.5 allows us to calculate exactly how much more efficient YG is for any given mass function p.

Suppose that p has exactly one period of constancy on $r \le x \le s$, and let $\tau = s - r + 1 \ge 2$. Further, suppose that $p_x = \theta^*$ for $r \le x \le s$. Then

$E[\|Y^R\|_2^2] - E[\|Y^G\|_2^2] = E[\|Y\|_2^2] - E[\|Y^G\|_2^2] = \theta^*\Big(\tau - \sum_{i=1}^{\tau}\frac1i\Big).$

In particular, if p is the uniform distribution on {0, … , y}, then we find that $E[\|Y^R\|_2^2] = y/(y+1)$, whereas $E[\|Y^G\|_2^2]$ behaves like $\log y/(y+1)$, and is much smaller.

Note that if p is strictly monotone, then we obtain

$E\Big[\sum_{x\ge0}(Y^G_x)^2\Big] = \sum_{i\ge1}\theta_i(1-\theta_i) = E\Big[\sum_{x\ge0}Y_x^2\Big],$

as required. Also, if p is the uniform probability mass function on {0, … , y}, we conclude that

$E\Big[\sum_{x=0}^{y}\frac{\mathrm{gren}(Y)_x^2}{p_x}\Big] = \sum_{i=1}^{y}\frac{1}{i+1},$

where $\log y - 0.5 < \sum_{i=1}^{y}(i+1)^{-1} < \log(y+1)$.
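As a quick numerical check of these bounds (our own arithmetic, not from the paper): for y = 9 the sum is $\sum_{i=1}^{9}(i+1)^{-1} = H_{10} - 1 \approx 1.93$, which indeed lies between $\log 9 - 0.5 \approx 1.70$ and $\log 10 \approx 2.30$; by contrast, the corresponding quantity for the empirical and rearrangement estimators is $E[\sum_{x=0}^{y} Y_x^2/p_x] = y = 9$, roughly five times larger.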

Lastly, consider a distribution with bounded support, and fix r < s such that p is strictly monotone on {r, … , s}; that is, $p_{r-1} > p_r > \dots > p_s > p_{s+1}$. Next define $\tilde p$ by $\tilde p_x = p_x$ for x < r and x > s, and $\tilde p_x = \sum_{y=r}^{s} p_y/(s - r + 1)$ for $x \in \{r, \dots, s\}$. Then the difference in the expected Hellinger metrics under the two distributions is

$E_p\Big[\sum_{x=0}^{\kappa}\frac{(Y^G_x)^2}{p_x}\Big] - E_{\tilde p}\Big[\sum_{x=0}^{\kappa}\frac{(Y^G_x)^2}{\tilde p_x}\Big] = \tau - \sum_{j=1}^{\tau}\frac1j,$

where $\tau = s - r + 1$. Therefore, the longer the intervals of constancy in a distribution, the better the performance of the MLE.

Remark 4.6

From Theorem 1.6.2 of Robertson et al. (1988) it follows that for any x ≥ 0

$E[(Y^G_x)^2] \le E[Y_x^2] = p_x(1-p_x).$

This result may also be proved using the method used to show Proposition 4.5. Note that this pointwise inequality does not hold in general for YG replaced with YR.

Corollaries 4.1 and 4.2 then translate into statements concerning the limiting risks of the three estimators p^n, p^nR, and p^nG as follows, where the risk was defined in (2.3). In particular, we see that, asymptotically, both p^nR and p^n are inadmissible, and are dominated by the maximum likelihood estimator p^nG.

Corollary 4.7

For any 2 ≤ k ≤ ∞, and any pP, the class of decreasing probability mass functions on N,

$n^{k/2} R_k(p, \hat p_n) \to E[\|Y\|_k^k], \qquad n^{k/2} R_k(p, \hat p^R_n) \to E[\|Y\|_k^k], \qquad n^{k/2} R_k(p, \hat p^G_n) \to E[\|Y^G\|_k^k] \le E[\|Y\|_k^k].$

The inequality in the last line is strict if p is not strictly monotone. The statements also hold for k = 1 under the additional hypothesis that $\sum_{x\ge0}\sqrt{p_x} < \infty$.

5. Estimating the mixing distribution

Here, we consider the problem of estimating the mixing distribution q in (1.1). This may be done directly via the estimators of p and the formula (1.2). Define the estimators of the mixing distribution as follows

$\hat q_{n,x} = -(x+1)\,\Delta\hat p_{n,x}, \qquad \hat q^R_{n,x} = -(x+1)\,\Delta\hat p^R_{n,x}, \qquad \hat q^G_{n,x} = -(x+1)\,\Delta\hat p^G_{n,x}.$

Each of these estimators sums to one by definition; however, $\hat q_n$ is not guaranteed to be non-negative. The main results of this section are consistency and the $\sqrt{n}$–rate of convergence of these estimators.
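As an illustration (again a sketch built on the hypothetical helpers from the earlier code, not the authors' implementation), the three plug-in estimators of q are obtained by applying the finite difference (1.2) to each estimate of p:

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.geometric(0.25, size=100) - 1      # geometric on {0, 1, ...} with theta = 0.75
p_hat = empirical_pmf(sample)                    # raw empirical estimator from the sketch above

q_hat = q_from_p(p_hat)                          # may have negative entries
q_hat_rear = q_from_p(rearrangement(p_hat))
q_hat_mle = q_from_p(grenander(p_hat))
print(q_hat.sum(), q_hat_rear.sum(), q_hat_mle.sum())                     # each sums to one
print((q_hat < 0).any(), (q_hat_rear < 0).any(), (q_hat_mle < 0).any())   # only the first can be True
```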

Theorem 5.1

Suppose that p is monotone decreasing and satisfies $\sum_{x\ge0} x\,p_x < \infty$. Then all three estimators $\hat q_n$, $\hat q^G_n$ and $\hat q^R_n$ are consistent estimators of q in the sense that

$\rho(\tilde q_n, q) \to 0$

almost surely as n → ∞ for $\tilde q_n = \hat q_n, \hat q^G_n$ and $\hat q^R_n$, whenever $\rho(\tilde q, q) = H(\tilde q, q)$ or $\rho(\tilde q, q) = \|\tilde q - q\|_k$, $1 \le k \le \infty$.

To study the rates of convergence we define the fluctuation processes Zn, ZnR, and ZnG as

$Z_{n,x} = \sqrt{n}\,(\hat q_{n,x} - q_x), \qquad Z^R_{n,x} = \sqrt{n}\,(\hat q^R_{n,x} - q_x), \qquad Z^G_{n,x} = \sqrt{n}\,(\hat q^G_{n,x} - q_x),$

with limiting processes defined as

$Z_x = -(x+1)(Y_{x+1} - Y_x), \qquad Z^R_x = -(x+1)(Y^R_{x+1} - Y^R_x), \qquad Z^G_x = -(x+1)(Y^G_{x+1} - Y^G_x).$

Theorem 5.2

Suppose that p is such that $\kappa = \sup\{x \ge 0 : p_x > 0\} < \infty$. Then $Z_n \Rightarrow Z$, $Z^R_n \Rightarrow Z^R$ and $Z^G_n \Rightarrow Z^G$. Furthermore, $\|Z_n\|_k \to_d \|Z\|_k$, $\|Z^R_n\|_k \to_d \|Z^R\|_k$ and $\|Z^G_n\|_k \to_d \|Z^G\|_k$ for any k ≥ 1. These convergences also hold in expectation. Also, $nH^2(\hat q_n, q) \to_d \frac18\sum_{x=0}^{\kappa} Z_x^2/q_x$, $nH^2(\hat q^R_n, q) \to_d \frac18\sum_{x=0}^{\kappa} (Z^R_x)^2/q_x$ and $nH^2(\hat q^G_n, q) \to_d \frac18\sum_{x=0}^{\kappa} (Z^G_x)^2/q_x$, and these again also hold in expectation.

As before, we have asymptotic equivalence of all three estimators if p is strictly decreasing (cf. Corollary 3.9). To determine the relative behavior of the estimators q^nR and q^nG we turn to simulations. Since q^n is not guaranteed to be a probability mass function (unlike the other two estimators), we exclude it from further consideration.

In Figure 7, we show boxplots of m = 1000 samples of the distances $\ell_1(\tilde q, q)$, $\ell_2(\tilde q, q)$ and $H(\tilde q, q)$ for $\tilde q = \hat q^R_n$ (light grey) and $\tilde q = \hat q^G_n$ (dark grey) with n = 20 (left), n = 100 (center) and n = 1000 (right). From top to bottom the true distributions are

  1. p = pU(5),

  2. p = 0.2pU(3) + 0.8pU(7),

  3. p = 0.25pU(1) + 0.2pU(3) + 0.15pU(5) + 0.4pU(7), and

  4. p is geometric with θ = 0.75.

We can see that $\hat q^G_n$ has better performance in all metrics, except for the case of the strictly decreasing distribution. As before, the flatter the true distribution is, the better the relative performance of $\hat q^G_n$. Notice that by Corollary 3.9 and Theorem 5.2 the asymptotic behavior (i.e. rate of convergence and limiting distribution) of the $\ell_2$ norm of $\hat q^G_n$ and $\hat q^R_n$ should be the same if p is strictly decreasing.

Fig 7.

Monte Carlo comparison of the estimators q^nR (light grey) and q^nG (dark grey).

Remark 5.3

For κ = ∞, the process $\{x\,Y_{n,x} : x \in \mathbb{N}\}$ is known to converge weakly in $\ell_2$ if and only if $\sum_{x\ge0} x^2 p_x < \infty$, while the convergence is known to hold in $\ell_1$ if and only if $\sum_{x\ge0} x\sqrt{p_x} < \infty$; see e.g. Araujo and Giné (1980, Exercise 3.8.14, page 205). We therefore conjecture that $Z^R_n$ and $Z^G_n$ converge weakly to $Z^R$ and $Z^G$ in $\ell_2$ (resp. $\ell_1$) if and only if $\sum_{x\ge0} x^2 p_x < \infty$ (resp. $\sum_{x\ge0} x\sqrt{p_x} < \infty$).

6. Proofs

Proof of Remark 1.1

This bound follows directly from the definition of p, since

$p_x = \sum_{y \ge x}\frac{q_y}{y+1} \le (x+1)^{-1}\sum_{y \ge x} q_y \le (x+1)^{-1}.$

In the next lemma, we prove several useful properties of both the rearrangement and Grenander operators.

Lemma 6.1

Consider two sequences p and q with support S, and let ϕ(·) denote either the Grenander or rearrangement operator. That is, ϕ(p) = gren(p) or ϕ(p) = rear(p).

  1. For any increasing function $f : S \to \mathbb{R}$,
    $\sum_{x\in S} f_x\,\varphi(p)_x \le \sum_{x\in S} f_x\,p_x. \quad (6.1)$
  2. Suppose that $\Psi : \mathbb{R} \to \mathbb{R}_+$ is a non-negative convex function such that Ψ(0) = 0, and that q is decreasing. Then,
    $\sum_{x\in S}\Psi(\varphi(p)_x - q_x) \le \sum_{x\in S}\Psi(p_x - q_x). \quad (6.2)$
  3. Suppose that |S| is finite. Then ϕ(p) is a continuous function of p.

Proof
  1. Suppose that S = {s1, … , s2}, where it is possible that s2 = ∞. Then it is clear from the properties of the rearrangement and Grenander operators that
    x=s1S2φ(p)x=x=s2S1pxandx=s1yφ(p)xx=s1ypx,
    for y ∈ S. These inequalities immediately imply (6.1), since, by summation by parts,
    x=s1s2fxpx=x=s1s2y=s1x1(fy+1fy)px+fs1x=s1s2px=x=s1s2(fy+1fy)x=y+1s2px+fs1x=s1s2px,
    and f is an increasing function.
  2. For the Grenander estimator this is simply Theorem 1.6.1 in Robertson, Wright and Dykstra (1988). For the rearrangement estimator, we adapt the proof from Theorem 3.5 in Lieb and Loss (1997). We first write Ψ = Ψ+ + Ψ, where Ψ+(x) = Ψ(x) for x ≥ 0 and Ψ(x) = Ψ(x) for x ≤ 0. Now, since Ψ+ is convex, there exists an increasing function Ψ+ such that Ψ+(x)=0xΨ+(t)dt. Now,
    Ψ+(pxqx)=qxpxΨ+(pxs)ds=0Ψ+(pxs)I[qxs]ds.
    Applying Fubini’s theorem, we have that
    xSΨ+(pxqx)=0{xSΨ+(pxs)I[qxs]}ds.
    Now, the function I[qxs] is an increasing function of x, and for ϕ(p) = rear(p), for each fixed s we have that φ(Ψ+(ps))x=Ψ+(φ(p)xs), since Ψ+ is an increasing function. Therefore, applying (6.1), we find that the last display above is bounded below by
    0{xSΨ+(φ(p)xs)I[qxs]}ds=xSΨ+(φ(p)xqx).
    The proof for Ψ is the same, except that here we use the identity
    Ψ(pxqx)=0Ψ(pxS){I[qxs]}ds.
  3. Since |S| is finite, we know that p is a finite vector, and therefore it is enough to prove continuity at any point x ∈ S. For ϕ = rear this is a well–known fact. Next, note that if pnp, then the partial sums of pn also converge to the partial sums of p. From Lemma 2.2 of Durot and Tocquet (2003), it follows that the least concave majorant of pn converges to the least concave majorant of p, and hence, so do their differences. Thus ϕ(pn)xϕ(p)x.

6.1. Some inequalities and consistency results: Proofs

Proof of Theorem 2.1

  1. Choosing $\Psi(t) = |t|^k$ in (6.2) of Lemma 6.1 proves (2.2). To prove (2.1) recall that
    $H^2(\tilde p, p) = 1 - \sum_{x\ge0}\sqrt{\tilde p_x\,p_x}.$
    By Hardy et al. (1952), Theorem 368, page 261 (or Theorem 3.4 in Lieb and Loss (1997)), it follows that
    $\sum_{x\ge0}\sqrt{\hat p_{n,x}\,p_x} \le \sum_{x\ge0}\sqrt{\hat p^R_{n,x}\,p_x},$
    which proves the result for the rearrangement estimator. It remains to prove the same for the MLE. Let $\{B_i\}_{i\ge1}$ denote a partition of $\mathbb{N}$. By definition,
    $\hat p^G_{n,x} = \frac{1}{|B_i|}\sum_{z\in B_i}\hat p_{n,z}, \qquad x \in B_i,$
    for some partition. Jensen's inequality now implies that
    $\sum_{x\in B_i}\sqrt{\hat p_{n,x}} \le \sum_{x\in B_i}\sqrt{\hat p^G_{n,x}},$
    which completes the proof.
  2. is obvious.

  3. The second statement is obvious in light of (2.2) with k = ∞. To see that the probability of monotonicity of the $\hat p_{n,x}$'s converges to 1/(y + 1)! under the uniform distribution, note that the event in question is the same as the event that the components of the vector $\big(\sqrt{n}(\hat p_{n,x} - (y+1)^{-1}) : x \in \{0, \dots, y\}\big)$ are in decreasing order. This vector converges in distribution to $Z \sim N_{y+1}(0, \Sigma)$ where $\Sigma = \mathrm{diag}(1/(y+1)) - (y+1)^{-2}\mathbf{1}\mathbf{1}^T$, and $P(Z_1 \ge Z_2 \ge \dots \ge Z_{y+1}) = 1/(y+1)!$ since the components of Z are exchangeable.

Proof of Corollary 2.2

For any $p \in \mathcal{P}$, we have that

$n R_2(p, \hat p^R_n) \le n R_2(p, \hat p_n) = 1 - \sum_{x\ge0} p_x^2 \le 1.$

Plugging in the discrete uniform distribution on {0, … , κ}, and applying part (ii) of Theorem 2.1, we find that

$n R_2(p, \hat p^R_n) = n R_2(p, \hat p_n) = 1 - (\kappa+1)^{-1}.$

Thus, for any ε > 0, there exists a $p \in \mathcal{P}$ such that

$n R_2(p, \hat p^R_n) = n R_2(p, \hat p_n) \ge 1 - \varepsilon.$

Since the upper bound on both risks is one, the result follows.

Proof of Theorem 2.4

The results of this theorem are quite standard, and we provide a proof only for completeness. Let $\mathbb{F}_n$ denote the empirical distribution function and F the cumulative distribution function of the true distribution p. For any K (large), we have that for any x > K,

$|\hat p_{n,x} - p_x| \le \hat p_{n,x} + p_x \le (1 - \mathbb{F}_n(K)) + (1 - F(K)) \le |\mathbb{F}_n(K) - F(K)| + 2(1 - F(K)).$

Fix ε > 0, and choose K large enough so that $1 - F(K) < \varepsilon/6$. Next, there exists an $n_0$ sufficiently large so that $\sup_{0\le x\le K}|\hat p_{n,x} - p_x| < \varepsilon/3$ and $|\mathbb{F}_n(K) - F(K)| < \varepsilon/3$ for all $n \ge n_0$ almost surely. Therefore for $n \ge n_0$

$\sup_{x\ge0}|\hat p_{n,x} - p_x| \le \sup_{0\le x\le K}|\hat p_{n,x} - p_x| + |\mathbb{F}_n(K) - F(K)| + 2(1 - F(K)) < \varepsilon.$

This shows that $\|\hat p_n - p\|_k \to 0$ almost surely for k = ∞. A similar approach proves the result for any 1 ≤ k < ∞. Convergence of $H(\hat p_n, p)$ follows since for mass functions $H^2(p, q) \le \|p - q\|_1$ (see e.g. Le Cam (1969), page 35). Consistency of the other estimators $\hat p^R_n$ and $\hat p^G_n$ now follows from the inequalities of Theorem 2.1.

Proof of Corollary 2.5

Note that by the construction of the estimators, we have that $\hat F^R_n(x) \ge \mathbb{F}_n(x)$ and $\hat F^G_n(x) \ge \mathbb{F}_n(x)$ for all x ≥ 0. Now, fix ε > 0. Then there exists a K such that $\sum_{x>K} p_x < \varepsilon/4$. By the Glivenko-Cantelli lemma, there exists an $n_0$ such that for all $n \ge n_0$

$\sup_{x\ge0}|\mathbb{F}_n(x) - F(x)| < \varepsilon/4,$

almost surely. Furthermore, by Theorem 2.4, $n_0$ can be chosen large enough so that for all $n \ge n_0$

$\sup_{x\ge0}|\hat p^G_{n,x} - p_x| < \frac{\varepsilon}{4(K+1)},$

almost surely. Therefore, for all $n \ge n_0$, we have that

$\sup_{x\ge0}|\hat F^G_n(x) - F(x)| \le \sum_{x=0}^{K}|\hat p^G_{n,x} - p_x| + \sum_{x>K}\hat p^G_{n,x} + \sum_{x>K}p_x \le \frac{\varepsilon}{4} + \sum_{x>K}\hat p_{n,x} + \frac{\varepsilon}{4} \le \frac{\varepsilon}{4} + \sum_{x>K}p_x + \frac{\varepsilon}{4} + \frac{\varepsilon}{4} \le \varepsilon.$

The proof for the rearrangement estimator is identical.

6.2. Limiting distributions: Proofs

Lemma 6.2

Let Wn be a sequence of processes in ℓk with 1 ≤ k < ∞. Suppose that

  1. $\sup_n E[\|W_n\|_k^k] < \infty$,

  2. $\lim_{m\to\infty}\sup_n \sum_{x\ge m} E[|W_{n,x}|^k] = 0$.

Then Wn is tight in ℓk.

Proof

Note that for k < ∞, compact sets K are subsets of k such that there exists a sequence of real numbers Ax for xN and a sequence λm → 0 such that

  1. |Wx| ≤ Ax for all xN,

  2. Σkm |Wx|kλm for all m,

for all elements wK. Clearly, if the conditions of the lemma are satisfied, then for each ε > 0, we have that

P(Wn,xAxfor allx0,andxmWn,xkλmfor allm)1

for all n. Thus, Wn is tight in k.

Proof of Theorem 3.1

Convergence of the finite dimensional distributions is standard. It remains to prove tightness in 2. By Lemma 6.2 this is straightforward, since

$E[\|Y_n\|_2^2] = \sum_{x\ge0}p_x(1-p_x) \quad\text{and}\quad \sum_{x\ge m}E[Y_{n,x}^2] = \sum_{x\ge m}p_x(1-p_x).$

Throughout the remainder of this section we make extensive use of a set equality for the least concave majorant known as the “switching relation”. Let

$\hat s_n(a) = \inf\big\{k \ge -1 : \mathbb{F}_n(k) - a(k+1) = \sup_{y}\{\mathbb{F}_n(y) - a(y+1)\}\big\} \equiv \mathop{\mathrm{argmax}}\nolimits^{L}_{k\ge-1}\{\mathbb{F}_n(k) - a(k+1)\} \quad (6.3)$

denote the first time that the process $\mathbb{F}_n(y) - a(y+1)$ reaches its maximum. Then the following holds:

$\{\hat s_n(a) < x\} = \{\hat s_n(a) \le x - \tfrac12\} = \{\hat p^G_{n,x} < a\}. \quad (6.4)$

For more background (as well as a proof) of this fact see, for example, Balabdaoui et al. (2009).

Proof of Proposition 3.3

Let F denote the cumulative distribution function for the function p. For fixed tR it follows from (6.4) that

P(Yn,xG<t)=P(s^n(px+n12t)x12)=P(argmaxy1L{Zn(y)}x12) (6.5)

where Zn(y)=n12Fn(y)(n12px+t)(y+1). Note that for any constant c, argmaxL(Zn(y)) = argmaxL(Zn(y) + c), and therefore we instead take

Zn(y)=n12(Fn(y)Fn(r1))(n12px+t)(yr+1)=Vn(y)+Wn(y)t(yr+1),

where

Vn(y)=n((Fn(y)Fn(r1))(F(y)F(r1))),n12Wn(y)=(F(y)F(r1))px(yr+1)={=0forr1ys,<0otherwise.}

Let U denote the standard Brownian bridge on [0, 1]. It is well-known that Vn(y)U(F(y))U(F(r1)). Also, Wn(y) → ∞ for y ∉ {r − 1, … , s}, and it is identically zero otherwise. It follows that the limit of (6.5) is

P(argmaxr1ysL{U(F(y))U(F(r1))t(yr+1)}x12)=P(argmaxr1ysL{U(F(y))U(F(r1))t(yr+1)}<x),

for any x ∈ {r, … , s}. Note that the process

{U(F(x))U(F(r1)),x=r1,,s}=d{j=rxYj,x=r1,,s},

and therefore the probability above is equal to

P(gren(Y(r,s))x<t)

for x ∈ {r, … , s}. Since the half-open intervals [a, b) are convergence determining, this proves pointwise convergence of Yn,xG to gren(Y)x.

To show convergence of the rearrangement estimator fluctuation process, note that for sufficiently large n we have that p^n,rk>p^n,x>p^n,s+k for all x ∈ {r, … , s} and k ≥ 1. Therefore, (p^nR)(r,s)=rear((p^n)(r,s)) and furthermore, since px is constant here, (YnR)(r,s)=rear(Yn(r,s)). The result now follows from the continuous mapping theorem.

Proof of Proposition 3.4

To simplify notation, let Wm=U(F(mr+1))U(F(r1)) for m = 0, … , sr + 1. Also, let θ = pr = ⋯ = ps and then Gm = F(mr + 1) − F(r − 1) = θm. Write

Wm=GmGsWs+{WmGmGsWs}=msWs+{WmmsWs},

where = sr + 1. Let W~m=WmmWss. Then W~0=W~s=0 and some calculation shows that E[W~m]=0 and

cov(W~m,W~m)=θs{min(ms,ms)msms}.

Also, cov(W~m,Ws)=0. Let Z be a standard normal random variable independent of the standard Brownian bridge U. We have shown that

Wm=dmsθs(1θs)Z+θsU(ms).

Next, let Y~m=U(ms)U(m1s) for m = 1, … , . The vector Y~=(Y~1,,Y~s) is multivariate normal with mean zero and cov(Y~m,Y~m)=δm,ms1(s)2. To finish the proof, note that gren(c+Y~)=c+gren(Y~) for any constant c.

Proof of Proposition 3.6

The claim for the rearrangement estimator follows directly from Theorem 2.4 for k = ∞. To prove the second claim, we will show that Yn,xGYn,x=n(p^n,xGp^n,x)p0. To do this, we again use the switching relation (6.4).

Fix ε > 0. Then

P(Yn,xGYn,x)=P(p^n,xGp^n,x+n12)=P(s^n(p^n,x+n12)x12)=P(argmaxy1LZ~n(yx)x12)=P(argmaxhx1LZ~n(h)12), (6.6)

where Z~n(h)=n12Fn(x+h)(n12p^n,x+t)(x+h+1). Since for any constant c, argmaxL(Z~n(y))=argmaxL(Z~n(y)+c), we instead take

Z~n(h)=n12(Fn(x+h)Fn(x1))(n12p^n,x+)(h+1)=Un(h)+Vn(h)+Wn(h)(h+1),

where

Un(h)=n((Fn(x+h)Fn(x1))(F(x+h)F(x1))),(h+1)1Vn(h)=n((Fn(x)Fn(x1))(F(x)F(x1))),n12Wn(h)=(F(x+h)F(x1))px(h+1)={=0forh=1,0,<0otherwise.}

Let U denote the standard Brownian bridge on [0, 1]. It is well-known that Un(h)U(F(x+h))U(F(x1)) and Vn(h)(h+1)(U(F(x))U(F(x1))). Also Wn(y) = 0 at y = −1, 0 and Wn(y) → ∞ for y ∉ {−1, 0}. Define

Z(h)=U(F(x+h))U(F(x1))+(h+1)(U(F(x))U(F(x1))).

and notice that Z(0)=Z(1)=0. It follows that the limit of (6.6) is

P(argmaxy=1,0L{Z(h)(h+1)}12)=0,

since argmaxy=1,0L{Z(h)(h+1)}=1. A similar argument proves that

limnP(Yn,xGYn,x<)=0,

showing that Yn,xGYn,x=op(1) and completing the proof.

Proof of Theorem 3.8

Let ϕ denote an operator on sequences in l2. Specifically, we take ϕ = gren or ϕ = rear. Also, for a fixed mass function p let Tp={x0:pxpx+1>0}={τi}i1. Next, define ϕp to be the local version of the ϕ operator. That is, for each i ≥ 1, ϕp(q)x = ϕ(p(τi+1,τi+1))x for all τi + 1 ≤ xτi+1.

Fix ε > 0, and suppose that qnq in 2. Then there exists a KTp and an n0 such that supnn0x>Kqn,x2<6. By Lemma 6.1, ϕp is continuous on finite blocks, and therefore it is continuous on {0, … , K}. Hence, there exists a n0 such that for all nn0

x=0K(φp(qn)xφp(q)x)23.

Applying (6.2), we find that for all nmax{n0,n0}.

φp(qn)φp(q)22x=0K(φp(qn)φp(q))2+2x>Kφp(qn)x2+2x>Kφp(q)x23+2x>Kqn,x2+2x>Kqx2<,

which shows that ϕp is continuous on 2. Since YnY in 2, it follows, by the continuous mapping theorem, that φp(Yn)φp(Y). However, both YnG and YnR are of the form n(φ(p^n)p)φp(Yn). To complete the proof of the theorem it is enough to show that

En=n(φ(p^n)p)φp(Yn)22,

converges to zero in L1; that is, we will show that E[En]0.

By Skorokhod’s theorem, there exists a probability triple and random processes Y and Yn=n(p^np), such that YnY almost surely in 2. Fix ε > 0 and find KTp such that x>Kpx<4.

Next, let TpK={0xK:xTp}, and let δ=minxTpK(pxpx+1). Then, there exists an n0 such that for all nn0

supx0p^n,xpx<δ3, (6.7)
supx00yxφ(p^n)yF(x)<δ6, (6.8)

almost surely (see Corollary 2.5).

Now, consider any mTpK. It follows that any such m is also a touchpoint of the operator ϕ on p^n. Here, by touchpoint we mean that x=0mφ(p^n)x=x=0mp^n,x. From (6.7), it follows that

infxmp^n,x>supx>mp^n,x,

which implies that m is a touchpoint for the rearrangement estimator. For the Grenander estimator, we require (6.8). Here,

F^nG(m)F^nG(m1)>F(m)F(m1)δ3=pmδ3>pm+1+δ3=F(m+1)F(m)+δ3F^nG(m+1)F^nG(m).

Therefore, the slope of F^nG changes from m to m + 1, which implies that m is a touchpoint almost surely. Let p^n(s,r)={p^n,s,p^n,s+1,,p^n,r}. An important property of the ϕ operator is if m < m’ are two touchpoints of ϕ applied to p^n, then for all m+1 ≤ xm’, φ(p^n)x=φ(p^n(m+1,m))x. Now, since p takes constant values between the touchpoints TpK, it follows that n(φ(p^n)p)x=φp(Yn)x, for all xK.

Therefore, for all nn0

En=x0n(φ(p^n)p)xφp(Yn)x2x=0K(n(φ(p^n)p)xφp(Yn)x)2+2x>K(n(φ(p^n)p)x)2+2x>K(φp(Yn)x)24x>K(Yn,x)2,

almost surely. It follows that

limEn4x>K(Yx)2,

and hence

E[limEn]4E[x>K(Yx)2]=4x>Kpx(1px)<.

Since En2Yn22, with E[Yn22]1, we may apply Fatou’s lemma so that

0limE[En]E[limEn].

Letting ε → 0 completes the proof.

Corollaries 3.9 and 3.10 are obvious consequences of Theorem 3.8. Remark 3.11 is proved in the following section.

6.3. Limiting distributions for metrics: Proofs

Proof of Corollary 4.1

We provide the details only in the k = 2 setting. The cases when k > 2 follow in a similar manner, since $\|x\|_k \le \|x\|_2$ for $x \in \ell_2$.

Convergence of Yn2, YnR2 and YnG2 follows from Theorems 3.1 and 3.8 by the continuous mapping theorem. That ∥Y2 = ∥YR2 is obvious from the definition of YR. That ∥YG2 ≤ ∥Y2 follows from Jensen’s inequality and the definition of the gren (·) operator, since for any r < s, gren (Y(r,s))x is equal to the average of Yy over some subset of {r, ⋯ , s} containing the point x. If p is not strictly decreasing, then there exists a region, which we denote again by {r, ⋯ , s}, where it is constant. Then there is positive probability that (YG)(r,s) is different from Y(r,s). In this case, we have that

$\|(Y^G)^{(r,s)}\|_2^2 < \|Y^{(r,s)}\|_2^2,$

which finishes the proof of the stochastic ordering in the third statement. Convergence in expectation is immediate since

$E[\|Y_n\|_2^2] = \sum_{x\ge0}p_x(1-p_x),$

and the same results for YnR, YnG follow by the dominated convergence theorem and the bounds in Theorem 2.1 (i). Lastly, the bound E[YG22]E[Y22] with equality if and only if p is strictly monotone follows from the stochastic ordering.

Proof of Corollary 4.2

The result of the corollary for the empirical estimator is essentially the Borisov-Durst theorem (see e.g. Dudley (1999), Theorem 7.3.1, page 244), which states that

$\sup_{C\in2^{\mathbb N}}\Big|\sum_{x\in C}Y_{n,x}\Big| \Rightarrow \sup_{C\in2^{\mathbb N}}\Big|\sum_{x\in C}Y_x\Big|$

if $\sum_x\sqrt{p_x} < \infty$. To complete the argument note that $\sup_{C\in2^{\mathbb N}}|\sum_{x\in C}w_x| = \|w\|_1/2$ for any sequence w such that $\sum_x w_x = 0$ (note that the condition $\sum_x\sqrt{p_x} < \infty$ means that the sequences $Y_n$ and Y are absolutely summable almost surely). However, the result may also be proved by noting that the sequence $Y_n$ is tight in $\ell_1$ using Lemma 6.2, since

$E[\|Y_n\|_1] \le \sum_{x\ge0}\sqrt{p_x(1-p_x)}, \qquad \sum_{x\ge m}E[|Y_{n,x}|] \le \sum_{x\ge m}\sqrt{p_x(1-p_x)} \to 0,$

as m → ∞ under the assumption $\sum_{x\ge0}\sqrt{p_x} < \infty$. The proof that $Y^G_n \Rightarrow Y^G$ and $Y^R_n \Rightarrow Y^R$ in $\ell_1$ is identical to the proof of Theorem 3.8, and we omit the details. Convergence of expectations follows since $\|Y_n\|_1$ is uniformly integrable, as

$E\big[\|Y_n\|_1\, I\{\|Y_n\|_1 > \alpha\}\big] \le \frac{E[\|Y_n\|_1^2]}{\alpha} = \frac1\alpha\sum_{x,z}E[|Y_{n,x}||Y_{n,z}|] \le \frac1\alpha\Big(\sum_{x\ge0}\sqrt{p_x}\Big)^2,$

by the Cauchy-Schwarz inequality. All other details follow as in the proof of Corollary 4.1.

Proof of Corollary 4.3

If κ < ∞, then we have that

$8 n H^2(\hat p_n, p) = 4 n\sum_{x=0}^{\kappa}\big[\sqrt{\hat p_{n,x}} - \sqrt{p_x}\big]^2 = 4\sum_{x=0}^{\kappa}\frac{\big[\sqrt{n}(\hat p_{n,x} - p_x)\big]^2}{\big(\sqrt{\hat p_{n,x}} + \sqrt{p_x}\big)^2},$

which converges to

$4\sum_{x=0}^{\kappa}\frac{Y_x^2}{(2\sqrt{p_x})^2} = \sum_{x=0}^{\kappa}\frac{Y_x^2}{p_x} \quad (6.9)$

by Theorem 3.1 and Theorem 2.4 for k = ∞. That this has a chi-squared distribution with κ degrees of freedom is standard, and is shown, for example, in Ferguson (1996), Theorem 9. Convergence of means follows by the dominated convergence theorem from the bound $H^2(p,q) \le \|p - q\|_1$ (see e.g. Le Cam (1969), page 35) and Corollary 4.2. All other details follow as in the proof of Corollary 4.1.

Proof of Remark 4.4

Suppose first that $\sum_{x\ge0}\sqrt{p_x} = \infty$. Define P to be the probability measure $P(A) = \sum_{x\in A}p_x$ and let W be the mean-zero Gaussian field on $\ell_2$ such that $E[W_xW_{x'}] = p_x\delta_{x,x'}$. Then we may write $Y =_d \{W_x - p_x W_{\mathbb N}\}_{x\ge0}$, where $W_{\mathbb N} = \sum_{x\ge0}W_x$.

Now, since $\sum_{x\ge0}P(|W_x| \ge \sqrt{p_x}) = \infty$, by the Borel-Cantelli lemma we have that $\sum_{x\ge0}|W_x| = \infty$ almost surely. Since

$\sum_{x\ge0}|Y_x| = \sum_{x\ge0}|W_x - p_xW_{\mathbb N}| \ge \sum_{x\ge0}|W_x| - |W_{\mathbb N}|,$

and $W_{\mathbb N}$ is finite almost surely, it follows that $\sum_{x\ge0}|Y_x| = \infty$ almost surely as well. That is, if $\sum_{x\ge0}\sqrt{p_x} = \infty$, then the random variable $\|Y\|_1$ simply does not exist.

A similar argument works for the Hellinger norm. Assume that κ = ∞. Then

$\sum_{x\ge0}\frac{Y_x^2}{p_x} = \Big(\sum_{x\ge0}\frac{W_x^2}{p_x}\Big) - W_{\mathbb N}^2,$

and the Borel-Cantelli lemma shows that $\sum_{x\ge0} W_x^2/p_x$ is infinite almost surely.

Lemma 6.3

Let $Z_1, \dots, Z_k$ be i.i.d. N(0, 1) random variables, and let $Z^G_i$, i = 1, …, k, denote the left slopes of the least concave majorant of the graph of the cumulative sums $\sum_{i=1}^{j} Z_i$ with j = 0, …, k. Let T denote the number of times that the LCM touches the cumulative sums (excluding the point zero, but including the point k). Then

$E\Big[\sum_{i=1}^{k}(Z^G_i)^2\Big] = E[T].$
Proof

Since the submission of this paper, it has come to our attention that this result follows from the Bohnenblust-Spitzer lemma as exposited by Steele (2002); taking f(k, y) = y2/k in the development on pages 240-241 of Steele (2002) gives the result. We give a direct argument below.

It is instructive to first consider some of the simple cases. When k = 1, the result is obvious. Suppose then that k = 2. We have

T i=1k(ZiG)2 if
2 Z12+Z22 Z1 > Z2
1 (Z1+Z22)2 Z1<Z1+Z22

Note that we ignore all equalities, since these occur with probability zero. It follows that

E[i=12(ZiG)2]=E[(Z12+Z22)1Z1>Z2]+E[(Z1+Z22)21Z1<Z1+Z22]

where, by exchangeability it follows that

E[(Z12+Z22)1Z1>Z2]=E[(Z12+Z22)1Z1<Z2]=E[(Z12+Z22)]P(Z1>Z2)=2P(T=2).

On the other hand, we also have that

E[(Z1+Z22)21Z1<Z1+Z22]=E[(Z1+Z22)2]P(Z1<Z1+Z22)=1P(T=1),

since the random variables $\bar Z = (Z_1 + Z_2)/2$ and $Z_1 - \bar Z$ are independent. The result follows.

Next, suppose that k = 3. Then we have the following.

T i=1k(ZiG)2 if
(a) (b)
3 Z12+Z22+Z32 Z1 > Z2 > Z3
2 (Z1+Z22)2+Z32 Z1+Z22>Z3 Z1+Z22>Z1
2 Z12+(Z2+Z32)2 Z>Z2+Z32 Z2+Z32>Z2
1 (Z1+Z2+Z33)2 Z1+Z2+Z33>Z1,Z1+Z22

The choice of splitting the conditions between columns (a) and (b) is key to our argument. Note that the LCM creates a partition of the space {1, … , k}, where within each subset the slope of the LCM is constant. The number of partitions is equal to T. Here, column (a) describes the necessary conditions on the order of the slopes on the partitions, while column (b) describes the necessary conditions that must hold within each partition.

In the first row of the table, we find by permuting across all orderings of (123) that

E[(Z12+Z22+Z32)1Z1>Z2>Z3]=E[(Z12+Z22+Z32)]P(Z1>Z2>Z3)=3P(T=3).

Next consider T = 2. Here, by permuting (123) to (312), we find that

E[{Z12+(Z2+Z32)2}1Z1>Z2+Z321Z2+Z32>Z2]=E[{(Z1+Z22)2+Z32}1Z3>Z1+Z221Z1+Z22>Z1].

Note that the permutation (123) to (312) may be re-written as ({12}{3}) to ({3}{12}) which is really a permutation on the partitions formed by the LCM. Now,

E[T1T=2]=E[{(Z1+Z22)2+Z32}1Z1+Z22>Z31Z1+Z22>Z1]+E[{Z12+(Z2+Z32)2}1Z1>Z2+Z321Z2+Z32>Z2]=E[{(Z1+Z22)2+Z32}1Z1+Z22>Z1]=E[{(Z1+Z22)2+Z32}]P(Z1+Z22>Z1)=2P(T=2),

where in the penultimate line we use the fact that Z3, (Z1 + Z2)/2 and Z1 − (Z1 + Z2)/2 are independent.

Lastly,

E[(Z1+Z2+Z33)21Z1+Z2+Z33>Z11Z1+Z2+Z33>Z1+Z22]=E[(Z1+Z2+Z33)21Z1+Z2+Z33>Z11Z3>Z1+Z2+Z33]=E[(Z1+Z2+Z33)2]E[1Z1+Z2+Z33>Z11Z1+Z2+Z33>Z1+Z22]=1P(T=1)

as the variables $\bar Z = (Z_1 + Z_2 + Z_3)/3$ and $\{Z_1 - \bar Z,\ Z_2 - \bar Z,\ Z_3 - \bar Z\}$ are independent.

The key to the general proof is the combination of two actions:

  1. Permutations of subgroups (column (a)), and

  2. independence of column (b) from the random variables i=1k(ZiG)2 and the indicator functions in column (a). Note that for any k > ≥ 1, letting
    Z=(Z1+Z2++Zk)kZ1+Z2++ZjjZ=(Z1Z)+(Z2Z)++(ZjZ)j,
    which is independent of for any choice of j < k.

To write down the proof for any k we must first introduce some notation.

  • For any 1 ≤ mk, we may create a collection P of partitions of {1, … , k} such that the total number of elements in each partition is m. For example, when k = 4 and m = 2, then the elements of P are the partitions ({1}{234}), ({12}{34}) and ({123}{4}). Furthermore, for each partition, we may write down the number of elements in each subset of the partition. Here the sizes of the partitions are 1, 3 then 2, 2 and 3, 1. These partitions my be grouped further by placing together all partitions such that their sizes are unique up to order. Thus, in the above example we would put together 1, 3 and 3, 1 as one group, and the second group would be made up of 2, 2. From each subgroup we wish to choose a representative member, and the collection of these representatives will be denoted as τ(m). We assume that the representative τ is chosen in such a way that the sizes of the partitions are given in increasing order. Let r1 denote the number of subgroups with size 1, and so on. Thus, for τ = ({1}{234}), we have r1 = 1, r2 = 0, r3 = 1, ⋯ , rk = 0.

  • Next, from τ(m) we wish to recreate the entire collection P. To do this, it is sufficient to take each τ and recreate all of the partitions which had the same sizes. Let σmτ denote the resulting collection for a fixed partition τ. Thus, P is equal to the union of σmτ over all ττ(m). Note that the number of elements in σmτ is given by
    (mr1r2rk).
    We also use the notation Rj=i=1jri with R0 = 0. Note that Rk = m.
  • For each partition σ, we write σ1, …, σm to denote the individual subsets of the partition. Thus, for σ = ({1}{234}), we would have σ1 = {1} and σ2 = {2, 3, 4}

  • For each σj as defined above, we let
    AVσjZ=(ΣiσjZi)σj,andAVσjlZ=(Σiσj(l)Zi)σj(l),
    where σj(l) denotes σj with its last l elements removed.

We are now ready to calculate E[i=1k(ZiG)21T=m]. By considering all possible partitions, this is equal to the sum over all ττ(m) of the following terms

σσmτE[{j=1mσj(AVσjZ)2}1AVσ1Z>>AVσmZ]×[j=1m1AVσjZ>max{AVσj1Z,,AVσj(σj1)Z}].

By permuting each σσmτ, and appealing to the exchangeability of the Zi’s, this is equal to

E[{j=1mσj(AVσjZ)2}{i=1k1AVσRi1+1Z>>AVσRiZ}]×[{j=1m1AVσjZ>max{AVσj1Z,,AVσj(σj1)Z}}]=E[{j=1mσj(AVσjZ)2}{i=1k1AVσRi1+1Z>>AVσRiZ}]×E[{j=1m1AVσjZ>max{AVσj1Z,,AVσj(σj1)Z}}],

by independence of each AVσjZ and each ZiAVσjZ for iσj. Notice that the permutations of σσmτ do not account for permutations across all groups with equal “size”. By considering furthermore all permutations between groups of equal size, we further obtain that the last display above is equal to

E[{j=1mσj(AVσjZ)2}]E[{i=1k1AVσRi1+1Z>>AVσRiZ}]×E[{j=1m1AVσjZ>max{AVσj1Z,,AVσj(σj1)Z}}]=mE[{i=1k1AVσRi1+1Z>>AVσRiZ}]×E[{j=1m1AVσjZ>max{AVσj1Z,,AVσj(σj1)Z}}].

Lastly, we collect terms to find that E[i=1k(ZiG)21T=m] is equal to m times

ττ(m)E[{i=1k1AVσRi1+1Z>>AVσRiZ}]×E[{j=1m1AVσjZ>max{AVσj1Z,,AVσj(σj1)Z}}]=ττ(m)σσmτE[1AVσ1Z>>AVσmZ]×[{j=1m1AVσjZ>max{AVσj1Z,,AVσj(σj1)Z}}]=P(T=m),

which concludes the proof.

Proof of Proposition 4.5

In light of Proposition 3.4 and the definition of $Y^G$ (along with some simple calculations), it is sufficient to prove that

$(s - r + 1)\,E\Big[\sum_{x=r}^{s}\mathrm{gren}(\tilde Y^{(r,s)})_x^2\Big] = \sum_{i=1}^{s-r}\frac{1}{i+1}, \quad (6.10)$

using the notation of Proposition 3.4. Without loss of generality we may assume that r = 0, and for simplicity we write $\tilde Y$ for $\tilde Y^{(r,s)}$.

Let k = s + 1, and let $Z_1, \dots, Z_k$ denote k i.i.d. N(0, 1) random variables, let $\bar Z$ denote their average, and let $\tilde Z_i = Z_i - \bar Z$ (which is independent of $\bar Z$). We then have that

$E\Big[\sum_{x=1}^{k}\mathrm{gren}(Z)_x^2\Big] = E\Big[\sum_{x=1}^{k}\mathrm{gren}(\tilde Z + \bar Z)_x^2\Big] = E\Big[\sum_{x=1}^{k}\{\mathrm{gren}(\tilde Z)_x + \bar Z\}^2\Big] = E\Big[\sum_{x=1}^{k}\{\mathrm{gren}(\tilde Z)_x\}^2 + \sum_{x=1}^{k}\bar Z^2\Big] = E\Big[\sum_{x=1}^{k}\{\mathrm{gren}(\tilde Z)_x\}^2\Big] + 1 = (y+1)\,E\Big[\sum_{x=0}^{y}\{\mathrm{gren}(\tilde Y)_x\}^2\Big] + 1.$

Therefore, by Lemma 6.3, to prove (6.10), it is sufficient to show that

$E\Big[\sum_{x=1}^{k}\mathrm{gren}(Z)_x^2\Big] = E[T] = \sum_{i=1}^{k}\frac1i,$

where T denotes the number of touchpoints of the LCM with the cumulative sums of the Zis.

To do this, we use the results of Sparre Andersen (1954). He considers exchangeable random variables $X_1, X_2, \dots$ and their partial sums $S_0 = 0, S_1, S_2, \dots, S_n = \sum_{i=1}^{n} X_i$, and shows that the number $H_n$ of values $i \in \{1, \dots, n-1\}$ for which $S_i$ coincides with the least concave majorant (equivalently the greatest convex minorant) of the sequence $S_0, \dots, S_n$ has mean given by

$E[H_n] = \sum_{i=1}^{n-1}\frac{1}{i+1},$

as long as the random variables $X_1, \dots, X_n$ are symmetrically dependent and

$P\Big(\frac{S_i}{i} = \frac{S_j}{j}\Big) = 0, \qquad 1 \le i < j \le n.$

The vector $X_1, \dots, X_n$ is symmetrically dependent if its joint cumulative distribution function $P(X_i \le x_i,\ i = 1, \dots, n)$ is a symmetric function of $x_1, \dots, x_n$. This result is Theorem 5 in Sparre Andersen (1954). Clearly, we have that $E[T - 1] = E[H_k]$ for $X_1 = Z_1, \dots, X_k = Z_k$, which are exchangeable and satisfy the required conditions. The result follows.

Proof of Remark 3.11

To prove this result we continue with the notation of the previous proof. Equality of gren(Y) with Y holds if and only if the above partition T = {0, … , y}. By Theorem 5 of Sparre Andersen (1954), this occurs with probability 1/(y + 1)!.

Proof of Remark 4.6

By Proposition 3.4 (and using the notation defined there), it is enough to prove that

$E[\mathrm{gren}(\tilde Y)_x^2] \le \frac{1}{\tau} - \frac{1}{\tau^2},$

where for simplicity we write $\tilde Y = \tilde Y^{(r,s)}$. Let $\{\tilde W_x\}_{x=r}^{s}$ be i.i.d. normal random variables with mean zero and variance 1/τ, and let $\bar W = \big(\sum_{x=r}^{s}\tilde W_x\big)/\tau$. Then $\tilde Y =_d \tilde W - \bar W$, and also $\mathrm{gren}(\tilde W)_x = \mathrm{gren}(\tilde W - \bar W)_x + \bar W$. Notice also that $\tilde W - \bar W$ and $\bar W$ are independent. We therefore find that

$E[\mathrm{gren}(\tilde Y)_x^2] + \frac{1}{\tau^2} = E[\mathrm{gren}(\tilde W)_x^2] \le E[\tilde W_x^2] = \frac{1}{\tau},$

the latter inequality following directly from Theorem 1.6.2 of Robertson, Wright and Dykstra (1988), since the elements of $\tilde W$ are independent.

6.4. Estimating the mixing distribution: Proofs

Proof of Theorem 5.1

Since $\|\tilde q_n - q\|_k \le \|\tilde q_n - q\|_1$ and $H^2(\tilde q_n, q) \le \|\tilde q_n - q\|_1$, it is sufficient to only consider convergence in the $\ell_1$ norm. Note that

$|\tilde q_{n,x} - q_x| \le (x+1)\big\{|\tilde p_{n,x+1} - p_{x+1}| + |\tilde p_{n,x} - p_x|\big\},$

and therefore we may further reduce the problem to showing that $\sum_{x\ge0} x\,|\tilde p_{n,x} - p_x|$ converges to zero.

For p~n=p^n,x, we have that for any large K

x0xp^n,xpxKsupx<Kp^n,xpx+xKxpx+xKxp^n,x,

and since Ep[X] exists by assumption, it follows from the law of large numbers that for any K,

$\sum_{x\ge K}x\,\hat p_{n,x} \to \sum_{x\ge K}x\,p_x,$

almost surely. The proof now proceeds as in the proof of Theorem 2.4.

For the rearrangement estimator and the MLE, we may use the same approach. The key is to note that $\sum_{x\ge K}x\,\tilde p_{n,x} \le \sum_{x\ge K}x\,\hat p_{n,x}$ for any K and for both $\tilde p_n = \hat p^R_n, \hat p^G_n$. This holds since $f_x = x\,\mathbf{1}\{x \ge K\}$ is an increasing function and therefore (6.1) of Lemma 6.1 applies.

Proof of Theorem 5.2

Since κ < ∞ by assumption, the theorem follows directly from the results of Sections 3 and 4, as well as Theorem 5.1.

Acknowledgements

We owe thanks to Jim Pitman for suggesting the relevance of the Bohnenblust-Spitzer algorithm and for pointers to the literature.

Footnotes

AMS 2000 subject classifications: Primary 62E20, 62F12; secondary 62G07, 62G30, 62C15, 62F20.

Contributor Information

Hanna K. Jankowski, Department of Mathematics and Statistics, York University, hkj@mathstat.yorku.ca.

Jon A. Wellner, Department of Statistics, University of Washington, jaw@stat.washington.edu.

References

  1. Alamatsaz MH. On discrete α-unimodal distributions. Statist. Neerlandica. 1993;47:245–252.
  2. Anevski D, Fougères A-L. Limit properties of the monotone rearrangement for density and regression function estimation. Technical report, arXiv.org; 2007.
  3. Araujo A, Giné E. The Central Limit Theorem for Real and Banach Valued Random Variables. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons; New York–Chichester–Brisbane: 1980.
  4. Balabdaoui F, Jankowski HK, Pavlides M, Seregin A, Wellner JA. On the Grenander estimator at zero. Statistica Sinica. 2009, to appear. doi: 10.5705/ss.2011.038a.
  5. Banerjee M, Kosorok M, Tang R. Asymptotics for current status data with different observation time schemes. Technical report, University of Michigan; 2009.
  6. Birgé L. Estimating a density under order restrictions: nonasymptotic minimax risk. Ann. Statist. 1987;15:995–1012.
  7. Carolan C, Dykstra R. Asymptotic behavior of the Grenander estimator at density flat regions. The Canadian Journal of Statistics. 1999;27:557–566.
  8. Chernozhukov V, Fernandez-Val I, Galichon A. Improving point and interval estimators of monotone functions by rearrangement. Biometrika. 2009;96:559–575.
  9. Dette H, Neumeyer N, Pilz KF. A simple nonparametric estimator of a strictly monotone regression function. Bernoulli. 2006;12:469–490.
  10. Dette H, Pilz KF. A comparative study of monotone nonparametric kernel estimates. Journal of Statistical Computation and Simulation. 2006;76:41–56.
  11. Dudley RM. Uniform Central Limit Theorems. Cambridge Studies in Advanced Mathematics, vol. 63. Cambridge University Press; Cambridge: 1999.
  12. Durot C, Tocquet A-S. On the distance between the empirical process and its concave majorant in a monotone regression framework. Ann. Inst. H. Poincaré Probab. Statist. 2003;39:217–240.
  13. Ferguson TS. A Course in Large Sample Theory. Texts in Statistical Science Series. Chapman & Hall; London: 1996.
  14. Fougères A-L. Estimation de densités unimodales [Estimation of unimodal densities]. The Canadian Journal of Statistics. 1997;25:375–387.
  15. Hardy GH, Littlewood JE, Pólya G. Inequalities. 2nd ed. Cambridge University Press; Cambridge: 1952.
  16. Le Cam LM. Théorie asymptotique de la décision statistique [Asymptotic theory of statistical decisions]. Séminaire de Mathématiques Supérieures, no. 33 (Été 1968). Les Presses de l'Université de Montréal; Montreal: 1969.
  17. Lieb EH, Loss M. Analysis. Graduate Studies in Mathematics, vol. 14. American Mathematical Society; Providence, RI: 1997.
  18. Maathuis MH, Hudgens MG. Nonparametric inference for competing risks current status data with continuous, discrete or grouped observation times. Technical report, arXiv.org; 2009. doi: 10.1093/biomet/asq083.
  19. Parthasarathy KR. Probability Measures on Metric Spaces. Probability and Mathematical Statistics, no. 3. Academic Press; New York: 1967.
  20. Prakasa Rao BLS. Estimation of a unimodal density. Sankhyā Series A. 1969;31:23–36.
  21. Robertson T, Wright FT, Dykstra RL. Order Restricted Statistical Inference. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons; Chichester: 1988.
  22. Sparre Andersen E. On the fluctuations of sums of random variables. II. Mathematica Scandinavica. 1954;2:195–223.
  23. Steele JM. The Bohnenblust–Spitzer algorithm and its applications. J. Comput. Appl. Math. 2002;142:235–249.
