Author manuscript; available in PMC: 2014 Oct 14.
Published in final edited form as: Stat Sci. 2014 May;29(2):285–301. doi: 10.1214/13-sts455

Two modeling strategies for empirical Bayes estimation

Bradley Efron 1
PMCID: PMC4196219  NIHMSID: NIHMS551405  PMID: 25324592

Abstract

Empirical Bayes methods use the data from parallel experiments, for instance observations Xk ~ 𝒩(Θk, 1) for k = 1, 2, …, N, to estimate the conditional distributions Θk|Xk. There are two main estimation strategies: modeling on the θ space, called “g-modeling” here, and modeling on the x space, called “f-modeling.” The two approaches are described and compared. A series of computational formulas are developed to assess their frequentist accuracy. Several examples, both contrived and genuine, show the strengths and limitations of the two strategies.

Keywords: f-modeling, g-modeling, Bayes rule in terms of f, prior exponential families

1 Introduction

Empirical Bayes methods, though of increasing use, still suffer from an uncertain theoretical basis, enjoying neither the safe haven of Bayes theorem nor the steady support of frequentist optimality. Their rationale is often reduced to inserting more or less obvious estimates into familiar Bayesian formulas. This conceals the essential empirical Bayes task: learning an appropriate prior distribution from ongoing statistical experience, rather than knowing it by assumption. Efficient learning requires both Bayesian and frequentist modeling strategies. My plan here is to discuss such strategies in a mathematically simplified framework that, hopefully, renders them more transparent. The development proceeds with some methodological discussion supplemented by numerical examples.

A wide range of empirical Bayes applications have the following structure: repeated sampling from an unknown prior distribution g(θ) yields unseen realizations

Θ1, Θ2, …, ΘN.    (1.1)

Each Θk in turn provides an observation Xk ~ fΘk(·) from a known probability family fθ(x),

X1, X2, …, XN.    (1.2)

On the basis of the observed sample (1.2), the statistician wishes to approximate certain Bayesian inferences that would be directly available if g(θ) were known. This is the empirical Bayes framework developed and named by Robbins (1956). Both Θ and X are usually one-dimensional variates, as they will be in our examples, though that is of more applied than theoretical necessity.

A central feature of empirical Bayes estimation is that the data arrives on the x scale but inferences are calculated on the θ scale. Two main strategies have developed: modeling on the Θ scale, called g-modeling here, and modeling on the x scale, called f-modeling. G-modeling has predominated in the theoretical empirical Bayes literature, as in Laird (1978), Morris (1983), Zhang (1997), and Jiang and Zhang (2009). Applications, on the other hand, from Robbins (1956) onward, have more often relied on f-modeling, recently as in Efron (2010, 2011) and Brown, Greenshtein and Ritov (2013).

We begin Section 2 with a discretized statement of Bayes theorem that simplifies the nonparametric f-modeling development of Section 3. Parameterized f-modeling, necessary for efficient empirical Bayes estimation, is discussed in Section 4. Section 5 introduces an exponential family class of g-modeling procedures. Classic empirical Bayes applications, an f-modeling stronghold (including Robbins' Poisson formula, the James–Stein estimator, and false discovery rate methods), are the subject of Section 6. The paper concludes with a brief discussion in Section 7.

Several numerical examples, both contrived and genuine, are carried through in Sections 2 through 7. The comparison is never one-sided: as one moves away from the classic applications, g-modeling comes into its own. Trying to go backward, from observations on the x-space to the unknown prior g(θ), has an ill-posed computational flavor. Empirical Bayes calculations are inherently fraught with difficulties, making both of the modeling strategies useful. An excellent review of empirical Bayes methodology appears in Chapter 3 of Carlin and Louis (2000).

There is an extensive literature, much of it focusing on rates of convergence, concerning the “deconvolution problem,” that is, estimating the distribution g(θ) from the observed X values. A good recent reference is Butucea and Comte (2009). Empirical Bayes inference amounts to estimating certain nonlinear functionals of g(·), whereas linear functionals play a central role for the deconvolution problem, as in Cavalier and Hengartner (2009), but the two literatures are related. The development in this paper employs discrete models that avoid rates of convergence difficulties.

Empirical Bayes analyses often produce impressive-looking estimates of posterior θ distributions. The main results in what follows are a series of computational formulas — Theorems 1 through 4 — giving the accuracy of both f-model and g-model estimates. Accuracy can be poor, as some of the examples show, and in any case accuracy assessments are an important part of the analysis.

2 A discrete model of Bayesian inference

In order to simplify the f-modeling computations we will assume a model in which both the parameter vector θ and the observed data set x are confined to finite discrete sets:

\theta \in \boldsymbol{\theta} = (\theta_1, \theta_2, \ldots, \theta_j, \ldots, \theta_m) \quad \text{and} \quad x \in \mathbf{x} = (x_1, x_2, \ldots, x_i, \ldots, x_n) \qquad (2.1)

with m < n. The prior distribution g puts probability gj on θj,

g = (g1, g2, …, gj, …, gm).    (2.2)

This induces a marginal distribution f on x,

f = (f1, f2, …, fi, …, fn),    (2.3)

with fi = Pr{x = xi}. Letting {pij} represent the sampling probabilities

pij=Pr{xi|θj}, (2.4)

the n × m matrix

P=(pij) (2.5)

produces f from g according to

f=Pg. (2.6)

In the example of Figure 1, we have

θ = (−3, −2.8, …, 3)    (m = 31),    (2.7)

with g(θ) an equal mixture of a discretized 𝒩(0, 0.52) density and a density proportional to |θ|. The sampling probabilities pij are obtained from the normal translation model φ(xi – θj), φ the standard normal density function, and with

x = (−4.4, −4.35, …, 5.2)    (n = 193).    (2.8)

Figure 1. Top: Discrete model, prior g(θ), θ = seq(−3, 3, 0.2); g is an equal mixture of 𝒩(0, 0.5²) and a density ∝ |θ|. Bottom: Corresponding f(x), assuming 𝒩(θ, 1) sampling, x = seq(−4.4, 5.2, 0.05). Note the different scales.

Then f = Pg produces the triangular-shaped marginal density f(x) seen in the bottom panel. Looking ahead, we will want to use samples from the bottom distribution to estimate functions of the top.
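For concreteness, the discrete setup of Figure 1 can be sketched in a few lines of R; the grids and the mixture follow the figure caption, and all object names here are illustrative rather than taken from the original computations.

    theta <- seq(-3, 3, by = 0.2)                            # m = 31 support points for theta
    x     <- seq(-4.4, 5.2, by = 0.05)                       # n = 193 support points for x
    g1 <- dnorm(theta, 0, 0.5); g1 <- g1 / sum(g1)           # discretized N(0, 0.5^2)
    g2 <- abs(theta);           g2 <- g2 / sum(g2)           # density proportional to |theta|
    g  <- 0.5 * g1 + 0.5 * g2                                # equal mixture: the prior g of (2.2)
    P  <- outer(x, theta, function(xi, tj) dnorm(xi - tj))   # p_ij = phi(x_i - theta_j)
    P  <- sweep(P, 2, colSums(P), "/")                       # each column made a discrete density
    f  <- as.vector(P %*% g)                                 # marginal f = P g, as in (2.6)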

In the discrete model (2.1)–(2.6), Bayes rule takes the form

Pr{θj|xi}=pijgj/fi. (2.9)

Letting pi represent the ith row of matrix P, the m-vector of posterior probabilities of θ given x = xi is given by

diag(pi)g / pi′g,    (2.10)

where diag(v) indicates a diagonal matrix with diagonal elements taken from the vector v.

Now suppose t(θ) is a parameter of interest, expressed in our discrete setting by the vector of values

t = (t1, t2, …, tj, …, tm).    (2.11)

The posterior expectation of t(θ) given x = xi is then

E\{t(\theta) \mid x_i\} = \sum_{j=1}^m t_j\, p_{ij}\, g_j \big/ f_i = t'\,\mathrm{diag}(p_i)\, g \big/ p_i' g. \qquad (2.12)
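A minimal R sketch of (2.12), reusing the objects from the sketch above; the parameter vector tvec and the index i are whatever parameter and observation point are of interest, and the names are illustrative.

    post_mean <- function(tvec, i, P, g) {
      pi <- P[i, ]                            # ith row of P
      sum(tvec * pi * g) / sum(pi * g)        # t' diag(p_i) g / p_i' g, as in (2.12)
    }
    # e.g. E{theta | x = 2.5}:
    post_mean(theta, which.min(abs(x - 2.5)), P, g)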

The main role of the discrete model (2.1)–(2.6) is to simplify the presentation of f-modeling begun in Section 3. Basically, it allows the use of familiar matrix calculations rather than functional equations. G-modeling, Section 5, will be presented in both discrete and continuous forms. The prostate data example of Section 6 shows our discrete model nicely handling continuous data.

3 Bayes rule in terms of f

Formula (2.12) expresses E{t(θ)|xi} in terms of the prior distribution g. This is fine for pure Bayesian applications but in empirical Bayes work, information arrives on the x scale and we may need to express Bayes rule in terms of f. We begin by inverting (2.6), f = Pg.

For now assume that the n × m matrix P (2.4)(2.5) is of full rank m. Then the m × n matrix

A = (P'P)^{-1} P' \qquad (3.1)

carries out the inversion,

g=Af. (3.2)

Section 4 discusses the case where rank(P) is less than m. Other definitions of A are possible, see the discussion in Section 7.
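A minimal sketch of this inversion in R, assuming P has full column rank so that P′P is invertible:

    A        <- solve(crossprod(P), t(P))    # A = (P'P)^{-1} P', as in (3.1)
    g_from_f <- as.vector(A %*% f)           # recovers g = A f (3.2), up to numerical error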

With pi denoting the ith row of P as before, let

u' = (t_j\, p_{ij}) = t'\,\mathrm{diag}(p_i), \qquad v' = p_i, \qquad (3.3)

and

U' = u'A, \qquad V' = v'A, \qquad (3.4)

U and V being n-vectors. (Here we are suppressing the subscript i in U = U_i, etc.) Using (3.2), the Bayes posterior expectation E{t|xi} (2.12) becomes

E\{t \mid x_i\} = u'g \big/ v'g = U'f \big/ V'f, \qquad (3.5)

the latter being Bayes rule in terms of f. Notice that U and V do not depend on g or f. The denominator V′f equals f(xi) in (3.5), but not in the regularized versions of Section 4.

In a typical empirical Bayes situation, as in Section 6.1 of Efron (2010), we might observe independent observations X1,X2, …, XN from the marginal density f(x),

X_k \overset{iid}{\sim} f(\cdot), \qquad k = 1, 2, \ldots, N, \qquad (3.6)

and wish to estimate E = E{t|xi}. For the discrete model (2.1), the vector of counts y = (y1, y2, …, yn)′,

yi=#{Xk=xi}, (3.7)

is a nonparametric sufficient statistic; y follows a multinomial distribution on n categories, N draws, probability vector f,

y~Multn(N,f), (3.8)

having mean vector and covariance matrix

y \sim (Nf,\; N\,D(f)), \qquad D(f) \equiv \mathrm{diag}(f) - ff'. \qquad (3.9)

The unbiased estimate of f,

f^=y/N, (3.10)

gives a nonparametric estimate Ê of E{t|xi} by substitution into (3.5),

\hat{E} = U'\hat{f} \big/ V'\hat{f}. \qquad (3.11)

Using f̂ ∼ (f, D(f)/N), a standard differential argument yields the approximate “delta method” frequentist standard error of Ê. Define

U_f = \sum_{i=1}^n f_i U_i, \qquad V_f = \sum_{i=1}^n f_i V_i, \qquad (3.12)

and

W = U/U_f - V/V_f. \qquad (3.13)

(Notice that Σ fiWi = 0.)

Theorem 1. The delta-method approximate standard deviation of Ê = U′f̂/V′f̂ (3.11) is

\mathrm{sd}(\hat{E}) = \frac{1}{\sqrt{N}}\, |E| \cdot \sigma_f(W), \qquad (3.14)

where E = U_f/V_f and

\sigma_f^2(W) = \sum_{i=1}^n f_i W_i^2. \qquad (3.15)

The approximate coefficient of variation sd(Ê)/|E| of Ê is

\mathrm{cv}(\hat{E}) = \sigma_f(W) \big/ \sqrt{N}. \qquad (3.16)

Proof. From (3.5) we compute the joint moments of U and V,

\begin{pmatrix} U'\hat{f} \\ V'\hat{f} \end{pmatrix} \sim \left( \begin{pmatrix} U_f \\ V_f \end{pmatrix},\; \frac{1}{N} \begin{pmatrix} \sigma_f^2(U) & \sigma_f(U,V) \\ \sigma_f(U,V) & \sigma_f^2(V) \end{pmatrix} \right), \qquad (3.17)

with \sigma_f^2(U) = \sum f_i (U_i - U_f)^2, \; \sigma_f(U,V) = \sum f_i (U_i - U_f)(V_i - V_f), and \sigma_f^2(V) = \sum f_i (V_i - V_f)^2.

Then

\hat{E} = \frac{U'\hat{f}}{V'\hat{f}} = E \cdot \frac{1 + \hat\Delta_U}{1 + \hat\Delta_V} \quad \left[ \hat\Delta_U = \frac{U'\hat{f} - U_f}{U_f},\; \hat\Delta_V = \frac{V'\hat{f} - V_f}{V_f} \right] \doteq E \cdot (1 + \hat\Delta_U - \hat\Delta_V), \qquad (3.18)

so sd(Ê)² ≐ E² var(Δ̂U − Δ̂V), which, again using (3.9), gives Theorem 1.
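The quantity σ_f(W), and hence Theorem 1's coefficient of variation, can be evaluated with a short R sketch; it reuses theta, x, P, g, and A from the earlier sketches, and the names are illustrative only.

    cv_nonpar <- function(tvec, i, P, g, A) {
      f <- as.vector(P %*% g)
      u <- tvec * P[i, ]                     # u' = t' diag(p_i), as in (3.3)
      v <- P[i, ]
      U <- as.vector(crossprod(A, u))        # U' = u' A  (3.4)
      V <- as.vector(crossprod(A, v))        # V' = v' A
      W <- U / sum(f * U) - V / sum(f * V)   # W = U/U_f - V/V_f  (3.13)
      sqrt(sum(f * W^2))                     # sigma_f(W); divide by sqrt(N) for cv  (3.16)
    }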

The trouble here, as will be shown, is that sd(Ê) or cv(Ê) may easily become unmanageably large. Empirical Bayes methods require sampling on the x scale, which can be grossly inefficient for estimating functions of θ.

Hypothetically, the Xk’s in (3.6) are the observable halves of pairs (Θ,X),

(\Theta_k, X_k) \overset{ind}{\sim} g(\theta)\, f_\theta(x), \qquad k = 1, 2, \ldots, N. \qquad (3.19)

If the Θk’s had been observed, we could estimate g directly as ḡ = (ḡ1, ḡ2, …, ḡm)′,

g¯j=#{Θk=θj}/N, (3.20)

leading to the direct Bayes estimate

\bar{E} = u'\bar{g} \big/ v'\bar{g}. \qquad (3.21)

Ē would usually be less variable than Ê (3.11) (and would automatically enforce possible constraints on E such as monotonicity in xk). A version of Theorem 1 applies here. Now we define

u_g = \sum_{j=1}^m g_j u_j, \qquad v_g = \sum_{j=1}^m g_j v_j, \qquad \text{and} \qquad w = u/u_g - v/v_g. \qquad (3.22)

Theorem 2. For direct Bayes estimation (3.21), the delta-method approximate standard deviation of Ē is

\mathrm{sd}(\bar{E}) = \frac{1}{\sqrt{N}}\, |E| \cdot \sigma_g(w), \qquad (3.23)

where

\sigma_g^2(w) = \sum_{j=1}^m g_j w_j^2; \qquad (3.24)

Ē has approximate coefficient of variation

\mathrm{cv}(\bar{E}) = \sigma_g(w) \big/ \sqrt{N}. \qquad (3.25)

The proof of Theorem 2 is the same as that for Theorem 1.

Table 1 concerns the estimation of E{t(θ)|x = 2.5} for the situation shown in Figure 1. Three different parameters t(θ) are considered:

(1)\; t(\theta) = \theta, \qquad (2)\; t(\theta) = \theta^2, \qquad (3)\; t(\theta) = \begin{cases} 1 & \text{if } \theta \le 0 \\ 0 & \text{if } \theta > 0. \end{cases} \qquad (3.26)

Table 1.

Standard deviation and coefficient of variation of E{t(θ)|x = 2.5} (for N = 1); for the three parameters (3.26), with g and f as in Figure 1; sdf from Theorem 1 (3.14); sdd for direct Bayes estimation, Theorem 2 (3.23); sdx from the regularized f-modeling of Section 4, Theorem 3 (4.8).

                              N^(1/2)·sd                 N^(1/2)·cv
 t(θ)           E{t|x = 2.5}  sdf     sdd     sdx        cvf     cvd     cvx
 parameter (1)  2.00          8.74    3.38    2.83       4.4     1.7     1.4
 parameter (2)  4.76          43.4    13.7    10.4       9.1     2.9     2.2
 parameter (3)  0.03          43.9    0.53    1.24       1371    16      39

In the third case, E{t(θ)|x} = Pr{θ ≤ 0|x}. The entry cvf is √N·cv(Ê) (3.16), so cvf/√N is the approximate coefficient of variation of Ê, the nonparametric empirical Bayes estimate of E{t(θ)|x = 2.5}. The entry cvd is the corresponding quantity (3.25), available only if we could directly observe the Θk values in (3.19), while cvx refers to a regularized version of Ê described in the next section.

Suppose we wish to bound cv(Ê) below some prespecified value c0, perhaps c0 = 0.1. Then according to (3.16) we need N to equal

N = (\mathrm{cv}_1/c_0)^2, \qquad (3.27)

where cv1 is the numerator σf(W) of (3.16), e.g., cvf in Table 1. For the three parameters (3.26) and for c0 = 0.1, we would require N = 1936, 8281, and 187 million respectively.

The vector W for parameter (3) is seen to take on enormous values in Figure 2, resulting in σf(W) = 1370.7 for (3.16). The trouble stems from the abrupt discontinuity of t3 at θ = 0, which destabilizes U in (3.13). Definition (3.4) implies U′P = u′. This says that U′ must linearly compose u′ from the rows of P. But in our example the rows of P are smooth functions of the form φ(xi – θj), forcing the violent cycling of U seen in Figure 2. Section 4 discusses a regularization method that greatly improves the accuracy of using “Bayes rule in terms of f.”

Figure 2. W vector (3.13) for f-Bayes estimation of Pr{θ ≤ 0 | x = 2.5} for the model of Figure 1 (actually W12 as in Section 4; the dashed curve is W9).

Table 1 shows that if we could sample on the θ scale, as in (3.20), we would require “only” 25,600 Θk observations to achieve coefficient of variation 0.1 for estimating Pr{θ≤0|x = 2.5}; direct sampling is almost always more efficient than f sampling, but that is not the way empirical Bayes situations present themselves. The efficiency difference is a factor of 86 for parameter (3), but less than a factor of 3 for parameter (1), t(θ) = θ. The latter is a particularly favorable case for empirical Bayes estimation, as discussed in Section 6.

The assumption of independent sampling, (3.6) and (3.19), is a crucial element of all our results. Independence assumptions (often tacitly made) dominate the empirical Bayes literature, as in Muralidharan et al. (2012), Zhang (1997), Morris (1983), and Efron and Morris (1975). Non-independence reduces the effective sample size N; see Chapter 8 of Efron (2010). This point is brought up again in Section 6.

4 Regularized f-modeling

Fully nonparametric estimation of E = E{t(θ)|x} is sometimes feasible but, as seen in Table 1 of Section 3, it can become unacceptably noisy. Some form of regularization is usually necessary. A promising approach is to estimate f parametrically according to a smooth low-dimensional model.

Suppose then that we have such a model, yielding f̂ as an estimate of f (2.3), with mean vector and covariance matrix

f^~(f,Δ(f)/N). (4.1)

In the nonparametric case (3.9) Δ(f) = D(f), but we expect that we can reduce Δ(f) parametrically. In any case, the delta-method approximate coefficient of variation for Ê= U′f̂/V′f̂ (3.11) is given in terms of W (3.13):

\mathrm{cv}(\hat{E}) = \{W' \Delta(f)\, W / N\}^{1/2}. \qquad (4.2)

This agrees with (3.16) in the nonparametric situation (3.9) where Δ(f) = diag(f) – ff′. The verification of (4.2) is almost identical to that for Theorem 1.

Poisson regression models are convenient for the smooth parametric estimation of f. Beginning with an n × p structure matrix X, having rows xi for i = 1, 2, …, n, we assume that the components of the count vector y (3.7) are independent Poisson observations,

y_i \overset{ind}{\sim} \mathrm{Poi}(\mu_i), \qquad \mu_i = e^{x_i'\alpha} \quad \text{for } i = 1, 2, \ldots, n, \qquad (4.3)

where α is an unknown vector of dimension p. Matrix X is assumed to have as its first column a vector of 1's.

Let \mu_+ = \sum_1^n \mu_i and N = \sum_1^n y_i, and define

f_i = \mu_i/\mu_+ \quad \text{for } i = 1, 2, \ldots, n. \qquad (4.4)

Then a well-known Poisson/multinomial relationship says that the conditional distribution of y given N is

y|N~Multn(N,f) (4.5)

as in (3.8). Moreover, under mild regularity conditions, the estimate f̂ = y/N has asymptotic mean vector and covariance matrix (as µ+ → ∞)

f^~˙(f,Δ(f)/N), (4.6)

where

\Delta(f) = \mathrm{diag}(f)\, X\, G_f^{-1} X'\, \mathrm{diag}(f) \qquad [\, G_f = X'\, \mathrm{diag}(f)\, X \,]; \qquad (4.7)

(4.6)–(4.7) are derived from standard generalized linear model calculations. Combining (4.2) and (4.6) gives a Poisson regression version of Theorem 1.

Theorem 3. The delta-method coefficient of variation for Ê = U′f̂/V′f̂ under Poisson model (4.3) is

\mathrm{cv}(\hat{E}) = \left\{ (WX)_f\, (XX)_f^{-1}\, (WX)_f' \,\big/\, N \right\}^{1/2}, \qquad (4.8)

where

(WX)_f = W'\, \mathrm{diag}(f)\, X \quad \text{and} \quad (XX)_f = X'\, \mathrm{diag}(f)\, X, \qquad (4.9)

with W as in (3.13).

The bracketed term in (4.8), times N, is recognized as the squared length of the projection of W into the p-dimensional space spanned by the columns of X, carried out using inner product ⟨a, b⟩_f = Σ fiaibi. In the nonparametric case, X equals the identity I, and (4.8) reduces to (3.16). As in (3.14), sd(Ê) is approximated by |E| cv(Ê). (Note: Theorem 3 remains valid as stated if a multinomial model for f̂ replaces the Poisson calculations in (4.7).)
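In R, Theorem 3's coefficient of variation can be sketched as the projected length just described. Here W is from (3.13) or (4.17), f is the fitted marginal, and X is the n × p structure matrix (for instance cbind(1, splines::ns(x, df = 5))); the function is a sketch under those assumptions, not the paper's own code.

    cv_poisson <- function(W, f, X, N = 1) {
      WXf <- crossprod(W * f, X)                          # (WX)_f = W' diag(f) X   (4.9)
      XXf <- crossprod(X, f * X)                          # (XX)_f = X' diag(f) X
      sqrt(as.numeric(WXf %*% solve(XXf, t(WXf))) / N)    # coefficient of variation (4.8)
    }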

Cvx in Table 1 was calculated as in (4.8), with N = 1. The structure matrix X for the example in Figure 1 was obtained from the R natural spline function ns(x, df = 5); including a column of 1’s made X 193 × 6. The improvements over cvf, the nonparametric coefficients of variation, were by factors of 3, 5, and 100 for the three parameters (3.26).

The regularization in Theorem 3 takes place with respect to f and f̂. Good performance also requires regularization of the inversion process ĝ = Af̂ (3.2). Going back to the beginning of Section 3, let

P = L\, D\, R' \qquad (4.10)

represent the singular value decomposition of the n × m matrix P, with L the n × m orthonormal matrix of left singular vectors, R the m × m orthonormal matrix of right singular vectors, and D the m × m diagonal matrix of singular values,

d_1 \ge d_2 \ge \cdots \ge d_m. \qquad (4.11)

Then it is easy to show that the m × n matrix

A = R\, D^{-1} L' \qquad (4.12)

is the pseudo-inverse of P, which is why we could go from f = Pg to g = Af at (3.2). (Other pseudo-inverses exist, see (7.1).)

Definition (4.12) depends on P being of full rank m, equivalently having dm > 0 in (4.11). Whether or not this is true, very small values of dj will destabilize A. The familiar cure is to truncate representation (4.12), lopping off the end terms of the singular value decomposition. If we wish to stop after the first r terms, we define Rr to be the first r columns of R, Lr the first r columns of L, Dr the r × r diagonal matrix diag(d1, d2, …, dr), and

A_r = R_r\, D_r^{-1} L_r'. \qquad (4.13)

In fact, r = 12 was used in Figure 2 and Table 1, chosen to make

\sum_{r+1}^m d_j^2 \Big/ \sum_1^m d_j^2 < 10^{-10}. \qquad (4.14)
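A hedged R sketch of the truncated pseudo-inverse (4.13) and the cutoff rule (4.14), assuming the matrix P built earlier; names are illustrative.

    s <- svd(P)                                             # P = L D R'; s$u = L, s$d = singular values, s$v = R
    tail_ratio <- rev(cumsum(rev(s$d^2))) / sum(s$d^2)      # tail_ratio[j] = sum_{k >= j} d_k^2 / sum d_k^2
    r  <- which(tail_ratio < 1e-10)[1] - 1                  # smallest r satisfying (4.14); NA if none does
    Ar <- s$v[, 1:r] %*% diag(1 / s$d[1:r]) %*% t(s$u[, 1:r])   # A_r = R_r D_r^{-1} L_r'  (4.13)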

As in (3.1)–(3.13), let

U_r' = u'A_r, \qquad V_r' = v'A_r \qquad (4.15)

(u and v stay the same as before),

E_r = U_r'f \big/ V_r'f, \qquad \hat{E}_r = U_r'\hat{f} \big/ V_r'\hat{f}, \qquad (4.16)

and

W_r = \frac{U_r}{\sum f_i U_{ri}} - \frac{V_r}{\sum f_i V_{ri}}. \qquad (4.17)

Theorem 3 then remains valid, with Wr replacing W. Note: Another regularization method, which will not be pursued here, is the use of ridge regression rather than truncation in the inversion process (3.2), as in Hall and Meister (2007).

Reducing r reduces Wr, hence reducing (4.9) and the approximate coefficient of variation of Êr. The reduction can be dramatic. W9 almost disappears compared to W12 in Figure 2. Table 2 compares various choices of r for parameters (1) and (3) (3.26). The choice turns out to be unimportant for parameter (1) and crucial for parameter (3).

Table 2.

Coefficient of variation and standard deviation (N = 1), for E{t|x = 2.5} as in Table 1; now using Poisson regression in Theorem 3, with X based on a natural spline with 5 degrees of freedom. Increasing choice of r, (4.13)–(4.17), decreases bias but increases variability of Ê for parameter (3); g error from (4.20).

                  Parameter (1)               Parameter (3)
  r    g error    Er      cvx     sdx         Er      cvx         sdx
  3    .464       1.75    1.00    1.75        .021    3.6         .1
  6    .254       2.00    1.34    2.68        .027    4.6         .1
  9    .110       2.00    1.36    2.73        .031    8.2         .3
 12    .067       2.00    1.41    2.83        .032    38.6        1.2
 15    .024       2.00    1.39    2.78        .033    494.0       16.1
 18    .012       2.00    1.39    2.78        .033    23820.8     783.8
 21    .006       2.00    1.40    2.80        .033    960036.4    31688.8

Why not always choose a small value of r? The trouble lies in possible bias for the estimation of E = E{t|x}. Rather than the crucial inverse mapping g = Af (3.2), we get an approximation

g_r = A_r f = A_r P g = R_r D_r^{-1} L_r'\, L D R'\, g = R_r R_r'\, g \qquad (4.18)

(the last step following from L D R' = L_r D_r R_r' + L_{(r)} D_{(r)} R_{(r)}', with L_{(r)} indicating the last m − r columns of L, etc.; (4.18) says that g_r is the projection of g into the linear space spanned by the first r columns of R). Then, looking at (4.15)–(4.16),

E_r = U_r'f \big/ V_r'f = u'g_r \big/ v'g_r, \qquad (4.19)

possibly making Êr badly biased for estimating E = u′g/v′g.

The Er columns of Table 2 show that bias is a problem only for quite small values of r. However, the example of Figure 1 is “easy” in the sense that the true prior g is smooth, which allows gr to rapidly approach g as r increases, as pictured in Figure 3. The g error column of Table 2 shows this numerically in terms of the absolute error

g\ \mathrm{error} = \sum_{j=1}^m |g_{rj} - g_j|. \qquad (4.20)

Figure 3. Approximation gr (4.18) with r = 6, 9, 12 for the g of Figure 1; the heavy blue curve is g.

A more difficult case is illustrated in Figure 4. Here g is a mixture: 90% of a delta function at θ = 0 and 10% of a uniform distribution over the 31 points θj in θ = (−3, −2.8, …, 3); P and x are as before. Now the g error exceeds 1.75 even for r = 21; gr puts too small a weight on θ = 0, while bouncing around erratically for θ ≠ 0, often going negative.

Figure 4. True g = 0.90·δ(0) + 0.10·uniform (heavy curve); approximations gr (4.18) for r = 6, 9, 12, 15, 18, 21, as labeled.

We expect, correctly, that empirical Bayes estimation of E{t(θ)|x} will usually be difficult for the situation of Figure 4. This is worrisome since its g is a reasonable model for familiar false discovery rate analyses, but see Section 6. Section 5 discusses a different regularization approach that ameliorates, without curing, the difficulties seen here.

5 Modeling the prior distribution g

The regularization methods of Section 4 involved modeling f, the marginal distribution (2.3) on the x-space, for example by Poisson regression in Table 2. Here we discuss an alternative strategy: modeling g, the prior distribution (2.2) on the θ-space. This has both advantages and disadvantages, as will be discussed.

We begin with an m × q model matrix Q, which determines g according to

g(\alpha) = e^{Q\alpha - \mathbf{1}_m \phi(\alpha)} \qquad \left[ \phi(\alpha) = \log \sum_{1}^{m} e^{Q_j \alpha} \right] \qquad (5.1)

(For v = (v1, v2, …, vm), e^v denotes a vector with components e^{vj}; 1m is a vector of m 1’s, indicating in (5.1) that φ(α) is subtracted from each component of Qα.) Here α is the unknown q-dimensional natural parameter of exponential family (5.1), which determines the prior distribution g = g(α). In an empirical Bayes framework, g gives f = Pg (2.6), and the statistician then observes a multinomial sample y of size N from f as in (3.8),

y~Multn(N,Pg(α)), (5.2)

from which inferences about g are to be drawn.
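A small R sketch of the prior family (5.1); Q is the m × q model matrix and alpha the natural parameter, with names chosen for illustration.

    g_of_alpha <- function(Q, alpha) {
      eta <- as.vector(Q %*% alpha)
      w   <- exp(eta - max(eta))     # subtracting the max is a standard numerical safeguard
      w / sum(w)                     # equals exp(Q alpha - 1_m phi(alpha)), phi = log sum exp
    }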

Model (5.1)–(5.2) is not an exponential family in y, a theoretical disadvantage compared to the Poisson modeling of Theorem 3. (It is a curved exponential family, Efron, 1975.) We can still pursue an asymptotic analysis of its frequentist accuracy. Let

D(g) \equiv \mathrm{diag}(g) - gg', \qquad (5.3)

the covariance matrix of a single random draw Θ from distribution g, and define

Qα=D(g(α))Q. (5.4)

Lemma 1. The Fisher information matrix for estimating α in model (5.1)(5.2) is

\mathcal{I} = N\, Q_\alpha' P'\, \mathrm{diag}\bigl(1/f(\alpha)\bigr)\, P\, Q_\alpha, \qquad (5.5)

where P is the sampling density matrix (2.5), and f (α) = Pg(α).

Proof. Differentiating log g in (5.1) gives the m × q derivative matrix d log gi/dαk,

\frac{d \log g}{d\alpha} = \bigl[\, I - \mathbf{1}_m\, g(\alpha)' \,\bigr]\, Q, \qquad (5.6)

so

\frac{dg}{d\alpha} = \mathrm{diag}\bigl(g(\alpha)\bigr)\, \frac{d \log g}{d\alpha} = D\bigl(g(\alpha)\bigr)\, Q = Q_\alpha. \qquad (5.7)

This yields df/dα = PQα and

\frac{d \log f}{d\alpha} = \mathrm{diag}\!\left( \frac{1}{f(\alpha)} \right) P\, Q_\alpha. \qquad (5.8)

The log likelihood from multinomial sample (5.2) is

l_\alpha(y) = y' \log f(\alpha) + \text{constant}, \qquad (5.9)

giving score vector

\frac{d\, l_\alpha(y)}{d\alpha} = y'\, \frac{d \log f}{d\alpha}. \qquad (5.10)

Since y has covariance matrix N(diag(f) − ff′) (3.9), ℐ, the covariance matrix of the score vector, equals

\mathcal{I} = N\, Q_\alpha' P'\, \mathrm{diag}(1/f)\, (\mathrm{diag}(f) - ff')\, \mathrm{diag}(1/f)\, P\, Q_\alpha = N\, Q_\alpha' P' \bigl( \mathrm{diag}(1/f) - \mathbf{1}_n \mathbf{1}_n' \bigr) P\, Q_\alpha. \qquad (5.11)

Finally

\mathbf{1}_n' P\, Q_\alpha = \mathbf{1}_m' D(g(\alpha))\, Q = 0' Q = 0 \qquad (5.12)

(using the fact that the columns of P sum to 1), and (5.11) yields the lemma.

Standard sampling theory says that the maximum likelihood estimate (MLE) α̂ has approximate covariance matrix ℐ⁻¹, and that ĝ = g(α̂) has approximate covariance, from (5.7),

\mathrm{cov}(\hat{g}) = Q_\alpha\, \mathcal{I}^{-1}\, Q_\alpha'. \qquad (5.13)

Lemma 2. The approximate covariance matrix for the maximum likelihood estimate g(α̂) of g in model (5.1)(5.2) is

\mathrm{cov}(\hat{g}) = \frac{1}{N}\, Q_\alpha \Bigl[\, Q_\alpha' P'\, \mathrm{diag}\bigl(1/f(\alpha)\bigr)\, P\, Q_\alpha \,\Bigr]^{-1} Q_\alpha'. \qquad (5.14)

If we are interested in a real-valued parameter τ = T(g), the approximate standard deviation of its MLE τ̂ = T(g(α̂)) is

\mathrm{sd}(\hat{\tau}) = \bigl[\, \dot{T}'\, \mathrm{cov}(\hat{g})\, \dot{T} \,\bigr]^{1/2}, \qquad (5.15)

where Ṫ is the gradient vector dT/dg, evaluated at ĝ. When T(g) is the conditional expectation of a parameter t(θ) (3.5),

T(g) = E\{t(\theta) \mid x = x_i\} = u'g \big/ v'g, \qquad (5.16)

we compute

\dot{T}(g) = w = (u/u'g) - (v/v'g) \qquad (5.17)

(3.22), and get the following.

Theorem 4. Under model (5.1)(5.2), the MLE Ê of E{t(θ)|x = xi} has approximate standard deviation

\mathrm{sd}(\hat{E}) = |E|\, \bigl[\, w'\, \mathrm{cov}(\hat{g})\, w \,\bigr]^{1/2}, \qquad (5.18)

with w as in (5.17) and cov(ĝ) from (5.14).
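A hedged R sketch assembling (5.3)–(5.18): it computes cov(ĝ) from Lemma 2 and then Theorem 4's sd(Ê) for a parameter vector tvec at observation index i, reusing g_of_alpha from the sketch above. All names are illustrative.

    sd_gmodel <- function(tvec, i, P, Q, alpha, N = 1) {
      g    <- g_of_alpha(Q, alpha)
      f    <- as.vector(P %*% g)
      Dg   <- diag(g) - outer(g, g)                        # D(g), (5.3)
      Qa   <- Dg %*% Q                                     # Q_alpha, (5.4)
      Info <- N * t(Qa) %*% t(P) %*% (P * (1 / f)) %*% Qa  # Fisher information (5.5)
      covg <- Qa %*% solve(Info, t(Qa))                    # cov(g-hat), (5.14)
      u <- tvec * P[i, ]; v <- P[i, ]
      w <- u / sum(u * g) - v / sum(v * g)                 # gradient w, (5.17)
      E <- sum(u * g) / sum(v * g)
      abs(E) * sqrt(as.numeric(t(w) %*% covg %*% w))       # sd(E-hat), (5.18)
    }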

We can now compare sd(Ê) from g-modeling (5.18) with the corresponding f-modeling results of Theorem 3. Figure 5 does this with parameters (1) and (3) (3.26) for the example of Figure 1. Theorem 3, modified as at (4.17) with r = 12, represents f-modeling, now with X based on ns(x, 6), a natural spline with six degrees of freedom. Similarly for g-modeling, Q = ns(θ, 6) in (5.1); α was chosen to make g(α) very close to the upper curve in Figure 1. (Doing so required six rather than five degrees of freedom.)

Figure 5. Top: standard deviation of E{t|x} as a function of x, for parameter (1), t(θ) = θ (with N = 1); f-modeling (solid), g-modeling (dashed). Bottom: the same for parameter (3), t(θ) = 1 or 0 as θ ≤ 0 or θ > 0; natural spline models, df = 6, for both calculations.

The upper panel of Figure 5 shows f-modeling yielding somewhat smaller standard deviations for parameter (1), t(θ) = θ. This is an especially favorable case for f-modeling, as discussed in Section 6. However, for parameter (3), E = Pr{θ ≤ 0|x}, g-modeling is far superior. Note: in exponential families, curved or not, it can be argued that the effective degrees of freedom of a model equals its number of free parameters; see Remark D of Efron (2004). The models used in Figure 5 each have six parameters, so in this sense the comparison is fair.

Parametric g-space modeling, as in (5.1), has several advantages over the f-space modeling of Section 4:

Constraints. ĝ = exp(Qα̂ − 1mφ(α̂)) has all coordinates positive, unlike the estimates seen in Figure 4. Other constraints, such as monotonicity or convexity, that may be imposed by the structure of P are automatically enforced, as discussed in Chapter 3 of Carlin and Louis (2000).

Accuracy. With some important exceptions, discussed in Section 6, g-modeling often yields smaller values of sd(Ê), as typified in the bottom panel of Figure 5. This is particularly true for discontinuous parameters t(θ), such as parameter (3) in Table 1.

Simplicity. The bias/variance trade-offs involved with the choice of r in Section 4 are avoided, and in fact there is no need for “Bayes rule in terms of f.”

Continuous formulation. It is straightforward to translate g-modeling from the discrete framework (2.1)–(2.4) into more familiar continuous language. Exponential family model (5.1) now becomes

g_\alpha(\theta) = e^{q(\theta)\alpha - \phi(\alpha)} \qquad \left[\, \phi(\alpha) = \log \int e^{q(\theta)\alpha}\, d\theta \,\right], \qquad (5.19)

where q(θ) is a smoothly defined 1×q vector function of θ. Letting fθ(x) denote the sampling density of x given θ, define

h(x) = \int f_\theta(x)\, g(\theta)\, \bigl(q(\theta) - \bar{q}\bigr)\, d\theta \qquad \left[\, \bar{q} = \int g(\theta)\, q(\theta)\, d\theta \,\right]. \qquad (5.20)

Then the q × q information matrix ℐ (5.5) is

\mathcal{I} = N \int \left[ \frac{h(x)'\, h(x)}{f(x)^2} \right] f(x)\, dx \qquad \left[\, f(x) = \int g(\theta)\, f_\theta(x)\, d\theta \,\right]. \qquad (5.21)

A posterior expectation E = E{t(θ)|x} has MLE

\hat{E} = \int t(\theta)\, f_\theta(x)\, g_{\hat\alpha}(\theta)\, d\theta \Big/ \int f_\theta(x)\, g_{\hat\alpha}(\theta)\, d\theta. \qquad (5.22)

An influence function argument shows that E has gradient

\frac{dE}{d\alpha} = E \int z(\theta)\, \bigl(q(\theta) - \bar{q}\bigr)\, d\theta, \qquad (5.23)

with

z(\theta) = \frac{t(\theta)\, f_\theta(x)\, g_\alpha(\theta)}{\int t(\varphi)\, f_\varphi(x)\, g_\alpha(\varphi)\, d\varphi} - \frac{f_\theta(x)\, g_\alpha(\theta)}{\int f_\varphi(x)\, g_\alpha(\varphi)\, d\varphi}. \qquad (5.24)

Then the approximate standard deviation of Ê is

\mathrm{sd}(\hat{E}) = \left( \frac{dE}{d\alpha}\, \mathcal{I}^{-1}\, \frac{dE}{d\alpha}' \right)^{1/2}, \qquad (5.25)

combining (5.21)(5.24). (Of course the integrals required in (5.25) would usually be done numerically, implicitly returning us to discrete calculations!)

Modeling the prior. Modeling on the g-scale is convenient for situations where the statistician has qualitative knowledge concerning the shape of the prior g. As a familiar example, large-scale testing problems often have a big atom of prior probability at θ = 0, corresponding to the null cases. We can accommodate this by including in model matrix Q (5.1) a column e0 = (0, 0, …, 0, 1, 0, …, 0)′, with the 1 at θ = 0.

Such an analysis was carried out for the situation in Figure 4, where the true g equaled 0.9·e0 + 0.1·uniform. Q was taken to be the natural spline basis ns(θ, 5) augmented by column e0, a 31 × 6 matrix. Table 3 shows the results for t = e0, that is, for

E=E{t|x}=Pr{θ=0|x}. (5.26)

Table 3.

Estimating E = Pr{θ = 0|x} in the situation of Figure 4, using g-modeling (5.1) with Q equal to ns(θ, 5) augmented with a column putting a delta function at θ = 0. Here sd is sd(Ê) (5.25), and cv is the coefficient of variation sd/E. (For sample size N, divide entries by N^(1/2).)

x            −4      −3      −2      −1      0       1       2       3       4
E            .04     .32     .78     .94     .96     .94     .78     .32     .04
N^(1/2)·sd   .95     3.28    9.77    10.64   9.70    10.48   9.92    3.36    .75
N^(1/2)·cv   24.23   10.39   12.53   11.38   10.09   11.20   12.72   10.65   19.21

The table gives E and sd(Ê) (5.18) for x = −4, −3, …, 4 (N = 1), as well as the coefficient of variation sd(Ê)/E.

The results are not particularly encouraging: we would need sample sizes N on the order of 10,000 to expect reasonably accurate estimates Ê (3.27). On the other hand, f-modeling as in Section 4 is hopeless here. Section 6 has more to say about false discovery rate estimates (5.26).

A random sample of N = 5000 X values was drawn from the distribution f = Pg corresponding to the true g in Figure 4 (with P based on the normal density φ(xi – θj) as before), giving count vector y (3.7). Numerical maximization yielded α̂, the MLE in model (5.1)–(5.2), with Q as in Table 3. The estimate ĝ = g(α̂) put probability 0.920 at θ = 0, compared to true value 0.903, with nonnull distribution as shown in Figure 6. The nonnull peaks at θ = ±2 were artifacts of the estimation procedure. On the other hand, ĝ correctly put roughly equal nonnull probability above and below 0. This degree of useful but crude inference should be kept in mind for the genuine data examples of Section 6, where the truth is unknown.
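A sketch of how such a fit might be coded in R, along the lines just described: y is the binned count vector (3.7), P and Q are as above, and nlm is applied to the negative multinomial log likelihood (5.9), reusing g_of_alpha from earlier. This is illustrative only; as noted, the optimizer's control parameters may need care.

    neg_loglik <- function(alpha, y, P, Q) {
      f <- as.vector(P %*% g_of_alpha(Q, alpha))
      -sum(y * log(f))                          # negative of (5.9), dropping the constant
    }
    fit       <- nlm(neg_loglik, p = rep(0, ncol(Q)), y = y, P = P, Q = Q)
    alpha_hat <- fit$estimate
    g_hat     <- g_of_alpha(Q, alpha_hat)       # MLE g(alpha-hat)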

Figure 6. MLE nonnull distribution, estimated from a sample of N = 5000 X values drawn from the f corresponding to the true g of Figure 4; the estimated atom at θ = 0 was 0.92.

Our list of g-modeling advantages raises the question of why f-modeling has dominated empirical Bayes applications. The answer — that a certain class of important problems is more naturally considered in the f domain — is discussed in the next section. Theoretically, as opposed to practically, g-modeling has played a central role in the empirical Bayes literature.

Much of that work involves the nonparametric maximum likelihood estimation of the prior distribution g(θ), some notable references being Laird (1978), Zhang (1997), and Jiang and Zhang (2009). Parametric g-modeling, as discussed in Morris (1983) and Casella (1985), has been less well-developed. A large part of the effort has focused on the “normal-normal” situation, normal priors with normal sampling errors, as in Efron and Morris (1975), and other conjugate situations. Chapter 3 of Carlin and Louis (2000) gives a nice discussion of parametric empirical Bayes methods, including binomial and Poisson examples.

6 Classic empirical Bayes applications

Since its post-war emergence (Good and Toulmin, 1956; James and Stein, 1961; Robbins, 1956), empirical Bayes methodology has focused on a small set of specially structured situations: ones where certain Bayesian inferences can be computed simply and directly from the marginal distribution of the observations on the x-space. There is no need for g-modeling in this framework, or for that matter any calculation of ĝ at all. False discovery rates and the James–Stein estimator fall into this category, along with related methods discussed in what follows. Though g-modeling is unnecessary here, it will still be interesting to see how it performs on the classic problems.

Robbins' Poisson estimation example exemplifies the classic empirical Bayes approach: independent but not identically distributed Poisson variates

X_k \overset{ind}{\sim} \mathrm{Poi}(\Theta_k), \qquad k = 1, 2, \ldots, N, \qquad (6.1)

are observed, with the Θk's notionally drawn from some prior g(θ). Applying Bayes rule with the Poisson kernel e−θθx/x! shows that

E{θ|x}=(x+1)fx+1/fx, (6.2)

where f = (f1, f2, …) is the marginal distribution of the X's. (This is an example of (3.5), Bayes rule in terms of f, defining ei = (0, 0, …, 1, 0, …, 0)′ with 1 in the ith place, U = (x + 1)ex+1, and V = ex.) Letting f̂ = (f̂1, f̂2, …) be the nonparametric MLE (3.10), Robbins' estimate is the “plug-in” choice

E^{θ|x}=(x+1)f^x+1/f^x, (6.3)

as in (3.11). Brown et al. (2013) use various forms of semi-parametric f-modeling to improve on (6.3).
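A minimal R sketch of the plug-in rule (6.3), where X is the vector of observed Poisson counts; the function name is illustrative, and bins with zero observed counts make the raw estimate unstable, which is one motivation for the smoothing just mentioned.

    robbins <- function(X) {
      xs   <- 0:max(X)
      fhat <- sapply(xs, function(x) mean(X == x))   # nonparametric f-hat, as in (3.10)
      (xs + 1) * c(fhat[-1], 0) / fhat               # (x + 1) f-hat(x+1) / f-hat(x), as in (6.3)
    }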

The prehistory of empirical Bayes applications notably includes the missing species problem; see Section 11.5 of Efron (2010). This has the Poisson form (6.1), but with an inference different from (6.2) as its goal. Fisher, Corbet and Williams (1943) employed parameterized f-modeling as in Section 4, with f the negative binomial family. Section 3.2.1 of Carlin and Louis (2000) follows the same route for improving Robbins' estimator (6.3).

Tweedie’s formula (Efron, 2011) extends Robbins-type estimation of E{θ|x} to general exponential families. For the normal case

θ~g(·) and x|θ~𝒩(θ,1), (6.4)

Tweedie's formula is

E\{\theta \mid x\} = x + l'(x) \qquad \text{where } l'(x) = \frac{d}{dx}\log f(x), \qquad (6.5)

with f(x) the marginal distribution of X. As in (6.2), the marginal distribution of X determines E{θ|x}, without any specific reference to the prior g(θ).

Given observations Xk from model (6.4),

X_k \sim \mathcal{N}(\Theta_k, 1) \quad \text{for } k = 1, 2, \ldots, N, \qquad (6.6)

the empirical Bayes estimation of E{θ|x} is conceptually straightforward: a smooth estimate f̂(x) is obtained from the Xk’s, and its logarithm l̂(x) differentiated to give

\hat{E}\{\theta \mid x\} = x + \hat{l}'(x), \qquad (6.7)

again without explicit reference to the unknown g(θ). Modeling here is naturally done on the x-scale. (It is not necessary for the Xk's to be independent in (6.6) or (6.1), although dependence decreases the accuracy of Ê; see Theorem 8.4 of Efron (2010).)

Figure 7 concerns an application of Tweedie's formula to the prostate data, the output of a microarray experiment comparing 52 prostate cancer patients with 50 healthy controls (Efron, 2010, Sect. 2.1). The genetic activity of N = 6033 genes was measured for each man. Two-sample tests comparing patients with controls yielded z-values for each gene, X1, X2, …, XN, theoretically satisfying

Xk~𝒩(0,1) (6.8)

under the null hypothesis that gene k is equally active in both groups. Of course the experimenters were searching for activity differences, which would manifest themselves as unusually large values |Xk|. Figure 2.1 of Efron (2010) shows the histogram of the Xk values, looking somewhat like a long-tailed version of a 𝒩 (0, 1) density.

Figure 7. Prostate data. Left panel: estimates of E{θ|x} from Tweedie's formula (solid curve), f-modeling (circles), and g-modeling (dots). Right panel: standard deviations of Ê{θ|x} for the Tweedie estimates (dots), f-modeling (dashed curve), and g-modeling (solid curve); the reversals at the far right are computational artifacts.

The “smooth estimate” f̂(x) needed for Tweedie's formula (6.7) was calculated by Poisson regression, as in (4.3)–(4.7). The 6033 Xk values were put into 193 equally spaced bins, centered at x1, x2, …, x193, chosen as in (2.8), with yi being the number in bin i. A Poisson generalized linear model (4.3) then gave MLE f̂ = (f̂1, f̂2, …, f̂193). Here the structure matrix X was the natural spline basis ns(x, df = 5) augmented with a column of 1's. Finally, the smooth curve f̂(x) was numerically differentiated to give l̂′(x) = f̂′(x)/f̂(x) and Ê = x + l̂′(x).
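A hedged R sketch of that pipeline (bin, fit a Poisson spline regression, differentiate the fitted log density); the bin grid and degrees of freedom follow the description above, the z-values are assumed to lie inside the binning range, and the code is illustrative rather than the original computation.

    library(splines)
    bins <- seq(-4.4, 5.2, by = 0.05)                        # 193 bin centers, as in (2.8)
    y    <- as.vector(table(cut(X, breaks = c(bins - 0.025, max(bins) + 0.025))))
    fit  <- glm(y ~ ns(bins, df = 5), family = poisson)      # Poisson regression (4.3)
    lhat <- splinefun(bins, log(fitted(fit)))                # smooth log f-hat, up to an additive constant
    Ehat <- bins + lhat(bins, deriv = 1)                     # Tweedie estimate (6.7); the constant drops out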

Tweedie's estimate Ê{θ|x} (6.7) appears as the solid curve in the left panel of Figure 7. It is nearly zero between −2 and 2, indicating that a large majority of genes obey the null hypothesis (6.8) and should be estimated to have θ = 0. Gene 610 had the largest observed z-value, X610 = 5.29, and corresponding Tweedie estimate 4.09.

For comparison, Ê{θ|x} was recalculated both by f-modeling as in Section 4 and by g-modeling as in Section 5 (with the discrete sampling distributions (2.4)–(2.6) approximated by Xk ~ 𝒩(Θk, 1), Θk being the “true effect size” for gene k). F-modeling used X and f̂ as just described, giving Êf = U′r f̂/V′r f̂, with Ur and Vr as in (4.19) and r = 12. G-modeling took θ = (−3, −2.8, …, 3) and Q = (ns(θ, 5), 1), yielding ĝ = g(α̂) as the MLE from (5.1)–(5.2). (The R nonlinear maximizer nlm was used to find α̂; some care was needed in choosing the control parameters of nlm. We are paying for the fact that the g-modeling likelihood (5.2) is not an exponential family.) Then the estimated posterior expectation Êg was calculated by applying Bayes rule with prior ĝ. Both Êf and Êg closely approximated the Tweedie estimate.

Standard deviation estimates for Êf (dashed curve, from Theorem 3 with f̂ replacing f in (4.9)) and Êg (solid curve, from Theorem 4) appear in the right panel of Figure 7; f-modeling gives noticeably lower standard deviations for E{θ|x} when |x| is large.

The large dots in the right panel of Figure 7 are bootstrap standard deviations for the Tweedie estimates Ê{θ|x}, obtained from B = 200 nonparametric bootstrap replications, resampling the N = 6033 Xk values. These closely follow the f-modeling standard deviations. In fact Ê*f, the bootstrap replications of Êf, closely matched Ê* for the corresponding Tweedie estimates on a case-by-case comparison of the 200 simulations. That is, Êf is numerically just about the same as the Tweedie estimate, though it is difficult to see analytically why this is the case, comparing formulas (4.16) and (6.7). Notice that the bootstrap results for Êf verify the accuracy of the delta-method calculations going into Theorem 3.

Among empirical Bayes techniques, the James–Stein estimator is certainly best known. Its form,

\hat{\theta}_k = \bar{X} + \left[ 1 - (N - 3)/S \right] (X_k - \bar{X}) \qquad \left[\, S = \sum_{1}^{N} (X_k - \bar{X})^2 \,\right], \qquad (6.9)

again has the “classic” property of being estimated directly from the marginal distribution on the x-scale, without reference to g(θ). The simplest application of Tweedie's formula, taking X in our previous discussion to have rows (1, xi, xi²), leads to formula (6.9); see Section 3 of Efron (2011).
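A one-function R sketch of (6.9), under the convention that X is the vector of observations; the function name is illustrative.

    james_stein <- function(X) {
      Xbar <- mean(X)
      S    <- sum((X - Xbar)^2)
      Xbar + (1 - (length(X) - 3) / S) * (X - Xbar)   # shrink each X_k toward the grand mean
    }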

Perhaps the second most familiar empirical Bayes application relates to Benjamini and Hochberg's (1995) theory of false discovery rates. Here we will focus on the local false discovery rate (fdr), which best illustrates the Bayesian connection. We assume that the marginal density of each observation Xk has the form

f(x) = \pi_0\, \varphi(x) + (1 - \pi_0)\, f_1(x), \qquad (6.10)

where π0 is the prior probability that Xk is null, φ(x) is the standard 𝒩(0, 1) density exp(−x²/2)/√(2π), and f1(x) is an unspecified nonnull density, presumably yielding values farther away from zero than does the null density φ.

Having observed Xk equal to some value x, fdr(x) is the probability that Xk represents a null case (6.8),

\mathrm{fdr}(x) = \Pr\{\text{null} \mid x\} = \pi_0\, \varphi(x) \big/ f(x), \qquad (6.11)

the last equality being a statement of Bayes rule. Typically π0, the prior null probability, is assumed to be near 1, reflecting the usual goal of large-scale testing: to reduce a vast collection of possible cases to a much smaller set of particularly interesting ones. In this case, the upper false discovery rate,

ufdr(x)=φ(x)/f(x), (6.12)

setting π0 = 1 in (6.11), is a satisfactory substitute for fdr(x), requiring only the estimation of the marginal density f(x).
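A sketch of such an f-modeling estimate in R, reusing the binned Poisson fit from the Tweedie sketch above; fhat is converted to a density by dividing by the bin width (0.05), and the simple pi0 correction discussed later in this section, (6.14), is included for illustration.

    fhat     <- fitted(fit) / sum(fitted(fit)) / 0.05   # fitted marginal density f-hat(x) on the bin grid
    ufdr_hat <- dnorm(bins) / fhat                      # ufdr-hat(x) = phi(x) / f-hat(x), as in (6.12)
    pi0_hat  <- 1 / max(ufdr_hat)                       # simple correction discussed below, cf. (6.14)
    fdr_hat  <- pi0_hat * ufdr_hat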

Returning to the discrete setting (2.9), suppose we take the parameter of interest t(θ) to be

t = (0, 0, …, 0, 1, 0, …, 0),    (6.13)

with “1” at the index j0 having θj0 = 0 (j0 = 16 in (2.7)). Then E{t(θ)|xi} equals fdr(xi), and we can assess the accuracy of a g-model estimate fdr^(xi) using (5.18), the corollary to Theorem 4.

This was done for the prostate data, with the data binned as in Figure 7, and Q = (ns(θ, 5), 1) as before. Theorem 4 was applied with θ as in (2.7). The bottom two lines of Table 4 show the results. Even with N = 6033 cases, the standard deviations of fdr^(x) are considerable, having coefficients of variation in the 25% range.

Table 4.

Local false discovery rate estimates for the prostate data; ufdr^ and its standard deviation estimates sdf obtained from f-modeling; fdr^ and sdg from g-modeling; sdf is substantially smaller than sdg.

x       −4      −3      −2      −1      0       1       2       3       4
ufdr^   .060    .370    .840    1.030   1.070   1.030   .860    .380    .050
sdf     .014    .030    .034    .017    .013    .021    .033    .030    .009
sdg     .023    .065    .179    .208    .200    .206    .182    .068    .013
fdr^    .050    .320    .720    .880    .910    .870    .730    .320    .040

F-model estimates of fdr fail here, the bias/variance trade-offs of Table 2 being unfavorable for any choice of r. However, f-modeling is a natural choice for ufdr, where the only task is estimating the marginal density f(x). Doing so using Poisson regression (4.3), with X = (ns(x, 5), 1), gave the top two lines of Table 4. Now the standard deviations are substantially reduced across the entire x-scale. (The standard deviation of ufdr^ can be obtained from Theorem 3, with U = φ(xi)·1n and V the coordinate vector having 1 in the ith place.)

The top line of Table 4 shows ufdr^(x) exceeding 1 near x = 0. This is the penalty for taking π0 = 1 in (6.12). Various methods have been used to correct ufdr^, the simplest being to divide all of its values by their maximum. This amounts to taking π̂0 = 1/maximum,

π^0=1/1.070=0.935 (6.14)

in Table 4. (The more elaborate f-modeling program locfdr, described in Chapter 6 of Efron (2010), gave π̂0 = 0.932.) By comparison, the g-model MLE ĝ put probability π̂0 = 0.852 on θ = 0.

7 Discussion

The observed data X1,X2, …, XN from the empirical Bayes structure (1.1)(1.2) arrives on the x scale but the desired Bayesian posterior distribution g(θ|x) requires computations on the θ scale. This suggests the two contrasting modeling strategies diagrammed in Table 5: modeling on the x scale, “f-modeling,” permits the application of direct fitting methods, usually various forms of regression, to the X values, but then pays the price of more intricate and less stable Bayesian computations. We pay the price up front with “g-modeling,” where models such as (5.2) require difficult non-convex maximum likelihood computations, while the subsequent Bayesian computations become straightforward.

Table 5.

f-modeling permits familiar and straightforward fitting methods on the x scale but then requires more complicated computations for the posterior distribution of θ; the situation is reversed for g-modeling

                Model fitting    Bayesian computations
 f-modeling     direct           indirect
 g-modeling     indirect         direct

The comparative simplicity of model fitting on the x scale begins with the nonparametric case: f-modeling needs only the usual vector of proportions (3.10), while g-modeling requires Laird's (1978) difficult nonparametric MLE calculations. In general, g-models have a “hidden” quality that puts more strain on parametric assumptions; f-modeling has the advantage of fitting directly to the observed data.

There is a small circle of empirical Bayes situations in which the desired posterior inferences can be expressed as simple functions of f(x), the marginal distribution of the X observations. These are the “classic” situations described in Section 6, and account for the great bulk of empirical Bayes applications. The Bayesian computational difficulties of f-modeling disappear here. Not surprisingly, f-modeling dominates practice within this special circle.

“Bayes rule in terms of f,” Section 3, allows us to investigate how well f-modeling performs outside the circle. Often not very well seems to be the answer, as seen in the bottom panel of Figure 5 for example. G-modeling comes into its own for more general empirical Bayes inference questions, where the advantages listed in Section 5 count more heavily. Suppose, for instance, we are interested in estimating Pr{|θ| ≥ 1.5 | x} for the prostate data. Figure 8 shows the g-model estimates and their standard deviations from Theorem 4, with Q = ns(θ, 6) as before. Accuracy is only moderate here, but nonetheless some useful information has been extracted from the data (while, as usual for problems involving discontinuities on the θ scale, f-modeling is ineffective).

Figure 8. g-modeling estimates of Pr{|θ| ≥ 1.5 | x} for the prostate data. Dashed bars indicate ± one standard deviation, from Theorem 4.

Improved f-modeling strategies may be feasible, perhaps making better use of the kinds of information in Table 2. A reader has pointed out that pseudo-inverses of P other than A (3.1) are available, of the form

(P'\, B\, P)^{-1} P'\, B. \qquad (7.1)

Here the matrix B might be a guess for the inverse covariance matrix of f̂, as motivated by generalized least squares estimation. So far, however, situations like that in Figure 8 seem inappropriate for f-modeling, leaving g-modeling as the only game in town.

Theorems 3 and 4 provide accuracy assessments for f-modeling and g-modeling estimates. These can be dishearteningly large. In the bottom panel of Figure 5, the “good” choice, g-modeling, would still require more than N = 20,000 independent observations Xk to get the coefficient of variation down to 0.1 when x exceeds 2. More aggressive g-modeling, reducing the degrees of freedom for Q, improves accuracy, at the risk of increased bias. The theorems act as a reminder that, outside of the small circle of its traditional applications, empirical Bayes estimation has an ill-posed aspect that may call for draconian model choices. (The ultimate choice is to take g(θ) as known, that is, to be Bayesian rather than empirical Bayesian. In our framework, this amounts to tacitly assuming an enormous amount N of relevant past experience.)

Practical applications of empirical Bayes methodology have almost always taken Θk and Xk in (1.1)(1.2) to be real-valued, as in all of our examples. This is not a necessity of the theory (nor of its discrete implementation in Section 2). Modeling difficulties mount up in higher dimensions, and even studies as large as the prostate investigation may not carry enough information for accurate empirical Bayes estimation.

There are not many big surprises in the statistics literature, but empirical Bayes theory, emerging in the 1950s, had one of them: that parallel experimental structures like (1.1)(1.2) carry within themselves their own Bayesian priors. Essentially, the other N – 1 cases furnish the correct “prior” information for analyzing each (Θk, Xk) pair. How the statistician extracts that information in an efficient way, an ongoing area of study, has been the subject of this paper.

Footnotes

* Research supported in part by NIH grant 8R37 EB002784 and by NSF grant DMS 1208787.

Acknowledgement. I am grateful to Omkar Muralidharan, Amir Najmi, and Stefan Wager for many helpful discussions.

References

  1. Benjamini Y, Hochberg Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. Roy. Statist. Soc. Ser. B. 1995;57:289–300.
  2. Brown LD, Greenshtein E, Ritov Y. The Poisson compound decision problem revisited. J. Amer. Statist. Assoc. 2013;108:741–749.
  3. Butucea C, Comte F. Adaptive estimation of linear functionals in the convolution model and applications. Bernoulli. 2009;15:69–98.
  4. Carlin BP, Louis TA. Bayes and Empirical Bayes Methods for Data Analysis. 2nd ed. Texts in Statistical Science. Boca Raton, FL: Chapman & Hall/CRC; 2000.
  5. Casella G. An introduction to empirical Bayes data analysis. Amer. Statist. 1985;39:83–87.
  6. Cavalier L, Hengartner NW. Estimating linear functionals in Poisson mixture models. J. Nonparametr. Stat. 2009;21:713–728.
  7. Efron B. Defining the curvature of a statistical problem (with applications to second order efficiency). Ann. Statist. 1975;3:1189–1242. With a discussion by C. R. Rao, Don A. Pierce, D. R. Cox, D. V. Lindley, Lucien LeCam, J. K. Ghosh, J. Pfanzagl, Niels Keiding, A. P. Dawid, Jim Reeds and with a reply by the author.
  8. Efron B. The estimation of prediction error: Covariance penalties and cross-validation. J. Amer. Statist. Assoc. 2004;99:619–642. With comments and a rejoinder by the author.
  9. Efron B. Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction. Institute of Mathematical Statistics Monographs, Vol. 1. Cambridge: Cambridge University Press; 2010.
  10. Efron B. Tweedie's formula and selection bias. J. Amer. Statist. Assoc. 2011;106:1602–1614. doi: 10.1198/jasa.2011.tm11181.
  11. Efron B, Morris C. Data analysis using Stein’s estimator and its generalizations. J. Amer. Statist. Assoc. 1975;70:311–319.
  12. Fisher R, Corbet A, Williams C. The relation between the number of species and the number of individuals in a random sample of an animal population. J. Anim. Ecol. 1943;12:42–58.
  13. Good I, Toulmin G. The number of new species, and the increase in population coverage, when a sample is increased. Biometrika. 1956;43:45–63.
  14. Hall P, Meister A. A ridge-parameter approach to deconvolution. Ann. Statist. 2007;35:1535–1558.
  15. James W, Stein C. Estimation with quadratic loss. In: Proc. 4th Berkeley Sympos. Math. Statist. and Prob., Vol. I. Berkeley, Calif.: Univ. California Press; 1961. pp. 361–379.
  16. Jiang W, Zhang C-H. General maximum likelihood empirical Bayes estimation of normal means. Ann. Statist. 2009;37:1647–1684.
  17. Laird N. Nonparametric maximum likelihood estimation of a mixing distribution. J. Amer. Statist. Assoc. 1978;73:805–811.
  18. Morris CN. Parametric empirical Bayes inference: Theory and applications (with discussion). J. Amer. Statist. Assoc. 1983;78:47–65.
  19. Muralidharan O, Natsoulis G, Bell J, Ji H, Zhang N. Detecting mutations in mixed sample sequencing data using empirical Bayes. Ann. Appl. Stat. 2012;6:1047–1067.
  20. Robbins H. An empirical Bayes approach to statistics. In: Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1954–1955, Vol. I. Berkeley and Los Angeles: University of California Press; 1956. pp. 157–163.
  21. Zhang C-H. Empirical Bayes and compound estimation of normal means. Statist. Sinica. 1997;7:181–193.
