Abstract
The ideal observer sets an upper limit on the performance of an observer on a detection or classification task. The performance of the ideal observer can be used to optimize hardware components of imaging systems and also to determine another observer’s relative performance in comparison with the best possible observer. The ideal observer employs complete knowledge of the statistics of the imaging system, including the noise and object variability. Thus computing the ideal observer for images (large-dimensional vectors) is burdensome without severely restricting the randomness in the imaging system, e.g., assuming a flat object. We present a method for computing the ideal-observer test statistic and performance by using Markov-chain Monte Carlo techniques when we have a well-characterized imaging system, knowledge of the noise statistics, and a stochastic object model. We demonstrate the method by comparing three different parallel-hole collimator imaging systems in simulation.
1. INTRODUCTION
We advocate the position that to properly quantify image quality one must clearly define the task that is to be performed and the observer who will be performing this task.1 Tasks in medical imaging are typically detection tasks, e.g., determining whether a tumor is present, or estimation tasks, e.g., determining the value of a clinically relevant parameter. Observers can be human, human-model, or ideal observers. An ideal observer is a mathematical construct that is optimal in a well-defined sense.
The class of ideal observers includes the Bayesian ideal observer,2 which sets an absolute upper limit on task performance as measured by most common figures of merit. (For simplicity, any reference to ideal observers from now on implies Bayesian ideal observers.) Because no other observer can outperform the ideal observer by these standards, it is reasonable to use ideal-observer performance as a figure of merit in comparing imaging systems. That is, the ideal observer is a useful tool for comparing and optimizing imaging systems. Furthermore, it is often useful to compare the performance of other observers relative to the performance of the best observer.
The ideal observer requires complete knowledge of the statistics of the images or image data. Thus computing the ideal observer is often considered either too difficult or impossible for image data, which tend to be large; e.g., images of size 512 × 512 are characterized by a probability density function of a 512² × 1 random vector. Researchers often resort to unrealistic assumptions that dramatically simplify the math, such as assuming fixed backgrounds.3 Clarkson and Barrett4 worked out the ideal-observer test statistic for a number of situations in which the statistics of the backgrounds followed certain distributions, including Gaussian and Laplacian distributions. A great deal of research has been done on the performance of nonoptimal observers such as human observers and model observers in different types of backgrounds. See studies by Abbey and Barrett,5 Burgess et al.,6 and Bochud et al.7 for examples. However, the performance of the ideal observer has not been computed for complicated backgrounds that are not characterized by simple density functions.
In this paper we present a method for computing the ideal observer in simulation for a class of stochastic object models. The computation is accomplished by using Markov-chain Monte Carlo techniques.8,9 This method can be used for hardware optimization or comparison. As a demonstration, we present the results of a simulation study in which we compare the ideal-observer performance of three different parallel-hole collimator imaging systems.
2. BACKGROUND
The process of data acquisition in imaging may be represented mathematically by
g = Hf + n,  (1)
where f is the continuous object being imaged, H is a continuous-to-discrete operator representing the imaging system, n is the measurement noise, and g is an M × 1 vector of image data. In radiological imaging, the object f may represent the three-dimensional activity distribution of radio tracer within a body or the two-dimensional fluence of x-rays impinging on a detector. The imaging system H describes the mapping of continuous objects to discrete image data. The statistics of the random vector n may depend on the object f, e.g., Poisson noise. The image data g may be an image as in the case of direct digital detectors or may require processing before being shown to a human observer as in the case of sinogram data. In either case the ideal observer uses the raw image data g and does not require any reconstruction or processing.
For a linear system, the continuous-to-discrete imaging operator H can be more clearly represented by
g_m = ∫_S h_m(r) f(r) dr + n_m,  (2)
where gm and nm are the mth elements of the vectors g and n, respectively, r is a spatial coordinate (two or three dimensional), f(r) is the object f, hm(·) is the mth detector sensitivity function, and S is the field of view. The entire operator H is composed of M functions hm(·).
As discussed above, classification tasks in medicine consist of determining the presence of a signal, e.g., a tumor. In this paper we consider signal-known-exactly tasks; we consider the tumor to be some known signal fs in a random background fb. As a result, there exist two possible hypotheses for any given image: signal absent (H0) or signal present (H1). We mathematically represent imaging under these two hypotheses as
g = H f_b + n  (signal absent, H0),  (3)
g = H(f_b + f_s) + n  (signal present, H1).  (4)
For convenience we define the background image and signal image to be
b = H f_b,  (5)
s = H f_s.  (6)
A. Ideal Observer
The ideal observer employs the likelihood ratio, or any monotonic transformation thereof, to make decisions.10 The likelihood ratio is defined as
Λ(g) = pr(g|H1) / pr(g|H0),  (7)
where pr(g|Hi) is the probability density of the image data under hypothesis Hi. An observer compares the test statistic, e.g., Λ(g), to a predetermined critical value (or threshold) and makes a decision accordingly. A fixed critical value corresponds to a single point on a receiver operating characteristic (ROC) curve.11,12 An ROC curve is generated by varying this critical value over the real line. The area under the ROC curve (AUC)12 is a commonly used performance metric that is maximized by the ideal observer.
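To make the AUC concrete, the following sketch computes a nonparametric (Mann–Whitney) estimate of the area under the ROC curve directly from two sets of test-statistic values. This is not the LABROC4 fitting used later in the paper; it is a simple empirical estimate, and the function name is illustrative.

```python
import numpy as np

def empirical_auc(t_absent, t_present):
    """Nonparametric (Mann-Whitney) estimate of the area under the ROC
    curve from observer test statistics under the two hypotheses."""
    t_absent = np.asarray(t_absent, dtype=float)
    t_present = np.asarray(t_present, dtype=float)
    # All pairwise comparisons of a signal-present statistic against a
    # signal-absent statistic; ties contribute one half.
    diff = t_present[:, None] - t_absent[None, :]
    return float(np.mean((diff > 0) + 0.5 * (diff == 0)))
```

Perfect separation of the two sets of test statistics yields an AUC of 1.0, and identically distributed statistics yield approximately 0.5, the performance of a guessing observer.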
Traditionally, to compute the ideal observer, researchers were forced to make strict and unrealistic assumptions regarding the imaging system. For example, if we were to assume that the background b and the signal s were nonrandom quantities, then the randomness in g would be due only to noise. Hence the densities pr(g|Hi) would be known, and the likelihood ratio could be easily calculated for various noise models. However, the stochastic nature of normal patient anatomy has a dramatic effect on the performance of an observer.5,13 For example, a lesion in a mammogram may be very easy or very difficult to detect depending upon whether there is dense tissue surrounding the tumor. Ultimately, if the task is to detect a lesion in a mammogram, then the background structure of a mammogram must be accounted for.6
B. Object Models
Characterizing the full probability density of b is a very difficult task, which many researchers have addressed.14,15 However, the probability density pr(b) is particular to the imaging-system operator H, since b = Hfb, and is not especially useful if we wish to perform hardware optimizations. We require a characterization of the randomness in the background objects fb and not the background images b. However, for the sake of notational simplicity we will often refer to this density as pr(b), which we can know in simulation given knowledge of our object model and a complete description of the imaging system H.
We will employ a stochastic object model known as the lumpy object model, developed by Rolland and Barrett,13 although this method can easily be extended to other parameterized object models. The lumpy object model may be represented by the equation
f_b(r) = Σ_{n=1}^{N} L(r − c_n | a, s),  (8)
where r is a two- or three-dimensional spatial coordinate, N is the random number of lumps in the object, L(·) is the lump function, cn is the center of the nth lump, a is the magnitude of the lump function, and s is the width. Typically L(·) is chosen to be a Gaussian function, but other functions can be chosen as well. The variable N is a Poisson-distributed random variable with mean N̄, and the centers cn are uniformly distributed within the field of view. Note that the randomness in this object model is completely characterized by the random variables N and {cn}. Examples of two-dimensional lumpy objects with different parameters are given in Fig. 1.
Fig. 1.
Examples of two-dimensional lumpy objects. (a) Parameters N = 39, a = 1, and s = 10; (b) parameters N = 348, a = 1, and s = 4. The field of view (FOV) for both objects is 128 × 128.
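A lumpy-object realization like those in Fig. 1 can be sampled directly from the two sources of randomness in Eq. (8). The sketch below assumes a Gaussian lump of the form a·exp(−|r − c|²/2s²); the paper specifies only that L(·) is typically Gaussian, so this normalization and all function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lumpy(nbar=25.0, a=1.0, s=7.0, fov=64.0, grid=64):
    """One realization of the lumpy object model of Eq. (8):
    a Poisson number of Gaussian lumps with uniform centers."""
    n_lumps = rng.poisson(nbar)                        # N ~ Poisson(Nbar)
    centers = rng.uniform(0.0, fov, size=(n_lumps, 2)) # c_n uniform in the FOV
    y, x = np.mgrid[0:grid, 0:grid] * (fov / grid)
    f = np.zeros((grid, grid))
    for cx, cy in centers:
        f += a * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * s ** 2))
    return f, centers
```

Note that a realization is fully determined by (N, {c_n}); this is what later allows the posterior sampling to operate on the lump parameters rather than on the image b itself.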
It follows from Eqs. (2) and (5) that the mth background element is given by
b_m = ∫_S h_m(r) f_b(r) dr = Σ_{n=1}^{N} ∫_S h_m(r) L(r − c_n | a, s) dr.  (9)
Thus, to generate a single pixel of a noise-free and signal-free background image, we need to compute the N inner products of a sensitivity function with the lumps. To generate an entire image, we compute these inner products over all M pixels, i.e., all sensitivity functions hm(·).
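When both the lump L and the sensitivity function h_m are Gaussians (as in the simulations of Section 4), each inner product in Eq. (9) is a Gaussian–Gaussian convolution with a closed form. The sketch below assumes "width" means the Gaussian standard deviation and "height" the peak amplitude, which the paper does not state explicitly; the defaults loosely echo imaging system B of Table 1, and the names are illustrative.

```python
import numpy as np

def background_pixel(r_m, centers, a=1.0, s=7.0, w=2.5, h=100.0):
    """Noise-free background element b_m of Eq. (9) when both the lump
    (width s, amplitude a) and the sensitivity function h_m (width w,
    amplitude h, centered at r_m) are 2-D Gaussians."""
    if len(centers) == 0:
        return 0.0
    d2 = np.sum((np.asarray(centers, dtype=float)
                 - np.asarray(r_m, dtype=float)) ** 2, axis=1)
    # Closed form of the integral of a product of two 2-D Gaussians.
    amp = h * a * 2.0 * np.pi * (w ** 2 * s ** 2) / (w ** 2 + s ** 2)
    return float(np.sum(amp * np.exp(-d2 / (2.0 * (w ** 2 + s ** 2)))))
```

Because the result depends only on the distances from the pixel center to the lump centers, an entire noise-free image is obtained by evaluating this expression at all M pixel positions.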
The lumpy object model is not appropriate for all imaging problems. The methodology we are about to describe, however, can be applied to many parameterized object models. We show in this paper that we can compute the ideal observer for this lumpy object model. This is a clear step forward for ideal-observer computations, which have traditionally been limited to flat or Gaussian background models. In future papers we will discuss the extension of this work to different object models.
3. METHOD
Using our knowledge of the imaging system, its associated noise properties, and the lumpy object model, we would like to calculate likelihood ratios for a set of simulated image data {g}. We begin by rewriting Eq. (7) as
Λ(g) = [∫ pr(g|b, H1) pr(b) db] / [∫ pr(g|b, H0) pr(b) db].  (10)
For a given background b, the randomness in g is due only to the noise, and hence pr(g|b,Hi) characterizes the noise distribution of the imaging system, which we know a priori. The numerator and denominator of Eq. (10) turn out to be exceptionally small numbers, because they are integrals over a product of high-dimensional probability density functions. Thus estimating the ratio accurately by computing the numerator and denominator separately is infeasible.
To make computation of the likelihood ratio possible,16 we rewrite Eq. (10) as
Λ(g) = ∫ [pr(g|b, H1) pr(b) / pr(g|H0)] db,  (11)
where we have taken the denominator into the integral. Next, we multiply and divide by pr(g|b, H0), which leads to
Λ(g) = ∫ [pr(g|b, H1) / pr(g|b, H0)] [pr(g|b, H0) pr(b) / pr(g|H0)] db.  (12)
Using Bayes’s law we are able to rewrite Eq. (12) as
Λ(g) = ∫ Λ_BKE(g|b) pr(b|g, H0) db,  (13)
where ΛBKE(g|b) is the background-known-exactly (BKE) likelihood ratio defined as
Λ_BKE(g|b) = pr(g|b, H1) / pr(g|b, H0),  (14)
and
pr(b|g, H0) = pr(g|b, H0) pr(b) / pr(g|H0).  (15)
The BKE likelihood ratio is the ideal-observer test statistic in the situation where the background is fixed and not random. Thus we have derived the random-background likelihood ratio in terms of an integral of the BKE likelihood ratio (a posterior mean).
For an imaging system with Poisson noise
pr(g|b, H0) = Π_{m=1}^{M} (b_m)^{g_m} exp(−b_m) / g_m!,  (16)
and
pr(g|b, H1) = Π_{m=1}^{M} (b_m + s_m)^{g_m} exp[−(b_m + s_m)] / g_m!.  (17)
We are thus able to rewrite Eq. (14) as
Λ_BKE(g|b) = Π_{m=1}^{M} (1 + s_m/b_m)^{g_m} exp(−s_m).  (18)
Similarly, for an imaging system with Gaussian noise with covariance Kn, the BKE likelihood ratio becomes
Λ_BKE(g|b) = exp[s† K_n^{−1}(g − b) − (1/2) s† K_n^{−1} s],  (19)
where x† represents the adjoint of x.
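As a sketch of how these test statistics might be evaluated numerically, the following functions compute the logarithms of the BKE likelihood ratios of Eqs. (18) and (19); working in the log domain sidesteps the floating-point underflow discussed above. Function and variable names are illustrative, not the authors' code, and the Gaussian version assumes a symmetric covariance matrix K_n.

```python
import numpy as np

def log_lambda_bke_poisson(g, b, s):
    """Log of Eq. (18): sum_m [ g_m * log(1 + s_m/b_m) - s_m ]."""
    g, b, s = (np.asarray(v, dtype=float) for v in (g, b, s))
    return float(np.sum(g * np.log1p(s / b) - s))

def log_lambda_bke_gaussian(g, b, s, k_n):
    """Log of Eq. (19): s' K_n^{-1} (g - b) - (1/2) s' K_n^{-1} s."""
    g, b, s = (np.asarray(v, dtype=float) for v in (g, b, s))
    kinv_s = np.linalg.solve(np.asarray(k_n, dtype=float), s)  # K_n^{-1} s
    return float((g - b) @ kinv_s - 0.5 * (s @ kinv_s))
```

Using np.log1p(s/b) rather than np.log(1 + s/b) preserves accuracy when the signal is weak relative to the background, which is exactly the regime of interest for a difficult detection task.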
Ideally we would use Monte Carlo techniques to calculate the integral in Eq. (13) by sampling backgrounds from pr(b|g, H0), evaluating ΛBKE(g|b), and averaging. Sampling from pr(b|g, H0) is difficult because the density function is generally not known. As a result, we consider alternate methods. The advantage that we have, and must use, is our knowledge of our model for fb. We are representing our background objects fb as lumpy backgrounds that are completely characterized by the number of lumps N (following a Poisson distribution) and the centers {cn} (uniformly distributed throughout the field of view) for fixed a, s, lump function L(·), and field of view S. We represent the stochastic parameters as an unordered set θ = {c1, c2,…, cN}.
We know that for any given θ there exists a unique b. This relationship allows us to rewrite Eq. (13) as
Λ(g) = ∫ Λ_BKE(g|b(θ)) pr(θ|g, H0) dθ,  (20)
where
pr(θ|g, H0) = pr(g|b(θ), H0) pr(θ) / pr(g|H0).  (21)
Unlike pr(b), the function pr(θ) has a simple analytic form that we know from the definition of our object model. For a more detailed explanation of the replacement of b with θ in the above expressions we refer the reader to Appendix A.
A. Markov-Chain Monte Carlo
We want to estimate the likelihood ratio in Eq. (20) by using Monte Carlo integration, i.e.,
Λ̂(g) = (1/J) Σ_{j=1}^{J} Λ_BKE(g|b(θ^(j))),  (22)
where each θ (j) is a sample from the density pr(θ|g, H0). To generate the θ (j), we construct a Markov chain with pr(θ|g, H0) as the stationary density for the chain. We choose an initial parameter vector θ (0) and a proposal density q(θ|θ (i)) for the Markov chain. Given θ (i) we draw a sample parameter vector θ̃ from the proposal density and add it to our Markov chain with probability
min{1, [pr(θ̃|g, H0) q(θ^(i)|θ̃)] / [pr(θ^(i)|g, H0) q(θ̃|θ^(i))]}.  (23)
If θ̃ is accepted, then θ (i+1) is θ̃; if θ̃ is not accepted, then θ (i+1) is θ (i). The Markov chain is the sequence consisting of
{θ^(0), θ^(1), θ^(2), …}.  (24)
We adopt a Metropolis–Hastings approach8 and design our proposal density to be symmetric, i.e., q(θ̃ |θ (i)) = q(θ (i)|θ̃), resulting in a cancellation of the proposal densities in Eq. (23). Furthermore, we can rewrite pr(θ|g, H0) using Bayes’s rule [Eq. (21)] and cancel the denominators, leaving an expression that we can compute exactly for a given noise model and object model. In the case of Poisson noise (which we will use for our simulation studies below), the expression in Eq. (23) becomes a ratio of
[Π_{m=1}^{M} b_m(θ)^{g_m} exp(−b_m(θ)) / g_m!] [N̄^N exp(−N̄) / N!] [N! / FOV^N],  (25)
evaluated at θ̃ and θ(i), respectively. The bracketed terms in Eq. (25) are pr(g|b(θ), H0), pr(N), and pr({cn}), respectively. The final N! in Eq. (25) arises because pr({cn}) is invariant under permutations of {cn} (see Appendix A). We are exploiting our lumpy object model in which we know that the probability of a set of N centers {cn} is given by the probability that we choose N lumps (Poisson with mean N̄) times the probability that we observed the centers {cn} (uniform within the FOV and invariant under permutations).
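Because the Metropolis acceptance test only ever needs the ratio of expression (25) at two parameter vectors, it is convenient to assemble its logarithm. The sketch below (illustrative names, not the authors' code) adds the three bracketed factors: the Poisson noise likelihood, pr(N), and pr({c_n}); terms independent of θ, such as the g_m! factors, cancel in the ratio but are kept for completeness.

```python
import numpy as np
from math import lgamma

def log_expr25(g, b, n_lumps, nbar, fov_area):
    """Log of the unnormalized posterior in expression (25)."""
    g = np.asarray(g, dtype=float)
    b = np.asarray(b, dtype=float)
    # Poisson noise likelihood pr(g | b(theta), H0).
    log_noise = float(np.sum(g * np.log(b) - b)
                      - sum(lgamma(x + 1.0) for x in g))
    # Poisson prior on the number of lumps, pr(N).
    log_prior_n = n_lumps * np.log(nbar) - nbar - lgamma(n_lumps + 1.0)
    # Prior on the unordered centers, pr({c_n}) = N! / FOV^N.
    log_prior_c = lgamma(n_lumps + 1.0) - n_lumps * np.log(fov_area)
    return log_noise + float(log_prior_n + log_prior_c)
```

Note that the two N! terms cancel analytically; they are written out separately here only to mirror the three bracketed factors of expression (25).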
In practice it is difficult to design a proposal density q(θ̃|θ(i)) that is symmetric because the number of elements in θ̃ and θ(i) can be different. Furthermore, if we forgo symmetry and instead compute the ratio of the proposal densities in Eq. (23), it is difficult to design a practical, yet easily computable, density of an unordered set conditioned on another unordered set. We avoid both of these difficulties by designing a proposal density on a matrix Φ, i.e., q(Φ̃|Φ(i)). The matrix Φ is designed such that we can write θ as a function of Φ, i.e., θ(Φ). The matrix Φ has the advantage that the number and position of elements is fixed. It has the disadvantage that not every possible θ can be represented as θ(Φ). However, the states b not achievable with Φ are those for which pr(b|g, H0) is zero for all practical purposes (i.e., beyond computer precision).
This matrix Φ is composed of a binary column vector α of dimension N′ followed by a list of centers of dimension N′ × 2 for two-dimensional lumpy objects and N′ × 3 for three-dimensional lumpy objects, i.e.,
Φ = [α_1 c_1
     α_2 c_2
     ⋮
     α_{N′} c_{N′}],  (26)
where the centers cn are row vectors. Here we are using an unusual notation by inserting multiple elements into the matrix Φ using the row vectors cn. However, this keeps our definition of Φ general for two- or three-dimensional lumpy objects. The mapping function θ (Φ) is defined as
θ(Φ) = {c_n : α_n = 1}.  (27)
The proposal density q(Φ̃|Φ(i)) is sampled by first flipping (i.e., a 1 to a 0 or a 0 to a 1) each of the binary numbers αn with probability η. Then we randomly choose one center that is turned on (i.e., αn = 1) in both Φ̃ and Φ(i) and randomly shift the position of this center by sampling an isotropic Gaussian distribution with fixed width σ. In Appendix B we write down an expression of the density q(Φ̃|Φ(i)) and show that it is symmetric. The flipping of the αn allows us to add or remove lumps, and the shifting of one of the centers allows us to make a small change to a given lump. We are limited here to N′ lumps, but we choose N′ to be much larger than the number of lumps N that went into the generation of the image g. The probability that we will ever accept N′ lumps is beyond the floating-point range of computers. Finally, we have a symmetric proposal density and are left only to compute the ratio pr(θ(Φ̃)|g, H0)/pr(θ(Φ(i))|g, H0), the numerator and denominator of which are shown in Eq. (25). Note that there are typically only a couple of different lumps between b(θ(Φ(i+1))) and b(θ(Φ(i))). Thus to generate the new background b(i+1) [Eq. (9)] from the old background b(i), we need only to replace the few lumps that have changed, which dramatically speeds up the simulation procedure.
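The proposal and acceptance steps above can be sketched as follows. This is a minimal illustration of the Metropolis–Hastings scheme of Subsection 3.A, not the authors' implementation: `log_post` stands in for the log of expression (25) evaluated on b(θ(Φ)), and all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def propose(alpha, centers, eta, sigma):
    """Symmetric proposal: flip each alpha_n with probability eta, then
    Gaussian-shift one center that is 'on' both before and after."""
    new_alpha = alpha ^ (rng.random(alpha.size) < eta)
    new_centers = centers.copy()
    common = np.flatnonzero(alpha & new_alpha)   # on in both states
    if common.size:
        j = rng.choice(common)
        new_centers[j] += rng.normal(0.0, sigma, size=2)
    return new_alpha, new_centers

def mh_chain(log_post, alpha0, centers0, n_iter, eta, sigma):
    """Metropolis-Hastings with a symmetric proposal: the proposal
    densities cancel in Eq. (23), leaving the posterior ratio."""
    alpha, centers = alpha0.copy(), centers0.copy()
    lp = log_post(alpha, centers)
    for _ in range(n_iter):
        a_new, c_new = propose(alpha, centers, eta, sigma)
        lp_new = log_post(a_new, c_new)
        # Log-domain acceptance test: accept with prob min(1, ratio).
        if np.log(rng.random()) < lp_new - lp:
            alpha, centers, lp = a_new, c_new, lp_new
        yield alpha, centers
```

At each accepted state one would evaluate Λ_BKE and, after discarding the burn-in, average those values as in Eq. (22).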
4. SIMULATIONS
The ultimate goal of our work is to compare imaging systems based on ideal-observer performance. As an example, we simulate parallel-hole collimator imaging systems for nuclear medicine. We vary the parameters of the collimator, e.g., bore length and height, to achieve different noise levels and resolution in the generated images. More specifically, we model the imaging system transfer functions hm(·) as Gaussian centered on the mth pixel. Each imaging system has different widths and heights for the hm(·) functions, corresponding to different collimator parameters. By choosing Gaussian transfer functions and the lumpy object model with Gaussian lumps, we are able to compute a closed-form solution for the background, using Eq. (9).
There are numerous practical issues in Markov-chain Monte Carlo that need to be addressed. For example, if Φ(0) is a completely random realization, then one must “burn in” the chain for a number of iterations to remove bias due to the initial samples. Furthermore, we must choose an N′, a flip probability η, and a shift width σ for the random shift. We are aided in this problem because we generate the image g from the lumpy model and therefore know the number of lumps N* and centers that went into making the image. Thus the first N* entries into Φ(0) are randomly shifted versions of these true centers, and they are all turned on, i.e., αn = 1 for n = 1,…, N*. We chose N′ to be 100 times bigger than N*, with the remaining 100N* − N* centers in Φ(0) being uniformly distributed within the field of view. These remaining centers are all initially turned off, i.e., αn = 0 for n = (N* + 1),…, N′. The flip probability η was chosen to be 0.04/N′, which makes flips a rare occurrence. The Gaussian shift density was chosen to have a width of s/10, where s is the width of the lump function L(·) [Eq. (8)]. Because our initial Φ(0) is chosen with knowledge of the centers that went into the creating of g, our burn-in times are relatively short (500 iterations). Finally, we iterated the chain for a total of 150,000 iterations, computing ΛBKE(·) at each iteration.
The parameter N′ was chosen because the probability that the chain would ever accept ten times more lumps than went into the creation of the image is effectively zero. The parameters η and σ were determined empirically by studying the rejection rate of the chain. For example, if 99 out of 100 proposals are rejected on average, then the chain is not very efficient. Conversely, if the chain accepts all the proposed solutions, then it is likely that insignificant changes are occurring. This insignificance might be caused by a small σ. The burn-in of 500 iterations was determined by visual examination of ΛBKE(·) as the chain progressed. It was clear that after a couple of hundred iterations, the chain had converged to the stationary density. If we were to choose Φ(0) at random we would need to lengthen the burn-in time for our chain because it would take longer to arrive at the peak of the stationary density. Finally, the total number of iterations (150,000) was chosen because it produced estimates of Λ(g) with little variation.
We simulated three different imaging systems (A, B, and C) with different widths and heights for the sensitivity functions given in Table 1. The images acquired by all three imaging systems are 64 × 64, i.e., g is a 64² × 1 vector. For each imaging system we generated 50 signal-absent images and 50 signal-present images. We used a lumpy object model with N̄ = 25, a = 1, and s = 7 [see Eq. (8)]. Example images of the same lumpy object from our three different imaging systems are shown in Fig. 2. For the signal we used a Gaussian with width 3 and height 0.1 located in the center of the object. We found that these signals were difficult to detect visually, although a psychophysical experiment was not performed. They are, however, often detected by the ideal observer.
Table 1.
Characteristics of the Three Imaging Systems
| Imaging System | hm(·) Width | hm(·) Height | AUC |
|---|---|---|---|
| A | 0.5 | 40 | 0.92 |
| B | 2.5 | 100 | 0.88 |
| C | 5 | 200 | 0.67 |
Fig. 2.
Image realizations of the same lumpy object for (a) imaging system A, (b) imaging system B, and (c) imaging system C. There is no signal in these images.
As described in Subsection 3.A, we generated a Markov chain in order to sample from pr(θ |g, H0) in Eq. (20) to approximate the ideal observer by using Eq. (22). For each of the 300 images generated (100 for each imaging system), we generated 150,000 iterations of the Markov chain. In Fig. 3 we show ΛBKE(g|b(θ(Φ(i)))) evaluated at the iterations of an example Markov chain. The image g used to generate this plot was without signal and was acquired from imaging system B. In all of our calculations we ignore the first 500 iterations of the chain to allow for burn-in, and we average the ΛBKE(·) of the remaining 149,500 iterations.
Fig. 3.
ΛBKE versus iteration number. The iteration numbers here are the states of the Markov chain described in Subsection 3.A.
5. RESULTS
We calculated the ideal observer for 50 signal-absent and 50 signal-present images for three different imaging systems. Histograms of the log-likelihood ratio are given in Fig. 4. Imaging systems A and B seem to have substantial separation between the signal-absent and the signal-present data [Figs. 4(a) and (b)]. There is a great deal of overlap between the two histograms for imaging system C [Fig. 4(c)]. Larger separation indicates that the ideal observer can more easily detect the signal. We used ROC analysis to quantify the performance of the ideal observer for these three imaging systems. Figure 5 shows ROC curves for the three imaging systems. These ROC curves were fitted by using LABROC4 software.17 The AUCs for imaging systems A, B, and C are 0.92, 0.88, and 0.67, respectively. These values provide a means of ranking imaging systems A, B, and C for the aforementioned detection task.
Fig. 4.
Log-likelihood histograms for (a) imaging system A, (b) imaging system B, and (c) imaging system C.
Fig. 5.
ROC curve comparison of the ideal-observer performance for the three imaging systems. The AUC values for imaging systems A, B, and C are 0.92, 0.88, and 0.67, respectively.
Because our method returns estimates of likelihood ratios, we reran the above experiment five times using the same 100 images from imaging system A. The five different runs used different random seeds, and thus the chains progressed differently. This experiment checks the interobserver variability that arises from the stochastic nature of Markov-chain Monte Carlo. That is, we wish to isolate the component of the variability that is due to the technique. We found that the variability in AUC as measured by the standard deviation of the five different AUC values was 0.006. This number could be reduced by running the chain for more than 150,000 iterations. This number does not reveal any information about the variability that will arise if different images (or more images) from system A are used. Similar runs were performed for imaging systems B and C with similar results.
6. CONCLUSIONS
We have presented a method for comparing imaging hardware based on the performance of the ideal observer on a signal-known-exactly detection task. The random backgrounds used in our simulations were generated by using the lumpy object model and simulated imaging systems. We regard this work as a first step toward calculations that would involve randomly varying signals, more realistic imaging-system models, and more complicated object models. Therefore the results for the particular imaging systems investigated here should be viewed as proof of concept rather than the final word on the trade-off between noise and resolution in collimator design.
Acknowledgments
We thank Hongbin Zhang and Brandon Gallas for helpful discussions and extensive work on very similar topics and Thomas Kennedy for sharing his expertise in Markov-chain Monte Carlo with us. This work was supported by National Science Foundation grant 9977116 and National Institutes of Health grants P41 RR14304, KO1 CA87017, and RO1 CA52643.
APPENDIX A: REPLACING b WITH θ
We would like to show the equivalence of Eqs. (13) and (20) for the backgrounds generated with the lumpy object model. We use a standard method of transforming density functions given by
pr(b|g, H0) = ∫ δ^M(b − b(θ)) pr(θ|g, H0) dθ,  (A1)
where M is the dimension of b. In general this integral is difficult because knowledge of the roots of b = b(θ) is required. However, in our problem, pr(b|g, H0) is within an integrand, allowing us to compute the following:
Λ(g) = ∫ Λ_BKE(g|b) [∫ δ^M(b − b(θ)) pr(θ|g, H0) dθ] db  (A2)
= ∫ [∫ Λ_BKE(g|b) δ^M(b − b(θ)) db] pr(θ|g, H0) dθ  (A3)
= ∫ Λ_BKE(g|b(θ)) pr(θ|g, H0) dθ.  (A4)
This integral expression is shorthand for the following:
∫ dθ Λ_BKE(g|b(θ)) pr(θ|g, H0) = Σ_{N=0}^{∞} ∫_{S_N} dc_1 ⋯ dc_N Λ_BKE(g|b(θ)) pr(θ|g, H0).  (A5)
The above integral is over a region S_N contained within R^{2N} for two-dimensional lumpy objects. This region is defined by the requirement that all centers be within the FOV and satisfy the inequalities c_1 < c_2 < ⋯ < c_N, where < symbolizes a lexicographical ordering18 of the vectors c_n. This ensures that there is only one point in S_N corresponding to the unordered set {c_n}. Also note that the volume of this set S_N is given by FOV^N/N!, which explains the expression for pr({c_n}) in Eq. (25).
APPENDIX B: SYMMETRY OF THE PROPOSAL DENSITY
The proposal density is given by
q(Φ̃|Φ) = η^f (1 − η)^{N′−f} (1/C) Σ_{j: α̃_j = α_j = 1} G(c̃_j − c_j) δ(c_{−j} − c̃_{−j}),  (B1)
where f is the number of α’s that are flipped from Φ to Φ̃ given by
f = Σ_{n=1}^{N′} |α̃_n − α_n|.  (B2)
The function G(·) is a properly normalized, symmetric Gaussian density with variance σ2. The variable C is the number of terms in the sum in Eq. (B1), given by
C = Σ_{n=1}^{N′} α̃_n α_n.  (B3)
In the case in which there are no centers in common between Φ and Φ̃, the sum in Eq. (B1) is empty; i.e., the proposal density is zero. For two-dimensional (respectively, three-dimensional) lumpy objects, δ(c−j − c̃−j) is a (2C − 2)-dimensional [respectively, (3C − 3)-dimensional] Dirac delta function. The vector c−j is a concatenation of all the center vectors ci satisfying α̃i = αi = 1, except cj itself. The vector c̃−j is defined similarly.
The term to the left of the summation in Eq. (B1) gives the probability of making the flips in the αi to get from Φ to Φ̃. The summation term is a mixture of densities that represents the shift of one of the centers in common between Φ and Φ̃. To sample from such a mixture, we choose one of the terms with probability 1/C and then sample from it.
The symmetry of q(Φ̃|Φ) follows from the symmetry of the f [Eq. (B2)], C [Eq. (B3)], δ (c−j − c̃−j), and G(c̃j − cj) regarded as functions of Φ and Φ̃.
Contributor Information
Matthew A. Kupinski, Optical Sciences Center and Department of Radiology, University of Arizona, Tucson, Arizona 85721
John W. Hoppin, Program in Applied Mathematics, University of Arizona, Tucson, Arizona 85721
References
- 1. Barrett HH. Objective assessment of image quality: effects of quantum noise and object variability. J Opt Soc Am A. 1990;7:1266–1278. doi: 10.1364/josaa.7.001266.
- 2. Van Trees HL. Detection, Estimation, and Modulation Theory, Part I. Academic; New York: 1968.
- 3. Fukunaga K. Statistical Pattern Recognition. Academic; San Diego, Calif.: 1990.
- 4. Clarkson E, Barrett HH. Approximation to ideal-observer performance on signal-detection tasks. Appl Opt. 2000;39:1783–1794. doi: 10.1364/ao.39.001783.
- 5. Abbey CK, Barrett HH. Human- and model-observer performance in ramp-spectrum noise: effects of regularization and object variability. J Opt Soc Am A. 2001;18:473–488. doi: 10.1364/josaa.18.000473.
- 6. Burgess AE, Jacobson FL, Judy PF. Human observer detection experiments with mammograms and power-law noise. Med Phys. 2001;28:419–439. doi: 10.1118/1.1355308.
- 7. Bochud FO, Abbey CK, Eckstein MP. Visual signal detection in structured backgrounds III. Calculation of figures of merit for model observers in statistically nonstationary backgrounds. J Opt Soc Am A. 2000;17:193–206. doi: 10.1364/josaa.17.000193.
- 8. Robert CP, Casella G. Monte Carlo Statistical Methods. Springer-Verlag; New York: 1999.
- 9. Gilks WR, Richardson S, Spiegelhalter DJ, editors. Markov Chain Monte Carlo in Practice. Chapman & Hall; Boca Raton, Fla.: 1996.
- 10. Barrett HH, Abbey CK, Clarkson E. Objective assessment of image quality III. ROC metrics, ideal observers, and likelihood-generating functions. J Opt Soc Am A. 1998;15:1520–1535. doi: 10.1364/josaa.15.001520.
- 11. Egan J. Signal Detection Theory and ROC Analysis. Academic; New York: 1975.
- 12. Metz CE. Basic principles of ROC analysis. Semin Nucl Med. 1978;VIII:283–298. doi: 10.1016/s0001-2998(78)80014-2.
- 13. Rolland JP, Barrett HH. Effect of random background inhomogeneity on observer detection performance. J Opt Soc Am A. 1992;9:649–658. doi: 10.1364/josaa.9.000649.
- 14. Zhu SC, Wu Y, Mumford D. Filters, random fields and maximum entropy (FRAME). Int J Comput Vision. 1998;27:1–20.
- 15. Simoncelli EP, Olshausen B. Natural image statistics and neural representation. Annu Rev Neurosci. 2001;24:1193–1217. doi: 10.1146/annurev.neuro.24.1.1193.
- 16. H. Zhang and B. Gallas, University of Arizona, Tucson, Ariz. 85721 (personal communication, 2001).
- 17. Metz CE, Herman BA, Shen JH. Maximum likelihood estimation of receiver operating characteristic (ROC) curves from continuously-distributed data. Stat Med. 1998;17:1033–1053. doi: 10.1002/(sici)1097-0258(19980515)17:9<1033::aid-sim784>3.0.co;2-z.
- 18. Dugundji J. Topology. Allyn and Bacon; Boston, Mass.: 1965.