Published in final edited form as: Phys Med Biol. 2013 Apr 15;58(9):3037–3059. doi: 10.1088/0031-9155/58/9/3037

SPECT System Optimization Against A Discrete Parameter Space

L J Meng 1, N Li 1
PMCID: PMC3703762  NIHMSID: NIHMS489533  PMID: 23587609

Abstract

In this paper, we present an analytical approach for optimizing the design of a static SPECT system, or optimizing the sampling strategy of a variable/adaptive SPECT imaging hardware, against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced the artificial concept of a virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of virtual detectors, one can convert the task of system optimization into the process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) can be solved with a block-iterative approach or with other non-linear optimization algorithms. In essence, the resultant optimum ITD provides a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and helps to identify the system configuration or sampling strategy that leads to the optimum imaging performance. Although we use SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2].

I. INTRODUCTION

Single photon emission computed tomography (SPECT) is a commonly used nuclear imaging modality for small animal studies [3, 4]. In recent years, there have been extensive research and commercial efforts to provide higher spatial resolution in future SPECT instrumentation. Examples of recent developments include the SemiSPECT reported by Kastis et al. [5], the SiliSPECT under development by Peterson et al. [6], the MediSPECT proposed (and evaluated) by Accorsi et al. [7], the U-SPECT-III proposed by Beekman et al. [8], a low-cost ultra-high resolution imager based on the second-generation image intensifier [9], and the use of a pre-existing SPECT camera arranged in an extreme focusing geometry for ultra-high resolution small animal SPECT imaging applications [10]. We have recently developed a prototype ultra-high resolution single photon emission microscope (SPEM) system for mouse brain studies [11–13]. This system delivers an ultra-high imaging resolution of around 100 μm (with I-125 labeled tracers) in phantom studies. However, ultrahigh resolution SPECT imaging is associated with a well-known bottleneck: the detection efficiency is inevitably limited by the small open-fractions of the apertures in use, which ultimately limits the efficiency of the system in collecting useful imaging information. Therefore, for SPECT to become a practical tool for in vivo studies at ultrahigh spatial resolutions, technical approaches to improve the information collection process are critically needed.

This problem could be partially alleviated with the adaptive SPECT approach reported by Barrett et al. [14], Clarkson et al. [15], Freed et al. [2] and Li et al. [16]. In an adaptive SPECT system, the data acquisition hardware can be varied in real-time in response to the information being collected during an imaging study. This improves the efficiency of collecting useful imaging information and therefore helps to provide optimum performance when imaging an unknown object. Zhou et al. [17] also explored a similar adaptive concept for use in a proposed zoom-in PET geometry. We have previously reported an adaptive angular sampling approach for use with a SPECT system having 1–2 ultrahigh resolution camera-heads rotating around the object [16]. This method allows one to identify the optimum distribution of imaging time among all possible sampling angles for achieving the lowest image variance.

With the current study, we intend to tackle a very basic question in SPECT imaging: how does one optimize a SPECT system against a wide range of system and imaging parameters? Finding the answer to this question is intrinsically difficult for a number of reasons. First, to optimize a system, one needs to define an imaging task, along with one or more quantitative indices measuring the performance of any possible system configuration for the task. For example, one could choose to optimize an imaging system for estimation tasks, based on performance indices such as spatial resolution [18, 19], image variance [20, 21] and their tradeoffs [22–25], and/or the accuracy of ROI quantification [26, 27]. Alternatively, one could optimize a system for detection tasks based on the receiver-operating characteristics (ROC) of computer or human observers [28–35]. Unfortunately, for typical SPECT systems, the relationships between these performance indices and the system parameters are highly complex and often mathematically intractable. Second, system optimization procedures typically require a search through a vast (often infinite) number of possible system and sampling configurations. This makes it difficult to use brute-force numerical approaches for finding the best system configurations. Third, adaptive imaging procedures [15] would require many cycles of system optimization to be completed within a single imaging study. This further adds to the need for an efficient computation procedure for system optimization.

To overcome these challenges, we have developed a theoretical approach for optimizing an imaging system indirectly against system parameters. This approach has a few key features. First, the optimization procedure is designed to operate against a discretized parameter space, so that the system has a finite number of possible configurations. Second, the new system optimization strategy is based on the artificial concept of a virtual detector as the basic building block of SPECT systems. Based on this concept, the problem of optimizing a SPECT system is converted into optimizing the distribution of a finite imaging time across all the virtual detectors, so that the system delivers the best task performance. In the following text, this distribution of imaging time across all virtual detectors will be referred to as the imaging time distribution, or ITD for short. For a variable SPECT system that allows multiple system configurations to be used during an imaging study, the optimum ITD provides a measure of the relative importance of each possible system configuration and identifies the best combination of these configurations to be used. The same approach can also be used for optimizing stationary SPECT systems. Such systems can be treated as special cases of variable SPECT systems, with only one configuration allowed during an imaging study. Therefore, the same algorithms can be used for both optimization problems, by introducing appropriate constraints into the optimization process.

In general, optimizing a SPECT system against the ITD assigned to the virtual detectors can be mathematically simpler than optimizing the system directly against all the system parameters (please see Sec. II.E for further discussion). This simplification allowed us to develop a general system optimization strategy that can accommodate almost any system parameter. Furthermore, we were able to develop an efficient computation approach, or to use existing non-linear minimization algorithms, to optimize the system design or sampling strategy. Although we used a SPECT instrument as a platform to demonstrate this system optimization strategy, the method can be applied to many other imaging modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2].

II. Material and Methods

A. Image Formation

Let x = [x1, x2, …, xN]T denote a series of unknown but deterministic pixel intensities that underlie the measured projection data y = [y1, y2, …, yM]T. The mapping from x to y is governed by a probability distribution function, pr(y; x). For emission tomography, y can be approximated as a series of independent Poisson random variables, whose expectations are given by

\bar{y}_m \equiv E[y_m] = \sum_{y_m=0}^{\infty} y_m\, pr(y_m;\mathbf{x}), \qquad m = 1, \ldots, M,   (1)

or by the following discrete transform

\bar{\mathbf{y}} = T\,\bar{\mathbf{p}}, \qquad \text{and} \qquad \bar{\mathbf{p}} = \mathbf{A}\mathbf{x}.   (2)

E[·] denotes the expectation operator, T is the total imaging time, and \bar{\mathbf{p}} is the mean projection for a unit imaging time. A is an M×N matrix that represents the discretized system-response function (SRF). We have assumed that the SRF is free of systematic error. The log-likelihood function of the measured data y is given by

L(\mathbf{x},\mathbf{y}) \equiv \log pr(\mathbf{y};\mathbf{x}) = \sum_{m} \left( y_m \log \bar{y}_m - \bar{y}_m \right),   (3)

and

\bar{y}_m = T \sum_{n} a_{mn}\, x_n,   (4)

where amn is an element of A. It gives the probability that a gamma ray emitted from the n'th source voxel is detected by the m'th detector pixel within a unit imaging time. The underlying image function may be reconstructed as

\hat{\mathbf{x}}_{PML}(\mathbf{y}) = \arg\max_{\mathbf{x} \ge 0} \left[ L(\mathbf{x},\mathbf{y}) - \beta R(\mathbf{x}) \right], \qquad \text{and then} \qquad \hat{\mathbf{x}}_{PF\text{-}PML} = \mathbf{F}_{filter}\, \hat{\mathbf{x}}_{PML}(\mathbf{y}),   (5)

where R(x) is a scalar function that selectively penalizes certain undesired features in reconstructed images. β is a parameter that controls the degree of regularization. Ffilter is an N×N matrix that represents the post-filtering operator. In this study, we used a quadratic roughness penalty function as defined by [19]:

R(\mathbf{x}) = \sum_{j} \frac{1}{2} \sum_{k} w_{jk}\, \phi(x_j - x_k),   (6)

where wjk's are the weighting factors that are non-zero for the pairs of immediate neighbors, and

\phi(\theta) = \frac{\theta^2}{2}.   (7)

Note that the computation approach developed below would allow one to use many other regularization functions.
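To make the forward model of Eqs. (1)–(7) concrete, the short Python sketch below simulates Poisson projection data for a small random surrogate system matrix and evaluates the log-likelihood and the quadratic roughness penalty. The dimensions, the matrix A and the object x are placeholders chosen for illustration only; they are not the system simulated in Sec. II.G.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative only): N voxels, M detector bins, total imaging time T (s).
N, M, T = 50, 200, 480.0

A = rng.uniform(0.0, 1e-3, size=(M, N))    # surrogate system matrix A, Eq. (2)
x = rng.uniform(0.0, 10.0, size=N)         # "true" voxel intensities

p_bar = A @ x                              # mean projection per unit imaging time, Eq. (2)
y_bar = T * p_bar                          # expected counts, Eq. (4)
y = rng.poisson(y_bar)                     # independent Poisson measurements, Eq. (1)

# Poisson log-likelihood of Eq. (3), omitting the x-independent log(y!) term.
log_likelihood = np.sum(y * np.log(y_bar) - y_bar)

# Quadratic roughness penalty of Eqs. (6)-(7) with a 1-D immediate-neighbour weighting:
# each unordered neighbour pair contributes (x_j - x_k)^2 / 2.
R_x = 0.5 * np.sum((x[1:] - x[:-1]) ** 2)

print(log_likelihood, R_x)
```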

B. Direct System Optimization

Suppose an imaging system is characterized by a total of S system parameters, q = [q1, q2, …, qS]. These parameters could include detector positions, aperture positions, detector intrinsic resolution, the locations of pinholes on each aperture, and many more. One could then formulate system optimization as

\mathbf{q}_{optimum} = \arg\max_{\mathbf{q} = [q_1, q_2, \ldots, q_S]} \left[ \Omega(\mathbf{q}) \right].   (8)

The system performance index Ω is typically a scalar or a vector function of q. It is chosen to represent the effectiveness of the system in fulfilling a given imaging task or class of tasks. For example, one could choose to optimize an imaging system for estimation tasks, based on performance indices such as spatial resolution [18, 19], image variance [20, 21] and their tradeoffs [22–25], and/or the accuracy of ROI quantification [28–35]. One could also optimize a system for detection tasks based on the receiver-operating characteristics (ROC) of computer or human observers [29–31].

Unfortunately, the inter-relationships between these performance indices and the system parameters q are often highly complex. This makes it challenging to derive an efficient algorithm for searching through the parameter space. A brute-force approach may work for small-scale optimization problems, but it is typically impractical for identifying the best system configuration from a large number of possible candidates. A general computation approach facilitating direct system optimization has yet to be developed.

C. An Indirect Approach for SPECT System Optimization

In this work, we have developed an indirect approach for optimizing a SPECT system design or optimizing the sampling strategy for use with variable imaging hardware. To illustrate this approach, let us consider a SPECT imaging system described as follows:

  • (a) A SPECT imaging system consists of L sub-detection-systems that can acquire data simultaneously. For example, a SPECT system may have 16 camera-heads working simultaneously, so L = 16.

  • (b) Each sub-detection-system has a finite number of possible configurations: sub-system 1 has l1 options, sub-system 2 has l2 options, and so on, with Σi li = K. For example, the first camera could be coupled to an aperture that can be placed at one of l1 different locations.

  • (c) Each sub-detection-system with a unique configuration is defined as a virtual detector. The above-mentioned SPECT system can therefore be considered as having K virtual detectors.

  • (d) For an imaging study with a finite imaging time T, we assume that all K virtual detectors can be used either sequentially or simultaneously. Each virtual detector is used for a finite duration t(k). The distribution of imaging time across all virtual detectors is constrained by the specific imaging protocol. For example, the sum of the imaging times assigned to all virtual detectors associated with any given sub-detection-system should equal T.

With these notations, one could turn the problem of optimizing the corresponding system design or sampling strategy into the task of finding the best imaging time distribution (ITD) across all the virtual detectors,

\mathbf{t}_{opt} \equiv \left[ t_{opt}^{(1)}, t_{opt}^{(2)}, \ldots, t_{opt}^{(K)} \right] = \arg\max_{\mathbf{t} = [t^{(1)}, t^{(2)}, \ldots, t^{(K)}],\ \text{under certain constraints}} \left[ \Omega(\mathbf{t}) \right].   (9)

This approach can be used both to optimize the sampling strategy for variable SPECT hardware and to optimize the design of stationary SPECT systems. For a SPECT system equipped with imaging hardware that can be adjusted during an imaging study, this approach naturally identifies the best combination of possible system configurations to be used sequentially to obtain the optimum imaging performance. Optimizing the design parameters of a stationary SPECT system can be treated as a special case of (9). For this purpose, one could add extra constraints to (9) to ensure that the system geometry remains constant during an imaging study. In Sec. III, we have included simulation studies to demonstrate the application of this approach to both of these optimization problems.

Furthermore, the indirect optimization approach (9) could offer a greatly improved computational efficiency over the brute-force direct optimization approach (8). To illustrate this aspect, consider a SPECT system with L gamma cameras viewing the object simultaneously. Each camera could be placed at M possible positions, and during an imaging study each camera is allowed to move between the M positions. To make an exhaustive search possible, we further assume that the imaging time a camera spends at each of the possible positions must take one of N discrete values. For this setup, if one were to use a brute-force search to find the best sampling strategy, there would be a total of (N^M)^L possible candidates to evaluate. With N = M = L = 10, the total number of possible sampling strategies would be 10^100. Considering the computational load of evaluating Ω(q) for each possible sampling strategy and the vast number of possibilities to be explored, it would be practically impossible to use a brute-force approach to solve such an optimization problem.

By contrast, to optimize the same system with the indirect optimization approach (9), we would introduce a total of K = L × M virtual detectors, and therefore L × M imaging times, t = [t^(1), t^(2), …, t^(K)]^T, as the parameters in (9). This simplification allowed us to develop an efficient search algorithm, or to use existing non-linear minimization algorithms, to find the optimum sampling strategy with a relatively small computational load. In the simulation studies described in Sec. II.G and III.A–C, we were able to solve similar optimization problems for a pinhole SPECT system (with L = 16 cameras, M = 8 positions and therefore 128 virtual detectors) using a single CPU and a running time of a few hours.
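The counting argument above can be written out in a few lines of Python; the sketch below simply reproduces the hypothetical N = M = L = 10 example and enumerates the virtual detectors as (camera, position) pairs.

```python
# Parameter-count comparison from Sec. II.C, using the hypothetical numbers
# N = M = L = 10 quoted in the text.
N, M, L = 10, 10, 10        # time levels, positions per camera, cameras

brute_force_candidates = (N ** M) ** L    # exhaustive search: (N^M)^L = 10^100 candidates
indirect_parameters = L * M               # indirect approach: K = L*M = 100 imaging times

# Each virtual detector is one (camera, position) pair; the unknowns of (9) are
# the non-negative imaging times t[k] assigned to these pairs.
virtual_detectors = [(camera, position) for camera in range(L) for position in range(M)]

print(brute_force_candidates)                         # 10**100
print(len(virtual_detectors), indirect_parameters)    # 100 100
```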

D. Synthesized Fisher Information Matrix (FIM)

One of the keys to implementing the proposed system optimization strategy (9) is being able to evaluate the Fisher information matrix (FIM) for each sampling strategy characterized by a given ITD, t = [t^(1), t^(2), …, t^(K)]^T, across all possible virtual detectors. In the following text, the corresponding Fisher information matrix will be referred to as the synthesized FIM.

Suppose the imaging time spent by the k'th virtual detector is t^(k); the mean projection on each virtual detector is then given by:

\bar{\mathbf{y}}^{(k)} = t^{(k)}\,\bar{\mathbf{p}}^{(k)}, \qquad \text{and} \qquad \bar{\mathbf{p}}^{(k)} = \mathbf{A}^{(k)}\mathbf{x}, \qquad k = 1, \ldots, K.   (10)

Note that the vectors and matrices with superscript (k) correspond to an imaging study that uses the k'th virtual detector only. The synthesized Fisher information matrix (FIM) corresponding to an ITD, t, is given by [37],

J_{ij}(\mathbf{t}) = E\left[ -\frac{\partial^2}{\partial x_i \partial x_j} L(\mathbf{x},\mathbf{y},\mathbf{t}) \right].   (11)

E[·] is the expectation operator. We could further define an elementary FIM that is associated with the measurement acquired by the k'th virtual detector and with a unit imaging time only,

\mathbf{J}^{(k)}(0) = \left( \mathbf{A}^{(k)} \right)^T \operatorname{diag}\left\{ \frac{1}{\bar{\mathbf{p}}^{(k)}} \right\} \mathbf{A}^{(k)},   (12)

where A(k) is the response function of the k'th virtual detector. The synthesized FIM corresponding to any given ITD is given by [16],

\mathbf{J} = \sum_{k=1}^{K} \mathbf{J}^{(k)} = \sum_{k=1}^{K} t^{(k)}\, \mathbf{J}^{(k)}(0),   (13)

which is a linear combination of elementary FIMs pre-computed for each virtual detector. Therefore, the synthesized FIM can be obtained readily during a system optimization process.
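The following Python sketch mirrors the structure of Eqs. (10)–(13): it pre-computes an elementary FIM for each virtual detector and then synthesizes the FIM for an arbitrary ITD as a weighted sum. The response matrices A^(k), the object x and the ITD are random placeholders for illustration, not the system of Sec. II.G.

```python
import numpy as np

rng = np.random.default_rng(1)

N, M_pix, K = 30, 80, 8        # voxels, detector pixels per virtual detector, virtual detectors

# Surrogate response matrices A^(k) and a surrogate object x (placeholders only).
A_k = [rng.uniform(0.0, 1e-3, size=(M_pix, N)) for _ in range(K)]
x = rng.uniform(1.0, 10.0, size=N)

def elementary_fim(A, x):
    """Elementary FIM of Eq. (12) for one virtual detector with unit imaging time."""
    p_bar = A @ x                          # mean projection per unit time, Eq. (10)
    return A.T @ (A / p_bar[:, None])      # A^T diag(1 / p_bar) A

J0 = [elementary_fim(A, x) for A in A_k]   # pre-computed once per virtual detector

# Synthesized FIM of Eq. (13): a linear combination of the elementary FIMs,
# weighted by the imaging time distribution t (seconds per virtual detector).
t = rng.uniform(0.0, 60.0, size=K)
J = sum(tk * J0k for tk, J0k in zip(t, J0))
print(J.shape)                             # (N, N)
```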

E. Resolution-Variance Tradeoffs as a Performance Index

In this study, we have chosen to optimize an imaging system (or the sampling strategy for a given system) using the resolution-variance tradeoff criterion: the optimum system should minimize the average variance across an arbitrarily chosen subset of pixels, while delivering a desired spatial resolution. This approach has been developed and applied extensively in the design and optimization of various SPECT imaging systems [11, 16, 22–25, 38]. To facilitate the use of resolution-variance tradeoffs for system optimization, Hero et al. developed the uniform Cramer-Rao bounds (UCRB) [39–41]. For a given imaging system configuration, the UCRBs set lower bounds on the imaging variance attainable with any estimator that satisfies certain constraints applied to the bias-gradient function. We have extended this idea to introduce a modified uniform Cramer-Rao bound (MUCRB) that sets a lower bound on the attainable variance for estimators satisfying a constraint applied directly to the gradient of the mean estimator (referred to as the mean-gradient in the following text) [23, 24]. In this section, we provide a brief introduction to several key aspects of the implementation of the MUCRB approach.

The spatial resolution properties corresponding to a given estimator (or reconstruction algorithm) \hat{\mathbf{x}} can be quantified by the mean-gradient matrix defined as

\mathbf{G}(\mathbf{x}) = \frac{\partial E(\hat{\mathbf{x}})}{\partial \mathbf{x}} = \begin{bmatrix} \dfrac{\partial E[\hat{x}_1]}{\partial x_1} & \dfrac{\partial E[\hat{x}_1]}{\partial x_2} & \cdots & \dfrac{\partial E[\hat{x}_1]}{\partial x_N} \\ \dfrac{\partial E[\hat{x}_2]}{\partial x_1} & \dfrac{\partial E[\hat{x}_2]}{\partial x_2} & \cdots & \dfrac{\partial E[\hat{x}_2]}{\partial x_N} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial E[\hat{x}_N]}{\partial x_1} & \dfrac{\partial E[\hat{x}_N]}{\partial x_2} & \cdots & \dfrac{\partial E[\hat{x}_N]}{\partial x_N} \end{bmatrix}.   (14)

Each row of G is the mean-gradient vector of the estimator \hat{\mathbf{x}} at a given voxel. Each column of G is the corresponding local-impulse response (LIR) function, or point-spread function (PSF), as defined through equation (2) in [19]. Furthermore, from definition (14), the mean-gradient vector and the LIR corresponding to the same pixel are closely related, and would become identical for a shift-invariant imaging system. Therefore, one could use the mean-gradient vectors, as a substitute for the local-impulse response (LIR) functions, to represent the imaging properties across the object space.

Since our interest is to compare the resolution-variance tradeoffs attainable with several system configurations, we have designed a way to “enforce” these system configurations to deliver similar spatial resolution properties. The first step is to define a target spatial resolution function represented by an N-by-N matrix F. Similar to the mean-gradient matrix G defined in (14), each column of F is defined as the desired (or target) local-impulse response function around a given pixel.

We have previously demonstrated in [24] that if a system configuration delivers a spatial resolution function G that satisfies the following inequality,

s = \operatorname{trace}\left[ \mathbf{W}(\mathbf{G}-\mathbf{F})\,\mathbf{C}\,(\mathbf{G}-\mathbf{F})^T \mathbf{W}^T \right] \le \gamma,   (15)

the total variance, V_total, across a group of selected pixels will be bounded by the following scalar function,

V_{total} \ge \operatorname{trace}\left\{ \mathbf{W}\mathbf{F} \left[ \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right) + \beta\mathbf{R} \right]^{-1} \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right) \left[ \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right) + \beta\mathbf{R} \right]^{-1} \mathbf{F}^T \mathbf{W}^T \right\}.   (16)

In (15) and (16), we have introduced a positive-definite matrix C and another non-negative-definite matrix W to allow different weightings for calculating s. For example, W could be defined as a diagonal matrix with 1's on those diagonal entries corresponding to the voxels-of-interest, and with all zeros otherwise. In this case, (16) could provide a lower bound on the total variance across any arbitrarily chosen sub-set of pixels specified with matrix W.

Any efficient estimator \hat{\mathbf{x}}_{eff} that achieves the minimum total variance given in (16) must have the following mean-gradient matrix [24],

\mathbf{G}(\mathbf{x}) = \mathbf{F} \left[ \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right) + \beta\mathbf{R} \right]^{-1} \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right).   (17)

Note that reducing the value of β in (17) allows the mean-gradient matrix G to approach the corresponding target resolution function F. An example of the increasing similarity between a selected pair of row-vectors of G and F, with decreasing β, is given in Fig. 5 of [23].

Fig. 5. A comparison of convergence rates obtained with different algorithms using the block-iterative approach. These algorithms led to virtually identical final ITDs, shown in the lower-right pie-chart in Fig. 6.

Furthermore, if an estimator has a mean-gradient matrix given by (17), the covariance matrix associated with this estimator must be bounded by,

\operatorname{Cov}[\hat{\mathbf{x}}] \ge \mathbf{F} \left[ \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right) + \beta\mathbf{R} \right]^{-1} \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right) \left[ \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right) + \beta\mathbf{R} \right]^{-1} \mathbf{F}^T.   (18)

(17) and (18) were derived in our previous work detailed in [24]. The actual lower bound shown on the right-hand side of (18) is often used as an approximation of the minimum image variance attainable with the given system configuration.

Using (17) and (18), one can evaluate the lower bound on variance and the resolution properties attainable with a given system configuration. One can further derive the tradeoff between resolution and variance as a performance index for comparing different system configurations. To help implement this approach, we provide a few remarks as follows:

  • Achievability of the MUCRBs with practical image reconstruction algorithms. For linear Gaussian problems, where the projection y is a collection of independent Gaussian random variables and the mean projection \bar{y} is related to the underlying image function x by (10), the image variance given by (16) and (18) can be achieved exactly by a post-filtered penalized weighted least-squares (PF-PWLS) estimator using a quadratic penalty function with a small regularization term β and a post-filter function defined exactly as F [23, 24]. If the individual elements of the projection y are independent Poisson variables, then V_min can be achieved asymptotically using the post-filtered penalized maximum-likelihood reconstruction algorithm (5) with a small regularization term β and a post-filter function defined exactly as F. Further detail regarding the achievability of the lower bound on variance can be found in [23, 24].

  • Target resolution function F. Since each column of F, by definition, is the target local-impulse function around a given pixel, it is up to the user to construct F with each of its columns reflecting the desired resolution functions across the entire object space.

  • Choosing a reasonable value for β requires some practical consideration. The primary purpose of including β is to make (J + βR) positive-definite, and therefore to ensure the existence of the inverse (J + βR)−1. This argument favors a large β. On the other hand, we would like to keep β relatively small, so that the resultant mean-gradient matrix G stays sufficiently close to the target resolution function F. In practice, we chose β through pilot studies. With a given resolution function defined by F, we test several values of β in (17). In each case, we examine the similarity between the resultant matrix G and F, either by plotting corresponding columns of G and F in the same plot, or by calculating the squared error between the corresponding columns. We then choose the largest β that still allows G to be closely approximated by F (a small numerical sketch of this pilot check is given after this list).

  • The amount of computation required to evaluate the total variance across multiple pixels increases almost linearly with the number of pixels considered. In this study, we used the recipe, previously developed and presented in Sec. II.A (Eq. 27) in [24], for evaluating the total variance across multiple pixels.
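As an illustration of the pilot procedure described above, the following Python sketch evaluates the mean-gradient matrix of Eq. (17) and the covariance bound of Eq. (18) for a few trial values of β, using small placeholder matrices in place of the elementary FIMs, the target resolution function F and the penalty Hessian R. Only the structure of the computation is intended to be representative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 30, 8                                # voxels, virtual detectors (toy sizes)

# Placeholder inputs: diagonal elementary FIMs, an arbitrary ITD, an identity-like
# target resolution F, and the Hessian R of a 1-D quadratic roughness penalty.
J0 = [np.diag(rng.uniform(1.0, 5.0, size=N)) for _ in range(K)]
t = rng.uniform(0.0, 60.0, size=K)
F = np.eye(N)
R = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

J = sum(tk * J0k for tk, J0k in zip(t, J0))        # synthesized FIM, Eq. (13)

for beta in (1e-3, 1e-2, 1e-1):
    H_inv = np.linalg.inv(J + beta * R)
    G = F @ H_inv @ J                              # mean-gradient matrix, Eq. (17)
    Cov = F @ H_inv @ J @ H_inv @ F.T              # covariance lower bound, Eq. (18)
    mismatch = np.sum((G - F) ** 2)                # pilot check: how far is G from F?
    print(beta, mismatch, np.mean(np.diag(Cov)))   # mean pixel-wise variance bound, cf. Eq. (19)
```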

F. A Block-Iterative Approach with Successive-Parameter-Elimination (SPE) Procedure

In this work, we have developed a computationally efficient algorithm for optimizing the system performance index Ω against the imaging time distribution, t = [t^(1), t^(2), …, t^(K)]^T. This method is based on a search algorithm that we previously developed in [16].

For the sake of simplicity, we have chosen to use the pixel-wise image variance Var[x^j] as a performance index, which could be approximated as

\operatorname{Var}[\hat{x}_j] \approx \mathbf{e}_j^T \mathbf{F} \left[ \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right) + \beta\mathbf{R} \right]^{-1} \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right) \left[ \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right) + \beta\mathbf{R} \right]^{-1} \mathbf{F}^T \mathbf{e}_j,   (19)

where ej is the j'th unit vector. Plugging (19) into (9), the corresponding system optimization task could be formulated as,

\mathbf{t}_{opt} \equiv \left[ t_{opt}^{(1)}, t_{opt}^{(2)}, \ldots, t_{opt}^{(K)} \right] = \arg\min_{\mathbf{t} = [t^{(1)}, t^{(2)}, \ldots, t^{(K)}]} \left\{ \operatorname{Var}[\hat{x}_j] \right\} \approx \arg\min_{\mathbf{t}} \left\{ \mathbf{e}_j^T \mathbf{F} \left[ \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right) + \beta\mathbf{R} \right]^{-1} \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right) \left[ \left( \sum_{k=1}^{K} t^{(k)} \mathbf{J}^{(k)}(0) \right) + \beta\mathbf{R} \right]^{-1} \mathbf{F}^T \mathbf{e}_j \right\}.   (20)

Note that similar algorithms could be developed to optimize the imaging system against many other performance indices, such as the average variance over a collection of pixels, or the variance of the estimated tracer uptake in a region-of-interest (ROI). These results are detailed in [16].

Note that in the minimization problem (20), the objective function Var[\hat{x}_j] is a function of t only. The other components, including J^(k)(0), R, F, e_j and β, are treated as constants during the optimization process. This highly simplified expression is the key that allows us to derive a gradient-based search algorithm for finding the optimum system configuration.

To facilitate the variance minimization process (20), we have designed a block-iterative approach as described below. We first divide all the imaging times into multiple subsets as follows:

\mathbf{t} = \left\{ t^{(s,l)} \right\}, \qquad s = 1, \ldots, S, \qquad l = 1, \ldots, L_s,   (21)

where s = 1, 2, …, S is the index of the individual subsets and l = 1, 2, …, Ls is the index of the virtual detectors within each subset. Within each iteration, the imaging times are updated one subset at a time. While updating the imaging times in a subset, all other imaging times are temporarily kept constant. The corresponding optimization process is illustrated in Fig. 1. t0 is a user-defined parameter that controls the step size during the update process. Note that the proposed algorithm can also accommodate a similar iterative approach without the use of subsets.

Fig. 1. The block-iterative variance minimization, (B)IVM, algorithm with successive parameter elimination (SPE). t0 is a user-defined parameter that controls the step size during the update process. tc is a threshold value: if an imaging time falls below tc as a result of the update process, it is set to 0 and eliminated from the parameter list.

There are two primary reasons for introducing the block-iterative scheme. First, it is a potential way to speed up the convergence rate. To verify this potential benefit, we have carried out a series of simulation studies, and the results are shown in Sec. III.A. Second, while running the proposed algorithm, different definitions of the subsets lead to different pathways for the estimated parameter values to move through the parameter space. Therefore, experimenting with different definitions of the subsets allows us to study the robustness of the iterative variance minimization approach, by examining how much the final result is affected by the choice of subsets.

The simplicity of (19) allowed us to derive the partial derivative of the image (co)variance with respect to t^(k). It is given by [16],

\frac{\partial}{\partial t^{(k)}} \operatorname{Cov}(\hat{\mathbf{x}}) \approx \mathbf{F} \left\{ [\mathbf{J}+\beta\mathbf{R}]^{-1} \mathbf{J}^{(k)}(0) [\mathbf{J}+\beta\mathbf{R}]^{-1} - 2\,[\mathbf{J}+\beta\mathbf{R}]^{-1} \mathbf{J}^{(k)}(0) [\mathbf{J}+\beta\mathbf{R}]^{-1} \mathbf{J} [\mathbf{J}+\beta\mathbf{R}]^{-1} \right\} \mathbf{F}^T.   (22)

Similarly, the partial derivative of the pixel-wise variance is

\frac{\partial}{\partial t^{(k)}} \operatorname{Var}[\hat{x}_j(\mathbf{t})] \approx -\,\mathbf{e}_j^T \mathbf{F} [\mathbf{J}+\beta\mathbf{R}]^{-1} \mathbf{J}^{(k)}(0) [\mathbf{J}+\beta\mathbf{R}]^{-1} \mathbf{F}^T \mathbf{e}_j.   (23)

The non-negativity of the imaging times is ensured by the following two procedures. First, if a given imaging time tj would be set to a negative value, we artificially prevent this by setting tj to one-half of its current (positive) value. If this procedure leads to a positive tj that is smaller than a pre-set threshold tc, we set it to zero and immediately remove this parameter from the minimization process (20). This gradual reduction in the number of parameters (imaging times) is referred to as successive parameter elimination (SPE). The above-mentioned SPE process will be referred to as SPE-I. We have also experimented with a more aggressive rule for parameter elimination: with this rule, any parameter that would be set negative by the update procedure is immediately removed from the minimization process. The corresponding SPE process will be referred to as SPE-II in the following text.

The SPE schemes have another important function: they are a necessary procedure for the algorithm to converge to the minimum variance value. According to the algorithm shown in Fig. 1, each individual parameter (the imaging time assigned to a virtual detector) affects the update process through the corresponding partial derivative (of the variance with respect to the given parameter) rather than through its actual value. If an imaging time tj is approaching zero and the corresponding virtual detector is known to contribute hardly any useful imaging information, then tj should be removed from the parameter list. Otherwise, the inclusion of tj could still influence the outcome of future iterations through the partial derivative of the image variance with respect to tj. This could prevent the algorithm from finding the true variance-minimizing imaging time distribution. We have included several numerical examples later in Sec. III.A.
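A compact Python sketch of this scheme is given below. It implements a plain gradient update of the pixel-wise variance with the SPE-I elimination rule, using the exact derivative of Eq. (19) (cf. Eqs. (22)–(23)). It is a simplified reading of Fig. 1: the constraint that the imaging times of each camera sum to T, and the details of the step-size control, are deliberately omitted, and all inputs are assumed to be supplied by the caller.

```python
import numpy as np

def synthesized_fim(t, J0):
    """Synthesized FIM of Eq. (13) for the imaging time distribution t."""
    return sum(tk * J0k for tk, J0k in zip(t, J0))

def pixel_variance(t, J0, F, R, beta, j):
    """Approximate pixel-wise variance of Eq. (19) at voxel j."""
    J = synthesized_fim(t, J0)
    row = F[j] @ np.linalg.inv(J + beta * R)       # e_j^T F [J + beta R]^{-1}
    return float(row @ J @ row)

def pixel_variance_gradient(t, J0, F, R, beta, j):
    """Exact gradient of Eq. (19) with respect to each t[k]; cf. Eqs. (22)-(23)."""
    J = synthesized_fim(t, J0)
    H_inv = np.linalg.inv(J + beta * R)
    row = F[j] @ H_inv
    return np.array([row @ J0k @ row - 2.0 * (row @ J0k @ H_inv @ J @ row) for J0k in J0])

def bivm_spe(t_init, J0, F, R, beta, j, subsets, t0=1.0, tc=1.0, n_iter=50):
    """Block-iterative variance minimization with SPE-I (simplified sketch of Fig. 1)."""
    t = np.asarray(t_init, dtype=float).copy()
    active = np.ones(len(t), dtype=bool)           # parameters still in the parameter list
    for _ in range(n_iter):
        for subset in subsets:                     # update one subset at a time
            g = pixel_variance_gradient(t, J0, F, R, beta, j)
            for k in subset:
                if not active[k]:
                    continue
                step = t[k] - t0 * g[k]            # gradient step with step size t0
                if step < 0.0:                     # SPE-I: halve instead of going negative
                    step = 0.5 * t[k]
                if step < tc:                      # below threshold tc: eliminate parameter
                    step, active[k] = 0.0, False
                t[k] = step
    return t
```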

In this study, we have also cross-validated the block-iterative variance minimization ((B)IVM) approach (Fig. 1) against a few other constrained minimization algorithms included in the Matlab optimization toolbox [49]. The Matlab routine used in this study is called <fmincon>. It allows one to incorporate the non-negativity constraint and to select from several algorithms. Among these, we have chosen to use the sqp and interior-point algorithms, both of which guarantee the non-negativity of the parameters at each iteration. Since the Matlab-sourced sqp and interior-point algorithms do not allow a block-iterative implementation, we have manually implemented this scheme as follows. The 16 detectors were divided into n (4 or 16 in this study) groups (subsets). Each group has 16/n detectors, and each detector has 8 possible positions. This gives a total of 16/n×8 virtual detectors within each group. We use the existing Matlab routine to update the imaging times one subset at a time and for a single iteration only. While processing the current subset, the imaging times in the other subsets are kept constant, but they influence the update of the imaging times in the current subset through the partial derivatives given in (22) and (23). Once the current subset is processed, we gather all the imaging times (including the ones just updated) to re-evaluate the partial derivatives for the next subset, and then move on to update the imaging times in that subset. This process is repeated until all subsets are updated, which marks the end of one iteration. This sequence is exactly the same for the sqp, interior-point and (B)IVM algorithms. In this study, all the computation was performed on a single CPU on the same PC to ensure a fair comparison.

G. Simulation Study

We have conducted a simulation study to evaluate the indirect system optimization approach outlined by Eqs. (9) and (20) and the block-iterative variance minimization ((B)IVM) algorithm shown in Fig. 1. In this study, SPECT systems are optimized to provide the minimum imaging variance at a target pixel (Var[\hat{x}_j] in Eq. 20), while offering a user-specified imaging resolution around the same pixel. The user-specified imaging resolution is given by the matrix F, as defined for (17)–(20). Each of its columns represents a spatially shifted 3-D Gaussian function of 480 μm FWHM.
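For reference, one column of such a target resolution matrix F could be constructed as in the sketch below; the voxel-coordinate grid, the unit-sum normalization and the function name are illustrative assumptions rather than the exact construction used in the study.

```python
import numpy as np

def gaussian_lir_column(center_mm, grid_mm, fwhm_mm=0.48):
    """One column of a target resolution matrix F: a 3-D Gaussian local-impulse
    response of the given FWHM, centred at `center_mm`, sampled at the voxel-centre
    coordinates `grid_mm` (shape (N, 3), in mm) and normalised to unit sum."""
    sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> standard deviation
    d2 = np.sum((np.asarray(grid_mm) - np.asarray(center_mm)) ** 2, axis=1)
    col = np.exp(-0.5 * d2 / sigma ** 2)
    return col / col.sum()
```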

In this study, we simulated a non-rotating SPECT system that consists of 16 camera-heads arranged in a closely packed ring with a diameter of 11.76 cm (the distance between the front surfaces of two opposite detectors). Each detector has 128×128 pixels of 0.175 × 0.175 mm². The detector was assumed to be infinitely thin but to have 100% detection efficiency, so its effective spatial resolution is limited by the pixel size. Each detector is coupled to a collimator with a single pinhole of 300 μm diameter. The pinhole has a knife-edge profile with 90-degree opening angles on both sides, and the aperture was made of a tungsten sheet of 6 mm thickness. The simulation included the penetration of gamma rays through the knife-edge, but photon scattering and attenuation through the object were ignored. The object is a sphere of 30 mm diameter with a uniform tracer uptake; the total activity in the object is 1 mCi.

The simulated SPECT system has a variable aperture system. During an imaging study, the pinholes can be moved between the detector and the object to allow different magnification ratios and angular coverage. The maximum aperture-to-center distance was 39.4 mm, which ensures that the entire object is fully projected onto the corresponding detector. The minimum distance was 18.6 mm, in which case the pinhole is pushed as close to the object as practically possible without touching it. This offers the largest magnification for a spherical region of 10 mm diameter located at the center of the field-of-view (FOV). During an imaging study, each pinhole can be moved between these two extreme positions in a total of eight equally spaced steps. The object space is divided into 127×127×127 voxels of 0.24×0.24×0.24 mm³. We have carried out several similar studies, with the target pixels located at 0 mm, 4.8 mm and 9.6 mm away from the center of the object. The system and phantom are shown in Fig. 2.

Fig. 2. Simulated system and object geometry. Each detector is coupled to a single pinhole that can be placed at one of eight positions. The two extreme pinhole positions are chosen to cover either the entire object or the central target area of 5 mm in diameter.

To utilize the proposed system optimization approach, we defined each camera with its pinhole at each of the eight possible positions as a distinct virtual detector. This leads to a total of 128 (16×8) virtual detectors in the system. Since the simulated SPECT system has 16 detectors operating at the same time, the total time spent by each detector, shared among all eight pinhole locations, is equal to the total imaging time T = 480 seconds. The threshold tc used in the update process (Fig. 1) was 1 second, unless specified otherwise. As part of this study, we have compared three subset configurations. For the system with 16 gamma cameras, we experimented with 1, 4 and 16 subsets. The imaging times associated with the same gamma camera are always included in the same subset. When using 4 subsets, we chose to pack the imaging times corresponding to cameras 1, 5, 9 and 13 into subset 1, the imaging times for cameras 2, 6, 10 and 14 into subset 2, and so on.
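The bookkeeping just described (virtual-detector indexing, the per-camera total-time constraint, and the 4-subset grouping) can be written down directly; the sketch below uses 0-based indices, so cameras 1, 5, 9, 13 of the text become cameras 0, 4, 8, 12.

```python
import numpy as np

N_CAMERAS, N_POSITIONS, T_TOTAL = 16, 8, 480.0      # simulated system of Sec. II.G

def vd_index(camera, position):
    """Virtual-detector index of `camera` with its pinhole at `position` (0-based)."""
    return camera * N_POSITIONS + position

# Uniform starting ITD: each camera's total time split equally over its 8 positions.
t = np.full(N_CAMERAS * N_POSITIONS, T_TOTAL / N_POSITIONS)

# Constraint from Sec. II.C(d): the times assigned to one camera must sum to T.
for c in range(N_CAMERAS):
    camera_block = t[vd_index(c, 0):vd_index(c, 0) + N_POSITIONS]
    assert np.isclose(camera_block.sum(), T_TOTAL)

# Four-subset grouping used in this study (cameras {0,4,8,12} -> subset 0, etc.).
subsets = [[vd_index(c, p) for c in range(s, N_CAMERAS, 4) for p in range(N_POSITIONS)]
           for s in range(4)]
print(len(t), [len(sub) for sub in subsets])         # 128 [32, 32, 32, 32]
```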

To further reduce the amount of computation involved, we have applied a non-uniform object-space pixelation (NUOP) approach that we previously developed in [38]. This approach allows one to represent the object space with a dramatically reduced overall pixel count, and therefore to reduce the amount of computation involved in image reconstruction and in evaluating the uniform Cramer-Rao bound. In this previous work, we demonstrated that the NUOP approach can preserve the image quality in regions-of-interest arbitrarily specified by the user, provided that the rebinning process does not combine pixels with dramatically different values [38]. In practice, this condition could be met approximately by using the information from a short scout imaging study. We have also studied the impact of the non-uniform pixel rebinning on the evaluation of the modified uniform Cramer-Rao bounds (MUCRB), and demonstrated that the rebinning process does not necessarily lead to significant errors in the system performance predicted by the MUCRB calculations. With this approach, the object space is represented by pixels of different sizes, according to their distance from the selected point-of-interest. A pixel size of 0.24 mm was used in a spherical region of 4 mm diameter centered around the point-of-interest, a pixel size of 0.96 mm was used in the region that is more than 2 mm but less than 5 mm from the target pixel, and for the region more than 5 mm away from the target pixel the effective pixel size was increased to 3.84 mm. With the rebinning process outlined above, the total number of unknown pixels shrinks to around 8000 (from ~2M) and the computation load is greatly reduced. Therefore, it would be possible to perform the optimization in real-time in a PC-based parallel computing environment.
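The distance-dependent voxel-size rule quoted above can be transcribed literally as follows; the function name is ours, and the bookkeeping of actually rebinning the voxel grid is omitted.

```python
def nuop_pixel_size(r_mm):
    """Non-uniform object-space pixelation rule used in this study:
    voxel size (mm) as a function of distance r_mm from the point-of-interest."""
    if r_mm <= 2.0:        # inside the 4 mm diameter sphere around the POI
        return 0.24
    elif r_mm <= 5.0:      # between 2 mm and 5 mm from the POI
        return 0.96
    else:                  # more than 5 mm away from the POI
        return 3.84

# Example: voxel sizes sampled along a radial line through the POI.
print([nuop_pixel_size(r) for r in (0.0, 2.0, 4.0, 6.0, 12.0)])   # [0.24, 0.24, 0.96, 3.84, 3.84]
```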

III. RESULTS

A. Evaluation of the (B)IVM Algorithm

In Fig. 3, we compared the convergence behavior of the (B)IVM algorithm with and without the successive parameter elimination (SPE) procedure. In this study, the imaging times assigned to the 128 virtual detectors were divided into four subsets. As discussed in Sec. II.F, eliminating non-contributing parameters is a necessary procedure for the algorithm to converge to lower variance values. This is evident from the results shown in Fig. 3. The use of the more aggressive elimination rule, SPE-II, led to a faster convergence rate, although both SPE-I and SPE-II produced identical final variance values after a sufficient number of iterations. Furthermore, when comparing the top and bottom panels of Fig. 3, one sees that although the imaging variance converges rapidly, the corresponding imaging time distribution (ITD) settles down at a much slower pace. These results indicate that there could be many different ITDs leading to virtually identical imaging variance.

Fig. 3. Top panel: The reduction of the pixel-wise variance with three algorithms. Since these algorithms require different amounts of computation per iteration, we plotted the resultant image variances against the computation time taken by a single CPU to loop through multiple iterations. The iterative approach shown in Fig. 1 leads to monotonically decreasing variance values. Bottom panel: The imaging times assigned to all 128 virtual detectors as functions of computation time. These results are obtained with 4 subsets.

To further validate the (B)IVM algorithm, we have compared it against two Matlab-sourced non-linear minimization algorithms on the same optimization problem. In this study, all algorithms were implemented without subsets. The results are shown in Fig. 4. The (B)IVM algorithm was slightly slower to converge, but with a sufficiently large number of iterations, all three algorithms produced virtually identical image variance values and optimum ITDs, which are shown in the upper and lower panels of Fig. 4.

Fig. 4. Top panel: Variance versus computation time curves obtained with three different algorithms. Bottom panel: A comparison between the optimum ITDs across the 128 virtual detectors derived with two different algorithms. These results are obtained without subsets.

Fig. 5 shows the results from a similar comparative study, but with different subset implementations added to all three algorithms. For the iterative variance minimization algorithm shown in Fig. 1, the block-iterative approach did lead to a faster convergence rate. For this simple system optimization problem, the (B)IVM algorithm with 4 subsets led to the fastest convergence among all configurations compared. It took a single iteration to achieve a 90% variance reduction; by comparison, achieving the same fraction of variance reduction with the no-subset implementation required 8 iterations. However, the convergence rate did not increase monotonically with the number of subsets. The use of 16 subsets led to a slower convergence rate than the use of only 4 subsets. This is partially due to the extra computation involved in evaluating the variance gradient values using (22) and (23): regardless of how many partial derivatives are to be updated, one needs to invert [J + βR] once per subset update (see Eq. 22), so 16 subsets require inverting the matrix 16 times per iteration, whereas only four inversions are needed with 4 subsets. Interestingly, the same subset-based block-iterative scheme did not speed up the convergence of the Matlab-sourced sqp and interior-point algorithms. In practice, achieving a 90% variance reduction would probably suffice for most system optimization tasks. The proposed (B)IVM algorithm with the subset implementation could therefore offer a rapid computational approach for solving the variance minimization problem (20) and identifying the optimum system design or sampling strategy for SPECT imaging.

B. Optimization of the Sampling Strategy with a Variable SPECT Detection System

To demonstrate the effectiveness of the proposed approach, we have carried out a series of simulation studies based on the simple system optimization problem outlined in Sec. II.D. In this study, we have chosen three target pixels in the object, located at the center, 4.8 mm off the center and 9.6 mm off the center, respectively. The object is surrounded by 16 detectors. Each detector is coupled to a single pinhole that can be positioned at any of 8 possible radial positions, with the pinhole-to-detector distance ranging from 18.6 mm to 39.4 mm, as shown in Fig. 2.

Fig. 6 shows the imaging time distributions (ITD) derived using the block-iterative variance minimization ((B)IVM) algorithm as a function of the number of iterations. For this study, the pixel-of-interest is only slightly off the center, by 120 μm (half a pixel). The starting guess was to have the total imaging time T divided equally among all 8 possible pinhole positions. The ITDs identified with the (B)IVM algorithm generally agree with our expectation: all pinholes should be placed as close to the object as possible (Position 1 in Fig. 2) throughout the entire data acquisition. This geometry provides the maximum sensitivity and magnification ratio and therefore delivers the best imaging performance for the target pixel.

Fig. 6. Imaging time distribution (ITD) obtained with different numbers of iterations. The target pixel is located at the center of the object, as indicated by the black dot. The solid circles outline the boundary of the object. The pie-charts around the object show the imaging times assigned to all possible pinhole locations after different numbers of iterations. The pie-chart at the lower right corner represents the "optimum" ITD identified with the (B)IVM algorithm and the SPE-II approach, which leads to the minimum image variance of 1.54 × 10^9 at the target pixel. The open circles in the pie-charts represent "average" pinhole positions weighted by the ITDs at the individual angles.

Note that since the pixel-of-interest (POI) is slightly off the center (by the same 0.5 pixel in the X, Y and Z directions), the resultant ITD is not symmetric across all angles. It assigns small fractions of imaging time to several pinhole positions further away from the object (as detailed in Table I). We have numerically verified that the corresponding image variance (1.54 × 10^9) is indeed smaller than the variance value (1.58 × 10^9) corresponding to the "conceptually ideal" ITD in which all pinholes stay as close to the object as possible at all times. In principle, using multiple pinhole locations at individual angles provides richer angular sampling for the off-center POI and other adjacent pixels, and therefore offers slightly better resolution-variance tradeoffs.

Table I. Optimized ITD for a Pixel-of-Interest at the Center

angle \ position |      1 |   2 |      3 |      4 |   5 |   6 |   7 |   8
-----------------|--------|-----|--------|--------|-----|-----|-----|----
 0               | 480    |   0 |      0 |      0 |   0 |   0 |   0 |   0
 1               | 480    |   0 |      0 |      0 |   0 |   0 |   0 |   0
 2               | 284.41 |   0 |      0 | 195.59 |   0 |   0 |   0 |   0
 3               | 480    |   0 |      0 |      0 |   0 |   0 |   0 |   0
 4               | 462.16 |   0 |  17.84 |      0 |   0 |   0 |   0 |   0
 5               | 480    |   0 |      0 |      0 |   0 |   0 |   0 |   0
 6               | 349.74 |   0 | 111.78 |  18.47 |   0 |   0 |   0 |   0
 7               | 443.72 |   0 |  36.28 |      0 |   0 |   0 |   0 |   0
 8               | 433.69 |   0 |  46.31 |      0 |   0 |   0 |   0 |   0
 9               | 415.45 |   0 |  64.55 |      0 |   0 |   0 |   0 |   0
10               | 468.18 |   0 |      0 |  11.82 |   0 |   0 |   0 |   0
11               | 411.07 |   0 |  68.93 |      0 |   0 |   0 |   0 |   0
12               | 452.31 |   0 |  11.56 |  16.14 |   0 |   0 |   0 |   0
13               | 448.10 |   0 |  31.90 |      0 |   0 |   0 |   0 |   0
14               | 343.35 |   0 |  61.62 |  75.03 |   0 |   0 |   0 |   0
15               | 480    |   0 |      0 |      0 |   0 |   0 |   0 |   0

All imaging times are given in seconds.

To further verify the proposed approach, we moved the point-of-interest (POI) away from the center by 4.8 mm and 9.6 mm, and used the same routine to find the optimum ITD. As shown in Figs. 7 and 8, the optimum pinhole positions at certain angles were pushed away from the object. This is because some pinhole positions are too close to the object and do not allow gamma rays originating from the pixel-of-interest to reach the corresponding detectors.

Fig. 7. Imaging time distribution (ITD) obtained as a function of the computation time. The target pixel is 4.8 mm off the center of the object, as indicated by the black dot. The solid circles outline the boundary of the object, and the pie-charts around the object show the imaging time assigned to each pinhole location. The pie-chart at the lower right corner represents the "optimum" ITD identified with the (B)IVM algorithm and the SPE-II approach.

Fig. 8. Imaging time distribution (ITD) obtained as a function of the computation time. The target pixel is 9.6 mm off the center of the object, as indicated by the black dot. The solid circles outline the boundary of the object, and the pie-charts around the object show the imaging time assigned to each pinhole location. The pie-chart at the lower right corner represents the "optimum" ITD identified with the (B)IVM algorithm and the SPE-II approach.

For this object-known-exactly case, the sequential use of multiple sampling geometries has led to the best imaging performance (more specifically, as measured by the resolution-variance tradeoff criterion). These results indicate that one should use a synthetic sampling scheme, acquiring projection data with multiple sampling geometries. In principle, this sampling scheme provides complementary imaging information that leads to the optimum imaging performance. The idea of synthetic sampling is not new; it has been used or explored in SPECT imaging under a few limited circumstances. One example is the so-called synthetic aperture explored by Wilson et al. in [36]. The analytical formulation and computation approach developed in this work are naturally applicable to future work involving synthetic sampling strategies.

C. Application to Static SPECT System Optimization

In this section, we use a simple simulation study to demonstrate the use of the proposed approach for finding an optimum static SPECT system configuration. In essence, this is a special case of the system optimization problem characterized by (9) and (20), but with added constraints, e.g. that the system geometry remains unchanged during an imaging study. Therefore, as long as one can build this constraint into the iterative variance minimization approach, it is possible to use a similar procedure to find the optimum static system configuration.

We have simulated a simple SPECT system as sketched in Fig. 9. It has essentially the same geometry as shown in Fig. 2, except with only four detectors placed 90 degrees apart. Each detector is coupled to a pinhole that can be placed at one of eight possible positions and remains there throughout the entire imaging study. The use of this simple geometry led to a reduced computation load and allowed us to use the brute-force approach to validate the optimum system configurations identified using the methods developed here.

Fig. 9. Optimum static system configurations identified for six points-of-interest. These configurations were identified using both the IVM algorithm and an exhaustive search (note that the two approaches led to exactly the same results).

To find the best static system geometry, we used the block-iterative approach (shown in Fig. 1) to optimize the pinhole positions one angle at a time. At each angle, the iterative variance minimization approach is used to find the optimum distribution of imaging time across all pinhole positions allowed at that angle. Based on this result, we simply choose the pinhole position with the maximum imaging time as the "best" pinhole position for the angle. The algorithm then moves on to the next angle, and the "best" pinhole positions for previously processed angles are used throughout the upcoming optimization process, until they are further updated by the angle-by-angle optimization scheme.
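This angle-by-angle selection rule can be summarized with the short sketch below. The callable ivm_solver is a hypothetical stand-in for the block-iterative routine of Fig. 1; it is assumed to return the optimum imaging-time distribution over the positions allowed at one angle, with the pinholes at the already-processed angles held fixed.

```python
import numpy as np

def optimize_static_geometry(ivm_solver, n_angles=4, n_positions=8):
    """Angle-by-angle selection of a static configuration (Sec. III.C, simplified).

    `ivm_solver(angle, fixed_positions)` is assumed to return an array of length
    `n_positions` with the optimum imaging times for the pinhole positions at
    `angle`, holding the previously selected positions in `fixed_positions`.
    """
    fixed_positions = {}
    for angle in range(n_angles):
        t_angle = ivm_solver(angle, fixed_positions)         # ITD over this angle's positions
        fixed_positions[angle] = int(np.argmax(t_angle))     # keep the position given the most time
    return fixed_positions
```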

Fig. 9 shows the static system configurations optimized for imaging target pixels located 0 mm, 1.6 mm, 3.2 mm, 4.8 mm, 6.4 mm and 9.6 mm from the center, respectively. In order to validate these results, we have also carried out a series of exhaustive searches to find the true variance-minimizing system configurations. In this study, both approaches produced exactly the same results. This confirms that, for the six cases studied, the IVM algorithm successfully identified the system configurations that minimize the image variance.

In this example, we have used a relatively crude approach to force the update process to assign all the imaging time to a single pinhole position at each angle. Although it worked well for these six cases, there could be many other ways to implement the same constraint with different computational procedures. The topic of building in various constraints, with sufficient numerical and computational efficiency, certainly deserves further exploration. Since this topic depends heavily on the nature of the particular system optimization problem, we leave it to our future studies.

IV. CONCLUSION AND DISCUSSION

In this work, we have developed a general approach for optimizing the design of static SPECT imaging systems, or for optimizing the sampling strategy for use with a variable-geometry SPECT system. This approach has several unique features. First, it offers a framework for optimizing an imaging system against almost any system parameter. Second, the task of system optimization is achieved by discretizing the parameter space and by optimizing the system performance against the imaging times assigned to the individual possible system configurations. This approach has allowed us to derive a series of closed-form equations and an efficient computation scheme that facilitate system optimization with reasonable computing power.

We have carried out an extensive series of studies to validate the proposed block-iterative variance minimization ((B)IVM) algorithm. First, we presented a simulation study in [16] that compared the optimum ITDs identified with the (B)IVM algorithm (without subsets) and the ITDs obtained by brute-force search. Both sets of results matched very well (as shown in Table III in [16]), and the (B)IVM algorithm indeed led to system configurations that minimized the image variance. Second, the results from the (B)IVM algorithm have been cross-validated against the results obtained using the Matlab-sourced algorithms (as shown in Fig. 5). Although the seven algorithm configurations exhibited different convergence rates, they resulted in the same final imaging time distribution, shown in the lower-right panel of Fig. 6. Third, in Sec. III.C, we compared the (B)IVM algorithm and the brute-force approach for optimizing static system configurations. The (B)IVM approach correctly identified the variance-minimizing system configurations for all six source geometries studied.

The indirect system optimization approach developed in this work could certainly benefit from a better understanding of how the performance indices depend on the imaging time distribution. At this point, there are many unanswered questions that are worth future study. These include (a) whether the performance indices used (e.g. image variance) are concave functions of the imaging times; (b) whether the proposed algorithm (or the Matlab-sourced algorithms compared here) can find the global minimum of the image variance; and (c) what the best way is to divide the imaging times into subsets so as to truly minimize the variance and speed up the convergence rate. Unfortunately, the answers to these questions are likely to vary depending on the specific optimization problem. These issues will be explored in our future studies.

Finally, the formulation and computation approaches developed here are based on a comprehensive description of the resolution and covariance properties attainable with various system configurations. These properties could be used to extend the current approach to optimizing imaging systems for other imaging tasks and against other performance measures, such as the bias-variance tradeoffs for region-of-interest (ROI) quantitation, and observer performance for detection tasks. Furthermore, although we have used SPECT imaging as a platform to demonstrate this system optimization strategy, the general framework developed in this effort could certainly be applied to other imaging modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2].

V. References

  • [1]. Moore JW, Barrett HH, Furenlid LR. Adaptive CT for high-resolution, controlled-dose, region-of-interest imaging. pp. 5154–5157.
  • [2]. Freed M, Kupinski MA, Furenlid LR, et al. A prototype instrument for single pinhole small animal adaptive SPECT imaging. Medical Physics. 2008 May;35(5):1912–1925. doi: 10.1118/1.2896072.
  • [3]. Cherry SR. In vivo molecular and genomic imaging: new challenges for imaging physics. Physics in Medicine and Biology. 2004 Feb 7;49(3):R13–R48. doi: 10.1088/0031-9155/49/3/r01.
  • [4]. Meikle SR, Kench P, Kassiou M, et al. Small animal SPECT and its place in the matrix of molecular imaging technologies. Physics in Medicine and Biology. 2005 Nov 21;50(22):R45–R61. doi: 10.1088/0031-9155/50/22/R01.
  • [5]. Kastis GA, Furenlid LR, Wilson DW, et al. Compact CT/SPECT small-animal imaging system. IEEE Transactions on Nuclear Science. 2004 Feb;51(1):63–67. doi: 10.1109/TNS.2004.823337.
  • [6]. Peterson TE, Wilson DW, Barrett HH. Application of silicon strip detectors to small-animal imaging. Nuclear Instruments and Methods in Physics Research Section A. 2003 Jun 1;505(1–2):608–611.
  • [7]. Accorsi R, Curion AS, Frallicciardi P, et al. Preliminary evaluation of the tomographic performance of the mediSPECT small animal imaging system. Nuclear Instruments and Methods in Physics Research Section A. 2007 Feb 1;571(1–2):415–418.
  • [8]. Beekman FJ, Vastenhouw B. Design and simulation of a high-resolution stationary SPECT system for small animals. Physics in Medicine and Biology. 2004 Oct 7;49(19):4579–4592. doi: 10.1088/0031-9155/49/19/009.
  • [9]. Jimenez-Cruz A, Loustaunau-Lopez VM, Bacardi-Gascon M. The use of low glycemic and high satiety index food dishes in Mexico: a low cost approach to prevent and control obesity and diabetes. Nutricion Hospitalaria. 2006 May-Jun;21(3):353–356.
  • [10]. Finck BK, Weissman BNW, Rubenstein JD, et al. 100 micron digitization resolution is optimal for x-rays for a large multicenter trial in rheumatoid arthritis (RA). Arthritis and Rheumatism. 1997 Sep;40(9):1544–1544.
  • [11]. Meng LJ, Clinthorne NH, Skinner S, et al. Design and feasibility study of a single photon emission microscope system for small animal I-125 imaging. IEEE Transactions on Nuclear Science. 2006 Jun;53(3):1168–1178. doi: 10.1109/TNS.2006.871405.
  • [12]. Meng LJ. An intensified EMCCD camera for low energy gamma ray imaging applications. IEEE Transactions on Nuclear Science. 2006 Aug;53(4):2376–2384. doi: 10.1109/TNS.2006.878574.
  • [13]. Meng LJ, Fu G, Roy EJ, et al. An ultrahigh resolution SPECT system for I-125 mouse brain imaging studies. Nuclear Instruments and Methods in Physics Research Section A. 2009 Mar 1;600(2):498–505. doi: 10.1016/j.nima.2008.11.149.
  • [14]. Barrett HH, Furenlid LR, Freed M, et al. Adaptive SPECT. IEEE Transactions on Medical Imaging. 2008 Jun;27(6):775–788. doi: 10.1109/TMI.2007.913241.
  • [15]. Clarkson E, Kupinski MA, Barrett HH, et al. A task-based approach to adaptive and multimodality imaging. Proceedings of the IEEE. 2008 Mar;96(3):500–511. doi: 10.1109/JPROC.2007.913553.
  • [16]. Li N, Meng LJ. Adaptive angular sampling for SPECT imaging. IEEE Transactions on Nuclear Science. 2011 Oct;58(5):2205–2218. doi: 10.1109/TNS.2011.2164935.
  • [17]. Zhou L, Khurd P, Kulkarni S, et al. Aperture optimization in emission imaging using ideal observers for joint detection and localization. Physics in Medicine and Biology. 2008 Apr 21;53(8):2019–2034. doi: 10.1088/0031-9155/53/8/002.
  • [18]. Rentmeester MCM, van der Have F, Beekman FJ. Optimizing multi-pinhole SPECT geometries using an analytical model. Physics in Medicine and Biology. 2007 May 7;52(9):2567–2581. doi: 10.1088/0031-9155/52/9/016.
  • [19]. Fessler JA, Rogers WL. Spatial resolution properties of penalized-likelihood image reconstruction: space-invariant tomographs. IEEE Transactions on Image Processing. 1996 Sep;5(9):1346–1358. doi: 10.1109/83.535846.
  • [20]. Fessler JA. Mean and variance of implicitly defined biased estimators (such as penalized maximum likelihood): applications to tomography. IEEE Transactions on Image Processing. 1996 Mar;5(3):493–506. doi: 10.1109/83.491322.
  • [21]. Qi JY, Leahy RM. Resolution and noise properties of MAP reconstruction for fully 3-D PET. IEEE Transactions on Medical Imaging. 2000 May;19(5):493–506. doi: 10.1109/42.870259.
  • [22]. Meng LJ, Wehe DK. Feasibility study of using hybrid collimation for nuclear environmental imaging. IEEE Transactions on Nuclear Science. 2003 Aug;50(4):1103–1110. doi: 10.1109/TNS.2003.815135.
  • [23]. Meng LJ, Clinthorne NH. A modified uniform Cramer-Rao bound for multiple pinhole aperture design. IEEE Transactions on Medical Imaging. 2004 Jul;23(7):896–902. doi: 10.1109/TMI.2004.828356.
  • [24]. Meng LJ, Li N. A vector uniform Cramer-Rao bound for SPECT system design. IEEE Transactions on Nuclear Science. 2009 Feb;56(1):81–90. doi: 10.1109/TNS.2008.2006609.
  • [25]. Meng LJ, Rogers WL, Clinthorne NH, et al. Feasibility study of Compton scattering enhanced multiple pinhole imager for nuclear medicine. IEEE Transactions on Nuclear Science. 2003 Oct;50(5):1609–1617. doi: 10.1109/NSSMIC.2002.1239548.
  • [26]. Qi JY, Huesman RH. Theoretical study of penalized-likelihood image reconstruction for region of interest quantification. IEEE Transactions on Medical Imaging. 2006 May;25(5):640–648. doi: 10.1109/TMI.2006.873223.
  • [27]. Fu L, Stickel JR, Badawi RD, et al. Quantitative accuracy of penalized-likelihood reconstruction for ROI activity estimation. IEEE Transactions on Nuclear Science. 2009 Feb;56(1):167–172. doi: 10.1109/TNS.2008.2005063.
  • [28]. Gifford HC, Kinahan PE, Lartizien C, et al. Evaluation of multiclass model observers in PET LROC studies. IEEE Transactions on Nuclear Science. 2007 Feb;54(1):116–123. doi: 10.1109/TNS.2006.889163.
  • [29]. Khurd P, Gindi G. Fast LROC analysis of Bayesian reconstructed emission tomographic images using model observers. Physics in Medicine and Biology. 2005 Apr 7;50(7):1519–1532. doi: 10.1088/0031-9155/50/7/014.
  • [30]. Smith WE, Barrett HH. Hotelling trace criterion as a figure of merit for the optimization of imaging systems. Journal of the Optical Society of America A. 1986 May;3(5):717–725.
  • [31]. Barrett HH, Abbey CK, Clarkson E. Objective assessment of image quality. III. ROC metrics, ideal observers, and likelihood-generating functions. Journal of the Optical Society of America A. 1998 Jun;15(6):1520–1535. doi: 10.1364/josaa.15.001520.
  • [32]. Gifford HC, Wells RG, King MA. A comparison of human observer LROC and numerical observer ROC for tumor detection in SPECT images. IEEE Transactions on Nuclear Science. 1999 Aug;46(4):1032–1037.
  • [33]. Qi JY, Huesman RH. Theoretical study of lesion detectability of MAP reconstruction using computer observers. IEEE Transactions on Medical Imaging. 2001 Aug;20(8):815–822. doi: 10.1109/42.938249.
  • [34]. Qi J, Huesman RH. Lesion detectability of MAP reconstruction using computer observer: a theoretical study. Journal of Nuclear Medicine. 2000 May;41(5):18p–19p. doi: 10.1109/42.938249.
  • [35]. Khurd P, Gindi G. Rapid computation of LROC figures of merit using numerical observers (for SPECT/PET reconstruction). IEEE Transactions on Nuclear Science. 2005 Jun;52(3):618–626. doi: 10.1109/TNS.2005.851458.
  • [36]. Wilson DW, Barrett HH, Clarkson EW. Reconstruction of two- and three-dimensional images from synthetic-collimator data. IEEE Transactions on Medical Imaging. 2000 May;19(5):412–422. doi: 10.1109/42.870252.
  • [37]. Wolbarst AB. Foundations of image science. Health Physics. 2004 Jul;87(1):93–93.
  • [38]. Meng LJ, Li N. Non-uniform object-space pixelation (NUOP) for penalized maximum-likelihood image reconstruction for a single photon emission microscope system. IEEE Transactions on Nuclear Science. 2009 Oct;56(5):2777–2788. doi: 10.1109/TNS.2009.2024677.
  • [39]. Hero AO, Antoniadis N, Clinthorne N, et al. Optimal and suboptimal postdetection timing estimators for PET. IEEE Transactions on Nuclear Science. 1990 Apr;37(2):725–729.
  • [40]. Hero AO, Clinthorne NH, Rogers WL. A lower bound on PET timing estimation with pulse pileup. IEEE Transactions on Nuclear Science. 1991 Apr;38(2):709–712.
  • [41]. Hero AO, Fessler JA, Usman M. Exploring estimator bias-variance tradeoffs using the uniform CR bound. IEEE Transactions on Signal Processing. 1996 Aug;44(8):2026–2041.
  • [42]. Fessler JA. Mean and variance of implicitly defined biased estimators (such as penalized maximum likelihood): applications to tomography. IEEE Transactions on Image Processing. 1996;5(3):493–506. doi: 10.1109/83.491322.
  • [43]. Fessler JA, Rogers WL. Spatial resolution properties of penalized-likelihood image reconstruction: space-invariant tomographs. IEEE Transactions on Image Processing. 1996;5(9):1346–1358. doi: 10.1109/83.535846.
  • [44]. Qi J, Huesman RH. Theoretical study of lesion detectability of MAP reconstruction using computer observers. IEEE Transactions on Medical Imaging. 2001 Aug;20(8):815–822. doi: 10.1109/42.938249.
  • [45]. Kulkarni S, Khurd P, Hsiao I, et al. A channelized Hotelling observer study of lesion detection in SPECT MAP reconstruction using anatomical priors. Physics in Medicine and Biology. 2007 Jun 21;52(12):3601–3617. doi: 10.1088/0031-9155/52/12/017.
  • [46]. Barrett HH. Objective assessment of image quality: effects of quantum noise and object variability. Journal of the Optical Society of America A. 1990 Jul;7(7):1266–1278. doi: 10.1364/josaa.7.001266.
  • [47]. Barrett HH, Gooley T, Girodias K, et al. Linear discriminants and image quality. Image and Vision Computing. 1992;10:451–460.
  • [48]. Barrett HH, Myers KJ. Foundations of Image Science. Wiley-Interscience; 2003.
  • [49]. http://www.mathworks.com/products/matlab/
