Author manuscript; available in PMC: 2014 Dec 15.
Published in final edited form as: J Electron Imaging. 2013 Nov 19;22(4):043018. doi: 10.1117/1.JEI.22.4.043018

One-angle fluorescence tomography with in-and-out motion

Gengsheng L Zeng 1
PMCID: PMC4266511  NIHMSID: NIHMS643197  PMID: 25520544

Abstract

Conventional tomography is achieved by acquiring measurements around an object at multiple angles. The possibility of obtaining a fluorescence tomographic image from measurements at only one angle is explored. Instead of rotating around the object, the camera (or its objective lens) moves toward (or away from) the object and takes photographs as the camera’s focal plane passes through the object. The stack of two-dimensional pictures forms a blurred three-dimensional image. The true image can be obtained by deconvolving the system’s point spread function. Simplified computer simulations are carried out to verify the feasibility of the proposed method; they indicate that it is feasible to obtain a tomographic image by using in-and-out motion to acquire data.

1 Introduction

Traditional tomography requires measurements from multiple views. For example, in a two-dimensional (2-D) tomographic problem, every line passing through the object must be measured. This requirement is based on the assumption that no depth information is available in the projection measurements. However, if some depth information is available in the projections, e.g., as in time-of-flight positron emission tomography (PET), it may be possible to obtain the tomographic image from projections at one view angle.1–4

This paper explores the possibility of one-angle fluorescence tomography. In fluorescence imaging, the object is first excited by, e.g., a laser pulse, and fluorescent light is then emitted from sources within the object. After the fluorescent signals are recorded, they can be used to reconstruct an image of the fluorescent sources within the object.5–8 In many small animal fluorescence imaging systems, one or two angular views are sufficient for the task of point-source imaging. For a sophisticated tomographic imaging system, multiple view angles may be required.

In the following section, a rotation-free, one-angle, in-and-out motion is discussed. That is, the imaging system uses only a linear translation (the in-and-out movement); no rotational movement is used. We do not yet see any practical value in this in-and-out imaging system, because a rotational tomographic imaging system can provide more stable tomographic images. The motivation for writing this paper is purely scientific curiosity: to investigate whether it is possible to generate tomographic images by simply moving the camera in and out. Initial computer simulations with a 2-D phantom are provided to demonstrate the feasibility of the proposed imaging approach.

2 Optical Tomography with In-and-Out Camera Motion

In small animal fluorescence imaging, the in-and-out motion is likely to produce useful tomographic images. A simple optical camera is shown in Fig. 1. As the camera (or its objective lens) moves toward (or away from) the object while taking pictures, different depths of the object are photographed. Stacking these photographs yields a “blurred” 3-D image of the object, where the blurring is caused by the off-focal-plane objects. We would like the object in the focal plane to be sharp and the off-focal-plane objects to be as blurred as possible; in other words, we prefer a larger blur-circle diameter c1 for off-focal-plane objects. This is equivalent to having a very narrow depth of field (DoF). The following formulas show which parameters influence the value of c1.

Fig. 1.


A simplified schematic of a camera. Off-focal-plane objects are blurred in the image plane.

c1/d = (v1 − v)/v1,  (1)
d = f/N,  (2)
1/s + 1/v = 1/f,  (3)
1/s1 + 1/v1 = 1/f,  (4)

where f is the lens focal length, d is the aperture diameter, N is the lens f-number, s is the shooting distance to the focal plane (imaged at distance v, where the sensor lies), and s1 is the distance to an off-focal-plane point (imaged at v1), so that c1 is the blur-circle diameter of that point on the sensor. To have a narrower (i.e., shallower) DoF, we would like a larger aperture d, a longer focal length f, and a shorter shooting distance s. The computer simulations in this paper assume that the lens radius d/2 is equal to the focal length f.
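To see these relations concretely, the blur-circle diameter can be evaluated numerically. The following is a minimal Python sketch of Eqs. (1)–(4) under the thin-lens model; the parameter values are illustrative assumptions, not taken from the paper:

```python
def blur_circle(f, N, s, s1):
    """Blur-circle diameter c1 of a point at distance s1 when the camera
    is focused at distance s, per Eqs. (1)-(4) (thin-lens model)."""
    d = f / N                        # aperture diameter, Eq. (2)
    v = 1.0 / (1.0 / f - 1.0 / s)    # sensor plane: image of the focal plane, Eq. (3)
    v1 = 1.0 / (1.0 / f - 1.0 / s1)  # image distance of the off-focal point, Eq. (4)
    return d * abs(v1 - v) / v1      # Eq. (1)

# A wider aperture (smaller f-number N) blurs off-focal-plane points more,
# i.e., yields the shallower depth of field that the method prefers.
wide = blur_circle(f=0.05, N=2.0, s=1.0, s1=1.2)
narrow = blur_circle(f=0.05, N=8.0, s=1.0, s1=1.2)
assert wide > narrow
```

Since c1 is proportional to d = f/N, opening the aperture from f/8 to f/2 enlarges the off-focal-plane blur circle fourfold, which is the shallow-DoF behavior the proposed method relies on.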

The proposed fluorescence imaging setup is illustrated in Fig. 2, in which the camera is positioned on a track, allowing it to take a series of pictures while it moves toward (or away from) the object. Each picture is blurred by the objects that are not in the focal plane.

Fig. 2.


The camera takes a series of pictures as it moves toward (or away from) the object.

The blurred 3-D image volume can be made sharper by deconvolving the image volume with the point spread function (PSF) of the blurring effect during picture taking.
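One concrete way to perform this deconvolution is the Richardson–Lucy algorithm used later in the simulations. Below is a minimal NumPy/SciPy sketch; it is an illustrative re-implementation for a 2-D image, not the MATLAB deconvlucy routine used in the Appendix:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=20):
    """Minimal Richardson-Lucy deconvolution (illustrative sketch)."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    # Start from a flat estimate carrying the mean intensity.
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy check: blur a point source with a Gaussian PSF, then deconvolve.
x = np.exp(-0.5 * (np.arange(7) - 3.0) ** 2)
psf = np.outer(x, x)
true = np.zeros((32, 32)); true[16, 16] = 1.0
blurred = fftconvolve(true, psf / psf.sum(), mode="same")
restored = richardson_lucy(blurred, psf, n_iter=30)
```

After a few tens of iterations on noiseless data, the energy re-concentrates at the point-source location, which is the behavior exploited in Sec. 5.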

3 Rotation versus In-and-Out Motion

In conventional tomography, the projection data are the Radon transform of the object. The Radon transform itself at one angle does not contain any depth information. Thus, measurements at multiple angles are needed to provide the depth information. If enough measurements are available, the original object can be exactly reconstructed. This fact is well established.

On the other hand, our method has a very narrow DoF and the in-and-out movement of the focal plane is able to provide the depth information. Therefore, rotation of the camera is unnecessary. If there is no attenuation, this in-and-out motion imaging problem is well posed, and the image can be exactly reconstructed.

The proposed in-and-out camera motion imaging problem is more ill conditioned than the rotational imaging problem when we consider an attenuating medium. If the point source of interest is located at the far side of the object, the photons from this point source will be significantly attenuated before they can reach the camera. If the camera can rotate, after a 180-deg half-circle rotation, the point source of interest is at the near side of the object, and the attenuation effect is significantly reduced. Therefore, the rotational camera motion has an advantage over the in-and-out camera motion when photons are propagating in an attenuating medium.
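The asymmetry is easy to quantify. Using the attenuation coefficient from the simulations (μ = 0.05 per pixel) and a disk about 205 pixels across, and assuming illustrative path lengths of 10 pixels for a near-side source versus nearly the full diameter for a far-side source (these depths are assumptions, not values from the paper):

```python
import math

mu = 0.05          # attenuation coefficient per pixel (value from the paper)
diameter = 204.8   # attenuating disk diameter in pixels (radius 102.4)

# Assumed path lengths through the medium for the two source positions.
near = math.exp(-mu * 10.0)               # near-side source, short path
far = math.exp(-mu * (diameter - 10.0))   # far-side source, long path
ratio = near / far
```

The far-side source is attenuated roughly four orders of magnitude more than the near-side source, which is why a 180-deg rotation, when available, helps so much.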

We must point out that, in general, moving the camera in and out does not provide sufficient data for tomography. For example, in single photon emission computed tomography (SPECT), moving the cone-beam collimator in and out provides only limited-angle measurements, which are not sufficient for exact 3-D image reconstruction.

If we compare a SPECT (or an x-ray CT) system with our proposed optical system with in-and-out motion, our optical system uses a lens, whereas the SPECT (or x-ray CT) system does not. The working principles of these two types of imaging systems are different. Moving the lens-based optical camera in and out is equivalent to moving the collimator-based nonoptical camera in two independent movements: an in-and-out movement and a movement orthogonal to the in-and-out motion. If a SPECT (or x-ray CT) system used this 2-D motion scheme, a tomographic image could be obtained from its measurements. However, this 2-D motion scheme is much more difficult to implement than rotational motion, and we have not seen any applications in which the SPECT (or x-ray CT) system is not allowed to rotate.

4 Computer Simulation Setup

The computer simulations in this paper consider a 2-D object that consists of three dots of the same intensity, as illustrated in Fig. 3. Each dot is a small 2-D disk with a radius of 7.7 pixels. The camera takes a series of one-dimensional (1-D) pictures. The focal length of the lens is the same as the radius of the lens. Poisson noise is added to the measurements. One potential application in optical tomography is small animal fluorescence imaging, a common tool for locating tumors inside a small animal (e.g., a mouse). The tumors are point-source-like objects; we chose a three-point-source phantom for our computer simulations with the application of small animal cancer imaging in mind.

Fig. 3.


The emission source contains three identical dots.

In the first computer simulation, there is no attenuation/scattering medium. In the second computer simulation, a large uniform circular attenuation/scattering medium is assumed (see Fig. 4). This uniform disk has a radius of 102.4 pixels. The attenuation coefficient is μ = 0.05 per pixel, and the scattering effect is approximated as a 2-D Gaussian PSF with a standard deviation of σ = 5 pixels.
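This setup can be sketched in Python as follows. The disk radii and the attenuation coefficient match the values above; the dot positions are illustrative assumptions (the actual offsets are set in the MATLAB code in the Appendix):

```python
import numpy as np

N = 256
y, x = np.mgrid[0:N, 0:N]
cy = cx = N / 2

# Three identical small disks (radius 7.7 pixels); positions are illustrative.
phantom = np.zeros((N, N))
for (dy, dx) in [(0, 0), (40, 40), (0, -80)]:
    phantom[(y - (cy + dy)) ** 2 + (x - (cx + dx)) ** 2 <= 7.7 ** 2] = 10.0

# Uniform circular attenuator, radius 102.4 pixels, mu = 0.05 per pixel.
mu = 0.05
attenuator = ((y - cy) ** 2 + (x - cx) ** 2 <= 102.4 ** 2).astype(float)
# Attenuation factor accumulated along the horizontal viewing direction.
atten_factor = np.exp(-mu * np.cumsum(attenuator, axis=1))
attenuated = phantom * atten_factor
```

The cumulative sum along the viewing axis plays the role of the line integral of μ, so `atten_factor` is 1 in front of the disk and decays exponentially through it.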

Fig. 4.


The uniform circular attenuation/scattering medium.

In solving an inverse problem, one must have a model for the forward problem. However, this forward model should not be exactly the same as the one used to generate the numerical data. One commits an “inverse crime” if one solves an inverse problem using exactly the same numerical (forward) model that generated the simulated raw data. To avoid this inverse crime, data acquisition and image processing use different image sizes in this paper.

The 2-D image array is 256 × 256 during the data acquisition procedure. After stacking up a series of 1-D pictures, a blurred 256 × 256 raw image is obtained. This raw image is then binned down to 128 × 128, and the PSF used to model the acquisition blurring is binned down accordingly. The binned-down PSF is used in the Lucy–Richardson image deblurring method.9,10 Our MATLAB® code with the computer simulation details is given in the Appendix. During image restoration, the Poisson noise is suppressed by 2-D lowpass filtering with a Gaussian kernel with a standard deviation of 2 pixels.
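The binning step reduces each 2 × 2 block of pixels to its average; a minimal sketch (the paper's code uses MATLAB's imresize, so this is an illustrative equivalent, not the same routine):

```python
import numpy as np

def bin_down(img, factor=2):
    """Bin an image down by averaging factor x factor blocks."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

raw = np.arange(256 * 256, dtype=float).reshape(256, 256)
small = bin_down(raw)  # 128 x 128
assert small.shape == (128, 128)
```

Applying the same binning to the PSF keeps the forward model used in restoration consistent with the binned data, while the change of grid size avoids the inverse crime discussed above.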

5 Computer Simulation Results

In the attenuation-free and scattering-free case, the raw image suffers only from the blurring caused by the off-focal-plane objects, as shown in Fig. 5. After applying 20 iterations of the Lucy–Richardson method, the original image can be restored (see Fig. 6).

Fig. 5.


The raw image in the attenuation/scattering-free case is blurred only by the off-focal-plane objects.

Fig. 6.


The image restored from Fig. 5 by the Lucy–Richardson method.

When the emission photons travel through the attenuation/scattering medium, the raw image suffers from attenuation and scattering as well as the blurring caused by the off-focal-plane objects, as shown in Fig. 7. After compensating for the attenuation and deconvolving the PSFs of both the scattering and the off-focal-plane blurring, the final restored image is shown in Fig. 8. The deconvolution for the scattering effect uses 15 iterations of the Lucy–Richardson method, and the deconvolution for the off-focal-plane blurring uses 20 iterations. The three dots can be identified in the restored image (Fig. 8).
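The attenuation compensation amounts to dividing the deblurred image by the precomputed attenuation factor map. A simplified 1-D Python sketch follows; the floor value is an assumption added here to avoid amplifying noise where the factor is tiny, and is not specified in the paper:

```python
import numpy as np

# Toy 1-D attenuation factor profile, mu = 0.05 per pixel over 64 pixels.
atten_factor = np.exp(-0.05 * np.arange(64, dtype=float))
measured = 3.0 * atten_factor  # toy measurement of a uniform source

# Divide by the attenuation factor, with an assumed floor for stability.
restored = measured / np.maximum(atten_factor, 1e-6)
```

In this noiseless toy case the uniform source value 3.0 is recovered exactly; with noise, the division strongly amplifies errors on the far side, which reflects the ill-conditioning discussed in Sec. 3.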

Fig. 7.


The blurred raw image in the case of a uniform attenuation/scattering medium is affected by attenuation, scattering, and off-focal-plane object blurring.

Fig. 8.


The image restored from Fig. 7 by the Lucy–Richardson method with attenuation compensation.

6 Discussion and Conclusion

State-of-the-art tomographic imaging systems use a rotational scanning approach, in which the camera rotates around the object while making measurements. One-angle imaging with a linear in-and-out camera movement has not been used before. One disadvantage of this one-angle system is that it is extremely ill conditioned, especially when photons travel in attenuating media. There may be applications in which only one camera can be used and the camera is not allowed to rotate; to date, we are not aware of any imaging situation with these restrictions. This paper merely presents a feasibility study of this new tomography methodology. Hopefully, some applications may emerge in the future that require a tomographic imaging method using only one-angle, in-and-out camera motion.

The working principle of our method is quite different from that of the wavefront coding methods.11–14 The goal of the wavefront coding methods is to apply a phase mask to “code” the photon rays. The coding is insensitive to depth information, and only one image is taken. After digital decoding, the resultant image has an increased DoF.

On the other hand, our imaging system prefers a shallow DoF, the shallower the better. The in-and-out motion scans the object in a slice-by-slice manner, with the slices parallel to the camera. Multiple images are acquired at different distances, each focused on one slice of the object. The images are not coded.

It is well known that tomographic images can be obtained by rotating the camera around the object. The reason we restrict the camera to one angle (while allowing it to move in and out) is purely scientific curiosity. The goal of this paper is to determine whether it is possible to obtain a tomographic image without changing the camera’s viewing angle.

Computer simulations demonstrate that this data acquisition scheme is feasible. Our future plan is to identify an optical tomography application in which camera rotation is not allowed. Once the application is identified, the proposed method will be further developed and tailored toward the application.

In both rotational tomography and in-and-out tomography, separation of the overlapped information is achieved in two steps: backprojection and deconvolution. The main effect that makes rotational tomography more favorable than in-and-out tomography is photon attenuation within the object; without attenuation, the two methods are comparable in terms of ill conditioning. The purpose of this paper is not to promote the in-and-out method, because the rotational method is generally more stable and more robust when the imaging situation is not ideal (e.g., when there is attenuation). The purpose is to answer a scientific question: is it possible to obtain a tomographic image by moving the camera in and out, regardless of its usefulness? The answer is positive.

Biography

Gengsheng L. Zeng has a BS degree in applied mathematics from Xidian University, China, and MS and PhD degrees in electrical engineering from the University of New Mexico, Albuquerque, New Mexico. He was a tenured full professor at the University of Utah before coming to Weber State University in 2013. His research includes medical imaging and homeland security. He has published 130 peer-reviewed journal papers and holds 20 U.S. patents. He is an IEEE fellow and the author of the textbook Medical Image Reconstruction.

Appendix

% This MATLAB code simulates in-and-out motion optical tomography
% July 12, 2013, Larry Zeng
clear all
% Define an emission phantom PE and an attenuator PA
E = [10 0.03 0.03 0 0 0; 10 0.03 0.03 0.156125 0.156125 0; 10 0.03 0.03 -0.3125 0 0];
A = [1 0.4 0.4 0 0 0];
N = 256; % Acquisition image size
PE = phantom(E, N);
PA = phantom(A, N);
figure(1), imshow(PE, [])
figure(2), imshow(PA, [])
% Define a Gaussian scattering kernel G
G = fspecial('gaussian', [63, 63], 10);
PG = conv2(PE, G, 'same') .* PA; % Scattered object
figure(3), imshow(PG, [])
% Define an attenuation factor map PACF
mu = 0.05;
% mu = 0.0;
PA = mu * PA;
PAC = cumsum(PA, 2);
PACF = exp(-PAC);
figure(4), imshow(PACF, [])
% Scattered and attenuated phantom
PGA = PG .* PACF;
figure(5), imshow(PGA, [])
% Define a 2D convolver (2M + 1) x (2M + 1) for camera blurring
M = 128;
M1 = M + 1;
MM = 2 * M + 1;
H = zeros(MM, MM);
for n = 0:M
    for nn = -n:n
        H(nn + M1, n + M1) = 1 / (2.0 * n + 1.0);
        H(nn + M1, -n + M1) = 1 / (2.0 * n + 1.0);
    end
end
% ---- Case 2: Attenuated and scattered ----
Img = conv2(PGA, H);
% ---- Case 1: Ideal ----
% Img = conv2(PE, H);
% Add Poisson noise
max(max(Img))
Img = 100 * Img * 10^(-12);
Img = imnoise(Img, 'poisson');
Img = Img * 10^12;
figure(6), imshow(Img, [])
% ---------------- Image Restoration ----------------
% Scale everything down to avoid the inverse crime
Img2 = imresize(Img, 0.5);
% Noise suppression
ker = fspecial('gaussian', [23, 23], 2);
Img2 = conv2(Img2, ker, 'same');
% Deblur the camera effect
M = 60;
M1 = M + 1;
MM = 2 * M + 1;
H2 = zeros(MM, MM);
for n = 0:M
    for nn = -n:n
        H2(nn + M1, n + M1) = 1 / (2.0 * n + 1.0);
        H2(nn + M1, -n + M1) = 1 / (2.0 * n + 1.0);
    end
end
Img3 = deconvlucy(Img2, H2, 20);
figure(7), imshow(Img3, [])
% Compensate for attenuation
PA2 = imresize(PA, 0.5);
PA2 = 2 * PA2;
PAC2 = cumsum(PA2, 2);
PACF2 = exp(-PAC2);
PACF2 = padarray(PACF2, [64 64], 1);
Img4 = Img3 ./ PACF2;
figure(8), imshow(Img4, [])
% Correct for scattering blurring
G2 = fspecial('gaussian', [43, 43], 5); % Scattering kernel
PA2 = padarray(PA2, [64 64]);
Img4 = Img4 .* PA2;
ImgOut = deconvlucy(Img4, G2, 15);
figure(9), imshow(ImgOut, [])

References

1. Brownell GL, et al. New developments in positron scintigraphy and the application of cyclotron-produced positron emitters. In: Medical Radioisotope Scintigraphy, Proc. Symp., Vol. 1, Salzburg. Vienna: I.A.E.A.; 1969. p. 163.
2. Anger HO. Survey of radioisotope camera. Trans ISA. 1966;5:311–334.
3. Budinger TF. Time-of-flight positron emission tomography: status relative to conventional PET. J Nucl Med. 1983;24(1):73–78.
4. Rezaei A, et al. Simultaneous reconstruction of activity and attenuation in time-of-flight PET. IEEE Trans Med Imaging. 2012;31(12):2224–2233. doi: 10.1109/TMI.2012.2212719.
5. Klose AK, Hielscher AH. Fluorescence tomography with the equation of radiative transfer for molecular imaging. Opt Lett. 2003;28(12):1019–1021. doi: 10.1364/ol.28.001019.
6. Klose AK, Hielscher AH. Optical fluorescence tomography with the equation of radiative transfer for molecular imaging. Proc SPIE. 2003;4955:219–225.
7. Stuker F, et al. Hybrid small animal imaging system combining magnetic resonance imaging with fluorescence tomography using single photon avalanche diode detectors. IEEE Trans Med Imaging. 2011;30(6):1265–1273. doi: 10.1109/TMI.2011.2112669.
8. Sarasa-Renedo A, et al. Source intensity profile in noncontact optical tomography. Opt Lett. 2010;35(1):34–36. doi: 10.1364/OL.35.000034.
9. Biggs DSC. Acceleration of iterative image restoration algorithms. Appl Opt. 1997;36(8):1766–1775. doi: 10.1364/ao.36.001766.
10. Hanisch RJ, White RL, Gilliland RL. Deconvolution of Hubble space telescope images and spectra. In: Jansson PA, editor. Deconvolution of Images and Spectra. Boston, Massachusetts: Academic Press; 1997. pp. 310–356.
11. Carles G, Carnicer A, Bosch S. Phase mask selection in wavefront coding systems: a design approach. Opt Lasers Eng. 2010;48(7–8):779–785.
12. Hui B, et al. The simulation of wavefront coding imaging systems. In: Proc. 2007 IEEE Int. Conf. Integration Technology, Shenzhen, China; 2007. pp. 711–713.
13. Yang QG, Sun JF, Liu LR. Phase-space analysis of wavefront coding imaging systems. Chin Phys Lett. 2006;23(8):2080–2083.
14. Carles G, et al. Design and implementation of a scene-dependent dynamically selfadaptable wavefront coding imaging system. Comput Phys Commun. 2012;183(1):147–154.
