2011 Aug 4;34(3):375–389. doi: 10.1007/s13246-011-0089-x

Neural network algorithm for image reconstruction using the “grid-friendly” projections

Robert Cierniak
PMCID: PMC3183276  PMID: 21814824

Abstract

This paper describes the development of an original approach to the reconstruction problem using a recurrent neural network. In particular, the “grid-friendly” angles of the performed projections are selected according to the discrete Radon transform (DRT) concept, to decrease the number of projections required. The methodology of our approach is consistent with analytical reconstruction algorithms. In our approach, the reconstruction problem is reformulated as an optimization problem, which is solved using a method based on the maximum likelihood methodology. The reconstruction algorithm proposed in this work is subsequently adapted to the more practical case of discrete fan-beam projections. Computer simulation results show that the neural network reconstruction algorithm designed in this way outperforms conventional methods in reconstructed image quality.

Keywords: Medical imaging, Computed tomography, Image reconstruction from projections, Neural network

Introduction

X-ray computerized tomography (CT) remains the most popular and the most widespread tomography method used in medicine. The tomographic images are obtained by applying a method of projection acquisition and an appropriate image reconstruction algorithm. The key problem arising in computerized tomography is image reconstruction from projections, which are received from an X-ray scanner of a given geometry. There are several well-known reconstruction methods to solve this problem. The most popular reconstruction algorithms are methods using convolution and back-projection [1–3] and the algebraic reconstruction technique (ART) [4–7]. Besides those methods, there exist some alternative reconstruction techniques. The most worthy of emphasis seem to be neural network-based algorithms. Neural networks are used in different implementations, for example in image processing [8–10], and in particular in computerized tomography. Reconstruction algorithms based on supervised neural networks have been presented in various papers, for example [11–14]. Other structures, representing the so-called algebraic approach to image reconstruction from projections and based on recurrent neural networks, have been studied by several authors [15–17]. Their approach can be characterized as a one-dimensional signal reconstruction problem. In this case, the main disadvantage is the extremely large number of variables arising during the calculations: the computational complexity of the reconstruction process in that approach is proportional to the square of the image size.

In this paper, an original approach to the reconstruction problem will be developed, based on the transformation methodology introduced previously [18, 19]. The most important improvement of our reconstruction method, in comparison with those publications, is an adaptation of the discrete Radon transform (DRT) concept (see e.g. [20, 21]) in a fully original way. This methodology provides the so-called “grid-friendly” angles at which the projections are performed. Because this concept limits the number of projections performed during the investigation, we develop our approach further and provide a new idea: a modified (extended) “grid-friendly” methodology which lifts that limitation. In this paper, we present a comparison of the equiangular sampling procedure with the “grid-friendly” and the modified “grid-friendly” methodologies for specifying the projection angles. In this way, we can decrease the artifacts in the reconstructed image without any cost: the geometry of the tomographic scanner does not need to be changed, and the reconstruction algorithm needs only some small reformulations. It is worth emphasising that these reformulations do not make the algorithm more computationally complex.

In our approach, a recurrent neural network [22] is proposed to design the reconstruction algorithm. Owing to the 2D methodology of the image processing in our approach, we significantly decreased the complexity of the tomographic reconstruction problem. The applied recurrent neural network proposed to solve the reconstruction problem is designed in a fully analytical way. We show how all parameters of this network can be obtained, in particular the weights of the network, and what roles these parameters play. The calculations of these weights will be carried out only once before the principal part of the reconstruction process is started. Additionally, because the number of neurons in the network does not depend on the resolution of the projections performed earlier, we can quite freely modulate the number of projections carried out.

The reconstruction method presented in this paper, originally formulated by the author, can consequently be applied to the fan-beam scanner geometry of the tomography device, as is described later in this work.

The paper is organized as follows. The reconstruction method for parallel beams is presented in the “Neural network reconstruction algorithm” section; its subsections depict the selection of the projection angles (“Parallel beam collection” section), the back-projection operation (“Back-projection operation” section) and the reconstruction using a recurrent neural network (“Reconstruction using a recurrent neural network” section). The “Fan-beam reconstruction algorithm” section adapts the method to the fan-beam scanner geometry. The “Experimental results” section describes the performance of the computer simulations and presents the most important results, and the “Conclusions” section closes the paper.

Neural network reconstruction algorithm

The image processing procedure in our reconstruction method resembles one of the transformation algorithms—the ρ-filtered layergram method [2]. In our approach, instead of 2D filtering, we implemented a recurrent neural network. This network performs the function of an “energy pump”, which carries out the reconstruction process from the blurred image obtained after the back-projection operation. The principal idea of the presented reconstruction method using the recurrent neural network is shown in Fig. 1, where the rather theoretical parallel beam geometry of the scanner is taken into consideration.

Fig. 1.

Neural network image reconstruction algorithm using parallel beams

Parallel beam collection

Only a limited number of parallel projection values p̂ p(s, α pψ) are chosen for further processing. Firstly, we determine the values of the angles α pψ. Let p̂ p(l, ψ) denote the discrete values of parallel projections taken at angles indexed by the variable ψ = 0, 1, …, Ψ − 1. In our approach, according to the concept of the discrete Radon transform (DRT) [20, 21], we propose only “grid-friendly” angles of parallel projections. The motivation for this approach is a better adjustment of the rays in the parallel beam crossing the discrete image to the grid of pixels in this image: at every angle of projection, every ray crosses at least one pixel. In this case, we propose Ψ = 2I, where Ψ is the number of projections and I is the size of the processed image. Considering the above condition, the discrete values of the parameter α p are as follows:

(1)

The proposed distribution of the projection angles is approximately equiangular over a half rotation, as is clearly depicted in Fig. 2.
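Since the printed form of Eq. 1 is not reproduced in this extraction, the sketch below illustrates one standard DRT-style construction of “grid-friendly” angles; the slope formula 2t/I and the function name are assumptions of this illustration, not the paper’s exact formula.

```python
import numpy as np

def grid_friendly_angles(I):
    """Hypothetical DRT-style construction of 'grid-friendly' angles for an
    I x I image: slopes 2t/I give angles in [-pi/4, pi/4), and the
    complementary (transposed-grid) directions cover the rest of the half
    rotation, yielding 2*I approximately equiangular projection angles."""
    t = np.arange(-I // 2, I // 2)           # I integer slopes
    base = np.arctan(2.0 * t / I)            # angles in [-pi/4, pi/4)
    transposed = np.pi / 2 - base            # angles in (pi/4, 3*pi/4]
    return np.sort(np.concatenate([base, transposed]))

angles = grid_friendly_angles(128)           # 256 angles for a half rotation
```

Note how the tangent of every angle in the first group is a ratio of small integers to the image size, so each ray follows the pixel grid closely.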

Fig. 2.

The choice of “grid-friendly” parallel projection angles

The number of “grid-friendly” projection angles is strictly limited: it is equal to 256 for a half rotation around the investigated object and 512 for the full rotation. One can introduce a certain modification of the above approach to avoid this limitation, by multiplying the value 256 (512) by a factor k. We expand Eq. 1 into the following form

(2)

Alternatively, as a comparative case, we can choose an equiangular set of parallel projections taken at angles indexed by the variable ψ e = 0, 1, …, Ψ e − 1, where Ψ e is the number of projections. In this simplified case, the discrete angles of projections are given by the following relationship

α eψe = ψ e · Δ α    (3)

where Δ α is the angle, given in radians, by which the tube–screen pair is rotated after each projection.
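For this comparative case, Eq. 3 amounts to a uniform grid of angles; a minimal sketch, assuming a half rotation of π divided evenly among Ψ e projections:

```python
import numpy as np

# Eq. 3 as an explicit grid: the psi_e-th angle is psi_e * delta_alpha,
# assuming a half rotation of pi radians divided evenly among Psi_e projections.
Psi_e = 256
delta_alpha = np.pi / Psi_e
alpha_e = np.arange(Psi_e) * delta_alpha     # psi_e = 0, 1, ..., Psi_e - 1
```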

The topological differences between both concepts of projection angle determination are depicted in Fig. 3 for the case of a reconstructed image having a resolution 5 × 5 (only the rays lying on the symmetry lines of given projections are depicted).

Fig. 3.

Topology of “grid-friendly” parallel projection angles (a) and equiangular positioning of the parallel beam scanner (b)

Now we determine a uniform sampling on the screen at points s l, l = 1, 2, …, L, where L is an odd number of virtual detectors, from the projection obtained at angle α pψ. It is easy to calculate the distance of each parallel ray from the origin of the (x,y) space if these detectors are placed symmetrically on the screen. The distance is given by

s l = (l − (L + 1)/2) · Δ s,   l = 1, 2, …, L    (4)

where Δ s is the sample interval of the virtual projections on the screen. Taking into consideration the sampled parameters s and α p of the parallel projections, we can write

p̂ p(l, ψ) = p̂ p(s l, α pψ)    (5)

In this way, we obtain all the virtual parallel projections p̂ p(l, ψ) given on the grid (s l, α pψ) (or alternatively (s l, α eψe)), which will be used in the following steps of the reconstruction procedure.
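The symmetric detector sampling of Eq. 4 can be sketched as follows; the concrete values of L and Δ s below are illustrative assumptions only (the paper’s experiments use about 170 detectors):

```python
import numpy as np

# Detector coordinates of Eq. 4, assuming L odd detectors centred symmetrically
# on the virtual screen; L and delta_s are illustrative values, not the paper's.
L = 171                       # odd number of virtual detectors
delta_s = 2.0 / L             # hypothetical sample interval (unit field of view)
l = np.arange(1, L + 1)
s = (l - (L + 1) / 2) * delta_s   # symmetric: s[0] = -s[-1], middle detector at 0
```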

Back-projection operation

After the next step of our reconstruction algorithm for parallel beams, namely the back-projection operation [1, 2], we obtain a blurred image which can be expressed by the following formula

μ̃(x, y) = ∫ 0 π p̂ p(s, α p) dα p,   s = x · cos α p + y · sin α p    (6)

Because we have only a limited number of the virtual parallel projection values, it is necessary to apply interpolation. In this case, a projection value mapped to a certain point (x,y) of the reconstructed image is given by the equation

p̂ p(s, α pψ) = Δ s · Σ l p̂ p(l, ψ) · Int(s − s l)    (7)

where Int(Δs) is an interpolation function and s = x · cos α pψ + y · sin α pψ.

In the presented method, we consider the discrete forms of the images μ(x, y) and μ̃(x, y). That means these continuous functions of the images will be substituted by their discrete equivalents μ(i, j) and μ̃(i, j), respectively, where i = 1, 2, …, I; j = 1, 2, …, J; I and J are the numbers of pixels in the horizontal and vertical directions. Thus, the discrete approximation of Eq. 7 is given by the expression

p̂ p(s ij, α pψ) = Δ s · Σ l p̂ p(l, ψ) · Int(s ij − s l)    (8)

which is convenient from a computational point of view. In (8), Int(Δs) is an interpolation function and s ij = i · cos α pψ + j · sin α pψ. If we use the linear interpolation function [2]

Int(Δs) = (1/Δ s) · (1 − |Δs|/Δ s) for |Δs| ≤ Δ s;  Int(Δs) = 0 otherwise    (9)

Eq. 8 has only two terms and can be reformulated as [1]

p̂ p(s ij, α pψ) = p̂ p(⌊s ij⌋, ψ) · (⌊s ij⌋ + 1 − s ij) + p̂ p(⌊s ij⌋ + 1, ψ) · (s ij − ⌊s ij⌋)    (10)

where ⌊s ij⌋ is the highest integer value less than the value of the variable s ij.

In practice, only a limited number of projections are performed. In particular, if we use the “grid-friendly” methodology, projections are performed at angles α pψ, ψ = 0, 1, …, Ψ − 1, where Ψ = 2I (I—size of the processed image), and we can approximate the integration over the angle α p by a finite sum. In consequence, Eq. 6 takes the following form

μ̃(i, j) ≅ (π/Ψ) · Σ ψ=0…Ψ−1 p̂ p(s ij, α pψ)    (11)

where s ij is defined as before and the angles α pψ are given by Eq. 1. It is a very similar case if we use the modified “grid-friendly” set of projection angles specified by Eq. 2, that is

μ̃(i, j) ≅ (π/(k·Ψ)) · Σ ψ=0…kΨ−1 p̂ p(s ij, α pψ)    (12)

Alternatively, in the case of the equiangular approach, we perform projections at angles α eψe, ψ e = 0, 1, …, Ψ e − 1, and we can approximate the integration in Eq. 6 over the angle α p as follows

μ̃(i, j) ≅ Δ α · Σ ψe=0…Ψe−1 p̂ p(s ij, α eψe)    (13)

The discrete image μ̃(i, j) obtained after the back-projection operation includes information about the original image μ(i, j) blurred by a geometrical term. Our task is to reconstruct the original image from the given form of μ̃(i, j) using a recurrent neural network [22]. Before we start the design process of this network, it is necessary to formulate the discrete reconstruction problem, and in particular to calculate the coefficients representing the geometrical term distorting the original image. In our approach, we take into consideration the interpolation function used during the back-projection operation.
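The discrete back-projection described above can be sketched as below; the detector indexing and the π/Ψ weighting of the angular sum are assumptions of this sketch, standing in for the exact conventions of Eqs. 8–11.

```python
import numpy as np

def back_projection(proj, angles, I):
    """Sketch of the discrete back-projection (Eqs. 8 and 11) with the two-term
    linear interpolation of Eq. 10. proj has shape (num_angles, L); detector
    samples are assumed to lie at unit spacing, centred on the array, and the
    angular integral is approximated by a sum weighted with pi / num_angles."""
    L = proj.shape[1]
    xs = np.arange(I) - (I - 1) / 2
    X, Y = np.meshgrid(xs, xs)
    img = np.zeros((I, I))
    for p, a in zip(proj, angles):
        s = X * np.cos(a) + Y * np.sin(a) + (L - 1) / 2   # fractional detector index
        l0 = np.floor(s).astype(int)
        frac = s - l0
        valid = (l0 >= 0) & (l0 < L - 1)
        contrib = np.zeros((I, I))
        contrib[valid] = ((1 - frac[valid]) * p[l0[valid]]
                          + frac[valid] * p[l0[valid] + 1])   # Eq. 10: two terms
        img += contrib
    return img * (np.pi / len(angles))

# back-projecting uniform projections yields a uniform blurred image
blurred = back_projection(np.ones((8, 33)),
                          np.linspace(0, np.pi, 8, endpoint=False), 9)
```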

Reconstruction using a recurrent neural network

Due to relationships (6) and (7) and the definition of the Radon transform, it is possible to define the image obtained after the back-projection operation in the following way

(14)

where s = x · cos α p + y · sin α p. After some reformulation of Eq. 14 and approximation of the integrations by finite sums, we obtain the following relation (see e.g. [18, 19]),

(15)

where

(16)

Since the interpolation function Int(Δs) is even, we can write

h Δi,Δj = h −Δi,−Δj    (17)

Therefore, we are able to formulate a very convenient relationship between the original image and the image obtained after the back-projection operation, in the form of

μ̃(i, j) ≅ Σ t Σ u μ(t, u) · h Δi,Δj,   Δi = i − t,  Δj = j − u    (18)

where

(19)

for the “grid-friendly” choice of projection angles (see Eq. 1) or

(20)

for the modified “grid-friendly” projection angles (see Eq. 2).

Alternatively, in the case of the equiangular approach to determining the projection angles, we obtain the following equivalent of Eq. 19

(21)

As one can see from Eq. 18, the image obtained after the back-projection operation is equal to the convolution of the original image of the given cross-section with a geometrical distortion kernel expressed by formulas (19), (20) or (21). The number of h Δij coefficients is greatly reduced, and the values of these coefficients are easily calculated. The h Δij coefficients are used to determine the weights in the recurrent neural network.
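The accumulation of the distortion coefficients can be sketched as follows; the π/Ψ normalisation is an assumption of this sketch (the printed Eq. 19 is not reproduced in this extraction), while the kernel is the linear interpolation function of Eq. 9 and the symmetry asserted matches Eq. 17.

```python
import numpy as np

def h_coefficients(I, angles, delta_s=1.0):
    """Sketch of the geometrical distortion coefficients h_(di,dj) of Eq. 19:
    for every pixel offset (di, dj) the linear interpolation kernel of Eq. 9
    is evaluated at the signed distance di*cos(a) + dj*sin(a) and accumulated
    over the projection angles (normalisation assumed, not taken from Eq. 19)."""
    offsets = np.arange(-(I - 1), I)
    DI, DJ = np.meshgrid(offsets, offsets, indexing="ij")
    h = np.zeros(DI.shape)
    for a in angles:
        d = DI * np.cos(a) + DJ * np.sin(a)
        h += np.clip(1.0 - np.abs(d) / delta_s, 0.0, None) / delta_s  # Eq. 9 kernel
    return h * (np.pi / len(angles))

h = h_coefficients(16, np.linspace(0, np.pi, 64, endpoint=False))
```

The table of coefficients is computed only once, before the iterative part of the reconstruction starts, exactly as the text describes.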

The recurrent neural network structure for 1D signal reconstruction was proposed for the first time in [23] and later in [15, 24]. The network realizes the image reconstruction from projections by the deconvolution of relationship (18). The problem of deconvolution can be reformulated as the following optimisation problem, based on the maximum likelihood (ML) methodology:

(22)

where μ̂ is the optimal (reconstructed) image, μ is the matrix of pixels of the image being reconstructed, f(·) is the activation function, and

(23)

If the value of the coefficient v tends to infinity or is suitably large, then the solution of the optimisation problem (22) tends to the optimal one. Our research has shown that the following activation function always yields a stable reconstruction process (other possible forms of this function are presented in [24]):

(24)

where λ is a slope coefficient and v is a suitably large positive acceleration coefficient.

In our experiments, we have never observed a divergent iterative reconstruction process using activation function (24) (with the parameters v and λ chosen experimentally). That means the iterative realisation of the neural reconstruction algorithm is robust even when the reconstructed image changes. The main motivation for using this form of activation function was a property of its derivative, which is used in the reconstruction process. This derivative takes the following form:

(25)

Thanks to the saturation effect of the function (25) outside a bounded range of e ij, it is possible to avoid instabilities in the reconstruction process when there is a drastic increase in the value of any of the variables used in the calculations.
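Since Eq. 24 is not reproduced in this extraction, the pair below is a generic saturating stand-in with the same two parameters (slope λ, gain v), not the paper’s exact function. Its derivative, the analogue of Eq. 25, decays to zero for large |e|, which is exactly the saturation property the text relies on for stability.

```python
import numpy as np

# Hypothetical saturating activation: a stand-in for Eq. 24, not the original.
def activation(e, v=2.5e10, lam=1e-10):
    return v * np.tanh(lam * e)

# Its derivative (analogue of Eq. 25) peaks at e = 0 and saturates to zero
# for large |e|, limiting the influence of any drastically growing variable.
def activation_derivative(e, v=2.5e10, lam=1e-10):
    return v * lam / np.cosh(lam * e) ** 2
```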

Now we will formulate the energy function which will be minimized by the constructed neural network. Simultaneously, we will realise the task of deconvolution (see Eq. 18). The energy function is given by

(26)

In order to find the minimum of function (26) we determine the derivative

(27)

If we let (see [18, 19])

(28)

then Eq. 27 takes the form of

(29)

One can see that the value of Eq. 29 is always less than or equal to zero, that is dE/dt ≤ 0. Therefore, if dE/dt = 0, then the minimum of E has been obtained. Our calculation tends to this state, and when dE/dt ≅ 0 we can stop the reconstruction process.

The neural network performing the minimization task consists of two layers with the same topology of neurons. The structure is shown in Fig. 4.

Fig. 4.

Structure of the recurrent neural network: a topology of the neurons in the net; b scheme of connections in the net

Fan-beam reconstruction algorithm

The principal idea of the presented reconstruction method using the recurrent neural network is shown in Fig. 5, where the target fan-beam geometry of the collected projections is taken into consideration.

Fig. 5.

Neural network image reconstruction algorithm using fan-beams

The first step in the reconstruction procedure described is the collection of all the fan-beam projections using a scanner, as depicted in Fig. 6.

Fig. 6.

Fan-beam geometry of the scanner

A given ray from a fan-beam is involved in obtaining a particular projection value p̂ f(β, α f), where the projection value is obtained at the X-ray source angle α f and β is the angle of divergence of the ray from the symmetry line of the fan-beam. In real scanners, only samples p̂ f(β η, α fγ) of the projections are measured, where usually β η = η · Δ β are equiangular rays, η are the indexes of these rays, α fγ = γ · Δfα are the particular angles of the X-ray source at which the projections are obtained, and γ are the indexes of these angles. For simplicity, we can define the discrete values of the projections as p̂ f(η, γ).

In the next step of our reconstruction algorithm, we perform the rebinning operation, which re-sorts the fan-beam projection values p̂ f(η, γ) obtained in the previous step into equivalent parallel projection data [1]. Referring to Fig. 7, we can find the relationships between the parameters of both of the scanner geometries considered, as

α p = α f + β,   s = R f · sin β    (30)

Fig. 7.

Geometric relationship between parallel and fan X-ray beams

After defining the parameters of the virtual parallel projections (see Eqs. 1, 4), we can start the rebinning operation. Unfortunately, in many cases there is no exact equivalent of a given parallel ray in the set of fan-beam projections. As a remedy we use interpolation, in the simplest case bilinear interpolation. In this case, an estimation of the parallel projection p̂ p(l, ψ) can begin by identifying the neighbourhood of the fan-beam projection given by

(31)

The neighbourhood is determined based on four real measurements from the whole set of fan-beam projections: p̂ f(η′, γ′), p̂ f(η′ + 1, γ′), p̂ f(η′, γ″) and p̂ f(η′ + 1, γ″), where η′ is the highest integer value less than

(32)

γ′ is the highest integer value less than

(33)

and γ″ = γ′ + 1. In order to calculate a linearly interpolated value p̂ p(l, ψ), the following expression is used

(34)
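The rebinning step above can be sketched for a single parallel sample as follows; the geometric relations follow Eq. 30, while the detector indexing and periodic source-angle wrap-around are assumptions of this sketch.

```python
import numpy as np

def rebin_sample(fan, R_f, d_beta, d_alpha, s, alpha_p):
    """Estimate one parallel sample p^p(s, alpha_p) from fan-beam data
    fan[gamma, eta] by bilinear interpolation (cf. Eqs. 30-34). beta and
    alpha_f follow Eq. 30; the indexing conventions are illustrative only."""
    H = fan.shape[1]
    beta = np.arcsin(s / R_f)                 # divergence angle of the matching ray
    alpha_f = alpha_p - beta                  # source angle of the matching fan
    be = beta / d_beta + (H - 1) / 2          # fractional detector index
    ga = (alpha_f / d_alpha) % fan.shape[0]   # fractional source index (periodic)
    e0, g0 = int(np.floor(be)), int(np.floor(ga))
    we, wg = be - e0, ga - g0
    g1 = (g0 + 1) % fan.shape[0]
    return ((1 - wg) * ((1 - we) * fan[g0, e0] + we * fan[g0, e0 + 1])
            + wg * ((1 - we) * fan[g1, e0] + we * fan[g1, e0 + 1]))

# a constant fan-beam sinogram must rebin to the same constant value
val = rebin_sample(np.full((8, 11), 3.0), R_f=2.0, d_beta=0.1,
                   d_alpha=np.pi / 4, s=0.3, alpha_p=0.5)
```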

Having all the required parallel projection values, we can then perform the reconstruction procedure for parallel beams. In our case, this is a method using a recurrent neural network, as was explained in “Neural network reconstruction algorithm” section.

Experimental results

It is very useful, for various reasons, to simulate projection data. Idealized projection measurements obtained in this way allow us to develop and evaluate the reconstruction algorithms we have designed. One of the most widespread simulation methods of this kind is the use of a head phantom model, the so-called Shepp–Logan mathematical phantom [6, 1]. In our experiments, we used the Shepp–Logan model extended in an original way to 3D space, similar to the approach presented in [25]. Our 3D phantom consists of ellipsoids, whose parameters are described in Table 1.

Table 1.

Parameters of the ellipsoids used to construct our mathematical phantom

No.   x 0      y 0      z 0      a (semi-axis x)  b (semi-axis y)  c (semi-axis z)  Inclination φ (°)  Density μ const
I 0.000 0.000 0.000 0.6900 0.9200 0.9000 0.0 2.000
II 0.000 0.000 0.000 0.6624 0.8740 0.8800 0.0 −0.980
III −0.220 0.000 −0.250 0.4100 0.1600 0.2100 108.0 −0.020
IV 0.220 0.000 −0.250 0.3100 0.1100 0.2200 72.0 −0.020
V 0.000 0.330 −0.250 0.2200 0.2200 0.3700 0.0 0.010
VI 0.000 0.100 −0.250 0.0460 0.0460 0.0460 0.0 0.020
VII −0.060 −0.650 −0.250 0.0460 0.0230 0.0200 0.0 0.010
VIII 0.060 −0.650 −0.250 0.0460 0.0230 0.0200 90.0 0.010
IX 0.060 −0.105 0.625 0.0560 0.0400 0.1000 90.0 0.020
X 0.000 0.100 0.625 0.0560 0.0560 0.1000 0.0 −0.020

A view of the mathematical model of the skull phantom is depicted in Fig. 8a; the size of the processed image was fixed at 128 × 128 pixels. Such a resolution of the image seems to be a good choice, taking into account the balance between the reconstructed image quality and the real computation time of the computer simulations.
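The ellipsoid parameters of Table 1 can be rendered directly into a cross-sectional image; in the sketch below, the inclination is assumed to rotate the (x, y) semi-axes in-plane and the grid is assumed to span [−1, 1]².

```python
import numpy as np

# Ellipsoid parameters copied from Table 1:
# (x0, y0, z0, a, b, c, inclination in degrees, density mu)
ELLIPSOIDS = [
    ( 0.000,  0.000,  0.000, 0.6900, 0.9200, 0.9000,   0.0,  2.000),
    ( 0.000,  0.000,  0.000, 0.6624, 0.8740, 0.8800,   0.0, -0.980),
    (-0.220,  0.000, -0.250, 0.4100, 0.1600, 0.2100, 108.0, -0.020),
    ( 0.220,  0.000, -0.250, 0.3100, 0.1100, 0.2200,  72.0, -0.020),
    ( 0.000,  0.330, -0.250, 0.2200, 0.2200, 0.3700,   0.0,  0.010),
    ( 0.000,  0.100, -0.250, 0.0460, 0.0460, 0.0460,   0.0,  0.020),
    (-0.060, -0.650, -0.250, 0.0460, 0.0230, 0.0200,   0.0,  0.010),
    ( 0.060, -0.650, -0.250, 0.0460, 0.0230, 0.0200,  90.0,  0.010),
    ( 0.060, -0.105,  0.625, 0.0560, 0.0400, 0.1000,  90.0,  0.020),
    ( 0.000,  0.100,  0.625, 0.0560, 0.0560, 0.1000,   0.0, -0.020),
]

def phantom_slice(I=128, z=-0.25):
    """Cross-section of the 3-D phantom of Table 1 at height z; the densities
    of all ellipsoids containing a pixel are summed, as in the 2-D phantom."""
    xs = np.linspace(-1, 1, I)
    X, Y = np.meshgrid(xs, -xs)               # row 0 at the top of the image
    img = np.zeros((I, I))
    for x0, y0, z0, a, b, c, phi_deg, mu in ELLIPSOIDS:
        phi = np.radians(phi_deg)
        xr = (X - x0) * np.cos(phi) + (Y - y0) * np.sin(phi)
        yr = -(X - x0) * np.sin(phi) + (Y - y0) * np.cos(phi)
        img[(xr / a) ** 2 + (yr / b) ** 2 + ((z - z0) / c) ** 2 <= 1.0] += mu
    return img

phantom = phantom_slice()                     # slice through z = -0.25
```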

Figure 8b, c show two cross-sections of the 3D mathematical phantom. These images will be used in our experiments to evaluate the designed neural reconstruction algorithm both for parallel projections and for fan-beam projections.

Fig. 8.

Mathematical model of the phantom given in Table 1: a a view in the xz plane; b cross-section in the plane A; c cross-section in the plane B

It is quite easy to reformulate the above model for fan-beam projections using the following relationship

(35)

During the simulations, we established 170 measurement points (detectors) on the screen for the virtual parallel projections. We chose 512 rotation angles for these projections because this number is suitable for the approach with “grid-friendly” projection angles. In other experiments, the number of projections was modified.

Before we start the reconstruction process, it is necessary to evaluate the coefficients h Δij. This is done only once, for all the processing approaches considered further: with equiangular rotation, with only “grid-friendly” projection angles, and with the expanded “grid-friendly” technique. Using the linear interpolation function in Eqs. 19, 20 and 21, the values of these coefficients are presented in Fig. 9. Because of the very fine differences between the three approaches analysed, we present only one chart showing a general view of the coefficients h Δij and an enlargement showing details of the chart around the origin. In the cases of the equiangular sampling and the modified “grid-friendly” methodology, we used 7200 projection angles to calculate the coefficients h Δij, and in the case of the “grid-friendly” approach only 512 angles. A more in-depth discussion of the number of projections necessary for the calculation of the coefficients h Δij is presented below.

Fig. 9.

Values of the coefficients h Δij: a the general view; b values around the origin, where Δj = 0

Having obtained the coefficients h Δij, we can start the next step of the reconstruction procedure and perform the back-projection operation using relationships (11), (12) or (13) to get a blurred image of the X-ray attenuation coefficient distribution in a given cross-section of the investigated object (see Fig. 10). (We must use the same interpolation function as in the calculation of the coefficients h Δij, for example, the linear interpolation given by Eq. 9).

Fig. 10.

Distorted image of the mathematical model obtained after the back-projection operation

The image obtained in this way was next subjected to a process of reconstruction using a neural network, whose structure was explained in the previous section. To do this, we adapted the discrete Eq. 23, taking into consideration the time-varying values of the pixels in the reconstructed image. Thus

(36)

Euler’s method was used to approximate Eq. 27, in the following form [17]

(37)

where Δt is an appropriately small time step.
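The iterative dynamics can be sketched end-to-end on a toy deconvolution problem. Since the printed Eqs. 24, 28 and 37 are not reproduced in this extraction, the residual form e = h ∗ μ − μ̃ and the tanh activation below are stand-in assumptions, chosen only to exhibit the same gradient-descent behaviour on the energy of Eq. 26.

```python
import numpy as np

def conv_same(img, h):
    """Full 2-D convolution of an I x I image with a (2I-1) x (2I-1) kernel,
    cropped back to I x I, matching the structure of Eq. 18."""
    I = img.shape[0]
    full = np.zeros((I + h.shape[0] - 1, I + h.shape[1] - 1))
    for a in range(h.shape[0]):
        for b in range(h.shape[1]):
            full[a:a + I, b:b + I] += h[a, b] * img
    start = (h.shape[0] - 1) // 2
    return full[start:start + I, start:start + I]

def euler_step(mu, mu_tilde, h, dt, v=1.0, lam=1.0):
    """One Euler step in the spirit of Eq. 37: descend the energy of Eq. 26
    using the assumed residual e = h * mu - mu_tilde and a tanh stand-in
    for the activation of Eq. 24."""
    e = conv_same(mu, h) - mu_tilde
    return mu - dt * conv_same(v * np.tanh(lam * e), h)

# Tiny demonstration: deblur a 9 x 9 image blurred by a separable triangle kernel.
I = 9
true = np.random.default_rng(0).random((I, I))
tri = np.clip(1 - np.abs(np.arange(-(I - 1), I)) / 2.0, 0, None)
h = np.outer(tri, tri); h /= h.sum()
mu_tilde = conv_same(true, h)                  # "blurred" image, cf. Eq. 18
mu = np.zeros((I, I))
for _ in range(200):
    mu = euler_step(mu, mu_tilde, h, dt=0.5)
```

After the iterations, the residual h ∗ μ − μ̃ is driven close to zero, which is the stopping condition dE/dt ≅ 0 discussed after Eq. 29.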

Evaluating a reconstruction procedure based only on a view of the reconstructed image is very subjective. That is why the quality of the reconstructed image has been evaluated by an error measure defined as follows

MSE = (1/(I·J)) · Σ i Σ j (μ̂(i, j) − μ(i, j))²    (38)

where μ(i, j) is the original image of the Shepp–Logan mathematical phantom and μ̂(i, j) is the reconstructed image.

Additionally, during the experiments, we used the following error measure [17], which is more relevant to the subjective observation of the reconstructed image

Error = √( Σ i Σ j (μ̂ w(i, j) − μ w(i, j))² / Σ i Σ j (μ w(i, j) − μ̄ w)² )    (39)

where μ(i, j), μ̂(i, j) and μ̄ are the original image of the mathematical phantom, the reconstructed image and the mean value of the original image, respectively. All images are transformed by the so-called window determined by the parameters C (centre) and W (width):

μ w(i, j) = 0 for μ(i, j) ≤ C − W/2;  μ w(i, j) = (μ(i, j) − (C − W/2))/W for C − W/2 < μ(i, j) < C + W/2;  μ w(i, j) = 1 for μ(i, j) ≥ C + W/2    (40)

The measure described by Eq. 39 allows us to evaluate the subjective impression of an observer viewing the reconstructed image on a real screen.
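The two quality measures can be sketched as follows; Eq. 38 is assumed to be the plain mean squared error, and the normalisation of Eq. 39 is assumed to be the common deviation-from-mean form on windowed images.

```python
import numpy as np

def window(img, C, W):
    """Display window of Eq. 40: clip to [C - W/2, C + W/2], rescale to [0, 1]."""
    lo, hi = C - W / 2, C + W / 2
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def mse(recon, ref):
    """Eq. 38, assumed to be the mean squared error over all pixels."""
    return np.mean((recon - ref) ** 2)

def windowed_error(recon, ref, C=1.0, W=0.1):
    """Sketch of Eq. 39: normalised error on windowed images, assuming the
    common normalisation by the deviation of the windowed original from its mean."""
    rw, fw = window(recon, C, W), window(ref, C, W)
    return np.sqrt(np.sum((rw - fw) ** 2) / np.sum((fw - fw.mean()) ** 2))
```

Because the window compresses the dynamic range exactly as a diagnostic display does, the windowed measure tracks what an observer would actually see on screen.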

As was mentioned earlier, we evaluate the coefficients h Δij in the first step of the reconstruction procedure. It is crucial to choose the minimum number of projections necessary to calculate these parameters objectively. In this experiment, we used the most intuitive approach with equiangular projections and the extended “grid-friendly” methodology, in both cases fixing at 256 the number of projections acquired for the actual reconstruction algorithm (the “grid-friendly” methodology is a special case with 512 projection angles). In the experiment, the value of the coefficient v was selected as v = 2.5 × 10^10, and the slope coefficient λ was chosen experimentally. The objective results of these investigations are depicted in Fig. 11, and views of the reconstructed images of the mathematical phantom in the cross-section in plane A after 30,000 iterations are presented in Fig. 12.

Fig. 11.

Results of the reconstruction process, dependent on the number of projections used in the calculation of the h Δij coefficients, evaluated by: a the MSE measure (see Eq. 38); b the Error measure (see Eq. 39)

Fig. 12.

View of the reconstructed image, dependent on the number of projections used in the calculation of the h Δij coefficients (window: C = 1.0, W = 0.1—see Eq. 40)

Based on the plots in Fig. 11 and the views in Fig. 12, we can say that using the “grid-friendly” and the extended “grid-friendly” methodologies of projection acquisition, we obtain a reconstructed image more quickly and with better quality.

In the next step of our investigations, we carried out some experiments incorporating the fan-beam reconstruction method described in the “Fan-beam reconstruction algorithm” section. At this stage, we used the neural network reconstruction algorithm for parallel projections as depicted in Fig. 5, with the extended “grid-friendly” method of calculating the h Δij coefficients (the number of projections was fixed at 7200). Projection acquisition for the fan-beam reconstruction algorithm can be performed at angles (exactly 512 measurement samples) specified by the pure “grid-friendly” methodology without any loss of reconstructed image quality. However, for the extended “grid-friendly” approach, the experiments were carried out with different numbers of performed projections. Results of these simulations are shown in Fig. 13 for cross-sections in planes A and B after 100,000 iterations of the neural network algorithm. For comparison, the standard convolution/back-projection method with rebinning and the Shepp–Logan kernel is also considered. In all cases, we used the same geometrical parameters of the fan-beam scanner.

Fig. 13.

View of the images (window: C = 1.02, W = 0.11): a original image; b reconstructed image using the algorithm described in this paper, after 100,000 iterations; c reconstructed image using the standard convolution/back-projection method with rebinning and the Shepp–Logan kernel

Conclusions

In this paper, we propose an original neural network algorithm for image reconstruction from projections, based on the “grid-friendly” methodology of projection acquisition. Our experiments showed objectively that the “grid-friendly” method of specifying the projection angles gives better results than the more intuitive equiangular scheme of projection angle sampling, for the parallel beam scanner geometry. This phenomenon may follow from the fact that the parallel rays used for projection acquisition in the “grid-friendly” approach are better aligned with the pixels of the reconstructed image, which is assumed to be a discrete function in our method (see the discrete reconstruction problem formulation considered in the “Reconstruction using a recurrent neural network” section, Eq. 22).

Based on the results obtained for parallel beams, we extended the above conclusion to the problem of reconstruction from fan-beam projections, using in these further simulations only the “grid-friendly” methodology, both for the calculation of the h Δij coefficients and in the projection acquisition used for the actual reconstruction process. The simulations showed the superior quality of the reconstructed image of the cross-section of the investigated mathematical model, with respect to the quality measures (38) and (39), when compared to the standard reconstruction method for the fan-beam scanner geometry. Therefore, we are entitled to state that our method outperforms the algorithms used recently in commercial CT scanners, and it can easily be extended to the helical scanner geometry. The simulations also show that a sequential realization of the proposed reconstruction algorithm is very time consuming. On the other hand, a parallel hardware implementation of our neural network structure, for example an efficient VLSI or nanotechnology implementation (e.g. core–shell systems), could give incomparably better results than previous methods of image reconstruction from projections, as far as the reconstruction time is concerned. In this case, the time complexity of our neural algorithm is proportional to the number of iterations the algorithm performs (for a parallel scanner geometry). For comparison, in the case of the standard convolution/back-projection method, the computational time depends on the 2I²Ψ additions and multiplications, where I is the dimension of the processed image and Ψ is the number of projections. The rebinning operation and the back-projection (interpolation) are identical in both cases.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

References

  • 1. Jain AK (1989) Fundamentals of digital image processing. Prentice Hall, New Jersey
  • 2. Lewitt RM (1983) Reconstruction algorithms: transform methods. Proc IEEE 71(3):390–408. doi:10.1109/PROC.1983.12597
  • 3. Ramachandran GN, Lakshminarayanan AV (1971) Three-dimensional reconstruction from radiographs and electron micrographs: II. Application of convolutions instead of Fourier transforms. Proc Natl Acad Sci USA 68:2236–2240. doi:10.1073/pnas.68.9.2236
  • 4. Censor Y (1983) Finite series-expansion reconstruction methods. Proc IEEE 71(3):409–419
  • 5. Gordon R, Bender R, Herman GT (1970) Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography. J Theor Biol 29:471–481. doi:10.1016/0022-5193(70)90109-8
  • 6. Jähne B (1991) Digital image processing: concepts, algorithms and scientific applications. Springer, Berlin
  • 7. Kaczmarz S (1937) Angenäherte Auflösung von Systemen linearer Gleichungen. Bull Acad Polon Sci Lett A 35:355–357
  • 8. Tadeusiewicz R (2010) New trends in neurocybernetics. Comput Methods Mater Sci 10(1):1–7
  • 9. Cierniak R, Rutkowski L (2000) A new algorithm for image compression. Image Process Commun 2(4):29–36
  • 10. Rutkowski L, Cierniak R (1996) Image compression by competitive learning neural network and predictive vector quantization. Int J Appl Math Comput Sci 6(3):431–445
  • 11. Yau SF, Wong SH (1996) Limited angle tomography using artificial neural networks. Proc SPIE 2664:170–179. doi:10.1117/12.234254
  • 12. Kerr JP, Bartlett EB (1995) A statistically tailored neural network approach to tomographic image reconstruction. Med Phys 22:601–610. doi:10.1118/1.597586
  • 13. Knoll P, Mirzaei S, Muellner A, Leitha T, Koriska K, Koehn H, Neumann M (1999) An artificial neural net and error backpropagation to reconstruct single photon emission computerized tomography data. Med Phys 26:244–248. doi:10.1118/1.598511
  • 14. Munley MT, Floyd CE, Bowsher JE, Coleman RE (1994) An artificial neural network approach to quantitative single photon emission computed tomographic reconstruction with collimator, attenuation, and scatter compensation. Med Phys 21:1889–1899. doi:10.1118/1.597167
  • 15. Cichocki A, Unbehauen R, Lendl M, Weinzierl K (1995) Neural networks for linear inverse problems with incomplete data especially in application to signal and image reconstruction. Neurocomputing 8:7–41. doi:10.1016/0925-2312(94)E0063-W
  • 16. Srinivasan V, Han YK, Ong SH (1993) Image reconstruction by a Hopfield neural network. Image Vis Comput 11(5):278–282. doi:10.1016/0262-8856(93)90005-2
  • 17. Wang Y, Wahl FM (1997) Vector-entropy optimization-based neural-network approach to image reconstruction from projections. IEEE Trans Neural Netw 8(5):1008–1014. doi:10.1109/72.623202
  • 18. Cierniak R (2008) A new approach to tomographic image reconstruction using a Hopfield-type neural network. Int J Artif Intell Med 43(2):113–125. doi:10.1016/j.artmed.2008.03.003
  • 19. Cierniak R (2009) New neural network algorithm for image reconstruction from fan-beam projections. Neurocomputing 72:3238–3244. doi:10.1016/j.neucom.2009.02.005
  • 20. Averbuch A, Coifman RR, Donoho DL, Israeli M, Waldén J (2001) A notion of Radon transform for data in a Cartesian grid, which is rapidly computable, algebraically exact, geometrically faithful and invertible. TR no 2001-11, Department of Statistics, Stanford University, Stanford
  • 21. Kingston A, Svalbe I (2003) Mapping between digital and continuous projections via the discrete Radon transform in Fourier space. In: Proceedings of VIIth digital image computing: techniques and applications, Sydney, pp 263–272
  • 22. Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 79:2554–2558. doi:10.1073/pnas.79.8.2554
  • 23. Ingman D, Merlis Y (1992) Maximum entropy signal reconstruction with neural networks. IEEE Trans Neural Netw 3:195–201. doi:10.1109/72.125860
  • 24. Luo F-L, Unbehauen R (1998) Applied neural networks for signal processing. Cambridge University Press, Cambridge
  • 25. Kak AC, Slaney M (1988) Principles of computerized tomographic imaging. IEEE Press, New York

Articles from Australasian Physical & Engineering Sciences in Medicine are provided here courtesy of Springer
