Abstract
The Pulse Coupled Neural Network (PCNN) was developed by Eckhorn to model the observed synchronization of neural assemblies in the visual cortex of small mammals such as the cat. In this paper we show the use of the PCNN as an image segmentation strategy to crop MR images of rat brain volumes. We then show the use of the associated PCNN image ‘signature’ to automate the brain cropping process with a trained artificial neural network. We tested this novel algorithm on three T2 weighted acquisition configurations comprising a total of 42 rat brain volumes. The datasets included 40 ms, 48 ms and 53 ms effective TEs, acquisition field strengths of 4.7T and 9.4T, image resolutions from 64x64 to 256x256, slice locations ranging from +6 mm to −11 mm AP, and two different surface coil manufacturers and imaging protocols. The results were compared against manually segmented gold standards and Brain Extraction Tool (BET) V2.1 results. The Jaccard similarity index was used for numerical evaluation of the proposed algorithm. Our novel PCNN cropping system averaged 0.93, compared to BET scores circa 0.84.
Keywords: PCNN, brain cropping, small mammals, neural networks, segmentation
1. Introduction
A common precursor to several neuroimaging analyses is the use of Brain Extraction Algorithms (BEAs) designed to crop brain tissue from non-brain tissues such as the cranium, eyes, muscles and skin. Following a BEA application, also described as intracranial segmentation or skull stripping, several downstream and independent applications are applied, such as registration of subjects to an atlas for Region Of Interest (ROI) analysis (Grachev et al., 1999), brain tissue segmentation (Shattuck et al., 2001), functional Magnetic Resonance Imaging (fMRI) analysis preprocessing (Beckmann et al., 2006) and monitoring brain volume as a function of time to study brain atrophy (Battaglini et al., 2008). Although these researchers applied BEAs and subsequent neuroimaging techniques to human subjects, the number of neuroimaging studies on animal models such as the rat is growing rapidly, providing new insights into brain function as well as improved translation to/from analogous clinical studies. Schwarz et al. (2006) cropped 97 brain volumes in the development of a stereotaxic Magnetic Resonance Imaging (MRI) template for the rat brain. The processing pipelines of the somatosensory pathway mapping fMRI study of Lowe et al. (2007) and the pharmacological fMRI study of Littlewood et al. (2006) included rat brain cropping. Ferris et al. (2005) registered rat brain volumes to an atlas for ROI analysis. Yet, an efficient brain cropping algorithm focused on small mammals is lacking.
Automated brain extraction is a subset (Smith, 2002; Zhuang et al., 2006) of general image segmentation strategies, which delineate edges between regions that frequently exhibit similar texture and intensity characteristics. However, there is no definitive line separating extraction (cropping) and segmentation functions. All published automated BEAs use various combinations of basic segmentation techniques (Pham et al., 2000) on individual slices or on entire 3D volumes to crop brain tissue from non-brain tissue. Frequently (Smith, 2002; Ségonne et al., 2004), automated BEAs have been clustered into the following broad classes: thresholding with morphology based methods (Lee et al., 1998; Lemieux et al., 1999; Mikheev et al., 2008), deformable surface based methods (Aboutanos et al., 1999; Dale et al., 1999; Kelemen et al., 1999; Smith, 2002; Zhuang et al., 2006) and hybrid methods (Rehm et al., 2004; Rex et al., 2004; Ségonne et al., 2004). Each of these methodologies has advantages and all areas are being advanced. There is clear evidence (Lee et al., 2003; Rex et al., 2004; Fennema-Notestine et al., 2006; Zhuang et al., 2006) that no single BEA is suitable for all studies or image acquisition protocols. Generally, human intervention is employed for satisfactory cropping.
Our review of automated BEAs noted a fundamental lack of application of these algorithms to small animals. The methodology has been applied predominantly to human subjects. Most brain tissue cropping in small laboratory animals continues to be manual or semi-automatic (Pfefferbaum et al., 2004; Wagenknecht et al., 2006; Sharief et al., 2008). Some studies, such as Schwarz et al. (2006) working with T2 weighted Rapid Acquisition with Relaxation Enhancement (RARE) sequences, have successfully used semi-automatic segmentation tools (Kovacevic et al., 2002) developed for the human brain in animal models. Kovacevic et al. (2002) reported a histogram based technique involving the use of co-registered Proton Density (PD) and T2 weighted anatomy data to crop T1 weighted anatomy images of the human brain. This idea is supported by the skull and scalp stripping work of Dogdas et al. (2005) and Wolters et al. (2002), who establish that the inner skull boundary can be determined more accurately by the use of PD images. Another example, Roberts et al. (2006), uses an adaptation of the Brain Extraction Tool (BET) (Smith, 2002) with manual correction for extraction of the rat brain from RARE anatomy data. However, the overall quality of small animal brain extraction is significantly lower than that obtained for human images (http://www.fmrib.ox.ac.uk/fslfaq/#bet_animal).
This article presents a novel Pulse Coupled Neural Network (PCNN) based approach to automatically crop rat brain tissue. The proposed method takes advantage of the specificity accorded by T2 weighted images in terms of contrast for the proton rich brain environment and the inherent segmentation characteristics of the PCNN to rapidly crop the rat brain. The method described here does not attempt a second level segmentation to differentiate, for instance, White matter from Grey matter. Rather, the focus is to crop the brain quickly and automatically so that subsequent operations, such as registration can proceed immediately.
Artificial Neural Network (ANN) and Pattern Recognition methods (Egmont-Petersen et al., 2002) have been widely applied to the brain tissue type segmentation problem (Reddick et al., 1997; Dyrby et al., 2008; Powell et al., 2008). However, there have been very few neural network approaches that specifically address the problem of automatic brain extraction. Congorto et al. (1996) used a Kohonen Self Organizing Map approach, which combines self-organization with topographic mapping and classifies image regions by similarities in luminance and texture. They applied this technique on 2 dimensional T1 slices to segment the image into 3 classes: scalp, brain and skull. Belardinelli et al. (2003) used an adaptation of a LEGION (Locally Excitatory Globally Inhibitory Oscillator Network) for segmenting T1 weighted 2D images. The LEGION is a neural network model that simulates the human visual cortex, in which each pixel is mapped to an individual oscillator so that the size of the network matches that of the input image. Both Congorto et al. (1996) and Belardinelli et al. (2003) provided qualitative results but did not report extensive testing of their respective algorithms on large datasets.
The underlying algorithm used in this paper is the standard Eckhorn PCNN model (Johnson and Padgett, 1999). The PCNN is a neural network model based on the visual cortex of the cat, which captures the inherent spiking nature of biological neurons. The brain extraction is accomplished using the feature linking property that Eckhorn et al. (1990) described in the ‘Linking’ part of their neural network model, which associates regions of input images that are similar in intensity and texture. Lindblad and Kinser (2005) cover numerous aspects of the PCNN model tuned for image processing applications. Several independent research groups have applied the basic Eckhorn model to various applications: image segmentation (Kuntimad and Ranganath, 1999), image thinning (Gu et al., 2004) and path optimization (Caulfield and Kinser, 1999). A recent pattern recognition procedure (Muresan, 2003) involved the use of the PCNN to generate a 1D time signature from an image. This time signature was then used to train a back propagation neural network model for image recognition. The method proposed in this article follows a similar approach.
2. Materials and Methods
2.1 Overview
The proposed brain extraction algorithm operates on individual 2D grayscale data (slices), Fig. 1. To illustrate the proposed algorithm we follow the various operations on the representative 2D slice highlighted in Fig. 1. Intensity rescaling to [0, 1] is the first operation on each 2D slice, as noted on the highlighted slice in Fig. 1. The PCNN algorithm is then applied in the ‘accumulate’ mode (discussed subsequently) on individual 2D slices, Fig. 2. A morphological operator is employed to break ‘narrow bridges’ that might link the brain tissue with other regions, such as the skull, Fig. 3. A contour operation is applied with the level set to unity. Only the largest contiguous region from each PCNN iteration is selected, Fig. 4. The contour outlines corresponding to the selected regions are then overlaid on the corresponding grayscale image, Fig. 5. At this stage the problem is simply one of choosing the particular iteration that best outlines the brain region. The accumulated response as a function of iteration has a characteristic behavior, as shown in Fig. 6. Several techniques can be used to identify the first plateau in Fig. 6. A previously trained ANN can be used to identify the iteration that best represents the brain outline. In this mode, one has the option to view the predicted selection with override ability, Fig. 5. This process is repeated for each slice, resulting in a set of mask slices that can be used in a marching cubes routine (Wu and Sullivan, 2003) to create a full 3D geometry representation of the cropped brain, Fig. 7.
Fig. 1.

Schematic of a multiple slice volume of a rat brain. The highlighted slice has been intensity rescaled [0 1].
Fig. 2.

Subfigures (a) – (f) illustrate the raw binary PCNN iteration numbers 10, 20, 25, 30, 40 and 50 respectively of the highlighted coronal grayscale slice of Fig. 1.
Fig. 3.

The center sub-figure is a close-up of the highlighted region on the left. The right sub-figure illustrates the result of the applied morphological operation meant to break small bridges that connect the brain tissue with the skull.
Fig. 4.

Subfigures (a) – (f) illustrate the largest contiguous region of PCNN iteration numbers 10, 20, 25, 30, 40 and 50 respectively of the highlighted coronal grayscale slice of Fig. 1 after the morphological operation.
Fig. 5.

The predicted PCNN iteration (highlighted) is presented with an override option and alternate choices.
Fig. 6.

Illustrates the characteristic shape of the normalized image signature G.
Fig. 7.

Full 3D representation of the cropped brain, with the end face overlaid by the corresponding 2D cropped grayscale slice.
2.2 The PCNN Formulation
The PCNN belongs to a unique category of neural networks, in that it requires no training (Lindblad and Kinser, 2005) unlike traditional models where weights may require updating for processing new inputs. Specific values (Table 1) of the PCNN coefficients used in our work were derived from Johnson and Padgett (1999) and Waldemark et al. (2000).
Table 1.
The values of the PCNN coefficients used in this algorithm were sourced from Johnson and Padgett (1999) and Waldemark et al. (2000). Further coefficients αF,L,T = ln2/τF,L,T as described by Waldemark et al. (2000).
| Constant | PCNN coefficient | Context |
|---|---|---|
| β | 0.2 | Linking strength |
| τF | 0.3 | Feeding decay |
| τL | 1 | Linking decay |
| τT | 10 | Threshold decay |
| VF | 0.01 | Feeding coupling |
| VL | 0.2 | Linking coupling |
| VT | 20 | Magnitude scaling term for threshold |
| r | 3 | Radius of linking field |
Generally, a 3D volume of grayscale coronal slices of the rat brain is created in the MR system. Since the PCNN operates on 2D data, individual slices are sequentially extracted and their grayscale intensities normalized within the range [0, 1].
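As a minimal illustration of this per-slice preprocessing, the rescaling can be sketched in numpy; the function name and the handling of a constant-valued slice are our own choices, not from the paper.

```python
import numpy as np

def rescale_slice(slice_2d):
    """Linearly rescale a 2D grayscale slice to the range [0, 1]."""
    s = np.asarray(slice_2d, dtype=float)
    lo, hi = s.min(), s.max()
    if hi == lo:                      # degenerate (constant) slice: map to zeros
        return np.zeros_like(s)
    return (s - lo) / (hi - lo)
```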
Let Sij be the input grayscale image matrix. The subscripts i, j denote the position of the PCNN ‘neuron’ as well as the corresponding pixel location of the input grayscale image. Each neuron in the processing layer of the PCNN is coupled directly to an input grayscale image pixel or to a set of neighboring input pixels within a predefined radius r. Functionally, it consists of a Feeding and a Linking compartment, described by arrays Fij and Lij, each of dimension equal to the 2D input grayscale image, linked by two synaptic weighting matrices M and W. Each synaptic weighting matrix is square with a dimension of (2r + 1) and is a normalized Gaussian about the center of the matrix.
| Fij[n] = e^(−αF) Fij[n−1] + Sij + VF (M ∗ Y[n−1])ij | (1) |
| Lij[n] = e^(−αL) Lij[n−1] + VL (W ∗ Y[n−1])ij | (2) |
| Mkl = Wkl = (1/Z) e^(−(k² + l²)/(2r²)), k, l ∈ [−r, r], with Z chosen so that Σkl Mkl = 1 | (3) |
| Uij[n] = Fij[n] (1 + β Lij[n]) | (4) |
| Tij[n] = e^(−αT) Tij[n−1] + VT Yij[n−1] | (5) |
| Yij[n] = 1 if Uij[n] > Tij[n], 0 otherwise | (6) |
The PCNN is implemented by iterating through equations (1)–(6) with n as the current iteration index, ranging from 1 to N (the total number of iterations). The matrices Fij[0], Lij[0], Uij[0] and Yij[0] were initialized to zero matrices, while Tij[0] was initialized to a unit matrix. For each iteration, the internal activation Uij is computed and compared against the threshold Tij. Thus, the array Yij[n] is a binary image representing the PCNN mask at that particular iteration.
αF, αL, αT are iteration (surrogate time) constants that determine the internal state of the network by effecting exponential decay, and VF, VL, VT are magnitude scaling terms for the Feeding, Linking and Threshold components of the PCNN. ∗ is the two dimensional convolution operator. β is a parameter affecting linking strength, Table 1.
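Under the standard Eckhorn formulation, a single PCNN iteration can be sketched in numpy as below. This is an illustrative sketch, not the authors' code: the Gaussian kernel width, the parameter-dictionary keys and the exact update ordering (the threshold is refreshed from the previous pulse output before the new pulses are computed) are our assumptions.

```python
import numpy as np

def gaussian_kernel(r):
    """Normalized (2r+1) x (2r+1) Gaussian linking kernel (the width 2r^2 is an assumption)."""
    ax = np.arange(-r, r + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * r**2))
    return k / k.sum()

def conv2_same(A, K):
    """'Same'-size 2D correlation with zero padding; K is symmetric, so this equals convolution."""
    n, m = A.shape
    r = K.shape[0] // 2
    P = np.pad(A, r)
    out = np.zeros((n, m))
    for di in range(K.shape[0]):
        for dj in range(K.shape[1]):
            out += K[di, dj] * P[di:di + n, dj:dj + m]
    return out

# PCNN coefficients from Table 1, with alpha = ln 2 / tau
PARAMS = {'beta': 0.2, 'r': 3,
          'aF': np.log(2) / 0.3, 'aL': np.log(2) / 1.0, 'aT': np.log(2) / 10.0,
          'VF': 0.01, 'VL': 0.2, 'VT': 20.0}

def pcnn_step(S, F, L, T, Y, p=PARAMS):
    """One PCNN iteration: returns updated Feeding, Linking, Threshold and pulse arrays."""
    M = W = gaussian_kernel(p['r'])
    F = np.exp(-p['aF']) * F + S + p['VF'] * conv2_same(Y, M)   # feeding compartment
    L = np.exp(-p['aL']) * L + p['VL'] * conv2_same(Y, W)       # linking compartment
    U = F * (1.0 + p['beta'] * L)                               # internal activation
    T = np.exp(-p['aT']) * T + p['VT'] * Y                      # threshold: decay + refractory boost
    Y = (U > T).astype(float)                                   # binary pulse output
    return F, L, T, Y
```

On the first iteration (all pulses zero, unit threshold) only pixels brighter than the decayed threshold fire, which is how the bright, proton-rich brain region begins to segment first.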
Our implementation of the PCNN operates in the ‘accumulate’ mode: that is, each iteration sums its contributions with the previous PCNN iterations.
| (7) |
The process described by equation (7) can result in a non binary image Aij. However, for our work the accumulated iteration Aij[n] is converted into a binary image by means of a thresholding operation at unity, Fig. 2.
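A sketch of this accumulate-and-binarize step, assuming the per-iteration pulse images are collected in a list (the function name is ours):

```python
import numpy as np

def accumulate(pulse_history):
    """Sum the binary PCNN pulses over all iterations so far,
    then binarize the accumulated image by thresholding at unity."""
    A = np.sum(pulse_history, axis=0)      # running sum of binary pulse images
    return (A >= 1.0).astype(np.uint8)     # threshold at unity -> binary mask
```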
2.3 Morphological, contour operations on accumulated PCNN iterations
A binary morphological operation breaks ‘narrow bridges’, i.e., clusters of pixels with a radius less than p pixels. Each pixel i, j (value 0 or 1) within a PCNN iteration must be continuous in both orthogonal directions. That is, pixel i, j = 1 IF pixels (i ± 1, …, i ± p) in column j are all 1 AND pixels (j ± 1, …, j ± p) in row i are all 1.
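One literal reading of this rule is an erosion with a cross-shaped structuring element of arm length p: a pixel survives only if its full horizontal and vertical (2p + 1)-pixel neighborhoods are foreground. A numpy sketch under that assumption (the function name is ours):

```python
import numpy as np

def break_bridges(B, p):
    """Keep pixel (i, j) only if pixels i-p..i+p in its column AND j-p..j+p
    in its row are all foreground (a cross-shaped erosion). This removes
    structures narrower than 2p+1 pixels, such as thin brain-skull bridges."""
    n, m = B.shape
    P = np.pad(B.astype(bool), p)           # zero padding outside the image
    out = P[p:p + n, p:p + m].copy()
    for d in range(1, p + 1):
        out &= P[p - d:p - d + n, p:p + m]  # pixel (i - d, j)
        out &= P[p + d:p + d + n, p:p + m]  # pixel (i + d, j)
        out &= P[p:p + n, p - d:p - d + m]  # pixel (i, j - d)
        out &= P[p:p + n, p + d:p + d + m]  # pixel (i, j + d)
    return out.astype(np.uint8)
```

Note that such an erosion also shaves p pixels off the brain boundary; the subsequent contour-and-fill step operates on what remains.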
Perimeters or contours of the isolated islands are created. The largest area within each PCNN iteration is selected. All pixels within the selected perimeter are filled with ‘ones’. This process results in only one contiguous segment for each PCNN iteration. We denote each PCNN iteration at this stage by Cij[n], with iteration n ranging from [1, N]. Fig. 4 illustrates the outcome of the described morphological and contour operations on the same coronal section shown in Fig. 1.
A successful brain extraction results when an appropriate PCNN iteration n is selected. A 1D time signature is constructed for the PCNN iterations similar to that of Muresan (2003). The abscissa or timeline is the iteration count. The ordinate is the total number of pixels within the largest contoured area for each PCNN iteration.
G[n] = Σij Cij[n], where n ranges from [1, N]. This image signature has a characteristic shape for similar images with similar regions of interest. This information is used as a surrogate time series in a traditional ANN training sequence to automatically extract the brain tissue. It is also used as the surrogate time in the first order response fitting. The maximum number of iterations (N) of the PCNN is established when the sum of the array Y[n] (equation (6)) exceeds 50% of the image space. This maximum iteration count varies somewhat for each slice and subject.
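The largest-contiguous-region selection and the per-iteration signature can be sketched with scipy's connected-component labeling; the function names and the hole-filling detail are our assumptions.

```python
import numpy as np
from scipy import ndimage

def largest_region(binary_img):
    """Keep only the largest connected foreground component, with its interior filled."""
    labels, nlab = ndimage.label(binary_img)
    if nlab == 0:
        return np.zeros_like(binary_img)
    sizes = ndimage.sum(binary_img, labels, index=range(1, nlab + 1))
    biggest = labels == (int(np.argmax(sizes)) + 1)
    return ndimage.binary_fill_holes(biggest).astype(np.uint8)

def image_signature(C_stack):
    """G[n]: pixel count of the largest contoured region at each iteration n,
    normalized so downstream processing sees values in [0, 1]."""
    G = np.array([C.sum() for C in C_stack], dtype=float)
    return G / G.max() if G.max() > 0 else G
```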
2.4 Traditional ANN based selection of brain mask
A previously trained ANN receives the accumulated response as a function of iteration and outputs an iteration number, n. Multi Layer Perceptron (MLP) is a widely used (Haykin, 1998) supervised, feedforward ANN model which can be trained to map a set of input data to a desired output using standard backpropagation algorithms. Since each grayscale brain coronal section Sij is now represented by the PCNN iterations Cij [n] with n ranging from [1, N] and an image signature G, it is possible to create a training set for the MLP.
Figure 6 shows the characteristic shape of the image signature for the sample mid section coronal brain slice. In the illustrated example, iteration numbers corresponding to 0.4 to 0.6 will produce very similar brain masks. This characteristic step response behavior can be fitted easily. It requires few training volumes to create a reliable trained ANN. For the work presented herein, the number of rat brain volumes used to train the network was 7.
The neural architecture of the MLP used in this article consists of one input layer, one hidden layer and a single output neuron. The input layer neurons simply map to the image signature which is a vector of dimension N. The vector is normalized for the purposes of efficient supervised training using the back propagation algorithm. The hidden layer consisted of about half the number of neurons in the input layer and the single output neuron mapped the desired PCNN iteration corresponding to the brain mask.
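The described selection network can be sketched as a minimal numpy MLP with one tanh hidden layer of N/2 units and a linear output, trained by plain backpropagation on squared error. The class name, learning rate and weight initialization are our own choices; the study itself used MATLAB's Neural Network toolbox.

```python
import numpy as np

class SignatureMLP:
    """Minimal MLP: N inputs -> N//2 tanh hidden units -> 1 linear output,
    trained by plain backpropagation (a sketch of the selection network)."""
    def __init__(self, n_in, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        n_hid = max(1, n_in // 2)
        self.W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0.0, 0.1, n_hid)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, g):
        """g: normalized image signature vector; returns the predicted iteration number."""
        self.h = np.tanh(self.W1 @ g + self.b1)
        return float(self.W2 @ self.h + self.b2)

    def train_step(self, g, target):
        """One gradient-descent step on 0.5 * (prediction - target)^2."""
        err = self.forward(g) - target
        dh = err * self.W2 * (1.0 - self.h ** 2)   # backprop through tanh
        self.W2 -= self.lr * err * self.h
        self.b2 -= self.lr * err
        self.W1 -= self.lr * np.outer(dh, g)
        self.b1 -= self.lr * dh
```

Training pairs are simply (signature vector, manually chosen iteration number), one per slice of the training volumes.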
3. Experiment details and description
3.1 Data
T2 weighted RARE anatomical images (Spenger et al., 2000; Ferris et al., 2005; Roberts et al., 2006; Schwarz et al., 2006; Canals et al., 2008) are widely used in rat brain studies. Three different coronal datasets representing different imaging field strengths, T2 weightings, resolution and coil manufacturers were assembled to demonstrate the proposed algorithm. The field of view was adjusted to span the entire cranium of the rat. The images were acquired along the coronal section of the rat brain. The data were obtained over multiple imaging sessions and multiple studies.
4.7T anatomy dataset (30 volumes)
The imaging parameters of this dataset are similar to those published by Ferris et al. (2005). Adult Long-Evans rats were purchased from Harlan (Indianapolis, IN, USA) and cared for in accordance with the guidelines published in the Guide for the Care and Use of Laboratory Animals (National Institutes of Health Publications No. 85–23, Revised 1985) and with the National Institutes of Health and American Association for Laboratory Animal Science guidelines. The protocols used in this study were in compliance with the regulations of the Institutional Animal Care and Use Committee at the University of Massachusetts Medical School.
All data volumes were obtained in a Bruker Biospec 4.7 T, 40 cm horizontal magnet (Oxford Instruments, Oxford, U.K.) equipped with a Biospec Bruker console (Bruker, Billerica, MA, U.S.A.) and a 20 G/cm magnetic gradient insert (inner diameter, 12 cm; capable of 120 μs rise time, Bruker). Radiofrequency signals were sent and received with dual coil electronics built into the animal restrainer (Ludwig et al., 2004). The volume coil for transmitting RF signal features an 8-element microstrip line configuration in conjunction with an outer copper shield. The arch-shaped geometry of the receiving coil provides excellent coverage and high signal-to-noise ratio. To prevent mutual coil interference, the volume and surface coils were actively tuned and detuned. The imaging protocol was a RARE pulse sequence (Eff TE 48 ms; TR 2100 ms; NEX 6; 7 min acquisition time; field of view 30 mm; 1.2 mm slice thickness; 256×256×12 (nrow×ncol×nslice) data matrix; 8 RARE factor).
4.7T functional dataset (6 volumes)
This dataset was obtained with the same hardware and animal specifications as those described in the 4.7 T anatomy dataset. The imaging protocol was a multi-slice fast spin echo sequence (TE 7 ms; Eff TE 53.3 ms; TR 1430 ms; NEX 1; field of view 30 mm; 1.2 mm slice thickness; 64×64×12 (nrow×ncol×nslice) data matrix; 16 RARE factor). This sequence was repeated 50 times in a 5 minute imaging session of baseline data on 6 different rats. The dataset comprised the MRI functional volumes at the 35th time step of the study.
9.4T anatomy dataset (6 volumes)
The imaging parameters of this dataset are similar to those published by Lu et al. (2007, 2008). The volumes were of a Sprague-Dawley rat, scanned with a Bruker coil setup, 72 mm volume coil for RF transmission with a 3 cm flat receiver surface coil. The imaging protocol was a RARE sequence (Eff TE 40 ms; TR 2520 ms; field of view 35 mm ×35 mm; 1 mm slice thickness; matrix size 192×192, zero-padded to 256×256 for reconstruction). For the purposes of this study 18 slices from +6 mm to −11 mm AP (Paxinos and Watson 1998) in a coronal plane passing through the Bregma were considered.
3.2 Parameters employed
The algorithm employing the methods described in Section 2 is presented as a pseudo code in Table 2. The entire algorithm was implemented in MATLAB 2007b (Mathworks, MA, U.S.A).
Table 2.
Pseudo code of algorithm used in this paper
| function [autoCroppedBrainVolume(nrow,ncol,nslice)] = autoCrop[grayscaleAnatomy(nrow,ncol,nslice), |
| areaCutOff, PCNNInputParametersVector] |
| for i = 1 : nslice |
| j = 1; PCNNImageSignature(i,j) = 0; |
| while (PCNNImageSignature(i,j))/(nrow * ncol) <= areaCutOff |
| //PCNN returns binary array A on input of S (see equations (1) – (7)) |
| binaryPCNNIterations(:,:,i,j) = PCNN(grayscaleAnatomy(:,:,i), PCNNInputParametersVector, j) |
| //binary morphological operator to break ‘narrow bridges’ with a radius less than p pixels. |
| binaryPCNNIterations(:,:,i,j) = breakBridges(binaryPCNNIterations(:,:,i,j), p) |
| //assuming the largest area of the corresponding iteration contains the desired brain mask |
| binaryPCNNIterations(:,:,i,j) = largestArea(binaryPCNNIterations(:,:,i,j)) |
| //stores image signature in vector form |
| PCNNImageSignature(i,j) = area(binaryPCNNIterations(:,:,i,j)) |
| //increment counter |
| j = j+1; |
| end |
| //determines iteration |
| choiceOfIteration = preTrainedNeuralNetworkClassifier(PCNNImageSignature(i,:)) |
| autoCroppedBrainVolume(:,:,i) = binaryPCNNIterations(:,:,i,choiceOfIteration) |
| end |
The input grayscale brain volumes were treated as the subject data and individually referred to by the ‘grayscaleAnatomy’ variable in Table 2. The PCNN algorithm was implemented and the ‘PCNNInputParametersVector’ of Table 2 contained numerical values of the various PCNN parameters αF, αL, αT, β, VF, VL, VT and r described in Table 1. The PCNN image signature was determined for each slice, terminating when the sum of equation (6) reached 50% (the parameter described by ‘areaCutOff’ in Table 2) of the image space. The length N of each PCNN image signature vector was generally in the range of 40–50 iterations. The grayscale anatomy file was passed to the PCNN algorithm and the N binary output pulses for each slice were computed, corresponding to A of equation (7) and held in the variable ‘binaryPCNNIterations’. These data were further processed by means of a binary morphological operation to break ‘bridges’, as described in section 2.3. The value of the ‘bridge’ radius p was set to 2 for this study.
The neural network classifier, whose input dimension relates directly to the number of PCNN pulses, had N input neurons, two hidden layers of 24 and 12 neurons and one output neuron. For purposes of training, 7 rat volumes were used, each containing 12 slices. The activation function of the hidden layers was chosen to be a nonlinear hyperbolic tangent function while that of the output layer was linear. The ‘newff’ and ‘train’ functions available in Matlab 2007b’s Neural Network toolbox V5.1 were used to train the classifier using the gradient descent with momentum backpropagation algorithm.
4. Discussion
4.1 Results
The PCNN based automated algorithm was tested on 42 volumes acquired with the three different rat brain acquisition parameter settings described in Section 3. These volumes were different from the 7 data volumes used to train the ANN for automatic cropping. The compute time of the algorithm, from original volume input (4.7 T, 256×256×12 anatomy volume) to cropped and mask volume outputs, is about 5 minutes on a modern Pentium 4 class machine with 4 GB RAM. Fig. 8 provides a qualitative view of the results obtained using the proposed PCNN based brain extraction algorithm compared to BET.
Fig. 8.

The 3 columns (L to R) represent the contours of the brain mask predicted by BET (Jaccard index 0.84), Manual gold standard (Jaccard index 1.0) and the Automatic PCNN (Jaccard index 0.95) overlaid on the corresponding anatomy image.
For purposes of numerical validation, we created manual masks for each of the volumes, employing MIVA (http://ccni.wpi.edu/miva.html) with the Swanson (Swanson, 1998) and Paxinos and Watson (Paxinos and Watson, 1998) rat atlases for reference. The manually created masks served as the ‘gold’ standard. For a quantitative metric, we employed the Jaccard index (Jaccard, 1912). This index is a similarity measure in the range [0, 1], where 1 describes an ideal match between the subject mask ASub generated by the proposed algorithm and the ground truth represented by the manually created mask MG for that subject. The Jaccard similarity index is defined by: J(ASub, MG) = |ASub ∩ MG| / |ASub ∪ MG|.
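The Jaccard index is the intersection of the two masks divided by their union; a one-function numpy sketch (the function name is ours):

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard similarity |A intersect B| / |A union B| for two binary masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    # Two empty masks are defined here as a perfect match
    return 1.0 if union == 0 else float(np.logical_and(a, b).sum() / union)
```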
We computed these indices using our automated PCNN algorithm for all volumes and summarized the results in Table 3. It has been established that popular automated brain extraction methods such as BET (Smith, 2002) were inherently developed for cropping the human brain and offer lesser performance in cropping rat brain volumes (http://www.fmrib.ox.ac.uk/fslfaq/#bet_animal). In the interest of experimentation we conducted tests on our rat brain volumes using BET V2.1. The average Jaccard index for these tests is also reported in Table 3. To obtain the highest BET score we scaled the rat brain image dimensions by a factor of 10 (Schwarz et al., 2007). We then manually specified the centre coordinates and initial radius of the rat brain for each individual animal. The fractional intensity threshold was lowered to 0.3, as the default setting of 0.5 yielded poor results.
Table 3.
Lists the performance metrics of the automatic PCNN, BET V2.1 on the three different datasets described in the paper.
| Dataset, Method | Mean | Std. dev. | Median | Min | Max |
|---|---|---|---|---|---|
| 4.7 T Dataset (256×256), PCNN | 0.93 | 0.02 | 0.94 | 0.89 | 0.94 |
| 4.7 T Dataset (256×256), BET | 0.84 | 0.04 | 0.85 | 0.70 | 0.85 |
| 4.7 T Dataset (128×128) 2D rebinning, PCNN | 0.92 | 0.02 | 0.92 | 0.88 | 0.94 |
| 4.7 T fMRI Dataset (64×64), PCNN | 0.91 | 0.03 | 0.91 | 0.87 | 0.95 |
| 9.4 T Dataset (256×256), PCNN | 0.95 | 0.01 | 0.95 | 0.94 | 0.96 |
| 9.4 T Dataset (256×256), BET | 0.78 | 0.05 | 0.78 | 0.71 | 0.84 |
| 9.4 T Dataset (128×128) 2D rebinning, PCNN | 0.93 | 0.02 | 0.94 | 0.91 | 0.95 |
These results support our proposed automated brain extraction algorithm for small animals such as rats, as the BET results are significantly lower than those obtained using the PCNN strategy.
The PCNN algorithm has outstanding segmentation characteristics and as such is independent of the image orientation and voxel dimension scaling. The PCNN readily segments the entire rat brain volume as delineated by Paxinos and Watson (1998) (+6 to −15 mm AP in a coronal plane passing through the Bregma). However, our current selection strategy identifies the largest area within the PCNN iteration mask. This poses a problem in extreme coronal slices (> +7 mm AP), where the eyes are larger and, as a result of T2 weighting, brighter than the brain region. Surface coils inherently exhibit lower sensitivity in regions distant from the coil, diminishing overall image intensities. The PCNN operates only on 2D regions; one of the PCNN iterations would normally capture the brain anatomy, and that iteration would lie on the plateau (Fig. 6) identified by the proposed selection strategy.
4.2 Alternate PCNN iteration selection strategies
The main contribution of this paper is the recasting of a complex 2D image segmentation task into the selection of an appropriate point on a 1D time series curve. Several alternate strategies may be employed to automate or otherwise train the classifier. The accumulated response (Fig. 6) can be modeled as a first order response system, G[n] ≈ 1 − e^(−n/τpcnn),
with the selected iteration corresponding to a value of 2τpcnn. Creating a trained ANN or augmenting an existing one can be done using the manual override option (Fig. 5). To illustrate, if a blank trained ANN is used, the system predicts the N/2 iteration and displays a 3×3 grid centered about the predicted iteration. The iteration contours are superimposed on the grayscale image. If the identified iteration is acceptable (N/2 in this example), one accepts the default and the next slice is analyzed. If an alternate iteration is desired, the user identifies its number and the next slice is analyzed. The process is the same for any decision pathway selected (blank ANN, partially trained ANN, trained ANN, or First Order Response). If the user specifies the manual override option, the PCNN output will display the forecasted iteration for each slice, allowing the user to override the selection. Once the volume set is analyzed the user has the option to merge the dataset responses into the trained ANN matrix.
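As a sketch of the first order response alternative: if the normalized signature behaves like 1 − e^(−n/τ), then τ can be estimated as the first iteration at which the signature crosses 1 − 1/e, and the selected iteration taken as 2τ. The threshold-crossing shortcut below (in place of a least-squares fit) is our simplification.

```python
import numpy as np

def select_iteration(G):
    """Estimate tau of a first-order response G[n] ~ 1 - exp(-n / tau)
    and return the iteration 2 * tau, i.e., the onset of the plateau."""
    G = np.asarray(G, dtype=float)
    G = G / G.max()                                      # normalize the signature
    tau = int(np.argmax(G >= 1.0 - np.exp(-1.0))) + 1    # first crossing of 1 - 1/e (1-based index)
    return min(len(G), 2 * tau)
```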
5. Conclusion
A novel brain extraction algorithm was developed and tested for automatic cropping of rat brain volumes. This strategy harnessed the inherent segmentation characteristics of the PCNN to produce binary images. These image masks were mapped onto a timeline curve, rendering the task an appropriate-iteration selection problem. The surrogate ‘time’ signature was passed to a previously trained ANN for final iteration selection. The algorithm was tested on rat brain volumes from 3 different acquisition configurations and quantitatively compared against corresponding manually created masks, which served as the reference. Our results conclusively demonstrate that PCNN based brain extraction represents a unique, viable fork in the lineage of brain extraction strategies.
Acknowledgments
We are grateful to the reviewers for their comments, which have enhanced our paper substantially. We are indebted to Dr. Elliot Stein and Dr. Hanbing Lu of NIDA (National Institute on Drug Abuse, Baltimore) for providing their 9.4T anatomy data for testing. This work was supported in part by NIH P01CA080139-08 award.
Supplementary Material
The PCNN code and data (4.7T 256×256×12 anatomy volumes, 'Gold' standard masks) described in this paper are available as a supplementary download (NeuroImage/Elsevier web products server) under a 'Non-profit, academic/research use only' type of license. The included code is suitable for a Matlab 2007b environment with Image Processing Toolbox V2.5 and Neural Network Toolbox 5.1.
Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
References
- Aboutanos GB, Nikanne J, Watkins N, Dawant BM. Model creation and deformation for the automatic segmentation of the brain in MR images. IEEE Transactions on Bio-Medical Engineering. 1999;46(11):1346–1356. doi: 10.1109/10.797995.
- Battaglini M, Smith SM, Brogi S, De Stefano N. Enhanced brain extraction improves the accuracy of brain atrophy estimation. NeuroImage. 2008;40:583–589. doi: 10.1016/j.neuroimage.2007.10.067.
- Beckmann CF, Jenkinson M, Woolrich MW, Behrens TEJ, Flitney DE, Devlin JT, Smith SM. Applying FSL to the FIAC data: Model-based and model-free analysis of voice and sentence repetition priming. Human Brain Mapping. 2006;27:380–391. doi: 10.1002/hbm.20246.
- Belardinelli P, Mastacchi A, Pizzella V, Romani GL. Applying a visual segmentation algorithm to brain structures MR images. Conference Proceedings of the First International IEEE EMBS Conference on Neural Engineering. 2003:507–510.
- Canals S, Beyerlein M, Murayama Y, Logothetis NK. Electric stimulation fMRI of the perforant pathway to the rat hippocampus. Magnetic Resonance Imaging. 2008;26:978–986. doi: 10.1016/j.mri.2008.02.018.
- Caulfield HJ, Kinser JM. Finding the shortest path in the shortest time using PCNN's. IEEE Transactions on Neural Networks. 1999;10:604–606. doi: 10.1109/72.761718.
- Congorto S, Penna SD, Erne SN. Tissue segmentation of MRI of the head by means of a Kohonen map. Proceedings of the 18th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 1996;3:1087–1088.
- Dale AM, Fischl B, Sereno MI. Cortical surface-based analysis: I. Segmentation and surface reconstruction. NeuroImage. 1999;9:179–194. doi: 10.1006/nimg.1998.0395.
- Dogdas B, Shattuck DW, Leahy RM. Segmentation of skull and scalp in 3-D human MRI using mathematical morphology. Human Brain Mapping. 2005;26:273–285. doi: 10.1002/hbm.20159.
- Dyrby TB, Rostrup E, Baaré WFC, van Straaten ECW, Barkhof F, Vrenken H, Ropele S, Schmidt R, Erkinjuntti T, Wahlund L, Pantoni L, Inzitari D, Paulson OB, Hansen LK, Waldemar G. Segmentation of age-related white matter changes in a clinical multi-center study. NeuroImage. 2008;41:335–345. doi: 10.1016/j.neuroimage.2008.02.024.
- Eckhorn R, Reitboeck HJ, Arndt M, Dicke P. Feature linking via synchronization among distributed assemblies: Simulations of results from cat visual cortex. Neural Computation. 1990;2:293–307.
- Egmont-Petersen M, de Ridder D, Handels H. Image processing with neural networks—a review. Pattern Recognition. 2002;35:2279–2301.
- Fennema-Notestine C, Ozyurt IB, Clark CP, Morris S, Bischoff-Grethe A, Bondi MW, Jernigan TL, Fischl B, Ségonne F, Shattuck DW, Leahy RM, Rex DE, Toga AW, Zou KH, Brown GG. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: Effects of diagnosis, bias correction, and slice location. Human Brain Mapping. 2006;27:99–113. doi: 10.1002/hbm.20161.
- Ferris CF, Kulkarni P, Sullivan JM Jr, Harder JA, Messenger TL, Febo M. Pup suckling is more rewarding than cocaine: Evidence from functional magnetic resonance imaging and three-dimensional computational analysis. Journal of Neuroscience. 2005;25:149–156. doi: 10.1523/JNEUROSCI.3156-04.2005.
- Grachev ID, Berdichevsky D, Rauch SL, Heckers S, Kennedy DN, Caviness VS, Alpert NM. A method for assessing the accuracy of intersubject registration of the human brain using anatomic landmarks. NeuroImage. 1999;9:250–268. doi: 10.1006/nimg.1998.0397.
- Gu X, Yu D, Zhang L. Image thinning using pulse coupled neural network. Pattern Recognition Letters. 2004;25:1075–1084.
- Haykin S. Neural networks: A comprehensive foundation. Prentice Hall PTR; Upper Saddle River, NJ, USA: 1998.
- Jaccard P. The distribution of the flora in the alpine zone. New Phytologist. 1912;11(2):37–50.
- Johnson JL, Padgett ML. PCNN models and applications. IEEE Transactions on Neural Networks. 1999;10:480–498. doi: 10.1109/72.761706.
- Kelemen A, Szekely G, Gerig G. Elastic model-based segmentation of 3-D neuroradiological data sets. IEEE Transactions on Medical Imaging. 1999;18:828–839. doi: 10.1109/42.811260.
- Kovacevic N, Lobaugh NJ, Bronskill MJ, Levine B, Feinstein A, Black SE. A robust method for extraction and automatic segmentation of brain images. NeuroImage. 2002;17:1087–1100. doi: 10.1006/nimg.2002.1221.
- Kuntimad G, Ranganath HS. Perfect image segmentation using pulse coupled neural networks. IEEE Transactions on Neural Networks. 1999;10:591–598. doi: 10.1109/72.761716.
- Lee C, Huh S, Ketter TA, Unser M. Unsupervised connectivity-based thresholding segmentation of midsagittal brain MR images. Computers in Biology and Medicine. 1998;28:309–338. doi: 10.1016/s0010-4825(98)00013-4.
- Lee J, Yoon U, Nam SH, Kim J, Kim I, Kim SI. Evaluation of automated and semi-automated skull-stripping algorithms using similarity index and segmentation error. Computers in Biology and Medicine. 2003;33:495–507. doi: 10.1016/s0010-4825(03)00022-2.
- Lemieux L, Hagemann G, Krakow K, Woermann FG. Fast, accurate, and reproducible automatic segmentation of the brain in T1-weighted volume MRI data. Magnetic Resonance in Medicine. 1999;42:127–135. doi: 10.1002/(sici)1522-2594(199907)42:1<127::aid-mrm17>3.0.co;2-o.
- Lindblad T, Kinser JM. Image processing using pulse-coupled neural networks. Springer-Verlag New York, Inc.; Secaucus, NJ, USA: 2005.
- Littlewood CL, Cash D, Dixon AL, Dix SL, White CT, O'Neill MJ, Tricklebank M, Williams SCR. Using the BOLD MR signal to differentiate the stereoisomers of ketamine in the rat. NeuroImage. 2006;32:1733–1746. doi: 10.1016/j.neuroimage.2006.05.022.
- Lowe AS, Beech JS, Williams SCR. Small animal, whole brain fMRI: Innocuous and nociceptive forepaw stimulation. NeuroImage. 2007;35:719–728. doi: 10.1016/j.neuroimage.2006.12.014.
- Lu H, Xi Z, Gitajn L, Rea W, Yang Y, Stein EA. Cocaine-induced brain activation detected by dynamic manganese-enhanced magnetic resonance imaging (MEMRI). Proceedings of the National Academy of Sciences. 2007;104:2489–2494. doi: 10.1073/pnas.0606983104.
- Lu H, Yang S, Zuo Y, Demny S, Stein EA, Yang Y. Real-time animal functional magnetic resonance imaging and its application to neuropharmacological studies. Magnetic Resonance Imaging. 2008. doi: 10.1016/j.mri.2008.02.020. In press, corrected proof.
- Ludwig R, Bodgdanov G, King J, Allard A, Ferris CF. A dual RF resonator system for high-field functional magnetic resonance imaging of small animals. Journal of Neuroscience Methods. 2004;132:125–135. doi: 10.1016/j.jneumeth.2003.08.017.
- Mikheev A, Nevsky G, Govindan S, Grossman R, Rusinek H. Fully automatic segmentation of the brain from T1-weighted MRI using bridge burner algorithm. Journal of Magnetic Resonance Imaging. 2008;27:1235–1241. doi: 10.1002/jmri.21372.
- Muresan RC. Pattern recognition using pulse-coupled neural networks and discrete Fourier transforms. Neurocomputing. 2003;51:487–493.
- Paxinos G, Watson C. The rat brain in stereotaxic coordinates. 4th ed. Academic Press; San Diego: 1998.
- Pfefferbaum A, Adalsteinsson E, Sullivan EV. In vivo structural imaging of the rat brain with a 3-T clinical human scanner. Journal of Magnetic Resonance Imaging. 2004;20:779–785. doi: 10.1002/jmri.20181.
- Pham DL, Xu C, Prince JL. Current methods in medical image segmentation. Annual Review of Biomedical Engineering. 2000;2:315–337. doi: 10.1146/annurev.bioeng.2.1.315.
- Powell S, Magnotta VA, Johnson H, Jammalamadaka VK, Pierson R, Andreasen NC. Registration and machine learning-based automated segmentation of subcortical and cerebellar brain structures. NeuroImage. 2008;39:238–247. doi: 10.1016/j.neuroimage.2007.05.063.
- Reddick WE, Glass JO, Cook EN, Elkin TD, Deaton RJ. Automated segmentation and classification of multispectral magnetic resonance images of brain using artificial neural networks. IEEE Transactions on Medical Imaging. 1997;16:911–918. doi: 10.1109/42.650887.
- Rehm K, Schaper K, Anderson J, Woods R, Stoltzner S, Rottenberg D. Putting our heads together: A consensus approach to brain/non-brain segmentation in T1-weighted MR volumes. NeuroImage. 2004;22:1262–1270. doi: 10.1016/j.neuroimage.2004.03.011.
- Rex DE, Shattuck DW, Woods RP, Narr KL, Luders E, Rehm K, Stolzner SE, Rottenberg DA, Toga AW. A meta-algorithm for brain extraction in MRI. NeuroImage. 2004;23:625–637. doi: 10.1016/j.neuroimage.2004.06.019.
- Roberts TJ, Price J, Williams SCR, Modo M. Preservation of striatal tissue and behavioral function after neural stem cell transplantation in a rat model of Huntington's disease. Neuroscience. 2006;139:1187–1199. doi: 10.1016/j.neuroscience.2006.01.025.
- Schwarz AJ, Gozzi A, Reese T, Bifone A. In vivo mapping of functional connectivity in neurotransmitter systems using pharmacological MRI. NeuroImage. 2007;34:1627–1636. doi: 10.1016/j.neuroimage.2006.11.010.
- Schwarz AJ, Danckaert A, Reese T, Gozzi A, Paxinos G, Watson C, Merlo-Pich EV, Bifone A. A stereotaxic MRI template set for the rat brain with tissue class distribution maps and co-registered anatomical atlas: Application to pharmacological MRI. NeuroImage. 2006;32:538–550. doi: 10.1016/j.neuroimage.2006.04.214.
- Ségonne F, Dale AM, Busa E, Glessner M, Salat D, Hahn HK, Fischl B. A hybrid approach to the skull stripping problem in MRI. NeuroImage. 2004;22:1060–1075. doi: 10.1016/j.neuroimage.2004.03.032.
- Sharief AA, Badea A, Dale AM, Johnson GA. Automated segmentation of the actively stained mouse brain using multi-spectral MR microscopy. NeuroImage. 2008;39:136–145. doi: 10.1016/j.neuroimage.2007.08.028.
- Shattuck DW, Sandor-Leahy SR, Schaper KA, Rottenberg DA, Leahy RM. Magnetic resonance image tissue classification using a partial volume model. NeuroImage. 2001;13:856–876. doi: 10.1006/nimg.2000.0730.
- Smith SM. Fast robust automated brain extraction. Human Brain Mapping. 2002;17:143–155. doi: 10.1002/hbm.10062.
- Spenger C, Josephson A, Klason T, Hoehn M, Schwindt W, Ingvar M, Olson L. Functional MRI at 4.7 tesla of the rat brain during electric stimulation of forepaw, hindpaw, or tail in single- and multislice experiments. Experimental Neurology. 2000;166:246–253. doi: 10.1006/exnr.2000.7524.
- Swanson LW. Brain maps: Structure of the rat brain: A laboratory guide with printed and electronic templates for data, models and schematics. 2nd ed. Elsevier; Amsterdam: 1998.
- Wagenknecht G, Belitz H, Wolff R, Castellanos J. Segmentation of ROIs/VOIs from small animal images for functional analysis. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 2006;569:481–487.
- Waldemark K, Lindblad T, Becanovic V, Guillen JLL, Klingner PL. Patterns from the sky: Satellite image analysis using pulse coupled neural networks for preprocessing, segmentation and edge detection. Pattern Recognition Letters. 2000;21:227–237.
- Wolters CH, Kuhn M, Anwander A, Reitzinger S. A parallel algebraic multigrid solver for finite element method based source localization in the human brain. Computing and Visualization in Science. 2002;5:165–177.
- Wu Z, Sullivan JM Jr. Multiple material marching cubes algorithm. International Journal for Numerical Methods in Engineering. 2003;58:189–207.
- Zhuang AH, Valentino DJ, Toga AW. Skull-stripping magnetic resonance brain images using a model-based level set. NeuroImage. 2006;32:79–92. doi: 10.1016/j.neuroimage.2006.03.019.
