Author manuscript; available in PMC: 2018 Aug 1.
Published in final edited form as: IEEE Trans Biomed Eng. 2017 Dec 13;65(8):1871–1884. doi: 10.1109/TBME.2017.2783305

Learning Based Segmentation of CT Brain Images: Application to Post-Operative Hydrocephalic Scans

Venkateswararao Cherukuri 1,2, Peter Ssenyonga 4, Benjamin C Warf 4,5, Abhaya V Kulkarni 6, Vishal Monga 1, Steven J Schiff 2,3
PMCID: PMC6062853  NIHMSID: NIHMS974738  PMID: 29989926

Abstract

Objective

Hydrocephalus is a medical condition in which there is an abnormal accumulation of cerebrospinal fluid (CSF) in the brain. Segmentation of brain imagery into brain tissue and CSF (before and after surgery, i.e. pre-op vs. post-op) plays a crucial role in evaluating surgical treatment. Segmentation of pre-op images is often a relatively straightforward problem and has been well researched. However, segmenting post-operative (post-op) computed tomography (CT) scans is more challenging due to distorted anatomy and subdural hematoma collections pressing on the brain. Most intensity and feature based segmentation methods fail to separate subdurals from brain and CSF, as subdural geometry varies greatly across patients and subdural intensity varies with time. We combat this problem with a learning approach that treats segmentation as supervised classification at the pixel level, i.e. a training set of CT scans with labeled pixel identities is employed.

Methods

Our contributions include: 1.) a dictionary learning framework that learns class (segment) specific dictionaries that efficiently represent test samples from the same class while poorly representing corresponding samples from other classes, 2.) quantification of the associated computation and memory footprint, and 3.) a customized training and test procedure for segmenting post-op hydrocephalic CT images.

Results

Experiments performed on infant CT brain images acquired from the CURE Children’s Hospital of Uganda reveal the success of our method against state-of-the-art alternatives. We also demonstrate that the proposed algorithm is computationally less burdensome and degrades gracefully as the number of training samples decreases, enhancing its deployment potential.

Index Terms: CT image segmentation, dictionary learning, neurosurgery, hydrocephalus, subdural hematoma, volume

I. Introduction

A. Introduction to the Problem

Hydrocephalus is a medical condition in which there is an abnormal accumulation of cerebrospinal fluid (CSF) in the brain. This causes increased intracranial pressure inside the skull and may cause progressive enlargement of the head if it occurs in childhood, potentially causing neurological dysfunction, mental disability and death [1]. The typical surgical solution to this problem is insertion of a ventriculoperitoneal shunt, which drains CSF from the cerebral ventricles into the abdominal cavity. This procedure for pediatric hydrocephalus has failure rates as high as 40 percent in the first 2 years, with ongoing failures thereafter [2]. In developed countries, these failures can be treated in a timely manner; in developing nations, they can often lead to severe complications and even death. To overcome these challenges, a shunt-free procedure known as endoscopic third ventriculostomy with choroid plexus cauterization has been developed [3]. However, the long-term outcomes of these methods have not been fully compared quantitatively. One way of achieving quantitative comparison is to compare the volumes of brain and CSF before and after surgery. These volumes can be estimated by segmenting brain imagery (MR and/or CT) into CSF and brain tissue. Manual segmentation and volume estimation have been carried out, but this is tedious and not scalable across a large number of patients. Therefore, automated/semi-automated brain image segmentation methods are desired and have been pursued actively in recent research.

Substantial work has been done on segmentation of pre-operative (pre-op) CT scans of hydrocephalic patients [4]–[7]. It has been noted that the volume of the brain appears to correlate with neurocognitive outcome after treatment of hydrocephalus [5]. Figure 1A) shows pre-op CT images and Figure 1B) shows the corresponding segmented images, using the method from [4], for a hydrocephalic patient. The top row of Figure 1A) shows slices near the base of the skull, the second row shows middle slices, and the bottom row shows slices near the top of the skull. As we observe from Figure 1, segmentation of pre-op images can be a relatively simple problem, as the intensities of CSF and brain tissue are clearly distinguishable. However, post-op images are complicated by further geometric distortion and by the introduction of subdural hematoma and fluid collections (subdurals) pressing on the brain. These subdural collections have to be separated from brain and CSF before volume calculations are made: the images must be segmented into 3 classes (brain, CSF and subdurals), and subdurals must be excluded from the volume determination. Figure 2 shows sample post-operative (post-op) images of 3 patients having subdurals. Note that the subdurals in patient 1 are very small compared to those in the other two patients. Further, patient 3 exhibits large subdurals on both sides of the brain, as opposed to patient 2. Another observation is that the intensity of the subdurals in patient 2 is close to the intensity of CSF, whereas the intensity of the subdurals in the other two patients is close to that of brain tissue. The histogram of pixel intensities nevertheless remains bi-modal, making it all the more challenging to separate subdurals from brain and CSF.

Fig. 1. A) Sample pre-operative (pre-op) CT scan slices of a hydrocephalic patient. B) Segmented CT slices of the same patient using [4].

Fig. 2. Sample post-op CT images of 3 patients. Top row shows the original images. Bottom row shows subdurals marked in blue. A shunt catheter is visible in patients 2 and 3.

B. Closely Related Recent Work

Many methods have been proposed in the past for segmentation of brain images [4], [8]–[13]. Most of these methods work on the principles of intensity based thresholding and model-based clustering. However, these traditional methods fail to identify subdurals effectively, as subdurals are hard to characterize by a specific model and present different ranges of intensities in different patients. For example, Figure 3 illustrates the performance of [11] on images of 3 different patients with subdurals; the accuracy in segmenting these images is very poor. Apart from these general methods for brain image segmentation, relatively limited work has been done to identify subdurals [14]–[18]. These methods assume that the images are to be segmented into only 2 classes, brain and subdurals, and are therefore unlikely to succeed for images acquired from hydrocephalic patients, where the CSF volume is significant. Because intensity or other features that can characterize a pixel as one of three segments (brain, CSF and subdurals) are not apparent, they must be discovered via a learning framework.

Fig. 3. Demonstration of segmentation using a traditional intensity based method [11]. Top row: original images of 3 patients. Second row: manually segmented images. Third row: segmentation using [11]. Green - brain, red - CSF, blue - subdurals.

Recently, sparsity constrained learning methods have been developed for image classification [19] and found to be widely successful in medical imaging problems [20]–[23]. The essence of the aforementioned sparse representation based classification (SRC) is to write a test image (or patch) as a linear combination of training images collected in a matrix (dictionary), such that the coefficient vector is determined under a sparsity constraint. SRC has seen significant recent application to image segmentation [24]–[28] wherein a pixel level classification problem is essentially solved.

In the works just described, the dictionary matrix simply comprises training image patches from each class (segment). Because each pixel must be classified, training dictionaries in segmentation problems can grow prohibitively large, so learning compact dictionaries [29]–[31] continues to be an important problem. In particular, the Label Consistent K-SVD (LC-KSVD) [30] dictionary learning method, which has demonstrated success in image classification, has been repurposed and successfully applied to medical image segmentation [32]–[36].

Motivation and Contributions

In most existing work on sparsity based segmentation, a dictionary is used for each voxel/pixel, which creates a large computational and memory footprint. Further, the objective functions for learning dictionaries in the above literature (based invariably on LC-KSVD) focus on extracting features that characterize each class (segment) well. We contend that the dictionary corresponding to a given class (segment) must additionally be designed to poorly represent out-of-class samples. We develop a new objective function that incorporates an out-of-class penalty term for learning dictionaries that accomplish this task. This leads to a new but harder optimization problem, for which we develop a tractable solution. We also propose the use of a new feature that incorporates the distance of a candidate pixel from the edge of the brain, computed via a distance transform; this is based on the observation that subdurals are almost always attached to the boundary of the brain. Both intensity patches and the distance features are used in the learning framework. The main contributions of this paper are summarized as follows:

  1. A new objective function to learn dictionaries for segmentation under a sparsity constraint: Because discriminating features are automatically discovered, we call our method feature learning for image segmentation (FLIS). A tractable algorithmic solution is developed for the dictionary learning problem.

  2. A new feature that captures pixel distance from the boundary of brain is used to identify subdurals effectively as subdurals are mostly attached to the boundary of the brain. This feature also enables the dictionary learning framework to use a single dictionary for all the pixels in an image as opposed to the existing methods that use a separate dictionary for each pixel type. Incorporating this additional “distance based feature” helps significantly reduce the computation and memory footprint of FLIS.

  3. Experimental validation: Validation on challenging real data acquired from the CURE Children’s Hospital of Uganda is performed. FLIS results are compared against manually labeled segmentations provided by an expert neurosurgeon. Comparisons are also made against recent, state-of-the-art sparsity based methods for medical image segmentation.

  4. Complexity analysis and memory requirements: We analytically quantify the computational complexity and memory requirements of our method against competing methods. The experimental run time on typical implementation platforms is also reported.

  5. Reproducibility: The experimental results presented in the paper are fully reproducible and the code for segmentation and learning FLIS dictionaries is made publicly available at: https://scholarsphere.psu.edu/concern/generic_works/bvq27zn031.

A preliminary version of this work was presented as a short conference paper at the 2017 IEEE Int. Conference on Neural Engineering [37]. Extensions to the conference paper include a detailed analytical solution to the objective function in Eq. (7). Further, extensive experiments are performed by varying the parameters of our algorithm, and new statistical insights are provided. Additionally, a detailed complexity analysis is performed, and the memory requirements of FLIS along with competing methods are presented.

The remainder of the paper is organized as follows. A review of sparsity based segmentation and detailed description of the proposed FLIS is provided in Section II. Experimental results are reported in Section III including comparisons against state of the art. The appendix contains an analysis of the computation and memory requirements of our method and selected competing methods. Concluding remarks are provided in Section IV.

II. Feature Learning For Image Segmentation (FLIS)

A. Review of Sparse Representation Based Segmentation

To segment a given image into C classes/segments, every pixel z in the image has to be classified into one of these classes/segments. The general idea is to collect intensity values from a patch of size w × w around each pixel (for 3D images, a patch of size w × w × w) and to represent this patch as a sparse linear combination of training patches that have already been manually labeled. This idea is expressed in Eq. (1): m(z) ∈ ℝ^(w²×1) is the vector of intensity values of a square patch around pixel z, Y(z) ∈ ℝ^(w²×N) collects the N training patches for pixel z in matrix form, and α ∈ ℝ^(N×1) is the vector obtained by solving Eq. (1). ||·||_0 denotes the l0 pseudo-norm of a vector, i.e., its number of non-zero elements, and ||·||_2 denotes the l2 (Euclidean) norm. The intuition is to minimize the reconstruction error between m(z) and the linear combination Y(z)α while keeping the number of non-zero elements in α below L; the constraint on the l0 pseudo-norm hence enforces sparsity. Often the l0 pseudo-norm is relaxed to an l1 norm [25] to obtain fast and unique global solutions. Once the sparse code α is obtained, pixel likelihood probabilities for each class (segment) j ∈ {1, …, C} are obtained using Eq. (2) and Eq. (3). The likelihood maps are normalized to 1 and a candidate pixel z is assigned to the most likely class (segment) as determined by its sparse code.

$\arg\min_{\|\alpha\|_0 < L} \|m(z) - Y(z)\alpha\|_2^2$   (1)

$P_j(z) = \frac{\sum_{i=1}^{N} \alpha_i\, \delta_j(V_i)}{\sum_{i=1}^{N} \alpha_i}$   (2)

where Vi is the ith column vector in the pre-defined dictionary Y(z), and δj(Vi) is an indicator defined as

$\delta_j(V_i) = \begin{cases} 1, & V_i \in \text{class } j \\ 0, & \text{otherwise} \end{cases}$   (3)
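To make the SRC pipeline concrete, the following is a minimal sketch of Eqs. (1)–(3) for one pixel. It is illustrative only: scikit-learn's OMP solver stands in for the l0-constrained step, and the names (m_z, Y_z, labels) are assumptions, not the released code.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify_pixel(m_z, Y_z, labels, L=5):
    """m_z: (w*w,) vectorized patch around pixel z; Y_z: (w*w, N) matrix of
    labeled training patches; labels: (N,) class index of each column."""
    # Eq. (1): sparse code with at most L non-zeros, via OMP
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=L, fit_intercept=False)
    omp.fit(Y_z, m_z)
    alpha = omp.coef_                                   # (N,)
    # Eqs. (2)-(3): per-class likelihood = coefficient mass in that class
    C = int(labels.max()) + 1
    P = np.array([alpha[labels == j].sum() for j in range(C)])
    P = P / alpha.sum()                                 # normalize to 1
    return int(P.argmax()), P                           # most likely class
```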

Note that the training dictionaries Y(z) can grow prohibitively large, which motivates the design of compact dictionaries that still lead to high segmentation accuracy. Tong et al. [32] adapted the well-known LC-KSVD method [30] for segmentation by minimizing reconstruction error while enforcing a label-consistency criterion. The idea is formally stated in Eq. (4). For a given pixel z, Y(z) ∈ ℝ^(w²×N) represents all the training patches for pixel z, where N is the number of training patches. D(z) ∈ ℝ^(w²×K) is the compact dictionary that is learned, with K being its size. The sparsity constraint ||X||_0 < L means that each column of X has no more than L non-zero elements. H(z) ∈ ℝ^(C×N) is the label matrix for the training patches, with C the number of classes/segments to which a given pixel can be assigned; for example, in our case C = 3 (brain, CSF and subdurals), and the label vector for a patch around a pixel whose ground truth is CSF is [0 1 0]^T. W(z) ∈ ℝ^(C×K) is the linear classifier obtained along with D(z) to represent H(z). ||·||_F denotes the Frobenius (squared error) norm. The first term in Eq. (4) minimizes reconstruction error, while the second enforces the label-consistency criterion. When a new test image is to be segmented, for each pixel z, D(z) and W(z) are invoked and the sparse code α ∈ ℝ^(K×1) is obtained by solving Eq. (5), an l1-relaxed form of Eq. (1). Unlike the classification strategy of Eq. (2), the linear classifier W(z) is applied to the sparse code α to classify/segment the pixel, as shown in Eq. (6). Note that β is a positive regularization parameter that controls the trade-off between reconstruction error and label consistency.

$\arg\min_{D(z), W(z)} \min_{\|X\|_0 < L} \big\{ \|Y(z) - D(z)X\|_F^2 + \beta \|H(z) - W(z)X\|_F^2 \big\}$   (4)

$\arg\min_{\alpha > 0} \|m(z) - D(z)\alpha\|_2^2 + \lambda \|\alpha\|_1$   (5)

$H_z = W(z)\alpha, \quad \mathrm{label}(z) = \arg\max_j \big(H_z(j)\big)$   (6)

where Hz is the class label vector for the tested pixel z, and the arg max reveals the best labelling achieved through applying α to the linear classifier W(z).

Tong et al.’s work [32] is promising for segmentation, but we identify two key open problems: 1.) learned dictionaries for each pixel lead to a high computational and memory footprint, and 2.) the label consistency criterion enhances segmentation by encouraging intra- (within-) class similarity, but inter-class differences must be maximized as well. Our proposed FLIS addresses both of these issues.

B. FLIS Framework

We introduce a new feature that captures the pixel distance from the boundary of the brain. This serves two purposes. First, as we observe from Figure 2, subdurals are mostly attached to the boundary of the brain. Adding this feature along with the vectorized patch intensity intuitively helps enhance the recognition of subdurals. Secondly, we no longer need to design pixel specific dictionaries because the aforementioned “distance vector” (for a patch centered around a pixel) provides enough discriminatory nuance.

Notation

For a given patient, we have a stack of T CT slice images from the base of the skull to the top of the skull, as can be observed in Figure 1. The goal is to segment each image of the stack into three categories: brain, CSF and subdurals. Let Y_B ∈ ℝ^(d×N_B), Y_F ∈ ℝ^(d×N_F) and Y_S ∈ ℝ^(d×N_S) represent the training samples of brain, CSF and subdurals respectively. Each column of Y_i, i ∈ {B, F, S}, contains the intensities of the elements in a patch of size w × w around a training pixel, concatenated with the distances from the boundary of the brain for each pixel in the patch (described in detail in Section II-E). N_i is the number of training patches for each class/segment, chosen to be the same for all 3 classes/segments. We denote the learned dictionaries as D_i ∈ ℝ^(d×K), where K is the size of each dictionary. X_i ∈ ℝ^(K×N_i) is the matrix containing the sparse code for each training sample in Y_i. H_i ∈ ℝ^(3×N_i) is the label matrix of the corresponding training elements Y_i; for example, a column vector of H_B looks like [1 0 0]^T. Finally, W_i denotes the linear classifier that is learned to represent H_i.

C. Problem Formulation

The dictionary D_i should be designed such that it represents in-class samples effectively and represents complementary (out-of-class) samples poorly, while also satisfying the label-consistency criterion. To ensure this, we propose the following problem:

$\arg\min_{D_i, W_i} \Big\{ \frac{1}{N_i} \min_{\|X_i\|_0 < L} \big\{ \|Y_i - D_i X_i\|_F^2 + \beta \|H_i - W_i X_i\|_F^2 \big\} - \frac{\rho}{\hat{N}_i} \min_{\|\hat{X}_i\|_0 < L} \big\{ \|\hat{Y}_i - D_i \hat{X}_i\|_F^2 + \beta \|\tilde{H}_i - W_i \hat{X}_i\|_F^2 \big\} \Big\}$   (7)

The terms with a hat (ˆ) represent the complementary samples of a given class, ||·||_F denotes the Frobenius norm, and ||X||_0 < L implies that each column of X has no more than L non-zero elements. The label matrices are concatenated as H̃_i = [H_i H_i] to maintain consistency with the dimensions of W_i X̂_i, because the complementary samples come from the two other classes. β and ρ are positive regularization parameters; ρ is important for obtaining a solution to the objective function, as we discuss in subsequent sections.

Intuition behind the objective function

The first term in Eq. (7) ensures that the intra-class difference is small and the second term enforces label consistency; together, these two terms ensure that in-class samples are well represented. To represent the complementary samples poorly, the reconstruction error between the complementary samples and a sparse linear combination of in-class dictionary atoms should be large, which is achieved through the third term. Further, a “label-inconsistency” term (the fourth term) utilizes the sparse codes of out-of-class samples, again encouraging inter-class differences. Essentially, the combination of the third and fourth terms enables us to discover discriminative features that differentiate one class (segment) from another effectively. Note that the objective functions described in [32]–[36] are special cases of Eq. (7), since they do not include terms that emphasize inter-class differences. A visual comparison of our idea with the objective function defined in [32] (known as discriminative dictionary learning and sparse coding, DDLS) is shown in Figure 4. The problem in Eq. (7) is non-convex in its optimization variables; we develop a new tractable solution, reported next.

Fig. 4. Visual representation of our FLIS in comparison with DDLS [32]. a) represents the idea of DDLS and b) represents a desirable outcome of our idea, which is more capable of differentiating in-class and out-of-class samples.

D. Proposed Solution

To simplify notation in Eq. (7), we replace Y_i, Ŷ_i, X_i, X̂_i, H_i, H̃_i, W_i, N_i, N̂_i with Y, Ŷ, X, X̂, H, H̃, W, N, N̂, respectively. The cost function therefore becomes

$\arg\min_{D, W} \Big\{ \frac{1}{N} \min_{\|X\|_0 < L} \big\{ \|Y - DX\|_F^2 + \beta \|H - WX\|_F^2 \big\} - \frac{\rho}{\hat{N}} \min_{\|\hat{X}\|_0 < L} \big\{ \|\hat{Y} - D\hat{X}\|_F^2 + \beta \|\tilde{H} - W\hat{X}\|_F^2 \big\} \Big\}$   (8)

First, an appropriate L should be determined. We begin by learning an “initialization dictionary” using the well-known online dictionary learning (ODL) [38] given by:

$(D^{(0)}, X^{(0)}) = \arg\min_{D, X} \big\{ \|Y - DX\|_F^2 + \lambda \|X\|_1 \big\}$   (9)

where λ is a positive regularization parameter. An estimate for L can then be obtained by:

$L \approx \frac{1}{N} \sum_{i=1}^{N} \|x_i^{(0)}\|_0$   (10)

where $x_i^{(0)}$ represents the ith column of $X^{(0)}$.
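As an illustration of Eqs. (9)–(10), the sketch below uses scikit-learn's MiniBatchDictionaryLearning (an implementation in the spirit of the ODL method of [38]) to learn the initialization dictionary and then sets L to the average l0 norm of the resulting codes; the function name and the l1 weight are assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def estimate_sparsity_level(Y, K, lam=1.0):
    """Y: (d, N) training matrix, columns are samples; K: dictionary size."""
    # Eq. (9): l1-regularized dictionary learning (sklearn wants rows=samples)
    odl = MiniBatchDictionaryLearning(n_components=K, alpha=lam,
                                      transform_algorithm='lasso_lars',
                                      transform_alpha=lam)
    X0 = odl.fit_transform(Y.T)                    # (N, K) sparse codes
    D0 = odl.components_.T                         # (d, K) initial dictionary
    # Eq. (10): L = average number of non-zero coefficients per sample
    L = int(round(np.mean((np.abs(X0) > 1e-8).sum(axis=1))))
    return D0, L
```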

We develop an iterative method to solve Eq. (8). The idea is to find X, X̂ with fixed values of D, W and then obtain D, W with the updated values of X, X̂, repeating until D, W converge. Since we have already obtained an initial value for D from Eq. (9), we only need an initial value for W. To obtain it, we first compute the sparse codes X and X̂ by solving:

$\arg\min_{\|X\|_0 \le L} \|Y - DX\|_F^2; \qquad \arg\min_{\|\hat{X}\|_0 \le L} \|\hat{Y} - D\hat{X}\|_F^2$

The above can be combined into the single problem of finding X̄ in Eq. (11), solved using orthogonal matching pursuit (OMP) [39]:

$\arg\min_{\|\bar{X}\|_0 \le L} \|\bar{Y} - D\bar{X}\|_F^2$   (11)

where Ȳ = [Y Ŷ] and X̄ = [X X̂]. Then, to obtain the initial value for W, we use the method proposed in [30], which is given by:

$W = \bar{H}\bar{X}^T(\bar{X}\bar{X}^T + \lambda_1 I)^{-1}$   (12)

where H̄ = [H H̃] and λ₁ is a positive regularization parameter. Once the initial value of W is obtained, we construct the following stacked matrices:

$Y_{new} = \begin{pmatrix} Y \\ \sqrt{\beta}\,H \end{pmatrix}, \quad \hat{Y}_{new} = \begin{pmatrix} \hat{Y} \\ \sqrt{\beta}\,\tilde{H} \end{pmatrix}, \quad D_{new} = \begin{pmatrix} D \\ \sqrt{\beta}\,W \end{pmatrix}$
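Note the √β weighting: it is what makes a single reconstruction error over the stacked matrices equal the weighted sum in Eq. (8). A small NumPy sketch of this construction (illustrative names):

```python
import numpy as np

def stack_for_joint_update(Y, H, Yhat, Htil, D, W, beta):
    """Stack data/labels and dictionary/classifier so that for any X,
    ||Y_new - D_new X||_F^2 == ||Y - D X||_F^2 + beta*||H - W X||_F^2."""
    sb = np.sqrt(beta)
    Y_new = np.vstack([Y, sb * H])            # (d+3, N)
    Yhat_new = np.vstack([Yhat, sb * Htil])   # (d+3, N_hat)
    D_new = np.vstack([D, sb * W])            # (d+3, K)
    return Y_new, Yhat_new, D_new
```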

As we have the initial values of D,W, we obtain the values of X, X̂ by solving the following equation:

$\arg\min_{\|\bar{X}\|_0 \le L} \|\bar{Y}_{new} - D_{new}\bar{X}\|_F^2$   (13)

where Ȳ_new = [Y_new Ŷ_new] and X̄ = [X X̂].

With these values of X and X̂, we find D_new by solving the problem in Eq. (14), which automatically yields D and W.

$\arg\min_{D_{new}} \Big\{ \frac{1}{N} \|Y_{new} - D_{new}X\|_F^2 - \frac{\rho}{\hat{N}} \|\hat{Y}_{new} - D_{new}\hat{X}\|_F^2 \Big\}$   (14)

Using the definition of the Frobenius norm, the above expands to:

$\arg\min_{D_{new}} \Big\{ \frac{1}{N}\,\mathrm{trace}\big((Y_{new} - D_{new}X)(Y_{new} - D_{new}X)^T\big) - \frac{\rho}{\hat{N}}\,\mathrm{trace}\big((\hat{Y}_{new} - D_{new}\hat{X})(\hat{Y}_{new} - D_{new}\hat{X})^T\big) \Big\}$   (15)

Applying the properties of the trace and neglecting the constant terms in Eq. (15), the problem in Eq. (14) is equivalent to

$\arg\min_{D_{new}} \big\{ -2\,\mathrm{trace}(E D_{new}^T) + \mathrm{trace}(D_{new} F D_{new}^T) \big\}$   (16)

where $E = \frac{1}{N} Y_{new} X^T - \frac{\rho}{\hat{N}} \hat{Y}_{new} \hat{X}^T$ and $F = \frac{1}{N} X X^T - \frac{\rho}{\hat{N}} \hat{X} \hat{X}^T$. The problem in Eq. (16) is convex if F is positive semidefinite, but this is not guaranteed. To make F positive semidefinite, ρ should be chosen such that the following condition is met:

$\frac{1}{N}\lambda_{min}(XX^T) - \frac{\rho}{\hat{N}}\lambda_{max}(\hat{X}\hat{X}^T) > 0$   (17)

where λ_min(·) and λ_max(·) represent the minimum and maximum eigenvalues of the corresponding matrices. Once an appropriate ρ is chosen, Eq. (16) can be solved using the dictionary update step in [38]. After we obtain D_new, Eq. (13) is solved again to obtain new values for X and X̂, and we keep iterating between these two steps until D_new converges. The entire procedure is formally described in Algorithm 1, which is applied on a per-class basis to learn 3 class/segment specific dictionaries corresponding to brain, CSF and subdurals.
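As a practical aid, the bound in Eq. (17) can be turned into an explicit upper limit on ρ. The sketch below (illustrative names; NumPy assumed) returns a safely shrunken value:

```python
import numpy as np

def max_feasible_rho(X, Xhat, margin=0.9):
    """Largest admissible rho per Eq. (17), shrunk by `margin` < 1 so that
    F in Eq. (16) stays safely positive definite. Note: if X X^T is rank
    deficient, lambda_min is 0 and the bound collapses to 0; a small ridge
    on F is then advisable."""
    N, Nhat = X.shape[1], Xhat.shape[1]
    lam_min = np.linalg.eigvalsh(X @ X.T).min()        # lambda_min(X X^T)
    lam_max = np.linalg.eigvalsh(Xhat @ Xhat.T).max()  # lambda_max(Xhat Xhat^T)
    return margin * (Nhat / N) * lam_min / lam_max
```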

After we obtain class specific dictionaries and linear classifiers, we concatenate them to obtain D = [DB DF DS] and W = [WB WF WS].

Assignment of a test pixel to a class (segment)

Once the dictionaries are learned, to classify a new pixel z we extract a patch of size w×w around it, collect the intensity values and the distances from the boundary of the brain for the elements in the patch, and form the column vector m(z). We then find the sparse code α in Eq. (18) using the learned dictionary D. Once α is obtained, we classify the pixel using Eq. (19).

$\arg\min_{\alpha > 0} \|m(z) - D\alpha\|_2^2 + \lambda \|\alpha\|_1$   (18)

$H_z = W\alpha, \quad \mathrm{label} = \arg\max_j \big(H_z(j)\big)$   (19)
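At test time this amounts to one l1 solve and a matrix-vector product. A minimal sketch, assuming scikit-learn's Lasso as the l1 solver for Eq. (18) (the positive=True flag mirrors the α > 0 constraint); names are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

def flis_label_pixel(m_z, D, W, lam=0.01):
    """m_z: (d,) intensity patch concatenated with distance features;
    D: (d, 3K) concatenated dictionary [D_B D_F D_S];
    W: (3, 3K) concatenated linear classifier [W_B W_F W_S]."""
    # Eq. (18): l1-regularized, non-negative sparse coding
    lasso = Lasso(alpha=lam, positive=True, fit_intercept=False,
                  max_iter=5000)
    lasso.fit(D, m_z)
    alpha = lasso.coef_                     # (3K,)
    H_z = W @ alpha                         # (3,) class scores, Eq. (19)
    return int(np.argmax(H_z))              # 0: brain, 1: CSF, 2: subdural
```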
Algorithm 1.

FLIS algorithm

1: Input: Y, Ŷ, H, ρ, β, dictionary size K
2: Output: D, W
3: procedure FLIS
4:  Find L and an initial value for D using Eq. (9) and Eq. (10)
5:  Find X and X̂ using Eq. (11)
6:  Initialize W using Eq. (12)
7:  Form Y_new = [Y; √β H], Ŷ_new = [Ŷ; √β H̃], D_new = [D; √β W]
8:  Update X, X̂ using Eq. (13)
9:  while not converged do
10:   Fix X, X̂ and calculate E = (1/N) Y_new Xᵀ − (ρ/N̂) Ŷ_new X̂ᵀ and F = (1/N) X Xᵀ − (ρ/N̂) X̂ X̂ᵀ
11:   Update D_new by solving Eq. (16): argmin_{D_new} {−2 trace(E D_newᵀ) + trace(D_new F D_newᵀ)}
12:   Fix D_new, find X and X̂ using Eq. (13)
13:  end while
14: end procedure
15: RETURN: D_new
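To make the loop concrete, below is a compact NumPy/scikit-learn sketch of Algorithm 1 for a single class; it is a sketch under stated assumptions, not the released implementation. The √β stacking follows Eq. (8); the dictionary update (line 11) is solved here in closed form as D_new = EF⁻¹, the minimizer of Eq. (16) when F is positive definite (guaranteed by choosing ρ per Eq. (17)), followed by the customary unit-norm projection of atoms, whereas the paper employs the block coordinate update of [38].

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def omp_codes(D, Y, L):
    """Column-wise OMP: argmin_X ||Y - D X||_F^2 s.t. ||x_i||_0 <= L."""
    return orthogonal_mp(D, Y, n_nonzero_coefs=L)

def flis_train_class(Y, Yhat, H, Htil, D0, L, beta, rho, lam1=1e-3, iters=20):
    """Algorithm 1 for one class. Y/Yhat: in-/out-of-class samples (d x N,
    d x Nhat); H/Htil: label matrices; D0: ODL initialization, Eq. (9)."""
    N, Nhat, K, d = Y.shape[1], Yhat.shape[1], D0.shape[1], Y.shape[0]
    sb = np.sqrt(beta)
    # Lines 5-6: initial codes (Eq. (11)) and classifier init (Eq. (12))
    Xbar = omp_codes(D0, np.hstack([Y, Yhat]), L)
    Hbar = np.hstack([H, Htil])
    W = Hbar @ Xbar.T @ np.linalg.inv(Xbar @ Xbar.T + lam1 * np.eye(K))
    # Line 7: stacked matrices (sqrt(beta) weighting, cf. Eq. (8))
    Ynew, Yhatnew = np.vstack([Y, sb * H]), np.vstack([Yhat, sb * Htil])
    Dnew = np.vstack([D0, sb * W])
    for _ in range(iters):                  # line 9: fixed iteration cap
        # Lines 8/12: sparse coding over stacked data, Eq. (13)
        Xbar = omp_codes(Dnew, np.hstack([Ynew, Yhatnew]), L)
        X, Xhat = Xbar[:, :N], Xbar[:, N:]
        # Line 10: E and F
        E = Ynew @ X.T / N - rho * (Yhatnew @ Xhat.T) / Nhat
        F = X @ X.T / N - rho * (Xhat @ Xhat.T) / Nhat
        # Line 11: closed-form minimizer of Eq. (16) when F > 0: D = E F^{-1}
        Dnew = np.linalg.solve(F, E.T).T
        Dnew /= np.maximum(np.linalg.norm(Dnew, axis=0, keepdims=True), 1e-12)
    return Dnew[:d], Dnew[d:] / sb          # recover D and W
```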

E. Training and Test Procedure Design for Hydrocephalic Image Segmentation

Training Set-Up

In selecting training image patches for segmentation, it is infeasible to extract patches for all the pixels in each training image because that would require prohibitive memory. Further, patches used from training images should be in correspondence with the patches from test images; for example, training patches collected from slices in the middle of the CT stack cannot be used for segmenting a slice that belongs to the top or bottom. To address this, we divide the entire CT stack of any patient into P partitions such that images belonging to a given partition are anatomically similar. For each image in a partition (i.e., a sub-collection of the CT image stack), we must carefully extract patches so as to have enough representation from the 3 classes (segments) and likewise enough diversity in the range of distances from the boundary of the brain.

Patch Selection Strategy for each class/segment

First we find a candidate region for each image in the CT stack using an optical flow approach as in [4]. The candidate region is a binary image in which the region to be segmented into brain, CSF and subdurals is labeled 1. The distance value for each pixel z is then given by DT(z) = min_{q : CR(q)=0} d(z, q), where d(z, q) is the Euclidean distance between pixels z and q and CR is the candidate region; for a pixel z, it is the minimum distance over all pixels that are not part of the candidate region. The candidate region of a sample image and its distance transform are shown in Fig. 5. A subset of these distances should be used in our training feature vectors. For this purpose, we propose a simple strategy: we first calculate the maximum and minimum distance of a given label/class in a CT image and then pick patches randomly such that the distance range is uniformly sampled between these values. The pseudo-code for this strategy and more implementation details can be found in [40].
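The distance feature is exactly what a Euclidean distance transform of the candidate-region mask computes. A minimal sketch with SciPy (function and variable names are illustrative):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_feature_map(CR):
    """DT(z) = min distance from pixel z to any pixel with CR == 0;
    SciPy's EDT gives each nonzero pixel's distance to the nearest zero."""
    return distance_transform_edt(CR.astype(bool))

def patch_feature(img, DT, z, w):
    """Feature vector for pixel z = (row, col): the w x w intensity patch
    concatenated with the matching w x w distance values (assumes z lies
    at least w//2 pixels away from the image border)."""
    r, c, h = z[0], z[1], w // 2
    patch = img[r - h:r + h + 1, c - h:c + h + 1]
    dists = DT[r - h:r + h + 1, c - h:c + h + 1]
    return np.concatenate([patch.ravel(), dists.ravel()])
```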

Fig. 5. Visual representation of obtaining distance values from a CT slice.

Once training patches for each partition are extracted, we learn dictionaries and linear classifiers for each partition using the objective function described in Section II-C. The entire training setup and the segmentation of a new test CT stack are summarized as a flow chart in Figure 6.

Fig. 6. A) illustrates the procedure for selecting patches for training. B) illustrates the procedure for segmentation of a new CT stack.

III. Experimental Results

We report results on a challenging real-world data set of CT images acquired from the CURE Children’s Hospital of Uganda. Each patient is represented by a stack of 28 CT images on average. We choose the number of partitions of such a stack, P, to be 12 based on neurosurgeon feedback. The size of each slice is 512×512, and the slice thickness of the scans varied from 3 mm to 10 mm. The test set includes 15 patients, while the number of training patients ranged from 9 to 17 and was non-overlapping with the test set. To validate our results, we used the dice-overlap coefficient, which for regions A and B is defined as

$DO(A, B) = \frac{2\,|A \cap B|}{|A| + |B|}$   (20)
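For binary masks, Eq. (20) is a one-liner; the sketch below is a direct transcription (NumPy assumed):

```python
import numpy as np

def dice_overlap(A, B):
    """Eq. (20): Dice coefficient between two binary masks A and B."""
    A, B = A.astype(bool), B.astype(bool)
    denom = A.sum() + B.sum()
    return 2.0 * np.logical_and(A, B).sum() / denom if denom else 1.0
```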

Note that DO(A, B) evaluates to 1 only when A = B. The dice overlap is computed for each method against carefully obtained manual segmentations produced under the supervision of an expert neurosurgeon (SJS). The proposed FLIS is compared against the following state-of-the-art methods:

  • SRC [19] based segmentation was implemented in [25] using pre-defined dictionaries for each voxel/pixel in the scans. The objective function and classification procedure proposed in their work are implemented on our data set.

  • The LC-KSVD [30] dictionary learning method was used to segment MR brain images in [32] for hippocampus labeling. Two implementations were proposed in their paper, named DDLS and F-DDLS. In Fixed-DDLS (F-DDLS), dictionaries are learned offline and segmentation is performed online to improve speed, whereas in DDLS both operations are performed simultaneously. In this paper we compare with the DDLS approach, as storing a dictionary for each pixel offline requires very large memory.

Apart from these two methods, there are a few others that use dictionary learning and a sparsity based framework for medical image segmentation [26]–[28], [33]–[36]. The objective functions used in these methods are similar to those of the above two, with different applications. We chose to compare against [25] and [32] because they are widely cited and were also applied to brain image segmentation.

A. The need for a learning framework

Before we compare our method against the state of the art in learning based segmentation, we demonstrate the superiority of learning based approaches over traditional intensity based methods. It was illustrated visually in Fig. 3 in Section I that intensity based methods find it difficult to differentiate subdurals from brain and CSF. To validate this quantitatively, we compare dice-overlap coefficients obtained using the segmentation results of [11]¹, one of the best known intensity based methods, referred to here as Brain Intensity Segmentation (BIS). The comparisons are reported in Table I. The learning based methods use a patch size of 11×11, with the number of training patients set to 15 and the size of each individual class specific dictionary set to 80.

TABLE I.

Comparison of learning based methods with the traditional intensity based thresholding method. Values are reported in mean ± SD (standard deviation) format.

Method Brain CSF Subdural
BIS [11] .580±0.21 .696±0.18 .226±0.14
Patch based SRC [25] .885±0.15 .805±0.22 .496±0.28
DDLS [32] .932±0.04 .892±0.08 .641±0.2
FLIS (our method) .937±0.02 .908±0.07 .767±0.14

The results in Table I confirm that learning based methods clearly outperform the traditional intensity based method, especially in terms of the accuracy of identifying subdurals. Note that the dice overlap values in Table I for each class/segment are averaged over the 15 test patients; this will be the norm for the remainder of this section unless otherwise stated. We performed a balanced two-way analysis of variance (ANOVA)² [42] on the dice overlap values across patients for all 3 classes (brain, CSF and subdural). Fig. 7 illustrates these comparisons using the post-hoc Tukey range test [42] and confirms that SRC, DDLS and FLIS (the learning based methods) are significantly separated from BIS. The p values of BIS compared with the other methods are much less than .01, which emphasizes that learning based methods are more effective.

Fig. 7. Comparison of the traditional intensity based thresholding method with learning based approaches by a two-way ANOVA. Values reported by ANOVA across the method factor are df = 3, F = 45.23, p ≪ .01, indicating that results of learning based approaches are significantly different from and better than BIS. The intervals shown represent the 95 percent confidence intervals of the dice overlap values for the corresponding method-class configuration. Blue represents the BIS method and red indicates the learning based approaches.

B. Parameter Selection

In our method, several parameters have to be chosen carefully prior to implementation. The most important are the patch size, dictionary size, number of training patients, and regularization parameters ρ and β. ρ and β are picked by a cross-validation procedure [43], [44] such that ρ complies with Eq. (17); the best values are found to be ρ = .5 and β = 2. Our algorithm is fairly robust to the other parameters (patch size, number of training patients and dictionary size), as discussed in the subsequent subsections.

C. Influence of Patch Size

If the patch size is very small, a single pixel in the extreme case, the spatial information necessary to accurately determine its class/segment is unavailable. On the other hand, a very large patch might include pixels from different classes. For this experiment, the dictionary size of each class/segment and the number of training patients are set to 120 and 17, respectively. Experiments are reported for square patch windows with size varying from 5 to 25. The mean dice overlap values over all 15 test patients shown in Fig. 8 reveal that the results are quite stable for patch sizes in the range 11 to 17, indicating that while patch size should be chosen carefully, FLIS is robust against small departures from the optimal choice.

Fig. 8. Mean dice overlap coefficients over all 15 test patients using our method, for square patch sizes varying from 5 to 25.

D. Influence of Dictionary Size

Dictionary size is another important parameter in our method. Similar to patch size, very small dictionaries are incomplete and cannot represent the data accurately. Large dictionaries can represent the data more accurately, but at the cost of increased run time and memory requirements.

In the results presented next, dictionary sizes of 20, 80, 120 and 150 are chosen. Note that these dictionary sizes are for each individual class. However, DDLS does not use class specific dictionaries; therefore, to maintain consistency between the two methods, the overall dictionary size for DDLS is fixed to be 3 times the size of each individual dictionary in our method. Table II compares FLIS with DDLS for different dictionary sizes. We did not compare with [25], as dictionary learning is not used in their approach. Experiments are conducted with a patch size of 13×13 and with data from 17 patients used for training.

TABLE II.

Performance of our method with different dictionary sizes. Values are reported in mean ± SD (standard deviation) format.

Dictionary size Method Brain CSF Subdural

20 FLIS .891±0.04 .833±0.12 .580±0.23
DDLS [32] .887±0.06 .827±0.12 .539±0.30

80 FLIS .939±0.03 .907±0.07 .770±0.13
DDLS [32] .932±0.05 .892±0.08 .641±0.26

120 FLIS .940±0.03 .906±0.07 .768±0.14
DDLS [32] .931±0.04 .890±0.07 .679±0.17

150 FLIS .938±0.03 .911±0.07 .773±0.13
DDLS [32] .921±0.04 .891±0.08 .687±0.19

From Table II, we observe that FLIS remains fairly stable as the dictionary size changes, whereas DDLS improved in identifying subdurals as the dictionary size increased. For a fairly small dictionary size of 20, the performance of both methods drops, but FLIS is still relatively better. Further, to compare the two methods statistically, a 3-way balanced ANOVA is performed for all 3 classes, as shown in Fig. 9. We observe that FLIS exhibits superior segmentation accuracy compared to DDLS, although there is significant overlap between the confidence intervals of FLIS and DDLS. This can be primarily attributed to the discriminative capability of the FLIS objective function, which automatically discovers features that are crucial for separating segments. Visual comparisons for a dictionary size of 120 are shown in Figure 10: both methods performed similarly in detecting large subdurals, but FLIS identifies subdurals more accurately in Patient 3 (3rd column of Fig. 10), where the subdurals have a smaller spatial footprint.

Fig. 9. Comparison of FLIS with DDLS for different dictionary sizes using a 3-way ANOVA. The intervals represent the 95 percent confidence intervals of dice overlap values for a given method-class-dictionary size configuration. FLIS is shown in blue and DDLS in red. Values reported for ANOVA across the method factor are df = 1, F = 7.22, p = .0075. ANOVA values across the dictionary length factor are df = 3, F = 9.95, p ≪ .01. We also performed a repeated ANOVA across the dictionary size factor for the two methods, which reported a p-value of 1.73 × 10⁻¹⁰, confirming that dictionary size plays a significant role.

Fig. 10. Comparison of results of the 2 methods for a dictionary size of 120 and training size of 17 patients. First row: original images of 3 patients. Second row: corresponding manually segmented images. Third row: segmented images using FLIS. Fourth row: segmented images using DDLS [32]. Green - brain, red - CSF, blue - subdurals.

E. Performance variation against training

For the following experiment, we vary the number of training and test samples by dividing the CT stacks of all 32 patients into 9–23, 11–21, 13–19, 15–17, 17–15, 19–13 and 21–11 configurations (to be read as training–test). Figure 12 compares our method with DDLS and patch based SRC [25] for all these configurations. The results reported for each configuration are averaged over 10 random combinations of a given training–test configuration to remove selection bias. The per-class dictionary size was fixed to 80 for our method and DDLS, whereas for [25] the dictionary size is determined automatically for a given training selection. The patch size is set to 13×13.

Fig. 12. Comparing dice-overlap coefficients of FLIS with DDLS [32] and patch based SRC [25] for different sizes of training data.

A plot of dice overlap vs. training size is shown in Fig. 12. Unsurprisingly, each of the three methods shows a drop in performance as the number of training image patches (proportional to the number of training patients) decreases. However, note that FLIS exhibits the most graceful degradation.

Fig. 13 shows the Gaussian fit to the histogram (all 10 realizations combined) of dice-overlap coefficients for the 13–19 configuration. Two trends may be observed: 1.) the FLIS histogram has a higher mean than competing methods, indicating higher accuracy, and 2.) the variance is smallest for FLIS, confirming robustness to the choice of training–test split.

Fig. 13. Gaussian fit for the histogram of dice overlap coefficients for ten random realizations of training data.

Comparisons are shown visually in Figure 11. A similar trend is observed here: patch based SRC and DDLS improve as the number of training patients increases. DDLS and the SRC based method performed poorly in identifying the subdurals for Patient 3 (column 3) in Figure 11. We also observe that both DDLS and FLIS outperform SRC, implying that dictionary learning improves accuracy significantly.

Fig. 11. Comparison of results of the 3 methods for a training size of 17 patients. First row: original images of 3 patients. Second row: corresponding manually segmented images. Third row: segmented images using FLIS. Fourth and fifth rows: segmented images using DDLS [32] and patch-based SRC [25], respectively. Green - brain, red - CSF, blue - subdurals.

F. Discriminative Capability of FLIS

To illustrate the discriminative property of FLIS, we plot the sparse codes that are obtained from the classification stage for our method and DDLS for a single random pixel with a dictionary size of 150 in Fig. 14. The two red lines in the figure act as a boundary for the 3 classes. For each of the three segments, i.e. brain, CSF and subdurals, we note that the active coefficients in the sparse code are concentrated more accurately in the correct class/segment for FLIS vs. DDLS.

Fig. 14. Comparing sparse codes of a random pixel for brain (B), fluid (F) and subdurals (S). Row 1: sparse code for FLIS. Row 2: sparse code for DDLS. The x axis indicates the dimension of the sparse codes: entries left of the first red line correspond to brain, the middle section corresponds to fluid, and entries right of the second red line correspond to subdurals. The y axis indicates the values of the sparse codes.

To summarize the quantitative results, FLIS stands out particularly in its ability to correctly segment subdurals. The overall accuracy of brain and fluid segmentation is better than that of subdural segmentation for all 3 methods. This is to be expected because the amount of subdural material present in the images is relatively small compared to brain and fluid volumes.

G. Computational Complexity

We compare the computational complexity of FLIS with the DDLS method. We do not compare with [25], as it does not learn dictionaries. The complexity of each dictionary learning method is estimated by calculating the approximate number of operations required for learning dictionaries per pixel; the detailed derivation is presented in Appendix A. The derived per-pixel complexity and run time are shown in Table III. The parameter values are as follows: the number of training patches is N = 4700 for each class, the patch size is 11×11, and the sparsity level L is 5. The run-time numbers are consistent with the estimated operation counts in Table III, obtained by plugging these parameter values into the derived complexity formulas. FLIS is substantially less expensive from a computational standpoint. This is to be expected because DDLS uses pixel specific dictionaries, whereas FLIS dictionaries are class/segment specific and do not vary with pixel location.

TABLE III.

Complexity Analysis of methods

Method  Complexity (per pixel)  Run time  Est. Operations
DDLS  ~9NK(2(d/2+3)+L²)  46.66 seconds  1.39 × 10⁹
FLIS  ~9NK(2(d+3)+L²)/(I_x×I_y)  .0003 seconds  1.005 × 10⁴

H. Memory requirements

Memory requirements are derived in Appendix B. The memory required for storing dictionaries for all 3 methods is reported in Table IV. These numbers are obtained assuming each element requires 16 bytes, with the following parameter choices: number of training patients N_t = 15, patch size 11×11, K = 80 and I_x = I_y = 512. Consistent with Section III-G, the memory requirements of FLIS are also modest.

TABLE IV.

Memory requirements

Method  Memory (in bytes)  Approx. Memory
SRC [25]  (d/2) × (d/2) × N_t × I_x × I_y × 16  ~9.2 × 10¹¹ bytes
DDLS [32]  (d/2+3) × 3K × 16 × I_x × I_y  ~1.24 × 10¹¹ bytes
FLIS (our method)  (d+3) × 3K × 16  ~4.8 × 10⁵ bytes

I. Comparison with deep learning architectures

A significant recent advance has been the development of deep learning methods, which have been applied to medical image segmentation [45], [46]. We implement the technique in [45], which designs a convolutional neural network (CNN) for segmenting MR images. This method extracts 2D patches of different sizes centered around the pixel to be classified, with a separate network for each patch size; the output of each network is then connected to a single softmax layer to classify the pixel. Three patch sizes were used in their work, and the network configuration for each is given in Table V (a sketch of this multi-scale design follows the table). We reproduced the design in [45] but with CT scans for training, and refer to this method as Deep Network for Image Segmentation (DNIS). Comparisons with FLIS are shown in Table VI; the training–test configuration of this experiment is the same as in subsection III-E. Unsurprisingly, FLIS performed better than DNIS in low-training scenarios, while DNIS performed slightly better than FLIS as the number of training samples increased. To confirm this statistically, a 3-way balanced ANOVA is performed for all 3 classes, as shown in Fig. 15. It may be inferred from Fig. 15 that FLIS outperforms DNIS in the low to realistic training regime, while DNIS is competitive or mildly better than FLIS when training is generous. A visual illustration of the results for 3 patients is shown in Fig. 16, where the benefits of FLIS are readily apparent. Also note that the cost of training DNIS is measured in hours vs. seconds for FLIS – see Table VI.

TABLE V.

Deep network configuration of DNIS. Each Conv entry lists the number of filters and the filter size; Conv - convolutional layer followed by a 2 × 2 max pool layer, FC - fully connected layer.

Patch Size Layer1 (Conv) Layer2 (Conv) Layer3 (Conv) Layer4 (FC)
25 × 25 24 5 × 5 × 1 32 3 × 3 × 24 48 3 × 3 × 32 256 nodes
50 × 50 24 7 × 7 × 1 32 5 × 5 × 24 48 3 × 3 × 32 256 nodes
75 × 75 24 9 × 9 × 1 32 7 × 7 × 24 48 5 × 5 × 32 256 nodes
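For concreteness, the following PyTorch sketch reproduces the multi-scale structure of Table V: one branch per patch size, each branch a stack of conv + 2×2 max-pool layers feeding a 256-node fully connected layer, with branch outputs concatenated into a final classification layer (softmax is applied implicitly via the cross-entropy loss). This is our reading of the configuration in [45], not their released code; LazyLinear is used so the flattened sizes need not be computed by hand.

```python
import torch
import torch.nn as nn

def branch(conv_cfgs):
    """One DNIS branch: each (out_channels, kernel) entry is a conv layer
    followed by ReLU and a 2x2 max pool, per the rows of Table V."""
    layers, in_ch = [], 1                       # single-channel CT input
    for out_ch, k in conv_cfgs:
        layers += [nn.Conv2d(in_ch, out_ch, k), nn.ReLU(), nn.MaxPool2d(2)]
        in_ch = out_ch
    layers += [nn.Flatten(), nn.LazyLinear(256), nn.ReLU()]
    return nn.Sequential(*layers)

class DNIS(nn.Module):
    """Multi-scale patch CNN in the spirit of [45]/Table V."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.b25 = branch([(24, 5), (32, 3), (48, 3)])   # 25x25 patches
        self.b50 = branch([(24, 7), (32, 5), (48, 3)])   # 50x50 patches
        self.b75 = branch([(24, 9), (32, 7), (48, 5)])   # 75x75 patches
        self.out = nn.Linear(3 * 256, n_classes)

    def forward(self, p25, p50, p75):
        f = torch.cat([self.b25(p25), self.b50(p50), self.b75(p75)], dim=1)
        return self.out(f)   # logits; train with nn.CrossEntropyLoss
```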

TABLE VI.

Performance of our method compared with DNIS. Values are reported in mean ± SD (standard deviation) format.

Training samples Method Brain CSF Subdural Training Time (in seconds)

9 FLIS .915 ± 0.03 .845 ± 0.08 .660 ± 0.14 69.83
DNIS .890 ± 0.03 .80 ± 0.09 .632 ± 0.13 2860.66

11 FLIS .926 ± 0.02 .873 ± 0.07 .694 ± 0.13 96.61
DNIS .910 ± 0.03 .834 ± 0.07 .671 ± 0.13 9464.34

13 FLIS .934 ± 0.02 .906 ± 0.06 .729 ± 0.14 106.15
DNIS .919 ± 0.02 .880 ± 0.07 .690 ± 0.13 10443.57

15 FLIS .935 ± 0.02 .908 ± 0.06 .750 ± 0.12 115.23
DNIS .934 ± 0.02 .897 ± 0.06 .728 ± 0.12 11823.99

17 FLIS .939 ± 0.02 .910 ± 0.06 .770 ± 0.11 124.41
DNIS .939 ± 0.02 .908 ± 0.05 .752 ± 0.12 12940.41

19 FLIS .940 ± 0.02 .917 ± 0.06 .786 ± 0.13 138.71
DNIS .943 ± 0.02 .914 ± 0.04 .786 ± 0.10 14669.76

21 FLIS .940 ± 0.01 .913 ± 0.04 .786 ± 0.10 149.05
DNIS .950 ± 0.02 .919 ± 0.04 .792 ± 0.10 15846.87

Fig. 15. Comparison of FLIS with DNIS for different training configurations using a 3-way ANOVA. The intervals represent the 95 percent confidence intervals of dice overlap values for a given method-class-training size configuration. FLIS is shown in blue and DNIS in red. Values reported for ANOVA across the method factor are df = 1, F = 35.54, p ≪ .01. ANOVA values across the training size factor are df = 3, F = 308.85, p ≪ .01.

Fig. 16. Comparison of results between DNIS and FLIS for a training-test configuration of 17–15. First row: original images of 3 patients. Second row: corresponding manually segmented images. Third row: segmented images using FLIS. Fourth row: segmented images using DNIS. Green - brain, red - CSF, blue - subdurals.

IV. Discussion and Conclusion

In this paper, we address the problem of segmentation of post-op CT brain images of hydrocephalic patients from the viewpoint of dictionary learning and discriminative feature discovery. This is a very challenging problem owing to the distorted anatomy and subdural hematoma collections in these scans, which make subdurals hard to differentiate from brain and CSF. Our solution involves a sparsity constrained learning framework wherein a dictionary (matrix of basis vectors) is learned from pre-labeled training images. Dictionaries learned under the new criterion are shown to yield results superior to state-of-the-art methods. A key aspect of our method is that only class or segment specific dictionaries are necessary (as opposed to pixel specific dictionaries), substantially reducing the memory and computational requirements.

Our method was tested on real patient images collected from CURE Children’s Hospital of Uganda and the results outperformed well-known methods in sparsity based segmentation.

Acknowledgments

We thank Tiep Huu Vu for his valuable input to this work, which was supported by NIH grant R01HD085853.

Appendix A. Complexity analysis

We derive the computational complexity of FLIS and compare it with DDLS [32]. The computational complexity of each method is derived by finding the approximate number of operations required per pixel for learning the dictionaries. To simplify the derivation, assume the number of training samples and the dictionary size are the same for all 3 classes, denoted N and K respectively, and that the sparsity constraint L is the same for all classes. Let the training samples be represented as Y and the sparse codes as X.

The two major steps in most dictionary learning methods are the dictionary update and sparse coding steps, which in our case are l0-constrained minimizations. The dictionary update step is solved either by block coordinate descent [38] or by the singular value decomposition [47]. The sparse coding step, which involves solving an orthogonal matching pursuit problem [39], is the most expensive. Therefore, to derive the computational complexities, we find the approximate number of operations required to solve the sparse coding step in each iteration.

A. Complexity of FLIS

As discussed above, we find the approximate number of operations required to solve the sparse coding step in our algorithm. To do that, first we find the complexity of the major sparse coding step which is given by Eq. (21).

$\arg\min_{\|X\|_0 \le L} \|Y - DX\|_F^2$   (21)

where Y ∈ ℝ^(d×N) and D ∈ ℝ^(d×K). For a batch-OMP problem with these dimensions, the computational complexity derived in [48] is N(2dK + L²K + 3LK + L³) + dK². Assuming L ≪ K ≤ d ≪ N, this approximately simplifies to

$NK(2d + L^2)$.   (22)
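The exact and simplified counts can be written down directly (a transcription of the formulas above, not a benchmark):

```python
def batch_omp_ops(N, K, d, L):
    """Operation counts for batch-OMP: the exact expression from [48]
    and the simplified form NK(2d + L^2) of Eq. (22)."""
    exact = N * (2*d*K + L**2*K + 3*L*K + L**3) + d * K**2
    approx = N * K * (2*d + L**2)
    return exact, approx
```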

The sparse coding step in our FLIS algorithm requires solving $\arg\min_{\|\bar{X}\|_0 \le L} \|\bar{Y}_{new} - D_{new}\bar{X}\|_F^2$, where Ȳ_new ∈ ℝ^((d+3)×3N) and D_new ∈ ℝ^((d+3)×K), i.e., Eq. (13). Substituting these values into Eq. (22), the complexity of learning the dictionary for a single class is 3NK(2(d+3)+L²). Since we have 3 classes, the overall complexity is multiplied by 3: C_FLIS = 9NK(2(d+3)+L²). As the same dictionary is used for all pixels in an image I of dimension I_x×I_y, the per-pixel complexity is C_FLIS = 9NK(2(d+3)+L²)/(I_x×I_y).

B. Complexity of DDLS [32]

We already showed that removing the discriminating terms from FLIS in Eq. (7) yields the objective function described for DDLS in Section II-C; therefore, the most complex step remains the same for DDLS as well. However, since DDLS does not include the distance feature, the feature length changes from d to d/2, and DDLS computes the dictionaries for all classes at once. Keeping these two differences in mind, the computational complexity of DDLS is C_DDLS = 9NK(2(d/2+3)+L²). In addition, a separate dictionary is computed for each pixel in DDLS, so this cost scales with the size of the image.

Appendix B. Memory Requirements

We now calculate the memory required for our method and compare it with DDLS [32] and patch based SRC [25]. The memory requirement for each method is the number of bytes required to store its dictionaries. For FLIS and DDLS, the dictionary size drives the memory requirement, whereas for SRC the number of training images drives it, as SRC uses pre-defined dictionaries. Also note that, since the entire CT stack is divided into P partitions and a dictionary is stored for each partition, we derive the memory required for a single partition; to obtain the total memory, the formulas derived in the subsequent sections must be multiplied by P.

A. Memory required for FLIS

Suppose the length of each dictionary is K and the size of the column vector is d; then the size of the complete dictionary for all 3 classes combined is d×3K. We also store the linear classifier W for classification, which is of size 3×3K. Therefore, the complete size of the dictionary is (d+3)×3K. Assuming each element is represented by 16 bytes, the total memory in bytes required for storing FLIS dictionaries is M_FLIS = (d+3)×3K×16.

B. Memory required for DDLS [32]

One major difference between FLIS and DDLS is that the column vector in DDLS is approximately half the size of that in FLIS (d/2 instead of d), as the distance values are not used in DDLS. The other major difference is that a dictionary is stored for each individual pixel. Keeping these two differences in mind, and with the same dictionary length, the total memory in bytes required for storing DDLS dictionaries is M_DDLS = (d/2+3)×3K×16×I_x×I_y, where I_x×I_y is the image size.

C. Memory required for Patch based SRC [25]

In the SRC method, pre-defined dictionaries for each pixel are stored instead of compact learned dictionaries. For a given pixel x in an image, a patch of size w×w is considered around the same pixel location in each training image, and patches of size w×w around neighboring pixels form the dictionary of pixel x. Assuming there are N_t training images, the total size of the dictionary for a given pixel is (d/2)×(d/2)×N_t, as the patch vector in this method is approximately half the size of the FLIS column vector. Therefore, the total memory in bytes required for this method is M_SRC = (d/2)×(d/2)×N_t×I_x×I_y×16.

Footnotes

* This work is supported by NIH Grant number R01HD085853.

1. Note that the method in [11] was implemented for MR brain images. We adapted their strategy for segmenting our CT images.

2. Prior to application of ANOVA, we rigorously verified that the observations (dice overlap values) satisfy ANOVA assumptions [41].

References

1. Adams R, et al. Symptomatic occult hydrocephalus with normal cerebrospinal-fluid pressure: a treatable syndrome. New England Journal of Medicine. 1965;273(3):117–126. doi: 10.1056/NEJM196507152730301.
2. Drake JM, et al. Randomized trial of cerebrospinal fluid shunt valve design in pediatric hydrocephalus. Neurosurgery. 1998;43(2):294–303. doi: 10.1097/00006123-199808000-00068.
3. Warf BC. Endoscopic third ventriculostomy and choroid plexus cauterization for pediatric hydrocephalus. Clinical Neurosurgery. 2007;54:78.
4. Mandell JG, et al. Volumetric brain analysis in neurosurgery: Part 1. Particle filter segmentation of brain and cerebrospinal fluid growth dynamics from MRI and CT images. Journal of Neurosurgery: Pediatrics. 2015;15(2):113–124. doi: 10.3171/2014.9.PEDS12426.
5. Mandell JG, et al. Volumetric brain analysis in neurosurgery: Part 2. Brain and CSF volumes discriminate neurocognitive outcomes in hydrocephalus. Journal of Neurosurgery: Pediatrics. 2015;15(2):125–132. doi: 10.3171/2014.9.PEDS12427.
6. Luo F, et al. Wavelet-based image registration and segmentation framework for the quantitative evaluation of hydrocephalus. Journal of Biomedical Imaging. 2010;2010:2. doi: 10.1155/2010/248393.
7. Brandt ME, et al. Estimation of CSF, white and gray matter volumes in hydrocephalic children using fuzzy clustering of MR images. Computerized Medical Imaging and Graphics. 1994;18(1):25–34. doi: 10.1016/0895-6111(94)90058-2.
8. Mayer A, Greenspan H. An adaptive mean-shift framework for MRI brain segmentation. IEEE Transactions on Medical Imaging. 2009;28(8):1238–1250. doi: 10.1109/TMI.2009.2013850.
9. Li C, Goldgof DB, Hall LO. Knowledge-based classification and tissue labeling of MR images of human brain. IEEE Transactions on Medical Imaging. 1993;12(4):740–750. doi: 10.1109/42.251125.
10. Weisenfeld NI, Warfield SK. Automatic segmentation of newborn brain MRI. NeuroImage. 2009;47(2):564–572. doi: 10.1016/j.neuroimage.2009.04.068.
11. Makropoulos A, et al. Automatic whole brain MRI segmentation of the developing neonatal brain. IEEE Transactions on Medical Imaging. 2014;33(9). doi: 10.1109/TMI.2014.2322280.
12. Ribbens A, et al. Unsupervised segmentation, clustering, and groupwise registration of heterogeneous populations of brain MR images. IEEE Transactions on Medical Imaging. 2014;33(2):201–224. doi: 10.1109/TMI.2013.2270114.
13. Greenspan H, Ruf A, Goldberger J. Constrained Gaussian mixture model framework for automatic segmentation of MR brain images. IEEE Transactions on Medical Imaging. 2006;25(9):1233–1245. doi: 10.1109/tmi.2006.880668.
14. Liu B, et al. Automatic segmentation of intracranial hematoma and volume measurement. In: 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE; 2008. pp. 1214–1217.
15. Liao CC, et al. A multiresolution binary level set method and its application to intracranial hematoma segmentation. Computerized Medical Imaging and Graphics. 2009;33(6):423–430. doi: 10.1016/j.compmedimag.2009.04.001.
16. Sharma B, Venugopalan K. Classification of hematomas in brain CT images using neural network. In: Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on. IEEE; 2014. pp. 41–46.
17. Gong T, et al. Finding distinctive shape features for automatic hematoma classification in head CT images from traumatic brain injuries. In: Tools with Artificial Intelligence (ICTAI), 2013 IEEE 25th International Conference on. IEEE; 2013. pp. 242–249.
18. Soltaninejad M, et al. A hybrid method for haemorrhage segmentation in trauma brain CT. MIUA. 2014:99–104.
19. Wright J, et al. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2009;31(2):210–227. doi: 10.1109/TPAMI.2008.79.
20. Srinivas U, et al. Simultaneous sparsity model for histopathological image representation and classification. IEEE Transactions on Medical Imaging. 2014;33(5):1163–1179. doi: 10.1109/TMI.2014.2306173.
21. Vu TH, et al. Histopathological image classification using discriminative feature-oriented dictionary learning. IEEE Transactions on Medical Imaging. 2016;35(3):738–751. doi: 10.1109/TMI.2015.2493530.
22. Mousavi HS, et al. Automated discrimination of lower and higher grade gliomas based on histopathological image analysis. Journal of Pathology Informatics. 2015;6. doi: 10.4103/2153-3539.153914.
23. Yu Y, et al. Group sparsity based classification for cervigram segmentation. In: Biomedical Imaging: From Nano to Macro, 2011 IEEE International Symposium on. IEEE; 2011. pp. 1425–1429.
24. Wang L, et al. Segmentation of neonatal brain MR images using patch-driven level sets. NeuroImage. 2014;84:141–158. doi: 10.1016/j.neuroimage.2013.08.008.
25. Wang L, et al. Integration of sparse multi-modality representation and anatomical constraint for isointense infant brain MR image segmentation. NeuroImage. 2014;89:152–164. doi: 10.1016/j.neuroimage.2013.11.040.
26. Wu Y, et al. Prostate segmentation based on variant scale patch and local independent projection. IEEE Transactions on Medical Imaging. 2014;33(6):1290–1303. doi: 10.1109/TMI.2014.2308901.
27. Zhou Y, et al. Nuclei segmentation via sparsity constrained convolutional regression. In: 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). IEEE; 2015. pp. 1284–1287.
28. Liao S, et al. Sparse patch-based label propagation for accurate prostate localization in CT images. IEEE Transactions on Medical Imaging. 2013;32(2):419–434. doi: 10.1109/TMI.2012.2230018.
29. Yang M, et al. Fisher discrimination dictionary learning for sparse representation. In: 2011 International Conference on Computer Vision. IEEE; 2011. pp. 543–550.
  • 29.Yang M, et al. Fisher discrimination dictionary learning for sparse representation. 2011 International Conference on Computer Vision; IEEE; 2011. pp. 543–550. [Google Scholar]
  • 30.Jiang Z, Lin Z, Davis LS. Label consistent K-SVD: Learning a discriminative dictionary for recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2013;35(11):2651–2664. doi: 10.1109/TPAMI.2013.88. [DOI] [PubMed] [Google Scholar]
  • 31.Monga V. Handbook of Convex Optimization Methods in Imaging Science. Springer; 2017. [Google Scholar]
  • 32.Tong T, et al. Segmentation of MR images via discriminative dictionary learning and sparse coding: Application to hippocampus labeling. NeuroImage. 2013;76:11–23. doi: 10.1016/j.neuroimage.2013.02.069. [DOI] [PubMed] [Google Scholar]
  • 33.Lee J, et al. Brain tumor image segmentation using kernel dictionary learning. 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); IEEE; 2015. pp. 658–661. [DOI] [PubMed] [Google Scholar]
  • 34.Roy S, et al. Subject-specific sparse dictionary learning for atlas-based brain MRI segmentation. IEEE journal of biomedical and health informatics. 2015;19(5):1598–1609. doi: 10.1109/JBHI.2015.2439242. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Bevilacqua M, Dharmakumar R, Tsaftaris SA. Dictionary-driven ischemia detection from cardiac phase-resolved myocardial BOLD MRI at rest. IEEE transactions on medical imaging. 2016;35(1):282–293. doi: 10.1109/TMI.2015.2470075. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Nouranian S, et al. Learning-based multi-label segmentation of transrectal ultrasound images for prostate brachytherapy. IEEE transactions on medical imaging. 2016;35(3):921–932. doi: 10.1109/TMI.2015.2502540. [DOI] [PubMed] [Google Scholar]
  • 37.Cherukuri V, et al. Learning based image segmentation of post-operative ct-images: A hydrocephalus case study. Neural Engineering (NER), 2017 8th International IEEE/EMBS Conference on; IEEE; 2017. pp. 13–16. [Google Scholar]
  • 38.Mairal J, et al. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research. 2010 Jan;11:19–60. [Google Scholar]
  • 39.Tropp JA, Gilbert AC. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on information theory. 2007;53(12):4655–4666. [Google Scholar]
  • 40.Cherukuri V, et al. Tech Rep. The Pennsylvania State University; 2017. Implementation details of FLIS. [Online]. Available: https://scholarsphere.psu.edu/concern/generic_works/bvq27zn031. [Google Scholar]
  • 41.McDonald JH. Handbook of biological statistics. Vol. 2 Sparky House Publishing; Baltimore, MD: 2009. [Google Scholar]
  • 42.Wu CJ, Hamada MS. Experiments: planning, analysis, and optimization. Vol. 552 John Wiley & Sons; 2011. [Google Scholar]
  • 43.Kohavi R, et al. Ijcai. 2. Vol. 14. Stanford, CA: 1995. A study of cross-validation and bootstrap for accuracy estimation and model selection; pp. 1137–1145. [Google Scholar]
  • 44.Lee MS, et al. Efficient algorithms for minimizing cross validation error. Machine Learning Proceedings 1994: Proceedings of the Eighth International Conference; Morgan Kaufmann; 2014. p. 190. [Google Scholar]
  • 45.Moeskops P, et al. Automatic segmentation of MR brain images with a convolutional neural network. IEEE transactions on medical imaging. 2016;35(5):1252–1261. doi: 10.1109/TMI.2016.2548501. [DOI] [PubMed] [Google Scholar]
  • 46.Pereira S, et al. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE transactions on medical imaging. 2016;35(5):1240–1251. doi: 10.1109/TMI.2016.2538465. [DOI] [PubMed] [Google Scholar]
  • 47.Aharon M, Elad M, Bruckstein A. rmk-svd: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on signal processing. 2006;54(11):4311–4322. [Google Scholar]
  • 48.Rubinstein R, Zibulevsky M, Elad M. Efficient implementation of the K-SVD algorithm using batch orthogonal matching pursuit. Cs Technion. 2008;40(8):1–15. [Google Scholar]
