Author manuscript; available in PMC: 2013 Sep 1.
Published in final edited form as: Int J Comput Assist Radiol Surg. 2012 Sep;7(5):785–798. doi: 10.1007/s11548-012-0670-0

Segmentation and quantification of intra-ventricular/cerebral hemorrhage in CT scans by modified distance regularized level set evolution technique

K N Bhanu Prakash 1,, Shi Zhou 2, Tim C Morgan 3, Daniel F Hanley 4, Wieslaw L Nowinski 5
PMCID: PMC3477508  NIHMSID: NIHMS387991  PMID: 22293946

Abstract

Purpose

An automatic, accurate and fast segmentation of hemorrhage in brain Computed Tomography (CT) images is necessary for quantification and treatment planning when assessing large numbers of data sets. Though manual segmentation is accurate, it is time consuming and tedious. Semi-automatic methods require user interaction and may introduce variability into the results. Our study proposes a modified distance regularized level set evolution (MDRLSE) algorithm for hemorrhage segmentation.

Methods

The study data set (from the ongoing CLEAR-IVH phase III clinical trial) comprises 200 sequential CT scans of 40 patients collected at 10 different hospitals using machines from different vendors. The data set contained scans with both constant and variable slice thickness. Our method comprised pre-processing (filtering and skull removal), segmentation (MDRLSE, a two-stage method with shrinking and expansion stages) with modified parameters for faster convergence and higher accuracy, and post-processing (reduction of false positives and false negatives).

Results

Results are validated against the gold standard marked manually by a trained CT reader and neurologist. Data sets are grouped as small, medium and large based on the volume of blood. Statistical analysis is performed for both training and test data sets in each group. The median Dice statistical indices (DSI) for the 3 groups are 0.8971, 0.8580 and 0.9173 respectively. Pre- and post-processing enhanced the DSI by 8 and 4% respectively.

Conclusions

The MDRLSE improved the accuracy and speed for segmentation and calculation of the hemorrhage volume compared to the original DRLSE method. The method generates quantitative information, which is useful for specific decision making and reduces the time needed for the clinicians to localize and segment the hemorrhagic regions.

Keywords: Segmentation, Level sets, Hemorrhage, CT, Brain

Introduction

Intra-ventricular and cerebral hemorrhage (IVH and ICH) are dangerous conditions with poor outcome and a high death rate [1–3]. Quantification of IVH and ICH is essential for management of the hemorrhage and treatment planning. Though manual methods are accurate, they are time consuming, strenuous and may introduce variability when applied to large data sets. In situations like a clinical trial, where a large number of data sets have to be analyzed by different people at various locations and/or with different settings, manual and semi-automatic segmentation methods consume a huge amount of valuable time, are laborious and have a high probability of introducing random errors due to inter- and intra-operator variability. Hence, in a clinical setting where time and clinical resources are valuable, an automatic, accurate and fast segmentation method is necessary.

Computed Tomography (CT) is one of the most popular imaging techniques for the evaluation of hemorrhage because it is noninvasive, painless and fast, provides high contrast between tissue and blood, and is available in most hospitals and emergency services. Fresh blood or hemorrhage appears brighter than other tissues in CT scans, with a detection/identification sensitivity of 90% within the first 24 h, 80% for the first 3 days, and about 50% for up to 1 week [4]. To quantitatively evaluate the IVH and ICH volume, the hemorrhage region has to be segmented in the CT scans. Designing an image segmentation technique for medical applications is a challenging task due to the non-availability of anatomical models that capture all possible variations (e.g., shape, size, texture) of different structures, low signal-to-noise ratio, inherent artifacts and other factors that influence image quality. Segmentation of hemorrhage in CT brain scans is difficult for various reasons, such as variability in Hounsfield units (HU) within the hemorrhagic region, partial volume effects at the edges, and artifacts and noise in the scans.

Many image segmentation algorithms are available in the literature, but there are only a few studies on segmentation of hemorrhage [5–13]. Most of these studies either do not report the accuracy obtained or are limited to data sets from a single source. Active contour and level set methods have been widely used for medical image segmentation and computer vision and are being progressively improved. The concept of active contours was introduced by Kass et al. [14] for segmenting objects in images using dynamic curves. Active contour models are classified as either parametric or geometric based on their representation and implementation. Parametric active contours are represented explicitly as parameterized curves in a Lagrangian framework, while geometric active contours are represented implicitly as level set functions (LSFs) that evolve in an Eulerian framework. The level set method was introduced by Osher and Sethian [15] for capturing moving fronts. Different LSF formulations can be found in the literature, but signed distance function-based LSFs are preferred due to their numerical stability. However, during the evolution, the LSF often deviates from the ideal signed distance function. Hence, as a numerical remedy, the LSF is either periodically approximated by a signed distance function or the evolution is intentionally stopped and re-initialized. This re-initialization technique has been used extensively to maintain stable curve evolution. However, re-initialization of the LSF is considered a disagreement between the theory of the level set method and its implementation [16]. Many proposed re-initialization schemes also have undesirable side effects, such as shifting the zero level contours away from their true location. Therefore, how to apply re-initialization, or how to avoid it altogether, remains a serious issue.

Li et al. [17] proposed a different approach to deal with the issue of level set re-initialization: a variational formulation that forces the LSF to remain close to a signed distance function, thus eliminating the need for the costly re-initialization procedure. Despite its side effects under certain circumstances, the major contributions of level set evolution without re-initialization are the following. First, a significantly larger time step can be used for numerically solving the evolution partial differential equation, which in turn speeds up the curve evolution; second, the LSF can be initialized as a simple binary function, which is computationally more efficient to generate than a signed distance function; third, the level set evolution can be implemented with a simple finite difference scheme, which is more accurate than the complex upwind scheme used in traditional level set formulations. The distance regularized level set evolution (DRLSE) [18] is a generalized variational form of level set evolution without re-initialization. DRLSE drives the motion of the zero level contours more accurately to the desired location and eliminates the side effects of the earlier re-initialization formulations. The distance regularization term is defined with a potential function whose minimum points force the gradient magnitude of the LSF toward a desired value, thereby maintaining the desired signed distance profile near the zero level set. The level set evolution is derived as a gradient flow that minimizes the energy functional. During the evolution, the regularity of the LSF is maintained by a forward-and-backward diffusion derived from the distance regularization term. As a result, the distance regularization completely eliminates the need for re-initialization and avoids the undesirable side effects introduced by the penalty term.

The aim of this study was to develop an accurate, automatic method for hemorrhage segmentation in large studies (such as clinical trials) and to explore the suitability and applicability of the DRLSE method for hemorrhage segmentation. The paper is organized as follows: (i) data description, appearance of tissue and blood in a CT scan, and pre-processing; (ii) performance evaluation of the original DRLSE method on our data set (brain CT images with hemorrhage) and its limitations; (iii) proposed modifications to the algorithm to improve accuracy and speed, followed by (iv) results of MATLAB simulations and (v) discussion.

Materials and methods

The Clot Lysis Evaluating Accelerated Resolution of Intra-ventricular Hemorrhage (CLEAR-IVH) phase III trial is a multicenter, international, randomized clinical trial [19–21] in the management and treatment of subjects with small ICH and large IVH.

Patient data

The data set for our study is part of the data collected for the CLEAR-IVH trial. It comprised two hundred sequential CT scans of 40 subjects from 10 different hospitals (from different geographical locations in the USA and Europe). The data set included subjects with severe spontaneous IVH and a relatively small-to-medium-sized supratentorial ICH (≤30 ml), with the combined volume of ICH and IVH varying from 1.5 to 90 cc. In order to confine the study to IVH and ICH, patients with extra-axial hemorrhage (bleeding that occurs within the skull but outside the brain tissue: epidural, subdural and subarachnoid hemorrhages) or suspected aneurysm or arteriovenous malformation were not included.

Image data

The CT scans of the enrolled patients were acquired using the scanners of the participating hospitals and their respective scan protocols. Slice thickness within a volume was constant in most data sets, while 20 data sets had variable slice thickness. The slice thickness varied from 2.5 to 10 mm. To test the robustness of the algorithm, we also included volumes with considerable artifacts, such as beam hardening, motion artifact, partial volume effects (volume averaging), head tilt and a clipped field of view. About 20% of the data sets (i.e., 40 out of 200 scans) had considerable artifacts. All series of the study were saved as standard Digital Imaging and Communications in Medicine (DICOM) axial (transverse) slices.

Our study involved four stages: (i) pre-processing (filtering and skull removal), (ii) segmentation using DRLSE method, (iii) post-processing (to remove false positives and reduce false negatives) and (iv) validation as shown in Fig. 1.

Fig. 1 Flowchart describing the overall process of the study

Appearance of tissues and blood on a CT scan

The amount of X-rays absorbed by a tissue is known as attenuation, and each tissue has a fairly constant attenuation coefficient. In CT scanning, these attenuation coefficients are mapped to an arbitrary scale between −1000 (air) and +1000 (bone), as shown in Fig. 2. The intensity of tissue or bone depends on the attenuation coefficient; higher attenuation coefficients result in brighter objects in the CT scan. By windowing, we can focus on the tissues of interest in a CT scan. The bone (skull) and calcified regions have higher HU than the other regions of the brain and blood, and blood has higher HU than the soft tissues and cerebrospinal fluid (CSF). Blood appears as hyperintense, soft tissues of the brain as isointense and CSF as hypointense regions in CT scans.

Fig. 2 The Hounsfield scale for different kinds of tissues. The skull and calcified regions appear brightest, blood as hyperintense, soft tissues of the brain as isointense and cerebrospinal fluid as hypointense on a CT scan
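As a simple illustration of the windowing described above, the HU-to-intensity mapping can be sketched as follows (the window center and width below are typical brain-window values chosen for illustration, not settings from the paper):

```python
import numpy as np

def apply_window(hu_image, center=40.0, width=80.0):
    """Map Hounsfield units to [0, 1] display intensities using a
    window (level/width) transform."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(hu_image, lo, hi) - lo) / (hi - lo)
```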

Pre-processing

The pre-processing stage comprised skull removal and filtering of the CT scans. Irrelevant features present in the scan, such as the skull and head holder, which can be potential sources of segmentation errors, were removed from the data before segmentation. An in-house developed automatic algorithm (based on clustering, thresholding and morphological processing) was used for skull removal. Each CT scan was first converted to an intensity image using the scan window settings. The volume data were clustered into three classes (background; tissue, including hemorrhage; and skull). The threshold value for each class was derived using the window settings of the scan and the attenuation coefficient of each tissue type to obtain a brain tissue image from the full-range scan. Morphological operations (erosion and dilation with a disk-shaped structuring element whose size was based on the width of the skull bone) were performed to remove bones in the sphenoid sinus, ethmoid sinus and anterior clinoid regions. Gaussian filtering (with sigma = 1.5 and kernel size = 6) was used in the pre-processing stage to reduce noise in the data.
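A rough Python sketch of this pipeline is given below (the paper's implementation is in MATLAB; the bone threshold and structuring-element radius here are illustrative assumptions, and the clustering step is replaced by simple thresholding):

```python
import numpy as np
from scipy import ndimage as ndi

def strip_skull_and_filter(hu_slice, bone_thresh=100.0, disk_radius=3):
    """Threshold away bone, keep the largest soft-tissue component
    (the brain), then smooth with a Gaussian (sigma = 1.5 as in the paper)."""
    brain_mask = (hu_slice > 0) & (hu_slice < bone_thresh)
    # Disk-shaped structuring element for erosion/dilation.
    y, x = np.ogrid[-disk_radius:disk_radius + 1, -disk_radius:disk_radius + 1]
    disk = x**2 + y**2 <= disk_radius**2
    brain_mask = ndi.binary_erosion(brain_mask, structure=disk)
    labels, n = ndi.label(brain_mask)  # keep the largest connected component
    if n > 0:
        sizes = ndi.sum(brain_mask, labels, index=range(1, n + 1))
        brain_mask = labels == (1 + int(np.argmax(sizes)))
    brain_mask = ndi.binary_dilation(brain_mask, structure=disk)
    brain = np.where(brain_mask, hu_slice, 0.0)
    return ndi.gaussian_filter(brain, sigma=1.5), brain_mask
```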

Performance evaluation of the original distance regularized level set evolution and its limitations

There are both high contrast (high gradient edges) and low contrast regions (low gradient edges) in the CT scans with hemorrhage (IVH and ICH). The fresh blood or hemorrhage has higher HU than the decomposed blood or the blood mixed with CSF in the ventricles. Along with the variability of the blood HU, there are artifacts and noise (e.g., beam hardening artifacts, thermal noise, partial volume), which affect the contrast and edges in the CT images. Although weak noise can be attenuated or removed by pre-processing, such as filtering, strong noise induced during data acquisition step or in low contrast images could make the objects’ boundaries very difficult to distinguish.

The original DRLSE algorithm accurately identified the strong edges in the scan but had limitations in tracking the weak edges. This is not a limitation of DRLSE alone, but an open research problem for all edge-based active contour techniques. Considering its overall initial performance, DRLSE (Fig. 3a, b) appeared to be a promising method for brain CT image segmentation. Hence, we set out to improve the DRLSE algorithm according to our application needs.

Fig. 3 Segmentation results of the original DRLSE algorithm on brain CT images with high contrast (a, b) and low contrast (c, d) edges. The original DRLSE algorithm converged well to the strong edges (a, b) but failed in the cases of low contrast edges (c, d)

Some of the major limitations found in the original DRLSE algorithm for our application were as follows: (i) computation time of the algorithm, (ii) influence of the initial position on the convergence and (iii) number of iterative steps required for convergence.

Computation time

Even though periodic re-initialization was avoided in the DRLSE algorithm, the simulation took a considerable amount of time to segment a 512×512 image: about 200 s per slice of volume data in the MATLAB 2008R environment, running on Windows XP with a dual-core X9650 CPU @ 3 GHz and 2 GB RAM. We also tried a larger time step to make the evolution of the LSF faster, but the gain in time was very limited, and in some cases it caused instability in the numerical model and missed weak boundaries (Fig. 3c, d).

Initial position and shape of level set function

In some cases, the result depended on the initial position and shape of the LSF. After skull removal, many scans had a strong, high contrast edge between the brain and the image background in the posterior fossa region (due to beam hardening artifacts and partial volume effects). Initial LSFs defined outside the brain region were likely to be trapped at the brain's boundary instead of at the hemorrhage inside. These observations argued for proper initialization of the LSF within the brain's boundary without losing detection sensitivity.

Number of iterative steps

On high contrast images, the LSF needs only a small number of iterations to reach a stable shape, while it may need more iterations to converge on low contrast images. Therefore, a suitable criterion to stop the evolution of the LSF had to be explored.

Selection of parameter values

To get optimal segmentation results, parameters such as the distance regularization weight, the contour integral weight and the balloon (pressure) force need to be set for each image independently. As it is difficult to define the optimal parameters theoretically (due to variability in the data), they were determined empirically.

Proposed modified DRLSE algorithm (MDRLSE)

To counter noise, we applied Gaussian filtering as pre-processing before subjecting the data to segmentation. To overcome the computation time limitation, we carried out segmentation in two stages: shrinking and expansion. The objective of shrinking was to let the zero level contours surround the hemorrhage regions as fast as possible. The key concern here was not to find exact boundaries, but to eliminate all contours that do not correspond to hemorrhage. We used the radiological properties of CT images (i.e., the HU of blood, 60–80; blood is hyperintense relative to the other soft tissues of the brain) to decide on the region. During the expansion step, the zero level contours were expanded slowly outwards until they reached the boundaries of the bright hemorrhage regions (using the edge information). The expansion process was designed to cover the hemorrhagic regions and converge to their edges without crossing the boundaries.

The flow chart of the modified implementation of the original DRLSE algorithm, that is, the two-stage DRLSE algorithm with stability checks, is shown in Fig. 1 (Segmentation block, MDRLSE).

Initial level set function

The initial LSF was built based on the shape of the skull-removed brain slice (axial slice) in the CT image. The initial contour, slightly smaller than the brain, was placed at the center of the slice. This ensured shrinkage of the active contour within the brain area without it getting stuck at the edge of the brain. An advantage of DRLSE is that the LSF does not have to be a signed distance function; it can simply be initialized with binary values.
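A minimal sketch of such a binary initialization (the constant c0 and the erosion margin are illustrative, not values from the paper):

```python
import numpy as np
from scipy import ndimage as ndi

def initial_lsf(brain_mask, c0=2.0, margin=5):
    """Binary initial LSF: -c0 inside a region slightly smaller than
    the brain, +c0 outside."""
    inside = ndi.binary_erosion(brain_mask, iterations=margin)
    return np.where(inside, -c0, c0).astype(float)
```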

Stage-1: shrinking by DRLSE

The contours of the LSF were assumed to be smooth, as the hemorrhagic regions and brain structures have smooth contours. Further, a contour requires some minimum area to be meaningful. In order to reduce false-positive contours due to artifacts or noise, we assumed that hemorrhagic regions have an area of at least 20 pixels (based on observations of the data set). This area information was used to generate the balloon or pressure force in the normal direction of the zero level contours. Using radiological and anatomical information (the HU of blood and the location of the ventricles), a binary image was generated from the slice (Fig. 4a) by thresholding, with the hyperintense hemorrhagic regions represented as black and the normal tissue regions and background as white. The binary image was eroded to remove the high contrast edges near the skull boundary. These steps reduced the interference of artifacts and noise with the zero-level contours. The dark regions in the binary image (Fig. 4b), which correspond to the hyperintense hemorrhagic regions (at least 20 pixels in size), formed the regions of interest for the DRLSE when shrinking the initial level set contour.

Fig. 4 Example of the shrinking stage by DRLSE. a Original slice; b binarized slice showing the hemorrhagic regions and noise: dark regions with an area of at least 20 pixels were considered hemorrhagic regions and formed the regions of interest for the DRLSE to shrink the zero level contours; c result of stage-1, shrinking of the initial level set function to the regions of interest
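A sketch of this ROI generation (the blood HU range 60–80 and the 20-pixel minimum area are from the paper; the single-pass erosion is a simplification of the described edge removal):

```python
import numpy as np
from scipy import ndimage as ndi

def hemorrhage_roi(hu_slice, brain_mask, hu_lo=60.0, hu_hi=80.0, min_area=20):
    """Threshold the blood HU range, erode away thin high-contrast rims
    near the skull, and discard components smaller than min_area pixels."""
    blood = (hu_slice >= hu_lo) & (hu_slice <= hu_hi) & brain_mask
    blood = ndi.binary_erosion(blood)
    labels, n = ndi.label(blood)
    for i in range(1, n + 1):
        component = labels == i
        if np.count_nonzero(component) < min_area:
            blood[component] = False
    return blood  # True where candidate hemorrhage remains
```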

During the shrinking process, the DRLSE parameters were set (time step = 2; weight of the distance regularization term, μ = 0.2/time step; max_iterations = 100; coefficient of the weighted length term, λ = 3.5; coefficient of the weighted area term, α = 4.0; and width of the Dirac delta function, ε = 3.0) such that the zero-level contours neglect tiny bright spots or edges (based on the area information) and converge faster to the hyperintense regions in the CT image (Fig. 4c).
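For concreteness, the evolution that these parameters control can be sketched as a single DRLSE iteration in Python/numpy (a simplified reading of Li et al. [18], not the authors' MATLAB code; the helper names are ours):

```python
import numpy as np
from scipy import ndimage as ndi

def edge_indicator(image, sigma=1.5):
    """g = 1 / (1 + |grad(G_sigma * I)|^2); close to 0 at strong edges."""
    smoothed = ndi.gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + gx**2 + gy**2)

def _div(fx, fy):
    """Divergence of a 2-D vector field (axis 0 = rows = y)."""
    return np.gradient(fx, axis=1) + np.gradient(fy, axis=0)

def drlse_step(phi, g, mu, lam, alpha, eps, dt):
    """One iteration of distance regularized level set evolution."""
    gy, gx = np.gradient(phi)
    s = np.sqrt(gx**2 + gy**2)
    # Double-well distance regularization d_p(s) = p'(s)/s keeps
    # |grad(phi)| near 0 or 1 without re-initialization.
    dps = np.where(s <= 1.0, np.sinc(2.0 * s),
                   (s - 1.0) / np.maximum(s, 1e-10))
    dist_term = _div(dps * gx, dps * gy)
    # Smoothed Dirac delta of width eps: active only near the zero level.
    dirac = np.where(np.abs(phi) <= eps,
                     (1.0 + np.cos(np.pi * phi / eps)) / (2.0 * eps), 0.0)
    nx, ny = gx / (s + 1e-10), gy / (s + 1e-10)
    edge_term = dirac * _div(g * nx, g * ny)  # weighted length (curvature)
    area_term = dirac * g                     # balloon/pressure force
    return phi + dt * (mu * dist_term + lam * edge_term + alpha * area_term)

# Shrinking-stage values from the paper: dt = 2, mu = 0.2 / dt,
# lam = 3.5, alpha = 4.0, eps = 3.0, at most 100 iterations.
```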

Stability check for shrinking stage

At the end of each iteration of the shrinking stage, the area of the current contour was compared with the area of the contour at the previous iteration. If the difference was negligible, that is, less than a predefined threshold (e.g., <10 pixels), the algorithm terminated the shrinking and proceeded to the next stage; otherwise it continued to iterate. If the area difference between successive iterations did not fall below the threshold even after the defined maximum number of iterations (e.g., 100), the program forcibly terminated the shrinking process.
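This stability check amounts to a small driver loop around the iteration step (step_fn is a hypothetical interface introduced here; with the sketch above one would pass drlse_step):

```python
import numpy as np

def evolve_until_stable(phi, g, step_fn, max_iter=100, area_tol=10, **params):
    """Iterate the level set until the enclosed area changes by fewer
    than area_tol pixels between iterations (the paper uses a ~10-pixel
    threshold and a 100-iteration cap for the shrinking stage)."""
    prev_area = np.count_nonzero(phi < 0)  # pixels inside the contour
    for _ in range(max_iter):
        phi = step_fn(phi, g, **params)
        area = np.count_nonzero(phi < 0)
        if abs(area - prev_area) < area_tol:
            break  # converged: contour area is stable
        prev_area = area
    return phi
```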

Stage-2: expansion by DRLSE

In the expansion step, the DRLSE parameters (time step = 8; weight of the distance regularization term, μ = 0.2/time step; max_iterations = 50; coefficient of the weighted length term, λ = 0.1; coefficient of the weighted area term, α = −0.3; and width of the Dirac delta function, ε = 1.0) were set such that the zero-level contours are flexible, sensitive to edge information, and search for boundaries outward at a slow pace (small step size). To eliminate the effects of noise on edge detection, Gaussian smoothing with a small sigma (σ = 1.5, to prevent blurring of the boundaries) was performed. The Canny edge detector was used to obtain the image gradient information. The edge indicator takes smaller values near boundaries, as illustrated in Fig. 5a. Eventually, the LSF displays signed distance bands near the zero level, and the zero-level contours are expected to segment the regions of high intensity as in Fig. 5b.

Fig. 5 Example of the expansion stage by DRLSE. a Canny edge image of the original image, with the high contrast edges represented by darker lines; b result of stage-2, expansion of the LSF to the regions of interest
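The paper does not spell out how the Canny edge map enters the edge indicator; one plausible construction (purely an assumption for illustration) softens the binary edge map so the indicator drops toward 0 near detected boundaries:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import feature

def canny_edge_indicator(image, sigma=1.5, softness=2.0):
    """Edge indicator built from a Canny edge map: near 1 in homogeneous
    regions, near 0 at detected boundaries (softness is an assumption)."""
    edges = feature.canny(image.astype(float), sigma=sigma)
    soft = ndi.gaussian_filter(edges.astype(float), softness)
    return 1.0 - soft / (soft.max() + 1e-10)
```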

Stability check for expansion stage

At the end of each iteration of the expansion stage, the area of the current contour was likewise compared with that of the previous iteration. If the difference was less than a predefined threshold (e.g., <10 pixels), the algorithm terminated and proceeded to the next stage; otherwise it continued to iterate. If the area difference between successive iterations did not fall below the threshold even after the defined maximum number of iterations, the program forcibly terminated the expansion process.

Post-processing

After segmentation, every CT scan was subjected to post-processing to remove as many false-positive regions as possible (due to over-segmentation) and to reduce the size of false-negative regions (caused by under-segmentation). False positives were caused by regions having Hounsfield units similar to blood and by overgrowing of the LSFs. Anatomy-based rules (e.g., ventricle position, ventricle shape) and neuroradiology-based rules (such as the appearance of different tissues on unenhanced CT and the size of structures derived from the DICOM header) were formulated to remove false positives (e.g., falx cerebri false positives were removed using information about its width and its closeness to the inter-hemispheric fissure and skull). Automatic region growing [22] was used to reduce the false negatives. The edge pixels of the segmented regions and the maximum intensity point within each region were identified automatically. The gradients from the maximum intensity point to each edge pixel of a region were calculated. Using the weighted gradient in a nine-pixel neighborhood and the intensity value of the center pixel of the neighborhood, region growing was performed from the edge pixels in each direction.
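A much-simplified sketch of the region growing idea (the paper's criterion weights the gradient from the maximum intensity point over a nine-pixel neighborhood; the plain intensity tolerance used here is a stand-in assumption):

```python
import numpy as np
from collections import deque

def grow_from_edges(image, mask, intensity_tol=5.0):
    """Grow a segmented region outward: a neighbor is absorbed when its
    intensity is within intensity_tol of the frontier pixel it touches."""
    grown = mask.astype(bool)
    # Seed the frontier with every mask pixel; only edge pixels can expand.
    queue = deque(zip(*np.nonzero(grown)))
    h, w = image.shape
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                if abs(float(image[ny, nx]) - float(image[y, x])) < intensity_tol:
                    grown[ny, nx] = True
                    queue.append((ny, nx))
    return grown
```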

Gold standard generation (manual segmentation)

Gold standards (for both training and test sets) were marked manually by a trained CT reader and a senior neurologist in consultation with a neuroradiologist. Each gold standard was drawn with a freehand technique on every slice of the CT scan (IVH and ICH) and saved for subsequent review and analysis to ensure consistency. In situations where hemorrhage boundaries were difficult to assess, the observer was allowed to rely on their understanding of the underlying neuroanatomical boundaries to define the hemorrhage boundaries and to incorporate the Hounsfield units for decision making (HU ≥ 40 representing hemorrhage, HU < 40 representing parenchyma/cerebrospinal fluid).

Training and test data set formulation

Forty subjects, with 200 scans, were used for algorithm development and assessment. The scans were divided into three groups [small volume (≤15 cc), medium volume (>15 and ≤35 cc) and large volume (>35 cc)] based on the combined (IVH + ICH) volume of blood using fuzzy C-means clustering. There were 69 data sets in the small volume group, 73 in the medium volume group and 58 in the large volume group. From these, a proportional number of samples was selected for the test set: 17 sets for small volume, 19 for medium volume and 14 for large volume. Using MATLAB's built-in function cvpartition, the CT scan data sets were randomly partitioned into training and test sets with stratification using the class information in each group, that is, both training and test sets had roughly the same class proportions as each group. Summary statistics for the training and test sets, calculated using MEDCALC statistical software, are given in Table 1 and Fig. 6. The training and test sets had comparable statistics.
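In Python, an equivalent stratified partition can be sketched as follows (the paper used fuzzy C-means to find the groups and MATLAB's cvpartition for the split; the fixed cut-offs and synthetic volumes below are illustrative stand-ins):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical combined IVH + ICH volumes in cc, one per scan.
volumes = np.random.uniform(1.5, 90.0, size=200)

# Group labels from the cut-offs reported in the paper.
groups = np.digitize(volumes, bins=[15.0, 35.0])  # 0 small, 1 medium, 2 large

# Stratified 150/50 split: both sets keep roughly the group proportions.
train_idx, test_idx = train_test_split(
    np.arange(len(volumes)), test_size=50, stratify=groups, random_state=0)
```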

Table 1.

Summary statistics of training and test sets

Variable Training set Test set
Sample size 150 50
Lowest value 0.7300 0.9100
Highest value 98.3600 96.9700
Arithmetic mean 26.8045 27.2354
95% CI for the mean 23.4651–30.1439 21.0155–33.4553
Median 21.3300 21.4350
95% CI for the median 17.2565–26.7935 15.9605–28.9904
Variance 428.4005 478.9946
Standard deviation 20.6978 21.8859
Relative standard deviation 0.7722 (77.22%) 0.8036 (80.36%)
Standard error of the mean 1.6900 3.0951
Coefficient of skewness 1.1101 (P < 0.0001) 1.3773 (P = 0.0004)
Coefficient of kurtosis 0.9945 (P = 0.0416) 1.8554 (P = 0.0354)
D’Agostino–Pearson test for normal distribution Reject normality (P < 0.0001) Reject normality (P = 0.0002)
Percentile Training value Training 95% CI Test value Test 95% CI
2.5 1.7950 0.7440–2.7214 1.6000
5 2.7100 1.1683–3.5161 3.1200
10 3.8050 2.7316–6.4500 4.0200 1.6485–9.8518
25 11.5400 8.8177–13.7604 12.2300 6.3407–16.3000
75 37.8700 32.8067–43.4725 36.8400 28.1699–46.4718
90 58.4650 47.2174–65.6137 55.5400 42.1284–87.4980
95 73.2800 60.1202–77.7717 77.1100
97.5 77.6025 69.2293–98.1973 88.1200
Fig. 6 Box plot analysis of the training and test sets used for developing and testing the algorithm

Statistical analysis

Statistical measures, namely sensitivity, specificity, the Dice statistical index (DSI) [23,24], Jaccard's coefficient, conformity, sensibility and Cohen's kappa, were calculated (using MATLAB, EXCEL and MEDCALC) for each group (small, medium and large volume of blood) of the training and test sets. Summary statistics (minimum, maximum, mean, median, standard deviation and coefficient of variation) of these measures were also calculated for each group. The distributions of the DSI, Jaccard's coefficient and Cohen's kappa for both the training and test sets were also computed to characterize the spread of segmentation accuracy with respect to the gold standard. Correlation analysis was performed to check the correlation between the gold standard and the segmentation results.
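Most of these overlap measures follow directly from the per-voxel confusion counts between the segmentation and the gold standard; a sketch is given below (sensibility is omitted, and conformity follows the definition in Chang et al. [24]):

```python
import numpy as np

def overlap_metrics(seg, gold):
    """Overlap statistics between binary segmentation and gold standard."""
    seg, gold = seg.astype(bool), gold.astype(bool)
    tp = np.sum(seg & gold)
    fp = np.sum(seg & ~gold)
    fn = np.sum(~seg & gold)
    tn = np.sum(~seg & ~gold)
    n = tp + fp + fn + tn
    dsi = 2.0 * tp / (2.0 * tp + fp + fn)        # Dice statistical index
    jaccard = tp / (tp + fp + fn)                # Jaccard's coefficient
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    conformity = 1.0 - (fp + fn) / tp if tp else float('nan')
    po = (tp + tn) / n                           # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (po - pe) / (1.0 - pe)               # Cohen's kappa
    return dict(dsi=dsi, jaccard=jaccard, sensitivity=sensitivity,
                specificity=specificity, conformity=conformity, kappa=kappa)
```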

Results

The implementation was done in MATLAB 2008R in a Windows XP environment with a dual-core X9650 CPU @ 3 GHz and 2 GB RAM. The native DRLSE algorithm took about 200 s to process one slice of data, whereas the modified DRLSE algorithm took around 60 s. An average time reduction of about 60% was achieved by the modification.

The segmentation results were compared with the gold standard images provided by a trained neurologist in consultation with a neuroradiologist. Descriptive statistics of the sensitivity, specificity, Dice statistical index (DSI) [23,24], Cohen's kappa, Jaccard's coefficient, conformity and sensibility were calculated and are tabulated in Table 2. The coefficient of variation (COV), the ratio of the standard deviation to the mean, was also calculated for all measures.

Table 2.

Descriptive statistical analysis of segmentation results for the training and test sets (Min—minimum, Max—maximum, SD—standard deviation and COV—coefficient of variation, defined as SD/Mean)

Training set (first three columns) and test set (last three columns); within each: Small volume (≤15 cc), Medium volume (>15 and ≤35 cc), Large volume (>35 cc)
Dice statistical index
Min 0.4305 0.6617 0.7936 0.7936 0.7960 0.7575
Max 0.9730 0.9215 0.9667 0.9667 0.9527 0.9389
Mean 0.7797 0.8432 0.8877 0.8877 0.8584 0.8983
Median 0.8203 0.8667 0.8971 0.8971 0.8580 0.9173
SD 0.1203 0.0569 0.0319 0.0319 0.0358 0.0479
COV 0.1543 0.0675 0.0402 0.0402 0.0417 0.0533
Cohen’s kappa
Min 0.4492 0.6805 0.8036 0.4343 0.8168 0.7975
Max 0.9732 0.9247 0.9667 0.8932 0.9524 0.9390
Mean 0.7999 0.8509 0.8906 0.7538 0.8656 0.9022
Median 0.8296 0.8681 0.8978 0.8194 0.8635 0.9197
SD 0.0997 0.0500 0.0296 0.1507 0.0314 0.0393
COV 0.1246 0.0588 0.0368 0.1999 0.0363 0.0436
Jaccard’s coefficient
Min 0.2743 0.4944 0.6578 0.2777 0.6612 0.6097
Max 0.9474 0.8545 0.9355 0.8016 0.9097 0.8848
Mean 0.6529 0.7328 0.7995 0.6134 0.7536 0.8183
Median 0.6953 0.7648 0.8134 0.6788 0.7514 0.8472
SD 0.1454 0.0805 0.0507 0.1764 0.0560 0.0735
COV 0.2227 0.1099 0.0770 0.2876 0.0744 0.0899
Conformity
Min −1.6452 −0.0226 0.4798 −1.6004 0.4876 0.3597
Max 0.9445 0.8297 0.9311 0.7525 0.9008 0.8698
Mean 0.3520 0.6163 0.7440 0.1749 0.6663 0.7669
Median 0.5618 0.6924 0.7706 0.5269 0.6691 0.8197
SD 0.5580 0.1798 0.0840 0.7450 0.0959 0.1331
COV 1.5853 0.2917 0.1751 4.2601 0.1440 0.1735
Sensibility
Min −14.25 18.51 74.44 55.94 65.68 49.56
Max 100 98.73 98.77 100 99.55 98.46
Mean 80.13 89.19 92.53 91.16 88.72 91.35
Median 89.84 92.80 94.47 93.17 91.65 94.56
SD 26.76 12.08 5.64 9.89 9.06 12.29
COV 0.3340 0.1355 0.0758 0.1085 0.1021 0.1346
Sensitivity
Min 28.71 54.51 72.22 27.78 72.70 77.36
Max 96.66 97.70 95.85 85.62 92.67 93.92
Mean 76.10 80.76 85.80 67.06 83.58 88.24
Median 77.98 82.27 86.15 76.91 83.32 89.44
SD 14.25 8.47 5.13 20.24 5.62 5.12
COV 0.1872 0.1048 0.0711 0.3019 0.0672 0.0580
Specificity
Min 99.60 99.28 99.64 99.85 99.83 99.64
Max 100 99.99 99.99 100 99.99 99.99
Mean 99.93 99.92 99.90 99.97 99.94 99.99
Median 99.97 99.96 99.92 99.97 99.95 99.93
SD 0.09 0.11 0.09 0.04 0.05 0.09
COV 0.0009 0.0011 0.0008 0.0004 0.0005 0.0009

The training sets were used to capture information such as the variability in Hounsfield units of different tissues and blood, the localization of the IVH and ICH, and the shape variations of the hemorrhage.

Table 3 shows the distribution of the DSI, Jaccard's coefficient and Cohen's kappa for all three groups (small, medium and large volume) and for both the training and test sets in each group. The results are expressed as percentages of the number of data sets in the group.

Table 3.

Distribution of the DSI, Jaccard’s coefficient and Cohen’s kappa coefficient for training and test sets in percentage (number of data sets/total number of data sets)

Training set (first three columns) and test set (last three columns); within each: DSI, Jaccard's coefficient, Cohen's kappa
Small volume (≤15 cc)
≤0.1 0 0 0 0 0 0
>0.1 and ≤0.2 0 0 0 0 0 0
>0.2 and ≤0.3 0 1.92 0 0 5.88 0
>0.3 and ≤0.4 0 9.62 0 0 11.76 0
>0.4 and ≤0.5 3.85 1.92 1.92 11.76 5.88 11.76
>0.5 and ≤0.6 7.69 11.54 1.92 11.76 11.76 5.88
>0.6 and ≤0.7 5.77 26.92 11.54 5.88 17.65 11.76
>0.7 and ≤0.8 23.08 40.38 19.23 17.65 41.18 17.65
>0.8 and ≤0.9 53.85 5.77 59.62 52.94 5.88 52.94
>0.9 5.77 1.92 5.77 0.00 0.00 0.00
Total number of data sets 52 52 52 17 17 17
Medium volume (>15 and ≤35 cc)
≤0.1 0 0 0 0 0 0
>0.1 and ≤0.2 0 0 0 0 0 0
>0.2 and ≤0.3 0 0 0 0 0 0
>0.3 and ≤0.4 0 0 0 0 0 0
>0.4 and ≤0.5 0 3.70 0 0 0 0
>0.5 and ≤0.6 0 1.85 0 0 0 0
>0.6 and ≤0.7 3.70 24.07 1.85 0 15.79 0
>0.7 and ≤0.8 12.96 55.56 14.81 5.26 73.68 0
>0.8 and ≤0.9 74.07 14.81 70.37 89.47 5.26 94.74
>0.9 9.26 0.00 12.96 5.26 5.26 5.26
Total number of data sets 54 54 54 19 19 19
Large volume (>35 cc)
≤0.1 0 0 0 0 0 0
>0.1 and ≤0.2 0 0 0 0 0 0
>0.2 and ≤0.3 0 0 0 0 0 0
>0.3 and ≤0.4 0 0 0 0 0 0
>0.4 and ≤0.5 0 0 0 0 0 0
>0.5 and ≤0.6 0 0 0 0 0 0
>0.6 and ≤0.7 0 4.55 0 0 7.14 0
>0.7 and ≤0.8 2.27 34.09 0 7.14 21.43 7.14
>0.8 and ≤0.9 59.09 59.09 54.55 35.71 71.43 35.71
>0.9 38.64 2.27 45.45 57.14 0 57.14
Total number of data sets 44 44 44 14 14 14

The results of segmentation, along with the gold standard, are shown in Fig. 7 for some example slices. The first two rows show segmentation results from data sets with a small volume of blood, the third and fourth rows from medium volume, and the last two rows from large volume. These are taken from test set results of different patients. The green contours represent the gold standard drawn manually by experts; the blue contours indicate the results after stage-1 (shrinking); and the red contours represent the results after stage-2 and post-processing.

Fig. 7 Examples of final segmentation results along with the gold standard (first column: gold standard, green contours; second column: results after the first stage of shrinking, blue contours; last column: final results after expansion of contours, red contours). The first two rows are results from small volume data sets, the third and fourth rows from medium volume, and the fifth and sixth rows from large volume of blood

A scatter plot with the linear regression analysis, demonstrating the correlation between the gold standard and automatic segmentation volumes (for both training and test sets) together with the R-squared values, is shown in Fig. 8. The x-axis represents the gold standard volume in cc, and the y-axis the volume segmented by the modified method in cc.

Fig. 8 Scatter diagram and linear regression analysis for the training and test sets against the gold standard

The same study was repeated without pre-processing and post-processing to assess the impact of these stages on the segmentation; only the DSI was calculated for this analysis. Without the pre-processing stage alone, the median DSI was 0.8243, 0.7981 and 0.8443 for the small, medium and large volume groups respectively. Without the post-processing stage alone, the median DSI for the three groups was 0.8612, 0.8241 and 0.8736. When both the pre- and post-processing stages were included along with the segmentation, the median DSI increased by about 8%. For comparison, the median DSI of the original DRLSE algorithm, including the pre- and post-processing stages, was 0.6571, 0.7243 and 0.7628 for the training set and 0.6632, 0.7256 and 0.7712 for the test set (small, medium and large groups respectively); it produced high numbers of false positives due to improper convergence of the LSF. The blood volume was calculated by multiplying the segmented regions by the voxel size of each slice, also accounting for the inter-slice gap.
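The volume computation itself is then a pixel count times the voxel dimensions; a sketch follows (function and argument names are ours; the spacings would be read from the DICOM header, per slice for variable-thickness scans):

```python
import numpy as np

def hemorrhage_volume_cc(masks, pixel_spacing_mm, slice_spacing_mm):
    """Blood volume from per-slice binary masks: pixel count times the
    in-plane pixel area times the slice-to-slice spacing (which covers
    any inter-slice gap)."""
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_spacing_mm
    n_pixels = sum(int(np.count_nonzero(m)) for m in masks)
    return n_pixels * voxel_mm3 / 1000.0  # mm^3 -> cc (mL)
```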

Discussion

In the management of IVH and ICH, volume quantification is essential for assessment and treatment planning. Though manual segmentation is accurate, it is a time consuming and tedious process. In a clinical trial setup, manual segmentation is not a practicable solution because of the large number of data sets and the need for evaluation at different locations by different personnel. A clinician or neuroradiologist needs about 20–30 min to accurately segment the volume. Though there are commercially available image analysis tools that use semi-automated segmentation techniques (Analyze®, Mayo Clinic; OsiriX®), they need input from users, which requires some level of expertise in both neuroanatomy and image manipulation and might introduce some variability. Hence, an accurate and automatic method is needed to overcome these limitations. In this study, we explored the feasibility of DRLSE and modified it to make it fully automatic and to improve its speed and accuracy. The improvement in speed was achieved by reducing the number of iterative steps, and the improvement in accuracy by using anatomical and radiological information in the pre-processing, segmentation and post-processing stages.

Accuracy and speed of segmentation

The speed and accuracy of segmentation were improved by implementing a two-stage DRLSE algorithm with shrinking and expansion stages, the shrinking stage allowing faster convergence of the initial contour to the hemorrhagic regions and the expansion stage allowing convergence to the edges in the region. The shrinking stage takes larger steps, while the expansion stage grows with smaller steps. Anatomical and radiological information, such as the location of the ventricles and the Hounsfield units of hemorrhage, was used in the shrinking step. This modification reduced the number of steps needed for convergence, and the use of radiological information aided convergence to the hemorrhagic regions.

The accuracy of segmentation was further enhanced by reducing or eliminating false-positive regions caused by artifacts and tissue regions such as the tentorium cerebelli and falx cerebri. Rules were framed using anatomical information, such as the location of the ventricles and the size of the fourth ventricle, and radiological information, such as the Hounsfield units of blood and tissue. The effect of false negatives on accuracy was overcome with automatic gradient-weighted region growing. The pre-processing and post-processing stages increased the accuracy by 8 and 4% respectively.

The usage of such an automatic algorithm in the clinical trial setting will reduce the valuable time of the clinicians needed for segmentation and quantification of the hemorrhage. Further, it will also alleviate the variability errors (inter- and intra-) in measurements introduced due to manual tracing by different individuals at different locations. Auto-segmentation with volume calculation could also be useful in the clinical management of ICH and IVH patients undergoing thrombolytic therapy. The availability of volumetric data might aid in neurosurgical decision making for evacuative procedures.

Along with these advantages, the algorithm in its current form has minor limitations. In some cases, the contours did not evolve fully to the actual boundaries, either because the maximum number of iterations was reached or because the change in area between iterations fell below the predefined threshold. Although it might seem possible to fix this by allowing more iterations, the risk is that the zero-level contours may evolve anywhere once they cross the boundaries. From this viewpoint, the maximum iteration limit serves as a safeguard that prevents the contours from going too far in the expansion step.

Computation time is still quite high for CT scans with a large number of slices. It can be reduced by optimizing the algorithm and code. Implementation in a Visual C++ environment, graphics processing unit-based computation and/or parallelization of the algorithm may also speed up the process. In our study, the scans were processed sequentially; the filtering, segmentation and artifact removal could be parallelized (simultaneous processing of all slices) to further improve speed.

Conclusion

An accurate method for localization, segmentation and quantification of hemorrhage is necessary for decision making and treatment. We have reported our work on segmentation and quantification of hemorrhage for the CLEAR-IVH phase III clinical trial (500 patients to be enrolled, with approximately 6,000 CT scans from the pre- and post-treatment periods), which included pre-processing; segmentation using a modified DRLSE-based method; post-processing to remove false positives; and comparison of the segmentation of intra-ventricular and intra-cerebral hemorrhage regions against the gold standard. The median DSI for the small, medium and large volume groups was 0.8203, 0.8667 and 0.8971 for the training set and 0.8971, 0.8580 and 0.9173 for the test set respectively. The method reduces the valuable time clinicians need to localize and segment the hemorrhagic regions from about 20–30 min to 3–5 min. An added advantage of the algorithm is the availability of quantitative information that is important for therapeutic decision making.

Acknowledgments

This work belongs to our funding organization, the Agency for Science, Technology and Research (A*STAR), Singapore. The algorithm can be obtained from Exploit Technologies, the technology transfer office of A*STAR, by contacting tech-offer@exploit-tech.com.

Footnotes

Conflict of interest None.

Contributor Information

K. N. Bhanu Prakash, Email: bhanu@sbic.a-star.edu.sg, Biomedical Imaging Lab, SBIC, Biopolis, Agency for Science, Technology and Research, #07-01, Matrix, 30, Biopolis Road, Singapore 138671, Singapore

Shi Zhou, Biomedical Imaging Lab, SBIC, Biopolis, Agency for Science, Technology and Research, #07-01, Matrix, 30, Biopolis Road, Singapore 138671, Singapore.

Tim C. Morgan, Email: tmorga10@jhmi.edu, Division of Brain Injury OutComes, Department of Neurology, Johns Hopkins University, 1550 Orleans Street, CRB-II, 3M50 South, Baltimore, MD 21231, USA

Daniel F. Hanley, Email: dhanley@jhmi.edu, Department of Neurology, Johns Hopkins University, 1550 Orleans Street, CRB-II, 3M50 South, Baltimore MD 21231, USA

Wieslaw L. Nowinski, Email: wieslaw@sbic.a-star.edu.sg, Biomedical Imaging Lab, SBIC, Biopolis, Agency for Science, Technology and Research, #07-01, Matrix, 30, Biopolis Road, Singapore 138671, Singapore

References

1. Nieuwkamp DJ, de Gans K, Rinkel GJ, et al. Treatment and outcome of severe intraventricular extension in patients with subarachnoid or intracerebral hemorrhage: a systematic review of the literature. J Neurol. 2000;247(2):117–121. doi: 10.1007/pl00007792.
2. Davis SM, Broderick J, Hennerici M, et al. Hematoma growth is a determinant of mortality and poor outcome after intracerebral hemorrhage. Neurology. 2006;66(8):1175–1181. doi: 10.1212/01.wnl.0000208408.98482.99.
3. Flaherty ML, Haverbusch M, Sekar P, et al. Long-term mortality after intracerebral hemorrhage. Neurology. 2006;66(8):1182–1186. doi: 10.1212/01.wnl.0000208400.08722.7c.
4. http://www.ccmtutorials.com/neuro/ [Accessed 20 Sept 2011].
5. Chen W, Smith R, Ji SY, et al. Automated ventricular systems segmentation in brain CT images by combining low-level segmentation and high-level template matching. BMC Med Inform Decis Mak. 2009;9(Suppl 1):S4. doi: 10.1186/1472-6947-9-S1-S4.
6. Cosic D, Loncaric S. Computer system for quantitative analysis of ICH from CT head images. In: Proceedings of the 19th international conference of the IEEE/EMBS; 1997. pp. 553–556.
7. Cheng D, Cheng K. A PC-based medical image analysis system for brain CT hemorrhage area extraction. In: Proceedings of the 11th IEEE symposium on computer-based medical systems; 1998. p. 240.
8. Loncaric S, Dhawan AP, Kovacevic D, et al. Quantitative intracerebral brain hemorrhage analysis. Proceedings of SPIE medical imaging. 1999;3661:886–941.
9. Perez N, Valdez JA, Guevara MA, et al. Set of methods for spontaneous ICH segmentation and tracking from CT head images. In: Proceedings of the 12th Iberoamerican congress on pattern recognition (CIARP); 2007. pp. 212–220.
10. Chan T. Computer aided detection of small acute intracranial hemorrhage on computer tomography of brain. Comput Med Imaging Graph. 2007;31(4–5):285–298. doi: 10.1016/j.compmedimag.2007.02.010.
11. Yuh EL, Gean AD, Manley GT, et al. Computer-aided assessment of head computed tomography (CT) studies in patients with suspected traumatic brain injury. J Neurotrauma. 2008;25(10):1163–1172. doi: 10.1089/neu.2008.0590.
12. Liu B, Yuan Q, Liu Z, et al. Automatic segmentation of intracranial hematoma and volume measurement. Conf Proc IEEE Eng Med Biol Soc. 2008;2008:1214–1217. doi: 10.1109/IEMBS.2008.4649381.
13. Bardera A, Boada I, Feixas M, et al. Semi-automated method for brain hematoma and edema quantification using computed tomography. Comput Med Imaging Graph. 2009;33(4):304–311. doi: 10.1016/j.compmedimag.2009.02.001.
14. Kass M, Witkin A, Terzopoulos D. Snakes: active contour models. Int J Comput Vis. 1987;1:321–331.
15. Osher S, Sethian J. Fronts propagating with curvature-dependent speed: algorithms based on Hamilton–Jacobi formulations. J Comput Phys. 1988;79(1):12–49.
16. Gomes J, Faugeras O. Reconciling distance functions and level sets. J Vis Commun Image Represent. 2000;11(2):209–223.
17. Li C, Xu C, Gui C, Fox MD. Level set evolution without re-initialization: a variational formulation. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2005. pp. 430–436.
18. Li C, Xu C, Gui C, Fox MD. Distance regularized level set evolution and its application to image segmentation. IEEE Trans Image Process. 2010;19(12):3243–3254. doi: 10.1109/TIP.2010.2069690.
19. Nyquist P, Hanley DF. The use of intraventricular thrombolytics in intraventricular hemorrhage. J Neurol Sci. 2007;261(1–2):84–88. doi: 10.1016/j.jns.2007.04.039.
20. Morgan T, Awad I, Keyl P, et al. Preliminary report of the clot lysis evaluating accelerated resolution of intraventricular hemorrhage (CLEAR-IVH) clinical trial. Acta Neurochir Suppl. 2008;105:217–220. doi: 10.1007/978-3-211-09469-3_41.
21. CLEAR III Study Group. Ongoing phase III clinical trial. http://www.cleariii.com.
22. Fan J, Zeng F, Body M, et al. Seeded region growing: an extensive and comparative study. Pattern Recognit Lett. 2005;26:1139–1156.
23. Zou KH, Warfield SK, Bharatha A, et al. Statistical validation of image segmentation quality based on a spatial overlap index. Acad Radiol. 2004;11(2):178–189. doi: 10.1016/S1076-6332(03)00671-8.
24. Chang H-H, Zhuang A, Valentino D, Chu W-C. Performance measure characterization for evaluating neuroimage segmentation algorithms. Neuroimage. 2009;47(1):122–135. doi: 10.1016/j.neuroimage.2009.03.068.
