Author manuscript; available in PMC: 2020 May 7.
Published in final edited form as: J Magn Reson Imaging. 2018 Oct 24;49(6):1565–1576. doi: 10.1002/jmri.26330

Curved planar reformatting and convolutional neural network-based segmentation of the small bowel for visualization and quantitative assessment of pediatric Crohn's disease from MRI

Y Lamash 1, S Kurugol 1, M Freiman 1, JM Perez-Rossello 2, MJ Callahan 2, A Bousvaros 3, SK Warfield 1
PMCID: PMC7205020  NIHMSID: NIHMS1020685  PMID: 30353957

Abstract

Background

Contrast-enhanced MRI of the small bowel is an effective imaging sequence for the detection and characterization of disease burden in pediatric Crohn’s disease (CD). However, visualization and quantification of disease burden requires scrolling back and forth through two-dimensional (2D) images to follow the anatomy of the bowel and it can be difficult to fully appreciate the extent of disease.

Purpose

To develop and evaluate a method that offers better visualization and quantitative estimation of CD from MRI.

Study Type

Retrospective.

Population

23 pediatric patients with CD.

Field Strength/Sequence

1.5T MRI system and T1-weighted post-contrast VIBE sequence.

Assessment

The CNN segmentation of the bowel’s lumen, wall and background was compared to manual boundary delineation. We assessed the reproducibility and the capability of the extracted markers to differentiate between different levels of disease according to the radiology reports.

Statistical tests

The segmentation algorithm was assessed using the Dice Similarity Coefficient (DSC) and Boundary Distances (BD) between the CNN and manual boundary delineations. The capability of the extracted markers to differentiate between different disease levels was determined using a t-test. The reproducibility of the extracted markers was assessed using the Mean Relative Difference (MRD), Pearson correlation and Bland-Altman analysis.

Results

Our CNN exhibited DSCs of 75±18%, 81±8% and 97±2% for the lumen, wall and background, respectively. The extracted markers of wall thickness at the location of minimum radius (p=0.026) and the median value of relative contrast enhancement (p=0.01) could differentiate active from non-active disease segments. Other extracted markers could differentiate between segments with a stricture or probable stricture and segments without strictures (p<0.05). The observers' agreement in measuring stricture length was more than three times better when computed on CPR images than with the conventional scheme.

Data conclusion

The results of this study show that the newly developed method is effective for the visualization and quantitative assessment of CD.

Keywords: Crohn’s disease, Convolutional Neural Network (CNN), Curved Planar Reformatting (CPR), MRI

Introduction

Magnetic resonance enterography (MRE) has emerged as an effective method for imaging the small bowel in patients with CD (1). MRE is especially useful for pediatric CD patients as we strive to spare children the potentially harmful effects of radiation. Information on the length of involvement, severity of inflammation and luminal narrowing is required when assessing the response to medical therapy or deciding on surgical treatment (1, 2). To facilitate these measurements, several computational methods have been proposed (3-11). However, to date there is no segmentation method that extracts the tubular structure of the diseased small bowel and includes both the lumen and wall compartments. Such a segmentation is essential, as it is the only way to enable analysis of wall thickness, tissue enhancement and lumen narrowing along the diseased segment.

Curved planar reformatting (CPR) views are commonly used by radiologists and clinicians to demonstrate pathology in curved organs such as blood vessels, the spine and the colon (12). This technique is a key enabler in many clinical applications: it makes it possible to visualize and assess luminal narrowing, and it serves as a platform for segmentation and editing. CPR views provide a comprehensive and clear depiction of the disease state in a single image. Such images can be saved as a bookmark, shared among clinicians and used for evaluating change during therapy. However, despite the common use of CPR views in these curved organs, there seem to be no easily mastered tools for analyzing MR images of the small bowel. As a result, the radiologist or clinician is constrained to scrolling back and forth through 2D images in order to understand the three-dimensional (3D) structure of the abnormality.

Another limitation is the lack of labeled datasets required for training sophisticated supervised machine learning algorithms. Deep learning algorithms, in particular convolutional neural networks, have rapidly become a methodology of choice for analyzing medical images (13). Thanks to their unique capability of learning hierarchical feature representations solely from data, deep learning methods have achieved record-breaking performance in a variety of artificial intelligence applications and grand challenges (14). In addition, the runtime of a trained network on new data is very fast (typically on the order of seconds). Such networks generally have a large number of parameters, and training them requires a correspondingly large dataset. However, there are not enough publicly available labeled datasets, and it is labor intensive to manually label images for segmentation. Specifically, for our problem, there is no publicly available training data.

The purpose of the current preliminary study is to develop and evaluate the feasibility of a new three-dimensional (3D) processing technique for improving the visualization and extraction of Crohn’s disease quantitative imaging markers from MRI.

Materials and Methods

We developed a software tool that generates CPR images and provides editing tools that can be used for several purposes: first, it enables visualization and measurement of the degree and length of stenosis; second, it can be used to generate an annotated dataset for training supervised bowel wall segmentation algorithms; third, it serves as a platform with efficient editing tools for refining the automatic wall segmentation results when needed; finally, it provides a comprehensive depiction of disease severity in a single image. After generating an annotated dataset of the small bowel lumen and wall, we trained a fully convolutional neural network architecture (15) with residual units (16) and a distance channel prior for segmenting the small bowel lumen and wall. Although the results of the automatic segmentation are quite satisfactory, we can refine the resulting segmentations using the proposed editing tools when needed.

Generating CPR views for the small bowel

CPR view generation starts with the placement of seed points along the lumen's centerline. We built practical and stable visualization software, implemented in Matlab, to achieve that goal. Our software displays coronal, sagittal and axial views that are automatically synchronized to the user's cursor location. This enables the user to position seed points in the most visible cross section along the curved lumen. If the lumen is completely obstructed, we select seed points in the middle of the obstruction. Before the analysis, we interpolated the image volumes to an isotropic sampling resolution.

Given a set of points on the centerline curve, we perform arclength parameterization (17) to obtain an equally sampled curve. We first interpolate between the seed points to get a curve r(t)=[X(t),Y(t),Z(t)].

We then integrate to obtain the arclength parameter s as a function of t:

$$s(t) = \int_0^t \left\| r'(\tau) \right\| \, d\tau$$

Since s(t) is monotonic, we can compute its inverse:

$$t(s) = \arg\min_{t'} \left| s(t') - s \right|$$

Finally, we interpolate the curve along s:

$$r(s) = \left[ X(t(s)), \; Y(t(s)), \; Z(t(s)) \right]$$
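The arclength parameterization above can be sketched in a few lines (a minimal sketch, assuming linear interpolation between seed points rather than the spline a production tool would use; all names are illustrative):

```python
import numpy as np

def arclength_resample(points, n_samples=100):
    """Resample a 3D polyline at (approximately) equal arclength steps.

    Sketch of the arclength parameterization described above, under the
    assumption of linear interpolation between the centerline seed points.
    """
    points = np.asarray(points, dtype=float)          # (N, 3) seed points
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative arclength s(t)
    s_new = np.linspace(0.0, s[-1], n_samples)        # equally spaced in s
    # Invert the monotonic map s(t) by interpolating each coordinate over s.
    return np.stack([np.interp(s_new, s, points[:, k]) for k in range(3)],
                    axis=1)
```

In practice the seed points would first be interpolated with a smooth curve before the arclength step, as in the text.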

Generating a stretched CPR view includes the following stages: 1) select a plane that passes through the two endpoints of the curve and a third, interactively selected point; 2) project the curve onto that plane (12); 3) perform arclength parameterization of the projected curve; 4) interpolate the stretched CPR image by traversing the projected curve at equal speed, where each image row is a ray perpendicular to the projected curve.
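Steps 1-2 of the stretched-CPR recipe can be sketched as follows (a hypothetical helper; the ray casting and image interpolation of steps 3-4 are omitted):

```python
import numpy as np

def project_curve_to_plane(curve, p3):
    """Project a centerline curve onto the plane through its two endpoints
    and an interactively chosen third point p3 (steps 1-2 of the
    stretched-CPR construction). Illustrative sketch only."""
    curve = np.asarray(curve, float)
    p0, p1 = curve[0], curve[-1]
    n = np.cross(p1 - p0, np.asarray(p3, float) - p0)
    n = n / np.linalg.norm(n)                 # unit plane normal
    # Remove each point's out-of-plane component.
    return curve - np.outer((curve - p0) @ n, n)
```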

For straightened CPR view, we set the Frenet-Serret frame (18) along the curve and interpolate images on the planes normal to the curve. Given a curve with arclength parameterization, the Frenet-Serret equations are given by

$$T(s) = \frac{dr/ds}{\left\| dr/ds \right\|}, \qquad N(s) = \frac{dT/ds}{\left\| dT/ds \right\|}, \qquad B(s) = T(s) \times N(s)$$

where T, N and B are the tangent, principal normal and binormal vectors, respectively.

Before calculating the Frenet-Serret frame vectors, we applied Savitzky-Golay filtering (19) to smooth the centerline curve and the orthogonal vectors along it.

In order to prevent discontinuities in the straightened CPR views that may occur due to locations with sharp curvature, we find the angle between adjacent normal vectors along the curve and rotate the normal vectors to align with the normal vector at s = 0.

In order to do so, we project each normal vector at location s – 1 on the normal plane at location s and calculate the projected normal angle with respect to the normal plane vectors Ns, Bs.

$$d\theta(s) = \tan^{-1}\!\left( \frac{\left\langle N_{s-1}, N_s \right\rangle}{\left\langle N_{s-1}, B_s \right\rangle} \right)$$

Then, the global required angle of the normal vector at position s is obtained by

$$\theta(s) = \int_0^s d\theta(s') \, ds'$$

We then calculate new normal plane vectors Nn(s), Bn(s) by rotating them on the normal plane with angle θ. We define a [3x3] matrix A:

$$A = \begin{bmatrix} N^T(s) \\ B^T(s) \\ T^T(s) \end{bmatrix}$$

And calculate the new normal plane vectors as follows:

$$N_n(s) = A^{-1} \begin{bmatrix} \cos(\theta) \\ \sin(\theta) \\ 0 \end{bmatrix}, \qquad B_n(s) = T(s) \times N_n(s)$$
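The goal of the normal-alignment step above, a frame whose normals vary smoothly along the curve, can be sketched as follows. This is a simplified sketch: instead of accumulating the angle increments as in the text, the previous normal is projected onto each new normal plane, which removes the same discontinuities; function and variable names are illustrative.

```python
import numpy as np

def twist_free_frames(curve):
    """Build smoothly varying frames along an (already arclength-sampled)
    centerline. Simplified alternative to the accumulated-angle rotation
    described in the text: propagate the previous normal by projecting it
    onto each new normal plane."""
    curve = np.asarray(curve, float)
    T = np.gradient(curve, axis=0)
    T /= np.linalg.norm(T, axis=1, keepdims=True)     # unit tangents
    # Seed normal: any unit vector roughly orthogonal to the first tangent.
    seed = np.array([1.0, 0.0, 0.0])
    if abs(seed @ T[0]) > 0.9:
        seed = np.array([0.0, 1.0, 0.0])
    N = np.empty_like(T)
    N[0] = seed - (seed @ T[0]) * T[0]
    N[0] /= np.linalg.norm(N[0])
    for i in range(1, len(T)):
        v = N[i - 1] - (N[i - 1] @ T[i]) * T[i]       # project onto normal plane
        N[i] = v / np.linalg.norm(v)
    B = np.cross(T, N)                                # binormals complete the frame
    return T, N, B
```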

Figure 1 demonstrates several results of stretched and straightened CPR views of small bowel segments with CD from contrast enhanced T1-weighted MR images that contain regions with strictures (narrowing).

Figure 1:

Demonstration of stretched (A-C) and straightened (D-E) CPR views of small bowel segments from 5 different patients with CD. Green curves demonstrate measurements of regions with narrowing (stricture). These views enable rapid evaluation of the luminal narrowing and/or wall thickening of the diseased bowel segment without the need to scroll through each plane to follow its curved loops.

Bowel lumen and wall segmentation

Generation of annotated dataset

The generation of high quality annotated datasets is an essential component of training an accurate supervised segmentation algorithm. To generate an annotated dataset in a reasonable amount of time, we started with automatic segmentation and then refined the results. We therefore performed the following stages: first, we used graph cut segmentation (20) to obtain an initial segmentation of the bowel wall boundaries. Next, we manually refined the segmentation contours on six straightened CPR images. Then, we generated a tetrahedral mesh to transfer the segmentation represented by the contours in the straightened CPR views into a volumetric representation. Finally, we refined the volumetric annotations in the original acquisition planes.

Initial graph cut segmentation

Due to the axially symmetric representation of the bowel in the straightened CPR volume (e.g. Figure 1 D and E), we set the connectivity of the graph to be between pixels represented in cylindrical coordinates. We therefore sampled straightened CPR images every 5 degrees to generate a volume in which each image represents a slice that passes through the center at a specific angle θ. Using this representation, we segment the volume into five layers (or classes) from upper to lower direction, which are upper background, upper wall, lumen, lower wall, and lower background.

The graph cut energy function formulation is then given by

$$E(L) = \sum_{p \in P} D(L_p) + \lambda \sum_{\{p,q\} \in N} V(L_p, L_q)$$

where P is the set of pixels; N are the neighboring pixels for pixel p in polar coordinates; and L denotes the class label. λ is the weight factor between the data term D and the smoothness term V. The data term D is given by

$$D(L_p) = -\log\left( \Pr(L_p \mid I_p, d_p) + \varepsilon \right), \qquad V(L_p, L_q) = \begin{cases} \exp\left( -\frac{\left| I_p - I_q \right|}{A} \right) & L_p \neq L_q \\ 0 & L_p = L_q \end{cases}$$

where $\Pr(L_p \mid I_p, d_p)$ is the product of the pixel's conditional probabilities of belonging to the lumen, wall or background class given its image intensity value $I_p$ and its distance $d_p$ from the lumen-wall boundary, and A is a normalization factor. We assume that these probabilities have Gaussian distributions $\mathcal{N}(\mu_I^L, \sigma_I^L)$ and $\mathcal{N}(\mu_d^L, \sigma_d^L)$. We use an iterative approach in which each iteration alternates between estimating the parameters of the Gaussian distributions and optimizing the energy of the graph cut segmentation. We initialize the parameters of the Gaussian distributions using typical values for the wall and background, and using the values at the positioned centerline seed points for the lumen.
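The energy above can be sketched as a small evaluation on a 1D chain of pixels (a simplification of the cylindrical graph; the actual method minimizes this energy with graph cuts rather than merely evaluating it, and all names here are illustrative):

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Gaussian density, used for the intensity and distance likelihoods."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def energy(labels, I, d, params, lam=1.0, A=1.0, eps=1e-6):
    """Evaluate the graph-cut energy E(L) on a 1D chain of pixels.

    Assumptions: a 1D neighborhood, and independent Gaussians over intensity
    I and lumen distance d whose product gives Pr(L_p | I_p, d_p).
    params[c] = (mu_I, sigma_I, mu_d, sigma_d) for each class c.
    """
    D = 0.0
    for p, c in enumerate(labels):
        mu_I, s_I, mu_d, s_d = params[c]
        pr = gaussian(I[p], mu_I, s_I) * gaussian(d[p], mu_d, s_d)
        D += -np.log(pr + eps)                        # data term
    V = sum(np.exp(-abs(I[p] - I[p + 1]) / A)         # contrast-sensitive
            for p in range(len(labels) - 1)           # smoothness term,
            if labels[p] != labels[p + 1])            # only at label changes
    return D + lam * V
```

A labeling consistent with the intensities and distances yields a lower energy than an inconsistent one, which is what the graph cut optimizer exploits.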

After obtaining the segmentation results, we extract the contours of the wall and the lumen and then manually refine the results using the visual interface and the editing tool. The editing is performed on six discrete angles and with interpolation in between.

Generation of volumetric annotated data representation

We use the points on the oriented contours to construct a tetrahedral mesh. We then assign the voxels inside each tetrahedron of the mesh the proper lumen or wall annotation. The transfer from the tetrahedral mesh to a volumetric representation is implemented as follows:

Given a tetrahedron T, any point p ∈ T divides it into four sub-tetrahedra.

The vector e from vertex v to the point p can be expressed as

$$e = \alpha e_i + \beta e_j + \gamma e_k$$

where the barycentric coordinates (α, β, γ) ∈ [0,1] are the volume ratios between each sub-tetrahedron and the tetrahedron T:

$$\alpha = \frac{\det(e, e_j, e_k)}{\det(e_i, e_j, e_k)}, \qquad \beta = \frac{\det(e_i, e, e_k)}{\det(e_i, e_j, e_k)}, \qquad \gamma = \frac{\det(e_i, e_j, e)}{\det(e_i, e_j, e_k)}$$

ei, ej, ek are the tetrahedron edge vectors with respect to vertex v.

To ensure that the global orientation is consistent (i.e. the Jacobians have equal signs), the ordering of the three vertices i, j, k when looking from vertex v should be either clockwise or counterclockwise for all tetrahedra.

To find the voxels enclosed by each tetrahedron, we take the grid pixels of the minimal box that bounds the tetrahedron and keep those whose barycentric coordinates satisfy

$$\alpha, \beta, \gamma \geq 0, \qquad \alpha + \beta + \gamma \leq 1$$
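The barycentric inside-test above can be sketched as follows (a sketch of the label-transfer step; real code would also write the lumen/wall label into the output volume, and the linear solve is equivalent to the determinant ratios by Cramer's rule):

```python
import numpy as np

def voxels_in_tetrahedron(verts):
    """Rasterize a tetrahedron onto the integer voxel grid using the
    barycentric test above. verts is a (4, 3) array whose first row is
    the reference vertex v."""
    verts = np.asarray(verts, float)
    v, E = verts[0], (verts[1:] - verts[0]).T          # columns e_i, e_j, e_k
    lo = np.floor(verts.min(axis=0)).astype(int)       # minimal bounding box
    hi = np.ceil(verts.max(axis=0)).astype(int)
    grid = np.stack(np.meshgrid(*[np.arange(a, b + 1) for a, b in zip(lo, hi)],
                                indexing="ij"), axis=-1).reshape(-1, 3)
    # Barycentric coordinates (alpha, beta, gamma) of every candidate voxel.
    abc = np.linalg.solve(E, (grid - v).T).T
    inside = (abc >= 0).all(axis=1) & (abc.sum(axis=1) <= 1)
    return grid[inside]
```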

Figure 2 demonstrates the annotation generation process.

Figure 2:

Generation of a labeled dataset of Crohn’s disease segments.

CNN-based lumen and wall segmentation with distance map channel prior

Our motivation for using a 3D CNN segmentation algorithm is based on the observation that the highly variable bowel appearance and shape requires a supervised algorithm that can learn feature representations and a classifier from a large set of augmented training patches for solving this difficult problem. The purpose of the network is to segment 3D patches centered on seed points along the centerline into lumen, wall and background classes. The obtained prediction scores of the 3D patches are then combined (by averaging the probabilities in overlapping pixels) to provide the final segmentation of the diseased bowel segment. We perform the segmentation in the original coronal image volumes instead of using CPR views to reduce the dependency of the segmentation performance on the quality of the initial centerline delineation. The segmentation of the small bowel is challenging because one diseased section of the wall can be adjacent to either part of the same diseased segment, or part of a distal healthy segment. In addition, the lumen and the mesentery might have similar intensities such that from a patch perspective, it may be unclear whether a region is inside the lumen or between two walls. To overcome this ambiguity, we added a distance map prior as an additional input to our algorithm. Accordingly, the distance prior is computed as the shortest distance of each voxel from the interpolated centerline seed points positioned in the lumen.

Our CNN, shown in Figure 3, has a 3D fully convolutional U-Net architecture (15) with residual units (16). The network has three contracting layers, three expanding layers, and a final convolution layer (with kernel size one) followed by a softmax. Each residual layer has two sets of batch normalization, leaky ReLU activation and convolution, as suggested by (16). Down-sampling and up-sampling of the features are done using strided convolutions and transposed convolutions, respectively. The input to the network consists of two-channel patches of size 64x64x32 (the full image size after interpolation to an isotropic pixel size of 0.75mm is 512x432x162). The first channel's patches were taken from the contrast-enhanced T1-weighted MR images after resampling to isotropic resolution. Before cropping the patches, the images were normalized to zero mean and unit standard deviation. The patches were centered on randomly selected lumen pixels; the rationale for this selection was to ensure the robustness of the segmentation to variability in the selection of lumen seed points.

Figure 3:

The left panel shows the network's two input patches (image intensity and distance prior), each of size 64x64x32, and the right panel shows the network architecture. For better demonstration, the two input channels are depicted one above the other, and the residual U-Net's concatenated channels are depicted alongside one another.

We scale the distances to the range [−1, 1] after truncating the maximum value to 32. We trained the network using stochastic gradient descent with a momentum of 0.9 and L2 regularization with λ = 10⁻³. To augment the training data, we added Gaussian noise, random rotations about the x-axis, random scaling of ±10%, and random flips in each of the three dimensions. Figure 3 illustrates the network's input patches and its architecture.
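The distance-prior input channel described above can be sketched as follows (a brute-force sketch with illustrative names; real volumes would use a fast distance transform, and the truncation value of 32 follows the text):

```python
import numpy as np

def distance_prior_channel(shape, seeds, d_max=32.0):
    """Build the distance-map input channel: each voxel's shortest distance
    to the interpolated centerline seed points, truncated at d_max and
    scaled to [-1, 1]."""
    zi, yi, xi = np.indices(shape)
    vox = np.stack([zi, yi, xi], axis=-1).reshape(-1, 3).astype(float)
    seeds = np.asarray(seeds, float)
    # Pairwise Euclidean distances, then the minimum over all seed points.
    d = np.sqrt(((vox[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)).min(1)
    d = np.minimum(d, d_max)                      # truncate
    return (2.0 * d / d_max - 1.0).reshape(shape) # scale to [-1, 1]
```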

Experiments

IRB approval waiving the requirement for consent was obtained prior to the retrospective collection of data from clinically acquired images. We retrospectively searched for T1-weighted VIBE scans of pediatric Crohn's disease patients with slice thickness below 2mm that also had sufficient bowel distension and good image quality. All selected images were acquired using a T1-weighted VIBE sequence on a 1.5T Siemens Avanto scanner before and after contrast injection. We analyzed 23 pediatric patients with active (N=17) and non-active (N=6) Crohn's disease according to the radiology report. The images were acquired in coronal planes with a voxel size of about 0.75x0.75x2 mm. The acquisition protocol included sequential multiplanar MR images of the abdomen and pelvis both prior to and following the administration of intravenous contrast. Prior to the scan, the patients were prepped to increase bowel lumen distension (Polyethylene Glycol 3350, Bayer AG) and to reduce peristalsis (Glucagon).

To evaluate both the proposed marker extraction pipeline (which includes manual editing) and the CNN segmentation, we assessed the method's ability to extract markers that differentiate between different levels of disease on the entire dataset, and separately evaluated the CNN segmentation performance after splitting the dataset into training and test sets.

CNN segmentation of the small bowel lumen and wall

We generated a labeled dataset of the small bowel lumen and wall according to the above description. We divided the dataset into 15 and 8 cases, from which we extracted patches for training and testing the network, respectively. In total, we generated over 2.5 million augmented patches for training and several thousand patches for testing. All patches include a short tube of the diseased small bowel in various directions, radii and thicknesses. Training took about 8 days on an NVIDIA GTX GPU with 8GB of memory using TensorFlow (21).

Crohn’s markers extraction pipeline

To compare the accuracy of measuring the length of strictures or regions with narrowing on CPR views against measurements on the original image planes, two observers marked the region on stretched or straightened CPR views and on the original planes, using one or more line measurements, on 15 segments with stricture or narrowing. The agreement between the observers' measurements on the CPR and the original images was calculated.

To test the ability of our method to extract Crohn’s disease activity markers, we used the segmentations (after manual refinement using our editing tool when needed) to calculate the wall thickness at the location of maximal luminal narrowing and median relative contrast enhancement in 22 active and 6 non–active disease segments. For analyzing stricture and luminal narrowing, we extracted three quantitative imaging markers: 1) the minimum radius as a marker of narrowing; 2) the wall thickness at the location of minimum radius, as a marker of active inflammation; and 3) the maximum radius as a marker of bowel dilation proximal to the stricture. We computed these three markers and compared their values in 21 segments without strictures, 6 segments with strictures and 3 segments marked by the reporting radiologist to have a probable stricture (1).
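The three narrowing markers listed above can be sketched as a small helper over per-position radius and thickness profiles (hypothetical inputs; the paper derives these profiles from the lumen and wall segmentation):

```python
import numpy as np

def stricture_markers(radius, thickness):
    """Compute the three luminal-narrowing markers from per-position lumen
    radius and wall thickness profiles along the segment centerline."""
    i_min = int(np.argmin(radius))
    return {
        "min_radius": float(radius[i_min]),                  # narrowing
        "thickness_at_min_radius": float(thickness[i_min]),  # inflammation
        "max_radius": float(np.max(radius)),                 # proximal dilation
    }
```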

We designed the following experiment to test the reproducibility of the method for generating CPR views. A user (YL) with two years of experience generating CPR views in cardiovascular images and with experience in MR enterography imaging of CD used the proposed tool to extract markers twice, from two different phases of multiphase post-contrast images from the same study. The two analyses were performed with at least one month of separation to ensure that the user had limited recall of the first analysis. Note that due to the motility of the bowel and the enlargement of the bladder, the two image volumes provide two different poses of the small bowel.

Statistical analysis

To estimate our network’s segmentation performance on the test set we calculated the cross entropy and the Dice Similarity Coefficient (DSC) for the lumen, wall and background classes. In addition, we calculated the mean and median Boundary Distances (BD) between the CNN and manual delineations of the lumen-wall and wall-background boundary contours.

The capability of the extracted imaging markers to differentiate between different disease levels was determined using a t-test.

For the reproducibility test, we assessed the agreement between markers’ measurements at different time points using the Mean Relative Difference (MRD), Pearson correlation and Bland-Altman analysis.
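The agreement statistics above can be sketched as follows (a sketch with illustrative names; the exact MRD definition used in the paper may differ, and here it is assumed to be the mean of the absolute difference over the pairwise mean):

```python
import numpy as np

def reproducibility_stats(a, b):
    """Agreement statistics for repeated marker measurements: mean relative
    difference, Pearson r, and Bland-Altman bias with 95% limits of
    agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff, mean = a - b, (a + b) / 2.0
    mrd = np.mean(np.abs(diff) / mean)               # mean relative difference
    r = np.corrcoef(a, b)[0, 1]                      # Pearson correlation
    bias, sd = diff.mean(), diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)       # Bland-Altman limits
    return mrd, r, bias, loa
```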

Results

Figure 4 demonstrates the model's lumen and wall segmentation results in the original coronal plane and after reformatting the segmentation results into straightened CPR views. Table 1 summarizes the performance of the proposed CNN architecture with the distance prior in segmenting the small bowel lumen and wall. When integrating the distance prior into the proposed 3D residual U-Net architecture, the Dice coefficients increased from 55% to 75% for the lumen and from 60% to 81% for the wall. The median distance between the automated and manually labeled contours decreased from 1.70mm to 0.85mm and from 1.6mm to 1.0mm for the lumen-wall and wall-background boundaries, respectively.

Figure 4:

Segmentation results of the convolutional neural network algorithm of several different patients in coronal images (Upper) and in straightened CPR views (Lower).

Table 1:

Performance of the network in segmenting the small bowel lumen and wall. DSC: Dice Similarity Coefficient; BD: Boundary Distance (between the CNN result and the label).

| Method | DSC[%] lumen | DSC[%] wall | DSC[%] background | Median BD[mm] lumen | Median BD[mm] wall | Average BD[mm] lumen | Average BD[mm] wall | Cross entropy |
|---|---|---|---|---|---|---|---|---|
| Single input channel | 55±25 | 60±17 | 93±4 | 0.83±2.8 | 1.7±4.0 | 1.6±2.8 | 3.4±4.0 | 0.34 |
| Distance prior concatenated at the final layer | 70±19 | 75±10 | 95±4 | 0.83±1.8 | 1.8±3.1 | 1.2±1.8 | 3.1±3.1 | 0.22 |
| Proposed method with distance channel prior | 75±18 | 81±8 | 97±2 | 0.82±1.5 | 0.85±2.4 | 1.0±1.5 | 1.8±2.4 | 0.13 |

When comparing the level of agreement between the two observers in measuring luminal narrowing length, the Mean Absolute Difference was 12mm for measurements performed on the original images vs. 3.2mm on the CPR views (Figure 5).

Figure 5:

Comparison of stricture/ luminal narrowing length measurements in CPR views vs. measurements in the original acquisition images. There was better accuracy and reproducibility of stricture / luminal narrowing length measurements in the CPR views. Mean Absolute Difference (MAD) of 3.2mm in CPR vs. 12mm in the original images.

The wall thickness at the location of minimum radius (p=0.026) and the median value of relative contrast enhancement (p=0.01) differentiated active and non-active disease segments with statistical significance. Results are shown in Figure 6.

Figure 6:

Performance of the extracted markers in differentiating between active and non-active inflammation and between stricture, probable-stricture and non-stricture small bowel segments. Asterisks represent statistically significant discrimination (p<0.05). The analysis included manual refinement.

The results showed that when analyzing markers to identify the presence of a stricture, statistically significant differences were found between non-stricture segments and each of the two other groups (i.e., segments with a stricture and those with a probable stricture) for each of the three markers. No statistically significant difference was found between segments marked in the radiological report as having a stricture and those marked as having a probable stricture. These results and combinations of markers are shown in Figure 6 (right panel).

Figure 7:

Estimated reproducibility of various CD image markers. Satisfactory reproducibility was found for median wall thickness, wall thickness at minimum radius, relative contrast enhancement, median enhancement, maximum enhancement, maximum radius and length of abnormality for <20cm segments. Moderate reproducibility was obtained for min radius and for length of abnormality for >20cm segments.

The reproducibility of the extracted quantitative image markers is demonstrated in Figure 7. Results are reported for median wall thickness, wall thickness at minimum radius, relative contrast enhancement, median enhancement, maximum enhancement, maximum radius (bowel dilation) and diseased segment length.

3D surface visualization of Crohn’s imaging markers

We reconstructed surfaces from the segmentation contours and displayed the wall thickness and lumen narrowing as colormaps (Figure 8).

Figure 8:

Surface rendering of the lumen boundary with colormaps indicating lumen radius [mm] (upper row) and wall thickness [mm] (lower row), obtained using the proposed method. This visualization effectively demonstrates the extent and severity of different regions along the diseased segment. Cases (a) and (d) have diseased areas with strictures that show a very narrow lumen (black arrows) and a very thickened bowel wall (magenta arrows), which may indicate the need for surgical therapy. In case (d), there are locations with bowel dilation (orange arrows).

Discussion

To provide optimal care for patients with pediatric CD, response to therapy and disease severity must be assessed as early as possible during the course of the disease. Moreover, a timely and accurate diagnosis has important prognostic implications. The use of MRI for assessing pediatric CD has increased dramatically, especially as we strive to spare children the potentially harmful effects of radiation from computed tomography. Thickness and relative contrast enhancement of the wall, length of the inflamed bowel segment, and the extent of luminal narrowing are important imaging markers of disease activity (2). However, due to the curved structure of the bowel, these features are often difficult to measure in the coronal or axial acquisition planes. It is also labor intensive to extract these markers manually, and therefore quantitative markers are not routinely used in the clinic. In addition, the lack of spatial alignment in follow-up scans makes it challenging to longitudinally compare the parameters for the same region.

Several studies have proposed semi-automatic methods for characterizing CD from MRI data (10, 11, 22). However, the use of CPR views for assessing CD has not previously been proposed, owing to limitations such as motion artifacts, the highly deformable shape of the bowel, and poor bowel distention, all of which limit the quality of images acquired in a single breath-hold. Despite these difficulties, we generated CPR views for about two thirds of the patients, who were imaged with proper luminal distension and without major motion artifacts limiting image quality. The CPR platform enables improved visualization of lumen narrowing and measurement of disease markers such as diseased segment length, wall thickness and stenosis length. In addition, we used it both to generate a labeled dataset for training the CNN-based segmentation of the bowel wall and as a platform where editing tools can be quickly applied to fine-tune the results.

Our CNN-based segmentation could successfully segment the bowel lumen and wall. The performance when adding the distance map as an additional input channel was superior to that seen when integrating the distance prior at the final layer, an observation that implies that adding spatial information to the learned image filters improves overall performance. Such spatially informative channels may improve performance in other image segmentation applications as well. We observed several locations where the algorithm delineated the boundaries more accurately than the labeled data. For cases that require manual refinement, our proposed editing software enables efficient and quick manual editing of the segmentations on CPR views before computing the disease markers.

Compared with prior work on small bowel segmentation (Table 2), ours is the only method that reports segmentation performance for the lumen, wall and background classes. Some works segment regions affected by CD without separating the lumen and wall; these cannot be used to measure wall thickness or lumen narrowing. Other works perform semi-automated segmentation of the wall for evaluating its thickness but do not report any segmentation performance values (11, 23).

Table 2:

Comparison with prior work on CD small bowel segmentation. All prior works with reported segmentation performance segment the wall and lumen together against the background rather than each compartment separately. These works provide small tissue segments with CD instead of a tubular structure and therefore cannot be used to extract wall thickness or lumen narrowing. Other works (11, 25) that segment the small bowel wall and extract its thickness do not report segmentation performance.

| Method | Provides the entire diseased segment? | Can be used to extract disease markers? | Lumen DSC | Wall DSC | Background DSC | Lumen+wall vs. background DSC |
|---|---|---|---|---|---|---|
| SL (3) | No | No | N/A | N/A | N/A | 86.5±2.3% |
| WSS (9) | No | No | N/A | N/A | N/A | 75.3% |
| RF [SPIE2013] | No | No | N/A | N/A | N/A | 91.9±1.9 |
| SS-AL (6) | No | No | N/A | N/A | N/A | 92.1% |
| AS (4) | No | No | N/A | N/A | N/A | 90±4% |
| AL (5) | No | No | N/A | N/A | N/A | 92.7% |
| CNN (ours) | Yes | Yes | 75±18% | 81±8% | 97±2% | Not needed |

Regarding stricture length measurements, better inter-observer agreement was obtained when the measurement was performed on the CPR image rather than on the original images.

Statistically significant differences were found between segments with active and segments with non-active inflammation, and between segments without strictures and segments with strictures or probable strictures, using the quantitative imaging markers of disease computed by the proposed method. This highlights the potential of the proposed tool both for analyzing bowel wall thickness and tissue enhancement to assess response to therapy, and for assessing luminal narrowing and dilation in CD to inform surgical decisions.

The estimated reproducibility of median wall thickness, wall thickness at minimum radius, relative contrast enhancement, median enhancement, maximum enhancement, and maximum radius was good. Reproducibility of the abnormal segment length was good for segments of 20 cm or less and moderate for longer segments, which may reflect the difficulty of determining the start and end points of the abnormality in long segments. Reproducibility of the minimum radius was moderate, possibly because the limited image resolution reduces the accuracy of this measurement.
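Reproducibility was quantified with the Mean Relative Difference (MRD) and Bland-Altman analysis. A minimal NumPy sketch of these two measures, using hypothetical repeated wall-thickness measurements (mm) rather than the study's data:

```python
import numpy as np

def mean_relative_difference(a, b):
    """MRD: mean absolute difference normalized by the pairwise mean, in percent."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 100.0 * np.mean(np.abs(a - b) / ((a + b) / 2.0))

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement between two repeated measurements."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical repeated measurements of the same segments (mm)
m1 = [4.1, 5.3, 6.0, 3.8, 7.2]
m2 = [4.3, 5.0, 6.4, 3.9, 7.0]
print("MRD (%):", round(mean_relative_difference(m1, m2), 2))
print("Bland-Altman (bias, lo, hi):", bland_altman_limits(m1, m2))
```

A low MRD and narrow limits of agreement centered near zero indicate good reproducibility of a marker.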

Our study design has several limitations. First, our reference was based on the radiology reports alone, without external validation. Radiology reports alone are not considered a reference standard; however, for this preliminary feasibility study we assume they are sufficient as a proof of concept for our method. Second, the retrospective design may introduce some level of bias. Third, we analyzed only 23 cases that were acquired with the required resolution (i.e., pixel size <2 mm in each direction) and had sufficient image quality for the proposed analysis. This limitation is associated with the difficulty of placing seed points along the centerline in images with significant motion artifacts, poor bowel distension, or severe disease where the lumen path is barely visible. With the development of faster and motion-robust MRI techniques, we expect that most patients will have good-quality images and will benefit from the proposed image analysis technique.

Future studies will focus on additional disease imaging markers, such as those derived from diffusion-weighted MRI. This information can be extracted jointly or separately after registration of the images acquired in the same imaging session (24).

In conclusion, we propose a novel method for analyzing diseased small bowel segments and for extracting quantitative imaging markers of disease activity in T1-weighted, contrast-enhanced MR images of pediatric CD patients. Our method generates CPR images of the small bowel and enables rapid segmentation of the small bowel lumen and wall using a 3D residual CNN with a distance prior. To the best of our knowledge, this is the first evaluated method for segmenting the diseased small bowel into lumen, wall, and background classes. Such analysis facilitates the computation of imaging markers for estimating disease severity and response to therapy. We anticipate that the proposed method may promote clinical utilization of these imaging markers for the characterization and assessment of pediatric Crohn's disease.

References

  • 1.Bruining DH, Zimmermann EM, Loftus EV, Sandborn WJ, Sauer CG, Strong SA. Consensus Recommendations for Evaluation, Interpretation, and Utilization of Computed Tomography and Magnetic Resonance Enterography in Patients With Small Bowel Crohn’s Disease. Gastroenterology. 2018. [DOI] [PubMed] [Google Scholar]
  • 2.Rimola J, Rodríguez S, García-Bosch O, Ordás I, Ayala E, Aceituno M, et al. Magnetic resonance for assessment of disease activity and severity in ileocolonic Crohn’s disease. Gut. 2009;58(8):1113–20. [DOI] [PubMed] [Google Scholar]
  • 3.Mahapatra D, Schueffler P, Tielbeek JA, Buhmann JM, Vos FM, editors. A supervised learning based approach to detect crohn’s disease in abdominal mr volumes International MICCAI Workshop on Computational and Clinical Challenges in Abdominal Imaging; 2012: Springer. [Google Scholar]
  • 4.Mahapatra D, Schuffler PJ, Tielbeek JA, Makanyanga JC, Stoker J, Taylor SA, et al. Automatic detection and segmentation of Crohn's disease tissues from abdominal MRI. IEEE transactions on medical imaging. 2013;32(12):2332–47. [DOI] [PubMed] [Google Scholar]
  • 5.Mahapatra D, Schüffler PJ, Tielbeek JA, Makanyanga JC, Stoker J, Taylor SA, et al. , editors. Active learning based segmentation of crohn's disease using principles of visual saliency Biomedical Imaging (ISBI), 2014 IEEE 11th International Symposium on; 2014: IEEE. [Google Scholar]
  • 6.Mahapatra D, Schüffler PJ, Tielbeek JA, Vos FM, Buhmann JM, editors. Semi-supervised and active learning for automatic segmentation of crohn’s disease International Conference on Medical Image Computing and Computer-Assisted Intervention; 2013: Springer. [DOI] [PubMed] [Google Scholar]
  • 7.Mahapatra D, Schüffler PJ, Tielbeek JA, Vos FM, Buhmann JM, editors. Crohn's disease tissue segmentation from abdominal MRI using semantic information and graph cuts Biomedical Imaging (ISBI), 2013 IEEE 10th International Symposium on; 2013: IEEE. [Google Scholar]
  • 8.Mahapatra D, Schüffler PJ, Tielbeek JA, Vos FM, Buhmann JM, editors. Localizing and segmenting Crohn's disease affected regions in abdominal MRI using novel context features Medical Imaging 2013: Image Processing; 2013: International Society for Optics and Photonics. [Google Scholar]
  • 9.Mahapatra D, Vezhnevets A, Schüffler PJ, Tielbeek JA, Vos FM, Buhmann JM, editors. Weakly supervised semantic segmentation of crohn's disease tissues from abdominal mri Biomedical Imaging (ISBI), 2013 IEEE 10th International Symposium on; 2013: IEEE. [Google Scholar]
  • 10.Naziroglu RE, Puylaert CA, Tielbeek JA, Makanyanga J, Menys A, Ponsioen CY, et al. Semi-automatic bowel wall thickness measurements on MR enterography in patients with Crohn's disease. The British journal of radiology. 2017;90(1074):20160654. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Vos FM, Tielbeek JA, Naziroglu RE, Li Z, Schueffler P, Mahapatra D, et al. , editors. Computational modeling for assessment of IBD: to be or not to be? Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE; 2012: IEEE. [DOI] [PubMed] [Google Scholar]
  • 12.Kanitsar A, Fleischmann D, Wegenkittl R, Felkel P, Gröller ME, editors. CPR: curved planar reformation. Proceedings of the conference on Visualization'02; 2002: IEEE Computer Society. [Google Scholar]
  • 13.Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Medical image analysis. 2017;42:60–88. [DOI] [PubMed] [Google Scholar]
  • 14.Shen D, Wu G, Suk H-I. Deep learning in medical image analysis. Annual review of biomedical engineering. 2017;19:221–48. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Ronneberger O, Fischer P, Brox T, editors. U-net: Convolutional networks for biomedical image segmentation International Conference on Medical image computing and computer-assisted intervention; 2015: Springer. [Google Scholar]
  • 16.He K, Zhang X, Ren S, Sun J, editors. Identity mappings in deep residual networks European Conference on Computer Vision; 2016: Springer. [Google Scholar]
  • 17.Willmore TJ. An introduction to differential geometry: Courier Corporation; 2013. [Google Scholar]
  • 18.Weatherburn CE. Differential geometry of three dimensions: Cambridge University Press; 2016. [Google Scholar]
  • 19.Orfanidis S Introduction to signal processing: Pearson Education; 2010. [Google Scholar]
  • 20.Boykov Y, Funka-Lea G. Graph cuts and efficient ND image segmentation. International journal of computer vision. 2006;70(2):109–31. [Google Scholar]
  • 21.Pawlowski N, Ktena SI, Lee MC, Kainz B, Rueckert D, Glocker B, et al. Dltk: State of the art reference implementations for deep learning on medical images. arXiv preprint arXiv:171106853. 2017. [Google Scholar]
  • 22.Hampshire T, Menys A, Jaffer A, Bhatnagar G, Punwani S, Atkinson D, et al. A probabilistic method for estimation of bowel wall thickness in MR colonography. PloS one. 2017;12(1):e0168317. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.MSP A, VG I, SP M, VG U, JL J, RU J. Halide removal from aqueous solution by novel silver-polymeric materials. The Science of the total environment. 2016;573:1125–31. Epub 2016/10/05. [DOI] [PubMed] [Google Scholar]
  • 24.Kurugol S, Freiman M, Afacan O, Domachevsky L, Perez-Rossello JM, Callahan MJ, et al. Motion-robust parameter estimation in abdominal diffusion-weighted MRI by simultaneous image registration and model estimation. Medical image analysis. 2017;39:124–32. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Schüffler PJ, Mahapatra D, Naziroglu R, Li Z, Puylaert CA, Andriantsimiavona R, et al. , editors. Semi-automatic Crohn’s Disease Severity Estimation on MR Imaging International MICCAI Workshop on Computational and Clinical Challenges in Abdominal Imaging; 2014: Springer. [Google Scholar]