Abstract
Patient-specific biomechanical models have been advocated as a tool for predicting deformations of soft body organs/tissue for medical image registration (aligning two sets of images) when differences between the images are large. However, the complex and irregular geometry of body organs makes generation of patient-specific biomechanical models very time-consuming. Meshless discretisation has been proposed to address this challenge, but applications so far have been limited to 2-D models and computation of single-organ deformations. In this study, comprehensive 3-D patient-specific non-linear biomechanical models implemented using Meshless Total Lagrangian Explicit Dynamics (MTLED) algorithms are applied to predict a 3-D deformation field for whole-body image registration. Unlike the conventional approach, which requires dividing (segmenting) the image into non-overlapping constituents representing different organs/tissues, the mechanical properties are assigned using the Fuzzy C-Means (FCM) algorithm without image segmentation. Verification indicates that the deformations predicted using the proposed meshless approach are for practical purposes the same as those obtained using previously validated finite element models. To quantitatively evaluate the accuracy of the predicted deformations, we determined the spatial misalignment between the registered (i.e. source images warped using the predicted deformations) and target images by computing the edge-based Hausdorff distance. The Hausdorff distance-based evaluation determines that our meshless models led to successful registration of the vast majority of the image features.
Keywords: Patient-Specific Biomechanical Modelling, Whole-Body Image Registration, Meshless Model, Hausdorff Distance, Meshless Methods
1 Introduction
Quantitative comparison of medical images acquired at different times or in different modalities for a given patient is crucial for analysis of disease progression and assessment of responses to therapies [1, 2]. Such images are typically acquired for different postures of the patient, and the patient’s stature/organ geometry can be affected by the therapy and disease progression. This necessitates aligning the images before they can be quantitatively compared. This is known as non-rigid (as both rigid body motion and organ deformations are involved) registration: one of the image sets (referred to as the source image) is deformed/“warped” to the configuration of the second image set (referred to as the target image). Many non-rigid registration algorithms that solely rely on image processing techniques have been proposed [2, 3], Figure 1a. Such algorithms have been proven effective for a single organ and relatively small differences between the source and target images [1–3]. Problems that involve large differences (deformations) between the source and target images, such as whole-body Computed Tomography (CT) images or Magnetic Resonance (MR) images of the brain undergoing surgery, still remain a challenge. For such problems, biomechanical models, in which predicting deformations of body organs/tissue and motion of articulated skeletons is treated as a computational problem of solid mechanics, have been introduced in the last 10–15 years [2, 4], Figure 1b.
Figure 1.
Diagram of the image registration process. (a) Image processing-based registration. The source image (M) is transformed using the chosen transformation T (in this case the displacements of control points) to obtain the transformed image T(M). The transformed image is then compared with the target image (F) based on a chosen similarity measure S; (b) Registration using biomechanical models to compute organ/tissue deformations. Based on source images, a computational grid is created. In current research practice, this requires image segmentation to extract the anatomical features of interest followed by finite element meshing. In this study, we propose to replace image segmentation and meshing by fuzzy tissue classification and meshless discretisation. A biomechanical model is defined further by incorporating boundary conditions and material properties. The transformation computed by the solver is used to warp the source image.
Computations of soft tissue deformations for image registration have historically relied on finite element analysis [5, 6]. Our results and studies by other researchers have demonstrated that accurate prediction of organ deformations can be achieved through application of fully non-linear (i.e. accounting for both geometric and material non-linearity) Finite Element procedures [6–8]. However, building patient-specific Finite Element models that represent geometry of a given patient remains a tedious task that consumes valuable analyst’s time and is obviously incompatible with existing clinical workflows. Our research group [9, 10] and other scientists [8] have identified two key bottlenecks associated with creating such models in the context of computation of brain deformation for image-guided neurosurgery:
Dividing the image (MR or CT) into non-overlapping constituents with different material properties, in a process known as image segmentation [2, 7, 8, 10], to define the geometry for biomechanical models and assign constitutive properties;
Creating a computational grid (finite element mesh). For computation of soft tissue responses, high-quality hexahedral meshes are desirable because tissues are nearly incompressible [11].
Automated image segmentation still remains a challenge and is the subject of extensive research [12], particularly for registration of whole body and abdominal region images that are acquired in relatively low resolution and are affected by various artefacts. For instance, Sharma and Aggarwal [12] list six artefacts which can affect segmentation of abdominal CT images. Consequently, substantial input from an analyst is required to conduct segmentation, especially when tumours and other pathologies with irregular geometries are present.
Our experience [9, 10, 13] indicates that generation of patient-specific hexahedral meshes of the body organs requires time-consuming manual mesh correction even with the advanced software for generation of anatomic Finite Element meshes such as IA-FEMesh [14] and MIMICS [15].
In our previous studies we proposed the following solutions to eliminate tedious image segmentation and mesh generation when creating biomechanical models for computing organ deformations for image registration:
To assign material properties using a fuzzy tissue classification membership function without the need for image segmentation [9, 13].
To use meshless (also known as mesh-free) methods of computational mechanics that utilise an unstructured cloud of points for spatial discretisation and are therefore much less demanding when building computational grids than Finite Element discretisation [10, 16].
In recent years our research group has developed a suite of meshless algorithms (Meshless Total Lagrangian Explicit Dynamics, MTLED) that rely on the total Lagrangian formulation of non-linear solid mechanics and explicit integration in the time domain [9, 10, 16, 17]. We demonstrated the effectiveness of these algorithms through their application in patient-specific models for computation of brain deformations for image-guided neurosurgery [9, 10]. However, applications that address the problem of rapid generation of patient-specific biomechanical models through the use of meshless discretisation together with fuzzy tissue classification have been limited to computation of the 2-D deformation field within brain sections [9]. In this study, we further evaluate and demonstrate the capabilities of MTLED and fuzzy tissue classification through application in 3-D patient-specific simulations for computation of body organ/tissue deformations for registration of whole-body CT images. Given the variety of tissue types depicted in these images, the large differences between the images (due to differences in patient posture) and the large image size (number of voxels), the problem is even more challenging than the computation of brain deformations for image-guided neurosurgery which we previously addressed [9, 10].
To the best of our knowledge, application of meshless discretisation to create patient-specific models for computation of organ/tissue deformations for whole-body medical image registration has not been attempted before, with the exception of the limited analysis (data for only one patient, qualitative validation only) we recently presented in Li et al. [18]. As no quantitative evaluation of registration accuracy using this approach has been conducted before, stringent scrutiny of the results obtained here is needed. We verify the deformations of body organs/tissues computed using non-linear meshless models against the results we previously obtained [13] using finite element models. For validation, we use the edge-based Hausdorff distance (HD), a previously verified measure of image registration accuracy [19–21], to quantify the spatial differences between the registered (i.e. source images warped using the deformations predicted using meshless models) and target images.
This paper is organised as follows: the introduction is in Section 1, information about the algorithms, construction of the biomechanical models, verification and validation procedures is in Section 2, results that report on verification and validation are in Section 3 and the discussion is presented in Section 4.
2 Materials and Methods
2.1 Meshless Method for Computing Organ Deformations
We use the Meshless Total Lagrangian Explicit Dynamics (MTLED) algorithm previously developed and verified by our research group. As the algorithm development has been described in the literature [10, 16, 17, 22], only a brief summary is provided here.
2.1.1 Spatial discretisation and integration
The MTLED algorithm uses a modified Galerkin method. For the field variable approximation, we discretise the analysed domain geometry by support nodes where the mass is concentrated (lumped), and forces and displacements are computed. Numerical integration is performed using a Gaussian quadrature over the background grid of integration cells. The MTLED algorithm facilitates tetrahedral and regular hexahedral background grids. In this study, we use a regular hexahedral grid with one integration point per cell. As we do not require the background cells to conform to the problem domain geometry (as determined by whole-body CT image-sets), generation of the integration grid can be performed automatically even for hexahedral integration cells. Our previous studies on computing brain responses confirm the efficiency and accuracy of this approach [9, 10].
2.1.2 Shape Functions
We use the moving least-square (MLS) approximation for its simplicity and robustness. The basis functions are low order monomials, and weight functions are quartic spline [17].
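As an illustration, the quartic spline weight function commonly used with MLS approximations can be sketched as follows (a minimal example; the exact normalisation and support-size conventions used in MTLED may differ):

```python
import numpy as np

def quartic_spline_weight(r):
    """Quartic spline weight w(r) = 1 - 6r^2 + 8r^3 - 3r^4 for a
    normalised distance r = ||x - x_node|| / support_radius.
    The weight is 1 at the node, decays smoothly, and vanishes
    (with zero slope) at the edge of the support domain."""
    r = np.asarray(r, dtype=float)
    w = 1.0 - 6.0 * r**2 + 8.0 * r**3 - 3.0 * r**4
    return np.where(r <= 1.0, w, 0.0)
```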
2.1.3 Explicit Dynamics
We use explicit integration (central differences method) in the time domain for its efficiency [23]
$$
{}^{t+\Delta t}\mathbf{U} = \Delta t^{2}\,\mathbf{M}^{-1}\,{}^{t}\mathbf{F} + 2\,{}^{t}\mathbf{U} - {}^{t-\Delta t}\mathbf{U} \qquad (1)
$$
where ${}^{t}\mathbf{U}$ is the displacement calculated at time t, ${}^{t}\mathbf{F}$ is the reaction force, Δt is the integration step, and **M** is the lumped mass matrix. The mass associated with an integration point is distributed equally across all nodes in the support domain of a given integration point.
Application of the lumped mass matrix decouples the system of Equation (1) and allows computation of the solution separately for each degree of freedom. No system of equations needs to be assembled and no iterations are required even for highly non-linear problems.
The explicit integration scheme using the central difference method (Equation 1) is only conditionally stable. We used a stable time step estimate established by Joldes et al. [22] for meshless methods that rely on mass lumping.
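In code, the decoupled update of Equation (1) amounts to a single vectorised expression per step. A minimal sketch (with hypothetical array shapes), assuming the lumped mass has already been distributed to the nodes:

```python
import numpy as np

def central_difference_step(u_t, u_prev, f_t, m_lumped, dt):
    """One explicit central-difference step (Equation 1).

    u_t, u_prev : nodal displacements at times t and t - dt, shape (n_nodes, 3)
    f_t         : nodal reaction forces at time t, shape (n_nodes, 3)
    m_lumped    : lumped (diagonal) nodal masses, shape (n_nodes, 1)
    dt          : time step

    Because the mass matrix is diagonal, every degree of freedom is
    updated independently: no system of equations is assembled and no
    iterations are required, even for highly non-linear problems.
    """
    return dt**2 * f_t / m_lumped + 2.0 * u_t - u_prev
```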
2.1.4 Dynamic Relaxation
In computing the organ/tissue deformations for medical image registration, we are interested in deforming (or “warping”) the source image to the target image configuration. Information about the history of deformation is not required for such computation. Therefore, we used the dynamic relaxation algorithm [24] for fast and accurate convergence to a steady state solution. The algorithm uses a termination criterion based on an estimate of the maximum absolute displacement error in the solution.
The solution terminates (i.e. is regarded as converged) if the estimated absolute error is smaller than the designated termination threshold for a consecutive number of iterations. Following our previous studies [9], we used the termination threshold of 0.1 mm (which is close to 1/10 of the smallest voxel size in the analysed images).
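The termination logic can be sketched as follows (a simplified illustration; `step_fn` is a hypothetical callable standing in for one dynamic-relaxation iteration, returning the updated displacements and an estimate of the maximum absolute displacement error):

```python
def relax_to_steady_state(step_fn, u0, threshold_mm=0.1, patience=10, max_iter=100_000):
    """Iterate until the estimated maximum absolute displacement error
    stays below `threshold_mm` for `patience` consecutive iterations."""
    u, calm = u0, 0
    for _ in range(max_iter):
        u, err = step_fn(u)
        calm = calm + 1 if err < threshold_mm else 0
        if calm >= patience:
            break  # converged: error persistently below the threshold
    return u
```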
2.2 Meshless Patient-Specific Whole-Body Model
2.2.1 Whole-Body CT Image Datasets
The whole-body CT image datasets analysed in this study (Figure 2) were obtained from The Cancer Imaging Archive (https://public.cancerimagingarchive.net/ncia/login.jsf) database [25–33]. The images in this database are freely available to browse, download and use for commercial, scientific and educational purposes under the Creative Commons Attribution 3.0 Unported Licence. Each dataset contains images of a patient acquired at different times. We used two image-sets from each of three datasets (see Figure 2): one was treated as the source/moving image and another one as the target/fixed image.
Figure 2.
Sagittal sections of three whole-body CT image datasets analysed in this study
As the whole-body CT datasets used in this study differ in resolution (Table 1), we resampled the datasets using linear interpolation to a common resolution of 1 mm×1 mm×2.5 mm before conducting the analysis. This was performed using the built-in procedure ‘Resample Scalar Volume’ in 3D SLICER (http://www.slicer.org/), an open-source software package for visualisation, registration, segmentation and quantification of medical data developed by the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology and the Surgical Planning Laboratory at Brigham and Women’s Hospital and Harvard Medical School [34].
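The resampling step can be reproduced with standard tools. A sketch using SciPy's linear interpolation (`order=1`), analogous to the 'Resample Scalar Volume' module used here:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_volume(volume, spacing_mm, target_mm=(1.0, 1.0, 2.5)):
    """Resample a CT volume (numpy array) from its original voxel
    spacing to a common target spacing using linear interpolation.
    `spacing_mm` and `target_mm` are per-axis voxel sizes in mm."""
    factors = [s / t for s, t in zip(spacing_mm, target_mm)]
    return zoom(volume, zoom=factors, order=1)
```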
Table 1.
Original resolution (in mm) of three whole-body CT image datasets analysed in this study
| | Source Image (mm) | Target Image (mm) |
|---|---|---|
| Case I | 1.05×1.05×2.5 | 1.06×1.06×2.5 |
| Case II | 0.84×0.84×2.5 | 0.80×0.80×2.5 |
| Case III | 0.90×0.90×2.5 | 0.98×0.98×2.5 |
2.2.2 Geometry Discretisation
Computational grid density (node spacing) was determined based on the experience obtained in our previous studies on the application of meshless discretisation in computation of organ deformations [9, 10]. We filled the torso volume with nodes using an average spacing of 3.5 mm (the same as the nodal spacing used by Miller et al. [10] when computing brain deformations), which resulted in meshless discretisations consisting of 78,573 nodes for Case I, 86,016 nodes for Case II, and 137,344 nodes for Case III (see Table 2 and Figure 3). Discretising the complex and irregular geometry of the human body in this way is a relatively straightforward exercise. In contrast, constructing good-quality Finite Element meshes for the same geometries was a time-consuming and tedious process despite the application of recent semi-automated software tools for anatomical mesh generation [13].
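One simple way to fill a body volume with roughly evenly spaced nodes is to subsample a regular grid restricted to a binary body mask. The sketch below illustrates the idea (the actual node-placement procedure used to build the models may differ):

```python
import numpy as np

def fill_with_nodes(body_mask, voxel_size_mm, spacing_mm=3.5):
    """Place support nodes on a regular grid with approximately
    `spacing_mm` spacing, keeping only nodes inside the body.
    `body_mask` is a binary volume; `voxel_size_mm` gives the
    per-axis voxel size in mm. Returns node coordinates in mm."""
    step = [max(1, int(round(spacing_mm / v))) for v in voxel_size_mm]
    idx = np.argwhere(body_mask[::step[0], ::step[1], ::step[2]])
    # convert strided grid indices back to millimetre coordinates
    return idx * np.array(step) * np.array(voxel_size_mm)
```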
Table 2.
Number of support nodes for approximation of field variable and integration points in three analysed cases
| | Number of Support Nodes | Number of Integration Points |
|---|---|---|
| Case I | 78,573 | 162,943 |
| Case II | 86,016 | 174,051 |
| Case III | 137,344 | 277,495 |
Figure 3.
Example of meshless discretisation created and used in this study. Whole-body meshless computational grid used in registration of Case I. As specific features of geometry of the analysed continuum are rather difficult to distinguish/visualise in meshless discretisation, we do not show the discretisation for Cases II and III. (a) “Cloud” of 78,573 nodes is used for spatial discretisation; (b) Distribution of support nodes and integration points on selected transverse sections. The blue crosses and yellow circles represent the support nodes and integration points, respectively.
As stated in Section 2.1, our MTLED algorithm separates the computational grid for field variable approximation and background cells for numerical integration. Although this allows great flexibility when constructing the models, the analyst must still ensure that the number of integration points is sufficient to obtain an accurate and stable solution. We followed the results of a parametric study of the MTLED algorithm by Horton et al. [17] who recommended the ratio of integration points to nodes of slightly above two. The number of integration points in each model is given in Table 2.
2.2.3 Boundary Conditions
The deformations/geometry changes of body organs and tissue depicted in the whole-body CT images of the same patient taken at different times are due to multiple factors that are extremely difficult to quantify. They include changes in the patient’s posture, differences in the patient’s position in relation to the scanner, changes in patient’s stature, effects of treatment and disease progression. Furthermore, there are always uncertainties in the patient-specific properties of tissues.
To reduce the effects of such uncertainties, in the present study, computation of organ/tissue deformation for whole-body medical image registration is formulated as a “displacement–zero traction” problem of solid mechanics [35, 36]. In this formulation, a biomechanical model is loaded by forced motion of the boundaries. Consequently, the computed deformations very weakly depend on the mechanical properties of the continuum [36, 37].
Prescribing motion of the boundaries requires us to accurately and reliably determine the displacements of selected points between the source and target images. Since the vertebrae can be reliably distinguished in CT images (as their intensity appreciably differs from that of the surrounding tissues), we selected them as the areas of the boundary to determine the displacements and prescribe the motion. The vertebrae displacements were determined by conducting rigid registration (between target and source images) for each vertebra. We used the built-in rigid registration algorithm in 3D SLICER (a free, open source software package for visualisation and image analysis) [34].
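Once a rigid transform (rotation R, translation t) has been recovered for a vertebra, the displacements to prescribe on its boundary nodes follow directly. A minimal sketch with hypothetical inputs:

```python
import numpy as np

def vertebra_displacements(points, R, t):
    """Displacements to prescribe on a vertebra's nodes: apply the
    rigid transform recovered by per-vertebra rigid registration
    (rotation matrix R, translation vector t, both in the image
    coordinate frame) and subtract the original positions.
    `points` has shape (n_nodes, 3), in mm."""
    return points @ R.T + t - points
```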
MLS shape functions used in MTLED (and other Galerkin-type meshless algorithms) do not have the Kronecker Delta property [16], which tends to introduce inaccuracies when prescribing essential boundary conditions. Therefore, following the previous studies by our research group [16], we used coupling of MLS with Finite Element interpolation in the areas where essential boundary conditions were applied.
2.2.4 Assigning Material Properties: Fuzzy C-Means Algorithm and Constitutive Model
Fuzzy Tissue Classification (Fuzzy C-Means Algorithm)
To assign the material properties at the integration points we used tissue classification that utilises Fuzzy C-Means (FCM) [38]. This approach (referred to as fuzzy tissue classification) has been successfully used in the previous studies by our research group for computation of deformations of the brain undergoing surgery [9] using the meshless MTLED algorithm and computation of the organ/tissue deformations for whole-body CT image registration using Finite Element discretisation [13].
In the FCM clustering algorithm, each pixel (voxel) in the image is assigned to a number of different tissue types (classes) with different probability for each class. This is done by clustering similar intensity data (pixels) through computation of the membership function uij that links the intensity at each pixel with all the specified (i.e. defined by the analyst) cluster centres. The membership function forms partition of unity. It is calculated by minimising the objective function JFCM [9, 38]
$$
J_{FCM} = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{\,q}\, d^{2}(x_{i}, \theta_{j}) \qquad (2)
$$
where N is the number of data samples (i.e. pixels in the analysed image), C is the number of cluster centres (tissue types/classes), q is the weighting factor of the fuzziness degree of clustering, uij is the fuzzy membership function that expresses the probability of a data sample xi (pixel) belonging to a specified tissue class, and d(xi, θj) is the distance between the data sample (pixel) xi and the cluster centre θj. In this study, we used a fuzziness degree of clustering q of 2, a value commonly applied for soft tissue classification [39].
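For fixed cluster centres, the membership update that minimises JFCM has a closed form. The sketch below illustrates this half of the alternating FCM iteration (intensity clustering only, with no spatial regularisation):

```python
import numpy as np

def fcm_memberships(intensities, centres, q=2.0, eps=1e-12):
    """Membership update u_ij = 1 / sum_k (d_ij / d_ik)^(2/(q-1)),
    where d_ij is the distance from voxel intensity x_i to cluster
    centre theta_j. Rows of the returned (N, C) array sum to one
    (partition of unity); `eps` guards against division by zero."""
    d = np.abs(intensities[:, None] - centres[None, :]) + eps  # (N, C)
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (q - 1.0))
    return 1.0 / ratio.sum(axis=2)
```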
Following our previous studies [13], we used eight tissue classes (see Table 3). As the pixel intensity of muscles, liver and kidneys is similar (see Figure 2), we classified them as belonging to the same tissue class (Class 6 in Table 3). Although this may introduce some inaccuracy, the effects are likely to be limited [35–37]. Our previous studies on neuroimage registration suggest that if the loading is prescribed via forced motion of the boundary, the computed deformation field within the continuum depends weakly on the mechanical properties of the continuum [35–37]. Furthermore, this has also been observed in our recent study on computed deformations for whole-body image registration using the Finite Element method [13].
Table 3.
Cluster (image intensity) centres obtained using the FCM algorithm for the three analysed CT image datasets. Classes 1, 2 and 3 are for lungs and other gas-filled spaces (such as the abdominal cavity), Class 4 — fat, Class 5 — muscles and abdominal organs, Class 6 — stomach and intestines, and Classes 7 and 8 — bones.
| | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6 | Class 7 | Class 8 |
|---|---|---|---|---|---|---|---|---|
| Case I | −650 | −481 | −247 | −89 | −38 | 16 | 238 | 527 |
| Case II | −826 | −537 | −326 | −90 | −32 | 43 | 274 | 661 |
| Case III | −711 | −519 | −303 | −104 | −45 | 57 | 253 | 665 |
Constitutive model and properties
Despite recent progress in magnetic resonance (MR) and ultrasound elastography [40], commonly accepted non-invasive methods for determining patient-specific constitutive properties of soft tissues have not been developed yet. However, there is a vast body of experimental evidence suggesting that soft tissues behave like hyperelastic materials [41, 42]. Therefore, following our previous studies [36], we used the Neo-Hookean hyperelastic model,
$$
W = \frac{\mu}{2}\left(\bar{I}_{1} - 3\right) + \frac{k}{2}\left(J - 1\right)^{2} \qquad (3)
$$
where µ is the shear modulus, k is the bulk modulus, Ī1 is the first deviatoric strain invariant of the right Cauchy–Green deformation tensor, J = det(**F**) is the volumetric change, and **F** is the deformation gradient. For each integration point, the shear modulus µ is interpolated based on the membership function determined using the FCM algorithm
$$
\mu_{i} = \sum_{j=1}^{C} u_{ij}\, \mu_{j} \qquad (4)
$$
where µi is the shear modulus at a location (integration point) i, µj is the shear modulus for a given tissue class, C is the number of tissue classes (centres of the intensity clusters in the images), and uij is the fuzzy membership function (see Equation 2).
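Equation (4) is simply a membership-weighted average over the tissue classes; as a one-line sketch:

```python
import numpy as np

def interpolate_shear_modulus(memberships, mu_classes):
    """Shear modulus at each integration point (Equation 4):
    mu_i = sum_j u_ij * mu_j, where `memberships` is the (N, C)
    fuzzy membership array and `mu_classes` holds the per-class
    shear moduli (in Pa)."""
    return memberships @ np.asarray(mu_classes)
```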
An example (Case I) illustrating the results of calculation of the shear modulus of the body tissues using the FCM algorithm for each integration point of the meshless model is shown in Table 4 and Figure 4.
Table 4.
Shear modulus (×10³ Pa) for each tissue class for the analysed CT image datasets. Classes 1, 2 and 3 are for lungs and other gas-filled spaces (such as the abdominal cavity), Class 4 — fat, Class 5 — muscles and abdominal organs, Class 6 — stomach and intestines, and Classes 7 and 8 — bones.
Figure 4.
Case I (transverse slice). Material properties (shear modulus) assignment using the FCM algorithm at integration points. The shear modulus magnitude is represented by a colour scale. Note that the integration points belonging to the same tissue class (indicated by the same colour) match the areas where the image intensity is similar. Only local tissue misclassification is present. This can be seen as a local variation in the integration point colour (where the adjacent integration points have a different colour and, consequently, different shear modulus assigned) at the boundaries between different tissue classes.
2.3 Verification and Validation of Deformations Computed Using Patient-Specific Meshless Models
2.3.1 Verification
As explained in Section 2.1.1, the regular background integration grid used in our MTLED algorithm does not conform to the problem domain geometry. Although our previous studies [9, 10] on computing brain deformations for image-guided neurosurgery confirm the accuracy of this approach, we conducted additional verification for all three image datasets analysed here. This was done by comparing the nodal displacements in the models implemented using this algorithm with the previously-validated whole-body finite element models [13] that conform to the problem domain geometry (Table 5). Such comparison was possible as we used the same material properties (see Section 2.2.4) and the same number of nodes in the finite element and meshless models (see Section 2.1.1). The Finite Element models were implemented using the Total Lagrangian Explicit Dynamics (TLED) algorithm with dynamic relaxation, developed and verified by our research group [23, 43].
Table 5.
Numbers of elements and nodes for finite element models of the three analysed cases. The number of nodes is the same as that used in the meshless models created in this study (see Table 2).
| | Number of Nodes | Number of Elements |
|---|---|---|
| Case I | 78,573 | 72,897 |
| Case II | 86,016 | 92,625 |
| Case III | 137,344 | 128,989 |
2.3.2 Validation
Qualitative evaluation of the accuracy of computed deformations
Following previous studies [19, 20], we visually compared the contours/edges automatically detected using a Canny edge filter [44] in the registered (i.e. source image warped using the deformations predicted by means of a biomechanical model) and target images.
Quantitative evaluation of the accuracy of computed deformations
Accuracy of computation of organ/tissue deformations for image registration is typically assessed using various similarity measures to quantitatively compare the registered (source image warped using the deformations predicted by means of a biomechanical model) and target images. There is still some controversy regarding the reliability of such measures [45]. Following our previous studies [19, 20] and recommendations by other researchers [46], we use an edge-based Hausdorff distance (HD) metric on edges detected using a Canny filter [44] (often referred to as Canny edges). This metric determines the spatial (Euclidean) distance between the Canny edges in the registered and target images [19, 20]
$$
H(X,Y) = \max\big(h(X,Y),\, h(Y,X)\big), \qquad h(X,Y) = \max_{x \in X}\, \min_{y \in Y}\, \lVert x - y \rVert \qquad (5)
$$
where X = {x1, x2, …, xm} and Y = {y1, y2, …, yn} are consistent (i.e. depicting the same anatomical features) Canny edges in the deformed (registered) and target images respectively, and h(X,Y) is the maximum distance from any of the points in the first edge set to the closest point in the second edge set.
Following Mostayed et al. [20], a “round-trip consistency” procedure [47] was applied before the Hausdorff distance was calculated. This procedure ensures the edges’ consistency by removing outliers (the pixels in one image that do not correspond to the other image) that tend to be present if the intensity ranges of the target and source images are different.
The Hausdorff distance as defined by Equation (5), although commonly used in the literature, estimates only the upper limit of dissimilarities between two images. Therefore, following Mostayed et al. [20], we do not report a single (maximum) Hausdorff distance value but instead use Equation (5) to construct a percentile Hausdorff distance on edges Hp(X, Y)
$$
H_{P} = \operatorname{percentile}_{P}\,\big\{\, H(X_{k}, Y_{k}) \,\big\}_{k=1}^{K} \qquad (6)
$$
where H(Xk, Yk) is the Hausdorff distance (Equation 5) computed for the kth pair of consistent edges, K is the total number of edge pairs, and the Pth percentile Hausdorff distance Hp between two images means that ‘P’ per cent of the total edge pairs have a Hausdorff distance below Hp.
We report Hausdorff distance values for different percentiles. A plot of such values (see Section 3.2.1) immediately reveals the percentage of edges that have acceptable misalignment errors. Accuracy of detection of image features (represented here by Canny edges) is limited by the image resolution. Therefore, following previous studies [19, 20], we consider any edge pair having a Hausdorff distance of less than twice the voxel size of the original source image to be successfully registered. However, in surgery it is natural to maximise the registration accuracy for a given patient [2]. Therefore, for some applications, such as localisation of tumour boundaries and determination of tumour dimensions, accuracy requirements more stringent than the twice-the-voxel-size criterion used here may need to be satisfied.
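The underlying distance computation is inexpensive with a Euclidean distance transform. The sketch below is a simplified variant that takes the Pth percentile of per-pixel nearest-edge distances (rather than of per-edge-pair Hausdorff values, as used in this study); distances are in voxel units unless a `sampling` argument is passed to the transform:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def percentile_hausdorff(edges_a, edges_b, p=95.0):
    """Symmetric Pth-percentile distance between two binary edge maps.
    For every edge pixel in one map, find the distance to the nearest
    edge pixel of the other map, take the Pth percentile of each
    directed distance set, and return the larger of the two."""
    d_to_b = distance_transform_edt(~edges_b)  # distance to nearest edge of b
    d_to_a = distance_transform_edt(~edges_a)  # distance to nearest edge of a
    h_ab = np.percentile(d_to_b[edges_a], p)
    h_ba = np.percentile(d_to_a[edges_b], p)
    return max(h_ab, h_ba)
```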
In many studies of whole-body CT image registration, the average maximum-likelihood Hausdorff distance (M-HD) is used as the registration accuracy measure [48]. Therefore, we also report this measure for the image registrations we conducted to enable comparison with the results obtained by other researchers (Table 6). We use the maximum-likelihood Hausdorff distance (M-HD) as defined by Suh et al. [48].
Table 6.
95-, 85-, 75- and 60-percentile and average edge-based HD metric (mm) between the registered and target images for the whole-body image-sets analysed in this study. The voxel size (maximum dimension) is in mm. The image-sets are shown in Figure 2.

| | 95-percentile HD metric (mm) | 85-percentile HD metric (mm) | 75-percentile HD metric (mm) | 60-percentile HD metric (mm) | Average HD metric (mm) | Voxel Size (mm) |
|---|---|---|---|---|---|---|
| Case I | 7.21 | 5.00 | 4.47 | 4.00 | 3.92 | 2.5 |
| Case II | 7.05 | 5.00 | 4.24 | 3.60 | 3.83 | 2.5 |
| Case III | 5.09 | 4.12 | 3.60 | 3.16 | 3.36 | 2.5 |
To compare the performance of our biomechanical registration using the meshless algorithm with traditionally used image-based registration methods, we also report the Hausdorff distance between the edges in the registered and target images for rigid registration and for non-rigid registration using the BSpline algorithm. Following [19, 20], we used the BSpline (free-form deformation, FFD) non-rigid registration algorithm from 3D SLICER (www.slicer.org) with a 10×10×10 grid. The rigid registration algorithm used here is also from 3D SLICER.
3 Results
3.1 Verification
For over 99.5% of nodes, the nodal displacements computed using the meshless (MTLED) and Finite Element (TLED) models differed by less than 1 mm (Figure 5). As the resolutions of the whole-body CT image datasets are 1.06×1.06×2.5 mm (Case I), 0.8×0.8×2.5 mm (Case II) and 0.98×0.98×2.5 mm (Case III), this difference can be safely regarded as negligible. For a very small number of nodes (less than 0.5%) located at the outer boundary of the models, the differences are larger, up to 3–4 mm, which is still within the accuracy threshold of twice the voxel size commonly used in image registration. These differences were observed in areas where curvature changes form concave features in the boundary. As we used regular placement of nodes and integration points, such features tend to lead to integration cells with relatively few nodes, and to integration points that are not connected to all their immediate neighbours. Consequently, nodes that are visibly close to each other can move apart — a phenomenon described by Horton et al. [17]. Adaptive integration schemes, such as the one we recently proposed and verified in [49], may provide a possible solution.
Figure 5.
Verification of the meshless discretisation (MTLED algorithm) with fuzzy tissue classification as a tool for computing organ/soft tissue deformations for whole-body image registration. Comparison of the nodal displacements in the models implemented using the MTLED algorithm and previously validated non-linear finite element models for the image datasets analysed in this study. For over 99.5% of the nodes, the differences are for practical purposes negligible (much smaller than the image voxel size — 1 mm×1 mm×2.5 mm). For a very small number of nodes located at the outer boundary (skin and subcutaneous) of the models, the differences are up to 3–4 mm which is still within the accuracy threshold of twice the voxel size commonly used in image registration.
3.2 Validation: Evaluation of the Registration Accuracy
3.2.1 Qualitative Evaluation
With the exception of some local misalignments, the edges extracted using a Canny filter with the same parameters from the registered (i.e. the source images warped using the deformations predicted by the proposed biomechanical model) and target images closely overlap (Figures 6, 7 and 8). The overlap tended to be better in the posterior than in the anterior and lateral parts of the images. One possible explanation is that the biomechanical models for computing the tissue deformations were loaded in the posterior part by prescribing the vertebrae motion, as described in Section 2.2.3. This is confirmed by the nearly ideal overlap of edges in the registered image with anatomical structures in the target image in the vertebrae area (Figure 8).
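The edge-overlap comparison can be illustrated along the following lines. The study used a Canny filter; as a simplified stand-in (real Canny adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding), a plain gradient-magnitude threshold is shown here, and both functions are assumptions for illustration only.

```python
import numpy as np

def edge_map(img, threshold):
    """Binary edge map from a 2-D slice via gradient magnitude thresholding.
    A simplified stand-in for the Canny detector used in the paper."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > threshold

def edge_overlap(edges_registered, edges_target):
    """Fraction of target edge voxels that coincide with registered-image edges."""
    return np.logical_and(edges_registered, edges_target).sum() / edges_target.sum()
```

Extracting both edge maps with identical parameters, as done in the paper, keeps the comparison between registered and target images unbiased by the detector settings.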
Figure 6.
Qualitative evaluation of the registration accuracy for the three CT image datasets analysed in this study (transverse slices). Left-hand-side column: comparison of the edges in the source and target images. Right-hand-side column: comparison of the edges in the registered (i.e. warped using the deformations computed by the biomechanical models developed in this study) and target images. Edges in the source image are shown in red, edges in the target image in green, and edges in the registered image in pink. Good overlap (indicated in blue, with some local misalignment) between the edges in the registered and target images is evident.
Figure 7.
Qualitative evaluation of the registration accuracy for the three CT image datasets analysed in this study (frontal slices). For each case, the left panel compares the edges in the source and target images, and the right panel compares the edges in the registered (i.e. warped using the deformations computed by the biomechanical models developed in this study) and target images. Edges in the source image are shown in red, edges in the target image in green, and edges in the registered image in pink. Good overlap (indicated in blue, with some local misalignment) between the edges in the registered and target images is evident.
Figure 8.
Qualitative evaluation of the registration accuracy for three CT image datasets analysed in this study (vertebrae area). Edges from the registered image are shown on the target image. Note nearly ideal overlap of the edges in the registered image with the vertebrae contours in the target image.
3.2.2 Quantitative Evaluation
The percentile Hausdorff distance (HD) metric computed on image edges is used to quantitatively measure the spatial misalignment between the source and target images and between the registered and target images. As stated in Section 2.3.2, we consider edges with an HD below twice the voxel size (5 mm) as successfully registered.
As one may anticipate, the results indicate higher accuracy of non-rigid than rigid registration (Figure 9). Figure 9 indicates that for Cases I and II, the 85th percentile HD equals 5 mm for registration using our meshless algorithm; that is, 85% of edges in these two image sets were successfully registered. For registration using the BSpline algorithm, around 75% of edges were successfully registered for Cases I and II. For Case III, application of our meshless algorithm resulted in 90% of edges successfully registered, while for BSpline 80% of edges were successfully registered. Although the improvement is not dramatic, these results indicate that the accuracy achieved using our meshless algorithm tends to exceed that of non-rigid registration using BSpline.
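A percentile edge-based HD of the kind plotted in Figure 9 can be computed along the following lines. This is a sketch, not the study's implementation: edge points are given as N×3 coordinate arrays in mm, a brute-force nearest-neighbour search is used for clarity (full-size edge sets would call for a k-d tree), and the symmetric definition (larger of the two directed percentiles) is one common convention.

```python
import numpy as np

def percentile_hausdorff(edges_a, edges_b, q=85):
    """Symmetric q-th percentile Hausdorff distance between two edge point
    sets (N x 3 arrays of coordinates in mm)."""
    # Pairwise distances between every point in A and every point in B.
    d = np.linalg.norm(edges_a[:, None, :] - edges_b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)   # directed distances A -> B (nearest neighbour)
    d_ba = d.min(axis=0)   # directed distances B -> A
    # Take the larger of the two directed q-th percentiles.
    return max(np.percentile(d_ab, q), np.percentile(d_ba, q))
```

Under the criterion used in this study, a returned value at or below 5 mm (twice the voxel size) at the 85th percentile would indicate that at least 85% of edges are successfully registered.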
Figure 9.
The percentile edge-based HD metric for all three cases/image-sets analysed in this study. For each case, the HD metric is used to measure the spatial Euclidean distance between the source and target images and between the registered and target images. The results were obtained for three registration methods: rigid registration, registration using the BSpline free form deformation (FFD) algorithm, and registration using our meshless biomechanical algorithm that computes the deformations to align (register) the source and target images.
As mentioned in Section 2.3.2, in many studies of whole-body CT image registration, the average maximum-likelihood Hausdorff distance (M-HD) rather than the HD percentile is used as the measure of registration accuracy. Suh et al. [48] reported an M-HD of around 4 image voxels when conducting non-rigid registration of rat whole-body CT and positron emission tomography (PET) images. Li et al. [50] reported an average error of 2 voxels with a standard deviation of 1.3 voxels for an interactive 3D volumetric voxel registration technique applied to registration of human whole-body CT and MR images. Akbarzadeh et al. [51] recently reported landmark-based HDs of 10 mm (four times the voxel size) between the registered and target images for whole-body CT image registration using the B-spline deformable transform.
The results obtained here compare well with those reported in [48, 50, 51]. For all three image datasets we analysed, the M-HD between the edges in the registered and target images was between 3 and 4 mm for registration using our meshless algorithm (Table 6). This is within the accuracy threshold of 5 mm (twice the voxel size) for successful non-rigid image registration used in the literature [25]. However, as no information about the initial image misalignment (i.e. the HD between the source and target images) is provided in [48, 50, 51], caution is needed when drawing quantitative conclusions from a comparison of the registration accuracy (as measured by M-HD) we report in this study with the results in [48, 50, 51].
For the CT image datasets analysed in this study, the percentile edge-based HD curves tend to rise steeply at around the 95th percentile (Figure 9). This phenomenon was also observed in our study on non-rigid neuroimage registration in which the brain deformation was predicted using non-linear finite element models [19]. Therefore, it appears that most edge pairs that lie between the 96th and 100th percentiles do not have any correspondence (i.e. edges in the registered and target images do not correspond to each other) and are possible outliers.
4 Discussion
We presented a meshless framework with fuzzy tissue classification for patient-specific biomechanical modelling to compute 3-D deformations of soft tissues and organs for registration of whole-body radiographic images. In previous studies on brain deformation computation [7, 9], we identified the integration of anatomical geometric data extracted from medical images with information about material properties as one of the key challenges in patient-specific brain modelling. Because of the presence of multiple tissue types and multiple organs with complex geometry, the task is even more formidable for the models of the entire torso we created and used in this study.
We eliminated the need for image segmentation and mesh generation when building patient-specific biomechanical models by extracting the material properties directly from the images using fuzzy tissue classification and incorporating them within our meshless algorithm. Such automated extraction of material properties may lead to local tissue misclassification. However, this has a weak impact on the computed deformations because, as in our previous studies [7, 9, 10, 19, 36], we used a formulation of the computational mechanics problem in which the loading is defined by prescribing displacements at selected points of the boundary. In this formulation the computed displacements are only weakly sensitive to uncertainty/variation in the material properties. Geometric non-linearity (large deformations) still needs to be taken into account [36].
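The weak sensitivity of displacement-driven problems to material properties can be illustrated with a toy example that is not from the paper: two linear springs in series with a prescribed end displacement. The interface displacement depends only on the stiffness ratio, so uniformly mis-estimating all stiffnesses (as a tissue misclassification might) leaves it unchanged; a force-driven problem, by contrast, would scale directly with the stiffness error.

```python
def interface_displacement(k1, k2, d):
    """Two linear springs in series: spring k1 from a fixed wall to the
    interface, spring k2 from the interface to an end displaced by d.
    Static equilibrium k1*u = k2*(d - u) gives u = d * k2 / (k1 + k2),
    which depends only on the ratio k2/(k1 + k2)."""
    return d * k2 / (k1 + k2)
```

Scaling both stiffnesses by the same factor (e.g. `interface_displacement(1, 2, 10)` versus `interface_displacement(1000, 2000, 10)`) returns the same interface displacement, mirroring the insensitivity exploited in the displacement-prescribed formulation above.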
Qualitative and quantitative validation (see section Validation: Evaluation of the Registration Accuracy) indicates that the accuracy of predicting the organ/tissue deformations we achieved was sufficient to successfully register the whole-body CT image-sets analysed in this study, and compares well with that achieved using image processing techniques (such as BSpline) to compute the transformation for whole-body image registration.
This study confirms that integrating fuzzy tissue classification (which may lead to local tissue misclassification and does not delineate organ boundaries) within the meshless algorithms of solid mechanics for patient-specific biomechanical modelling provides sufficient accuracy for computing 3-D deformations for whole-body image registration, while eliminating the need for time-consuming image segmentation when building the models. One may be tempted to state that this opens the way for fully-automated (compatible with existing clinical workflows) generation of patient-specific models of complex anatomical systems directly from medical images, without the need for reliable non-invasive methods for determining patient-specific material properties of soft tissues. Caution, however, is needed when extrapolating the conclusions of this study to applications of image-guided surgery and diagnosis that require locating anatomical features with an accuracy better than twice the voxel size.
Acknowledgments
The first author is a recipient of an APA scholarship and acknowledges the financial support of the University of Western Australia.
This work was supported in part by the National Health and Medical Research Council (Grant No. APP1006031) and Australian Research Council (Discovery Grant DP120100402).
Ron Kikinis acknowledges financial support of NIH grants P41EB015902, P41EB015898 and U24CA180918.
In addition, the authors also gratefully acknowledge the financial support of the National Centre for Image Guided Therapy (NIH U41RR019703) and the National Alliance for Medical Image Computing (NAMIC), funded by the National Institutes of Health through the NIBIB NIH HHS Roadmap for Medical Research Program, Grant U54 EB005149.
The whole-body CT image datasets analysed in this study were obtained from The Cancer Imaging Archive (https://public.cancerimagingarchive.net/ncia/login.jsf) database.
The authors acknowledge the contribution of Dr Guiyong Zhang (Dalian University of Technology, formerly at The University of Western Australia) to the development of the meshless code used in this study.
The authors thank Dr Angus Tavner of the School of Mechanical and Chemical Engineering, The University of Western Australia for proofreading the manuscript.
References
- 1.Jenkinson M, Smith S. A global optimisation method for robust affine registration of brain images. Medical Image Analysis. 2001;5(2):143–156. doi: 10.1016/s1361-8415(01)00036-6. [DOI] [PubMed] [Google Scholar]
- 2.Warfield SK, et al. Capturing intraoperative deformations: research experience at Brigham and Women’s Hospital. Medical Image Analysis. 2005;9(2):145–162. doi: 10.1016/j.media.2004.11.005. [DOI] [PubMed] [Google Scholar]
- 3.Rueckert D, et al. Nonrigid registration using free-form deformations: Application to breast MR images. IEEE Transactions on Medical Imaging. 1999;18(8):712–721. doi: 10.1109/42.796284. [DOI] [PubMed] [Google Scholar]
- 4.Hagemann A, et al. Biomechanical modeling of the human head for physically based nonrigid image registration. IEEE Transactions on Medical Imaging. 1999;18(10):875–884. doi: 10.1109/42.811267. [DOI] [PubMed] [Google Scholar]
- 5.Wittek A, et al. Patient-specific model of brain deformation: Application to medical image registration. Journal of Biomechanics. 2007;40(4):919–929. doi: 10.1016/j.jbiomech.2006.02.021. [DOI] [PubMed] [Google Scholar]
- 6.Picinbono G, Delingette H, Ayache N. Non-linear anisotropic elasticity for real-time surgery simulation. Graphical Models. 2003;65(5):305–321. [Google Scholar]
- 7.Wittek A, et al. Patient-specific non-linear finite element modelling for predicting soft organ deformation in real-time; Application to non-rigid neuroimage registration. Progress in Biophysics & Molecular Biology. 2010;103(2–3):292–303. doi: 10.1016/j.pbiomolbio.2010.09.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Hu JW, et al. Intraoperative brain shift prediction using a 3D inhomogeneous patient-specific finite element model. Journal of Neurosurgery. 2007;106(1):164–169. doi: 10.3171/jns.2007.106.1.164. [DOI] [PubMed] [Google Scholar]
- 9.Zhang JY, et al. Patient-specific computational biomechanics of the brain without segmentation and meshing. International Journal for Numerical Methods in Biomedical Engineering. 2013;29(2):293–308. doi: 10.1002/cnm.2507. [DOI] [PubMed] [Google Scholar]
- 10.Miller K, et al. Beyond finite elements: A comprehensive patient-specific neurosurgical simulation utilizing a meshless method. Journal of Biomechanics. 2012;45(15):2698–2701. doi: 10.1016/j.jbiomech.2012.07.031. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Yang K, King A. Modeling of the Brain for Injury Simulation and Prevention. In: Miller K, editor. Biomechanics of the Brain. New York: Springer; 2011. pp. 91–110. [Google Scholar]
- 12.Sharma N, Aggarwal LM. Automated medical image segmentation techniques. J Med Phys. 2010;35(1):3–14. doi: 10.4103/0971-6203.58777. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Li M, et al. Patient-specific biomechanical model as whole-body CT image registration tool. Medical Image Analysis. 2015;22(1):22–34. doi: 10.1016/j.media.2014.12.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Grosland NM, et al. IA-FEMesh: An open-source, interactive multiblock approach to anatomic finite element model development. Computer Methods and Programs in Biomedicine. 2009;94(1):96–107. doi: 10.1016/j.cmpb.2008.12.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Jermyn M, et al. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography. Journal of Biomedical Optics. 2013;18(8) doi: 10.1117/1.JBO.18.8.086007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Zhang GY, et al. A three-dimensional nonlinear meshfree algorithm for simulating mechanical responses of soft tissue. Engineering Analysis with Boundary Elements. 2014;42(0):60–66. [Google Scholar]
- 17.Horton A, et al. A meshless Total Lagrangian explicit dynamics algorithm for surgical simulation. International Journal for Numerical Methods in Biomedical Engineering. 2010;26(8):977–998. [Google Scholar]
- 18.Li M, et al. Patient-Specific Meshless Model for Whole-Body Image Registration. In: Bello F, Cotin S, editors. Biomedical Simulation. Springer International Publishing; 2014. pp. 50–57. [Google Scholar]
- 19.Garlapati RR, et al. More accurate neuronavigation data provided by biomechanical modeling instead of rigid registration. J Neurosurg. 2014;120(6):1477–1483. doi: 10.3171/2013.12.JNS131165. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Mostayed A, et al. Biomechanical model as a registration tool for image-guided neurosurgery: Evaluation against BSpline registration. Annals of Biomedical Engineering. 2013;41(11):2409–2425. doi: 10.1007/s10439-013-0838-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Fedorov A, et al. Evaluation of brain MRI alignment with the robust Hausdorff distance measures. Advances in Visual Computing, Pt I, Proceedings. 2008;5358:594–603. [Google Scholar]
- 22.Joldes GR, Wittek A, Miller K. Stable time step estimates for mesh-free particle methods. International Journal for Numerical Methods in Engineering. 2012;91(4):450–456. [Google Scholar]
- 23.Miller K, et al. Total Lagrangian explicit dynamics finite element algorithm for computing soft tissue deformation. Communications in Numerical Methods in Engineering. 2007;23(2):121–134. [Google Scholar]
- 24.Joldes GR, Wittek A, Miller K. Computation of intra-operative brain shift using dynamic relaxation. Computer Methods in Applied Mechanics and Engineering. 2009;198(41–44):3313–3320. doi: 10.1016/j.cma.2009.06.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Clark K, et al. The Cancer Imaging Archive (TCIA): Maintaining and operating a public information repository. Journal of Digital Imaging. 2013;26(6):1045–1057. doi: 10.1007/s10278-013-9622-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Aerts HJWL, et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nature Communications. 2014;5:4006. doi: 10.1038/ncomms5006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Amine A, et al. Computer Science and Its Applications. Springer International Publishing; 2015. Comparison of Automatic Seed Generation Methods for Breast Tumor Detection Using Region Growing Technique; pp. 119–128. [Google Scholar]
- 28.Armato SG, et al. The Reference Image Database to Evaluate response to therapy in lung cancer (RIDER) project: A resource for the development of change-analysis software. Clinical Pharmacology & Therapeutics. 2008;84(4):448–456. doi: 10.1038/clpt.2008.161. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Balagurunathan Y, et al. Test-retest reproducibility analysis of lung CT image features. Journal of Digital Imaging. 2014;27(6):805–823. doi: 10.1007/s10278-014-9716-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Jackson EF, et al. Magnetic resonance assessment of response to therapy: Tumor change measurement truth data and error sources. Translational Oncology. 2009;2(4):211–215. doi: 10.1593/tlo.09241. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Kinahan PE, et al. PET/CT assessment of response to therapy: Tumor change measurement truth data and error. Translational Oncology. 2009;2(4):223–230. doi: 10.1593/tlo.09223. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.McNitt-Gray MF, et al. Computed tomography assessment of response to therapy: Tumor volume change measurement truth data and error. Translational Oncology. 2009;2(4):216–222. doi: 10.1593/tlo.09226. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Meyer CR, et al. Quantitative imaging to assess tumor response to therapy: Common themes of measurement, truth data, and error sources. Translational Oncology. 2009;2(4):198–210. doi: 10.1593/tlo.09208. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Fedorov A, et al. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magnetic Resonance Imaging. 2012;30(9):1323–1341. doi: 10.1016/j.mri.2012.05.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Miller K, Wittek A, Joldes G. Biomechanical Modeling of the Brain for Computer-Assisted Neurosurgery. In: Miller K, editor. Biomechanics of the Brain. New York: Springer; 2011. pp. 111–136. [Google Scholar]
- 36.Wittek A, Hawkins T, Miller K. On the unimportance of constitutive models in computing brain deformation for image-guided surgery. Biomechanics and Modeling in Mechanobiology. 2009;8(1):77–84. doi: 10.1007/s10237-008-0118-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Miller K, Lu J. On the prospect of patient-specific biomechanics without patient-specific properties of tissues. Journal of the Mechanical Behavior of Biomedical Materials. 2013;27:154–166. doi: 10.1016/j.jmbbm.2013.01.013. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Bezdek JC, Ehrlich R, Full W. FCM: The fuzzy c-means clustering algorithm. Computers & Geosciences. 1984;10(2–3):191–203. [Google Scholar]
- 39.Pham DL, Prince JL. Adaptive fuzzy segmentation of magnetic resonance images. IEEE Transactions on Medical Imaging. 1999;18(9):737–752. doi: 10.1109/42.802752. [DOI] [PubMed] [Google Scholar]
- 40.Green MA, et al. Measuring anisotropic muscle stiffness properties using elastography. Nmr in Biomedicine. 2013;26(11):1387–1394. doi: 10.1002/nbm.2964. [DOI] [PubMed] [Google Scholar]
- 41.Snedeker JG, et al. Strain-rate dependent material properties of the porcine and human kidney capsule. Journal of Biomechanics. 2005;38(5):1011–1021. doi: 10.1016/j.jbiomech.2004.05.036. [DOI] [PubMed] [Google Scholar]
- 42.Miller K. Constitutive modelling of abdominal organs. Journal of Biomechanics. 2000;33(3):367–373. doi: 10.1016/s0021-9290(99)00196-7. [DOI] [PubMed] [Google Scholar]
- 43.Joldes GR, Wittek A, Miller K. Suite of finite element algorithms for accurate computation of soft tissue deformation for surgical simulation. Medical Image Analysis. 2009;13(6):912–919. doi: 10.1016/j.media.2008.12.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Canny J. A Computational Approach to Edge-Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1986;8(6):679–698. [PubMed] [Google Scholar]
- 45.Rohlfing T. Image similarity and tissue overlaps as surrogates for image registration accuracy: Widely used but unreliable. IEEE Transactions on Medical Imaging. 2012;31(2):153–163. doi: 10.1109/TMI.2011.2163944. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46.Fedorov A, et al. Image registration for targeted MRI-guided transperineal prostate biopsy. J Magn Reson Imaging. 2012;36(4):987–992. doi: 10.1002/jmri.23688. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Garlapati RR, et al. Objective Evaluation of Accuracy of Intra-Operative Neuroimage Registration. In: Wittek A, Miller K, Nielsen PMF, editors. Computational Biomechanics for Medicine. New York: Springer; 2013. pp. 87–99. [Google Scholar]
- 48.Suh JW, et al. CT-PET weighted image fusion for separately scanned whole body rat. Medical Physics. 2012;39(1):533–542. doi: 10.1118/1.3672167. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Joldes GR, Wittek A, Miller K. A Total Lagrangian based method for recovering the un-deformed configuration in finite elasticity. Applied Mathematical Modelling. 2015;39(14):3913–3923. [Google Scholar]
- 50.Li G, et al. A novel 3D volumetric voxel registration technique for volume-view-guided image registration of multiple imaging modalities. International Journal of Radiation Oncology Biology Physics. 2005;63(1):261–273. doi: 10.1016/j.ijrobp.2005.05.008. [DOI] [PubMed] [Google Scholar]
- 51.Akbarzadeh A, et al. Evaluation of whole-body MR to CT deformable image registration. Journal of Applied Clinical Medical Physics. 2013;14(4):238–253. doi: 10.1120/jacmp.v14i4.4163. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52.Alcaraz J, et al. Microrheology of human lung epithelial cells measured by atomic force microscopy. Biophysical Journal. 2003;84(3):2071–2079. doi: 10.1016/S0006-3495(03)75014-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53.Samani A, Zubovits J, Plewes D. Elastic moduli of normal and pathological human breast tissues: an inversion-technique-based investigation of 169 samples. Physics in Medicine and Biology. 2007;52(6):1565–1576. doi: 10.1088/0031-9155/52/6/002. [DOI] [PubMed] [Google Scholar]
- 54.Collinsworth AM, et al. Apparent elastic modulus and hysteresis of skeletal muscle cells throughout differentiation. American Journal of Physiology-Cell Physiology. 2002;283(4):C1219–C1227. doi: 10.1152/ajpcell.00502.2001. [DOI] [PubMed] [Google Scholar]
- 55.Rosen J, et al. Biomechanical properties of abdominal organs in vivo and postmortem under compression loads. Journal of Biomechanical Engineering-Transactions of the ASME. 2008;130(2) doi: 10.1115/1.2898712. [DOI] [PubMed] [Google Scholar]
- 56.Lim YJ, et al. In situ measurement and modeling of biomechanical response of human cadaveric soft tissues for physics-based surgical simulation. Surg Endosc. 2009;23(6):1298–1307. doi: 10.1007/s00464-008-0154-z. [DOI] [PMC free article] [PubMed] [Google Scholar]