Author manuscript; available in PMC 2013 Aug 6.
Published in final edited form as: J Xray Sci Technol. 2013;21(2):251–282. doi: 10.3233/XST-130369

3D Segmentation of Maxilla in Cone-beam Computed Tomography Imaging Using Base Invariant Wavelet Active Shape Model on Customized Two-manifold Topology

Yu-Bing Chang a, James J Xia b, Peng Yuan f, Tai-Hong Kuo c, Zixiang Xiong a, Jaime Gateno d, Xiaobo Zhou e,*
PMCID: PMC3735231  NIHMSID: NIHMS498666  PMID: 23694914

Abstract

Recent advances in cone-beam computed tomography (CBCT) have enabled widespread applications in dentomaxillofacial imaging and orthodontic practice over the past decades, owing to its low radiation dose, high spatial resolution, and accessibility. However, the low contrast resolution of CBCT images has become the major limitation in building skull models: intensive hand-segmentation is usually required to reconstruct them, and thin bone structures are among the regions affected most. This paper presents a novel segmentation approach based on a wavelet distribution model (WDM), targeting the outer surface of the anterior wall of the maxilla. Nineteen CBCT datasets are used to conduct two experiments, in which this model-based segmentation approach is validated and compared with three other segmentation approaches. The results show that the proposed model-based approach outperforms the others, achieving a surface error of 0.25 ± 0.2 mm from the ground truth of the bone surface.

Keywords: 3D segmentation, Active shape model (ASM), Statistical shape model (SSM), Craniomaxillofacial (CMF) surgeries, Cone-beam computed tomography (CBCT)

1 Introduction

The field of craniomaxillofacial (CMF) surgery involves the correction of congenital and acquired deformities of the skull and face, including dentofacial deformities, congenital deformities, combat injuries, post-traumatic defects, defects after tumor ablation, and deformities of the temporomandibular joint. Due to the complex nature of the craniomaxillofacial skeleton, the surgical correction of CMF deformities is among the most challenging. These types of surgery usually require extensive surgical planning, and their success depends not only on the technical aspects of the operation but, to a larger extent, on the formulation of a precise surgical plan. During the past 50 years, there have been significant improvements in the technical aspects of surgery (e.g., rigid fixation, resorbable materials, distraction osteogenesis, and minimally invasive approaches). However, the planning methods remain mostly unchanged [1–4]. At present, in CMF surgery, it is clear that many unwanted surgical outcomes are the result of deficient planning.

The advent of computed tomography (CT) and its 3D reconstruction brought about a revolution in diagnostic radiology, as cross-sectional imaging became available [5–7]. 3D rendered visualization provides a surgeon with readily recognizable images of complex anatomic structures. It can accurately record and represent the life size and shape of soft tissue and bone for precise surgical planning and simulation. In conjunction with appropriate computer software and hardware, computer-aided surgical simulation (CASS) has been developed and has created a number of options for CMF surgeries [3,8–12].

Cone-beam computed tomography (CBCT) has been adopted rapidly in the past decades and is widely used in dentomaxillofacial imaging and orthodontic practice [13–15]. The most important reason is that the effective dose of a CBCT head scan (from several dozen to several hundred μSv) is significantly lower than that of a CT head scan (from several hundred to several thousand μSv) [13,16,17]. Furthermore, reported spatial resolution (voxel size) varies from 0.076 to 0.4 mm [13]. Although the in-plane resolution of a CT slice can be as small as 0.4 mm, the thinnest slice thickness is 0.625 mm. CBCT also offers particular accessibility for most medical units because of its low cost compared with CT. Commercially available CBCT scanners allow patients to sit or stand upright during scanning, so the natural head position (NHP) can be acquired directly for 3D cephalometry. These advantages have created the possibility of replacing CT with CBCT for the 3D imaging and modeling of CASS [18–21].

Surgical planning and CASS require accurate contours of the facial bones. The acquisition of a skull model in traditional CT involves two steps: thresholding segmentation and bone surface reconstruction. Thresholding segmentation can easily classify bone voxels because of the calibrated Hounsfield unit (HU) gray values and the high contrast resolution (signal-to-noise ratio) of CT. The bone images are then fed to a surface reconstruction algorithm, such as the Marching Cube Algorithm, to obtain the bone surface. Accurate segmentation of bone voxels is therefore one of the most essential tasks for 3D rendering and CASS. However, poor image quality is the major limitation in establishing skull models from CBCT. Fig. 1 illustrates a skull model obtained by thresholding segmentation and the Marching Cube Algorithm in CBCT imaging. The characteristics of degraded CBCT images can be summarized as follows. First, the intensities of CBCT images cannot be accurately represented by standard HU because there is no absolute HU calibration; they vary between scanners and between scans. In order to reconstruct bone surfaces of reasonable quality, choosing the threshold for segmenting bone images becomes an art of trial and error. Second, when a single global threshold is applied, some thin bony structures are usually excluded. The sella turcica, orbital walls, nasal bones, the thin bones surrounding the maxillary sinus, and the condyles and ramus of the mandible are among the most affected structures. This is mainly because their image intensities are averaged with those of air during the filtered backprojection reconstruction of CBCT. Third, the low contrast resolution of CBCT creates randomly scattered noise and bumpy surfaces in the skull model, resulting from any combination of the beam-hardening effect, the truncation effect, and Compton scattering. Finally, metal artifacts can be the most detrimental in building skull models. They are mostly unavoidable, since dental fillings, implants, surgical plates, and orthodontic appliances are among the most common metallic materials found in patients' teeth and heads. Although metal artifacts are reduced in CBCT images compared with CT images, their effect is still considerable compared with the other effects described above.

Fig. 1.

Fig. 1

A skull model reconstructed by Marching Cube Algorithm after thresholding segmentation is performed on a 3D CBCT volumetric image.

To solve the problems addressed above, a number of studies have pursued advanced segmentation and reconstruction approaches for facial bone images in CT and CBCT imaging. These include segmentation based on statistical shape models (SSM) in CT imaging [22–24] and in CBCT imaging [25], and histogram thresholding segmentation in CT imaging [26]. Loubele et al. [27] model the local histogram as a mixture of Gaussian distributions and determine the thresholds of the jawbones (mandible and part of the maxilla) and soft tissues in CBCT imaging. However, studies on developing robust segmentation algorithms versatile enough for most of the facial bones in CBCT imaging are still limited. Facial bones are characterized by complex and inhomogeneous structures (e.g., soft tissues, sinuses, and pieces of soft bone), so voxel-based global and local segmentation built on an assumption of homogeneous structures may not be appropriate. Only trained specialists can manually identify the internal structures of facial bones. This limitation makes it difficult to design segmentation algorithms for the bone voxels and bone surfaces of these internal structures. In CASS, the visualization and measurement of a simulation rely on the outer surfaces of the facial bones, so the accuracy requirement for the outer surfaces is more critical than that for the internal structures.

In this study, a new segmentation approach based on a wavelet distribution model (WDM), targeting the left half of the anterior maxilla, is proposed. This region consists of thin bones around the maxillary sinus and is most likely to be influenced by the limitations of CBCT. The procedure of performing segmentation throughout this study is summarized step-by-step in Fig. 2. Table 1 lists all acronyms used in the study. This model-based segmentation approach calculates the outer bone surface of the left half of the anterior wall of the maxilla. Fig. 3 shows four different skulls. The marked regions on the left half of the anterior wall of the maxilla represent four different shapes. It is obvious that the shape for the SSM is a partial open surface with a closed boundary in a skull model. The outline of this manuscript is as follows. First, the acquisition and preprocessing of CBCT images are presented in Section 2. Then, a practical procedure for creating training shapes with regularized landmarks is proposed in Section 3; it includes the extraction and customization of training shapes and landmark digitalization on the training shapes. In Section 4, we describe two statistical models. The first is a wavelet-based SSM called the WDM; the second is the image feature model (IFM). Based on these two statistical models, a new model-based segmentation algorithm, the base invariant wavelet active shape model (BIWASM), with a new initialization method, customized wavelet base initialization (CWBI), is proposed in Section 5. Next, BIWASM with CWBI is validated and compared with three different approaches in Section 6. Section 7 discusses these approaches, the results, and the feasibility of BIWASM in CASS. Finally, Section 8 concludes the study. The notation used in this paper is as follows. Bold symbols $A$ and $a$ represent a matrix and a column vector, respectively. $a^T$ denotes the transpose of $a$. $\|a\| \equiv \sqrt{a^T a}$ is the Euclidean norm of a column vector $a$. A 3D point $a$ is represented as a 3-tuple column vector $(a_x, a_y, a_z)^T$ in a Cartesian coordinate system. The symbol $T(\cdot)$, with any subscripts and superscripts, denotes a spatial transformation of 3D points. $W$ and $W^{-1}$ denote the wavelet transform and the inverse wavelet transform.

Fig. 2.

Fig. 2

The procedure of performing segmentation in the study. The statistical models are acquired off-line (enclosed by dashed lines) by using training CBCT datasets. Once the statistical models are built, a new CBCT image (other than the training datasets) is segmented based on the information of statistical models.

Table 1.

List of Acronyms

ASM Active Shape Model
BIWASM Base Invariant Wavelet Active Shape Model
CASS Computer-aided Surgical Simulation
CBCT Cone-beam Computed Tomography
CMF Craniomaxillofacial
CT Computed Tomography
CWBI Customized Wavelet Base Initialization
DSWT Discrete Surface Wavelet Transform
DWT Discrete Wavelet Transform
HU Hounsfield Unit
IFM Image Feature Model
RBI Registration-based Initialization
SSM Statistical Shape Model
PDM Point Distribution Model
WASM Wavelet Active Shape Model
WDM Wavelet Distribution Model

Fig. 3.

Fig. 3

Four different skull models are illustrated in (a), (b), (c), and (d). The marked regions are the anterior maxilla in the left half of the skull and are the region of particular interest in this study.

2 Data Acquisition and Preprocessing

Nineteen patients were scanned using a CBCT scanner (Sirona, Bensheim, Germany) with a voxel resolution of 0.287 mm × 0.287 mm × 0.287 mm, 512 × 512 × 512 voxels, 0° gantry tilt, and 1:1 pitch, yielding nineteen CBCT volumetric images. Thresholding segmentation was first applied to each of these volumetric images to obtain bone images. Due to the limitations of CBCT imaging, these bone images were then recovered by manually editing them slice by slice. Next, the Marching Cube Algorithm was applied to each of the recovered bone images to calculate the surfaces (meshes) of the bone images. Finally, the surfaces were smoothed using software (Amira, San Diego, CA) to remove bumpiness. These bone surfaces serve as the ground truth for the physical skeletal surfaces. We use these 19 ground-truth bone surfaces: 1) to customize the shapes and generate landmarks in Section 3; and 2) to validate the proposed segmentation approach in Section 6.

3 Training Shape Customization

The shape information of a target in a subject is characterized by a dense set of points and mesh structures. An SSM of the target is based on the shape statistics of customized points on a set of training datasets. These customized points are called landmarks. Before constructing an SSM, each training shape is remeshed into a new shape with the same number of landmarks and the same mesh structure by digitalizing the shape either manually or automatically. Landmarks must be placed at topologically and structurally corresponding positions across all training shapes.

In this section, we will describe approaches to extracting and customizing a shape from the ground truth of bone surface and generating its landmarks for SSM. The shape (the target) in our study is the marked region on the skull (the subject) illustrated in Fig. 3. In the following, we will use the shape in Fig. 3 (a) to illustrate results calculated by each of the steps in the approaches. These steps are summarized in Fig. 4.

Fig. 4.

Fig. 4

The procedure of generating training shapes with digitalized landmarks. These training shapes will be used to build SSM in Section 4.

3.1 Training Shape Extraction and Patch Decomposition

The training shape of the target is identified by first manually pinpointing anatomical control landmarks on the ground truth of the bone surface and then determining the boundaries of the training shape. The training shape is the region defined by these enclosed boundaries. Patch decomposition is performed by dividing the training shape into several patches. It can be implemented similarly, by manually pinpointing anatomical control landmarks on the training shape and then determining the boundaries of the patches. Each boundary is obtained by calculating a shortest path, or a path composed of several connected shortest paths, on the ground truth of the bone surface between the anatomical control landmarks. Fig. 5(a) shows that 8 anatomical control landmarks are used to define the training shape, and one anatomical control landmark is used to define four patches. Fig. 5(b) illustrates the boundaries. Finally, these patches are extracted from the bone surfaces to form a customized training shape, shown in Fig. 5(c).

Fig. 5.

Fig. 5

Training shape extraction and patch decomposition. (a) 9 anatomical control landmarks. (b) Patch decomposition using shortest paths as boundaries. (c) Extraction of the training shape and its patches.

The shortest path on a mesh is defined and calculated as follows. Assume A and B are two points on the mesh. When the mesh is unfolded onto a plane (by breaking the connections between cells and rotating the cells), A and B are translated onto the plane accordingly. Let A′ and B′ be the points corresponding to A and B on the plane, respectively. It is possible to draw a straight line between A′ and B′ on the plane so that the line passes through cells of this unfolded mesh. This line on the unfolded mesh corresponds to a path on the original mesh. Note that this path passes through the valid cells of the mesh rather than only vertices and edges. The shortest path on the mesh is the one with minimum (straight-line) distance among all existing paths of this kind. Since calculating shortest paths without any simplification is computationally expensive, we use Chen and Han's efficient algorithm [28]. Since this search algorithm uses all the cells in the mesh to find the shortest path, it is necessary to reduce the search region [29]. This is done with two precalculation steps. First, Dijkstra's shortest path algorithm [30] is used to calculate the Dijkstra shortest path between A and B; this algorithm quickly finds the shortest path along connected edges. Second, the search region is determined by collecting all neighboring cells within a specific distance of the Dijkstra path. We use 0.2 times the length of the Dijkstra shortest path as this distance.
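The precalculation steps above can be sketched as follows. The mesh graph, vertex labels, and edge lengths are hypothetical toy data; only the Dijkstra stage and the resulting corridor radius are shown (Chen and Han's exact geodesic algorithm is beyond this sketch).

```python
import heapq

def dijkstra_path(adj, src, dst):
    """Shortest path along mesh edges.

    adj: {vertex: [(neighbor, edge_length), ...]}
    Returns (total_length, [src, ..., dst]).
    """
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # backtrack from dst to src
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return dist[dst], path[::-1]

# Toy 4-vertex mesh graph (hypothetical, not from the paper's meshes).
adj = {
    0: [(1, 1.0), (2, 2.5)],
    1: [(0, 1.0), (2, 1.0), (3, 3.0)],
    2: [(0, 2.5), (1, 1.0), (3, 1.0)],
    3: [(1, 3.0), (2, 1.0)],
}
length, path = dijkstra_path(adj, 0, 3)
# The Dijkstra length bounds the search corridor for the exact algorithm:
corridor_radius = 0.2 * length
```

Cells within `corridor_radius` of this edge path would then form the reduced search region for the exact shortest-path computation.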

3.2 Parameterization

Once the patches of the training shape are obtained, we define their corresponding planar parameterization domains and mappings. We use one of the patches (denoted P), shown in Fig. 6(a), to illustrate its planar parameterization domain and mapping, shown in Fig. 6(b).

Fig. 6.

Fig. 6

(a) One of four patches in Fig. 5(c). (b) The polygonal domain and its parameterization mapping.

First, the planar parameterization domain of P is determined as follows. Since P is defined by anatomical control landmarks, the planar parameterization domain can be defined as a polygon formed by those landmarks. Assume $\{l_k\}_{k=0}^{K_p-1}$ denotes the $K_p$ anatomical control landmarks forming P, where $l_k$ and $l_{(k+1)_{K_p}}$ are the anatomical control landmarks connected by a boundary path of P, and $(k)_{K_p}$ means $k$ modulo $K_p$. Let $\{T_i\}_{i=0}^{K_p-3}$, $T_i \equiv \Delta(l_0, l_{i+1}, l_{i+2})$, be the planar triangles for patch P. Since $\{T_i\}_{i=0}^{K_p-3}$ are usually not coplanar, we can "unfold" these planar triangles $\{T_i\}$ onto a common plane to create a planar $K_p$-polygon. This new planar $K_p$-polygon becomes the parameterization domain for P. The edges of the planar $K_p$-polygon are line segments between $l_k$ and $l_{(k+1)_{K_p}}$ (straight-line distances between the points in 3D space), while the boundaries of P are formed by the shortest paths on the mesh between $l_k$ and $l_{(k+1)_{K_p}}$. Therefore, it can be claimed that P resembles the planar $K_p$-polygon.

Second, the parameterization mapping of P is calculated using barycentric mapping [31], with mean value coordinates used to calculate the spring constants. The planar $K_p$-polygon is assumed to be convex in order to guarantee the bijectivity of the parameterization [31]. The convexity of the planar $K_p$-polygon can be achieved by carefully configuring $\{l_k\}_{k=0}^{K_p-1}$ when performing patch decomposition. Moreover, before calculating the barycentric mapping, the mapping of the boundaries between P and its planar $K_p$-polygon has to be defined. A simple approach is to proportionally project the boundary vertices of P onto the edges of its planar $K_p$-polygon domain to obtain the boundary mapping.
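The spring-system solve behind such a barycentric mapping can be sketched as follows. As a simplification, uniform (Tutte) weights stand in for the mean-value-coordinate spring constants in the text, and the patch connectivity and boundary positions are hypothetical toy data.

```python
import numpy as np

def tutte_parameterize(n_vertices, edges, boundary_uv):
    """Barycentric (spring) mapping with uniform weights: each interior
    vertex is placed at the average of its neighbors, i.e. the solution
    of a sparse linear system  L u = rhs  with boundary positions fixed.

    edges: list of (i, j) vertex pairs of the patch mesh
    boundary_uv: {vertex: (u, v)} fixed positions on the polygon boundary
    Returns an (n_vertices, 2) array of planar coordinates.
    """
    interior = [i for i in range(n_vertices) if i not in boundary_uv]
    idx = {v: k for k, v in enumerate(interior)}
    A = np.zeros((len(interior), len(interior)))
    b = np.zeros((len(interior), 2))
    deg = {i: 0 for i in interior}
    for i, j in edges:
        for p, q in ((i, j), (j, i)):
            if p in idx:
                deg[p] += 1
                if q in idx:
                    A[idx[p], idx[q]] -= 1.0   # interior-interior coupling
                else:
                    b[idx[p]] += boundary_uv[q]  # fixed boundary neighbor
    for p in interior:
        A[idx[p], idx[p]] = deg[p]
    uv = np.zeros((n_vertices, 2))
    for v, coord in boundary_uv.items():
        uv[v] = coord
    if interior:
        uv[interior] = np.linalg.solve(A, b)
    return uv

# Unit-square boundary (vertices 0..3) with one interior vertex (4):
boundary = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3)]
uv = tutte_parameterize(5, edges, boundary)
# The interior vertex lands at the average of its neighbors: (0.5, 0.5).
```

With a convex boundary polygon this mapping is bijective, which mirrors the convexity requirement stated above.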

3.3 3D Landmark Digitalization Using Catmull-Clark Subdivision

The landmarks of each training shape are digitalized via the Catmull-Clark subdivision proposed by Catmull et al. [32] in several steps. The first step is to use the Catmull-Clark subdivision algorithm to generate new points on the parameterization domain (i.e., the planar $K_p$-polygon) of P. This planar $K_p$-polygon is the base mesh at the 0th subdivision. Fig. 7(a) illustrates base meshes of a triangle and a quadrilateral. The vertices of the base meshes are denoted $v^0$. Performing one subdivision on the base meshes creates new points $f^1$ and $e^1$ and updated vertices $v^1$, shown in Fig. 7(b). $f^1$ is called a face point and corresponds to a cell of the base mesh. $e^1$ is called an edge point and corresponds to an edge of the base mesh. $v^1$ corresponds to an original vertex $v^0$ of the base mesh. New edges of the supermesh are formed by connecting $f^1$ and $e^1$ and by connecting $v^1$ and $e^1$. Before performing the next subdivision, all $f^1$, $e^1$, and $v^1$ are relabeled as $v^1$ to serve as the input of the second subdivision.

Fig. 7.

Fig. 7

Catmull-Clark subdivision. (a) Triangle base mesh and quadrilateral-like base mesh. (b) The supermeshes generated by one subdivision of the base meshes.

Similarly, after the $j$th subdivision is performed, $v^j$ (vertices), $f^j$ (face points), and $e^j$ (edge points) are acquired in the same way. The $j$th Catmull-Clark subdivision operating on $v^{j-1}$ is summarized in Algorithm 1. $n_v$ is called the valence of vertex $v^j$ (the number of edges connecting to $v^j$). The average operator $\bar{k}_m$ denotes the average of the type-$k$ elements neighboring an element of type $m$, where $m$ is one of the averaged vertices, face points, and edge points. Fig. 8 summarizes all possible average operators. Note that type $m$ in $\bar{k}_m^{\,j-1}$ is calculated by averaging face centroids ($f$), edge midpoints ($e$), or original vertices ($v$) in the submesh (the mesh before the subdivision is performed), whereas type $m$ in $\bar{k}_m^{\,j}$ is calculated by averaging $v^j$, $f^j$, or $e^j$ in the supermesh (the mesh after the subdivision is performed).

Fig. 8.

Fig. 8

The average operators $\bar{k}_m$ (with the superscript index $j$ omitted).

In the second step, the patch P with its landmarks is remeshed by inverse-mapping the points (generated by subdivision) on the parameterization domain (the planar $K_p$-polygon) onto P. The same number of subdivisions is applied to each patch of each training shape. Finally, the remeshed shape with regularized landmarks is obtained by stitching the remeshed patches. Since two patches sharing a boundary have the same parameterization mapping on that boundary, both patches produce the same subdivision there. Hence, the remeshed patches can be stitched by merging the vertices at the boundaries. Fig. 9 illustrates the remeshed and stitched training shape after the 0th, first, third, and fifth subdivisions of the four base meshes. The training shape corresponding to the 0th subdivision is the mesh formed by the anatomical control landmarks. In our study, the number of subdivisions is 5.

Fig. 9.

Fig. 9

The remeshed shape with landmarks after different numbers of Catmull-Clark subdivisions. (a) 0th subdivision. (b) First subdivision. (c) Third subdivision. (d) Fifth subdivision.


Algorithm 1 Catmull-Clark subdivision algorithm

$f^j \leftarrow \bar{v}_f^{\,j-1}$
$e^j \leftarrow \frac{1}{2}\left(\bar{v}_e^{\,j-1} + \bar{f}_e^{\,j}\right)$
$v^j \leftarrow \frac{1}{n_v}\left(\bar{f}_v^{\,j} + \bar{v}_v^{\,j-1} + (n_v - 2)\,v^{j-1}\right)$
$v^j \leftarrow \{f^j, e^j, v^j\}$ (relabel)
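The averaging rules of Algorithm 1 can be sketched for the interior of a quad mesh as follows. The mesh data structure, function name, and the boundary-edge simplification are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def catmull_clark_points(verts, faces):
    """One Catmull-Clark step (interior rules only), following Algorithm 1:
    face point = average of the face's vertices,
    edge point = 1/2 (edge midpoint + average of the two adjacent face points),
    new vertex = 1/n (avg face points + avg neighbor vertices + (n - 2) v).

    verts: (V, 3) array; faces: list of quads (vertex-index tuples).
    """
    face_points = {f: np.mean([verts[i] for i in f], axis=0) for f in faces}
    edge_faces = {}
    for f in faces:
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces.setdefault(frozenset((a, b)), []).append(f)
    edge_points = {}
    for e, fs in edge_faces.items():
        mid = np.mean([verts[i] for i in e], axis=0)
        if len(fs) == 2:  # interior edge
            edge_points[e] = 0.5 * (mid + np.mean([face_points[f] for f in fs], axis=0))
        else:             # boundary edge: keep the midpoint (simplification)
            edge_points[e] = mid
    neighbors, vert_faces = {}, {}
    for e in edge_faces:
        a, b = tuple(e)
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    for f in faces:
        for i in f:
            vert_faces.setdefault(i, []).append(f)
    new_verts = verts.copy().astype(float)
    for v, fs in vert_faces.items():
        n = len(neighbors[v])
        if len(fs) == n:  # interior vertex: valence equals #incident faces
            f_avg = np.mean([face_points[f] for f in fs], axis=0)
            v_avg = np.mean([verts[u] for u in neighbors[v]], axis=0)
            new_verts[v] = (f_avg + v_avg + (n - 2) * verts[v]) / n
    return face_points, edge_points, new_verts

# 3x3 planar grid (vertex k at (k % 3, k // 3, 0)), four unit quads:
verts = np.array([[x, y, 0.0] for y in range(3) for x in range(3)])
faces = [(0, 1, 4, 3), (1, 2, 5, 4), (3, 4, 7, 6), (4, 5, 8, 7)]
fp, ep, nv = catmull_clark_points(verts, faces)
# By symmetry, the center vertex (index 4) stays at (1, 1, 0).
```

Connecting each face point to its surrounding edge points, and each updated vertex to its incident edge points, would then yield the supermesh.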

4 Statistical Models

Two statistical models, summarized in Fig. 10, will be built using the N training images and their corresponding N training shapes with regularized landmarks (calculated in Section 3). The first is the WDM, built from the training shapes; it is a multiscale statistical shape model based on the PDM (Section 4.1) using the DSWT (Section 4.2). The second is the IFM, created using both the training shapes and the training images.

Fig. 10.

Fig. 10

The flowchart of creating WDM and IFM.

4.1 Point Distribution Model

The point distribution model (PDM) was introduced by Taylor et al. [33]. Let $S_i$ be the set of $n$ landmarks [illustrated in Fig. 9(d)] of the $i$th training shape. We say $S_i$ consists of landmarks in the image space. All training shapes are transformed (rotation, translation, and scaling) to a common coordinate system, the model space, using Procrustes analysis [34] to minimize their sum of squared distances. Let $\tilde{S}_i$ be the transformed training shape; $\tilde{S}_i$ is the $i$th training shape in the model space. Let $x_i$ be the $3n$-dimensional shape vector formed by concatenating the coordinates of all transformed landmarks in $\tilde{S}_i$. $x_i$ can be expressed by the mean vector $\bar{x} \equiv \frac{1}{N}\sum_{i=0}^{N-1} x_i$ and the shape variation vector $\Delta x_i \equiv x_i - \bar{x}$. The distribution of the SSM is characterized by the covariance matrix of the shape variation vectors, given by

$$C_x \equiv \frac{1}{N-1}\sum_{i=0}^{N-1} \Delta x_i \Delta x_i^T. \tag{1}$$

In order to capture the dominant shape variations, Principal Component Analysis is applied to the set of shape variation vectors. A set of orthogonal principal axes (also called principal modes) of shape variation is found by calculating the orthogonal eigenvectors of $C_x$; the corresponding eigenvalues are the variances of the distribution along the principal axes. Assume the columns of $P$ are the orthogonal eigenvectors with corresponding nonzero eigenvalues $\lambda_k$ in descending order. A shape vector $\tilde{x}$ in the model space can be generated from a shape parameter $\tilde{b}$:

$$\tilde{x} = \bar{x} + P\tilde{b}. \tag{2}$$

Any shape vector $x$ in the model space can be approximated by projecting the shape variation vector $\Delta x \equiv x - \bar{x}$ onto the subspace (spanned by $P$) of the model:

$$x \approx \bar{x} + Pb. \tag{3}$$

Its shape parameter $b$ is the least-squares solution minimizing $\|\Delta x - Pb\|^2$:

$$b = P^T \Delta x. \tag{4}$$
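Equations (1)-(4) amount to a PCA of the shape-variation vectors. A minimal sketch, with hypothetical toy shape vectors, computing the principal modes via an SVD of the deviation matrix rather than forming $C_x$ explicitly:

```python
import numpy as np

def build_pdm(shapes):
    """Build a PDM from N training shape vectors (rows of `shapes`,
    each a 3n-vector of concatenated, pre-aligned landmark coordinates).
    Returns (mean, P, eigenvalues), keeping all nonzero modes (at most N - 1).
    """
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    D = X - mean                      # shape-variation vectors, eq. before (1)
    # PCA via SVD of the deviation matrix: C_x = D^T D / (N - 1)
    U, sv, Vt = np.linalg.svd(D, full_matrices=False)
    eigvals = sv**2 / (len(X) - 1)
    keep = eigvals > 1e-12
    return mean, Vt[keep].T, eigvals[keep]

# Toy data: 5 "shapes" in 6-D (hypothetical numbers, not real landmarks).
rng = np.random.default_rng(0)
shapes = rng.normal(size=(5, 6))
mean, P, lam = build_pdm(shapes)
b = P.T @ (shapes[0] - mean)      # shape parameters, eq. (4)
x_rec = mean + P @ b              # model-space approximation, eq. (3)
```

Because the deviation vectors of $N$ samples span at most an $(N-1)$-dimensional subspace, any training shape itself is reconstructed exactly by its projection.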

A shape vector $x$ is said to be defined in the model space if it is expressed by (2) or approximated by (3). A shape vector $y$ is said to be defined in an image space if its landmark points are located in the coordinate system of a volumetric image. Once the PDM is built, a shape vector in the image space can be represented or approximated by a shape vector in the PDM. Assume a PDM with shape priors ($P$, $\lambda_k$) is calculated from $\{S_i\}$. Let $y$ be the shape vector of an arbitrary shape $S$ in the image space; note that $S$ is not among the training shapes $S_i$. One approach to calculating the shape vector in the model space that best represents $y$ is to find a transformation $T$ and a shape parameter $b$ minimizing the mean squares:

$$\|y - T(\bar{x} + Pb)\|^2. \tag{5}$$

Algorithm 2 A simple algorithm for calculating suboptimal $T$ and $b$ by minimizing (5) and incorporating shape constraints.

$b \leftarrow 0$
while not converged do
 $x \leftarrow \bar{x} + Pb$; generate a shape in the model space using (2).
 Calculate $T$ by minimizing $\|y - T(x)\|^2$ using [35].
 $x' \leftarrow T^{-1}(y)$; calculate the inverse spatial transformation of $y$.
 $b \leftarrow P^T(x' - \bar{x})$; calculate the shape parameters fitting $x'$ using (3).
 Apply the shape constraints on $b$ using (6).
end while

$T$ is a transformation consisting of rotation, translation, and scaling of a shape. The eigenvalues $\lambda_k$ can be used to bound the model parameters in order to ensure that the shape is plausible. The values of $b$ are constrained by

$$-a\sqrt{\lambda_k} < b_k < a\sqrt{\lambda_k} \tag{6}$$

where generally $2 \le a \le 3$, and $b_k$ is an element of $b$. A simple algorithm for calculating suboptimal $T$ and $b$ by minimizing the mean squares (5) and incorporating the shape constraints is summarized in Algorithm 2.
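Algorithm 2 can be sketched as follows. The similarity-transform step stands in for reference [35] using a standard Umeyama-style closed form, and the toy model, landmarks, and convergence test (a fixed iteration count) are illustrative assumptions.

```python
import numpy as np

def similarity_fit(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src landmarks onto dst: dst ~ s * src @ R.T + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    s = (S[:2].sum() + d * S[2]) / (A**2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t

def fit_pdm(y, mean, P, lam, a=3.0, iters=20):
    """Algorithm 2: alternate the pose (T) and shape (b) updates, with the
    constraint |b_k| < a * sqrt(lambda_k) of eq. (6) applied each pass."""
    b = np.zeros(P.shape[1])
    Y = y.reshape(-1, 3)
    for _ in range(iters):
        x = (mean + P @ b).reshape(-1, 3)     # eq. (2)
        s, R, t = similarity_fit(x, Y)        # pose step, minimizes (5) in T
        x_prime = ((Y - t) @ R) / s           # x' = T^{-1}(y)
        b = P.T @ (x_prime.ravel() - mean)    # shape step, eq. (4)
        b = np.clip(b, -a * np.sqrt(lam), a * np.sqrt(lam))  # eq. (6)
    return b, (s, R, t)

# Hypothetical toy fit: y generated from the model itself (identity pose).
rng = np.random.default_rng(1)
mean = rng.normal(size=12)                    # 4 landmarks, 12-D shape vector
P, _ = np.linalg.qr(rng.normal(size=(12, 3)))
lam = np.array([4.0, 2.0, 1.0])
y = mean + P @ np.array([1.0, -0.5, 0.5])
b, (s, R, t) = fit_pdm(y, mean, P, lam)
```

The clipping step is the simplest way to enforce (6); the paper leaves the choice of constraint mechanism to the cited ASM literature.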

4.2 Discrete Surface Wavelet Transform Based on Catmull-Clark Subdivision

In the following, we perform multiscale analysis on each aligned training shape $\tilde{S}_i$ in the model space. By properly choosing anatomical control landmarks and performing patch decomposition (as in Fig. 5), we can generate landmarks on the training shapes with a rectilinear structure at any level of Catmull-Clark subdivision. Since the rows and columns of the rectilinear training shapes are well-defined, the DSWT of the training shapes can be calculated by performing 2D DWT on the rectilinear shapes in the x-, y-, and z-coordinates. However, there are exceptions. When the training shape is decomposed into patches such as the ones in Fig. 14(a), using the same set of anatomical control landmarks, two pentagon-like patches and one quadrilateral-like patch form the training shape [Fig. 14(b)]. After any level of Catmull-Clark subdivision is performed on the patches and the remeshed patches are stitched, the resulting training shape always has an extraordinary landmark (excluding landmarks on the boundaries) with valence three [32]. Fig. 14(c) shows the remeshed shape after the fifth subdivision. Clearly, 2D DWT cannot be performed on this grid-like mesh, since the rows and columns of a rectilinear structure are not well-defined.

Fig. 14.

Fig. 14

(a) A different patch decomposition applied to the same shape as in Fig. 5. (b) The extracted shape. (c) The shape after five Catmull-Clark subdivisions. (d) Scaling coefficients after one decomposition of DSWT. (e) Scaling coefficients after three decompositions of DSWT. (f) Scaling coefficients (base) after five decompositions of DSWT.

To overcome this problem, Bertram et al. [36] proposed a new wavelet construction on Catmull-Clark subdivision surfaces of arbitrary two-manifold topology by designing a new lifting scheme for the biorthogonal wavelet transform. The training shape and its patches can thus be customized, and its DSWT constructed accordingly. The lifting scheme of 2D DWT and Bertram et al.'s proposed lifting scheme are first introduced on the rectilinear grid. Then, we discuss how this new lifting scheme can skillfully perform multiscale decomposition on a training shape with a non-rectilinear structure.

Fig. 11 shows the decompositions in 1D DWT. An input $s_{m,J}$ at scale $J$ can be decomposed into wavelet coefficients $w_{m,J-1}$ and scaling coefficients $s_{m,J-1}$ at scale $J-1$ by a high-pass filter (H) and a low-pass filter (L) followed by downsampling. $w_{m,J-1}$ represents the signal details of $s_{m,J}$ at scale $J-1$, whereas $s_{m,J-1}$ represents the coarser-scale content of $s_{m,J}$. It has been shown that one decomposition of any classic DWT with finite filters in Fig. 11 can be implemented by starting from the lazy wavelet transform (splitting) and then performing a finite number of alternating lifting steps [37]. The DWT using the lifting scheme consists of two steps. In the first step of the lifting scheme, the scaling coefficients $s_{m,j}$ at scale $j$ are split into an s-set and a w-set:

$$s_{m,j-1} \leftarrow s_{2m,j} \tag{7}$$
$$w_{m,j-1} \leftarrow s_{2m+1,j}. \tag{8}$$

Fig. 11.

Fig. 11

The decompositions in 1D DWT.

In the second step of the lifting scheme, a number of consecutive lifting steps are calculated. Only one of the s-lifting (9) and w-lifting (10) steps is calculated at the $l$-th lifting step:

$$s_{m,j-1} \leftarrow \alpha^{(l)} s_{m,j-1} + \sum_k \alpha_k^{(l)} w_{k,j-1} \tag{9}$$
$$w_{m,j-1} \leftarrow \beta^{(l)} w_{m,j-1} + \sum_k \beta_k^{(l)} s_{k,j-1} \tag{10}$$

The inverse DWT using the lifting scheme simply applies the operations analogous to (9) and (10) in reverse order, which guarantees perfect reconstruction:

$$s_{m,j-1} \leftarrow \frac{1}{\alpha}\Big(s_{m,j-1} - \sum_k \alpha_k w_{k,j-1}\Big) \tag{11}$$
$$w_{m,j-1} \leftarrow \frac{1}{\beta}\Big(w_{m,j-1} - \sum_k \beta_k s_{k,j-1}\Big) \tag{12}$$
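The split/predict/update structure of (7)-(12) can be sketched in 1D. The particular coefficient values and the periodic boundary handling are assumptions for illustration; the key property is that the inverse undoes each lifting step in reverse order, so reconstruction is exact for any choice of coefficients.

```python
import numpy as np

def lifting_forward(x, a=-0.5, b=0.25):
    """One 1D DWT decomposition by lifting: split as in (7)-(8), then one
    w-lifting (predict) and one s-lifting (update) step, as in (9)-(10).
    Coefficients a, b chosen here give a linear-B-spline-style wavelet
    (a sketch; periodic boundaries assumed, signal length must be even).
    """
    s, w = x[0::2].astype(float), x[1::2].astype(float)  # lazy split
    w = w + a * (s + np.roll(s, -1))   # w-lifting: predict odds from evens
    s = s + b * (w + np.roll(w, 1))    # s-lifting: update evens from details
    return s, w

def lifting_inverse(s, w, a=-0.5, b=0.25):
    """Undo the lifting steps in reverse order, as in (11)-(12)."""
    s = s - b * (w + np.roll(w, 1))
    w = w - a * (s + np.roll(s, -1))
    x = np.empty(s.size + w.size)
    x[0::2], x[1::2] = s, w            # merge back (inverse of the split)
    return x

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
s, w = lifting_forward(x)
x_rec = lifting_inverse(s, w)
# x_rec equals x up to floating point: lifting is invertible by construction.
```

Here the scaling normalization ($\alpha = \beta = 1$) is left out for brevity, so the divisions in (11)-(12) reduce to plain subtractions.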

Fig. 12(a) shows lifting steps of one decomposition of 2D DWT. 1D DWT is first performed on the rows and then performed on the columns by using lifting steps.

Fig. 12.

Fig. 12

(a) One decomposition of 2D DWT using lifting steps. (b) One decomposition of 2D DWT using composite lifting steps.

The new lifting scheme designed by Bertram et al. is shown in Fig. 12(b). Instead of performing the lifting steps first on the rows and then on the columns, each row lifting step and its corresponding column lifting step are performed at the same time before moving on to the next row and column lifting steps. Since lifting steps are linear operations, the resulting decomposition of 2D DWT in Fig. 12(b) is equivalent to the one in Fig. 12(a). This new lifting scheme also consists of two steps. In the first step, splitting is performed on the landmarks of the training shape based on the structure of Catmull-Clark subdivision. As described in Section 3.3, the landmarks of the training shape are obtained by Catmull-Clark subdivision and can be classified into vertices $v$, face points $f$, and edge points $e$ before the relabeling in Algorithm 1 (the superscript is omitted for simplicity). $v$ is split into the scaling coefficients, similar to (7). $f$ is split into the wavelet coefficients, similar to (8). $e$ can be split into the scaling coefficients (when s-lifting is performed) or the wavelet coefficients (when w-lifting is performed). In the second step, the operations of the lifting steps (in each of the row and column liftings) are defined as

$$s_{m,j-1} \leftarrow \alpha\, s_{m,j-1} + \tilde{\alpha}\,(w_{m,j-1} + w_{m-1,j-1}) \tag{13}$$
$$w_{m,j-1} \leftarrow \beta\, w_{m,j-1} + \tilde{\beta}\,(s_{m,j-1} + s_{m+1,j-1}) \tag{14}$$

The superscript $l$ is omitted for simplicity. It is shown in Appendix A that the composite s-lifting step for each vertex $v$ and each edge point $e$ can be determined by

$$v \leftarrow \alpha^2 v + 4\tilde{\alpha}^2 \bar{f}_v + 4\alpha\tilde{\alpha}\,\bar{e}_v, \qquad e \leftarrow \alpha e + 2\tilde{\alpha}\,\bar{f}_e \tag{15}$$

The composite s-lifting step is simply an instance of the generalized s-lifting step in (9). Similarly, the composite w-lifting step for each face point $f$ and each edge point $e$ has the following expression:

$$f \leftarrow \beta^2 f + 4\tilde{\beta}^2 \bar{v}_f + 4\beta\tilde{\beta}\,\bar{e}_f, \qquad e \leftarrow \beta e + 2\tilde{\beta}\,\bar{v}_e \tag{16}$$

After one decomposition of 2D DWT is completed, the resulting $v$ contains the scaling coefficients, which serve as the input to the next decomposition. The resulting $f$ and $e$ are the wavelet coefficients of the 2D DWT.

Note that (15) involves the average of the four neighboring edge points and the average of the four neighboring face points for each v. If an extraordinary vertex with valence n_v exists, f̄_v and ē_v in (15) can be generalized to the average of the n_v neighboring face points and the average of the n_v neighboring edge points, respectively. Therefore, the composite lifting steps in (15) and (16) can be performed regardless of whether rows and columns are well-defined; they simply require the structure of Catmull-Clark subdivision. The inverse lifting steps can be obtained similarly using (11) and (12).

In this study, the wavelet and scaling filter construction is based on dyadic refinement of the linear B-spline scaling function. The corresponding lifting scheme is composed of only two composite lifting steps in each 2D DWT decomposition: one composite s-lifting step followed by one composite w-lifting step. The parameters in (15) and (16) are α = 1, α̃ = 1/2, β = 1, and β̃ = 1/4. Fig. 13 shows the scaling coefficients after one, three, and five decompositions of Fig. 9(d) using this new lifting scheme. Fig. 14(d)–(f) shows the scaling coefficients of another customized shape shown in Fig. 14(c), which contains an extraordinary landmark with valence three. This example (shown in Fig. 14) demonstrates that the DSWT of arbitrarily customized training shapes can be constructed accordingly.
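As a concrete illustration of the lifting machinery, the sketch below applies generic s- and w-lifting steps of the form (13)–(14) to a 1D periodic signal using the linear B-spline parameters α = 1, α̃ = 1/2, β = 1, β̃ = 1/4, and then inverts them. The function names and the periodic boundary handling are illustrative assumptions, not the paper's implementation; the point is that each lifting step is trivially invertible, which is what guarantees perfect reconstruction in the lifting scheme.

```python
import numpy as np

def s_lift(s, w, alpha, alpha_t):
    # s-lifting (cf. (13)): each scaling coefficient is updated from
    # its two neighboring wavelet coefficients (periodic boundary).
    out = s.copy()
    for m in range(len(s)):
        out[m] = alpha * s[m] + alpha_t * (w[m] + w[(m - 1) % len(w)])
    return out

def w_lift(w, s, beta, beta_t):
    # w-lifting (cf. (14)): each wavelet coefficient is updated from
    # its two neighboring scaling coefficients.
    out = w.copy()
    for m in range(len(w)):
        out[m] = beta * w[m] + beta_t * (s[m] + s[(m + 1) % len(s)])
    return out

def forward(x, alpha=1.0, alpha_t=0.5, beta=1.0, beta_t=0.25):
    # split even samples into s and odd samples into w, then lift
    s, w = x[0::2].copy(), x[1::2].copy()
    s = s_lift(s, w, alpha, alpha_t)
    w = w_lift(w, s, beta, beta_t)
    return s, w

def inverse(s, w, alpha=1.0, alpha_t=0.5, beta=1.0, beta_t=0.25):
    # undo the lifting steps in reverse order; each step is trivially
    # invertible, which is the key property of the lifting scheme
    s, w = s.copy(), w.copy()
    for m in range(len(w)):
        w[m] = (w[m] - beta_t * (s[m] + s[(m + 1) % len(s)])) / beta
    for m in range(len(s)):
        s[m] = (s[m] - alpha_t * (w[m] + w[(m - 1) % len(w)])) / alpha
    x = np.empty(len(s) + len(w))
    x[0::2], x[1::2] = s, w
    return x
```

Running `inverse` on the output of `forward` reproduces the input exactly, mirroring how the inverse composite lifting steps recover the finer-scale coefficients.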

Fig. 13.

Fig. 13

DSWT of Fig. 9(d). (a) Scaling coefficients after one decomposition of DSWT. (b) Scaling coefficients after three decompositions of DSWT. (c) Scaling coefficients after five decompositions of DSWT.

4.3 Wavelet Density Model

Since the rank of the covariance matrix Cx in PDM is at most N − 1, the number of valid principal axes corresponding to nonzero eigenvalues is at most N − 1. Typically, n ≫ N in a 3D SSM, so the number of principal axes may not be sufficient to represent a shape with a large number of landmarks well. Moreover, since each shape in PDM is a global linear combination of principal axes, PDM may not be able to capture fine shape details. Davatzikos et al. [38] first proposed a multiscale SSM to solve this problem. Several studies have since applied different multiscale analyses to SSM, such as spherical wavelets on spherical topology [39], subdivision-based surface wavelets [40] on spherical topology, and diffusion wavelets on arbitrary surface topology [41].
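The rank limitation motivating the multiscale model can be checked numerically. The snippet below (a minimal sketch with synthetic data, not the paper's shape vectors) shows that the sample covariance of N shape vectors in n ≫ N dimensions has at most N − 1 nonzero eigenvalues, because centering on the mean removes one degree of freedom:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 10, 100                   # N training shapes, n-dimensional vectors
X = rng.standard_normal((N, n))

# sample covariance around the mean: at most N - 1 nonzero eigenvalues
C = np.cov(X, rowvar=False)      # n x n matrix
rank = np.linalg.matrix_rank(C)  # at most N - 1 = 9
print(rank)
```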

In the following, the DSWT is incorporated with PDM to form a multiscale SSM. The training shapes in the model space are decomposed into multiscale representations, and a PDM is built in the subspace associated with each scale. This wavelet-based SSM is called the wavelet density model (WDM). For simplicity of notation, all the scaling coefficients at scale 0 are viewed as wavelet coefficients at scale −1. Let ŵ_{i,k,l} be the DSWT of S̃_i at scale l and spatial location k for shape i. Define a collection of wavelet coefficients B_{i,l} = {ŵ_{i,k,l}, ∀k} at each scale, and concatenate them to form w_{i,l} ≡ (ŵ^T_{i,0,l}, ŵ^T_{i,1,l}, …)^T. Finally, PCA is performed on w_{i,l} at each scale over all the training shapes to obtain the matrices P_l of eigenvectors. A set of wavelet coefficients w̃_l of a shape at a specific scale l can be generated by a shape parameter b̃_l

w̃_l = w̄_l + P_l b̃_l   (17)

Similarly, the wavelet coefficients wl of any shape at scale l can be approximated by projecting wl onto the subspace of Pl

w_l ≈ w̄_l + P_l b_l   (18)

with

b_l = P_l^T (w_l − w̄_l)   (19)

It can be seen that the total number of eigenvectors in the model is increased by around J times. Similarly, the algorithm for minimizing (5) can be developed and is summarized in Algorithm 3.


Algorithm 3 A similar algorithm analogous to Algorithm 2 when WDM is used.

b_l ← 0
while not converged do
 w_l ← w̄_l + P_l b_l; generate wavelet coefficients in the model space using (17).
 x ← W^{−1}(w_l); calculate the inverse DSWT of w_l.
 Calculate T by minimizing ‖y − T(x)‖² using [35].
 x ← T^{−1}(y); calculate the inverse spatial transformation of y.
 w_l ← W(x); calculate the DSWT of x.
 b_l ← P_l^T (w_l − w̄_l); calculate the shape parameters fitting w_l using (19).
 Apply shape constraints on b_l analogous to (6).
end while
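A minimal sketch of the per-scale PCA underlying (17)–(19) is given below. The function names are hypothetical, and the SVD route is one standard way to obtain the eigenvector matrix P_l when n ≫ N; training samples are reconstructed exactly because they lie in the span of the mean plus the retained axes.

```python
import numpy as np

def build_wdm_scale(W):
    # W: N x d matrix; row i holds the concatenated wavelet coefficients
    # w_{i,l} of training shape i at one fixed scale l
    w_bar = W.mean(axis=0)
    U, S, Vt = np.linalg.svd(W - w_bar, full_matrices=False)
    P = Vt[S > S.max() * 1e-10].T   # columns = principal axes (at most N-1)
    return w_bar, P

def fit_params(w, w_bar, P):
    # b_l = P_l^T (w_l - w_bar_l), cf. (19)
    return P.T @ (w - w_bar)

def generate(b, w_bar, P):
    # w~_l = w_bar_l + P_l b~_l, cf. (17)
    return w_bar + P @ b
```

In the full WDM this construction is repeated once per scale, so shape parameters b_l exist independently at every scale.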

4.4 Image Feature Model

In addition to WDM, an image feature model (IFM) is built using the training images and shapes (before Procrustes alignment) to describe the image features embedded at the landmarks in the image space. For each landmark of Si in the image space, a line L passing through it and perpendicular to the shape is calculated [Fig. 15(a)]. Then L is uniformly sampled to obtain a set of 2M + 1 ordered points {z_m}, m = −M, …, M (M points on each side). Image intensities along the line are interpolated at each point z_m. Fig. 15(b) illustrates a set of the sampled points along L and their image intensities. An image feature vector g (such as the first-order derivative of the image intensity) can be calculated from the interpolated image intensities. Let ḡ and S_g be the mean and covariance matrix of g over all the training images. The IFM is characterized by ḡ and S_g.

Fig. 15.

Fig. 15

(a) At each of the landmarks, a line L (dashed line) perpendicular to the training shape in the image space is defined for calculating the image feature. (b) The interpolated image profile on sample points along line L. Without loss of generality, z0 is the landmark.

In our study, three kinds of image features are calculated: the zero-order (g^(0)), first-order (g^(1)), and second-order (g^(2)) derivatives of the image intensity. Each g^(i) is normalized by ‖g^(i)‖ to avoid the effect of the lack of intensity calibration in CBCT imaging.
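The sketch below illustrates how the three normalized derivative features and the IFM statistics ḡ and S_g might be computed from a sampled intensity profile. The use of np.gradient for the derivatives and all function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def profile_features(intensities):
    # zero-, first-, and second-order derivative features of the 2M+1
    # samples along the line L, each normalized by its own norm; the
    # normalization makes the feature insensitive to the uncalibrated
    # CBCT intensity scale
    g0 = np.asarray(intensities, dtype=float)
    g1 = np.gradient(g0)           # first-order derivative
    g2 = np.gradient(g1)           # second-order derivative
    feats = []
    for g in (g0, g1, g2):
        n = np.linalg.norm(g)
        feats.append(g / n if n > 0 else g)
    return feats

def ifm(profiles):
    # mean and covariance of a feature vector over all training images
    G = np.vstack(profiles)
    return G.mean(axis=0), np.cov(G, rowvar=False)
```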

5 Segmentation

Once the statistical models are built, we calculate the outer surface of the anterior wall of the maxilla in a CBCT image by using the statistical priors of these models. In the following, we propose a novel model-based algorithm called base invariant wavelet active shape model (BIWASM), based on WDM, together with an initialization method called customized wavelet base initialization (CWBI). A brief flowchart is summarized in Fig. 16.

Fig. 16.

Fig. 16

The flowchart of performing the proposed CWBI and BIWASM.

5.1 Preliminary Studies

In the following, we introduce two model-based segmentation algorithms, ASM and WASM, and one of the most common initialization methods, called registration-based initialization (RBI). ASM, introduced by Cootes et al. [42], is an iterative active-contour searching algorithm using PDM. WASM, introduced in [38], is a variation of ASM that applies multiscale analysis using WDM. The final shape can be calculated by RBI followed by the two-step iterative shape searching of WASM (ASM) based on WDM (PDM).

RBI is one of the simplest initializations for ASM and WASM. The mean shape W^{−1}({w̄_l, ∀l}) [see (17)] in the model space can be viewed as the most probable shape for initialization, and the initial shape is obtained by transforming the mean shape from the model space to the image space. Therefore, RBI is performed in two steps. The first step of RBI is to select, via user interaction, a number of points in the image space as described in Appendix B. They correspond to control landmarks of the mean shape. The control landmarks of a shape are defined based on the construction of the training shape as follows. In Fig. 9(a), 9 anatomical control landmarks of the training shape form a mesh at the 0-th subdivision. After each level of subdivision, new landmarks are added, but these 9 landmarks persist in the remeshed training shape at every level of subdivision. We define these 9 landmarks as the control landmarks of a shape. Fig. 17(a) shows these 9 enumerated control landmarks [they correspond to the 9 anatomical control landmarks of the mesh at the 0-th subdivision in Fig. 9(a)]. Therefore, there are 9 corresponding control landmarks in the mean shape. The selection of the points in RBI is based on the following assumption: if we knew the true shape (in the image space) composed of regularized landmarks with the same mesh structure as the shape in the SSM, each of the selected points (in the image space) would be exactly one of the control landmarks on that true shape. Furthermore, the selection criterion in RBI is the same as the configuration of anatomical control landmarks for training shapes demonstrated in Fig. 5(a).
However, the datasets used for point selection differ: the points for calculating the initial shape are pinpointed on slices of the image via user interaction (Appendix B), whereas the points for patch decomposition and shape extraction in Section 3.1 are pinpointed on the ground truth of the bone surface. Two examples of point selection for calculating the initial shape in RBI are shown in Fig. 17(b) and Fig. 17(c). In the first example, we select 4 points corresponding to 4 of the control landmarks (such as the corner control landmarks) of the mean shape (in the model space). In the second example, we select 9 points corresponding to all 9 control landmarks of the mean shape (in the model space). In the second step of RBI, a transformation from the model space to the image space is calculated by registering the selected points (in the image space) to their corresponding control landmarks of the mean shape (in the model space). The initial shape calculated by RBI using 9 selected points is illustrated in Fig. 18(a).
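The registration in the second step of RBI can be sketched as a closed-form similarity (Procrustes/Umeyama) fit between the selected points and the control landmarks of the mean shape. The paper computes T with the method of its reference [35], for which this sketch stands in; all function names are hypothetical.

```python
import numpy as np

def similarity_fit(src, dst):
    # least-squares similarity transform (scale s, rotation R,
    # translation t) mapping src points onto dst (Umeyama solution)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))   # avoid reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def rbi(mean_shape, ctrl_idx, picked_pts):
    # register the mean-shape control landmarks to the picked points,
    # then transform the whole mean shape into the image space
    s, R, t = similarity_fit(mean_shape[ctrl_idx], picked_pts)
    return (s * (R @ mean_shape.T)).T + t
```

With exact correspondences the fit recovers the underlying similarity transform, so the whole mean shape lands consistently in the image space.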

Fig. 17.

Fig. 17

(a) Nine control landmarks of a shape are defined and derived from the anatomical control landmarks of the mesh at the 0-th subdivision in Fig. 9(a). (b) Four points selected using user interaction (Appendix B). These numbered points are assumed to be exactly the same as the correspondingly numbered control landmarks of the true shape. (c) Nine selected points corresponding to all the control landmarks of the true shape.

Fig. 18.

Fig. 18

(a) The initial shape (accompanied with a slice in the image space) calculated by RBI using 9 selected points. (b) The initial shape (accompanied with a slice in the image space) calculated by CWBI using 9 selected points.

The first step of the iterative shape searching of WASM (ASM) is as follows: given a shape ỹ^(k) in the image space at the k-th iteration, each candidate landmark is examined along the line passing through it and perpendicular to the evolving shape ỹ^(k) by using the IFM. Similar to the notation in Section 4.4, a set of 2K + 1 ordered points {z_m}, m = −K, …, K, is acquired along the line, where K > M. The image feature vector g̃ ≡ (g_{−K}, g_{−K+1}, …, g_K) with length 2K + 1 can be calculated. A searching window defining a temporary image feature vector g_m ≡ (g_{−M+m}, g_{−M+m+1}, …, g_{M+m}), m = −(K − M), …, (K − M), with length 2M + 1 scans the image feature vector g̃. The candidate landmark z_m̂ is selected among {z_m}, m = −(K − M), …, (K − M), so that

m̂ = argmin_m f(g_m)   (20)

where f(g) ≡ (g − ḡ)^T S_g^{−1} (g − ḡ) is the Mahalanobis distance evaluated at g. The landmarks calculated by (20) form a new evolving shape y^(k).
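The window search of (20) can be sketched as follows: slide a (2M + 1)-sample window over the (2K + 1)-sample profile and keep the offset with the smallest Mahalanobis distance to the model feature. Function names are illustrative assumptions.

```python
import numpy as np

def search_candidate(profile, g_bar, S_inv, M):
    # slide a (2M+1)-window over the (2K+1)-sample profile and return
    # the offset m-hat minimizing the Mahalanobis distance, cf. (20)
    profile = np.asarray(profile, dtype=float)
    K = (len(profile) - 1) // 2
    best_m, best_f = 0, np.inf
    for m in range(-(K - M), K - M + 1):
        # window centered at sample z_m: indices m-M .. m+M
        g = profile[K + m - M : K + m + M + 1]
        d = g - g_bar
        f = d @ S_inv @ d
        if f < best_f:
            best_m, best_f = m, f
    return best_m
```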


Algorithm 4 WASM (ASM)

while not converged do
 y^(k) ← ỹ^(k); calculate a candidate shape by examining the neighboring region of each of the landmark points.
 ỹ^(k+1) ← y^(k); given y^(k), calculate ỹ^(k+1) by minimizing ‖ỹ^(k+1) − y^(k)‖² using Algorithm 3 (for WASM) or Algorithm 2 (for ASM).
end while

It can be observed that, when the distribution of g is assumed to be jointly Gaussian, (20) is equivalent to finding the image feature vector maximizing the probability of a joint Gaussian distribution and then assigning the center point of the optimal image feature vector g to be the candidate point. However, this assumption is not appropriate in our study. Therefore, we set the covariance of the image feature vector to the identity, i.e., S_g = I. Furthermore, a new evaluation function in (20) for candidate landmarks is defined as f̃ ≡ f^(0) + f^(1) + f^(2) in our study, where f^(i) is the Mahalanobis distance for the image feature vector g^(i). The second step of the iterative shape searching of WASM (ASM) is to generate a shape in the model space and transform it to obtain a shape ỹ^(k+1) in the image space that fits y^(k). ỹ^(k+1) is calculated by minimizing ‖ỹ^(k+1) − y^(k)‖² using Algorithm 3 (Algorithm 2). The algorithm of WASM (ASM) is summarized in Algorithm 4.

5.2 Customized Wavelet Base Initialization

Although RBI incorporated with user interaction can quickly generate an initial shape, this initial shape contains only information of the mean shape, i.e. bl = 0 (in WASM) or b = 0 (in ASM). We will exploit the information of selected points to design a customized initial shape by using WDM.

Let S_b be the mesh formed by the 9 points selected via user interaction on slices of the images, as illustrated in Fig. 17(c). Let y_b be the 27 × 1 vector formed by concatenating the coordinates of all the points in S_b. Define

f(T, b_l, ∀l) ≡ ‖y_b − T(x̃(b_l, ∀l))‖²   (21)

where

x̃(b_l, ∀l) ≡ C(W^{−1}({w̄_l + P_l b_l, ∀l}))   (22)

C(·) is the operator that identifies the corresponding control landmarks of a shape in the model space and concatenates their coordinates to form a 27 × 1 vector (with the same layout as y_b). Therefore, x̃(b_l, ∀l) is a 27 × 1 vector.

To generate a better initial shape, we need to calculate the optimal T and b_l, ∀l, so that (21) is minimized:

(T*, b*_l, ∀l) = argmin_{T, b_l, ∀l} f(T, b_l, ∀l)   (23)

The initial shape can be determined by

ỹ^(0) = T*(W^{−1}({w̄_l + P_l b*_l, ∀l}))   (24)

It can be observed that f(T, b_l, ∀l) is a non-linear function. A simulated annealing approach could be used to solve it; however, such optimal searching is computationally expensive. Therefore, in this study, we propose a simple and efficient approach (CWBI) to estimate an initial shape as follows.

There are two observations regarding the Catmull-Clark subdivision and DSWT of a training shape. The first observation is as follows. Each training shape is remeshed using Catmull-Clark subdivision to generate landmarks [Fig. 9(d)]. To construct the WDM, each remeshed shape with regularized landmarks is aligned to the model space using Procrustes alignment and decomposed into scaling coefficients and wavelet coefficients by DSWT. Based on the construction of DSWT described in Section 4.2, the scaling coefficients at each scale correspond to the landmarks at each level of subdivision (i.e., they have the same mesh structure). For example, when the scaling coefficients at the coarsest scale [Fig. 13(c)] are reached, they correspond to the mesh at the 0-th Catmull-Clark subdivision [Fig. 9(a)]. Although the mesh of the scaling coefficients at each scale and the remeshed training shape at its corresponding level of Catmull-Clark subdivision have the same mesh structure, they are defined in different ways: the former is in the wavelet domain, and the latter is in the model space. The second observation is as follows. In each training shape, there are 8 vertices at the boundaries of the base mesh and one vertex in the central area of the base mesh [Fig. 9(a)]. Since the Catmull-Clark subdivision operation on these 8 corresponding vertices at the boundaries of the training shape cannot be well-defined using Algorithm 1 at each level of subdivision, these boundary vertices remain unchanged at each level of subdivision. Similarly, when the remeshed training shape is decomposed into wavelet coefficients and scaling coefficients, the corresponding 8 scaling coefficients at the boundaries of the scaling mesh cannot be well-defined using (15) at each scale. These 8 corresponding scaling coefficients at the boundaries of the scaling meshes are kept unchanged during wavelet decomposition at each scale of DSWT. Therefore, the values of these 8 corresponding vertices at the boundaries of the training shape are unchanged at all levels of Catmull-Clark subdivision and are the same as the values of the corresponding 8 scaling coefficients of the scaling meshes at all scales of DSWT. For example, the values of the 8 vertices at the boundaries of the mesh illustrated in Fig. 13(a) are exactly the same as the corresponding 8 scaling coefficients at the boundaries of the coarsest scaling mesh illustrated in Fig. 9(c). However, the value of the vertex in the central area of the mesh is usually not equal to the value of its corresponding scaling coefficient in the central area of the coarsest scaling mesh.


Algorithm 5 Customized Wavelet Base Initialization (CWBI)

T_b ← (y_b, P_{−1}, w̄_{−1}); use Algorithm 2 to calculate the suboptimal transformation T_b of (26) without shape constraints.
x̃_b ← T_b^{−1}(y_b)
w_{−1,b} ← x̃_b; define new scaling coefficients w_{−1,b} at the coarsest scale based on the assumption in (28) and set them equal to x̃_b.
x ← W^{−1}(w_{−1,b} ∪ {w̄_l, l ≥ 0})
ỹ^(0) ← T_b(x)

Based on the two observations above, we claim that there exist b_l, l ≥ 0, such that the value of each element in x̃(b_l, ∀l) [see (22)] can be approximated by the value of each element in w̄_{−1} + P_{−1} b_{−1}, given b_{−1}:

x̃(b_l, ∀l) ≈ w̄_{−1} + P_{−1} b_{−1}   (25)

Based on the assumption in (25), we reduce the complexity of the problem in (23) by calculating

(T*, b*_{−1}) = argmin_{T, b_{−1}} ‖y_b − T(w̄_{−1} + P_{−1} b_{−1})‖²   (26)

(26) can be solved approximately using Algorithm 2 without applying shape constraints. Let T_b be the suboptimal transformation of (26). It will be used to define a transformation between the model space and the image space and will play an essential role in the proposed BIWASM. A new base in the model space is defined by

x̃_b ≡ T_b^{−1}(y_b)   (27)

We define new scaling coefficients w_{−1,b} at the coarsest scale by assigning the values of w_{−1,b} to be x̃_b. Based on an assumption similar to (25), we claim that there exist b_l, l ≥ 0, such that

x̃_b ≈ C(W^{−1}(w_{−1,b} ∪ {w̄_l + P_l b_l, l ≥ 0}))   (28)

By combining local details using the mean wavelet coefficients {w̄_l, l ≥ 0} of WDM, the initial shape can be constructed by

ỹ^(0) = T_b(W^{−1}(w_{−1,b} ∪ {w̄_l, l ≥ 0}))   (29)

Algorithm 5 summarizes CWBI. Fig. 18(b) illustrates the initial shape generated by the proposed CWBI.

5.3 Base-Invariant Wavelet Active Shape Model

In most studies, the shapes of the subjects are closed surfaces such as spherical topology [23, 39, 43–46], open surfaces representing most of the whole subject [41, 47], or tubular topology [48]. The shape in our study is different: it is a partial surface of the skull model and is a shape-customized open surface with a closed boundary. We observe that WASM and ASM may become unreliable when recognizing the corresponding partial surface in the skull model. In their first step, WASM and ASM recognize the topology of the true partial shape by searching the topology of neighboring candidate landmarks. The candidate landmarks at the boundary of this open surface are most likely to fall outside the true shape (such as in the parts of the maxilla near the nasal bones and orbits). Once the candidate landmarks fall outside the true shape, the evolving shape in the second step of WASM and ASM will erroneously fit these candidate landmarks, because WASM and ASM involve updating T from the model space to the image space. Therefore, it is necessary to incorporate the selected points S_b defined in Section 5.2 to constrain the evolving shape.

Based on the concept in Section 5.2, we design a new model-based algorithm (BIWASM) based on WDM to overcome this problem. It is similar to WASM, with the following differences. First, the transformation between the model space and the image space is no longer calculated during the iterative steps; T_b in (27) is used to define this invariant transformation. Second, the coarsest scaling coefficients w_{−1,b} are used instead of generating new coarsest scaling coefficients w_{−1} using (17). Third, to keep the evolving shape constrained by the selected points S_b, the control landmarks of the evolving shape corresponding to S_b in the image space remain unchanged during the iterations of BIWASM. The proposed BIWASM is summarized in Algorithm 6.

6 Validations and Results

Nineteen sets of CBCT images were used for the validation. Their ground truths of bone surface were manually established as described in Section 2 and served as the control group. The outer surfaces of the anterior wall of the maxilla, illustrated in Fig. 3, were segmented using our BIWASM with CWBI. The same images were also segmented using ASM with registration-based initialization (RBI), WASM with RBI, and WASM with CWBI, respectively. These all served as the experimental group.


Algorithm 6 Base Invariant Wavelet Active Shape Model (BIWASM)

(ỹ^(0), T_b, w_{−1,b}) ← (y_b, P_{−1}, w̄_{−1}); use Algorithm 5 to calculate the initial shape, the transformation, and the scaling coefficients at the coarsest scale.
while not converged do
 y^(k) ← ỹ^(k); calculate a candidate shape by examining the neighboring region of each of the landmark points, and replace the corresponding control landmark points in y^(k) with S_b.
 x ← T_b^{−1}(y^(k)); calculate the inverse transformation of y^(k).
 w_l ← W(x); DSWT.
 b_l ← P_l^T (w_l − w̄_l), l ≥ 0; calculate the shape parameters fitting w_l using (19).
 Apply the constraints on b_l, l ≥ 0 using (6).
 w_l ← w̄_l + P_l b_l, l ≥ 0; generate wavelet coefficients using (17).
 x ← W^{−1}(w_{−1,b} ∪ {w_l, l ≥ 0}); inverse DSWT.
 ỹ^(k+1) ← T_b(x)
end while

6.1 Data Preparation

The segmentation datasets, referring to the datasets used to test the segmentation approaches, were labeled Di, i = 1, 2, …, 19. Each segmentation dataset Di consisted of a set of CBCT volumetric images (target dataset) and its corresponding ground truth of bone surface. The model datasets, referring to the training datasets of the base invariant active shape model, were defined by 19 shapes and 19 sets of CBCT volumetric images. They were labeled Mi, i = 1, 2, …, 19, in the same order. The 19 shapes were extracted from the 19 ground truths of bone surface and decomposed into patches using the approaches described in Section 3.1. Nine anatomical control landmark points were pinpointed to form four patches in each shape, as illustrated in Fig. 5(a) and (b). The patches in each shape were parameterized (Section 3.2), subdivided five times, remeshed, and stitched to form a remeshed shape (Section 3.3) with 4225 regularized landmarks [illustrated in Fig. 9(d)].

Once N model datasets were built as training datasets, one segmentation (target) dataset, other than N model datasets, was used to compare our developed approaches to the three traditional approaches. The preparation of target dataset was completed in the following four steps and summarized in Fig. 19.

Fig. 19.

Fig. 19

The flowchart of data preparations.

Step 1 was landmark digitization. Nine control landmarks were digitized interactively as described in Section 5.1 for initialization. They were selected based on the same criterion as control landmarks illustrated in Fig. 5(a). A set of 9 landmarks was digitized in each segmentation dataset. Once digitized, these landmarks were used for all the experiments.

Step 2 was initialization. The digitized control landmarks were used to create two initial shapes using two initialization methods: RBI and our newly developed CWBI. RBI registered the selected control landmarks of the shapes in the image space to their corresponding control landmarks of the mean shape in the model space, and transformed the mean shape from the model space into the image space. The resulting initial shape served as the input of ASM and WASM. CWBI calculated a transformation between the image space and the model space using the 9 selected points and added the mean local details using WDM to obtain an initial shape. The resulting initial shape served as the input of WASM and BIWASM.

Step 3 was to calculate the final shapes. The shapes initialized by RBI were fed into the ASM and WASM approaches respectively, resulting in the ASM-RBI and WASM-RBI final shapes. They represented the final shapes calculated by the traditional approaches. In addition, the shapes initialized by CWBI were fed into the WASM approach and our newly developed BIWASM approach respectively, resulting in the WASM-CWBI and BIWASM-CWBI final shapes. While the WASM-CWBI shape represented the outcome generated using our initialization and the traditional segmentation approach, the latter represented the outcome generated using both our developed initialization and final segmentation approaches. During the computation, the number of iterations in the ASM and WASM approaches (Algorithm 4) was 20, as was the number of iterations in BIWASM (Algorithm 6). The maximum number of iterations was 40 in Algorithm 2 and 10 in Algorithm 3; we intentionally reduced the maximum number of iterations in Algorithm 3 from 40 to 10 because DSWT was computationally expensive. In all of the ASM, WASM, and BIWASM approaches, 23 points (K = 12) were sampled along a line perpendicular to the landmarks in order to examine the neighboring region and find candidate landmarks. The sample distance was 0.2 mm. In addition, the shape constraint was a = 3 for both b in PDM and b_l in WDM. This step produced four kinds of final shapes: ASM-RBI, WASM-RBI, WASM-CWBI, and BIWASM-CWBI.

Step 4 was to compare the ground truth to the final shapes generated by the different approaches. This was done by calculating surface deviations and the Hausdorff distance between the ground truths and the final shapes generated in Step 3. The surface deviation was the set of closest distances between the landmarks of the ground truth and the final shape. Therefore, 4225 surface distances and one Hausdorff distance were produced from each target dataset.
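The Step-4 metrics can be sketched with a brute-force nearest-neighbor computation. This is a simple illustration only; whether the paper uses the one-sided or the symmetric Hausdorff distance is not specified, so the symmetric form is assumed here, and all function names are hypothetical.

```python
import numpy as np

def closest_distances(A, B):
    # for each landmark in A, distance to its nearest landmark in B
    diff = A[:, None, :] - B[None, :, :]
    return np.sqrt((diff ** 2).sum(-1)).min(axis=1)

def surface_deviation(ground_truth, final_shape):
    # set of closest distances between ground-truth landmarks and the
    # final shape (one distance per ground-truth landmark)
    return closest_distances(ground_truth, final_shape)

def hausdorff(A, B):
    # symmetric Hausdorff distance between the two landmark sets
    return max(closest_distances(A, B).max(),
               closest_distances(B, A).max())
```

For 4225 landmarks per shape, this dense formulation is still tractable; a k-d tree would be the usual optimization for larger meshes.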

The validation was achieved by two sets of comparisons. The first set of comparisons detected the variability amongst the 4 approaches when the number of training datasets was static; the second set detected the variability amongst the 4 approaches when the number of training datasets was dynamic. In addition, the computational times were compared amongst the 4 approaches.

The first set of comparisons was conducted using a leave-one-out arrangement (cross-validation). Six groups (69 in total) of leave-one-out experiments were conducted: i = 1, 2, …, 19 (19 experiments), i = 1, 2, …, 16 (16 experiments), i = 1, 2, …, 13 (13 experiments), i = 1, 2, …, 10 (10 experiments), i = 1, 2, …, 7 (7 experiments), and i = 1, 2, …, 4 (4 experiments). The datasets were randomly selected using SPSS software. In each experiment, the target dataset was excluded from the training datasets. For example, in the second group (datasets i = 1, 2, …, 16), the experiment with target dataset D12 was conducted using Mi, i = 1, 2, …, 11, 13, …, 16, with D12 excluded from the training datasets. After the final shapes were generated by the four approaches (ASM-RBI, WASM-RBI, WASM-CWBI, BIWASM-CWBI), they were compared to their ground truths. In each of the six groups of experiments, the mean and standard deviation of the surface distances were calculated over 80275 (19 × 4225), 67600 (16 × 4225), 54925 (13 × 4225), 42250 (10 × 4225), 29575 (7 × 4225), and 16900 (4 × 4225) surface distances, respectively. The mean Hausdorff distances were also calculated over the 19, 16, 13, 10, 7, and 4 final shapes, respectively. The results are shown in Fig. 20(a)–(c).

Fig. 20.

Fig. 20

(a), (b), and (c) are the mean surface distances, standard deviations of surface distances, and mean Hausdorff distances in the first set of experiments. (d), (e), and (f) are the mean surface distances, standard deviations of surface distances, and mean Hausdorff distances in the second set of experiments.

The second set of comparisons was conducted using 13 segmentation datasets Di, i = 1, 2, …, 13, while varying the number of model datasets over 12, 15, and 18. The datasets were also randomly selected using SPSS software. Three groups of datasets were used to conduct 39 experiments. Again, the target dataset was excluded from the training datasets. The first group consisted of Di, i = 1, 2, …, 13, and Mi, i = 1, 2, …, 13; each experiment was conducted by choosing one segmentation dataset from Di, i = 1, 2, …, 13, and the remaining 12 model datasets from Mi, i = 1, 2, …, 13. The second group consisted of Di, i = 1, 2, …, 13, and Mi, i = 1, 2, …, 16; each experiment was conducted by choosing one segmentation dataset from Di, i = 1, 2, …, 13, and the remaining 15 model datasets from Mi, i = 1, 2, …, 16. The third group consisted of Di, i = 1, 2, …, 13, and Mi, i = 1, 2, …, 19; each experiment was conducted by choosing one segmentation dataset from Di, i = 1, 2, …, 13, and the remaining 18 model datasets from Mi, i = 1, 2, …, 19. In each group, 13 experiments were conducted with each segmentation approach. The means and standard deviations of the surface distances were calculated over 54925 (13 × 4225) surface distances, and the mean Hausdorff distances were calculated over the 13 final shapes in each group. The results are shown in Fig. 20(d)–(f).

6.2 Results

The results (Fig. 20) showed that our BIWASM-CWBI approach outperformed the others in each of the six groups (the first set of comparisons) and in each of the three groups (the second set of comparisons). They also indicated that more accurate results were achieved with more training datasets, and that our BIWASM-CWBI approach was capable of capturing the outer surface of thin bones (1 mm) in the skull model. Fig. 21 shows the visualization of the evolving shapes and the ground truth in a single experiment, which belonged to the first set of experiments and was based on D17 and Mi, i = 1, 2, …, 16, 18, 19.

Fig. 21.

Fig. 21

The initial and final shapes (blue meshes) calculated by the four approaches using the segmentation dataset (D17) and 18 model datasets. (a) The left half of the anterior surface of the skull is segmented and extracted manually from the ground truth of bone surfaces to validate the results. It is visualized as the red mesh in the remaining subfigures. (b) and (e) are the initial and final shapes in ASM-RBI, respectively. (c) and (f) are the initial and final shapes in WASM-RBI, respectively. Note that (b) and (c) are the same, since both are calculated using RBI under PDM and WDM, respectively. (d) is the initial shape in both WASM-CWBI and BIWASM-CWBI. The final shapes in WASM-CWBI and BIWASM-CWBI are shown in (g) and (h), respectively.

Finally, the computational times of the 4 approaches are presented in Table 2. They were measured in the first set of experiments based on D17 and Mi, i = 1, 2, …, 16, 18, 19, on an Intel i7 2.8 GHz computer with 4 GB of RAM. The results revealed that the computational time of our approach was comparable with that of ASM-RBI and significantly shorter than those of WASM-RBI and WASM-CWBI.

Table 2.

The Computation Times in the First Experiment

ASM-RBI WASM-RBI WASM-CWBI BIWASM-CWBI
164s 575s 618s 205s

7 Discussion

The correspondence of landmarks over all the training shapes can be properly constructed, both structurally and geometrically, by using patch decomposition and mesh subdivision. In Section 3, the training shapes are extracted from the ground truths of bone surfaces. Each training shape is partitioned into several polygon-like patches, which can be customized according to the anatomical structures of the training shapes. When the boundaries of the patches are chosen along high-curvature ridges and edges of the training shapes, structural shape correspondence is created. When each patch is characterized by a smooth surface and has barely any prominent features, the correspondence of landmarks among the corresponding patches can be constructed geometrically by regular subdivision. The shape correspondence constructed by patch decomposition and regular subdivision can be completed in a short time compared with the description length approach for model building in [46], which may take hours to days to build a 3D SSM.

The shortest-path computation used to create patch boundaries differs from other studies. In [43,49–51], Dijkstra's algorithm is used to define patches, which leads to zigzag boundaries along the patches. Barycentric mapping, used to calculate the parameterization of patches, is simple and fast, but it requires the boundary of a patch to be fixed onto a convex topology. If the boundary of the convex topology does not reflect the 3D boundary of the patch, high distortion of the parameterization mapping occurs near the boundary [52]. The shortest-path calculation we apply in this study prevents this, since the patch boundaries resemble straight lines on the mesh. Regarding computational time, each shortest path in Fig. 5(b) requires 2s to 5s, so patch decomposition on each training shape can be completed in less than a minute.

WDM provides more dimensions and captures more local features than PDM when modeling a shape. However, this does not mean that WASM can always outperform ASM; two observations from our results demonstrate this. First, as found in the studies of Davatzikos et al. [38] and Nain et al. [39], there are occasionally no significant differences in mean distance between ASM and WASM when the number of models is increased. Our results also demonstrate this point: the mean distances of ASM-RBI and WASM-RBI show no significant differences when at least 15 models are used [shown in Fig. 20(a) and (d)]. Second, the STD distance and Hausdorff distance of WASM-RBI are larger than those of ASM-RBI. Visualizing the results of a single case in Fig. 21, the shape at the boundary is distorted in both WASM-RBI and ASM-RBI. WASM-RBI has more dimensions with which to capture this distortion, whereas ASM-RBI has fewer dimensions and instead constrains it. Therefore, ASM-RBI has a smaller Hausdorff distance than WASM-RBI. Since WASM-RBI still captures local details better than ASM-RBI, this explains why the range (STD) of surface distance in WASM-RBI can be larger than that in ASM-RBI.
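The zigzag behavior of Dijkstra-based patch boundaries noted above can be made concrete: Dijkstra's algorithm [30] restricts a path to existing mesh edges, whereas an exact polyhedral shortest path [28] may cut across faces and therefore resembles a straight line on the mesh. A minimal edge-graph Dijkstra is sketched below; the adjacency-list representation and function name are illustrative, not the paper's implementation:

```python
import heapq

def dijkstra(adj, src, dst):
    """Shortest path restricted to mesh edges (Dijkstra's algorithm).

    adj: {vertex: [(neighbor, edge_length), ...]} -- the mesh edge graph.
    Returns (total length, vertex path from src to dst)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return dist[dst], path[::-1]

# Tiny edge graph: the edge-restricted optimum A-B-D has length 2.0, even if
# a geodesic crossing the interior of a face could be shorter on the surface.
adj = {"A": [("B", 1.0), ("C", 2.5)], "B": [("A", 1.0), ("D", 1.0)],
       "C": [("A", 2.5), ("D", 0.4)], "D": [("B", 1.0), ("C", 0.4)]}
length, path = dijkstra(adj, "A", "D")
```

Because every step of such a path must follow a mesh edge, the resulting boundary inherits the local orientation of the triangulation, which is the source of the zigzag artifact.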

According to the results shown in Fig. 20, our approach BIWASM-CWBI achieves a mean surface distance as low as 0.25mm and a standard deviation of surface distance of less than 0.2mm. Due to the limitations of CBCT, the transition of image intensities from soft tissue to bone is very smooth, and the thinnest visible anterior wall in a CBCT image can be as small as 1mm (footnote 6). This means that our approach can capture the outer surface of the thin maxillary bones without significant loss of shape in the skull model. Quantitatively, we can expect BIWASM-CWBI to incur surface errors of 0.25 ± 0.2mm when performing any kind of surgical planning and simulation in a CASS system. Based on our clinical experience and published literature [53–56], a surface error of less than 0.5mm has no clinical significance. Therefore, we demonstrate that our approach BIWASM-CWBI is robust for building CBCT skull models for CASS.
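The reported statistics can be computed as summaries of point-to-surface distances. A minimal sketch follows, assuming both surfaces are densely sampled as point sets so that nearest-point distance approximates surface distance; the function names are illustrative:

```python
import numpy as np

def surface_distances(A, B):
    """For each point in A, the distance to its nearest point in B (brute force)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return d.min(axis=1)

def distance_summary(A, B):
    """Mean, STD, and symmetric Hausdorff distance between two sampled surfaces."""
    d_ab = surface_distances(A, B)
    d_ba = surface_distances(B, A)
    d = np.concatenate([d_ab, d_ba])
    return d.mean(), d.std(), max(d_ab.max(), d_ba.max())

# Two sampled planar "surfaces" offset by 0.25 along z:
A = np.array([[x, y, 0.0] for x in range(3) for y in range(3)])
B = A + [0.0, 0.0, 0.25]
mean_d, std_d, hausdorff = distance_summary(A, B)
```

On real meshes a k-d tree (or point-to-triangle distance) would replace the brute-force pairwise computation, but the summary statistics are formed the same way.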

8 Conclusion

We develop a segmentation approach, BIWASM, and an initialization approach, CWBI, to calculate the outer surface of the anterior wall of the maxilla. Two sets of experiments on 19 CBCT datasets are designed to validate our segmentation approach, and three other model-based segmentation approaches are applied for comparison. Our proposed approach outperforms the other three approaches in both sets of experiments, achieving a surface error of 0.25 ± 0.2mm. This small degree of deviation has no clinical significance.

A Composite Lifting Step

Assume a rectilinear grid is applied; Fig. 22 shows an example. r and c denote the row and column indices. The s-lifting step in the row direction using (13) is

\[ v_{r,c} \leftarrow \alpha\, v_{r,c} + \tilde{\alpha}\,(e_{r,c} + e_{r,c+1}) \tag{A.1} \]
\[ \tilde{e}_{r+i,c} \leftarrow \alpha\, \tilde{e}_{r+i,c} + \tilde{\alpha}\,(f_{r+i,c} + f_{r+i,c+1}) \tag{A.2} \]

Similarly, the s-lifting step in the column direction using (13) can be written as

\[ v_{r,c} \leftarrow \alpha\, v_{r,c} + \tilde{\alpha}\,(\tilde{e}_{r,c} + \tilde{e}_{r+1,c}) \tag{A.3} \]
\[ e_{r,c+i} \leftarrow \alpha\, e_{r,c+i} + \tilde{\alpha}\,(f_{r,c+i} + f_{r+1,c+i}) \tag{A.4} \]

Combined with (A.1), (A.3) can be written as

\[ v_{r,c} \leftarrow \alpha^{2}\, v_{r,c} + 4\tilde{\alpha}^{2}\, \bar{f}_{r,c} + 4\alpha\tilde{\alpha}\, \bar{e}_{r,c} \tag{A.5} \]

where \(\bar{e}_{r,c} \triangleq \frac{\tilde{e}_{r,c} + \tilde{e}_{r+1,c} + e_{r,c} + e_{r,c+1}}{4}\) and \(\bar{f}_{r,c} \triangleq \frac{f_{r,c} + f_{r+1,c} + f_{r,c+1} + f_{r+1,c+1}}{4}\). (A.5) is the composite s-lifting step on the vertex point. (A.2) and (A.4) become the s-lifting steps of the edge points around the vertex \(v_{r,c}\).
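As a sanity check, the composite step (A.5) can be verified numerically against sequential row and column s-lifting at a single interior vertex. The sketch below uses scalar stand-ins for the vertex, edge, and face coefficients (the variable names are illustrative):

```python
import random

random.seed(1)
a, at = 0.7, 0.3                              # alpha and alpha-tilde weights
v = random.random()                           # vertex point v_{r,c}
e = [random.random() for _ in range(2)]       # horizontal edges e_{r,c}, e_{r,c+1}
et = [random.random() for _ in range(2)]      # vertical edges e~_{r,c}, e~_{r+1,c}
f = [[random.random() for _ in range(2)]      # faces f_{r+i,c+j}, i,j in {0,1}
     for _ in range(2)]

# Row s-lifting: (A.1) on the vertex, (A.2) on the vertical edges.
v_row = a * v + at * (e[0] + e[1])
et_row = [a * et[i] + at * (f[i][0] + f[i][1]) for i in range(2)]

# Column s-lifting (A.3) applied to the row-lifted values.
v_seq = a * v_row + at * (et_row[0] + et_row[1])

# Composite step (A.5) with the averaged edge and face terms.
e_bar = (et[0] + et[1] + e[0] + e[1]) / 4
f_bar = (f[0][0] + f[1][0] + f[0][1] + f[1][1]) / 4
v_comp = a**2 * v + 4 * at**2 * f_bar + 4 * a * at * e_bar

assert abs(v_seq - v_comp) < 1e-12            # the two formulations agree
```

Expanding the sequential form algebraically reproduces (A.5): the cross terms collect into \(4\alpha\tilde{\alpha}\,\bar{e}_{r,c}\) and the four face contributions into \(4\tilde{\alpha}^{2}\bar{f}_{r,c}\).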

Fig. 22. An example of a rectilinear grid.

B User Interaction for Initialization

The user interaction can be implemented by incorporating the CBCT volumetric image and its skull shape [Fig. 23(a)]. This skull shape is calculated by the simplest approach, thresholding segmentation followed by the Marching Cubes algorithm, and is only used to provide a reference for user interaction. Only one slice of the volumetric image is visualized in 3D space at a time; the view can be switched to the previous or next slice, and the skull shape can be temporarily hidden to visualize the internal content of the slice. Therefore, two objects help identify a desired landmark point: the rough skull model and the slices of the volumetric image. Landmark points are pinpointed on the image slice, and their coordinates are recorded.
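A minimal sketch of the thresholding step that produces the rough reference model is given below (pure NumPy; the subsequent Marching Cubes surface extraction is omitted, and the threshold level is an illustrative assumption rather than a calibrated bone threshold):

```python
import numpy as np

def threshold_mask(volume, level):
    """Binary bone mask by global intensity thresholding."""
    return volume >= level

def boundary_voxels(mask):
    """Voxels of the mask that touch background along one of the six
    axis-aligned neighbors -- a rough stand-in for the bone surface."""
    core = np.ones_like(mask)
    for axis in range(3):
        for shift in (-1, 1):
            nb = np.roll(mask, shift, axis=axis)
            # treat out-of-volume neighbors as background
            idx = [slice(None)] * 3
            idx[axis] = 0 if shift == 1 else -1
            nb[tuple(idx)] = False
            core &= nb
    return mask & ~core

# Synthetic volume: a bright 3x3x3 cube inside a dark background.
vol = np.zeros((7, 7, 7))
vol[2:5, 2:5, 2:5] = 1000.0
mask = threshold_mask(vol, level=500.0)
shell = boundary_voxels(mask)          # only the cube's outer layer remains
```

A Marching Cubes implementation would then triangulate the iso-surface of `mask` (or of `vol` at the threshold level) to produce the renderable skull shape used as the reference.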

Fig. 23.

(a) Selection of landmark points for initialization using user interaction. (b) The same user interaction, but with the skull shape hidden to visualize the internal content of the slice.

Footnotes

1

Facial bones include the maxilla, zygomatic bone, lacrimal bone, nasal bone, and vomer in the lower part of the cranium, and the temporal bone, sphenoid bone, parietal bone, and frontal bone in the upper part of the cranium.

2

When the shortest path cannot be found, several auxiliary points can be placed in between and connected by shortest paths to form a path.

3

It is equivalent to performing 2D DWT on a 2D image, where the image intensity is each of the coordinates (e.g., the x-coordinate) of the landmarks.

4

It can be shown that the mean shapes of PDM and WDM are exactly the same given that the same training shapes are used, i.e., \(\bar{x}\) [see (2)] \(= W^{-1}(\{\bar{w}_l\})\).

5

The true shape means the most ideal shape

6

This measurement is based on our experience with CBCT datasets. The real bone thickness of the anterior maxillary wall can be smaller than 1mm.

References

  • [1].Gateno J, Xia JJ, Teichgraeber JF, Christensen AM, Lemoine JJ, Liebschner MA, Gliddon MJ, Briggs ME. Clinical feasibility of computer-aided surgical simulation (CASS) in the treatment of complex cranio-maxillofacial deformities. J. Oral Maxillofac. Surg. 2007;65:728–34. doi: 10.1016/j.joms.2006.04.001. [DOI] [PubMed] [Google Scholar]
  • [2].Swennen GRJ, Barth E-L, Eulzer C, Schutyser F. The use of a new 3D splint and double CT scan procedure to obtain an accurate anatomic virtual augmented model of the skull. International journal of oral and maxillofacial surgery. 2007;36(2):146–52. doi: 10.1016/j.ijom.2006.09.019. [DOI] [PubMed] [Google Scholar]
  • [3].Xia JJ, Gateno J, Teichgraeber JF. Three-dimensional computer-aided surgical simulation for maxillofacial surgery. Atlas Oral Maxillofac. Surg. Clin. North Am. 2005;13(1):25–39. doi: 10.1016/j.cxom.2004.10.004. [DOI] [PubMed] [Google Scholar]
  • [4].Bell WH, Guerrero CA. Distraction Osteogenesis of the Facial Skeleton. BC Decker; 2006. [Google Scholar]
  • [5].Ferencz C, Greco J. A Method for the Three-dimensional Study of Pulmonary Arteries. Chest. 1970;57(5):428–434. doi: 10.1378/chest.57.5.428. [DOI] [PubMed] [Google Scholar]
  • [6].Herman GT, Liu HK. Three-dimensional display of Human Organs from Computed Tomograms. Computer Graphics and Images Processing. 1979;9(1):1–21. [Google Scholar]
  • [7].Pickering RS, Hattery RR, Hartman GW, Holley KE. Computed tomography of the excised kidney. Radiology. 1974;113:643–647. doi: 10.1148/113.3.643. [DOI] [PubMed] [Google Scholar]
  • [8].Altobelli DE, Kikinis R, Mulliken JB, Cline H, Lorensen W, Jolesz F. Computer-assisted three-dimensional planning in craniosurgical planning. Plast. Reconstr. Surg. 1993;92(4):576–85. [PubMed] [Google Scholar]
  • [9].Bill J, Reuther JF, Betz T, Dittmann W, Wittenberg G. Rapid prototyping in head and neck surgery planning. J. Craniomaxillofac. Surg. 1996;24:20–29. [Google Scholar]
  • [10].Xia J, Ip HHS, Samman N, Wang D, Kot CSB, Yeung RWK, Tideman H. Computer-assisted three-dimensional surgical planning and simulation: 3D virtual osteotomy. International Journal of Oral & Maxillofacial Surgery. 2000;29(1):11–17. [PubMed] [Google Scholar]
  • [11].Xia JJ, Ip HHS, Samman N, Wong HTF, Gateno J, Wang D, Yeung RWK, Kot CSB, Tideman H. Three-dimensional virtual-reality surgical planning and soft-tissue prediction for orthognathic surgery. IEEE Trans. Inf. Technol. Biomed. 2001;5(2):97–107. doi: 10.1109/4233.924800. [DOI] [PubMed] [Google Scholar]
  • [12].Zachow S, Hege HC, Deuflhard P. Computer-assisted planning in craniomaxillofacial surgery. J. Computing. and Inf. Technol. 2006;14(1):53–64. [Google Scholar]
  • [13].Scarfe WC, Farman AG, Levin MD, Gane D. Essentials of Maxillofacial Cone Beam Computed Tomography. Alpha Omegan. 2010;103(2):62–67. doi: 10.1016/j.aodf.2010.04.001. [DOI] [PubMed] [Google Scholar]
  • [14].Quereshy FA, Savell TA, Palomo JM. Applications of cone beam computed tomography in the practice of oral and maxillofacial surgery. J. Oral Maxillofac. Surg. 2008;66(4):791–6. doi: 10.1016/j.joms.2007.11.018. [DOI] [PubMed] [Google Scholar]
  • [15].Hechler SL. Cone-Beam CT: Applications in Orthodontics. Dental Clinics of North America. 2008;52(4):809–823. doi: 10.1016/j.cden.2008.05.001. [DOI] [PubMed] [Google Scholar]
  • [16].White SC, Pharoah MJ. Oral Radiology: Principles and Interpretation. Mosby; St. Louis, MI: 2003. [Google Scholar]
  • [17].Scarfe WC, Farman AG, Sukovic P. Clinical applications of cone-beam computed tomography in dental practice. J. Can. Dent. Assoc. 2006;72(1):75–80. [PubMed] [Google Scholar]
  • [18].Tucker S, Cevidanes LHS, Styner M, Kim H, Reyes M, Proffiit W, Turvey T. Comparison of actual surgical outcomes and 3-dimensional surgical simulations. J. Oral Maxillofac. Surg. 2010;68(10):2412–21. doi: 10.1016/j.joms.2009.09.058. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [19].Cevidanes LHC, Tucker S, Styner M, Kim H, Chapuis J, Reyes M, Proffit W, Turvey T, Jaskolka M. Three-dimensional surgical simulation. Am. J. Orthod. Dentofacial. Orthop. 2010;138(3):361–71. doi: 10.1016/j.ajodo.2009.08.026. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [20].Swennen GRJ, Mollemans W, Schutyser F. Three-dimensional treatment planning of orthognathic surgery in the era of virtual imaging. J. Oral Maxillofac. Surg. 2009;67(10):2080–92. doi: 10.1016/j.joms.2009.06.007. [DOI] [PubMed] [Google Scholar]
  • [21].Swennen G, Schutyser F. Three-dimensional cephalometry: Spiral multi-slice vs cone-beam computed tomography. Am. J. Orthod. Dentofacial. Orthop. 2006;130(4):410–416. doi: 10.1016/j.ajodo.2005.11.035. [DOI] [PubMed] [Google Scholar]
  • [22].Babalola K, Cootes T. AAM Segmentation of the Mandible and Brainstem. In Proceedings of the 3rd Workshop on 3D Segmentation in the Clinic: MICCAI.2009. [Google Scholar]
  • [23].Kainmueller D, Lamecker H, Seim H, Zachow S, Antipolis S. Multi-object Segmentation of Head Bones. MIDAS Journal, Contribution to MICCAI Workshop - Head and Neck Auto-Segmentation Challenge [Google Scholar]
  • [24].Zachow S, Lamecker H, Elsholtz B, Stiller M. Reconstruction of mandibular dysplasia using a statistical 3D shape model. International Congress Series. 2005;1281:1238–1243. [Google Scholar]
  • [25].Kainmueller D, Lamecker H, Seim H, Zinser M, Zachow S. MICCAI 2009, Part II. 2009. Automatic Extraction of Mandibular Nerve and Bone from Cone-Beam CT Data; pp. 76–83. [DOI] [PubMed] [Google Scholar]
  • [26].Dupillier MP. IDEAL'09. 2009. An Automatic Segmentation and Reconstruction of Mandibular Structures from CT-Data; pp. 649–655. [Google Scholar]
  • [27].Loubele M, Maes F, Schutyser F, Marchal G, Jacobs R, Suetens P. Assessment of bone segmentation quality of cone-beam CT versus multislice spiral CT: a pilot study. Oral surgery, oral medicine, oral pathology, oral radiology, and endodontics. 2006;102(2):225–34. doi: 10.1016/j.tripleo.2005.10.039. [DOI] [PubMed] [Google Scholar]
  • [28].Chen J, Han Y. Proceedings of the sixth annual symposium on Computational geometry - SCG '90. ACM Press; New York, New York, USA: 1990. Shortest paths on a polyhedron; pp. 360–369. [Google Scholar]
  • [29].Floater MS, Hormann K, Reimers M. Approximation Theory X: Abstract and Classical Analysis. 2002. Parameterization of Manifold Triangulations; pp. 197–209. [Google Scholar]
  • [30].Dijkstra EW. A note on two problems in connexion with graphs. Numerische Mathematik. 1959;1:269–271. [Google Scholar]
  • [31].Hormann K, Lévy B, Sheffer A. SIGGRAPH 2007 Course Notes. ACM; New York, NY, USA: 2007. Siggraph Course Notes Mesh Parameterization: Theory and Practice; pp. 1–122. [Google Scholar]
  • [32].Catmull E, Clark J. Recursively generated B-spline surfaces on arbitrary topological meshes. Computer-Aided Design. 1978;10:350–355. [Google Scholar]
  • [33].Taylor CJ, Cooper DH, Graham J. Training Models of Shape from Sets of Examples. Proc. British Machine Vision Conference.1992. pp. 9–18. [Google Scholar]
  • [34].Goodall C. Procrustes Methods in the Statistical Analysis of Shape. Journal of the Royal Statistical Society. 1991;53(2):285–339. [Google Scholar]
  • [35].Horn BKP. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A. 1987;4(4):629–642. [Google Scholar]
  • [36].Bertram M, Duchaineau M. a., Hamann B, Joy KI. Generalized B-spline subdivision-surface wavelets for geometry compression. IEEE transactions on visualization and computer graphics. 2004;10(3):326–38. doi: 10.1109/TVCG.2004.1272731. [DOI] [PubMed] [Google Scholar]
  • [37].Daubechies I, Sweldens W. Factoring wavelet transforms into lifting steps. The Journal of Fourier Analysis and Applications. 1998;4(3):247–269. [Google Scholar]
  • [38].Davatzikos C, Tao X, Shen D. Hierarchical active shape models, using the wavelet transform. IEEE transactions on medical imaging. 2003;22(3):414–23. doi: 10.1109/TMI.2003.809688. [DOI] [PubMed] [Google Scholar]
  • [39].Nain D, Haker S, Bobick A, Tannenbaum A. Multiscale 3-D shape representation and segmentation using spherical wavelets. IEEE Transactions on Medical Imaging. 2007;26(4):598–618. doi: 10.1109/TMI.2007.893284. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [40].Li Z, Ning X, Wang Z. A Fast Segmentation Method for STL Teeth Model. 2007. pp. 163–166. [Google Scholar]
  • [41].Essafi S, Langs G, Deux J-F, Rahmouni A, Bassez G, Paragios N. 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro. 2009. Wavelet-driven knowledge-based MRI calf muscle segmentation; pp. 225–228. [Google Scholar]
  • [42].Cootes T, Taylor CJ, Cooper DH, Graham J. Active Shape Models-Their Training and Application. Computer Vision and Image Understanding. 1995;61(1):38–59. [Google Scholar]
  • [43].Lamecker H, Lange T, Seebass M. Segmentation of the Liver using a 3D Statistical Shape Model. ZIB-Report 04-09; Zuse Institute Berlin: 2004. [Google Scholar]
  • [44].Essafi S, Langs G, Paragios N. MICCAI 2009, Part II, LNCS 5762. 2009. Left Ventricle Segmentation Using DiffusionWavelets and Boosting; pp. 919–926. [DOI] [PubMed] [Google Scholar]
  • [45].Fripp J, Warfield S, Crozier S, Ourselin S. Automatic Segmentation of the Knee Bones using 3D Active Shape Models. 18th International Conference on Pattern Recognition (ICPR'06).2006. pp. 167–170. [Google Scholar]
  • [46].Davies RH, Twining CJ, Cootes TF, Taylor CJ. Building 3D Statistical Shape Models by Direct Optimisation. IEEE Transactions on Medical Imaging. 2009;29(4):961–981. doi: 10.1109/TMI.2009.2035048. [DOI] [PubMed] [Google Scholar]
  • [47].Ma JW, Fan YH. Face segmentation algorithm based on ASM. 2009. [Google Scholar]
  • [48].de Bruijne M, van Ginneken B, Viergever M. a., Niessen WJ. Adapting Active Shape Models for 3D segmentation of tubular structures in medical images. Information processing in medical imaging : proceedings of the … conference. 2003;18:136–47. doi: 10.1007/978-3-540-45087-0_12. [DOI] [PubMed] [Google Scholar]
  • [49].Zöckler M, Stalling D, Hege H-C. Fast and intuitive generation of geometric shape transitions. The Visual Computer. 2000;16(5):241–253. [Google Scholar]
  • [50].Lamecker H, Seebaß M, Hege H.-c., Deuflhard P. Proc. SPIE Medical Imaging 2004: Image Processing. 2004. A 3D Statistical Shape Model Of The Pelvic Bone For Segmentation; pp. 1341–1351. [Google Scholar]
  • [51].Dalal P, Ju L, Mclaughlin M, Zhou X, Fujita H, Wang S. ICCV09. Kyoto: 2009. 3D Open-Surface Shape Correspondence for Statistical Shape Modeling : Identifying Topologically Consistent Landmarks; pp. 1857–1864. [Google Scholar]
  • [52].Lee Y. Mesh parameterization with a virtual boundary. Computers & Graphics. 2002;26(5):677–686. [Google Scholar]
  • [53].Mollemans NN, Daelemans A, Hemelen GV, Schutyser F, Berge S. Virtual occlusion in planning orthognathic surgical procedures. Int. J. Oral Maxillofac. Surg. 2010;39:457–462. doi: 10.1016/j.ijom.2010.02.002. [DOI] [PubMed] [Google Scholar]
  • [54].Chang YB, Xia J, Gateno J, Xiong Z, Zhou X, Wong S. An Automatic and Robust Algorithm of Re-establishment of Digital Dental Occlusion. IEEE Transactions On Medical Imaging. 2010;29(9):1652–63. doi: 10.1109/TMI.2010.2049526. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [55].Gateno J, Xia JJ, Teichgraeber JF, Rosen A. A new technique for the creation of a computerized composite skull model. J. Oral and Maxillofac. Surg. 2003;61(2):222–227. doi: 10.1053/joms.2003.50033. [DOI] [PubMed] [Google Scholar]
  • [56].Xia JJ, Gateno J. Accuracy of a Computer-Aided Surgical Simulation (CASS) System in the Treatment of Complex Cranio-Maxillofacial Deformities: A Pilot Study. J Oral Maxillofac Surg. 2007;65(2):248–54. doi: 10.1016/j.joms.2006.10.005. [DOI] [PubMed] [Google Scholar]
