Abstract
In developing a treatment plan before complete denture restoration, doctors need to help the patient regain chewing ability while considering facial shape reconstruction after the treatment. At present, facial deformation prediction depends on the subjective judgment and experience of doctors; thus, an accurate basis for scientific quantitative analysis is lacking. With the development of computer technology, this paper proposes a new facial morphology prediction method based on principal component analysis. First, a curvature feature template with few feature points is constructed to represent the deformed areas of facial models. Second, the principal component analysis method is used to construct an elastic deformation prediction model for complex skin tissue. Finally, the Laplacian deformation technology is used to reconstruct the facial model and to obtain an intuitive digital 3D model. This method can adjust the facial deformation amplitude interactively by controlling shape parameters and can predict the effect in consideration of different doctors' varied needs and habits. The experiments show that this method can predict the facial models interactively and that, by adjusting the shape parameters, the average deviation between the prediction models and the post-treatment facial models is between −2.102 and 2.102 mm.
Keywords: Curvature feature template, Laplacian deformation, Principal component analysis, Facial morphology prediction
Graphical abstract
Facial morphology prediction based on principal component analysis: firstly, feature templates are constructed on preoperative model and postoperative model; secondly, an elastic deformation prediction model of soft tissue is constructed by principal component analysis; finally, the deformation simulation of the edentulous model is performed by using the Laplacian deformation technology.
Highlights
• This paper proposes a new prediction method for facial morphology after complete denture restoration.
• This method can interactively predict facial soft tissue deformation quickly and accurately.
• Principal component analysis is applied for the first time to predicting facial morphology after complete denture restoration.
• Doctors can make personalized treatment schemes for the complete dentures of patients.
1. Introduction
Many specialized oral therapies require doctors to predict facial deformation after treatment. This prediction is quickly applied to the treatment process to develop a treatment plan. It is also digitally shown to patients. Thus, rapid prediction of facial deformation is a vital requirement.
Among many oral treatments, complete denture restoration best reflects this need. Complete denture restoration is mainly performed on edentulous patients. In developing the best treatment plan before the procedure, doctors need to help the patient regain chewing ability while restoring a full facial appearance. At present, facial deformation prediction depends on the subjective judgment and experience of doctors; thus, an accurate basis for scientific quantitative analysis is lacking. This article focuses on facial shape reconstruction after complete denture restoration.
Numerous techniques for the computer simulation of deformable objects have been proposed. These techniques can be generally classified into two categories, namely, deformation technology based on geometric models and deformation technology based on physical models.
The deformation technology based on geometric models can be divided into control point and space deformation methods. In control point deformation methods, curves or surfaces are represented by a series of control points. These methods have high computational efficiency and interactive operational capacity. Thus, they are mainly used in computer-aided geometric design. For example, Hoch et al.1 proposed an adaptive face normalization model technique that uses B-spline surfaces to fit face data acquired through three-dimensional (3D) laser scanning and combines the Facial Action Coding System to simulate changes in facial expressions. However, these methods are not used to calculate deformation during surgical applications and are thus rarely found in surgical simulation systems. Space deformation methods primarily employ skeleton lines, free-form deformation (FFD), and other approaches for deformation. Users define the transformation matrices of a given series of control handles, and the transformation is transferred to each vertex of the model in accordance with a weight distribution. For example, Parke2 simulated facial animation directly on a parametric surface. Liu et al.3 mapped the parameterized mesh of a standard face model to a cartoon face template and matched the input face to the face template to obtain the vertex coordinates of the cartoon face model. The face model is then deformed by deforming the template. Although free-form deformation has important design advantages, its major drawback is the difficulty of expressing the mechanical behavior of a physical model in terms of its mathematical parameters. In addition, the application of free-form deformation to voxel models is complicated.
The deformation technology based on physical models can be used to solve problems wherein the deformation of a non-physical model lacks quantifiable realism. Two physically based computational models for human tissue deformation in surgical simulations exist: the mass spring model and the finite element model (FEM). (1) The basic principle underlying the use of the mass spring model to simulate deformed objects is the construction of a two-dimensional (2D)/3D mesh. In this model, the nodes are set as particles, and the edges are set as springs. This method is easy to implement and does not require solving a stiffness matrix. Numerous researchers have utilized the mass spring model in various surgical applications because it has a simple structure and incurs low computational cost during interaction. Teschner et al.4 proposed a multilayer nonlinear mass spring model that is based on static constraints to calculate the equilibrium state of the model directly. Keeve et al.5 constructed a facial multilayer particle spring structure that includes a skin layer, a muscle layer, and a skull layer. They then determined the core of the facial structure and found the intersection points of the remaining two layers through the vertex connections between the core and the skin layer model. They reconstructed the topological structure by taking the intersections as the insertion points of each layer and finally obtained a complete mass spring model. This method can only address small deformations of physical models and is incapable of handling large deformations. (2) The FEM can simulate facial tissue organization by using various element types and shape functions.6, 7, 8 Kim et al.9 developed a novel three-stage FEM that incorporates real tissue slip to improve the simulation of facial soft tissue deformation.
Mendizabal et al.10 used a surface-based smoothed FEM of four-vertex voxel elements to overcome the excessive stiffness of soft tissue deformation caused by the traditional FEM and to improve the solution accuracy and convergence speed of solid mechanics problems. Keeve et al.5 constructed a FEM on the basis of feature point displacement to imitate the displacements of facial soft tissues after skull position adjustment. They represented the FEM element with a triangular prism whose two bases lie on the meshes of the skin layer and the skull layer. Based on the displacement of feature points, the internal force can be obtained by applying Hooke's law, and the displacement of the skin layer in reaction to the force is solved from the differential equation of the system. Chabanas et al.11 used the FEM to simulate skull changes and facial deformations after maxillofacial treatment and to simulate facial functional behaviors. Zachow et al.12 proposed a FEM comprising tetrahedron elements for simulating soft tissue to improve calculation speed and attempted to apply the model in clinical settings. Gladilin13 employed a FEM of tetrahedron elements to simulate facial muscles. Nevertheless, the FEM is computationally expensive and is unsuitable for animation simulations.
Many scholars are now studying the application of deep neural networks in facial recognition and facial feature extraction with the development of artificial intelligence. Deep neural networks are artificial neural networks that can imitate the human way of thinking in data reorganization to obtain high-dimensional and abstract data features.14, 15, 16, 17, 18, 19, 20 In 2017, Park et al.21 proposed a facial alignment method that applies a deep neural network with local feature learning and cyclic regression. This method is based on the convolutional neural network and can automatically learn local feature descriptions from a constructed local facial landmark database. It produces more competitive facial alignment results by studying faces with low-level compositional characteristics. Rhee et al.22 proposed a facial recognition technique that combines 3D model information and 2D color information in deep convolutional neural networks. In this technique, 2D and 3D facial models are input into separate convolution layers to improve low-level kernel learning in accordance with their different characteristics. Subsequently, the learned low-level kernels are merged and input into the upper layer of the deep neural network. Zhang et al.23 proposed a pose-invariant facial expression recognition method that is based on principal component analysis and the pose-robust features learned by a convolutional neural network. First, frontal facial images are studied through principal component analysis. The learned features are employed as the targets of convolutional neural networks to map frontal and non-frontal facial image features. The mapping results are then used to describe non-frontal facial images and to obtain a descriptor for any facial image. Finally, the pose-robust features are used to train a single classifier for facial expression recognition.
Zhao et al.24 proposed a fatigue expression recognition structure based on facial dynamic multivariate information images and a dual-mode deep neural network to improve traffic safety. Lei et al.25 used multilayer image description to extend shallow facial descriptions to deeply discriminative facial features. Additional complex facial information is thus extracted, and the discrimination and compactness of the feature representation are improved. Dong et al.26 proposed a deep learning structure for age classification tasks wherein labels that represent age groups are assigned to facial images. This model extracts complex high-level age-related visual properties on the basis of deep convolutional neural networks and predicts the age group of the input facial images. The disadvantage of such methods is that they require collecting a massive amount of real human body measurement data, making the data acquisition process protracted and expensive.
At present, no study has explored the application of principal component analysis to facial morphology after dental treatment. The main challenge of facial deformation in complete denture restoration is the restoration of the lower third of the face. In this paper, the relationship between the morphology of the lower third of the face and complete denture restoration is analyzed, and a method for predicting facial soft tissue deformation after complete denture restoration is proposed based on principal component analysis. First, a feature template is constructed to represent the facial deformation area and to improve computational efficiency. Then, given the non-uniformity, anisotropy, and nonlinear material properties of soft tissue, an elastic deformation prediction model of soft tissue is constructed by principal component analysis to predict the amount of elastic deformation. Finally, the deformation simulation of the edentulous model is performed by using the Laplacian deformation technology in accordance with the predicted new feature template.
2. Materials and methods
The data for edentulous patients are obtained from the School of Stomatology, Peking University, and include 48 sets of facial model data, each of which contains facial models in both the pre-treatment edentulous state and the post-treatment dentigerous state. The tool used to collect the facial models is the FaceScan 3D facial scanner produced by 3D-SHAPE, Germany (scanning speed: 0.3/0.8 s; scanning accuracy: 0.1 mm). In total, edentulous and dentigerous 3D point cloud data of 48 patients are obtained. All patients agreed to participate in the study.
The main objective of this study is to control the facial deformation of edentulous patients after complete denture restoration. The main work flow is shown in Fig. 1, including the following steps:
(1) Model matching. The post-treatment model is matched to the pre-treatment model based on the locations of the eyes and nose, which do not change significantly before and after treatment.
(2) Curvature feature template construction. The curvature feature template is constructed on the surfaces of the edentulous and dentigerous models. This feature template represents the facial deformation areas and provides feature point coordinates. The displacement between corresponding feature points of the two models is the amount of elastic deformation.
(3) Soft tissue elasticity prediction model construction. The elastic deformations are taken as the initial data, and their principal components are extracted. Combined with the initial positions of the feature points of the edentulous model, an elastic prediction model based on principal component analysis is constructed.
(4) Prediction and deformation. Deformation simulation of the edentulous facial model is performed by using the output predicted by principal component analysis and the Laplacian deformation technology.
(5) Verification. The prediction model is compared with the dentigerous model after treatment.
Fig. 1.
Pipeline of facial morphology prediction.
2.1. Feature template construction based on curvature feature
The vertices of 3D facial point cloud models acquired by optical scanning have no one-to-one correspondence across models, and the numbers of triangles and vertices differ. To improve the operability and computational efficiency of the data and to simplify the point cloud model, we construct a standardized curvature feature template to represent the facial deformation areas. The most distinctive feature of the facial models is the curvature feature, which is extracted using the average curvature method.
2.2. Curvature solving based on quadratic surface fitting
In differential geometry, curvature is expressed in terms of the curvature circle. If, at each point on a curve, a circle with the same curvature can be found, the radius of that circle is the curvature radius at the point. As shown in Fig. 2, let the radius of the curvature circle at point P on the curve be r; the curvature of the curve at P is then as follows:

κ = 1/r    (1)
Fig. 2.

The curvature circle of the curve.
The case of discrete surfaces is more complicated. The surface consists of triangular faces, as shown in Fig. 3. A quadratic surface is used for fitting. For a vertex v on the surface, the set of its one-ring neighborhood points {v_i} is taken, where n is the number of neighborhood points. Each point v_i is projected onto the tangent plane at v as (u_i, w_i), with height h_i above the plane. The fitted surface is constructed as:

z(u, w) = a u² + b u w + c w²    (2)
Fig. 3.

The discrete surfaces.
According to the principle of least squares, the fitting error of equation (2) is minimized as follows:

E(a, b, c) = Σ_{i=1}^{n} (a u_i² + b u_i w_i + c w_i² − h_i)²    (3)

Equation (3) is differentiated with respect to each coefficient, and each derivative is set to 0:

∂E/∂a = ∂E/∂b = ∂E/∂c = 0    (4)
Solving these simultaneous equations yields the values of the coefficients a, b, and c. According to the differential geometry relationships, the first-order and second-order partial derivatives of the fitted surface are obtained. Combining them with the unit normal vector at vertex v, the average curvature at v is as follows:

H = a + c    (5)
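As a concrete illustration of equations (2)–(5), the sketch below fits the quadric by least squares and reads off the mean curvature as a + c. It assumes the neighborhood points have already been projected into the tangent frame at the vertex; the function name and the synthetic sphere data are illustrative, not from the paper:

```python
import numpy as np

def mean_curvature_quadratic(uv, h):
    """Fit z = a*u^2 + b*u*w + c*w^2 to tangent-plane samples (u_i, w_i, h_i)
    by least squares; the mean curvature at the vertex is then a + c."""
    u, w = uv[:, 0], uv[:, 1]
    A = np.column_stack([u**2, u * w, w**2])   # design matrix of the quadric
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    a, b, c = coef
    return a + c

# Sanity check on a sphere of radius 2 (mean curvature 1/2): sample heights
# near the pole, z = 2 - sqrt(4 - u^2 - w^2) ≈ (u^2 + w^2)/4.
rng = np.random.default_rng(0)
uv = rng.uniform(-0.1, 0.1, size=(12, 2))
h = 2.0 - np.sqrt(4.0 - uv[:, 0]**2 - uv[:, 1]**2)
H = mean_curvature_quadratic(uv, h)
```

For the sphere sample, H comes out close to 0.5, matching the analytic mean curvature 1/R.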
2.3. Feature template creation and mapping
Before constructing the feature template, the pre-treatment edentulous facial model is matched with the post-treatment dentigerous facial model in the same coordinate system under the guidance of a doctor, using the eyes and nose, which change little before and after treatment, as references, as shown in Fig. 4.
Fig. 4.
Model matching.
According to the corrected facial model, a feature template is constructed, as shown in Fig. 5(b). The main workflow process includes the following steps:
(1) Search for the vertex A that is farthest along the Z-axis direction (the nose tip) in the coordinate system shown in Fig. 4.
(2) Obtain the vertical contour through point A in the Y-axis direction, as shown in Fig. 5(a). Feature points 1, 3, 4, and 5 under the nose and lips are located according to the curvature changes, which are determined from the average curvature obtained above; the feature points are identified from the direction of the curvature. In the XY plane, the midpoint 2 of points 1 and 3 is calculated, together with points 6 and 7, the symmetric points of 1 and 2 with respect to point 4.
(3) Obtain the horizontal contour through point A in the X-axis direction, and locate points E and F on the two sides of the nose according to the curvature changes. In the XY plane, calculate the midpoint G of A and E and the midpoint H of A and F.
(4) Proceed as in step (2): obtain the vertical contours in the Y-axis direction through E, G, H, and F, and locate the feature points according to the curvature changes. Finally, a feature template consisting of feature points numbered 1–29 is obtained.
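The curvature-based feature point location in step (2) can be sketched as follows: discrete curvature is evaluated along a sampled profile and its local maxima are taken as candidate feature points. The profile data and function name are illustrative stand-ins, not the paper's clinical data:

```python
import numpy as np

def curvature_extrema(y, z):
    """Discrete curvature of a planar profile z(y) and the indices of its
    local maxima, i.e. candidate feature points such as those under the
    nose and lips."""
    dz = np.gradient(z, y)
    d2z = np.gradient(dz, y)
    kappa = np.abs(d2z) / (1.0 + dz**2) ** 1.5
    # indices where the curvature is a local maximum
    idx = [i for i in range(1, len(kappa) - 1)
           if kappa[i] >= kappa[i - 1] and kappa[i] >= kappa[i + 1]]
    return kappa, idx

# Profile with one sharp bump: curvature peaks at the bump centre (y = 0).
y = np.linspace(-1, 1, 201)
z = np.exp(-(y / 0.1) ** 2)          # narrow Gaussian bump
kappa, idx = curvature_extrema(y, z)
peak_y = y[max(idx, key=lambda i: kappa[i])]
```

The strongest curvature extremum lands at the bump centre, which is the behaviour the feature point search relies on.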
Fig. 5.
Constructing a feature template: (a) Vertical contours over the tip of the nose; (b) Feature template.
2.4. Statistical model based on elastic principal component features
After the eigenvalues and eigenvectors of the sample data are extracted, the principal component analysis method can interactively adjust the facial deformation amplitude by controlling the shape parameters. At present, the evaluation of facial shape recovery after oral treatment is mainly based on the doctor's subjective judgment. The proposed method can predict the effect even when doctors' needs vary, thereby meeting the standards of different doctors to the greatest extent. The technical route is shown in Fig. 6.
Fig. 6.

The technical route of principle component analysis model.
This study uses principal component analysis to obtain the elastic deformation characteristics of the feature template between the edentulous facial model and the dentigerous facial model to construct the prediction model of the facial elastic deformation. The method can interactively adjust the elastic features according to the treatment methods of different users and is beneficial in designing a flexible prediction model.
Principal component analysis is a simple, non-parametric method that extracts relevant information from a confusing data set and is therefore widely used in many multi-domain applications. With very few operations, principal component analysis can reduce complex data sets to lower dimensions, thereby revealing hidden information and simplifying the underlying dynamics. This is valuable here because the anatomical structure of the facial model is complex, its shape is irregular, and every face is different; hence, it is difficult for a computer to extract the useful information quickly and accurately.
Solving principal component analysis requires the use of algebraic methods. The most common method is singular value decomposition, which yields the eigenvalues and eigenvectors. First, the statistical characteristics of the elastic model are constructed. The position differences of the feature points in the curvature feature template between the edentulous and dentigerous facial models are expressed as a vector:

s = (Δx_1, Δy_1, Δz_1, …, Δx_n, Δy_n, Δz_n)^T    (6)

where (Δx_k, Δy_k, Δz_k) is the displacement of the k-th feature point and n is the number of feature points. When the facial deformation is assumed to be linear, the m sample vectors span a linear subspace, which can be represented by the matrix A = (s_1 − s̄, s_2 − s̄, …, s_m − s̄). Any new deformation vector s can then be represented as follows:

s = s̄ + Σ_{i=1}^{m} α_i (s_i − s̄)    (7)

s̄ = (1/m) Σ_{i=1}^{m} s_i    (8)

Principal component analysis is used to eliminate the correlation among the original face data and to reduce the amount of data by calculating the square matrix A^T A. Let v_i be the standard orthonormal eigenvector corresponding to the eigenvalue λ_i of A^T A; then:

A^T A v_i = λ_i v_i,  σ_i = √λ_i    (9)

σ_i is a positive number called a singular value.

The coefficients of the first r (< m) eigenvalues, which follow Gaussian distributions, are taken as the shape parameters, and the corresponding eigenvectors p_i = A v_i / σ_i are used as the basis. Equation (7) can then be rewritten as follows:

s = s̄ + Σ_{i=1}^{r} b_i p_i    (10)

b_i = p_i^T (s − s̄)    (11)

where b = (b_1, …, b_r)^T is the shape parameter vector. The PCA feature extraction procedure is shown as Algorithm 1. The shape parameter b_i conforms to a normal distribution with variance λ_i, and prediction results that better match individualized feature requirements can be obtained by adjusting b_i.
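The statistical model above can be sketched in a few lines of NumPy: singular value decomposition of the centered sample matrix yields the eigenvectors and eigenvalues, and new deformations are synthesized from the leading components. The synthetic random data and the scaling of the shape parameters by √λ_i are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Each training sample is a flattened vector of feature-point displacements
# (the elastic deformation between the edentulous and dentigerous models).
# Synthetic stand-in data: m samples, n coordinates each.
rng = np.random.default_rng(1)
m, n = 45, 87                      # e.g. 29 feature points x 3 coordinates
S = rng.normal(size=(m, n))

s_bar = S.mean(axis=0)             # mean elastic deformation
A = S - s_bar                      # centered data matrix
# SVD of A gives the principal directions (rows of Vt) directly.
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
eigvals = sigma**2 / m             # variances lambda_i of the components

def synthesize(b, r=5):
    """New deformation from the first r components; b_i are dimensionless
    shape parameters, scaled here by sqrt(lambda_i)."""
    return s_bar + (b[:r] * np.sqrt(eigvals[:r])) @ Vt[:r]

s_new = synthesize(np.full(5, 0.3))   # all shape parameters set to +0.3
```

Setting all shape parameters to zero reproduces the mean deformation, which is the sanity check a statistical shape model should pass.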
2.5. Facial deformation based on global Laplacian deformation technology
For the facial point cloud model, many control points must be moved to achieve the required deformation. If only one control point is moved at a time, the amount of calculation is large, because the new coordinates of all points on the deformed object must be recalculated after every move. The preferable approach is to move all control points to their corresponding positions first and then calculate the resulting positions of the remaining points in one step.
2.6. Laplacian coordinates
The original feature points on the edentulous model surface serve as the control points of the deformation. The Laplacian deformation technology is used to calculate the coordinates of all points on the model surface after all control points are moved simultaneously to their new positions. Let M = (V, E, F) denote the edentulous facial model, where V is the set of vertices, E the set of edges, and F the set of triangles. The initial coordinate v_i of each vertex is its absolute coordinate in the Cartesian coordinate system, and the Laplacian coordinate δ_i of vertex i is defined as the difference between the absolute coordinate of the vertex and the centroid of its neighborhood:

δ_i = v_i − (1/d_i) Σ_{j∈N(i)} v_j    (12)

where N(i) = {j | (i, j) ∈ E} and d_i = |N(i)| is the number of points in the neighborhood of vertex i. From the perspective of differential geometry (Fig. 7), equation (12) can be rewritten as follows:

δ_i = (1/d_i) Σ_{j∈N(i)} (v_i − v_j)    (13)
Fig. 7.
Laplacian coordinates: (a) Laplacian coordinates within one ring neighborhood of the vertices; (b) the cotangent weight.
The Laplacian coordinates are thus determined entirely within the one-ring neighborhood of each vertex, which ensures their local correlation.
The transformation between Cartesian coordinates and Laplacian coordinates can be expressed in matrix form. Let A be the adjacency matrix:

A_ij = 1 if (i, j) ∈ E; A_ij = 0 otherwise    (14)

Let D be the diagonal matrix with D_ii = d_i. Then, the absolute coordinates can be transformed into relative coordinates as follows:

Δ = (I − D⁻¹A) V    (15)
It is more convenient to use the symmetric form L_s of the matrix L = I − D⁻¹A, which can be defined as follows:

L_s = D L = D − A    (16)

(L_s)_ij = d_i if i = j; −1 if (i, j) ∈ E; 0 otherwise    (17)
Coordinate transformation can be performed using L_s componentwise; for example, with δ^(x), δ^(y), and δ^(z) denoting the coordinate components of the Laplacian coordinates:

L_s x = D δ^(x)    (18)

L_s y = D δ^(y)    (19)

L_s z = D δ^(z)    (20)

The matrix L_s is called the topological Laplacian operator. Fig. 8 shows an example of the matrix L_s for a simple mesh.
Fig. 8.
The mesh example of Laplacian operator matrix.
Laplacian operators with geometric discretization can approximate surface characteristics well. In contrast to the Laplacian coordinates derived from uniform weights, cotangent weights can be used to assign the weights of the neighboring points, as follows:

δ_i = (1/(2 A_i)) Σ_{j∈N(i)} (cot α_ij + cot β_ij)(v_i − v_j)    (21)

where A_i represents the area of the triangles around vertex i, and α_ij and β_ij represent the two angles opposite the edge (i, j), as shown in Fig. 7(b). The direction of δ_i then approximates the mean-curvature normal. Laplacian coordinates constructed from these geometric weights contain only the normal direction, whereas the previously defined uniform-weight coordinates still contain a tangential component. However, the cotangent weights are less efficient than the uniform weights, so the uniform weights are used in this algorithm.
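A minimal sketch of the uniform-weight Laplacian coordinates used by the algorithm, computed on a single triangle; L_s = D − A is the topological Laplacian operator described above:

```python
import numpy as np

def uniform_laplacian(n, edges):
    """Topological Laplacian L_s = D - A for a mesh with n vertices."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    D = np.diag(A.sum(axis=1))
    return D - A

# A single triangle: each vertex has the other two as its one-ring.
V = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
Ls = uniform_laplacian(3, [(0, 1), (1, 2), (0, 2)])

# delta_i = v_i - (1/d_i) * sum of neighbours, i.e. D^-1 (Ls @ V).
d = np.diag(Ls).copy()
delta = (Ls @ V) / d[:, None]
```

For vertex 0, the neighbourhood centroid is (0.5, 0.5, 0), so its Laplacian coordinate is (−0.5, −0.5, 0), pointing from the centroid toward the vertex.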
2.6.1. Facial reconstruction based on global Laplacian operators
A mesh reconstruction is performed using the differential surface representation defined above to recover the edentulous facial model in the original Cartesian coordinate system. The feature points of the feature template on the edentulous facial model are taken as control points. The statistical model is used to predict the target positions of the control points, under which the edentulous facial model is reconstructed.
When performing the modeling operation, a set of vertices around the deformation area of the edentulous facial model must be fixed as positional constraints:

v′_j = u_j,  j ∈ C    (22)

where C is the index set of constrained vertices and u_j are their prescribed positions.
The Laplacian coordinates of the model are combined with the Laplacian operator to solve for the coordinates of the remaining vertices. At the same time, the constraint points are enforced in the least-squares sense, resulting in the following error function:

E(V′) = Σ_{i=1}^{n} ||δ_i − L(v′_i)||² + Σ_{j∈C} ||v′_j − u_j||²    (23)

This function is minimized to obtain the appropriate vertex coordinates. The quadratic minimization can be transformed into a sparse linear equation system and solved.
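The least-squares reconstruction can be sketched by stacking the Laplacian rows with weighted constraint rows and solving the overdetermined system. The tiny path-graph example and the constraint weight w are illustrative; a real implementation would use sparse matrices, as the text notes:

```python
import numpy as np

def laplacian_reconstruct(Ls, delta, ctrl_idx, ctrl_pos, w=10.0):
    """Least-squares mesh reconstruction: preserve the Laplacian
    coordinates delta while softly pinning control vertices to targets."""
    n = Ls.shape[0]
    C = np.zeros((len(ctrl_idx), n))
    for row, i in enumerate(ctrl_idx):
        C[row, i] = w                            # weighted constraint row
    M = np.vstack([Ls, C])                       # stacked system
    rhs = np.vstack([delta, w * np.asarray(ctrl_pos)])
    V, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return V

# Tiny example: a path 0-1-2 in the plane. Translating both endpoints
# upward should drag the middle vertex along, since Laplacian
# coordinates are translation-invariant.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
Ls = np.diag(A.sum(1)) - A
V0 = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
delta = Ls @ V0                                  # original differential coords
V = laplacian_reconstruct(Ls, delta, [0, 2], [[0.0, 1.0], [2.0, 1.0]])
```

Because a pure translation leaves the Laplacian coordinates unchanged, the recovered middle vertex lands at (1, 1), exactly between the two moved control points.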
The basic principle of the Laplacian coordinates is to preserve the details of the shape, since the relative position information of the vertices is stored in the δ_i. However, these coordinates are sensitive to linear transformation: the structural details can be translated but not rotated or scaled. If the constraint points supply an appropriate linear transformation, the structural details remain unchanged.
Based on the final positions of the vertices, a transformation matrix is designed for each vertex; the matrix corresponding to vertex i is T_i. Then, equation (23) can be rewritten as follows:

E(V′) = Σ_{i=1}^{n} ||T_i δ_i − L(v′_i)||² + Σ_{j∈C} ||v′_j − u_j||²    (24)

Both T_i and V′ in equation (24) are unknown. Because the coefficient matrix must remain a linear function of the unknowns, T_i has to be obtained before V′ can be solved. T_i is defined as the transformation that converts the initial control points c_k to their target positions c′_k:

T_i = argmin_T Σ_k ||T c_k − c′_k||²    (25)

That is, the transformation matrix is calculated from the initial coordinates and the predicted coordinates of the feature points in the feature template of the edentulous facial model.
The linear systems are combined as follows:

V′ = argmin_{V′} ( ||L V′ − T Δ||² + Σ_{j∈C} ||v′_j − u_j||² )    (26)

where {u_j} is the set of predicted coordinates of the control points and T Δ stacks the transformed Laplacian coordinates T_i δ_i. According to the changes in the locations of all control points, the new coordinates of all vertices on the surface of the edentulous facial model are calculated, and the model is reconstructed, thereby obtaining the predicted dentigerous model.
3. Results
In this study, the deformation simulation, the curvature feature template construction, and the elastic deformation prediction model based on principal component analysis are implemented in Visual C++.
This study is the first to apply principal component analysis to the prediction of facial skin deformation after complete denture restoration to construct an elastic deformation prediction model that can be adjusted interactively. Data are obtained from a total of 48 patients. A sample matrix is composed with 45 sets of samples as training data. Forty-five sets of eigenvectors and their corresponding eigenvalues are extracted, and the remaining three sets of samples are predicted by adjusting shape parameters.
Forty-five sets of edentulous facial models and dentigerous facial models are taken as sample data, and the coordinates of the feature points in the corresponding feature templates are extracted. Then, the differences between the locations of the feature points of the edentulous and dentigerous facial models are saved as elastic data. The elastic matrix consists of 45 elastic vectors, from which the eigenvalues and eigenvectors are extracted by principal component analysis. The elastic displacement of the curvature feature template in each prediction sample is predicted by adjusting the shape parameter to obtain the locations of the new feature points. Finally, the deformation simulation is performed with the global Laplacian deformation technology. By adjusting the shape parameter, the prediction results of the three sets of samples are obtained, as shown in Fig. 9, Fig. 10, Fig. 11, Fig. 12, Fig. 13, Fig. 14.
Fig. 9.
The first original edentulous facial model.
Fig. 10.
The first prediction results by adjusting the shape parameter.
Fig. 11.
The second original edentulous facial model.
Fig. 12.
The second prediction results by adjusting the shape parameter.
Fig. 13.
The third original edentulous facial model.
Fig. 14.
The third prediction results by adjusting the shape parameter.
Five sets of models are randomly selected from samples as deformation analysis data. By matching the prediction results under different shape parameters with the corresponding post-treatment facial models, a prediction model closer to the post-treatment model is obtained. Fig. 15 shows the adjusted data from the five sets of predictions that best match post-treatment models.
Fig. 15.
The adjusted model from the five sets of predictions that best match post-treatment models.
The lower third of the face is defined as the major deformation area after complete denture restoration. It is shown as the red framed area in Fig. 15. According to the color gradation chart, the range of deviation of the five sets of prediction data in this region is shown in Table 1. In the main deformation areas, more than 90% of the local deviations are within 2 mm. The maximum deviation range between the prediction models of the five sets of data and the post-treatment facial models is between −2.365 and 2.365 mm, and the minimum deviation range is between −1.875 and 1.875 mm.
Table 1.
The range of deviation in the main deformation area.
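The deviation analysis above can be sketched with nearest-neighbour distances between the predicted and post-treatment point clouds; brute force is used here for clarity, and the toy coordinates and unsigned distances are illustrative simplifications (the reported deviations are signed):

```python
import numpy as np

def deviation_stats(pred, ref, tol=2.0):
    """Nearest-neighbour deviation of each predicted point from the
    reference model: maximum deviation and the fraction within tol mm."""
    # pairwise distances between every predicted and reference point
    d = np.linalg.norm(pred[:, None, :] - ref[None, :, :], axis=2)
    nearest = d.min(axis=1)
    within = (nearest <= tol).mean()      # fraction of points within tol
    return nearest.max(), within

# Toy point clouds (coordinates in mm, purely illustrative).
pred = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
ref = np.array([[0.1, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 3.0, 0.0]])
max_dev, frac = deviation_stats(pred, ref)
```

For these toy clouds the largest nearest-neighbour deviation is 1.0 mm, so every point falls within the 2 mm tolerance used in the analysis.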
4. Discussion
The changes in facial morphology before and after complete denture restoration are currently judged subjectively by doctors. Quantitative analysis has become the key development trend with the development of computer technology. At the same time, the changes in facial morphology before and after treatment will become an important constraint when combined with the new technologies of 3D printing and digital tooth arrangement.
4.1. Matching analysis
The facial models of patients with clinical edentulism are used in the facial shape prediction and recovery experiment by using principal component analysis. The results show that 1) facial morphology expands when the shape parameter is 0.1–0.5, and higher values are associated with greater expansion; and 2) facial morphology contracts when the shape parameter is −0.1 to −0.5. Different amplitudes and modes of facial morphology can be produced by adjusting the value of the shape parameter, and the algorithm adjusts the deformation of the facial model interactively on this basis. Prediction results for different deformation amplitudes and modes can thus be obtained by adjusting the shape parameters. All results are acquired from empirical data accumulated by doctors in the treatment of patients with edentulism; these empirical data are transformed into an elastic deformation model through principal component analysis. The matching analysis shows that the average deviation range between the prediction model under a specific shape parameter and the post-treatment facial model can reach between −2.102 and 2.102 mm, which can meet the prediction needs of facial morphology after complete denture treatment. Furthermore, the method supports interactive prediction of the facial model. Therefore, doctors can use this method to develop personalized treatment options for individual patients, which illustrates the flexibility of the method.
4.2. Comparative analysis
In our previous research, facial deformation prediction after complete denture restoration based on BP neural networks was introduced.27 Comparing the deformation prediction results of facial models based on the BP neural network and on principal component analysis in the main deformation areas, the average deviation range of the BP neural network prediction is between −2.021 and 2.021 mm, whereas that of the principal component analysis prediction is between −2.102 and 2.102 mm. Thus, the facial model deformation prediction based on the BP neural network is slightly more accurate. However, in terms of prediction time, the principal component analysis method is faster and more controllable than the BP neural network method.
Both the neural network model and principal component analysis are statistical models. The main difference between the two is that the neural network model refines its prediction parameters by continuously learning from the feature data until an optimal solution is reached, whereas principal component analysis allows the facial deformation amplitude to be adjusted interactively through shape parameters once the eigenvalues and eigenvectors have been extracted. The neural network model imposes higher requirements on sample quality and quantity; its predictions conform more closely to the sample characteristics but take longer to compute. Principal component analysis offers faster prediction and higher stability. Each method therefore has its own strengths. Because the recovery of facial morphology after oral treatment relies on the doctor's subjective judgment, with no objective standard or reference, principal component analysis can tailor the predicted effect to different doctors' needs and habits, meeting their individual standards to the greatest extent.
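The eigenvalue/eigenvector extraction step that distinguishes the PCA approach can be sketched as below. The displacement matrix `X` here is synthetic stand-in data, not the clinical samples; in the actual workflow each row would hold the stacked pre-/post-treatment displacements of the curvature-template feature points for one patient.

```python
import numpy as np

# Rows: training cases; columns: stacked (x, y, z) displacements of the
# curvature-template feature points. Synthetic stand-in for clinical data.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 12))

mean = X.mean(axis=0)
Xc = X - mean
cov = Xc.T @ Xc / (X.shape[0] - 1)

# Eigen-decomposition of the covariance matrix. Unlike a trained neural
# network, the eigenvalues/eigenvectors are extracted once up front and
# the prediction amplitude is then adjusted interactively.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # sort modes by variance, descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 3  # keep the leading deformation modes
explained = eigvals[:k].sum() / eigvals.sum()
print(f"variance explained by {k} modes: {explained:.2f}")
```

The decomposition is computed once; prediction afterwards is a single matrix-vector operation, which is why the PCA route is faster and more stable than retraining or re-evaluating a network.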
Chabanas et al.11 simulated facial soft tissue deformation after maxillofacial treatment by using FEM; the calculation took 2–7 min, and the deviation between the deformation result and the post-treatment model was within 2 mm. Its accuracy is therefore similar to that of this study, but the present method is more efficient and completes in less time.
5. Conclusions
In the traditional oral treatment process, doctors need to predict the changes in facial appearance and soft tissue after surgery, which increases the frequency of repeated diagnosis and treatment. In view of the shortcomings of previous studies, the present research applies principal component analysis to predict facial deformation after complete denture restoration and designs a prediction model according to its characteristics. A curvature feature template with few feature points is constructed to represent the deformed areas of the facial models. Principal component analysis is then used to construct an elastic deformation prediction model for complex skin tissue, and Laplacian deformation technology reconstructs the edentulous facial model to finally obtain an intuitive digital 3D model. Because facial recovery is judged subjectively according to doctors' treatment habits and experience, an interactive deformation method is provided to meet the varied needs of different doctors. Comparisons between the deformation model and the post-treatment model confirm that the method performs well.
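The Laplacian deformation step can be illustrated with a minimal sketch in the spirit of Laplacian surface editing. The toy polyline "mesh", anchor choices, and weight value are hypothetical; the real system applies the same idea to the facial mesh, with the feature-template points acting as anchors moved to their PCA-predicted positions.

```python
import numpy as np

# Toy 1-D "mesh" (a polyline of 6 vertices along the x-axis).
n = 6
V = np.stack([np.arange(n, dtype=float), np.zeros(n), np.zeros(n)], axis=1)

# Uniform Laplacian of a path graph: L @ V gives each vertex's offset
# from the mean of its neighbors (its differential coordinates).
L = np.zeros((n, n))
for i in range(n):
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
    L[i, i] = 1.0
    for j in nbrs:
        L[i, j] = -1.0 / len(nbrs)
delta = L @ V  # differential coordinates of the rest shape

# Anchors: keep vertex 0 fixed, move vertex n-1 to a predicted position.
# The weight w softly enforces the anchor constraints.
w = 100.0
anchors = {0: V[0], n - 1: V[n - 1] + np.array([0.0, 2.0, 0.0])}
A = np.vstack([L] + [w * np.eye(n)[i:i + 1] for i in anchors])
b = np.vstack([delta] + [w * p[None, :] for p in anchors.values()])

# Least-squares solve: local surface detail (the Laplacian term) is
# preserved while the anchors reach their predicted positions.
V_new, *_ = np.linalg.lstsq(A, b, rcond=None)
print(V_new[-1])  # close to the displaced anchor position
```

Interior vertices bend smoothly between the anchors, which is what makes the reconstructed facial surface follow the predicted feature points without losing its local shape.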
Principal component analysis is a linear method; adapting it to more complex and variable models will require combining nonlinear operations in feature extraction and the final computation, which would allow it to play a greater role in virtual reality. Our future work will continue to focus on this aspect.
Conflict of interest statement
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgements
All volunteers consented to participate in this research.
This study was supported by the National Natural Science Foundation of China (51205192) and the Scientific Research Foundation of Nanjing Institute of Industry Technology (YK18-03-02 and YK18-03-03).
Biographies

Cheng Cheng He received his Ph.D. from Nanjing University of Aeronautics and Astronautics in 2017. His research interests include digital geometry processing and CAD/CAM for special use and biomedical modeling.

Xiaosheng Cheng He received his Ph.D. from Nanjing University of Aeronautics and Astronautics in 2007. His research interests include reverse engineering techniques, digital geometry processing, CAD/CAM/CAPP/PDM techniques and 3D printing.

Ning Dai He received his Ph.D. from Nanjing University of Aeronautics and Astronautics in 2006. His research interests include reverse engineering techniques, digital geometry processing, intelligent CAD/CAM and 3D printing.

Tao Tang He received his M.S. from Nanjing University of Aeronautics and Astronautics in 2012. His research interests include navigation, guidance, and control, and CFD for helicopters.

Zhenteng Xu He received his M.S. from Nanjing University of Aeronautics and Astronautics in 2016. His research interests include aircraft fault diagnosis and digital geometry processing.

Jia Cai She received her M.S. from Nanjing University of Aeronautics and Astronautics in 2014. Her research interests include compressible fluid dynamics and CFD.
Footnotes
Supplementary data to this article can be found online at https://doi.org/10.1016/j.jobcr.2019.06.002.
Contributor Information
Cheng Cheng, Email: 2018100891@niit.edu.cn.
Xiaosheng Cheng, Email: 1603841029@qq.com.
Ning Dai, Email: dai_ning@nuaa.edu.cn.
Tao Tang, Email: 2016100837@niit.edu.cn.
Zhenteng Xu, Email: 2016100831@niit.edu.cn.
Jia Cai, Email: 2017100879@niit.edu.cn.
Appendix A. Supplementary data
The following is the Supplementary data to this article:
References
- 1.Hoch M., Fleischmann G., Girod B. Modeling and animation of facial expressions based on B-Splines. Vis Comput. 1998;11(2):87–95.
- 2.Parke F.I. Parameterized models for facial animation. IEEE Comp Graphics Appl. 1982;2(9):61–68.
- 3.Liu S., Wang J., Zhang M. Three-dimensional cartoon facial animation based on art rules. Vis Comput. 2013;29(11):1135–1149.
- 4.Teschner M., Girod S., Girod B. Direct computation of nonlinear soft-tissue deformation. VMV. 2003;3:383–390.
- 5.Keeve E., Girod S., Kikinis R. Deformable modeling of facial tissue for craniofacial surgery simulation. Comput Aided Surg. 2015;3(5):228–238. doi: 10.1002/(SICI)1097-0150(1998)3:5<228::AID-IGS2>3.0.CO;2-I.
- 6.Smith J.D., Thomas P.M., Proffit W.R. A comparison of current prediction imaging programs. Am J Orthod Dentofacial Orthop. 2004;125(5):527–536. doi: 10.1016/S0889540604001210.
- 7.Cousley R.R., Grant E., Kindelan J.D. The validity of computerized orthognathic predictions. J Orthod. 2003;30(2):149–154. doi: 10.1093/ortho/30.2.149.
- 8.Picinbono G., Delingette H., Ayache N. Non-linear anisotropic elasticity for real-time surgery simulation. Graph Model. 2003;65(5):305–321.
- 9.Kim D., Ho C.Y., Mai H. A clinically validated simulation method for facial soft tissue change prediction following double-jaw orthognathic surgery. Med Phys. 2017;44(8):4252–4261. doi: 10.1002/mp.12391.
- 10.Mendizabal A., Duparc R.B., Bui H.P. Face-based smoothed finite element method for real-time simulation of soft tissue. SPIE Medical Imaging. 2017:101352H.
- 11.Chabanas M., Luboz V., Payan Y. Patient specific finite element model of the face soft tissues for computer-assisted maxillofacial surgery. Med Image Anal. 2003;7:131–151. doi: 10.1016/s1361-8415(02)00108-1.
- 12.Zachow S., Gladiline E., Hege H. Finite-element simulation of soft tissue deformation. Computer Assisted Radiology and Surgery (CARS). Elsevier Science B.V; 2000:23–28.
- 13.Gladilin E. Biomechanical Modeling of Soft Tissue and Facial Expressions for Craniofacial Surgery Planning. Berlin: Free University; 2003.
- 14.Jalali A., Mallipeddi R., Lee M. Sensitive deep convolutional neural network for face recognition at large standoffs with small dataset. Expert Syst Appl. 2017;87:304–315.
- 15.Barros P., Parisi G.I., Weber C. Emotion-modulated attention improves expression recognition: a deep learning model. Neurocomputing. 2017;253(C):104–114.
- 16.Abousaleh F.S., Lim T., Cheng W.H. A novel comparative deep learning framework for facial age estimation. EURASIP J Image Video Process. 2016;2016(1):47.
- 17.Xia Y.Z., Zhang B.L., Coenen F. Face occlusion detection using deep convolutional neural networks. Int J Pattern Recognit Artif Intell. 2016;30(9):401–408.
- 18.Zhang Z.P., Luo P., Loy C.C. Learning deep representation for face alignment with auxiliary attributes. IEEE Trans Pattern Anal Mach Intell. 2016;38(5):918–930. doi: 10.1109/TPAMI.2015.2469286.
- 19.Tian L., Fan C.X., Ming Y. Multiple scales combined principle component analysis deep learning network for face recognition. J Electron Imaging. 2016;25(2):23–25.
- 20.Shah S.A.A., Bennamoun M., Boussaid F. Iterative deep learning for image set based face and object recognition. Neurocomputing. 2016;174:866–874.
- 21.Park B.H., Oh S.Y., Kim I.J. Face alignment using a deep neural network with local feature learning and recurrent regression. Expert Syst Appl. 2017;89:66–80.
- 22.Rhee S.M., Yoo B.I., Han J.J. Deep neural network using color and synthesized three-dimensional shape for face recognition. J Electron Imaging. 2017;26(2):020502.
- 23.Zhang F.F., Yu Y.B., Mao Q.R. Pose-robust feature learning for facial expression recognition. Front Comput Sci. 2016;10(5):832–844.
- 24.Zhao L., Wang Z.C., Wang X.J. Human fatigue expression recognition through image-based dynamic multi-information and bimodal deep learning. J Electron Imaging. 2016;25(5).
- 25.Lei Z., Yi D., Li S.Z. Learning stacked image descriptor for face recognition. IEEE Trans Circuits Syst Video Technol. 2016;26(9):1685–1696.
- 26.Dong Y., Liu Y.N., Lian S.G. Automatic age estimation based on deep learning algorithm. Neurocomputing. 2016;187:4–10.
- 27.Cheng C., Cheng X.S., Dai N. Prediction of facial deformation after complete denture prosthesis using BP neural network. Comput Biol Med. 2015;66:103–112. doi: 10.1016/j.compbiomed.2015.08.018.