Abstract
Background:
Guinea pigs are frequently used in otologic research as animal models of cochlear-implant surgery. In robot-assisted surgical insertion of cochlear-implant electrode arrays, knowledge of the cochlea pose is required. A preoperative CT scan of the guinea-pig anatomy can be labeled and registered to the surgical system; however, this process can be expensive and time consuming.
Methods:
Anatomical features from both sides of 11 guinea-pig CT scans were labeled and registered, forming sets. Using a groupwise point-set registration algorithm, errors in cochlea position and modiolar-axis orientation were estimated over 11 iterations of registration, in each of which one feature set served as a hold-out set containing a reduced number of features that could all be touched by a motion-tracking probe intraoperatively. The method was validated on 2000 simulated guinea-pig cochleae and six physical guinea-pig-skull cochleae.
Results:
Validation on simulated cochleae resulted in cochlea-position estimates with a maximum error of 0.43 mm and modiolar-axis orientation estimates with a maximum error of 8.1° for 96.7% of cochleae. Physical validation resulted in cochlea-position estimates with a maximum error of 0.80 mm and modiolar-axis orientation estimates with a maximum error of 12.4°.
Conclusions:
This work enables researchers conducting robot-assisted surgical insertions of cochlear-implant electrode arrays using a guinea-pig animal model to estimate the pose of a guinea-pig cochlea by locating six externally observable features on the guinea pig, without the need for CT scans.
Hypothesis:
The pose (i.e., position and orientation) of a guinea-pig cochlea can be accurately estimated using externally observable features, without requiring CT scans.
INTRODUCTION
Guinea pigs are frequently used as an experimental animal in otologic research due to favorable anatomy such as the greater size of the cochlea compared to other lab animals,1 and an opening of the tympanic bulla that enables wide access to the tympanic cavity and to the cochlea.2,3 These characteristics make the guinea pig a suitable model for preclinical research involving the surgical insertion of cochlear-implant electrode arrays (EAs)4,5. Robot-assisted insertion with magnetic steering has been reported to reduce insertion forces in vitro and in human cadavers,6–9 which is believed to be correlated with improved hearing outcomes by minimizing electrode injury to the scala-tympani wall and the basilar membrane.10 Proper placement of the robotic insertion stage and the magnetic field source, as well as the coordinated motion plan, is reliant on knowing the position and orientation of the cochlea in humans and the corresponding animal model used for the development of the technique: the guinea pig.
In the typical image-guided surgery paradigm, a preoperative computed-tomography (CT) scan is used to label the guinea-pig anatomy, which is then registered to markers or specific external features of the guinea pig during surgery. However, the process of scanning, labeling, and registering can be both expensive and time consuming. This study presents a method to estimate the pose of the cochlea, i.e., the cochlea’s position and orientation, not with a preoperative CT scan, but instead using several localized external features of a guinea pig (including the round window, which is exposed in preparation for EA surgical insertion) using a motion-tracking probe.
Locating anatomical features based on the locations of easily identifiable anatomical landmarks has been explored in various medical contexts. To predict hip joint centers (HJC) in posture and motion analysis, both functional and regression methods have been used. Functional methods locate the HJC by estimating the average center of rotation when performing hip circumduction movement, whereas regression methods develop equations to locate the HJC from anthropometric measurements between externally palpable bone landmarks.11,12 Estimating origins and insertions of the knee ligaments for knee surgery or gait analysis has also been explored. Ascani et al. obtained estimates by defining a point cloud from 16 anatomical landmarks to create a registration atlas using an affine transformation on a small data set.13 With a much larger dataset, Asseln et al. used a fully automatic mesh morphing method to estimate ligament attachments involving anatomic surface data fitted using a variant of the point set registration method known as the iterative closest point algorithm.14
Point-set registration is used throughout the medical and computer-vision literature to match point sets accurately, whether fusing multiple medical images or aligning facial landmarks for face recognition.15 Point-set registration estimates the transformation between two point sets (pairwise point-set registration) or more than two point sets (groupwise point-set registration).15 Evangelidis et al. presented the groupwise registration method called Joint Registration of Multiple Point Clouds (JRMPC), which is an expectation conditional maximization algorithm that optimally estimates all registration parameters.16 It was shown to outperform pairwise and other groupwise registration methods when evaluated on multiple point sets, and refinements have continued to improve this method.15,17 We implement this method to register multiple point sets of guinea-pig features to accurately estimate features of incomplete guinea-pig point sets. Various deep-learning approaches also exist for anatomical localization and segmentation; however, training for these methods typically requires very large data sets.18,19
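As a concrete primitive underlying such registration methods, the least-squares rigid alignment of two point sets with known correspondences (the Kabsch solution) can be sketched as below. JRMPC generalizes this idea to multiple sets with unknown correspondences via expectation maximization, so this sketch is background, not the algorithm used in the study.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid alignment (Kabsch algorithm) of two
    corresponding N x 3 point sets; returns (R, t) such that
    target ~= source @ R.T + t."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection (det = -1) solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

For noiseless data with at least three non-collinear points, this recovers the exact rotation and translation relating the two sets.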
Localizing inner-ear anatomy using CT scans has been successfully performed for a number of years. For example, Stelter et al. obtained standard-deviation registration results of 0.27 mm, 0.21 mm, and 0.18 mm for localization of the mastoid, round window, and ear canal on humans, respectively.20 A method to estimate cochlea pose (guinea pig or otherwise) without using CT data of the immediate subject has never been described previously; this is the principal contribution of this study. Additionally, this is the first study to quantitatively characterize the anatomical properties (mean and variance) of the cochlea poses of guinea pigs. This work describes a method to estimate the pose of a guinea-pig cochlea without the need for CT scans: it utilizes previously scanned CT data from guinea pigs with specified feature points labeled, matches and estimates points using a groupwise point-matching algorithm and a customized feature-optimization process, and uses the distribution of features observed in the data to extrapolate results to the larger population of guinea pigs.
MATERIALS AND METHODS
Micro-CT scans of 11 Dunkin-Hartley albino guinea pigs, taken at the Hannover Medical School, were used for this study; ten males had been purchased from Charles River Laboratories (Écully, France, strain code 051) and one female had been purchased from Harlan Laboratories (Horst, Netherlands), with weights ranging from 355 to 746 grams. Standard micro-CT imaging procedures were used in each case. The pixel and slice resolution were equivalent and consistent for each individual scan. The maximum adjusted CT resolution used was less than 0.25 mm. The scans were performed on post-mortem guinea pigs; all previous procedures on the living guinea pigs were performed in accordance with the European Council directive (2010/63/EU), approved by the local Institutional Animal Care and Research Advisory Committee (IACUC), and permitted by the local authority (Lower Saxony State Office for Consumer Protection, Food Safety, and Animal Welfare Service (LAVES); approval numbers 17/2396 and 10/0045).
Twenty features were used for labeling: 19 were selected from recognizable anatomical landmarks chosen from the CT scan (Fig. 1), and one was generated from a few of these selected points since it could not be easily or accurately labeled. Although the inner-ear anatomy of a guinea pig differs from that of a human, the selected features were chosen to be easily recognizable to clinicians familiar only with human inner-ear anatomy. Although automated methods for segmenting a cochlea and describing its pose exist for CT scans in humans,21–23 none exist for guinea pigs; thus, features were labeled by hand.
Figure 1.

17 of the 20 selected feature points labeled on micro-CT scans used in this study. An exterior view of a live guinea-pig head and guinea-pig skull with associated features are also included. The RWA and RWP were selected directly on the 3D segmented model, hence associated CT scans for these points are not shown. Visualization of the two coordinate frames used is shown. The RW coordinate frame is used to align the hold-out set, and the CO coordinate frame is used to align the 11 CT point sets and compute the CO position. The table shows a list of the internal and external features.
The 20 labeled feature points are described in greater detail as follows, with visual examples shown in Figure 1. The cochlea apex (CA) was defined at the apex of the cochlea spiral aligned with the modiolus. The cochlea base (CB) was defined at the intersection of the center of the modiolus with the basal turn. The modiolar axis is defined as the vector from the CB to the CA, in a basal cochlear coordinate system fashion.20 The cochlea connection point (CC) is the intersection of the cochlea with the medial wall of the ventral tympanic bulla. The cochlea base lateral connection point (CL) is the protruding intersection of the basal turn of the cochlea with the ventral tympanic wall visible in the CT transverse view. The dorsal tympanic lateral point (DTL) is the pointy tip of the small lateralmost extrusion inside of the dorsal tympanic bulla. The dorsal tympanic malleus head (DTM) was defined at the point of the malleus head inside the dorsal tympanic bulla. The anterior and posterior footplate points (FPA, FPP) were defined at the bony region adjacent to the anterior and posterior edges of the footplate. The long limb of the incus (LLI) was defined at the incus end point closest to the round window. Four points were defined at the anteriormost, posteriormost, lateralmost, and medialmost edges of the round window (RWA, RWP, RWL, RWM). The cochlea position, denoted the cochlea origin (CO), was generated from the CA, CB, and RWA and RWP points defined below in greater detail. The superior and inferior points for the opening of the osseous canal (OS, OI) were labeled at the widest opening of the osseous canal at the superior and inferior edges. The base of the incisors (IN) was defined at the intersection of the front two incisors and the inferiormost point of the maxilla. The malleus handle point (MH) is defined at the bony point directly adjacent to the superiormost visible point of the malleus handle from the outer ear canal. 
The mandible feature point (MN), was defined at the posteriormost and superiormost point of the mandible in the CT coronal frame adjacent to the temporal bone. The lateralmost aspect of the zygoma (ZL) was defined at the lateralmost point of the temporal bone. Feature positions were validated by visual inspection on a 3D segmented model generated from the CT data, which enabled fine tuning of point placement.
The CO, the origin of the coordinate frame defining the nominal position of the cochlea, was generated from features marking the modiolar axis and round window. In previous studies, the nominal position of the cochlea has been defined as the intersection of the modiolar axis and the basal plane.7,8 In our study we maintained the spirit of that definition by defining the CO as the point on the modiolar axis that is closest to the round window; that is, the CO is defined such that the vector from the CO to the round window is orthogonal to the modiolar axis. The point selected to represent the round window was defined as the midpoint between the RWA and RWP. The round window was used in place of the basal plane due to ease of selection and its common use in EA insertion. To better understand the relationship between the 20 selected features, a comparison matrix is presented in Figure 2, displaying the between-feature distances as means and standard deviations across guinea pigs.
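The CO construction just described can be sketched as follows, assuming each feature is given as a 3D position (in mm) from a labeled scan; the function name is illustrative, not from the study's code.

```python
import numpy as np

def cochlea_origin(ca, cb, rwa, rwp):
    """Point on the modiolar axis (CB -> CA) closest to the
    round-window midpoint, per the CO definition in the text."""
    ca, cb = np.asarray(ca, float), np.asarray(cb, float)
    rw_mid = (np.asarray(rwa, float) + np.asarray(rwp, float)) / 2.0
    axis = ca - cb
    axis = axis / np.linalg.norm(axis)
    # orthogonal projection of the RW midpoint onto the modiolar axis
    return cb + np.dot(rw_mid - cb, axis) * axis
```

By construction, the vector from the returned point to the round-window midpoint is orthogonal to the modiolar axis, matching the definition above.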
Figure 2.
Comparison matrix of the between-feature distances, for both left and right sides, for all 11 guinea pigs. Values provided are mean distances, with standard deviations in parentheses, in units of mm. The feature points included are the cochlea apex (CA), cochlea base (CB), cochlea connection point (CC), cochlea lateral point (CL), cochlea origin (CO), dorsal tympanic lateral point (DTL), dorsal tympanic malleus head (DTM), footplate anterior (FPA), footplate posterior (FPP), incisors (IN), long limb of the incus (LLI), malleus handle (MH), mandible (MN), osseous canal superior (OS), osseous canal inferior (OI), round window anterior (RWA), round window lateral (RWL), round window medial (RWM), round window posterior (RWP), and lateralmost aspect of the zygoma (ZL).
Of the 20 features, 11 were “external features” that could be physically touched by a probe, either on the external surface before any surgical incisions, or inside the bulla, which is exposed intraoperatively before EA insertion. These locations can be recorded by an external tracking system (i.e., touched with a motion-tracking probe) to form a point cloud used in the anatomy estimation algorithm. Figure 1 lists internal and external features.
The process flow of testing is depicted in Figure 3A. All 20 features in each guinea-pig CT scan were labeled on both right and left sides; in the case of the IN, the same point was used for both sides. The CO point was computed for all sets as previously described. For general alignment before fitting, each point set was transformed so that its CO was at the global origin. This was done by creating an orthonormal coordinate frame with the x-axis directed from the CO to the CA for the left side (reversed for the right side), the y-axis directed from the CO to the round-window midpoint defined previously, and the z-axis as the cross product of the x- and y-axis vectors to form a right-handed coordinate frame. This frame is denoted the CO coordinate frame and is visualized in Figure 1.
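A sketch of the CO coordinate-frame construction, assuming the CO, CA, and round-window midpoint are 3D NumPy arrays. Note that, by the definition of the CO, the y direction is already orthogonal to x, so the explicit orthogonalization below is only a numerical safeguard; the function name is illustrative.

```python
import numpy as np

def co_frame(co, ca, rw_mid, left_side=True):
    """Right-handed orthonormal CO frame: x toward the CA (reversed
    for the right side), y toward the round-window midpoint, z = x cross y.
    Returns a 3x3 rotation matrix whose columns are the axes."""
    x = ca - co
    if not left_side:
        x = -x
    x = x / np.linalg.norm(x)
    y = rw_mid - co
    y = y - np.dot(y, x) * x   # numerical safeguard; zero by CO definition
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)
    return np.column_stack([x, y, z])
```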
Figure 3.
Methods and tests included in the paper to find the best feature combinations. After the JRMPC algorithm runs, an error test with the original 22 sets, a test on 2000 simulated guinea-pig feature sets, and a physical test on guinea-pig skulls are conducted.
The JRMPC algorithm was used to match the point cloud sets for the left and right sides, each side registered separately. We performed a cross-validation study, which is a common method used to assess how well a model will generalize to a new data set. In cross-validation, data is partitioned into two subsets, where model fitting is performed on one subset, denoted the training set, and tested on the other, denoted the validation set; multiple rounds are typically performed where the data are partitioned in different ways and results from all rounds are combined for the final solution. In our study, ten of the 11 point sets were used as a training set, while one hold-out set was used as the validation set. The training set would consider subsets (more below) of the full set of 20 features, whereas the validation set only included subsets of external features from the training set since the validation set is meant to represent the features that could be obtained from a guinea pig without requiring a CT scan. Eleven iterations (i.e., each of the 11 point sets being used once as the validation set) were run; the results were combined to quantify the error in the modiolar-axis orientation and CO position estimation. Such a method is required because the points that define the modiolar axis and cochlea position (CA, CB, CO) cannot be explicitly touched.
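The leave-one-out structure of this cross-validation can be sketched generically; here `fit` and `evaluate` are hypothetical placeholders standing in for the JRMPC-based registration and the pose-error computation described in the text.

```python
def leave_one_out(point_sets, fit, evaluate):
    """Leave-one-out cross-validation over a list of point sets.
    `fit` builds a model from the training sets; `evaluate` returns
    an error for the held-out validation set."""
    errors = []
    for i in range(len(point_sets)):
        train = point_sets[:i] + point_sets[i + 1:]  # all but set i
        model = fit(train)
        errors.append(evaluate(model, point_sets[i]))
    return errors
```

With 11 point sets per side, this loop yields the 11 error values per side that are combined into the final error statistics.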
For each iteration, a different point set was selected as the validation set. Without the CB, CO, and CA in the external feature set, the validation set cannot be transformed into the CO coordinate frame. A second frame was developed using external feature points to transform these points into close proximity with those in the CO coordinate frame. This second frame, the RW coordinate frame (see Figure 1), was defined as an orthonormal coordinate frame in which the x-axis was the normalized vector from the RWA to the RWP on the left side (reversed for the right side), with the origin being the point on the RWA-RWP axis closest to the LLI; the y-axis was defined as pointing toward the LLI, and the z-axis was computed as the cross product of the x- and y-axes. This coordinate-frame origin was then translated in its -y direction, by an amount equal to the distance from the RWA-RWP axis to the LLI, to move it closer to the actual position of the CO (to improve the step to follow). The JRMPC algorithm then registered this point cloud together with the ten training point clouds already initialized in the CO coordinate frame. This initial alignment was found to improve solving times during optimization.
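A sketch of the RW coordinate-frame construction for the left side (function and argument names are illustrative, assuming 3D NumPy arrays in mm):

```python
import numpy as np

def rw_frame(rwa, rwp, lli, left_side=True):
    """RW frame per the text: x along RWA -> RWP (reversed on the
    right side), origin at the point on the RWA-RWP axis closest to
    the LLI, y toward the LLI, z = x cross y; the origin is then
    shifted along -y by the axis-to-LLI distance."""
    x = rwp - rwa
    if not left_side:
        x = -x
    x = x / np.linalg.norm(x)
    # closest point on the RWA-RWP axis to the LLI
    origin = rwa + np.dot(lli - rwa, x) * x
    y = lli - origin
    dist = np.linalg.norm(y)
    y = y / dist
    z = np.cross(x, y)
    origin = origin - dist * y   # translate toward the expected CO
    return origin, np.column_stack([x, y, z])
```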
Error metrics for each point-set registration involved finding the weighted sum of the modiolar-axis and CO position errors for all 11 iterations of the cross-validation test for each side, explained below using our custom optimization. To find this cumulative error, the final rotation and translation metrics for all 11 point sets were applied to the full 20 features of each respective point set. The modiolar-axis angle and CO position of the hold-out point set was then compared with the mean values of the modiolar-axis angle and CO position of the ten point sets in the training set. The cochlea-origin error (e_CO) was computed as the L2 norm between CO positions, as just described. The modiolar-axis angular error (e_MA) was computed by finding the difference between the modiolar-axis angles as follows: we define a vector pointing from the mean CO to the mean CA of the training set, which we represent by v_t; the modiolar-axis vector of the validation set is represented by v_v; the modiolar-axis angular error between these two vectors can then be computed as

e_MA = arccos( (v_t · v_v) / (||v_t|| ||v_v||) ).
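These two error metrics can be sketched as follows, assuming the CO and CA of each pose are 3D NumPy arrays (the function name is illustrative, not from the study's code):

```python
import numpy as np

def pose_errors(co_est, ca_est, co_ref, ca_ref):
    """Cochlea-origin position error (same units as input, e.g. mm)
    and modiolar-axis angular error (degrees) between an estimated
    pose and a reference pose."""
    e_co = np.linalg.norm(co_est - co_ref)   # L2 norm between CO positions
    v_est = ca_est - co_est                  # modiolar-axis vectors
    v_ref = ca_ref - co_ref
    cosang = np.dot(v_est, v_ref) / (np.linalg.norm(v_est) * np.linalg.norm(v_ref))
    # clip guards against tiny numerical overshoot beyond [-1, 1]
    e_ma = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return e_co, e_ma
```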
Because the full set of 20 (somewhat arbitrarily) selected features might not be the optimal choice, a custom optimization algorithm was used to find the best subset of features. Feature subsets varied from three to 20 features for the training set and from three to 11 features for the validation set, three being the assumed minimum number of features for point matching. The feature subsets of the validation set were constrained to contain only features that were included in the current feature subset of the training set. The optimization algorithm began with all 20 features for the training set, then reduced features one by one. In each iteration, the reduced feature set was selected from the previously determined best feature set; for example, the best 19 features previously selected from the original 20 became the base feature set when selecting the best 18 features for the next iteration. Each feature set considered for the training set included a sub-level optimization in which the number of features in the validation set was evaluated in an analogous way, beginning with all external features present in the training set (at most 11) and reducing down to three.
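The greedy backward elimination described above can be sketched generically; here `score` is a caller-supplied placeholder standing in for the study's JRMPC-based weighted-error objective, receiving a candidate feature subset and returning its error.

```python
def backward_select(features, score, min_size=3):
    """Greedy backward elimination: start from the full feature set
    and repeatedly drop the single feature whose removal gives the
    lowest score, down to `min_size` features. Returns the history
    of (subset, score) pairs, best subset per size."""
    best = list(features)
    history = [(tuple(best), score(best))]
    while len(best) > min_size:
        # candidate subsets with one feature removed from the current best
        candidates = [[f for f in best if f != drop] for drop in best]
        best = min(candidates, key=score)
        history.append((tuple(best), score(best)))
    return history
```

This explores only about n subsets per size rather than all combinations, mirroring the paper's strategy of reducing from the previously determined best feature set.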
The total error (e_total), used as the optimization objective function, is a weighted sum of e_CO and e_MA for both the right and left sides (11 per side, 22 values total, for each error metric):

e_total = Σ_i ( e_CO,i + 0.2 e_MA,i ),

where the sum runs over all 22 hold-out evaluations, with e_CO in mm and e_MA in degrees. The optimization function seeks to minimize e_total. Our rationale for choosing the 0.2 weighting coefficient was to approximately equate 1 mm of e_CO with 5° of e_MA. It is difficult to know precisely how to compare linear and angular measurement units, so this value was chosen through some tuning.
Using the results of the previous steps, validation was performed on an expanded simulated data set to estimate robustness to the larger Dunkin-Hartley guinea-pig population. This was done by first using the JRMPC algorithm to match the 11 point sets and find the mean and covariance of each feature point cloud in 3D space, resulting in 20 mean points and 20 covariance matrices. Samples were drawn from Gaussian distributions formed by these mean points and covariance matrices to generate 1000 new simulated guinea-pig feature sets for each of the right and left sides, totaling 2000 feature sets. The errors for each simulated set, relative to the means of the respective 11 sets, were measured using the same method described previously.
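The sampling step above can be sketched as follows, assuming per-feature means and 3x3 covariance matrices have already been estimated (the function name is illustrative):

```python
import numpy as np

def simulate_feature_sets(means, covs, n_sets, seed=0):
    """Draw simulated feature sets: for each labeled feature, sample
    a 3D position from a Gaussian with that feature's mean and
    covariance. Returns a list of (n_features x 3) arrays."""
    rng = np.random.default_rng(seed)
    sets = []
    for _ in range(n_sets):
        sets.append(np.array([rng.multivariate_normal(m, c)
                              for m, c in zip(means, covs)]))
    return sets
```

Note this treats each feature's distribution independently; correlations between features are not modeled in this sketch.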
For a final physical validation test, CT scans of both sides of three new guinea-pig skulls were collected and labeled. Ground truth for the CO position and modiolar axis was established directly from the CT scans. A 3-mm-diameter hole was made 3 mm anterior and 1 mm inferior to the center of the outer ear canal to expose the inner anatomy of the skulls. The six external features determined in the previous steps were touched on the skulls using a motion-tracking probe (see Figure 4). Eight additional prominent features were selected on the skull and labeled on the CT scan for registration of the skull to the CT scan, using the iterative closest point (ICP) point-matching algorithm.24,25 The CO position and modiolar-axis orientation estimates computed from the touched points were compared to ground truth.
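A minimal point-to-point ICP sketch (brute-force nearest neighbors plus a least-squares rigid alignment per iteration) illustrates the skull-to-CT registration step; this is not the toolchain used in the study, and it assumes a reasonable initial alignment, as ICP only converges locally.

```python
import numpy as np

def icp(source, target, iters=20):
    """Minimal point-to-point ICP: alternate nearest-neighbor
    correspondence with a Kabsch rigid alignment. Returns (R, t)
    such that target ~= source @ R.T + t."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest target point for each source point
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
        matched = target[d2.argmin(axis=1)]
        # Kabsch alignment of src onto its matched points
        sc, mc = src.mean(axis=0), matched.mean(axis=0)
        H = (src - sc).T @ (matched - mc)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mc - R @ sc
        src = src @ R.T + t
        # accumulate the total transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Production registrations typically replace the brute-force search with a k-d tree and add outlier rejection.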
Figure 4.
Example of setup for physical validation test. The skull is secured and touched with a motion-tracking probe. A microscope is used to accurately position the probe with respect to anatomical features.
RESULTS
The largest variability in feature positions, shown in Figure 2, is between the IN and all other feature points. The smallest variability occurs for the FPP, closely followed by the LLI.
Results displaying each level of optimization, including the best of the 3–20 feature sets of the training set, as well as the best of the 3–11 features of the validation set for the optimal training set, are reported in Figure 5. The optimal combination of training-set features comprised 16 of the original 20 features (see Figure 5A). This included all features except the CC, FPP, DTL, and RWM. The optimal combination of validation-set features comprised 6 of the original 11 external features, shown in Figure 5B. This included the CL, MN, OS, RWA, RWP, and LLI.
Figure 5.
Graphs displaying the best error estimate found for each iteration of features selected for the training and validation sets. (A) Objective function outputs for the optimal subset for each number of training subset features drawn from the complete training set of 20 features. In each iteration, the reduced feature set was selected from the previously determined best feature set, reducing down to three features. (B) Objective function outputs for feature subsets of the optimal training set, which included 16 features, for the validation set. Only up to 10 features are considered because one of the 11 externally visible features was not included in the optimal training set (RWM). For both graphs, a lower value represents less error, hence a more optimal result.
Using this optimal combination of training-set and validation-set features, we obtained estimates of the modiolar-axis orientation for the 22 original point sets (all of which were part of the training data) with mean, median, and maximum errors of 3.3°, 2.7°, and 8.1°, respectively, shown in Figure 6A. The CO position was estimated with mean, median, and maximum errors of 0.22 mm, 0.19 mm, and 0.43 mm, respectively, shown in Figure 6B. The range of each histogram of Figure 6 spans from the minimum to the maximum of the estimation errors of the 22 data points.
Figure 6.
The top row shows histograms of modiolar-axis orientation error measured in degrees (A) and cochlea-origin position error measured in millimeters (B) for the 11 cross-validation iterations for the left and right sides; each bin displays the total number of validation data sets that obtained the respective bin-sized error when matched with the remaining 10 training data sets. The bottom row shows box-whisker plots of modiolar-axis orientation error (C) and cochlea-origin position error (D) for 2000 simulated guinea-pig feature sets generated using the statistical distribution of the data in the corresponding histogram. Circles overlaid on the box-whisker plots represent error results from the physical validation tests.
The box-whisker plots shown in Figures 6C and 6D display respective modiolar-axis orientation errors and CO position errors using the 2000 simulated guinea-pig feature sets. The modiolar-axis orientation results predict a mean error of 2.8°, a median error of 2.7°, and a whisker at 7.0°. The CO position results predict a mean error of 0.21 mm, a median error of 0.20 mm, and a whisker at 0.50 mm. Outliers beyond the whiskers are not plotted; these outliers made up 3.2% of the simulated feature sets. The results of the physical validation test are included as red circles in Figures 6C and 6D, showing an average CO position error of 0.65 mm with a maximum of 0.80 mm and an average modiolar-axis orientation error of 9.3° with a maximum of 12.4°.
DISCUSSION
Using the method described, the pose of a guinea-pig cochlea can be estimated using six externally observable features localized on the guinea pig. This work enables researchers conducting cochlear-implant surgical insertions, and robot-assisted insertions in particular, to estimate the location of a guinea-pig cochlea, without the need for any CT scans, with submillimeter CO position error and modiolar-axis orientation error largely within the small-angle approximation.
The wide range of guinea-pig weights used in this study, with the heaviest guinea pig being 110% heavier than the lightest guinea pig, suggests that our model is not particularly sensitive to the size of the guinea pig.
Results from the physical validation test with guinea-pig skulls had larger errors than those predicted in simulation; however, these results are likely to be more indicative of what should be expected during an actual surgical procedure. Ultimately, demonstration of atraumatic insertion using this approach will need to be validated in an in-vivo model.
Instructions and code for using this algorithm and a labeled segmentation example file can be found at https://www.telerobotics.utah.edu/ and https://github.com/dusevitch/gp-cochlea-estimator.
Acknowledgments
Conflicts of Interest and Sources of Funding
Research reported in this publication was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under Award Number R01DC013168. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors would like to thank the Cluster of Excellence Hearing4All 2.0, funded by the German Research Foundation, for financing V.S.
The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.
Contributor Information
David E. Usevitch, Department of Mechanical Engineering, University of Utah.
Albert H. Park, Department of Surgery, Division of Otolaryngology, University of Utah.
Verena Scheper, Department of Otolaryngology, Hannover Medical School, and Cluster of Excellence Hearing4all.
Jake J. Abbott, Department of Mechanical Engineering, University of Utah.
REFERENCES
- 1. Niedermeier K, Braun S, Fauser C, Straubinger RK, Stark T. Pneumococcal meningitis post cochlear implantation: Development of an animal model in the guinea pig. Hear Res. 2012;287:108–115.
- 2. Lo J, Sale P, Wijewickrema S, Campbell L, Eastwood H, O'Leary SJ. Defining the Hook Region Anatomy of the Guinea Pig Cochlea for Modeling of Inner Ear Surgery. Otol Neurotol. 2017;38(6):e179–e187.
- 3. Wysocki J. Topographical anatomy of the guinea pig temporal bone. Hear Res. 2005;199:103–110.
- 4. Scheper V, Hoffmann A, Gepp MM, Schulz A, Hamm A, Pannier C, Hubka P, Lenarz T, Schwieger J. Stem Cell Based Drug Delivery for Protection of Auditory Neurons in a Guinea Pig Model of Cochlear Implantation. Front Cell Neurosci. 2019;13:177.
- 5. Ramekers D, Klis SFL, Versnel H. Simultaneous rather than retrograde spiral ganglion cell degeneration following ototoxically induced hair cell loss in the guinea pig cochlea. Hear Res. 2020;390:107928.
- 6. Clark JR, Leon L, Warren FM, Abbott JJ. Magnetic Guidance of Cochlear Implants: Proof-of-Concept and Initial Feasibility Study. J Med Devices. 2012;6:035002.
- 7. Leon L, Warren FM, Abbott JJ. An In-Vitro Insertion-Force Study of Magnetically Guided Lateral-Wall Cochlear-Implant Electrode Arrays. Otol Neurotol. 2018;39(2):e63–e73.
- 8. Leon L, Warren FM, Abbott JJ. Optimizing the Magnetic Dipole-field Source for Magnetically Guided Cochlear-implant Electrode-array Insertions. J Med Robot Res. 2018;3(1):1850004.
- 9. Bruns TL, Riojas KE, Ropella DS, Cavilla MS, Petruska AJ, Freeman MH, Labadie RF, Abbott JJ, Webster RJ III. Magnetically Steered Robotic Insertion of Cochlear-Implant Electrode Arrays: System Integration and First-In-Cadaver Results. IEEE Robot Autom Lett. 2020;5(2):2240–2247.
- 10. Hendricks CM, Cavilla MS, Usevitch DE, Bruns TL, Riojas KE, Leon L, Webster RJ III, Warren FM, Abbott JJ. Magnetic Steering of Robotically Inserted Lateral-wall Cochlear-implant Electrode Arrays Reduces Forces on the Basilar Membrane In Vitro. Otol Neurotol. DOI: 10.1097/MAO.0000000000003129.
- 11. Peng J, Panda J, Van Sint Jan S, Wang X. Methods for determining hip and lumbosacral joint centers in a seated position from external anatomical landmarks. J Biomech. 2015;48:396–400.
- 12. Hara R, McGinley J, Briggs C, Baker R, Sangeux M. Predicting the location of the hip joint centres, impact of age group and sex. Sci Rep. 2016;6:1–9.
- 13. Ascani D, Mazzà C, De Lollis A, Bernardoni M, Viceconti M. A procedure to estimate the origins and the insertions of the knee ligaments from computed tomography images. J Biomech. 2015;48:233–237.
- 14. Asseln M, Haenisch C, Alhares G, Quack V, Radermacher K. A Mesh Morphing Based Method To Estimate Cruciate Ligament Attachments Based on CT-Data. Proceedings of the Annual Meeting of CAOS International, Vancouver, British Columbia. 2015.
- 15. Zhu H, Guo B, Zou K, et al. A Review of Point Set Registration: From Pairwise Registration to Groupwise Registration. Sensors. 2019;19:1191.
- 16. Evangelidis G, Kounades-Bastian D, Horaud R, Psarakis E. A Generative Model for the Joint Registration of Multiple Point Sets. European Conference on Computer Vision, Zurich, Switzerland. 2014;109–122.
- 17. Moulin L, Rogge S, Munteanu A. Joint Registration of Multiple Point Sets with Refinement. IEEE International Symposium on Multimedia (ISM). 2019;72–727.
- 18. Milletari F, Ahmadi SA, Kroll C, et al. Hough-CNN: Deep learning for segmentation of deep brain regions in MRI and ultrasound. Comput Vis Image Underst. 2017;164:92–102.
- 19. Shen D, Wu G, Suk H-I. Deep Learning in Medical Image Analysis. Annu Rev Biomed Eng. 2017;19(1):221–248.
- 20. Stelter K, Ledderose G, Hempel JM, Morhard DFB, Flatz W, Krause E, Mueller J. Image guided navigation by intraoperative CT scan for cochlear implantation. Comput Aided Surg. 2012;17(3):153–160.
- 21. Noble JH, Labadie RF, Majdani O, Dawant BM. Automatic segmentation of intracochlear anatomy in conventional CT. IEEE Trans Biomed Eng. 2011;58(9):2625–2632.
- 22. Demarcy T, Vandersteen C, Guevara N, Raffaelli C, Gnansia D, Ayache N, Delingette H. Automated analysis of human cochlea shape variability from segmented μCT images. Comput Med Imag Grap. 2017;59:1–12.
- 23. Wimmer W, Vandersteen C, Guevara N, Caversaccio M, Delingette H. Robust Cochlear Modiolar Axis Detection in CT. Med Image Comput Comput Assist Interv. 2019;22:3–10.
- 24. Chen Y, Medioni G. Object Modelling by Registration of Multiple Range Images. Image Vision Comput. 1992;10(2):145–155.
- 25. Besl PJ, McKay ND. A Method for Registration of 3-D Shapes. IEEE Trans Pattern Anal Mach Intell. 1992;14(2):239–256.





