Abstract
Femoroplasty is a proposed alternative therapeutic method for preventing osteoporotic hip fractures in the elderly. A previously developed navigation system for femoroplasty required the attachment of an external X-ray fiducial to the femur. We propose a fiducial-free 2D/3D registration pipeline using fluoroscopic images for robot-assisted femoroplasty. Intraoperative fluoroscopic images are taken from multiple views to register the femur and the drilling/injection device. The proposed method was tested through comprehensive simulation and cadaveric studies, and performance was evaluated using the registration errors of the femur and the drilling/injection device. In simulations, the proposed approach achieved a mean accuracy of 1.26 ± 0.74 mm for the relative planned injection entry point, and 0.63 ± 0.21° and 0.17 ± 0.19° for the femur injection path direction and device guide direction, respectively. In the cadaver studies, a mean error of 2.64 ± 1.10 mm was achieved between the planned entry point and the device guide tip. The biomechanical analysis showed that even with a 4 mm translational deviation from the optimal injection path, the yield load prior to fracture increased by 40.7%. These results suggest that the fiducial-free 2D/3D registration is sufficiently accurate to guide robot-assisted femoroplasty.
Keywords: 2D/3D Registration, Femur Registration, X-ray Navigation, Robot-Assisted Femoroplasty
I. Introduction
Osteoporosis, characterized by decreased bone mass and microarchitectural deterioration of bone tissue, is a global health problem. There is a wealth of research on the extent to which bone loss may impair strength and increase the risk of fracture. The rate of mortality one year after osteoporotic hip fracture has been reported to be between 20% and 45% [1], [2], and the rate of a second fracture occurrence is 6-10 times higher in patients who have already suffered an osteoporotic hip fracture [3]. Current approaches to reducing the risk of hip fracture rely on long-term preventive measures such as estrogens and selective estrogen receptor modulators, calcitonin, and bisphosphonates. These approaches, however, do not provide much-needed immediate prevention, especially for the elderly.
Femoroplasty is proposed as an alternative therapeutic treatment for patients with osteoporosis [4]. It aims to prevent potential osteoporotic hip fractures by augmenting the osteoporotic femoral neck and trochanter area with injected bone cement such as polymethylmethacrylate (PMMA) [5]. Recent studies suggest that femoroplasty is a short-term preventive approach and has the potential to reduce the risk of fracture in an osteoporotic hip [5], [6], [7], [8]. Patient-specific femoroplasty requires precise intraoperative navigation of the injection device along a planned trajectory. Thus, an accurate estimate of the 3D pose of the femur anatomy and the injection device is necessary for the success of the procedure.
Previous studies have presented a navigation system utilizing an image-based 2D/3D registration framework with intraoperative X-ray images and fiducial-based C-arm tracking [7]. This navigation system uses an optically tracked fluoroscope fiducial and a custom-designed fluoroscopy tracking fiducial (FTRAC) [9] to register the anatomy with respect to a hand-held injection device. However, attaching external pins to the bone may not be an ideal option for patients with severe osteoporosis. Furthermore, the optical tracking system requires a clear line of sight. In addition, simultaneous viewing of the X-ray fiducial and the anatomy of interest may become challenging given the limited field of view of fluoroscopy. Hand-held manipulation of the injection device also requires continuous interaction with the surgeon, which complicates the procedure.
In contrast, we attach a custom-designed device guide to the end effector of a 6-DOF positioning robot, a UR-10 (Universal Robots, Odense, Denmark), for more precise robot-assisted femoroplasty. A single device performs both drilling and injection; the operator only changes the tool. The procedure for robot-assisted femoroplasty involves (Fig. 1): 1) patient-specific planning based on biomechanical analysis of the preoperative CT; 2) intraoperative registration of the drilling/injection (D/I) device and the anatomy; 3) positioning the device at the entry point of the planned trajectory and performing drilling followed by injection of the PMMA; 4) intraoperative assessment to evaluate the 3D pattern of the injected cement and update the plan. This paper focuses on the registration paradigm for this system.
Fig. 1:
Top left: An example augmentation scheme of the optimized injection pattern (green), practical injection volume (blue) and line of injection (red) [8]. Bottom left: Components of robot-assisted femoroplasty system. Right: Illustration of the intensity-based 2D/3D registration for the femur and D/I device in the C-arm coordinate frame. The red cross arrows are the coordinate frames of each component; the blue arrows illustrate the transformations; the pink dotted line shows the planned trajectory for femoroplasty.
Fig. 1 presents the rigid transformations relating C-arm coordinates to the femur and D/I device coordinates. Fiducial-free 2D/3D registration of the proximal femur is challenging because the femur has fewer distinct features in the 2D projection domain than other anatomies, such as the pelvis. Gueziec et al. proposed an anatomy-based registration using CT and fluoroscopic image data of a femur bone [10]. The method was promising, but the registration was sensitive to the contour extraction accuracy and the initial estimated point correspondences. Although a number of different optimization frameworks have been studied for femur registration [11], [12], [13], these approaches require either control points or contours as an initialization setup with additional annotations. Miao et al. proposed a learning-based multi-agent registration pipeline using a dilated fully-connected network and achieved favorable results on spine X-rays [14]. However, the femur has fewer features than the spine, which makes it much more challenging to extract features for accurate registration. In a related study we proposed using pelvis registration to initialize the femur registration [15]. In this paper, we use the pelvis as a fiducial and find the relative pose of the femur and the D/I device using 2D/3D registration on their relevant X-ray views. We perform both simulation and cadaveric studies to investigate whether our registration paradigm is sufficiently accurate for navigating the D/I device.
Conventional single-view intensity-based 2D/3D registration algorithms register an acquired 2D image and a 3D image (e.g. CT scan) by initializing and optimizing the 6D pose parameters using a cost function such as image intensity similarity metric. Multi-view 2D/3D registration algorithms jointly optimize a similarity metric relating to multiple 2D acquisitions with varying projection geometries. The registration performance largely depends on the initialization and the multi-view geometry.
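The structure of such an intensity-based registration can be sketched with a toy example. Here a sparse 3D point cloud and a pinhole projection stand in for the CT volume and the DRR operator, and a squared projection distance stands in for the image similarity metric; all names are illustrative, and the actual pipeline uses DRRs, Grad-NCC, and derivative-free optimizers:

```python
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

def project(points, pose6):
    """Toy stand-in for a DRR operator: rigidly move a 3D point cloud by a
    6D pose (rotation vector + translation) and apply a pinhole projection."""
    R = Rotation.from_rotvec(pose6[:3]).as_matrix()
    moved = points @ R.T + pose6[3:]
    return moved[:, :2] / moved[:, 2:3]      # unit focal length

rng = np.random.default_rng(0)
volume = rng.normal(size=(50, 3)) + [0.0, 0.0, 10.0]   # keep points in front

true_pose = np.array([0.02, -0.03, 0.01, 0.5, -0.2, 1.0])
fluoro = project(volume, true_pose)          # stands in for the 2D image

# "Registration": optimize the 6D pose until projections match the 2D data.
result = least_squares(lambda p: (project(volume, p) - fluoro).ravel(),
                       x0=np.zeros(6))
```

The optimization recovers the 6D pose because the cost is minimal only when the simulated projection agrees with the observed one; real intensity-based registration replaces the point residuals with an image similarity score.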
Intraoperative fluoroscopy-guided surgical navigation systems using 2D/3D registrations commonly require manual initialization, which is a tedious task and easily interrupts surgical workflows [16]. A robust automatic initialization method can dramatically improve the success rate of 2D/3D registration by reducing the likelihood of the optimization converging to local minima. Anatomical landmarks are biologically meaningful locations in anatomy, which have been used to initialize image registration [17]. Convolutional neural networks (CNNs) have shown superior performance on the task of fluoroscopy landmark detection when large-scale training data is available [18]. Payer et al. evaluated different CNN architectures to detect multiple landmark locations in hand X-rays by regressing a single heat map for each landmark [19]. Bier et al. proposed an approach to detect multiple anatomical landmarks in X-ray images from arbitrary view directions [20]. Esteban et al. proposed an iterative landmark detection pipeline to initialize 2D/3D registration [21]. Several researchers have combined landmark detection and semantic segmentation using multi-task networks and achieved favorable results. Kordon et al. segmented the bones of the knee joint and located two anatomical landmarks [22]. Laina et al. proposed a concurrent segmentation and localization network to annotate tools in laparoscopy and retinal microsurgery [23]. Gao et al. applied this idea and designed a U-net based structure for concurrently segmenting a continuum manipulator and two distinct landmarks in fluoroscopy images [24]. Grupp et al. further investigated this method and proposed a pipeline that segments 6 hip anatomical structures and 14 landmarks in hip fluoroscopy images, which allows fully automatic intraoperative registration during fluoroscopic navigation of the hip [25].
We extend the latter initialization method to be used within a registration paradigm applicable to femoroplasty.
Concurrent optimization of multi-view projections can overcome single-view ambiguity by exploiting geometrically varying information. However, the relative pose information of the projection views must be accurately estimated. Gong et al. proposed using intensity-based registration to estimate the position of bone fragments with four fluoroscopic views. Though the registration error was small, the C-arm position was assumed to be tracked optically or using specialized fiducials in the image [26]. Otake et al. used a custom-designed hybrid fiducial for C-arm tracking and pose estimation [27]. Yao et al. used a corkscrew fiducial object for C-arm extrinsic calibration by identifying object features within an X-ray image [28]. These methods were precise but introduced an extra bone pin fixation to the patient. Several groups have used intraoperative fluoroscopy images to navigate a surgical robot for cutting or drilling of the bone [10], [29], [30]. However, these methods rely on optical tracking or a specially designed fiducial for multi-view calibration, and thus share the drawbacks of fiducial-based methods.
Grupp et al. proposed a multi-view fluoroscopy navigation method for pose estimation of periacetabular osteotomy fragments without fiducials [31]. Because the pelvis is abundant in features and can be accurately registered using intensity-based algorithms, they treated the pelvis itself as a “fiducial” object. Single-view registration of the pelvis was first performed for each view and the pelvis coordinate frame was then used as the C-arm world frame for further registrations. They have reported a clinically acceptable accuracy for the fragment pose and shape estimation. We extend this method by including both pelvis and femur in single-view registrations, and then using the pelvis coordinate frame to align the C-arm poses for multi-view registrations of the femur and the D/I device. Our proposed method is fiducial-free, and the registration result can be applied to navigate the robot for positioning the D/I device.
The contributions of this paper are as follows: 1) development of an automatic fiducial-free 2D/3D registration pipeline for intraoperative pose estimation of the femur and D/I device for robot-assisted femoroplasty; 2) demonstration of the feasibility of the proposed pipeline via simulation and cadaveric studies; 3) analysis of the effect of registration inaccuracy on biomechanics.
II. METHODS
Fig. 2 presents our proposed registration pipeline for femoroplasty. The intraoperative registration consists of three sequential stages: 1) pelvis registration, 2) femur registration, and 3) D/I device registration. The C-arm detector plane is positioned close to the patient in order to capture 50 to 80 percent of the pelvis anatomy. The patient is assumed to be stationary during the procedure. After registration, the robot moves the D/I device to the entry point on the proximal femur and then performs the planned drilling followed by bone cement injection.
Fig. 2:
Illustration of the registration pipeline. Left: 3D pelvis anatomical landmark locations and an example of CNN-based 2D landmark detection and bone segmentation. Stages 1-3 show three multi-view intensity-based registrations. The pink arrows correspond to the initialization procedures. The transformations are described in Section II-B.
A. Preoperative Processing and Intraoperative Data Acquisition
Lower torso preoperative CT scan images are acquired. The CT voxel spacing is 1.0 × 1.0 × 0.5 mm with dimensions 512 × 512 × 1056. The pelvis and target femur volumes are resampled to 1 mm isotropic voxel spacing and then segmented using the automatic method described in [32] and used in Grupp et al.'s work [31]. 3D pelvis anatomical landmarks consisting of the anterior superior iliac spine (ASIS), center of femoral head (FH), superior pubic symphysis (SPS), inferior pubic symphysis (IPS), medial obturator foramen (MOF), inferior obturator foramen (IOF), and the greater sciatic notch (GSN) are shown in Fig. 2. These landmarks are manually annotated from the preoperative CT scan images.
Intraoperatively, 6 fluoroscopic images are obtained from three independent C-arm poses. For each pose, a fluoroscopic image is taken of the anatomy of interest. The device is then moved into the field of view and another fluoroscopic image is acquired. Fig. 3 shows an example set of simulated fluoroscopic images. The first fluoroscopic view (image➀) is approximately anterior/posterior (AP). We then keep the C-arm pose (C-arm Pose1) the same and passively move the robot and the D/I device underneath the patient bed, centered along the source-to-detector focal line. The initial D/I device position is set to be around 50 cm away from the C-arm capture range. The robot trajectory and pose configurations are saved for future automatic repetitions of the movement. We take a second image (image➁). We then rotate the C-arm along its orbit by +10° (C-arm Pose2). The robot pose is kept the same and we take image➂. Again, we keep the C-arm pose the same, move the robot outside the capture range, and take image➃. Following the same routine, we rotate the C-arm by −15° (C-arm Pose3) with respect to the first view and take image➄. We then command the robot to move to the saved pose configuration, and we take image➅.
Fig. 3:
Top: Illustrations of the multi-view C-arms, anatomy, patient bed and robot setup. The top blue arrows show the C-arm rotations. The first view C-arm frame is shown in RGB cross arrows at the source position. Bottom: An example set of simulated fluoroscopic images. The first row in orange is used for pelvis and femur registration. The second row in blue is used for device registration.
Because the robot's repeatability is sub-millimeter, we assume the device pose is unchanged after the repeated movements. To this end, images➀➃➄ are used for pelvis and femur registration, and images➁➂➅ are used for device registration.
B. Intraoperative Registration
The diagram in Fig. 4 shows the registration workflow. The registration steps are as follows:
Fig. 4:
Registration workflow.
1). Pelvis Registration:
We use a pretrained CNN to initialize pelvis registration as described in Section II-A. The CNN takes image➀ as input and performs segmentation and pelvis landmark detection. An initial pose estimate of the first-view C-arm frame (C1) with respect to the pelvis (PV), T^{PV}_{C1}, is obtained by solving the PnP problem [33] using the detected 2D landmarks and their corresponding 3D anatomical landmarks. Once the initial pose is estimated, a single-view intensity-based registration of the pelvis is performed. Single-view intensity-based 2D/3D registration is achieved by creating Digitally Reconstructed Radiographs (DRRs) and calculating a similarity score between each DRR and the intraoperative image I. A DRR is generated by computing ray-casting line integrals through a perspective projection of the 3D volume onto a 2D image plane. Using a preoperative CT scan (V), a DRR operator (P), a similarity metric (S), and a regularizer over plausible poses (R), the registration recovers the pelvis pose (θ_p) by solving the following optimization problem:
θ̂_p = argmin_{θ_p ∈ SE(3)} S(P(θ_p; V), I) + R(θ_p)    (1)
Normalized gradient cross-correlation (Grad-NCC) scores are computed over image patches, and the mean value is used as the similarity measure for the intensity-based registration [34]. Ray casting for DRR generation and similarity metric computation are accelerated on the GPU. The 2D image is downsampled 4× in each dimension. The optimization is conducted using a state-of-the-art optimization strategy, "Covariance Matrix Adaptation: Evolutionary Search" (CMA-ES) [35]. The registration produces a single-view pose estimate of the pelvis, T̂^{C1}_{PV}.
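As a concrete illustration, a patch-wise gradient NCC similarity in the spirit of Grad-NCC can be written in a few lines of NumPy. This is a simplified sketch: the patch size and the gradient operator are illustrative choices, not the paper's exact implementation:

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalized cross-correlation of two equally sized arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def grad_ncc(fixed, moving, patch=32):
    """Mean patch-wise NCC of the image gradients (a Grad-NCC sketch)."""
    scores = []
    # np.gradient returns one array per image axis; compare them pairwise.
    for grad_f, grad_m in zip(np.gradient(fixed), np.gradient(moving)):
        for i in range(0, fixed.shape[0] - patch + 1, patch):
            for j in range(0, fixed.shape[1] - patch + 1, patch):
                scores.append(ncc(grad_f[i:i + patch, j:j + patch],
                                  grad_m[i:i + patch, j:j + patch]))
    return float(np.mean(scores))
```

Comparing gradients rather than raw intensities makes the score emphasize edges, which is why gradient-based metrics are popular for matching DRRs against fluoroscopy.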
In order to estimate the poses of the subsequent two views, we perform an exhaustive search starting from the registered pose of the pelvis in the first view. The search space differs from the first view by a rotation along the C-arm orbit of ±45° in 0.5° increments. The pose corresponding to the best similarity score is used to initialize the single-view intensity-based registrations for the other two pelvis views. To avoid local minima, both the pelvis and the femur are used for the exhaustive search.
The local minima problem occurs when the pelvis occupies less than half of the image (e.g. image➃). To remedy this problem, we perform a single-view registration of the femur after the single-view pelvis registration. We use the pelvis registration pose estimate to initialize the femur registration, T^{C1}_{FM}. The femoral head (FH) center point is set as the rotation center using x^{C1}_{FH} = T̂^{C1}_{PV} x^{PV}_{FH}, where x^{PV}_{FH} is obtained from the preoperative 3D landmark annotation. The registration follows the definition of Eqn. 1, but we constrain the optimization search space to be rotation only (θ_f ∈ SO(3)) with respect to the femoral head center point (FH). The pelvis is fixed in the background. We use the same downsampling and optimization strategy. The registration produces a single-view femur pose estimate, T̂^{C1}_{FM}. This estimate is sufficiently accurate for the exhaustive search process. During the search, we use both the pelvis and femur single-view registration pose estimates (T̂^{C1}_{PV}, T̂^{C1}_{FM}) to generate DRRs for similarity score calculation. The exhaustive search produces pose estimates of the pelvis with respect to the other two C-arm views, T̂^{C2}_{PV} and T̂^{C3}_{PV}, which serve as initializations for the subsequent single-view intensity-based registrations.
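The orbital exhaustive search above can be sketched as follows. We assume the orbit rotation is about a single axis of the first-view camera frame (here the x axis) and take a generic `score_fn`; both are illustrative assumptions, and in the pipeline the score is the DRR similarity of the pelvis and femur:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def orbit_search(T_pv_c1, score_fn, half_range=45.0, step=0.5):
    """Scan candidate poses that differ from the first-view registration by
    a rotation about the C-arm orbit axis; return the best-scoring pose."""
    best_T, best_s = None, -np.inf
    for deg in np.arange(-half_range, half_range + step, step):
        R = np.eye(4)
        R[:3, :3] = Rotation.from_euler("x", deg, degrees=True).as_matrix()
        T = R @ T_pv_c1                   # candidate pose for the new view
        s = score_fn(T)
        if s > best_s:
            best_T, best_s = T, s
    return best_T, best_s
```

The winning candidate is only used as an initialization; the subsequent single-view intensity-based registration refines it over the full pose space.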
The above single-view pelvis registrations produce pose estimates in three C-arm extrinsic views: T̂^{C1}_{PV}, T̂^{C2}_{PV}, and T̂^{C3}_{PV}. Thus, the relative poses of these three views can be recovered using
T^{C1}_{C2} = T̂^{C1}_{PV} (T̂^{C2}_{PV})^{-1},   T^{C1}_{C3} = T̂^{C1}_{PV} (T̂^{C3}_{PV})^{-1}    (2)
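Recovering the relative view poses amounts to composing homogeneous transforms through the shared pelvis frame. A minimal sketch, where `T_c1_pv` denotes the pose mapping pelvis coordinates into the view-1 camera frame (the naming is ours):

```python
import numpy as np

def inv_se3(T):
    """Closed-form inverse of a 4x4 rigid transform."""
    Ti = np.eye(4)
    Ti[:3, :3] = T[:3, :3].T
    Ti[:3, 3] = -T[:3, :3].T @ T[:3, 3]
    return Ti

def relative_view_poses(T_c1_pv, T_c2_pv, T_c3_pv):
    """Express views 2 and 3 in the first C-arm frame via the pelvis
    'fiducial': T_c1_c2 = T_c1_pv * inv(T_c2_pv), likewise for view 3."""
    return T_c1_pv @ inv_se3(T_c2_pv), T_c1_pv @ inv_se3(T_c3_pv)
```

Because the pelvis is rigid and stationary, the same pelvis frame links every view, so no external tracker is needed to calibrate the multi-view geometry.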
We then use the three-view geometry recovered from the pelvis coordinate frame to run multi-view pelvis registration. Registration with multiple 2D views is accomplished by creating DRRs at each view and summing the similarity scores across views [27], [31]. The multi-view registration optimizes the pelvis pose (θ_p) with the 3 intraoperative views (I_1, I_2, I_3):
θ̂_p = argmin_{θ_p ∈ SE(3)} Σ_{v=1}^{3} S(P_v(θ_p; V), I_v) + R(θ_p)    (3)

where P_v denotes the DRR operator using the projection geometry of view v.
The similarity metric, the downsampling rate, and the optimization algorithm remain the same as for the single-view pelvis registration described above. The multi-view registration produces a refined pelvis pose estimate, T̂^{C1}_{PV}.
2). Femur Registration:
We initialize multi-view femur registration with the first C-arm view femur registration result, T̂^{C1}_{FM}. The femoral head center position is refined using the multi-view pelvis pose estimate T̂^{C1}_{PV}, and it constrains the femur registration to be rotation only. The registration shares the same formulation as Eqn. 3, but we constrain the femur pose to be rotation only (θ_f ∈ SO(3)) as in the single-view registration. We keep the downsampling and optimization algorithm the same. Multi-view femur registration estimates the femur pose with respect to the first C-arm view, T̂^{C1}_{FM}.
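The rotation-only parameterization about a fixed femoral head center can be expressed as a rigid transform whose translation is induced by the rotation, so the chosen point never moves. A sketch (the `rotvec`/`center` naming is ours):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotate_about_point(rotvec, center):
    """Rigid transform that rotates by `rotvec` (axis-angle, radians) about a
    fixed 3D point, mirroring a rotation-only search about the femoral head."""
    T = np.eye(4)
    R = Rotation.from_rotvec(rotvec).as_matrix()
    T[:3, :3] = R
    T[:3, 3] = center - R @ center   # translation chosen so `center` is fixed
    return T
```

Composing such a transform with the current femur pose yields the candidate pose evaluated by the optimizer, reducing the search space from 6 to 3 degrees of freedom.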
After the multi-view pelvis and femur registrations, we conduct a multi-view multi-object registration by jointly registering the pelvis and femur in order to perform a local search. The formulation is inspired by [31]:
{θ̂_1, …, θ̂_N} = argmin_{θ_1,…,θ_N} Σ_{v=1}^{3} S(P_v(θ_1, …, θ_N; V_1, …, V_N), I_v) + Σ_{n=1}^{N} R(θ_n)    (4)
where N = 2 corresponds to θ_p and θ_f. The optimization strategy is Bounded Optimization by Quadratic Approximation (BOBYQA) [36]. BOBYQA is less robust to local minima than CMA-ES but runs significantly faster [31].
3). Device Registration:
We use the same multi-view intensity-based registration method to estimate the device pose. The 3D volume is constructed from the CAD model of the fabricated device as shown in Fig. 3. For the D/I device, the metal HU value is set to 5000. The initial pose of the injection device (ID) with respect to the first-view C-arm frame (C1), T^{C1}_{ID}, is a pre-defined configuration, which places the D/I device along the source-to-detector focal direction and 15 cm below the patient's bed. We use the same C-arm three-view geometry recovered from the pelvis to perform multi-view registration of the device. The optimization strategy is CMA-ES. The registration produces a pose estimate of the device with respect to the first C-arm view, T̂^{C1}_{ID}.
We finally perform a multi-view multi-object registration of the femur and the device, formulated in the same way as Eqn. 4. The optimization strategy is BOBYQA. To this end, we register both the femur and the device to the multi-view geometry. The relative pose of these two objects can then be obtained using T^{FM}_{ID} = (T̂^{C1}_{FM})^{-1} T̂^{C1}_{ID}.
C. Clinical Evaluation
To evaluate the effect of registration error on the clinical efficacy of the procedure, we create a Finite Element (FE) model of the segmented target femur and determine the desired location and volume of the cement injection profile following the preoperative planning paradigm fully described in [8]. The planning paradigm consists of three steps: 1) utilizing an FE model of the femur to find an optimized cement profile, 2) approximating the cement profile with a series of ellipsoids for a single-path injection and determining its corresponding drill path (i.e. drill entry point and direction), and 3) hydrodynamic simulation to predict the cement diffusion within the trabecular bone. We then overlay the volumetric geometry of the cement on the femur model and perform FE analysis. Once the desired injection pattern is known, we translate the injection profile by 4 mm to resemble the potential worst-case scenario for our registration algorithm. After this modification, we run the hydrodynamic simulation again, overlay the new injection volume on the femur model, and perform FE analysis on the new model. We then compare the results of the FE analyses for the two models. For comparison, we modify the FE model of the femur by translating the cement elements and estimate the corresponding yield fracture load of the femur as described in [5]. An overview of the procedure for biomechanical analysis is shown in Fig. 5.
Fig. 5:
Biomechanical analysis workflow: an FE model (top right) is created from segmented CT scans of the specimen. Three-step preoperative planning (bottom left) was performed on the specimen and evaluated for both the optimal drilling path (solid blue) and a path 4 mm inferior to the optimal (dotted blue).
III. EXPERIMENTS
A. Simulation Study
We verified the accuracy of the proposed method with a series of simulation studies with randomized projection geometries and anatomical poses. We simulated the projection geometry by approximating the intrinsic parameters of a Siemens CIOS Fusion C-arm, which has image dimensions of 1536 × 1536, isotropic pixel spacing of 0.194 mm/pixel, a source-to-detector distance of 1020 mm, and a principal point at the center of the image. The three views used for this study included a perturbed AP view and two views at random rotations about the C-arm orbit with means and STDs of +10 ± 3° and −15 ± 3°. Random movements of the pelvis were sampled uniformly to simulate patient pose variations, with translations from 0 to 10 mm and rotations from −10° to 10°. Femur poses were sampled as random rotations about the center of the femoral head (FH), with the rotation angle sampled uniformly between −15° and 15°. Perturbed movements of the injection device with respect to the C-arm were sampled with translations from 0 to 10 mm and rotations from −5° to 5°. The full pipeline was initialized using the annotations determined by the CNN. We used the registration workflow described in Section II-B to produce the pose estimates of the femur and the D/I device.
To evaluate the performance of our registration algorithm, we report the registration accuracy based on our simulated "groundtruth" poses of the objects and the registration results. The rotation errors across anatomical axes are computed by decomposing the rotation matrix of the delta frame into Euler angles using the xyz convention. The total rotation error is the axial angle of the rotation matrix. Pelvis registration is reported from the rotation center frame (PC) at the center of the pelvis volume to the first AP-view C-arm frame (C1), T^{C1}_{PC}, where PC is annotated from the segmented pelvis volume. The transformation error is reported using
δT_pel = (T̂^{C1}_{PC})^{-1} T^{C1,gt}_{PC}    (5)
where T^{C1,gt}_{PC} is the groundtruth transformation when the C-arm is set to the AP view. Femur registration accuracy is reported with respect to the rotation center frame (FH) at the center of the femoral head when the C-arm is set to the AP view (C1), using T^{C1}_{FH}. The transformation error is:
δT_fem = (T̂^{C1}_{FH})^{-1} T^{C1,gt}_{FH}    (6)
where T^{C1,gt}_{FH} is the groundtruth transformation when the C-arm is set to the AP view. The device registration accuracy is reported using the injection device guide center frame (ID) with respect to the AP-view C-arm frame (C1), T^{C1}_{ID}. The transformation error is:

δT_inj = (T̂^{C1}_{ID})^{-1} T^{C1,gt}_{ID}
where T^{C1,gt}_{ID} is the groundtruth transformation. Fig. 6 shows the coordinate frames used to report these errors; the per-axis components of δT_pel, δT_fem and δT_inj are reported along these axes.
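The error decomposition used above, per-axis xyz Euler angles plus the total axial angle of the delta rotation, can be computed from a delta transform as follows (a sketch using SciPy; the argument names are generic):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_error(T_est, T_gt):
    """Decompose the delta transform between an estimated and a groundtruth
    pose into per-axis translation errors, xyz Euler rotation errors (deg),
    and the total axis-angle rotation error (deg)."""
    dT = np.linalg.inv(T_est) @ T_gt
    trans = dT[:3, 3]
    rot = Rotation.from_matrix(dT[:3, :3])
    euler_deg = rot.as_euler("xyz", degrees=True)
    total_deg = np.degrees(np.linalg.norm(rot.as_rotvec()))
    return trans, euler_deg, total_deg
```

The axis-angle magnitude is reported as the "total" rotation error because, unlike individual Euler components, it is independent of the decomposition order.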
Fig. 6:
Upper: Overlay example of registration convergence stage. 2D overlay of multiple input simulation X-rays (background) and DRR-derived edges in green. Lower: Coordinate frames and path direction vectors used to report the registration error are marked with RGB cross arrows. Coordinate frames include: PC - Pelvis volume center; FH - Femoral head center; ID - Injection device guide center. Path direction vectors include pfem and pinj, which are described in Section IV-A.
We also report the multi-view C-arm pose estimation accuracy by calculating the relative transformations of the second and third C-arm frames with respect to the AP-view C-arm frame. The transformation errors are reported using
δT_{C2} = (T̂^{C1}_{C2})^{-1} T^{C1,gt}_{C2}    (7)
δT_{C3} = (T̂^{C1}_{C3})^{-1} T^{C1,gt}_{C3}    (8)
The accuracy of the entry point with respect to the guide tip is critical to the success of drilling and cement injection. We define the following metrics in order to quantify the displacement error and direction error of our registration algorithm.
We select the D/I entry point (EP) on the greater trochanter surface based on the biomechanical analysis, and report its error in the AP C-arm frame using
δx_EP = ‖x̂^{C1}_{EP} − x^{C1,gt}_{EP}‖_2    (9)
where x^{C1}_{EP} is calculated using x^{C1}_{EP} = T^{C1}_{FH} x^{FH}_{EP}. The tip position of the injection device is annotated at the center of the guide tip (TIP) in the device model. We report the error of the guide tip in the AP-view C-arm frame using
δx_TIP = ‖x̂^{C1}_{TIP} − x^{C1,gt}_{TIP}‖_2    (10)
where x^{C1}_{TIP} is calculated using x^{C1}_{TIP} = T^{C1}_{ID} x^{ID}_{TIP}. The relative entry point and guide tip error is calculated as follows:
δx_rel = ‖(x̂^{C1}_{EP} − x̂^{C1}_{TIP}) − (x^{C1,gt}_{EP} − x^{C1,gt}_{TIP})‖_2    (11)
The planned injection path is described by a vector from the entry point (x_EP) to a target point (the center of the femoral head, x_FH): p_fem = x_FH − x_EP. Similarly, the injection guide direction is described by the vector p_inj along the guide axis. We calculate the rotation errors of the path planning vector and the guide direction vector at the AP view of the C-arm as follows:
δθ_fem = arccos((p̂_fem · p^{gt}_fem) / (‖p̂_fem‖ ‖p^{gt}_fem‖))    (12)
δθ_inj = arccos((p̂_inj · p^{gt}_inj) / (‖p̂_inj‖ ‖p^{gt}_inj‖))    (13)
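Eqs. 12 and 13 are the standard angle between two direction vectors; a small helper makes this concrete (the clip guards against floating-point round-off pushing the cosine outside [−1, 1]):

```python
import numpy as np

def direction_error_deg(v_est, v_gt):
    """Angle in degrees between an estimated and a groundtruth direction."""
    cos = np.dot(v_est, v_gt) / (np.linalg.norm(v_est) * np.linalg.norm(v_gt))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```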
B. Cadaver Study
A female specimen, including lower torso, pelvis, and femurs, was used for the study. A Siemens CIOS Fusion C-arm with 30 cm flat panel detector was used to collect intraoperative fluoroscopy. The setup for C-arm, specimen bed, and robot-held D/I device is shown in Fig. 7.
Fig. 7:
Left: Cadaver study setup with injection device, specimen and C-arm. Middle: Two example AP view intraoperative fluoroscopic images corresponding to images➀➁ in Fig. 3. Right top: The injected BBs are zoomed. One example BB location is marked with orange circle. Right bottom: Picture of BBs glued on the surface of the injection device for groundtruth injection pose calculation. One example BB is marked with blue circle.
To obtain the groundtruth poses for the femur and the D/I device, metallic BBs were implanted into the femoral head and glued onto the surface of the device as shown in Fig. 7. The BBs were implanted close to the trochanter and the femoral head center region in order to accurately estimate the femoral head pose. An injection device (Halifax Biomedical Inc., St. Johns, Canada) was used to insert seven BBs with 1 mm diameter into the femoral head. In addition, five BBs with 1.5 mm diameter were glued and distributed evenly around the plastic head of the D/I device. We then dismounted the injection device and took a CT scan. The 3D locations of the BBs were manually labeled in the CT scans of the specimen and the injection device. The 2D BB locations were manually annotated in the fluoroscopic views corresponding to the AP C-arm view (images ➀ and ➁ in Fig. 3). We performed 6 registration workflows with varying C-arm geometries and specimen poses, resulting in 36 fluoroscopic images. Error metrics were computed as described in the previous section.
IV. RESULTS
A. Simulation Study
We performed a total of 1,000 simulations with varying projection geometries and pelvis initializations. Fig. 6 shows an example of 2D and 3D overlays when registration converges, showing that the 3D pose prediction successfully matches the groundtruth pose. The mean femur registration error (δT_fem) was 0.81 ± 0.76 mm and 0.73 ± 0.23°, with medians of 0.74 mm and 0.71°; the mean injection device registration error (δT_inj) was 1.00 ± 0.77 mm and 0.23 ± 0.21°, with medians of 0.80 mm and 0.20°, reported in translation and rotation, respectively. The registration errors about each axis are tabulated in Table I. The mean errors of the multi-view C-arm pose estimation (δT_{C2}, δT_{C3}) were 1.37 ± 0.82 mm, 0.23 ± 0.16° and 0.69 ± 0.45 mm, 0.16 ± 0.14°, respectively. Fig. 8 presents the histogram distributions of the femur, pelvis, injection device and multi-view C-arm pose errors.
TABLE I:
Simulation Results of Registration Errors
| | | Translation error (mm) | | | | Rotation error (degrees) | | | |
|---|---|---|---|---|---|---|---|---|---|
| | | x (IS) | y (LR) | z (AP) | total | x (IS) | y (LR) | z (AP) | total |
| Pelvis | mean | 0.23 ± 0.23 | 0.11 ± 0.23 | 0.85 ± 0.76 | 0.92 ± 0.80 | 0.11 ± 0.28 | 0.08 ± 0.23 | 0.06 ± 0.09 | 0.17 ± 0.36 |
| | median | 0.20 | 0.07 | 0.72 | 0.77 | 0.08 | 0.05 | 0.04 | 0.13 |
| Femur | mean | 0.14 ± 0.15 | 0.12 ± 0.17 | 0.77 ± 0.74 | 0.81 ± 0.76 | 0.65 ± 0.23 | 0.25 ± 0.15 | 0.11 ± 0.10 | 0.73 ± 0.23 |
| | median | 0.12 | 0.09 | 0.72 | 0.74 | 0.64 | 0.24 | 0.09 | 0.71 |
| Injection Device | mean | 0.22 ± 0.26 | 0.39 ± 0.38 | 0.85 ± 0.68 | 1.00 ± 0.77 | 0.13 ± 0.13 | 0.14 ± 0.17 | 0.08 ± 0.10 | 0.23 ± 0.21 |
| | median | 0.17 | 0.31 | 0.68 | 0.80 | 0.10 | 0.11 | 0.06 | 0.20 |
| C-arm view2 | mean | 0.36 ± 0.34 | 0.19 ± 0.18 | 1.23 ± 0.84 | 1.37 ± 0.82 | 0.09 ± 0.08 | 0.15 ± 0.14 | 0.10 ± 0.10 | 0.23 ± 0.16 |
| | median | 0.27 | 0.14 | 1.11 | 1.25 | 0.07 | 0.11 | 0.06 | 0.19 |
| C-arm view3 | mean | 0.22 ± 0.22 | 0.23 ± 0.18 | 0.55 ± 0.43 | 0.69 ± 0.45 | 0.10 ± 0.13 | 0.08 ± 0.08 | 0.05 ± 0.05 | 0.16 ± 0.14 |
| | median | 0.15 | 0.20 | 0.47 | 0.60 | 0.08 | 0.06 | 0.04 | 0.14 |
Fig. 8:
(a)-(e): Normalized 2D histograms of the pelvis pose (δT_pel), femur pose (δT_fem), injection device pose (δT_inj), C-arm view2 (δT_{C2}), and C-arm view3 (δT_{C3}) errors for the simulation studies. (f)-(h): Normalized histograms of the l2 distance error in mm of the femur entry point (δx_EP), injection device guide tip (δx_TIP) and their relative error (δx_rel). (i)-(j): Normalized histograms of the direction error in degrees of the femur path vector (δθ_fem) and the injection guide direction vector (δθ_inj).
The entry point error (δxEP), injection device tip error (δxTIP), and their relative error are presented in Table II. The proposed method achieved a mean entry point l2 distance error of 1.70 ± 0.94 mm with a median of 1.64 mm. The mean femur path direction error was 0.63 ± 0.21° with a median of 0.62°, and the mean guide path error was 0.17 ± 0.19° with a median of 0.14°. Fig. 8 shows the histogram distributions of these errors.
TABLE II:
Simulation Study Results of Error Metrics
| Metric | Mean | Median |
|---|---|---|
| Entry Point (mm) | 1.70 ± 0.94 | 1.64 |
| Guide Tip (mm) | 0.93 ± 0.81 | 0.74 |
| Relative (mm) | 1.26 ± 0.74 | 1.15 |
| Femur Path (°) | 0.63 ± 0.21 | 0.62 |
| Guide Path (°) | 0.17 ± 0.19 | 0.14 |
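The point errors (mm) and direction errors (°) in Table II are ordinary l2 and angular distances between estimated and planned quantities. A sketch of how such metrics are typically computed (helper names are illustrative):

```python
import numpy as np

def point_error(p_est, p_gt):
    """l2 distance (mm) between an estimated and a ground-truth 3D point."""
    return float(np.linalg.norm(np.asarray(p_est, float) - np.asarray(p_gt, float)))

def direction_error(v_est, v_gt):
    """Angle (deg) between two direction vectors, independent of their length."""
    v1 = np.asarray(v_est, float)
    v2 = np.asarray(v_gt, float)
    v1 /= np.linalg.norm(v1)
    v2 /= np.linalg.norm(v2)
    # Clip guards against arccos domain errors from floating-point round-off.
    return float(np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))))
```

For example, `point_error` applied to the planned entry point and its registered estimate yields the "Entry Point" row, and `direction_error` applied to the planned and estimated injection path vectors yields the "Femur Path" row.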
B. Cadaver Study
The proposed registration algorithm was tested on the intraoperative fluoroscopic images we collected. We report the errors of the femur entry point, the injection device guide tip, and the path directions. Table III presents the results of 6 independent trials. The mean relative error between the planned entry point and the guide tip was 2.64 ± 1.10 mm, and the mean direction errors were 1.36 ± 0.78° and 0.31 ± 0.26° for the femur injection path (δθfem) and the guide direction (δθinj), respectively.
TABLE III:
Cadaver Study Results of Error Metrics
| Trial ID | I | II | III | IV | V | VI |
|---|---|---|---|---|---|---|
| Entry Point (mm) | 1.34 | 2.44 | 2.41 | 1.99 | 3.67 | 4.38 |
| Guide Tip (mm) | 3.17 | 0.84 | 1.48 | 2.93 | 1.79 | 0.83 |
| Relative (mm) | 1.98 | 2.88 | 3.44 | 1.32 | 1.94 | 4.28 |
| Femur Path (°) | 1.35 | 0.73 | 1.82 | 0.83 | 0.73 | 2.69 |
| Guide Path (°) | 0.75 | 0.12 | 0.07 | 0.29 | 0.13 | 0.47 |
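The summary statistics quoted in the text can be reproduced from the per-trial values in Table III. The sketch below assumes the sample standard deviation (ddof=1); the dictionary keys are illustrative labels:

```python
import numpy as np

# Per-trial cadaver errors from Table III.
trials = {
    "Relative (mm)":    [1.98, 2.88, 3.44, 1.32, 1.94, 4.28],
    "Femur Path (deg)": [1.35, 0.73, 1.82, 0.83, 0.73, 2.69],
    "Guide Path (deg)": [0.75, 0.12, 0.07, 0.29, 0.13, 0.47],
}

for name, vals in trials.items():
    v = np.array(vals)
    # ddof=1 gives the sample (unbiased) standard deviation over 6 trials.
    print(f"{name}: {v.mean():.2f} ± {v.std(ddof=1):.2f}")
```

Running this reproduces the reported 2.64 ± 1.10 mm, 1.36 ± 0.78°, and 0.31 ± 0.26° summaries.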
C. Biomechanical Analysis
The biomechanical analysis demonstrated that the optimal pattern of cement injection can increase the yield load before fracture by 43.2% (from 1400 N to 2005 N). When we introduced a 4 mm error to the optimal location by moving the pattern inferiorly, the FE estimate of the yield load decreased to 1970 N (i.e., a 40.7% improvement over the unaugmented yield load).
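The reported improvements follow directly from the yield loads; a quick check of the percentages (loads taken from the text, function name illustrative):

```python
BASELINE = 1400.0  # unaugmented yield load (N)

def improvement(load, ref=BASELINE):
    """Percent increase in yield load over the unaugmented femur."""
    return 100.0 * (load - ref) / ref

print(f"optimal pattern:   {improvement(2005.0):.1f}%")  # 43.2%
print(f"4 mm inferior:     {improvement(1970.0):.1f}%")  # 40.7%
```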
V. DISCUSSION
The results suggest the feasibility of applying the proposed fiducial-free 2D/3D registration method to robot-assisted femoroplasty. Using the pelvis as a fiducial provides an accurate estimate of the multi-view C-arm poses, because the pelvic bone has detailed features that are well suited for multi-view registration. In addition, an accurate initialization of the femoral head center (FH) constrains the femur registration search space and helps avoid poor local minima. Fig. 9 shows correlation scatter plots between the pelvis error and the femur error. We report the pelvis translation error measured in the femoral head center (FH) frame to better illustrate its relationship with the femur translation error. The femur registration accuracy and the entry point accuracy are strongly correlated with the pelvis registration accuracy, with correlation coefficients of 0.90 and 0.75, respectively. This supports the conclusion that a more accurate pelvis registration leads to a better femur registration.
Fig. 9:
Left: Scatter plot of the correlation between the femoral head center translation error and the pelvis translation error reported in the femoral head center frame. Right: Scatter plot of the correlation between the femur entry point error and the pelvis translation error reported in the femoral head center frame. Correlation coefficients are marked at the bottom right of each plot.
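The coefficients in Fig. 9 are standard Pearson correlation coefficients between paired error samples. A sketch, assuming equal-length 1D arrays of per-simulation errors (function name illustrative):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    # Covariance normalized by the product of the standard deviations.
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc**2) * np.sum(yc**2)))
```

Applied to the per-simulation pelvis translation errors against the femur translation errors (and against the entry point errors), this yields the 0.90 and 0.75 values reported above.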
The device registration is not designed for tracking the robot motion; it is used to accurately position the device at the entry point. The robot’s motion is then determined from the preoperative patient-specific plan and the robot kinematics, with sub-millimeter accuracy, to move the robot close to the entry point. A potential alternative approach is to use the D/I device features for multi-view registration. We conducted simulations using the same steps described in Section II-B for pelvis registration, but with the injection device replacing the pelvis as the “fiducial” for estimating the C-arm poses. The mean registration errors of the two adjacent C-arm views were 1.49 mm/0.69° and 1.55 mm/0.67° in translation and rotation, respectively. No improvement was observed over using the pelvis as the fiducial for estimating the relative C-arm views.
The registration accuracy decreased in the cadaver studies. Potential reasons are: 1) the spectrum and exposure of the real fluoroscopic images differ from those of the DRR images; 2) the simulated C-arm projection geometries differ from those of the cadaveric studies; 3) the BB injection, annotation, and segmentation are likely to introduce errors. The registration accuracy is also affected by the trade-off between the magnification factor and how much of the pelvis appears in the image: capturing a larger portion of the pelvis includes more features, but reduces the magnification factor. We did not investigate the optimal trade-off in this paper; our future work will include studying the relationship between projection geometry and registration accuracy.
Previous studies have shown that patient-specific preoperative planning for femoroplasty yields a significantly higher biomechanical benefit than generalized injections [8]. For the FE simulations, boundary conditions were set to simulate a sideways fall, which constitutes 80% of falls in the elderly [4]. The fiducial-free registration pipeline proposed in this study results in a small orientation error for the femur (0.63 ± 0.21°). The biomechanical analysis confirmed the hypothesis that the errors due to fiducial-free registration do not significantly affect the biomechanical outcome of the femoroplasty. To evaluate the impact of a translation error on biomechanics, we performed FE simulations of the sideways fall with an entry point error comparable to the registration error. Note that the optimal cement augmentation increased the yield load by 43.2%; the yield load estimated for a pattern shifted 4 mm inferiorly from the optimum reduced the improvement by only 2.5 percentage points (from 43.2% to 40.7%), suggesting that the registration accuracy is sufficient.
Apart from 2D/3D registration, a surgical system for femoroplasty requires real-time control of a robotic arm and an injection device to deliver the cement. While the focus of this study is the introduction of the fiducial-free registration algorithm, the proposed simultaneous registration of the anatomy and the tool (i.e., the drilling guide) can also help reduce overall system errors.
Our registration pipeline is based on X-ray imaging because the C-arm X-ray machine is common in orthopedic operating rooms; using intraoperative X-ray therefore adds no imaging hardware to the procedure. Although our method exposes the patient to radiation, the use of X-rays in orthopedic applications is common, and taking six X-rays is not excessive compared to other orthopedic applications.
VI. CONCLUSION
We propose a fiducial-free 2D/3D registration pipeline that uses multi-view fluoroscopic images to register the femur and the drilling/injection device for robot-assisted femoroplasty. The method was evaluated through 1,000 simulations with varying geometries and initializations, and through a cadaveric specimen study. The results demonstrate the feasibility of an image-based, fiducial-free registration approach for positioning the robot drill and injector in patient-specific cement injection planning.
Acknowledgment
This research has been financially supported by NIH R01EB023939, NIH R21EB020113, and Johns Hopkins University Applied Physics Laboratory internal funds. The funders had no role in the study design, data collection, analysis of the data, writing of the manuscript, or the decision to submit the manuscript for publication.
Contributor Information
Cong Gao, Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21211.
Amirhossein Farvardin, Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA 21211.
Robert B. Grupp, Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21211.
Mahsan Bakhtiarinejad, Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA 21211.
Liuhong Ma, Department of Cranio-maxillo-facial Surgery Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, CHN,100144.
Mareike Thies, Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany 91058.
Mathias Unberath, Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21211.
Russell H. Taylor, Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21211.
Mehran Armand, Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA 21211; Department of Orthopaedic Surgery and Johns Hopkins Applied Physics Laboratory, Baltimore, MD, USA 21224.
References
- [1] Goldacre MJ, Roberts SE, and Yeates D, “Mortality after admission to hospital with fractured neck of femur: database study,” BMJ, vol. 325, no. 7369, pp. 868–869, 2002.
- [2] Teng GG et al., “Mortality and osteoporotic fractures: is the link causal, and is it modifiable?” Clinical and Experimental Rheumatology, vol. 26, no. 5, suppl. 51, p. S125, 2008.
- [3] Dinah A, “Sequential hip fractures in elderly patients,” Injury, vol. 33, no. 5, pp. 393–394, 2002.
- [4] Land J, Russell L, and Khan S, “Osteoporosis,” Clinical Orthopaedics and Related Research, vol. 372, pp. 139–150, 2000.
- [5] Basafa E and Armand M, “Subject-specific planning of femoroplasty: a combined evolutionary optimization and particle diffusion model approach,” Journal of Biomechanics, vol. 47, no. 10, pp. 2237–2243, 2014.
- [6] Basafa E, Murphy RJ, Otake Y, Kutzer MD, Belkoff SM, Mears SC, and Armand M, “Subject-specific planning of femoroplasty: an experimental verification study,” Journal of Biomechanics, vol. 48, no. 1, pp. 59–64, 2015.
- [7] Otake Y, Armand M, Sadowsky O, Armiger RS, Kutzer MD, Mears SC, Kazanzides P, and Taylor RH, “An image-guided femoroplasty system: development and initial cadaver studies,” in Medical Imaging 2010: Visualization, Image-Guided Procedures, and Modeling, vol. 7625. International Society for Optics and Photonics, 2010, p. 76250P.
- [8] Farvardin A, Basafa E, Bakhtiarinejad M, and Armand M, “Significance of preoperative planning for prophylactic augmentation of osteoporotic hip: A computational modeling study,” Journal of Biomechanics, 2019.
- [9] Kuo N, Lee J, Deguet A, Song D, Burdette EC, and Prince J, “Automatic segmentation of seeds and fluoroscope tracking (FTRAC) fiducial in prostate brachytherapy x-ray images,” in Medical Imaging 2010: Visualization, Image-Guided Procedures, and Modeling, vol. 7625. International Society for Optics and Photonics, 2010, p. 76252T.
- [10] Guéziec A, Kazanzides P, Williamson B, and Taylor RH, “Anatomy-based registration of CT-scan and intraoperative x-ray images for guiding a surgical robot,” IEEE Transactions on Medical Imaging, vol. 17, no. 5, pp. 715–728, 1998.
- [11] Yu W and Zheng G, “2D-3D regularized deformable B-spline registration: Application to the proximal femur,” in 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). IEEE, 2015, pp. 829–832.
- [12] Zhang X, Zhu Y, Li C, Zhao J, and Li G, “SIFT algorithm-based 3D pose estimation of femur,” Bio-Medical Materials and Engineering, vol. 24, no. 6, pp. 2847–2855, 2014.
- [13] Zaman A and Ko SY, “Improving the accuracy of 2D-3D registration of femur bone for bone fracture reduction robot using particle swarm optimization,” in Proceedings of the Genetic and Evolutionary Computation Conference Companion. ACM, 2018, pp. 101–102.
- [14] Miao S, Piat S, Fischer P, Tuysuzoglu A, Mewes P, Mansi T, and Liao R, “Dilated FCN for multi-agent 2D/3D medical image registration,” in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
- [15] Gao C, Grupp RB, Unberath M, Taylor RH, and Armand M, “Fiducial-free 2D/3D registration of the proximal femur for robot-assisted femoroplasty,” in Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling, vol. 11315. International Society for Optics and Photonics, 2020, p. 113151C.
- [16] Markelj P, Tomaževič D, Likar B, and Pernuš F, “A review of 3D/2D registration methods for image-guided interventions,” Medical Image Analysis, vol. 16, no. 3, pp. 642–661, 2012.
- [17] Johnson HJ and Christensen GE, “Consistent landmark and intensity-based image registration,” IEEE Transactions on Medical Imaging, vol. 21, no. 5, pp. 450–461, 2002.
- [18] Unberath M, Zaech J-N, Gao C, Bier B, Goldmann F, Lee SC, Fotouhi J, Taylor R, Armand M, and Navab N, “Enabling machine learning in x-ray-based procedures via realistic simulation of image formation,” International Journal of Computer Assisted Radiology and Surgery, vol. 14, no. 9, pp. 1517–1528, 2019.
- [19] Payer C, Štern D, Bischof H, and Urschler M, “Regressing heatmaps for multiple landmark localization using CNNs,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2016, pp. 230–238.
- [20] Bier B, Unberath M, Zaech J-N, Fotouhi J, Armand M, Osgood G, Navab N, and Maier A, “X-ray-transform invariant anatomical landmark detection for pelvic trauma surgery,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2018, pp. 55–63.
- [21] Esteban J, Grimm M, Unberath M, Zahnd G, and Navab N, “Towards fully automatic x-ray to CT registration,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2019, pp. 631–639.
- [22] Kordon F, Fischer P, Privalov M, Swartman B, Schnetzke M, Franke J, Lasowski R, Maier A, and Kunze H, “Multi-task localization and segmentation for x-ray guided planning in knee surgery,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2019, pp. 622–630.
- [23] Laina I, Rieke N, Rupprecht C, Vizcaíno JP, Eslami A, Tombari F, and Navab N, “Concurrent segmentation and localization for tracking of surgical instruments,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2017, pp. 664–672.
- [24] Gao C, Unberath M, Taylor R, and Armand M, “Localizing dexterous surgical tools in x-ray for image-based navigation,” arXiv preprint arXiv:1901.06672, 2019.
- [25] Grupp R, Unberath M, Gao C, Hegeman R, Murphy R, Alexander C, Otake Y, McArthur B, Armand M, and Taylor R, “Automatic annotation of hip anatomy in fluoroscopy for robust and efficient 2D/3D registration,” arXiv preprint arXiv:1911.07042, 2019.
- [26] Gong RH, Stewart J, and Abolmaesumi P, “Multiple-object 2-D–3-D registration for noninvasive pose identification of fracture fragments,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 6, pp. 1592–1601, 2011.
- [27] Otake Y, Armand M, Armiger RS, Kutzer MD, Basafa E, Kazanzides P, and Taylor RH, “Intraoperative image-based multiview 2D/3D registration for image-guided orthopaedic surgery: incorporation of fiducial-based C-arm tracking and GPU-acceleration,” IEEE Transactions on Medical Imaging, vol. 31, no. 4, pp. 948–962, 2011.
- [28] Yao J, Taylor RH, Goldberg RP, Kumar R, Bzostek A, Van Vorhis R, Kazanzides P, Gueziec A, and Funda J, “A progressive cut refinement scheme for revision total hip replacement surgery using C-arm fluoroscopy,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 1999, pp. 1010–1019.
- [29] Yao J, Taylor RH, Goldberg RP, Kumar R, Bzostek A, Van Vorhis R, Kazanzides P, and Gueziec A, “A C-arm fluoroscopy-guided progressive cut refinement strategy using a surgical robot,” Computer Aided Surgery, vol. 5, no. 6, pp. 373–390, 2000.
- [30] Yi T, Ramchandran V, Siewerdsen JH, and Uneri A, “Robotic drill guide positioning using known-component 3D–2D image registration,” Journal of Medical Imaging, vol. 5, no. 2, p. 021212, 2018.
- [31] Grupp RB, Hegeman R, Murphy R, Alexander C, Otake Y, McArthur B, Armand M, and Taylor RH, “Pose estimation of periacetabular osteotomy fragments with intraoperative x-ray navigation,” IEEE Transactions on Biomedical Engineering, 2019.
- [32] Krčah M, Székely G, and Blanc R, “Fully automatic and fast segmentation of the femur bone from 3D-CT images with no shape prior,” in 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro. IEEE, 2011, pp. 2087–2090.
- [33] Hartley R and Zisserman A, Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
- [34] Grupp RB, Armand M, and Taylor RH, “Patch-based image similarity for intraoperative 2D/3D pelvis registration during periacetabular osteotomy,” in OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis. Springer, 2018, pp. 153–163.
- [35] Hansen N and Ostermeier A, “Completely derandomized self-adaptation in evolution strategies,” Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.
- [36] Powell MJ, “The BOBYQA algorithm for bound constrained optimization without derivatives,” Cambridge NA Report NA2009/06, University of Cambridge, Cambridge, pp. 26–46, 2009.