Published in final edited form as: IEEE Trans. Autom. Sci. Eng., vol. 17, no. 4, pp. 2154–2161, 2020. doi: 10.1109/TASE.2020.2986503

Camera-Robot Calibration for the da Vinci® Robotic Surgery System

Orhan Özgüner 1, Thomas Shkurti 1, Siqi Huang 1, Ran Hao 1, Russell C Jackson 1, Wyatt S Newman 1, M Cenk Çavuşoğlu 1

Abstract

The development of autonomous or semi-autonomous surgical robots stands to improve the performance of existing teleoperated equipment, but requires fine hand-eye calibration between the free-moving endoscopic camera and patient-side manipulator arms (PSMs). A novel method of solving this problem for the da Vinci® robotic surgical system and kinematically similar systems is presented. First, a series of image-processing and optical-tracking operations are performed to compute the coordinate transformation between the endoscopic camera view frame and an optical-tracking marker permanently affixed to the camera body. Then, the kinematic properties of the PSM are exploited to compute the coordinate transformation between the kinematic base frame of the PSM and an optical marker permanently affixed thereto. Using these transformations, it is then possible to compute the spatial relationship between the PSM and the endoscopic camera using only one tracker snapshot of the two markers. The effectiveness of this calibration is demonstrated by successfully guiding the PSM end effector to points of interest identified through the camera. Additional tests on a surgical task, namely grasping a surgical needle, are also performed to validate the proposed method. The resulting visually-guided robot positioning accuracy is better than the earlier hand-eye calibration results reported in the literature for the da Vinci® system, while supporting intraoperative update of the calibration and requiring only devices that are already commonly used in the surgical environment.

Keywords: Medical Robots and Systems, Surgical Robotics: Laparoscopy, da Vinci Research Kit (dVRK)

Note to Practitioners:

The problem of hand-eye calibration for the da Vinci robotic surgical system and kinematically similar systems is addressed in this paper. Existing approaches either have insufficient accuracy to automate low-level surgical subtasks or require external patterns or subjective human intervention, neither of which is applicable to practical RMIS scenarios. This study breaks the calibration procedure down into systematic steps to reduce error accumulation. Most of the time-consuming steps are performed offline, allowing their results to be retained between movements. Each time the passive joints of the manipulator or the endoscope move, all that needs to be done is to refresh the transformation between the fixed markers. This key idea enables intraoperative updates of the hand-eye calibration to be performed online without sacrificing precision. The calibration method presented here demonstrates that the achieved accuracy is sufficient for automating basic surgical manipulation tasks, such as grasping a suturing needle. The hand-eye calibration will be incorporated into a visually guided manipulation framework to perform high-precision autonomous surgical tasks.

I. Introduction

Due to the nature of the master-slave teleoperation required to operate systems like the da Vinci® robotic surgical system (Intuitive Surgical, Inc., Sunnyvale, CA), robotic minimally invasive surgery (RMIS) usually results in longer operation times and a steeper learning curve than other minimally invasive surgery techniques [1]. In addition, the narrow laparoscopic camera view and limited workspace can make low-level tasks challenging and time-consuming. In order to reduce operation time and enhance surgeon performance, autonomous robotic surgical assistants [2] have been proposed to perform low-level surgical manipulation tasks such as suturing [3]–[8], debridement [9], dissection, and retraction [10].

In order to perform these automated tasks with precision, it is necessary to know the transformation between the base frame of the robot manipulators (‘hands’) and the stereo endoscopic camera (‘eye’). Once the hand-eye transformation is known, a detected object in the camera coordinate system can be located in the manipulator coordinate system and the necessary motions to manipulate the object can be performed autonomously. Unfortunately, the surgical environment possesses several undesirable attributes that preclude traditional calibration strategies — most prominently, external objects such as calibration grids cannot be safely introduced into the patient, and the camera and patient-side manipulator arms (PSMs) are repositioned with respect to each other multiple times during an operation. This last condition further demands that calibration be performed or updated ‘online’ in a time-efficient manner.

This study focuses on computing the transformation between the endoscopic camera and the robotic manipulators in real time using an external tracking system. The presented work was performed using a Polaris Vicra® optical tracking system (Northern Digital Inc., Waterloo, Canada) and a da Vinci® robotic surgery system, but the methods developed can be used to calibrate any RCM-based robot with any tracking system. In order to reduce the accumulation of error, this study divides the procedure into systematic steps instead of finding the hand-eye transformation directly. Most of the time-consuming elements of the calibration are performed offline in a manner that allows them to be retained between movements and even between surgical procedures, leaving the remainder able to be computed in real time without sacrificing precision. This key idea enables intraoperative updates of the hand-eye transformation to be performed quickly.

The paper is organized as follows: Section II discusses related studies on robot-camera (hand-eye) calibration. The experimental hardware is described in Section III. In Section IV, the problem formulation and proposed calibration methods are described. The details of the hardware-based validation tests and the validation results are presented in Section V, followed by the conclusions in Section VI.

II. Related Studies

The problem of hand-eye calibration in robotic systems has been well studied by many researchers using various techniques. Early studies described the problem by separating the transformation into translation and rotation components [11]–[15]. Others then argued that this separation implies that the rotation component has nothing to do with the translation component, which is not a viable assumption, and attempted to find a solution simultaneously [16]–[19]. The problem with this approach was that orientation errors propagated into the positional errors. To reduce the rotational error propagation, researchers then proposed to solve the problem iteratively [20]–[22]. The extension of the robot-world calibration problem to hand-eye calibration has also been studied [23]–[25].

Wang et al. [26] proposed a method for hand-eye calibration of remote-center-of-motion (RCM)-based robots such as the da Vinci® Robotic Surgery System. The translation part of the calibration matrix is computed by moving the endoscope to at least two different positions, and the orientation is obtained by constructing an optimization problem with different images of the robotic manipulator. The average translational error is reported as 16.08 mm.

Zhang et al. [27] proposed to solve the calibration problem for RMIS systems using an internal dot pattern attached to the tip of the manipulator. The hand-eye relationship was computed iteratively and the convergence of this iterative computation was proven. While the resulting transforms were found to be self-consistent over multiple end-effector positions, they were not compared to a ground truth.

D’Ettorre et al. [28] developed a vision-guided method for automatically grasping the suturing needle. The hand-eye calibration is identified by establishing 3-D point correspondences between the camera and the robot tool tip while positioning the tip at corners of a calibration grid and detecting these corners from the camera.

Pachtrachai et al. [29] developed a method without a calibration object for RMIS systems. The surgical manipulator’s pose is estimated relative to the endoscopic camera using the 3-D instrument tracking method described in [30]. The minimum instrument tracking error was reported as about 20 mm while the manipulator was in motion.

Seita et al. [31] studied the problem by employing a two-phase calibration procedure. In the first phase, the trajectories in the workspace are automatically explored with random targets, using red tape tied to the end-effector to visually track the location. A deep neural network (DNN) was trained on the obtained data. In the second phase of the procedure, the manipulator’s end-effector was sent to a target on a printed calibration grid. The error of the end-effector was corrected by a human expert. A random forest (RF) was then trained on the collected data to predict the residual error.

These previous studies on hand-eye calibration showed great promise. However, the precision achieved is either insufficient for performing low-level subtasks autonomously or depends on an external pattern. Furthermore, some of the studies rely on simplifying assumptions, such as an artificially colored end effector, vision-based tracking algorithms more accurate than is computationally feasible, or subjective human expertise, none of which is applicable to practical RMIS scenarios.

Previous studies tried to solve the hand-eye calibration problem directly to find the transformation between the camera and the manipulator. In contrast, the presented study aims to perform the calibration procedure in a divide-and-conquer fashion, performing the majority of the calibration procedure preoperatively using an external optical sensor (which is already commonly used in operating rooms [32]). These components of the calibration are not affected by surgical instrument change or movement during the procedure. Once the relative transformations of the endoscopic camera and the manipulator RCM point to fixed passive markers are computed preoperatively, the remainder of the calibration procedure requires only a single snapshot of the relative marker transformations — this is very quick, operating-room friendly, and compatible with all RCM-based manipulators. This systematic breakdown also reduces errors in kinematic parameters such as link and joint angle offsets.

To the best of our knowledge, this study is the only method currently available which is applicable to a realistic laparoscopic surgical scenario, since it is capable of recovering hand-eye calibration in real time following movement of the robot’s passive joints. The presented method highlights and accommodates significant deviations between the idealized and experimentally identified DH parameters for the physical da Vinci® PSM. This kinematic intrinsic identification method can be used to characterize and correct any RCM-based robot. Additionally, the open-loop point-reaching accuracy achieved in this study is equivalent to or better than earlier results reported in the literature. Specifically, it is equivalent to the accuracy of earlier studies that rely on closed-loop visual servo control, and significantly better than that of methods that reported open-loop reaching accuracy.

III. Hardware Description

A. Da Vinci Research Kit

In order to control the da Vinci® surgical robotic system in an automated manner, we employed the da Vinci Research Kit (dVRK) developed by Johns Hopkins University and Worcester Polytechnic Institute (WPI) [33] (Fig. 1). The dVRK acts as a substitute for the teleoperation master station via a ROS interface that can be controlled from any desktop computer [33]. Forward and inverse kinematics allow this joint-level control to be leveraged into workspace (3-D Cartesian) control of the robot in the PSM base frame, as sketched below. Multiple PSMs can be managed in parallel by the device. In this study, only a single manipulator was employed; however, the procedure can be extended to calibrate multiple manipulators at the same time or in sequence.
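As an illustration, a minimal sketch of Cartesian-level PSM control through the dVRK ROS interface is shown below. The topic name follows the dVRK 1.x convention; newer dVRK releases use a different (CRTK-based) naming scheme, so the exact topic and message type should be checked against the installed version.

```python
# Minimal sketch of workspace (Cartesian) PSM control via the dVRK ROS
# interface. Assumes the dVRK 1.x topic naming; adjust for CRTK-based
# releases. The dVRK performs the inverse kinematics internally.
import rospy
from geometry_msgs.msg import Pose

rospy.init_node('psm_cartesian_demo')
pub = rospy.Publisher('/dvrk/PSM1/set_position_cartesian', Pose, queue_size=1)
rospy.sleep(1.0)                       # let the connection establish

goal = Pose()                          # pose expressed in the PSM base frame P
goal.position.x, goal.position.y, goal.position.z = 0.0, 0.0, -0.10  # meters
goal.orientation.w = 1.0               # identity orientation, for illustration
pub.publish(goal)
```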

Fig. 1. da Vinci® surgical robotic system with the da Vinci Research Kit.

B. Polaris Optical Tracker

In order to identify kinematic calibration parameters, an optical tracker system — the NDI Polaris Vicra® [34] — is employed as an external sensor. The Polaris Vicra® can measure the position and orientation of passive markers composed of retroreflective spheres in specific and unique geometric shapes [34]. The unique marker geometry allows the system to track multiple markers simultaneously, and the ROS-compatible wrapper package enables a desktop computer to read the marker positions and orientations at 20 Hz. The accuracy of the system is reported as 0.25 mm RMS within its operating volume [34]. The system is already approved for and commonly used in surgical operations, most prominently in neurosurgery [32].

Our calibration setup requires two permanent passive markers — one affixed to the base of the PSM and the other affixed to the camera, both external to the patient. In order to perform the offline stages of the calibration, we have also designed a removable adapter which attaches to the gripper and holds a third marker (Fig. 2), and a machined calibration board with a fourth marker embedded in its surface (Fig. 4). Neither of these devices needs to be present during the surgical procedure, as they are only used during the preoperative calibration phase.

Fig. 2. Polaris Vicra® optical tracking system (a), passive markers (b), and custom-built adapter (c).

Fig. 4. Custom-built board with known transformation between the tracking marker and the grid.

IV. Calibration Procedure

The hand-eye calibration procedure finds the transformation between the camera and the manipulators in three independent steps by introducing an external optical sensor. In the first step of the procedure, which is explained in detail in Section IV-B, a fixed marker is attached to the camera and the transformation between the camera and the marker is computed via a custom-built calibration board, using the frame assignments shown in Section IV-A. In the second step, an optical marker is attached to the robot manipulator base and the transformation between the robot base and the marker is calculated using the method detailed in Section IV-C. In the third step, the transformation between the fixed markers is computed using the optical tracking system. The final hand-eye calibration is calculated from these three intermediate transformations.

A. Frames and Transformations of Interest

  • C is the camera optical frame or ‘eye’ frame. Pixel locations deprojected from image space to 3-D Cartesian space using the endoscope’s binocular vision are natively expressed in this frame.

  • Mc is the frame of a Polaris marker rigidly affixed to the endoscopic camera pole.

  • P is the PSM base frame, ‘portal’ frame, or ‘hand’ frame. It is the origin of the workspace of the PSM, meaning that forward and inverse kinematics operate on points expressed in this frame.

  • Mp is the frame of a Polaris marker rigidly affixed to the PSM base.

  • D is the frame of a camera calibration grid freely movable within the robot workspace.

  • Md is the frame of a Polaris marker rigidly affixed to that calibration grid.

  • Mt is the frame of a Polaris marker rigidly affixed to the PSM tool tip.

It should be noted that the Polaris sensor reports the 6-DOF location of a marker (position and orientation) in a Cartesian frame N internal to the device — more precisely, a transform between the origin of N and the origin of the marker. This location will change if the Polaris is rotated or moved, and has no direct physical relevance to the da Vinci® unit or objects within its workspace. Given any two reported locations X and Y from the same tracker position, it is possible to compute a transform gXY between them that is independent of the tracker and thus persistent following its removal:

$$X = g_{NX}, \quad Y = g_{NY} \;\Longrightarrow\; [g_{NX}]^{-1}\, g_{NY} = g_{XN}\, g_{NY} = g_{XY} \tag{1}$$
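A minimal numerical sketch of (1) follows, with poses represented as 4×4 homogeneous matrices (numpy used for illustration):

```python
# Sketch of (1): the tracker reports marker poses g_NX and g_NY in its
# internal frame N; their composition gives the tracker-independent
# relative transform g_XY. Poses are 4x4 homogeneous matrices.
import numpy as np

def inv_se3(g: np.ndarray) -> np.ndarray:
    """Closed-form inverse of a rigid transform (rotation transposed,
    translation re-expressed)."""
    gi = np.eye(4)
    R, p = g[:3, :3], g[:3, 3]
    gi[:3, :3] = R.T
    gi[:3, 3] = -R.T @ p
    return gi

def relative_transform(g_NX: np.ndarray, g_NY: np.ndarray) -> np.ndarray:
    return inv_se3(g_NX) @ g_NY          # g_XY, independent of where N sits
```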

This method is used to compute several of the transforms (Fig. 3) described below, and can be performed using a single Polaris reading in real time:

Fig. 3. Relevant frames and transformations for the da Vinci® surgical system.

  • gMcC is the transform between the camera and the marker affixed to it, which remains constant even when the camera is moved. The determination of this transform is discussed in Section IV-B.

  • gMpP is the transform between the PSM base and the marker affixed to it, which remains constant even when the PSM is moved. The determination of this transform is discussed in Section IV-C.

  • gMpMc is the transform between Mp and Mc. It changes whenever the PSM or camera is moved, but can be redetermined in real-time using only a single Polaris reading of both markers, as per (1).

  • gDC is the camera-to-calibration-grid transform. It changes based on the location of the calibration grid, and does not exist at all when (as in surgery) the grid is not present. It is discussed further in Section IV-B.

  • gMdD is the transform between the calibration grid and the marker affixed to it. The physical structure holding both components is CNC-machined so that this transform is constant and known to high precision:

$$g_{M_dD} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 80\,\text{mm} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2}$$
  • gMdMc is the transform between the calibration grid and camera markers. It is discussed further in Section IV-B.

  • gPC is the hand-eye transform of interest. Using this transform, a physical point identified in C can be transformed into P, and inverse kinematics employed to make the da Vinci® tool travel to that point. Once gMpMc, gMpP, and gMcC are known,

$$g_{PC} = [g_{M_pP}]^{-1}\, g_{M_pM_c}\, g_{M_cC}. \tag{3}$$
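A short sketch of how (3) is assembled and used, assuming the two offline transforms are loaded from calibration files and g_MpMc comes from a live tracker snapshot:

```python
# Sketch of (3): the hand-eye transform g_PC composed from the two
# offline transforms and a single live tracker snapshot (g_MpMc). Only
# g_MpMc needs refreshing when the passive joints or endoscope move.
import numpy as np

def hand_eye(g_MpP, g_MpMc, g_McC):
    return np.linalg.inv(g_MpP) @ g_MpMc @ g_McC      # g_PC

def camera_point_to_base(g_PC, p_C):
    """Map a 3-D point expressed in the camera frame C into the PSM
    base frame P, where inverse kinematics can act on it."""
    return (g_PC @ np.append(p_C, 1.0))[:3]
```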

B. Camera to Polaris Registration

One of the preoperative calibration procedures involves finding the transformation between the camera frame C and the marker frame Mc attached to the camera (Fig. 6). The transformation gMcC is computed offline using a custom-built visual calibration board that includes a tracking marker (Fig. 4), which serves as a connection between the camera and the Polaris tracker. Before hand-eye calibration, the optical intrinsics of the endoscopic camera system are calculated by the ROS industrial-calibration package [35]. The resulting intrinsics show a 0.37 px reprojection error, which contributes (along with detection errors impossible to measure directly) to a total error in locating the positions of detected objects averaging 0.49 mm with a maximum of 2.8 mm [36].

Fig. 6. Camera to Polaris calibration schematic diagram.

First, the board is positioned such that it is visible to both the Polaris tracker and the endoscopic camera. Using the images from the endoscope, the grid corner locations are then detected in the camera frame (Fig. 5) using a corner detection algorithm [37]. The corner locations are used as inputs to the OpenCV solvePnP optimization algorithm [37] to compute gCD from the camera’s intrinsic parameters, the detected corner locations in the camera frame (C), and the corresponding corner locations in the grid frame (D). Since gMdD is known and a single Polaris reading gives gMdMc via (1), gMcC can be computed (with gDC the inverse of gCD) as follows:

$$g_{M_cC} = [g_{M_dM_c}]^{-1}\, g_{M_dD}\, g_{DC}. \tag{4}$$
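A sketch of this step with OpenCV is shown below; `object_pts` are the checkerboard corners in the grid frame D, `image_pts` the detected corners, and `K`, `dist` the previously calibrated intrinsics (all names are illustrative):

```python
# Sketch of the grid-pose step feeding (4). cv2.solvePnP returns the
# grid pose in the camera frame (g_CD, as rotation vector + translation);
# equation (4) then uses its inverse g_DC.
import cv2
import numpy as np

def grid_pose_in_camera(object_pts, image_pts, K, dist):
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
    assert ok, "PnP failed; check corner detection"
    g_CD = np.eye(4)
    g_CD[:3, :3], _ = cv2.Rodrigues(rvec)     # rotation vector -> matrix
    g_CD[:3, 3] = tvec.ravel()
    return g_CD

# One sample of (4):
#   g_McC = inv(g_MdMc) @ g_MdD @ inv(g_CD)
```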

Fig. 5. Corners of the board detected in the camera frame.

Since the corner detection and the Polaris readings might be affected by the board position and lighting conditions, 500 data samples are collected with the board at different positions and orientations. The developed software automatically collects data while the board is moved under the camera and tracker. After collecting the data, a constrained nonlinear optimization problem is constructed as:

$$g_{M_cC}^{*} = \arg\min_{g_{M_cC}} \sum_{i=0}^{n} \operatorname{error}\!\big(g_{M_cC},\, g_{DC}^{i},\, g_{M_dM_c}^{i}\big),$$
$$g_{x}^{i} = g_{M_cC}\,[g_{DC}^{i}]^{-1}\,[g_{M_dD}]^{-1}\, g_{M_dM_c}^{i} = \begin{pmatrix} R_x & p_x \\ 0 & 1 \end{pmatrix},$$
$$\operatorname{error} = \|p_x\| + \alpha \left|\arccos\!\left(\frac{\operatorname{Trace}(R_x) - 1}{2}\right)\right| \tag{5}$$

where n is the size of the data pool (500 data sets were sufficient to cover the region in our case) and α is an empirically identified scale factor. The error is the weighted sum of the translational and rotational errors, where the rotational error is given by the angle of the axis-angle representation. This optimization problem is solved with Matlab’s fmincon utility using the default (interior-point) algorithm, with no linear constraints or bounds.
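For readers without Matlab, a Python analogue of the refinement in (5) is sketched below, with g_McC parametrized as an axis-angle/translation 6-vector and scipy’s trust-constr solver standing in for fmincon’s interior-point method (a sketch under those assumptions, not the authors’ implementation):

```python
# Python analogue of (5): minimize the weighted translational/rotational
# residual over all recorded (g_CD, g_MdMc) sample pairs. g_CD is the
# solvePnP output, i.e. the inverse of g_DC in (5).
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def to_matrix(x):
    """6-vector (rotation vector, translation) -> 4x4 transform."""
    g = np.eye(4)
    g[:3, :3] = Rotation.from_rotvec(x[:3]).as_matrix()
    g[:3, 3] = x[3:]
    return g

def total_error(x, samples, g_MdD, alpha):
    g_McC = to_matrix(x)
    err = 0.0
    for g_CD, g_MdMc in samples:
        # g_x is identity when g_McC is exact (see (5))
        g_x = g_McC @ g_CD @ np.linalg.inv(g_MdD) @ g_MdMc
        R_x, p_x = g_x[:3, :3], g_x[:3, 3]
        ang = np.arccos(np.clip((np.trace(R_x) - 1.0) / 2.0, -1.0, 1.0))
        err += np.linalg.norm(p_x) + alpha * abs(ang)
    return err

# res = minimize(total_error, x0, args=(samples, g_MdD, alpha),
#                method='trust-constr')
# g_McC_star = to_matrix(res.x)
```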

gMcC does not depend on the specific location of the calibration board. Once it is established, it remains valid even if joints of the endoscope-holding arm are moved. Therefore, the procedures described above can be done off-line and preserved across multiple camera positions and multiple surgeries.

C. PSM Kinematic Model and PSM to Polaris Registration

Joint axes 1 through 3 (ω1–ω3) of the da Vinci® PSM are designed to intersect at a single point (namely, the RCM) and to be mutually perpendicular, defining the position and orientation of the PSM base frame P (Fig. 7). On a physical robot, this may not be the case because of imperfections in the mechanism, making the definition of P and the subsequent kinematic calculations inaccurate. In order to improve the accuracy of the PSM kinematics and the PSM to Polaris registration, an appropriately defined Denavit-Hartenberg (DH) parametrization of the PSM kinematics, taking these imperfections into account, is identified as part of the process.

Fig. 7. Robot illustration with the base frame and DH frames 0, 1, and 2. The kinematics of the PSM is represented by the DH convention. The first three frames ideally intersect at the origin of the base frame and are mutually perpendicular.

The PSM to Polaris registration and the identification of the PSM kinematic parameters are performed with the help of a passive marker Mt affixed to the PSM end effector (Fig. 2) and tracked by the Polaris sensor.

In order to determine ω1, joint J1 is swept from its minimum to maximum value while joints J2 through J7 are frozen at zero displacement. The passive marker affixed to the PSM end effector describes an arc in 3-space, which can be fit by a least-squares method to a full circle [38]. ω1 passes through the center of the circle and is parallel to the normal vector of the plane containing it, allowing its position and orientation to be calculated in N. Analogously, sweeping J2 while holding J1 and the other joints at zero displacement allows ω2 to be determined (Fig. 8). A simplified sketch of this axis-fitting step is given below.
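The following is a minimal non-robust version of the circle fit (plane fit by SVD, then a linear least-squares circle in the plane); the robust estimator of [38] additionally handles outliers:

```python
# Sketch of the joint-axis identification: marker positions traced while
# sweeping a single joint are fit to a plane (normal = axis direction)
# and then to a circle in that plane (center = a point on the axis).
import numpy as np

def fit_joint_axis(points):                 # points: Nx3 marker positions
    centroid = points.mean(axis=0)
    # Plane fit: the direction of least variance is the plane normal.
    _, _, Vt = np.linalg.svd(points - centroid)
    u, v, normal = Vt[0], Vt[1], Vt[2]      # orthonormal basis
    # In-plane coordinates, then a circle fit that is linear in the
    # unknowns: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2).
    xy = (points - centroid) @ np.stack([u, v]).T
    A = np.column_stack([2.0 * xy, np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (cx, cy, _), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = centroid + cx * u + cy * v
    return center, normal                   # point on the axis + direction
```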

Fig. 8. Point cloud containing marker positions obtained by ‘circle-drawing’ and ‘sphere-drawing’ motions is shown on the left. Fitted arcs and sphere centers are shown on the right.

To determine ω3, J1 and J2 are held at zero displacement and J3 is moved to four different positions. At each position, wrist joints J4 and J5 are swept from their minimum to maximum values, causing the tip of the gripper and the marker affixed thereto to describe a small (≈ 40 mm diameter) sphere. The center of each sphere is then identified using a similar least-squares fitting algorithm [38]. A line passing through the four sphere centers is then determined by another least-squares fit [38], which describes ω3 in the Polaris frame (Fig. 8). A companion sketch is shown below.
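As with the circle fit, a minimal non-robust sketch of the sphere and line fits is given here (the paper’s fits use the robust estimator of [38]):

```python
# Sketch of the insertion-axis step: each wrist-sweep point cloud is fit
# to a sphere (linear least squares), and a line through the four sphere
# centers gives the direction and location of w3.
import numpy as np

def fit_sphere_center(points):              # points: Nx3
    # ||p||^2 = 2 c.p + (r^2 - ||c||^2): linear in (c, k).
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]                          # sphere center c

def fit_line(centers):                      # centers: 4x3 sphere centers
    mean = centers.mean(axis=0)
    _, _, Vt = np.linalg.svd(centers - mean)
    return mean, Vt[0]                      # point on the line + direction
```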

Based on the aforementioned calculations, it was confirmed that ω1, ω2, and ω3 are indeed not mutually perpendicular and do not intersect. The difference between the idealized and experimentally identified DH parameters for the physical da Vinci® PSM used in the experiment is shown pictorially in Fig. 9 and numerically in Table I.

Fig. 9. Idealized (right) and real (left) axes of Joints 1 through 3. Under the ideal assumption, frames 0, 1, and 2 intersect at the origin of the base frame, which conflicts with reality. The difference significantly increases the cumulative positioning error.

TABLE I.

Ideal and realistic DH frame parameters of the PSM.

Joint   θ_ideal (rad)      α_ideal (rad)   a_ideal (mm)   d_ideal (mm)

J1      θ1                 π/2             0              0
J2      θ2 + π/2           π/2             0              0
J3      0                  0               0              θ3

Joint   θ_real (rad)       α_real (rad)    a_real (mm)    d_real (mm)

J1      θ1 + π             π/2 − 0.0062    5.74           0
J2      θ2 + π/2 − 0.022   π/2 − 0.010     1.12           −2.97
J3      0                  0               0              θ3 − 7.41

(θ3 denotes the displacement of the prismatic insertion joint, hence its appearance in the d column.)
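To make the role of Table I concrete, the sketch below plugs the identified rows into the standard DH link transform to obtain the pose of frame 3 in the PSM base frame (this assumes the classic DH convention and the sign reading of Table I above; angles in rad, lengths in mm):

```python
# Sketch: forward kinematics of the first three PSM joints using the
# identified (non-ideal) DH rows of Table I. dh() is the classic
# Denavit-Hartenberg link transform.
import numpy as np

def dh(theta, alpha, a, d):
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def fk_frame3(q1, q2, q3):
    """Pose of DH frame 3 in the PSM base frame P (identified model)."""
    g = dh(q1 + np.pi,                 np.pi / 2 - 0.0062, 5.74,  0.0)
    g = g @ dh(q2 + np.pi / 2 - 0.022, np.pi / 2 - 0.010,  1.12, -2.97)
    g = g @ dh(0.0, 0.0, 0.0, q3 - 7.41)   # prismatic insertion joint
    return g
```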

Once the position and orientation of the PSM base frame relative to the Polaris frame, gNP, are known, (1) can be used to determine gMpP from gNMp. Similar to gMcC, the end result of this process does not depend on the position of the Polaris unit and remains valid even if the joints of the PSM-holding arm are moved or the tip marker Mt is removed. Therefore, the procedures described can be done off-line and preserved across multiple PSM positions and multiple surgeries.

V. Experimental Validation

A. Experiment Design

The accuracy of the calibration method can be evaluated by locating a point-of-interest (POI) or a set of POIs through the endoscopic camera and then commanding the da Vinci® robot to move its PSM to those points (the experimental setup is shown in Fig. 10). A consistent offset between the physical POIs Ri and the end-effector points-of-arrival Qi represents an error in the calibration. The Ri are generated from the inner intersections of a 5×7 grayscale checkerboard with 15 mm squares, which are located in the camera space using [37]. As these points correspond to physical features of an object, it is possible to precisely measure the distance εi between each Ri and the resulting Qi = gPC · Ri as X, Y, and Z components in an arbitrary 3-D frame (in this case the board frame D):

$$\varepsilon_i = \|R_i - Q_i\| = \sqrt{\varepsilon_{X_i}^2 + \varepsilon_{Y_i}^2 + \varepsilon_{Z_i}^2} \tag{6}$$

Fig. 10. Experimental test setup. The board is placed under the endoscope (a) to detect the corners. Carbon paper is placed on the board to record the X and Y error components. Conductive foil is then placed on top to record the Z-axis error, as seen in (b). The manipulator is set to touch the board corners (c).

In order to determine these components, after the POIs are recorded but before the robot moves, the calibration grid is covered with a sheet of carbon paper backed by conductive foil (with a total thickness less than 0.1mm). The foil is connected to the positive terminal of a computerized voltmeter, the conductive metal tip of the PSM is wired to a +5V DC power source, and the voltmeter and power source share a common ground. Thus, when the PSM tip makes physical contact with the foil (and, by extension, the checkerboard intersection beneath) the voltage reported by the meter software jumps from 0V to approximately +5V.

For each Qi, the system is commanded to move first to a position 5 mm above the calculated Z-value, then descend in 0.02 mm increments until an increase in voltage is registered — this means the PSM tip is in contact with the grid and therefore at the true physical Z-coordinate of Ri, so that $R_{Z_i} - Q_{Z_i} = \varepsilon_{Z_i}$. Additionally, when the tip applies pressure to the foil, a dark mark is transferred from the carbon paper beneath it to the checkerboard. The displacement from this mark to the corresponding corner of the board is measured using calipers to determine εXi and εYi. Each test involved 15 points, and the tests were repeated 13 times with the board at different locations under the camera field of view, for a total data set of 195 unique points.
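An illustrative sketch of this contact-probing descent is given below; `move_to` and `read_voltage` are hypothetical stand-ins for the robot motion command and the voltmeter readout, which the paper does not specify as code:

```python
# Sketch of the Z-error probing loop: start 5 mm above the estimated
# surface and descend in 0.02 mm steps until the +5V contact circuit
# closes through the conductive foil.
import numpy as np

STEP_M = 0.02e-3                 # 0.02 mm descent increment, in meters
CONTACT_V = 2.5                  # threshold between ~0 V (open) and ~5 V

def probe_surface_z(move_to, read_voltage, target_xyz):
    p = np.asarray(target_xyz, dtype=float) + [0.0, 0.0, 5e-3]
    move_to(p)                   # approach from 5 mm above
    while read_voltage() < CONTACT_V:
        p[2] -= STEP_M
        move_to(p)
    return p[2]                  # physical Z of the POI at contact
```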

In order to measure calibration performance in terms of rotation, the robot was commanded to approach identified checkerboard points with the gripper held at various angles (90°, 60°, 30°). Then the angle between the robot grippers and the board surface was measured manually with a digital angle gauge (Pittsburgh Digital Angle Gauge, with an accuracy of ±0.3° as reported in the tool user manual). The difference between the commanded angle and the physically measured angle was recorded as the rotational error. Each angle was tested with 100 trials (a total of 300 unique positions). The experimental procedure is summarized in the video attachments.

In addition to the quantitative measurements, the quality of the calibration was evaluated in an application-relevant task, namely grasping a surgical needle. The needle was placed randomly in the camera view (ensuring the needle was reachable by the robot end effector). A flat white surface, a flat printout of a photographed surgical procedure, a soft flat pad, and a suturing training pad were used as backgrounds. The location of the needle is determined in the camera frame by a needle tracking algorithm [3]. Using the calibration, the needle coordinates are then transformed into the robot base frame and the robot is commanded to grasp the needle. A total of 20 needle grasping trials were performed.

The quality of the intra-operative updates of the hand-eye calibration after motion of the camera and the robot base was also experimentally evaluated. Specifically, after the calibration procedure was performed and the camera-robot calibration was obtained, the passive joints of the robot arm were manually moved to several new configurations. At each configuration, the optical tracker was used to obtain the transformation between the fixed markers attached to the camera and the robot base frames, and the new camera-robot calibration was obtained by updating only the affixed marker transformation. A total of 5 different configurations obtained by manually moving the passive joints were tested, where each configuration was evaluated by performing 5 sets of POI positioning tests (a total of 375 unique points) and 5 visually-guided needle grasps (a total of 25 trials).

B. Results

Table II shows a statistical breakdown of the error across all 195 points. εXi and εYi are measured to a resolution of ±0.1 mm using calipers; εZi was measured using the descent method described above to a resolution of ±0.02 mm. Fig. 11 and Fig. 12 show the distribution of the error values. Table II also shows the rotational error breakdown εθ. Over 300 trials, the mean rotational error is 3.2° with a standard deviation of 3.4° and a maximum of 6°.

TABLE II.

Descriptive statistics of component and total errors (mm and °).

εX εY εZ ε εθ

Mean 1.5 1.1 0.5 2.1 3.2°
Std. Dev. 1.8 1.2 0.3 0.9 3.4°
Max. 3.8 3.0 1.4 4.3 6.0°

Fig. 11. Euclidean-distance and component-axis errors plotted with respect to point number.

Fig. 12. Histogram of Euclidean errors before and after passive joint motions.

The needle grasping experiments were performed a total of 20 times. The robot successfully grasped the needle in 18 cases and failed in 2 cases. In both failure cases, the gripper came very close to the needle and made contact with it. The video attachment contains several successful needle grasps under varying background surfaces as well as the failure cases.

The reliability of the calibration after movement of the passive joints was tested with a total of 5 different configurations and 25 needle grasping experiments (5 for each configuration). After the manual motion of the passive joints, using the updated calibration, the robot successfully grasped the needle in 21 cases and failed in 4 cases (the gripper hit the needle in 2 cases and missed the needle in 2 cases). The measured POI positioning errors (cumulative over the 5 configurations) were 2.0 mm (RMS), 0.7 mm (std. dev.), and 4.0 mm (max).

The calibration procedure described in this study takes about 40 minutes. Specifically, the part described in Section IV-B takes ∼5 min. (4 min. data collection, 1 min. processing), and the part described in Section IV-C takes ∼35 min. (34 min. data collection, 1 min. processing). A video of the robot moving to different POIs after movement of the passive joints and calibration update is included in the video attachment.

VI. Conclusions

In this study, a method for hand-eye calibration of the da Vinci® robotic surgery system is presented. Although the present study specifically focuses on the calibration of the da Vinci system, the proposed method is broadly applicable to any RCM-based robotic mechanism, such as the Raven surgical robotic system [39].

Instead of trying to directly find a transformation between the camera and the manipulators, the calibration procedure is performed in three independent steps to reduce the cumulative error and to allow the calibration procedure to be partitioned into pre-operative (off-line) and intra-operative (on-line) parts. An external optical sensor — the NDI Polaris Vicra® — is used to identify the kinematic calibration parameters during the calibration procedure. In the first step of the procedure, a fixed marker is attached to the camera and the transformation between the camera and the marker is computed via a custom-built calibration board. In the second step, an optical marker is attached to the robot manipulator base and the transformation between the robot base and the marker is calculated. These two steps can be performed pre-operatively off-line, and the transforms discovered persist between surgical operations. In the third step, the Polaris Vicra® is used to identify the transformation between the affixed markers to find the final calibration. Each time the passive joints of the manipulator or the endoscope move, all that needs to be done is to refresh the transformation between the fixed markers, enabling quick, on-line intra-operative update of the hand-eye transformation.

The resulting calibration methodology produced visually-guided end-effector motions with an RMS positioning error of 2.1 mm and a maximum error of 4.3 mm. The rotational accuracy of the method is 3.2° with a standard deviation of 3.4° and a maximum of 6°. The resulting accuracy of the system after calibration is better than the earlier hand-eye calibration results reported in the literature for the da Vinci Research Kit. The quality of the calibration was evaluated in an application-relevant task, namely grasping a surgical needle (45 experiments, 39 successful needle grasps). These functional validation results also demonstrate that the achieved calibration accuracy is sufficient for basic surgical manipulation tasks, such as needle grasping. Additionally, following a one-time off-line calibration step, the intra-operative update of the calibration can be performed on the order of a few seconds in real time without human intervention, excessive computational resources, or the introduction of non-surgical equipment into the operating area. Although the relatively small visual range of the Polaris Vicra® limits the workspace in which calibration is possible, this limitation could easily be rectified by substituting another optical tracker with a larger workspace, such as the Polaris Spectra® or Vega®. Non-optical trackers can also be employed (e.g., [40]). Since the calibration procedure depends on external tracking equipment, its cost should be taken into account, although it is typically a small fraction of the overall system cost.

Supplementary Material

supp1-2986503: video file (37.7 MB, mp4)
supp2-2986503: video file (12.5 MB, mp4)

Acknowledgments

This work was supported in part by the National Science Foundation under grants CISE IIS-1524363 and CISE IIS-1563805, and National Institutes of Health under grant R01 EB018108.

References

  • [1] Yohannes P, Rotariu P, Pinto P, Smith AD, and Lee BR, “Comparison of robotic versus laparoscopic skills: is there a difference in the learning curve?” Urology, vol. 60, no. 1, pp. 39–45, 2002.
  • [2] Moustris GP, Hiridis SC, Deliparaschos KM, and Konstantinidis KM, “Evolution of autonomous and semi-autonomous robotic surgical systems: a review of the literature,” The International Journal of Medical Robotics and Computer Assisted Surgery, vol. 7, no. 4, pp. 375–392, 2011.
  • [3] Ozguner O, Hao R, Jackson RC, Shkurti T, Newman W, and Cavusoglu MC, “Three-dimensional surgical needle localization and tracking using stereo endoscopic image streams,” in IEEE International Conference on Robotics and Automation (ICRA), 2018.
  • [4] Hao R, Ozguner O, and Cavusoglu MC, “Vision-based surgical tool pose estimation for the da Vinci robotic surgical system,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018.
  • [5] Jackson RC and Cavusoglu MC, “Needle path planning for autonomous robotic surgical suturing,” in IEEE International Conference on Robotics and Automation (ICRA), 2013.
  • [6] Liu T and Cavusoglu MC, “Needle grasp and entry port selection for automatic execution of suturing tasks in robotic minimally invasive surgery,” IEEE Transactions on Automation Science and Engineering, vol. 13, no. 2, pp. 552–563, Apr. 2016.
  • [7] Jackson RC, Desai V, Castillo JP, and Cavusoglu MC, “Needle-tissue interaction force state estimation for robotic surgical suturing,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2016, pp. 3659–3664.
  • [8] Jackson RC, Yuan R, Chow DL, Newman WS, and Cavusoglu MC, “Real-time visual tracking of dynamic surgical suture threads,” IEEE Transactions on Automation Science and Engineering, vol. PP, no. 99, pp. 1–13, 2017.
  • [9] Kehoe B, Kahn G, Mahler J, Kim J-H, Lee A, Lee A, Nakagawa K, Patil S, Boyd WD, Abbeel P, and Goldberg K, “Autonomous multi-lateral debridement with the Raven surgical robot,” in IEEE International Conference on Robotics and Automation (ICRA), 2014.
  • [10] Kehoe B, Kahn G, Mahler J, Kim J-H, Lee A, Lee A, Nakagawa K, Patil S, Boyd WD, Abbeel P, and Goldberg K, “Autonomous tumor localization and extraction: Palpation, incision, debridement and adhesive closure with the da Vinci Research Kit,” in Hamlyn Surgical Robotics Conference, 2015.
  • [11] Shiu Y and Ahmad S, “Calibration of wrist-mounted robotic sensors by solving homogeneous transform equations of the form AX = XB,” IEEE Transactions on Robotics and Automation, vol. 5, no. 1, pp. 16–29, 1989.
  • [12] Tsai R and Lenz R, “A new technique for fully autonomous and efficient 3D robotics hand/eye calibration,” IEEE Transactions on Robotics and Automation, vol. 5, no. 3, pp. 345–358, 1989.
  • [13] Wang C, “Extrinsic calibration of a vision sensor mounted on a robot,” IEEE Transactions on Robotics and Automation, vol. 8, no. 2, pp. 161–175, 1992.
  • [14] Park F and Martin B, “Robot sensor calibration: Solving AX = XB on the Euclidean group,” IEEE Transactions on Robotics and Automation, vol. 10, no. 5, pp. 717–721, 1994.
  • [15] Horaud R and Dornaika F, “Hand-eye calibration,” International Journal of Robotics Research, vol. 14, no. 3, pp. 195–210, 1995.
  • [16] Chen HH, “A screw motion approach to uniqueness analysis of head-eye geometry,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1991, pp. 145–151.
  • [17] Daniilidis K, “Hand-eye calibration using dual quaternions,” International Journal of Robotics Research, vol. 18, no. 3, pp. 286–298, 1999.
  • [18] Bayro-Corrochano E, Daniilidis K, and Sommer G, “Motor algebra for 3D kinematics: The case of the hand-eye calibration,” Journal of Mathematical Imaging and Vision, vol. 13, no. 3, pp. 79–100, 2000.
  • [19] Zhao Z and Liu Y, “Hand-eye calibration based on screw motions,” in 18th International Conference on Pattern Recognition (ICPR), 2006, pp. 1022–1026.
  • [20] Zhao Z, “Hand-eye calibration based on screw motions,” in IEEE International Conference on Robotics and Automation (ICRA), 2011.
  • [21] Angeles J, Soucy G, and Ferrie FP, “The online solution of the hand-eye problem,” IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 720–731, 2000.
  • [22] Hirsh RL, DeSouza GN, and Kak AC, “An iterative approach to the hand-eye and base-world calibration problem,” in IEEE International Conference on Robotics and Automation (ICRA), 2001.
  • [23] Zhuang H, Roth ZS, and Sudhakar R, “Simultaneous robot/world and tool/flange calibration by solving homogeneous transformation equations of the form AX = YB,” IEEE Transactions on Robotics and Automation, vol. 10, no. 4, pp. 549–554, 1994.
  • [24] Shah M, “Solving the robot-world/hand-eye calibration problem using the Kronecker product,” Journal of Mechanisms and Robotics, vol. 5, no. 3, 2013.
  • [25] Li H, Ma Q, Wang T, and Chirikjian GS, “Simultaneous hand-eye and robot-world calibration by solving the AX = YB problem without correspondence,” IEEE Robotics and Automation Letters, vol. 1, no. 1, pp. 145–152, 2016.
  • [26] Wang Z, Liu Z, Ma Q, Cheng A, Liu Y-H, Kim S, Deguet A, Reiter A, Kazanzides P, and Taylor RH, “Vision-based calibration of dual RCM-based robot arms in human-robot collaborative minimally invasive surgery,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 672–679, 2018.
  • [27] Zhang Z, Zhang L, and Yang G-Z, “A computationally efficient method for hand–eye calibration,” International Journal of Computer Assisted Radiology and Surgery, vol. 12, no. 10, pp. 1775–1787, 2017.
  • [28] D’Ettorre C, Dwyer G, Du X, Chadebecq F, Vasconcelos F, De Momi E, and Stoyanov D, “Automated pick-up of suturing needles for robotic surgical assistance,” CoRR, vol. abs/1804.03141, 2018.
  • [29] Pachtrachai K, Allan M, Pawar V, Hailes S, and Stoyanov D, “Hand-eye calibration for robotic assisted minimally invasive surgery without a calibration object,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, pp. 2485–2491.
  • [30] Allan M, Chang P-L, Ourselin S, Hawkes DJ, Sridhar A, Kelly J, and Stoyanov D, “Image based surgical instrument pose estimation with multi-class labelling and optical flow,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI), Navab N, Hornegger J, Wells WM, and Frangi A, Eds., 2015, pp. 331–338.
  • [31] Seita D, Krishnan S, Fox R, McKinley S, Canny JF, and Goldberg K, “Fast and reliable autonomous surgical debridement with cable-driven robots using a two-phase calibration procedure,” CoRR, vol. abs/1709.06668, 2017.
  • [32] Khadem R, Yeh CC, Sadeghi-Tehrani M, Bax MR, Johnson JA, Welch JN, Wilkinson EP, and Shahidi R, “Comparative tracking error analysis of five different optical tracking systems,” Computer Aided Surgery, vol. 5, no. 2, pp. 98–107, 2000.
  • [33] Kazanzides P, Chen Z, Deguet A, Fischer GS, Taylor RH, and DiMaio SP, “An open-source research kit for the da Vinci® surgical system,” in IEEE International Conference on Robotics and Automation (ICRA), 2014.
  • [34] Polaris Vicra User Guide, Rev. 6, Northern Digital Inc., 2012.
  • [35] “ROS-Industrial.” [Online]. Available: https://github.com/ros-industrial/
  • [36] Shkurti T, “Simulation and control enhancements for the da Vinci surgical robot,” Master’s thesis, Case Western Reserve University, January 2019.
  • [37] “OpenCV.” [Online]. Available: https://opencv.org/
  • [38] Torr PHS and Zisserman A, “MLESAC: A new robust estimator with application to estimating image geometry,” Computer Vision and Image Understanding, vol. 78, no. 1, pp. 138–156, 2000.
  • [39] Hannaford B, Rosen J, Friedman DW, King H, Roan P, Cheng L, Glozman D, Ma J, Kosari SN, and White L, “Raven-II: An open platform for surgical robotics research,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 4, pp. 954–959, Apr. 2013.
  • [40] Liu X, Plishker W, Zaki G, Kang S, Kane TD, and Shekhar R, “On-demand calibration and evaluation for electromagnetically tracked laparoscope in augmented reality visualization,” International Journal of Computer Assisted Radiology and Surgery, vol. 11, no. 6, pp. 1163–1171, June 2016.
