Author manuscript; available in PMC: 2021 Jan 1.
Published in final edited form as: IEEE ASME Trans Mechatron. 2020 May 22;25(6):2846–2857. doi: 10.1109/tmech.2020.2996683

Automatic Light Pipe Actuating System for Bimanual Robot-Assisted Retinal Surgery

Changyan He 1,2, Emily Yang 3, Niravkumar Patel 3, Ali Ebrahimi 3, Mahya Shahbazi 3, Peter Gehlbach 4, Iulian Iordachita 5
PMCID: PMC7745739  NIHMSID: NIHMS1597461  PMID: 33343183

Abstract

Retinal surgery is a bimanual operation in which surgeons operate with an instrument in their dominant hand (more capable hand) and simultaneously hold a light pipe (illuminating pipe) with their non-dominant hand (less capable hand) to provide illumination inside the eye. Manually holding and adjusting the light pipe places an additional burden on the surgeon and increases the overall complexity of the procedure. To overcome these challenges, a robot-assisted automatic light pipe actuating system is proposed. A customized light pipe with force-sensing capability is mounted at the end effector of a follower robot and is actuated through a hybrid force-velocity controller to automatically illuminate the target area on the retinal surface by pivoting about the scleral port (incision on the sclera). Static following-accuracy evaluation and dynamic light tracking experiments are carried out. The results show that the proposed system can successfully illuminate the desired area with negligible offset (the average offset is 2.45 mm with a standard deviation of 1.33 mm). The average scleral forces are also below a specified threshold (50 mN). The proposed system can not only allow for increased focus on dominant-hand instrument control, but could also be extended to three-arm procedures (two surgical instruments held by the surgeon plus a robot-held light pipe) in retinal surgery, potentially improving surgical efficiency and outcome.

Index Terms: Light pipe actuating, bimanual control, hybrid velocity-force control, robot-assisted retinal surgery

I. Introduction

RETINAL surgery requires precise and bimanual manipulation of surgical instruments inside the confined space of an eyeball. In a typical retinal surgery, surgeons use their dominant hand to operate a functional instrument (e.g. cannula) to perform a surgical task, while using their non-dominant hand to hold a light pipe that illuminates the retina. Both tools are inserted into the eye through trocars installed at scleral ports (the diameters of which are smaller than 1mm). The surgeon looks into the interior of the eye through a surgical microscope that is placed above the patient’s head. The procedures in retinal surgery involve manipulation of tissues at micron scales, during which exertion of submillinewton forces (well below human tactile sensing ability) [1] can cause irreversible structural and functional damage to the eye. Also, given that the surgical instruments are physically constrained by the trocars in the scleral ports, their manipulation generates continuous forces on the sclera, and excessive forces may cause sclera injury. The outcomes of retinal surgery are impacted by many factors, including, but not limited to, physiological hand tremors, fatigue, poor kinesthetic feedback, patient movement, and the absence of force sensing. Some of these challenges have been evaluated and addressed in early work done by different teams including our own. However, the bimanual operation required in retinal surgery makes the procedures even more challenging and demands both strong surgical skills and excellent coordination. To address the distraction and additional fatigue caused by having to manually hold a light pipe throughout a lengthy surgical procedure, this paper introduces an automatic light pipe actuating system that leverages robotic technology to free up the surgeon’s non-dominant hand.

Robotic technology development for applications in retinal surgery has been ongoing for over 20 years. Various robotic devices and systems have been proposed to enhance and expand surgeons’ surgical skills. These can broadly be categorized into the following: robotic manipulators, force/distance sensors, and robotic controllers, which are described in more detail below.

(a). Robotic manipulators:

With the inherent challenges of retinal surgery, robotic manipulators can optimize surgical procedures by acting as intelligent mediators between surgeons and direct manipulation of surgical instruments. Robotic manipulators can filter out hand tremors, impose virtual fixtures and establish safe operating limits, and hold instruments in place without active input from the surgeon. They can be further divided into two subcategories: teleoperated and hands-on operated.

For teleoperated manipulators, the surgeon typically controls a master console, which remotely commands a slave system to perform the surgical task. An example of such a system was created by Edwards et al. [2], who developed the "Preceyes" robotic system and demonstrated its operation in human clinical trials. Other groups have also developed different types of robotic arms and master consoles [3]–[6], which were evaluated either in in-vivo animal models or in ex-vivo eyeball models.

Unlike a teleoperated system, a hands-on device does not utilize a separate master console, but instead keeps the user in the loop by having them directly manipulate tools mounted on the system’s end effector. An example of such a system is the Steady Hand Eye Robot (SHER) developed at Johns Hopkins University (JHU) [7]. Rather than scaling movements like a teleoperated system, SHER enables admittance-based force scaling, in which forces exerted by the user on the surgical instrument are interpreted and scaled through an admittance control strategy before appropriate velocities are transmitted to the robotic manipulator.

(b). Force/Distance sensors:

Optical sensors with high resolution, enhanced sensitivity, and reduced size are potentially useful in retinal environments. Yu et al. [8] designed optical coherence tomography (OCT)-guided forceps, which can provide real-time intraocular visualization of retinal tissues and also determine the distance between the tool tip and the tissue. Song et al. [9] developed OCT-based forceps to detect the contact between their tip and the retinal surface. A JHU research group developed microsurgical instruments based on Fiber Bragg Grating (FBG) sensors to measure not only forces at the tool tip, but also the location of force application and the amount of force applied on the tool shaft [10]. Apart from optical sensors, impedance sensors have also been employed to detect puncture signals between the cannula tip and the retinal vessel in retinal vein cannulation [11]. The above-mentioned sensors can be incorporated into robotic platforms to improve surgeons' tactile and force perception via auditory feedback [12] and/or haptic feedback [13].

(c). Robotic controllers:

To improve the safety of robot-assisted retinal surgery, on top of existing low-level robot controllers (e.g., velocity controllers), Ebrahimi et al. [14], [15] proposed a 3-degree-of-freedom (DoF) adaptive controller to restrict instrument movement in response to exceeding a prescribed force limit. He et al. leveraged deep learning to predict the manipulation force, in which the prediction result is then either fed back to the surgeon via auditory substitution [16] or integrated into an admittance controller [17] to keep the force level below a desired threshold. In the realm of automatic surgery, Yang et al. [18] explored techniques for robot-aided intraocular surgery using monocular vision and proposed a new retinal surface estimation method. Braun et al. [19] proposed the "EyeSLAM" algorithm that can deliver 30 Hz real-time simultaneous localization and mapping of the human retina and vasculature during intraocular surgery.

The previously developed robotic technologies described above provide solutions for various operational challenges in retinal surgery. However, the primary research focus has been on the active surgical instruments held in the surgeon’s dominant hand. The topic of bimanual control and automatic light pipe actuation has not been explored, but it is a promising area for improving surgical efficiency and decreasing complexity. In this paper, an automatic light pipe actuating system is proposed based on a hybrid velocity-force control scheme.

The system consists of a customized light pipe with force-sensing capability and a force-sensing cannulation tool, which are installed at the end effectors of two SHERs. The SHER with the cannulation tool (designated as the leader robot) is manipulated by the user to perform the surgical task and establishes the target position on the retinal surface. The other SHER with the light pipe (designated as the follower robot) is commanded automatically without any active user input through a hybrid velocity-force controller to track and illuminate the target position with the light pipe. Apart from the tracking task, the light pipe is controlled to pivot about the scleral port to keep the scleral force within a prescribed limit. The leader robot uses an admittance control scheme that takes the manipulation force applied by the user and the scleral force as inputs. The admittance controller provides a force-based virtual fixture at the scleral port for the cannulation tool to minimize the scleral force.

The proposed system is evaluated on a rubber eyeball phantom. The accuracy of target tracking by the light pipe, as well as the ability to keep the scleral forces applied by both tools at acceptable values, is analyzed. The results demonstrate that the light pipe can successfully track the target position on the retinal surface with an insignificant offset, and both tools can maintain scleral forces within a preset threshold. Therefore, the proposed system is capable of freeing up the surgeon's non-dominant hand, potentially allowing the surgeon to concentrate more closely on their dominant hand and the surgical task. Furthermore, the automated system can enable three-arm procedures (two surgical instruments held by the surgeon plus a light pipe held by the follower robot) in retinal surgery, making it possible to develop new procedures that may improve surgical outcome and efficiency.

II. Hardware system

The proposed automatic light pipe actuating system mainly consists of two components: the robotic system and the force sensing system, shown in Fig. 1 and described in detail as follows. For this study, it is assumed that the leader robot holds a functional surgical tool (e.g., cannulation tool) that the surgeon would use to actively perform a task and that the follower robot holds a light pipe.

Fig. 1.

Fig. 1.

(a) Experimental setup consisting of the SHER 2.0 and the SHER 2.1, two force-sensing tools that are each connected to an FBG interrogator, an eyeball phantom, and a microscope to view inside the eyeball. (b) The force-sensing tools, which include a light pipe and a cannulation tool. The scleral force is defined as the interaction force between the tool shaft and the scleral port on the eyeball. (c) Orientation of FBG fibers around the circumference of the tool shaft. Both the cannulation tool and the light pipe have 23 Ga outer diameters, and the FBG fibers are secured along the tool shafts using medical-grade adhesive.

A. Robotic system

The robotic system is comprised of the previously developed SHER 2.0 [20] and SHER 2.1 [7]. Both SHERs have five DoFs and similar mechanical structures. Three linear motion stages, one rotational stage, and a parallel link mechanism provide high translational accuracy (less than 3 μm) and rotational accuracy (0.005°). Surgical tools are mounted on the SHERs' end effectors, and users are able to grasp and manipulate the tools to perform tasks. Instead of the interaction forces applied by the user being translated directly to the tool, these forces are captured in real time by force/torque sensors and used to compute SHER velocities. This enables the robot to filter out hand tremors and/or excessive tool-to-tissue forces. In this study, the SHER 2.1 acts as the leader robot that is controlled by the user's dominant hand via a surgical tool. The SHER 2.0 acts as the follower robot that actuates the light pipe automatically.

B. Force sensing system

The force sensing system consists of two sensorized instruments, a cannulation tool and a light pipe, each connected to a separate FBG interrogator. The tools are designed and fabricated based on the method proposed in our previous work [10] to measure the scleral force as shown in Fig. 1 (b). Each tool integrates three FBG fibers (Technica S.A, Atlanta, GA, USA), with three active areas per fiber, along the tool shaft as shown in Fig. 1 (c). Using the calibration algorithm presented in [10], changes in FBG wavelength data collected by the interrogators can be correlated to applied force, resulting in sensorized tools that are able to sense scleral force. Root mean square (RMS) errors of the scleral force measurement for the light pipe and cannulation tool are 2.6 mN and 1.4 mN, respectively.
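For illustration, the sketch below shows one plausible way such a calibration could be implemented, assuming a purely linear mapping from FBG wavelength shifts to the transverse scleral force components fit by least squares. The function names, array shapes, and the linear model itself are illustrative assumptions rather than the exact procedure of [10].

```python
import numpy as np

def fit_fbg_calibration(dwl, forces):
    """Fit a linear map F = dwl @ K from wavelength shifts to forces.

    dwl    : (N, 9) array of wavelength shifts (3 fibers x 3 active areas),
             relative to the unloaded reference wavelengths.
    forces : (N, 2) array of reference transverse forces (Fx, Fy) in mN,
             e.g. applied with a known weight during calibration.
    Returns K with shape (9, 2), found by ordinary least squares.
    """
    K, *_ = np.linalg.lstsq(dwl, forces, rcond=None)
    return K

def sclera_force(K, wl, wl_ref):
    """Estimate the transverse scleral force from a raw wavelength sample."""
    return (wl - wl_ref) @ K   # (2,) vector: Fx, Fy in mN
```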

III. Control system

The leader robot is controlled by the user manipulating the functional surgical tool. The follower robot (which holds the light pipe) is programmed to move automatically. The goal of the follower robot is to align the light pipe with the calculated intersection point between the leader tool axis and the retinal surface, thus illuminating this desired point on the retina. Given that the location of the scleral ports may change during the procedure due to eyeball rotation within the socket, both robots use force-based virtual fixtures at their respective scleral ports to minimize damage to the sclera. The control system can be divided into three main components that are detailed in subsequent sections: preoperative registration, light pipe actuation control for the follower robot, and user-involved admittance control for the leader robot.

A. Preoperative Registration

1). Robot registration:

The first step in the control scheme is to establish the coordinate frames of the leader and follower robots with respect to one another. The robot registration scenario is shown in Fig. 2. A calibration cube with dimensions of 40 × 40 × 40 mm is placed in the shared workspace of the two robots. Paper targets are affixed to the four side surfaces and the top surface of the cube, with five target points per surface, giving a total of 25 target points well distributed in the robots' common workspace. When registration starts, one robot is manipulated manually to touch the 25 target points with its tool tip in a prescribed sequence, and the tool tip coordinates in that robot's base frame are recorded at each contact. Then, in the same sequence, the other robot is manually controlled to touch the target points, and its tip coordinates are recorded in the same way. The microscope is used to observe the contact between the tool tips and the markers to minimize data collection error. In total, 25 pairs of data points are collected. For the recorded point pairs, the registration of the two robots can subsequently be formulated as follows:

$$\begin{cases} P_1^{r2} = T_{r1}^{r2} P_1^{r1},\\ P_2^{r2} = T_{r1}^{r2} P_2^{r1},\\ \vdots\\ P_n^{r2} = T_{r1}^{r2} P_n^{r1} \end{cases} \tag{1}$$

where $P_i^{r1}$ and $P_i^{r2}$ denote the tool tip coordinates in the leader robot and follower robot base frames¹, respectively, and n is the number of collected points. $T_{r1}^{r2}$ represents the transformation from the leader robot base frame {r1} to the follower robot base frame {r2} and can be resolved using the least squares method².

Fig. 2.

Fig. 2.

Robot registration scenario. A cube with markers is placed in the workspace of both robots. The two robots are manipulated in turn to touch the same markers in a prescribed sequence with the tool tips.

To evaluate the registration accuracy, 20 points were randomly selected in the shared workspace of the two robots. The robots were controlled to have their tool tips touch these points in the same order. The tip coordinates were recorded in their respective base frames, and the coordinates in the leader robot base frame were transformed using Tr1r2 so that all of the coordinates could be directly compared. The average error of the registration was calculated according to Eq. 2 and found to be 0.4 mm.

$$\text{error}_{\text{robot}} = \frac{1}{n}\sum_i^n \left\| P_i^{r2} - T_{r1}^{r2} P_i^{r1} \right\|_2, \tag{2}$$

where $\|\cdot\|_2$ is the Euclidean norm, $i = 1, 2, \ldots, n$, and n = 20.
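A minimal sketch of this registration step under the assumptions stated above: homogeneous point coordinates and a plain linear least-squares fit of the transformation, which (as noted in Section V) does not enforce a proper rotation matrix. Function and variable names are illustrative.

```python
import numpy as np

def register_robots(p_r1, p_r2):
    """Estimate T such that p_r2 ≈ T @ p_r1 in homogeneous coordinates (Eq. 1).

    p_r1, p_r2 : (N, 3) arrays of matched tool-tip positions recorded in the
                 leader and follower base frames, respectively (N >= 4).
    Returns a 4x4 matrix whose upper 3x4 block is the least-squares fit.
    Note: the 3x3 block is not constrained to be a rotation matrix.
    """
    n = p_r1.shape[0]
    A = np.hstack([p_r1, np.ones((n, 1))])          # (N, 4) homogeneous inputs
    X, *_ = np.linalg.lstsq(A, p_r2, rcond=None)    # (4, 3), solves A @ X ≈ p_r2
    T = np.eye(4)
    T[:3, :] = X.T                                   # upper 3x4 block of the transform
    return T

def registration_error(T, p_r1, p_r2):
    """Average Euclidean residual over the evaluation points, as in Eq. 2."""
    n = p_r1.shape[0]
    A = np.hstack([p_r1, np.ones((n, 1))])
    pred = A @ T[:3, :].T
    return np.mean(np.linalg.norm(p_r2 - pred, axis=1))
```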

2). Eyeball registration:

Following the registration of the two robots with respect to one another, the eyeball also needs to be registered so that the target point for the follower robot, i.e., the intersection point between the axial line of the instrument on the leader robot and the retinal surface, can be calculated. Since the two robots are registered in section III-A1, the eyeball only needs to be registered in the base frame of either the leader or the follower robot. The chosen robot is manipulated under the microscope view to randomly touch different points on the eyeball surface with the tool tip, and the tip coordinates are recorded. Then the eyeball center location, denoted as (ex,ey,ez), and its radius, denoted as r, can be formulated as Eq. 3.

$$\begin{cases} (x_1 - e_x)^2 + (y_1 - e_y)^2 + (z_1 - e_z)^2 = r^2,\\ (x_2 - e_x)^2 + (y_2 - e_y)^2 + (z_2 - e_z)^2 = r^2,\\ \vdots\\ (x_n - e_x)^2 + (y_n - e_y)^2 + (z_n - e_z)^2 = r^2 \end{cases} \tag{3}$$

where (xi,yi,zi) (i = 1,2,…,n) are the coordinates of the tool tip when in contact with the eyeball surface and n is the number of collected data points.

By subtracting the first row from each of the remaining rows, Eq. 3 can be rewritten as Eq. 4:

$$\begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ \vdots & \vdots & \vdots \\ a_{n-1} & b_{n-1} & c_{n-1} \end{bmatrix} \begin{bmatrix} e_x \\ e_y \\ e_z \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_{n-1} \end{bmatrix} \tag{4}$$

where $a_i = 2(x_1 - x_{i+1})$, $b_i = 2(y_1 - y_{i+1})$, $c_i = 2(z_1 - z_{i+1})$, $d_i = x_1^2 - x_{i+1}^2 + y_1^2 - y_{i+1}^2 + z_1^2 - z_{i+1}^2$, and $1 \le i \le n-1$.

Then, the eyeball center $(e_x, e_y, e_z)$ can be obtained from Eq. 4 using the least squares method. Lastly, the radius of the eyeball, r, can be calculated using any one of the rows in Eq. 3.
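A minimal sketch of the sphere fit in Eqs. 3 and 4, assuming the touched points are stacked in an (N, 3) array; names are illustrative.

```python
import numpy as np

def fit_eyeball(points):
    """Fit a sphere (center, radius) to surface contact points (Eqs. 3-4).

    points : (N, 3) array of tool-tip positions recorded on the eyeball
             surface, N >= 4.
    """
    p1 = points[0]
    rest = points[1:]
    A = 2.0 * (p1 - rest)                            # rows: [a_i, b_i, c_i]
    d = np.sum(p1**2) - np.sum(rest**2, axis=1)      # right-hand side d_i
    center, *_ = np.linalg.lstsq(A, d, rcond=None)   # least-squares eyeball center
    r = np.linalg.norm(p1 - center)                  # radius from any one equation
    return center, r
```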

Similar to the robot registration evaluation procedure described in the previous section, an evaluation experiment for the eyeball registration is performed by collecting tip coordinates as the robot is manipulated to touch the eyeball surface at random locations. The average error was obtained using Eq. 5 and found to be 0.1 mm.

$$\text{error}_{\text{eye}} = \frac{1}{n}\sum_i^n \left\| \sqrt{(x_i - e_x)^2 + (y_i - e_y)^2 + (z_i - e_z)^2} - r \right\|_2, \tag{5}$$

where $(x_i, y_i, z_i)$ are the coordinates of the tool tip when touching the eyeball surface, $i = 1, 2, \ldots, n$ indexes the collected data points, and n = 20.

B. Automatic light pipe actuating control algorithm

The goal of this study is to automatically control the light pipe mounted on the follower robot based on the position and orientation of the surgeon-controlled tool mounted on the leader robot. To achieve this, control of orientation and insertion depth of the light pipe are decoupled and considered separately. Additionally, a force-based virtual fixture is applied to keep the light pipe moving about a pivot point, which is located at the scleral port.

1). Orientation alignment control:

Orientation control of the light pipe is designed to align the axis of the light pipe with a desired point on the retina, denoted as $P_d$. This desired point (as shown in Fig. 3) is the intersection between the axis of the leader instrument, which can be calculated with Eq. 6, and the retinal surface, which can be formulated using the eyeball center $(e_x, e_y, e_z)$ and the radius r calculated with Eq. 3. Given these constraints, $P_d^{r1}$ can be obtained by solving the set of equations consisting of Eq. 6 and Eq. 3, where the eyeball center location, $(e_x, e_y, e_z)$, and radius, r, are known from the registration procedures.

$$\begin{cases} x = x_a + (x_a - x_b)\,t \\ y = y_a + (y_a - y_b)\,t \\ z = z_a + (z_a - z_b)\,t, \end{cases} \tag{6}$$

where $(x_a, y_a, z_a)$ and $(x_b, y_b, z_b)$ are the coordinates of two arbitrary points on the tool axis in the base frame of the leader robot, e.g., the tool tip and the tool end.
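A sketch of how $P_d$ could be computed by substituting the parametric line of Eq. 6 into the sphere equation of Eq. 3 and solving the resulting quadratic in t. Choosing the root closest to the tool tip is an assumption on our part, since the paper does not state how the forward intersection is selected.

```python
import numpy as np

def leader_target_point(tip, end, center, r):
    """Intersect the leader tool axis with the retinal sphere (Eqs. 3 and 6).

    tip, end : (3,) arrays, two points on the tool axis (tool tip and tool end).
    center, r: eyeball center and radius from the registration step.
    Returns the intersection point closest to the tool tip, or None if the
    axis misses the sphere.
    """
    d = tip - end                    # axis direction, matching Eq. 6: p(t) = tip + d*t
    m = tip - center
    a = np.dot(d, d)
    b = 2.0 * np.dot(d, m)
    c = np.dot(m, m) - r**2
    disc = b**2 - 4.0 * a * c
    if disc < 0.0:
        return None                  # no intersection with the retinal sphere
    ts = [(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)]
    pts = [tip + t * d for t in ts]
    # pick the root whose point lies closest to the tool tip (t = 0 is the tip)
    return min(pts, key=lambda p: np.linalg.norm(p - tip))
```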

Fig. 3.

Fig. 3.

Illustration of the light pipe orientation control. The light pipe is controlled to point to the desired point $P_d$, which is achieved by pivoting the light pipe around the scleral port and aligning the scleral frame {s2} with the desired frame {s2*}. The Z-axis of the desired frame {s2*} aligns with the vector $\overrightarrow{P_d P_{s2}}$. $P_d^{r1}$ and $P_d^{r2}$ are the coordinates of the desired point in the base frames of the leader robot and the follower robot, respectively. $\omega_x$ and $\omega_y$ are the angular velocities around the X- and Y-axes of the scleral frame {s2}, respectively.

With this desired location on the retina calculated in the base frame of the leader robot, the point can then be transformed into the base frame of the follower robot, denoted as $P_d^{r2}$, with Eq. 7:

$$P_d^{r2} = T_{r1}^{r2} P_d^{r1} \tag{7}$$

This transformation allows for subsequent calculations for the light pipe orientation control to be performed in the follower robot base frame.

The light pipe is constrained to pivot about the scleral port, designated as $P_{s2}$, so that it does not apply excessive force to the sclera while making orientation adjustments³. $P_{s2}^{r2}$ can be obtained by solving for the intersection point between the original light pipe axis, which can be described with Eq. 6 using two arbitrary points on the light pipe axis, and the eyeball sphere from Eq. 3. As shown in Fig. 3, a scleral frame, designated as {s2}, can be established with its origin at the scleral port and its Z-axis aligned with the axis of the light pipe. Given the robot joint angles used to achieve the current light pipe orientation, the frame {s2} can be resolved using the kinematics of the SHER [20].

A vector $\overrightarrow{P_d P_{s2}}$ is defined to represent the desired light pipe orientation, as shown in Fig. 3. This desired orientation can be extended to establish a desired frame {s2*}, whose origin coincides with the light pipe scleral port, $P_{s2}$, and whose Z-axis aligns with $\overrightarrow{P_d P_{s2}}$. The X- and Y-axes can be resolved by using the SHER's kinematics [20] and the known Z-axis direction, $\overrightarrow{P_d P_{s2}}$.

With the {s2} and {s2*} frames established, the next step is to determine the relative transformation between them so that the current scleral frame, {s2}, can be aligned with the desired frame, {s2*}. Since the origins of both frames coincide at the scleral port, the transformation consists only of rotation without any translation. The rotation matrix of the frame {s2}, denoted as $R_{s2}$, represents the current orientation of the light pipe relative to the base frame {r2}. Likewise, the rotation matrix of the frame {s2*}, denoted as $R_{s2^*}$, represents the desired orientation of the light pipe relative to the base frame {r2}. The transitional rotation matrix, $\Delta R$, which represents the transformation from the current scleral frame to the target desired frame by pivoting about the scleral port, can be calculated using Eq. 8:

$$\Delta R = (R_{s2})^{-1} R_{s2^*}. \tag{8}$$

This transitional rotation $\Delta R$ can be decomposed into incremental rotations about the X- and Y-axes of the scleral frame {s2}, $\mathrm{Rot}(\Delta x)$ and $\mathrm{Rot}(\Delta y)$, respectively, as shown in Eq. 9:

$$\mathrm{Rot}(\Delta x)\,\mathrm{Rot}(\Delta y) = \Delta R \tag{9}$$

These two incremental rotations, Δx and Δy, can be used to calculate angular velocities ωx and ωy as shown in Eq. 10 according to a desired velocity profile (Fig. 4). The profile has two phases: a uniform phase and a deceleration phase. At the start of the control scheme, when the initial rotational offset between the current and desired orientations is large, the uniform phase sets a maximum angular velocity threshold for safety purposes. A sinusoidal trajectory is used for the deceleration phase to achieve a smooth and gradual velocity decrease as the current and desired orientations align.

$$\omega_i = \begin{cases} A\sin(B\,\Delta i), & |\Delta i| \le \max\Delta \\ \operatorname{sign}(\Delta i)\,\max\omega_{xy}, & |\Delta i| > \max\Delta, \end{cases} \tag{10}$$

where $|\cdot|$ represents the absolute value, $\omega_i$ is the angular velocity, and $\Delta i$ is the corresponding incremental rotation in Eq. 9, $i = x, y$; $\max\Delta$ defines the range of the deceleration phase, and $\max\omega_{xy}$ is the maximum value of the angular velocity. A and B are parameters tuning the desired velocity trajectory, which can be set as $A = \max\omega_{xy}$, $B = \pi/(2\max\Delta)$. Based on experimental evaluations, $\max\Delta$ and $\max\omega_{xy}$ are set to 0.2 rad and 0.4 rad/s, respectively.
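Putting Eqs. 8–10 together, the following sketch computes $\Delta R$, extracts the incremental X and Y rotations (the closed-form extraction shown here assumes the residual rotation about the tool axis is negligible, which the paper does not spell out), and applies the velocity profile of Fig. 4. All names are illustrative.

```python
import numpy as np

MAX_DELTA = 0.2        # rad, range of the deceleration phase (maxΔ)
MAX_OMEGA = 0.4        # rad/s, angular velocity cap (max ω_xy)

def velocity_profile(delta, max_delta, max_vel):
    """Sinusoidal deceleration / uniform-phase profile of Eq. 10 and Fig. 4."""
    if abs(delta) <= max_delta:
        return max_vel * np.sin(np.pi / (2.0 * max_delta) * delta)
    return np.sign(delta) * max_vel

def orientation_command(R_s2, R_s2_star):
    """Angular velocities (ωx, ωy) that pivot the light pipe toward the target.

    R_s2      : current scleral-frame rotation (3x3) in the follower base frame.
    R_s2_star : desired scleral-frame rotation (3x3), Z-axis along Pd - Ps2.
    """
    dR = R_s2.T @ R_s2_star                        # Eq. 8, (R_s2)^-1 R_s2*
    # Decompose dR ≈ Rot_x(Δx) Rot_y(Δy) (Eq. 9); the residual rotation about
    # the tool axis is assumed negligible (the robot has no roll DoF there).
    dy = np.arctan2(dR[0, 2], dR[0, 0])
    dx = np.arctan2(dR[2, 1], dR[1, 1])
    wx = velocity_profile(dx, MAX_DELTA, MAX_OMEGA)
    wy = velocity_profile(dy, MAX_DELTA, MAX_OMEGA)
    return wx, wy
```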

Fig. 4.

Fig. 4.

Desired angular velocity profile that consists of two phases: a uniform phase and a sinusoidal deceleration phase. Here, only the positive part of the profile is illustrated.

2). Insertion/Retraction control:

In addition to aligning the light pipe with the desired orientation, the insertion depth of the light pipe also needs to be controlled to provide the desired illumination of the retinal surface. In this control scheme, the distance between the tip of the light pipe and the retinal surface, L2, is controlled to match the distance between the leader instrument tip and the retinal surface, L1, as shown in Fig. 5. L2 and L1 can be obtained using Eq. 11. The aim of this approach is to achieve an adjustable illumination range, i.e., illuminating a larger retinal area when the leader instrument is farther from the retina and providing more focused illumination when the leader instrument is closer to the retina.

$$L_i = \left\| P_{ti}^{ri} - P_d^{ri} \right\|_2 \tag{11}$$

where $P_{ti}^{ri}$ is the tool tip position in the corresponding robot base frame and $P_d^{ri}$ is the intersection point between the tool axis and the retinal surface, i = 1, 2.

Fig. 5.

Fig. 5.

Illustration of the insertion/retraction control of the light pipe. The distance between the light pipe tip and the retinal surface, L2, is limited to the range specified by minL and maxL. Within this range, L2 is controlled to match the corresponding distance for the leader tool tip, L1.

The distance tracking control scheme described above for the light pipe is calculated in the scleral frame {s2} as shown in Eq. 12. A similar desired velocity profile combining a sinusoidal phase and a uniform phase shown in Fig. 4 is used to generate the insertion/retraction velocity vz.

$$v_z = \begin{cases} A\sin(B\,\Delta L), & \min L \le L_2 \le \max L \text{ and } |\Delta L| \le \max\Delta L \\ \operatorname{sign}(\Delta L)\,\max v_z, & \min L \le L_2 \le \max L \text{ and } |\Delta L| > \max\Delta L \\ \max v_z, & L_2 < \min L \\ 0, & \text{otherwise} \end{cases} \tag{12}$$

where $\Delta L = L_1 - L_2$; $\min L$ and $\max L$ are motion limits that prevent the light pipe from touching the retinal surface and from retracting out of the eyeball, respectively; $\max\Delta L$ defines the range of the deceleration phase of the desired velocity; and $\max v_z$ is the maximum value of the insertion/retraction velocity. A and B are parameters controlling the desired velocity trajectory and can be set as $A = \max v_z$, $B = \pi/(2\max\Delta L)$. Based on our experimental evaluations, $\min L$, $\max L$, $\max\Delta L$, and $\max v_z$ are set to 5 mm, 20 mm, 5 mm, and 12 mm/s, respectively, for the eye phantom in this study.
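A sketch of the piecewise insertion/retraction law of Eq. 12; the sign convention for motion along the scleral-frame Z-axis and the reuse of the profile shape from Eq. 10 are assumptions.

```python
import numpy as np

MIN_L, MAX_L = 5.0, 20.0     # mm, depth limits for the light pipe tip
MAX_DELTA_L = 5.0            # mm, deceleration range (maxΔL)
MAX_VZ = 12.0                # mm/s, insertion/retraction velocity cap

def profile(delta, max_delta, max_vel):
    """Same sinusoidal/uniform shape as in Eq. 10 and Fig. 4."""
    if abs(delta) <= max_delta:
        return max_vel * np.sin(np.pi / (2.0 * max_delta) * delta)
    return np.sign(delta) * max_vel

def insertion_command(L1, L2):
    """Axial velocity vz of the light pipe in the scleral frame {s2} (Eq. 12).

    L1 : distance from the leader tool tip to the retinal surface (Eq. 11).
    L2 : distance from the light pipe tip to the retinal surface.
    """
    dL = L1 - L2
    if L2 < MIN_L:                                  # too close to the retina
        return MAX_VZ                               # move away at full speed (sign per Eq. 12)
    if MIN_L <= L2 <= MAX_L:
        return profile(dL, MAX_DELTA_L, MAX_VZ)     # track the leader distance L1
    return 0.0                                      # otherwise hold position
```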

3). Force-based virtual fixture adjustment:

During retinal surgery, it is possible for the eyeball to rotate within the eye socket due to involuntary movement from the patient. Translational motion of the eyeball is limited by fixing the patient’s head prior to starting the procedure. However, since the orientation and insertion/retraction controls described above pivot the tool about a virtual fixture at the scleral port, any eyeball rotation could cause excessive scleral force that may damage the eyeball tissue. To address this, a force-based virtual fixture adjustment scheme is proposed. Using the sensorized capabilities of the light pipe, if the scleral forces exceed a certain threshold, these forces are fed into an admittance controller that applies velocity in the X and/or Y direction to adjust the X and/or Y coordinates of the virtual fixture at the scleral port and accommodate for any eyeball rotation as shown in Fig. 6. The velocity trajectory shown in Fig. 4 can be modified to calculate the X and Y velocities, vx and vy, of the light pipe relative to the follower robot scleral frame {s2} as shown in Eq. 13.

$$v_i = \begin{cases} A\sin(B\,\Delta f_{si}^{s2}), & 0 < \Delta f_{si}^{s2} \le \max\Delta f_{s}^{s2} \\ \operatorname{sign}(f_{si}^{s2})\,\max v_{xy}, & \Delta f_{si}^{s2} > \max\Delta f_{s}^{s2} \\ 0, & \text{otherwise} \end{cases} \tag{13}$$

where $\Delta f_{si}^{s2} = |f_{si}^{s2}| - f_{gate}$, $f_{si}^{s2}$ is the scleral force, $i = x, y$; $f_{gate}$ defines the threshold value of the force that activates the virtual fixture adjustment controller; $\max v_{xy}$ denotes the maximum value of the linear velocities, and $\max\Delta f_{s}^{s2}$ determines the deceleration range of the desired velocity trajectory. A and B are parameters controlling the sinusoidal shape of the velocity trajectory and can be set as $A = \max v_{xy}$, $B = \pi/(2\max\Delta f_{s}^{s2})$. Based on our experimental evaluations, $f_{gate}$, $\max v_{xy}$, and $\max\Delta f_{s}^{s2}$ are set to 50 mN, 5 mm/s, and 20 mN, respectively.

Fig. 6.

Fig. 6.

Illustration of the force-based virtual fixture. For the follower robot, the light pipe is controlled based on the scleral forces to pivot about the corresponding scleral port, the location of which is subject to change due to eyeball rotation. For the leader robot, the user applies a manipulation force at the tool handle to fully control the tool, where the handle frame is denoted as {h1}; when the scleral force exceeds the threshold, the linear motions of the tool in the X and Y directions are constrained based on the scleral force.

The above-calculated velocity components form the light pipe velocity in the scleral frame {s2}, as shown in Eq. 14,

$$v_{s2}^{s2} = [v_x, v_y, v_z, \omega_x, \omega_y, 0] \tag{14}$$

where the last component is 0 because the robot has 5 DoFs.

Then, vs2s2 is transformed into the follower robot base frame {r2} as shown in Eq. 15.

$$v_{s2}^{r2} = \mathrm{Ad}_{g_{s2}^{r2}}\, v_{s2}^{s2}, \tag{15}$$

where $\mathrm{Ad}_{g_{s2}^{r2}} = \begin{bmatrix} R_{s2}^{r2} & \hat{p}_{s2}^{r2} R_{s2}^{r2} \\ 0 & R_{s2}^{r2} \end{bmatrix}$, with $R_{s2}^{r2}$ as the rotation matrix and $p_{s2}^{r2}$ as the translation vector of the rigid transformation from the scleral frame to the robot base frame; $\hat{p}_{s2}^{r2}$ is the skew-symmetric matrix⁴ associated with the vector $p_{s2}^{r2}$.

Lastly, the robot joint velocities can be calculated using the standard Jacobian pseudo-inverse solution, as shown in Eq. 16, and transmitted to the low-level velocity controller to actuate the robot,

$$\dot{q}_{r2} = J_{r2}^{-1}\, v_{s2}^{r2} \tag{16}$$

where $J_{r2}^{-1}$ is the pseudo-inverse of the follower robot's velocity Jacobian matrix.
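The following sketch combines Eqs. 14–16: it assembles the commanded twist in {s2}, maps it into the follower base frame with the adjoint, and converts it to joint velocities with the Jacobian pseudo-inverse. The Jacobian itself would come from the SHER kinematics [20] and is treated as an input here; function names are illustrative.

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix such that skew(p) @ b == np.cross(p, b)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def adjoint(R, p):
    """Adjoint of the rigid transform (R, p), with the block form used in Eq. 15."""
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[:3, 3:] = skew(p) @ R
    Ad[3:, 3:] = R
    return Ad

def follower_joint_velocities(vx, vy, vz, wx, wy, R_s2_r2, p_s2_r2, J_r2):
    """Map the scleral-frame command to joint space (Eqs. 14-16)."""
    v_s2 = np.array([vx, vy, vz, wx, wy, 0.0])       # Eq. 14, last component 0 (5 DoF)
    v_r2 = adjoint(R_s2_r2, p_s2_r2) @ v_s2          # Eq. 15
    return np.linalg.pinv(J_r2) @ v_r2               # Eq. 16, Jacobian pseudo-inverse
```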

C. Virtual fixture adjustment for leader robot instrument

A variable admittance control scheme is adopted, based on our previous velocity-level admittance control method [22], to implement a virtual fixture at the scleral port for the leader robot instrument. During operation, both the user and the robot hold the tool, and interaction forces from the user’s hand applied to the instrument handle are measured and fed into the admittance controller. Tool motion is constrained at the scleral port, i.e., the tool can only pivot about the scleral port and insert/retract along the tool axis. At the same time, the scleral forces fed into the admittance controller are used to calculate the linear velocities whose directions are perpendicular to the tool axis. This controls the instrument to accommodate the eyeball rotational movements, as shown below.

Firstly, the handle force is resolved in the scleral frame (Fig. 6) as shown in Eq. 17:

$$f_h^{s1} = \mathrm{Ad}_{g_{h1}^{s1}}\, f_h^{h1} \tag{17}$$

where $f_h^{h1} \in \mathbb{R}^{6\times 1}$ and $f_h^{s1} \in \mathbb{R}^{6\times 1}$ are the user's hand force in the handle frame {h1} and the scleral frame {s1}, respectively. $\mathrm{Ad}_{g_{h1}^{s1}}$ is the adjoint transformation from the handle frame to the scleral frame, which can be written as $\mathrm{Ad}_{g_{h1}^{s1}} = \begin{bmatrix} R_{h1}^{s1} & \hat{p}_{h1}^{s1} R_{h1}^{s1} \\ 0 & R_{h1}^{s1} \end{bmatrix}$, with $R_{h1}^{s1}$ as the rotation matrix and $p_{h1}^{s1}$ as the translation vector of the rigid transformation from the handle frame to the scleral frame; $\hat{p}_{h1}^{s1}$ is the skew-symmetric matrix associated with the vector $p_{h1}^{s1}$.

Then, the velocity of the instrument in the scleral frame, $v_{s1}^{s1}$, can be obtained using Eq. 18:

$$v_{s1}^{s1} = W_1 v_d + \gamma W_2 f_h^{s1} \tag{18}$$

where $W_1$ and $W_2$ are diagonal admittance matrices that can be set as $W_1 = \mathrm{diag}([1,1,0,0,0,0]^T)$ and $W_2 = \mathrm{diag}([0,0,1,1,1,1]^T)$; $\gamma$ is the admittance gain adjusted by the user in real time via a foot pedal. $v_d$ is the desired compensational velocity, determined by the scleral forces ($f_{sx}^{s1}$ and $f_{sy}^{s1}$), that controls the instrument to follow the eyeball motion, given as Eq. 19:

$$v_d = [v_{dx}, v_{dy}, 0, 0, 0, 0]^T \tag{19}$$

where vdx and vdy are given by Eq. 20. The trajectory described in Fig. 4 is modified for these desired velocity phases.

$$v_{di} = \begin{cases} A\sin(B\,\Delta f_{si}^{s1}), & 0 < \Delta f_{si}^{s1} \le \max\Delta f_{s}^{s1} \\ \operatorname{sign}(f_{si}^{s1})\,\max v_{xy}, & \Delta f_{si}^{s1} > \max\Delta f_{s}^{s1} \\ 0, & \text{otherwise} \end{cases} \tag{20}$$

where $\Delta f_{si}^{s1} = |f_{si}^{s1}| - f_{gate}$, $f_{si}^{s1}$ is the scleral force in the X or Y direction of the scleral frame {s1}, $i = x, y$; $f_{gate}$, $\max v_{xy}$, $\max\Delta f_{s}^{s1}$, A, and B are parameters defined analogously to those in Eq. 13.

Lastly, the tool velocity is resolved in the leader robot base frame using Eq. 21, which can be further used to obtain the robot joint velocities using a Jacobian pseudo-inverse solution similar to Eq. 16:

$$v_{s1}^{r1} = \mathrm{Ad}_{g_{s1}^{r1}}\, v_{s1}^{s1} \tag{21}$$

where $\mathrm{Ad}_{g_{s1}^{r1}} = \begin{bmatrix} R_{s1}^{r1} & \hat{p}_{s1}^{r1} R_{s1}^{r1} \\ 0 & R_{s1}^{r1} \end{bmatrix}$, with $R_{s1}^{r1}$ as the rotation matrix and $p_{s1}^{r1}$ as the translation vector of the rigid transformation from the scleral frame to the leader robot base frame; $\hat{p}_{s1}^{r1}$ is the skew-symmetric matrix associated with the vector $p_{s1}^{r1}$.
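A sketch of the leader-side admittance law of Eqs. 17–20, using the parameter values quoted in the text. Applying the sign of the measured scleral force in the sinusoidal branch mirrors the saturated branch and is an assumption on our part; names are illustrative.

```python
import numpy as np

F_GATE = 50.0        # mN, scleral force threshold that activates compensation
MAX_VXY = 5.0        # mm/s, cap on the compensational velocity
MAX_DF = 20.0        # mN, deceleration range of the force-driven profile

W1 = np.diag([1, 1, 0, 0, 0, 0]).astype(float)
W2 = np.diag([0, 0, 1, 1, 1, 1]).astype(float)

def force_profile(f):
    """Compensational velocity from one scleral force axis (form of Eq. 20)."""
    df = abs(f) - F_GATE
    if df <= 0.0:
        return 0.0
    if df <= MAX_DF:
        return np.sign(f) * MAX_VXY * np.sin(np.pi / (2.0 * MAX_DF) * df)
    return np.sign(f) * MAX_VXY

def leader_velocity(f_handle_h1, f_sx, f_sy, Ad_h1_s1, gamma):
    """Tool velocity in the leader scleral frame {s1} (Eqs. 17-19).

    f_handle_h1 : 6-vector handle force/torque measured in the handle frame {h1}.
    f_sx, f_sy  : scleral force components in {s1}.
    Ad_h1_s1    : 6x6 adjoint from the handle frame to the scleral frame.
    gamma       : admittance gain set by the user via the foot pedal.
    """
    f_h_s1 = Ad_h1_s1 @ f_handle_h1                                   # Eq. 17
    v_d = np.array([force_profile(f_sx), force_profile(f_sy), 0, 0, 0, 0])  # Eq. 19
    return W1 @ v_d + gamma * W2 @ f_h_s1                             # Eq. 18
```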

IV. Experiments and Results

A. Experimental setup

The experimental setup is shown in Fig. 1. The two robot systems are connected by a local area network to minimize the delay of signal transmission between the two robot controllers. The robot control systems are programmed based on the CISST library [23]. Data transmission between the two robots is accomplished using the Robot Operating System (ROS) [24]. The controllers run at a rate of 2 kHz. The force-sensing tools are mounted at the end effectors of the SHERs, and FBG interrogators (SI 115 and SM 130, Micron Optics Inc., GA, USA) collect FBG sensor readings at a 2 kHz refresh rate. An eye phantom made of silicone rubber is placed into a 3D-printed socket. The eye phantom can rotate freely, mimicking the human eyeball. A binocular microscope (ZEISS, Germany) is utilized to provide a magnified view of the target and the inner part of the eyeball. A Point Grey camera (FLIR Systems Inc., BC, Canada) is attached to the microscope for recording purposes, and a monitor is used to display the camera view for easy visualization of the eyeball interior.

B. Static evaluation

To statically evaluate the tracking accuracy of the proposed system, two non-force-sensing tools are mounted on the end effectors of the leader and follower SHERs. Long, thin needles, with a diameter of 0.3 mm, are inserted into each tool. The needles can manually telescope within the tool shafts to provide physical extensions of the needle trajectories along the tool axes. Twenty paper targets, each with three printed concentric circles, are glued uniformly on the bottom surface of the eyeball phantom as shown in Fig. 7. The diameters of the circles are 1 mm, 2 mm, and 3 mm, respectively. For the experiment, the two tools are passed through separate scleral ports into the eyeball. The user controls the leader robot and directs the leader tool tip towards the center of a target, and the follower robot simultaneously and automatically adjusts accordingly. The experiments are carried out according to the following steps:

  • Insert the two tools into the eyeball through the scleral ports, then switch the follower robot control to automatic mode;

  • Manipulate the leader robot to align the tool with one of the targets. The follower robot moves automatically to the same target;

  • With both robots stationary, insert the telescoping needles into both leader and follower tools, and extend the needles so they are touching the eyeball surface;

  • Record the position tracking error ep based on the locations of the two extended tool tips:
    • ep = 1, if both tips are located inside the inner circle (Φ = 1 mm);
    • ep = 2, if both tips are located inside the middle circle (Φ = 2 mm) but at least one tip is outside the inner circle;
    • ep = 3, if both tips are located inside the outer circle (Φ = 3 mm) but at least one tip is outside the middle circle;
  • Measure and calculate the distance tracking error ed:
    • Secure the telescoping needles within the tools;
    • Switch the robot controls to manual mode and retract the tools out of the eyeball;
    • Measure the length of needle extending beyond each tool tip using calipers with a resolution of 0.01 mm to obtain the distances L1 and L2;
    • Calculate the distance tracking error ed = |L1 − L2|.
  • Repeat the above-mentioned procedures until all the targets have been used.

Fig. 7.

Fig. 7.

The microscope view of the tracking accuracy evaluation experiment. Paper targets are attached on the bottom surface of the eyeball. Telescoping needles are inserted inside the tools to serve as physical extensions and touch the target.

To evaluate system repeatability, the experiment is performed a total of 10 times. The results show that the average position-tracking error is 2.0 mm with standard deviation of 0.6 mm, and the average distance-tracking error is 0.37 mm with standard deviation of 0.22 mm. The distributions of the position-tracking error and distance-tracking error are shown in Fig. 8.

Fig. 8.

Fig. 8.

Box-plot for position-tracking offset and depth-tracking error.

C. Dynamic evaluation

Following the static evaluation, a dynamic evaluation experiment is performed to verify the tracking accuracy and assess the ability of the force-based virtual fixtures to maintain scleral forces within a given threshold while the tools are moving. The force-sensing light pipe and the force-sensing cannula are used to collect real-time scleral forces. A paper cutout printed with several curved lines representing retinal vessels is glued on the eyeball inner surface to serve as a target. The differently colored curved lines all intersect at the central point, which represents the optic disc. At the start of the dynamic experiment, the two tools are inserted into the eyeball through the scleral ports. The user then manipulates the leader robot to follow the vessels with the cannulation tool tip, while the light pipe is automatically controlled by the follower robot to provide illumination to the desired point on the retina. During the vessel following, the user keeps a close distance between the cannula tool tip and the eyeball surface, whereas the light pipe tip is maintained at a safe separation distance according to Eq. 12. The vessel following procedure is performed in a random order on the four vessels, and each vessel is traced 10 times to assess repeatability. For this experiment, the force thresholds in Eq. 13 and Eq. 20 are both set to 50 mN.

The experiments are recorded using the camera installed on the microscope. To quantitatively evaluate the tracking accuracy of the light pipe, each frame of the video is processed by Algorithm 1.

The cannulation tool tip, Pt, is extracted, and the illumination range is estimated by a circle with radius r. The center of the light, Pl, is calculated as shown in Fig. 9. Two metrics are proposed for assessing how well the light pipe follows the leader tool: the tracking offset d, which can be obtained using Eq. 22, and the tracking offset ratio d/r.

$$d = \alpha \left\| P_l - P_t \right\|_2, \tag{22}$$

where α is the scale factor converting image pixels to mm. After processing and aggregating the results from all of the videos, the average tracking offset, d, was 2.45 mm with a standard deviation of 1.33 mm, and the average offset ratio, d/r, was calculated to be 29% with a standard deviation of 16%. The distributions of the tracking offset and offset ratio are shown in Fig. 10 (a) and (b).
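For illustration, a minimal per-frame version of this offset computation might look as follows, assuming a simple threshold-based binarization with OpenCV. This is not a reproduction of Algorithm 1; the threshold value and the centroid-based light center are assumptions.

```python
import cv2
import numpy as np

def tracking_offset(frame_gray, tool_tip_px, alpha, thresh=200):
    """Per-frame tracking offset d (Eq. 22) and offset ratio d/r.

    frame_gray : grayscale video frame from the microscope camera.
    tool_tip_px: (u, v) pixel coordinates of the cannulation tool tip Pt.
    alpha      : mm-per-pixel scale factor.
    thresh     : intensity threshold isolating the illuminated area.
    """
    _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None, None                         # no illumination detected
    center = np.array([xs.mean(), ys.mean()])     # light center Pl (centroid)
    radius_px = np.sqrt(xs.size / np.pi)          # equivalent-circle radius r
    d = alpha * np.linalg.norm(center - np.asarray(tool_tip_px, float))
    return d, d / (alpha * radius_px)
```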

Fig. 9.

Fig. 9.

Examples of detected illumination range (blue line) and the tool tip (green dot). The images are captured using the microscope and camera.

Fig. 10.

Fig. 10.

Box-plot for the position tracking offset and scleral forces in the dynamic experiments.


In addition, the scleral forces of two tools during the experiment were also collected and evaluated. The average force is calculated to be 29.2 mN with a standard deviation of 15.6 mN and 42.8 mN with standard deviation of 13.1 mN for the cannulation tool and light pipe, respectively. The force distributions for the cannulation tool and light pipe are shown in Fig. 10 (c) and (d), respectively.

V. Discussion

The experimental results show that the tracking offset is less than 2.5 mm. As confirmed by our clinical lead, such a small offset between the center of illumination on the retinal surface and the intersection point between the leader tool tip axis and the retinal surface would not significantly impact the surgeon, given the relatively wide illumination cone of the light pipe. Furthermore, the offset ratio, d/r, is reported to be 29%, which indicates that the cannulation tool tip is always covered by the central light cone with sufficient brightness. The distance tracking error of 0.37 mm indicates that the distance between the light pipe tip and the retinal surface can be successfully controlled relative to the same measure for the leader tool.

The average manipulation force on the sclera is approximately 50 mN for an expert clinician operating freehand on a rubber eyeball [25]. Herein, the same value was adopted as the scleral force threshold to restrict movement of the tools at the virtual fixtures. The results of the dynamic experiments show that the average scleral forces from both the leader cannulation tool and the light pipe were within the prescribed threshold shown in Fig. 10 (c) and (d), which suggests that the proposed control system can prevent injury to the sclera. It should be noted that the scleral force outliers are likely generated when the robots transition from stationary to moving. Nevertheless, the outliers fall below 120 mN, which is less than the maximum force exerted by the expert clinician in [25], so even the maximum forces exerted by either tool in this study are acceptable to avoid scleral damage.

In the static evaluation experiment, three circles with diameter differences of 1 mm were utilized to measure the tracking offset, and the measurements were read manually, which is subject to human error. In order to minimize this inherent error, the measurements were repeated multiple times for each target, and the average value was taken as the final reading. Additionally, the microscope was used to provide a magnified view of the targets and needle tip locations. In the dynamic evaluation experiment, since there is no literature characterizing endo-illumination inside the eyeball during retinal surgery, a threshold-based image binarization method was used to determine the light projection area and quantify the illumination range from the video. The proposed method worked well for this study, as the light source emits light with constant intensity and the distance between the light pipe tip and the retinal surface was also kept constant during the dynamic experiment. Therefore, the illumination brightness stayed fairly consistent throughout the experiment, allowing the results to be easily compared and aggregated. The dynamic evaluation experiment demonstrated the tracking performance of the SHER under the user's manual manipulation; the SHER can also follow injected movements, e.g., a reciprocating motion with an amplitude of 240 μm at a frequency of 1 Hz [26].

For the orientation alignment control, the solutions for $\mathrm{Rot}(\Delta x)\mathrm{Rot}(\Delta y) = \Delta R$ in Eq. 9 theoretically require sequential movements, i.e., first rotate about the X-axis by Δx and then rotate about the Y-axis by Δy. However, in this case, they were considered as simultaneous rotations about the two axes. This is a valid approximation for a robot controller running at sufficiently high frequency, since Δx and Δy are continuously updated. In this case, the SHER controller runs at 2 kHz, a frequency high enough to diminish the user-perceived differences between the sequential and simultaneous controls. Also, as a qualitative assessment observed during the experiments, the follower robot was able to smoothly track the leader robot, which illustrates the effectiveness and validity of the proposed control solution.

In the robot registration process, the SHERs were manually manipulated to touch the targets with their tool tips, and the tip positions were then recorded as the datasets for registration. Although the data collection was performed under the microscope view, the process inevitably introduces data collection error due to the user's subjective error. Therefore, to mitigate the data noise, a large number of data points was collected and the registration was treated as a linear regression problem. The least squares method was then utilized to obtain the transformation matrix. One drawback of this solution is that the obtained transformation matrix does not necessarily comply with the properties of a homogeneous transformation matrix. However, this negative side effect has only an insignificant impact on the results, which can be seen from the small registration error of 0.4 mm. In fact, the above compromise is caused by the inaccurate data collection procedure, which could be relieved by using non-contact data collection approaches, e.g., mounting markers at the robots' tool tips and obtaining the tip positions using a laser tracker system, which is also one of the focuses of our future work.

One limitation of the current work is that, in order to obtain the desired location (the intersection point between the leader instrument axial line and the retinal surface) to direct the light pipe, the eyeball must be preoperatively registered in the robot frame. Furthermore, the eyeball is assumed to be a standard sphere. In a clinical scenario, other available technologies could be leveraged to handle the eyeball registration step, such as image-based technology, e.g., OCT [27]. However, even without registration of the eyeball, the proposed automatic light pipe actuating system could still work by instead directing the light pipe to follow the leader tool tip or another desired retinal location obtained by real-time imaging [19]. At the same time, the registration accuracy of the two SHERs was found to be 0.4 mm using the presented least squares method, which could be further improved by using an optical tracking system.

In this work, two SHERs are utilized to implement the automatic light pipe holding system. As an area of future investigation, the SHER used as the leader robot could be replaced by another robotic arm, a hand-held device, or even a regular surgical instrument. Future work will also consider human-robot interactions in surgery scenarios and will employ robotic manipulators with extra DoFs to explore redundancy resolution in robot control [28].

VI. Conclusion

In this study, an automatic light pipe actuating system for bimanual retinal surgery was presented to provide robot-controlled, targeted illumination inside the eye. The system consists of a customized force-sensing light pipe and a cannulation instrument that are mounted on the end effectors of two SHERs. The leader SHER with the cannulation tool is manipulated by the user to perform surgical tasks, and the follower SHER with the light pipe is controlled automatically to track the desired location on the retina. Two sets of experiments were carried out, including a static evaluation of the tracking and distance following accuracy and a dynamic evaluation to verify that the light pipe is able to successfully illuminate the desired location on the retina without applying excessive forces to the sclera. The results show that, (a) in the static experiment, the average tracking offset was 2.02 mm with a standard deviation of 0.62 mm for position and 0.37 mm with a standard deviation of 0.22 mm for distance, and (b) in the dynamic experiment, the average light following offset was 2.45 mm with a standard deviation of 1.33 mm, and the average scleral forces were 29.2 mN with a standard deviation of 15.6 mN and 42.8 mN with a standard deviation of 13.1 mN for the cannulation tool and the light pipe, respectively. These results demonstrate the practicability of the proposed system for providing automatic illumination in retinal surgery.

Acknowledgments

This work was supported by the U.S. National Institutes of Health under grants 1R01EB023943-01 and 1R01EB025883-01A1. The work of C. He was supported in part by the China Scholarship Council under grant 201706020074, the National Natural Science Foundation of China under grant 51875011, and the National Hi-tech Research and Development Program of China under grant 2017YFB1302702.

Biographies


Changyan He received the B.E. degree from the School of Mechanical Engineering and Automation, Beijing Jiaotong University, China, in 2015 and is currently a Ph.D. candidate at Beihang University. He was a visiting Ph.D. student at the Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, from 2017 to 2019. His research interests include medical robotics and instrumentation, force control, and deep learning.


Emily Yang received her M.S.E. in Robotics from the Johns Hopkins University (JHU) in 2020 and her B.S. in Mechanical Engineering from the Massachusetts Institute of Technology (MIT) in 2014. Her research interests include MRI-compatible medical devices as well as surgical robots and instrumentation.


Niravkumar Patel is a postdoctoral fellow at the Laboratory for Computational Sensing and Robotics, Johns Hopkins University. He received his Bachelor's degree in Computer Engineering in 2005 from North Gujarat University, India, and his M.Tech. degree in Computer Science and Engineering in 2007 from Nirma University, India. He received the Ph.D. degree in Robotics Engineering from Worcester Polytechnic Institute in 2017. His current research interests include medical robotics, image-guided interventions, robot-assisted retinal surgery, and path planning.


Ali Ebrahimi received his M.S.E. in Robotics from the Johns Hopkins University (JHU) in 2019, where he has been working toward the Ph.D. degree in Mechanical Engineering since 2017. He received his B.Sc. and M.Sc. in Mechanical Engineering from Amirkabir University of Technology (Tehran Polytechnic) and Sharif University of Technology, Tehran, Iran, in 2014 and 2016, respectively. His research interests include control theory, parameter estimation, and computer vision with special applications in surgical robotics and instrumentation.


Mahya Shahbazi is currently an R&D Engineer at Google LLC, Sunnyvale, CA, USA. She was a Postdoctoral Fellow at the Laboratory for Computational Sensing and Robotics, Johns Hopkins University, in 2018–2019. Mahya received her Ph.D. degree in Electrical and Computer Engineering from the University of Western Ontario in 2017. She was a visiting research scholar at the University of Alberta in 2014 and a Postdoctoral Associate at the Canadian Surgical Technologies and Advanced Robotics Centre in 2017–2018. Mahya has been the recipient of several prestigious awards, including the 2018–2019 NSERC Postdoctoral Fellowship. Her main research interests include Medical Robotics, Haptics and Teleoperation, Human-Robot Interaction, Advanced Control Systems, and Artificial Intelligence.


Peter Gehlbach, M.D., Ph.D., is the J.W. Marriott Professor of Ophthalmology at the Johns Hopkins University School of Medicine's Wilmer Eye Institute with a secondary appointment in the Whiting School of Engineering. He has built an internationally recognized retinal surgery practice and maintains a long-standing interest in robotic applications to microsurgery.


Iulian Iordachita (IEEE M’08, S’14) is a faculty member of the Laboratory for Computational Sensing and Robotics, Johns Hopkins University, and the director of the Advanced Medical Instrumentation and Robotics Research Laboratory. He received the M.Eng. degree in industrial robotics and the Ph.D. degree in mechanical engineering in 1989 and 1996, respectively, from the University of Craiova. His current research interests include medical robotics, image guided surgery, robotics, smart surgical tools, and medical instrumentation.

Footnotes

¹ r1 denotes the base frame of the leader robot, and r2 denotes the base frame of the follower robot.

² $P_i^{r1}$ and $P_i^{r2}$ are represented using homogeneous coordinates, and $T_{r1}^{r2}$ is a homogeneous transformation matrix [21].

³ s1 denotes the scleral frame of the leader robot, and s2 denotes the scleral frame of the follower robot.

⁴ The skew-symmetric matrix is defined via the cross product: Skew(a)b = a × b.

References

  • [1] Gupta PK, Jensen PS, and de Juan E, "Surgical forces and tactile perception during retinal microsurgery," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 1999, pp. 1218–1225.
  • [2] Edwards T, Xue K, Meenink H, Beelen M, Naus G, Simunovic M, Latasiewicz M, Farmery A, de Smet M, and MacLaren R, "First-in-human study of the safety and viability of intraocular robotic surgery," Nature Biomedical Engineering, p. 1, 2018.
  • [3] Gijbels A, Smits J, Schoevaerdts L, Willekens K, Vander Poorten EB, Stalmans P, and Reynaerts D, "In-human robot-assisted retinal vein cannulation, a world first," Annals of Biomedical Engineering, pp. 1–10, 2018.
  • [4] Rahimy E, Wilson J, Tsao T, Schwartz S, and Hubschman J, "Robot-assisted intraocular surgery: development of the IRISS and feasibility studies in an animal model," Eye, vol. 27, no. 8, p. 972, 2013.
  • [5] He C, Huang L, Yang Y, Liang Q, and Li Y, "Research and realization of a master-slave robotic system for retinal vascular bypass surgery," Chinese Journal of Mechanical Engineering, vol. 31, no. 1, p. 78, 2018.
  • [6] Nasseri MA, Eder M, Nair S, Dean E, Maier M, Zapp D, Lohmann CP, and Knoll A, "The introduction of a new robot for assistance in ophthalmic surgery," in 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2013, pp. 5682–5685.
  • [7] He X, Roppenecker D, Gierlach D, Balicki M, Olds K, Gehlbach P, Handa J, Taylor R, and Iordachita I, "Toward clinically applicable steady-hand eye robot for vitreoretinal surgery," in ASME 2012 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2012, pp. 145–153.
  • [8] Yu H, Shen J-H, Shah RJ, Simaan N, and Joos KM, "Evaluation of microsurgical tasks with OCT-guided and/or robot-assisted ophthalmic forceps," Biomedical Optics Express, vol. 6, no. 2, pp. 457–472, 2015.
  • [9] Song C, Park DY, Gehlbach PL, Park SJ, and Kang JU, "Fiber-optic OCT sensor guided smart micro-forceps for microsurgery," Biomedical Optics Express, vol. 4, no. 7, pp. 1045–1050, 2013.
  • [10] He X, Balicki M, Gehlbach P, Handa J, Taylor R, and Iordachita I, "A multi-function force sensing instrument for variable admittance robot control in retinal microsurgery," in Robotics and Automation (ICRA), 2014 IEEE International Conference on. IEEE, 2014, pp. 1411–1418.
  • [11] Schoevaerdts L, Esteveny L, Gijbels A, Smits J, Reynaerts D, and Vander Poorten E, "Design and evaluation of a new bioelectrical impedance sensor for micro-surgery: application to retinal vein cannulation," International Journal of Computer Assisted Radiology and Surgery, vol. 14, no. 2, pp. 311–320, 2019.
  • [12] Cutler N, Balicki M, Finkelstein M, Wang J, Gehlbach P, McGready J, Iordachita I, Taylor R, and Handa JT, "Auditory force feedback substitution improves surgical precision during simulated ophthalmic surgery," Investigative Ophthalmology & Visual Science, vol. 54, no. 2, pp. 1316–1324, 2013.
  • [13] Ebrahimi A, He C, Roizenblatt M, Patel N, Sefati S, Gehlbach P, and Iordachita I, "Real-time sclera force feedback for enabling safe robot-assisted vitreoretinal surgery," in 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2018, pp. 3650–3655.
  • [14] Ebrahimi A, Patel N, He C, Gehlbach P, Kobilarov M, and Iordachita I, "Adaptive control of sclera force and insertion depth for safe robot-assisted retinal surgery," in Robotics and Automation (ICRA), 2019 IEEE International Conference on. IEEE, 2019.
  • [15] Ebrahimi A, Alambeigi F, Zimmer-Galler IE, Gehlbach P, Taylor RH, and Iordachita I, "Toward improving patient safety and surgeon comfort in a synergic robot-assisted eye surgery: A comparative study," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019, pp. 7075–7082.
  • [16] He C, Patel N, Iordachita I, and Kobilarov M, "Enabling technology for safe robot-assisted retinal surgery: Early warning for unsafe scleral force," in Robotics and Automation (ICRA), 2019 IEEE International Conference on. IEEE, 2019, pp. 3889–3894.
  • [17] He C, Patel N, Shahbazi M, Yang Y, Gehlbach P, Kobilarov M, and Iordachita I, "Toward safe retinal microsurgery: Development and evaluation of an RNN-based active interventional control framework," IEEE Transactions on Biomedical Engineering, vol. 67, no. 4, pp. 966–977, 2020.
  • [18] Yang S, Martel JN, Lobes LA Jr, and Riviere CN, "Techniques for robot-aided intraocular surgery using monocular vision," The International Journal of Robotics Research, vol. 37, no. 8, pp. 931–952, 2018.
  • [19] Braun D, Yang S, Martel JN, Riviere CN, and Becker BC, "EyeSLAM: Real-time simultaneous localization and mapping of retinal vessels during intraocular microsurgery," The International Journal of Medical Robotics and Computer Assisted Surgery, vol. 14, no. 1, p. e1848, 2018.
  • [20] Üneri A, Balicki MA, Handa J, Gehlbach P, Taylor RH, and Iordachita I, "New steady-hand eye robot with micro-force sensing for vitreoretinal surgery," in Biomedical Robotics and Biomechatronics (BioRob), 2010 3rd IEEE RAS and EMBS International Conference on. IEEE, 2010, pp. 814–819.
  • [21] Craig JJ, Introduction to Robotics: Mechanics and Control, 3/E. Pearson Education India, 2009.
  • [22] Kumar R, Berkelman P, Gupta P, Barnes A, Jensen PS, Whitcomb LL, and Taylor RH, "Preliminary experiments in cooperative human/robot force control for robot assisted microsurgical manipulation," in Robotics and Automation, 2000. Proceedings. ICRA '00. IEEE International Conference on, vol. 1. IEEE, 2000, pp. 610–617.
  • [23] Deguet A, Kumar R, Taylor R, and Kazanzides P, "The cisst libraries for computer assisted intervention systems," in MICCAI Workshop on Systems and Arch. for Computer Assisted Interventions, Midas Journal, vol. 71, 2008.
  • [24] Quigley M, Conley K, Gerkey B, Faust J, Foote T, Leibs J, Wheeler R, and Ng AY, "ROS: an open-source Robot Operating System," in ICRA Workshop on Open Source Software, vol. 3, no. 3.2. Kobe, Japan, 2009, p. 5.
  • [25] He C, Ebrahimi A, Roizenblatt M, Patel N, Yang Y, Gehlbach PL, and Iordachita I, "User behavior evaluation in robot-assisted retinal surgery," in 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 2018, pp. 174–179.
  • [26] Jiahao W, He C, Zhou M, Ebrahimi A, Urias M, Patel N, Liu Y, Gehlbach PL, and Iordachita I, "Force-based safe vein cannulation in robot-assisted retinal surgery: A preliminary study," in 2020 IEEE International Symposium on Medical Robotics (ISMR). IEEE, 2020.
  • [27] Zang P, Liu G, Zhang M, Wang J, Hwang TS, Wilson DJ, Huang D, Li D, and Jia Y, "Automated three-dimensional registration and volume rebuilding for wide-field angiographic and structural optical coherence tomography," Journal of Biomedical Optics, vol. 22, no. 2, p. 026001, 2017.
  • [28] Yoshikawa T, Foundations of Robotics: Analysis and Control. MIT Press, 1990.
