Author manuscript; available in PMC: 2018 Dec 1.
Published in final edited form as: IEEE ASME Trans Mechatron. 2017 Sep 5;22(6):2440–2448. doi: 10.1109/TMECH.2017.2749384

Motorized Micro-Forceps with Active Motion Guidance based on Common-Path SSOCT for Epiretinal Membranectomy

Gyeong Woo Cheon 1, Berk Gonenc 2, Russell H Taylor 3, Peter L Gehlbach 4, Jin U Kang 5
PMCID: PMC5881930  NIHMSID: NIHMS927834  PMID: 29628753

Abstract

In this study, we built and tested a handheld motion-guided micro-forceps system using common-path swept source optical coherence tomography (CP-SSOCT) for highly accurate depth-controlled epiretinal membranectomy. A touch sensor and two motors were used in the forceps design: one motor actuates grasping, minimizing the motion artifact inherent in squeezing the tool handle, while the other independently controls the depth of the tool-tip. A smart motion-monitoring and guidance algorithm was devised to provide precise and intuitive freehand control. We compared the involuntary tool-tip motion occurring while grasping with a standard manual micro-forceps and with our touch-sensor-activated micro-forceps. The results showed that our touch-sensor-based, motor-actuated tool significantly attenuates the motion artifact during grasping (119.81 μm with our device versus 330.73 μm with the standard micro-forceps). By activating the CP-SSOCT based depth-locking feature, the erroneous tool-tip motion was further reduced to 5.11 μm. We evaluated the performance of our device against the standard instrument in terms of the elapsed time, the number of grasping attempts, and the maximum depth of damage created on the substrate surface while picking up small pieces of fiber (Ø 125 μm) from a soft polymer surface. The results indicate that all metrics were significantly improved with our device; of note, the average elapsed time, the number of grasping attempts, and the maximum depth of damage were reduced by 25%, 31%, and 75%, respectively.

Index Terms: Biomedical optical imaging, Image sensors, Optical signal processing, Optical signal detection, Surgery

I. Introduction

In retinal microsurgery, various vitreoretinal procedures routinely require accurate manipulation of the surgical instrument close to the retina surface while trying to avoid severe complications. One such delicate operation is epiretinal membrane peeling (membranectomy), which is among the most commonly performed vitrectomy-based procedures. It is used to delaminate a very thin (micron scale) fibrous membrane adherent to the retinal surface, using either a pick or preferably a micro-forceps tool (Fig. 1a). Specifically, the internal limiting membrane (ILM), which is a very thin and transparent membrane on the surface of the retina, participates in the pathogenesis of vitreoretinal interface diseases such as macular hole (MH) and macular pucker (MP) as well as other maculopathies [1-3]. ILM peeling is a challenging procedure due to its single micron scale thickness and its transparency. Typically, the ILM is stained using dyes such as indocyanine green (ICG) in order to improve the visibility. However, there have been studies reporting toxicity of ICG to the retinal pigment epithelium (RPE) and neurosensory retina [4]. Besides the visibility issue, the procedure is still challenging to master due to involuntary patient movement and the physiological hand tremor of the surgeon, which is typically within the 6-12 Hz frequency range with several hundred microns of amplitude [5]. Considering that the thickness of epiretinal membrane is below 100 microns and the total retina thickness is about 250 microns, both the surgeon’s hand tremor and patient motion have significant potential to cause retinal injury.

Fig. 1.

Fig. 1

(a) Epiretinal membranectomy: after grasping the membrane edge with a micro-forceps the membrane is pulled away from the retina surface slowly; (b) The standard 23 Gauge disposable micro-forceps (Alcon, USA); Our motorized active motion-guided micro-forceps: (c) fabricated prototype, (d) inner structure with two motors (M1 and M2), and (e) the fiber-optic sensor (SSOCT) attached to the straightened arm of the jaws.

During membranectomy, microsurgeons need to first insert the surgical tool tip to a desired depth for grasping and lifting the membrane edge without harming the underlying retina. To grasp, the standard disposable micro-forceps from Alcon, Inc. (Fort Worth, TX) (Fig. 1b) requires squeezing of the tool handle for tip actuation. Because of the large squeezing motion and the associated forces, the tool apex position is less stable during forceps' jaw closure. After grasping the membrane, it is carefully delaminated from the retinal surface by deliberately moving the tool very slowly (typically 0.1-0.5 mm/s) in strategic delaminating directions. During both steps, the surgeon is dependent upon visualization through a stereomicroscope, with depth perception limited by the ability to resolve on the micron scale in the axial axis, and with potential obstruction of the surgical contact site by the tool shaft and functional tip. Optical coherence tomography (OCT), which uses a near-infrared light source and provides non-invasive, highly accurate depth-resolved imaging, serves to overcome these limitations in this setting [6-13]. Our research group has been pursuing an axial motion sensing and guidance approach that utilizes a swept source OCT (SSOCT) as a distal sensor. In our approach, common-path SSOCT (CP-SSOCT) has been strategically adopted because it is simple in design, inexpensive, and has the potential to be incorporated, in part, into disposable surgical products [14-16]. In order to develop a real-time OCT intraoperative guidance system, we employed a parallel programming technique that uses a graphics processing unit (GPU) for fast computational speed and incorporated it into the microsurgical tool system [17]. This distal sensing application is distinguished from the typical OCT "B-scan/C-scan" imaging application [18] in that our distal sensing system measures a distance value by acquiring A-scan data, with its attendant complexity [19-25].
The OCT data in our application is subsequently transformed to distance information rather than morphological information, and is directly utilized for motion control in real time. In this regard, our depth-guided micro-forceps embedding an OCT distal sensor is an extension of handheld instruments [22-32]; it is therefore distinct from robotic-arm-based guidance systems [32-35], including the recent work that utilizes a micro-forceps and OCT B-scan images for guidance [36].

Integrating CP-SSOCT feedback with a micro-forceps via an intuitive control system can effectively bring the tip of the forceps to the desired depth and maintain it at that position until successful grasping, regardless of involuntary user or patient motion. Based on the approach shown in Fig. 2, our group previously developed a "SMART" micro-forceps and demonstrated the feasibility of this concept in stabilizing tool motion for safer grasping [24]. However, this previous tool posed limitations due to its mechanical design. Specifically, because the design preserved the squeezing mechanism of the conventional micro-forceps, the inner wire providing the grasping action was mechanically coupled to, and interfered with, the motor compensating the involuntary motion, posing a potential risk of conflict between the user's grasping motion and the system's motion compensation. Ideally, the degrees of freedom (DOF) required for the grasping and depth guidance actions need to be independent of each other. Therefore, this study builds on and substantially advances our prior work by strategically separating these two functions via two independent actuators [26]. One actuator provides the active motion guidance controlled by the OCT guidance system, while the other provides the grasping functionality controlled by the user. The standard micro-forceps is actuated by squeezing the tool handle as shown in Fig. 1b, which may induce significant erroneous motion of the tool tip while grasping. Our present design replaces this actuation mechanism with a linear motor interfaced with a touch sensor. This new motorized architecture removes the squeezing motion from the tool handle and thus substantially reduces the motion artifact at the tool tip during grasping.
Furthermore, the integrated CP-SSOCT feedback is used to actively guide and stabilize the tool tip relative to the tissue by compensating any residual motion due to forceps actuation, involuntary motion of the operator such as physiological hand tremor, and patient movement. In order to evaluate the performance of our tool, we compared the motion at the tool tip during grasping considering 3 cases: (1) using standard manual micro-forceps, (2) using our motorized micro-forceps without active OCT guidance, and (3) using our motorized micro-forceps with active OCT guidance.

Fig. 2.

Fig. 2

(a) CP-SSOCT fiber-optic setup: the reflected beam (RB), fiber-air interface (FAI), and the sample reflected beam (SB). Overview of our CP-SSOCT depth-guided micro-forceps system: (b) earlier "SMART" micro-forceps prototype using one actuator for depth guidance and the squeeze mechanism for grasping [24], (c) current design using a touch sensor and two independently actuated motors for the control of grasping (green) and depth guidance (red) actions.

II. Methods and Materials

Our motion-guided micro-forceps (MGMF) system consists of three main modules: (1) fiber-optic OCT module (distal sensor), (2) micro-forceps handpiece module (operating hardware), and (3) workstation (computing and control).

A. Fiber-Optic OCT Module

The OCT system consists of a swept-source OEM engine (AXSUN; central wavelength λ0: 1060 nm, sweep rate: 100 kHz, 3 dB axial resolution: 8 μm, scan range: 3.7 mm in air), a balanced photo-detector, a digitizer with a sampling rate of up to 500 MSPS at 12-bit resolution, a Camera Link DAQ board, and a Camera Link frame grabber (PCIe-1433, National Instruments). We strategically chose a 1060 nm OCT system over the 1300 nm systems more commonly used in OCT imaging because the former provides better axial resolution for the same bandwidth. For example, with a 100 nm tuning range (bandwidth), the axial resolution of a 1060 nm system is about 8 μm while that of a 1310 nm system is about 15 μm. Although 1310 nm light is more immune to scattering and allows deeper imaging, this is not needed for retinal imaging since the human retina is only 200-250 μm thick; in terms of penetration depth, the 1060 nm system can easily image down to the sclera. The optical setup is compact owing to its fiber-optic design. A CP-OCT system does not require a reference arm because it uses the reflection from the interface between the fiber (sample arm) and air (medium) as the reference beam, as shown in Fig. 2a. CP-OCT is therefore inherently immune to dispersion and polarization noise. Attaching the fiber sensor to the tool-tip is one of the critical steps of the assembly process. Currently, we attach the fiber sensor to the tool-tip manually, which introduces variation in its attachment position. We therefore measure the offset distance, i.e., the distance from the fiber-tip to the tool-tip, by placing a reflective surface at the end of the tool-tip. After the fiber is attached, this offset distance is recorded, and the distance from the tool-tip to the sample surface is obtained by subtracting the offset from the measured distance.
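The offset-calibration bookkeeping described above can be sketched as follows; the function and the numeric values are illustrative, not taken from the authors' software:

```python
def tip_to_surface(measured_um: float, offset_um: float) -> float:
    """Distance from tool-tip to sample surface: the OCT reading (fiber-tip
    to surface) minus the calibrated fiber-tip-to-tool-tip offset."""
    return measured_um - offset_um

# Calibration: with a reflective surface placed at the end of the tool-tip,
# the OCT reading itself is the offset (here, a hypothetical 850 um).
offset_um = 850.0
# In use: the OCT reads 1350 um to the sample, so the tool-tip is 500 um away.
print(tip_to_surface(1350.0, offset_um))  # 500.0
```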

B. Micro-Forceps Module

The design of our micro-forceps (Fig. 1c-e) involves two linear piezoelectric motors, a handpiece, a touch sensor, a fiber-optic sensor, an actuated tool-tip, and a custom-built control board. A prototype, shown in Fig. 1c, was 3D printed using acrylonitrile butadiene styrene (ABS) thermoplastic with a resolution of 0.010 inch. The design provides two actuated degrees of freedom, which are completely independent of each other: (1) involuntary motion compensation and (2) grasping. The first piezoelectric linear motor (LEGS-LT2010A, PiezoMotor), M1 in Fig. 1c, is for motion compensation and is fixed to the handpiece. The motor has its own driver (PMD101, PiezoMotor), and is used to move the actuated tool tip relative to the handpiece within a maximum stroke of 80 mm. The actuated tool tip consists of three main parts. The first part is a piezoelectric linear motor (M3-L, New Scale Technologies), M2 in Fig. 1c, which is used for opening and closing the forceps jaws. It embeds the motor driver, encoder, and electrical circuit that communicates with the custom-built control board through an SPI communication protocol. Attached to the motor shaft, the orange piece in Fig. 1c is the second part. This part holds the guide tube and slides it up and down to open and close the forceps jaws. The forceps jaws are open by default and firmly anchored to the motor body. One arm of the jaws is straightened and flattened, providing a proper space for fiber-optic sensor attachment, while the other arm is bent open (Fig. 1d). As the guide tube moves forward, the bent arm of the compliant jaws is squeezed toward the straight arm, providing a firm grasping platform. When the guide needle moves backward, the bent arm flexes back, opening the jaws. During this grasping and releasing action, the straight arm never bends or moves, so there is no adverse effect on the fiber-optic sensor.

The grasping action is controlled by how firmly the user pushes on the touch sensor. The touch sensor (FSR 408, Interlink Electronics®) is the interface that transfers the user signal to activate grasping. We built and tested a voltage divider circuit to measure its variable resistance and produce the desired output response. As a result, we are able to focus on a certain range of forces by selecting the reference resistor value. In our case, we used a 10 kΩ resistor matching the force range applied by an operating surgeon. Note that both the sensitivity and the dynamic range can now be deliberately tuned. Additionally, we developed a grasping calibration curve table in order to provide control flexibility of the touch sensor and of the grasping manipulation motor, as shown in Fig. 3. The table is divided into three zones: 1) threshold, 2) active, and 3) saturation zones. By changing the threshold and saturation values, we can customize the sensitivity of the touch sensor for each user. The curve equation in the active zone is

pos = (Amax/2){1 − cos(π(v − V1)/(V2 − V1))} (1)

where pos is the position of motor M2, Amax is the maximum traveling length, V1 and V2 are the voltages of the threshold and saturation lines, and v is the input voltage applied to the touch sensor. To attach the CP-OCT distal sensor probe, we unfolded one of the forceps fingers. Also, a groove was made on the side of the guiding needle to protect the fiber-to-tool-tip interface, as shown in Fig. 1e. Finally, we built a control PCB with an ARM Cortex-M4 32-bit MCU (STM32F405, 168 MHz, STMicroelectronics) in order to read the touch sensor output and control the forceps-tip motor. This control board communicates with the workstation through a USB HID (human interface device class) protocol. The control board uses the built-in 12-bit analog-to-digital converter (ADC) to read the voltage applied to the touch sensor and sends commands to the motor driver through an SPI (Serial Peripheral Interface) protocol.
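The zoned mapping around Eq. (1) can be sketched as below. The threshold/saturation voltages and maximum travel are hypothetical illustration values, not the calibrated ones from our table, and the raised-cosine ramp is our reading of the active-zone curve:

```python
import math

def grasp_position(v: float, v1: float = 0.8, v2: float = 2.6,
                   a_max: float = 3.0) -> float:
    """Map touch-sensor voltage v to the position of motor M2 (mm).
    v1, v2: threshold and saturation voltages (hypothetical values);
    a_max: maximum traveling length. Below v1 (dead zone) the motor does
    not react; above v2 it is held at a_max (saturation); in between, a
    raised-cosine ramp gives a smooth, monotonic response."""
    if v <= v1:
        return 0.0
    if v >= v2:
        return a_max
    return 0.5 * a_max * (1.0 - math.cos(math.pi * (v - v1) / (v2 - v1)))

print(grasp_position(0.5))  # 0.0 (dead zone)
print(grasp_position(1.7))  # ~1.5 (middle of the active zone)
print(grasp_position(3.0))  # 3.0 (saturated)
```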

Fig. 3.

Fig. 3

Calibration curve for the touch sensor. The curve is divided into three zones: (1) dead zone where the tool-tip manipulating motor does not react to the touch sensor, (2) active zone that is responsible for motor control, and (3) saturation zone where the motor has reached its maximum position.

C. System Software Module

As shown in Fig. 4, the CP-OCT spectral data for distal sensing is processed sequentially in the workstation in three steps [24]: 1) OCT signal processing, 2) surface detection, and 3) motion control. The total processing time is significantly reduced with the help of parallel processing based on GPU programming (CUDA, Nvidia). The first step transforms the spectral data to A-scan data. 128 consecutive spectra, each consisting of 1376 data points, are transmitted from the frame grabber every 1.28 ms. A high-pass filter is applied to the spectral data to remove the large DC component, and a low-pass filter is then applied to remove high-frequency noise components. After the FFT, averaging and background subtraction are conducted on the A-scan data. This produces an A-scan with 2048 data points covering a 3686.4 μm axial distance from the end of the fiber. The final A-scan is passed to the next step, which detects the tissue surface. The A-scan of a retina, which has a multi-layered structure, has a complex shape, and the strongest reflection does not always come from the sample surface; as a result, simple peak detection is not appropriate for tracking the distance from the sample surface. Instead, we use an edge detection method after low-pass filtering the Fourier-transformed A-line OCT signal [19]. To further improve the robustness of the surface detection by exploiting features of the A-scan that reflect the retinal anatomical structure, we apply circular cross-correlation to two consecutive A-scans to find the distance variation between them. Finally, the calculated position value is transmitted to the third step. Instead of using the position value directly to control the motor, a Kalman predictor is applied for more accurate motion compensation. This prediction step is necessary to make up for the computational and communication delay between the spectrum measurement and the actuator operation. Finally, the compensating value is transmitted to the motor driver.
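The circular cross-correlation step can be sketched as follows. This is a simplified CPU version on synthetic A-scans; the actual pipeline runs on the GPU with the filtering and background subtraction described above:

```python
import numpy as np

def axial_shift(prev_ascan: np.ndarray, curr_ascan: np.ndarray) -> int:
    """Estimate the axial shift (in samples) between two consecutive A-scans
    using circular cross-correlation computed in the Fourier domain."""
    n = len(prev_ascan)
    spectrum = np.fft.fft(curr_ascan) * np.conj(np.fft.fft(prev_ascan))
    xcorr = np.fft.ifft(spectrum).real
    lag = int(np.argmax(xcorr))
    return lag if lag <= n // 2 else lag - n  # wrap to a signed shift

# Synthetic example: a Gaussian "surface peak" moves 7 samples between scans.
z = np.arange(2048)
prev = np.exp(-((z - 600.0) ** 2) / 50.0)
curr = np.exp(-((z - 607.0) ** 2) / 50.0)
print(axial_shift(prev, curr))  # 7
```

Tracking the shift between consecutive scans, rather than re-detecting the surface from scratch each time, makes the distance estimate robust to the multi-layered retinal structure.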

Fig. 4.

Fig. 4

Data processing flowchart consisting of 1) OCT signal processing, 2) surface detection, and 3) motion control. Each step is implemented as an independent software module; in particular, the OCT signal processing module uses a parallel processing technique based on GPU programming.

One optimization issue in the Kalman filter and compensation-length decision process was finding the right balance between system responsiveness and stability. With higher responsiveness, sporadic large hand tremor could be reduced from several hundred microns to tens of microns, but this setting worsened stability due to high-frequency system vibration (10-20 μm amplitude). With lower responsiveness, on the other hand, the motion guidance cannot sufficiently compensate for large hand tremor. For this reason, we prepared several coefficient sets and chose one depending on the user. In these experiments, we compromised on responsiveness based on the assumption that the surgeon's manipulation was already well controlled and that our system needed to provide only minimal assistance in the axial direction.
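A minimal constant-velocity Kalman predictor illustrating the latency-compensation idea discussed above; the state model, noise magnitudes, and lookahead below are illustrative choices, not the coefficient sets used in our system:

```python
import numpy as np

def kalman_predict_track(measurements, dt=0.00128, q=1e6, r=25.0, lookahead=3):
    """Constant-velocity Kalman filter over noisy tip-to-surface samples.
    After each update, the state is extrapolated `lookahead` steps ahead to
    cover the processing/communication delay. q, r, lookahead: illustrative."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])                 # position-only measurements
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],  # acceleration-driven noise
                      [dt**3 / 2, dt**2]])
    R = np.array([[r]])
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2) * 1e3
    Fl = np.linalg.matrix_power(F, lookahead)  # lookahead transition
    preds = []
    for z in measurements:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # update
        K = P @ H.T / S[0, 0]
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        preds.append(float((Fl @ x)[0, 0]))    # extrapolate past the latency
    return preds
```

Larger q makes the filter more responsive (better tremor rejection, more vibration); smaller q makes it smoother but slower, mirroring the tradeoff described above.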

III. Experiments

In an epiretinal membrane peeling procedure, prior to membrane delamination, the surgeon first brings the forceps tip above the target area, and then carefully moves it to the surgical depth required to grasp the membrane edge. Due to the limited axial resolution through the surgical microscope, the difficulty of tool manipulation in the axial direction, and partial obstruction of the target by the tool tip, tool shaft, shadows, and the retina itself when the tool is engaged in tissue, it is not trivial to bring the tool tip to the surgical target depth with precision. In actual practice, surgeons routinely move the tool tip very slowly into the membrane and make multiple grasping attempts until the target is successfully engaged and the membrane edge lifted. Ideally, a surgeon would bring the forceps tip directly to the precise desired target depth for efficient and safe grasping in a single attempt. This ideal is not presently achievable for most surgeons due in part to physiological hand tremor, patient movement, and the inherent motion artifact during forceps actuation. The depth-locking feature of our CP-SSOCT guided micro-forceps is designed to eliminate this problem and bring the tool tip to the desired depth with ease, precision, and safety. In order to demonstrate the performance of our tool, two sets of experiments were performed by an expert vitreoretinal surgeon.

In the first set, the aim was to monitor the motion artifact at the tool tip during grasping in three cases. First, the surgeon was asked to use a standard 23 gauge (Fig. 1b) manual micro-forceps (with an OCT fiber probe attached to the jaws’ tip) to complete 10 grasping attempts on a thin film target while trying to maintain the forceps tip as close to the target surface as possible without touching the surface. The tool tip position relative to the target surface was acquired from the attached CP-OCT distal sensor (sampling period = 1.28 ms). In the second and third cases, the same surgeon was asked to repeat the procedure using our micro-forceps without and with the motion guidance feature (target distance from the surface = 500 μm) respectively.

Our second set of experiments aimed at assessing the efficacy of our tool in terms of the time required to successfully pick up a target from a delicate surface, the total number of grasping attempts, and the maximum depth of unintended penetration (damage) to the underlying surface. In our trials, as the target to be grasped, we used pieces of optical fibers with comparable thickness to an epiretinal layer (125 μm in diameter) on top of a soft polymer as shown in Fig. 5a. The task of the surgeon was to grasp and pick up the fiber piece without damaging the underlying polymer surface in the shortest possible time. Two cases were investigated using our tool: with and without motion guidance. 20 trials were completed per case by randomly turning on and off the motion guidance feature for blind testing.

Fig. 5.

Fig. 5

(a) Grasp and pick-up experiments: small fiber pieces (Ø 125 μm) were picked up from a soft polymer surface using MGMF. (b) A 2D scanning OCT imaging system was used to analyze the damage created on the polymer surface after each grasp. (c) A snapshot of the developed software displaying real-time A-scan data, video view of the operation site, and the touch sensor output (red bar graph).

Fig. 5c shows a screenshot of the GUI program developed for this experiment, which displays the synchronized real-time A-scan, video of the operation site, and output of the touch sensor. The recorded video was used to determine the number of attempts and the elapsed time to successfully grasp and pick up the fiber pieces, with a resolution of 1 s. To assess the maximum depth of damage on the polymer surface, we used a separate 2D scanning OCT imaging system, as shown in Fig. 5b.

IV. Results and Discussion

A. Evaluation of Motion Artifact during Grasping

The measured tool tip position during the 10 grasps in each case is shown in Fig. 6. When the standard manual micro-forceps was used (Fig. 6a), the distance from the target varied significantly, with a large movement each time the jaws were closed and with some high-frequency oscillations due to the operator's hand tremor. The tool tip consistently moved forward while grasping and backward while releasing. Surface contact (distance = 0 μm) was observed in three of the ten grasps, which could lead to unintended force exertion on the retina, and thus injury, in an actual surgery. According to prior work, surgeons receive very little force feedback during retinal microsurgery [37]; therefore, responding in real time to inadvertent retinal contact is not yet feasible without assistance.

Fig. 6.

Fig. 6

Measured distance variation as the surgeon opens and closes the forceps while trying to keep the tool tip fixed above a thin film layer: (a) using a conventional 23 gauge micro-forceps, and using our micro-forceps (actuated via the touch sensor) (b) without and (c) with the motion guidance feature. 10 grasps were performed in each case. Vertical bars show the opening and closure of the forceps jaws. The motion artifact was significantly reduced by using the touch-sensor-based actuation on our tool rather than the squeezing mechanism of the conventional micro-forceps. The OCT-based motion guidance successfully eliminated the remaining distance variation.

The second and third case trials were completed using our micro-forceps. As shown in Fig. 6b, the motion artifact was significantly reduced during tool-tip manipulation even without the active motion guidance. A major contributor to this stabilization is that the user needs to apply relatively little force on the tool handle to grasp, since the input is sensed by a touch sensor and actuation is achieved by a motor. The motorized architecture has also removed the moving parts from the tool handle, in contrast to the squeezing mechanism of the conventional micro-forceps, yielding a further stability advantage. When the motion guidance feature was turned on, the targeting error at each grasp was almost eliminated (Fig. 6c) and the forceps tip was successfully maintained 500 μm above the sample surface throughout the test. The standard deviations of the distance variation during the grasping action in these three cases were 330.73 μm, 119.81 μm, and 5.11 μm, respectively. Fig. 7 shows the frequency analysis of the acquired distance data for each case, where the intensity (I) for each case was obtained using the following equation:

I_j = (1/N) Σ_{i=1}^{N} [10 × log(1 + |FFT(|Data_i − m|)_j|)] (2)

In equation (2), i, j, m, and N are the trial index, frequency component index, target depth, and total trial number, respectively. According to the graph, the use of the touch sensor reduced the intensity of the components over the entire frequency range, while the compensating motion of the system effectively suppressed the low-frequency components in the 0 to 40 Hz range. These results clearly demonstrate the effectiveness of this approach in suppressing the grasping-induced motion artifact present with manually operated, unassisted forceps. During an actual surgical operation, the motion artifact is in part suppressed by the passage through trocars (Fig. 1a), which dampen mostly the lateral movements; however, the stabilizing effect of trocar use in the axial direction is limited.
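Equation (2) can be sketched as follows. We assume a base-10 log here; `trials` holds the recorded distance traces and `target_depth_um` is m:

```python
import numpy as np

def frequency_intensity(trials, target_depth_um):
    """Average log-magnitude spectrum of the depth-error signal over N trials:
    I_j = (1/N) * sum_i [ 10 * log(1 + |FFT(|Data_i - m|)|_j ) ],
    where m is the target depth (assumed base-10 log)."""
    spectra = []
    for data in trials:
        err = np.abs(np.asarray(data, dtype=float) - target_depth_um)
        spectra.append(10.0 * np.log10(1.0 + np.abs(np.fft.fft(err))))
    return np.mean(spectra, axis=0)

# A tool held exactly at the target depth yields zero intensity everywhere.
flat = frequency_intensity([[500.0] * 64] * 3, 500.0)
print(flat.max())  # 0.0
```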

Fig. 7.

Fig. 7

Frequency analysis of the distance variation during the grasping actions: (a) using a standard manual micro-forceps, using our motorized micro-forceps (b) without and (c) with the motion guidance feature.

B. Grasp and Pick up Experiments

The results of this experiment are shown in Fig. 8. The motion-guided trials show significantly better performance in all considered aspects. Without the motion guidance, the surgeon spent an average of 7.9 s to successfully grasp the fibers (min: 5.0 s, max: 12.2 s). The number of grasping attempts had a mean of 1.6 (min: 1, max: 3), while the maximum depth of damage averaged 101.8 μm (min: 25.2 μm, max: 219.6 μm). When the OCT-based motion guidance feature was used, all three metrics were drastically reduced: a mean operation time of 6.1 s (min: 4.0 s, max: 8.1 s), an average of 1.05 attempts before successful grasping (min: 1, max: 2), and a mean damage depth of 13.14 μm (min: 4.5 μm, max: 27.0 μm).

Fig. 8.

Fig. 8

Results of the grasp and pick-up experiment: Statistically significant reduction in (a) elapsed time, (b) number of grasping attempts, and (c) the maximum depth of damage on the substrate with the use of our OCT-based motion guidance feature. The solid bars show the mean, and the error bars represent the maximum and minimum values.

The efficacy of motion guidance can be better observed when the maximum depth of damage is graphed against the elapsed time, as shown in Fig. 9, where the point distribution of the trials without motion guidance is significantly wider than that of the motion-guided trials. To quantify the spread, we computed the total distance of the points from the center of each distribution; the value for the freehand trials was 6.32 times larger than that for the guided trials. The three p-values of the nonparametric Kruskal-Wallis one-way analysis of variance (ANOVA) on ranks (significance level = 0.05), which are 8.40E-4 (elapsed time), 3.37E-3 (number of grasping attempts), and 2.56E-10 (maximum depth of damage on the substrate), indicate that the results for the motion-guided and unguided trials are statistically significantly different.
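For reference, the H statistic underlying the Kruskal-Wallis test can be computed as in the pure-Python sketch below (a statistics package with a chi-square survival function would then convert H, with k − 1 degrees of freedom, into the p-values reported above):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic with average ranks for ties (no tie
    correction); compare against a chi-square with k - 1 dof for a p-value."""
    data = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(data)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and data[j][0] == data[i][0]:
            j += 1                      # block of tied values: ranks i+1..j
        avg_rank = (i + 1 + j) / 2.0    # average rank within the tie block
        for k in range(i, j):
            rank_sums[data[k][1]] += avg_rank
        i = j
    return 12.0 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)) - 3.0 * (n + 1)

# Fully separated groups give the maximum H for these group sizes.
print(round(kruskal_wallis_h([1, 2, 3], [4, 5, 6]), 3))  # 3.857
```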

Fig. 9.

Fig. 9

The maximum depth of damage vs. elapsed time during the grasp and pick-up experiments. The trials without motion guidance have a significantly wider distribution than the guided trials, which are grouped around much smaller damage and elapsed time levels.

V. Conclusion

In conclusion, we have built and demonstrated a novel motion-compensated micro-forceps that allows accurate depth targeting and precise depth locking during both grasping and peeling operations. One of the key innovations of this tool is that it controls the motion compensation and the grasping motion of the forceps separately. To achieve this, we used two independently actuated motors and controlled the grasping electronically via a touch sensor rather than the mechanical squeezing mechanism of standard forceps tools. This touch-sensor-based actuation enabled the operator to grasp by applying less force on the tool handle and eliminated the squeeze-induced motion artifact during grasping. The touch sensor alone represents a substantial improvement in instrument control and precision. Moreover, when the touch-sensor-based actuation and the CP-SSOCT based motion guidance were used together, our preliminary grasping tests with a vitreoretinal surgeon showed significantly reduced operation time, fewer attempts, and less damage to the underlying surface while grasping and lifting small fibers from a soft polymer surface. In contrast to conventional forceps, however, our depth-guided forceps incorporates electronics, including high-speed motors and a touch sensor, which increases the system cost and complexity. Also, because of the complexity of the system and the parts involved, sterilization becomes a major issue. Our current work is focused on further performance and feasibility evaluation of this technology on biological tissues using ex-vivo animal models, and on investigating mechanical structures and materials for better disposability and sterilization.

Acknowledgments

Research supported by the U.S. National Institute of Health and the National Eye Institute (NIH/NEI) under Grant R01EY021540-01; Research to Prevent Blindness, New York, USA, and gifts by the J. Willard and Alice S. Marriott Foundation, the Gale Trust, Mr. Herb Ehlers, Mr. Bill Wilbur.

Contributor Information

Gyeong Woo Cheon, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA.

Berk Gonenc, ERC for Computer Integrated Surgery at Johns Hopkins University, Baltimore, MD, USA.

Russell H. Taylor, Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.

Peter L. Gehlbach, Wilmer Eye Institute, Johns Hopkins School of Medicine, Baltimore, MD, USA.

Jin U. Kang, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA.
