Abstract
Subretinal injection (SI) is an ophthalmic surgical procedure that allows for the direct injection of therapeutic substances into the subretinal space to treat vitreoretinal disorders. Although this treatment has grown in popularity, various factors make it difficult: the retina is fragile, nonregenerative tissue, and the surgeon must contend with hand tremor and poor visual depth perception. In this context, robotic devices can reduce hand tremor and facilitate gradual, controlled SI. For the robot to move to the target area successfully, it needs to understand the spatial relationship between the attached needle and the tissue. The development of optical coherence tomography (OCT) imaging has substantially advanced the visualization of retinal structures at micron resolution. This paper introduces a novel foundation for an OCT-guided robotic steering framework that enables a surgeon to plan and select targets within the OCT volume while the robot automatically executes the trajectories necessary to reach the selected targets. Our contribution is a novel combination of existing methods into an intraoperative OCT-robot registration pipeline: we combine straightforward affine transformation computations with robot kinematics and a deep neural network-determined tool-tip location in OCT. We evaluate our framework in an open-sky procedure on cadaveric pig eyes and on an aluminum target board. Targeting the subretinal space of the pig eye produced encouraging results with a mean Euclidean error of 23.8 μm.
I. Introduction
The leading cause of irreversible blindness and visual disability worldwide is vitreoretinal disease [1]. Subretinal injection (SI) is a surgical procedure to treat common diseases such as age-related macular degeneration and subretinal hemorrhage. The goal of an SI is to place a therapeutic solution in the subretinal space within the immediate proximity of photoreceptors, between the internal limiting membrane (ILM) and the retinal pigment epithelium (RPE) layers of the retina [2], [3]. These layers are separated vertically by approximately 250 μm, while the target injection area spans only approximately 20–30 μm. Failing to deliver the drug into the correct retinal layer may lead to intraocular inflammation, retinal detachment, ocular hemorrhage, and reflux into the vitreous cavity, which can trigger an immune response [4], [5]. Several medical modalities, including 3D ultrasound and magnetic resonance imaging, can be used to acquire depth information [6]. However, since the maximum allowed error is around 20–30 μm [6], a higher-precision modality is needed. As an alternative, OCT has been successfully extended from diagnostic to interventional applications in ophthalmology [7], [8]. OCT can produce volumetric representations and enable cross-sectional visualizations of living biological tissue and surgical instruments with μm resolution [9]. However, unpredictable hemorrhage or other emerging complications might not be visible or detected because of the small field of view of the OCT; the microscopic view therefore remains essential to the procedure. Our objective is to combine the OCT's fine detail and depth information, the microscope's macro-scale image quality, and the robotic-guided needle's stability and precision. Our framework for autonomous, intraoperative OCT-guided SI is therefore integrated with the microscope. The first phase, discussed in this paper, involves developing an automated injection system that combines the robot with OCT guidance under the supervision of an ophthalmologist. Thus, we present: 1) Demonstration of a novel and feasible intraoperative OCT-Robot registration pipeline, and 2) Testing of the high-precision registration framework on a testing board and a cadaveric pig eye.
A. Related Work
Mapping the robot space onto the image space is a problem that numerous researchers have widely investigated. Existing works, such as the one proposed in [10], require different calibration grids or markers. However, placing such markers inside the organs would not be practical during an ophthalmological procedure. Yu et al. presented a B-mode OCT-integrated forceps tool for haptic-controlled microsurgery to assist in retinal membrane peeling [11]. In extended work, they presented a B-mode-based assistive robotic control; the mean tracking error was between 73 and 159 μm, with a standard deviation of 52 μm [12]. The relatively high error comes from having to account for the optical and fan distortion of their custom OCT. Nasseri et al. introduced an OCT and robot guidance method that helped to perform precise injections for macular degeneration [13]. Draelos et al. presented a hand-guided robot that provides stabilization and OCT guidance in the cornea with autonomous needle insertion capabilities [14]. Their calibration-introduced error has a fixed offset of less than 250 μm, and their segmentation error lies in the range of 24±26 μm (non-automatic) and 30±32 μm (automatic) [14]. Zhou et al. presented a needle pose estimation process using an OCT-based needle-tip tracking and calibration scheme. Using a voting scheme, they identified pixels likely to belong to the needle in the OCT B-scans via morphological operations and segmented the needle from the background. Their reported mean errors are 7.0 μm and 9.2 μm [15]. However, each of these approaches lacks sufficient precision to apply directly to subretinal interventions.
They extended their research to a 6-degree-of-freedom tracking capability using an iterative closest point algorithm, with a needle-tip estimation error of between 2.4 and 34.8 μm [16]. Based on the work of Zhou et al., another paper by Yuan et al. created a calibration scheme for robot-assisted microsurgery for wound repair guidance. In order to calibrate the needle-tip and robot with respect to the OCT volume, they observe the needle voxels and record the robot coordinates; an iterative closest point algorithm then acquires an accurate robot-to-needle-tip transformation [17]. The paper showed a root mean square error of 75–227 μm. Del Giudice et al. developed continuum robots for multiscale motion and demonstrated a novel concept for teleoperated robot actuation for surgery requiring micro-scale motion, such as microvascular reconstruction and image-based (OCT) diagnosis. The robot has a positional resolution of 1 μm [18]. The advantage of our presented work is that we can calibrate and then maneuver the robot automatically by supplying our system with only a real-time OCT volume. We plan and direct the robot, which is fully controllable through the interface, to a given target within a 24 μm error margin.
II. Methods
Our goal is to implement an autonomous system for SI based on microscope-integrated intraoperative OCT (MiOCT). As a preliminary stage, we built an autonomous steering framework for cadaveric pig eyes. The surgeon selects any location in the OCT volume; the framework transforms the location into robot coordinates and steers the robot to move the needle-tip to the desired position. Our framework consists of an OCT machine, a steady-hand eye robot, and a target or specimen (see Fig. 1a). Additionally, our framework includes a mapping estimation from OCT-to-robot coordinates and a deep learning model that identifies the needle.
Fig. 1: (a) Robot with a needle attached to a gripper, placed underneath the OCT and above the target. The objects are screwed to a table that compensates for environmental vibrations. Upper right corner: target board. (b) Needle-tip inside the retina. The green θ shows the angle at which the needle-tip enters the retina, while the orange θ is the newly calculated refraction angle. The retinal layers of the pig eye are approximately 500 μm thick. The blue lines mark the ILM and RPE layers.
A. The Steady-Hand Eye Robot
The Steady-Hand Eye Robot (SHER) is a surgical robot explicitly designed for retinal microsurgery applications, developed at Johns Hopkins University [19], [20]. The robot consists of three translational stages at the base and two rotational joints for the roll and pitch motion of the surgical tool. For our purposes, the rotational joints remained stationary, and we only executed translational motion to reach the desired goal positions. The robot Cartesian stage (XYZ) has relative encoders (attached to the motors) and absolute encoders (attached to the linear stages). To control the Cartesian motion, we use the absolute encoders, which enable precise motion with a resolution of 1 μm along each axis (see XYZ axes in Fig. 2).
Fig. 2: The robot coordinate system is defined as X, Y, Z, and the coordinates in the OCT volume are defined as U, V, W. The origin of U, V, W is in the upper-left corner of the first slice of the volume. The volume is 3 mm in height, width, and depth, and is sampled at 512×1024×512 voxels.
Consider the forward kinematics and the robot Jacobian defined as the following:

$$x = f(q), \qquad J(q) = \frac{\partial f(q)}{\partial q} \qquad (1)$$

where $x$ is a state vector denoting the position and orientation of the surgical needle-tip with respect to the robot base frame, $q$ denotes the robot joint angles, and $J(q)$ denotes the Jacobian matrix derived from the forward kinematics of the robot. Given the desired goal location $x_d$ defined with respect to the robot base frame, we express the desired joint velocity as the following:

$$\dot{q} = \alpha \, J(q)^{+} \left( x_d - x \right) \qquad (2)$$

where $J(q)^{+}$ is the pseudoinverse of the Jacobian and α is the constant gain of the velocity, chosen appropriately. We executed the joint velocities until we reached the desired goal within single-micron accuracy via the absolute encoders mentioned above.
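As an illustration, Eq. (2) corresponds to a standard resolved-rate velocity loop. The sketch below assumes a purely translational Cartesian stage (identity Jacobian, matching the fixed rotational joints used here) and a hypothetical `robot` object exposing `position` and `set_joint_velocity()`; it is a minimal sketch, not the SHER driver.

```python
import numpy as np

def resolved_rate_step(x, x_d, J, alpha=0.5):
    """One velocity-control step toward the goal x_d (Eq. 2).

    x, x_d : (3,) current and desired tip positions in the robot base
             frame [mm]; rotations stay fixed, so only translation is used.
    J      : (3, 3) translational Jacobian; identity for a Cartesian XYZ
             stage whose joints are the axes themselves.
    alpha  : constant velocity gain.
    """
    return alpha * np.linalg.pinv(J) @ (x_d - x)  # q_dot = alpha J^+ (x_d - x)

def move_to(robot, x_d, tol_mm=1e-3):
    """Hypothetical servo loop: command velocities until the goal is
    reached within 1 um, the resolution of the absolute encoders."""
    while np.linalg.norm(x_d - robot.position) > tol_mm:
        q_dot = resolved_rate_step(robot.position, x_d, np.eye(3))
        robot.set_joint_velocity(q_dot)
    robot.set_joint_velocity(np.zeros(3))  # stop at the goal
```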
B. Customized Swept-Source OCT
In this work, we used a customized swept-source OCT (SSOCT) system for the SI guidance [21]. The swept source has a 1060 nm center wavelength, a 110 nm tuning range, and a 100 kHz A-scan rate. Each A-scan (1-D, along V) contains 1024 pixels with a unit length of 3.6 μm per pixel in air, or equivalently 2.7 μm per pixel in tissue if we use the refractive index of water, 1.33, as an approximation. Each B-scan (2-D, U×V) consists of 600 A-scans, of which 560 are acquired during the forward scan and 40 during the backward scan. We only used 512 A-scans from the forward scanning for the visualization. Each C-scan (3-D, U×V×W) contains 512 B-scans; thus, the C-scan operates at 0.326 Hz. We set the scanning size of the C-scan to 3 mm along both the fast and slow axes, which provided a large field of view and a high lateral scanning resolution for visualizing a needle with an 80 μm diameter. For the U, V, and W directions, see Fig. 2.
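From these scan settings, voxel indices convert to physical distances with a per-axis pitch of 3 mm/512 laterally (U, W) and 3.6 μm axially in air (2.7 μm in tissue). A minimal helper, as a sketch under these assumptions:

```python
import numpy as np

# Voxel pitch of the C-scan derived from the Sec. II-B scan settings:
# 3 mm over 512 samples laterally (U, W) and 3.6 um per pixel axially (V)
# in air, or 3.6/1.33 um in tissue (refractive index of water, n = 1.33).
PITCH_UM_AIR = np.array([3000.0 / 512, 3.6, 3000.0 / 512])  # (U, V, W)

def voxel_to_um(idx_uvw, in_tissue=False):
    """Convert a (U, V, W) voxel index to micrometers from the volume origin."""
    pitch = PITCH_UM_AIR.copy()
    if in_tissue:
        pitch[1] = 3.6 / 1.33  # axial pitch shrinks by the refractive index
    return np.asarray(idx_uvw, dtype=float) * pitch

# A one-voxel localization error spans roughly 3.6-5.9 um depending on axis:
print(voxel_to_um([1, 1, 1]))  # -> approximately [5.86, 3.6, 5.86]
```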
C. Needle-Tip Identification
Considering our custom-made swept-source OCT parameters, a custom needle segmentation or identification method is required. We used a deep learning segmentation model based on 3D U-Net to identify the needle-tip position [22]. To train the network, resizing the volume, and thus reducing the resolution, was not an option because we are interested in μm-precise needle-tip placement. Instead, we randomly cut out patches of size 20×200×200 in each epoch from every volume the network trained on. An example of a needle segmentation can be seen in Fig. 3.
Fig. 3: Side view of a segmentation point-cloud rendering of the needle-tip (blue), target board (grey), and retinal layers (green) in the OCT volume.
To train the model, we pretrained the network on 50 needle-annotated volumes from another OCT machine and then fine-tuned it on 12 annotated volumes from our OCT. The loss function was the Dice loss, and the learning rate was set to 0.0002. The algorithm takes the resulting segmentation mesh, finds the center point of the needle, and identifies the outermost part of the needle-tip mesh; the correct B-scan slice is found from the center point of the needle segmentation mesh. Finding the correct needle-tip in the OCT volumes plays a central part in mapping the robot coordinates to the needle-tip location in OCT. The needle-tip identification can be seen in Fig. 4.
Fig. 4: Visualization of the B-scan corresponding to the middle of the segmented needle mesh, as identified by the segmentation algorithm. The green point highlights the identified needle-tip. The first two images are from experiments on the target board; the third and fourth are from experiments on the cadaveric pig eye.
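The tip-selection step can be sketched in a few lines. This is a minimal reading of the described procedure, assuming the segmentation output is a binary volume indexed (W, V, U) and that the "outermost" point is the deepest needle voxel (largest V) in the central B-scan; the actual implementation may differ.

```python
import numpy as np

def needle_tip_from_mask(mask):
    """Locate the needle-tip in a binary 3D U-Net output (Sec. II-C).

    mask : boolean array indexed (W, V, U); W selects the B-scan.
    Takes the centroid of the needle mask to pick the central B-scan,
    then returns the outermost (deepest, largest-V) needle voxel in
    that slice as the tip.
    """
    w_idx, v_idx, u_idx = np.nonzero(mask)
    if w_idx.size == 0:
        raise ValueError("no needle voxels in the segmentation")
    w_center = int(round(w_idx.mean()))      # central B-scan of the mesh
    in_slice = w_idx == w_center
    deepest = np.argmax(v_idx[in_slice])     # outermost point along V
    return (u_idx[in_slice][deepest],        # (U, V, W) voxel coordinates
            v_idx[in_slice][deepest],
            w_center)
```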
D. OCT and robot registration
In order to compute the transformation matrix, we used a non-linear optimization algorithm, the Levenberg–Marquardt (LM) algorithm, which solves the least-squares problem iteratively. The LM algorithm minimizes a sum of squares of non-linear real-valued functions by interpolating between the gradient descent method and the Gauss–Newton method [23]. Since we were mapping two 3D spaces onto each other, it was crucial during acquisition to select calibration points in the general position required to reach an optimal solution [24]. The minimum number of point correspondences needed to estimate an affine transformation between two 3D volumes is four [24]. Using more correspondences in the optimization yields a better result, but choosing between more data for calibration and less time spent on calibration is a balancing act. We chose to use nine correspondences for the calibration optimization, based on an evaluation experiment with the calibration points. The points were chosen in the working area of the tissue and board (the full range of motion in U and W, and approximately 1/6 of the way from the bottom of the OCT volume to the top in the vertical direction). Adding more than nine correspondences did not further reduce the calibration error. We do, however, expect errors when identifying the needle-tip position; these originate from image noise and from the partial volume effect, i.e., the needle-tip may lie between voxels.
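As a concrete illustration, the LM fit of the OCT-to-robot mapping can be written with SciPy. The affine parameterization below (a 3×3 matrix plus translation, twelve parameters) is our assumption, consistent with the affine transformation mentioned in the abstract; the paper does not specify the exact parameterization.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_affine_lm(oct_pts, robot_pts):
    """Fit robot = A @ oct + t with Levenberg-Marquardt (Sec. II-D).

    oct_pts, robot_pts : (N, 3) corresponding needle-tip positions in
    OCT voxel coordinates and robot coordinates, N >= 4 (we use N = 9).
    Returns the 3x3 matrix A and the translation vector t.
    """
    def residuals(p):
        A, t = p[:9].reshape(3, 3), p[9:]
        return (oct_pts @ A.T + t - robot_pts).ravel()

    p0 = np.concatenate([np.eye(3).ravel(), np.zeros(3)])  # identity start
    sol = least_squares(residuals, p0, method="lm")
    return sol.x[:9].reshape(3, 3), sol.x[9:]

def oct_to_robot(A, t, voxel):
    """Map a selected OCT target voxel to a robot goal position."""
    return A @ np.asarray(voxel, dtype=float) + t
```

With nine correspondences, the 27 residuals comfortably over-determine the twelve parameters, which is why additional points bring diminishing returns.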
E. Needle-tip dewarping
For our experiments, we used cadaveric pig eyes. Due to the limited imaging depth of our customized swept-source OCT system, it does not support simultaneous corneal and retinal imaging. Therefore, we prepared the pig eye as in an open-sky procedure: we cut the eye in half and removed the vitreous, exposing the retinal surface. We used the pig eye retina as a target and placed it underneath the OCT scanner. The upper part of the OCT volume therefore consisted of air, while the remainder consisted of the pig eye retina (see Fig. 1b). Some occasional remaining vitreous could also be seen on top of the retina. Since the optical path changes as the refractive index changes [25], a visual angular displacement appears in OCT images when the needle enters the retinal layers (see Fig. 1b). This angular displacement can be compensated numerically in the OCT image frame [26], [27]. The refraction angle inside the retina can be calculated by
$$\theta_{retina} = \arcsin\!\left( \frac{n_{air}}{n_{retina}} \sin \theta_{air} \right) \qquad (3)$$
where the refractive index of the retina, $n_{retina}$, is assumed to be 1.33 [28], [29], and the refractive index of air, $n_{air}$, is known to be 1.0. Moreover, the physical depth of the target inside the retina can be compensated before being transformed into robot coordinates by

$$d_{physical} = \frac{d_{image}}{n_{retina}} \qquad (4)$$

where $d_{physical}$ and $d_{image}$ represent the physical depth and image depth of the target from the surface of the retina, respectively.
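A minimal sketch of Eqs. (3) and (4), using the stated refractive indices:

```python
import numpy as np

N_AIR, N_RETINA = 1.0, 1.33

def refraction_angle(theta_air_rad):
    """Eq. (3): Snell's law gives the needle's refraction angle in the retina."""
    return np.arcsin(N_AIR / N_RETINA * np.sin(theta_air_rad))

def dewarp_depth(d_image_um):
    """Eq. (4): OCT optical path length overstates depth by n_retina."""
    return d_image_um / N_RETINA

# A target drawn 250 um below the retinal surface in the image sits at a
# physical depth of about 188 um before mapping into robot coordinates.
print(dewarp_depth(250.0))
```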
III. Experiment
A. Setup
The setup consists of a steady-hand eye robot with an injection needle attached; above the needle is an OCT machine, and the target lies beneath the needle. The setup is shown in Fig. 1a. The needle is custom-made and has an 80 μm outer diameter. The robot is explained in section II-A, and the OCT in section II-B. For this experiment, the robot is constrained to translation in the X, Y, Z directions without rotation (see axes in Fig. 2). The constraint is in place because the needle is attached to the end effector but not along its axis, so rotations produce motion about an instantaneous center of curvature away from the needle, and the tip displacement for a given angle change is not constant. To test the calibration between the robot and the OCT, we placed a target board and a cadaveric pig eye underneath the OCT in our steering framework. The target board is made from aluminum, with a hole radius of around 117 μm and a center-to-center spacing between holes (in any direction) of around 605 μm.
B. Calibration
We moved the robot in nine steps along its X, Z, and Y directions (see axes in Fig. 2), respecting the constraints discussed in section II-D, recorded the robot positions, and saved the corresponding OCT volumes. The needle-tip positions were then localized in the saved OCT volumes as described in subsection II-C. From the nine corresponding needle-tip and robot positions, an affine transformation (linear map and translation) is computed with the LM algorithm, as stated in subsection II-D. The calibration is done only once per procedure.
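Putting the pieces together, the loop below sketches the nine-pose acquisition. Here `robot`, `oct`, and `segment_needle` are hypothetical stand-ins for the SHER interface, the SSOCT volume grab, and the Sec. II-C segmentation network, while `needle_tip_from_mask` and `fit_affine_lm` refer to the earlier sketches; the actual system may differ.

```python
import numpy as np

def collect_correspondences(robot, oct, offsets_mm):
    """Visit nine poses spanning the working volume and record
    (OCT tip voxel, robot position) pairs for the one-time LM fit."""
    oct_pts, robot_pts = [], []
    base = robot.position.copy()
    for off in offsets_mm:                    # nine XYZ steps
        robot.move_to(base + off)
        volume = oct.grab_volume()            # one saved C-scan per pose
        tip = needle_tip_from_mask(segment_needle(volume))
        oct_pts.append(tip)
        robot_pts.append(robot.position.copy())
    return np.asarray(oct_pts, float), np.asarray(robot_pts, float)

# Once per procedure:
# A, t = fit_affine_lm(*collect_correspondences(robot, oct, offsets_mm))
```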
C. Target selection
The end goal is to use the procedure intraoperatively on human eyes. For these preliminary steps, we used a target board and cadaveric pig eyes. We used the board because targets identified inside the pig eye tissue are not static: when a needle moves inside the tissue, the target might deform, move, or both. The board provides a set of static targets for calculating the offset between the mapped estimated position of the needle and its actual position. Since our customized OCT machine does not support whole-eye imaging, we performed an open-sky procedure, as explained in section II-E.
D. Evaluation
To evaluate our method, we visualized the OCT volume in cross-sectional images, and a biomedical expert with three years of experience manually selected the outermost point of the needle as the needle-tip. As can be seen in Fig. 4, the needle is gray against a dark background and is easily identified. In addition, when we cut the eye in half, the retina detaches easily after a couple of needle insertions, and varying amounts of moisture on the retinal surface can create optical artifacts that change over time. Therefore, we tested our framework ten times on the static target board in addition to four times inside the retina. By the nature of our SSOCT system, we cannot clearly distinguish any anatomical landmarks other than the retinal layers; the targets were therefore chosen in the subretinal space of the cadaveric pig eye at arbitrary U and W positions. After calibration, we placed the target board or eye underneath the OCT and visualized the volume in our framework. We chose ten target points slightly above the holes using the framework and mapped the desired voxel positions into robot positions using the mapping calculated in the calibration step. After each target was reached and before retracting the robot, we saved one OCT volume. Similar trials were made on four targets in the pig eye, with the additional dewarping described in section II-E applied before transforming to robot coordinates via the calibration mapping. As with the board, we moved the robot to these estimated target points, returning the needle to the base position after each target was reached. Afterward, we calculated the errors between the chosen voxel position and the final needle-tip position, per axis and as a Euclidean distance, both for manually and automatically selected needle-tip positions. Since we know the dimensions and resolution of the OCT, we also converted the errors from voxels into μm.
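The error computation described above amounts to a voxel-space difference scaled by the voxel pitch. A minimal sketch, assuming the in-air pitch from Sec. II-B (the axial pitch differs slightly inside tissue):

```python
import numpy as np

PITCH_UM = np.array([3000.0 / 512, 3.6, 3000.0 / 512])  # (U, V, W), in air

def targeting_errors(target_vox, tip_vox):
    """Per-axis and Euclidean (L2) errors between chosen target voxels and
    final needle-tip voxels, reported in voxels and in um.

    target_vox, tip_vox : (N, 3) arrays of (U, V, W) voxel coordinates,
    one row per trial.
    """
    err_vox = np.abs(np.asarray(target_vox, float) - np.asarray(tip_vox, float))
    err_um = err_vox * PITCH_UM
    l2_um = np.linalg.norm(err_um, axis=1)          # per-trial Euclidean error
    return {
        "mean_per_axis_vox": err_vox.mean(axis=0),
        "mean_l2_vox": np.linalg.norm(err_vox, axis=1).mean(),
        "mean_per_axis_um": err_um.mean(axis=0),
        "mean_l2_um": l2_um.mean(),
        "rmse_um": np.sqrt((l2_um ** 2).mean()),    # accuracy measure (Sec. IV)
    }
```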
IV. RESULTS
Our experiments demonstrated that our method yields a clinically acceptable targeting error on cadaveric pig eyes. We tested the optimized transformation matrix by comparing the selected points with the reached positions for 14 desired targets: ten on the target board and four within the pig eye's retina. The experiments yielded encouraging results: the lowest Euclidean norm error is 13.6 μm and the highest is 34.0 μm for the target board trials; for the intraretinal trials, the values are 17.2 μm and 33.6 μm, respectively (see Table I). The mean Euclidean norm error is 24.7 μm for the target board and 23.3 μm for the pig eye (see Table I), below the acceptable error of 25 μm for SI [6]. The errors for the target board trials and the retinal trials are comparable, showing that the suggested approach of optimizing the LM algorithm over nine corresponding OCT-volume needle-tip and robot locations can provide a meaningful mapping for a variety of targets and specimens. The standard deviations of 5.9 μm and 6.7 μm for the target board and cadaveric pig eye assessments, respectively, indicate that the size of the errors varies (see Table II), showing that there is not, or at least not only, a constant offset error from the mapping. When evaluating the framework's overall performance, the root mean square error (RMSE) is important since it may be viewed as a measure of accuracy. The RMSE is 25.4 μm for the target board evaluation and 24.3 μm for the pig eye evaluation.
TABLE I: Mean error between the desired target points on the target board/retina and the final needle-tip positions, per axis and as the Euclidean norm (L2), in voxels and in μm.

**Target board**

| | U (voxels) | V (voxels) | W (voxels) | L2 (voxels) | U (μm) | V (μm) | W (μm) | L2 (μm) |
|---|---|---|---|---|---|---|---|---|
| Manual | 2.4 | 4.4 | 1.6 | 5.6 | 14.1 | 15.8 | 9.4 | 24.7 |
| Automatic | 3.0 | 5.0 | 1.2 | 6.5 | 17.3 | 18.0 | 6.8 | 29.1 |

**Retina**

| | U (voxels) | V (voxels) | W (voxels) | L2 (voxels) | U (μm) | V (μm) | W (μm) | L2 (μm) |
|---|---|---|---|---|---|---|---|---|
| Manual | 3.2 | 1.8 | 1.6 | 4.0 | 18.9 | 6.6 | 9.7 | 23.3 |
| Automatic | 1.8 | 6.8 | 3.0 | 9.4 | 10.3 | 24.3 | 17.6 | 23.8 |
TABLE II: Root mean square error ± standard deviation per axis, in μm, and the Euclidean norm (L2).

**Target board**

| | U (μm) | V (μm) | W (μm) | L2 (μm) |
|---|---|---|---|---|
| Manual | 14.8 ± 4.7 | 17.6 ± 7.6 | 10.8 ± 5.4 | 25.4 ± 5.9 |
| Automatic | 27.0 ± 10.6 | 15.2 ± 5.1 | 11.2 ± 8.4 | 33.0 ± 6.7 |

**Cadaveric pig eye**

| | U (μm) | V (μm) | W (μm) | L2 (μm) |
|---|---|---|---|---|
| Manual | 20.1 ± 6.7 | 8.4 ± 5.3 | 10.6 ± 4.5 | 24.3 ± 6.7 |
| Automatic | 10.6 ± 2.5 | 45.1 ± 38.0 | 19.9 ± 9.3 | 50.4 ± 29.8 |
After establishing the robustness of our framework, we added the automatic tip-identification component in order to construct an autonomous calibration mechanism capable of automatically determining the current needle-tip location after each movement. The lowest Euclidean norm error is 15.7 μm and the highest is 42.5 μm for the target board trials; for the intraretinal trials, the values are 13.6 μm and 90.9 μm, respectively (see Table I). Including the fourth point, an outlier with a Euclidean norm error of 90.9 μm (the other points range from 13.6 to 26.2 μm), the mean Euclidean error for the experiments inside the retinal layers is 23.8 μm, compared to 23.3 μm for manual needle identification.
We compared our results to those of the papers in section I-A that, in different ways, calibrated between a robot-mounted tool and the OCT. Yu et al. presented a tracking error between 73 and 159 μm, with a standard deviation of 52 μm [12]. Draelos et al. reported a needle segmentation error in the range of 24±26 μm and 30±32 μm [14]. Zhou et al. reported a calibration error in the range of 2–35 μm, with mean errors of 7.0 μm and 9.2 μm [15]. In their second paper, Zhou et al. reported a needle-tip estimation error between 2.4 and 34.8 μm [16]. Yuan et al. showed a root mean square error of 75–227 μm [17]. Our errors are in the range of, or well below, those presented in these prior works (see Table I).
Our results should be interpreted in light of several limitations. Due to the trade-off between the working distance and the lateral resolution of our customized swept-source OCT system, we sacrifice some lateral resolution to provide enough working distance for robotic needle control. This results in noisy, lower-contrast OCT volumes, which makes identifying the needle-tip position difficult. Moreover, due to the partial volume effect, the needle-tip position might lie between A- or B-scans and thus not be present in the acquired OCT volume. If we misestimate the needle-tip by one voxel, our resolution yields an error between 3.6 and 5.9 μm (depending on the axis). The same holds for the subretinal trials: if we misestimate the RPE layer (which is needed for the angle calculation) by just one pixel, the 3.6–5.9 μm needle-tip error is compounded by an additional 3.6–5.9 μm dewarping error. For example, in terms of voxels, the RMSE for the manual segmentation is 5.6 voxels from the assumed needle-tip position (2.4, 4.4, and 1.6 voxels in the U, V, and W directions) for the target board trials, and 9.4 voxels (1.8, 6.8, and 3.0 voxels) for the trials in the subretinal space (Table I). These errors come from calibration error and from not being able to differentiate between voxels in a small area. Furthermore, the results for the automatic needle-tip finder show that the model has trouble identifying the needle-tip mostly along the V-axis (a nearly 25 μm error on the V-axis in Table I): the model has problems discriminating between grey pixels belonging to the needle and grey pixels belonging to the retinal layers.
The larger errors in the subretinal trials compared with the target board trials stem from the refractive index difference between the air and the retina, which creates the need for dewarping and thereby induces additional errors in the framework. Besides the accumulated refraction-compensation errors, the absence of live pig eyes increases the error: the cadaveric open-sky procedure increases the risk of retinal detachment and imposes a time limit on the experiments before the retinal layers degrade, changing the structure and features of the retina.
V. CONCLUSION
This paper presents a novel approach for controlling a robot through a framework that computes a mapping between the robot and the OCT volume without the need for markers, requiring only movements to nine arbitrary points. We demonstrated the method's resilience in a total of 14 trials on two targets with differing material properties: a static aluminum board and cadaveric pig eyes. Both experiments produced a mean error below the 25 μm error tolerance for subretinal injection. The automatic needle-finding results are encouraging; while still subject to improvement, they show that an automatic pipeline is feasible. Our future research will concentrate on more exact needle estimation in occluded parts of the eye and on injection route planning techniques.
ACKNOWLEDGEMENTS
This work was supported in part by the U.S. National Institutes of Health under grant numbers 1R01EB023943-01 and 1R01EB025883-01, and by Johns Hopkins University internal funds.
References
- [1] Gilbert C, Burr J, and Lecturer S, "New issues in childhood blindness," Journal of Community Eye Health, vol. 14, 2001.
- [2] Davis J, Gregori N, MacLaren R, and Lam B, "Surgical technique for subretinal gene therapy in humans with inherited retinal degeneration," Retina, vol. 39, Suppl 1, p. 1, 2019.
- [3] Jo Y-J, Heo D, Shin Y-I, and Kim J-Y, "Diurnal variation of retina thickness measured with time domain and spectral domain optical coherence tomography in healthy subjects," Investigative Ophthalmology and Visual Science, vol. 52, pp. 6497–6500, 2011.
- [4] Gaudana R, Ananthula H, Parenky A, and Mitra A, "Ocular drug delivery," The AAPS Journal, vol. 12, pp. 348–360, 2010.
- [5] Ladha R, Meenink T, Smit J, and De Smet M, "Advantages of robotic assistance over a manual approach in simulated subretinal injections and its relevance for gene therapy," Gene Therapy, 2021.
- [6] Zhou M, Yu Q, Mahov S, Huang K, Eslami A, Maier M, Lohmann C, Navab N, Zapp D, Knoll A, and Nasseri MA, "Towards robotic-assisted subretinal injection: A hybrid parallel–serial robot system design and preliminary evaluation," IEEE Transactions on Industrial Electronics, 2019.
- [7] Ehlers J, Modi Y, Pecen P, Goshe J, Dupps W, Rachitskaya A, Sharma S, Yuan A, Singh R, Kaiser P, Reese J, Calabrise C, Watts A, and Srivastava S, "The DISCOVER study 3-year results: Feasibility and usefulness of microscope-integrated intraoperative OCT during ophthalmic surgery," Ophthalmology, vol. 125, 2018.
- [8] Gregori N, Lam B, and Davis J, "Intraoperative use of microscope-integrated optical coherence tomography for subretinal gene therapy delivery," Retina, vol. 39, Suppl 1, p. 1, 2017.
- [9] Huang D, Swanson E, Lin C, Schuman J, Stinson W, Chang W, Hee M, Flotte T, Gregory K, Puliafito C, and Fujimoto JG, "Optical coherence tomography," Science, vol. 254, p. 1178, 1991.
- [10] Tsai R and Lenz R, "A new technique for fully autonomous and efficient 3D robotics hand/eye calibration," IEEE Transactions on Robotics and Automation, vol. 5, no. 3, pp. 345–358, 1989.
- [11] Yu H, Shen J, Shah R, Simaan N, and Joos K, "Evaluation of microsurgical tasks with OCT-guided and/or robot-assisted ophthalmic forceps," Biomedical Optics Express, vol. 6, p. 457, 2015.
- [12] Yu H, Shen J, Joos K, and Simaan N, "Calibration and integration of B-mode optical coherence tomography for assistive control in robotic micro-surgery," IEEE/ASME Transactions on Mechatronics, vol. 21, 2016.
- [13] Nasseri MA, Maier M, and Lohmann C, "A targeted drug delivery platform for assisting retinal surgeons for treating age-related macular degeneration (AMD)," 2017, pp. 4333–4338.
- [14] Draelos M, Tang G, Keller B, Kuo A, Hauser K, and Izatt J, "Optical coherence tomography guided robotic needle insertion for deep anterior lamellar keratoplasty," IEEE Transactions on Biomedical Engineering, 2019.
- [15] Zhou M, Hamad M, Weiss J, Eslami A, Huang K, Maier M, Lohmann C, Navab N, Knoll A, and Nasseri MA, "Towards robotic eye surgery: Marker-free, online hand-eye calibration using optical coherence tomography images," IEEE Robotics and Automation Letters, 2018.
- [16] Zhou M, Hao X, Eslami A, Huang K, Cai C, Lohmann C, Navab N, Knoll A, Nasseri MA, and Xia C, "6DOF needle pose estimation for robot-assisted vitreoretinal surgery," IEEE Access, 2019.
- [17] Tian Y, Draelos M, Tang G, Qian R, Kuo A, Izatt J, and Hauser K, "Toward autonomous robotic micro-suturing using optical coherence tomography calibration and path planning," 2020, pp. 5516–5522.
- [18] Del Giudice G, Orekhov A, Shen J, Joos K, and Simaan N, "Investigation of micro-motion kinematics of continuum robots for volumetric OCT and OCT-guided visual servoing," IEEE/ASME Transactions on Mechatronics, 2020.
- [19] Fleming I, Balicki M, Koo J, Iordachita I, Mitchell B, Handa J, Hager G, and Taylor R, "Cooperative robot assistant for retinal micro-surgery," vol. 11, 2008, pp. 543–550.
- [20] He X, Roppenecker D, Gierlach D, Balicki M, Olds K, Gehlbach P, Handa J, Taylor R, and Iordachita I, "Toward clinically applicable steady-hand eye robot for vitreoretinal surgery," vol. 2, 2012.
- [21] Wei S, Guo S, and Kang JU, "Analysis and evaluation of BC-mode OCT image visualization for microsurgery guidance," Biomedical Optics Express, vol. 10, no. 10, pp. 5268–5290, 2019.
- [22] Çiçek Ö, Abdulkadir A, Lienkamp S, Brox T, and Ronneberger O, "3D U-Net: Learning dense volumetric segmentation from sparse annotation," 2016, pp. 424–432.
- [23] Gavin HP, "The Levenberg-Marquardt algorithm for nonlinear least squares curve-fitting problems," Department of Civil and Environmental Engineering, Duke University.
- [24] Ma Y, Soatto S, Kosecka J, and Sastry SS, An Invitation to 3-D Vision: From Images to Geometric Models. Springer-Verlag, 2003.
- [25] Tearney G, Brezinski M, Southern J, Bouma B, Hee M, and Fujimoto J, "Determination of the refractive index of highly scattering human tissue by optical coherence tomography," Optics Letters, vol. 20, no. 21, pp. 2258–2260, 1995.
- [26] Turani Z, Fatemizadeh E, Xu Q, Daveluy S, Mehregan D, and Avanaki MRN, "Refractive index correction in optical coherence tomography images of multilayer tissues," Journal of Biomedical Optics, vol. 23, no. 7, p. 070501, 2018.
- [27] Draelos M, Tang G, Keller B, Kuo A, Hauser K, and Izatt JA, "Optical coherence tomography guided robotic needle insertion for deep anterior lamellar keratoplasty," IEEE Transactions on Biomedical Engineering, vol. 67, no. 7, pp. 2073–2083, 2019.
- [28] Baumann B, Götzinger E, Pircher M, Sattmann H, Schütze C, Schlanitz F, Ahlers C, Schmidt-Erfurth U, and Hitzenberger CK, "Segmentation and quantification of retinal lesions in age-related macular degeneration using polarization-sensitive optical coherence tomography," Journal of Biomedical Optics, vol. 15, no. 6, p. 061704, 2010.
- [29] Fabritius T, Makita S, Miura M, Myllylä R, and Yasuno Y, "Automated segmentation of the macula by optical coherence tomography," Optics Express, vol. 17, no. 18, pp. 15659–15669, 2009.
