Abstract
In this paper we compare three approaches to solving the hand-eye and robot-world calibration problem for their application to a Transcranial Magnetic Stimulation (TMS) system. The selected approaches are: i) a non-orthogonal approach (QR24); ii) stochastic global optimization (SGO); iii) a quaternion-based (QUAT) method. Performance was evaluated in terms of translation and rotation errors, and computational time. The experimental setup is composed of a 7 dof Panda robot (by Franka Emika GmbH) and a Polaris Vicra camera (by Northern Digital Inc) combined with the SofTaxic Optic software (by E.M.S. srl). The SGO method showed the best performance, providing the lowest errors and high stability across different datasets and numbers of calibration points. Its only drawback is its computational time, which is higher than that of the other two methods, but this parameter is not critical for the TMS application. Across the datasets used in our tests, the smallest workspace (a sphere with a radius of 0.05m) and about 150 calibration points allow the best performance to be achieved with the SGO method, with an average error of 0.83 ± 0.35mm in position and 0.22 ± 0.12deg in orientation.
I. Introduction
In neuroscience, Transcranial Magnetic Stimulation (TMS) is one of the most widely employed techniques to investigate human brain behavior. Indeed, TMS is a non-invasive method to induce cortical activity painlessly. It generates a local stimulation inside the cortex through an electromagnetic coil placed on the scalp of the subject. A critical issue of this technique is the positioning of the coil: because of the density of the induced field, small changes in coil orientation lead to large variations of the induced current intensity inside the stimulated cortical area [1].
Usually TMS is carried out manually, with the coil hand-held by an expert operator. To help the operator with coil positioning, neuronavigation systems are often employed [2]. These systems are composed of a stereotaxic system and software that computes 3D models of the brain and the head, correlating geometrical data of the scalp with MR images.
To overcome the drawbacks of performing the procedure manually, e.g. low accuracy and low repeatability in coil positioning and support, robot-aided TMS systems have been proposed in a number of studies [3], [4], [5], [6], [7]. These systems employ a robotic manipulator to move the coil to the stimulation target with high accuracy and repeatability. Moreover, the robotic manipulator can easily compensate for the subject's head motion, keeping the coil on the hot-spot, unlike any human operator.
However, the use of robots in TMS is strictly linked to the neuronavigation system and to the stereotaxic one. For the entire system to work properly, the robot and the stereotaxic system have to be calibrated.
In the literature this problem is called tool-flange (or, equivalently, hand-eye or hand-in-eye) and robot-world calibration, and it leads to a matrix equation of the type MX = YN, where M is the robot end-effector pose with respect to the robot base frame, X is the unknown transformation between the end-effector and the connected coil, Y is the unknown transformation between the robot base frame and the camera base frame, and N represents the coil pose with respect to the camera base frame [8].
In robotics this is a well-known problem that has been addressed with several different approaches over the last decades [9], [8], [10], [11], [12]. However, these have been tested on general robotic applications, without taking into account TMS-related constraints, e.g. a limited workspace.
In the present paper three different calibration algorithms have been evaluated and compared for the specific application of robot-aided TMS [8], [10], [13]. Indeed, a further aim of the present paper is to tailor the validation of these general approaches to the TMS application.
In particular, two approaches, based on different solution strategies and presented relatively recently, have been selected and compared with an older and well-known calibration method.
The paper is organized as follows: in section II-A the non-linear calibration problem is introduced; the different solving approaches, together with the method used to compare them, are described in section II-B; section II-C describes the experimental setup; the results are then shown and discussed in section III; finally, section IV summarizes and concludes the present study, suggesting future work.
II. Materials and Methods
A. Problem Formulation
The calibration of a robot-aided TMS system with a stereotaxic neuronavigation system can be addressed as a two-frame sensor calibration problem. The objective is to find the rigid-body homogeneous transformation matrices X and Y that best fit a set of equations of the type
$$ M_i X = Y N_i \tag{1} $$
where, for the present case, the 4x4 homogeneous transformation matrices $M_i$ and $N_i$ represent, respectively, the robot end-effector pose with respect to the robot base frame and the coil pose with respect to the camera base frame, both measured at different poses i [10].
On the other hand, the X and Y matrices, which are respectively the 4x4 homogeneous transformations between the robot end-effector and the coil and between the robot base frame and the camera base frame, remain the same for all measurements and are initially unknown when the system is set up.
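Writing each homogeneous matrix in block form, e.g. $M_i = \begin{bmatrix} R_{M_i} & t_{M_i} \\ 0 & 1 \end{bmatrix}$, Eq. 1 splits into a rotational and a translational constraint,

$$ R_{M_i} R_X = R_Y R_{N_i}, \qquad R_{M_i} t_X + t_{M_i} = R_Y t_{N_i} + t_Y, $$

a decomposition that the algorithms described in section II-B exploit in different ways.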
B. Calibration Algorithms
In this section two different approaches to solve the hand-eye calibration problem are illustrated: a non-orthogonal approach and a stochastic global optimization algorithm. These two approaches have been proposed relatively recently and represent two different ways to solve the above-mentioned calibration problem.
A third method, a quaternion-based algorithm, has been evaluated and compared with the other two. Since this last method is widely known and used in the literature, for the sake of brevity it is not described here; further details can be found in [13].
1). Non-Orthogonal Method
The non-orthogonal hand-eye calibration method was presented by Ernst et al. in 2012 [8]. It is based on a least-squares approach. Unlike other methods in the literature, this approach allows for non-orthogonal calibration matrices. Like the quaternion-based method, it computes the rotational and translational parts of the X and Y matrices simultaneously.
By collecting the non-trivial elements of X and Y into a vector w, Eq. 1 can be expressed as a system of linear equations
$$ A w = b \tag{2} $$
where $A \in \mathbb{R}^{12n \times 24}$, $b \in \mathbb{R}^{12n}$ and $w \in \mathbb{R}^{24}$.
The system in Eq. 2 is solved in a least squares sense by means of QR-factorisation, minimizing the Frobenius norm:
$$ \min_{w} \left\| A w - b \right\|_F \tag{3} $$
Since 24 elements have to be estimated and the QR-factorisation is employed, the method is also called QR24 calibration algorithm.
Due to the non-orthogonality of the X and Y matrices, the calibrated poses must be orthonormalised before they can be used. The authors of that study suggested performing the orthonormalisation by means of singular value decomposition (SVD).
It is worth noticing that this approach is sensitive to the unit of the positional measurements, since all matrix elements are treated equally. As reported in that study, lower errors are achieved using millimeters instead of meters for the position measurements.
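As an illustration of this linear formulation, the following MATLAB sketch assembles and solves the system of Eq. 2 starting from the block decomposition given after Eq. 1. It is only a sketch under our own conventions: the ordering of the elements in w, the function names and the SVD-based orthonormalisation helper are ours and do not reproduce the reference implementation of [8].

```matlab
function [X, Y] = qr24_calibration(M, N)
% QR24-style linear least-squares calibration (illustrative sketch, not the
% reference implementation of [8]). M{i}, N{i} are 4x4 homogeneous poses.
% Unknowns: w = [vec(R_X); t_X; vec(R_Y); t_Y] (24 elements).
n = numel(M);
A = zeros(12*n, 24);
b = zeros(12*n, 1);
for i = 1:n
    Rm = M{i}(1:3,1:3);  tm = M{i}(1:3,4);
    Rn = N{i}(1:3,1:3);  tn = N{i}(1:3,4);
    r = 12*(i-1);
    % Rotation constraint: Rm*R_X - R_Y*Rn = 0   (9 equations per pose pair)
    A(r+(1:9), 1:9)   =  kron(eye(3), Rm);    % vec(Rm*R_X) = (I kron Rm) vec(R_X)
    A(r+(1:9), 13:21) = -kron(Rn', eye(3));   % vec(R_Y*Rn) = (Rn' kron I) vec(R_Y)
    % Translation constraint: Rm*t_X - R_Y*tn - t_Y = -tm   (3 equations)
    A(r+(10:12), 10:12) =  Rm;
    A(r+(10:12), 13:21) = -kron(tn', eye(3));
    A(r+(10:12), 22:24) = -eye(3);
    b(r+(10:12))        = -tm;
end
% Least-squares solution of Eq. 2 via economy-size QR factorisation
[Q, R] = qr(A, 0);
w = R \ (Q' * b);
X = [reshape(w(1:9),  3, 3), w(10:12); 0 0 0 1];
Y = [reshape(w(13:21), 3, 3), w(22:24); 0 0 0 1];
% Orthonormalise the (generally non-orthogonal) rotation blocks via SVD
X(1:3,1:3) = project_to_SO3(X(1:3,1:3));
Y(1:3,1:3) = project_to_SO3(Y(1:3,1:3));
end

function R = project_to_SO3(Q)
[U, ~, V] = svd(Q);
R = U * diag([1, 1, det(U*V')]) * V';
end
```

With a sufficient number of well-spread pose pairs the stacked system is overdetermined and the 24x24 triangular factor is invertible, so the least-squares solution is unique.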
2). Stochastic Global Optimization Method
In 2016, Ha et al. presented a two-phase stochastic global optimization algorithm [10]. The method is based on a fast and numerically robust local optimization algorithm for the two-frame sensor calibration objective function; this local algorithm is repeatedly invoked as part of the stochastic global optimization. The approach is also based on the minimization of the least-squares criterion
$$ \min_{X,\,Y} \sum_{i=1}^{n} \left\| M_i X - Y N_i \right\|^2 \tag{4} $$
where the weighted norm is defined as $\|P - Q\|^2 = \|R_P - R_Q\|_F^2 + \zeta \, \|p_P - p_Q\|^2$. Here $\|\cdot\|_F$ is, as in Eq. 3, the Frobenius norm, while $R_P, R_Q \in SO(3)$ and $p_P, p_Q \in \mathbb{R}^3$ are the rotations and positions of $P, Q \in SE(3)$, respectively; $SO(3)$ and $SE(3)$ are the rotation group and the special Euclidean group. Finally, $\zeta \in \mathbb{R}^+$ is a positive weighting factor for the translational part; in the following experiments $\zeta$ is set to 0.5.
The main advantage of this approach is its suitability for noisy data, because it searches among the local minima for the best one rather than accepting the first one found.
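Purely for illustration, the following MATLAB sketch evaluates the objective of Eq. 4 and minimizes it with a naive multi-start local search (random restarts of a derivative-free optimizer over an axis-angle parametrization). This is not the two-phase stochastic algorithm of [10]; all function names, the parametrization and the restart strategy are our own simplifications.

```matlab
function [X, Y, best_cost] = multistart_calibration(M, N, zeta, n_starts)
% Multi-start minimisation of the weighted objective in Eq. 4 (illustrative
% sketch, NOT the two-phase stochastic algorithm of [10]).
% M{i}, N{i}: 4x4 poses; zeta: translation weight; n_starts: random restarts.
if nargin < 3, zeta = 0.5; end
if nargin < 4, n_starts = 20; end
best_cost = inf;
opts = optimset('MaxFunEvals', 2e4, 'MaxIter', 2e4, 'Display', 'off');
for s = 1:n_starts
    % Random initial guess: rotations as axis-angle vectors, small translations
    w0 = [pi*(2*rand(3,1)-1); 0.1*randn(3,1); pi*(2*rand(3,1)-1); 0.1*randn(3,1)];
    [w, c] = fminsearch(@(w) objective(w, M, N, zeta), w0, opts);
    if c < best_cost
        best_cost = c;
        [X, Y] = unpack(w);
    end
end
end

function c = objective(w, M, N, zeta)
[X, Y] = unpack(w);
c = 0;
for i = 1:numel(M)
    P = M{i} * X;                        % left-hand side of Eq. 1
    Q = Y * N{i};                        % right-hand side of Eq. 1
    c = c + norm(P(1:3,1:3) - Q(1:3,1:3), 'fro')^2 ...
          + zeta * norm(P(1:3,4) - Q(1:3,4))^2;
end
end

function [X, Y] = unpack(w)
% w = [rotvec_X; t_X; rotvec_Y; t_Y]
X = [rotvec2mat(w(1:3)), w(4:6);   0 0 0 1];
Y = [rotvec2mat(w(7:9)), w(10:12); 0 0 0 1];
end

function R = rotvec2mat(v)
% Rodrigues' formula: axis-angle vector to rotation matrix
theta = norm(v);
if theta < 1e-12, R = eye(3); return; end
k = v / theta;
K = [0 -k(3) k(2); k(3) 0 -k(1); -k(2) k(1) 0];
R = eye(3) + sin(theta)*K + (1 - cos(theta))*K*K;
end
```

Keeping the best of several local minima, rather than the first one found, is the only aspect this sketch shares with the SGO approach; the actual algorithm in [10] uses a far more efficient local solver and a principled stochastic sampling scheme.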
3). Comparison Approach
In order to evaluate and compare the performance of the above-mentioned algorithms, the following approach has been implemented.
First, the calibration workspace has been selected, taking into account the robot and camera workspaces and the typical TMS workspace, i.e. a sphere enclosing the subject's head volume for feasible head movements. Then a series of n equidistant points has been selected within that sphere, and different coil orientations have been chosen for each point. Once the system has been set up, as described in section II-C, the robot has been moved to those points. After reaching each point, the robot stopped and held still in that pose for 1 second.
Measurements from the camera and the robot have been acquired within that second, while the system is not moving, in order to avoid mismatches due to the different acquisition rates of the robot and camera data streams.
Once the calibration has been computed using the three algorithms, we evaluated the performance of each method on data acquired at m = 250 static poses within the calibration workspace, but different from the n calibration points.
The performance of each method has been evaluated in terms of:
- error in position
- error in orientation
- computational time
To compute these errors, referring to Eq. 1, the 4x4 homogeneous transformation matrix $E = (M_i X)^{-1} Y N_i$ has been calculated for each validation pose, with $i = 1, \dots, m$.
The position error has been computed as the average norm of the translational part of E, while the orientation error has been evaluated as the average of the norm in SO(3) (i.e. the rotation angle) of the rotational part of E. This error metric was chosen from the literature and has also been used to validate the above-mentioned algorithms.
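As a concrete illustration, a minimal MATLAB sketch of this evaluation is given below; the function and variable names are ours, and the conversion to millimetres assumes that the poses are expressed in metres.

```matlab
function [pos_err_mm, rot_err_deg] = calibration_errors(M, N, X, Y)
% Position and orientation error of a calibration (X, Y) over m validation
% pose pairs M{i}, N{i} (4x4 homogeneous matrices). Illustrative sketch.
m = numel(M);
pos = zeros(m, 1);
rot = zeros(m, 1);
for i = 1:m
    E = (M{i} * X) \ (Y * N{i});              % E = (M_i X)^{-1} Y N_i
    pos(i) = norm(E(1:3, 4));                 % translational residual
    cang   = (trace(E(1:3, 1:3)) - 1) / 2;    % cosine of the rotation angle of E
    rot(i) = acos(min(1, max(-1, cang)));     % SO(3) norm = rotation angle
end
pos_err_mm  = mean(pos) * 1000;               % assuming poses expressed in metres
rot_err_deg = rad2deg(mean(rot));
end
```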
The same comparison has been carried out varying the number of calibration points (from 5 to 250) and the calibration workspace size (a sphere with a radius from 0.05m to 0.15m). As regards the workspace size range, the upper boundary (sphere radius of 0.15m) was chosen considering the intersection between the camera workspace and the useful part of the robot workspace, i.e. the part allowing the coil to be moved all around the subject's head and thus excluding the region around the robot base. On the other hand, the lower limit was chosen to highlight how the workspace size affects the calibration performance of the different algorithms.
The sphere center was chosen as the center of the maximum feasible calibration workspace.
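The paper does not prescribe an exact construction for the calibration targets; purely as an illustration, the following MATLAB sketch builds one possible set of such poses, using a regular grid of equidistant points clipped to the spherical workspace and a random orientation perturbation bounded by a maximum tilt angle. All names, the grid-based sampling and the perturbation scheme are our own assumptions.

```matlab
function T = generate_calibration_poses(center, radius, spacing, T_nominal, max_tilt_deg)
% One possible construction of calibration target poses (illustrative only):
% equidistant grid points clipped to the spherical workspace, each with a
% random tilt of at most max_tilt_deg around the nominal coil orientation.
g = -radius:spacing:radius;
[xx, yy, zz] = ndgrid(g, g, g);
pts = [xx(:), yy(:), zz(:)];
pts = pts(sum(pts.^2, 2) <= radius^2, :) + center(:)';  % keep points inside the sphere
T = cell(size(pts, 1), 1);
for i = 1:numel(T)
    ax  = randn(3, 1);  ax = ax / norm(ax);              % random tilt axis
    ang = deg2rad(max_tilt_deg) * rand;                  % random tilt magnitude
    K   = [0 -ax(3) ax(2); ax(3) 0 -ax(1); -ax(2) ax(1) 0];
    dR  = eye(3) + sin(ang)*K + (1 - cos(ang))*K*K;      % Rodrigues' formula
    T{i} = [dR * T_nominal(1:3,1:3), pts(i, :)'; 0 0 0 1];
end
end
```

In the experiments described below, the sphere radius was varied between 0.05m and 0.15m and the orientation variation was limited to ±30deg.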
C. Experimental Setup
A 7 dof robotic manipulator, the Panda robot by Franka Emika GmbH, has been used, together with a neuronavigation system composed of the SofTaxic Optic software (by E.M.S. srl) and the Polaris Vicra infrared camera (by Northern Digital Inc). A D70mm Alpha coil (by Magstim) has been attached to the robot end-effector by means of a 3D printed flange.
The camera was positioned in front of the robot (see Fig. 1). In order to obtain the maximum feasible calibration workspace, the camera was aligned with the robot base, at a distance of 1.7m and 0.6m above it along the vertical axis.
Fig. 1.
Experimental setup: on the left side the Panda robot (1) with the 3D printed flange and the coil (2) attached and connected to the coil item (3); on the right side the Polaris Vicra camera (4). The camera and the robot are connected to a PC (5) through a USB and an Ethernet cable, respectively. A phantom head (6) with the head reference item attached and visible to the camera allows the SofTaxic Optic software to work.
Passive markers fixed on two rigid bodies, called the head reference and the coil item, were placed inside the camera workspace. Each item has 4 passive markers attached in a known geometry, identifiable by both the camera and the neuronavigation software, so that its position and orientation can be measured.
In particular, the coil item was fixed to the coil, while the head reference item, fixed to a phantom head, needs to be visible to the camera for the system to work properly. Indeed, the camera data have been streamed directly through the neuronavigation software, as in a real TMS session.
Hence, the head reference needs to be visible for the entire session, in order to let the SofTaxic Optic software work. However, to let the robot move without obstacles, during the data acquisition the phantom head, with the attached markers, has been kept outside the calibration workspace, while remaining visible to the camera.
The camera has been connected to a PC (Ubuntu 16.04 OS) through a USB cable, while the robot has been connected to the same PC through an Ethernet cable.
An interface to manage the robot and camera connections has been developed in C++, using the Qt libraries.
The robot has been controlled in real time, with a 1kHz control rate, by means of the Franka Control Interface (FCI). Once both the camera and the robot had been connected, the latter was moved to the calibration points. Camera and robot data (respectively the coil item pose with respect to the camera base frame and the end-effector pose with respect to the robot base frame) have both been acquired at the maximum camera frequency, i.e. 20Hz.
The acquired data were then processed in Matlab 2017. First, only the data at static poses were selected; then the calibration algorithms were implemented and tested on those data. Specifically, n poses were acquired to perform the calibration and another m = 250 poses were used to evaluate the algorithms' performance, with n varied from 5 to 250 in steps of 15. The calibration workspace was initially chosen as a sphere with a radius of 0.1m, with the orientation varying in the range ±30deg among the different points. Each approach was then tested on spherical workspaces of different sizes, with radii of 0.05, 0.08, 0.1 and 0.15m, and with the orientation again varying in the range ±30deg.
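The paper does not detail how the static samples were identified within the 20Hz streams; one simple possibility, sketched below in MATLAB, is to keep only those samples whose neighbourhood stays within a small position tolerance. The window length and threshold are illustrative assumptions, not values taken from the study.

```matlab
function idx = select_static_samples(positions, window, tol)
% positions: m-by-3 end-effector positions sampled at 20 Hz.
% A sample is kept only if every position within a +/- window neighbourhood
% lies within tol of it, i.e. the robot is holding still at a calibration
% point. Window length and tolerance are illustrative choices.
if nargin < 2, window = 10; end   % +/- 10 samples = +/- 0.5 s at 20 Hz
if nargin < 3, tol = 1e-4; end    % 0.1 mm, assuming positions in metres
m = size(positions, 1);
keep = false(m, 1);
for i = (window + 1):(m - window)
    nb = positions(i-window:i+window, :);
    d  = sqrt(sum((nb - positions(i, :)).^2, 2));
    keep(i) = all(d < tol);
end
idx = find(keep);
end
```

One representative robot sample and one camera sample per detected static segment can then be paired to form the (M_i, N_i) couples used by the calibration algorithms.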
III. Results
As highlighted in Fig. 2, the three algorithms evaluated show the same performance in terms of orientation error, but differ in position error and computational time.
Fig. 2.
Calibration performance for QUAT, SGO and QR24 algorithms with respect to the number of calibration points employed, evaluated for a sphere workspace with radius of 0.1m: a) Position errors; b) Orientation errors; c) Computational time. The solid lines and the colored areas represent the mean value and the standard deviation, respectively.
Indeed, Fig. 2b shows that the SGO, QUAT and QR24 algorithms presented nearly the same behavior and values for the orientation error, about 0.24 ± 0.12deg, as the number n of calibration points varies.
On the other hand, the position error values differ among the approaches (Fig. 2a). The QR24 algorithm shows the highest error values, with a maximum of 2.19 ± 0.72mm for a workspace with a radius of 0.1m, while the SGO and QUAT methods, which present the same trend for n ≥ 20, have a maximum error within that range of 0.98 ± 0.40mm. Only for n < 20 does the QUAT approach present error values a few orders of magnitude higher than for n ≥ 20, resulting in an almost vertical trend in Figs. 2a and 2b.
As regards the computational time, the QR24 method shows the best performance, with values lower than 1s even for large numbers of calibration points. The SGO method has a computational time varying in the range 8–17s, only slightly influenced by the number of calibration points involved.
Conversely, the computational time of the QUAT algorithm strongly depends on the number of pose pairs involved in the calibration, varying from 1.17s to 15.11s for n in the range 5–250.
It is worth mentioning that the quaternion-based approach showed inconsistent performance across different datasets, unlike the other two approaches.
The SGO method showed lower error values (0.92 ± 0.36mm and 0.24 ± 0.13deg for the best case n = 155) than the QR24 method (1.80 ± 0.55mm and 0.24 ± 0.13deg for the best case n = 155) and more stability than the QUAT one.
Although the SGO computational time is higher than the QR24 one (10.52 ± 1.69s and 0.15 ± 0.14s, respectively), it remains fairly constant with respect to the number of calibration points.
Overall, a small workspace and about 150 pose pairs allow the best performance to be achieved (position and orientation errors of 0.82 ± 0.33mm and 0.22 ± 0.11deg respectively, and a computational time of about 10s), as shown in Fig. 3 and Fig. 4. This result is reported only for the SGO method, because the errors of the other two approaches did not seem to be influenced by the workspace size.
Fig. 3.
Average values of the position errors for the SGO algorithm with respect to the number of calibration points employed, for sphere workspace of radius r0 = 0.05m, r1 = 0.08m, r2 = 0.1m and r3 = 0.15m.
Fig. 4.
Average value of the orientation errors for the SGO algorithm with respect to the number of calibration points employed, for sphere workspace of radius r0 = 0.05m, r1 = 0.08m, r2 = 0.1m and r3 = 0.15m.
IV. Conclusion
In the present study an evaluation and comparison of three different algorithms for the hand-eye and robot-world calibration problem has been presented, with particular reference to the TMS application. A well-established approach has been compared with two more recently proposed methods. The algorithms have been implemented in Matlab 2017 and evaluated in terms of computational time and position and orientation errors. A robot-aided TMS system, composed of a 7 dof Panda robot (by Franka Emika GmbH) and a Polaris Vicra camera (by Northern Digital Inc) combined with the SofTaxic Optic software (by E.M.S. srl), was employed to test the algorithms. The robot moved the coil (attached to its end-effector) to the defined calibration points, varying its orientation. Robot and camera data were measured at 20Hz, and only the measurements corresponding to static poses were then selected to perform the calibration.
Summing up, among the three calibration methods tested, the SGO one proved to be the best suited for the TMS application, because of its stability across different datasets and numbers of pose pairs, while keeping the lowest position and rotation error values (0.83 ± 0.35mm and 0.22 ± 0.12deg respectively, for a sphere workspace with a radius of 0.05m). Although its computational time is higher than that of the others, it is not influenced by the number of calibration points, and its values are suitable for the TMS application, where the calibration is usually performed offline and only once before many sessions.
The orientation errors of the SGO method, as of the other two, are acceptable for the TMS application, considering that the admissible threshold in the neuronavigation software is usually set to 5 degrees.
Furthermore, a small workspace (a sphere with a radius of 0.05m) and about 150 calibration points allow the best performance to be achieved with the SGO method. Obviously, a sphere radius of 0.05m could be too small and not suitable for all generic TMS applications. However, this result shows that decreasing the workspace size can improve the calibration performance. Therefore, in order to achieve the best performance, it is advisable to adapt and limit the calibration workspace to the smallest one useful for each TMS application, instead of employing the largest workspace for all possible applications.
Future work will address the evaluation of other calibration algorithms, assessing how performance varies in workspaces different from the calibration one. Furthermore, while in this work we evaluated errors only at static poses, the algorithms' performance will also be evaluated under dynamic conditions. In particular, dynamic performance needs to be assessed for scenarios such as high-frequency stimulation during tasks involving head movements.
Acknowledgement
This work was partly supported by the European Project H2020 RESHAPE: REstoring the Self with embodiable HAnd ProsthesEs (ERC-2015-STG, project n. 67890).
References
[1] Zakaria WW, Tomari R, Ngadengon R. Active head motion compensation of TMS robotic system using neuro-fuzzy estimation. MATEC Web of Conferences, EDP Sciences, 2016.
[2] Ruohonen J, Karhu J. Navigated transcranial magnetic stimulation. Neurophysiologie Clinique/Clinical Neurophysiology. 2010;40(1):7–17. doi: 10.1016/j.neucli.2010.01.006.
[3] Lancaster JL, Narayana S, Wenzel D, Luckemeyer J, Roby J, Fox P. Evaluation of an image-guided, robotically positioned transcranial magnetic stimulation system. Human Brain Mapping. 2004;22(4):329–340. doi: 10.1002/hbm.20041.
[4] Kantelhardt SR, Fadini T, Finke M, Kallenberg K, Siemerkus J, Bockermann V, Matthaeus L, Paulus W, Schweikard A, Rohde V, et al. Robot-assisted image-guided transcranial magnetic stimulation for somatotopic mapping of the motor cortex: a clinical pilot study. Acta Neurochirurgica. 2010;152(2):333–343. doi: 10.1007/s00701-009-0565-1.
[5] Yi X, Bicker R. Design of a robotic transcranial magnetic stimulation system. Ph.D. dissertation, University of Newcastle upon Tyne; 2012.
[6] Matthäus L, Giese A, Trillenberg P, Wertheimer D, Schweikard A. Solving the positioning problem in TMS. GMS CURAC. 2006;1.
[7] Pennimpede G, Spedaliere L, Formica D, Di Pino G, Zollo L, Pellegrino G, Di Lazzaro V, Guglielmelli E. Hot Spot Hound: a novel robot-assisted platform for enhancing TMS performance. 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2013, pp. 6301–6304.
[8] Ernst F, Richter L, Matthäus L, Martens V, Bruder R, Schlaefer A, Schweikard A. Non-orthogonal tool/flange and robot/world calibration. The International Journal of Medical Robotics and Computer Assisted Surgery. 2012;8(4):407–420. doi: 10.1002/rcs.1427.
[9] Richter L, Ernst F, Schlaefer A, Schweikard A. Robust real-time robot–world calibration for robotized transcranial magnetic stimulation. The International Journal of Medical Robotics and Computer Assisted Surgery. 2011;7(4):414–422. doi: 10.1002/rcs.411.
[10] Ha J, Kang D, Park FC. A stochastic global optimization algorithm for the two-frame sensor calibration problem. IEEE Transactions on Industrial Electronics. 2016;63(4):2434–2446.
[11] Zhuang H, Roth ZS, Sudhakar R. Simultaneous robot/world and tool/flange calibration by solving homogeneous transformation equations of the form AX = YB. IEEE Transactions on Robotics and Automation. 1994;10(4):549–554.
[12] Li H, Ma Q, Wang T, Chirikjian GS. Simultaneous hand-eye and robot-world calibration by solving the AX = YB problem without correspondence. IEEE Robotics and Automation Letters. 2016;1(1):145–152.
[13] Dornaika F, Horaud R. Simultaneous robot-world and hand-eye calibration. IEEE Transactions on Robotics and Automation. 1998;14(4):617–622.




