Clinical Orthopaedics and Related Research. 2011 Feb 3;469(6):1701–1708. doi: 10.1007/s11999-011-1797-y

Does Perception of Usefulness of Arthroscopic Simulators Differ with Levels of Experience?

Gabriëlle J M Tuijthof 1,2, P Visser 1, Inger N Sierevelt 1, C Niek Van Dijk 1, Gino M M J Kerkhoffs 1
PMCID: PMC3094627; PMID: 21290203

Abstract

Background

Some commercial simulators are available for training basic arthroscopic skills. However, it is unclear if these simulators allow training for their intended purposes and whether the perception of usefulness relates to level of experience.

Questions/purposes

We addressed the following questions: (1) Do commercial simulators have construct (times to perform tasks) and face validity (realism), and (2) is the perception of usefulness (educational value and user-friendliness) related to level of experience?

Methods

We evaluated two commercially available virtual reality simulators (Simulators A and B) and recruited 11 and nine novices (no arthroscopies), four and four intermediates (one to 59 arthroscopies), and seven and nine experts (> 60 arthroscopies), respectively, to test the devices. To assess construct validity, we recorded the median time per experience group for each of five repetitions of one identical navigation task. To assess face validity, we asked participants to perform up to three simulator-characteristic tasks and to complete a questionnaire addressing realism, perceived educational value, and perceived user-friendliness.

Results

We observed partial construct validity for Simulators A and B and considered face validity satisfactory for both simulators regarding the outer appearance and the simulated human joint, but barely satisfactory for the instruments. Both simulators were perceived as educationally valuable; Simulator B scored higher for the variety of skills trained, whereas both were judged good preparation for real-life arthroscopy. User-friendliness was judged better for Simulator B, although both were graded satisfactory. The perception of usefulness did not differ with level of experience.

Conclusions

Our observations suggest training on either simulator is reasonable preparation for real-life arthroscopy, although there is room for improvement for both simulators.

Clinical Relevance

These simulators provide training in surgical skills without compromising patient safety.

Introduction

Substantial surgical skills are required to perform an arthroscopic procedure without risking iatrogenic injury to articular cartilage and within the routine time scheduled for the operation [16, 17, 20, 23]. Learning arthroscopic skills takes considerable time and entails an increased risk of surgical errors during the early stages of the learning curve when operating on patients [4, 17, 24]. The traditional learning model, in which the trainee is supervised continuously by the surgeon, attempts to minimize these surgical errors. However, as the training time for acquiring arthroscopic skills is being reduced [8, 12] and societal demands for high-quality healthcare increase [23], initiatives have been taken to train basic skills away from the operating room [8, 12].

Training of arthroscopic surgical skills preferably is performed with actual instrument handling. This approach is supported by the theory that skilled motor behavior relies on accurate predictive models of our body and of the environment we interact with (eg, instruments) [5, 14, 28, 33, 34]. These predictive models are stored in our central nervous system, and the best available model is selected for a given task. A key feature of this theory is that these predictive models are tuned, updated, and learned through feedback from our sensory organs (vision and proprioception).

This requires medical simulators that facilitate adequate training. A broad spectrum of simulators has been described in the literature. Traditionally, cadaveric material has been used as a substitute for live patients [10, 20]. Its importance is evident; however, its disadvantages are the limited availability of cadaveric material and the preparation time required. Two types of simulators have been introduced to overcome these disadvantages: anatomic bench models [25, 32] and virtual reality systems [4, 15, 17, 26, 35]. As these simulator developments have reached maturity, they have become commercially available. However, it is unclear whether these simulators qualify as suitable means of training arthroscopic skills.

We therefore addressed the following questions: (1) do commercial simulators have construct (times to perform tasks) and face validity (realism), and (2) is the perception of usefulness (educational value and user-friendliness) related to level of experience?

Materials and Methods

On February 18, 2009, we performed a systematic search with the Google™ and Yahoo!® search engines, as these cover the largest set of Web pages [1, 7]. Combinations of the following search terms were used: arthroscopy, simulator, orthopaedic, models, simulation, and trainer. A complementary search was performed in classification code G09B23/28 of the patent database Esp@cenet®. Eight different physical and virtual reality arthroscopy simulators were commercially available. Each company was sent an invitation with the request to provide its simulator at our institute for 2 weeks. Two companies agreed to participate: Toltech Knee Arthroscopy Simulator (Touch of Life Technologies, Aurora, CO, USA) and InsightArthroVR® Arthroscopy Simulator (GMV, Madrid, Spain). The other companies [6, 9, 19, 21, 25, 29, 30] declined for various reasons unrelated to financial issues.

The Toltech Knee Arthroscopy Simulator (Simulator A) is a virtual reality simulator for arthroscopic knee surgery with two handles that give haptic feedback (Fig. 1) (Appendix 1). The InsightArthroVR® Arthroscopy Simulator (Simulator B) is a virtual reality simulator for arthroscopic knee and shoulder surgery with a multitool that gives haptic feedback (Fig. 2) (Appendix 1).

Fig. 1 A photograph shows a participant performing tasks on Simulator A.

Fig. 2 A photograph shows a participant performing tasks on Simulator B.

We recruited 37 participants, including (1) all staff members practicing arthroscopy routinely and present at the time of testing (except the main researcher GMMJK), (2) all residents present at the time of testing, and (3) medical students and researchers of our orthopaedic department. The participants were divided into three groups having different levels of arthroscopic experience: novices who had never performed an arthroscopic procedure, intermediates who had performed up to 59 arthroscopies, and experts who had performed more than 60 arthroscopies. This boundary level of 60 arthroscopies was based on the average opinion of fellowship directors who were asked to estimate the number of operations that should be performed to allow a trainee to perform unsupervised meniscectomies [24]. Simulator A was evaluated by 22 participants in April 2009 and Simulator B by 22 participants in October 2009 (Fig. 3). One participant had reached a higher level of experience between those times. The corresponding subgroups had similar characteristics (Fig. 3).

Fig. 3 A flowchart shows the participant population. Subgroups were made on arthroscopic experience at three levels based on the number of arthroscopies performed: novices (0), intermediates (1–59), and experts (> 60). Seven participants evaluated both simulators. The age in years and the number of attended arthroscopies (“Observation”) are expressed as median with range in parentheses. The number of participants who previously had used a simulator (“Simulator”) or had experience in playing computer games (“Games”) is shown.

Each participant was scheduled for a maximum of 30 minutes and had no opportunity to familiarize himself or herself with either simulator before the experiment. The researcher demonstrated how to select the exercises and how to perform the calibration protocol and the test tasks.

The assessment of construct validity (time to perform a task) was based on one basic navigation task. As the simulators were unlikely to offer an identical navigation task, one navigation task that could be performed on both simulators was prescribed for comparison. With the arthroscope placed in the anterolateral portal and the probe in the anteromedial portal, nine anatomic landmarks had to be probed sequentially: medial femoral condyle, medial tibial plateau, posterior horn of the medial meniscus, midsection of the medial meniscus, ACL, lateral femoral condyle, lateral tibial plateau, posterior horn of the lateral meniscus, and midsection of the lateral meniscus [32]. The participants were asked to repeat this navigation task up to five times within a limit of 10 minutes. The navigation task time was defined as described previously [32] and was determined from a separate video recording of the simulator monitor on which the virtual intraarticular joint is presented. We recorded the median time per experience group for each of the five repetitions of the navigation task.
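For illustration only, the sketch below shows how this outcome measure could be tabulated from a hypothetical long-format record of task times; the column names and example values are ours (the values loosely echo the first-repetition medians reported in the Results), not the authors' actual data handling.

```python
# A minimal sketch, assuming a long-format table with one row per
# completed repetition, of the construct-validity outcome: the median
# navigation-task time per experience group for each repetition.
import pandas as pd

times = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3],
    "group": ["novice", "novice", "intermediate", "intermediate",
              "expert", "expert"],
    "repetition": [1, 2, 1, 2, 1, 2],
    "task_time_s": [447, 390, 129, 118, 125, 98],
})

# Median task time per experience group for each repetition,
# corresponding to the values plotted in Figure 4.
medians = (
    times.groupby(["repetition", "group"])["task_time_s"]
         .median()
         .unstack("group")
)
print(medians)
```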

Face validity (realism), educational value, and user-friendliness were determined by giving the participants a second task, consisting of one or more exercises characteristic of the particular simulator, and by asking them to fill out a questionnaire afterward. The exercises were selected by the faculty surgeon (GMMJK) and the company to ensure they best represented the capabilities of the simulator. Assistance in performing the exercises was given only if a participant failed to make progress for 2 minutes. Task performance was pointed out to the participants. The characteristic exercise chosen for Simulator A was “inspection of the suprapatellar pouch with only the 30° arthroscope.” This exercise is set up in three stages: watching an instruction video of the exercise, performing the exercise once guided by example hint-images in a stepwise sequence, and performing the complete exercise once again without guidance. The exercise chosen for Simulator B was threefold: microfracture technique to treat a cartilage lesion in the femoral condyle, visual exploration and probing of a superior labrum anterior-posterior (SLAP) lesion, and placement of three suture anchors repairing a Bankart lesion (shoulder instability). All three exercises were preceded by textual instructions and had to be performed once.

The questionnaire consisted of questions regarding general information (Fig. 3); face validity of the outer appearance of the simulator, the intraarticular virtual joint, and the virtual instruments (Table 1); educational value; and user-friendliness (Table 2). Questions were answered using a 10-point numerical rating scale (NRS) (eg, 0 = completely unrealistic and 10 = completely realistic) or were dichotomous, requiring a yes/no answer. A 10-point NRS was chosen because all participants were Dutch and this grading system is used at all Dutch educational institutions; thus, we expected the grading to be based on a uniform interpretation of the NRS. A value of 7 or greater was considered sufficient. Some questions featured a “not applicable (N/A)” answer option, which could be used solely by novices, as these questions required prior knowledge of the real-life arthroscopic situation. For the same reason, only the answers from the expert and intermediate groups were used on simulator realism and educational value, and only answers from the novice and intermediate groups were used on user-friendliness.
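As an illustration of the scoring rules just described, the following sketch (hypothetical column names and answers, not the authors' code) applies the group-dependent inclusion rules and the sufficiency threshold of 7:

```python
# A minimal sketch, under assumed column names, of the questionnaire
# scoring rules: 10-point NRS answers with a threshold of 7 for
# "sufficient", N/A permitted only for novices, and group-dependent
# inclusion (experts and intermediates for realism/educational value;
# novices and intermediates for user-friendliness).
import numpy as np
import pandas as pd

answers = pd.DataFrame({
    "group": ["novice", "intermediate", "expert"],
    "realism_anatomy": [np.nan, 7, 6],          # N/A (NaN) only for novices
    "userfriendly_instructions": [8, 7, 9],
})

# Realism: experts and intermediates only (novices lack the real-life reference).
realism = answers.loc[answers["group"].isin(["intermediate", "expert"]),
                      "realism_anatomy"]
print("mean realism:", realism.mean(), "| sufficient:", realism.mean() >= 7)

# User-friendliness: novices and intermediates only.
uf = answers.loc[answers["group"].isin(["novice", "intermediate"]),
                 "userfriendly_instructions"]
print("mean user-friendliness:", uf.mean(), "| sufficient:", uf.mean() >= 7)
```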

Table 1.

Questions addressing face validity

Face validity aspect: Question(s)

Outer appearance:
- What is your opinion of the outer appearance of this simulator?
- Is it clear in which joint you will be operating?
- Is it clear which portals are being used?
Intraarticular joint:
- How realistic is the intraarticular anatomy?
- How realistic is the texture of the structures?
- How realistic is the color of the structures?
- How realistic is the size of the structures?
- How realistic is the size of the intraarticular joint space?
- How realistic is the arthroscopic image?
Instruments:
- How realistic do the instruments look?
- How realistic is the motion of your instruments?
- How realistic does the tissue feel when you are probing?

All questions were answered on a 10-point numerical rating scale.

Table 2.

Questions addressing educational value and user-friendliness

Parameter: Question(s)

Educational value I:
- The simulator allows training of joint inspection*
- The simulator allows training of therapeutic intervention*
- The simulator allows training of joint irrigation*
- The variation of exercises offered by the simulator is adequate*
- Difference in required skill level between exercises is adequate*
Educational value II:
- The simulator is a good way to prepare for a real-life arthroscopic operation*
User-friendliness I:
- How clear are the instructions to start an exercise on the simulator?
- How clear is the presentation of your performance by the simulator?
- Is it clear how you can improve your performance?
- How motivating is the way the results are presented to improve your performance?
User-friendliness II:
- I felt the need to read a manual before operating the simulator*

* Questions requiring a dichotomous yes/no answer; all other questions were answered on a 10-point numerical rating scale.

The presence of normal distributions of task times was assessed with Kolmogorov-Smirnov tests. Owing to small sample sizes and skewed distributions, the task times were analyzed nonparametrically. Construct validity was determined for each simulator independently by using Kruskal-Wallis tests to assess the overall presence of differences in task times among the three experience groups for each of the five task repetitions. The significance level was adjusted for multiple comparisons with the Bonferroni-Holm procedure (alpha = 0.05) [11]; when we detected significant differences, we performed pair-wise comparisons between the experience groups using Mann-Whitney U tests. The scores of the three separate aspects of face validity of the simulators (Table 1) and User-friendliness I (Table 2) were expressed as mean summary scores of the corresponding questions. Educational Value I (Table 2) was expressed as a sum score of five dichotomous questions and ranged from 0 to 5. The mean summary scores (Face Validity and User-friendliness I) were verified for normality with Kolmogorov-Smirnov tests, expressed as mean and SD, and assessed for differences between the two simulators with Student’s t tests. The ordinal Educational Value I scores were presented as medians with ranges and analyzed with a Mann-Whitney U test. The dichotomous questions (Educational Value II and User-friendliness II), expressed as categorical yes/no answers, were presented as frequencies and percentages and analyzed with chi-square tests or Fisher’s exact test (when one or more cells had expected counts less than five). Here too, the significance level was adjusted for multiple comparisons with the Bonferroni-Holm procedure (alpha = 0.05) [11].
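As an illustration of the construct-validity pipeline, the sketch below (assumed data layout with hypothetical numbers, not the authors' code) runs the Kruskal-Wallis omnibus test per repetition, applies the Holm step-down adjustment across the family of repetitions, and performs Mann-Whitney U post hoc comparisons only where the adjusted omnibus test is significant:

```python
# A minimal sketch of the nonparametric analysis described above.
from scipy.stats import kruskal, mannwhitneyu
from statsmodels.stats.multitest import multipletests

# Hypothetical task times (seconds) per group for each repetition.
reps = [
    {"novice": [447, 500, 181], "intermediate": [129, 60, 311],
     "expert": [125, 68, 245]},
    # ... repetitions 2-5 would follow the same structure
]

# Omnibus Kruskal-Wallis test across the three experience groups per repetition.
p_values = [kruskal(r["novice"], r["intermediate"], r["expert"]).pvalue
            for r in reps]

# Holm step-down adjustment at alpha = 0.05 over the family of repetitions.
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="holm")

# Pair-wise Mann-Whitney U post hoc tests, only for significant omnibus tests.
for r, significant in zip(reps, reject):
    if significant:
        for a, b in [("novice", "intermediate"), ("novice", "expert"),
                     ("intermediate", "expert")]:
            u, p = mannwhitneyu(r[a], r[b], alternative="two-sided")
            print(f"{a} vs {b}: U = {u}, p = {p:.3f}")
```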

Results

With the exception of two participants, the novice group completed only the first repetition of the navigation task, or none at all, on Simulator A within the time limit (Fig. 4). The novices were slower (p = 0.001) in completing the first repetition. Post hoc analysis showed the navigation task times of the experts (median, 125 seconds; range, 68–245 seconds) and the intermediates (median, 129 seconds; range, 60–311 seconds) were faster (p < 0.001 and p = 0.01, respectively) than those of the novices (median, 447 seconds; range, 181–600 seconds) (Fig. 4). The task times of the intermediates and the experts were similar (p = 0.93). No differences were observed between the experience groups for the other repetitions. For Simulator B, the novices completed the second and third repetitions more slowly (p = 0.005 and p = 0.008, respectively). The navigation task times of the first repetition of the experts (median, 90 seconds; range, 65–177 seconds) were not faster than those of the novices (median, 165 seconds; range, 109–605 seconds) (p = 0.019; not significant after adjustment for multiple comparisons) or those of the intermediates (median, 105 seconds; range, 75–204 seconds) (p = 0.503) (Fig. 4). Post hoc comparisons of the second and third repetitions showed faster (p = 0.001 and p = 0.002, respectively) task times of the experts compared with those of the novices (Fig. 4). The task times of the intermediates did not differ from those of the experts or novices for these repetitions.

Fig. 4A–B The graphs show the results of the navigation repetitions for (A) Simulator A and (B) Simulator B. The results are presented as medians with ranges. Construct validity was observed for the first repetition of Simulator A and the second and third repetitions of Simulator B.

The mean face validity scores of the outer appearance and simulated intraarticular joint were 7.3 (SD, 1.4) and 6.4 (SD, 1.4) for Simulator A and 8.4 (SD, 0.6) and 6.1 (SD, 0.9) for Simulator B, respectively. Thus, they were judged sufficient by the intermediates and experts (Fig. 5). The mean face validity score of the simulated instruments was 4.9 (SD, 1.5) for Simulator A and 5.7 (SD, 1.2) for Simulator B. Thus, the face validity of the simulated instruments was judged barely sufficient for both simulators (Fig. 5). Differences were not observed for any aspect of face validity between the simulators. The median sum score for Educational Value I was 3 (range, 1–5) for Simulator A and 5 (range, 2–5) for Simulator B (p = 0.009). Simulator A was judged a good way to prepare for a real-life arthroscopic operation (Educational Value II) by 10 of 11 participants (91%), as was Simulator B by all 13 participants (100%) (p = 0.46). The mean score of 8.3 (SD, 1.0) for User-friendliness I of Simulator B was greater (p < 0.001) than that for Simulator A (6.5 [SD, 1.3]) (Fig. 5). More (p = 0.002) respondents felt the need to read the manual (User-friendliness II) before operating Simulator A (11 of 15, 73.3%) than before operating Simulator B (two of 13, 15.4%).

Fig. 5 A graph shows the results of the normalized sum scores for face validity and User-friendliness I. The values are expressed as means with SDs. User-friendliness I is the combined opinion of the intermediates and novices; the other columns are the combined opinions of the experts and the intermediates. The face validity of the outer appearance and intraarticular joint was judged sufficient. The face validity of the instruments was judged barely sufficient for both simulators. Differences were not observed for any aspect of face validity between the simulators. The mean score for User-friendliness I of Simulator B was greater (p < 0.001) than that for Simulator A.

Discussion

As arthroscopic simulators gain maturity and become commercially available, it is unclear whether they are suitable for use in training. We therefore addressed the following questions: (1) do commercial simulators have construct (times to perform tasks) and face validity (realism), and (2) is the perception of usefulness (educational value and user-friendliness) related to level of experience?

We note limitations to our study. First is the relatively small number of participants in each experience group, which could have led to nonsignificant results and the skewed distribution of task times. The groups could not be enlarged owing to logistic limitations; however, care was taken to include all experts and intermediates present at the time of testing to prevent selection bias. Other evaluation studies with simulators have experienced similar problems in recruiting participants [3, 17, 22, 32]. Second is the absence of transfer or predictive validation, which was not feasible within the available time frame. Studies performed with similar arthroscopy simulators [9, 12] do show training on these systems shortens the operative learning curve. These findings are in line with the opinion of all participants that training on either simulator will be good preparation before performing real-life arthroscopy. Third, our study is limited to two arthroscopic simulators, which are not very distinct from each other, as both are virtual reality systems with haptic feedback devices; this is reflected in the results. Had other types of simulators been included, such as anatomic bench models, a wider palette of alternatives could have been described and differences might have been more pronounced. Fourth, only one navigation task was used to observe construct validity. The choice of this task is in line with tasks evaluated in other studies [12, 17, 22, 32], and it is indicated as an important arthroscopic skill to master before operating in the theater [27]. Fifth, only a few tasks were used to determine face validity, educational value, and user-friendliness. These tasks were chosen carefully and reflected the way exercises are built up and feedback on performance is given by each simulator. Therefore, we assumed the participants were given a good impression of the learning environment of each simulator. Sixth, the choice of expert level was somewhat arbitrary, especially for the novice versus intermediate groups. This could have influenced the demonstration of construct validity, as the experience levels might have been insufficiently distinctive.

Neither simulator showed full construct validity (Fig. 4), because the task times did not differ between novices and experts for all repetitions and were similar between intermediates and experts. These findings are comparable to those in the study by Srivastava et al. [31], who used a similar division of experience levels and found no substantial differences between the groups. They speculated their results may have been influenced by the fact that experts knew what to expect and novices were highly motivated; the same could be true for our study. A more detailed comparison with other studies cannot be made, as the criteria to qualify as expert, intermediate, or novice differ among studies [17, 26, 32] or a different acceptable significance level was chosen [2]. We recommend setting uniform experience levels when performing this type of study. By using the study of O’Neill et al. [24], we aimed to provide a solid foundation for assigning experience levels. Task time was chosen as an outcome measure because it is widely used and validated in assessing surgical skills learning, it can be measured on all commercially available arthroscopy simulators, and it makes overall objective comparison possible.

Face validity was observed for both simulators, although there is room for improvement. The presence of tactile feedback in an arthroscopy simulator is considered essential to imitate clinical practice adequately and to train safe manipulation [22, 36]. Intermediates and experts indicated tissue probing was unrealistic on both simulators (Fig. 5). Training skills without receiving natural feedback could lead to an offset in the internal models stored in our central nervous system, which might increase errors in the operating room. Providing realistic force feedback for cutting or shaving is another challenge to implement in these simulators [15]. The intraarticular joint space of Simulator B was considered too large. Additionally, as both simulators present virtual reality images, they leave an artificial impression; this could be improved with the latest animation techniques used in the gaming industry. These face validity results are comparable to those of other studies, in which imitation of the real-life situation generally is sufficient but none is given a perfect score [2, 17, 18, 31, 32]. An explanation could be that simulators that do not resemble a human joint are graded more mildly, as it is obvious they do not resemble reality, whereas simulators that come close to the real-life representation are scrutinized more thoroughly for small deviations.

Educational value was perceived for both simulators by intermediates and experts. This subjective opinion is supported by Issenberg et al. [13], who identified a top 10 list of the most important educational criteria for medical simulators. Both simulators fulfill seven of the 10 criteria, including the most important ones: giving feedback on performance, allowing repetitive practice, and allowing integration into the curriculum. Unfortunately, they do not offer training of precise portal placement, another important skill to master before starting to operate on patients [27].

Overall, Simulator B was considered more user-friendly than Simulator A, although Simulator A also was graded satisfactory (Fig. 5). The feedback given by Simulator B resembles that of mainstream computer games. For both simulators, there is room for improvement. Simulator B offers a larger variety of exercises and is more user-friendly, whereas Simulator A showed a more distinct difference in task time between experts and novices. Teaching surgeons can embrace this type of simulator for implementation in training curricula.

Acknowledgments

We thank Touch of Life Technologies and GMV for providing their simulators free of charge, and we thank all participants in the evaluation.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Appendix 1. Detailed description of simulators

Both simulators require starting a computer before training can begin. Subsequently, the user interface guides the trainee through a calibration protocol, after which each simulator guides the trainee through its range of exercises in a self-learning curriculum.

Simulator A has a Dell PC (Dell, Round Rock, TX, USA) with a quad-core processor and high-end video card, two monitors, and two handles. The left monitor displays a virtual mentor, and the right monitor displays the simulated arthroscopic view (Fig. 1). The two handles represent a 30° arthroscope and a probe, and both incorporate haptic feedback (Phantom®; SensAble Technologies, Woburn, MA, USA). The camera and light source can be swiveled independently. A passive leg is present that allows knee articulation. Simulator A provides a complete self-learning curriculum for training diagnostic knee arthroscopy, with 10 assorted meniscal tears that can be inspected and probed. After an exercise, an overall score (0%–100%) is provided, consisting of a combination of metrics: camera and scope angle, limb flexion, varus/valgus force, position of the tip of the arthroscope or probe, task time, and whether a particular structure has been probed. At the lowest difficulty level, hint images guide the trainee. A beep warns if too much force is exerted on tissue. The purchase price is $85,000, with an optional annual service agreement of $15,000 that covers hardware and software updates.

Simulator B has a Dell PC with a quad-core processor and high-end video card running software version v.5.0.2, interchangeable models representing knee and shoulder anatomy, and one touch-screen monitor that displays the simulated arthroscopic view or the curriculum (Fig. 2). A simulated 30° arthroscopic camera and a multipurpose tool are provided with haptic feedback (Phantom® Omni™; SensAble Technologies). The camera and light source can be swiveled independently. The multipurpose tool represents graspers, a power shaver, a probe, or a “chondropick,” depending on the exercise. The handles of the arthroscope and tool can be interchanged. The knee module contains a passive leg that can be manipulated. Simulator B provides a complete self-learning curriculum that guides the trainee through tasks to train basic diagnostic and therapeutic skills (eg, microfracturing or meniscectomy) in the knee and shoulder: seven meniscal tears and three subacromial and four glenohumeral disorders. After an exercise, Simulator B offers a selection of metrics to provide feedback in a graphic presentation: a score between 0 and 10, a task-dependent score related to task completion, task time, covered distance of camera and instrument, roughness of camera and instrument motion, and instrument collisions. During an exercise, textual hints are given on how to proceed. Competency levels can be configured. The purchase price is $91,283, including the shoulder and knee modules, with an optional annual service agreement of $10,945 that covers hardware and software.

Footnotes

Each author certifies that he or she has no commercial associations (eg, consultancies, stock ownership, equity interest, patent/licensing arrangements, etc) that might pose a conflict of interest in connection with the submitted article.

This work was performed at the Academic Medical Center.

References

1. Barker J, Kupersmith J. Recommended search engines. UC Berkeley Library. Available at: http://www.lib.berkeley.edu/TeachingLib/Guides/Internet/SearchEngines.html. Accessed December 10, 2008.
2. Bayona S, Fernandez-Arroyo JM, Martin I, Bayona P. Assessment study of insightARTHRO VR arthroscopy virtual training simulator: face, content, and construct validities. J Robot Surg. 2008;2:151–158. doi: 10.1007/s11701-008-0101-y.
3. Bliss JP, Hanner-Bailey HS, Scerbo MW. Determining the efficacy of an immersive trainer for arthroscopy skills. Stud Health Technol Inform. 2005;111:54–56.
4. Cannon WD, Eckhoff DG, Garrett WE Jr, Hunter RE, Sweeney HJ. Report of a group developing a virtual reality simulator for arthroscopic surgery of the knee joint. Clin Orthop Relat Res. 2006;442:21–29. doi: 10.1097/01.blo.0000197080.34223.00.
5. Dankelman J, Chmarra MK, Verdaasdonk EG, Stassen LP, Grimbergen CA. Fundamental aspects of learning minimally invasive surgical skills. Minim Invasive Ther Allied Technol. 2005;14:247–256. doi: 10.1080/13645700500272413.
6. DelltaTech. Simendo arthroscopy. Available at: http://www.simendo.eu. Accessed February 18, 2009.
7. Dogpile.com. Different engines, different results: web searchers not always finding what they’re looking for online. Available at: http://www.infospaceinc.com/files/Overlap-DifferentEnginesDifferentResults.pdf. Accessed December 10, 2008.
8. Farnworth LR, Lemay DE, Wooldridge T, Mabrey JD, Blaschak MJ, DeCoster TA, Wascher DC, Schenck RC Jr. A comparison of operative times in arthroscopic ACL reconstruction between orthopaedic faculty and residents: the financial impact of orthopaedic surgical training in the operating room. Iowa Orthop J. 2001;21:31–35.
9. Gomoll AH, Pappas G, Forsythe B, Warner JJ. Individual skill progression on a virtual reality simulator for shoulder arthroscopy: a 3-year follow-up study. Am J Sports Med. 2008;36:1139–1142. doi: 10.1177/0363546508314406.
10. Grechenig W, Fellinger M, Fankhauser F, Weiglein AH. The Graz learning and training model for arthroscopic surgery. Surg Radiol Anat. 1999;21:347–350. doi: 10.1007/BF01631337.
11. Holm S. A simple sequentially rejective multiple test procedure. Scand J Statist. 1979;6:65–70.
12. Howells NR, Gill HS, Carr AJ, Price AJ, Rees JL. Transferring simulated arthroscopic skills to the operating theatre: a randomised blinded study. J Bone Joint Surg Br. 2008;90:494–499. doi: 10.1302/0301-620X.90B4.20414.
13. Issenberg SB, McGaghie WC, Petrusa ER, Lee GD, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach. 2005;27:10–28. doi: 10.1080/01421590500046924.
14. Kaufman HH, Wiegand RL, Tunick RH. Teaching surgeons to operate: principles of psychomotor skills training. Acta Neurochir (Wien). 1987;87:1–7. doi: 10.1007/BF02076007.
15. Lu J, Chen J, Cakmak H, Maass H, Kuhnapfel U, Bretthauer G. A knee arthroscopy simulator for partial meniscectomy training. Proceedings of the 7th Asian Control Conference. Hong Kong, China; 2009:763–767.
16. Mabrey JD, Cannon WD, Gillogly SD, Kasser JR, Sweeney HJ, Zarins B, Mevis H, Garrett WE, Poss R. Development of a virtual reality arthroscopic knee simulator. Stud Health Technol Inform. 2000;70:192–194.
17. McCarthy AD, Moody L, Waterworth AR, Bickerstaff DR. Passive haptics in a knee arthroscopy simulator: is it valid for core skills training? Clin Orthop Relat Res. 2006;442:13–20. doi: 10.1097/01.blo.0000194678.10130.ff.
18. Megali G, Tonet O, Dario P, Vascellari A, Marcacci M. Computer-assisted training system for knee arthroscopy. Int J Med Robot. 2005;1:57–66. doi: 10.1002/rcs.28.
19. Mentice. Arthroscopy. Available at: http://www.mentice.com/. Accessed February 18, 2009.
20. Meyer RD, Tamarapalli JR, Lemons JE. Arthroscopy training using a “black box” technique. Arthroscopy. 1993;9:338–340. doi: 10.1016/S0749-8063(05)80434-7.
21. Mitsubishi Electric Research Laboratories. Knee arthroscopy simulation using volumetric knee models. Available at: http://www.merl.com/projects/kneesystem2/. Accessed February 18, 2009.
22. Moody L, Waterworth A, McCarthy AD, Harley P, Smallwood R. The feasibility of a mixed reality surgical training environment. Virtual Real. 2008;12:77–86. doi: 10.1007/s10055-007-0080-8.
23. Morris AH, Jennings JE, Stone RG, Katz JA, Garroway RY, Hendler RC. Guidelines for privileges in arthroscopic surgery. Arthroscopy. 1993;9:125–127. doi: 10.1016/S0749-8063(05)80359-7.
24. O’Neill PJ, Cosgarea AJ, Freedman JA, Queale WS, McFarland EG. Arthroscopic proficiency: a survey of orthopaedic sports medicine fellowship directors and orthopaedic surgery department chairs. Arthroscopy. 2002;18:795–800. doi: 10.1053/jars.2002.31699.
25. Pacific Research Laboratories, Inc. Sawbones. Available at: http://www.sawbones.com/. Accessed February 18, 2009.
26. Pedowitz RA, Esch J, Snyder S. Evaluation of a virtual reality simulator for arthroscopy skills development. Arthroscopy. 2002;18:E29. doi: 10.1053/jars.2002.33791.
27. Safir O, Dubrowski A, Mirsky L, Lin C, Backstein D, Carnahan A. What skills should simulation training in arthroscopy teach residents? Int J Comput Assist Radiol Surg. 2008;3:433–437. doi: 10.1007/s11548-008-0249-y.
28. Schmidt RA. Motor schema theory after 27 years: reflections and implications for a new theory. Res Q Exerc Sport. 2003;74:366–375. doi: 10.1080/02701367.2003.10609106.
29. Simsurgery. SEP products. Available at: http://www.simsurgery.com/web/. Accessed February 18, 2009.
30. Simulab Corp. Orthopedic products. Available at: http://www.simulab.com. Accessed February 18, 2009.
31. Srivastava S, Youngblood PL, Rawn C, Hariri S, Heinrichs WL, Ladd AL. Initial evaluation of a shoulder arthroscopy simulator: establishing construct validity. J Shoulder Elbow Surg. 2004;13:196–205. doi: 10.1016/j.jse.2003.12.009.
32. Tuijthof GJ, van Sterkenburg MN, Sierevelt IN, van Oldenrijk J, Van Dijk CN, Kerkhoffs GM. First validation of the PASSPORT training environment for arthroscopic skills. Knee Surg Sports Traumatol Arthrosc. 2010;18:218–224. doi: 10.1007/s00167-009-0872-3.
33. Wolpert DM, Ghahramani Z. Computational principles of movement neuroscience. Nat Neurosci. 2000;3(suppl):1212–1217. doi: 10.1038/81497.
34. Wolpert DM, Ghahramani Z, Flanagan JR. Perspectives and problems in motor learning. Trends Cogn Sci. 2001;5:487–494. doi: 10.1016/S1364-6613(00)01773-3.
35. Ziegler R, Fischer G, Muller W, Gobel M. Virtual-reality arthroscopy training simulator. Comput Biol Med. 1995;25:193–203. doi: 10.1016/0010-4825(94)00038-R.
36. Zivanovic A, Dibble E, Davies B, Moody L, Waterworth A. Engineering requirements for a haptic simulator for knee arthroscopy training. Stud Health Technol Inform. 2003;94:413–418.
