Author manuscript; available in PMC: 2021 Jun 6.
Published in final edited form as: J Surg Res. 2017 Mar 6;214:203–208. doi: 10.1016/j.jss.2017.02.071

A comparison of a homemade central line simulator to commercial models

Rebecca F Brown a,*,#, Christopher Tignanelli b,#, Joanna Grudziak a, Shelley Summerlin-Long a, Jeffrey Laux c, Andy Kiser d, Sean P Montgomery a
PMCID: PMC8179971  NIHMSID: NIHMS1705066  PMID: 28624045

Abstract

Background:

Simulation is quickly becoming vital to resident education, but commercially available central line models are costly and little information exists to evaluate their realism. This study compared an inexpensive homemade simulator to three commercially available simulators and rated model characteristics.

Materials and methods:

Seventeen physicians, all having placed >50 lines in their lifetime, completed blinded central line insertions on three commercial and one homemade model (made of silicone, tubing, and a pressurized pump system). Participants rated each model on the realism of its ultrasound image, cannulation feel, manometry, and overall. They then ranked the models based on the same variables. Rankings were assessed with Friedman’s and post hoc Conover’s tests, using alphas 0.05 and 0.008 (Bonferroni corrected), respectively.

Results:

The models differed significantly (P < 0.0004) in rankings across all dimensions. The homemade model was ranked best on ultrasound image, manometry measurement, cannulation feel, and overall quality by 71%, 67%, 53%, and 77% of raters, respectively. It was statistically superior to the second-ranked model in all dimensions (P < 0.003) except cannulation feel (P = 0.134). Ultrasound image and manometry measurement received the lowest ratings across all models, indicating less realistic simulation. The cost of the homemade model was $400, compared with $1000-$8000 for commercial models.

Conclusions:

Our data suggest that an inexpensive, homemade central line model is as good or better than commercially available models. Areas for potential improvement within models include the ultrasound image and ability to appropriately measure manometry of accessed vessels.

Keywords: Central line insertion education, Central line insertion simulation, Patient simulation, Medical education, Resident training, Surgical procedures education

Introduction

Simulation is quickly becoming a vital tool for resident education. Simulation-based education has been used in multiple areas of medical education to develop residents’ technical skills and teach them safe practices.1 Central line simulation is one of the most frequently simulated procedural techniques in resident education.

Insertion and use of central venous access devices (CVAD) constitute a routine component of daily medical practice across specialties, especially in the intensive care unit and the operative suite. Improper placement of CVAD has been associated with significant morbidity that can be prevented with appropriate training.2 Unfortunately, CVAD complications are not rare, with reported rates ranging from 4% to 45%.3-5 Common adverse events related to CVAD placement include infection, arrhythmia, arterial puncture and cannulation, pneumothorax, and hematoma. Major adverse events include air embolism, superior vena cava perforation, aortic perforation, and cardiac tamponade.5 Of all the risk factors for central line complications, the strongest predictor is the number of unsuccessful insertion attempts, a figure that correlates directly with the procedural experience of the operator.6,7 Inexperience requiring multiple insertion attempts has also been shown to increase the rate of central line–associated bloodstream infection (CLABSI), a complication associated with significant health care costs.8,9

The impact of these complications is significant, and efforts to minimize and prevent their occurrence should be standard practice at teaching hospitals. Simulation allows for proper CVAD insertion training without putting patients at risk of harm. Multiple studies have demonstrated that simulation-based education for CVAD insertion decreases CLABSI rates.10 In addition to reducing CLABSI rates, simulation is associated with improved patient outcomes, including fewer insertion attempts and reduced pneumothorax rates.11 Another study demonstrated that the use of manometry to measure intravascular pressure before vessel dilation essentially eliminates arterial injury due to arterial cannulation, a complication that occurs in 0.1%-0.5% of CVAD insertions.12

Unfortunately, commercially available CVAD simulation models are costly, with prices ranging from $1000 to $8000,13-15 and are often limited both in functionality and in their ability to reproduce the variable anatomy frequently encountered in clinical practice.16 In addition, little information exists to evaluate how well each model mimics human anatomy and physiology (e.g., the venous versus arterial pressures of model vessels), information necessary to perform key portions and safety checkpoints of procedures in clinical practice. As simulation continues to gain importance in resident training, simulation centers at many institutions have started creating homemade models to reduce cost while providing quality comparable to commercially available simulators.16-19 In addition, homemade models can be easily modified to present the learner with challenging anatomy reflective of real-life practice. This study compared an inexpensive homemade internal jugular central line insertion simulator, developed to meet the needs of an institution-wide central line insertion training initiative,20 to three high-end commercially available simulators; various model characteristics were also rated.

Materials and methods

Institutional review board approval was obtained prior to initiation of the study. Upper level residents, fellows, and attending physicians from the Departments of Surgery, Anesthesia, and Internal Medicine at the University of North Carolina were recruited via email and word of mouth to participate in this study on a voluntary basis.

Four internal jugular central line simulator models (three commercial models [CMA, CMB, and CMC] and a homemade model [HM] constructed at our onsite simulation center; see Fig. 1) were set up per manufacturer recommendations and confirmed to be functioning appropriately at the beginning of the study. Models were draped such that only the area to be used for ultrasound and access was visible, blinding participants to recognizable branding or identifiable markings. Central line kits were provided at each of the four stations; these were reset after each insertion, and broken or altered components were replaced to ensure similar experiences for all participants. Coinvestigators were available to answer questions or troubleshoot model dysfunction for all participants.

Fig. 1 –

Photographs of the homemade internal jugular model created in our simulation laboratory with a silicone mold and tubing and a pressurized pumping system. Overall supply cost for this model is $400.00.

Participants, deemed expert based on experience of having placed >50 lines in their lifetime, were asked to perform ultrasound-guided central line insertions on each of the four central line training models in random order and subsequently rate each model’s realism. A 10-point Likert scale was used, rating the following characteristics: ultrasound image, vessel appearance, tissue feel, ability to measure manometry, resistance when placing line, and overall impression (see Appendix A). After completion of all four insertions, participants then ranked the models against each other from best to worst based on the following characteristics: ultrasound image, manometry measurement, cannulation feel, and overall impression (see Appendix B). Surveys were collected in an opaque envelope to protect participant privacy.

Statistical analyses

Our primary analysis is a comparison of the rankings of the models. To assess the rankings, we used Friedman’s test followed by Bonferroni corrected (alpha = 0.008) post hoc Conover’s tests21 of the models in sequence of mean rank. A prestudy power calculation determined we would need 17 participants to achieve 80% power while holding the family-wise type I error rate to 5%. We also gathered ordinal ratings on corresponding variables for exploratory purposes. We examined the pattern of correlations among the variables and fit a mixed effects ordinal logistic regression model predicting the overall rating from the component ratings to identify which aspects seemed to be most relevant to the models’ quality. The analyses were conducted using R 3.3.0 and the ordinal and Pairwise Multiple Comparison of Mean Ranks (PMCMR) packages.22-24
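For readers unfamiliar with Friedman’s test, the statistic can be computed directly from rank sums. The sketch below is an illustrative Python version (the actual analyses were run in R) using hypothetical rankings, not the study data:

```python
def friedman_statistic(rankings):
    """Friedman chi-square statistic for n raters each ranking k models.

    rankings: one list of ranks (1 = best) per rater; ties not handled.
    The result is compared against a chi-square distribution with k - 1 df.
    """
    n = len(rankings)                     # number of raters
    k = len(rankings[0])                  # number of models ranked by each rater
    rank_sums = [sum(r[j] for r in rankings) for j in range(k)]
    return 12.0 / (n * k * (k + 1)) * sum(s * s for s in rank_sums) - 3 * n * (k + 1)

# Hypothetical data: 17 raters in perfect agreement on 4 models gives the
# maximum possible statistic, n * (k - 1) = 51.
chi2 = friedman_statistic([[1, 2, 3, 4]] * 17)
```

With k = 4 models the statistic is referred to a chi-square distribution with 3 degrees of freedom; pairwise post hoc comparisons (Conover’s tests in the study) are then judged at the Bonferroni-corrected alpha of 0.008.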

Results

Seventeen individuals participated in the study. The table displays basic demographic information about the participants, including level of training, department, total ultrasound-guided lines placed in their lifetime, and total lines placed in the past month.

Table –

Demographic information about study participants.

Characteristic n (%)
Level of training
 Resident (>PGY3) or fellow 7 (41)
 Attending 10 (59)
Department
 Anesthesia 7 (41)
 Surgery 8 (47)
 Internal medicine 1 (6)
Lines placed in lifetime
 50-100 6 (35)
 >100 11 (65)
Lines placed in last 30 d
 Less than 4 9 (53)
 Greater than or equal to 4 8 (47)

The internal jugular central line insertion models differed significantly in rankings across all four dimensions. HM, the homemade model, was typically ranked best, whereas the CMA was typically ranked worst (Fig. 2).

Fig. 2 –

The proportion of times each model was given each ranking for each metric.

The models’ rankings differed on overall quality (χ2 = 33.5, df = 3, P < 0.001). In order of mean ranking from best to worst, the models were: HM, then CMC, CMB, and CMA. Contrasting these in order, HM was significantly better in overall quality than CMC, and CMB was better than CMA (both P < 0.001), but CMC and CMB did not significantly differ (P = 0.34).

This ranking pattern was largely consistent across the three component dimensions: cannulation feel, manometry measurement, and ultrasound image. The models differed on cannulation feel (χ2 = 22.5, df = 3, P < 0.001). In order of mean ranking from best to worst, the models were: HM, then CMC, CMB, and CMA. HM was not significantly better than CMC (P = 0.13), and CMC was not significantly better than CMB (P = 0.29), but CMB was better than CMA (P < 0.001).

The rankings also differed on manometry measurement (χ2 = 18.5, df = 3, P < 0.001), with HM ranked number one, then CMB, CMC, and CMA. Contrasting these, HM was significantly better than CMB (P < 0.001), but the latter two contrasts were not significant (P = 0.34 and 1.0, respectively).

Finally, the models differed on quality of ultrasound image (χ2 = 31.7, df = 3, P < 0.001). HM was again preferred, followed by CMC, CMB, and CMA. HM was ranked significantly better than CMC (P = 0.003), and CMB was better than the CMA (P < 0.001), but CMC and CMB did not significantly differ (P = 0.39).

We also gathered rating data for the models on these characteristics. Boxplots of the distributions of ratings are displayed in Figure 3. Data for vessel appearance and resistance were excluded because their rating scores correlated strongly and positively with those for ultrasound image and tissue/cannulation feel, respectively, and therefore contributed no additional information. The characteristic rating data are consistent with the model ranking data: the Spearman correlations between rankings and ratings for ultrasound image, manometry measurement, cannulation feel, and overall quality are −0.79, −0.45, −0.59, and −0.82, respectively. (These correlations are negative because a model ranking of 1 is best and 4 is worst, whereas a characteristic rating of 10 is most realistic and 1 is least realistic.)
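Spearman’s correlation is simply Pearson’s correlation computed on ranks, which makes the sign flip intuitive: a 1-is-best ranking and a 10-is-best rating move in opposite directions. A minimal pure-Python sketch with made-up values (not the study data):

```python
def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank-transformed data."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):              # assign average ranks to tied values
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0
            for t in order[i:j + 1]:
                r[t] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical: rankings 1..4 (1 = best) versus ratings that fall as rank worsens
# yield a correlation of -1, mirroring the negative correlations reported above.
rho = spearman([1, 2, 3, 4], [10, 7, 4, 2])
```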

Fig. 3 –

Boxplots of ratings of model characteristics. A rating of 1 indicates least realistic, whereas 10 means most realistic. The “whiskers” extend to the lowest and highest data values. Note that the boxplot for CMA looks unusual for Ultrasound Image because all ratings were 1s.

The rating data allow us to explore the degree to which we can assess independent aspects of model quality. We formed a correlation matrix from the four rating variables using polychoric correlations and performed an eigendecomposition of it. Only the first eigenvalue was greater than 1.0 (3.0; the next was 0.64), accounting for 76% of the multidimensional variance. This implies there may be only one real dimension of variation in the ratings, meaning the models were consistently good or poor across the dimensions.
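The eigendecomposition step can be illustrated with power iteration on a hypothetical correlation matrix. The uniform off-diagonal correlation of 0.68 below is chosen only because it reproduces the reported 76% figure for a 4 × 4 matrix; it is not the study’s polychoric matrix:

```python
def top_eigenvalue(mat, iters=500):
    """Largest eigenvalue of a symmetric matrix via power iteration."""
    n = len(mat)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    av = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(v[i] * av[i] for i in range(n))   # Rayleigh quotient

# Hypothetical 4 x 4 correlation matrix with uniform off-diagonal 0.68:
# leading eigenvalue = 1 + 3 * 0.68 = 3.04, i.e. 3.04 / 4 = 76% of variance.
corr = [[1.0 if i == j else 0.68 for j in range(4)] for i in range(4)]
lam = top_eigenvalue(corr)
```

A leading eigenvalue that dominates the rest (here 3.04 against a remainder summing to 0.96) is the signature of a single underlying dimension of variation.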

Finally, we explored the possible determinants of the overall quality rating. We fit a mixed effects ordinal logistic regression model to the rating data, with the overall quality rating as the response. Because CMA was consistently ranked poorly and because the relationships between the variables may have been different for this simulation model (i.e., an interaction), the CMA’s data were excluded from the exploratory regression model. A likelihood ratio test dropping all variables (except dummies for the simulation models, which have been established to differ) was highly significant (χ2 = 66.7, df = 3, P < 0.001), establishing that the regression model does contain information about the overall rating. The ratings for all three variables (cannulation feel, manometry measurement, and ultrasound image) spanned the full range from 1-10, and the standard deviations of the ratings were similar (ranging from 1.7 for cannulation feel to 2.9 for manometry). This allows us to tentatively interpret the magnitude of the coefficients and their significance as a measure of the importance of the variables. The most significant variable was cannulation feel (exp(β) = 3.7, z = 4.25, P < 0.001), followed by ultrasound image (exp(β) = 2.2, z = 3.67, P < 0.001), and manometry (exp(β) = 1.4, z = 2.60, P = 0.009).
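For context, exp(β) in a proportional-odds (cumulative logit) model is the multiplicative change in the odds of a higher overall rating per one-point increase in a predictor. The sketch below uses hypothetical cut-points together with the reported odds ratio of 3.7 for cannulation feel purely as an illustration; it is not a refit of the study’s mixed effects model:

```python
import math

def cumulative_probs(thresholds, eta):
    """P(rating <= j) for each cut-point theta_j: logit P(Y <= j) = theta_j - eta."""
    return [1.0 / (1.0 + math.exp(-(t - eta))) for t in thresholds]

# Hypothetical cut-points collapsing the 10-point overall rating to 3 bands.
thetas = [-1.0, 1.5]
beta = math.log(3.7)                   # reported odds ratio for cannulation feel
low = cumulative_probs(thetas, 0.0)    # baseline predictor value
high = cumulative_probs(thetas, beta)  # predictor increased by one point
# Every P(Y <= j) shrinks when eta rises, i.e., probability mass shifts
# toward higher overall ratings.
```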

Discussion

Historically, medical procedural training was based on the mantra of “see one, do one, teach one”, with patients serving as test subjects for new learners. This historic approach theoretically puts the patient at risk for significant harm due to iatrogenic injury. As medicine has evolved, so has its focus on patient safety and the quality of medical care delivered. This relatively new direction, coupled with resident work hour restrictions, significantly reduces the procedural exposure medical students and residents have throughout the course of their training.25,26 In response to these trends and to encourage deliberate practice, the use of procedural simulation models and national simulation mandates (e.g., Fundamentals of Laparoscopic Surgery, Fundamentals of Endoscopy) are being increasingly employed. The success of deliberate practice hinges on the accuracy of the training model utilized. Key procedural elements such as ultrasound guidance of the needle, traversing subcutaneous tissue, identification of the needle tip, venipuncture, and visualization of blood return in the syringe make up the repetitive practice needed in the development of expertise in CVAD placement.

As the use of simulation for resident training becomes more ubiquitous, the cost of obtaining accurate training models can present a significant challenge to programs across the country. Even after the initial investment of $1500-$8000 for a commercially available internal jugular central line insertion model, the inserts necessary to maintain simulator functionality require a continual and often significant investment. When developing a hospital training protocol for use as part of an institution-wide course for incoming residents,20 we were unable to identify any studies that evaluated the relationship between the financial investment in these commercial models and the quality of the training experience delivered. Furthermore, no studies were identified comparing commercial models with each other, much less with a more affordable homemade simulator. Thus, in this study, we evaluated whether more costly internal jugular commercial models provide a more realistic or better training apparatus than the less expensive $400 homemade model developed at our institution.

Conclusions

Our study demonstrates that an inexpensive, homemade internal jugular central line insertion model created using readily available materials and a pumping system not only mimics qualities found in commercially available models but can also accurately simulate human anatomy and physiology. A panel of expert observers ranked our institution’s homemade internal jugular central line insertion model significantly higher in overall quality, ultrasound image, and manometry measurement than three commercially available simulators. This model was created at a fraction of the cost of commercially available models. In addition, homemade models provide the opportunity to simulate complex or aberrant anatomy, allowing for increasingly challenging simulation that could better prepare residents for abnormal anatomy encountered clinically.

In summary, this study demonstrates that an inexpensive, homemade internal jugular central line insertion model is at least comparable, if not superior to more expensive, commercially available simulators. Areas for improvements in all models tested included ultrasound image and ability to accurately control and measure manometry of accessed vessels, an important step in ensuring venous access prior to cannulation.

Supplementary Material

Appendices A & B

Acknowledgment

The authors would like to acknowledge Neal Murty and Henry Goodell of the University of North Carolina Clinical Learning and Research (CLeAR) Center for their help in the development of our homemade central line model as well as their support with study set up and breakdown. The authors would also like to acknowledge Laerdal and Syndaver for providing central line simulators for use in the study.

Funding:

This work was supported by the University of North Carolina Institute for Healthcare Quality Improvement Seed Grant and the University of North Carolina Clinical Learning and Research (CLeAR) Center. The project described was supported by the National Center for Advancing Translational Sciences (NCATS), National Institutes of Health, through grant award number UL1TR001111. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

Footnotes

Disclosure

The authors report no proprietary or commercial interest in any product mentioned or concept discussed in this article.

Supplementary data

Supplementary data related to this article can be found at http://dx.doi.org/10.1016/j.jss.2017.02.071.

REFERENCES

1. Issenberg SB, McGaghie WC, Hart IR, et al. Simulation technology for health care professional skills training and assessment. JAMA. 1999;282:861–866.
2. Eggimann P, Harbarth S, Constantin MN, et al. Impact of a prevention strategy targeted at vascular-access care on incidence of infections acquired in intensive care. Lancet. 2000;355:1864–1868.
3. Hodzic S, Golic D, Smajic J, et al. Complications related to insertion and use of central venous catheters (CVC). Med Arch. 2014;68:300–303.
4. Bhutta ST, Culp WC. Evaluation and management of central venous access complications. Tech Vasc Interv Radiol. 2011;14:217–224.
5. Kornbau C, Lee KC, Hughes GD, et al. Central line complications. Int J Crit Illn Inj Sci. 2015;5:170–178.
6. Kusminsky RE. Complications of central venous catheterization. J Am Coll Surg. 2007;204:681–696.
7. Sznajder JI, Zveibil FR, Bitterman H, et al. Central vein catheterization. Failure and complication rates by three percutaneous approaches. Arch Intern Med. 1986;146:259–261.
8. O’Grady NP, Alexander M, Dellinger EP, et al. Guidelines for the prevention of intravascular catheter-related infections. The Hospital Infection Control Practices Advisory Committee, Centers for Disease Control and Prevention, U.S. Pediatrics. 2002;110:e51.
9. Rello J, Ochagavia A, Sabanes E, et al. Evaluation of outcome of intravenous catheter-related infections in critically ill patients. Am J Respir Crit Care Med. 2000;162:1027–1030.
10. Barsuk JH, Cohen ER, Feinglass J, et al. Use of simulation-based education to reduce catheter-related bloodstream infections. Arch Intern Med. 2009;169:1420–1423.
11. Ma IW, Brindle ME, Ronksley PE, et al. Use of simulation-based education to improve outcomes of central venous catheterization: a systematic review and meta-analysis. Acad Med. 2011;86:1137–1147.
12. Ezaru CS, Mangione MP, Oravitz TM, et al. Eliminating arterial injury during central venous catheterization using manometry. Anesth Analg. 2009;109:130–134.
13. Laerdal. Available at: www.laerdal.com/us/doc/217/Laerdal-IV-Torso. Accessed February 2, 2016.
14. Blue Phantom. Available at: http://www.bluephantom.com/product/Internal-Jugular-Central-Line-Ultrasound-Manikin_NEW!.aspx?cid=380. Accessed February 2, 2016.
15. SynDaver Labs. Available at: www.syndaver.com/shop/synatomy/central-line-trainer-copy/. Accessed February 2, 2016.
16. Varga S, Smith J, Minneti M, et al. Central venous catheterization using a perfused human cadaveric model: application to surgical education. J Surg Educ. 2015;72:28–32.
17. Wadman MC, Lomneth CS, Hoffman LH, et al. Assessment of a new model for femoral ultrasound-guided central venous access procedural training: a pilot study. Acad Emerg Med. 2010;17:88–92.
18. Jimbo T, Ieiri S, Obata S, et al. A new innovative laparoscopic fundoplication training simulator with a surgical skill validation system. Surg Endosc. 2017;31:1688–1696.
19. Loor G, Doud A, Nguyen TC, et al. Development and evaluation of a three-dimensional multistation cardiovascular simulator. Ann Thorac Surg. 2016;102:62–68.
20. Grudziak J, Herndon B, Dancel R, et al. A standardized, interdepartment, simulation-based central line insertion course closes an educational gap and improves intern comfort with the procedure. Am Surg. Forthcoming June 2017.
21. Conover WJ, Iman RL. On multiple-comparisons procedures. Los Alamos, NM: Los Alamos Scientific Laboratory; 1979.
22. R: A language and environment for statistical computing. Available at: https://www.R-project.org/. Accessed June 7, 2016.
23. ordinal: Regression Models for Ordinal Data. R package. Available at: http://www.cran.r-project.org/package=ordinal/. Accessed June 28, 2016.
24. The Pairwise Multiple Comparison of Mean Ranks Package (PMCMR). R package. Available at: http://CRAN.R-project.org/package=PMCMR. Accessed June 7, 2016.
25. Promes SB, Chudgar SM, Grochowski CO, et al. Gaps in procedural experience and competency in medical school graduates. Acad Emerg Med. 2009;16 Suppl 2:S58–S62.
26. Kim S, Dunkin BJ, Paige JT, et al. What is the future of training in surgery? Needs assessment of national stakeholders. Surgery. 2014;156:707–717.
