Author manuscript; available in PMC: 2017 May 1.
Published in final edited form as: Hum Factors. 2015 Nov 6;58(3):427–440. doi: 10.1177/0018720815613919

Evaluation of Simulated Clinical Breast Exam Motion Patterns Using Marker-Less Video Tracking

David P Azari 1, Carla M Pugh 1, Shlomi Laufer 1, Calvin Kwan 1, Chia-Hsiung Chen 1, Thomas Y Yen 1, Yu Hen Hu 1, Robert G Radwin 1
PMCID: PMC4924820  NIHMSID: NIHMS795517  PMID: 26546381

Abstract

Objective

This study investigates using marker-less video tracking to evaluate hands-on clinical skills during simulated clinical breast examinations (CBEs).

Background

There are currently no standardized and widely accepted CBE screening techniques.

Methods

Experienced physicians attending a national conference conducted simulated CBEs presenting different pathologies with distinct tumorous lesions. Single hand exam motion was recorded and analyzed using marker-less video tracking. Four kinematic measures were developed to describe temporal (time pressing and time searching) and spatial (area covered and distance explored) patterns.

Results

Mean differences in time pressing, area covered, and distance explored varied across the simulated lesions. Exams were objectively categorized as sporadic, localized, thorough, or efficient on both temporal and spatial dimensions based on these spatiotemporal characteristics. The majority of trials were temporally or spatially thorough (78% and 91%, respectively), exhibiting proportionally greater time pressing and time searching (temporally thorough) and greater area probed with greater distance explored (spatially thorough). More efficient exams exhibited proportionally more time pressing with less time searching (temporally efficient) and greater area probed with less distance explored (spatially efficient). Just 2 (5.9%) of the trials exhibited both high temporal and spatial efficiency.

Conclusions

Marker-less video tracking was used to discriminate different examination techniques and measure when an exam changes from general searching to specific probing. The majority of participants exhibited more thorough than efficient patterns.

Application

Marker-less video kinematic tracking may be useful for quantifying clinical skills for training and assessment.

Keywords: hands-on clinical examination, tactile inspection, medical simulation, examination technique

INTRODUCTION

Assessment of hands-on clinical skills is becoming increasingly important in medicine. The National Board of Medical Examiners has instituted hands-on clinical skills examination as part of the United States Medical Licensing Examination (Gilliland et al., 2008; Papadakis, 2004); the American Board of Surgery requires surgical residents to show proof of certification in the Fundamentals of Laparoscopic Surgery, Advanced Trauma Life Support, and Advanced Cardiac Life Support (Buyske, 2010); and the Educational Commission for Foreign Medical Graduates has required a hands-on clinical skills examination for medical licensure in the United States for over 10 years (Whelan et al., 2005). Despite these initiatives, hands-on clinical skills are difficult to objectively measure and quantify.

The American College of Obstetricians and Gynecologists (2012) recommends clinical breast examinations (CBEs) in addition to regular mammograms as an integral part of periodic health examinations. CBEs, conducted by a practicing physician or other qualified health professional, involve the patient’s breast being physically palpated and searched for irregularities and potential cancers. Common search patterns for hand motions include vertical strips, radial spokes, and concentric circles. The MammaCare method of palpation, described by Pennypacker and Pilgrim in 1993 (Barton, Harris, & Fletcher, 1999; Day, 2008; McDonald, Saslow, & Alciati, 2004), in which “the finger pads of the middle three fingers move in dime-size circular motion, applying three levels of pressure at each point along a vertical strip search pattern,” is the most widely studied of these techniques (McDonald et al., 2004).

Mammography is also commonly used to screen for potential breast cancers (U.S. Preventive Services Task Force, 2009), but this technology misses 8% to 17% of cancers (Goodson, Hunt, Plotnik, & Moore, 2014), and significant barriers exist in regular participation and access to mammography, especially in poor and rural communities (Peek & Han, 2004). Evidence suggests that standardized CBEs in conjunction with breast imaging increases overall sensitivity in breast cancer detection (Goodson et al., 2014; Jatoi, 2003; Oestreicher, Lehman, Seger, Buist, & White, 2005). However, “CBE is practiced with little standardization” (Saslow et al., 2004), and there is growing evidence that patients would benefit from specifically structured and standardized examination techniques (Barton et al., 1999; Day, 2008).

The use of medical simulators and manikins is an approach that may be used to evaluate performance in hands-on CBEs and other procedures at the point of care. Salud, Ononye, Kwan, Salud, and Pugh (2012) developed and validated physical models of breast tissue for use in simulated CBE procedures. The CBE simulators can be reconfigured to represent various clinical pathologies and instrumented with internal sensors to detect palpation characteristics over time (Kaye, Salud, Domont, Blossfield Iannitelli, & Pugh, 2011). Although a variety of similar sensor systems have been used (Kwan, Salud, Ononye, Zhao, & Pugh, 2012; Salud, Kwan, & Pugh, 2013) to detect pressure applied against the model, there is a need for a noninvasive, scalable means of measuring and evaluating individual technical skill during detailed procedures. Digital video recording of the hands presents a convenient, portable, and inexpensive alternative to integrated sensor systems to track and analyze examination characteristics over time and can easily be scaled for widespread use in training and evaluation.

New marker-less video tracking methods based on cross-correlation template matching have been programmed to track the motion trajectory of the hand for a selected region of interest (ROI) over successive video frames for a single camera (Chen, Hu, Yen, & Radwin, 2013). Methods based on sequential Bayesian estimation have achieved robust performance under challenging viewing conditions, including low lighting, low resolution, motion blur, and occlusion experienced in field-acquired videos (Chen, Hu, & Radwin, 2014). These measures can be practically applied to algorithms for quantifying kinematic properties of movements and exertions (Chen et al., 2015). The present study was designed to evaluate if these methods can be adapted for tracking physician hands while performing simulated CBEs.
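The cross-correlation template matching underlying such tracking can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it is a brute-force normalized cross-correlation search for an ROI template in a single grayscale frame, with illustrative names throughout. A production tracker would restrict the search to a window around the previous position and update the template over time.

```python
import numpy as np

def ncc_match(frame, template):
    """Locate a template (ROI) in a grayscale frame via normalized
    cross-correlation; returns the (row, col) of the best match.
    Brute-force sketch for illustration only."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            patch = frame[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * tnorm
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic check: embed a patch at a known offset and recover it.
rng = np.random.default_rng(0)
frame = rng.random((30, 40))
template = frame[12:20, 25:33].copy()
print(ncc_match(frame, template))  # (12, 25)
```

Optimized equivalents exist in common libraries (e.g., OpenCV's `cv2.matchTemplate` with `TM_CCOEFF_NORMED`), which is what one would use at video frame rates.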

Marker-less video tracking was used for quantifying the movement patterns used for conducting CBEs on breast models containing various simulated pathologies. Differences in physician technique while conducting CBEs were studied. This study is concerned with characterizing the properties, patterns, and efficiencies of hand motion during CBEs, rather than assessing the clinical aptitude of any particular physician.

METHODS

This study used four previously validated clinical breast models, each with a unique, palpable lesion. Models A and B had soft, superficial masses (rubber balls) representing cysts that measured 2 cm × 2 cm and 2 cm × 1 cm, respectively. Models C and D had 2-cm masses composed of plastic and silicone representing cancers in the chest wall. Masses for Models A and B (left breast) were placed in the superior lateral quadrant, and masses for Models C and D (right breast) were placed in the inferior lateral quadrant. The models were composed primarily of silicone, with additional rubber, plastic, and foam pieces as seen in Figure 1 to simulate realistic breast tissue. Detailed information on the construction and validation of pathology representation for these models is described in Salud et al. (2012) and Laufer et al. (2015).

Figure 1.

Figure 1

Breast simulator models and implanted materials representing pathologies included in the study.

Physicians who attended the 2013 American Academy of Family Physicians conference in San Diego, California, were recruited to participate and perform simulated clinical breast exams at each of the four models. Physician recruitment and participation was institutional review board (IRB) approved. Participants were recruited during an open session at the conference and asked to complete a one-page paper survey (Appendix) listing their clinical specialty, years of experience, gender, teaching experience, country of practice, and medical position. All study participants were physicians. The survey also asked respondents to select the number of actual CBEs they completed per week from a range of options: 0–5, 6–10, 11–20, and >20. The bin midpoints (or 20, at the upper limit) and 50 working weeks per year were used to estimate the total number of exams performed. Participants who had completed fewer than 250 exams in their career were excluded from analysis. Surveys were completed prior to participating in the study.
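The career-exam estimate described above reduces to simple arithmetic. The sketch below (function and bin names are illustrative, not from the study) multiplies the survey bin's midpoint, 50 working weeks per year, and years in practice:

```python
# Midpoint of each weekly-exam survey bin; the open-ended top bin is
# capped at 20, as described in the text.
BIN_MIDPOINTS = {"0-5": 2.5, "6-10": 8.0, "11-20": 15.5, ">20": 20.0}

def estimated_career_exams(bin_label, years_experience):
    """Estimated total CBEs performed: bin midpoint x 50 weeks x years."""
    return BIN_MIDPOINTS[bin_label] * 50 * years_experience

# A physician reporting 6-10 exams/week over 10 years:
print(estimated_career_exams("6-10", 10))  # 4000.0
```

Under this estimate, the 250-exam inclusion threshold corresponds to, for example, two years at the lowest bin (2.5 × 50 × 2 = 250).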

Of those participants selected, 79% practiced in the United States and 55% were female. Two participants were residents. Years of experience ranged from 2 to 35 years, with a mean of 14 years. Fifteen participants reported completing 0–5 CBEs per week, 9 reported 6–10 per week, 7 reported 11–20 per week, and 2 reported more than 20 per week.

Each station (see Figure 2) was equipped with a unique breast simulator, a dedicated laptop computer, and a webcam (either a Logitech C920 or Logitech Webcam Pro 9000) that recorded 2D planar video with 720 pixel × 480 pixel resolution at 30 frames per second. The cameras were placed directly above the model so that each had minimal pitch, roll, and tilt angles relative to the simulator. The field of view was centered on the breast model and focused, with contrast/white balance and lighting adjusted to produce ideal recording conditions. Exams were recorded on digital video and saved on a laptop computer.

Figure 2.

Figure 2

Standard CBE station components.

Each breast simulator included 16 standard-length reference markers in the camera view to allow for eventual pixel-distance calibration. The software interface, breast simulator, and webcam placement during a typical exam are shown in Figure 3.

Figure 3.

Figure 3

Operator (left) and participant (right) engaged at a standard exam station.

Each participant performing an examination at a particular model was informed that the patient believed she had felt a mass during self-examination but was unable to pinpoint the location. Pictures of the patient’s face, age, and the text seen in Figure 1 were placed next to each model. Station to station progress was randomized. Following each exam, the participants were asked to mark the location, size, shape, and consistency of any tumorous mass found in the model on a survey and provide a tentative diagnosis and suggested follow-up procedures from a list of options. Participants were encouraged to perform an exam as they would in the office and were discouraged from discussing any techniques or findings with other physicians.

Data Inclusion for Video Analysis

All participants were filmed performing a full examination. We observed that search patterns exhibited a high amount of variation in technique and style. Common approaches included using one hand only, two hands moving separately, and two hands moving as a semisingle unit. Examples of these techniques are shown in Figure 4.

Figure 4.

Figure 4

Examples of varying techniques in conducting CBEs.

Due to the highly variable techniques exhibited during the simulations, videos were excluded when participants used more than one hand, independently moved their fingers to probe the model, or only partially conducted an exam. Of the 518 trials collected, nearly half (247) showed some combination of two hands moving separately to complete the exam. Fifty-one trials were completed with participants moving two hands as a semisingle unit. The remaining one-handed trials (87) were chosen for motion tracking, as they had the most consistent view of the hands. A total of 70 one-handed trial videos across all models (52.9 minutes of continuous hand motion) were able to be fully processed and analyzed. This set represents an unbalanced distribution of participants across stations, as some participants exited the study before completing all of the models or switched between one- and two-handed techniques for different models. To block these effects, a random subset of 32 videos across participants with sufficient experience was selected, and a one-way between-groups analysis of variance (ANOVA) was performed. t tests (with Welch’s correction) and Levene’s test were also used to compare mean performance between categories and to validate the assumption of equal variance.

In order to track hand position, analysts marked the pixel locations of the tumor and the corners of the simulated tissue and selected a rectangular ROI during the first frame of contact with simulated tissue. The ROI was defined to contain the fingertips up to the proximal interphalangeal (PIP) joint of the index, middle, and ring fingers. Pixel and ROI markings for each video record were visually checked to confirm accuracy. The analyst loaded the video, specified the ROI in the marker-less video tracking program, and supervised the automatic tracking of the ROI, making only minimal manual corrections as necessary. Video capture spanned from initial hand placement until the participant’s hand was withdrawn from the simulator.

Participants would occasionally remove their hands from the exam, engage in completing the demographic survey, and then return to further explore the pathology. In situations in which the participant’s hand was removed and replaced, all periods of time when the hand was in contact with the simulator were identified and included in analysis. The resultant positions, speeds, and accelerations for each frame were stored for calibration and subsequent analysis. Distances in pixels were calibrated against fixed markers appearing in each video (either 2.54 or 5.08 cm). Representative samples of the pixel-based position data for different techniques (Figure 5) and corresponding kinematic speed records postcalibration are shown in Figure 6.
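The pixel-to-centimeter calibration described above is a single scale factor. A sketch, assuming the endpoints of a reference marker are available as pixel coordinates (names are ours):

```python
import math

def cm_per_pixel(marker_px_a, marker_px_b, marker_len_cm):
    """Calibration scale from a reference marker of known physical length
    (the study used 2.54 cm or 5.08 cm markers) whose endpoints were
    marked in pixel coordinates."""
    dx = marker_px_b[0] - marker_px_a[0]
    dy = marker_px_b[1] - marker_px_a[1]
    return marker_len_cm / math.hypot(dx, dy)

# A 5.08 cm marker spanning 60 px horizontally and 80 px vertically
# (100 px along its diagonal):
scale = cm_per_pixel((100, 100), (160, 180), 5.08)
print(round(scale, 4))  # 0.0508
```

Tracked pixel displacements multiplied by this scale yield the calibrated distances, speeds, and accelerations used in the analysis.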

Figure 5.

Figure 5

Path information produced by the tracking algorithm for vertical strip (left – Model D) and concentric (right – Model B) motions for one-handed clinical breast exams. The curved black lines are the path traced out by the participant over time. The arrows represent the location of the tumors. The outer box represents the edge of the simulator skin.

Figure 6.

Figure 6

Example of instantaneous speed versus time during the concentric (a) and vertical (b) searches seen in Figure 5.

Total displacement, time and displacement spent outside the tumor radius, time spent actively pressing and probing tissue, and total area covered were also extracted from the kinematic record of each video. These variables helped to characterize the position of the hands relative to the tumor and the overall breast model. The duration of each exam was not considered a reasonable measure of technique due to the indirect relationship between thoroughness and proficiency (Salud et al., 2012) and the variety of locations where participants began probing. We therefore developed several performance measures to represent the relative efficacy of different techniques. Such measures need to account for examination characteristics over model area (Figure 7) and time (Figure 8), so that both temporal and spatial characteristics are represented.

Figure 7.

Figure 7

Combination of time searching (%) and time pressing (%).

Figure 8.

Figure 8

Combination of distance explored (%) and area covered (%).

Performance Measures

In order to quantify the differences in technique among participants, we computed four measures to describe temporal and spatial characteristics. These measures were not intended to assess the clinical quality of a particular physician but rather to characterize the various one-handed techniques utilized in terms of temporal and spatial movement characteristics of the exam.

Temporal characteristics

The kinematic measure of time searching (Equation 1) represents the proportion of time spent outside the tumor radius relative to the total time of the exam. The tumor radius differed for each model but ranged between 2.65 and 2.81 cm. Greater time searching means a greater amount of time during the exam is spent probing nontumorous tissue. Less time searching indicates a greater emphasis on probing the tumor location than exploring the surrounding tissues.

Time Searching (%) = Time spent searching outside tumor radius / Total time spent    (1)

Time pressing (Equation 2) measures the amount of time probing tissue relative to the total time of the exam. As participants moved from location to location on the model, speed naturally increased during the transition. A threshold of 20% of the mean peak speed in the dataset was selected to separate moving from location to location on the model from actively probing tissue. Greater time pressing represents more time spent below the movement threshold, when the participant is more likely engaging with tissue, whereas less time pressing represents more time spent moving from one location to another on the model.

Time Pressing (%) = Time spent actively pressing and probing tissue / Total time spent    (2)
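A minimal sketch of how Equations 1 and 2 could be computed from a per-frame kinematic record (function and parameter names are ours, and frame counts are used as a proxy for time, assuming a uniform frame rate):

```python
import math

def temporal_measures(positions_cm, tumor_cm, tumor_radius_cm,
                      speeds_cm_s, press_threshold_cm_s):
    """Time searching (Eq. 1): fraction of frames outside the tumor radius.
    Time pressing (Eq. 2): fraction of frames below the speed threshold
    (the study used 20% of the mean peak speed in the dataset)."""
    n = len(positions_cm)
    outside = sum(1 for (x, y) in positions_cm
                  if math.hypot(x - tumor_cm[0], y - tumor_cm[1]) > tumor_radius_cm)
    pressing = sum(1 for s in speeds_cm_s if s < press_threshold_cm_s)
    return outside / n, pressing / n

# Four frames: two near the tumor, two far away; two slow, two fast.
pos = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0), (12.0, 0.0)]
spd = [1.0, 2.0, 30.0, 40.0]
searching, pressing = temporal_measures(pos, (0.0, 0.0), 2.7, spd, 10.0)
print(searching, pressing)  # 0.5 0.5
```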

The combination of these measures describes temporal efficiency (Figure 7).

The center of Figure 7 represents equal proportions of time during examination spent pressing and moving to new locations, as well as equal time spent searching for and intentionally probing the tumor for its characteristics. Each quadrant in this comparison exhibits a set of specific properties derived from the combined results of Equations 1 and 2. Exams exhibiting techniques in the upper left (low time pressing, high time searching) are likely to appear sporadic, whereas those in the lower left (low time pressing, low time searching) are more likely to localize and probe near the tumor location. The upper right (high time pressing, high time searching) is more thorough, and the lower right (high time pressing, low time searching) is more time efficient. The greatest temporal efficiency is represented by the lower right corner, where all time is spent actively engaging with tissue in the area of the tumor. Although each corner in Figure 7 is technically attainable by chance, such extremes are highly unlikely in an actual exam.

Spatial characteristics

The area covered (Equation 3) represents the area traced out over the course of the exam by the participant’s fingertips, relative to the total area of the model. Greater area covered represents more tissue probed at least once during the exam, relative to the total area of the model.

Distance explored (Equation 4) represents the proportion of distance covered while outside the tumor radius, relative to the total displacement during the exam:

Area Covered (%) = Total area probed / Total area available    (3)

Distance Explored (%) = Displacement outside of tumor radius / Total displacement    (4)
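Equations 3 and 4 could be approximated from the same kinematic record. The sketch below (names and the grid approximation are ours) rasterizes visited positions onto a grid for area covered; the study traced the full fingertip ROI, which would cover more cells than the single points used here.

```python
import math

def spatial_measures(positions_cm, tumor_cm, tumor_radius_cm,
                     model_w_cm, model_h_cm, cell_cm=1.0):
    """Area covered (Eq. 3): fraction of grid cells visited at least once.
    Distance explored (Eq. 4): displacement accumulated while outside the
    tumor radius, over total displacement."""
    visited = {(int(x // cell_cm), int(y // cell_cm)) for x, y in positions_cm}
    total_cells = (model_w_cm / cell_cm) * (model_h_cm / cell_cm)
    area_covered = len(visited) / total_cells

    total_d, outside_d = 0.0, 0.0
    for (x0, y0), (x1, y1) in zip(positions_cm, positions_cm[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        total_d += step
        if math.hypot(x0 - tumor_cm[0], y0 - tumor_cm[1]) > tumor_radius_cm:
            outside_d += step
    distance_explored = outside_d / total_d if total_d else 0.0
    return area_covered, distance_explored

# Three points on a 10 cm x 10 cm model, tumor at the first point:
pos = [(0.5, 0.5), (5.5, 0.5), (5.5, 5.5)]
area, dist = spatial_measures(pos, (0.5, 0.5), 2.7, 10.0, 10.0)
print(area, dist)  # 0.03 0.5
```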

The combination of both measures describes the spatial characteristics (Figure 8).

The center of Figure 8 represents half of the model area probed and equal distance explored within and outside of the tumor radius. The upper left quadrant of this figure (low area covered, high distance explored) is likely to appear sporadic, and the lower left (low area covered, low distance explored) would represent exams localized to one area. The upper right (high area covered, high distance explored) is the most spatially thorough, and the lower right (high area covered, low distance explored) is the most efficient. The lower right corner of Figure 8 represents the greatest spatial efficiency, where all area of the model is covered without exploration of nontumorous regions. Although each corner in Figure 8 is technically attainable by chance, this is unlikely during an actual exam.
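The quadrant labels of Figures 7 and 8 follow directly from 50% thresholds on each axis pair. A sketch (the label strings are the paper's nomenclature; the function itself is ours):

```python
def classify(engagement_pct, exploration_pct, threshold=50.0):
    """Quadrant label per Figures 7/8. 'engagement' is time pressing
    (temporal) or area covered (spatial); 'exploration' is time searching
    (temporal) or distance explored (spatial)."""
    high_engage = engagement_pct >= threshold
    high_explore = exploration_pct >= threshold
    if high_engage and high_explore:
        return "thorough"
    if high_engage:
        return "efficient"
    if high_explore:
        return "sporadic"
    return "localized"

print(classify(79.0, 40.0))  # efficient
print(classify(60.0, 70.0))  # thorough
```

Applying the same function to both axis pairs gives each exam a temporal label and a spatial label, which is how a single trial can be, for example, temporally efficient but spatially thorough.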

To further describe aggregate characteristics across models, multivariate kernel density estimates (Venables & Ripley, 2002) of both the spatial and temporal measures described above were computed for the selected videos. Multivariate kernel density estimation approximates the combined probability density function of two or more univariate datasets. The resulting probability density matrix describes the relative occurrence of examination characteristics for unique combinations of temporal (time searching and time pressing) or spatial (area covered and distance explored) properties from 0% to 100% in each direction. Temporal density estimates for these videos included time searching (Equation 1) and time pressing (Equation 2). Spatial density estimates for the same videos included area covered (Equation 3) and distance explored (Equation 4). Probability density function estimates were calculated for every combination of temporal equation values (0%–100%) and spatial equation values (0%–100%) and plotted on a 1,000 × 1,000 grid for increased visual resolution. The estimation matrix was plotted as a heat map, where greater densities indicate higher likelihoods of participant performance at specific combinations along both axes and are represented by higher-intensity colors. The probability density estimates ranged from 0 (lowest likelihood) to 0.0008 (highest likelihood) and summed to 1 across each probability space.
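A two-dimensional kernel density estimate of this kind can be sketched in a few lines. This fixed-bandwidth version is only illustrative; the kde2d routine in the MASS package cited via Venables and Ripley selects bandwidths from the data, and the study evaluated on a much finer 1,000 × 1,000 grid.

```python
import numpy as np

def kde2d(x, y, grid_n=50, bw=5.0):
    """Minimal 2-D Gaussian kernel density estimate on a [0, 100]^2 grid.
    bw is an assumed fixed bandwidth; grid_n=50 keeps the sketch quick."""
    gx = np.linspace(0, 100, grid_n)
    gy = np.linspace(0, 100, grid_n)
    xx, yy = np.meshgrid(gx, gy)
    dens = np.zeros_like(xx)
    for xi, yi in zip(x, y):
        dens += np.exp(-((xx - xi) ** 2 + (yy - yi) ** 2) / (2 * bw ** 2))
    # Normalize so the grid sums to 1, as the paper's density matrices do.
    dens /= dens.sum()
    return dens

# Synthetic percentages standing in for (time pressing, time searching):
rng = np.random.default_rng(1)
d = kde2d(rng.uniform(40, 90, 30), rng.uniform(20, 80, 30))
print(abs(d.sum() - 1.0) < 1e-9, d.min() >= 0)  # True True
```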

RESULTS

Each kinematic measure (time searching, time pressing, area covered, distance explored) is expressed as a proportion by Equations 1–4. The 32 unique participants’ videos were randomly selected across the four models for statistical analysis. Variables used in categorizing the selected participant performance are summarized in Table 1. A one-way between-groups ANOVA indicated that there were statistically significant differences between models for mean radial distance (cm) from tumor location (n = 32, F = 0.00, p = 0.00) and displacement exploring outside the tumor radius (n = 32, F = 0.00, p = 0.00) as a percentage of total displacement (see Figure 9).

TABLE 1.

Kinematic Measure Component Summary by Model

                                            Model A (n = 6)   Model B (n = 12)  Model C (n = 6)   Model D (n = 7)
Component Measures*                           M       SD        M       SD        M       SD        M       SD
Total time spent searching
  outside of tumor radius (s)               22.91   14.94     26.50   17.22     20.95   16.13     48.17   39.04
Total time spent (s)                        40.09   18.99     43.59   23.21     38.50   19.61     62.99   45.64
Time spent actively pressing
  and probing tissue (s)                    28.54   12.26     36.22   20.95     31.21   15.67     46.62   31.79
Total area probed (cm2)                    441.83   63.48    395.18   59.39    427.01  119.06    553.47   68.36
Displacement outside of
  tumor radius (cm)                         32.78   21.43     26.52   15.30     20.17   10.43     54.56   44.08
Total displacement (cm)                     46.44   25.61     36.61   18.02     34.27   20.19     69.08   53.05

Figure 9.

Figure 9

Mean radial distance (top) and displacement outside tumor radius (bottom) by model.

A Levene’s test for homogeneity of variance was conducted for both sets (p > 0.05). One participant seemed to prioritize thoroughness in exploration (Figure 9), exhibiting over 97% area coverage on Model C (compared to the model mean of 72%), thus exhibiting the highest percentages for distance explored (87%) and time pressing (87%) on Model C. There were no significant differences between models with respect to examination speed or acceleration (Table 2).

TABLE 2.

Speed and Acceleration Summary by Model (n = 32)

                 Speed (cm/s)                      Acceleration (cm/s2)
Model    Mdn      M       SD      Max.       Mdn       M        SD       Max.
A       46.82   59.05   10.76   360.89     566.94   717.99   107.88   3,682.15
B       35.52   49.96   14.44   387.48     438.68   583.22   276.70   4,247.53
C       33.13   47.33   10.80   324.18     344.43   525.11   196.10   4,198.74
D       42.32   58.00   14.43   442.45     521.18   690.36   283.46   4,312.52

All randomly selected participants indicated via written survey that a mass was found on Models A, B, and C. Only 43% of participants reported finding a mass in Model D. Female participants were more likely to correctly identify the mass in Model D (63%). Participants who indicated a mass was found on Model D exhibited a higher mean time pressing (79%) compared to those who did not find a mass (67%).

Population distribution density estimates (Figure 10) showed a significant difference (p < 0.001) by t tests with Welch correction between quadrants (0%–50% split in each direction as seen in Figures 7 and 8). Thorough and sporadic quadrants exhibited the least correlation effect size (r) for spatial and temporal comparisons (0.16 < r < 0.42), whereas the greatest effect size was observed for thorough and efficient (0.71 < r < 0.88) comparisons. There were also differences (p < 0.001) between regions when grouped by their respective probabilities. The highest third of values accounted for 5% of probabilities in both density plots, whereas the lowest third of values accounted for 87% of the temporal plot and 85% of the spatial plot. The effect size of highest compared to lowest values (r = 0.97) was greater than both the middle values compared to high values (0.84 < r < 0.89) and middle values compared to low values (0.93 < r < 0.94) for both spatial and temporal comparisons, respectively.

Figure 10.

Figure 10

Multivariate kernel density distribution of temporal (left; see also Figure 7) and spatial (right; see Figure 8) efficiency measures with relative density legend (below). Highest efficiencies are attained in the lower right quadrant of each plot.

Time spent pressing ranged from 46% to 93% of the total exam time, whereas time spent searching outside of the tumor radius ranged from 22% to 86%. The area covered as a percentage of the total area ranged from 39% to 100%, whereas the distance explored outside the tumor radius as a percentage of the total distance ranged from 33% to 88%. The most efficient examinations, seen in only 2 of 32 randomly selected participant exams (6%), were those which exhibited the following characteristics:

  1. greater than 50% area covered

  2. greater than 50% time pressing

  3. less than 50% time searching

  4. less than 50% distance explored

These regions can be seen in the lower right quadrants of the two probability density plots (Figure 10). The most temporally thorough quadrant (high time pressing and time searching) accounted for 25 (78%) exams, whereas the most spatially thorough quadrant (high area covered and distance explored) accounted for 29 (91%) exams. The most temporally efficient (low time searching, high time pressing) accounted for 7 (22%) exams, whereas the most spatially efficient (high area covered, low distance explored) accounted for 2 (6%) exams. No exams were temporally localized or sporadic, and only one exam was spatially sporadic. Mean differences between probability estimates in the most efficient, the most thorough, and all other regions for both spatial and temporal efficiency measures were statistically significant (p < 0.001).

DISCUSSION

This study utilized video motion capture and video tracking to compare the motions of physicians performing one-handed CBEs. A marker-less video tracking system was utilized to automatically record the location of participant hands relative to each breast simulator. Analysts supervised the tracking program, occasionally providing corrections if needed. Hand speed, acceleration, and exploration relative to certain positions in the model were measured throughout each procedure. Visual differences observed in exams were represented in the kinematic data.

We found that the techniques employed by physicians in conducting CBEs were highly variable. The use of one or two hands, changing stance or position, and switching hands in an exam were common. For the purposes of this study, analysis was limited to one-handed examinations without hand switching and included all time when participants were in physical contact with the model. Two-handed exams, as well as alternating exam techniques, were excluded. However, the new measures were able to discriminate differences in the spatial and temporal characteristics of the techniques physicians utilized. Future research should investigate how these performance measures can be extended to two-handed approaches, switching or inconsistent techniques, examination accuracy, and integration of real-time pressure data. The current marker-less motion capture and video record of the various additional techniques observed may help to generate additional physician profiles and quantify similarities or differences in detection from such unique approaches to CBE.

Time searching and radial distance from the tumor were important in discriminating model differences. The spatial and temporal exam characteristics (localized, sporadic, thorough, efficient) were emergent from combinations of Equations 1–4. The majority of participant techniques were classified as both spatially and temporally thorough, with fewer participants exhibiting the most efficient techniques. There was rare use of sporadic or localized techniques.

More female participants correctly identified the mass in Model D than male participants, although no significant effects of experience were observed. The most thorough exams exhibited high area covered while simultaneously exhibiting high distance explored, suggesting an emphasis on discovering all potential lesions. The most efficient exams covered high surface area while exploring less outside of the tumor radius, perhaps suggesting an increased emphasis on locating and differentiating between tissues near the lesion.

The thresholds for these classifications, while arbitrarily chosen at 50% of each kinematic measure, resulted in statistically significant (p < 0.05) descriptions of the participant population across the possible space of characteristics. Additional thresholds could be created to further describe performance within specific regions. Although the results do not elucidate whether variation among participants is a driving factor in kinematic differences between models, these measures could be useful in ensuring a common standard and repeatable quality in creating and using simulators. Thus, it might be possible to develop a set of quantitative measures to assist in the training of novices where timely detection and identification of tissue are crucial.

Only four CBE models were included, and they did not represent all possible types of lesions. Our technique may not be applicable to other physical exam skills, especially when the hands are not clearly visible, and the performance measures used in this study may not be applicable if the model pathology is unknown.

The measures developed in this study can be utilized for training physicians in conducting CBEs and improving technique, as well as for evaluation, by providing continuous, automatic, and real-time feedback. Unlike embedded sensor technologies that can only be used in simulators, the video motion capture method can provide quantified measures of actual CBE performance in addition to records during simulator training. The proposed nomenclature (sporadic, localized, thorough, and efficient) includes quantifiable means for comparing performance against a standard. Physicians could use this system to gauge their own awareness, skill, and overall performance on an ongoing basis. This method could also be used to evaluate a physician’s ability and suggest technique changes to improve detection. Marker-less motion capture thus has the potential to improve quantified measures of CBE performance and standardization of technique, while removing the limitations of embedded sensor technology and simulated tissue inherent in other analysis efforts.

Future research should investigate how well these measures improve detection in live tissue, how different pathologies change kinematic characteristics, and the relationships between kinematics and pressure. By correlating physicians’ kinematic characteristics during controlled simulations with experience, correctness, and exploration, it may be possible to provide recommendations for normative exam procedures, improved simulators, and exam guidelines with greater detail and consistency than previously available.

KEY POINTS.

  • The techniques employed by physicians in conducting CBEs were highly variable.

  • Generally, the physicians studied opted for thoroughness (78% and 91%) over efficiency (21% and 6%) for the temporal and spatial criteria, respectively.

  • Marker-less kinematic tracking information can provide useful feedback in developing teaching and evaluation tools for hands-on clinical examinations.

Acknowledgments

This research was supported in part by U.S. Department of Health and Human Services, National Institutes of Health Grants R01EB011524 (to C.M.P.) and R21EB014583 (to R.G.R.).

Biographies

David P. Azari is a graduate student at the University of Wisconsin (Madison) in mechanical engineering and industrial and systems engineering. He has a BS degree (2006) in mathematics from the University of Colorado (Boulder).

Carla M. Pugh is Susan Behrens Professor of Education and Patient Safety at the University of Wisconsin (Madison). She has a BA degree (1988) from the University of California, Berkeley, and she earned an MD degree (1992) from Howard University College of Medicine (Washington, D.C.) and a PhD (2001) from Stanford University School of Education (Stanford, CA).

Shlomi Laufer is a research associate at the University of Wisconsin (Madison) in the Department of Surgery and the Department of Electrical and Computer Engineering. He has a BSc degree (2004) from Technion–Israel Institute of Technology and a PhD (2011) from Hebrew University of Jerusalem (Israel).

Calvin Kwan is a researcher at the University of Wisconsin–Madison in the Department of Surgery. He has a BS degree (2010) in biomedical engineering from Northwestern University (Evanston, IL).

Chia-Hsiung Chen was a research assistant in the Occupational Ergonomics and Biomechanics Laboratory at the University of Wisconsin–Madison. He earned his BS degree (2004) from National Sun Yat-sen University (Taiwan), his MS degree (2006) from National Taiwan University (Taiwan), and a PhD (2014) from the University of Wisconsin (Madison). He is currently a software developer at Epic Systems Corporation.

Thomas Y. Yen is an instrumentation innovator and instructor in the Departments of Biomedical Engineering and Industrial and Systems Engineering, and he is technical director of the Internet of Things (IoT) lab at the University of Wisconsin (Madison). He has a BS degree (1987) in biomedical engineering from Northwestern University (Evanston, IL). He earned his MS degree (1991) and PhD (1997) in industrial engineering from the University of Wisconsin (Madison).

Yu Hen Hu is a professor at the University of Wisconsin (Madison) in electrical and computer engineering. He has a BS degree (1976) from National Taiwan University (Taipei), and he earned a PhD degree (1982) in electrical engineering from the University of Southern California (Los Angeles).

Robert G. Radwin is a professor at the University of Wisconsin (Madison) in biomedical engineering, industrial and systems engineering, and orthopedics and rehabilitation. He has a BS degree (1975) from New York University Polytechnic School of Engineering, and he earned his MS (1979) and PhD (1986) degrees from the University of Michigan (Ann Arbor).

APPENDIX

Demographic Survey for University of Wisconsin–Madison Breast Exam Project

  1. Please indicate: Physician Nurse NP PA Medical Student Resident Other______

  2. Specialty_____________________________

  3. Country of practice _______________________

  4. Year in training or practice: ____________

  5. Number of breast exams performed/week 0–5  6–10  11–20  >20

  6. Gender: Male  Female

  7. Do you teach students or residents how to perform breast exams? Yes  No

  8. Have you used manikins/models for teaching (or learning)? Yes  No

