Abstract
The clinical breast examination (CBE) is performed to detect breast pathology. However, little is known regarding clinical technique and how it relates to diagnostic accuracy. We sought to quantify breast examination search patterns and hand utilization with a new data collection and analysis system. Participants performed the CBE while a sensor mapping and video camera system collected performance data. From these data, we developed algorithms that measured the number of hands used during the exam and the active examination time. This system is a feasible and reliable method to collect new information on CBE techniques.
Introduction
The clinical breast examination (CBE) is intended to detect palpable lesions and guide further diagnostic workup and treatment. Despite the widespread practice of this examination, there is little standardization in the way it is performed [1,2]. The reported sensitivity of the CBE for detecting cancer in the clinical setting ranges from 22% to 54% [3,4]. Studies investigating training with a structured examination technique have demonstrated improvements in breast mass detection [5–8]. However, evidence is lacking regarding the actual technique performed in the clinical setting and how it relates to a practitioner's ability to detect a breast lesion.
The CBE includes visual inspection and palpation. Different search patterns, pressure application and finger/hand utilization have been reported when teaching palpation and technique [2,9]. Our prior work has demonstrated the ability to use force sensors to delineate physical examination techniques on different simulated models including the breast [10,11], pelvis [12,13] and prostate [14]. In one study, we utilized several force sensing resistors placed underneath a simulated breast model to demonstrate that patient and clinician factors correlate significantly with examination time, number of sensors touched and maximum pressure applied [10]. Despite our findings, further work is needed to capture the different elements of the exam in order to understand how technique affects diagnostic accuracy. The goal of this study was to quantify breast examination search patterns and hand utilization with a new data collection and analysis system.
Methods & Materials
Data collection
This study was performed at the 14th Annual American Society of Breast Surgeons Meeting in Chicago, Illinois in May 2013. The American Society of Breast Surgeons Meeting is for surgeons with a special interest in the treatment of breast disease. Each year, this conference brings together over 1,300 breast surgeons from around the world. Clinician data were collected over a 3-day period. One hundred and thirty-eight clinicians visited a booth stationed in the exhibit hall and volunteered to perform a full CBE on a breast simulator.
Sixteen clinicians who failed to correctly identify the patient findings were closely matched with 16 clinicians who correctly identified the findings. Matching was based on gender, country (domestic or international) and years of practice.
The breast examination simulator is a task trainer instrumented at the base with a pressure mapping system. The breast simulator can be reconfigured to represent various clinical presentations. For this study, the simulator consisted of a moderately firm right breast representing dense breast tissue with a 2 cm hard mass with irregular borders in the lower inner quadrant. The pressure mapping system included a 25 × 25 cm ultra-thin, tactile pressure sensor. The sensor comprises 1,936 individual sensing elements uniformly distributed in a 44 × 44 matrix (Tekscan®, Boston, MA). The sensor map was connected to the computer via USB using a dedicated data acquisition handle. Data were sampled at 90 Hz and stored for offline analysis. Video recordings and pressure data were manually synchronized.
Before performing the CBE, clinicians completed a background survey indicating specialty, country, year in training or practice, number of breast exams performed per week, gender and experience with teaching breast exams and using simulation. Clinicians were asked to perform a complete CBE. The simulated patient was a 40 year old female with a palpable breast mass. We used a multimodality system to collect performance data as participants performed the examination. Electronic sensor data were collected from the simulator and the clinicians’ hands were video recorded. After performing the CBE, participants documented their exam findings on an assessment form.
Video data
CBE performance videos were reviewed and coded by a single observer. Two parameters were measured using the video data: CBE times and number of hands used. Participants who switched between one- and two-handed techniques were classified according to the method used more often.
Sensor Data
Three parameters were measured using the sensor data: CBE time, number of hands used and CBE average force. The CBE time was calculated as the total time during which the applied force exceeded the baseline by more than 1.0 N.
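This thresholding can be sketched as follows. The code below is an illustrative NumPy reconstruction, not the study's MATLAB implementation: it assumes each 44 × 44 pressure frame has already been calibrated and summed to a per-frame total force in newtons, and the function and parameter names are hypothetical.

```python
import numpy as np

def active_exam_time(total_force, baseline, fs=90.0, threshold=1.0):
    """Seconds during which the total applied force exceeds the
    baseline by more than `threshold` newtons.

    total_force : 1-D array of per-frame total force (N), sampled at `fs` Hz.
    baseline    : resting force level of the unloaded simulator (N).
    """
    total_force = np.asarray(total_force, dtype=float)
    # Boolean mask of "active" frames; each frame spans 1/fs seconds.
    active = total_force > (baseline + threshold)
    return active.sum() / fs
```

For example, a recording sampled at 90 Hz in which the force exceeds the threshold for 360 consecutive frames would yield an active examination time of 4.0 seconds.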
Data Analysis
Data from the video review showed that during a one-handed exam, the examiner typically palpates the breast in a serial manner, moving from one area to another. Therefore, the spatial data should correlate in the time domain. In contrast, during a two-handed exam, the examiner commonly goes back and forth between the two areas being palpated. In this case, areas of anti-correlation will be found.
Using the video review data, an algorithm was developed to predict the number of hands used at any given moment. This was done in two steps. First, an area of palpation was identified. Then, the algorithm searched for a second area of palpation that anti-correlated with the first. These two steps were repeated at 0.1-second intervals, with the correlation calculated over a one-second time window. The window length was selected after observing an average palpation frequency of 1–2 Hz. For stability, at any given moment the hand count reported by the algorithm was the majority vote of the last five intervals.
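The two steps above can be sketched as a windowed correlation test with majority-vote smoothing. This is a simplified illustration under stated assumptions, not the study's algorithm: it assumes the per-region force signals have already been extracted from the pressure map, uses a hypothetical anti-correlation cutoff of −0.5, and all names are illustrative.

```python
import numpy as np
from collections import deque

def hands_used(roi_signals, fs=90, window_s=1.0, step_s=0.1, history=5):
    """Label each 0.1 s step as one- or two-handed.

    roi_signals : array of shape (n_roi, n_samples) holding the summed
                  force in each region of interest, sampled at `fs` Hz.
    A step is flagged two-handed when any pair of regions is strongly
    anti-correlated over the preceding one-second window.
    """
    win, step = int(window_s * fs), int(step_s * fs)
    recent = deque(maxlen=history)  # smoothing over the last few intervals
    labels = []
    for end in range(win, roi_signals.shape[1] + 1, step):
        seg = roi_signals[:, end - win:end]
        corr = np.corrcoef(seg)
        # Anti-correlation between two regions suggests alternating hands.
        two_handed = (corr < -0.5).any()
        recent.append(2 if two_handed else 1)
        # Majority vote of the stored intervals, for stability.
        labels.append(2 if sum(recent) > 1.5 * len(recent) else 1)
    return labels
```

With a 1–2 Hz palpation frequency, a one-second window captures at least one full palpation cycle per region, which is what makes the sign of the correlation informative.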
Surveys were coded and analyzed using descriptive and comparative inferential statistics. The sensor data were analyzed using MATLAB R2010b. Data analysis was performed using IBM SPSS Statistics version 20.
Results
Participant demographics can be seen in Table 1. Examiners who correctly identified the mass applied more average pressure throughout the exam (12.53 N, SD=5.23) than those who did not find the mass (6.67 N, SD=0.21) [p=.001] [Figure 1]. Clinicians who correctly identified the patient findings spent, on average, more time examining the patient (63.69 ± 18.15 sec) than those who did not (54.1 ± 24.89 sec). When the single outlier among those who did not correctly identify the findings was excluded (119 sec), the difference became significant (M = 49.48 ± 17.98 sec) [p<0.05].
Table 1.
Demographic characteristics of participants (n=32)
| Characteristic | Count (%) or Mean (SD) |
|---|---|
| Provider type | |
| Physician | 30 (94%) |
| Resident | 2 (6%) |
| Specialty | |
| Breast Surgeon | 17 (53%) |
| General Surgeon | 11 (34%) |
| Surgical Oncology | 4 (13%) |
| Average year in training or practice | 16.22 (11.08) |
| Country | |
| United States | 14 (44%) |
| International | 12 (37%) |
| Unreported | 6 (19%) |
| Number of breast exams performed per week | |
| 0–5 | 2 (6%) |
| 6–10 | 2 (6%) |
| 11–20 | 1 (3%) |
| >20 | 27 (84%) |
| Male | 21 (66%) |
| Teach students or residents how to perform breast exams | 18 (56%) |
| Used manikins/models for teaching or learning | 12 (38%) |
Figure 1.
Average pressure applied by participants who did not find the mass and who found the mass
The average time spent on the breast exam according to the video review was 60.28 ± 21.28 sec compared to 59.19 ± 21.76 sec according to the computer algorithm (alpha=.99). The average absolute difference between the two methods was 2.16 ± 3.39 sec.
The video review classified 8 users as one-handed and 24 as two-handed. The computer algorithm correctly identified 6 of the 8 one-handed CBEs (75%) and 21 of the 24 two-handed CBEs (87.5%) (kappa=.82). There was no significant relationship between the number of hands used during the CBE and accuracy.
The difference between one- and two-handed exam technique can be seen in Figure 2. Figure 2a shows a one-handed exam. The average signal from four adjacent regions of interest (ROI) is presented in subplot IV. With the one-handed user, it can be seen that while different pressure values are measured, the different signals correlate (ROI #1–4). In Figure 2b a two-handed exam is shown with corresponding ROI. Unlike the one-handed exam, ROI #1 and #3 correlate with each other and anti-correlate with ROI #2 and #4. In this specific example, the examiner switched from a one-handed to a two-handed exam technique at around 30 seconds. This is illustrated in subplot III by the change in the total force and palpation frequency used.
Figure 2.
Screenshots of the computer program. An example of the examiner using the one-handed technique can be seen on the left (A) and using the two-handed technique on the right (B). Clockwise from upper right: (I) Video image. (II) Pressure map with blue representing low pressure and red high. (III) Graph of total force over time. The vertical green lines represent the beginning and end of the examination and the red dash-dot line is the time currently depicted in the video, pressure map and regions of interest (ROI). A synchronization peak is present before the first green vertical line. (IV) The sum pressure for different ROI is depicted. A one second window is shown. The ROI are marked on the pressure map with red squares and corresponding numbers.
Conclusions & Discussion
Data collected at this meeting helped us further comprehend how healthcare providers perform the CBE. Quantifying the CBE is essential to understanding which aspects of the exam influence accuracy. The combination of video recordings and the pressure mapping system served as a feasible and reliable method to collect new information on CBE techniques.
Our results show that examiners applying more pressure during the CBE were more likely to find the breast mass. These findings are consistent with prior studies that correlated increased pressure with improved accuracy in other simulated physical exams [13,14]. In addition, there was a trend towards improved accuracy with increased examination time. Of interest, there was no significant relationship between the number of hands used during the CBE and accuracy; however, further study is needed in this area.
One limitation of this study was the high accuracy rate demonstrated by participants. The specialized nature of the participant group or the difficulty of the clinical presentation are possible explanations for this high success rate. Future studies will include simulators reflecting alternate breast pathologies and examiners from other healthcare provider levels and clinical specialties. This may help us gain a better understanding of the relationship between CBE technique and clinical performance.
Future work will include evaluating different aspects of the CBE such as search patterns and the contributions of finger/hand utilization. These features can be easily categorized with video review, and subsequent studies will focus on developing associated algorithms. In order to further test the validity of our system, more data need to be collected and analyzed.
The new multimodality pressure mapping and video-recording system provided data to develop reliable algorithms that accurately measured the number of hands used during the exam in addition to the length of time spent. These algorithms were confirmed with video analysis of the CBE. In addition, the pressure mapping system provided pressure data that cannot be detected from observation alone. This is a first step in the objective classification of the components of the CBE.
Acknowledgments
This research was supported by the National Institutes of Health R01EB011524 Grant titled Validation of Sensorized Breast Models for High Stakes Testing.
References
- 1.McDonald S, Saslow D, Alciati MH. Performance and reporting of clinical breast examination: A review of the literature. CA: A Cancer Journal for Clinicians. 2004;54:345–361. doi: 10.3322/canjclin.54.6.345. [DOI] [PubMed] [Google Scholar]
- 2.Saslow D, Hannan J, Osuch J, Alciati MH, Baines C, Barton M, et al. Clinical breast examination: Practical recommendations for optimizing performance and reporting. CA: A Cancer Journal for Clinicians. 2004;54:327–344. doi: 10.3322/canjclin.54.6.327. [DOI] [PubMed] [Google Scholar]
- 3.Barton MB, Harris R, Fletcher SW. Does this patient have breast cancer? The screening clinical breast examination: Should it be done? How? The Journal of the American Medical Association. 1999;282(13):1270–1280. doi: 10.1001/jama.282.13.1270. [DOI] [PubMed] [Google Scholar]
- 4.Fenton JJ, Barton MB, Geiger AM, Herrinton LJ, Rolnick SJ, Harris EL, et al. Screening clinical breast examination: How often does it miss lethal breast cancer? Journal of the National Cancer Institute. Monographs. 2005;35:67–71. doi: 10.1093/jncimonographs/lgi040. [DOI] [PubMed] [Google Scholar]
- 5.Chalabian J, Dunnington G. Do our current assessments assure competency in clinical breast evaluation skills? The American Journal of Surgery. 1998;175(6):497–502. doi: 10.1016/s0002-9610(98)00075-0. [DOI] [PubMed] [Google Scholar]
- 6.Chalabian J, Garman K, Wallace P, Dunnington G. Clinical breast evaluation skills of house officers and students. The American Journal of Surgery. 1996;62(10):840–845. [PubMed] [Google Scholar]
- 7.Hall DC, Adams CK, Stein GH, Stephenson HS, Goldstein MK, Pennypacker HS. Improved detection of human breast lesions following experimental training. Cancer. 1980;46(2):408–414. doi: 10.1002/1097-0142(19800715)46:2<408::aid-cncr2820460233>3.0.co;2-p. [DOI] [PubMed] [Google Scholar]
- 8.Pennypacker HS, Naylor L, Sander AA, Goldstein MK. Why can’t we do better breast examinations? Nurse Practitioner Forum. 1999;10(3):122–128. [PubMed] [Google Scholar]
- 9.Katz VL, Dotters D. Breast Diseases. In: Lentz GM, Lobo RA, Gershenson DM, Katz VL, editors. Lenz: Comprehensive Gynecology. 6th Ed. Philadelphia, PA: Elsevier Mosby; 2012. pp. 301–334. [Google Scholar]
- 10.Pugh CM, Domont ZB, Salud LH, Blossfield KM. A simulation-based assessment of clinical breast examination technique: Do patient and clinician factors affect clinical approach? The American Journal of Surgery. 2008;195:874–880. doi: 10.1016/j.amjsurg.2007.10.018. [DOI] [PubMed] [Google Scholar]
- 11.Salud LH, Pugh CM. Use of sensor technology to explore the science of touch. Studies in Health Technology and Informatics. 2011;163:542–548. [PubMed] [Google Scholar]
- 12.Pugh C, Rosen J. Qualitative and quantitative analysis of pressure sensor data acquired by the E-pelvis simulator during simulated pelvic examinations. Medicine Meets Virtual Reality. 2002;85:376–379. [PubMed] [Google Scholar]
- 13.Pugh CM, Youngblood P. Development and validation of assessment measures for a newly developed physical examination simulator. Journal of American Medical Informatics Association. 2002;9(5):448–460. doi: 10.1197/jamia.M1107. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Balkissoon R, Blossfield K, Salud L, Ford D, Pugh C. Lost in translation: unfolding medical students' misconceptions of how to perform a clinical digital rectal examination. The American Journal of Surgery. 2009;197(4):525–532. doi: 10.1016/j.amjsurg.2008.11.025. [DOI] [PubMed] [Google Scholar]


