Abstract
Purpose
This paper reports the validation of an assessment instrument designed to measure the outcomes of training in evidence-based practice (EBP) within the context of dentistry. The instrument, the Evidence-Based Practice Knowledge, Attitudes, Access, and Confidence Evaluation (KACE), measures four EBP dimensions: (1) understanding of EBP concepts, (2) attitudes about EBP, (3) evidence accessing methods, and (4) confidence in critical appraisal. The KACE has four scales, totaling 35 items: EBP knowledge (10 items), EBP attitudes (10), accessing evidence (9), and confidence in critical appraisal (6).
Methods
Four elements of validity were assessed: consistency of items within the KACE scales (the extent to which items within a scale measure the same dimension), discrimination (the capacity to detect differences between individuals with different training or experience), responsiveness (the capacity to detect the effects of education on trainees), and test-retest reliability. Internal consistency of scales was assessed with Cronbach's alpha, using responses from KACEs completed by first- and second-year dental students, dental residents, and dental school faculty. Discriminative validity was assessed by comparing KACE scores for students, residents, and faculty members. Responsiveness was assessed by comparing pre- and post-training responses for dental students and residents. To measure test-retest reliability, the KACE was completed twice by a class of first-year dental students 17 days apart, and the knowledge scale was completed twice by 16 dental faculty 14 days apart.
Results
Item-to-scale consistency ranged from 0.21 to 0.78 for knowledge, 0.57 to 0.83 for attitudes, 0.62 to 0.84 for accessing evidence, and 0.87 to 0.94 for confidence. For discrimination, ANOVA and post-hoc testing by the Tukey-Kramer method revealed significant score differences among students, residents, and faculty consistent with education and experience levels. For responsiveness to training, dental students and residents demonstrated statistically significant changes, in desired directions, from pre- to post-test. For the student test-retest, Pearson correlations for the KACE scales were: knowledge (0.66), attitudes (0.66), evidence accessing (0.74), and confidence (0.76). For the knowledge scale test-retest by faculty, the Pearson correlation was 0.79.
Conclusion
The construct validity of the KACE is equivalent to that of instruments that assess similar EBP dimensions in medicine. Item consistency for the knowledge scale was more variable than for other KACE scales, a finding also reported for medically-oriented EBP instruments. The KACE has good discriminative validity, responsiveness to training effects, and test-retest reliability.
Keywords: Evidence-based practice, critical appraisal, dental education, assessment
Introduction
Evidence-based practice (EBP) requires the integration of the best research evidence with patients' values and clinical circumstances in clinical decision-making.1 EBP has been proposed as a mechanism to promote the transfer of research evidence into the day-to-day provision of health care services. One of the challenges for dentists and other health care providers is staying abreast of new developments in the biomedical sciences and clinical practice. Faced with a volume of scientific information that is impossible to consume in total, providers must develop the capacity for focused access to, and appraisal of, the literature to guide their practices and stay current. A core attribute of competent health care practice is the ability to locate, analyze, and use high-quality evidence to facilitate provision of optimal patient care and guide the professional development needed to maintain competence throughout a 30-40 year career.2 However, a 2005 systematic review that examined the relationship between physicians' time in practice and quality of care concluded that providers' age and time in practice were associated with lower levels of knowledge and lower adherence to standards of care, with a trend toward less optimal patient outcomes.3
Virtually all assessments of health professions education, including dental education, recommend focusing on evidence-based practice in the curriculum as a mechanism for educating students to provide patient care supported by research evidence, rather than the historical "in my hands" approach,4 and for instilling an educational culture that values and promotes intellectual curiosity.4-13 In response, dental schools are implementing courses for students and training for faculty in EBP to build a dental workforce that has the capacity to make informed use of biomedical science.14,15
As more dental schools implement EBP training, strategies are needed to assess the effectiveness of these efforts and determine the competence of trainees. There have been numerous reports of the outcomes of EBP training.16-20 However, systematic reviews of EBP training outcomes have indicated that evaluation designs were often faulty and few investigators used validated assessment instruments, which hindered interpretation of results.21,22 In 2006, Shaneyfelt et al. reviewed instruments designed to evaluate trainees' acquisition of EBP skills.23 From among 104 instruments reported in 347 articles published from 1980 through 2006, these investigators identified seven instruments that met standards for format and validity testing. These instruments employ a questionnaire format and measure one or more of the following EBP dimensions: knowledge, attitudes toward evidence-based practice, search strategies, frequency of use of evidence sources, current application of EBP, intended future use of EBP, and confidence. For the knowledge dimension of these questionnaires, respondents are presented with one or more patient scenarios followed by questions addressing elements of EBP. Response options for knowledge questions are typically one-best-answer multiple-choice or true-false. For the other dimensions, respondents self-report their attitudes, search strategies, use of evidence sources, current use, future intentions, and confidence, guided by rating scales. This small group of validated instruments included the EBP Knowledge and Searching Skills Survey, reported in a series of papers by Taylor et al., Bradley and Herrin, and Bradley et al.;24-26 the Berlin Questionnaire, reported by Fritsche et al. and Akl et al.;27,28 and an unnamed questionnaire developed by Johnson et al. that has scales for EBP knowledge, attitude, current personal use, and intended future use.29 These instruments have been frequently used to measure EBP training outcomes. All three were developed for medical students, residents, or practicing physicians; the questions, terminology, and patient care scenarios reflect a medical context. None of the EBP assessment instruments reviewed by Shaneyfelt et al. was developed for dentistry.
To fill this void, we developed a new assessment tool to measure the outcomes of EBP training in the dental environment. The Knowledge, Attitudes, Access, and Confidence Evaluation (KACE) permits dentally relevant evaluation of EBP knowledge, attitudes about EBP, sources for accessing evidence, and confidence in critical appraisal skills. Our goal was to develop an instrument that can be administered in a variety of settings without extensive directions, used by students, residents, or practitioners, and completed in a convenient time of 15 to 20 minutes. The catalyst for the development of the KACE was an expanded focus on critical appraisal at The University of Texas Health Science Center at San Antonio Dental School (UTHSCSA-DS). Since 2007, UTHSCSA-DS has implemented a curriculum project, supported by a grant from NIH/NIDCR, to develop students' capacity for critical appraisal of evidence. Co-author John Rugh is the Principal Investigator of this initiative, which has been described previously.30
Methods
Development of the KACE
The Taylor, Bradley, Fritsche, and Johnson EBP questionnaires served as source material for the pilot version of the KACE, created in 2007. The items and response formats in the attitudes, confidence, and evidence accessing sections of the source questionnaires transferred reasonably well from the medical to a dental context. However, the scenario-based knowledge sections depicted medical conditions that are not introduced to our dental students in depth until the third year of school, were not relevant to oral health care, and focused on several EBP concepts that are not emphasized in our curriculum. Therefore, we developed a new knowledge scale for the KACE, linked to the objectives of the EBP course completed by our students. Table 1 lists the 10 learning outcomes for this course. One question was developed for each learning outcome in multiple-choice format. For example, for the learning outcome "recognize the hierarchy of evidence from weakest to highest quality," the associated question in the knowledge section of the KACE is shown below Table 1.
Table 1.
Learning Outcomes of Evidence-Based Practice Course at UTHSCSA-DS and Basis for KACE Knowledge Questions

| | Learning Outcomes |
|---|---|
| 1 | Recognize hierarchy of evidence (evidence pyramid) and identify types of evidence ranging from weak to highest quality |
| 2 | Identify the quality of specific evidence sources when provided examples |
| 3 | Describe a PICO question |
| 4 | Implement a systematic strategy for finding evidence |
| 5 | Discuss validity issues related to sample size |
| 6 | Identify threats to validity for studies reporting research on diagnostic techniques and treatment techniques |
| 7 | Demonstrate awareness of the Cochrane Collaboration and other systematic reviews |
| 8 | Identify study designs and distinguish features unique to commonly used study designs |
| 9 | Identify search strategies for specific patient care scenarios / questions |
| 10 | Identify sensitivity and specificity and distinguish between them |
In judging the quality of the dental literature, which one of the following is the highest level of evidence?

- Article on a non-randomized clinical trial that includes references
- Case series article that has been peer reviewed
- Cochrane Review of an oral health topic
- Detailed report of a clinical case by a recognized leader in that field
- I don't know
The KACE was pilot tested with students and faculty to assess item clarity and measure completion time. Feedback obtained from pilot testing, together with internal scrutiny by the investigators, led to the current version of the KACE. Table 2 indicates the scales, items, and response formats and provides examples of items. Knowledge questions are in a one-best-response format with five options, including "I don't know" to minimize random guessing. KACE respondents are encouraged to select "I don't know" if they can only respond by guessing. All knowledge items are scored as incorrect or correct, with weights of 0 and 1 assigned, respectively. "I don't know" responses and blank responses are scored as incorrect and assigned a weight of zero. The attitudes, evidence accessing, and confidence scales all employ five-point scales. For attitudes, the scale range for each item is 1 = "Strongly Disagree" to 5 = "Strongly Agree". For the evidence access scale, the scale range for each item is 1 = "Never Use" to 5 = "Very Frequently Use". For confidence, the scale range for each item is 1 = "Not at All Confident" to 5 = "Very Confident".
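For readers implementing the instrument electronically, these scoring rules translate directly into code. The following is a minimal sketch in Python; the answer key, option labels, and function names are our own illustrations, not part of the published KACE.

```python
# Minimal sketch of KACE scoring as described above. The answer key and
# option labels are hypothetical placeholders, not the actual KACE key.

ANSWER_KEY = ["C", "A", "B", "D", "C", "B", "A", "D", "C", "B"]  # hypothetical

def score_knowledge(responses):
    """One point per correct answer; "I don't know" (labeled "E" here)
    and blank (None) responses score 0, giving a 0-10 range."""
    return sum(1 for given, correct in zip(responses, ANSWER_KEY)
               if given == correct)

def score_likert(ratings):
    """Likert scales are sums of 1-5 item ratings: attitudes (10 items,
    range 10-50), evidence access (9 items, 9-45), confidence (6 items, 6-30)."""
    if not all(r in (1, 2, 3, 4, 5) for r in ratings):
        raise ValueError("each rating must be an integer from 1 to 5")
    return sum(ratings)

# A respondent who answers four items correctly and marks "I don't know"
# on the rest scores 4 on the knowledge scale.
responses = ["C", "A", "E", "E", "C", "E", "E", "E", "C", "E"]
print(score_knowledge(responses))  # -> 4
```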
Table 2.
KACE Scales, Example Items, Number of Items Within Scales, Response Formats, and Scoring Range for each Scale
| Scales and Example Items | Number of Items | Response Format | Scale Range for Scoring |
|---|---|---|---|
| Knowledge of Core EBP Concepts ("Which of the following statements best describes a PICO?") | 10 | 5-option MCQ: one best response | 0-10 |
| Attitudes about EBP ("EBP should be an integral part of the dental school curriculum.") | 10 | 5 options: Strongly Disagree (1) to Strongly Agree (5) | 10-50 |
| Sources for Accessing Evidence ("How frequently do you access dental evidence from the Cochrane Database of Systematic Reviews?") | 9 | 5 options: Never Use (1) to Very Frequently Use (5) | 9-45 |
| Confidence in Doing Critical Appraisal ("How confident are you at appraising the appropriateness of the study design?") | 6 | 5 options: Not at All Confident (1) to Very Confident (5) | 6-30 |
Validation of the KACE
Validation of the KACE was based on standards set by the American Psychological Association, the American Educational Research Association, and the National Council on Measurement in Education in their joint report, Standards for Educational and Psychological Testing, and on other sources.31-33
The extent to which items within each scale measured the same dimension, an indicator of the soundness of instrument construction, was assessed by analyzing responses from KACEs completed by dental faculty, dental residents, and first- and second-year dental students, using Cronbach's coefficient alpha.34 Discriminative validity was assessed by comparing KACE pre-training scores for dental students, dental residents, and faculty members via ANOVA with post-hoc pairwise comparisons using the Tukey-Kramer test.35 Responsiveness was assessed by comparing pre- and post-training scores, via paired-samples t-tests, for dental students who completed the 2008 and 2009 EBP courses and for dental residents in a Research Methods and EBP course conducted in 2009. To measure test-retest reliability, the KACE was completed twice by freshmen dental students 17 days apart, and the KACE knowledge scale was completed twice by dental faculty 14 days apart. Test-retest reliability was assessed by Pearson product-moment correlations. Overall means for each KACE scale were calculated as indicated in Table 2. Statistical comparisons were based on subjects' overall total score for each scale.
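As a companion to this analysis plan, the sketch below shows how Cronbach's coefficient alpha is computed from a respondents-by-items score matrix. This is a generic implementation of the standard formula, not the code used in the study.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items in the scale
    item_vars = scores.var(axis=0, ddof=1)      # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Demonstration with simulated 0/1 knowledge responses: independent random
# items share no common dimension, so alpha comes out near zero.
rng = np.random.default_rng(0)
print(round(cronbach_alpha(rng.integers(0, 2, size=(150, 10))), 3))
```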
Approval to collect and analyze the data for the validation was obtained from the UTHSCSA Institutional Review Board (UTHSCSA IRB Protocol HSC20080091H). For all administrations of the KACE, students, residents and faculty members were informed of the uses of the collected data and were provided an IRB-approved information sheet that reviewed the study protocol and described measures to protect confidentiality. Participation was voluntary in all instances. All validation data were obtained from administrations of the KACE in 2008 and 2009.
Results
Consistency (Reliability) of Items Within the KACE Scales
Item-to-overall-scale consistency (reliability), as assessed by Cronbach's coefficient alpha, is depicted for the knowledge, attitudes, evidence accessing, and confidence scales in Tables 3-6, respectively. Findings are reported for first- and second-year dental students, dental residents, and dental school faculty members. The number of subjects who completed each scale fully (i.e., provided a response for all items in the scale) is indicated in these tables. There is minor variation in the number of subjects reported from table to table because some individuals did not fully complete all items on each scale. Alphas for the knowledge scale ranged from 0.208 to 0.781 across seven administrations of the entire KACE instrument completed by 600 subjects (Table 3). Alphas for the EBP attitudes scale ranged from 0.573 to 0.834 across seven administrations with 584 subjects (Table 4). Item reliability for the evidence accessing scale ranged from 0.617 to 0.844 for 581 subjects (Table 5). Item reliability for the confidence scale ranged from 0.872 to 0.941 for 583 subjects (Table 6).
Table 3.

Internal Consistency (Reliability) for the KACE Knowledge Scale

| Subjects | Number of Subjects | Purpose of Test | Cronbach's Alpha |
|---|---|---|---|
| 1st Year Dental Students | 95 | 1st administration of test-retest reliability assessment | 0.439 |
| 1st Year Dental Students | 87 | 2nd administration of test-retest reliability assessment | 0.527 |
| 2nd Year Dental Students | 149 | Pre-test before EBP course | 0.687 |
| 2nd Year Dental Students | 152 | Post-test after EBP course | 0.410 |
| Dental Residents (postgraduate education) | 29 | Pre-test before Research and EBP course | 0.781 |
| Dental Residents (postgraduate education) | 28 | Post-test after Research and EBP course | 0.208 |
| Dental School Faculty | 60 | Baseline assessment prior to EBP Initiative at UTHSCSA-DS | 0.664 |

Table 4.

Internal Consistency for the KACE Attitude Scale

| Subjects | Number of Subjects | Purpose of Test | Cronbach's Alpha |
|---|---|---|---|
| 1st Year Dental Students | 92 | 1st administration of test-retest reliability assessment | 0.779 |
| 1st Year Dental Students | 82 | 2nd administration of test-retest reliability assessment | 0.641 |
| 2nd Year Dental Students | 148 | Pre-test before EBP course | 0.834 |
| 2nd Year Dental Students | 150 | Post-test after EBP course | 0.809 |
| Dental Residents (postgraduate education) | 29 | Pre-test before Research and EBP course | 0.742 |
| Dental Residents (postgraduate education) | 25 | Post-test after Research and EBP course | 0.573 |
| Dental School Faculty | 58 | Baseline assessment prior to EBP Initiative at UTHSCSA-DS | 0.829 |

Table 5.

Internal Consistency for the KACE Evidence Accessing Scale

| Subjects | Number of Subjects | Purpose of Test | Cronbach's Alpha |
|---|---|---|---|
| 1st Year Dental Students | 91 | 1st administration of test-retest reliability assessment | 0.829 |
| 1st Year Dental Students | 82 | 2nd administration of test-retest reliability assessment | 0.844 |
| 2nd Year Dental Students | 148 | Pre-test before EBP course | 0.805 |
| 2nd Year Dental Students | 149 | Post-test after EBP course | 0.761 |
| Dental Residents (postgraduate education) | 29 | Pre-test before Research and EBP course | 0.617 |
| Dental Residents (postgraduate education) | 26 | Post-test after Research and EBP course | 0.701 |
| Dental School Faculty | 56 | Baseline assessment prior to EBP Initiative at UTHSCSA-DS | 0.767 |

Table 6.

Internal Consistency for the KACE Confidence Scale

| Subjects | Number of Subjects | Purpose of Test | Cronbach's Alpha |
|---|---|---|---|
| 1st Year Dental Students | 92 | 1st administration of test-retest reliability assessment | 0.940 |
| 1st Year Dental Students | 82 | 2nd administration of test-retest reliability assessment | 0.931 |
| 2nd Year Dental Students | 147 | Pre-test before EBP course | 0.941 |
| 2nd Year Dental Students | 151 | Post-test after EBP course | 0.894 |
| Dental Residents (postgraduate education) | 29 | Pre-test before Research and EBP course | 0.872 |
| Dental Residents (postgraduate education) | 26 | Post-test after Research and EBP course | 0.922 |
| Dental School Faculty | 56 | Baseline assessment prior to EBP Initiative at UTHSCSA-DS | 0.917 |
Discriminative Validity (Sensitivity)
The capacity of the KACE to detect response differences between members of the dental school community with dissimilar levels of education and experience was assessed by comparing the baseline scores obtained by dental students, dental residents, and faculty members prior to EBP training. As displayed in Table 7, assessment of between-subjects effects by ANOVA indicated statistically significant differences for each scale. For the knowledge scale, ANOVA revealed an overall significant difference among the subject groups (F(2,235) = 23.7, p < 0.001). Post-hoc pairwise comparisons among subject groups using the Tukey-Kramer test indicated that faculty members scored significantly higher on the pre-training EBP knowledge test than both second-year students (p < 0.001) and residents (p < 0.001). The mean knowledge scores of second-year students (3.8 ± 2.4) and residents (4.1 ± 2.8) were not significantly different (p = 0.764).
Table 7.

Comparison of Dental Student, Dental Resident, and Dental School Faculty Members' Scores on KACE Scales Prior to EBP Training

| Group | Knowledge Scale (mean±SD) | Attitudes Scale (mean±SD) | Accessing Evidence Scale (mean±SD) | Confidence Scale (mean±SD) |
|---|---|---|---|---|
| 2nd Year Dental Students | 3.8 ± 2.4 (N = 149) | 23.6 ± 5.2 (N = 142) | 21.8 ± 5.4 (N = 141) | 16.5 ± 5.0 (N = 147) |
| Dental Residents | 4.1 ± 2.8 (N = 29) | 26.6 ± 4.9 (N = 27) | 25.0 ± 4.0 (N = 28) | 15.9 ± 4.5 (N = 29) |
| Faculty Members | 6.3 ± 2.3 (N = 60) | 26.6 ± 6.4 (N = 54) | 27.8 ± 5.3 (N = 50) | 21.7 ± 5.0 (N = 55) |
| Total | 4.4 ± 2.6 (N = 238) | 24.7 ± 5.6 (N = 223) | 23.6 ± 5.8 (N = 219) | 17.7 ± 5.4 (N = 231) |
| ANOVA | F(2,235) = 23.7, p < 0.001 | F(2,220) = 7.9, p < 0.001 | F(2,216) = 25.1, p < 0.001 | F(2,228) = 24.6, p < 0.001 |
For the attitudes scale, ANOVA revealed an overall significant difference among the subject groups (F(2,220) = 7.9, p < 0.001). Faculty members' mean attitudes scores (26.6 ± 6.4) were significantly more positive (p < 0.002) than those of dental students (23.6 ± 5.2), but there was no difference between the attitude scores of faculty and residents (p = 1.000). Residents' attitudes about EBP (26.6 ± 4.9) were also significantly more positive (p = 0.025) than those of the students. For the evidence accessing scale, ANOVA revealed an overall significant difference among the subject groups (F(2,216) = 25.1, p < 0.001), with post-hoc comparisons indicating that dental students used significantly fewer sources than either residents (p < 0.011) or faculty members (p < 0.001). Resident and faculty responses were not significantly different (p = 0.061). For confidence, ANOVA indicated an overall significant difference among the subject groups with regard to their confidence in performing key critical appraisal tasks (F(2,228) = 24.6, p < 0.001). Faculty members were significantly more confident (p < 0.001) than either students or residents. Students' and residents' confidence scores were not significantly different (p = 0.809).
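The discriminative-validity analysis can be reproduced with standard library routines. The sketch below pairs scipy's one-way ANOVA with the Tukey HSD procedure in statsmodels, which applies the Tukey-Kramer adjustment when group sizes are unequal; the group data are simulated to match the reported knowledge means and SDs, not the actual study data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Simulated knowledge scores matching the group means/SDs in Table 7;
# these are illustrative stand-ins, not the study data.
rng = np.random.default_rng(1)
students = rng.normal(3.8, 2.4, 149).clip(0, 10)
residents = rng.normal(4.1, 2.8, 29).clip(0, 10)
faculty = rng.normal(6.3, 2.3, 60).clip(0, 10)

# Overall between-groups test (one-way ANOVA).
f_stat, p_val = f_oneway(students, residents, faculty)
print(f"F = {f_stat:.1f}, p = {p_val:.4g}")

# Post-hoc pairwise comparisons; with unequal group sizes, Tukey HSD
# here is the Tukey-Kramer procedure.
scores = np.concatenate([students, residents, faculty])
groups = ["student"] * 149 + ["resident"] * 29 + ["faculty"] * 60
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```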
Responsiveness
The capacity of the KACE to detect the effects of training was assessed by comparing pre- and post-training responses for 65 second-year dental students who completed an 18-hour EBP course and for 25 dental residents who completed a 26-hour Research Methods and EBP course. Table 8 presents the dental students' pre- and post-training scores. Table 9 presents these data for the dental residents. Students and residents demonstrated significant changes, in the desired directions, for all KACE scales except the attitude scale for the residents.
Table 8.

Comparison of KACE Scores by Second Year Dental Students Before and After Training in an EBP Course

| Scale | N | Before Training (mean±SD) | After Training (mean±SD) | t | df | p-value (two-tailed paired-samples t-test) |
|---|---|---|---|---|---|---|
| Knowledge | 65 | 3.7 ± 2.5 | 6.5 ± 1.7 | −11.66 | 64 | <0.001 |
| Attitudes | 58 | 22.3 ± 5.1 | 25.0 ± 5.5 | −3.18 | 57 | <0.002 |
| Accessing Evidence | 61 | 21.4 ± 4.8 | 23.5 ± 4.8 | −2.87 | 63 | <0.006 |
| Confidence | 64 | 16.4 ± 5.3 | 20.3 ± 4.3 | −6.60 | 63 | <0.001 |
Table 9.

Comparison of KACE Scores by Dental Residents Before and After Training in an EBP Course

| Scale | N | Before Training (mean±SD) | After Training (mean±SD) | t | df | p-value (two-tailed paired-samples t-test) |
|---|---|---|---|---|---|---|
| Knowledge | 25 | 4.1 ± 2.8 | 8.1 ± 1.3 | −7.80 | 24 | <0.001 |
| Attitudes | 22 | 26.0 ± 4.7 | 28.1 ± 3.9 | −1.80 | 21 | 0.086 |
| Accessing Evidence | 22 | 24.9 ± 3.7 | 30.9 ± 3.7 | −6.03 | 21 | <0.001 |
| Confidence | 24 | 15.8 ± 4.7 | 24.4 ± 3.8 | −7.95 | 23 | <0.001 |
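The pre/post comparisons in Tables 8 and 9 are two-tailed paired-samples t-tests, which can be reproduced as sketched below with simulated data; the negative t values in the tables simply reflect post-training means exceeding pre-training means when differences are taken as pre minus post.

```python
import numpy as np
from scipy.stats import ttest_rel

# Simulated paired pre/post knowledge scores (illustrative, not study data).
rng = np.random.default_rng(2)
pre = rng.normal(4.1, 2.8, 25).clip(0, 10)
post = np.clip(pre + rng.normal(4.0, 1.5, 25), 0, 10)

# Two-tailed paired-samples t-test on the pre - post differences.
t_stat, p_val = ttest_rel(pre, post)
print(f"t = {t_stat:.2f}, df = {len(pre) - 1}, p = {p_val:.4g}")
```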
Test-Retest Reliability
To assess the capacity of the KACE to provide consistent measurement across separate administrations, without formal EBP training within the test-retest interval, the KACE was administered twice, with a 17-day interval, to freshmen dental students shortly after their matriculation. These students were immersed in basic science instruction and had not received any training in EBP. The test-retest reliability of the knowledge scale, which is of particular interest for training programs designed for faculty, was assessed by having 16 dental school faculty members complete that section of the KACE twice, 14 days apart. These individuals were identified from a roster of faculty who had participated in an EBP study club and attended several seminars on critical appraisal. The results of the test-retest reliability assessment with first-year dental students are displayed in Table 10. Small differences in mean scores existed between the first and second tests, with Pearson correlations ranging from 0.66 to 0.76, indicating acceptable test-retest reliability. On the knowledge scale test-retest assessment by selected faculty, the mean was 7.4 ± 1.5 for the first test and 7.9 ± 1.4 for the second; the Pearson correlation was 0.79.
Table 10.

Results of KACE Test-Retest Reliability Assessment Completed by First Year Dental Students at UTHSCSA Dental School

| Scale | N | Test 1 (mean±SD) | Test 2 (mean±SD) | Pearson Correlation |
|---|---|---|---|---|
| Knowledge | 76 | 3.6 ± 1.7 | 3.6 ± 1.9 | 0.66 |
| Attitudes | 67 | 24.2 ± 4.2 | 23.5 ± 3.3 | 0.66 |
| Accessing Evidence | 65 | 22.6 ± 5.7 | 23.6 ± 6.6 | 0.74 |
| Confidence | 70 | 17.9 ± 5.3 | 18.5 ± 5.0 | 0.76 |
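Test-retest reliability here is the Pearson product-moment correlation between the two administrations, computed per scale, as in this brief sketch (simulated pairs, not the study data):

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated attitude-scale scores for two administrations to the same group.
rng = np.random.default_rng(3)
test1 = rng.normal(24.2, 4.2, 67)
test2 = 0.6 * test1 + rng.normal(9.4, 2.7, 67)  # retest correlated with test

r, p = pearsonr(test1, test2)  # test-retest reliability coefficient
print(f"r = {r:.2f} (p = {p:.3g})")
```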
Discussion
This paper reports the validation of the EBP Knowledge, Attitudes, Access, and Confidence Evaluation (KACE), designed to measure the outcomes of EBP training in a dental context. Four EBP dimensions are measured by the KACE: (1) understanding of EBP concepts, (2) attitudes about EBP, (3) evidence accessing, and (4) confidence in critical appraisal. We assessed four elements of validity: internal item-to-scale consistency, discrimination, responsiveness to the effects of education, and test-retest reliability (dependability of measurement). Dental students and residents demonstrated statistically significant changes, in desired directions, from pre- to post-test, demonstrating that the KACE has the capacity to detect the effects of training. For example, residents' EBP knowledge scores nearly doubled from a mean of 4.1 prior to training to 8.1 post-training, and their confidence scores improved dramatically from a mean of 15.8 before training to 24.4 after training. The KACE also demonstrated the capacity to discriminate among students, residents, and faculty. As indicated in Table 7, mean scores reveal a monotonic progression from student to resident to faculty for the knowledge, attitudes, and access scales. On the confidence scale, students rated their critical appraisal skills slightly more positively than residents (16.5 vs. 15.9; p = 0.809), but both groups were significantly less confident than faculty (21.7). Test-retest reliability correlations for students and faculty met accepted standards.33
For item-to-scale consistency, Cronbach's coefficient alpha ranged from 0.21 to 0.78 for knowledge, 0.57 to 0.83 for attitudes, 0.62 to 0.84 for accessing evidence, and 0.87 to 0.94 for confidence. For the access and confidence scales, alphas for all seven administrations exceeded 0.60, and for the attitude scale, only one of seven alphas fell below 0.60. The alphas for the knowledge scale were more variable: 0.781, 0.687, 0.664, 0.527, 0.439, 0.410, and 0.208. This variability has been reported previously for other instruments measuring EBP knowledge. Table 11 summarizes validity data for the knowledge scales of the instruments developed by Taylor, Bradley, and Johnson.24-26,29 The knowledge scales for these instruments ranged from 11 to 23 questions. The variable alphas reported for these scales, ranging from 0.03 to 0.88, are consistent with our findings. Collectively, these investigators conducted 10 assessments of internal item-to-scale consistency with medical residents, practicing physicians, and medical school faculty; five of the analyses produced alphas below 0.60.
Table 11.
Summary of Construct Validity Data Reported for Other EBP Assessment Instruments
Explanations for the variable alphas for the KACE and other EBP assessment instruments include: (1) one or more items do not assess the same domain of knowledge as the other questions; (2) one or more items have construction issues (clarity of stem or options, or terminology) that cause respondents to interpret the question in different ways; and/or (3) high levels of internal consistency are difficult to obtain from an examination with a limited number of items. Because the KACE knowledge questions are based on 10 EBP concepts frequently articulated in the literature and linked to learning outcomes of the EBP course at UTHSCSA-DS, explanation 1 is not likely, but it cannot be completely ruled out. Explanation 2 also seems unlikely, given the frequent pilot testing of the KACE during instrument development. There is evidence in the test construction literature that examinations designed to measure concept comprehension with fewer than 30 items are less likely to produce high levels of internal consistency.32 The appropriate degree of reliability depends on the use of the data and the type of decision. Instruments designed as part of an overall, multi-measure assessment for a training program are typically brief to enhance logistical feasibility, and thus may be somewhat less reliable. High-stakes assessments that require precise measurement, such as national board examinations and other credentialing evaluations, have many more items to achieve high reliabilities. The project goal was to produce an assessment questionnaire that can measure the effects of EBP training across several dimensions, not just knowledge, and can be conveniently used in a variety of settings in 15-20 minutes. Given this goal, including 30 or more knowledge questions was not a viable option.
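The link between scale length and reliability in explanation 3 can be quantified with the Spearman-Brown prophecy formula, a standard psychometric result (the numbers below are illustrative, not drawn from our data):

$$\alpha_k = \frac{k\,\alpha_1}{1 + (k - 1)\,\alpha_1}$$

where $\alpha_1$ is the reliability of the original scale and $k$ is the factor by which the scale is lengthened with comparable items. For example, lengthening a 10-item scale with $\alpha_1 = 0.50$ to 30 items ($k = 3$) would be predicted to yield $\alpha_3 = (3 \times 0.50)/(1 + 2 \times 0.50) = 0.75$, which illustrates why a deliberately brief knowledge scale trades some internal consistency for feasibility.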
Conclusion
In contrast to other EBP assessment instruments that focus primarily on comprehension of concepts, the KACE provides information about the effects of training in four domains: knowledge, attitudes, information accessing and confidence. The KACE is applicable to classroom settings, workshops, seminars, faculty retreats, and online administration. The construct validity of the KACE is comparable to that of other instruments that assess EBP dimensions in medicine. Item consistency for the knowledge scale was more variable than for other KACE scales, a finding also reported for medically-oriented EBP instruments. Overall, the KACE demonstrates good sensitivity to the effects of training, distinguishes among respondents with different educational and experiential levels, has good test-retest reliability and has strong internal consistency when the instrument is considered as a whole, across four dimensions.
Acknowledgments
This study was supported by NIH/NIDCR R25 018663.
Contributor Information
William D. Hendricson, Assistant Dean, Educational and Faculty Development, University of Texas Health Science Center at San Antonio, Dental School.
John D. Rugh, Professor and Chair, Department of Orthodontics, University of Texas Health Science Center at San Antonio, Dental School.
John P. Hatch, Professor, Psychiatry, University of Texas Health Science Center at San Antonio, Dental School.
Debra L. Stark, Evaluation Specialist, Academic Center for Excellence in Teaching, University of Texas Health Science Center at San Antonio.
Thomas Deahl, Adjunct Associate Professor, Department of Orthodontics, University of Texas Health Science Center at San Antonio, Dental School and Institute for Natural Resources, Concord, CA.
Elizabeth R. Wallmann, Fourth Year Dental Student, University of Texas Health Science Center at San Antonio, Dental School.
References
1. Sackett DL, Strauss SE, Richardson WS, Rosenberg WMC, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. London: Churchill Livingstone; 2000.
2. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287(2):226-235. doi:10.1001/jama.287.2.226.
3. Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical experience and quality of health care. Ann Intern Med. 2005;142(4):260-273. doi:10.7326/0003-4819-142-4-200502150-00008.
4. Bertolami CN. Creating the dental school faculty of the future: a guide for the perplexed. J Dent Educ. 2007;71(10):1267-1280.
5. Field MJ, editor. Dental Education at the Crossroads: Challenges and Change. Institute of Medicine Report. Washington, DC: National Academy Press; 1995.
6. U.S. Department of Health and Human Services. Oral Health in America: A Report of the Surgeon General. No. 00-4713. Rockville, MD: USDHHS, NIDCR, NIH; 2000.
7. Institute of Medicine, Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health Care System for the 21st Century. Washington, DC: National Academy Press; 2001.
8. Hendricson WD, Cohen PA. Oral health care in the 21st century: implications for dental and medical education. Acad Med. 2001;77(12):1181-1206. doi:10.1097/00001888-200112000-00009.
9. American Dental Association. Future of Dentistry: Today's Vision, Tomorrow's Reality. Chicago: American Dental Association, Health Policy Resources Center; 2002.
10. Santa Fe Group Special Report: The necessity for major reform in dental education. Global Health Nexus. 2004;6(2):10-15.
11. Depaola DP. The revitalization of U.S. dental education. J Dent Educ. 2008;72(2 Suppl):28-42.
12. Werb SB, Matear DW. Implementing evidence-based practice in undergraduate teaching clinics: a systematic review and recommendations. J Dent Educ. 2004;68(9):995-1003.
13. Hendricson WD, Andrieu SC, Chadwick DG, Chmar JE, Cole JR, George MC, et al. Educational strategies associated with development of problem-solving, critical thinking and self-directed learning. J Dent Educ. 2006;70(9):925-936.
14. Kassebaum DK, Hendricson WD, Taft T, Haden NK. The dental curriculum at North American dental institutions in 2002-2003: a survey of current structure, recent innovations and planned changes. J Dent Educ. 2004;68(9):914-931.
15. Haden NK, Hendricson WD, Kassebaum DK, Ranney RR, Weinstein G, Anderson EL, Valachovic RW. Curriculum change in dental education, 2003-2009. J Dent Educ. 2010;74(5):539-557.
16. Smith CA, Ganschow PS, Reilly BM. Teaching residents evidence-based medicine skills: a controlled trial of effectiveness and durability. J Gen Intern Med. 2000;15:710-715. doi:10.1046/j.1525-1497.2000.91026.x.
17. MacRae HM, Regehr G, Brenneman F, McKenzie M, McLeod RS. Assessment of critical appraisal skills. Am J Surg. 2004;187:120-123. doi:10.1016/j.amjsurg.2002.12.006.
18. Weberschock TB, Ginn TC, Reinhold J. Change in knowledge and skills of year 3 undergraduates in evidence-based medicine seminars. Med Educ. 2005;39:665-671. doi:10.1111/j.1365-2929.2005.02191.x.
19. Ross R, Verdieck A. Introducing an evidence-based medicine curriculum into a family practice residency: is it effective? Acad Med. 2003;78:412-417. doi:10.1097/00001888-200304000-00019.
20. Crowley SD, Owens TA, Schardt CM. A web-based compendium of clinical questions and medical evidence to educate internal medicine residents. Acad Med. 2003;78:270-274. doi:10.1097/00001888-200303000-00007.
21. Taylor R, Reeves B, Ewings P, Binns S, Keast J, Mears RA. Systematic review of the effectiveness of critical appraisal skills training for clinicians. Med Educ. 2000;34:120-125. doi:10.1046/j.1365-2923.2000.00574.x.
22. Hyde C, Parkes J, Deeks J, Milne R. Systematic Review of the Effectiveness of Teaching Critical Appraisal. Oxford: ICRF/NHS Centre for Statistics in Medicine; 2000.
23. Shaneyfelt T, Baum KD, Bell D, Feldstein D, Houston TK, Kaatz S, et al. Instruments for evaluating education in evidence-based practice: a systematic review. JAMA. 2006;296:1116-1127. doi:10.1001/jama.296.9.1116.
24. Taylor R, Reeves B, Mears R, Keast J, Binns S, Ewings P. Development and validation of a questionnaire to evaluate the effectiveness of evidence-based practice teaching. Med Educ. 2001;35:544-547. doi:10.1046/j.1365-2923.2001.00916.x.
25. Bradley P, Herrin J. Development and validation of an instrument to measure knowledge of evidence-based practice and searching skills. Med Educ Online. 2004;9:15-19. doi:10.3402/meo.v9i.4354.
26. Bradley P, Oterholt C, Herrin J, Nordheim L, Bjorndal A. Comparison of directed and self-directed learning in evidence-based medicine: a randomized controlled trial. Med Educ. 2005;39:1027-1035. doi:10.1111/j.1365-2929.2005.02268.x.
27. Fritsche L, Greenhalgh T, Falck-Ytter Y, Neumayer HH, Kunz R. Do short courses in evidence-based medicine improve knowledge and skills? Validation of Berlin Questionnaire and before and after study of courses in evidence-based medicine. BMJ. 2002;325:1338-1341. doi:10.1136/bmj.325.7376.1338.
28. Akl EA, Izuchukwu IS, El-Dika S, Fritsche L, Kunz R, Schunemann HJ. Integrating an evidence-based medicine rotation into an internal medicine residency program. Acad Med. 2004;79:897-904. doi:10.1097/00001888-200409000-00018.
29. Johnson JM, Leung GM, Fielding R, Tin KYK, Ho LM. The development and validation of a knowledge, attitude and behavior questionnaire to assess undergraduate evidence-based practice teaching and learning. Med Educ. 2003;37(11):992-999. doi:10.1046/j.1365-2923.2003.01678.x.
30. Rugh JD. Keeping up to date: the San Antonio CATs initiative. J Am Coll Dent. 2010; in press.
31. American Educational Research Association, American Psychological Association, and National Council on Measurement in Education, Joint Committee on Standards for Educational and Psychological Testing. Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association; 1999.
32. Downing SM. Validity: on the meaningful interpretation of assessment data. Med Educ. 2003;37:830-837. doi:10.1046/j.1365-2923.2003.01594.x.
33. Downing SM. Reliability: on the reproducibility of assessment data. Med Educ. 2004;38:1006-1012. doi:10.1111/j.1365-2929.2004.01932.x.
34. Bland JM, Altman DG. Statistics notes: Cronbach's alpha. BMJ. 1997;314:572. doi:10.1136/bmj.314.7080.572.
35. Tukey JW. Exploratory Data Analysis. Reading, MA: Addison-Wesley; 1977.