Author manuscript; available in PMC: 2020 Jul 11.
Published in final edited form as: AJOB Empir Bioeth. 2019 Jul 11;10(3):164–172. doi: 10.1080/23294515.2019.1634653

What is the Minimal Competency for a Clinical Ethics Consult Simulation? Setting a Standard for Use of the Assessing Clinical Ethics Skills (ACES) Tool

K Wasson 1, WH Adams 2, K Berkowitz 3,4,*, M Danis 5,**, AR Derse 6, M Kuczewski 7, M McCarthy 8, K Parsi 9, A Tarzian 10,11

Abstract

Background:

The field of clinical ethics is examining ways of determining competency. The Assessing Clinical Ethics Skills (ACES) tool offers a new approach that identifies a range of skills necessary in the conduct of clinical ethics consultation and provides a consistent framework for evaluating these skills. Through a training website, users learn to apply the ACES tool to clinical ethics consultants (CECs) in simulated ethics consultation videos. The aim is to recognize competent and incompetent clinical ethics consultation skills by watching and evaluating a videotaped CEC performance. We report how we set a criterion cut-score (i.e., minimally acceptable score) for judging the ability of users of the ACES tool to evaluate simulated CEC performances.

Methods:

A modified Angoff standard setting procedure was used to establish the cut-score for an end-of-life case included on the ACES training website. The standard setting committee viewed the Futility Case and estimated the probability that a minimally competent CEC would correctly answer each item on the ACES tool. The committee further adjusted these estimates by reviewing data from 31 pilot users of the Futility Case before determining the cut-score.

Results:

Averaging over all 31 items, the proposed proportion correct score for minimal competency was 80%, corresponding to a cut-score between 24 and 25 points out of a possible 31. The standard setting committee subsequently set the minimal competency cut-score to 24 points.

Conclusions:

The cut-score for the ACES tool identifies the number of correct responses a user of the ACES tool training website must attain to “pass” and reach minimal competency in recognizing competent and incompetent skills of the CECs in the simulated ethics consultation videos. The application of the cut-score to live training of CECs and other areas of practice requires further investigation.

Keywords: ethics consultation, clinical ethics, cut-score, simulation, ethics training, standard setting

Background

The field of clinical ethics consultation has matured into a multidisciplinary profession, with clinical ethics consultants (CECs) trained in bioethics, philosophy, theology, law, medicine, nursing, social work, pastoral care, and other disciplines. Along with the development of the field, questions about its nature and what constitutes appropriate knowledge and training have been debated. How to assess the competence of CECs has been discussed widely, including by the American Society for Bioethics and Humanities (ASBH) (ASBH 1998; ASBH 2011; Kodish et al. 2013; Fins et al. 2016; Dubler et al. 2009; Pearlman et al. 2016; Tarzian 2009). Arguments to professionalize the field, set standards, and assess CECs systematically have gained support and momentum from within the field of clinical ethics consultation and have been spearheaded by the ASBH. While prior work suggested assessment could include a portfolio of the CEC's training and experience, checklists for CECs in practice, and other evaluation processes (Fins et al. 2016; Dubler et al. 2009; Flicker et al. 2014; Kodish et al. 2013; Svantesson et al. 2014; Pearlman et al. 2016), the leadership of the ASBH decided to develop a standardized means of assessment and certification through a written examination and formed a Certification Commission (http://asbh.org/professional-development/hcec-certification, 2017). In November 2018, the examination was offered for the first time. Such an approach tests the knowledge of CECs but cannot fully assess the analytic and interpersonal skills that are crucial to conducting ethics consultations.

Four of the authors (KW, MK, MM, KP) were part of the original study team and steering committee that developed a tool to evaluate the interpersonal skills of CECs in 2014: the Assessing Clinical Ethics Skills (ACES) tool (Wasson et al. 2016). The ACES tool drew on the ASBH Core Competencies (2011) and the Veterans Affairs IntegratedEthics® Program's Ethics Consultant Proficiency Assessment Tool (ECPAT) (US Department of Veterans Affairs 2014). The ACES tool is designed to be used in an educational setting with simulated ethics consultation cases. It includes 11 main constructs that assess a range of interpersonal skills, including specific behaviors that the CEC should demonstrate to facilitate an ethics consultation (Table 1). Currently, the ACES tool has two primary purposes: first, it is used in real time to evaluate graduate bioethics students performing as CECs in "live" simulated cases, where each item is scored by a trained rater as Done, Not Done, or Done Incorrectly; second, the ACES website provides training on how to use the tool to assess the performance of a CEC in a variety of videotaped ethics consultations (for a sample, see https://lucapps.luc.edu/clinicalethicsdemo/). Users of the website can work through four different cases and evaluate different CECs as they demonstrate the skills included on the ACES tool. One goal is that, by learning to identify competent ethics consultation skills in the videos, users will also be able to reflect on their own skills and competency in ethics consultation. After each scene, users submit their answers and, for each ACES item, receive feedback on their response to the simulated CEC's performance in the ethics consultation video, including how their answers align with those determined by the ACES steering committee. This feedback appears in written form, showing the user's responses compared with the correct responses, and in a pre-recorded video explaining the correct responses.

Table 1:

ACES Tool Items (© 2016 Loyola University Chicago. Used with permission. All rights reserved.)

Manage the Formal Meeting
 1. Identify yourself and your role as the ethics consultant
 2. Have each party introduce themselves
 3. Explain the purpose of the consult
Gather the Relevant Data
 4. Elicit the relevant medical facts in the case
 5. Confirm the appropriate decision maker
 6. Clarify when needed
Express and Stay within the Limits of the Ethics Consultant’s Role During Meetings or Encounters
 7. Health professionals and administrators should distinguish their clinical roles from their ethics role as needed
 8. Correct errant expectations of participants as needed.
Listen Well, and Communicate Interest, Respect, and Empathy to Participants
 9. Maintain eye contact
 10. Use engaged body language
 11. Express supportive statements early in the consult
 12. Avoid distracting behaviors
Elicit the Moral Views of the Participants in a Non-Threatening Way
 13. Establish the moral views and/or values of participants
 14. Establish the wishes and/or expectations of participants
 15. Clarify when needed
Enable Participants to Communicate Effectively and Be Heard by Other Participants
 16. Give each participant an opportunity to share his or her views
 17. Ask clarifying questions
 18. Redirect conversation when needed
Accurately and Respectfully Represent the Views of Participants to Others
 19. Summarize the conversation reflecting back the views of each party
 20. Allow participants to confirm or clarify
Educate Participants Regarding the Ethical Dimensions of the Case
 21. Identify the ethical issues
 22. Identify the range of ethically (in)appropriate options
 23. Explain the rationale for the ethically (in)appropriate options
 24. Use plain/simple language
Mediate Among Competing Moral Views
 25. Identify common goals including areas of agreement
 26. Identify areas of disagreement
 27. Elicit or propose potential compromise options
Distinguish Ethical Dimensions of the Consultation from Other, often Overlapping, Dimensions
 28. Identify relevant legal issues
 29. Offer other appropriate ancillary services
Identify and Explain a Range of Ethically Justifiable Options and Their Consequences
 30. Summarize potential options/choices already discussed
 31. Facilitate discussion about options (pros/cons) including ways forward

The purpose of the ACES tool as presented on the website is to provide training in the recognition of competent and incompetent simulated CEC performances. In so doing, the ACES tool offers users the opportunity to evaluate simulated ethics consultation cases. It gives trainees a first look at their clinical ethics evaluation skills. The ACES tool also offers other stakeholders, such as mentors, an opportunity to identify trainees who need more preparation before performing live consultations. While recognizing appropriate and inappropriate clinical ethics consultation skills in a simulated environment on the ACES training website does not necessarily generalize to personal competency in the clinical setting, those who are unable to differentiate competent and incompetent simulated CEC performances are unlikely to be successful CECs in the clinical setting. Having developed the ACES training website, in this paper we report how we set a criterion cut-score (i.e., minimally acceptable score) for judging the ability of users of the ACES tool to evaluate one simulated CEC performance.

Methods

Standard Setting Method

The ACES Futility Case was used to set the criterion cut-score for identifying users who can and cannot successfully recognize competent and incompetent simulated CEC performances. The Futility Case narrates the end-of-life decisions a daughter must make on behalf of her dying mother. This case was selected as the framework for the standard setting exercise because it comprises a range of items (k = 31 in total) that measure interpersonal skills, from managing the length, purpose, and structure of the ethics consult to facilitating discussion about the next steps participants may take at the end of the meeting (Table 1). Each item was assessed as correct or incorrect by selecting from three options for each item/question on the ACES tool: Done, Not Done, or Done Incorrectly. This case was also selected due to its comparability in content and difficulty with other ACES cases developed for the website.
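As a concrete illustration of this scoring scheme, the short Python sketch below tallies a user's responses against an answer key. This is our illustration, not the project's code: only two key entries are shown, taken from the Results and Discussion (items 1 and 11); the full 31-item key is defined by the website's scoring rubric.

```python
# Sketch of scoring a user's ACES responses against the steering
# committee's answer key. Entries here are illustrative; the actual
# Futility Case key is defined by the scoring rubric on the website.
ANSWER_KEY = {
    1: "DONE",               # the CEC identifies herself and her role
    11: "DONE_INCORRECTLY",  # supportive statement made, but too late/vague
}

def score_user(responses: dict[int, str], key: dict[int, str] = ANSWER_KEY) -> int:
    """Count the items (1 point each) where the user's choice matches the key."""
    return sum(responses.get(item) == answer for item, answer in key.items())
```

A user "passes" the case when this count reaches the cut-score established below.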

A modified Angoff standard setting procedure was used to establish the cut-score for the Futility Case as described by Cizek (2006). The Angoff standard setting procedure was selected because of its simplicity (Ricker, 2006) and because it is the most widely used and researched method for establishing a cut-score to identify minimal competency. William Angoff introduced the method in 1971, describing it in his seminal chapter as follows:

…keeping the hypothetical ‘minimally acceptable person’ in mind, one could go through the test item by item and decide whether such a person could answer correctly each item under consideration. If a score of one is given for each item answered correctly by the hypothetical person and a score of zero is given for each item answered incorrectly by the person, the sum of the item scores will equal the raw score earned by the ‘minimally acceptable person’ (pp. 514–515).

Later, a slight variation of the Angoff procedure (i.e., the Modified Angoff procedure) was introduced. Here, a judge

…would think of a number of minimally acceptable persons, instead of only one such person, and would estimate the proportion of minimally acceptable persons who would answer each item correctly. The sum of these probabilities would then represent the (judge’s) minimally acceptable score (p. 515).
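In symbols (our notation, not Angoff's): if judge j assigns item i a probability p_ij that a minimally competent consultant answers it correctly, the judge's minimally acceptable raw score over k items is

```latex
C_j \;=\; \sum_{i=1}^{k} p_{ij}
```

How the C_j (or the underlying p_ij) are pooled across judges varies by implementation; the pooling used in this study is described below.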

In our study, the standard setting committee comprised four internal (KW, MK, MM, KP) and four external (KB, MD, AD, AT) content experts familiar with the Futility Case and its assessment. Internal experts were those who developed the ACES tool and external experts were from different institutions and selected based on their knowledge of and expertise in clinical ethics consultation. As noted by Ricker (2006), calibrating raters on what defines minimal competency is one of the main assumptions of the Angoff method. Before participating in the standard setting exercise, we asked the raters to review the ASBH Core Competencies for Health Care Ethics Consultations (2011) and the Department of Veteran Affairs IntegratedEthics® Program’s ECPAT (2014). They were also asked to review the Futility Case scoring rubric (https://lucapps.luc.edu/clinicalethics) and complete an evaluation of the Futility Case themselves via the ACES website. As recommended by Cizek (2006), this process was meant to calibrate the standard setting participants in the content standards, skills, and abilities expected of a minimally competent CEC. Afterward, the committee participated in two rounds of the standard setting exercise to arrive at a proposed cut-score for passing the case. The internal experts completed the standard setting exercises prior to the external experts and each group followed the same process.

During the first round, the committee members were instructed to imagine a large number of minimally competent CECs completing the case, i.e., viewing the Futility Case on the ACES website and using the ACES tool to score the performance of the CEC in the video. For each item on the ACES tool, they were asked to estimate the probability that these minimally competent individuals would correctly assess the CEC’s performance in the video, i.e., correctly answer the ACES item. After providing their answers, the first round concluded by alerting each member of the standard setting committee that, “Next week, you will be asked to repeat this exercise one more time. However, before you do so, additional information on how easy or difficult each item is based on a pilot sample of all individuals who completed the case through July 2016 will be provided to you.”

Operational case data were available for 31 users, meaning 31 individuals had completed the Futility Case via the ACES website as of July 2016. Approximately half of these individuals were under age 55, with the remaining half older than 55 (age range: 25–65). More than half were female (61%), and the clear majority identified as White (90%); only three individuals identified as non-White, including one as Asian, one as Black, and one as Hispanic. Most (84%) held a graduate degree, while 10% held only a Bachelor's degree. Two individuals additionally identified as registered nurses. Regarding their profession, 35% identified as clinical ethicists, 16% as administrators, 16% as chaplains, and 16% as physicians; the remainder (17%) did not report a profession. Finally, 61% reported that they supervise clinical ethicists.

Classical test theory statistics for each item were provided to the standard setting committee between the two standard setting rounds (Table 3). These statistics included an estimate of each item's difficulty (using proportion correct scores) as well as each item's ability to separate high-scoring from low-scoring performers using discrimination statistics, including point biserial correlation coefficients as described by Cody and Smith (2014). The Kuder-Richardson 20 (KR-20) formula, also described by Cody and Smith (2014), was further used to estimate the case's reliability, or internal consistency, for the committee. Each member was asked to review these results as well as the first-round results of his/her colleagues. More specifically, they were asked to compare the probabilities they listed in the first round to their peers' responses as well as to the item difficulty parameters provided in the interim results report. The purpose of providing this information was to reduce group variability and engender consensus on the proposed cut-score (Cizek, 2006). After reviewing this information, each committee member was asked to complete the standard setting exercise one final time. As before, they were instructed to imagine a large new group of minimally competent CECs who completed the Futility Case on the website. For each item, the committee was asked to estimate the probability that these minimally competent individuals would provide the correct answer.
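The study ran these computations in SAS (Cody and Smith 2014). As a rough, self-contained illustration of the statistics provided to the committee, the Python sketch below computes item difficulty, a corrected point-biserial discrimination, and KR-20 from a hypothetical users-by-items matrix of 0/1 scores; the corrected (rest-score) point-biserial variant is our choice, and the original SAS code may differ in details.

```python
import numpy as np

def item_statistics(X: np.ndarray):
    """Classical test theory statistics for a users-by-items 0/1 score matrix.

    X[u, i] = 1 if user u answered item i correctly, else 0.
    Returns per-item difficulty (proportion correct), corrected point-biserial
    discrimination, and the KR-20 reliability coefficient.
    """
    n_users, k = X.shape
    difficulty = X.mean(axis=0)              # proportion correct per item
    total = X.sum(axis=1).astype(float)      # each user's raw score
    # Corrected point biserial: correlate each item with the rest-score
    # (total minus the item itself) so the item cannot inflate its own
    # discrimination. Items everyone answered the same way have zero
    # variance and yield nan here, as for several Futility Case items.
    with np.errstate(invalid="ignore", divide="ignore"):
        pbis = np.array(
            [np.corrcoef(X[:, i], total - X[:, i])[0, 1] for i in range(k)]
        )
    # Kuder-Richardson 20: internal consistency for dichotomous items.
    p, q = difficulty, 1.0 - difficulty
    kr20 = (k / (k - 1)) * (1.0 - (p * q).sum() / total.var(ddof=1))
    return difficulty, pbis, kr20
```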

Once the second round concluded, those responses were used to estimate the proposed cut-score. The median probability estimate among the eight committee members was recorded for all 31 items and yielded an estimate of the proposed proportion correct for minimal competency. Importantly, we used the median (rather than the mean) probability estimate in this study because there were only eight raters available and the median was a more accurate estimate of the center of their judgments. Finally, we used an exact version of the Conover (1999) non-parametric test for equal variances to compare the variability in scores between internal and external raters. These estimates, along with the classical test theory estimates described above, were calculated using SAS version 9.4 (SAS Institute, Cary, NC).
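To make the aggregation concrete, here is a minimal Python sketch of the median-then-average step (the study's computations were done in SAS); the matrix layout mirrors Tables 2 and 4.

```python
import numpy as np

def proposed_cut(ratings: np.ndarray) -> tuple[float, float]:
    """Aggregate modified Angoff ratings into a proposed cut-score.

    ratings: (k_items, n_raters) array of probability estimates in percent,
    laid out as in Tables 2 and 4 (k = 31 items, n = 8 raters here).
    """
    item_medians = np.median(ratings, axis=1)   # median across raters, per item
    proportion = item_medians.mean() / 100.0    # proposed proportion correct
    return proportion, proportion * ratings.shape[0]

# Applied to the round 2 estimates, this yields roughly 0.80 and 24.8 points,
# i.e., a cut-score falling between 24 and 25 of the 31 possible points.
```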

This study was approved by the Institutional Review Board at Loyola University Chicago Health Sciences Division.

Results

Table 1 displays the ACES items and constructs. Results from the standard setting committee's first round of responses are displayed in Table 2. With most estimates exceeding a 70% probability of a correct response, the committee generally believed that minimally competent CECs would have a high probability of correctly answering each item in the Futility Case. However, the committee clearly believed that minimally competent CECs would struggle with three items (items 11, 23, and 28), for which the median estimate was only a 50% probability of a correct response.

Table 2.

Modified Angoff standard setting results for round 1

Item | Internal raters: A, B, C, D | External raters: E, F, G, H | Median (range)
1 Identify yourself and your role as the ethics consultant 100 100 100 100 100 90 95 95 100 (90–100)
2 Have each party introduce themselves 100 100 100 100 100 100 95 100 100 (95–100)
3 Explain the purpose of the consult 80 100 100 100 85 80 95 85 90 (80–100)
4 Elicit the relevant medical facts in the case 80 100 100 80 50 75 95 90 85 (50–100)
5 Confirm the appropriate decision maker 50 80 65 70 90 50 85 75 72.5 (50–90)
6 Clarify when needed 50 100 85 65 90 85 85 65 85 (50–100)
7 Health professionals and administrators should distinguish their clinical roles from their ethics role as needed 60 80 75 50 90 50 80 75 75 (50–90)
8 Correct errant expectations of participants as needed. 50 100 75 60 90 50 80 75 75 (50–100)
9 Maintain eye contact 100 100 100 90 95 90 95 85 95 (85–100)
10 Use engaged body language 80 100 100 90 95 90 95 90 92.5 (80–100)
11 Express supportive statements early in the consult 50 80 50 50 50 80 35 75 50 (35–80)
12 Avoid distracting behaviors 100 100 100 100 100 100 95 100 100 (95–100)
13 Establish the moral views and/or values of participants 90 100 85 90 75 75 90 95 90 (75–100)
14 Establish the wishes and/or expectations of participants 90 100 100 80 90 75 90 95 90 (75–100)
15 Clarify when needed 90 100 100 70 90 60 90 90 90 (60–100)
16 Give each participant an opportunity to share his or her views 100 100 100 90 95 70 95 90 95 (70–100)
17 Ask clarifying questions 80 100 90 90 90 65 95 75 90 (65–100)
18 Redirect conversation when needed 70 80 95 60 95 50 75 80 77.5 (50–95)
19 Summarize the conversation reflecting back the views of each party 80 80 85 80 95 70 80 75 80 (70–95)
20 Allow participants to confirm or clarify 70 80 100 80 95 65 80 60 80 (60–100)
21 Identify the ethical issues 90 100 100 90 75 65 85 80 87.5 (65–100)
22 Identify the range of ethically (in)appropriate options 70 80 60 80 20 50 85 85 75 (20–85)
23 Explain the rationale for the ethically (in)appropriate options 50 80 50 70 20 50 50 80 50 (20–80)
24 Use plain/simple language 100 100 100 90 75 85 95 90 92.5 (75–100)
25 Identify common goals including areas of agreement 70 80 90 90 50 60 95 85 82.5 (50–95)
26 Identify areas of disagreement 50 100 85 80 25 45 90 85 82.5 (25–100)
27 Elicit or propose potential compromise options 70 100 85 70 75 30 95 85 80 (30–100)
28 Identify relevant legal issues 50 50 50 50 100 20 85 85 50 (20–100)
29 Offer other appropriate ancillary services 100 100 100 90 100 40 95 100 100 (40–100)
30 Summarize potential options/choices already discussed 100 100 100 80 75 55 95 90 92.5 (55–100)
31 Facilitate discussion about options (pros/cons) including ways forward 100 80 100 80 95 65 95 90 92.5 (65–100)
Proposed Cut % 87.5%
Proposed Cut Score 27–28 points

Their concerns were partially supported by the item difficulty statistics, which are displayed in Table 3. Looking down the difficulty column (where lower values indicate greater difficulty), item 11 ("Express supportive statements early in the consult") was the most difficult for the 31 pilot users completing the Futility Case on the ACES website: no users correctly answered the item. Other items that were scored incorrectly at least 50% of the time were: item 7 ("Health professionals and administrators should distinguish their clinical roles from their ethics role as needed"); 17 ("Ask clarifying questions"); 26 ("Identify areas of disagreement"); and 27 ("Elicit or propose potential compromise options"). Conversely, the easiest items, answered correctly by all 31 pilot users, were: (1) "Identify yourself and your role as the ethics consultant"; (2) "Have each party introduce themselves"; (12) "Avoid distracting behaviors"; (29) "Offer other appropriate ancillary services"; and (31) "Facilitate discussion about options including ways forward." The reliability of the case was also very good, with an estimated KR-20 coefficient exceeding 0.80 (KR-20 = 0.88) (Table 3).

Table 3.

Performance of the pilot users for the Futility Case

Item | Choices: Done (%) | Not Done (%) | Done Incorrectly (%) | Difficulty (% correct)
1 Identify yourself and your role as the ethics consultant 100 . . 100%
2 Have each party introduce themselves 100 . . 100%
3 Explain the purpose of the consult 94 . 6 94%
4 Elicit the relevant medical facts in the case 68 3 29 68%
5 Confirm the appropriate decision maker 16 81 3 81%
6 Clarify when needed 35 58 6 58%
7 Health professionals and administrators should distinguish their clinical roles from their ethics role as needed 65 26 10 26%
8 Correct errant expectations of participants as needed. 19 77 3 77%
9 Maintain eye contact 96 . 4 96%
10 Use engaged body language 96 4 . 96%
11 Express supportive statements early in the consult 89 11 . 0%
12 Avoid distracting behaviors 100 . . 100%
13 Establish the moral views and/or values of participants 79 14 7 79%
14 Establish the wishes and/or expectations of participants 89 4 7 89%
15 Clarify when needed 54 43 4 54%
16 Give each participant an opportunity to share his or her views 86 4 11 86%
17 Ask clarifying questions 43 50 7 43%
18 Redirect conversation when needed 14 82 4 82%
19 Summarize the conversation reflecting back the views of each party 14 82 4 82%
20 Allow participants to confirm or clarify 32 64 4 64%
21 Identify the ethical issues 88 . 13 88%
22 Identify the range of ethically (in)appropriate options 63 21 17 63%
23 Explain the rationale for the ethically (in)appropriate options 75 13 13 75%
24 Use plain/simple language 79 13 8 79%
25 Identify common goals including areas of agreement 71 29 . 71%
26 Identify areas of disagreement 42 50 8 42%
27 Elicit or propose potential compromise options 42 50 8 42%
28 Identify relevant legal issues 21 71 8 71%
29 Offer other appropriate ancillary services 100 . . 100%
30 Summarize potential options/choices already discussed 96 . 4 96%
31 Facilitate discussion about options (pros/cons) including ways forward 100 . . 100%

Note: Valid N = 31. Test reliability KR-20 = 0.877. Choices = the proportion of individuals out of 31 marking each choice for the item (a "." indicates that no users marked that choice). Difficulty = the proportion of individuals out of 31 correctly answering the item; lower difficulty values reflect more challenging items.

Seeing these item statistics as well as their peers' responses had a moderate effect on the second round of ratings (Table 4). In fact, there was a marked decline in the probability estimates for item 7 ("Health professionals and administrators should distinguish their clinical roles from their ethics role as needed") (median = 75% on round 1 versus 57.5% on round 2), for item 26 ("Identify areas of disagreement") (median = 82.5% on round 1 versus 50% on round 2), and for item 27 ("Elicit or propose potential compromise options") (median = 80% on round 1 versus 67.5% on round 2). Averaging over all 31 items, the proposed proportion correct score for minimal competency was 80%, corresponding to a cut-score, or pass/fail score, between 24 and 25 points out of 31 possible points. The standard setting committee subsequently set the minimal competency cut-score to 24 points.
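The arithmetic behind this figure is straightforward (notation as in the sketch above; estimates from Table 4):

```latex
\hat{\pi} \;=\; \frac{1}{31}\sum_{i=1}^{31} \operatorname{median}_{j}\!\left(p_{ij}\right) \;\approx\; 0.80,
\qquad
\text{cut-score} \;=\; \hat{\pi} \times 31 \;\approx\; 24.8
```

Since 24.8 falls between the raw scores of 24 and 25, the committee resolved the interval by setting the cut at 24 points.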

Table 4.

Modified Angoff standard setting results for round 2

Item | Internal raters: A, B, C, D | External raters: E, F, G, H | Median (range)
1 Identify yourself and your role as the ethics consultant 100 100 100 100 100 100 95 100 100 (95–100)
2 Have each party introduce themselves 100 100 100 80 100 100 95 100 100 (80–100)
3 Explain the purpose of the consult 90 100 100 80 85 90 90 90 90 (80–100)
4 Elicit the relevant medical facts in the case 80 100 80 60 50 75 70 70 72.5 (50–100)
5 Confirm the appropriate decision maker 70 75 80 70 90 75 80 80 77.5 (70–90)
6 Clarify when needed 70 80 70 70 90 60 65 65 70 (60–90)
7 Health professionals and administrators should distinguish their clinical roles from their ethics role as needed 50 75 50 90 90 40 65 25 57.5 (25–90)
8 Correct errant expectations of participants as needed. 70 75 75 100 90 70 75 75 75 (70–100)
9 Maintain eye contact 100 100 100 50 95 95 95 95 95 (50–100)
10 Use engaged body language 95 100 100 100 95 95 95 90 95 (90–100)
11 Express supportive statements early in the consult 50 70 50 85 50 20 25 20 50 (20–85)
12 Avoid distracting behaviors 100 100 100 90 100 100 95 100 100 (90–100)
13 Establish the moral views and/or values of participants 80 90 85 70 75 80 80 80 80 (70–90)
14 Establish the wishes and/or expectations of participants 90 100 90 90 75 90 85 90 90 (75–100)
15 Clarify when needed 70 100 70 60 90 65 75 60 70 (60–100)
16 Give each participant an opportunity to share his or her views 90 90 90 80 95 90 85 85 90 (80–95)
17 Ask clarifying questions 70 90 60 85 90 70 75 50 72.5 (50–90)
18 Redirect conversation when needed 80 90 90 70 95 75 75 80 80 (70–90)
19 Summarize the conversation reflecting back the views of each party 80 80 90 90 95 80 80 80 80 (80–95)
20 Allow participants to confirm or clarify 80 80 70 70 95 70 65 65 70 (65–95)
21 Identify the ethical issues 90 100 90 70 75 90 85 85 87.5 (70–100)
22 Identify the range of ethically (in)appropriate options 70 80 60 90 20 75 60 70 70 (20–90)
23 Explain the rationale for the ethically (in)appropriate options 70 80 70 80 20 70 50 75 70 (20–80)
24 Use plain/simple language 90 100 90 70 75 85 85 85 85 (70–100)
25 Identify common goals including areas of agreement 75 80 70 50 50 75 75 75 75 (50–80)
26 Identify areas of disagreement 50 90 60 50 25 60 50 50 50 (25–90)
27 Elicit or propose potential compromise options 70 90 60 50 75 65 75 50 67.5 (50–90)
28 Identify relevant legal issues 50 50 50 90 100 60 65 75 62.5 (50–100)
29 Offer other appropriate ancillary services 100 100 100 90 100 100 95 100 100 (90–100)
30 Summarize potential options/choices already discussed 90 100 100 90 75 95 95 95 95 (75–100)
31 Facilitate discussion about options (pros/cons) including ways forward 90 80 100 90 95 90 95 95 92.5 (80–100)
Proposed Cut % 80.0%
Proposed Cut Score 24–25 points

Regarding variability in raters' scores, at the end of round 1 the internal raters had less variability on item 2 ("Have each party introduce themselves"; range: 100–100) than the external raters (range: 95–100; exact p = .03). This was also true for item 12 ("Avoid distracting behaviors"), where the internal raters had less variability (range: 100–100) than the external raters (range: 95–100; exact p = .03), and for item 28 ("Identify relevant legal issues"), where the internal raters had less variability (range: 50–50) than the external raters (range: 20–100; exact p = .03). Finally, at the end of round 1 the internal raters demonstrated less variability on item 29 ("Offer other appropriate ancillary services"; range: 90–100) than the external raters (range: 40–100; exact p = .03).

On round 2, the internal raters had less variability on item 1 ("Identify yourself and your role as the ethics consultant"; range: 100–100) than the external raters (range: 95–100; exact p = .03), but this pattern reversed for item 2 ("Have each party introduce themselves"), where the internal raters' scores were more variable (range: 80–100) than the external raters' scores (range: 95–100; exact p = .03). Finally, for item 9 ("Maintain eye contact"), the internal raters' scores were also more variable (range: 50–100) than the external raters' scores (range: 95–95; exact p = .03).
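Conover's squared-ranks test is not a stock routine in common statistics packages, so as an illustration of what an "exact version" can mean with samples this small, the Python sketch below enumerates all C(8, 4) = 70 assignments of the pooled squared ranks to the internal group. Applied to the round 1 item 2 estimates (internal: 100, 100, 100, 100; external: 100, 100, 95, 100), it reproduces a two-sided exact p of 2/70 ≈ .03. This is our reconstruction under stated assumptions, not the authors' SAS code.

```python
from itertools import combinations

import numpy as np
from scipy.stats import rankdata

def conover_squared_ranks_exact(x, y):
    """Exact two-sided Conover squared-ranks test for equal variances.

    Ranks the absolute deviation of each observation from its own group
    mean, squares the (tie-averaged) ranks, and enumerates every possible
    assignment of the pooled squared ranks to group 1 to obtain an exact
    permutation p-value. Feasible only for small samples, e.g. 4 vs 4.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    dev = np.concatenate([np.abs(x - x.mean()), np.abs(y - y.mean())])
    sq = rankdata(dev) ** 2                  # squared average ranks
    n1, n_total = len(x), len(dev)
    t_obs = sq[:n1].sum()                    # observed statistic for group 1
    sums = np.array(
        [sq[list(c)].sum() for c in combinations(range(n_total), n1)]
    )
    lo = (sums <= t_obs).mean()              # lower-tail proportion
    hi = (sums >= t_obs).mean()              # upper-tail proportion
    return t_obs, min(1.0, 2.0 * min(lo, hi))

# Item 2, round 1: internal raters A-D versus external raters E-H.
t, p = conover_squared_ranks_exact([100, 100, 100, 100], [100, 100, 95, 100])
print(round(p, 3))  # 0.029, reported as .03
```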

Discussion

A modified Angoff standard setting procedure as described by Cizek (2006) was used to establish the cut-score for the ACES tool Futility Case. The eight experts on the standard setting committee determined that individuals scoring 24 or more points out of a possible 31 (based on the proposed 80% proportion correct standard) would be minimally competent and capable of evaluating the performance of the CEC in this case.

The items deemed easiest for a minimally competent CEC to answer correctly, as determined by the 100% scores noted in Table 3, were: (1) "Identify yourself and your role as the ethics consultant"; (2) "Have each party introduce themselves"; (12) "Avoid distracting behaviors"; (29) "Offer other appropriate ancillary services"; and (31) "Facilitate discussion about options (pros/cons) including ways forward." One explanation is that these skills are linked to clear behaviors, which makes it more straightforward for the individual viewing the case to determine whether the CEC in the video did (or did not) demonstrate them. Further, some items (particularly "Have each party introduce themselves") may have been observed or learned in other contexts.

In the pilot data from the ACES tool, one item stood out as the most difficult for the 31 website users to answer correctly: everyone incorrectly answered item 11, "Express supportive statements early in the consult." The difficulty of this item may have arisen because the creators of the ACES tool determined that the CEC must offer a supportive statement within the first five minutes of the consult to receive a Done. This requirement is specified in the scoring rubric on the website, and the videos display the time as the case progresses. In the Futility Case video on the ACES website, the CEC does not make such a statement within this time frame. Instead, later in the consult, she makes a vague supportive statement about difficult decisions that need to be made. This vagueness, coupled with the inappropriate lag time, is insufficient to count clearly as Done according to the scoring rubric. For this reason, the correct answer is Done Incorrectly. The nuance of the supportive statement is a likely reason this item was challenging for the pilot users viewing the video.

While the scoring rubric provides the instructions on how the original steering committee determined what behaviors count as Done, Not Done, or Done Incorrectly for each item on the ACES tool, new users of the tool may not have reviewed it in detail before watching the case and rating the CEC’s skills. In addition, the standard setting committee may have overestimated the ability of a minimally competent CEC to answer some of the items correctly. These items may seem obvious to experts in the field with a great deal of experience. However, it may be difficult to view the Futility Case and CEC’s performance through the eyes of a new minimally acceptable or competent CEC.

Other items that were scored incorrectly at least 50% of the time by these pilot users were: item 7 ("Health professionals and administrators should distinguish their clinical roles from their ethics role as needed"); 17 ("Ask clarifying questions"); 26 ("Identify areas of disagreement"); and 27 ("Elicit or propose potential compromise options"). For item 7, 65% of users thought that the CEC stating she was a physician and clinical ethicist was sufficient to count as Done, whereas the correct answer was Not Done: the CEC needed to clarify that she was functioning solely as the clinical ethicist in this case. For item 17, 57% of users incorrectly selected Not Done or Done Incorrectly. The CEC asked clarifying questions of both parties and appropriately facilitated the discussion. It may be that users thought the CEC did not ask enough clarifying questions to count as Done and were looking for a performance that went beyond minimal competence. For item 26, 58% of users answered incorrectly. The scoring rubric indicates that parties other than the CEC may make their disagreements obvious during the discussion or state them clearly and, if so, the CEC receives a score of Done for this item by default. This nuance may have been misunderstood by users. For item 27, 58% of users answered incorrectly. The CEC does elicit or propose potential compromise options, most notably a "Do Not Resuscitate Order." Again, users may have viewed one compromise option as insufficient and wanted the CEC to perform above minimal competence to receive a Done score.

More broadly, regarding the importance of this standard setting process, the field of clinical ethics consultation now has an innovative and practical method to assess one measure of CEC competency in a reliable and standardized way. The cut-score set in this exercise is meant to identify whether a user of the ACES tool who views the Futility Case on the website can correctly identify (in)competent practices and skills of the CEC in the video. While it is not intended to indicate the user's competency as a CEC in practice, we propose that the converse has merit: if a user of the ACES website is unable to correctly answer 80% of the 31 Futility Case items, that person is unlikely to be competent when performing in the clinical setting, because he/she is unable to recognize basic competencies when viewing the performance of another CEC.

The cut-score determines whether that user is minimally competent to evaluate others. As for generalizability, the hope is that through the ACES training, users will increase their awareness of the core skills needed to conduct ethics consultation and that this increased awareness will carry over into their own ethics consultation skills. Whether and how the ACES training translates into developing more effective CECs in practice is an area for future investigation.

Limitations of this method and exercise include the small pilot sample for the Futility Case. We observed heterogeneity in rater judgments between our internal and external raters. Where such variability was significant, it was associated with the items that were the least difficult. This limitation may relate to the few raters available for this analysis, and we are hesitant to draw conclusions about the distribution of internal versus external rater scores based on this small sample size. With more users completing this case on the ACES website, the results and the difficulty estimates might change. The first two ACES cases had a similar difficulty level and both addressed end-of-life issues. By inference, the cut-score of 24/31 should be appropriate for both, but this needs to be tested and is a direction of future research. Subsequent cases involve different ethical issues, including pediatric decision making, and one case tests an additional item ("Recognize and Address Barriers to Communication") and focuses on informed consent and the use of trained interpreters. The current proposed cut-score may not apply to these cases given that they may have different difficulty estimates. In addition, the link between the website training and the generalizability of the cut-score in its application to practice (i.e., training CECs) has not been demonstrated. Finally, a single case study does not necessarily support competence, and further research and dialogue are needed to define the components of what is considered "minimal competency."

Future directions for the ACES project include collecting data on how graduate students and CEC trainees perform when the cut-score is applied to live ethics simulations. This type of controlled environment allows each student to experience the same case and ethical issues while being assessed with the same evaluation tool. It also allows trained raters and experts to evaluate students and trainees, unlike the real world, where observing ethics consultations may be challenging and, even when possible, is done by a variety of colleagues or supervisors who may not use the same criteria. Other areas for future research include examining how users of the ACES website perform in relation to the original steering committee for each of the cases presented; whether repeating the cases increases the percentage of correct responses; and whether the determination of "correct/incorrect" answers for a particular ACES case may need to be revised based on website users' performance and feedback.

Conclusion

A modified Angoff method was used to set a cut-score for users accessing the ACES tool. The cut-score evaluates the ability of those viewing a simulated ethics consultation on the training website to identify minimally competent performance in that single case. We believe that training people to recognize competent and incompetent CEC skills will aid their ability to recognize their own capacity to perform ethics consultations, at least in a simulated environment. This cut-score is novel and demonstrates promise for training clinical ethics consultants as we move toward determining minimal competency for CECs in practice.

Footnotes

CONFLICTS OF INTEREST: None.

ETHICAL APPROVAL: This study was approved by the Institutional Review Board at Loyola University Chicago Health Sciences Division.

Contributor Information

K Wasson, Neiswanger Institute for Bioethics and Stritch School of Medicine, Loyola University Chicago, 2160 S. 1st Ave., Bldg 120, Room 284, Maywood, IL 60153.

WH Adams, Medical Education and Public Health Sciences, Loyola University Chicago Health Sciences Division, 2160 S. 1st Ave. CTRE Room 253, Maywood, IL 60153.

K Berkowitz, Chief of Ethics Consultation, VHA National Center for Ethics in Health Care (10E1E), VA Medical Center, Ethics Room 16083, 423 East 23rd Street, New York, NY 10010; Associate Professor of Medicine and Population Health, NYU School of Medicine.

M Danis, Department of Bioethics, Building 10, Rm. 1C118, Bethesda, MD 20892-1156.

AR Derse, Center for Bioethics and Medical Humanities, Julia and David Uihlein Professor of Medical Humanities and Professor of Bioethics and Emergency Medicine, Institute for Health and Equity, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226-0509.

M Kuczewski, The Fr. Michael I. English, SJ, Professor of Medical Ethics, Chair, Dept. of Medical Education, Director, Neiswanger Institute for Bioethics, Loyola University Chicago, 2160 S. 1st Ave., Bldg 120, Room 285, Maywood, IL 60153.

M McCarthy, Neiswanger Institute for Bioethics and Stritch School of Medicine, Loyola University Chicago, 2160 S. 1st Ave., Bldg 120, Room 286, Maywood, IL 60153.

K Parsi, Bioethics, Graduate Program Director, Neiswanger Institute for Bioethics and Stritch School of Medicine, Loyola University Chicago, 2160 S. First Avenue, Bldg. 120, Room 283, Maywood, IL 60153.

A Tarzian, Maryland Health Care Ethics Committee Network (MHECN), Law & Health Care Program, Maryland Carey Law; University of Maryland School of Nursing, 655 W. Lombard St., Suite 552, Baltimore, MD 21201.

References

  1. Angoff WH (1971). Scales, norms, and equivalent scores. In Thorndike RL (Ed.), Educational measurement (pp. 508–600). Washington, DC: American Council on Education.
  2. American Society for Bioethics and Humanities (ASBH). 1998. Core Competencies for Health Care Ethics Consultants. Glenview, IL: ASBH.
  3. American Society for Bioethics and Humanities (ASBH). 2011. Core Competencies for Health Care Ethics Consultants, 2nd ed. Glenview, IL: ASBH.
  4. American Society for Bioethics and Humanities (ASBH) Certification Commission. http://asbh.org/professional-development/certification/hcec-certification-commission. Accessed March 23, 2018.
  5. Cizek GJ (2006). Standard setting. In Haladyna TM (Ed.), Handbook of test development (pp. 225–258). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
  6. Cody R and Smith J (2014). Test Scoring and Analysis Using SAS. Cary, NC: SAS Institute.
  7. Conover WJ (1999). Practical Nonparametric Statistics. 3rd ed. New York: John Wiley & Sons.
  8. Dubler NN, Webber MP, Swiderski DM, and the Faculty and the National Working Group for the Clinical Ethics Credentialing Project. 2009. Charting the future: Credentialing, privileging, quality, and evaluation in clinical ethics consultation. Hastings Center Report 39:24.
  9. Fins JJ, Kodish E, Cohn F, Danis M, Derse A, Dubler NN, et al. 2016. A Pilot Evaluation of Portfolios for Quality Attestation of Clinical Ethics Consultants. The American Journal of Bioethics 16(3): 15–24.
  10. Flicker SF, Rose SL, Eves MM, Flamm AL, Sanghani R, and Smith ML. 2014. Developing and Testing a Checklist to Enhance Quality in Clinical Ethics Consultation. The Journal of Clinical Ethics 25: 281–90.
  11. Fox E. 1996. Concepts in Evaluation Applied to Ethics Consultation Research. The Journal of Clinical Ethics 7: 116–121.
  12. Kodish E, Fins JJ, Braddock C, Cohn F, Dubler NN, Danis M, et al. 2013. Quality attestation for clinical ethics consultants: A two-step model from the American Society for Bioethics and Humanities. Hastings Center Report 43(5): 26–36.
  13. Pearlman RA, Foglia MB, Fox E, Cohan JH, Chanko BL, Berkowitz KA. 2016. Ethics Consultation Quality Assessment Tool: A Novel Method for Assessing the Quality of Ethics Case Consultations Based on Written Records. The American Journal of Bioethics 16(3): 3–14.
  14. Ricker KL (2006). Setting cut-scores: A critical review of the Angoff and modified Angoff methods. The Alberta Journal of Educational Research 52(1): 53–64.
  15. Svantesson M, Karlsson J, Boitte P, Schildman J, Dauwerse L, Widdershoven G, et al. 2014. Outcomes of Moral Case Deliberation – the development of an evaluation instrument for clinical ethics support (the Euro-MCD). BMC Medical Ethics 15(30). https://doi.org/10.1186/1472-6939-15-30.
  16. Tarzian A. 2009. Credentials for Clinical Ethics Consultation – Are we there yet? HEC Forum 21(3): 241–48.
  17. U.S. Department of Veterans Affairs, National Center for Ethics in Health Care. 2014. Ethics Consultant Proficiency Assessment Tool. Available at: http://www.ethics.va.gov/integratedethics/evaluation.asp
  18. Wasson K, Parsi K, McCarthy M, Siddall VJ, Kuczewski M (2016). Developing an evaluation tool for assessing clinical ethics consultation skills in simulation based education: The ACES project. HEC Forum 28: 103–13.
