Author manuscript; available in PMC: 2014 Dec 16.
Published in final edited form as: Ethics Behav. 2014 Jan;24(1):73–89. doi: 10.1080/10508422.2013.821389

The Influence of Compensatory Strategies on Ethical Decision Making

Jensen T Mecca 1, Kelsey E Medeiros 1, Vincent Giorgini 1, Carter Gibson 1, Michael D Mumford 1, Shane Connelly 1, Lynn D Devenport 1
PMCID: PMC4266941  NIHMSID: NIHMS620587  PMID: 25525318

Abstract

Ethical decision making is of concern to researchers across all fields. However, researchers typically focus on the biases that may act to undermine ethical decision making. Taking a new approach, this study focused on identifying the most common compensatory strategies that counteract those biases. These strategies were identified using a series of interviews with university researchers in a variety of areas, including the biological, physical, social, and health sciences, as well as scholarship and the performing arts. Interview transcripts were assessed with two scoring procedures: an expert rating system and computer-assisted qualitative analysis. Although the expert rating system identified Understanding Guidelines, Recognition of Insufficient Information, and Recognizing Boundaries as the most frequently used compensatory strategies across fields, other strategies (Striving for Transparency, Value/Norm Assessment, and Following Appropriate Role Models) were identified as most common by the computer-assisted qualitative analysis. Potential reasons for these findings and implications for training and practice are identified and discussed.

Keywords: ethical decision making, compensatory strategy, bias, ethics education


Ethical breaches in research are becoming increasingly common. According to a survey by Martinson, Anderson, and de Vries (2005), as many as one third of scientists report having engaged in unethical research behaviors, such as dropping observations from analyses based on gut feelings regarding their accuracy, using inappropriate or inadequate research designs, or engaging in potentially questionable relationships with research subjects or students. Similarly, the rate of retractions has increased tenfold in the last decade, with almost 50% attributable to researcher misconduct (Van Noorden, 2011). Given the potential for dishonest or unethical behavior to undermine both communication between scientists and public trust in research findings, this increase in misconduct bears dire implications for the continuing success of scientific endeavors (Mumford et al., 2008). The alarming frequency of ethics violations has spurred interest in the study of ethics per se (Butterfield, Treviño, & Weaver, 2000). Once solely the domain of philosophy, ethical behavior now demands empirical study, given the high-stakes nature of much current research.

Given the trend toward unethical behavior in research, it is important to increase our understanding of the reasons why people make ethical or unethical decisions. A number of frameworks or models have been put forth in an attempt to explain the mechanisms by which ethical decisions are made (Dubinsky & Loken, 1989; Ferrell & Gresham, 1985; Jones, 1991; Sonenshein, 2007; Treviño, 1986; Treviño, Weaver, & Reynolds, 2006). The current study takes a closer look at certain variables operating within one of these models in an attempt to better understand ethical decision making in a research context.

The model that provides the basis for the effort at hand is Sonenshein's (2007) sensemaking-intuition model. In this model, individuals must consider all elements of a situation as they move forward in choosing a course of action by which to deal with an ethical dilemma. Given the complicated nature of ethically loaded situations, the sensemaking process described inherently utilizes a great deal of cognitive resources. The cognitive demands of sensemaking make it important to identify and understand those variables that undermine the decision maker's ability to synthesize information regarding ethical dilemmas. In the decision-making literature, such variables, which cause a discrepancy between human judgment and a rational norm, are referred to as biases (Tversky & Kahneman, 1974). A number of biases thought to increase the gap between human judgment and rationality in an ethical decision-making context have been identified. For example, Haines, Street, and Haines (2008) discussed perceived importance, where unethical behavior was more likely in the context of scenarios perceived to be unimportant. Tenbrunsel and Messick (2004) suggested that self-deception, wherein one behaves in a self-interested fashion while falsely believing that one's moral principles were upheld, causes the moral implications of decisions to become obscured. Both of these biases may undermine sensemaking, and therefore ethical decision making, by preventing individuals from accurately perceiving situational features and constraints. Thus, a number of biases identified in the literature may serve to undermine ethical decision making by compromising individuals’ ability to make sense of ethical dilemmas.

In contrast, it is equally important to understand the variables that counteract these biases by improving the decision maker's ability to synthesize information in the sensemaking process. The decision-making literature suggests that a number of strategies may be utilized to improve decisions generally (Ference, 1972; Pitz, Reinhold, & Geller, 1969). Scott, Leritz, and Mumford (2004a, 2004b) found that individuals who were trained in a variety of strategies, such as analogy identification and listing similarities and differences, displayed enhanced performance on complex problems requiring creative thought. Given that problems requiring creative solutions call for this type of complex thinking, there may be similarities between the type of instruction that improves creative problem solving and the varieties of training that help individuals engage in more effective ethical decision making. In fact, there is also preliminary evidence indicating that instruction in metacognitive strategies, including recognizing circumstantial complexities, engaging in emotional control, and considering the effects of actions, improves ethical decision making (Mumford et al., 2008). Strategies that improve ethical decision making in this manner do so by compensating for the biases that hinder cognition during the sensemaking process. In other words, these compensatory strategies should operate to lessen the gap between human judgment and rational decision making, as considered by Tversky and Kahneman (1974).

Given that compensatory strategies should improve ethical decision making in research, it is important to identify the strategies most likely to be used in that context. In one effort along these lines, Antes, Caughron, and Mumford (2010) used Mumford and colleagues' (2006) ethical decision-making instrument as a springboard for the development of a list of compensatory strategies that might influence ethical decision making in research. Doctoral students in industrial-organizational psychology familiar with the ethical decision-making literature discussed why the typical person might choose good or poor responses on this instrument, which includes a number of scenarios used to assess the quality of ethical decision making. Themes in this discussion related to the selection of high (i.e., better) answer choices, such as monitoring one's assumptions, evaluating complexity, and self-accountability, were identified as potential compensatory strategies influencing ethical decision making. The list of strategies developed by the doctoral students is presented in Table 1; asterisks indicate strategies added by later efforts in the present study.

TABLE 1.

Initial Compensatory Strategies

Strategy Interrater Agreement Coefficients Operational Definition
Attending to Scientific Principles* .74 Focusing on the broader principles of an academic discipline as opposed to specific elements of a situation
Complexity Evaluation .71 Examining the elements (contingencies, causes, restrictions, goals) of a situation and the dynamic relationship between the elements
Contingency Planning .75 Thinking about multiple alternatives in light of multiple consequences; developing back-up plans
Deliberative Action .72 Taking planned action when confronted with a problem
Following Appropriate Role Models* .75 Taking direction pertaining to ethical issues from appropriate role models in one's scientific field (e.g., other scientists that are commonly held in high regard) or role modeling appropriate behavior to younger colleagues
Maintaining Objective Focus .76 Being aware of personal biases and the impact of personal goals and stereotypes
Monitoring Assumptions .85 Reducing the faulty or irrational assumptions one makes of others or of a situation by drawing upon relevant past experiences or examples rather than solely relying upon one's beliefs about others or the situation
Recognition of Insufficient Information* .76 Understanding that more information is required to form an opinion or to make a decision
Recognizing Boundaries .65 Having an accurate assessment of one's expertise in relation to the situation at hand, an awareness of formal role boundaries, or an understanding of the power structure of the organization
Selective Engagement .80 Considering personal costs or one's personal limitations as a means of deciding whether to become involved in a situation
Self-Accountability .69 Abiding by personal ethics, being honest with oneself, and being responsible for what one says and does
Strategy Selection .86 Reflecting on the dynamics of a situation, one's preference for a strategy, and one's belief that a strategy will be successful and efficient as a means of choosing an ethical decision-making strategy
Striving for Transparency* .70 Emphasizing maintaining transparency in ethical decision making
Understanding Guidelines .71 Knowledge of the content and when to apply field and professional guidelines
Value/Norm Assessment .75 Awareness of the relevant value systems and using them when appropriate
* Strategies added on the basis of faculty think-aloud interview responses.

The current effort attempts to provide validation evidence for this list of compensatory strategies. Researchers across academic fields were asked to complete the aforementioned measure of ethical decision making (Mumford et al., 2006) and then were given an opportunity to explain their justifications for their answers. If the previously identified compensatory strategies are in fact those used by professionals making ethically loaded decisions in research contexts, it may be possible to identify those strategies in their justifications for their decisions through a qualitative analysis of transcripts of those explanations.

The qualitative research literature offers a number of methods for extracting strategies used in decision making. Two such methods were used in this study. The first of these was a rating process by expert judges of the degree to which compensatory strategies appeared in transcripts of researchers’ justifications for their ethical decisions. The second method involved search and analysis of keywords used by researchers when discussing ethical dilemmas. Fielding and Lee (1998) suggested that this type of keyword analysis can be useful when there are particular words of interest connected to target concepts. This method of analysis allowed for a second source of information regarding the frequency of the use of compensatory strategies. If some compensatory strategies are used more frequently than others, then keywords associated with those strategies should also be used more frequently than keywords associated with other compensatory strategies. Leech and Onwuegbuzie (2011) suggested that this type of keyword analysis be conducted using NVivo, a computer-assisted qualitative data analysis software program.

To explore the extent to which compensatory strategies are used in ethical decision making in a research context and to determine how different qualitative analyses may expose the use of those strategies, we proposed the following research questions:

  • RQ1: Do researchers use the aforementioned compensatory strategies when engaging in ethical decision making?

  • RQ2: What compensatory strategies will appear most often in researcher discussions of ethical decision making on the basis of a content analysis rating by trained, expert raters?

  • RQ3: What compensatory strategies will appear most often in researcher discussions of ethical decision making on the basis of a computer-assisted qualitative data analysis?

  • RQ4: Do the same compensatory strategies appear most often on the basis of these two qualitative approaches?

METHOD

To explore these research questions, faculty interviews were conducted using a think-aloud protocol regarding participants’ responses on the measure of ethical decision making. These interviews were then scored using both of the procedures just described.

Think-Aloud Protocol Interviews

The sample consisted of 64 members of the University of Oklahoma faculty. The size of this sample is noteworthy given the typically limited time of researchers in academia; it allows for more extensive exploration than might be possible with a smaller sample. Of these participants, 37 were men and 27 were women, making the sample 58% male. The sample included 16 assistant professors, 28 associate professors, 20 full professors, and one adjunct professor. Professors were recruited from six areas of study: performance (e.g., architecture, drama, mathematics; n = 10), biological science (e.g., biochemistry, botany, zoology; n = 6), health science (e.g., dentistry, medicine, public health; n = 22), humanities (e.g., history, philosophy, religious studies; n = 5), physical science (e.g., computer science, engineering, physics; n = 7), and social science (e.g., anthropology, economics, sociology; n = 14).

These participants were asked to complete a field-appropriate pretest measure of ethical decision making (see Footnote 1) developed by Mumford and colleagues (2006). This instrument assesses the quality of ethical decisions across a range of ethical situations (e.g., plagiarism, data fabrication, and conflict of interest). It was developed to assess four general categories of behavioral dimensions (data management, study conduct, professional practices, and business practices) across six fields: the biological, health, social, and physical sciences, performance, and the humanities. Here, data management includes publication practices, as well as unethical behaviors such as data massaging. Study conduct includes a wide variety of subtopics, including informed consent, confidentiality, and protection of both human and animal subjects. The professional practices category subsumes such concerns as collaboration, evaluation of work, and protection of intellectual property. Finally, the business practices category includes conflicts of interest and the use of physical resources.

The instrument includes equivalent measures for each of these areas. These area-specific versions differ only in their field-specific content but tap the same ethical dimensions discussed above. Evidence of the similarity between these versions of the basic measure, along with other validation evidence, is available in Mumford and colleagues' (2006) report regarding the instrument. Each measure consists of a pretest and a posttest, each comprising four to seven situational judgment scenarios, with approximately five items per scenario and approximately eight response options per item. Each response option is identified as high, medium, or low, with high answers indicating the best solutions to the ethical dilemmas and low answers indicating the worst solutions. Evidence regarding the construct validity of these measures, including convergent and divergent validity evidence, correlation of the measures with expected causes of ethical decisions, and correlation of the measures with expected outcomes of ethical decisions, was provided by Mumford et al. (2006).

Participants in the current study completed online pretest measures corresponding with their areas of study. A sample measure is included in the appendices. Participants were asked to read each scenario in the measure, as well as its related items, and then to select two response options for each item. Individuals receive scores for each scenario on the basis of their responses to the items relevant to that scenario, as well as an overall score on the measure, which is a composite of all scenario scores. The higher this score, the better an individual's ethical decision making.

The present study utilized an idiographic approach to identifying scenarios wherein participants were likely to have utilized compensatory strategies in their ethical decision making. In this approach, each individual's mean for the overall measure was determined on the basis of all the scenarios in that measure. Scenarios on which participants scored at least half a standard deviation above their own means were then identified as potential scenarios where participants displayed the use of compensatory strategies, with higher-than-usual scores indicating better-than-usual ethical decision making for that participant. Similarly, scenarios on which participants scored at least half a standard deviation below their own means (i.e., less ethical than normal for that participant) were identified as scenarios potentially influenced by participant bias. For example, suppose a participant received an overall score of X, representing that participant's average score across all scenarios included in the measure. If one of that participant's scenario scores fell at least half a standard deviation above X, that scenario was identified as one on which the participant displayed particularly good ethical decision making, potentially due to the presence of compensatory strategy use. If a scenario score fell at least half a standard deviation below X, that scenario was considered one on which the participant displayed particularly poor ethical decision making, potentially due to the presence of a bias. Thus, the idiographic approach identified only those scenarios on which participants displayed extreme scores as compared with their own average levels of ethical decision making.
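To make this cutoff procedure concrete, the sketch below (Python, not part of the original study) flags a participant's scenarios that fall at least half a standard deviation above or below that participant's own mean. The scenario names and scores are hypothetical, and the original report does not state whether a sample or population standard deviation was used; the sample version is assumed here.

```python
import statistics

def flag_extreme_scenarios(scenario_scores, threshold=0.5):
    """Illustrative sketch of the idiographic cutoff described above.

    scenario_scores: dict mapping scenario id -> that participant's score.
    Scenarios at least `threshold` SDs above the participant's own mean are
    flagged as potential compensatory-strategy scenarios; those at least
    `threshold` SDs below are flagged as potential bias scenarios.
    """
    scores = list(scenario_scores.values())
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # within-person (sample) standard deviation

    strategy_candidates = [s for s, x in scenario_scores.items() if x >= mean + threshold * sd]
    bias_candidates = [s for s, x in scenario_scores.items() if x <= mean - threshold * sd]
    return strategy_candidates, bias_candidates

# Hypothetical participant with five scenario scores
example = {"plagiarism": 3.8, "data_fabrication": 2.1, "conflict_of_interest": 3.0,
           "informed_consent": 3.2, "authorship": 2.4}
print(flag_extreme_scenarios(example))
```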

After a 1- to 2-week lag period, participants were asked to participate in a think-aloud interview, during which they were asked questions regarding their answers to those scenarios identified prior to the interview by another member of the research team as scenarios where compensatory strategies or biases might have influenced participant answers. More specifically, each participant was asked about a number of scenarios on which he or she had displayed extreme scores, as described previously. Some of these scenarios reflected better-than-average ethical decision making as compared with the participant's norm, whereas others reflected worse-than-average ethical decision making as compared with that same norm. Each interview was conducted by one of four industrial-organizational psychology doctoral students who were familiar with the literature on ethical decision making and blind to participants' scores on each item (i.e., they did not know whether scenarios had been identified as potentially having been influenced by a bias or a compensatory strategy). Participants were asked a series of standardized questions developed on the basis of the think-aloud literature (Fonteyn, Kuipers, & Grobe, 1993), as exhibited in Table 2.

TABLE 2.

Standardized Interview Questions

Wave 1
    Guide me through the thought process behind your answers.
    What were your thoughts when you chose this answer?
    How did you arrive at those answers?
    What sticks out to you about this situation?
Wave 2
    What did you see as the primary dilemma in this issue?
    What were some things that stuck out to you about this question?
    What outcomes did you consider when you selected your answers?
    How did your professional expertise help you to choose your answers?
    How was this scenario relevant to your experience?
    What factors did you consider when you chose those answers?
    What dilemma did you see with these answers?
    Was this question easy or difficult to answer? Why?
    When have you seen someone in a similar situation make poor decisions? What did they do or not do?
    What professional guidelines did you think about when working through this exercise?
Clarification
    How so?
    Could you expand upon ... ?
    What did you mean when you said ... ?

Interviewers participated in an extensive training program to effectively execute the interview protocol. The first phase of training consisted of practice in which interviewers interviewed one another. These interviews were recorded and reviewed for consistency and to identify any problems with the questions as developed. Following completion of this phase, the students practiced interviewing faculty volunteers. This process also provided a check for the use of an online version of the Mumford et al. (2006) measure and allowed for a determination of the effectiveness of the procedures used to communicate with participants. Practice faculty interviews were also recorded and reviewed for consistency. Overall, the training program took place over a period of approximately two months.

Interview protocol

Each subsequent faculty interview was recorded and then transcribed. Interviews were conducted on the basis of a series of standardized questions, as shown in Table 2. Each Wave 1 question was followed by one to five Wave 2 questions as well as further questions for clarification. Following the completion of the interview process, participants were permitted to ask questions regarding the nature of the study. Although the overall purposes were revealed, no information was given about which of the participants’ answers contained potential biases or compensatory strategies, and no information about the participants’ scores was revealed.

Any additional compensatory strategies identified in the interviews, beyond the initial list developed by Antes et al. (2010), were brought before a faculty panel for review. Those strategies that were unique and could be clearly defined were added to the existing lists. Operational definitions of these new variables were constructed on the basis of participant responses and reviewed by subject matter experts, after which the variables were added to the lists of biases and compensatory strategies. The additional compensatory strategies identified are included in Table 1, with asterisks indicating the added strategies.

Four such compensatory strategies were identified: Attending to Scientific Principles, Following Appropriate Role Models, Recognition of Insufficient Information, and Striving for Transparency. Participants who displayed Attending to Scientific Principles, defined as focusing on the broader principles of an academic discipline as opposed to specific elements of a situation, discussed how various outcomes of ethical decision making could impact science as a whole. Following Appropriate Role Models, defined as taking direction pertaining to ethical issues from appropriate role models in one's scientific field, appeared when participants discussed contacting mentors for advice with regard to ethical dilemmas. Participants who indicated that they would like to gather more information before making a decision in a particular context demonstrated Recognition of Insufficient Information, defined as understanding that more information is required to form an opinion or to make a decision. Finally, some participants discussed the importance of leaving documentation of their decision-making processes and making sure not to obscure information, which was identified as Striving for Transparency, or emphasizing maintaining transparency in ethical decision making.

Expert Ratings

Following the think-aloud protocol, transcriptions of the interviews were rated using a 1-to-5 Likert-type scale. Rating scales were developed on the basis of the preexisting list of compensatory strategies developed by Antes et al. (2010), with the addition of any compensatory strategies identified during the interview process (e.g., Attending to Scientific Principles). Ratings reflected a combination of frequency and intensity: either several instances or one strong example of a given compensatory strategy would receive a rating of 3, with ratings increasing for multiple strong examples and decreasing for a smaller number of weak examples.

The four interviewers completed training on the operational definition of each compensatory strategy and how to apply the rating scales to evaluate the transcripts. These judges then rated a small sample of interview transcripts using the rating scales, with each transcript receiving a score between 1 and 5 on each compensatory strategy. Following this rating procedure, judges discussed discrepancies in their ratings. This process was repeated over a period of approximately three months until ratings exhibited reliabilities above .70 as per the frame-of-reference training methodology developed by Bernardin and Buckley (1981). Following the training process, interrater agreement coefficients averaged .79 for compensatory strategies. Interrater agreement coefficients for each individual compensatory strategy are available in Table 1. Final scores for each participant for each compensatory strategy were calculated by averaging scores across raters.
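The report does not specify which interrater agreement coefficient was computed. As a purely illustrative stand-in, the sketch below uses the mean pairwise correlation among four hypothetical judges' ratings for one strategy as an agreement index and then averages across judges to obtain final transcript scores, mirroring the averaging step described above; the ratings shown are invented.

```python
from itertools import combinations
import statistics

def pearson(x, y):
    """Plain Pearson correlation between two rating vectors."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def agreement_and_final_scores(ratings):
    """ratings: list of rater vectors, one 1-to-5 rating per transcript.
    Returns an illustrative agreement index (mean pairwise correlation)
    and the final score per transcript (mean across raters)."""
    agreement = statistics.mean(pearson(a, b) for a, b in combinations(ratings, 2))
    finals = [statistics.mean(col) for col in zip(*ratings)]
    return agreement, finals

# Hypothetical ratings from four judges on six transcripts for one strategy
judges = [
    [2, 3, 1, 4, 2, 3],
    [2, 4, 1, 4, 2, 3],
    [3, 3, 2, 5, 2, 4],
    [2, 3, 1, 4, 3, 3],
]
agreement, final_scores = agreement_and_final_scores(judges)
print(round(agreement, 2), final_scores)
```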

Computer-Assisted Qualitative Data Analysis

In addition to the expert ratings for each compensatory strategy provided by these judges, a computer-assisted qualitative data analysis procedure was developed using NVivo, a qualitative analysis program. For each compensatory strategy, a list of approximately 25 keywords was developed reflecting the type of language interviewees were likely to use when utilizing each compensatory strategy, including synonyms and different tenses of each word. Keywords might include single words, phrases, or near searches. For near searches, two keywords were required to appear within a certain number of other words, typically five. Completed lists were then reviewed by three judges, all subject matter experts whose research areas included ethical decision making, who removed any words they considered unnecessary or irrelevant. The keyword lists were then revised as per the subject matter experts’ reviews. Example keyword lists for two compensatory strategies, Understanding Guidelines and Striving for Transparency, are included in Table 3.

TABLE 3.

Example Keyword Lists

Understanding Guidelines: Guideline; Professional; "Professional guideline" ~5; "Professional guidelines" ~5; "Rule thumb" ~5; Protocol; Praxis; "Adhere guideline" ~5; "Adhere guidelines" ~5; "Follow guideline" ~5; "Follow guidelines" ~5; "Respect guideline" ~5; "Respect guidelines" ~5; "Abide guideline" ~5; "Abide guidelines" ~5; "Important guideline" ~5; "Important guidelines" ~5; "Understand guideline" ~5; "Different guideline" ~5; "Different guidelines" ~5; "Flexible guideline" ~5; "Flexible guidelines" ~5; "Circumstance guideline" ~5; "Circumstances guideline" ~5; "Circumstances guidelines" ~5; "Circumstance guidelines" ~5; "Contingency guideline" ~5; "Contingent guideline" ~5; "Contingent guidelines" ~5; "Apply guideline" ~5; "Apply guidelines" ~5

Striving for Transparency: Transparent; Clear; Clearly; Open; Openly; Forthcoming; Communicative; Forthright; Direct; Directly; Available; Accessible; "Public domain" ~5; Record; Document; Report; Inform; "Make readily available"

Note. ~5 indicates that the affected words were selected when they were within five words of one another.

Following the approval of final lists for each compensatory strategy, the keywords were used to search interviewee transcripts in NVivo. The results of these searches were reviewed to check keyword accuracy, that is, to determine whether any keyword on any list generated an inappropriate number of false hits. Lists were revised on the basis of these results and then used to score each interviewee's transcript on each compensatory strategy. The average accuracy rate for keywords was 63%, indicating that roughly 63% of the keyword matches identified through the searches were relevant to their corresponding biases or compensatory strategies.
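As an illustration of how such keyword and near-search scoring might be implemented outside NVivo, the sketch below counts single-keyword hits and approximates the "~5" proximity searches; the transcript text, keyword lists, and scoring details are hypothetical simplifications rather than the study's actual procedure.

```python
import re

def near_match(text, word_a, word_b, window=5):
    """Rough analogue of the '~5' near searches: True when tokens beginning
    with word_a and word_b occur within `window` words of one another."""
    tokens = re.findall(r"[a-z']+", text.lower())
    a_positions = [i for i, t in enumerate(tokens) if t.startswith(word_a)]
    b_positions = [i for i, t in enumerate(tokens) if t.startswith(word_b)]
    return any(abs(i - j) <= window for i in a_positions for j in b_positions)

def keyword_score(transcript, single_keywords, near_pairs, window=5):
    """Counts keyword hits in one transcript: single-word/phrase hits plus
    near-search hits. An illustrative approximation, not the study's code."""
    text = transcript.lower()
    hits = sum(text.count(k.lower()) for k in single_keywords)
    hits += sum(near_match(transcript, a, b, window) for a, b in near_pairs)
    return hits

transcript = "I would follow the professional guidelines and keep a clear record of the decision."
print(keyword_score(transcript,
                    single_keywords=["guideline", "protocol", "transparent"],
                    near_pairs=[("follow", "guideline"), ("adhere", "guideline")]))
```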

Analyses

Means and standard deviations were determined for each compensatory strategy using both the expert rating system and the computer-assisted qualitative analysis for each participant's transcript. Correlations between these two scoring systems were then determined. The results of these analyses are discussed next.

RESULTS

Expert Ratings

Intercorrelations among compensatory strategies as scored using the expert rating system are shown in Table 4. Some construct validity evidence is available for this scoring system given that various compensatory strategies were correlated with one another as one might expect. For example, Understanding Guidelines, or using one's knowledge of the content of field and professional guidelines to improve ethical decision making, was positively related, r(63) = .25, p < .05, to Value/Norm Assessment, wherein those making ethical decisions use their awareness of relevant value systems. Understanding Guidelines was also positively related, r(63) = .35, p < .01, to Recognizing Boundaries, a compensatory strategy that includes possessing an understanding of the relevant organizational power structure. Furthermore, Self-Accountability, which includes being honest with oneself when making decisions, was positively related, r(63) = .29, p < .05, to Striving for Transparency, or emphasizing maintaining transparency in ethical decision making. In addition, Maintaining Objective Focus, or being aware of personal biases, goals, and stereotypes and their impacts on one's decisions, was positively related, r(63) = .35, p < .01, to Monitoring Assumptions, a compensatory strategy wherein individuals reduce irrational assumptions about others. That these compensatory strategies displayed the relationships one might expect given their definitions implies that the expert rating system accurately assessed the use of compensatory strategies by participants. Means and standard deviations for each compensatory strategy under the expert rating system are also provided in Table 4.

TABLE 4.

Intercorrelations among Compensatory Strategies for Ethical Decision Making, Expert Rating System

M SD 1 2 3 4 5 6 7 8 9 10 11 12 13 14
1. Attending to Scientific Principles 1.63 .40
2. Complexity Evaluation 1.75 .37 –.26*
3. Contingency Planning 1.49 .41 –.02 .08
4. Deliberative Action 1.70 .48 .25* –.01 .16
5. Following Appropriate Role Models 1.54 .58 –.03 –.19 .03 –.14
6. Maintaining Objective Focus 1.60 .44 .22 .23 –.10 .06 –.19
7. Monitoring Assumptions 1.44 .46 .30* .01 .15 .36** –.01 .35**
8. Recognition of Insufficient Information 2.05 .64 .06 –.08 –.01 .06 –.03 .07 .03
9. Recognizing Boundaries 1.92 .53 –.11 .19 –.04 –.18 .09 .12 .01 .17
10. Selective Engagement 1.34 .28 .18 –.06 .13 .10 .17 .05 .20 .07 .16
11. Self-Accountability 1.69 .56 .31* –.09 .22 .29* –.11 –.06 .37** .13 –.32* –.01
12. Strategy Selection 1.23 .27 .14 .11 .07 .13 –.16 .05 –.05 .19 –.11 –.00 .31*
13. Striving for Transparency 1.65 .46 .04 –.04 –.02 –.04 .21 –.19 .07 –.15 –.06 .19 .29* .01
14. Understanding Guidelines 2.13 .65 .13 .23 .06 .14 .06 .15 .25* .24 .35** .17 .16 .13 –.05
15. Value/Norm Assessment 1.85 .53 .13 –.04 .04 .21 .16 –.01 .26* –.03 .03 .11 .15 –.06 .01 .25*
* p < .05. ** p < .01.

With regard to our second research question, the three compensatory strategies most apparent in participant transcripts as per the expert rating system were Understanding Guidelines, Recognition of Insufficient Information, and Recognizing Boundaries. Means and standard deviations for each compensatory strategy appear in Table 5. To determine the top three compensatory strategies, the variables were placed in rank order on the basis of their average rating, where ratings were on a scale from 1 to 5. On the basis of that ranking, these three strategies were the ones most often used as per the expert rating procedure.

TABLE 5.

Ranking of Compensatory Strategies, Expert Rating System

Compensatory Strategy M SD
1. Understanding Guidelines 2.13 0.64
2. Recognition of Insufficient Information 2.05 0.64
3. Recognizing Boundaries 1.92 0.48
4. Value/Norm Assessment 1.83 0.51
5. Complexity Evaluation 1.76 0.39
6. Self-Accountability 1.69 0.55
7. Deliberative Action 1.68 0.46
8. Striving for Transparency 1.67 0.46
9. Attending to Scientific Principles 1.64 0.39
10. Following Appropriate Role Models 1.56 0.58
11. Maintaining Objective Focus 1.56 0.41
12. Contingency Planning 1.52 0.40
13. Monitoring Assumptions 1.42 0.46
14. Selective Engagement 1.34 0.27
15. Strategy Selection 1.24 0.29

Computer-Assisted Qualitative Data Analysis

In addition to the expert rating system just discussed, additional information regarding the use of compensatory strategies by study participants was garnered through qualitative data analysis using NVivo. Intercorrelations between scores for these strategies appear in Table 6. As with the expert ratings, the correlations between compensatory strategy scores garnered through the computer-assisted qualitative analysis system indicated that this system accurately measured the use of compensatory strategies by researchers. For instance, as in the expert rating system, Understanding Guidelines and Value/Norm Assessment were positively related, r(63) = .32, p < .05. Once again, Understanding Guidelines was also positively related, r(63) = .27, p < .05, to Recognizing Boundaries. Furthermore, Contingency Planning, or thinking about multiple alternatives in light of various consequences and developing back-up plans, was positively related, r(63) = .57, p < .01, to Deliberative Action, defined as taking planned action when confronted with a problem. Due to the alignment of these relationships with what might be reasonably expected, these intercorrelations do provide construct validity evidence. Means and standard deviations for each variable are also included in Table 6.

TABLE 6.

Intercorrelations among Compensatory Strategies for Ethical Decision Making, Computer-Assisted Scoring System

M SD 1 2 3 4 5 6 7 8 9 10 11 12 13 14
1. Attending to Scientific Principles 0.15 0.39
2. Complexity Evaluation 0.68 0.63 .10
3. Contingency Planning 1.40 0.79 –.01 .51**
4. Deliberative Action 1.38 1.31 –.05 .31** .57**
5. Following Appropriate Role Models 1.48 1.98 –.05 .31** .30* .11
6. Maintaining Objective Focus 0.67 0.94 –.01 .19 –.03 –.07 .01
7. Monitoring Assumptions 0.75 1.00 .22 .21 .17 .04 .23 –.06
8. Recognition of Insufficient Information 0.94 1.80 –.01 .58** .34** .23 .33** –.09 .19
9. Recognizing Boundaries 1.14 1.25 –.09 .07 .38** .21 .11 .04 .27* –.02
10. Selective Engagement 1.07 1.29 .05 .13 .28* .17 .01 .07 .17 .09 .51**
11. Self-Accountability 0.95 1.20 –.03 .03 .01 .16 –.05 .15 –.11 –.05 .08 .01
12. Strategy Selection 0.92 0.79 .09 .39** .47** .24 .08 –.07 .11 .17 .14 –.09 .01
13. Striving for Transparency 2.09 2.16 .03 .16 .31* .33** .03 .08 .15 .19 .05 .05 .12 .07
14. Understanding Guidelines 1.03 1.41 .01 .53** .49** .35** .23 .16 .12 .17 .27* .08 .04 .30* .34**
15. Value/Norm Assessment 1.94 1.97 –.13 .39** .14 .18 .13 .13 .10 .02 –.11 –.14 .15 .17 .15 .32**
* p < .05. ** p < .01.

With regard to our third research question, on the basis of the computer-assisted qualitative analysis system, the top three compensatory strategies were Striving for Transparency, Value/Norm Assessment, and Following Appropriate Role Models, as shown in Table 7. It is noteworthy that these scores were not created on a scale from 1 to 5 but instead could increase indefinitely as participants used an increasingly large number of keywords, meaning that the range of these scores was greater than for the expert rating scores. Hence, average scores for the display of these compensatory strategies ranged from 0.15 to 2.09 after scores were corrected for the length of the keyword lists utilized, whereas average scores under the expert rating system ranged from 1.24 to 2.13. The keywords included in the lists for the three strategies listed previously were used most frequently in the average participant's transcript.

TABLE 7.

Ranking of Compensatory Strategies, Computer-Assisted Scoring System

Compensatory Strategy M SD
1. Striving for Transparency 2.09 2.16
2. Value/Norm Assessment 1.94 1.97
3. Following Appropriate Role Models 1.48 1.98
4. Contingency Planning 1.40 0.79
5. Deliberative Action 1.38 1.31
6. Recognizing Boundaries 1.14 1.25
7. Selective Engagement 1.07 1.30
8. Understanding Guidelines 1.03 1.41
9. Self-Accountability 0.95 1.20
10. Recognition of Insufficient Information 0.94 1.80
11. Strategy Selection 0.92 1.79
12. Monitoring Assumptions 0.75 1.00
13. Complexity Evaluation 0.68 0.63
14. Maintaining Objective Focus 0.67 0.94
15. Attending to Scientific Principles 0.15 0.39

Comparison of Systems

Our fourth research question asked whether these two qualitative analysis systems would rank the same compensatory strategies as appearing most often in participant transcripts. In fact, a Spearman rank order correlation between the rankings generated by the expert rating system and the computer-assisted qualitative data analysis was not significant. On average, a compensatory strategy's rank in the computer-assisted analysis differed from its rank in the expert rating system by 5.2 places on the 15-item list.
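For readers who wish to reproduce this comparison, the sketch below applies a Spearman rank order correlation (via SciPy) to the rank orders listed in Tables 5 and 7 and computes the average rank displacement. Any small difference from the average displacement of 5.2 reported above would most likely reflect rounding in the tabled means or tie handling; the code itself is illustrative rather than the analysis script used in the study.

```python
from scipy.stats import spearmanr

strategies = [
    "Understanding Guidelines", "Recognition of Insufficient Information",
    "Recognizing Boundaries", "Value/Norm Assessment", "Complexity Evaluation",
    "Self-Accountability", "Deliberative Action", "Striving for Transparency",
    "Attending to Scientific Principles", "Following Appropriate Role Models",
    "Maintaining Objective Focus", "Contingency Planning", "Monitoring Assumptions",
    "Selective Engagement", "Strategy Selection",
]
# Rank of each strategy under the expert rating system (Table 5) and the
# computer-assisted system (Table 7); 1 = most frequently used.
expert_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
nvivo_rank = [8, 10, 6, 2, 13, 9, 5, 1, 15, 3, 14, 4, 12, 7, 11]

rho, p = spearmanr(expert_rank, nvivo_rank)
mean_shift = sum(abs(a - b) for a, b in zip(expert_rank, nvivo_rank)) / len(strategies)
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}, mean rank displacement = {mean_shift:.1f}")
```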

DISCUSSION

Before turning to the implications of these results, several limitations must be noted. First, a low-fidelity task, as used in this study, might not generalize to real-world settings in which ethical decision making is required. In addition, the use of this type of task opens the study to social desirability bias, in that participant responses may represent idealized behavior rather than the behavior they would actually display in a challenging real-world ethical decision-making context. A low-fidelity task is nonetheless appropriate for a study of ethical decision making, given the difficulties inherent in measuring naturally occurring ethical dilemmas and the risk of harm in inducing scenarios in which ethical decision making would be necessary. The potential for social desirability bias may have inflated the frequency with which compensatory strategies were used relative to their frequency in real-world ethical decision making. However, this inflation would likely apply to all compensatory strategies, suggesting that the comparative frequency of strategies would not be affected by this potential bias.

In addition, in the current study, a cutoff score half a standard deviation above an individual's average performance was utilized to determine which scenarios were likely to exhibit the use of compensatory strategies. Other more rigorous cutoff scores could have been justified. In addition, an iterative procedure was used to identify additional compensatory strategies, leading to some doubt as to whether the final list of compensatory strategies is exhaustive. Similarly, although the keyword scoring procedure used in this study was systematic, additional keywords might have been useful to identify the use of compensatory strategies. Finally, although the judges involved in this study possessed substantial expertise in ethics, other judges with greater expertise might have reached different conclusions.

Despite these limitations, it is possible to draw several conclusions on the basis of this study. The first research question was whether the use of the identified compensatory strategies would appear in the context of ethical decision making in research. The results presented here appear to answer this question in the affirmative, indicating that the developed list of compensatory strategies provides a legitimate basis for the consideration of the importance of compensatory strategy use in research and a springboard for future research on this important topic. The apparent use of these compensatory strategies in ethical decision making contributes to our understanding of Sonenshein's (2007) sensemaking model, in that they may act to simplify the process of considering all situational elements necessary to make effective ethical decisions. It is possible that these strategies reduce the cognitive demands of the sensemaking activities that Sonenshein describes. With regard to the second and third research questions, whether certain compensatory strategies would appear more frequently than others in researcher discussion of ethical decisions, the results do provide support. In answer to the second research question, on the basis of the expert ratings system, it would appear that Understanding Guidelines, Recognition of Insufficient Information, and Recognizing Boundaries are used the most often by individuals solving hypothetical ethical dilemmas as compared with other compensatory strategies. In answer to the third research question, on the basis of the computer-assisted qualitative data analysis system, Striving for Transparency, Value/Norm Assessment, and Following Appropriate Role Models appear to be used more often than other compensatory strategies.

These observations lead, in turn, to a consideration of our final research question, whether the most commonly displayed compensatory strategies as revealed by expert ratings would also be the most commonly displayed compensatory strategies as per computer-assisted qualitative data analysis of keywords. Our findings indicate that this is not the case. In fact, the two ranking systems were so different that the correlation between the two systems was not significant.

A possible explanation for this discrepancy lies in the nature of each of the rating systems. The expert rating system allowed for the consideration of the context surrounding participant responses, including various nuances of expression that add to the meaning of the words chosen by those participants. Contrastingly, the computer-assisted qualitative data analysis, with its basis in keyword usage, provided a decontextualized assessment of the words used by individuals. Thus, although the expert raters identified Understanding Guidelines as the most frequently used compensatory strategy, the keyword analysis indicated that more participants chose to use words related to Striving for Transparency.

This finding bears important implications with regard to the ways in which individuals conceptualize and discuss their actions in ethically loaded situations. The results explained previously may indicate that there are certain social norms surrounding the discussion of ethical choices in research. An individual who is asked to justify his or her ethical decision making will seemingly tend to do so in a way that includes discussion of particular compensatory strategies, perhaps those most often associated with good choices in ethically loaded situations. Thus, even though expert raters indicate that individuals often make decisions based on an assessment of guidelines as per the Understanding Guidelines compensatory strategy, it appears that they are more likely to discuss their decisions in the context of Striving for Transparency. Our results imply a disconnect between the strategies most often employed in ethical decision making and the language most often used to discuss ethical dilemmas and the decisions that surround them.

These findings bear important implications with regard to the measurement of compensatory strategy use. Although an expert rating system can be utilized to identify the strategies being used by those engaging in ethical decision making, a decontextualized keyword-search process, such as that available through NVivo, provides insight into the ways in which compensatory strategies in ethical decision making are most often discussed. Therefore, in future efforts along these lines, it is important to select the form of measure most appropriate for the research question at hand—although computer-assisted data analysis provides important information with regard to the language surrounding ethical decision making, it may be necessary to utilize trained raters to identify the concepts and strategies underlying those decisions.

Conclusions and Future Research

The present effort allows us to draw a number of important conclusions. First, compensatory strategies do come into play when researchers are engaging in decision making in ethically loaded situations. Further, some initial validation evidence is provided for a list of potential strategies. This list may be of use in the context of ethics training, in which individuals are trained to utilize the strategies most often used by successful researchers in the field.

Second, it appears that the language professionals tend to use when discussing their decisions in ethical dilemmas may relate to compensatory strategies that are different than the actual strategies they employ. This conclusion implies that both contextualized and decontextualized analyses of the use of these variables are appropriate, depending on the goal of a given research effort along these lines.

However, there is still much to be learned about these two methods for judging compensatory strategies, and with regard to the list of compensatory strategies overall. Are the strategies most commonly employed by research professionals also employed by graduate students, those in training to become research professionals? If not, how do these two groups differ? Furthermore, are compensatory strategies particularly beneficial when used in certain contexts, or overall? Additional studies are needed to explore these questions fully.

Another important consideration regards training individuals to utilize certain compensatory strategies. It may be that undergraduate and graduate students would benefit from training in the compensatory strategies most often used by professionals in their fields. Contrastingly, perhaps by training individuals in the less frequently used compensatory strategies, such as Complexity Evaluation, Maintaining Objective Focus, and Attending to Scientific Principles, we can provide young researchers with even better tools for tackling ethical dilemmas than those possessed by their more senior colleagues. Further research is necessary to identify the best ways to utilize the list of most and least frequently used strategies.

Compensatory strategies provide a valuable way in which individuals may improve their ethical decision making. This improvement is vital to curbing the dismaying trend toward unethical behavior in research that currently appears across all fields. With a greater understanding of the ways in which people may compensate for their biases in ethical decision making, it may be possible for us to counteract this trend and decrease the mounting frequency of ethical misconduct in research.

ACKNOWLEDGMENTS

Parts of this work were sponsored by Grant No. R21 ES021075-01 from the National Institutes of Health. We thank T. H. Lee Williams for his contributions to this effort.

Footnotes

1. Readers interested in obtaining a copy of this measure are invited to contact Michael Mumford at mmumford@ou.edu.

REFERENCES

  1. Antes A, Caughron J, Mumford MD. Internal report: Errors and compensatory strategies in ethical decision making. Unpublished manuscript, University of Oklahoma, Norman; 2010.
  2. Bernardin HJ, Buckley MR. Strategies in rater training. Academy of Management Review. 1981;6:205–212.
  3. Butterfield KD, Treviño LK, Weaver GR. Moral awareness in business organizations: Influences of issue-related and social context factors. Human Relations. 2000;53:981–1018.
  4. Dubinsky AJ, Loken B. Analyzing ethical decision making in marketing. Journal of Business Research. 1989;19:83–107.
  5. Ference TP. Induced strategies in sequential decision-making. Human Relations. 1972;25:377–389.
  6. Ferrell OC, Gresham LG. A contingency framework for understanding ethical decision making in marketing. Journal of Marketing. 1985;49:87–96.
  7. Fielding NG, Lee RM. Computer analysis and qualitative research. Thousand Oaks, CA: Sage; 1998.
  8. Fonteyn ME, Kuipers B, Grobe SJ. A description of think aloud method and protocol analysis. Qualitative Health Research. 1993;3:430–441.
  9. Haines R, Street MD, Haines D. The influence of perceived importance of an ethical issue on moral judgment, moral obligation, and moral intent. Journal of Business Ethics. 2008;81:387–399.
  10. Jones TM. Ethical decision making by individuals in organizations: An issue-contingent model. Academy of Management Review. 1991;16:366–395.
  11. Leech NL, Onwuegbuzie AJ. Beyond constant comparison qualitative data analysis: Using NVivo. School Psychology Quarterly. 2011;26:70–84.
  12. Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature. 2005;435:737–738. doi: 10.1038/435737a.
  13. Mumford MD, Connelly S, Brown RP, Murphy ST, Hill JH, Antes AL, Devenport LD. A sensemaking approach to ethics training for scientists: Preliminary evidence of training effectiveness. Ethics & Behavior. 2008;18:315–339. doi: 10.1080/10508420802487815.
  14. Mumford MD, Devenport LD, Brown RP, Connelly S, Murphy ST, Antes AL. Validation of ethical decision making measures: Evidence for a new set of measures. Ethics & Behavior. 2006;16:319–345.
  15. Pitz GF, Reinhold H, Geller ES. Strategies of information seeking in deferred decision making. Organizational Behavior & Human Performance. 1969;4:1–19.
  16. Scott G, Leritz LE, Mumford MD. The effectiveness of creativity training: A quantitative review. Creativity Research Journal. 2004a;16:361–388.
  17. Scott G, Leritz LE, Mumford MD. Types of creativity training: Approaches and their effectiveness. The Journal of Creative Behavior. 2004b;38:149–179.
  18. Sonenshein S. The role of construction, intuition, and justification in responding to ethical issues at work: The sensemaking-intuition model. The Academy of Management Review. 2007;32:1022–1040.
  19. Tenbrunsel AE, Messick DM. Ethical fading: The role of self-deception in unethical behavior. Social Justice Research. 2004;17:223–236.
  20. Treviño LK. Ethical decision making in organizations: A person–situation interactionist model. The Academy of Management Review. 1986;11:601–617.
  21. Treviño LK, Weaver GR, Reynolds SJ. Behavioral ethics in organizations: A review. Journal of Management. 2006;32:951–990.
  22. Tversky A, Kahneman D. Judgment under uncertainty: Heuristics and biases. Science. 1974;185:1124–1131. doi: 10.1126/science.185.4157.1124.
  23. Van Noorden R. Science publishing: The trouble with retractions. Nature. 2011;478:26–28. doi: 10.1038/478026a.
