Author manuscript; available in PMC: 2017 Jan 1.
Published in final edited form as: Account Res. 2016;23(5):288–308. doi: 10.1080/08989621.2016.1171149

Making Professional Decisions in Research: Measurement and Key Predictors

Alison L Antes 1, John T Chibnall 2,3, Kari A Baldwin 1, Raymond C Tait 2, Jillon S Vander Wal 3, James M DuBois 1
PMCID: PMC4968873  NIHMSID: NIHMS799158  PMID: 27093003

Abstract

The professional decision-making in research (PDR) measure was administered to 400 NIH-funded and industry-funded investigators, along with measures of cynicism, moral disengagement, compliance disengagement, impulsivity, work stressors, knowledge of responsible conduct of research (RCR), and socially desirable response tendencies. The PDR correlated negatively with cynicism, moral disengagement, and compliance disengagement, and positively with RCR knowledge and positive urgency, an impulsivity subscale. PDR scores were not related to socially desirable responding, the measures of work stressors, or the remaining impulsivity subscales. In a multivariate logistic regression analysis, lower moral disengagement, higher RCR knowledge, and identifying the United States as one's nation of origin emerged as key predictors of stronger performance on the PDR. The implications of these findings for the measurement of decision-making in research and future directions for research and RCR education are discussed.

Keywords: professionalism, research ethics, responsible conduct of research, research integrity, assessment, measurement, decision-making, RCR education and instruction

Introduction

Researchers must make a number of decisions in the course of a project, including considerations about data, protecting research participants, mentoring junior investigators, managing personnel, and handling conflicts of interest (AAMC-AAU 2008; Anderson et al. 2007; De Vries, Anderson, and Martinson 2006; Steneck 2007). In order to navigate these issues successfully, investigators must recognize the ethical considerations present in such situations, interpret written and unwritten rules, and address social and political dynamics, among other considerations (Devereaux 2014; Shamoo and Resnik 2015). Sound professional decisions adhere to the norms and ethical expectations of research, safeguard the trust of the public, promote collaboration, and foster the generation of new scientific knowledge (DuBois et al. 2015). Researchers’ actions influence their ability to exemplify the ethical principles of science and of their professions, and, ultimately, the integrity of their contributions to science (DuBois et al. 2015; Shamoo and Resnik 2015).

Professional decision-making requires identifying and thinking through the dynamics of uncertain situations, recognizing and weighing options, predicting likely outcomes, and often obtaining additional information (Antes et al. 2010; DuBois et al. 2015; Stenmark et al. 2011; Thiel et al. 2012). In so doing, investigators must manage self-serving biases, detrimental negative emotions, and short-term thinking, all of which hinder effective decision-making (Bazerman, Tenbrunsel, and Wade-Benzoni 1998; DuBois et al. 2015; Moore and Loewenstein 2004; Thiel, Connelly, and Griffith 2011). Reasoning strategies that facilitate critical analysis, information gathering, and evaluation of outcomes can offset errors and biases in thinking and facilitate sound decision-making (DuBois et al. 2015; Thiel et al. 2012). The purpose of this study was to identify personality, contextual, and knowledge variables among researchers that influenced the utilization of such reasoning strategies in professional decision-making.

The Professional Decision-Making in Research (PDR) Measure

The PDR is a scenario-based measure that operationalizes professional decision-making in research through the assessment of four reasoning strategies: (1) considering consequences and rules, (2) seeking help, (3) managing emotions, and (4) questioning personal assumptions and motives. The PDR is available in two parallel forms, which allows researchers to administer equivalent tests containing distinct items in a pre-post fashion for educational assessment. Each form consists of 16 items that describe a challenging professional situation in research. Examples of challenging situations include suspecting a co-investigator of tampering with data or managing underperforming research assistants. The items are presented in groups of four preceded by a scenario describing the broader context for the challenging situations. (The methods section provides additional detail about the PDR.)

Initial psychometric research with a sample of 300 NIH-funded researchers provided evidence that the PDR is a reliable, valid tool for assessing researchers’ use of strategies in professional decision-making (DuBois et al. 2015). The PDR demonstrated satisfactory reliability (alpha = .84) and parallel form correlation (r = .70). Further, it was not susceptible to socially desirable responding (i.e., providing answers that reflect what is deemed socially acceptable), r = −.02. Construct validity was evidenced through negative correlations of the PDR with narcissism (r = −.15), cynicism (r = −.26), moral disengagement (r = −.32), and compliance disengagement (r = −.38).

This initial research also identified a subset of researchers who performed significantly worse on the PDR than the others (DuBois et al. 2015). This low performing group scored higher in cynicism, moral disengagement, and compliance disengagement. Thus, a distrustful view of others, cognitive distortions of moral issues, and devaluation of research integrity and compliance rules were associated with PDR response patterns that were inconsistent with the four professional decision-making strategies. Instead, the low performing group more frequently selected choices that favored breaking rules, causing unwarranted harm, or acting without complete information or before full deliberation relative to the higher performing group. This pattern suggests that individuals who score poorly on the PDR may be those most “at risk” for poor research decision-making and therefore most in need of training or other support to foster research integrity. However, sufficient evidence is not yet available to ascertain whether the PDR might accurately identify “at risk” individuals, and it remains unclear how interventions might remedy deficits in professional decision-making. Thus, the present study provides a next step in this agenda. We aim to cross-validate the PDR relationships found in the initial study, as well as to identify additional variables that explain variance in performance on the PDR.

Predictors of Decision-Making in Research

Prior studies have shown that aspects of the work environment, the nature of the problem at hand, and personal characteristics influence ethical choices and behavior (Kish-Gephart, Harrison, and Trevino 2010; Trevino, den Nieuwenboer, and Kish-Gephart 2014). In the present research, we focused primarily on individual-level variables, including trait impulsivity and knowledge of principles or guidelines for responsible conduct of research (RCR). We also examined one element of the work environment—the individual’s perceptions of work-related stressors. In addition, we retained three variables as potential predictors that the prior study demonstrated to have significant associations with responses on the PDR: cynicism, moral disengagement, and compliance disengagement (DuBois et al. 2015). We hypothesized that impulsivity and work stress would be associated with lower professional decision-making, and that RCR knowledge would be associated with higher professional decision-making.

Impulsivity

Impulsivity is a personality trait associated with acting without thinking or engaging in rash behavior (Whiteside and Lynam 2001). It is linked with a variety of dysfunctional behaviors, including cheating (McTernan, Love, and Rettinger 2014), and is associated with decision-making deficits (Billieux et al. 2010; Cyders and Smith 2008; Enticott, Ogloff, and Bradshaw 2006; Evenden 1999; Franken et al. 2008; Martin and Potts 2009; Mobini et al. 2007; Zermatten et al. 2005). However, research generally has not examined potential relationships of impulsivity and decision-making in professional settings despite its potential to disrupt decision-making and behavior through several pathways: acting rashly in response to negative emotion (urgency), failing to consider consequences (lack of premeditation), inability to persist at a task (lack of perseverance), seeking stimulation and excitement (sensation seeking), and acting rashly when experiencing positive emotion (positive urgency) (Cyders et al. 2007; Whiteside and Lynam 2001; Whiteside et al. 2005).

With regard to ethical decision-making, recent theorizing suggests that both automatic and controlled processes influence ethical choices and behaviors (Haidt 2001; Reynolds 2006; Sonenshein 2007). When more conscious, controlled processing is necessary, for instance in cases of uncertainty, novelty, or strong emotions (Reynolds 2006), impulsive individuals may be less inclined to stop and think before acting or making a choice. Thus, we hypothesized that impulsivity would disrupt decision-making, reflected in fewer choices that utilize reasoning strategies on the PDR. Given the importance of considering possible outcomes and managing negative emotions in professional decision-making (Stenmark et al. 2011; Thiel, Connelly, and Griffith 2011), we expected both lack of premeditation and negative urgency to relate negatively to professional decision-making in research.

Work Stressors

Given the pressure researchers report experiencing in their work (De Vries, Anderson, and Martinson 2006; Lease 1999) and evidence that work-related stress and burnout lead to deficits in decision-making, well-being, and work performance (Ganster 2005; Ganster and Rosen 2013; Lin et al. 2015; Mather and Lighthall 2012; Oberlechner and Nimgade 2005; Stewart and Barling 1996; Van der Linden et al. 2005), we examined work stressors as potential correlates of decision-making. Three key job stressors—interpersonal conflict, role ambiguity (i.e., lack of clarity about how to accomplish work tasks), and role overload—can drain an individual’s emotional, cognitive, or physical resources (Zohar 1997). In the absence of adequate coping, stress undermines information processing, focusing attention, emotional regulation, and the generation of alternative solutions (Landy and Conte 2010). Given the resource intensive nature of professional decision-making, we hypothesized a negative relationship between work stressors and professional decision-making in research, and anticipated that role overload in particular would demonstrate a negative relationship with PDR scores.

Knowledge of RCR

Research rules and guidelines govern the work of researchers. They are intended to protect research subjects, prevent mishandling of research funds and data, and guide mentoring responsibilities in science (Shamoo and Resnik 2015). Although knowledge is commonly an intended outcome of instruction in RCR (Antes and DuBois 2014; Kalichman and Plemmons 2007; Powell, Allison, and Kalichman 2007), we are unaware of evidence specifically linking such knowledge to decision-making among researchers. The operating presumption is that making responsible choices in research relies at least in part on knowledge of the rules that govern science. Indeed, we expected that the successful identification of better professional choices in challenging research scenarios on the PDR would be associated with greater knowledge of RCR guidelines and principles. Accordingly, we hypothesized a positive relationship between RCR knowledge and professional decision-making.

Training in RCR is a primary mechanism for disseminating knowledge of rules and guidelines in research, fostering attitudes conducive to integrity, and developing skills for ethical decision-making (Antes and DuBois 2014). Yet, the effectiveness of RCR instruction has been extensively questioned (Antes et al. 2009; Antes et al. 2010; Devereaux 2014; Kalichman 2014a; Kalichman 2014b; Kalichman and Plemmons 2015; Nebeker 2014; Plemmons and Kalichman 2013; Tamot, Arsenieva, and Wright 2013). Our study provided an environment in which to test the relationships among RCR knowledge, disengagement attitudes, and professional decision-making. Therefore, we also examined whether these variables were associated with amount of participation in RCR or research ethics instruction.

In summary, our objective was to cross-validate relationships obtained in prior research (DuBois et al., 2015) and to determine whether impulsivity, work stressors, and RCR knowledge explain variance in performance on the PDR in a sample of funded researchers.

Method

Sampling and Recruitment

We utilized the National Institutes of Health (NIH) RePORTER website to identify NIH-funded extramural investigators who were diverse in terms of career stage and representative of the NIH-funded population in terms of gender, native language, racial background, type of research, and their institutions’ CTSA funding status. The database was queried for R01, K, T31, T32, F31, and F32 projects in their first year. This allowed recruitment of senior (R01) and junior investigators (K, T31, T32, F31, and F32). We also utilized CTSAcentral.org to identify KL2 scholars who were early career investigators. We obtained investigators’ names, emails, types of degree, institutions, and types of research from project information provided online, and we performed web searches to identify investigators’ phone numbers, gender, and English as a second language (ESL) status. We also tracked the CTSA funding status of the researchers’ institutions.

We also identified industry-funded physician researchers in order to evaluate potential differences with the NIH-funded investigators. Industry-funded physician researchers were identified using ClinicalTrials.gov and through web searches of U.S. medical schools for information regarding their clinical trials.

We excluded individuals who participated in the initial PDR validity study. Due to lower response rates in previous research among ESL individuals, we oversampled ESL participants. We emailed 2,506 potential participants an invitation to volunteer for a study examining professional decision-making in research, and followed up the initial email with phone calls and email reminders. Of the invitations sent, 93 (4%) email addresses were invalid, and an unknown number went to junk email. Of the persons who opened the email (n = 1,313), 120 (9%) opted out of the study, 196 (15%) proceeded to the study webpage but did not complete the surveys, and 400 (30%) participated.

Measures

We administered a battery of psychological tests measuring moral disengagement, cynicism, compliance disengagement, impulsivity, work stressors, RCR knowledge, and social desirability, along with the PDR. All of the selected measures have demonstrated reliability and validity.

Moral Disengagement

We assessed moral disengagement with the 8-item version of the Propensity to Morally Disengage Scale. It measures the tendency to use cognitive distortions, such as euphemistic labeling, diffusion of responsibility, and distortion of consequences, that underlie moral disengagement (Moore et al. 2012). Participants rated whether they agreed with items on a 7-point scale from 1 (strongly disagree) to 7 (strongly agree). Overall scale scores were the average of responses on the 8 items; therefore, the range of possible scores was 1 to 7, and higher scores reflect higher levels of disengagement.

Cynicism

We used the 11-item Global Cynicism Scale to assess the extent to which individuals have a generally distrustful attitude towards others (Turner and Valentine 2001). Individuals rated on a 7-point scale from 1 (strongly disagree) to 7 (strongly agree) the extent to which each statement described their thinking. Overall scale scores were generated by computing the average of ratings across the items. Thus, possible scores range from 1 to 7, with higher scores reflecting higher levels of cynicism.

Compliance Disengagement

We employed the 45-item “How I Think about Research” (HIT-Res) scale to assess disengagement from research compliance and integrity (DuBois, Chibnall, and Gibbs 2015). The HIT-Res assesses four cognitive distortions associated with disengagement: assuming the worst, blaming others, minimizing/mislabeling, and self-centered thinking. On a 6-point scale from 1 (strongly disagree) to 6 (strongly agree), participants rated how much statements described how they think about research. The HIT-Res total score was computed by averaging responses across the items. Scores can range from 1 to 6 with higher scores indicating higher compliance disengagement.

Impulsivity

We utilized the 59-item UPPS-P Impulsive Behavior Scale, which consists of five scales measuring distinct pathways to impulsive behavior: Urgency, Premeditation (lack of), Perseverance (lack of), Sensation Seeking, and Positive Urgency (these pathways were described in the introduction) (Cyders et al. 2007; Whiteside and Lynam 2001; Whiteside et al. 2005). Participants rated on a 4-point scale from 1 (disagree strongly) to 4 (agree strongly) their agreement with statements describing ways in which they might think or act. The average of ratings across the items of each scale provided that scale's total score. Possible scores range from 1 to 4, with higher scores reflecting higher levels of impulsivity.

Work Stressors

We administered the 20-item Role Hassles Index that measures three workplace stressors: conflict (interpersonal conflict and tension), ambiguity (uncertainty about performing work tasks), and overload (excessive demands or inadequate resources to accomplish demands) (Zohar 1997). Participants rated how physically or emotionally disruptive they found the workplace events described in each item on a 1 (slightly disruptive) to 3 (very disruptive) scale. If participants did not experience the event within the past two weeks, the instructions indicated that they should select “did not occur.” The scores computed from participants’ ratings indicate “hassle density” as a proportion of the highest possible score for each scale; thus possible scores on the three scales range from 0 to 1.
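As a minimal sketch, one plausible reading of the "hassle density" scoring described above can be expressed as follows (the function name, item counts, and ratings are illustrative, not the study's code or data):

```python
# Sketch of "hassle density" scoring for one Role Hassles subscale
# (assumed implementation; illustrative data).
def hassle_density(ratings, max_rating=3):
    """Score one subscale.

    `ratings` holds one entry per item: an integer 1-3 for events the
    respondent experienced in the past two weeks, or None for
    "did not occur". Density = sum of ratings / maximum possible sum,
    so scores fall in the interval [0, 1].
    """
    total = sum(r for r in ratings if r is not None)
    return total / (max_rating * len(ratings))

# Example: a 6-item subscale where two events did not occur.
density = hassle_density([3, 2, None, 1, None, 2])  # 8 / 18
```

Treating "did not occur" as zero disruption is an assumption consistent with the proportion-of-maximum description above.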

RCR Knowledge

We assessed knowledge of basic responsible conduct of research principles and rules using a multiple-choice test developed for this study. The test consisted of 15 items about diverse RCR topics outlined as key areas by NIH, including data ownership, human subject protections, animal welfare, mentorship, authorship, plagiarism, peer review, and collaboration (Steneck 2007). One to two items assessed each area. These items were taken from a test bank of 125 multiple-choice RCR knowledge items (Antes and DuBois 2014), and they were reviewed and revised to ensure that they conformed to guidelines for effective multiple-choice items (Haladyna and Downing 1986). Post hoc analysis examining item performance in our sample identified six items that were answered correctly by nearly everyone (90% or more of respondents) and one item that was missed by 77% of respondents. We dropped these items from subsequent analyses in addition to two items that demonstrated negative (r = −.11) or low (r = .03) item-total correlations. The RCR knowledge score was generated from the remaining 6 items, with 1 point for each item correct. Thus, possible scores range from 0 to 6. The average score was 4.28 (about 70% correct).
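The post hoc item analysis described above can be sketched as follows: flag items answered correctly by nearly everyone and items with low or negative corrected item-total correlations. The cutoffs follow the text; the function names and data are illustrative, not the study's code or data.

```python
# Pearson correlation, implemented directly to keep the sketch
# self-contained.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def flag_items(responses, easy_cutoff=0.90, r_cutoff=0.05):
    """`responses`: one list of 0/1 item scores per participant.
    Returns indices of items to drop (illustrative thresholds)."""
    flagged = []
    for j in range(len(responses[0])):
        item = [person[j] for person in responses]
        difficulty = sum(item) / len(item)  # proportion answering correctly
        if difficulty >= easy_cutoff:       # too easy to discriminate
            flagged.append(j)
            continue
        # Corrected item-total correlation: item vs. total over other items.
        rest = [sum(person) - person[j] for person in responses]
        if pearson(item, rest) < r_cutoff:
            flagged.append(j)
    return flagged
```

Computing the item-total correlation against the total over the *other* items avoids inflating the correlation by including the item in its own criterion.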

Social Desirability

We utilized the 13-item Marlowe-Crowne Social Desirability Scale to examine the potential influence of socially desirable responding on PDR scores (Reynolds 1982). Participants indicated whether each statement regarding a personal attitude or trait was true or false; for each statement, one of the two responses was keyed as socially desirable. One point was added to the total score each time a participant gave the socially desirable response. Thus, possible scores range from 0 to 13, with higher scores reflecting a greater propensity to respond in a socially desirable manner.

Background Questionnaire and Hours of RCR Instruction

We used a 19-item questionnaire that asked participants to report on their research, training, and demographic backgrounds. Among the questions about current academic rank and career training was an item in which participants estimated the number of hours of instruction in research ethics or responsible conduct of research that they had received during their career thus far. We used their responses as a continuous variable in our analysis. Ninety-four percent of participants provided an estimate, ranging from 0 to 300 hours, with a mean of 31.8 hours (SD = 31.7). Twenty hours of instruction was the median and modal response, with 19% reporting 20 hours of instruction. Being a clinical researcher who works with human subjects was associated with reporting more hours of instruction (r = .11, p < .05).

Professional Decision-Making in Research

We employed the 16-item Professional Decision-Making in Research (PDR) scale as the criterion measure (DuBois et al. 2015). This scenario-based measure assesses the use of professional decision-making strategies when confronted with challenging problems in research. Following the presentation of each scenario depicting a research setting (n = 4), four items present professional challenges in research, such as conflicts of interest, suspected misconduct, subject protection, and managing personnel (for a total of 16 items). Six possible responses follow each item, and participants were instructed to select the two that they would most likely choose if they were in the situation. Three responses exemplify stronger professional decision-making through their use of one of four decision-making strategies—seeking help, managing emotions, considering consequences and rules, and questioning one’s assumptions and motives; the remaining three responses violate one or more of these strategies. For each item, 1 point was assigned if the respondent selected any two of the three stronger options; thus, possible PDR scores range from 0 to 16. The measure consists of parallel forms A and B. Half of the participants completed form A and the remaining half form B to yield validation data for both forms while minimizing the time burden for participants (assignment to Form A or B was random). The aggregated data were used as the criterion measure.
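The PDR scoring rule described above can be sketched as follows (an assumed implementation; the option labels and answer keys are illustrative, not the actual PDR key):

```python
# Sketch of PDR item scoring: each item offers six options, three of
# which reflect the four decision-making strategies. A respondent picks
# two options and earns 1 point only if both picks are among the three
# stronger options.
def score_pdr(selections, keys):
    """`selections`: per-item pairs of chosen option labels.
    `keys`: per-item sets of the three stronger option labels."""
    return sum(1 for picks, key in zip(selections, keys)
               if set(picks) <= key)

# Two-item example: the first pair falls entirely within the key,
# the second pair does not.
keys = [{"A", "C", "E"}, {"B", "D", "F"}]
score = score_pdr([("A", "E"), ("B", "C")], keys)  # → 1
```

With 16 items scored this way, totals range from 0 to 16, matching the scale range reported above.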

Procedure

We emailed potential participants an invitation to participate in a study that was estimated to take 45–75 minutes. Participants were offered $100 in compensation for participation. Potential participants received one reminder email after one week, a phone call at 10 days, a third email at three weeks, and a fourth and final email at four weeks. If an individual agreed to participate, the recruitment email provided a link to the online study available through Qualtrics. After reading an informed consent document, participants completed the test battery described in the “Measures” section. The Institutional Review Board at Washington University in St. Louis reviewed and approved this study (#201401153).

Analytic Approach

We computed correlations for the PDR and cynicism, moral disengagement, compliance disengagement, impulsivity, work stressors, RCR knowledge, demographic variables, and social desirability. Next, we included statistically significant univariate correlates of PDR scores, including demographic variables, in a forward stepwise logistic regression analysis in order to identify variables uniquely associated with low versus high PDR performance.

In the logistic regression, we were interested in the variables that distinguished low scoring individuals from high scoring individuals. These groups were established by applying a strategy common in test development. Test developers often use the top and bottom 25% of scores to define groups of top and bottom performers (Kline 2005). Accordingly, we approximated the top and bottom 25% by examining the distribution of PDR scores. Overall, the negatively skewed distribution indicated that PDR scores were tightly clustered and generally scores tended to be high. Cut points were identified at the whole-number scores closest to the top and bottom 25%, resulting in cuts at ≤ 12 (27%) and ≥ 15 (33%); scores of 13 and 14 comprised the middle 40%. Therefore, the dichotomous outcome variable in our logistic regression was membership in the lower performing group (PDR score ≤ 12) or the higher performing group (PDR score ≥ 15). The low PDR group consisted of 106 individuals, with scores ranging from 2 to 12, and a mean score of 10.58 (SD = 1.77). The high performing group included 133 individuals scoring 15 or 16, with a mean of 15.41 (SD = .49). Residual analysis identified two multivariate outliers in the low group, and these scores were removed from further analysis.
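The cut-point procedure above can be sketched as follows: with whole-number scores, find the scores nearest the 25th and 75th percentiles and form the low and high groups from the tails (the function and scores below are illustrative, not the study data):

```python
# Sketch of approximating the top and bottom 25% of a discrete,
# negatively skewed score distribution (assumed implementation).
def nearest_whole_number_cuts(scores, tail=0.25):
    ordered = sorted(scores)
    n = len(ordered)
    low_cut = ordered[int(n * tail)]         # score near the 25th percentile
    high_cut = ordered[int(n * (1 - tail))]  # score near the 75th percentile
    low = [s for s in scores if s <= low_cut]
    high = [s for s in scores if s >= high_cut]
    return low_cut, high_cut, low, high

# Illustrative cluster of high scores with a low tail.
low_cut, high_cut, low, high = nearest_whole_number_cuts(
    [2, 10, 11, 12, 12, 13, 13, 14, 14, 15, 15, 16])
```

Because scores are discrete and tightly clustered, the resulting groups can exceed 25% of the sample, as in the study's 27% and 33% tails.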

Results

Demographics

Four hundred funded researchers completed the battery of instruments. Table 1 presents the sample characteristics. The majority of participants (63%) were between 20 and 39 years of age, and 55% were male. 236 participants (59%) reported conducting research for 10 years or less, 107 (27%) for 11–20 years, and 56 (14%) for 20+ years. With regard to academic rank, 39% described themselves as pre- or post-doctoral trainees, 35% as instructors or assistant professors, 11% as associate professors, and 15% as full professors. Participants indicated that they conducted a broad range of research: human subjects clinical research (43%), wet lab research (43%), animal subjects research (38%), human subjects social/behavioral research (27%), and “dry lab” research (22%). (Because participants could select more than one category to describe their type of research, the percentages add up to more than 100%.) Seventy-one percent (71%) of researchers identified their nation of origin as the United States and 29% reported being from international locations. Specifically, of the 117 international researchers, 51 (44%) reported being from Asia, 22 (19%) from the European Union, and the remaining from several other regions. Seventy-two percent of the sample described their race as White, 19% as Asian, and the remaining 9% as multiple or other racial categories. Twenty-one percent (21%) of the sample indicated that they spoke English as a second language (ESL).

Table 1.

Sample Demographics

Age N (% of 400)
 20–29 88 (22)
 30–39 163 (41)
 40–49 86 (21)
 >50 63 (16)
Gender*
 Male 221 (55)
 Female 179 (45)
Years Conducting Research†
 0–5 99 (25)
 6–10 137 (34)
 11–20 107 (27)
 20+ 56 (14)
Academic Rank
 Pre-doctoral trainee 108 (27)
 Post-doctoral trainee 49 (12)
 Instructor 20 (5)
 Assistant Professor 119 (30)
 Associate Professor 44 (11)
 Full Professor 58 (15)
Funding Status During Past Year
 Trainee on training grant 78 (20)
 Principal investigator 172 (43)
 Co-investigator/Other 15 (4)
 Multiple (e.g., Trainee & PI, PI & Co-Investigator) 135 (34)
Industry Funding Status
 Working on research funded by industry 84 (21)
 Not working on research funded by industry 316 (79)
Graduate Degree
 Professional doctorate (e.g., MD, JD, DO, DDS) 85 (21)
 Research doctorate (e.g., PhD, ScD) 163 (41)
 Multiple degrees (e.g., MD/PhD, MD/MPH) 101 (25)
 Master’s degree or other degree (e.g., MSCI, MS, MPH) 51 (13)
Research Lab/Program Director Status
 Directing a research lab/program 141 (35)
 Not directing a research lab/program 259 (65)
Type of Research*††
 Human Subjects Social or Behavioral Research 107 (27)
 Human Subjects Clinical Research 173 (43)
 Animal Subjects Research 153 (38)
 Dry Lab (e.g., bioinformatics, epidemiology, or existing data analysis) 86 (22)
 Wet Lab (e.g., working with cell cultures or select agents) 172 (43)
Nation of Origin
 From the United States 283 (71)
 Not from the United States 117 (29)
  Asia 51/117 (44)
  European Union 22/117 (19)
  Eastern Europe 10/117 (9)
  Africa 9/117 (8)
  Middle East 9/117 (8)
  Other 16/117 (14)
Racial Categories*
 Asian 76 (19)
 White 288 (72)
 Multiple/Other 36 (9)
Native Language*
 Native English Speaker 317 (79)
 English as a Second Language 83 (21)
* Indicates significant correlation with PDR score; see results section for description.

† Defined as: years doing research that led to publications (your own or others).

†† Participants were instructed to select all that apply; 56% of participants selected multiple types.

Descriptive Statistics and Correlations

We report correlations and descriptive statistics in Table 2. Scores on the PDR ranged from 2 to 16 with a mean score of 13.34 (SD = 2.24). Social desirability did not correlate with the PDR (r = .01, p = .82), replicating findings from the initial validation study (DuBois et al. 2015). Similar to the former study, moral disengagement (r = −.30), cynicism (r = −.21), and compliance disengagement (r = −.27) were statistically significant (p < .01) negative correlates of the PDR. Positive urgency (r = .19) and RCR knowledge (r = .26) were significant (p < .01) positive PDR correlates. Contrary to hypotheses, the remaining four impulsivity scales and the work stressor scales were not associated with PDR scores. Hours of RCR/ethics instruction were not correlated with PDR scores (r = .04, p = .42), RCR Knowledge (r = .07, p = .19), or compliance disengagement (r = −.08, p = .15).

Table 2.

Correlations Among Potential Predictors and Professional Decision-Making in Research

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
1-Moral Disengagement 1.00
2-Cynicism .34** 1.00
3-Compliance Disengagement .64** .41** 1.00
4-Role Conflict .22** .20** .27** 1.00
5-Role Overload .14** .18** .25** .54** 1.00
6-Role Ambiguity .27** .19** .25** .53** .47** 1.00
7-Negative Urgency −.37** −.28** −.40** −.20** −.14** −.24** 1.00
8-Lack of Premeditation −.05 .11* −.04 −.07 −.06 .04 .27* 1.00
9-Lack of Perseverance −.18** −.20** −.22** −.06 −.08 −.13** .31** .24** 1.00
10-Sensation Seeking −.09 .03 −.13** .02 −.03 .04 .05 .24** −.05 1.00
11-Positive Urgency −.39** −.31** −.42** −.12* −.10* −.19** .64** .30** .31** .22** 1.00
12-RCR Knowledge −.20** −.17* −.18** −.04 .01 −.13** .10 .01 .17** .07 .18** 1.00
13-Hours RCR Instruction −.08 −.08 −.08 .01 .07 .05 .10* .01 .14** −.06 .04 .07 1.00
14-Social Desirability −.21** −.13* −.28** −.15** −.15** −.21** .38** .04 .16** .01 .14** −.04 .02 1.00
15-PDR −.30** −.21** −.27** −.03 −.08 −.07 .10 .02 .04 .06 .19** .26** .04 .01 1.00
Mean 1.83 3.03 2.40 .20 .35 .23 3.11 3.14 3.29 2.55 3.60 4.28 31.76 7.09 13.34
Standard Deviation .71 .80 .65 .17 .20 .19 .53 .39 .40 .57 .43 1.31 31.71 2.53 2.24
Scale Range 1–7 1–7 1–6 0–1 0–1 0–1 1–4 1–4 1–4 1–4 1–4 0–6 N/A 0–13 0–16

N = 400.

*

p < .05;

**

p < .01.

Similar to the relationships observed in the initial sample of 300 investigators (DuBois et al. 2015), several demographic variables were statistically significant PDR correlates. Specifically, conducting clinical research (r = .11, p < .05), gender (r = .13, p < .01; females scored higher), and speaking English as one’s native language (r = .24, p < .01) were correlated with PDR scores. Similarly, racial category (r = .18, p < .01) also correlated with PDR scores: a positive correlation emerged for White race (r = .16, p < .01), and a negative correlation for Asian race (r = −.20, p < .01). Finally, identifying the U.S. as one’s nation of origin also correlated with the PDR (r = .23, p < .01). Type of research funding (industry versus NIH), however, did not correlate with the PDR (r = −.04, p = .42). Therefore, further analyses according to type of research funding were not considered.

Logistic Regression

The results of the logistic regression analysis (n = 239) predicting membership in the high versus low performing PDR group indicated that moral disengagement, OR = 0.51 (95% CI = 0.33–0.77), RCR knowledge, OR = 1.51 (95% CI = 1.21–1.90), and nation of origin, OR = 2.92 (95% CI = 1.60–5.32) were significant (p < .01) predictors of high performance on the PDR.

Discussion

Overall, our data show that a majority of researchers in a diverse sample of 400 NIH-funded and industry-funded investigators score high on a measure of professional decision-making in research. On average, individual decisions reflected use of good decision-making strategies on 83% of the items. However, as in a previous study (DuBois et al. 2015), a bottom tier of performers emerged who selected “less professional” choices more often than the other participants. On average, members of the bottom tier selected at least one less professional option on 34% of items.

Multivariate analyses showed that key predictors of high performance included greater RCR knowledge, lower moral disengagement, and being from the U.S. In fact, the odds of being in the higher performing group were nearly three times higher among individuals whose nation of origin was the U.S. relative to those from elsewhere. Similarly, the odds of being in the higher performing group were 1.5 times higher among people who scored better on the RCR knowledge test and two times higher for people who were lower in moral disengagement.
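The arithmetic behind these restatements is straightforward: an odds ratio below 1 for a predictor can be inverted to express the increase in odds associated with lower levels of that predictor, and odds ratios act multiplicatively (additively on the log-odds scale). A quick sketch using the odds ratios reported above:

```python
import math

# Odds ratios reported in the logistic regression (from the text):
OR_moral_diseng = 0.51   # per unit increase in moral disengagement
OR_rcr_knowledge = 1.51  # per unit increase in RCR knowledge
OR_us_origin = 2.92      # U.S. vs. non-U.S. nation of origin

# An OR below 1 is often restated in inverted form: the odds of high
# performance for those LOWER in moral disengagement.
inverted = 1 / OR_moral_diseng  # about 2, i.e., "two times higher"

# ORs multiply on the odds scale; equivalently, log-odds add.
log_odds_change = math.log(OR_rcr_knowledge)  # change in log-odds per unit
```

This is why an OR of 0.51 for moral disengagement and the statement that the odds were “two times higher for people who were lower in moral disengagement” are the same finding expressed in two directions.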

Our findings suggest there may be an underlying influence of researchers’ nation of origin on professional decision-making in research. This may be particularly true for investigators from Asian countries, as most of the investigators in our sample not from the U.S. indicated that they were from Asia. However, our sample represented a number of international regions. The PDR was written to assess the use of good decision-making strategies in solving ethical and professional problems in research. Some of these strategies – such as seeking help – may be more acceptable in U.S. culture than in other cultures, a proposition that should be examined in future research. Other strategies – such as recognizing rules and consequences – were assessed within the context of U.S. regulations and oversight systems. Such factors may advantage researchers who are highly acculturated to the U.S. research setting. That said, all study participants were conducting research in the U.S., so the importance of understanding professional decision-making in the U.S. research context applies regardless of nation of origin. Nonetheless, it is important to recognize that cultural background appears to influence how individuals approach ethical research challenges.

In light of the correlation between nation of origin and ESL status (r = .76, p < .01), it is also possible that participants from other countries were more likely to score lower on the PDR because of language challenges. Although reading level is a potential alternative explanation for lower scores among non-U.S. ESL participants, the PDR was written at the 8th grade level (with a Lexile score of 930, which is lower than USA Today) to mitigate this concern.
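Lexile scoring is proprietary, but open readability formulas make the same kind of check easy to reproduce. As an illustration (not the metric the authors used), the Flesch-Kincaid grade level can be computed from raw word, sentence, and syllable counts:

```python
def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level from raw counts.

    A widely used open readability index; shown here only to illustrate
    how a "grade level" claim can be derived from text statistics.
    """
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# E.g., a 100-word passage in 8 sentences containing 140 syllables:
grade = flesch_kincaid_grade(100, 8, 140)  # roughly a 6th-grade level
```

Shorter sentences and fewer syllables per word lower the grade, which is the mechanism by which an instrument like the PDR can be pitched at an 8th-grade reading level.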

Thus, cultural differences in researchers’ perceptions of norms, expectations, and practices in research warrant additional investigation, especially given that science is a worldwide endeavor (Heitman 2014). The prevalence of international researchers training and working in the U.S., international research collaborations, and the growing number of U.S. students training abroad underscore this need, as well as highlight the importance of cross-cultural considerations in research ethics education (Heitman 2014). Indeed, research in cross-cultural psychology indicates that cultural differences exist in judgments about socially appropriate behavior, perceptions of professionalism, and reasoning styles (Boucher and Maslach 2009; Heine and Ruby 2010; Resick et al. 2011; Uhlmann et al. 2013). Such differences may explain the observed differences in responses on the PDR. Overall, more research is needed to explore cultural differences and commonalities in science, as well as best practices for addressing cultural considerations in RCR training.

Not surprisingly, knowledge of RCR rules and principles was associated with higher performance on the PDR. This suggests that efforts to advance knowledge of RCR may be one fruitful strategy for advancing professional decision-making in research. However, in keeping with concerns that ethics instruction may have limited positive effects (Antes et al. 2009; Antes et al. 2010; Waples et al. 2009), we observed no relationship between RCR knowledge and hours of participation in RCR/ethics instruction. This finding should be interpreted in light of the limitation that hours of instruction were self-reported and may therefore have been estimated inaccurately. Nonetheless, these findings suggest the need for additional research to identify the role of RCR knowledge in good professional decision-making and the best approaches for conveying this knowledge to researchers.

Moral disengagement also emerged as an important variable in understanding performance on the PDR. Attitudes that discount or disconnect from the moral dimensions of problems were associated with poorer professional decision-making. Like RCR knowledge, these attitudes were not related to participation in research ethics instruction. Moral disengagement is a function of cognitive distortions that support self-interested thinking (DuBois, Chibnall, and Gibbs 2015), and it is unclear whether and how environmental or educational factors reinforce or reduce disengagement. Thus, future work might examine whether environmental variables such as organizational climate exert an influence on disengagement (Crain, Martinson, and Thrush 2013; Martinson et al. 2010). RCR educational efforts might also consider whether and how existing and new approaches to research integrity training might foster interest in engaging ethical issues in research (Antes et al. 2010; Devereaux 2014; Kalichman 2014b).

While the findings are generally consistent with prior research and support the validity of the PDR, they did not support our hypotheses regarding associations of the PDR with work stressors and impulsivity. Only positive urgency was associated with PDR scores, and that association was opposite to the hypothesized direction: greater positive urgency was related to higher PDR scores. Interpretation of this finding should be informed by the composition of the positive urgency items, e.g., “When I get really happy about something, I tend to do things that can have bad consequences.” It may be that people who endorse such items actually have insight into the impact of their emotions on the quality of their decision-making. Accordingly, they know that they ought to “take a time out” to control their emotions before responding to a difficult situation. This is particularly plausible in a population of doctoral-level participants; earning a doctoral degree might require or reinforce the development of compensatory strategies for professional decisions.

The lack of other significant associations between the PDR and the remaining impulsivity dimensions or work stressors is puzzling. At a conceptual level, it is unlikely that impulsivity and work stressors are unrelated to professional decision-making in research. Indeed, the impulsivity and work stressor scales demonstrated statistically significant correlations with compliance disengagement in research (DuBois, Chibnall, and Gibbs 2015), suggesting some relationship with research integrity. Instead, the pattern of findings with the PDR likely reflects several important measurement considerations.

The professional decision-making measure is predicated on the assumption that behavior is occasioned by decisions made in response to complex problems (Mumford et al. 2006). Although vignette-based measures provide a practical tool for research and educational assessment, they do not generate the same psychological experience as real-world situations (Kish-Gephart, Harrison, and Trevino 2010). In particular, they are limited in their activation of salient situational factors and emotional responses. Hence, while urgency and anxiety likely influence decision-making and behavior (Billieux et al. 2010; Kouchaki and Desai 2015; Penolazzi, Gremigni, and Russo 2012), salient features of those states may not be captured in a test-taking context such as that of the PDR. Thus, it may be most appropriate to view the PDR, and potentially other vignette-based tests, as examining predisposition rather than behavior (Cohen 2010). That is, the PDR may demonstrate what a person is inclined to do—in this case, address workplace problems in a professionally appropriate manner—but not what a person actually does when faced with such situations. We do not know whether performance on the PDR indicates that individuals actually make professionally appropriate decisions and behave professionally in challenging contexts.

We encourage additional research examining the connection of the PDR to measures of real-world behavior. This information would prove particularly critical in establishing whether low scores identify individuals “at risk” with regard to professional decision-making and behavior in research (Binning and Barrett 1989; Messick 1995). However, linking measurements of theoretical constructs to real-world observations is a classic measurement issue and a perennial challenge in psychological testing (Binning and LeBreton 2009). Moreover, self-report measures of behavior are likely to be biased. Accordingly, an important next step in research on professional decision-making should involve third-party ratings of researcher behavior and researcher characteristics. Of course, in the context of ethical (or unethical) behavior, where the consequences of misbehavior can be dire, implementing third-party observation is itself not without complications.

Finally, a related point concerns precision in the conceptualization and operationalization of constructs in studies of research ethics, integrity, and misconduct. A given study might examine professional wrongdoing, appropriate professional behavior, ethical decision-making, ethical behavior, unethical decision-making, or unethical behavior in research. Although related, these are distinct phenomena, and the antecedents of behaviors such as ethical decision-making and deviant behavior differ (Kish-Gephart, Harrison, and Trevino 2010). Findings regarding one may or may not extend to the other. In the present study, this point might account for the lack of relationship observed among impulsivity, work stressors, and the PDR. We have examined the tendency to select or violate professionally appropriate strategies for addressing professional and ethical problems in research. This constitutes just one facet of the broader domain of interest to investigators studying research integrity and research misconduct.

We have already noted the need for future research to address limitations of the present effort, namely the use of self-report measures and the need for behavioral outcome measures. Additionally, our data are correlational, so we cannot draw conclusions about causation. Furthermore, although we recruited from a representative set of NIH-funded and industry-funded investigators in the U.S., our sample is limited to those individuals who volunteered to participate. If the choice to participate was related in some fashion to professional decision-making, self-selection bias may have operated in our data, limiting the generalizability of the findings (Olsen 2008).

In summary, this study suggests that the use of good decision-making strategies in the research context is associated with greater RCR knowledge, lower moral disengagement, and identifying the U.S. as one’s nation of origin. Our findings reinforce the need to identify effective RCR instructional approaches. They also suggest that education may need to target not only knowledge but attitudes and the use of cognitive distortions or self-serving biases. Finally, educational efforts may need to address specific needs of individuals from outside of the U.S., emphasizing the value and legitimacy of using strategies such as questioning one’s assumptions and seeking help. However, this study also reminds us of how little we know about the predictors of behavior—especially unprofessional behavior, which is often rare (in egregious forms), hidden, and difficult to induce in experimental designs.

Acknowledgments

This study was supported by the U.S. Office of Research Integrity (6 ORIIR140007-02-01), and also in part by the National Center for Advancing Translational Sciences (2UL1 TR000448-06).

References

  1. AAMC-AAU. Protecting patients, preserving integrity, advancing health: Accelerating the implementation of COI policies in human subjects research. Washington, DC: AAMC-AAU; 2008. pp. 1–87.
  2. Anderson MS, Horn AS, Risbey KR, Ronning EA, De Vries R, Martinson BC. What do mentoring and training in the responsible conduct of research have to do with scientists’ misbehavior? Findings from a national survey of NIH-funded scientists. Academic Medicine. 2007;82:853–60. doi: 10.1097/ACM.0b013e31812f764c.
  3. Antes AL, Murphy ST, Waples EP, Mumford MD, Brown RP, Connelly S, Devenport LD. A meta-analysis of ethics instruction effectiveness in the sciences. Ethics and Behavior. 2009;19:379–402. doi: 10.1080/10508420903035380.
  4. Antes AL, DuBois JM. Aligning objectives and assessment in responsible conduct of research instruction. Journal of Microbiology & Biology Education. 2014;15:108–116. doi: 10.1128/jmbe.v15i2.852.
  5. Antes AL, Wang X, Mumford MD, Brown RP, Connelly S, Devenport LD. Evaluating the effects that existing instruction on responsible conduct of research has on ethical decision making. Academic Medicine. 2010;85:519–526. doi: 10.1097/ACM.0b013e3181cd1cc5.
  6. Bazerman MH, Tenbrunsel AE, Wade-Benzoni K. Negotiating with yourself and losing: Making decisions with competing internal preferences. Academy of Management Review. 1998;23:225–241.
  7. Billieux J, Gay P, Rochat L, Van der Linden M. The role of urgency and its underlying psychological mechanisms in problematic behaviours. Behaviour Research and Therapy. 2010;48:1085–1096. doi: 10.1016/j.brat.2010.07.008.
  8. Binning JF, Barrett GV. Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases. Journal of Applied Psychology. 1989;74:478–494.
  9. Binning JF, LeBreton JM. Coherent conceptualization is useful for many things, and understanding validity is one of them. Industrial and Organizational Psychology. 2009;2:486–492.
  10. Boucher HC, Maslach C. Culture and individuation: The role of norms and self-construals. Journal of Social Psychology. 2009;149:677–693. doi: 10.1080/00224540903366800.
  11. Cohen RJ. Psychological Testing and Assessment: An Introduction to Tests and Measurement. 7th ed. Boston: McGraw-Hill Higher Education; 2010.
  12. Crain AL, Martinson BC, Thrush CR. Relationships between the survey of organizational research climate (SORC) and self-reported research practices. Science and Engineering Ethics. 2013;19:835–850. doi: 10.1007/s11948-012-9409-0.
  13. Cyders MA, Smith GT. Emotion-based dispositions to rash action: Positive and negative urgency. Psychological Bulletin. 2008;134:807–828. doi: 10.1037/a0013341.
  14. Cyders MA, Smith GT, Spillane NS, Fischer S, Annus AM, Peterson C. Integration of impulsivity and positive mood to predict risky behavior: Development and validation of a measure of positive urgency. Psychological Assessment. 2007;19:107–118. doi: 10.1037/1040-3590.19.1.107.
  15. De Vries R, Anderson MS, Martinson BC. Normal misbehavior: Scientists talk about the ethics of research. Journal of Empirical Research on Human Research Ethics. 2006;1:43–50. doi: 10.1525/jer.2006.1.1.43.
  16. Devereaux ML. Rethinking the meaning of ethics in RCR education. Journal of Microbiology & Biology Education. 2014;15:165–168. doi: 10.1128/jmbe.v15i2.857.
  17. DuBois JM, Chibnall JT, Gibbs J. Compliance disengagement in research: Development and validation of a new measure. Science and Engineering Ethics. 2015. doi: 10.1007/s11948-015-9681-x.
  18. DuBois JM, Chibnall JT, Tait RC, Vander Wal JS, Baldwin KA, Antes AL, Mumford MD. Professional decision-making in research (PDR): The validity of a new measure. Science and Engineering Ethics. 2015. doi: 10.1007/s11948-015-9667-8.
  19. Enticott PG, Ogloff JRP, Bradshaw JL. Associations between laboratory measures of executive inhibitory control and self-reported impulsivity. Personality and Individual Differences. 2006;41:285–294.
  20. Evenden JL. Impulsivity: A discussion of clinical and experimental findings. Journal of Psychopharmacology. 1999;13:180–192. doi: 10.1177/026988119901300211.
  21. Franken IHA, van Strien JW, Nijs I, Muris P. Impulsivity is associated with behavioral decision-making deficits. Psychiatry Research. 2008;158:155–163. doi: 10.1016/j.psychres.2007.06.002.
  22. Ganster DC. Executive job demands: Suggestions from a stress and decision-making perspective. Academy of Management Review. 2005;30:492–502.
  23. Ganster DC, Rosen CC. Work stress and employee health: A multidisciplinary review. Journal of Management. 2013;39:1085–1122.
  24. Haidt J. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review. 2001;108:814–834. doi: 10.1037/0033-295x.108.4.814.
  25. Haladyna TM, Downing SM. Validity of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education. 1986;2:51–78.
  26. Heine SJ, Ruby MB. Cultural psychology. WIREs Cognitive Science. 2010;1:254–266. doi: 10.1002/wcs.7.
  27. Heitman E. Cross-cultural considerations in U.S. research ethics education. Journal of Microbiology & Biology Education. 2014;15:130–134. doi: 10.1128/jmbe.v15i2.860.
  28. Kalichman MW. A modest proposal to move RCR education out of the classroom and into research. Journal of Microbiology & Biology Education. 2014a;15:93–95. doi: 10.1128/jmbe.v15i2.866.
  29. Kalichman MW. Rescuing responsible conduct of research (RCR) education. Accountability in Research. 2014b;21:68–83. doi: 10.1080/08989621.2013.822271.
  30. Kalichman MW, Plemmons DK. Reported goals for responsible conduct of research courses. Academic Medicine. 2007;82:846–52. doi: 10.1097/ACM.0b013e31812f78bf.
  31. Kalichman MW, Plemmons DK. Research agenda: The effects of responsible conduct of research training on attitudes. Journal of Empirical Research on Human Research Ethics. 2015;10:457–459. doi: 10.1177/1556264615575514.
  32. Kish-Gephart JJ, Harrison DA, Trevino LK. Bad apples, bad cases, and bad barrels: Meta-analytic evidence about sources of unethical decisions at work. Journal of Applied Psychology. 2010;95:1–31. doi: 10.1037/a0017103.
  33. Kline TJB. Psychological Testing: A Practical Approach to Design and Evaluation. Thousand Oaks, CA: Sage Publications; 2005.
  34. Kouchaki M, Desai SD. Anxious, threatened, and also unethical: How anxiety makes individuals feel threatened and commit unethical acts. Journal of Applied Psychology. 2015;100:360–375. doi: 10.1037/a0037796.
  35. Landy FJ, Conte JM. Work in the 21st Century: An Introduction to Industrial and Organizational Psychology. 4th ed. Hoboken, NJ: John Wiley & Sons; 2013.
  36. Lease SH. Occupational role stressors, coping, support, and hardiness as predictors of strain in academic faculty: An emphasis on new and female faculty. Research in Higher Education. 1999;40:285–307.
  37. Lin W, Ma J, Wang L, Wang M. A double-edged sword: The moderating role of conscientiousness in the relationships between work stressors, psychological strain, and job performance. Journal of Organizational Behavior. 2015;36:94–111.
  38. Martin LE, Potts GF. Impulsivity in decision-making: An event-related potential investigation. Personality and Individual Differences. 2009;46:303–308. doi: 10.1016/j.paid.2008.10.019.
  39. Martinson BC, Crain AL, De Vries R, Anderson MS. The importance of organizational justice in ensuring research integrity. Journal of Empirical Research on Human Research Ethics. 2010;5:67–83. doi: 10.1525/jer.2010.5.3.67.
  40. Mather M, Lighthall NR. Risk and reward are processed differently in decisions made under stress. Current Directions in Psychological Science. 2012;21:36–41. doi: 10.1177/0963721411429452.
  41. McTernan M, Love P, Rettinger D. The influence of personality on the decision to cheat. Ethics and Behavior. 2014;24:53–72.
  42. Messick S. Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist. 1995;50:741–749.
  43. Mobini S, Grant A, Kass AE, Yeomans MR. Relationships between functional and dysfunctional impulsivity, delay discounting and cognitive distortions. Personality and Individual Differences. 2007;43:1517–1528.
  44. Moore C, Detert JR, Trevino LK, Baker VL, Mayer DM. Why employees do bad things: Moral disengagement and unethical organizational behavior. Personnel Psychology. 2012;65:1–48.
  45. Moore DA, Loewenstein G. Self-interest, automaticity, and the psychology of conflict of interest. Social Justice Research. 2004;17:189–202.
  46. Mumford MD, Devenport LD, Brown RP, Connelly S, Murphy ST, Hill JH, Antes AL. Validation of ethical decision making measures: Evidence for a new set of measures. Ethics and Behavior. 2006;16:319–345.
  47. Nebeker C. Smart teaching matters! Applying the research on learning to teaching RCR. Journal of Microbiology & Biology Education. 2014;15:88–92. doi: 10.1128/jmbe.v15i2.849.
  48. Oberlechner T, Nimgade A. Work stress and performance among financial traders. Stress and Health. 2005;21:285–293.
  49. Olsen R. Self-selection bias. In: Lavrakas PJ, editor. Encyclopedia of Survey Research Methods. Thousand Oaks, CA: Sage Publications; 2008. pp. 809–811.
  50. Penolazzi B, Gremigni P, Russo PM. Impulsivity and reward sensitivity differentially influence affective and deliberative risky decision making. Personality and Individual Differences. 2012;53:655–659.
  51. Plemmons DK, Kalichman MW. Reported goals of instructors of responsible conduct of research for teaching of skills. Journal of Empirical Research on Human Research Ethics. 2013;8:95–103. doi: 10.1525/jer.2013.8.2.95.
  52. Powell ST, Allison MA, Kalichman MW. Effectiveness of a responsible conduct of research course: A preliminary study. Science and Engineering Ethics. 2007;13:249–264. doi: 10.1007/s11948-007-9012-y.
  53. Resick CJ, Martin GS, Keating MA, Dickson MW, Kwan HK, Peng C. What ethical leadership means to me: Asian, American, and European perspectives. Journal of Business Ethics. 2011;101:435–457.
  54. Reynolds SJ. A neurocognitive model of the ethical decision-making process: Implications for study and practice. Journal of Applied Psychology. 2006;91:737–48. doi: 10.1037/0021-9010.91.4.737.
  55. Reynolds WM. Development of reliable and valid short forms of the Marlowe-Crowne Social Desirability Scale. Journal of Clinical Psychology. 1982;38:119–125.
  56. Shamoo AE, Resnik DB. Responsible Conduct of Research. 3rd ed. New York: Oxford University Press; 2015.
  57. Sonenshein S. The role of construction, intuition, and justification in responding to ethical issues at work: The sensemaking-intuition model. Academy of Management Review. 2007;32:1022–1040.
  58. Steneck NH. ORI Introduction to the Responsible Conduct of Research. Washington, DC: U.S. Government Printing Office; 2007.
  59. Stenmark CK, Antes AL, Thiel CE, Caughron JJ, Wang X, Mumford MD. Consequences identification in forecasting and ethical decision-making. Journal of Empirical Research on Human Research Ethics. 2011;6:25–32. doi: 10.1525/jer.2011.6.1.25.
  60. Stewart W, Barling J. Daily work stress, mood and interpersonal job performance: A mediational model. Work and Stress. 1996;10:336–351.
  61. Tamot R, Arsenieva D, Wright DE. The emergence of the responsible conduct of research (RCR) in PHS policy and practice. Accountability in Research. 2013;20:349–368. doi: 10.1080/08989621.2013.822258.
  62. Thiel CE, Bagdasarov Z, Harkrider L, Johnson JF, Mumford MD. Leader ethical decision-making in organizations: Strategies for sensemaking. Journal of Business Ethics. 2012;107:49–64.
  63. Thiel CE, Connelly S, Griffith JA. The influence of anger on ethical decision making: Comparison of a primary and secondary appraisal. Ethics and Behavior. 2011;21:380–403.
  64. Trevino LK, den Nieuwenboer NA, Kish-Gephart JJ. (Un)ethical behavior in organizations. Annual Review of Psychology. 2014;65:635–660. doi: 10.1146/annurev-psych-113011-143745.
  65. Turner JH, Valentine SR. Cynicism as a fundamental dimension of moral decision-making: A scale development. Journal of Business Ethics. 2001;34:123–136.
  66. Uhlmann EL, Heaphy E, Ashford SJ, Zhu L, Sanchez-Burks J. Acting professional: An exploration of culturally bounded norms against nonwork role referencing. Journal of Organizational Behavior. 2013;34:866–886.
  67. Van der Linden D, Keijsers GPJ, Eling P, Van Schaijk R. Work stress and attentional difficulties: An initial study on burnout and cognitive failures. Work and Stress. 2005;19:23–36.
  68. Waples EP, Antes AL, Murphy ST, Connelly S, Mumford MD. A meta-analytic investigation of business ethics instruction. Journal of Business Ethics. 2009;87:133–151.
  69. Whiteside SP, Lynam DR. The Five Factor Model and impulsivity: Using a structural model of personality to understand impulsivity. Personality and Individual Differences. 2001;30:669–689.
  70. Whiteside SP, Lynam DR, Miller JD, Reynolds SK. Validation of the UPPS impulsive behaviour scale: A four-factor model of impulsivity. European Journal of Personality. 2005;19:559–574.
  71. Zermatten A, Van der Linden M, d’Acremont M, Jermann F, Bechara A. Impulsivity and decision making. Journal of Nervous and Mental Disease. 2005;193:647–650. doi: 10.1097/01.nmd.0000180777.41295.65.
  72. Zohar D. Predicting burnout with a hassle-based measure of role demands. Journal of Organizational Behavior. 1997;18:101–115.
