Abstract
Disciplines establish and enforce professional codes of ethics in order to guide ethical and safe practice. Unfortunately, ethical breaches still occur. Interestingly, breaches are often perpetrated by professionals who are aware of their codes of ethics and believe that they engage in ethical practice. The constructs of behavioral ethics, which are most often discussed in business settings, attempt to explain why ethical professionals sometimes engage in unethical behavior. Although traditionally based on theories of social psychology, the principles underlying behavioral ethics are consistent with behavior analysis. When conceptualized as operant behavior, ethical and unethical decisions are seen as being evoked and maintained by environmental variables. As with all forms of operant behavior, antecedents in the environment can evoke unethical responses, and consequences in the environment can shape future unethical responses. In order to increase ethical practice among professionals, an assessment of the environmental variables that affect behavior needs to be conducted on a situation-by-situation basis. Knowledge of discipline-specific professional codes of ethics is not enough to prevent unethical practice. In the current article, constructs used in behavioral ethics are translated into underlying behavior-analytic principles that are known to shape behavior. How these principles establish and maintain both ethical and unethical behavior is discussed.
Keywords: Ethics, Behavioral ethics, Ethical decisions, Professionalism, Interdisciplinary, Behavior analysis
Many health care and social service professions have a code of ethics, or similar document, that is meant to guide practitioners in pursuit of ethical practice (Behavior Analyst Certification Board, 2020; American Occupational Therapy Association, 2015; American Physical Therapy Association, 2019; American Psychological Association, 2017; American Speech-Language-Hearing Association, 2016). Although the specific contents of the documents differ by profession, each contains foundational principles on which best practice should be based, a listing of rules of the profession to be used when faced with an ethical dilemma, and often a description of procedures and potential consequences when ethical rules or boundaries are breached. Discipline-specific codes of ethics were written to “provide standards of behavior and performance that form the basis of professional accountability to the public” (American Physical Therapy Association, 2019, p. 1), “outline standards of conduct the public can expect from those in the profession” (American Occupational Therapy Association, 2015, p. 1), and “provide guidance to members, applicants, and certified individuals as they make professional decisions” (American Speech-Language-Hearing Association, 2016, p. 2), and for the “welfare and protection of the individuals and groups with whom psychologists work” (American Psychological Association, 2017, p. 3). Practitioners, as well as students of the professions, are mandated to learn and abide by their respective codes. For example, the Ethics Code for Behavior Analysts (Behavior Analyst Certification Board, 2020) states that all Board Certified Behavior Analysts, Board Certified Assistant Behavior Analysts, and any professional who has applied for the certification examination must know and adhere to the code. However, the question remains as to whether these codes alone are an effective way to shape ethical behavior.
Although there is not one universally agreed-upon definition of ethical behavior, in its most basic conceptualization, ethical behavior refers to doing what is “right” as outlined by the accepted rules of a society or culture (Cox, 2020). Traditional theories of ethical behavior, or what is commonly called “moral behavior,” are heavily rooted in the classic theories of cognitive development (i.e., the theories of moral development described by Piaget and Kohlberg; Reynolds & Ceranic, 2009). Although a comprehensive review of the cognitive theories of ethical behavior is beyond the scope of this article, Zhong et al. (2009) summarized that ethical behavior is seen as a response to moral thought, which improves qualitatively as a person develops chronologically and cognitively. In other words, moral thought is postulated to develop in stages. Once a stage is achieved, moral behavior is theorized to be predictable and consistent over time until the individual starts to grow into the next moral stage (Zhong et al., 2009). The primary focus is on moral reasoning and thought, not on behavior. There is little focus on environmental context or the influence of environmental variables on individual behavior (Reynolds & Ceranic, 2009). This is a potential issue with the cognitive theories of moral development because the contextual variables of a situation may exert significant influence over behavior. In fact, research indicates a lack of correlation between moral reasoning and behavior (Bowman, 2018). Understanding why ethical behavior occurs, or might not occur, is therefore of high importance.
Cox (2020) summarized several theories traditionally used to explain the occurrence of ethical behavior. Three of these are consequentialism, deontology, and virtue theory. The theory of consequentialism, as reviewed by Cox, states that behavior is considered ethical if it leads to the best outcome for the most people. Virtue theory, according to Cox, states that some behaviors are inherently ethical, and some are inherently unethical regardless of context, consequences, or rules. Deontology is the theory that behavior is ethical as long as it is consistent with socially derived rules of conduct and, once established, these rules govern behavior across all circumstances and contexts (Cox, 2020). According to deontological theory, if all people adhere to the established rules of conduct, ethical behavior will be ensured (Rosenberg & Schwartz, 2019).
One issue with assuming that behavior can be predicted by professional codes of ethics (deontology) or by cognitive stage models is that people do not always behave in ways that are consistent with what they know or what they think is or should be ethically correct (O’Brien et al., 2017). The fact is that any professional may engage in unethical behavior given the right, or in this case “wrong,” circumstances (De Cremer, 2009). Research indicates that 92% of Americans are satisfied with their moral character and that 80% think of themselves as being more ethical than others (Prentice, 2014). Despite these data, research on ethical behavior indicates that unethical behavior is often committed by professionals who report that they know the ethical rules and state that they are ethical people (De Cremer et al., 2010; Prentice, 2014). Why might this be the case? Research in the field of behavioral ethics explains ethical and unethical behavior by taking into account contextual variables in the environment and how they shape ethical behavior over time (Reynolds & Ceranic, 2009). The inclusion of the influence of contextual environmental variables on the behavior of individuals makes the theories of behavioral ethics somewhat unique and more practical than other theories of ethical behavior. But what is behavioral ethics?
Behavioral Ethics
The field of behavioral ethics is an extension of the theories of social psychology and how they relate to ethical decision making (De Cremer, 2009). Behavioral ethics is a line of research and theory that attempts to explain why well-intentioned people sometimes “do bad things” (Prentice, 2014). At this time, research is being done in business and other for-profit professions in order to understand how major scandals repeatedly occur in areas such as finance (Duska, 2017), pharmaceuticals (Feldman et al., 2013), and manufacturing (Schwartz, 2017). A search of the term “behavioral ethics” that I conducted revealed that it is absent from the literature of the clinical and medical professions, including behavior analysis. Given that these professions are not free of unethical behavior among practitioners, it seems logical to extend the theories of behavioral ethics to fields such as medicine, psychology, speech therapy, occupational therapy, physical therapy, education, and behavior analysis.
As summarized in Schwartz (2017), the foundations of ethical behavior can best be understood within a four-component decision-making model as described by Rest (1984, 1986). The first component is moral awareness. In this first step to decision making, an individual identifies that there is a moral issue occurring that will require a response. The second component is moral judgment. This is where the situation is assessed to determine the appropriate moral action. The third component is moral intention. It is here that the individual assesses their motivations to act in varying ways. Moral behavior is the fourth component. It is here that the individual executes a plan of action by engaging in a chosen behavior. The first component is concerned with assessing the variables within the environment that will ultimately determine the behavior that occurs in the last component (Treviño et al., 2006); however, there is a difference between ethical intentions and ethical behavior (Chugh & Kern, 2016). The behavior that is ultimately displayed is determined more by situational factors than by dispositional factors (Zhong et al., 2009). Thus, it is important to examine how people behave rather than the intentions they report. Human ethical behavior is very much determined by the environment (Bowman, 2018). It is here that the science of behavior analysis can exert an influence on the explanation and prediction of ethical behavior. Whereas the majority of constructs used in behavioral ethics fall within the realm of social psychology (e.g., incrementalism, conformity bias, framing, and self-serving bias; De Cremer, 2009), the principles behind the constructs may be explained in behavior-analytic terms. Although not an exhaustive review, the current article will provide some examples of how constructs used in behavioral ethics may be explained in a parsimonious and direct way through the principles of behavior. This will provide a clearer understanding of how ethical and unethical behavior may be shaped and maintained. The article will also discuss how the theories of behavioral ethics, once translated into behavior-analytic terms, may be used to identify, assess, and safeguard against unethical behavior among professionals on clinical treatment teams.
Self-Serving Bias
One of the most fundamental behavioral ethics constructs used to explain unethical behavior is the self-serving bias. According to this construct, people will process information and behave in ways that serve their own self-interests (Prentice, 2014). This can occur either “consciously” or “unconsciously” (Prentice, 2014). When the bias operates unconsciously, it influences behavior outside the awareness of the affected individual, making its influence difficult to understand and prevent (Moore, 2009). It is also easier to identify the self-serving bias when it is affecting the behavior of others than when it is affecting one’s own behavior (Prentice, 2014). Personal gains from unethical behavior may be subtle or overt.
Empirical research supports the influence of the self-serving bias on unethical behavior (Moore, 2009). For example, work in the field of medicine has shown that the clinical decisions of doctors can be manipulated in an unethical direction by overt gifts and other rewarding influences from pharmaceutical companies (Dana & Loewenstein, 2003; Wazana, 2000). In another example, Loewenstein et al. (1993) showed the effects of manipulating monetary rewards from court hearings on the behavior of persons assigned as plaintiffs or defendants in hypothetical legal settlements. Studies such as these support the idea that people will act in unethical ways when those actions lead to self-serving rewards; however, the creation of the self-serving bias as a construct seems unnecessary. The underlying mechanism of the self-serving bias is likely one of the foundational principles of behavior analysis: the principle of reinforcement.
When faced with environmental variables that set up an ethical dilemma, people are more likely to engage in behaviors that have produced reinforcers in the past than in behaviors that have produced punishers or have been placed on extinction. If what is reinforced is unethical behavior, unethical behavior will increase. Over repeated exposures to similar contingencies, a pattern of unethical behavior is shaped. Ethical behavior can be conceptualized as operant behavior (Cox, 2020); therefore, it is affected by the principle of reinforcement as is all operant behavior. Repeated studies have shown that providing incentives for engaging in unethical behavior directly increases engagement in unethical behavior (Treviño et al., 2006). Ethical and unethical behavior can be described, predicted, and controlled by past experiences with consequences and current antecedent contingencies (Cox, 2020). The self-serving bias, therefore, may be translated into the principle of reinforcement.
What about the finding that the self-serving bias can exert influences over behavior both consciously and unconsciously (Prentice, 2014)? Can “unconscious” influences be explained through the principle of reinforcement? The answer lies in research on the automaticity of behavior. As stated by Skinner (1953), “A reinforcing connection need not be obvious to the individual reinforced” (p. 75). Automaticity of behavior refers to the finding that operant behavior can be shaped by reinforcement in the absence of the affected person’s awareness of the contingency (Skinner, 1953). In other words, ethical or unethical behavior can be increased in a person in the absence of their awareness of the contingency of the reinforcement (i.e., “unconsciously”).
Why is it that the self-serving bias seems to account for unethical behavior more than it does ethical behavior? According to Michael (2004), reinforcement affects responses more effectively when there is a shorter latency between the target response and the delivery of reinforcement, and when the person is motivated to seek the reinforcer being offered. The consequences for unethical behavior are often more immediate (shorter latency), more tangible, easier to predict, and more desired (high motivation to obtain) than the consequences for ethical behavior (Moore, 2009). It would be logical for organizations to reinforce ethical behavior while withholding reinforcement for unethical behavior; however, this is often not the case because ethical behavior frequently goes unnoticed by others (James Jr., 2000) or is the “expected” behavior of professionals. Behaviors such as receiving kickbacks for referrals and backdating billing paperwork may be reinforced with monetary compensation, whereas behaviors such as whistleblowing and accurately dating late paperwork may go unacknowledged (resulting in extinction) or be punished by loss of monetary compensation or supervisory reprimands. Unfortunately, this results in unethical behavior being more frequently increased by reinforcement than ethical behavior (James Jr., 2000).
Incrementalism
Also known as the slippery slope phenomenon (Moore, 2009), the behavioral ethics construct of incrementalism accounts for the gradual increase in an individual’s unethical behavior, over the course of time, that occurs without the awareness of the individual. Most often, people do not suddenly start engaging in significant unethical behavior without having first engaged in less significant unethical behavior (Moore, 2009). People gradually engage in unethical behavior in increasingly greater amounts, hardly noticing the shift until the occurrence of an undesired consequence (Moore, 2009). When slightly unethical behavior becomes the norm in practice, similar, yet increasingly unethical behaviors become normalized as well (Tenbrunsel & Messick, 2004). This slippery slope may unknowingly lead to a practice fraught with significant breaches in ethics. Unethical behavior gradually becomes routine (Prentice, 2014; Schwartz, 2017). Not only does incrementalism affect the behavior of the individual, but it also makes others less likely to notice the gradual shift in the individual’s behavior. This leads to unethical practice in others being unaddressed and underreported (Chugh & Kern, 2016).
Although anecdotal reports of incrementalism affecting ethical behavior are common, empirical tests of the phenomenon on ethical behavior are rare (Moore, 2009). Indirect support is provided by research done in the field of marketing. For example, empirical studies have shown that people will refuse to agree to a large request if that request is presented first; however, they will unknowingly agree to the same large request when presented after a series of smaller requests (Ashforth & Anand, 2003).
So, why do people unknowingly engage in higher frequencies and more intense topographies of unethical behavior over time, as the construct of incrementalism describes? The behavioral principles of shaping, response generalization, and habituation are possible explanations. Like all operant behavior, unethical behavior is influenced by contingencies in the environment (Cox, 2020). It can be shaped in intensity and frequency through the reinforcement of successive approximations (unethical behavior is shaped over time), generalized over time to responses similar in topography and serving the same function (unethical behavior generalizes), and normalized when newly formed behavior becomes a baseline for future behavior (people may habituate to unethical behavior over time). Code 2.12 (considering medical needs) of the Ethics Code for Behavior Analysts (Behavior Analyst Certification Board, 2020) provides an example in the field of behavior analysis. Behavior analysts are ethically responsible for recommending an assessment for medical conditions that can account for behavior, prior to implementing a behavioral treatment, for any behavior that could reasonably have a medical cause. For example, you have a two-and-a-half-year-old typically developing client who has a toilet-training goal. He has not been toilet trained before, and his urination appears typical. You might conceivably decide to develop and initiate a toilet-training plan without recommending a medical evaluation. The client toilet trains successfully. You then have a client who is 8 years old with autism spectrum disorder and has a toilet-training goal. His parent tells you that there are probably no medical issues preventing him from being trained. After some thought, you decide to immediately move ahead with the development and initiation of a behavioral plan. Like your previous client, he toilet trains successfully. You are quickly gaining a reputation for toilet-training interventions and have successfully trained 10 clients of varying conditions, all without recommending medical evaluations. Your next client is 17 years old and was toilet trained until the age of 16, at which time a regression occurred and urination accidents began to be observed. He has multiple diagnoses, including a genetic condition that resulted in significant developmental issues from birth. After a momentary thought that there could be some medical issues that resulted in the regression, you remember your past successes and decide to move ahead with a behavioral plan. Unfortunately, this time you are not successful because his regression was caused by bladder inflammation related to his genetic condition. The issue was not identified because you did not recommend a medical evaluation prior to initiating a behavioral intervention. How did you ignore the ethics code with a client whose behavior could so obviously be related to a medical condition? The answer may lie in the slippery slope or, when translated into behavior-analytic terminology, the shaping and habituation of unethical behavior over time.
Framing
According to the behavioral ethics construct of framing, the way a situation is described (framed) leads to differences in the way a situation is interpreted and acted on. People who usually engage in ethical behavior may engage in unethical behavior once a situation is described in a way that leads them to act unethically (Cameron & Miller, 2009). An often-used real-life example of when framing influenced ethical decision making is the situation that ultimately led to the Challenger shuttle disaster in 1986. As summarized by Duska (2017), right before the launch, engineers became concerned about the potential weakening of O-rings during takeoff. The engineers warned of potential disaster if the O-ring issue was not addressed. Because there was a lot riding on this flight, supervisors asked the engineers to take off their “engineer hats” and put on their “management hats.” They were told to look at the situation from a different perspective. As managers, the engineers were more concerned with the undesired ramifications of delaying the flight and were now less concerned with the potential for disaster from the O-ring issue. The decision was made to move forward with the flight as scheduled, and the result was disastrous (Duska, 2017). In a more behavior-analytic example, professionals on a clinical team may be more likely to go out of their way to help families who are not involved in their child’s school if the situation is framed as “helping families in need of services who find it difficult to be involved in the school due to language barriers, travel difficulties, and financial struggles” versus “helping absentee families that are not on board with the school.” Once the families are framed as “not on board,” behavior analysts may not try to put systems in place to help families that have legitimate needs. The behavior of the professionals changes just because of the way the reality was framed, not because of the objective reality of the situation.
In behavioral terms, when you change the way the variables of a situation are described, you are likely altering the motivating operations. The person does not become more ethical or less ethical. The way the situation is presented simply makes certain stimuli more or less reinforcing at a given moment in time, makes certain environmental stimuli more or less evocative than usual, and increases or decreases the likelihood of the behaviors associated with access to the relevant reinforcers. These are all properties of motivating operations as summarized in Cooper et al. (2020). Therefore, the behavioral ethics construct of framing may be translated into behavior-analytic terminology by stating that changing the way the variables of a situation are described will alter the motivating operations in effect, thereby altering behavior in a predictable direction.
What is interesting, yet dangerous, with framing is that the only thing that changes is the subjective way in which the situation is perceived (Cameron & Miller, 2009). The facts of the situation, and the resulting consequences to behavior, remain unchanged. Therefore, the motivating operations should remain unchanged, but they do not. Consider the Challenger disaster example through the “frame” of motivating operations. When acting within their actual job role, the engineers were directly responsible for the mechanical workings of the shuttle. Regardless of financial consequences, they were motivated to ensure a safe and correct takeoff. Safety was the most potent reinforcer because it was a direct reflection of their abilities and efforts. When weaknesses were noted in the O-rings, the O-rings became highly evocative stimuli that needed to be addressed. Advocating for a delay in the flight, in order to fix the O-rings, became a predictable response of the engineers. But the situation was then framed differently (Duska, 2017). When asked to act as managers, the engineers were now responsible for ensuring that the takeoff happened as scheduled. There was a lot riding on this flight, and serious financial consequences and reputational damage would be the result of any delay. NASA promised an exciting launch, and the world was watching. The engineers were now motivated to continue with the flight as scheduled (a shift in motivating operations). Avoidance of financial loss and blemishes to NASA’s reputation became the most highly preferred reinforcers. The O-rings became less evocative and could be ignored, and the decision to launch the shuttle became the most likely behavior to occur. Once again, this was an unethical decision with disastrous results. Were the engineers unethical professionals, or were they professionals who engaged in unethical behavior because they were now under a motivating operation that promoted unethical behavior more than ethical behavior? Seeing the situation through a behavior-analytic “frame” may support the latter.
Obedience to Authority
In a series of experiments on obedience to authority, Milgram (1965) demonstrated how everyday people can be coerced into engaging in highly disturbing behavior simply by being instructed to do so by a perceived authority figure. In his groundbreaking experiments, 40 men served as participants in what they thought was a study of memory. The participants served as “teachers” in the study and were paired with confederates who served as “learners.” With the confederates outside of view, the participants presented lists of words for the confederates to memorize and repeat. In response to incorrect answers, the participants were instructed by the experimenter, who represented an authority figure, to deliver electric shocks of increasing voltage to the “learners.” In reality, the experimenter was not interested in studying memory, the confederates were not actually providing responses, all auditory responses of the “learners” were played from a recording, and no electric shocks were being delivered. Unaware of the deception, the participants were under the impression that they were delivering painful shocks to their peers. The results were astonishing. Despite verbal protests, obvious tension, heightened anxiety, hesitancy to respond, and questioning of their own actions, many of the participants found themselves delivering what they thought were shocks of potentially lethal voltage to people whom they believed to be innocent peers. Their unethical behavior was in direct response to the directives of the perceived authority figure (Milgram, 1965). A similar study, with similar results, was conducted by Sheridan and King (1972). In a more contemporary, real-world situation, Wells Fargo bank tellers in 2016 knowingly set up accounts that were detrimental to their clients, allegedly because they were instructed to do so by supervisors (Duska, 2017). Many stated that they knew what they were doing was wrong.
The behavioral principles accounting for the obedience-to-authority construct are basic but may change depending on the details of the environmental situation. Sometimes, through the principle of positive reinforcement, individuals may overtly set out to please authority figures in order to receive tangible reinforcers and preferred privileges that these figures control. This is often the case in work settings where a supervisor has the power to advance the employment status of subordinates. When instructed to do so, employees may ignore ethical standards in order to potentially advance their careers (Prentice, 2014). Similarly, people may engage in unethical behavior, when told to do so by an authority figure, in order to avoid contacting a punisher, which may be a consequence of noncompliance (i.e., negative reinforcement). Because authority figures typically have the power to both reinforce and punish the behavior of subordinates, a combination of positive and negative reinforcement may be at play. As seen in Milgram (1965), this occurs even when the authoritative relationship is only a perceived one. Over the course of time, a reinforcement history may be established where people become operantly conditioned to obey authority figures as a stimulus class. Ultimately, rule-governed behavior is formed (e.g., “Listen to your boss,” “Do what your teacher says”). On the beneficial side, once people begin to obey authority figures as a rule, it promotes societal order (Prentice, 2014). Of course, this is only beneficial if what is being obeyed is ethical. Whereas leaders may use social learning principles to reinforce the ethical behavior of followers (Treviño et al., 2006), leaders may also employ behavioral contingencies to promote unethical behavior. For example, if a well-respected behavior analyst instructs a behavior analyst in training to develop and implement a punishment-based behavior intervention plan without first conducting a functional behavior assessment, the trainee should be aware that BACB Codes 2.13 and 2.14 (Behavior Analyst Certification Board, 2020) will be broken if they comply with the instructions of the supervisor. Unfortunately, the stimulus class of “well-respected behavior analyst supervisor” may exert control over the behavior of the trainee despite the trainee knowing that a code is about to be broken. Although adhering to the instructions of the supervisor constitutes unethical behavior, it may result in a good evaluation for the trainee. Defying the instructions, which would be the ethical decision, may result in a poor evaluation for the trainee. A history of contacting reinforcers and punishers delivered by supervisors would predict that the trainee might engage in the unethical response of developing the intervention plan without a functional behavior assessment if they are motivated to obtain a good evaluation from the supervisor.
Obedience to authority may also be influenced by respondent conditioning. In many cases, people who have risen to positions of authority have done so because of their experience and knowledge in areas over which they have authority. As a stimulus class, authority figures become associated with expertise in their areas, and individuals respond accordingly when given directives by these figures (e.g., “The police officer must know,” “I am sure he knows how to submit the invoice,” “The accountant said it’s OK, and he worked for the IRS”). Unfortunately, not all authority figures have expertise, and even if they do, some may promote unethical behavior due to their own self-serving bias. Responding to authority figures, however, may generalize across all members of the stimulus class. As an example, private practice agencies in behavior analysis may have CFOs with backgrounds in finance or business instead of behavior analysis. Despite these CFOs not having experience in the provision of behavior-analytic therapy or knowledge of the BACB code of ethics, instructions given by the CFOs may change the behavior of clinical employees because the CFOs have become conditioned as leaders of the agencies, taking on the qualities of clinical leaders without having earned those qualities.
When translated into behavior-analytic principles, it is easy to see how obedience to authority develops and is maintained. It is equally easy to see how these contingencies affect the decisions and behaviors of clinical professionals. Do the behaviors of clinical team members change when an administrative executive (e.g., a principal, executive director, or director of special services) attends a meeting? Do the behaviors of supervisees change in the presence of their supervisors? Do students of behavior analysis change their behavior when their professor is a highly acclaimed behavior analyst? The answers are likely yes. A dangerous situation arises when individuals have been operantly and respondently conditioned to follow the directives of an authority figure and that authority figure is giving them directives to engage in unethical behavior. Professionals must assess and identify when these environmental conditions are present and ensure that their decisions and behaviors remain objective and independent of the influences of authority figures as much as possible.
Conformity Bias and the In-Group/Out-Group Phenomenon
In the 1950s, social psychologist Solomon Asch conducted a series of experiments looking to see whether people would conform to the behavior of peers even when it went against their own observations. He found that when faced with a dilemma where beliefs conflicted with the actions of others, many people chose to conform with others instead of behaving in ways consistent with their own beliefs. In behavioral ethics terminology, this finding is explained through the conformity bias. According to the conformity bias, people will conform their behavior to the behavior of others in order to identify with a social group, to feel a sense of belonging, and to avoid social exclusion (Moore, 2009). Consistent with the principles of social learning theory, people conform to group behavior through observing others, becoming aware of social norms, and imitating those norms (Cialdini et al., 2019). The conformity bias has been shown to affect behavior in a variety of situations, including behaviors affecting health (e.g., smoking, exercising), behaviors affecting safety (e.g., wearing seat belts, distracted driving), and behaviors affecting personal finances (e.g., enrolling in retirement plans; Prentice, 2014). It has also accounted for changes in ethical behavior in organizational groups (Cialdini et al., 2019) and criminal populations (Moore, 2009).
A related construct is the in-group/out-group phenomenon. Not only do people conform to the behavior of peers whom they consider to be their in-group, they actually judge the behavior of out-group members as more unethical than the behavior of in-group members (Prentice, 2014). People adopt the norms of their in-group and give less consideration to how their behavior is affecting out-group members, even when those behaviors might cause harm (Treviño et al., 2006).
The behavioral principles underlying these constructs seem similar to those underlying the obedience-to-authority phenomenon. These include, but might not be limited to, contingencies of reinforcement, avoidance of punishment, respondent generalization, and the establishment of stimulus control. It is logical to assume that behaviors that conform with peer group norms will be reinforced while behaviors inconsistent with group norms will be punished or at least placed on extinction. If the group norm is aligned with ethical practice, then the professionals within that group will be more inclined to engage in ethical practice. Unfortunately, the opposite is also true. Consider the following situation. You are asked to join a clinical team meeting where you do not personally or professionally know any of the team members. You realize that you will be the only behavior analyst on the clinical team, which consists of two occupational therapists, a psychiatrist, a psychiatric nurse practitioner, and a certified psychoanalyst. How do you prepare for your initial meeting? Would you prepare differently if you realized the team consisted of three behavior analysts, a special education teacher, and a behavioral speech therapist? Remember, you do not know anyone on the team prior to the initial meeting. So, why would just knowing the professions of the team members proactively change your behavior? It is likely that reinforcement history comes into play. Over time, as reinforcement is received for conformity, conformity to in-group behavior turns into rule-governed behavior. From a young age, people are conditioned to follow the norms of groups (Moore, 2009). We therefore may conform our behavior to their actions without an individual assessment of the variables associated with the situation.
Overconfidence Bias
According to the overconfidence bias, people assume they are more competent in a given area than they actually are (Bowman, 2018). With regard to ethical behavior, people frequently overestimate their moral capacity and automatically think their behaviors are ethical when in fact they may not be (Bowman, 2018). Many people believe that their actions are free from bias and that their ethical behavior is not affected by situational factors (Duska, 2017). In reality, ethical behavior is more a product of environmental variables than of internal traits, moral development, or beliefs (Bowman, 2018; Zhong et al., 2009). Knowing what is ethically correct does not translate to engagement in ethical behavior (O’Brien et al., 2017). Acting based on knowledge of what is considered correct may lead to harmful decisions if done in the absence of an objective assessment of the environmental situation (Prentice, 2014). Although most clinical professions have developed ethical guidelines for practitioners to follow, having knowledge of the guidelines (the deontological theory of ethical behavior) is not a good predictor of behavior. Like all operant behavior, ethical behavior is shaped through stimuli in the environment (Cox, 2020). It is often multiply controlled, and only an analysis of the antecedents and consequences within the environment can lead to a functional explanation of why particular behaviors were evoked (Cox, 2020). Despite practitioners’ confidence regarding their knowledge of ethical guidelines and compliance codes, situational factors may prompt and reinforce unethical behavior.
Additional Considerations
Environmental setting events can also result in an increase in unethical behavior. Research shows that people are more likely to engage in unethical behavior when pressured for time, when they feel their behavior is not being observed (a lack of transparency), when they are tired, and when they do not have necessary resources (Drumwright et al., 2015). Once again, the effects of these factors can all be explained through behavioral principles. Being pressured for time and being tired are both setting events that will likely change behavior in some way. In both situations, it is conceivable that a person may not take the time to conduct a proper assessment of the situation, may act quickly without thinking through the consequences of their actions, may be vulnerable to faulty stimulus control, and may not take the time to consult guidelines or seek consultation before acting. When an individual does not have the necessary resources to engage in an ethical action, the environment is preventing that action. It is logical that an individual cannot engage in a behavior if it is environmentally impossible to do so. Finally, the phenomenon of observer reactivity can account for the transparency effect. People will temporarily alter their behavior when being observed, often in a therapeutic direction (Baum et al., 1979). Therefore, when their actions are being observed, people are more likely to engage in ethical behavior than when they are not being observed.
An additional factor that affects ethical behavior is the principle of extinction. Individuals are less likely to engage in ethical behavior if, in a particular situation, there is a history of extinction associated with ethical responses (Drumwright et al., 2015). If individuals feel their behavior will not make an impact on a situation, they may not act in that situation (Drumwright et al., 2015). This is an issue that may come into play when the unethical behavior of another person should be reported to an authority figure. Although reporting the unethical behavior of colleagues is something behavior analysts are compelled to do under specific circumstances, some may not do so if they feel their action will be met with extinction: “Why bother reporting? Nothing will change anyway.” As with all unethical behavior, this is not a reflection of the character or knowledge of the individual who chose not to report; it is reflective of a setting in which the situational variables promoted unethical behavior through a principle of behavior analysis—in this case, the principle of extinction.
An additional consideration is the effects of culture on ethical decision making and behavior. On a deontological level, the cultural background of clients may, in some cases, make adherence to specific professional codes of conduct difficult when they conflict with cultural norms (Cox, 2020). On a broader level, ethical behavior is influenced by cultural variables that affect a variety of personal, family, and societal issues (Rosenberg & Schwartz, 2019). Similar to the behavioral ethics construct of framing, each individual enters a situation with a personal view that determines how the context is perceived. Personal views are influenced by the values and beliefs within a culture (Rosenberg & Schwartz, 2019) and by unique histories of antecedents and consequences (Cox, 2020).
Ethical Behavior of the Clinical Team
Given the research in behavioral ethics, the behavior of professionals on clinical teams may be more influenced by situational factors than by whether the professionals on the teams are certified or licensed, knowledgeable of their professional codes of ethics, competent in their work, or ethical in past decisions. As has been discussed, in an applied behavior-analytic framework, ethical behavior is operant behavior (Cox, 2020), not the result of internal traits. Therefore, clinical teams must actively promote and shape ethical behavior in the professionals on the team. Through an assessment of the environmental variables associated with treatment decisions and actions, the team must identify stimuli that may be increasing the probability of unethical behavior and replace them with stimuli that are more likely to evoke ethical behavior. Awareness of the behavioral principles underlying the constructs of behavioral ethics is important to this process.
When a clinical team comes to a decision on how to respond to a situation (be it a treatment recommendation, administrative decision, or an ethical dilemma), the team should conduct a post hoc assessment of the reasons behind making the decision before proceeding with an action. This provides a check on decisions that might have been influenced by environmental variables leading to unethical responses that may later be rationalized (Bowman, 2018). An objective assessment of the behavioral contingencies underlying a clinical team’s decision may proactively prevent an unethical response from occurring. Team members, across professions, may fall into the trap of emitting unethical behavior if they rely only on knowledge of their professional codes of ethics (deontology) and the belief that they are ethical people (overconfidence bias). An emphasis needs to be placed on basing decisions on objective data, considering alternate arguments to decisions, and assessing consequences through differing stakeholder perspectives (Treviño et al., 2006).
Consistent with this concept of objective checks and balances, Table 1 presents 12 questions for clinical teams to consider when assessing the variables that could have influenced a team decision. The questions are designed to identify potential sources of influence, based on the concepts of behavioral ethics and consistent with behavior-analytic principles as described here. The questions are best reviewed after the team has come to a decision but prior to enacting any responses based on that decision. Although the questions are designed for a review of decisions made by clinical teams, the content of most of the questions assesses the variables that may have influenced the behavior of individual team members. The questions, therefore, can also be used by individual professionals to assess their own decisions and actions in the absence of a clinical team. In combination with knowledge of professional codes of ethics and guidelines for empirically supported treatment, a proactive assessment of this sort has the potential for identifying unethical decisions before any behavior is emitted. This might mitigate the potential for undesired and potentially dangerous consequences to clients. An additional benefit of including an ethical review such as this in the team’s decision-making processes is the potential shaping of future ethical responses in the clinical team over time.
Table 1.
Questions for clinical teams to consider prior to enacting treatment decisions
1. Will any members of the clinical team, directly or indirectly, benefit in some way from this decision?
2. Will any members of the clinical team, directly or indirectly, contact punishment in some way because of this decision?
3. Does this decision or action of the clinical team seem riskier with regard to potential violations of ethical codes than past decisions or actions of the team or individuals on the team?
4. Is there the potential that the decision and action of members of the clinical team were influenced by the way the situation was described and conceptualized?
5. If the situation were described in a different way, would the decision and action of the clinical team change?
6. Are there actual or perceived authority figures (either internal or external to the clinical team) whose presence, opinions, or directives may have influenced the decision and action of members of the team? If so, how could those influences of the authority figure be mitigated without undue consequence to the team members?
7. Would any members of the clinical team have engaged in a different decision or action if not for the opinions, directives, decisions, and behaviors of other team members?
8. Did any team members seem to come to a decision or action solely because it was consistent with the behavior of professionals in their own discipline? If so, how could this influence be mitigated without undue consequence to the team members?
9. Was the decision and action of the interdisciplinary team made with an objective assessment of situational factors and contingencies, or did members of the team rely primarily on profession-specific ethical codes of practice?
10. Were team members reluctant to make a decision or engage in an action they perceived as more ethical because past experiences indicated that the more ethical behavior did not produce desired or expected outcomes?
11. Would members of the clinical team have engaged in a different decision or action if they were anonymous or if they could not be held accountable for the outcome?
12. Did any situational factors, such as time pressures and lack of resources, potentially exert an influence over clinical team discussions, decisions, and actions? How would the behavior of clinical team members have been different if the situational factors were not present? |
Note. These questions are to be used in combination with adherence to profession-specific codes of ethics and empirically supported treatment guidelines
Conclusion
Why is it that clinical professionals will still sometimes engage in unethical behavior despite having knowledge of the ethical codes that guide the practice of their professions? An explanation is provided by the constructs of behavioral ethics, which may be explained by the foundational principles of behavior analysis. Although sometimes not thought of as operant behavior, ethical and unethical responses are both evoked and maintained by the environment. In the current article, I provided some examples of how the constructs discussed in the field of behavioral ethics can be translated into basic behavior-analytic principles. It is important to note that the behavioral explanations presented here may not be the only ways to translate behavioral ethics into behavior-analytic principles. Other researchers and practitioners may certainly find valid, alternate explanations within our behavioral literature; however, the primary importance of this article is that the occurrence of ethical and unethical behavior can be explained within a behavior-analytic framework. Conceptualizing ethical behavior in this way shifts the focus away from blaming a breach of ethics solely on the person and toward identifying the variables within the environment that evoked the breach. Therefore, by identifying environmental variables that may increase the likelihood of unethical behavior, prior to making a decision or engaging in a response, professionals can have greater confidence in the ethicality of their clinical behavior. This will, it is hoped, avoid dangerous situations for clients, as well as the undue labeling of professionals as inherently unethical people when breaches of ethical codes do occur.
Availability of data and materials
The current article does not include the collection of original data.
Code availability
Not applicable.
Author contributions
The idea for the current work, completion of the literature review, writing of the manuscript, and any revisions were and will be the work of the author.
Declarations
Conflicts of interest
The author has no conflicts of interest or competing interests to disclose with regard to the current article.
References
- American Occupational Therapy Association. (2015). Occupational therapy code of ethics. https://www.aota.org/About-Occupational-Therapy/Ethics.aspx
- American Physical Therapy Association. (2019). Code of ethics for the physical therapist. https://www.apta.org/uploadedFiles/APTAorg/About_Us/Policies/Ethics/CodeofEthics.pdf
- American Psychological Association. (2017). Ethical principles of psychologists and code of conduct. https://www.apa.org/ethics/code/
- American Speech-Language-Hearing Association. (2016). Code of ethics. https://www.asha.org/Code-of-Ethics/
- Ashforth, B. E., & Anand, V. (2003). The normalization of corruption in organizations. Research in Organizational Behavior, 25, 1–52. https://doi.org/10.1016/S0191-3085(03)25001-2
- Baum, C. G., Forehand, R., & Zegiob, L. E. (1979). A review of observer reactivity in adult-child interactions. Journal of Psychopathology and Behavioral Assessment, 1(2), 167–178. https://doi.org/10.1007/BF01322022
- Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. Littleton, CO: Author.
- Bowman, J. S. (2018). Thinking about thinking: Beyond decision-making rationalism and the emergence of behavioral ethics. Public Integrity, 20, 89–105. https://doi.org/10.1080/10999922.2017.1410461
- Cameron, J. S., & Miller, D. T. (2009). Ethical standards in gain versus loss frames. In D. De Cremer (Ed.), Psychological perspectives on ethical behavior and decision making (pp. 91–106). Information Age Publishing.
- Chugh, D., & Kern, M. C. (2016). A dynamic and cyclical model of bounded ethicality. Research in Organizational Behavior, 36, 85–100. https://doi.org/10.1016/j.riob.2016.07.002
- Cialdini, R., Li, Y. J., Samper, A., & Wellman, N. (2019). How bad apples promote bad barrels: Unethical leader behavior and the selective attrition effect. Journal of Business Ethics. https://doi.org/10.1007/s10551-019-04252-2
- Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). Applied behavior analysis (3rd ed.). Pearson Education.
- Cox, D. J. (2020). Descriptive and normative ethical behavior appear to be functionally distinct. Journal of Applied Behavior Analysis. https://doi.org/10.1002/jaba.761
- Dana, J., & Loewenstein, G. (2003). A social science perspective on gifts to physicians from industry. Journal of the American Medical Association, 290(2), 252–255. https://doi.org/10.1001/jama.290.2.252
- De Cremer, D. (2009). Psychology and ethics: What it takes to feel ethical when being unethical. In D. De Cremer (Ed.), Psychological perspectives on ethical behavior and decision making (pp. 3–13). Information Age Publishing.
- De Cremer, D., Mayer, D. M., & Schminke, M. (2010). On understanding ethical behavior and decision making: A behavioral ethics approach. Business Ethics Quarterly, 20(1), 1–6. https://doi.org/10.5840/beq20102012
- Drumwright, M., Prentice, R., & Biasucci, C. (2015). Behavioral ethics and teaching ethical decision making. Decision Sciences Journal of Innovative Education, 13(3), 431–458. https://doi.org/10.1111/dsji.12071
- Duska, R. F. (2017). Unethical behavioral finance: Why good people do bad things. Journal of Financial Service Professionals, 71(1), 25–28.
- Feldman, Y., Gauthier, R., & Schuler, T. (2013). Curbing misconduct in the pharmaceutical industry: Insights from behavioral ethics and the behavioral approach to law. Journal of Law, Medicine and Ethics, 41(3), 620–628. https://doi.org/10.1111/jlme.12071
- James, H. S., Jr. (2000). Reinforcing ethical decision making through organizational structure. Journal of Business Ethics, 28(1), 43–58. https://doi.org/10.1023/A:1006261412704
- Loewenstein, G., Issacharoff, S., Camerer, C., & Babcock, L. (1993). Self-serving assessments of fairness and pretrial bargaining. Journal of Legal Studies, 22(1), 135–159. https://doi.org/10.1086/468160
- Michael, J. (2004). Concepts and principles of behavior analysis (Rev. ed.). Society for the Advancement of Behavior Analysis.
- Milgram, S. (1965). Some conditions of obedience and disobedience to authority. Human Relations, 18(1), 57–76. https://doi.org/10.1177/001872676501800105
- Moore, C. (2009). Psychological processes in organizational corruption. In D. De Cremer (Ed.), Psychological perspectives on ethical behavior and decision making (pp. 35–71). Information Age Publishing.
- O’Brien, K., Wittmer, D., & Ebrahimi, B. P. (2017). Behavioral ethics in practice: Integrating service learning into a graduate business ethics course. Journal of Management Education, 41(4), 599–616. https://doi.org/10.1177/1052562917702495
- Prentice, R. (2014). Teaching behavioral ethics. Journal of Legal Studies Education, 31(2), 325–365. https://doi.org/10.1111/jlse.12018
- Reynolds, S. J., & Ceranic, T. L. (2009). On the causes and conditions of moral behavior: Why is this all we know? In D. De Cremer (Ed.), Psychological perspectives on ethical behavior and decision making (pp. 17–33). Information Age Publishing.
- Rosenberg, N. E., & Schwartz, I. S. (2019). Guidance or compliance: What makes an ethical behavior analyst? Behavior Analysis in Practice, 12(2), 473–482. https://doi.org/10.1007/s40617-018-00287-5
- Schwartz, M. S. (2017). Teaching behavioral ethics: Overcoming the key impediments to ethical behavior. Journal of Management Education, 41(4), 497–513. https://doi.org/10.1177/1052562917701501
- Sheridan, C. L., & King, R. G. (1972). Obedience to authority with an authentic victim. Proceedings of the Annual Convention of the American Psychological Association, 7(1), 165–166.
- Skinner, B. F. (1953). Science and human behavior. Free Press.
- Tenbrunsel, A. E., & Messick, D. M. (2004). Ethical fading: The role of self-deception in unethical behavior. Social Justice Research, 17(2), 223–236. https://doi.org/10.1023/B:SORE.0000027411.35832.53
- Treviño, L. K., Weaver, G. R., & Reynolds, S. J. (2006). Behavioral ethics in organizations: A review. Journal of Management, 32(6), 951–990. https://doi.org/10.1177/0149206306294258
- Wazana, A. (2000). Physicians and the pharmaceutical industry: Is a gift ever just a gift? Journal of the American Medical Association, 283(3), 373–380. https://doi.org/10.1001/jama.283.3.373
- Zhong, C., Liljenquist, K., & Cain, D. M. (2009). Moral self-regulation: Licensing and compensation. In D. De Cremer (Ed.), Psychological perspectives on ethical behavior and decision making (pp. 75–89). Information Age Publishing.