Abstract
Value-based care has incrementally increased its footprint across health care over the past two decades. Several organizations in ABA have begun experimenting with components of value-based care specific to the delivery of ABA services, and this trend seems likely to continue. For those new to value-based care, this article reviews its main conceptual components as well as common myths and misconceptions. Though conceptually straightforward, implementing value-based care in ABA will require significant advancements in data collection, analytics, sharing, and transparency that follow from broad, field-wide collaboration. Further, many ethical questions will likely arise as ABA providers begin assessing their clinical and business operations through a value-based care lens. Though value-based care will likely roll out slowly and incrementally over many years, ABA providers interested in participating in or leading these conversations will likely benefit from focusing collaborative efforts on: normalizing data sharing and self-analysis; defining and developing quality and cost measures; identifying patient risk variables; addressing challenges at the intersection of public health ethics and clinical ethics; and addressing challenges at the intersection of AI ethics and clinical ethics. Most would probably agree that optimizing patient outcomes is the goal of ABA services. However, doing so in an objective, measurable, and consistent manner that can be validated by third parties will require overcoming significant challenges.
Keywords: Quality measurement, Value-based care, Ethics, Quantitative analysis
By many metrics, the U.S. health-care system is trending in the wrong direction with respect to cost and quality. Costs per person have increased annually from $146 in 1960 to $12,914 in 2021, comprising 5.00% and 18.25% of the gross domestic product (GDP) in those years, respectively (Centers for Medicare & Medicaid Services [CMS], 2023a). Surprisingly, these rising costs are increasingly paid by people out of their own pockets rather than through the mechanism designed to defray them at the individual level: health insurance (CMS, 2023a). Many factors likely contribute to this, such as increasing insurance costs leading people to be underinsured (Commonwealth Fund, 2023) and an increasingly concentrated health insurance market reducing competition (U.S. Government Accountability Office [GAO], 2022). All the above contribute to the startling reality that 51% of people struggle to afford their health care and an estimated one in three Americans are financially burdened by medical debt (Collins et al., 2023).
Increased overall cost might be tolerable if population health were improving at a similar rate. But, exasperatingly, overall population health is not improving in parallel with the increased costs and is arguably getting worse in many areas. For example, 18.8 million people live in areas with limited access to affordable and nutritious food (i.e., a food desert; Morton & Blanchard, 2007; Story et al., 2008; U.S. Department of Agriculture [USDA], 2022), and many live in environments that make it difficult to be physically active, with low-income individuals affected the most (i.e., exercise or physical activity deserts; Lane & Davis, 2022; Pate et al., 2021). Rates of chronic diseases have increased from 57% of all global deaths in 1990 to 72% of all global deaths in 2016 (Murray & Lopez, 1997; Naghavi et al., 2017). Estimates suggest that anywhere between 16% and 86% of people are medically undertreated, overtreated, or mistreated compared to research-supported standards of care, depending on the patient's condition (e.g., Howard et al., 2021; Schuster et al., 2001; Stasinopoulos et al., 2021). And errors in medical decision making are a major cause of death in the United States (e.g., Leape, 2000; Makary & Daniel, 2016).
The above trends are not novel findings. The challenges of increasing cost with incommensurate improvement in care quality have been known for decades (e.g., Porter & Teisberg, 2006). Though many strategies and solutions have been offered, many popular current approaches fall under the umbrella label of “value-based care” (VBC). At its core, quantifying the value of health care involves a simple comparison. In words: the value of care received is equal to the quality of the care relative to its cost (U.S. Department of Health & Human Services [USDHHS], 2007; Plunkett & Dale, 1988). In mathematical form:
Value = Quality / Cost    (1)
Here, quality can be defined and quantified via one or more “quality measures” (more below). And cost can be defined and quantified via all medical service(s) received for a patient’s particular medical condition over the full cycle of care (Porter & Teisberg, 2006).
Using the definition from Equation 1, the overall value of care can be improved by: increasing health-care quality while maintaining cost; reducing cost while maintaining quality; or increasing quality while reducing cost. In turn, by focusing system feedback and improvements on value, one estimate suggests that the value of care patients receive may improve from 3% in more limited models to as high as 20% with providers who regularly contact their patients, such as in applied behavior analysis (ABA; Abou-Atme et al., 2022). As a result, investment is growing faster for VBC initiatives than for methods to increase access to care, with ~100 million people in 2022 (30% of the population) receiving some aspect of their care as part of a VBC model and an estimated growth to 130–160 million people by 2027 (38%–46% of the population; Abou-Atme et al., 2022).
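Equation 1 can be made concrete with a short numeric sketch. The quality scores and dollar amounts below are hypothetical; the point is only that value improves along any of the three paths just described:

```python
def care_value(quality, cost):
    """Value of care per Eq. 1: quality relative to cost."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return quality / cost

# Hypothetical baseline: quality score of 80 at a cost of $40,000.
baseline = care_value(80, 40_000)

# The three improvement paths described in the text.
higher_quality = care_value(90, 40_000)  # raise quality, hold cost
lower_cost = care_value(80, 32_000)      # cut cost, hold quality
both = care_value(90, 32_000)            # improve both at once

assert higher_quality > baseline
assert lower_cost > baseline
assert both > max(higher_quality, lower_cost)
```

Any movement toward the lower right quadrant of Figure 1 shows up as a larger value under this ratio.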
From a reimbursement perspective, it is unclear how prevalent VBC is in the U.S. health-care system.1 For example, in 2022, VBC-related revenue comprised only a reported 6.74% of primary care revenue, 5.54% of surgical revenue, and 14.74% of nonsurgical specialties revenue (LaPointe, 2022). But relying solely on revenue might underestimate the reach of VBC reimbursement. For example, in 2022, 46% of sampled physicians reported revenue from a value-based payment model (Horstman & Lewis, 2023); 49% of sampled physicians reported they participate in some sort of value-based payment (Bendix, 2022); and, in 2020, 60% of health-care payments included some kind of value component up from 11% in 2012 and 53% in 2017 (Bendix, 2022). Further clouding the ability to make precise estimates around VBC impact are data suggesting that more than 70% of value-based payment arrangements may not be publicly disclosed (Mahendraratnam et al., 2019). Considering the term “value-based care” was, it can be argued, coined in 2006 (Larsson et al., 2023; Porter & Teisberg, 2006), all data points indicate an increasing trend in health-care systems trying VBC as an alternative to fee-for-service models, or at least placing emphasis on health-care value.
VBC systems have begun to be introduced to ABA. For example, Magellan Healthcare launched a pilot program with Invo Healthcare in 2021 to define standards of care for children with ASD receiving ABA (Magellan Health, 2021). On the outcome front, Magellan also partnered with Kyo Autism Therapy to develop outcome standards specific to VBC for ABA (Larson, 2021); Centene and CIGNA (or their state affiliates) contracted with Behavioral Health Center of Excellence (BHCOE) in 2022 to develop quality measures aimed at delivering high-quality, cost-effective services (Bloomberg, 2022; Minemeyer, 2022; Peach State Health Plan, 2022); and BHCOE and the International Consortium for Health Outcomes Measurement (ICHOM) each recently and independently published outcome measure portfolios positioned as outcome measure sets payers can adopt to speak to care quality (BHCOE, 2021; ICHOM, n.d.). These are examples wherein the stakeholders chose to be vocal in the media about their work. Leaning into the statistic above that more than 70% of VBC arrangements may not be publicly disclosed (Mahendraratnam et al., 2019), the number of current experiments around VBC in ABA might be higher.
Measuring, monitoring, and improving the quality of ABA is not new. Authors have written about improving the quality of services consistently for decades (e.g., Hayes et al., 1980; Hopkins, 1995; Librera et al., 2004; Reichow, 2011; Silbaugh & El Fattal, 2021; Silbaugh, 2023) and arguably it was a major driver in the creation of the Behavior Analyst Certification Board (BACB). Related to this, others have written about using behavioral principles to improve health-care quality in other areas of medicine and behavioral health (e.g., Kelley & Gravina, 2018; Riley & Freeman, 2019). However, past work has often focused on the contingencies surrounding the behavior of the individual practitioner. This makes sense because behavior analysts are often experts at changing the behavior of individuals. But, value-based care contingencies are typically manipulated at the group level (e.g., ABA organizations). What might be new to some behavior analysts is measuring, monitoring, and creating contingencies to improve the quality of ABA service delivery for a group of people. What also seems new for ABA is the heavy role that payers are beginning to assert in managing those contingencies (see citations at the beginning of this paragraph).
Improving the overall value of ABA is obviously multifaceted and difficult. As demonstrated in many other areas of health care, measuring, monitoring, and improving the quality and cost of service delivery requires the coordinated behavior and agreement among many people spanning many organizations and many stakeholders. Given formalized quality measurement is rapidly increasing in ABA, the purpose of the current article is threefold. First, to outline the conceptual components to VBC for those new to this body of literature. Second, to dispel common myths and misconceptions of VBC. Lastly, to review the analytical challenges that are likely to arise as independent ABA agencies seek to simultaneously optimize both individual-level and population-level behavioral health.
Conceptual Components of Value-Based Care
As described above, a common definition of value-based care is that care value is determined by the quality of care received relative to its cost (Eq. 1; Porter & Teisberg, 2006). Another way to think about this is shown in Figure 1 where the quality of services ranges from low-to-high on the x-axis and the cost of services ranges from low-to-high on the y-axis. If a service is of life-changing quality and is cheap or free (lower right quadrant), it has a lot of value. And, if a service is of very low quality and no one can afford it, it has very low value (upper left quadrant). The idea behind value-based care is to identify ways to move what providers do toward the lower right corner via decreasing the cost of existing approaches, improving the quality of services provided, or doing both. Under these assumptions, identifying and improving the value of care requires measuring its cost and quality (Porter & Teisberg, 2006).
Fig. 1. Care Value as an Interaction between Quality and Cost
Measuring Cost
Measuring cost is not as straightforward as it may seem. At the most basic level, the two primary variables used to measure the total cost of care are utilization and price (e.g., American Hospital Association [AHA], 2023). Utilization is typically defined as all services delivered for a patient's particular medical condition over the full cycle of care, and typically for a specified period of coverage such as a health-plan benefit year (Porter & Teisberg, 2006). Price is typically defined from a payer's perspective in how much they reimburse for a service, not from the provider's perspective in how much it costs to deliver a service. It is important to note that services are defined around a patient's condition—not around specific types of providers or specific services (Porter & Teisberg, 2006). For example, calculating cost for a cardiac bypass surgery would include the entire surgery team: cardiac surgeon, anesthesiologist, perfusionist, operating room nurses and technicians, intensivist, intensive care nurses, cardiac care nurses, physical or occupational therapists, rehabilitation nurses, exercise physiologists, dieticians and nutritionists, cardiologist, primary care physician, and maybe more. Identifying and aggregating the individual procedures and their prices from all providers over the full cycle of care yields the cost of a condition during a specified period of coverage.
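The aggregation just described can be sketched as follows. The service names, utilization counts, and prices are hypothetical; the point is that cost attaches to the patient's condition across all services and providers, not to any single service:

```python
# Hypothetical claims for one patient's full cycle of care for a single
# condition during one benefit year: (service, utilization, payer price).
claims = [
    ("initial assessment",      1,  450.00),
    ("treatment session",      96,  120.00),
    ("caregiver training",     12,   95.00),
    ("care coordination call",  6,   40.00),
]

# Total cost of the condition = sum of (utilization x price) across
# every service from every provider over the full cycle of care.
total_cost = sum(units * price for _, units, price in claims)
print(f"Cost of care for this condition: ${total_cost:,.2f}")
```

In practice the claims list would span every provider type involved in the condition, exactly as in the cardiac bypass example above.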
Cost can also be separated into measures of efficiency (e.g., Hussey et al., 2009; Ryan et al., 2017). Common measures of efficiency in health economics include technical efficiency (a.k.a. operational efficiency), economic efficiency (a.k.a. productive efficiency), allocative efficiency, and Pareto efficiency (Guinness & Wiseman, 2011). Here, technical efficiency means providing a given output with the least inputs, economic efficiency means producing a given output at the least cost, allocative efficiency means matching the pattern of output to the pattern of demand, and Pareto efficiency is a point in the system wherein no one can be better off without someone being made worse off (Guinness & Wiseman, 2011). In ABA, for example, this might mean producing the same patient outcomes while reducing duration in treatment, reducing hours per week of ABA, increasing staff effectiveness per hour, avoiding overtreatment, avoiding rework and clinical errors, reducing administrative costs, matching clinicians to patients they are best at working with, and balancing resources to maximize patient outcomes across all patients in one's care.
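The first two efficiency concepts can be sketched with hypothetical numbers. Two providers produce the same outcome gain, but one uses fewer staff hours (greater technical efficiency) and fewer dollars (greater economic efficiency):

```python
# Hypothetical providers: same outcome gain, different inputs and costs.
# (outcome gain on an assessment, total treatment hours, total cost in $)
providers = {
    "Provider A": (12.0, 1_200, 96_000),
    "Provider B": (12.0, 1_000, 90_000),
}

for name, (gain, hours, cost) in providers.items():
    technical = gain / hours        # output per unit of input (per hour)
    economic = gain / cost * 1_000  # output per $1,000 of cost
    print(f"{name}: {technical:.4f} gain/hour, {economic:.3f} gain/$1k")
```

Provider B produces the same gain with fewer hours and fewer dollars, moving toward the lower right quadrant of Figure 1.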
Opportunity cost is another important cost concept in health economics. Opportunity cost refers to the overall value of the next best alternative (Guiness & Wiseman, 2011; Riera-Prunera, 2022; Sampson et al., 2023). As an equation:
Opportunity Cost = Value of the Next Best Alternative    (2)
For example, say you have 20 registered behavior technicians (RBTs) to allocate across two types of patients (e.g., early intensive behavior intervention and school-aged social skills programs). With patients in group A, you can improve their skills by 10% year-over-year. The company is newer to providing services for patients in group B, so the overall improvement in their skills will be much lower. However, by bringing on patients in group B and learning to provide quality services for them, the company estimates their operational efficiencies will allow them to produce a 2.5% increase in skills in the first year, 10% in year 2, and 25% in years 3 and beyond. By these estimates, by the third year, the better option is to take on more patients from group B than from group A. In an efficient system, providers ensure that the opportunity cost of how they currently allocate their skills and resources never exceeds the benefit gained from allocating them in that manner. It is important to note that past research suggests humans rarely consider opportunity costs in their decision making unless those costs are explicitly incorporated into the decision-making process (e.g., Spiller, 2011).
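The year-by-year comparison in the example above can be laid out directly. The improvement rates are those from the text; the opportunity cost of choosing group B is the value of the foregone group A allocation (Eq. 2):

```python
# Estimated year-over-year skill improvement from the example above.
group_a = [0.10, 0.10, 0.10, 0.10]   # stable 10% each year
group_b = [0.025, 0.10, 0.25, 0.25]  # ramps up as the company learns

for year, (a, b) in enumerate(zip(group_a, group_b), start=1):
    # Opportunity cost of allocating RBTs to group B = value of the
    # next best alternative (group A) that was given up (Eq. 2).
    net_gain = b - a
    print(f"Year {year}: choosing group B nets {net_gain:+.1%} vs. group A")
```

By year 3 the net gain of choosing group B turns positive, matching the conclusion in the text.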
Measuring Quality
Measuring care quality might be even more difficult than care cost, as care cost at least comes with receipts and a paper trail. Nevertheless, researchers from the field of quality measurement have identified at least eight different types of value-related measures (Table 1; National Quality Forum [NQF], 2023). Some of these we have already encountered in cost measures via resource use and efficiency. If one provider can generate the same outcomes but with fewer resources or greater efficiency, that would move them toward the lower right quadrant in Figure 1 and would be considered to have greater value. Specific to quality, however, three broad categories are commonly recognized: outcome measures, process measures, and structure measures2.
Table 1.
Measurement Types and Their Definitions (CMS, 2023b)
| Measurement Type | Definition |
|---|---|
| Composite (a.k.a. roll-up) | Summarization of multiple measures into a single value or index. |
| Resource Use | The count of the frequency of a defined health system resource. Some may further add a dollar amount to each unit. |
| Efficiency | The cost of care associated with a specified level of health outcome. |
| Outcome | Measure focusing on the health status of a patient, or a change in health status, resulting from health care. Can be a desirable outcome or an adverse outcome. |
| Intermediate Clinical Outcome | The change produced by an intervention that has a known relation to a long-term outcome that is the primary goal. |
| Outcome: PRO-PM | Patient-reported outcome-based performance measure. Measure of the healthcare outcome obtained directly from the patient. |
| Process | Steps that should be followed to provide good care. Process measures should have a scientific basis relating the process which, when well executed, increases the probability of a good performance on an outcome measure. |
| Structure | Assesses the features of a healthcare organization or clinician relevant to its capacity to provide good healthcare. |
Outcome measures are the ideal target as they represent the reason people receive health care in the first place. Outcome measures answer the question, "How does the patient's health status improve [or avoid getting worse] because they received a particular treatment or intervention?" Sometimes the desired outcome takes a long time to be realized (e.g., behavior change). In these situations, it makes sense to track intermediate changes along the way from baseline to the desired outcome. These are termed intermediate clinical outcomes (NQF, 2023). Outcome measures and intermediate clinical outcomes are often defined by the health-care provider based on the physiological or behavioral change expected vis-à-vis a scientific understanding of how the treatment or intervention works. But, as should be familiar to many behavior analysts, the fact that something changed in an expected direction does not mean it resulted in a socially significant change for the patient, nor that the cost was worth the amount of change observed. Patient-reported outcome measures are aimed at this kind of social validity and involve a variety of methods for measuring the patient's perception of the outcome of services received. In many situations, multiple outcome measures from all three areas might be used together to gain a holistic view of intervention effectiveness (e.g., for a list of outcome measures, visit NQF, 2023, or National Committee for Quality Assurance [NCQA], 2023).
Process measures are a second type of quality measure. As will be discussed in detail below, deriving and measuring outcomes is not always a straightforward process. Further, few things in life can be perfectly predicted. It is possible to do everything “by the book” but for the outcome to be less than ideal because of factors unknown beforehand. Process measures are perfect for these kinds of situations. Process measures are typically related to field-wide agreed upon standards of care where the existing research literature suggests that following those procedures will lead to a better outcome for the patient be it an outcome measure, intermediate clinical outcome, or patient-reported outcome (Porter & Teisberg, 2006). Here, the idea is that a provider provides quality care to the extent that they can demonstrate they follow recognized best practices. Though it seems like process measures should simply be adhered to and measuring them unnecessary, some estimates suggest that poor process quality accounts for as high as 25%–30% of U.S. health-care costs (Shrank et al., 2019; Milstein, 2004). Therefore, contingencies around observing, measuring, monitoring, and improving processes seem to be underutilized (for a list of existing process measures in health care, see NQF, 2023).
Structure measures are the final type of quality measure often discussed in the quality measurement literature. Structure measures assess the features of a health-care organization, or the provider, that are relevant to their capacity to provide good care. These could involve anything from having the right types of surgery units, the right type of expertly trained staff, the materials and resources needed to actually implement best practice standards, the ratio of providers to patients, the use of an electronic health record system, and so on (for a list of structure measures, see NQF, 2023).
An example may help tie all these together. Past researchers have found that delivering aspirin within 10 min of someone entering the ER with chest pains leads to a high probability of 1-month survival (outcome measure). All things considered, this is what is most important. If people come to the ER with chest pain, we want them to survive and should measure this. A relevant structure measure might then be whether the hospital has a large enough stock of aspirin so that anyone entering the ER with chest pain could receive it. Based on historical data, the hospital can monitor how much they have needed in the past, ensure that amount plus a cushion is always in stock, and increase or decrease it relative to seasonal trends. The specific structure measure might then be the amount of aspirin relative to predicted need. A corresponding process measure might relate to the systems needed to identify someone in need of aspirin and the latency from ER entry to aspirin delivery. Given the range and complexity of services delivered in an ER, many competing contingencies could affect performance on any of these three measures. So, each ER likely has figured out a way to do the above based on their unique setting. Their approach to structure and process can be quantified in terms of number of resources used and the corresponding cost to implement their current system of aspirin delivery. This total package—survival rate, ratio of stocked to predicted aspirin, proportion of patients correctly identified as requiring aspirin, latency to aspirin delivery, amount and cost of associated resources—is what is combined to make claims about the value of care a patient receives when they enter that ER with chest pain: quality performance divided by cost.
As you might imagine, the above simple situation already involves quite a bit of data collection, tracking, and monitoring for a single type of care delivered. Many health-care facilities provide many types of healthcare services. Collecting, tracking, and monitoring quality and cost across all possible nuanced services one delivers can add a tremendous amount of work; and it would be difficult for patients or insurance companies to compare providers if there were quality and cost measures associated with every service delivered. Thus, a final type of measure is the composite measure (a.k.a. roll-up measure). As the name implies, composite measures combine multiple different metrics into a single value or index to make comparisons simpler for all stakeholders.
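A composite measure can be sketched as a weighted average of normalized component measures drawn from the ER aspirin example. The component scores and weights below are hypothetical; in practice, stakeholders must agree on both the components and their weights:

```python
# Hypothetical component measures for the ER aspirin example, each
# normalized to 0-1 (higher is better), with illustrative weights.
components = {
    "one_month_survival_rate":       (0.94, 0.40),  # (score, weight)
    "correct_identification_rate":   (0.88, 0.25),
    "aspirin_within_10_min_rate":    (0.81, 0.25),
    "stock_to_predicted_need_ratio": (1.00, 0.10),
}

# Sanity check: weights should sum to 1 so the index stays on a 0-1 scale.
assert abs(sum(w for _, w in components.values()) - 1.0) < 1e-9

composite = sum(score * weight for score, weight in components.values())
print(f"Composite quality index: {composite:.3f}")  # one value to compare
```

The single index makes provider-to-provider comparison simpler, at the cost of hiding which component drives a difference; stakeholders typically drill back into the components when indices diverge.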
Controlling for Differences3
As mentioned in the previous paragraph, a primary use of health-care value measurement and monitoring is to compare providers. Comparisons can occur within the same provider such as whether we obtain better patient outcomes this year compared to last. Or comparisons can occur between providers such as determining whether Company A or B is better at working with severe behavior clients or with early intensive behavioral intervention clients. Somewhere in the previous section or in this paragraph readers may have started to think, “Wait a second, different individuals will learn at different rates and are affected differently by their total set of comorbid physiological, behavioral, and socioenvironmental conditions. It does not seem fair to compare the same measures for individuals we know will likely have different outcomes based on their total case severity.” Because of this fair concern, value measurement systems often adjust, or control for, the total set of patient characteristics when presenting and comparing value-related measures (e.g., Iezzoni, 2012). At least three different strategies exist that ABA providers should likely be familiar with (Table 2).
Table 2.
Common Methods to Control for Patient Characteristics When Comparing Providers on Quality Measures
| Method | Benefits | Drawbacks |
|---|---|---|
| Stratification | 1. Simple. 2. Easy to understand. 3. Assumptions only around what variables to include, not how they should influence outcomes. | 1. Resulting analyses can become complex. 2. May miss interactions among variables. |
| Risk Adjustment | 1. Most commonly used method. 2. Can account for interactions among variables. | 1. Statistical acumen needed to do well. 2. Relative risk must be known to the modeler. 3. May create bias in a dataset where none exists. |
| Data-Driven Clustering | 1. Easy to understand. 2. Assumptions only around what variables to include, not how they should influence outcomes. | 1. Statistical acumen needed to do well. 2. Cluster groupings can be difficult to interpret. |
Controlling for Patient Characteristics
Stratification might be the simplest method for controlling for differences in patient characteristics when comparing value measures. Stratification involves simply identifying a variable or characteristic with a logical relation to the measure, collecting data on it, and then visually displaying the measure with the data separated along that variable or characteristic (or using descriptive statistics if that is your preference). For example, Figure 2 shows an example of how changes in assessment scores can be stratified based on the number of comorbid medical diagnoses. Here, the assumption is that the more medical diagnoses someone has (e.g., seizure disorder, severe intellectual disability) the more challenging it might be to make progress over time. This is captured by the top panel in Figure 2 where a decreasing trend in the change in assessment scores is observed as the number of comorbid diagnoses increases. To compare providers (e.g., company, BCBAs) on this measure (e.g., bottom panel, Figure 2), you would examine how they perform only within a specific stratification. For example, Provider C tends to have the best outcomes for patients with one diagnosis but performs poorly with those with three diagnoses; Provider A performs worst with patients with one diagnosis but performs best (alongside Provider G) with patients with five or more diagnoses; and, Provider J tends to be near the bottom across all stratifications.
Fig. 2.
Example of Stratification Used to Control for Patient Characteristics when Comparing Provider Performance on Value-Related Measures. Note. The top panel shows stratification to examine how changes in assessment scores relate to the number of diagnoses patients have. The bottom panel shows the same data but separated by provider for easier comparison among them
There are two primary benefits to using stratification to control for patient differences when assessing performance on value-related measures. The first is that stratification is relatively simple to do and easy to understand. Identify the variable you want to stratify across based on the past research literature, collect the data, then plot. The second benefit is that there are no assumptions built into the analysis around how the measures should vary across stratifications. The data are simply graphed, and visual analysis tells us whether differences exist along the stratification and suggests potential courses of action (e.g., put patients with 5+ diagnoses on Provider A or G's caseload and do not give them to Provider J).
There are two downsides, however, to using stratification alone. The first is the complexity of the resulting analyses. Figure 2 shows an example of a single characteristic that influences performance on a single measure. Given how little work has been done in this area of ABA, there might be dozens or hundreds of relevant patient characteristics to examine, for each of dozens of measures. Making sense of this complexity can be difficult. The second, and related, downside is that stratification examines variables in isolation and fails to capture interactions among them. For example, recent analyses suggest no difference in six metrics of provider service delivery based on age, gender, number of diagnoses, clinical group, number of programs for individual patients, and the number of behavior plans for individual patients (Sosine & Cox, 2023). But ~77% of the variability in service delivery could be predicted using a combination of 100+ variables spanning patient characteristics and provider characteristics. Using only stratification methods would likely miss those interactions.
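Stratification itself requires no statistics beyond grouping. A minimal sketch with hypothetical records, comparing providers only within a stratum (here, the number of comorbid diagnoses):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (provider, number of comorbid diagnoses, change in score).
records = [
    ("Provider C", 1, 14.0), ("Provider C", 1, 12.0), ("Provider C", 3, 2.0),
    ("Provider A", 1, 6.0), ("Provider A", 5, 9.0), ("Provider A", 5, 8.0),
    ("Provider J", 1, 5.0), ("Provider J", 3, 1.0), ("Provider J", 5, 2.0),
]

# Group outcomes by (stratum, provider) so comparisons stay within a stratum.
strata = defaultdict(list)
for provider, n_dx, change in records:
    strata[(n_dx, provider)].append(change)

for (n_dx, provider), changes in sorted(strata.items()):
    print(f"{n_dx} diagnosis/es | {provider}: mean change {mean(changes):+.1f}")
```

Plotting each stratum separately, as in Figure 2, is then a direct visualization of these grouped means.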
A second approach to controlling for differences in patient characteristics is termed risk adjustment (e.g., Iezzoni, 2012). Risk adjustment is arguably the most common method for controlling for patient differences when analyzing and comparing provider performance across value-related measures. Risk adjustment is a statistical process whereby a patient's underlying health status and spending are controlled for when comparing patient outcomes (HealthCare.gov, n.d.). Risk adjustment has been around for decades and, thus, has book-length treatments outlining what data to collect and how to document your approach (e.g., Poe Bernard, 2020), the benefits and drawbacks of the statistical methods that can be conducted (e.g., Iezzoni, 2012), and how it relates to provider profiling (e.g., Goldfield & Boland, 1996). Often the output of risk-adjustment statistical models is a severity index or rating scale, and similar approaches have recently begun to appear in ABA (Taylor et al., 2023).
As with stratification, there are benefits and drawbacks to using risk adjustment methods. One significant benefit is that risk adjustment is, it can be argued, the status quo for controlling for variability in patient characteristics when comparing providers across value-related measures. Thus, as demonstrated by the book-length treatments referenced above (Goldfield & Boland, 1996; Iezzoni, 2012; Poe Bernard, 2020), there are many resources available to help people conduct the analyses and to speak a language the health-care industry has been speaking for decades.
The downsides to risk adjustment are at least threefold. First, conducting risk adjustment requires statistical abilities that ABA companies may not have. Second, risk adjustment often requires that the relative risk to patient outcomes for each included patient characteristic is known from the research literature or past company-wide data. Past authors have lamented the paucity of reporting on demographics and other patient characteristics that affect intervention outcomes in the published ABA literature (e.g., Jones et al., 2020). Thus, the feasibility of robustly risk-adjusting value-related measures in ABA is currently unknown. Lastly, past researchers have shown that risk-adjustment methods may actually create bias in a dataset where none exists in the raw data (Birzhandi & Cho, 2023; Zink & Rose, 2020, 2021). This highlights that risk adjustment is far from a perfect strategy for handling the complexity of varying patient characteristics when comparing ABA providers.
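One common risk-adjustment summary is an observed-to-expected (O/E) ratio: each patient's expected outcome is computed from their risk profile, and a provider's observed results are divided by the sum of those expectations. The baseline rate and risk weights below are hypothetical stand-ins for values a fitted statistical model would supply:

```python
# Hypothetical baseline probability of meeting an outcome target, plus
# hypothetical adjustments for risk factors. In a real model these
# values are estimated statistically from large datasets.
BASELINE = 0.70
RISK_WEIGHTS = {"seizure_disorder": -0.15, "severe_intellectual_disability": -0.20}

def expected_success(risk_factors):
    """Expected probability of success given a patient's risk profile."""
    p = BASELINE + sum(RISK_WEIGHTS.get(f, 0.0) for f in risk_factors)
    return min(max(p, 0.05), 0.95)  # clamp to a plausible range

# One provider's caseload: (risk factors, whether the target was met).
caseload = [
    (["seizure_disorder"], True),
    (["seizure_disorder", "severe_intellectual_disability"], False),
    ([], True),
    (["severe_intellectual_disability"], True),
]

observed = sum(met for _, met in caseload)
expected = sum(expected_success(factors) for factors, _ in caseload)
print(f"O/E ratio: {observed / expected:.2f}")  # >1.0 beats expectation
```

Here the provider met the target for 3 of 4 patients against a risk-adjusted expectation of about 2.1, so the ratio exceeds 1.0 despite the caseload's severity; the same raw success rate on an easier caseload would score worse.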
With the rise of large-scale data collection and computing power, machine learning methods have also begun to be used to control for patient differences. Stratification, it can be argued, allows one to examine how value-related measures differ based solely on the raw data across categories. Risk adjustment arguably allows one to use statistical assumptions to combine many different characteristics into a single equation. Machine learning methods might be thought of as an intermediate bridge between the two. Here, unsupervised machine learning methods allow one to group patients together based on how similar they are across the many variables the researcher or practitioner wants to include to describe a patient's profile (e.g., Cox & Sosine, 2023). That is, the researcher lets the data reveal the natural groupings rather than making assumptions about what the groupings should be. These methods can simplify stratification analyses that use many patient characteristics while avoiding the assumptions that come with risk adjustment. Given the novelty of these approaches, however, further research is needed on their benefits and drawbacks.
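The clustering idea can be sketched with a bare-bones k-means pass in pure Python (a real analysis would span many more variables and would typically use a library such as scikit-learn; the two-feature patient profiles here are hypothetical):

```python
from statistics import mean

# Hypothetical patient profiles: (number of diagnoses, baseline assessment score).
patients = [(1, 82), (1, 78), (2, 75), (5, 40), (6, 35), (5, 44)]

def kmeans(points, k, iters=10):
    """Naive k-means: seed centroids with the first k points, then
    alternate between assigning points and recomputing centroids."""
    centroids = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to the nearest centroid (squared distance).
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its assigned points.
        centroids = [
            tuple(mean(dim) for dim in zip(*cluster)) if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(patients, k=2)
for center, members in zip(centroids, clusters):
    print(f"Cluster near {tuple(round(v, 1) for v in center)}: {members}")
```

With these toy data the patients separate into a low-diagnoses/high-score cluster and a high-diagnoses/low-score cluster; interpreting what each cluster represents is then up to the analyst, which is the drawback noted in Table 2.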
Controlling for Provider Characteristics
Like patient characteristics, there are certain provider characteristics that may influence value-related measures in ways that are outside the control of the provider (Iezzoni, 2012; Goldfield & Boland, 1996). And, also like patient characteristics, controlling for provider characteristics can be accomplished via stratification, statistical methods, machine learning methods, etc. Two provider characteristics commonly tracked and controlled for are provider experience and provider methods (e.g., Porter & Teisberg, 2006). Provider experience is often measured via the number of patients a provider has treated for a particular condition and becomes a rough proxy for their skill and efficiency. All things considered and assuming everyone is practicing within their scope of competence, we would expect someone with more experience to have better patient outcomes than someone with less experience, and it would be unfair to reward or punish someone simply based on how long they have been around.
Methods refer to the processes of care delivery themselves. As an example, the research literature may suggest two different approaches are effective in producing positive patient outcomes. The provider chooses one of these approaches and thus their abilities would be compared to other providers who took a similar approach. This does not mean that comparing across different approaches would not be useful in the long run for determining which approaches might be better for which types of patients. However, for value-related measure comparisons, providers would be compared to those with similar experience and using similar methods.
Public Data Sharing and Transparency
A fourth component of value-based care is public data sharing and transparency around value-related measures. It is possible to make claims about patient outcomes by simply looking internally and only at one’s own data. Over time, one could identify whether they are performing better or worse than they did in the past. But they still would not know by how much they might improve, nor whether there are areas where they are performing so poorly that they may want to consider ceasing to offer that service. To obtain such feedback on relative performance, a provider needs data on how they are doing compared to similar providers who provide services to similar patients. This requires that providers share data on their quality measures over time with each other. Further, patients also have a right to know how good potential providers are and whether there are providers who are better at treating people most like them. This requires that data are shared in a public location accessible to all relevant stakeholders.
Critical to public data sharing is transparency around exactly what measures were collected, how they were collected, and how they arrived in the public repository. Fortunately for readers of this journal, behavior analysts do not have to reinvent the wheel here. The ability to obtain feedback on performance relative to peers already exists via several organizations, albeit these systems are in their infancy (e.g., BHCOE, ACQ). Further, data registries are a common method that providers use in other areas of health care to submit data and receive feedback on their performance relative to other providers. Though some registries exist for autism (Payakachat et al., 2016; Schendel et al., 2013), they are not currently specific to ABA. Nevertheless, registries in other areas of behavioral health provide examples of the types of measures others have found useful and that ABA providers might use as a starting point. Regardless of whether it is accomplished through accreditation, data registries, or some other mechanism, the value of services delivered is unlikely to improve unless performance data are shared publicly and the way they were collected is transparent.
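As a sketch of the kind of relative-performance feedback a registry or accreditor might return, the snippet below computes a provider's percentile rank on a shared quality measure against hypothetical peer scores.

```python
# Hypothetical peer scores on some shared, agreed-upon quality measure
# (higher is better). These numbers are invented for illustration.
peer_scores = [0.62, 0.71, 0.55, 0.80, 0.67, 0.74, 0.59]

def percentile_rank(score, peers):
    """Share of peer scores at or below the given score, as a percentage."""
    return 100.0 * sum(s <= score for s in peers) / len(peers)

# A provider scoring 0.71 learns where they sit relative to similar providers.
print(round(percentile_rank(0.71, peer_scores), 1))
```

The value of such feedback depends entirely on the transparency conditions described above: the percentile is meaningless unless every peer collected the measure the same way.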
Critical to note is that public data sharing and transparency create an obvious potential downside: ABA companies changing their behavior because their performance is publicly broadcast. At the time of writing, there are few opportunities for patients or employees to know how well one ABA company provides services relative to another. Making such data public could lead to situations where ABA companies try to game the system, such as by “teaching to the test” or by only taking on clients with whom it is easier to demonstrate progress, leading to inequity issues. The presence and type of unintended consequences VBC systems create and how to address them is a well-documented topic in the VBC literature (e.g., Damberg et al., 2014; Porter & Teisberg, 2006; Powell et al., 2011; She & Harrison, 2021) and is discussed further in sections below. But, here, what is important to highlight is that public data sharing and transparency create a new set of contingencies likely to influence the behavior of providers in many intentional and unintentional ways. Coincidentally, many behavior analysts are skilled in understanding and designing complex reinforcement schedules, offering an opportunity for those interested to contribute positively to this conversation.
Field-Wide Collaboration and Coordination
A final component of value-based care is field-wide collaboration and coordination around the first four components. It will be extremely difficult to improve the quality of services—on the whole—if each individual BCBA or ABA organization uses different cost measures; uses different quality measures; controls for patient differences differently; uses different methods to describe what makes them and their service delivery unique; and refuses to share data or be transparent about how they measure and analyze the above. This means that strategies such as coordinated working groups (Porter & Teisberg, 2006) and integrated learning teams (Teisberg et al., 2020) spanning business and clinical operations within and across organizations will likely be needed to fully measure, analyze, and improve upon the above. Related to this, and important to note, is that quantifying the above typically occurs around a patient’s condition—not around specific types of providers. Thus, collaboration and coordination would ideally include the other providers that an ABA organization’s patients interact with whose care relates to the same condition for which those patients are receiving ABA therapy.
Myths and Misconceptions of Value-Based Care
Given the complexity and relative novelty of value-based care to ABA, it may help to discuss common myths and misconceptions about value-based care. Table 3 summarizes five common misconceptions about VBC. One misconception is that a single stakeholder creates and disseminates the measures used for value-based care. Though those first to market with value-based measures will likely control the initial narrative, the goal of all value-based care is iterative movement toward improved quality and reduced cost. This means that anyone who demonstrates improved value will have an important voice in the conversation. It also means that quality and cost measures and benchmarks will likely change over time as researchers and providers identify methods to improve quality and reduce cost.
Table 3.
Common Myths and Misconceptions about VBC
| Myth / Misconception | Actuality |
|---|---|
| One stakeholder creates quality measures and disseminates them for use by all. | Anyone can create and disseminate a quality measure. If it reduces cost, improves quality, or does both, then it is important. Measures also continuously iterate as fields improve. |
| Reducing care is the goal. | Reducing errors and rework is good. But, improving value is the goal. Reducing cost / care is not of value if outcomes fail to maintain or improve. |
| Higher quality care necessarily costs more. | Better providers are often more efficient and make fewer errors. Providers better at treating particular conditions often have lower costs in the long run. |
| Structure and process measures come first, then we can talk outcomes. | Consensus may never come. Standards of care are useful but should not stifle innovation. There is no reason both cannot be pursued simultaneously. |
| Significant breakthroughs are needed to advance VBC. | Most advances develop systematically with small iterations over time which add up to a large impact. Further, many fields have dozens of quality and cost measures rather than a few key ones. |
A second common myth and misconception is that reducing care is the goal. Reducing errors and rework certainly is an important goal. And, where two or more methods exist for obtaining the same patient outcomes, it seems difficult to argue for using the more expensive one. But, as noted by Porter and Teisberg (2006), competing only on costs instead of on cost and quality together (i.e., value) only makes economic sense in commodity businesses, where all sellers are similar. Health care is not a commodity business, and not all health-care providers offer the exact same service. Thus, reducing care is not the goal; rather, the goal of VBC is to improve the value of health care to patients over time.
A third myth and misconception is that higher quality care necessarily requires higher costs. Rather, research in human service delivery often finds that focused attention, experience, learning, and scale in addressing a particular condition lead to a provider being able to obtain better results, faster, and with greater efficiency (e.g., Colombara et al., 2015; see penetration cycles in economics for similar concepts). Further, better providers also typically have more accurate diagnoses; fewer errors in decision making, leading to less rework and less overuse or misuse of resources; less invasive interventions; and reduced need for additional treatment (Porter & Teisberg, 2006). In short, providers better at treating particular conditions often have lower cost per patient in the long run because they do more with the same amount of resources.
Another common myth and misconception is that fields newer to value-based care need to start by gaining consensus around structure and process measures so that everyone is doing the same (or similar) things (i.e., that clinical decision making is standardized first). This argument often goes that only by first ensuring that everyone is engaged in similar processes can outcomes be truly compared. But, forcing everyone to adhere to identical processes may ignore the complexity of a patient’s individual situation when the detail needed to make a clinical decision exceeds what the presumed-best decision processes capture. Further, forcing identical processes creates a contingency that may lead providers to focus on methods to improve adherence to the process rather than contingencies that foster innovation around better structures and processes (Porter & Teisberg, 2006). An alternative view, for those who disagree that processes and structures need to be standardized first, is that what is needed is variation and selection through competition on results, not standardized care; that is, competition on results is most important, not mere adherence to evidence-based ABA.
A final common myth and misconception is that significant breakthroughs are needed to advance value-based care. Though large steps in reducing cost or improving quality are certainly useful, the literature on the development of quality measures highlights that variation, iteration, and selection of measures can add up to significant improvement over time (e.g., Crandall et al., 2011; Taylor et al., 2014). A related solution surrounds data sharing, transparency, and field-wide collaboration so that the field can iterate in an agile framework (Ashmore & Runyan, 2014) around continuous improvement of patient outcomes. It is important to note that advances in this area are likely to look different from traditional academic research because they are focused on service delivery, not research; but sharing of information is worthwhile nonetheless. Outlets such as Behavior Analysis in Practice are designed around providing best-practice information relevant to the service delivery of behavior analysis, suggesting that options exist for ABA providers to share such information. However, editors, reviewers, and readers may need to begin reading outside of current behavior-analytic journals and publishing types of articles that differ from what they are used to reading.
Data and Analytic Challenges to Value-Based Care in ABA
Having covered the basic components and common misconceptions of value-based care in general, we can turn to the significant challenges that remain for ABA provider organizations and the field of ABA to gain consensus. The first challenge will be for individual providers to begin collecting data on (i.e., measuring) the many facets of VBC outlined above. For readers looking for something they can do tomorrow when they get to work, this is the challenge to address. Below are numerous questions that require data to answer sufficiently. As a first step, can you answer these questions? If not, how will you begin collecting data around these topics, and how will you prioritize which to address first?
Beginning with cost, questions here include: How does each organization measure the cost of delivering one unit of ABA services now? How well does their measure align with the payer’s perspective of utilization relative to price? What is the gap between their measure of cost and the payer’s perspective? What measures of efficiency do they currently use? How do they currently measure clinical and administrative errors, rework, and recurring effort? Do they currently measure and manage opportunity cost and, if so, what is their current definition of “success” for their patients? How does their approach differ from other providers? How do they tie the above into organizational behavior management systems? What even counts as a “cycle of care” for ABA treatment? And, how might researchers begin to offer and publish strategies for ABA providers around optimal cost measures so as not to overburden leadership in ABA organizations with data swamps and death by dashboards? It is important to note that the above measures of cost are often argued to be best aggregated and compared for each patient over the full cycle of care within a specified reporting period.
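To make the final point concrete, the sketch below aggregates service-event costs per patient within a reporting period, one simple reading of cost over the full cycle of care. The event categories and dollar amounts are invented for illustration.

```python
# Hypothetical sketch of per-patient cost aggregation over a reporting
# period. Real systems would pull these events from billing/EHR data.
from datetime import date

events = [
    {"patient": "P1", "date": date(2023, 1, 5),  "category": "direct_therapy", "cost": 120.0},
    {"patient": "P1", "date": date(2023, 2, 9),  "category": "supervision",    "cost": 80.0},
    {"patient": "P1", "date": date(2024, 1, 2),  "category": "direct_therapy", "cost": 120.0},
    {"patient": "P2", "date": date(2023, 3, 14), "category": "assessment",     "cost": 300.0},
]

def cost_per_cycle(events, start, end):
    """Total cost per patient for events falling within [start, end]."""
    totals = {}
    for e in events:
        if start <= e["date"] <= end:
            totals[e["patient"]] = totals.get(e["patient"], 0.0) + e["cost"]
    return totals

print(cost_per_cycle(events, date(2023, 1, 1), date(2023, 12, 31)))
```

Even this trivial aggregation forces the hard questions in the paragraph above: which event categories belong in the total, and where a "cycle of care" begins and ends.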
Arguably, the biggest hurdle to VBC initiatives in ABA will be in identifying, adopting, and creating systematic methods to collect data on quality measures. As noted in the introduction, some have already begun to make movement around outcome measures by promoting specific portfolios of norm-referenced and criterion-referenced assessments (e.g., BHCOE, 2021; ICHOM, n.d.) whereas others have argued for combining standardized and nonstandardized assessments given that standardized assessments may fail to capture a large portion of what behavior analysts work on at the request of patients (e.g., ACQ, 2022).4 Supporting these approaches are recent data highlighting that approximately half of the goals worked on over the course of 3.9 million ABA sessions came from a standardized program library tied to standardized assessments and the remaining half came from custom designed programs (Sosine & Cox, 2023).
Regardless of one’s approach to the use of standardized versus nonstandardized assessments as outcome measures for ABA, ingenuity will be needed via collaboration between providers and researchers. Remember, the goal of outcome measures is to improve the quality of care each BCBA and RBT delivers to their patients. Many currently available standardized assessments require 12 months or longer before changes are observed. As a feedback tool around service quality, a delay this long is of little utility for providers. As an alternative, many past researchers have used goals mastered to indicate progress (e.g., Linstead et al., 2017a, b). Goals mastered has the advantage of capturing the outcomes unique and deemed important to each individual patient and can give more immediate feedback to BCBAs and RBTs around the effectiveness of their services. However, goals mastered has a significant drawback: the difficulty and amount of behavior change captured by each goal may differ across providers and possibly even across goals within the same patient. This makes “goals mastered” alone difficult to use as a measure for comparing providers, as it may not mean the same thing in each instance. Given the central importance of outcome measures to VBC, this challenge needs a long-term solution agreed upon by all stakeholders.
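One possible, untested mitigation of the comparability drawback is to weight each mastered goal by an agreed-upon difficulty rating. The sketch below illustrates the arithmetic only; the weights and goal types are invented, and establishing real ones would itself require field-wide consensus.

```python
# Sketch of a difficulty-weighted "goals mastered" score. The difficulty
# ratings here are hypothetical; comparability across providers depends on
# everyone using the same (validated) rating scheme.

mastered_goals = [
    {"patient": "P1", "difficulty": 1.0},  # e.g., a single-step imitation goal
    {"patient": "P1", "difficulty": 3.0},  # e.g., a multi-step daily-living goal
    {"patient": "P2", "difficulty": 1.0},
]

def weighted_goals_mastered(goals, patient):
    """Sum of difficulty weights across a patient's mastered goals."""
    return sum(g["difficulty"] for g in goals if g["patient"] == patient)

print(weighted_goals_mastered(mastered_goals, "P1"))  # 4.0
print(weighted_goals_mastered(mastered_goals, "P2"))  # 1.0
```

Under this scheme, two raw "goals mastered" for P1 count for more than two trivially easy goals would, which is exactly the distinction the unweighted count cannot make.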
Less discussed in the extant literature are intermediate clinical outcomes and patient-reported outcome measures. Intermediate clinical outcomes that predict standardized assessment results might be an excellent way to bridge the current long delays to changes on standardized assessment scores. Work here would require data collected on behavioral patterns that reliably predict long-term changes on standardized assessments. Given behavior analysts’ superb abilities around observing and measuring patterns in behavior–environment and behavior–behavior relations, we seem well positioned to find such behavioral markers. All that is missing is creative work around what these markers might be, data collection to assess their validity as predictors of behavior change, and replication across many patients. Likewise, patient-reported outcomes are relatively absent from the ABA literature despite effective and applied being critical components of ABA (Baer et al., 1968, 1987).
Broadly accepted process and structure measures are also largely absent from the published literature. We do know that conducting a functional assessment typically leads to more effective and faster reduction of challenging behavior (e.g., Neidert et al., 2010; O’Brien & Hendrix, 2021). It seems likely that preference assessments and reinforcer assessments (formal or informal) will improve the effectiveness of skill acquisition programs (e.g., Leaf et al., 2016; Toussaint et al., 2015; Weyman & Sy, 2018). And, the pages of JABA are filled with demonstrations for how we can train staff and parents to implement programs, the importance of treatment fidelity, and various methods to teach verbal behavior, motor skills, or adaptive skills of daily living. But an efficacy-effectiveness gap (e.g., Nordon et al., 2016; Ustun, 2011) likely exists in ABA just as it does in other health-care fields. In brief, how does the efficacy of our research translate to practical effectiveness as we move from the laboratory to the clinic and from brief, 5-min sessions focused on a single program and a single reinforcement schedule to 2 hr-long sessions with dynamic changes in programs, reinforcer schedules, and reinforcers used? And, how do organizational structures and systems relate to patient outcomes? As with the various categories of outcome measures, experimentation and data sharing of findings from service settings will significantly help move this area forward.
Controlling for patient characteristics is next on the list of challenges and, it can be argued, second most important behind outcome measures. A fundamental tenet of a scientific approach to behavior is that claimed principles and laws of learning apply to all biological organisms. But, just because they do apply to all living organisms does not mean that learning occurs at the same rate for all people, everywhere, and all at once. VBC is not just about whether someone makes progress or not, but how quickly they do and if alternative approaches may have been better. The specifics and success of each intervention are likely to be affected by certain characteristics of the patient’s learning history, their physiology, current circumstances, and other therapies being received. What characteristics do we collect data on now? To what extent do they affect rate of progress for which patients and under what conditions? What data should we start collecting so we can look at it in finer detail once we have enough data?
Measuring and controlling for provider characteristics is also important. Many authors have written about scope of competence for individual BCBAs (e.g., Brodhead et al., 2022; Brodhead et al., 2018); and its critical importance is buttressed by its inclusion in the BACB Ethics Code for Behavior Analysts (BACB, 2020). Measuring and controlling for provider characteristics is similar to conversations around scope of competence with two important distinctions. First, many past writings around scope of competence make arguments for why we should attend to it and only practice within it. However, controlling for experience requires us to make explicit and to collect data on our definition of “competence.” Few authors have made arguments around specific data elements that should be captured so that we can observe, measure, and make claims about competence directly. This will be required for VBC. Second, scope of competence is often discussed at the individual BCBA level. However, for VBC, the unit of analysis is often at the provider group (i.e., ABA company) as a whole. Providers will have to learn to collect data and think about their methods and experience at this level of aggregation.
Once data around the measures, patient characteristics, and provider experience have been agreed upon and collected, the field will need to develop methods to relate overall cost to quality. Figure 3 shows one example of the types of analyses that will need to be conducted. In particular: How do various providers measure failure costs and the costs of achieving good quality? What are the total quality costs measured by different providers? Where are different providers finding a cost minimum? What counts as a unit of ABA? How might we best aggregate these quantities across all services delivered under the umbrella of “ABA”? How might these curves differ across patients with varying clinical profiles? And how do provider experience and expertise shift these curves?
Fig. 3.
Health-care Cost Model of the Tradeoff between Quality and Total Cost of the Care. Note. See text for details and definitions.
Adapted from Freiesleben (2004) to make relevant to ABA and VBC
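The tradeoff depicted in Fig. 3 can also be explored numerically. In the sketch below, the functional forms are invented for illustration only: conformance (prevention plus appraisal) costs rise with a quality level q, failure costs fall with q, and a simple grid search locates the interior minimum of total quality cost.

```python
# Numerical sketch of the quality-cost tradeoff in Fig. 3. Functional forms
# are invented; real curves would be estimated from provider data.

def conformance_cost(q):
    # Prevention + appraisal spending, assumed to rise with quality level q.
    return 10.0 * q

def failure_cost(q):
    # Cost of errors and rework, assumed to fall as quality improves.
    return 100.0 / (1.0 + q)

def total_cost(q):
    return conformance_cost(q) + failure_cost(q)

# Grid search over quality levels 0.01 .. 10.00 for the cost minimum.
grid = [i / 100.0 for i in range(1, 1001)]
q_star = min(grid, key=total_cost)
print(round(q_star, 2), round(total_cost(q_star), 1))
```

The analyses the text calls for amount to estimating these two curves per provider and per patient profile, then asking where each provider actually sits relative to its own cost minimum.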
Another set of data and analytic challenges involves bringing all the data around cost, quality, patient characteristics, and provider characteristics into a single system that offers actionable insight to ABA providers, payers, and patients. Solving this challenge involves creating the critical technical infrastructure and information technology (IT) platforms to ingest and aggregate the data, developing computational intelligence platforms to quickly make sense of the information, and visually presenting the data in a way that is meaningful to each relevant stakeholder (Zanotto et al., 2021).
Ethical Challenges to Value-Based Care
A final set of considerations for VBC surrounds expansion of the ethical considerations ABA providers might currently focus on when they provide ABA services for individual patients, one-by-one (Table 4). At its core, VBC is more of a population health look at a total system of care for patients with a particular condition.5 Though justification for ethical claims in population health might be similar to justifications of ethical claims made about the care for an individual patient (e.g., maximize benefit and minimize harm), there are some distinct differences that may result in ethical dilemmas—a situation where two competing claims to what is right lead to incompatible behavior. What is “most right” or “the best” decision will likely depend significantly on the details of the decision context as well as which ethical theory the decision-maker prefers. And, ethical dilemmas may arise only seldom. Nevertheless, understanding that these ethical dilemmas can arise allows ABA providers to proactively collect data to help them decide the best course of action should they occur. Given that book-length treatments exist around both public health ethics (e.g., Bayer, 2006) and clinical ethics in ABA, a full treatment of the intersection between these two areas is outside the scope of the current article. The five areas in Table 4 were chosen because they are areas where ethical dilemmas seem to have a high likelihood of occurring.
Table 4.
Summary of Potentially Novel or Unique Ethical Challenges for Behavior Analysts Thinking about Value-Based Care
| Area (Ethical Concept) | Traditional ABA | Behavior Analysis and VBC |
|---|---|---|
| Defining the right (set of) outcome(s). (Beneficence / Nonmaleficence) | Specific topographical or functional behavior change defined uniquely for each patient. | Targets defined at the patient population level. ABA outcomes may or may not take precedence, depending on context. |
| Allocation of resources. (Justice) | Unclear if considered. More is always better? | Conversations around cost and allocation of resources across all patients under one’s care become a primary consideration along with care quality. |
| Outcomes vs. Standards of Care (Ethical Theory Conflict: Utilitarianism vs. Deontology) | Unclear if considered? Assumption that adhering to methodology is best? | More data around patient outcomes and the methods needed to achieve sufficient outcomes in nonlaboratory settings. Move from baseline vs. treatment to treatment vs. treatment analyses. |
| Equity (Justice) | Unclear if data collected on this or considered? Focus on isolated procedural effectiveness in small samples. | Analysis of large datasets spanning patient characteristics that are difficult to control for experimentally so as to identify the characteristics' relation to patient outcomes. |
| Scope of Competence (Beneficence / Nonmaleficence) | Ideal ethical guideline that certified behavior analysts ought to practice within. | Explicit definition and measurement. Potential assumption that no one has the automatic right to practice with patients unless they have demonstrated competence. |
A first area where an ethical dilemma might arise is around defining the right set of outcome(s) for a patient population. Health-care providers are often highly trained to identify areas of improvement specific to their area of expertise. VBC focuses the conversation on the total cycle of care for a particular condition within a specified period. In this view, ABA providers would recognize they are one specialist within a larger care team aimed at improving someone’s life relative to a particular condition. Sometimes, this might mean that other outcomes specific to other disciplines take precedence over ABA. Sometimes, this might mean ABA-specific outcomes take precedence over those specific to other disciplines. And, in all situations, data around optimizing patient outcomes relative to the patient's stated preferences should determine what the right balance is.
A second area where ethical dilemmas might arise surrounds more direct and explicit conversations around the allocation of resources. By definition, the value of ABA services includes reference to the amount of resources used to deliver a particular outcome. By the definition commonly used within the VBC literature (Porter & Teisberg, 2006), those that produce the same outcomes for lower cost provide more valuable care. And, by definition, those who produce better outcomes for the same cost provide more valuable care. Conversations around cost and allocation of resources are relatively absent from the research literature in ABA service delivery. A lot of data shows providers how to change behavior with varying techniques and that we can change behavior where previously it might have been difficult. Adding resource use to this conversation adds questions around not just whether we change behavior, but how fast can we? And what does it take to speed up behavior change by x amount?
Fortunately, we already have data and decision-making frameworks to begin adding resource use to the conversation. The same published literature providing us with effective techniques also provides us with ample data to calculate how many sessions, minutes, trials, etc. were needed to produce the effects observed for specific behavior change strategies and for specific patient profiles. Once obtained, comparisons across procedures will allow providers to begin using the most efficient ways to change behavior and will allow researchers to begin exploring methods to improve the efficiency of behavior change strategies. Related to this, BCBAs already make many decisions around how to allocate their time, attention, and clinical resources to the patients on their caseload. Thus, many are likely already thinking about resource allocation to optimize their patients’ progress. The conversations in this article are simply expanding that outward to not think just about the patients on our caseload but also across the entire ABA organization we work at. Note, this does not mean that analysis at the broader, organizational level trumps the analysis at the individual patient level. Only that all analyses are likely considered relative to the larger landscape it all sits within.
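As a minimal illustration of adding resource use to the conversation, the sketch below compares two hypothetical procedures on mean trials to mastery. The numbers are invented; in practice they would be extracted from published studies or aggregated service-delivery data, as the paragraph above describes.

```python
# Sketch: comparing behavior change procedures on resource use. Trial counts
# per patient are hypothetical stand-ins for data extracted from the
# literature or from service records.

trials_to_mastery = {
    "procedure_A": [30, 40, 35, 45],
    "procedure_B": [60, 55, 70, 65],
}

def mean(xs):
    return sum(xs) / len(xs)

for proc, trials in trials_to_mastery.items():
    print(proc, mean(trials))
```

With these illustrative data, procedure_A reaches mastery in fewer trials on average, i.e., comparable outcomes for less resource use, which is exactly the kind of efficiency comparison the VBC definition of value requires.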
A third important ethical conversation the field will likely need to come to consensus on surrounds outcomes versus standards of care. At its core, this is a false dichotomy. There is no reason that adhering to standards of care and best practices cannot lead to optimal outcomes. It can be argued that this relationship holds by definition of standards of care and of evidence-based best practice; they earn those labels because they have been shown to lead to the best patient outcomes based on what we know at this point in time. But, as described previously, strict adherence to methodology may stifle innovation and cause providers to ignore the nuances of a situation wherein deviation would actually be in the best interest of the patient. On the other hand, outcomes resulting from ABA services are likely impossible to perfectly predict. Adhering to best-practice standards of care might be the most appropriate way to compare and rank providers, but this still leaves open the thorny conversation around who decides what those best-practice standards of care are and how they make that determination.
The conversation in the previous paragraph is similar to the classic clinical ethics debate between utilitarianism (it does not matter what you do as long as patient outcomes are optimal) and deontology (you should be rewarded for following best practices because some things are simply out of your control). And, seemingly, it bears a striking resemblance to conversations around functional analyses versus IISCA versus descriptive functional assessments.6 It is important to note that the argument here is not whether the assessments provide you with the same type of information because that is clearly not the case (Tiger & Effertz, 2021). The point here is that an outcomes-based approach focuses on whether the best possible outcomes were achieved as opposed to how you arrived at them. Questions remain: How do we know our decision led to the best possible outcomes? What data can we collect—across many providers and many patients—to determine which standards of care can compete in these analyses and which ones are simply too exploratory to consider?
A fourth important ethical topic central to VBC is equity. In a broad sense, equity in health-care service delivery refers to the idea that equity in health is the “absence of systematic disparities in health (or in the major social determinants of health) between groups with different levels of underlying social advantage/disadvantage” (Braveman & Gruskin, 2003). Despite the seeming simplicity of that definition, operationalizing equity in health care is often inconsistent (Lane & Davis, 2022) and can sometimes be incompatible (Culyer & Wagstaff, 1993). Equity and the role of social determinants of health (SDoH) are largely absent from the ABA literature. This is not surprising as much published work has focused on clinical methodology and demonstration of isolated procedural effectiveness for small sample sizes. Understanding disparities in ABA services requires large datasets spanning many patients across an organization or many organizations. As the field begins to move in the direction of looking broadly at the value of care it provides, it seems likely we will identify areas where disparities exist and that will take creative solutions to solve.
Related to the health (in)equity conversation is the potential for providers to "game the system" by providing care only to those who are easier to treat so that their publicly shared data look better. In theory, appropriately stratified or risk-adjusted measures should prevent this from occurring. However, as described previously, risk adjustment is hard to do well and may add bias to a dataset where none existed. Further, both risk adjustment and stratification require that the right data have already been collected, which many ABA organizations may not have done. This suggests that the ethical implementation of VBC likely will need to wait until these challenges are adequately solved, to avoid creating greater inequity in the delivery of ABA services.
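To make the stratification point concrete, the sketch below (hypothetical providers, risk strata, and outcome values, not a validated risk model) shows how raw averages can favor a provider simply because of an easier case mix, while within-stratum comparisons tell a different story:

```python
from collections import defaultdict

# Hypothetical records: (provider, risk_stratum, outcome_gain).
# Provider A serves mostly "low"-risk patients; provider B mostly "high"-risk.
records = [
    ("A", "low", 9.0), ("A", "low", 8.0), ("A", "low", 10.0), ("A", "high", 3.0),
    ("B", "low", 9.5), ("B", "high", 4.0), ("B", "high", 3.5), ("B", "high", 4.5),
]

def mean(xs):
    return sum(xs) / len(xs)

def raw_means(rows):
    # Pool all patients per provider, ignoring case mix.
    by_provider = defaultdict(list)
    for provider, _, gain in rows:
        by_provider[provider].append(gain)
    return {p: mean(g) for p, g in by_provider.items()}

def stratified_means(rows):
    # Compare providers only within the same risk stratum.
    by_key = defaultdict(list)
    for provider, stratum, gain in rows:
        by_key[(provider, stratum)].append(gain)
    return {k: mean(g) for k, g in by_key.items()}

raw = raw_means(records)          # A looks far better overall (easier caseload)
strat = stratified_means(records)  # within each stratum, B matches or beats A
```

In this toy dataset the pooled averages reward the provider with the easier caseload, while the stratified view reverses the conclusion; without the stratum variable already collected, neither adjustment is possible.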
A final ethical conversation relates to the scope-of-competence discussion earlier. At its snarkiest and most blunt, an adage in VBC is that "health-care providers have no right to practice without proving good results." Note that "good" here does not mean the provider is the "best." Stated more charitably, just because ABA providers can provide services to particular types of patients does not mean that they should. Rather, excellence and quality should shape what services are offered and to whom, as opposed to breadth of services and convenience. One extreme example is ThedaCare, a Wisconsin health-care delivery system that at one point claimed to seek to provide services only in areas where it ranked at or above the 95th percentile (Porter & Teisberg, 2006). In ethical terms, this involves greater self-reflection and explicit identification of the benefit we can provide to certain types of patients, sharing data so we know whether other providers could serve a patient better, and referring out when doing so is in the best interests of the patient, regardless of the impact on our business.
Moving Forward
Payers and patients are justified in asking what kind of health-related improvements they can expect from the money and time they spend working with ABA providers. They also are justified in asking whether one ABA provider is better or worse than another at helping them achieve their health-care goals. As with other health-care disciplines, VBC is likely to come to ABA unevenly and slowly over time, mainly because it is hard to do well and requires the collaboration of many people spanning many disciplines. Given where the field is now, however, there are at least three things ABA providers can begin doing tomorrow to improve the quality of care they deliver and move toward optimizing the value of their services.
The first thing to do is begin collecting data. When reading through this article, readers likely recognized several areas where they are not currently collecting the data being described, or where collecting it was not even on their radar. Though the amount of information to be collected may feel overwhelming, the important thing is simply to get started. Data and information accumulate only over time; what feels fluent today was likely difficult when it was new; and the range of data collected can be increased systematically as ABA providers gain fluency. Further, at some point the field will converge on a standard set of cost and quality measures for evaluating the value of ABA. Those who get started early are more likely to contribute meaningfully to those decisions. So, as a first step, identify the handful of cost-related or quality-related measures important to your organization that you currently do not collect, and figure out how you will begin collecting them.
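As one way to get started, an organization might keep even a very simple internal registry of the measures it tracks. The sketch below is a minimal illustration; the measure names and categories are hypothetical, not a proposed standard set:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Measure:
    """One tracked measure and its accumulated observations."""
    name: str
    category: str  # "process", "outcome", or "cost" (per CMS measure types)
    observations: list = field(default_factory=list)

    def record(self, value: float) -> None:
        self.observations.append(value)

    def summary(self) -> float:
        # Simple average; real programs would likely use period-over-period views.
        return mean(self.observations)

# Hypothetical starter set of measures for one organization.
measures = {
    "cancellation_rate": Measure("cancellation_rate", "process"),
    "goal_mastery_per_month": Measure("goal_mastery_per_month", "outcome"),
    "cost_per_authorized_hour": Measure("cost_per_authorized_hour", "cost"),
}

measures["goal_mastery_per_month"].record(2.0)
measures["goal_mastery_per_month"].record(3.0)
```

Even a registry this small forces the useful questions: what exactly is each measure, which category does it belong to, and where will its observations come from.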
The second thing ABA providers can do now is take a critical look at the IT infrastructure underlying their business. As noted throughout, evaluating the value of ABA services requires bringing together many disparate data sources that span business operations, billing, and clinical services. Once brought together, it involves aggregating, analyzing, and visualizing the data to identify where variability in provider performance or patient outcomes indicates opportunity for improvement. Finally, value in health care is often a relative claim. This means that data sharing and transparency around what ABA providers are good at (and what they might not be good at) are important—even if only internal to an organization. All of the above requires IT infrastructure and skilled personnel to do cost-efficiently. A first step here is to conduct data maturity and IT maturity assessments to help plan where improvement is needed (e.g., DAMA, 2024; data.org, 2024).
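Once billing and clinical sources are joined, one signal worth surfacing is the spread of outcomes within each provider's caseload: similar means with very different spreads can flag inconsistent delivery. A minimal sketch, using hypothetical provider names and illustrative outcome values:

```python
from statistics import mean, pstdev

# Hypothetical joined dataset: per-patient outcome gains keyed by supervising
# provider, after merging billing and clinical sources. Values are illustrative.
gains_by_provider = {
    "provider_1": [4.0, 4.2, 3.8, 4.1],
    "provider_2": [6.5, 1.0, 5.5, 0.5],
}

def spread_report(data):
    # High spread relative to peers flags variability worth reviewing,
    # whether that reflects case mix, fidelity, or data quality.
    return {
        p: {"mean": round(mean(g), 2), "sd": round(pstdev(g), 2)}
        for p, g in data.items()
    }

report = spread_report(gains_by_provider)
```

In practice this kind of summary would feed a dashboard rather than a dict, but the underlying aggregation is the same regardless of tooling.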
The final thing ABA providers can do now is shift their thinking toward a more ecological approach. That is, instead of viewing themselves as a single service, separate from all others, analyzing the value of care a patient receives requires identifying how they fit with the other providers who all contribute to the health and well-being of a patient with a particular condition. Stated differently, VBC involves providing care with the patient at the center, focused on their condition and the health outcomes related to it. Note, integrating and coordinating ABA services does not mean we provide eclectic services or relax standards around providing evidence-based care. Rather, it means understanding how we fit in the total milieu of services someone is receiving for the condition that brought them to us. The focus is not ABA, SLP, OT, or special education. The focus is autism spectrum disorder, ADHD, or self-injurious behavior.
The field of ABA service provision is at an interesting inflection point. Strategic initiatives focused on the value of health care delivered often involve: (1) improving care quality; (2) improving patient access; or (3) reducing cost. Importantly, advances in one of these three areas can sometimes worsen one of the other two absent a fundamental change in the way the system operates. For example, expanding access to services as they are currently delivered necessarily increases total cost in the system. Likewise, reducing cost alone may reduce care quality, access, or outcomes. Initiatives over the past decade have led to significant expansion in access to ABA services, specifically for individuals with ASD. It cannot be overstated how fantastic this is. However, though demand for ABA services still greatly exceeds the supply of BCBAs, field-wide initiatives around improving quality and optimizing outcomes per dollar spent lag significantly, leading some to question whether ABA is effective at all (U.S. Department of Defense, 2020). Readers of this journal, along with this author, obviously disagree with that claim. But, until a fundamental change occurs in the way ABA services are provided, initiatives around improving care quality and reducing cost per outcome are needed in parallel with initiatives that improve access. VBC is a familiar framework for strategically analyzing the importance of all three aims.
Funding
None
Data Availability
There is no data associated with this article.
Declarations
Conflicts of Interest
None
Footnotes
1. VBC has been a topic of interest in many countries outside the United States (e.g., the UK, Walker et al., 2010; Latin America, Vazquez et al., 2009). However, the details of VBC implementation depend on the structure of the larger health-care system. Given the author's lack of familiarity with health-care systems outside the United States, this article discusses VBC primarily relative to health-care service delivery in the United States. This is not because the conversation is unimportant for other countries; it is only because the author is not educated enough about other systems to speak to them.
2. For a chapter-length treatment of these measures and their relation to ABA, see chapter 8 in Brodhead et al. (2022).
3. N.B. Analyzing data at this level does not necessitate a move away from developing interventions, collecting data, and tailoring treatment at the individual level. Thus, though discussion in this section is around analytics aggregated at a different level than the delivery of ABA services, it should not be read as an argument that one is better than the other. They are different and used for different purposes.
4. Such variability and breadth of behaviors targeted via ABA services also highlight the portfolio nature of VBC programs. That is, VBC programs often involve many different metrics as opposed to a single score from a single assessment. Identifying the right portfolio for each unique patient receiving services is no easy feat, let alone coming to agreement for all patients served by all providers across the field as a whole.
5. N.B. Collecting, aggregating, and analyzing data within an ABA organization across patients or providers does not mean we abandon the individualized approach and tailoring of ABA services that is the hallmark of effective health-care delivery—ABA or otherwise. It can be argued that those who continue to individualize will have better patient outcomes and so would be aligned with the goal of VBC. Nevertheless, a helpful reviewer comment suggested it may be worthwhile to remind readers that these approaches are not in conflict or mutually exclusive. We can analyze data at both levels and to good effect.
6. Readers interested in learning more about this conversation are referred to Coffey et al. (2019); Fisher et al. (2016); Greer et al. (2020); Hanley et al. (2014); and Tiger and Effertz (2021).
7. RethinkFirst was not involved in any way with the work conducted, the decisions made, or the analyses used in connection with this article. The opinions expressed in this article are the author's own and do not necessarily represent the views of RethinkFirst.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Abou-Atme, Z., Alterman, R., Khanna, G., & Levine, E. (2022). Investing in the new era of value-based care. McKinsey & Company. https://www.mckinsey.com/industries/healthcare/our-insights/investing-in-the-new-era-of-value-based-care
- American Hospital Association (AHA). (2023). Total cost of care: Key considerations. https://trustees.aha.org/articles/1339-total-cost-of-care-key-considerations#:~:text=Two%20primary%20variables%20are%20used,regardless%20of%20provider%20or%20setting.
- Ashmore, S., & Runyan, K. (2014). Introduction to agile methods. Addison-Wesley. [Google Scholar]
- Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis,1(1), 91–97. 10.1901/jaba.1968.1-91 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Baer, D. M., Wolf, M. M., & Risley, T. R. (1987). Some still-current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis,20(4), 313–327. 10.1901/jaba.1987.20-313 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bayer, R. (2006). Public health ethics: Theory, policy, and practice. Oxford University Press. [Google Scholar]
- Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts. https://bacb.com/wp-content/ethics-code-for-behavior-analysts/
- Behavioral Health Center of Excellence (BHCOE). (2021). Selecting appropriate measurement instruments to assess treatment outcomes of individuals with autism spectrum disorder: Guidelines for practitioners, payors, patients, and other stakeholders.
- Bendix, J. (2022). Value-based care gains ground. Medical Economics,99(9), 30–35. [Google Scholar]
- Birzhandi, P., & Cho, Y. S. (2023). Application of fairness to healthcare, organizational justice, and finance: A survey. Expert Systems with Applications,216, 119465. 10.1016/j.eswa.2022.119465 [Google Scholar]
- Bloomberg (2022, May). Behavioral Health Center of Excellence and Centene announce partnership to advance quality in autism treatment outcomes. https://www.bloomberg.com/press-releases/2022-05-10/behavioral-health-center-of-excellence-and-centene-announce-partnership-to-advance-quality-in-autism-treatment-outcomes
- Braveman, P., & Gruskin, S. (2003). Defining equity in health. Journal of Epidemiology & Community Health, 57, 254–258. https://jech.bmj.com/content/jech/57/4/254.full.pdf [DOI] [PMC free article] [PubMed]
- Brodhead, M. T., Cox, D. J., & Quigley, S. P. (2022). Practical ethics for effective treatment of autism spectrum disorder. Elsevier. [Google Scholar]
- Brodhead, M. T., Quigley, S. P., & Wilczynski, S. M. (2018). A call for discussion about scope of competence in behavior analysis. Behavior Analysis in Practice,11, 424–435. 10.1007/s40617-018-00303-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Centers for Medicare & Medicaid Services (CMS). (2023a). National health expenditure data: Historical. https://www.cms.gov/data-research/statistics-trends-and-reports/national-health-expenditure-data/historical#:~:text=U.S.%20health%20care%20spending%20grew,spending%20accounted%20for%2018.3%20percent.
- Centers for Medicare & Medicaid Services (CMS). (2023b). Types of measures. https://mmshub.cms.gov/about-quality/new-to-measures/types
- Coffey, A. L., Shawler, L. A., Jessel, J., Nye, M. L., Bain, T. A., & Dorsey, M. F. (2019). Interview-Informed Synthesized Contingency Analysis (IISCA): Novel interpretations and future directions. Behavior Analysis in Practice,13(1), 217–225. 10.1007/s40617-019-00348-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Collins, S. R., Roy, S., & Masitha, R. (2023, October). Paying for it: How health care costs and medical debt are making Americans sicker and poorer. The Commonwealth Fund. https://www.commonwealthfund.org/publications/surveys/2023/oct/paying-for-it-costs-debt-americans-sicker-poorer-2023-affordability-survey
- Colombara, F., Martinato, M., Girardin, G., & Gregori, D. (2015). Higher levels of knowledge reduce health care costs in patients with inflammatory bowel disease. Inflammatory Bowel Diseases,21(3), 615–622. 10.1097/MIB.0000000000000304 [DOI] [PubMed] [Google Scholar]
- Commonwealth Fund. (2023). Health insurance coverage eight years after the ACA. https://www.commonwealthfund.org/publications/issue-briefs/2019/feb/health-insurance-coverage-eight-years-after-aca
- Cox, D. J., & Sosine, J. (2023). Influence of sample size, feature set, and algorithm on cluster analyses for patients with autism spectrum disorders. PsyArXiv.10.31234/osf.io/9k2yv
- Crandall, W. V., Boyle, B. M., Colletti, R. B., Margolis, P. A., & Kappelman, M. D. (2011). Development of process and outcome measures for improvement: Lessons learned in a quality improvement collaborative for pediatric inflammatory bowel disease. Inflammatory Bowel Diseases,17(10), 2184–2191. 10.1002/ibd.21702 [DOI] [PubMed] [Google Scholar]
- Damberg, C. L., Sorbero, M. E., Lovejoy, S. L., Martsolf, G. R., Raaen, L., & Mandel, D. (2014). Measuring success in health care value-based purchasing programs. Rand Health Quarterly, 4(3), 9. https://pubmed.ncbi.nlm.nih.gov/28083347/ [PMC free article] [PubMed]
- DAMA. (2024). The data management book of knowledge. https://www.dama.org/cpages/body-of-knowledge
- data.org. (2024). Data maturity assessment. https://data.org/dma/
- Fisher, W. W., Greer, B. D., Romani, P. W., Zangrillo, A. N., & Owen, T. M. (2016). Comparisons of synthesized and individual reinforcement contingencies during functional analysis. Journal of Applied Behavior Analysis,49(3), 596–616. 10.1002/jaba.314 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Freiesleben, J. (2004). On the limited value of cost of quality models. Total Quality Management & Business Excellence,15(7), 959–969. 10.1080/14783360410001681908 [Google Scholar]
- Goldfield, N., & Boland, P. (1996). Physician profiling and risk adjustment. Aspen Publishing.
- Greer, B. D., Mitteer, D. R., Briggs, A. M., Fisher, W. W., & Sodawasser, A. J. (2020). Comparisons of standardized and interview-informed synthesized reinforcement contingencies relative to functional analysis. Journal of Applied Behavior Analysis,53(1), 82–101. 10.1002/jaba.601 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Guiness, L., & Wiseman, V. (2011). Introduction to health economics (2nd ed.). Open University Press.
- Hanley, G. P., Jin, C. S., Vaneslow, N. R., & Hanratty, L. A. (2014). Producing meaningful improvements in problem behavior of children with autism via synthesized analyses and treatments. Journal of Applied Behavior Analysis,47(1), 16–36. 10.1002/jaba.106 [DOI] [PubMed] [Google Scholar]
- Hayes, S. C., Rincover, A., & Solnick, J. V. (1980). The technical drift of applied behavior analysis. Journal of Applied Behavior Analysis,13(2), 275–285. [DOI] [PMC free article] [PubMed] [Google Scholar]
- HealthCare.gov (n.d.). Risk adjustment. https://www.healthcare.gov/glossary/risk-adjustment/#:~:text=A%20statistical%20process%20that%20takes,outcomes%20or%20health%20care%20costs.
- Hopkins, B. L. (1995). Applied behavior analysis and statistical process control? Journal of Applied Behavior Analysis,28(3), 379–386. 10.1901/jaba.1995.28-379 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Horstman, C., & Lewis, C. (2023, April). Engaging primary care in value-based payment: New findings from the 2022 Commonwealth Fund survey of primary care physicians. The Commonwealth Fund. https://www.commonwealthfund.org/blog/2023/engaging-primary-care-value-based-payment-new-findings-2022-commonwealth-fund-survey
- Howard, J. M., Nandy, K., & Woldu, S. L. (2021). Demographic factors associated with non-guideline-based treatment of kidney cancer in the United States. JAMA Network Open,4(6), e2112813. 10.1001/jamanetworkopen.2021.12813 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hussey, P. S., De Vries, H., Romley, J., Wang, M. C., Chen, S. S., Shekelle, P. G., & McGlynn, E. A. (2009). A systematic review of health care efficiency measures. Health Services Research,44(3), 784–805. 10.1111/j.1475-6773.2008.00942.x [DOI] [PMC free article] [PubMed] [Google Scholar]
- Iezzoni, L. I. (2012). Risk adjustment for measuring health care outcomes (4th ed.). Health Administration Press.
- International Consortium for Health Outcomes Measurement (ICHOM) (n.d.) ICHOM ASD set of patient-centered outcome measures webinar. https://connect.ichom.org/patient-centered-outcome-measures/autism-spectrum-disorder/
- Jones, S. H., St. Peter, C. C., & Ruckle, M. M. (2020). Reporting demographic variables in the Journal of Applied Behavior Analysis. Journal of Applied Behavior Analysis, 53(3), 1304–1315. 10.1002/jaba.722 [DOI] [PubMed]
- Kelley, D. P., III., & Gravina, N. (2018). A paradigm shift in healthcare: An open door for organizational behavior management. Journal of Organizational Behavior Management,38(1), 73–89. 10.1080/01608061.2017.1325824 [Google Scholar]
- Lane, J. M., & Davis, B. A. (2022). Food, physical activity, and health deserts in Alabama: The spatial link between healthy eating, exercise, and socioeconomic factors. GeoJournal,87, 5229–5249. 10.1007/s10708-021-10568-2 [Google Scholar]
- LaPointe, J. (2022, August). Value-based payment makes up just 6.74% of primary care revenue. Value-Based Care News. https://revcycleintelligence.com/news/value-based-payment-makes-up-just-6.7-of-primary-care-revenue
- Larson, C. (2021, December). Magellan Healthcare, Kyo partner to develop outcome standards for ABA value-based care. Behavioral Health Business. https://bhbusiness.com/2022/12/07/magellan-healthcare-kyo-partner-to-develop-outcome-standards-for-aba-value-based-care/
- Larsson, S., Clawson, J., & Howard, R. (2023). Value-based health care at an inflection point: A global agenda for the next decade. NEJM Catalyst (nonissue content, commentary). https://catalyst.nejm.org/doi/abs/10.1056/CAT.22.0332
- Leaf, J. B., Leaf, R., Leaf, J. A., Alcalay, A., Ravid, D., Dale, S., . . . & Oppenheim-Leaf, M. L. (2016). Comparing paired-stimulus preference assessments with in-the-moment reinforcer analysis on skill acquisition: A preliminary investigation. Focus on Autism & Other Developmental Disabilities, 33(1), 14–24. 10.1177/1088357616645329
- Leape, L. L. (2000). Institute of Medicine medical error figures are not exaggerated. Journal of the American Medical Association,284, 95–97. 10.1001/jama.284.1.95 [DOI] [PubMed] [Google Scholar]
- Librera, W. L., Bryant, I., Gantwerk, B., & Tkach, B. (2004). Autism program quality indicators: A self-review and quality improvement guide for programs serving young students with autism spectrum disorders. New Jersey Department of Education. https://files.eric.ed.gov/fulltext/ED486480.pdf
- Linstead, E., Dixon, D. R., French, R., Granpeesheh, D., Adams, H., German, R., …, & Kornack, J. (2017a). Intensity and learning outcomes in the treatment of children with autism spectrum disorder. Behavior Modification,41(2), 229–252. 10.1177/0145445516667059 [DOI] [PubMed]
- Linstead, E., Dixon, D. R., Hong, E., Burns, C. O., French, R., Nocak, M. N., & Granpeesheh, D. (2017b). Translational Psychiatry, 7(9), e1234. 10.1038/tp.2017.207 [DOI] [PMC free article] [PubMed]
- Magellan Health. (2021, June). Magellan Healthcare and Invo Healthcare announce value-based collaboration focused on improved autism outcomes. https://ir.magellanhealth.com/news-releases/news-release-details/magellan-healthcare-and-invo-healthcare-announce-value-based
- Mahendraratnam, N., Sorenson, C., Richardson, E., Daniel, G. W., Buelt, L., Westrich, K., . . . & Dubois, R. W. (2019). Value-based arrangements may be more prevalent than assumed. American Journal of Managed Care, 25(2), 70–76. https://www.ajmc.com/view/valuebased-arrangements-may-be-more-prevalent-than-assumed [PubMed]
- Makary, M. A., & Daniel, M. (2016). Medical error—The third leading cause of death in the US. British Medical Journal,353, i2139. 10.1136/bmj.i2139 [DOI] [PubMed] [Google Scholar]
- Milstein, A. (2004). Testimony to the U.S. Senate Health, Education, Labor and Pension Committee. https://www.govinfo.gov/content/pkg/CHRG-108shrg91659/html/CHRG-108shrg91659.htm
- Minemeyer, P. (2022, April). Evernorth, BHCOE team to develop quality measures for autism. Fierce Healthcare. https://www.fiercehealthcare.com/payers/evernorth-bhcoe-team-develop-quality-measures-autism
- Morton, L.W., & Blanchard, T. C. (2007). Starved for access: Life in rural America’s food deserts. Rural Realities 1(4), 1–10. https://www.iatp.org/sites/default/files/258_2_98043.pdf
- Murray, C. J. L., & Lopez, A. D. (1997). Alternative projections of mortality and disability by cause 1990–2020: Global Burden of Disease Study. The Lancet,349, 1498–1504. 10.1016/S0140-6736(96)07492-2 [DOI] [PubMed] [Google Scholar]
- Naghavi, M., Abajobir, T., Abbafati, C., Abbas, K. M., Abd-Allah, F., Abera, S. F., . . . & Murray, C. J. L. (2017). Global, regional, and national age-specific mortality for 264 causes of death, 1980–2016: A systematic analysis for the Global Burden of Disease Study 2016. The Lancet, 390, 1151–1210. 10.1016/S0140-6736(17)32152-9 [DOI] [PMC free article] [PubMed]
- National Committee for Quality Assurance (NCQA) (2023). HEDIS measures and technical resources. https://www.ncqa.org/hedis/measures/
- National Quality Forum (NQF). (2023). Find measures. https://www.qualityforum.org/Qps/QpsTool.aspx
- Neidert, P. L., Dozier, C. L., Iwata, B. A., & Hafen, M. (2010). Behavior analysis in intellectual and developmental disabilities. Psychological Services,7(2), 103–113. 10.1037/a0018791 [Google Scholar]
- Nordon, C., Karcher, H., Groenwold, R. H. H., Zollner Ankarfeldt, M., Pichler, F., Chevrou-Severac, H., . . . & Abenhaim, L. (2016). The “efficacy-effectiveness” gap: Historical background and current conceptualization. Value in Health, 19(1), 75–81. 10.1016/j.jval.2015.09.2938 [DOI] [PubMed]
- O’Brien, M. J., & Hendrix, N. M. (2021). Research on challenging behaviors and functional assessment. In J. L. Matson (Ed.), Functional assessment for challenging behaviors and mental health disorders (pp. 183–211). Springer. 10.1007/978-3-030-66270-7_6
- Pate, R. R., Dowda, M., Saunders, R. P., Colabianchi, N., Clennin, M. N., Cordan, K. L., . . . & Shirley, W. L. (2021). Operationalizing and testing the concept of a physical activity desert. Journal of Physical Activity & Health, 18, 533–540. 10.1123/jpah.2020-0382 [DOI] [PMC free article] [PubMed]
- Payakachat, N., Tilford, J. M., & Ungar, W. J. (2016). National database for autism research (NDAR): Big data opportunities for health services research and health technology assessment. Pharmacoeconomics, 34(2), 127–138. 10.1007%2Fs40273-015-0331-6 [DOI] [PMC free article] [PubMed]
- Peach State Health Plan (2022, September). Behavioral Health Center of Excellence and Peach State Health Plan announce partnership to advance quality in autism treatment outcomes. https://www.prnewswire.com/news-releases/behavioral-health-center-of-excellence-and-peach-state-health-plan-announce-partnership-to-advance-quality-in-autism-treatment-outcomes-301627837.html
- Plunkett, J. J., & Dale, B. G. (1988). Quality costs: A critique of some “economic cost of quality” models. International Journal of Production Research,26(11), 1713–1726. 10.1080/00207548808947986 [Google Scholar]
- Poe Bernard, S. (2020). Risk adjustment documentation & coding (2nd ed.). American Medical Association.
- Porter, M. E., & Teisberg, E. O. (2006). Redefining health care: Creating value-based competition on results. Harvard Business Review Press.
- Powell, A. A., White, K. M., Partin, M. R., Halek, K., Christianson, J. B., Neil, B., . . . & Bloomfield, H. A. (2011). Unintended consequences of implementing a national performance measurement system in local practice. Journal of General Internal Medicine, 27, 405–412. 10.1007/s11606-011-1906-3 [DOI] [PMC free article] [PubMed]
- Reichow, B. (2011). Development, procedures, and application of the evaluative method for determining evidence-based practices in autism. In B. Reichow, P. Doehring, D. V. Cichetti, & F. R. Volkmar (Eds.), Evidence-based practices and treatments for children with autism (pp. 25–39). Springer. 10.1007/978-1-4419-6975-0_2
- Riera-Prunera, C. (2022). Opportunity cost. In F. Maggino (Ed.), Encyclopedia of quality of life and well-being research. Springer. 10.1007/978-3-319-69909-7_2016-2
- Riley, A. R., & Freeman, K. A. (2019). Impacting pediatric primary care: Opportunities and challenges for behavioral research in a shifting healthcare landscape. Behavior Analysis: Research & Practice,19(1), 23–38. 10.1037/bar0000114 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ryan, A. M., Tompkins, C. P., Markovitz, A. A., & Burstin, H. R. (2017). Linking spending and quality indicators to measure value and efficiency in health care. Medical Care Research & Review,74(4), 452–485. 10.1177/1077558716650089 [DOI] [PubMed] [Google Scholar]
- Sampson, C., Leech, A., & Garcia-Lorenzo, B. (2023). Editorial: Opportunity costs in health care: Cost-effectiveness thresholds and beyond. Frontiers in Health Services,3, 1293592. 10.3389/frhs.2023.1293592 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schendel, D. E., Bresnahan, M., Carter, K. W., Francis, R. W., Gissler, M., Gronberg, T. K., . . . & Susser, E. (2013). The international collaboration for autism registry epidemiology (iCARE): Multinational registry-based investigations of autism risk factors and trends. Journal of Autism & Developmental Disorders, 43(11), 2650–2663. 10.1007%2Fs10803-013-1815-x [DOI] [PMC free article] [PubMed]
- Schuster, M. A., McGlynn, E. A., & Brook, R. H. (2001). How good is the quality of health care in the United States? Milbank Quarterly,76(4), 517–563. 10.1111/1468-0009.00105 [DOI] [PMC free article] [PubMed] [Google Scholar]
- She, E. N., & Harrison, R. (2021). Mitigating unintended consequences of co-design in health care. Health Expectations,24(5), 1551–1556. 10.1111/hex.13308 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Shrank, W. H., Rogstad, T. L., & Parekh, N. (2019). Waste in the US health care system: Estimated costs and potential for savings. Journal of the American Medical Association,322(15), 1501–1509. 10.1001/jama.2019.13978 [DOI] [PubMed] [Google Scholar]
- Silbaugh, B. C. (2023). Discussion and conceptual analysis of four group contingencies for behavioral process improvement in an ABA service delivery quality framework. Behavior Analysis in Practice,16, 421–436. 10.1007/s40617-022-00750-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Silbaugh, B. C., & El Fattal, R. (2021). Exploring quality in the applied behavior analysis service delivery industry. Behavior Analysis in Practice,15(2), 571–590. 10.1007/s40617-021-00627-y [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sosine, J., & Cox, D. J. (2023). Initial description of 3.9 million unique ABA sessions. PsyArXiv. 10.31234/osf.io/pq28y
- Spiller, S. A. (2011). Opportunity cost consideration. Journal of Consumer Research,4(1), 595–610. 10.1086/660045 [Google Scholar]
- Stasinopoulos, J., Wood, S. J., Bell, J. S., Manski-Nankervis, J., Hogan, M., & Sluggett, J. K. (2021). Potential overtreatment and undertreatment of type 2 diabetes mellitus in long-term care facilities: A systematic review. Journal of the American Medical Directors Association,22(9), 1889–1897. 10.1016/j.jamda.2021.04.013 [DOI] [PubMed] [Google Scholar]
- Story, M., Kaphingst, K. M., Robinson-O’Brien, R., & Glanz, K. (2008). Creating healthy food and eating environments: Policy and environmental approaches. Annual Review of Public Health, 29, 253–272. https://www.annualreviews.org/doi/pdf/10.1146/annurev.publhealth.29.020907.090926 [DOI] [PubMed]
- Taylor, M. J., McNicholas, C., Nicolay, C., Darzi, A., Bell, D., & Reed, J. E. (2014). Systematic review of the application of the plan–do–study–act method to improve quality in healthcare. BMJ Quality & Safety,23, 290–298. 10.1136/bmjqs-2013-001862 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Taylor, R. S., Colombo, R. A., Wallace, M., Heimann, B., Benedickt, A., & Moore, A. (2023). Toward socially meaningful case conceptualization: The risk-driven approach. Behavior Analysis in Practice, 16(4), 1022–1033. 10.1007/s40617-023-00812-1 [DOI] [PMC free article] [PubMed]
- Teisberg, E., Wallace, S., & O’Hara, S. (2020). Defining and implementing value-based health care: A strategic framework. Academic Medicine, 95(5), 682–685. 10.1097%2FACM.0000000000003122 [DOI] [PMC free article] [PubMed]
- Tiger, J. H., & Effertz, H. M. (2021). On the validity of data produced by isolated and synthesized contingencies during the functional analysis of problem behavior. Journal of Applied Behavior Analysis,54(3), 853–876. 10.1002/jaba.792 [DOI] [PubMed] [Google Scholar]
- Toussaint, K. A., Kodak, T., & Vladescu, J. C. (2015). An evaluation of choice on instructional efficacy and individual preferences among children with autism. Journal of Applied Behavior Analysis,49(1), 170–175. 10.1002/jaba.263 [DOI] [PubMed] [Google Scholar]
- U.S. Department of Agriculture (USDA). (2022). Economic Research Service documentation on food access. https://www.ers.usda.gov/data-products/food-access-research-atlas/documentation/
- U.S. Department of Defense. (2020). Report to the committee on armed services of the Senate and House of Representatives: The Department of Defense comprehensive autism care demonstration annual report. https://therapistndc.org/wp-content/uploads/2020/08/Annual-Report-on-Autism-Care-Demonstration-Program-for-FY-2020.pdf
- U.S. Department of Health & Human Services. (2007). National healthcare quality report. AHRQ Publication No. 08-0040. https://archive.ahrq.gov/qual/nhqr07/nhqr07.pdf
- U.S. Government Accountability Office (GAO). (2022, November). United States Government Accountability Office Report to Congressional Committees: Private health insurance markets remained concentrated through 2020, with increases in the individual and small group markets. https://www.gao.gov/assets/gao-23-105672.pdf
- Ustun, T. B. (2011). The global burden of mental disorders. American Journal of Public Health,89(9), 1315–1318. 10.2105/AJPH.89.9.1315 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Vazquez, M. L., Vargas, I., Unger, J. P., Mogollon, A., Ferreira da Silva, M. R., & de Paepe, P. (2009). Integrated health care networks in Latin America: Toward a conceptual framework for analysis. Pan American Journal of Public Health,26(4), 360–367. [DOI] [PubMed] [Google Scholar]
- Walker, S., Mason, A. R., Claxton, K., Cookson, R., Fenwick, E., Fleetcroft, R., & Sculpher, M. (2010). Value for money and the quality and outcomes framework in primary care in the UK NHS. British Journal of General Practice,60(574), e213–e220. 10.3399/bjgp10X501859 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Weyman, J. R., & Sy, J. R. (2018). Effects of neutral and enthusiastic praise on the rate of discrimination acquisition. Journal of Applied Behavior Analysis,51(2), 335–344. 10.1002/jaba.440 [DOI] [PubMed] [Google Scholar]
- Zanotto, B. S., Etges, A. P. B. d. S., Marcolino, M. A. Z., & Polanczyk, C. A. (2021). Value-based healthcare initiatives in practice: A systematic review. Journal of Healthcare Management,66(5), 340–365. 10.1097/JHM-D-20-00283 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zink, A., & Rose, S. (2020). Fair regression for health care spending. Biometrics,76(3), 973–982. 10.1111/biom.13206 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zink, A., & Rose, S. (2021). Identifying undercompensated groups defined by multiple attributes in risk adjustment. BMJ Health & Care Informatics, 28(1), e100414. https://informatics.bmj.com/content/bmjhci/28/1/e100414.full.pdf [DOI] [PMC free article] [PubMed]