Autism. 2024 Feb 12;28(8):1889–1901. doi: 10.1177/13623613241231624

Evidence-b(i)ased practice: Selective and inadequate reporting in early childhood autism intervention research

Micheal Sandbank 1, Kristen Bottema-Beutel 2, Ya-Cing Syu 1, Nicolette Caldwell 3, Jacob I Feldman 4, Tiffany Woynaroski 4,5
PMCID: PMC11301951  NIHMSID: NIHMS1971818  PMID: 38345030

Abstract

We conducted a multi-pronged investigation of different types of reporting bias in autism early childhood intervention research. First, we investigated the prevalence of reporting failures of completed trials registered on clinicaltrials.gov, and found that only 7% of registered trials were updated with results on the registration platform and only 64% had associated published reports. Next, we investigated the extent to which inadequate reporting prevents inclusion in meta-analytic summary estimates by identifying reports of studies that were eligible for inclusion in a prior meta-analysis, and found that 25% were excluded due to inadequate reporting. Finally, we investigated selective reporting practices by analyzing the protocols of the studies included in the meta-analysis which had been registered on any trial registry and coding their timing, completeness, and consistency. We found that 23% of studies were pre-registered, 71% were late-registered, and 5% were registered at an unclear date. Only 8% of registrations specified all of the necessary components. Evidence of selective reporting was common; 36% failed to report a registered outcome, 61% reported unregistered outcomes, 23% switched primary and secondary outcomes, and 43% had assessment timepoints that differed from registration specification. Given the inadequacy of registration and reporting practices, we offer practical recommendations to facilitate improvement for the field of autism research.

Lay Abstract

When researchers fail to report their findings, or report only some of their findings, it can be difficult for clinicians to provide effective intervention recommendations. However, no one has examined whether this is a problem in studies of early childhood autism interventions. We examined how researchers who study early childhood autism interventions report their findings. We found that most researchers did not register their studies when they were supposed to (before the start of the study), and that many researchers did not provide all of the needed information in the registration. We also found that researchers frequently did not publish their findings when their studies were complete. When we looked at published reports, we found that many of the studies did not report enough information, and that many studies were reported differently from their registrations, suggesting that researchers were selectively reporting positive outcomes and ignoring or misrepresenting less positive outcomes. Because we found so much evidence that researchers are failing to report their findings promptly and completely, we suggest some practical changes to improve reporting.

Keywords: autism, early intervention, selective reporting, trial registration


Young autistic children may benefit from non-pharmacological interventions provided in early childhood, and various intervention approaches have been developed for this purpose. The evidence supporting the efficacy of these approaches is generated through primary studies of intervention effects, many of which employ between-group designs such as quasi-experimental and randomized controlled trials (RCTs). The findings from such primary studies are frequently synthesized in systematic reviews and meta-analyses, which are then used to inform practice recommendations and treatment guidelines (Hyman et al., 2020; Steinbrenner et al., 2020; Trembath et al., 2021).

However, methodological flaws in primary studies may bias estimates of intervention effects, and this bias can be compounded across studies synthesized in reviews and meta-analyses, potentially inflating summary effect estimates and ultimately leading to ineffective or even counterproductive practice recommendations. Consequently, it is standard practice when conducting systematic reviews and meta-analyses to evaluate primary studies for common sources of bias, such as those introduced by inadequate random assignment and allocation concealment (selection bias), inadequate masking of assessors (detection bias), or inadequate and differential participant retention over the course of the study (attrition bias; Higgins et al., 2022). Meta-analysts also routinely use tools to detect evidence of bias introduced by the differential likelihood of publication of studies that report positive and significant findings relative to those that report null or negative findings (publication bias; Boutron et al., 2022). Although the former sources of bias are often (though not always) considered in systematic reviews and meta-analyses, there are additional sources of bias related to reporting practices that rarely receive attention, such as the failure to report completed trials, inadequate reporting of trials, and selective reporting of trials. These practices, which range from simple oversights to academic misconduct, can also distort perceptions of the evidence base by facilitating overestimation of intervention effects and underestimation of intervention risk (Serghiou et al., 2023).

Failure to report

Failure to report describes investigator failure to publish or publicly share results of a completed trial. This may be a deliberate decision, in which the investigator chooses not to report any findings because the trial results were unfavorable, or an unintentional failure, for example, when the lead investigator forgoes reporting in favor of other demands, such as grant-writing, since successful grant applications tend to be more heavily incentivized by academic institutions than peer-reviewed publications. Although a 2007 US federal law requires that results of drug and device trials be publicly reported on clinicaltrials.gov, a platform established by the National Institutes of Health (NIH) National Library of Medicine in 2000 for registration and reporting of clinical trials, a recent investigation found that 90% of trials conducted by US academic institutions were not compliant with this law within a year after trial completion (Food and Drug Administration Amendments Act, 2007; Piller, 2015). Moreover, investigations linking peer-reviewed publications to registered non-drug trials suggest that as many as half of completed trials are never published (Bashir et al., 2017; Jones et al., 2013). These findings influenced the implementation of a 2017 NIH policy requiring that all NIH-funded clinical trials, whether drug or behavioral, be registered and reported on clinicaltrials.gov to increase transparency and public accessibility of information regarding the effects of all candidate treatments (National Institutes of Health, 2016). Similar regulations requiring registration and reporting of medicinal product trials have been passed in the European Union (European Commission, 2012), though recent investigations suggest compliance with reporting requirements is higher there (i.e. 49.5%), especially for trials with a commercial sponsor (Goldacre et al., 2018).

Inadequate reporting

Although effect sizes from group design studies testing the effects of interventions can be derived or estimated from a wide variety of reported statistics, researchers sometimes report findings in a way that prohibits derivation of relevant effect sizes and, consequently, inclusion in meta-analysis. When meta-analyzing group studies, meta-analysts typically elect to synthesize the standardized mean difference between the intervention and comparison group after receipt of intervention (Deeks et al., 2022). This effect size is easily calculated using post-intervention means, standard deviations, and sample sizes. However, primary study authors will occasionally report only mean change scores or means and standard deviations estimated from linear models that account for multiple baseline covariates. Although analysis of change scores and the use of linear models are not problematic in and of themselves, reporting this information in the absence of unadjusted post-intervention means and standard deviations can present problems for meta-analysts. This is because meta-analysts are encouraged to avoid synthesizing effects derived from unadjusted post scores with those derived from change scores or from linear models which adjust for baseline covariates, as these represent fundamentally different effect sizes across studies, and will contribute heterogeneity to findings (Cuijpers et al., 2017; Deeks et al., 2022; Voils et al., 2011). In addition, even if meta-analysts wish to estimate effect sizes using information from linear models which adjust for covariates, raw unadjusted standardized differences are needed to accurately estimate the standard errors for such effect sizes (What Works Clearinghouse, 2022).
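For reference, the standardized mean difference described above is computed directly from the summary statistics just named (this is the standard formulation; meta-analysts often also apply a small-sample correction such as Hedges' g):

$$\mathrm{SMD} = \frac{\bar{X}_T - \bar{X}_C}{SD_{\text{pooled}}}, \qquad SD_{\text{pooled}} = \sqrt{\frac{(n_T - 1)s_T^2 + (n_C - 1)s_C^2}{n_T + n_C - 2}}$$

where $\bar{X}_T$ and $\bar{X}_C$ are the post-intervention means of the intervention and comparison groups, $s_T$ and $s_C$ their standard deviations, and $n_T$ and $n_C$ their sample sizes. When any one of these quantities goes unreported, this effect size cannot be derived.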

When investigators fail to report the necessary information for relevant effect size derivation, meta-analysts often contact corresponding authors to request additional data, but with little success. For example, Gabelica et al. (2022) investigated data-sharing behavior by contacting authors of 1779 articles that explicitly stated that data was available by request, and found that authors of 1669 articles (93%) either failed to respond or declined to provide data. When corresponding authors fail to respond or share additional study information, potentially eligible studies must then be excluded from the meta-analysis. Thus, inadequate reporting coupled with poor data-sharing practices can represent a major hurdle to comprehensive evidence synthesis and precise summary effect estimation.

Selective reporting

Selective reporting involves intentional omissions or misreporting in order to misrepresent trial findings as more favorable than they were in reality. Bias introduced by selective reporting is important to detect, as it is directly tied to the primary hypotheses that studies are designed to test. Authors may engage in selective reporting to increase the likelihood of publication (since null results are less likely to be published) or to support an intervention or theory in which they are personally invested (financially, professionally, or emotionally). This latter motivation may warrant particular attention, given that prior work has shown that early childhood autism researchers frequently have undisclosed conflicts of interest arising from affiliation with entities that developed or provide the intervention being tested (Bottema-Beutel et al., 2021b).

Selective reporting may take the form of failing to report findings for all outcomes measured (e.g. reporting only those associated with significant results), reporting data at unplanned assessment points when results are more favorable, reporting secondary outcomes as primary or primary outcomes as secondary, or selectively reporting results of specific statistical analyses which suggest favorable results in lieu of planned analytic approaches (Chan et al., 2004). Although Cochrane guidelines include selective reporting as a core source of bias that must be evaluated in systematic reviews, evidence suggests it is difficult to detect with standard risk of bias tools and, therefore, rarely given adequate consideration (Page & Higgins, 2016; Saric et al., 2019). Still, some work suggests it may be relatively common, affecting up to half of registered studies (Wayant et al., 2017). For example, in their review of follow-up reports of medical RCTs, Kampman and colleagues found that 40% of follow-up reports named different primary outcomes than those reported in the original studies (Kampman et al., 2021). A recent meta-analysis of similar investigations estimated that primary outcome switching occurs in approximately a third of studies and secondary outcome switching occurs in 50% to 75% of studies (TARG Meta-Research Group and Collaborators, 2023).

Open science and prospective trial registration

Because reporting failures, inadequate reporting, and selective reporting can introduce biases that distort the scientific record, there have been increasing calls for scientists to use open science practices such as public data sharing and prospective trial registration to enhance the transparency and reproducibility of research (Nosek et al., 2015, 2018). Prospective trial registration is a process that requires principal investigators to specify trial arms, enrollment criteria, primary and secondary outcomes, and planned analysis procedures in advance of study enrollment in a publicly available document (Fredrickson & Ilfeld, 2011). This practice discourages selective reporting, reporting failures, and other questionable research practices that are known to inflate effect estimates (e.g. mishandling data analyses in an effort to produce a statistically significant result, also known as p-hacking) by providing a public record of planned trial methods, outcomes, and analyses, which can then be compared to subsequently published studies to verify that reported methods and outcomes match. Prospective trial registration also encourages investigators to indicate when study methods deviated from protocols and to offer justification for the change, or to flag analyses that were exploratory, allowing readers to consider important caveats in interpretation. Evidence suggests that the use of open science practices has increased in recent years (though there is still room for improvement; Wallach et al., 2018) and that increased use of prospective trial registration may reduce the potential inflation of effects caused by reporting and publication biases. For example, in a study of observed effects over time in trials funded by the National Heart, Lung, and Blood Institute, Kaplan and Irvin (2015) found that the proportion of trials reporting null effects substantially increased after the year 2000, when the mandate to prospectively register drug trials on clinicaltrials.gov took effect. They also found that pre-registration was significantly associated with the trend toward null findings. However, it is important to note that trials may be registered retrospectively, after study enrollment or completion, which may give readers the appearance of prospective registration without actually guarding against selective reporting.

Current investigation

When trials are registered, investigators can identify evidence of selective reporting through comparison of trial protocols and subsequently published studies and flagging of discrepancies in outcomes, assessment timepoints, and analyses (e.g. Chan et al., 2004; Wayant et al., 2017). Meta-science studies estimating the prevalence of selective reporting in biomedical research suggest that it is relatively common (Kampman et al., 2021; TARG Meta-Research Group and Collaborators, 2023), but no such study has yet been conducted on autism intervention trials. Intervention research to support young autistic children is a current funding priority for the US government, with multiple standing calls for NIH proposals of research to benefit this population (National Institutes of Health, 2021a, 2021b, 2021c). The increased funding and attention toward this effort has led to a rapid increase in the conduct and publication of randomized controlled trials. In fact, the set of controlled group studies of interventions designed for this population doubled in only 4 years, and quadrupled in the last decade (Sandbank et al., 2023). We recently summarized this evidence in a comprehensive meta-analysis of 252 studies of non-pharmacological interventions for young autistic children published before November 2021 (Sandbank et al., 2023). As part of that investigation, we evaluated common sources of bias (e.g. selection, detection, and attrition) and then considered those evaluations in our analyses. The current investigation extends this work through a three-pronged investigation of various sources of reporting bias (specifically, failure to report, inadequate reporting, and selective reporting) in early childhood autism intervention literature.

Aim 1: failure to report

The first aim of the investigation centered on quantifying the extent to which early childhood autism intervention researchers in the United States fail to report the findings of completed trials, either on publicly available platforms or in published articles. By definition, this reporting bias cannot be detected through examination of published studies (or meta-analytic data sets composed of published or unpublished studies such as our own); it is better detected by identifying registered completed trials that have no corresponding published results. Given that US law and 2017 NIH policy require investigators to register and publicly share results of NIH-funded trials, and that clinicaltrials.gov provides a platform to easily do both, we elected to examine reporting failures using trials registered on clinicaltrials.gov in the present study. Our research questions were:

What proportion of completed trials of non-pharmacological interventions for young autistic children registered on clinicaltrials.gov have posted results on that platform, and what proportion have associated published articles reporting results?

Aim 2: inadequate reporting

In the second part of the investigation, we aimed to quantify the extent to which inadequate reporting coupled with poor data-sharing practice prevents inclusion in meta-analysis and inhibits summary effect estimation. To accomplish this, we leveraged records from the previous meta-analysis on early childhood autism interventions, which included controlled group studies published before November 2021 and was not restricted to research registered on clinicaltrials.gov (Sandbank et al., 2023). Using this data set, we sought to answer the following research questions:

What is the proportion of reports that were deemed eligible for inclusion in the meta-analysis, but were excluded from subsequent summary effect estimation due to inadequate reporting and lack of summary data sharing upon request? Of those which were excluded for these reasons, what proportion included statements in published reports indicating that data was available upon request?

Aim 3: selective reporting

In the final component of the investigation, we estimated the extent and prevalence of selective reporting by flagging all of the trials in our meta-analysis data set that had been registered (either on clinicaltrials.gov or other international registries) and coding the completeness and consistency of their registration and results. Our research questions were:

Of the studies included in the prior meta-analysis (Sandbank et al., 2023) that had a corresponding registration, what proportion were flagged for late registration or practices indicative of selective reporting (i.e. failed to report registered outcomes, reported unregistered outcomes, reported primary outcomes as secondary or vice versa, reported assessment timepoints which differed from pre-registered assessment timepoints, reported analytic approaches which differed from preregistered analyses)?

Method

Because this investigation was separated into three parts, methods are reported accordingly.

Procedures for identifying reporting failures (Aim 1)

For the reasons stated above, our research question centered on results reporting of trials registered on clinicaltrials.gov. Our subsequent research questions leveraged the data set from the prior meta-analysis (Sandbank et al., 2023) and included studies registered on any platform (including those based outside the United States).

Identifying registered trials on clinicaltrials.gov

In order to identify registered and completed trials of early childhood autism interventions, we used the search function on clinicaltrials.gov to search trial registrations for the term “autism” and then used the filter function to restrict search results to trial registrations marked as “completed,” “intervention,” and “children.” This search was completed on 11 April 2023 and was inclusive of all trials registered since the website was established (i.e. the year 2000). This yielded 552 potentially eligible trial registrations, which were subsequently screened to exclude trials of pharmacological interventions, those with participant samples older than 8 years on average, those with only one trial arm (i.e. no control or comparison group), those with samples that did not comprise >50% children on the autism spectrum, and those that had been completed less than a year before the search. When a registration specified an eligible age range spanning ages above and below 8 years, we searched for the corresponding published article to verify that participants were younger than 8 years on average. When no corresponding published article could be located, we calculated the midpoint of the eligible age range and excluded trials with midpoints that exceeded 8 years. After screening, a total of 84 eligible registered trials were identified.
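The age-range midpoint rule is simple arithmetic; a minimal sketch (with hypothetical function and argument names) illustrates how it would be applied:

```python
def exclude_by_age_midpoint(min_age_years: float, max_age_years: float) -> bool:
    """Apply the screening rule used when no published article could
    verify mean participant age: exclude the trial if the midpoint of
    its eligible age range exceeds 8 years."""
    midpoint = (min_age_years + max_age_years) / 2
    return midpoint > 8

# A trial enrolling ages 3-15 (midpoint 9) is excluded; one enrolling
# ages 2-10 (midpoint 6) is retained, pending the other criteria.
assert exclude_by_age_midpoint(3, 15)
assert not exclude_by_age_midpoint(2, 10)
```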

Documenting results reporting on clinicaltrials.gov

In order to determine the number of registered trials with results reported on clinicaltrials.gov, we extracted trial records for each eligible registration along with the dates on which results were posted to the website. Trial registrations with no posted results had no corresponding dates.

Identifying published reports of registered trials

To identify published reports of trials registered on clinicaltrials.gov, we examined each trial registration and extracted publications that were automatically indexed by the website due to the inclusion of the clinicaltrials.gov identifier (National Clinical Trial number) in the corresponding published report. When trials did not have any publications indexed by clinicaltrials.gov, we cross-referenced the information reported in the trial registration with studies in our own database to identify any matches. This data set was accrued through a systematic search in November 2021 of the following electronic databases: Academic Search Complete, CINAHL Plus with Full Text, Education Source, Educational Administration Abstracts, ERIC, MEDLINE, Proquest Dissertations and Theses, PsycINFO, Psychology and Behavioral Sciences Collection, and SocINDEX with Full Text. In addition, we searched Google Scholar for the trial ID to identify reports which may have been published after November 2021. Registered trials with no corresponding published results or reports were categorized as reporting failures.

Procedures for identifying inadequate reporting (Aim 2)

Studies were categorized as instances of inadequate reporting if they were identified as eligible for inclusion in the recent meta-analysis of early childhood intervention studies, but excluded from meta-analytic summary estimation because relevant data for effect size estimation could not be extracted from published papers or retrieved from authors when requested (Sandbank et al., 2023). As part of the coding procedures for the previous meta-analysis, the first author and primary coder flagged studies that were potentially eligible for inclusion but reported only change scores or post-intervention means and standard errors estimated from linear models which adjusted for multiple baseline covariates. If the study was less than 10 years old and the corresponding author’s primary email address could be located, the primary coder emailed the corresponding author with an explanation of the project and a request for summary data (unadjusted post-intervention means and standard deviations for intervention and comparison groups on all measured outcomes). Studies for which authors replied with necessary information were coded and included in the meta-analysis. Studies for which authors either failed to reply or responded and declined to provide requested information were coded as excluded due to inadequate reporting. We then coded PDF files of each of these studies to determine whether they included data availability statements which indicated that data would be available upon request. We used the Adobe search function and entered the terms “data,” “availab*,” “request,” “repository,” “OSF,” and “NDAR,” and also searched the text on the first page and before and after the reference section.
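As a rough illustration, the same keyword screen could be scripted rather than run in Adobe. The sketch below assumes the article text has already been extracted to a plain string, and mirrors the wildcard term “availab*” with a regex prefix match:

```python
import re

# Search terms from the data-availability screen; "availab*" is
# expressed as a prefix pattern.
TERMS = {
    "data": r"\bdata\b",
    "availab*": r"\bavailab\w*",
    "request": r"\brequest\b",
    "repository": r"\brepository\b",
    "OSF": r"\bOSF\b",
    "NDAR": r"\bNDAR\b",
}

def screen_for_availability_terms(full_text: str) -> dict[str, bool]:
    """Return which search terms appear anywhere in the extracted text."""
    return {term: bool(re.search(pattern, full_text, re.IGNORECASE))
            for term, pattern in TERMS.items()}

hits = screen_for_availability_terms(
    "De-identified data are available from the corresponding author upon request.")
print([term for term, found in hits.items() if found])
# ['data', 'availab*', 'request']
```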

Procedures for identifying selective reporting (Aim 3)

To examine the extent to which the conclusions of meta-analyses and systematic reviews may be influenced by selective reporting, we sought to identify all of the studies that were included in the previous meta-analysis (Sandbank et al., 2023) which had corresponding registration on any platform (within or outside the United States), code the identified registrations for completeness, and code the report for indicators of selective reporting. The coding manual (and accompanying de-identified data) is available in a public data repository (Sandbank et al., 2023).

Identifying studies associated with registered trials

To identify studies that had a corresponding registration, we searched all reports in our data set for the terms “Trial” and “Regis*” to pull text indicating trial registration numbers. Registrations on any platform (e.g. clinicaltrials.gov, the ISRCTN registry, the Australian New Zealand Clinical Trials Registry) were identified and retrieved for coding. In addition, we cross-referenced studies in our set with trial registrations identified on clinicaltrials.gov to match any trials to published papers in which authors failed to cite trial registrations.

Coding registration completeness and selective reporting

To capture evidence of selective reporting, we coded each identified registration-publication pair for registration timing, registration completeness, and indicators of selective reporting. Our selective reporting codes were informed in part by similar investigations in other fields (Wayant et al., 2017). The coding manual is available in an open-access repository at https://osf.io/fs8qu/. In every case, we used the earliest version of registration and ignored registration changes that occurred after the start of the study.

Registration timing

Studies were coded as pre-registered when the registration date preceded the study start date reflected in the protocol, as late-registered when the registration date followed the study start date, and as unclear when one or both of these dates could not be determined.

Registration completeness

Trial registrations were coded in terms of whether they adequately specified: (a) trial arms, (b) outcomes, (c) assessment timepoints, and (d) analytic approach.

Trial arms

Registrations were coded as having adequately specified trial arms if they clearly described an intervention and comparison group that mapped onto arms described in the corresponding published report, either in name or detailed description. For example, if the intervention group was named in the registration and the paper as “Early Start Denver Model,” the trial arm was coded as adequately specified. However, if the intervention group listed in the registration was only vaguely named (e.g. “parent-mediated intervention”) with no other description, we coded trial arms as being inadequately specified.

Outcomes

Outcomes were coded as adequately specified if the registration listed specific, named measures as outcomes. Examples of outcomes that might be coded as adequately specified include “Mullen Scales of Early Learning,” “Autism Diagnostic Observation Schedule,” or observational measures such as “Number of Different Words Used in Parent-Child Interaction.” Outcomes were coded as inadequately specified if the registration did not list outcomes at all, or listed only constructs, such as “social communication outcomes” or “measures of brain function.”

Assessment timepoints

Registrations were coded as having adequately specified assessment timepoints if they detailed the specific timepoints at which participants would be assessed, such as “at study entry and four months after entry.” Assessment timepoints were coded as inadequately specified if they were not described.

Analytic approach

Registrations were coded as having specified an analytic approach if they indicated any planned analytic methods, regardless of specificity. Analytic methods were coded as not specified if the registration did not indicate any analytic plans.

Selective reporting

Registration-publication pairs were cross-referenced to code whether studies evidenced selective reporting. Specific items coded were (a) failure to report a pre-registered outcome, (b) reporting of an unregistered outcome (unless authors indicated in the publication that it was exploratory or unregistered), (c) reporting of a primary outcome as secondary or a secondary outcome as primary, (d) mismatched assessment timepoints, and (e) change in analysis approach without justification. Instances where studies deviated from registration but authors indicated the deviation was exploratory or unregistered were not flagged as selective reporting.

Failure to report pre-registered outcome

Studies were coded as failing to report a pre-registered outcome if an outcome was indicated in the registration but was not reported in the paper. This item was coded as no if either (a) all of the outcomes listed in the registration were reported in the paper or (b) the registration failed to specify outcomes.

Reports unregistered outcome

Studies were coded as reporting an unregistered outcome if the study reported an outcome that was not listed in the registration, unless the authors clearly indicated in the report which specific outcomes were unregistered or exploratory. Studies which had corresponding registrations that did not adequately describe outcomes were also coded as having reported unregistered outcomes. If the outcomes listed in the registration matched the outcomes reported in the study, this selective reporting indicator was coded as “no.”

Reports primary outcome as secondary or secondary outcome as primary

Studies were coded as reporting a primary outcome as secondary or a secondary outcome as primary if the coder determined that the following three things were true: (a) the registration specifically indicated that some outcomes were primary and some were secondary, (b) the study specifically indicated that some outcomes were primary and some were secondary, and (c) the outcomes indicated as primary/secondary in the registration and the study did not match. This item was coded “no” if (a) the registration failed to indicate which outcomes were primary or secondary, (b) the study failed to indicate which outcomes were primary or secondary, or (c) both the registration and the study indicated matching primary and secondary outcomes.

Mismatched assessment timepoints

Study-registration pairs were coded as having mismatching assessment timepoints if there were clear discrepancies between timepoints indicated in the registration and the paper (e.g. the registration indicated participants would be measured at 2, 6, and 12 months after baseline, but the paper exclusively reported outcomes at 5 months). This item was also coded “yes” if extra assessment timepoints were reported. For example, if the registration indicated participants would be assessed only at 6 months, but the study reported outcomes measured at 3 and 6 months, this would also have been coded as mismatching assessment timepoints (unless authors clearly indicated in the study that the additional assessment timepoint reported had not been registered). This item was coded as “no” if either (a) the registration failed to adequately specify assessment timepoints or (b) the registration specified timepoints which matched those indicated in the study.

Change in analysis approach without justification

Studies were coded as reporting unjustified changes to the analytic approach if all of the following were true: (a) the registration specified an analysis approach, (b) the analysis approach described in the paper greatly differed from that specified in the registration, and (c) the authors failed to indicate that the analytic approach differed from the registration and provide a justification. If any of these items were not true, or if researchers did not specify an analytic approach in the registration, this item was coded as “no.”
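To make the outcome-related decision rules concrete, here is a minimal sketch of how the first two indicators could be coded programmatically. The data structure and field names are hypothetical, not taken from the coding manual:

```python
from dataclasses import dataclass, field

@dataclass
class RegistrationPair:
    """Hypothetical record for one registration-publication pair."""
    registered_outcomes: set[str] = field(default_factory=set)
    reported_outcomes: set[str] = field(default_factory=set)
    labeled_exploratory: set[str] = field(default_factory=set)

def code_outcome_indicators(pair: RegistrationPair) -> dict[str, bool]:
    # Indicator (a): a registered outcome is missing from the report.
    # Vacuously "no" when the registration listed no outcomes.
    failed_to_report = bool(pair.registered_outcomes - pair.reported_outcomes)

    # Indicator (b): an outcome was reported that was never registered
    # and not flagged as exploratory; also coded "yes" whenever the
    # registration did not adequately specify outcomes at all.
    unflagged_extras = (pair.reported_outcomes
                        - pair.registered_outcomes
                        - pair.labeled_exploratory)
    reported_unregistered = (not pair.registered_outcomes) or bool(unflagged_extras)

    return {"failed_to_report_registered_outcome": failed_to_report,
            "reported_unregistered_outcome": reported_unregistered}

# Example: the registration named the ADOS and MSEL, but the report
# covers only the MSEL plus an unregistered parent questionnaire.
pair = RegistrationPair(registered_outcomes={"ADOS", "MSEL"},
                        reported_outcomes={"MSEL", "parent questionnaire"})
print(code_outcome_indicators(pair))
# {'failed_to_report_registered_outcome': True,
#  'reported_unregistered_outcome': True}
```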

Coding reliability

Registration-publication coding was completed by three separate primary coders, who independently overlapped on 12 registration-publication pairs (20.3%) to gauge reliability. Reliability, calculated using Cohen’s kappa, was 0.835, 95% CI [0.744, 0.925], reflecting strong agreement. Given that reliability was strong, primary codes were designated as the final codes.
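As context for the statistic (not part of the original analysis), Cohen’s kappa corrects raw percent agreement for the agreement expected by chance, and the interval reported above is consistent with the conventional normal-approximation form:

$$\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad 95\%\ \mathrm{CI} = \kappa \pm 1.96 \cdot SE(\kappa)$$

where $p_o$ is the observed proportion of agreement between coders and $p_e$ is the proportion of agreement expected by chance given each coder’s marginal code frequencies. Values above 0.8 are commonly interpreted as strong agreement.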

Results

Reporting failures (Aim 1): what proportion of US trials registered on clinicaltrials.gov have posted results and corresponding published papers?

Of the 84 registered trials we identified on clinicaltrials.gov that had been completed at least 1 year before the search date, only six (7.1%) had results posted on the platform. In addition, only 53 (64%) of the 84 completed trials had associated published papers.

Inadequate reporting (Aim 2): of reports which were potentially eligible for inclusion in a large meta-analysis of nonpharmacological interventions for young autistic children, what proportion were excluded due to inadequate reporting and lack of data sharing?

Of 385 reports identified as eligible for inclusion in the prior meta-analysis, 95 (24.6%) were excluded due to inadequate reporting and lack of data sharing. We were unable to contact the authors of three reports because we could not locate their email addresses, and of two reports because the corresponding author had died. An additional 15 reports had been published more than 10 years earlier; thus, we did not reach out to the authors of those reports, based on the assumption that the data were no longer available. We then checked for data availability statements in the remaining 75 studies that: (a) did not have the necessary data in the published manuscript, (b) were less than 10 years old, and (c) were published by authors who failed to respond or declined to supply data when we contacted them. We were unable to locate full-text files for two of the 75 studies. The remaining 73 studies were checked at the full-text level. Of these, four included published statements indicating data were available upon request, and three indicated that they would follow the data-availability requirements of the journal or funder.

Selective reporting (Aim 3): what proportion of studies included in the prior meta-analysis were registered on any platform? Of those studies included in the meta-analysis which were registered, what proportion evidenced inadequate or incomplete registration, late registration, or practices associated with selective reporting?

Registration timing

Of 252 studies included in the prior meta-analysis, 56 studies (22%)1 had corresponding registrations on any platform that we were able to locate and code, and three studies had registrations which were either irretrievable or not published in English. Registration platforms represented in the data set included clinicaltrials.gov (34), the ISRCTN registry (formerly known as the International Standard Randomized Controlled Trial Number registry; 8), the Australian New Zealand Clinical Trials Registry (6), the University Hospital Medical Information Network Clinical Trials Registry (3), the Deutsches Register Klinischer Studien (1), the Dutch Trial Register (1), the Clinical Trials Registry—India (1), and the What Works Clearinghouse’s Registry of Randomized Controlled Trials (1). Of the studies for which we were able to identify corresponding registrations, 13 (23.2%) were pre-registered, 40 (71.4%) were late registered, and three (5.4%) had unclear registration or study start dates. De-identified data reflecting study-registration coding are available in an open access repository at https://osf.io/fs8qu/.

Evidence of inadequate/incomplete registration

Among the 56 registrations, eight (14.2%) failed to adequately specify intervention and comparison groups, while 12 (21.4%) failed to specify study outcomes, two (3.5%) failed to specify assessment timepoints, and 50 (89.2%) failed to specify analytic approach. When these aspects of registration were considered together, only five registrations (8%) fully specified all four.

Evidence of selective reporting

Of the 56 studies with registrations, 20 (35.7%) failed to report a registered outcome, 34 (60.7%) reported unregistered outcomes without labeling them as exploratory, 13 (23.2%) reported primary outcomes as secondary outcomes or vice versa, and 24 (42.8%) had mismatched assessment timepoints. Two of the six studies that specified analytic methods in registration changed them in the study without justification. Of the five studies that adequately specified all aspects of the study in registration, only one (an unpublished dissertation) was not flagged for any evidence of selective reporting.

Discussion

This investigation documented the extent of problematic reporting practices in early childhood nonpharmacological autism intervention research. Results suggest that conclusions about evidence-based practice may be substantially threatened by (a) investigator failure to report the results of completed trials, (b) inadequate reporting and failure to share data upon request, (c) limited and inadequate pre-registration practices, and (d) selective reporting of positive findings in lieu of null or negative findings.

Failure to report completed trials

We found that over a third of completed trials of nonpharmacological interventions for young autistic children registered on clinicaltrials.gov had not been published a year (or longer) after completion, and that over 92% of completed trials had no results posted on clinicaltrials.gov, despite policies indicating that results of US federally funded clinical trials should be reported within one year of their completion (National Institutes of Health, 2016). This is consistent with findings of broader investigations of US trial publication and results reporting rates, which found that approximately 30% to 40% of registered trials are never published in journals (Chen et al., 2016; Ross et al., 2012), and that only 10% of drug or device trials conducted by academic institutions have posted results on clinicaltrials.gov within a year of trial completion (i.e. the legally mandated deadline; Piller, 2015). The broad failure of investigators to post results of completed trials on clinicaltrials.gov is particularly concerning, given that the stated intention of the policy mandating results reporting was to mitigate the impact of selective reporting practices and increase public accessibility of findings, thereby allowing patients and doctors to better gauge the potential benefits and safety of available treatments (Zarin et al., 2017). Because a substantial proportion of clinical trials are never published in scholarly journals, and most scholarly articles are broadly inaccessible to the public due to publisher paywalls, clinicaltrials.gov was created as a publicly accessible and user-friendly mechanism to bridge the barriers inhibiting dissemination of the findings of publicly funded research to the public that funded it. Our results, and those of others, suggest that, thus far, these efforts have failed.

Both the US law governing results reporting of drug and device trials and the policy detailing expectations for NIH-funded clinical trials specify that failure to comply with reporting mandates could result in enforcement actions, including termination of the award as well as withholding of further federal awards from the “project or program” (Uniform Administrative Requirements, 2014/2023). However, in reviewing the registered clinical trials that had no accompanying posted results or published reports, our team noted that some scholars appeared to have won several federal grants despite never publishing or reporting findings of previously funded clinical trials. Although the US federal government has the ability to impose fines on institutions that fail to report drug or device trial results, investigative reporting on this issue suggested that, as of 2015, it had never done so (Piller, 2015).

A key limitation of this aspect of the investigation is that it was restricted primarily to trials registered on clinicaltrials.gov; consequently, the findings cannot be assumed to be representative of practice outside the United States. In fact, a similar investigation of results reporting on the European Union (EU) Clinical Trials Register (EUCTR; registration there is required for medicinal product trials conducted in the EU) found that just over half of trials (50.5%) had associated results reported on the platform. Reporting results on clinical registry platforms is particularly important, both because it makes the findings from research readily accessible to the public and because evidence suggests that results and adverse event reporting tends to be more complete on structured results reporting platforms than in papers published in academic journals (Hartung et al., 2014). Although the clinicaltrials.gov and EUCTR platforms readily support this type of results reporting, many registries do not.

Inadequate reporting and lack of data sharing

Our results further suggest that inadequate reporting, lack of investigator availability, and poor data sharing practice prevented us from including a substantial portion of eligible studies in past meta-analytic estimates. Although a small percentage of papers that were excluded for this reason included data-sharing statements indicating that data was available upon request, the vast majority did not. This was a surprising finding, given that journal policies increasingly require the inclusion of such statements in published reports. Many investigators failed to respond to emails when contacted, and others declined to share summary data, citing either a lack of time to gather what was needed, a lack of access to the data (e.g. because records had not been kept beyond 3 years after study completion) or a desire that their study be excluded from meta-analysis. It should be noted that the numbers reported here do not reflect the full number of papers that reported results in a way that prevented appropriate effect size derivation. Some authors of eligible studies who reported results from change scores or adjusted linear models also reported unadjusted means and standard deviations for each outcome in supplementary materials, which allowed our team to derive this information without contacting them. Others provided summary data upon request when contacted. These practices and others, such as depositing de-identified data in an open-access repository, are recommended to ensure that primary studies are included in all relevant meta-analyses for which they are eligible (Chow et al., 2023).

Inadequate and late registration

Our findings suggest that when trials are registered, the vast majority are registered late, often years after the trial start date. Moreover, even when investigators registered trials while they were ongoing or nearly complete, trial registrations almost never fully specified all relevant dimensions of the trial (i.e. the intervention and comparison conditions, the outcomes, the assessment timepoints, and the analytic approach). The overwhelming inadequacy of registration appears to be largely (though not completely) driven by investigator failure to specify their plans for analysis. This may be exacerbated by registration platforms that do not prompt investigators to detail their analytic approach. For example, while the Australian New Zealand Clinical Trials Registry includes a field prompting investigators to indicate “statistical methods/analysis,” clinicaltrials.gov does not. Given that statistical fishing and hypothesizing after results are known (HARKing; Kerr, 1998) are practices that are known to bias findings, but that can only be detected via pre-registration of the analytic approach, it is vital that registration platforms clearly prompt investigators to specify an analytic plan in registration. Including detailed information about the analytic approach in the registration, as well as detailed information about other aspects of the trial, should not be difficult for investigators leading federally funded clinical trials, as they will already have detailed these procedures at length in grant applications.

Selective reporting of positive findings

Although we were limited in our investigation of selective reporting by the scarcity and inadequacy of trial registration, we found evidence suggesting selective reporting of trial results may be fairly common. Many published reports detailed findings for unregistered outcomes and failed to report findings for registered outcomes which were presumably measured. Similarly, investigators often reported primary outcomes as secondary or vice versa. This is consistent with investigations of selective reporting in other fields, which suggest that one indicator of selective reporting, primary and secondary outcome switching, may occur in a majority of published reports (TARG Meta-Research Group and Collaborators, 2023). In fact, when all indicators of selective reporting (i.e. failure to report a registered outcome, reporting of unregistered outcomes, reporting of primary outcome as secondary or vice versa, reporting outcome assessment timepoints that do not align with preregistration, or reporting an analysis that differs from that specified in registration without justification) were considered, only 10 of 56 studies were not flagged for a single indicator. However, many of those registrations were retrospective or failed to adequately specify their analysis approach, preventing our team from detecting unjustified deviations from protocol in that area. Given that the vast majority of studies that could be examined for selective reporting practices evidenced at least one, it is reasonable to assume that a similar proportion (or even higher proportion) of unregistered published studies are also affected by these practices. In fact, only one study in our sample, an unpublished dissertation, was completely and adequately registered and bore no evidence of selective reporting; even this study was retrospectively registered.

Recommendations for change

When considered in totality, our results suggest that problematic trial reporting practices may have a considerable and concerning influence on the conclusions of systematic reviews and meta-analyses focused on evidence-based practices designed to support young autistic children. When these practices operate in tandem, they can introduce compounding error into estimates of intervention effects. That is, when trials that produce null and negative results go unreported and unpublished, and when investigators selectively report only positive findings of other trials, meta-analysts are unable to include null and negative effect estimates in summary effect estimation, and the resulting summary effects are substantially positively biased. This compounded reporting failure can ultimately lead to the designation of practices which are ineffective or even harmful as being evidence-based.

To address this problem, we suggest several recommendations that are aligned with efforts to improve registration and reporting practices in other fields (see Serghiou et al., 2023 for recommendations for biomedical clinical trials). First, federal funding agencies must begin enforcing mandates for results reporting on clinicaltrials.gov by withholding funding for institutions or investigators that broadly fail to comply with stated policies. Biosketch sections in grant applications designated for listing prior and current grant funding should be amended to require links to corresponding registration for all completed awards that have funded clinical trials, and reviewers should be provided with specific instructions to verify that previously completed trials have posted results as a prerequisite to winning subsequent federal funding.

Second, registration platforms should be amended to ensure that they provide investigators with specific prompts to describe all essential study procedures (especially analytic approach) with sufficient detail. Platforms could encourage investigators to use text from winning grant applications where methods were specified at length, which would ensure that methods were sufficiently described while also easing the process of pre-registration for investigators who may be stretched for time. Alternatively, platforms may allow users to link to other registration templates (such as those offered by the Open Science Framework, which offer specific prompts to investigators to specify in detail how data will be handled). Finally, platforms might consider altering the graphical user interface to more clearly alert both investigators and the public when the registration is incomplete or late (e.g. by highlighting the registration in red).

Third, we recommend that autism-specific journals require pre-registration as a prerequisite for publication of clinical trials, and provide reviewers with templates for judging the adequacy of registration during review, to ensure that each trial was not only pre-registered but also registered completely. In addition, registered protocols should be included in supplementary materials for published articles, so that they are readily available to readers. Although a number of journals across fields publish autism-relevant clinical trial research, our prior work has shown that nearly half of the controlled studies of early childhood autism interventions are published in just five journals (Sandbank et al., 2020). Therefore, a collective agreement among autism-specific journals to require pre-registration and publish registered reports (where study protocols are detailed and reviewed in depth and journals agree to publish results if protocols are followed) could substantially improve the rate and adequacy of pre-registration in this field. This process has already begun, as the journals Autism Research and Autism recently began requiring that clinical trials published in the journal be prospectively registered. In addition, Research in Autism Spectrum Disorders now offers registered reports as a submission option, and Autism will be offering this publication option soon (S. Fletcher-Watson, personal communication, 3 November 2023). Pre-registration requirements need not be restricted to studies employing group designs, as pre-registration can also enhance the transparency and trustworthiness of single case design research (Johnson & Cook, 2019), as well as nonexperimental research and secondary analyses of pre-existing data sets (Weston et al., 2019). Although some have expressed concern that pre-registration requirements might set unattainable standards for autism researchers in low- and middle-income countries (LMICs) and, therefore, reduce the contribution of voices from these countries, data from Autism Research suggest that the introduction of this requirement did not reduce the representation of studies from LMICs and may actually have improved it (Anagnostou, 2023).

Finally, journals must do more to ensure that investigators report all relevant information about completed trials. This can be accomplished by requiring that authors adhere to the most recent CONSORT reporting checklist, which compels them to define pre-specified primary and secondary outcomes (Schulz et al., 2010), and by prompting authors either to report unadjusted means and standard deviations for all measured outcomes and pre-intervention covariates (including subscales of standardized assessments), or to deposit full de-identified data in supplementary materials, an open science repository, or the National Institute of Mental Health Data Archive (NDA), a platform initially created as the National Database of Autism Research (NDAR) to house de-identified data from federally funded autism research.

Conclusion

In recent years, systematic reviews and evaluations have uncovered numerous issues in autism intervention research, including the design of research with significant risks of bias, dissemination of findings with undeclared conflicts of interest, and failure to monitor or report adverse events (Bottema-Beutel et al., 2021a, 2021b; Sandbank et al., 2020). In this evaluation, we show that there are also substantial registration and reporting inadequacies that compound our inability to assess the promise interventions hold for effecting positive change in the autistic children for whom they are developed. Funding bodies, journal leadership, and principal investigators who study autism interventions could play important roles in changing norms around these issues, to ensure that the money and researcher effort put toward autism intervention research culminate in advances that will benefit autistic people.

1.

Given that clinicaltrials.gov and ISRCTN were not established until 2000 and that other platforms were created even more recently, it is important to note that all 56 of the registered studies represented in this data set were published after the year 2000. In addition, 39 of the 56 registered studies were published after 2017, when the requirement to register became official NIH policy, though we reiterate that studies and platforms from within and outside the United States were included in the second and third parts of this investigation. These 39 studies account for 28% of the studies in our data set that were published after 2017.

Footnotes

Authors’ Note: M.S., PhD, is an Assistant Professor in the Division of Occupational Science and Occupational Therapy in the Department of Health Sciences at The University of North Carolina at Chapel Hill. K.B.-B., PhD, is an Associate Professor of Special Education in the Lynch School of Education and Human Development at Boston College. Y.-C.S., MSOT, is a PhD candidate in the Division of Occupational Science and Occupational Therapy in the Department of Health Sciences at The University of North Carolina at Chapel Hill. N.C., PhD, is a research associate in Inclusive Educational and Clinical Programs in the Department of Curriculum and Instruction at the University of Arkansas. J.I.F., PhD, CCC-SLP is a Research Fellow in the Department of Hearing and Speech Sciences at Vanderbilt University Medical Center and an affiliate of the Frist Center for Autism and Innovation at Vanderbilt University. T.W., PhD, CCC-SLP is an Assistant Professor in the Department of Hearing and Speech Sciences at Vanderbilt University Medical Center, Vanderbilt Brain Institute, Vanderbilt Kennedy Center, and the Frist Center for Autism and Innovation, as well as Adjunct Associate Professor in the Department of Communication Sciences and Disorders at the John A. Burns School of Medicine at the University of Hawaii at Manoa.

Community Involvement Statement: Two of the authors of this work are parents of autistic individuals. There was no other community involvement in this work.

The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: M.S. has received fees for presenting research findings in invited talks and providing expert evidence on the topic of early childhood autism interventions in court hearings. She previously taught courses in a program that was accredited by the Behavior Analyst Certification Board on both behavioral and NDBI early childhood interventions. K.B.-B. has previously received fees for consulting with school districts on intervention practices for autistic children and teaches courses on autism interventions in her role as an Associate Professor of Special Education. She has also accepted speaker fees to discuss her work on research quality, adverse events, and researcher conflicts of interest as they pertain to autism intervention research. She also receives royalties for a co-edited book titled Clinical Guide to Early Interventions for Children with Autism, published by Springer. J.I.F. has been paid to provide adaptive horseback riding lessons and has received grant funding from the National Institutes of Health to study the efficacy of interventions geared toward infant siblings of autistic children. He is employed in a department that teaches students to provide early communication therapies. N.C. is a Board-Certified Behavior Analyst at the Doctoral level (BCBA-D) and is the current president-elect of the Arkansas Association for Behavior Analysis. She teaches courses in a university program accredited by the Behavior Analyst Certification Board and formerly provided quality assurance and consultation services for the Arkansas Medicaid waiver program, which provides behaviorally based services for children with autism ages 0 to 8. T.W. has previously been paid to provide traditional behavioral, naturalistic developmental behavioral, and developmental interventions to young children on the autism spectrum; has received grant funding from internal and external agencies, including the National Institutes of Health and the Vanderbilt Institute for Clinical and Translational Research, to study the efficacy of various interventions geared toward young children with autism; and is employed by the Department of Hearing and Speech Sciences at Vanderbilt University Medical Center, which offers intervention services for autistic children via their outpatient clinics and trains clinical students in the provision of treatments delivered over the course of early childhood. All other authors have no conflicts of interest to declare.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

1. Anagnostou E. (2023, May 15–18). A journal editor perspective [Presentation]. 22nd Annual Meeting of the International Society for Autism Research, Stockholm, Sweden.
2. Bashir R., Bourgeois F. T., Dunn A. G. (2017). A systematic review of the processes used to link clinical trial registrations to their published results. Systematic Reviews, 6(123), 1–17. 10.1186/s13643-017-0518-3
3. Bottema-Beutel K., Crowley S., Sandbank M., Woynaroski T. (2021a). Adverse event reporting in autism intervention literature for young children. Autism, 25, 322–335.
4. Bottema-Beutel K., Crowley S., Sandbank M., Woynaroski T. (2021b). Conflicts of interest (COIs) in autism early intervention research: A meta-analysis of COI influences on intervention effects. Journal of Child Psychology and Psychiatry, 62, 5–15.
5. Boutron I., Page M. J., Higgins J. P. T., Altman D. G., Lundh A., Hróbjartsson A. (2022). Chapter 7: Considering bias and conflicts of interest among the included studies. In Higgins J. P. T., Thomas J., Chandler J., Cumpston M., Li T., Page M. J., Welch V. A. (Eds.), Cochrane handbook for systematic reviews of interventions (version 6.3). John Wiley & Sons. www.training.cochrane.org/handbook
6. Chan A. W., Hróbjartsson A., Haahr M. T., Gøtzsche P. C., Altman D. G. (2004). Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. Journal of the American Medical Association, 291(20), 2457–2465. 10.1001/jama.291.20.2457
7. Chen R., Desai N. R., Ross J. S., Zhang W., Chau K. H., Wayda B., Murugiah K., Lu D. Y., Mittal A., Krumholz H. M. (2016). Publication and reporting of clinical trial results: Cross sectional analysis across academic medical centers. British Medical Journal, 352, Article i637. 10.1136/bmj.i637
8. Chow J. C., Sandbank M., Hampton L. H. (2023). Guidance for increasing primary study inclusion and usability of data in meta-analysis: A reporting tutorial. Journal of Speech, Language, and Hearing Research, 66, 1899–1907. 10.1044/2023_JSLHR-22-00318
9. Cuijpers P., Weitz E., Cristea I. A., Twisk J. (2017). Pre-post effect sizes should be avoided in meta-analyses. Epidemiology and Psychiatric Sciences, 26(4), 364–368. 10.1017/S2045796016000809
10. Deeks J. J., Higgins J. P. T., Altman D. G. (2022). Chapter 10: Analysing data and undertaking meta-analyses. In Higgins J. P. T., Thomas J., Chandler J., Cumpston M., Li T., Page M. J., Welch V. A. (Eds.), Cochrane handbook for systematic reviews of interventions (version 6.3). John Wiley & Sons. www.training.cochrane.org/handbook
11. European Commission. (2012). Commission Guideline-Guidance on posting and publication of result-related information on clinical trials in relation to the implementation of Article 57(2) of Regulation (EC) No 726/2004 and Article 41(2) of Regulation (EC) No 1901/2006. Official Journal of the European Union, 55, 7–10.
12. Food and Drug Administration Amendments Act of 2007, Pub. L. No. 110-85, 21 U.S.C. § 801. https://www.govinfo.gov/content/pkg/PLAW-110publ85/pdf/PLAW-110publ85.pdf#page=82
13. Fredrickson M. J., Ilfeld B. M. (2011). Prospective trial registration for clinical research: What is it, what is it good for, and why do I care? Regional Anesthesia & Pain Medicine, 36(6), 619–624. 10.1097/AAP.0b013e318230fbc4
14. Gabelica M., Bojčić R., Puljak L. (2022). Many researchers were not compliant with their published data sharing statement: A mixed-methods study. Journal of Clinical Epidemiology, 150, 33–41. 10.1016/j.jclinepi.2022.05.019
15. Goldacre B., DeVito N. J., Heneghan C., Irving F., Bacon S., Fleminger J., Curtis H. (2018). Compliance with requirement to report results on the EU Clinical Trials Register: Cohort study and web resource. British Medical Journal, 362, Article k3218. 10.1136/bmj.k3218
16. Hartung D. M., Zarin D. A., Guise J. M., McDonagh M., Paynter R., Helfand M. (2014). Reporting discrepancies between the ClinicalTrials.gov results database and peer-reviewed publications. Annals of Internal Medicine, 160(7), 477–483.
17. Higgins J. P. T., Savović J., Page M. J., Elbers R. G., Sterne J. A. C. (2022). Chapter 8: Assessing risk of bias in a randomized trial. In Higgins J. P. T., Thomas J., Chandler J., Cumpston M., Li T., Page M. J., Welch V. A. (Eds.), Cochrane handbook for systematic reviews of interventions (version 6.3). John Wiley & Sons. www.training.cochrane.org/handbook
18. Hyman S. L., Levy S. E., Myers S. M., & Council on Children with Disabilities, Section on Developmental and Behavioral Pediatrics. (2020). Identification, evaluation, and management of children with autism spectrum disorder. Pediatrics, 145(1), Article e20193447. 10.1542/peds.2019-3447
19. Johnson A. H., Cook B. G. (2019). Preregistration in single-case design research. Exceptional Children, 86(1), 95–112. 10.1177/0014402919868529
20. Jones C. W., Handler L., Crowell K. E., Keil L. G., Weaver M. A., Platts-Mills T. F. (2013). Non-publication of large randomized clinical trials: Cross sectional analysis. British Medical Journal, 347, Article f6104. 10.1136/bmj.f6104
21. Kampman J. M., Weiland N. H. S., Hollmann M. W., Repping S., Hermanides J. (2021). High incidence of outcome switching observed in follow-up publications of randomized controlled trials: Meta-research study. Journal of Clinical Epidemiology, 137, 236–240. 10.1016/j.jclinepi.2021.05.003
22. Kaplan R. M., Irvin V. L. (2015). Likelihood of null effects of large NHLBI clinical trials has increased over time. PLOS ONE, 10(8), Article e0132382. 10.1371/journal.pone.0132382
23. Kerr N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217.
24. National Institutes of Health. (2016, September 16). NIH Policy on the dissemination of NIH-funded clinical trial information (NIH Publication No. NOT-OD-16-149). U.S. Department of Health and Human Services. https://grants.nih.gov/grants/guide/notice-files/not-od-16-149.html
25. National Institutes of Health. (2021a, March). Research on autism spectrum disorders, R01 clinical trial optional (NIH Publication No. PA-21-201). U.S. Department of Health and Human Services. https://grants.nih.gov/grants/guide/pa-files/PA-21-201.html
26. National Institutes of Health. (2021b, March). Research on autism spectrum disorders, R21 clinical trial optional (NIH Publication No. PA-21-200). U.S. Department of Health and Human Services. https://grants.nih.gov/grants/guide/pa-files/PA-21-200.html
27. National Institutes of Health. (2021c, March). Research on autism spectrum disorders, R03 clinical trial optional (NIH Publication No. PA-21-199). U.S. Department of Health and Human Services. https://grants.nih.gov/grants/guide/pa-files/PA-21-199.html
28. Nosek B. A., Alter G., Banks G. C., Borsboom D., Bowman S. D., Breckler S. J., Buck S., Chambers C. D., Chin G., Christensen G., Contestabile M., Dafoe A., Eich E., Freese J., Glennerster R., Goroff D., Green D. P., Hesse B., Humphreys M., … Yarkoni T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. 10.1126/science.aab2374
29. Nosek B. A., Ebersole C. R., DeHaven A. C., Mellor D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. 10.1073/pnas.1708274114
30. Page M. J., Higgins J. P. T. (2016). Rethinking the assessment of risk of bias due to selective reporting: A cross-sectional study. Systematic Reviews, 5(108), 1–8. 10.1186/s13643-016-0289-2
31. Piller C. (2015, December 13). Failure to report: A STAT investigation of clinical trials reporting. STAT. https://www.statnews.com/2015/12/13/clinical-trials-investigation/
32. Ross J. S., Tse T., Zarin D. A., Xu H., Zhou L., Krumholz H. M. (2012). Publication of NIH funded trials registered in ClinicalTrials.gov: Cross sectional analysis. British Medical Journal, 344, Article d7292. 10.1136/bmj.d7292
33. Sandbank M., Bottema-Beutel K., Crowley S., Cassidy M., Dunham K., Feldman J. I., Crank J., Albarran S. A., Raj S., Mahbub P., Woynaroski T. G. (2020). Project AIM: Autism intervention meta-analysis for studies of young children. Psychological Bulletin, 146(1), 1–29. 10.1037/bul0000215
34. Sandbank M., Bottema-Beutel K., LaPoint S. C., Feldman J. I., Barrett D. J., Caldwell N., Dunham K., Crank J., Albarran S., Woynaroski T. (2023). Autism intervention meta-analysis of early childhood studies (Project AIM): Updated systematic review and secondary analysis. British Medical Journal, 383, Article e076733. 10.1136/bmj-2023-076733
35. Saric F., Barcot O., Puljak L. (2019). Risk of bias assessments for selective reporting were inadequate in the majority of Cochrane reviews. Journal of Clinical Epidemiology, 112, 53–58. 10.1016/j.jclinepi.2019.04.007
36. Schulz K. F., Altman D. G., Moher D. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. British Medical Journal, 340(7748), Article c332. 10.1136/bmj.c332
37. Serghiou S., Axfors C., Ioannidis J. P. A. (2023). Lessons learnt from registration of biomedical research. Nature Human Behaviour, 7, 9–12.
38. Steinbrenner J. R., Hume K., Odom S. L., Morin K. L., Nowell S. W., Tomaszewski B., Szendrey S., McIntyre N. S., Yucesoy-Ozkan S., Savage M. N. (2020). Evidence-based practices for children, youth, and young adults with autism. The University of North Carolina at Chapel Hill, Frank Porter Graham Child Development Institute, National Clearinghouse on Autism Evidence and Practice Review Team. https://ncaep.fpg.unc.edu/sites/ncaep.fpg.unc.edu/files/imce/documents/EBP%20Report%202020.pdf
39. TARG Meta-Research Group & Collaborators. (2023). Estimating the prevalence of discrepancies between study registrations and publications: A systematic review and meta-analyses. BMJ Open, 13(10), Article e076264. 10.1136/bmjopen-2023-076264
40. Trembath D., Waddington H., Sulek R., Varcin K., Bent C., Ashburner J., Eapen V., Goodall E., Hudry K., Silove N., Whitehouse A. (2021). An evidence-based framework for determining the optimal amount of intervention for autistic children. The Lancet Child & Adolescent Health, 5(12), 896–904. 10.1016/S2352-4642(21)00285-6
41. Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards, 45 C.F.R. § 75.371 (2014, rev. 2023). https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-A/part-75/subpart-D/subject-group-ECFRb1309e6966399c7/section-75.371
42. Voils C. I., Crandell J. L., Chang Y., Leeman J., Sandelowski M. (2011). Combining adjusted and unadjusted findings in mixed research synthesis. Journal of Evaluation in Clinical Practice, 17(3), 429–434. 10.1111/j.1365-2753.2010.01444.x
43. Wallach J. D., Boyack K. W., Ioannidis J. P. (2018). Reproducible research practices, transparency, and open access data in the biomedical literature, 2015–2017. PLOS Biology, 16(11), Article e2006930. 10.1371/journal.pbio.2006930
44. Wayant C., Scheckel C., Hicks C., Nissen T., Leduc L., Som M., Vassar M. (2017). Evidence of selective reporting bias in hematology journals: A systematic review. PLOS ONE, 12(6), Article e0178379. 10.1371/journal.pone.0178379
45. Weston S. J., Ritchie S. J., Rohrer J. M., Przybylski A. K. (2019). Recommendations for increasing the transparency of analysis of preexisting data sets. Advances in Methods and Practices in Psychological Science, 2(3), 214–227. 10.1177/2515245919848684
46. What Works Clearinghouse. (2022). What Works Clearinghouse procedures and standards handbook, version 5.0. U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance (NCEE). https://ies.ed.gov/ncee/wwc/Handbooks
47. Zarin D. A., Tse T., Williams R. J., Rajakannan T. (2017). Update on trial registration 11 years after the ICMJE policy was established. New England Journal of Medicine, 376(4), 383–391. 10.1056/NEJMsr1601330
