Author manuscript; available in PMC: 2023 May 1.
Published in final edited form as: Stroke. 2022 Mar 31;53(5):1802–1812. doi: 10.1161/STROKEAHA.121.038047

The Stroke Preclinical Assessment Network (SPAN): Rationale, Design, Feasibility, and Stage 1 Results

Patrick D Lyden 1,2,*, Francesca Bosetti 3, Márcio A Diniz 4, André Rogatko 4, James I Koenig 3, Jessica Lamb 1, Karisma A Nagarkatti 1, Ryan P Cabeen 5, David C Hess 6, Pradip Kamat 6, Mohammad B Khan 6, Kristofer Wood 6,&, Krishnan Dhandapani 7, Ali S Arbab 8, Enrique C Leira 9,10,13, Anil K Chauhan 11, Nirav Dhanesha 11, Rakesh B Patel 11, Mariia Kumskova 11, Daniel Thedens 12, Andreia Morais 14, Takahiko Imai 14, Tao Qin 14, Cenk Ayata 14,15, Ligia S B Boisserand 16, Alison L Herman 16, Hannah E Beatty 16, Sofia E Velazquez 16,17, Sebastian Diaz-Perez 17, Basavaraju G Sanganahalli 18, Jelena M Mihailovic 18, Fahmeed Hyder 18,19, Lauren H Sansing 16,17, Raymond C Koehler 20, Steven Lannon 20, Yanrong Shi 20, Senthilkumar S Karuppagounder 21, Adnan Bibic 22, Kazi Akhter 22, Jaroslaw Aronowski 23, Louise D McCullough 23, Anjali Chauhan 23, Andrew Goh 23, SPAN Investigators
PMCID: PMC9038686  NIHMSID: NIHMS1787471  PMID: 35354299

Abstract

Cerebral ischemia and reperfusion initiate cellular events in the brain that lead to neurological disability. Investigating these cellular events provides ample targets for developing new treatments. Despite considerable work, no such therapy has translated into successful stroke treatment. Among other issues—such as incomplete mechanistic knowledge and faulty clinical trial design—a key contributor to prior translational failures may be insufficient scientific rigor during pre-clinical assessment: non-blinded outcome assessment; missing randomization; inappropriate sample sizes; and preclinical assessments in young male animals that ignore relevant biological variables such as age, sex, and comorbid diseases. Promising results are rarely replicated in multiple laboratories. We sought to address some of these issues with rigorous assessment of candidate treatments across six independent research laboratories. The Stroke Preclinical Assessment Network (SPAN) implements state-of-the-art experimental design to test the hypothesis that rigorous preclinical assessment can successfully reduce or eliminate common sources of bias in choosing treatments for evaluation in clinical studies. SPAN is a randomized, placebo-controlled, blinded, multi-laboratory trial using a Multi-Arm Multi-Stage (MAMS) protocol to select one or more putative stroke treatments with a postulated high likelihood of success in human clinical stroke trials. The first stage of SPAN implemented procedural standardization and experimental rigor. All participating research laboratories performed middle cerebral artery occlusion surgery adhering to a common protocol and rapidly enrolled 913 mice in the first of 4 planned stages with excellent protocol adherence, remarkable data completion, and low rates of subject loss. SPAN Stage 1 successfully implemented treatment masking, randomization, pre-randomization inclusion/exclusion criteria, and blinded assessment to exclude bias.
Our data suggest that a large, multi-laboratory, pre-clinical assessment effort to reduce known sources of bias is feasible and practical. Subsequent SPAN stages will evaluate candidate treatments for potential success in future stroke clinical trials using aged animals and animals with co-morbid conditions.

Keywords: stroke, animal models, rigor, transparency, reproducibility

INTRODUCTION

Hundreds of proposed stroke treatments entered clinical trials after apparently supportive preclinical data indicated high likelihood of benefit, but only recanalization therapies succeeded1. This widely reported record of clinical translational failures reduced enthusiasm among trialists and sponsors, resulting in a paucity of ongoing stroke clinical trials testing putative cerebroprotectants2, 3. Contemporaneously with the declining enthusiasm for stroke therapeutics, science more widely entered a ‘reproducibility crisis’4, 5. Scientists raised concerns about the lack of reproducibility among basic science publications, including in neuroscience6. The lack of reproducibility between pre-clinical laboratories likely contributes to skepticism and pre-clinical therapeutic nihilism. Further, some have even questioned whether animal models contribute value in the search for effective stroke therapy7. Among many suggestions to improve reproducibility in pre-clinical development, some have argued for enhanced standards of rigor and transparency8, 9. Additional suggestions include testing candidate stroke therapies broadly in multi-site networks3, 10, and there have been a few such attempts already11–13.

Intense analysis of preclinical development programs in stroke and neurodegeneration has identified key problems, some of which may have contributed to falsely favorable treatment evaluations. Several forms of experimental bias have bedeviled animal research in general, and stroke modeling specifically: attrition bias, detection bias, performance bias, confirmation bias, and selection bias, among others7, 14. Minimizing these biases could produce more reliable assessments of candidate stroke treatments. The use of animals with more appropriate key biological variables (age and sex) and comorbid disorders that impact outcome in stroke patients (diabetes, hypertension) is also proposed to improve translational success. Finally and critically, to better model human clinical stroke trials, it is recommended that pre-clinical assessment efforts emphasize behavioral endpoints in addition to traditional quantitative morphometric outcomes15.

The development of a rigorous, multi-laboratory testing network is proposed to accelerate pre-clinical assessment of candidate stroke therapies with a timeline and resources that would not be feasible for a single lab. Such a network could allow investigators to directly compare potential treatments against each other. Natural heterogeneity across different laboratories could be embraced as a bar that an effective therapy must overcome to be deemed likely to have translational potential10, 16. Furthermore, this approach, if successful, ought to increase confidence in the quality and rigor of the preclinical data and guide future clinical trials6, 10, 17.

We sought to design and build a multi-laboratory preclinical assessment network on a foundation of rigor seeking to reduce biases and optimize heterogeneity across laboratories. We included reperfusion as a required design feature, given the clinical opportunities provided by thrombectomy to combine documented reperfusion with cerebroprotection. We sought to demonstrate valid representation of clinical trials results in a model system that could be used in the future with a sense of trust as potential therapies move from the discovery phase to translation to clinical trials.

METHODS

The data that support the findings of this study are available from the corresponding author upon reasonable request. SPAN is a randomized, placebo-controlled, blinded, multi-laboratory trial using a Multi-Arm Multi-Stage (MAMS) protocol to select one or more putative stroke treatments with a postulated high likelihood of success in human clinical stroke trials18–20.

Structure of the network.

The relationship amongst all SPAN participants is illustrated in Figure 1. The Coordinating Center (CC) is charged with drafting all protocols, packaging and distributing treatments, collecting all data, performing quality control on all data, sending data to the statistical center, managing all communications, and coordinating all manuscripts. The SPAN Research Laboratories (Fig. 1) are charged with discussing and reaching consensus on best practices for all protocols, conducting all enrollment, stroke surgeries, image acquisition and behavioral video production. The SPAN image and video repository is the Laboratory of Neuroimaging Image Data Archive (LONI IDA). LONI is charged with housing and labeling all images and video from SPAN and making available masked versions for anonymous analysis. LONI also provides automated image analysis of brain and lesion volume.

Fig. 1. SPAN Structure.


The SPAN network is centrally managed by a Coordinating Center (CC) and consists of 6 SPAN Research Laboratories. Data are captured using REDCap; images and videos are stored in a central, secure repository, the Imaging Data Archive (IDA). A Steering Committee provides central oversight. The NINDS seeks independent advice and guidance from an External Advisory Board, EAB. Centralized randomization and statistical analysis are managed in the Stats core of the CC. Research laboratory numbers are not concordant with numbers in subsequent figures.

The governing body of the network is a Steering Committee, convened by the CC in conjunction with NINDS (Fig. 1). Membership includes the CC, the PI of each SPAN Research Laboratory and NINDS Program staff (Table S2). An independent External Advisory Board (EAB), appointed by and reporting to NINDS, includes basic and clinician scientists with expertise in cerebroprotection, representatives from pharmaceutical and biotech industry, and experts in regulatory affairs, statistics, and clinical trial design.

The network designed and deployed a novel approach to randomized, blinded, and distributed drug evaluation (Fig. S1), intended to provide a secure, masked, cost-efficient, and tightly managed system with centralized quality control. Electronic data capture was implemented using the Research Electronic Data Capture platform (REDCap, https://www.project-redcap.org)21, 22. SPAN has adopted clearly defined Standard Operating Procedures (SOPs) for all activities—the choice of animal models, surgical methods, behavior assessments, assessor training and certification—and an explicit experimental protocol (attached as a supplement)23.

Treatments selected for study in SPAN

During peer-reviewed selection of the SPAN Research Laboratories, the treatments to be studied in SPAN were also peer-reviewed and evaluated by the National Institutes of Health (NIH). The six treatments selected for study in SPAN are shown in Table S3. Tocilizumab (TB) is a recombinant, humanized anti-interleukin-6 (IL-6) receptor monoclonal antibody with FDA approval for treatment of various inflammatory disorders. Veliparib inhibits the DNA repair enzyme poly(ADP-ribose) polymerase (PARP) and has been evaluated in human oncology trials24. Fingolimod (FTY720, Novartis, Basel, Switzerland) is a sphingosine-1-phosphate receptor ligand that retains lymphocytes in the lymph nodes without impairing lymphocyte function25. Uric acid is a potent endogenous scavenger of peroxynitrite and hydroxyl radicals (OH) in humans. Fasudil is an inhibitor of rho-associated protein kinase (ROCK). Remote ischemic conditioning (RIC) behaves like a “neuroprotectant” and “vasculoprotectant” in pre-clinical animal stroke models26–28.

Blinding and treatment packaging, labeling, and shipping.

To facilitate blinding, the drugs were prepared for either intravenous or intraperitoneal delivery, with matching placebo controls. Amber vials were labeled with plastic Cryolabel® labels using permanent acrylic adhesive and were custom color-coded. The intravenous drugs were planned to be given once, via the jugular vein, at the end of MCAo. The intraperitoneal drugs were to be given every 12 hours for 6 doses, the first dose at the end of MCAo.

Two RIC systems (RIC4000, Hatteras Instruments Inc., Cary, NC) were circulated amongst SPAN Research Laboratories. Upon receipt at each laboratory, investigators verified accurate pressure readings from the RIC4000 systems with a Dwyer manometer. The RIC/RIC-sham treatments were given 6 times: the first at the end of MCA occlusion, the second 12 h later, then daily for 4 days. Although randomized, RIC treatment could not be blinded; all behavioral and imaging outcomes from RIC were assessed in a blinded manner.

Animal stroke model

For Stage 1 of SPAN, filament middle cerebral artery occlusion (MCAo) was performed in young male and female C57BL6/J mice. Anesthesia was induced with 4% isoflurane in a 30:70 oxygen:nitrous oxide mixture and maintained with 1–2% isoflurane in the same gas mixture. Bupivacaine and post-operative fluids were given subcutaneously, and water-softened chow was offered to any animals that appeared unable to maintain hydration or feed properly. In Stage 1, all SPAN Research Laboratories used a 60-min occlusion followed by imaging at 48 hours and 30 (±2) days. All animal procedures were approved by each lab’s IACUC.

Rigor, randomization and blinding

To minimize selection bias, animals were enrolled into the SPAN database upon arrival at the study lab. An MRI-compatible bar-coded ear tag (RapID Tag®, San Francisco) was affixed permanently to each subject, and the coded number was registered in the SPAN database. At this point the animal was considered “enrolled,” and any subsequent drop-out was to be documented. To prepare for randomization, a lab completed an “intention to treat” (ITT) form stating the animal bar-code ID numbers it intended to use and the dates it intended to perform surgery.

Randomization was stratified by laboratory, by sex, and by whether the lab was to administer RIC. Once a lab forwarded an ITT form to the CC, the animal was randomly assigned to one of the study treatments. The CC generated an email to the SPAN Research Laboratory informing them of the assigned coded treatment vial and route of administration, or assignment to RIC or RIC sham. At no time did the laboratory have any knowledge of which agent they were using in any animal, other than RIC and RIC sham.
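
The stratified scheme described above can be sketched as block randomization within each (lab, sex, RIC-site) stratum. This is an illustrative sketch only: the arm labels and block mechanics are assumptions, not SPAN's actual implementation, and in SPAN the labs saw only coded vial IDs, never arm names.

```python
import random
from collections import defaultdict

# Hypothetical arm labels; SPAN labs received only coded treatment vials.
ARMS = ["control", "arm_1", "arm_2", "arm_3", "arm_4", "arm_5"]

class StratifiedRandomizer:
    """Block randomization within each (lab, sex, RIC-site) stratum,
    yielding equal allocation among arms inside every stratum."""
    def __init__(self, arms, seed=0):
        self.arms = list(arms)
        self.rng = random.Random(seed)
        self.blocks = defaultdict(list)   # stratum -> remaining assignments

    def assign(self, lab, sex, ric_site):
        stratum = (lab, sex, ric_site)
        if not self.blocks[stratum]:      # refill with a freshly shuffled block
            block = self.arms.copy()
            self.rng.shuffle(block)
            self.blocks[stratum] = block
        return self.blocks[stratum].pop()

r = StratifiedRandomizer(ARMS, seed=42)
assignments = [r.assign("lab1", "F", False) for _ in range(12)]
```

Two full blocks of 12 assignments give each arm exactly two subjects in this stratum, mirroring the equal allocation among arms used in the Stage 1 design.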

To minimize allocation bias, each MCAo surgery was performed by an investigator without knowledge of the assigned treatment. This blinding was intended to eliminate any potential bias in surgical or anesthetic technique. Similarly, post-operative animal care, behavioral assessment recordings, and image acquisition were all completed by investigators unaware of the assigned treatment.

Behavioral Assessment

The schedule of assessments for each subject in SPAN is listed in Table S4. Behavioral tests were to be done at baseline, at reperfusion, and on days 1, 2, 7 (±1), and 30 (±2) after MCAo. SPAN used a modified version of the rodent neurological deficit scale, which includes an assessment for weight bearing and barrel rolling. Rodents tend to recover motor function to a normal score within 2 weeks, even after a large infarction, although not all observers agree29, 30. The corner test was selected as the primary outcome measure. The subject is placed in the corner apparatus for 10 trials and observed for right or left turning out of the corner. We computed a corner turning index as (left turns)/(total turns). In addition, SPAN investigators selected the hanging wire test and the grid walk (foot-fault) test to supplement the results of the corner test.
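
The corner turning index above is simple arithmetic; a minimal sketch, with turn observations encoded as 'L'/'R' (an assumed representation):

```python
def corner_turning_index(turns):
    """(left turns) / (total turns) over the 10 corner-test trials."""
    left = sum(1 for t in turns if t == "L")
    return left / len(turns)

def change_from_baseline(baseline_turns, followup_turns):
    """Post-stroke shift in the index, as reported at days 7 and 30."""
    return corner_turning_index(followup_turns) - corner_turning_index(baseline_turns)

# Example: 6 left and 4 right turns out of 10 trials -> index 0.6
trials = ["L", "L", "R", "L", "R", "L", "R", "L", "L", "R"]
```

An uninjured mouse with no side preference would score near 0.5; a consistent post-stroke turning preference pushes the index toward 0 or 1.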

Corner and grid walk tests were video recorded by the Research Laboratories and then uploaded in a coded fashion into the LONI IDA central repository. After quality control checks, the CC de-identified and randomly assigned each behavior digital recording to 3 raters at other Laboratories in the network. Once viewed, raters submitted their scores to the CC who re-linked the scores from the blinded reviews to the correct subjects with appropriate quality control measures to assure correct linkage.
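
The video distribution step can be sketched as random assignment of each de-identified recording to 3 raters drawn from labs other than the one that produced it. The rater-pool structure and function below are assumptions for illustration, not the CC's actual software.

```python
import random

def assign_raters(video_lab, raters_by_lab, k=3, seed=0):
    """Map each de-identified video ID to k raters from OTHER labs.

    video_lab: {video_id: originating_lab}
    raters_by_lab: {lab: [rater, ...]}
    """
    rng = random.Random(seed)
    assignments = {}
    for vid, lab in video_lab.items():
        # Raters at the originating lab are never eligible for its videos.
        eligible = [r for other, rs in raters_by_lab.items()
                    if other != lab for r in rs]
        assignments[vid] = rng.sample(eligible, k)
    return assignments

# Hypothetical pool: two certified raters per lab
pool = {"lab1": ["r1", "r2"], "lab2": ["r3", "r4"], "lab3": ["r5", "r6"]}
out = assign_raters({"vid_001": "lab1"}, pool)
```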

Imaging and morphometry.

Each subject underwent MR imaging at 48 hours and 30 days after stroke. The purpose of the early scan was to replace the assessment of edema and infarction commonly visualized with TTC (2,3,5-Triphenyltetrazolium chloride) staining. The later scan allowed quantification of brain atrophy (i.e., ventricular enlargement, and tissue volume shrinkage) as an estimate of the tissue impact of stroke. Both assessments offer different but complementary information about the efficacy of a candidate stroke treatment. All MRI scans were uploaded to LONI for blinded, automated analysis.

Data Quality and Monitoring.

As shown in Fig. S1, raw data (videos of behavior; MR images) were uploaded in real time, mitigating fraud and error. Quality control checks for accurate group assignment included weekly audits to assure matching of animal identifiers with assigned treatment vials. The SPAN CC planned to conduct lab visits, including verification of raw data, Good Laboratory Practice compliance, animal use committee approval, and cross-training; these audits were held virtually during the COVID pandemic lockdown.

Training and Certification.

To improve reproducibility across all labs, the CC devised training sessions and broadcast virtual training webinars for all surgeons and behavioral raters. These broadcasts were targeted to specific tasks, e.g., surgery, behavior testing, corner test rating, and digital recording technology. All investigators planning to perform the MCAo surgery were required to perform at least 10 surgeries according to the SPAN SOP, perform TTC staining 48 hours later, and submit these images to the CC. After CC review of the submitted images, surgeons were certified to conduct MCAo procedures for SPAN.

Statistical Design

SPAN selected a novel MAMS design as it is more efficient than multiple individual trials. Recently validated, user-friendly MAMS R code has been implemented20 based on a generalized Dunnett’s procedure31. SPAN implemented the MAMS design with four stages using futility and efficacy boundaries31. The Stage 1 analysis was the first of 3 planned interim analyses, performed after 25%, 50%, and 75% recruitment with equal allocation among arms. The Statistical Analysis Plan (Supplemental Material) provides a detailed description of the planned analysis for each endpoint.

The primary outcome is the 30-day corner test. Secondary outcomes include MRI morphometry and the other behavioral measures. Stopping rules include an estimated treatment effect greater than an upper limit of 50% and less than a lower limit of 6% treatment effect size. To estimate the needed sample size, simulations were performed using the MAMS R package modified to incorporate parallelization (code provided upon request). From prior reports of the corner test in aged mice, we estimated a mean of 0.55 and a standard deviation (SD) of 0.524 for the turning index (left turns/total turns) in the control arm32. We inflated the estimated standard deviation to 1.048 as a conservative assumption to account for probable variability among SPAN Research Laboratories. Power of 90% was calculated under the least favorable configuration, which was defined as the probability of rejecting only one null hypothesis in any stage (without loss of generality, H01: μ1 – μ0 = δ for k = 2, …, 8)33. We show in Table S5 the minimum, maximum, and expected sample sizes under the null hypothesis and LFC hypothesis based on triangular boundaries34. These sample sizes were not adjusted for missing data due to animal death. The projected boundaries at each interim analysis are illustrated in Figure S2.
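
The role of the inflated SD can be illustrated with a Monte Carlo power sketch for a single treatment-vs-control contrast on the turning index. This is not the SPAN analysis: a plain two-sided z-test stands in for the generalized Dunnett procedure and triangular-boundary MAMS machinery, and the per-arm sample size used in the test below is an arbitrary assumption.

```python
import math
import random
import statistics

def two_arm_power(delta, n_per_arm, mu0=0.55, sd=1.048, n_sim=2000, seed=1):
    """Fraction of simulated trials in which a two-sided z-test at alpha=0.05
    detects a mean shift of `delta` in the corner turning index, using the
    conservatively inflated SD (1.048) from the text."""
    rng = random.Random(seed)
    z_crit = 1.96                          # two-sided alpha = 0.05
    se = sd * math.sqrt(2 / n_per_arm)     # SE of the difference in means
    hits = 0
    for _ in range(n_sim):
        ctrl = statistics.fmean(rng.gauss(mu0, sd) for _ in range(n_per_arm))
        trt = statistics.fmean(rng.gauss(mu0 + delta, sd) for _ in range(n_per_arm))
        if abs(trt - ctrl) / se > z_crit:
            hits += 1
    return hits / n_sim
```

Because required sample size scales with the square of the SD, doubling the assumed SD roughly quadruples the animals needed for the same power, which is why the conservative inflation drives the sizes in Table S5 upward.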

RESULTS

Enrollment and Study Populations.

Stage 1 enrollment began on 10/5/2020 and ended on 4/22/2021 after 913 subjects were enrolled (Table S6). The enrollment rate matched the targeted rate (Fig. S3). All tables and graphs presented in this report are pooled global results including all treatment and control groups, so any treatment effect would add variance to the overall mean values of all outcomes.

A CONSORT-style diagram (Fig. 2) details the flow of subjects from enrollment to final results (http://www.consort-statement.org). Eleven enrolled subjects (5 female, 6 male) were determined to be ineligible and never randomized. The intention-to-treat (ITT) population therefore included 902 subjects, of which 2 were later excluded prior to beginning MCAo surgery. Of the 900 subjects who began the MCAo stroke surgery, 30 did not finish surgery or reach the beginning of treatment, as shown in Table S6. The modified intention-to-treat (mITT) population therefore included 870 subjects and is the primary analysis population. Of these mITT subjects, those who finished the planned treatment and survived 5 days following stroke constituted the per-protocol (PP) population of 713 animals. The 5-day survival period was chosen to standardize the PP definition across all treatment groups, some of which included prolonged treatments over days after stroke. Of these 713 PP subjects, 134 were subsequently lost (133 died and 1 could not reach the 30-day assessments), leaving 579 subjects with long-term evaluation. The numbers of subjects in each population by lab are tabulated in Table S6.
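
The subject flow above is internally consistent, which can be verified with simple arithmetic:

```python
# Subject flow from the CONSORT-style diagram (Fig. 2), as reported in the text
enrolled = 913
ineligible = 11
itt = enrolled - ineligible                 # intention-to-treat: 902
excluded_pre_surgery = 2
began_surgery = itt - excluded_pre_surgery  # 900 began MCAo surgery
did_not_finish = 30
mitt = began_surgery - did_not_finish       # modified ITT: 870
pp = 713                                    # per-protocol (5-day survivors)
lost_after_pp = 133 + 1                     # 133 deaths + 1 other
long_term = pp - lost_after_pp              # 579 with 30-day evaluation
```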

Fig. 2. Enrollment and Exclusion.


Modeled after a clinical trial CONSORT diagram44, the disposition of all subjects enrolled in SPAN is shown. Each population is described in the text. The primary analysis was conducted on the modified intention to treat (mITT) population.

The descriptive statistics for the mITT and PP populations are summarized in Table 1 and are provided in more detail in the supplemental tables. Data here are presented as percent (%) or mean±SD. Jackson Labs supplied 895 mice; 18 (1.97%) were bred in-house as allowed by the SPAN protocol. Animals were 3 months old (2.9±0.5), and the mITT and PP populations did not differ with respect to key demographic variables (Table 1). There was an even split between males and females (Table 2 and Table S6). Females weighed significantly less than males (P < 0.001, t-test without correction); the sexes were otherwise well matched in demographics and in neurological scores after surgery.

Table 1. Demographic variables in the SPAN mITT and PP populations.

All data mean±SD

SPAN Research Laboratory
Total 1 2 3 4 5 6
mITT 870 120 167 132 148 151 152
Baseline Age1 2.92 ± 0.46 2.68 ± 0.24 2.50 ± 0.17 3.16 ± 0.37 3.32 ± 0.39 2.86 ± 0.45 3.03 ± 0.48
Baseline Weight2 24.00 ± 3.77 23.09 ± 3.83 23.72 ± 3.45 23.40 ± 3.53 25.12 ± 3.80 24.41 ± 3.77 24.05 ± 3.95
Females 428 60 78 67 71 74 78
Males 442 60 89 65 77 77 74
Day 1 Weight 21.86 ± 3.54 21.00 ± 3.48 21.44 ± 3.18 21.45 ± 3.37 22.94 ± 3.61 22.24 ± 3.64 22.09 ± 3.77
Day 2 Weight 20.61 ± 3.50 19.78 ± 3.31 19.42 ± 2.98 20.54 ± 3.35 22.16 ± 3.66 21.90 ± 3.39 20.29 ± 3.50
Day 1 NDS3 1.19 ± 0.82 1.64 ± 0.61 1.07 ± 0.30 0.20 ± 0.46 1.10 ± 0.71 1.50 ± 0.82 1.71 ± 0.90
Day 2 NDS 1.04 ± 0.76 1.38 ± 0.58 1.02 ± 0.14 0.20 ± 0.58 1.04 ± 0.71 1.03 ± 0.76 1.61 ± 0.86
PP 713 101 157 121 134 90 110
Baseline Age1 2.91 ± 0.46 2.69 ± 0.22 2.50 ± 0.16 3.17 ± 0.36 3.33 ± 0.39 2.83 ± 0.40 2.99 ± 0.49
Baseline Weight2 24.07 ± 3.74 23.37 ± 3.80 23.75 ± 3.44 23.51 ± 3.51 25.10 ± 3.88 24.48 ± 3.73 24.22 ± 3.92
Females 339 47 73 59 66 40 54
Males 374 54 84 62 68 50 56
Day 1 Weight 21.89 ± 3.51 21.16 ± 3.50 21.45 ± 3.16 21.52 ± 3.31 22.92 ± 3.64 22.30 ± 3.57 22.05 ± 3.74
Day 2 Weight 20.68 ± 3.45 19.94 ± 3.24 19.46 ± 2.98 20.60 ± 3.31 22.29 ± 3.62 21.91 ± 3.30 20.31 ± 3.38
Day 1 NDS3 1.08 ± 0.73 1.58 ± 0.57 1.03 ± 0.18 0.15 ± 0.38 1.04 ± 0.64 1.33 ± 0.69 1.54 ± 0.80
Day 2 NDS 0.97 ± 0.69 1.34 ± 0.57 1.01 ± 0.11 0.12 ± 0.35 0.97 ± 0.59 1.01 ± 0.76 1.47 ± 0.75
1. Months.
2. Grams.
3. NDS: Neurological Deficit Score (0 = normal, 4 = maximum).

mITT = modified intention-to-treat population. PP = per-protocol population.

Table 2. Demographic variables by sex.

All data mean±SD

Total Female Male
mITT 870 428 442
 Baseline Age1 2.92 ± 0.46 2.98 ± 0.49 2.86 ± 0.43
Baseline Weight2 24.00 ± 3.77 20.79 ± 1.72 27.11 ± 2.33*
Day 1 Weight 21.86 ± 3.54 18.89 ± 1.79 24.64 ± 2.30*
Day 2 Weight 20.61 ± 3.50 17.73 ± 1.93 23.25 ± 2.34*
Day 1 NDS3 1.19 ± 0.82 1.15 ± 0.84 1.22 ± 0.80
Day 2 NDS 1.04 ± 0.76 1.01 ± 0.78 1.07 ± 0.75
PP 713 339 374
Baseline Age1 2.91 ± 0.46 3.00 ± 0.49 2.83 ± 0.41
Baseline Weight2 24.07 ± 3.74 20.80 ± 1.66 27.05 ± 2.35
Day 1 Weight 21.89 ± 3.51 18.88 ± 1.74 24.61 ± 2.27
Day 2 Weight 20.68 ± 3.45 17.82 ± 1.93 23.25 ± 2.31
Day 1 NDS3 1.08 ± 0.73 1.03 ± 0.74 1.12 ± 0.72
Day 2 NDS 0.97 ± 0.69 0.93 ± 0.69 1.01 ± 0.68
1. Months.
2. Grams.
3. NDS: Neurological Deficit Score (0 = normal, 4 = maximum).

* P < 0.001, males vs. females.

mITT = modified intention-to-treat population. PP = per-protocol population.

Feasibility of the SPAN protocol.

Sites followed the protocol closely, and all aspects of the SPAN SOPs appeared to be followed similarly across all labs (Table S7). All MCAo surgeries were done on the scheduled date, on the correct side (right MCA), with occlusion lasting 60 minutes. Failure to complete the SPAN MCAo protocol was recorded in 30/902 animals (3.3%), usually due to death. Laser Doppler flow (LDF) was measured in most cases: the drop in LDF from baseline was 71.6 ± 19.2% in the PP population (n=644) and 70.5 ± 20.8% in the mITT population (n=751). In SPAN, the right common carotid artery remained un-occluded during the period of MCAo to allow for natural variations in collateral blood flow, which may account for the somewhat smaller reduction in LDF compared to previous reports from other laboratories. Two of 902 (0.2%) randomized subjects received an incorrect treatment, that is, a treatment vial assigned to a different animal.

Using a 4-point version of the Neurological Deficit Score35, in the mITT population the mean±SD neurological deficit score immediately after stroke was 1.19 ± 0.82, and the next day was 1.04 ± 0.76 (Table 1). The PP population did not differ. The mean weight loss over 2 days was approximately 3.4 g (14%), as shown in Tables 1 and 2. Several adverse events were recorded intra-operatively and post-operatively (Tables S7, S8 and S9). Brief, reversible respiratory arrest occurred during MCAo in less than 1% of subjects. The nylon suture could not be advanced all the way into the MCA in less than 0.5%. All sites adhered to protocol-indicated use of incisional bupivacaine and lactated Ringer’s or normal saline after the MCAo surgery. The frequency of these events was similar across the mITT and PP populations and in the ‘Partial Treatment’ and ‘Lost to Follow-up’ groups. Intraoperative excessive bleeding was noted in 2–3%. Anesthesia duration did not differ between the mITT (mean±SD 77.6 ± 34.0 minutes) and PP (79.5 ± 34.0 minutes) populations but varied considerably across labs (Tables S4 and S6).

Post-operative care varied across labs (Table S10) and included subcutaneous fluids and soaked chow only for those animals with a severe deficit that impaired feeding and drinking. Variation in post-operative care did not appear related to the number of subjects lost-to-follow-up (Table S11).

SPAN Behavioral Assay Results.

Usable video of baseline corner tests was successfully uploaded for 868 of 870 (99.8%) mITT animals, for a total of 2,604 scores. Similarly, 619 corner tests were recorded 7 days after stroke and 565 were recorded 30 days after stroke (Table S12). The missing 7-day tests were due to death in 247 (98.4%) or ‘other’ causes in 4 (1.6%) animals, while the missing 30-day tests were due to death in 291 (95.4%) or ‘other’ causes in 14 (4.6%) animals. Similar completion rates were observed in the PP population; these data indicate protocol feasibility and successful follow-up (Table S13).

The overall (all subjects in all treatment groups) corner turning index (ratio of left turns over total turns) at baseline (pre-stroke) was 0.56±0.24 for all labs (range 0.46 to 0.62). The mean corner test turning index agrees with our a priori assumed mean of 0.55, while the observed SD (0.24) is substantially smaller than the 0.52 used in our power analysis32. Seven days after MCAo the mean corner test ratio was 0.39±0.33 (n=619), and after 30 days it was 0.41±0.35 (n=565). The change from baseline was −0.16±0.36 at Day 7 and −0.15±0.39 at Day 30. Corner test results differed numerically across labs (Fig. 3), but there were no statistically significant differences among the laboratories.

Fig. 3. Corner Test Results Across labs.


Data from the mITT population are summarized. Turning index was computed using a traditional formula: Left turns divided by total turns. Distribution of scores across labs was comparable at baseline, and after 7 and 30 days. Although the median values differ among the labs, there are no statistically significant differences. Violin plots show mean, median, interquartile range. The corner turning index cannot be greater than 1 or less than 0.

As with the corner test, completion rates for the grid walk and hanging wire tests were excellent. Of available surviving animals, grid walk videos were successfully recorded and uploaded in PP animals 7 days after MCAo in 619/623 (99.4%) and 30 days after MCAo in 566/579 (97.8%). Across labs, grid walk completion rates were similar (Table S14) and indistinguishable between males and females. Hanging wire results were concordant across labs (Table S15) and between sexes.

Imaging and morphometry.

Feasibility proved excellent, with over 99% successful MR data acquisition, upload, and analysis. One scan failed conversion from DICOM to a machine-readable format; 6 scans lacked ADC (apparent diffusion coefficient) sequences; and 8 scans could not be analyzed due to motion artifact. Signal-to-noise ratios varied across sites from 1.84 to 9.57 for ADC and from 4.36 to 13.41 for DWI (Tables S16 and S17). The 2-day lesion volume in the mITT population was 29.84 ± 30.05 mm3, which matches well with published results36. In both the mITT and PP populations, volumes of whole brain and cerebrospinal fluid were concordant across labs. Lesion volumes at 2 and 30 days after MCAo were significantly smaller at one lab compared to all others (Fig. S4). In exploratory analyses, excluding all data from this lab did not change any outcomes or conclusions.

DISCUSSION

SPAN Stage 1 demonstrated the feasibility and practicality of a large, multi-laboratory preclinical stroke network, with excellent protocol adherence, remarkable data completion, and minimal subject attrition. Network SOPs were drafted, agreed to, and followed with excellent fidelity. Six independent research laboratories performed MCAo surgery to a common protocol. Treatment packaging and masking, randomization, distributed and blinded assessment, and data capture all worked quite well. The network has advanced into Stage 2.

SPAN tests the fundamental hypothesis that pre-clinical evaluation can be improved by removing the potential causes of error previously identified in pre-clinical studies, notably bias, inadequate statistical power, and mismatch of preclinical and clinical outcome measures. SPAN aims to identify, using a highly rigorous approach, promising cerebroprotective interventions that could advance rapidly to clinical trials. SPAN conducts a common protocol across multiple laboratories and directly compares candidate stroke interventions to each other to assess reproducibility. The data presented here demonstrate the successful implementation of several needed features to reduce experimental bias in stroke research; the network may serve as a model system or blueprint for pre-clinical development in other disease areas.

SPAN seeks to utilize a networked infrastructure resembling a clinical trial and to implement the highest possible rigor using clinically relevant outcome measures10, 14. In stark contrast to stroke clinical trial design, however, many aspects of pre-clinical work are insufficiently developed, leading to several obstacles.

The first and most difficult obstacle in designing pre-clinical assessment networks is the lack of a “gold-standard”, a proven cerebroprotectant against which to benchmark pre-clinical tests and assays. In the absence of a positive control gold standard to validate the approach, the SPAN investigators chose to maximize the rigor of the comparative assessments—the SPAN infrastructure has the necessary capabilities and flexibility to eventually incorporate any future effective cerebroprotectant as a positive control.

Other obstacles to preclinical rigor—such as masking and blinding during animal trials—include the paucity of staff in each lab; the difficulty of preparing identical-appearing placebos without breaking the blind; and the fact that in many labs the same staff perform multiple tasks, e.g., surgery, behavior, and histology. SPAN successfully overcame these barriers by providing each SPAN Research Laboratory with labeled drug vials that appear identical (photograph provided as a supplement); inactive compounds matching each formulation—placebo treatments—are provided in exactly matching, coded vials. This scheme succeeded well: of 902 enrolled subjects, all but 2 (99.8%) received the correctly assigned vial. Technical mastery of the randomization process was excellent. Enrollment followed by randomization prior to MCAo surgery allowed SPAN to define a true “intention-to-treat” population37, perhaps a first in preclinical science.
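The enrollment-then-randomization workflow described above can be illustrated with a minimal sketch; the arm codes, function names, and data layout here are hypothetical, not SPAN's actual system:

```python
import random

# Hypothetical vial codes: each arm, including placebo, is pre-packaged in
# identical-appearing vials labeled only with a code (illustrative only).
VIAL_CODES = ["A", "B", "C", "D", "E", "F", "P"]

def enroll_and_randomize(subject_id, rng):
    """Randomize a subject to a coded vial BEFORE MCAo surgery, so the
    intention-to-treat (ITT) population is fixed at randomization."""
    return {"subject": subject_id,
            "vial_code": rng.choice(VIAL_CODES),
            "randomized_before_surgery": True}

rng = random.Random(0)  # fixed seed only to make the illustration reproducible
cohort = [enroll_and_randomize(f"mouse-{i:03d}", rng) for i in range(10)]

# Every randomized subject belongs to the ITT set, even if surgery later
# fails or the animal dies before completing all planned doses.
itt = [s for s in cohort if s["randomized_before_surgery"]]
```

Because assignment precedes surgery, no post-hoc exclusion can bias which subjects enter the ITT analysis set.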

Unlike stroke clinical trials, which use well-characterized behavioral rating scales (e.g., the modified Rankin scale), pre-clinical assessment has no universally preferred behavioral scale or task. After considerable literature review, discussion with consulting experts, and debate, SPAN selected the corner test as the primary outcome, primarily because the test is simple, requires no expensive equipment, and can therefore be set up easily in any laboratory; is well characterized in aged mice; can be adapted to rats; and appears sensitive to stroke effects even long after stroke. To reduce possible rater bias, the CC distributed behavioral videos at random among a pool of certified raters blinded to treatment group (Fig. S1). Feasibility and compliance were excellent: of nearly 10,000 assigned videos, 100% were successfully evaluated and the scoring data uploaded.

The data presented here include all subjects, regardless of intervention. Thus, overall descriptive statistics include the contribution of treatment effects, if any. Despite the heterogeneity introduced by pooling all treatment and placebo group data, the baseline corner test results reflect excellent protocol adherence, with mean scores exactly concordant with published data and narrow variance. The effect of stroke on corner test scores 7 and 30 days after MCAo was modest (Fig. 3), reflecting the Stage 1 design goal of inducing smaller infarctions.

Another obstacle to pre-clinical stroke assessment is the lack of a widely accepted stroke model. SPAN investigators chose a single version of MCAo, the nylon filament occlusion model. As expected with the filament MCAo model, some randomized subjects died or otherwise did not complete surgery (Fig. 2); of all randomized subjects, about 80% received all planned doses and survived 5 days after MCAo. Thus, SPAN implementation of the nylon filament MCAo went very well.

No agreed-upon measure of stroke size exists in the pre-clinical literature, although lesion imaging with TTC is commonly used 48 hours after MCAo and serial sectioning with histological staining is commonly used 30 days or more after MCAo38. SPAN investigators chose MRI for lesion volumetry to allow serial assessments in the same animal longitudinally, obviating the need for multiple study cohorts to measure early and late outcomes. The SPAN network uses the same digital repository that is used in several large clinical trials and registries, the Laboratory of NeuroImaging (LONI) at the Keck School of Medicine of USC 39. Among SPAN labs, 1,338 MRI scans were acquired, uploaded, and masked in the repository. An automated image analysis pipeline successfully provided blinded volumetric data; only a handful of images failed the analysis algorithm, primarily due to motion artifact; thus, the volumetry approach chosen for SPAN appears to be feasible.

A final and important obstacle to preclinical assessment is the lack of an agreed-upon statistical approach to multi-lab behavioral outcomes. SPAN chose a novel, adaptive process, Multi-Arm Multi-Stage (MAMS)31, to evaluate 6 candidate treatments. The approach requires a single primary endpoint that is analyzed serially after recruitment reaches 25, 50, 75, and finally 100% of planned enrollment (Figs. S2 and S3). After each of the 4 stages, each candidate treatment is tested for futility and efficacy against an appropriate control group. If the calculated z-score for a given treatment falls below the futility boundary, the treatment is dropped from further investigation; if a treatment effect falls above the efficacy boundary, that treatment is declared effective and likewise no longer evaluated. Otherwise, the candidate treatment advances to the next stage. For the upper efficacy boundary, we chose a 50% treatment effect size based on reported standardized effect sizes in preclinical studies14, 40–43. After imposing the rigorous experimental design standards planned herein, a standardized effect size of ≥50% would represent an unusually potent agent. For the lower futility boundary, we looked to large multi-national, multi-center clinical stroke trials, in which the clinically meaningful treatment effect size was typically assumed to be 7%; we set the futility boundary (lower limit) at 6%. Using these assumptions, sample sizes were estimated with the MAMS procedure in R statistical software (Table S5)18. After each SPAN stage, candidate treatments will be selected for continuation into the next stage; the final evaluation of the candidate treatments will be summarized after all stages are complete.
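The per-stage decision rule described above can be sketched as follows; the boundary values and arm names are placeholders for illustration, not SPAN's actual boundaries (which were computed with the MAMS package in R, Table S5):

```python
def mams_stage_decision(z_score, futility_bound, efficacy_bound):
    """Simplified per-stage decision for one candidate arm in a
    multi-arm multi-stage (MAMS) design."""
    if z_score <= futility_bound:
        return "drop"        # falls below futility boundary: arm removed
    if z_score >= efficacy_bound:
        return "effective"   # crosses efficacy boundary: declared effective, stops
    return "continue"        # otherwise the arm advances to the next stage

# Illustrative interim analysis at one stage (z-scores and bounds invented)
decisions = {arm: mams_stage_decision(z, futility_bound=0.0, efficacy_bound=2.2)
             for arm, z in {"drug_A": -0.4, "drug_B": 2.5, "drug_C": 1.1}.items()}
```

Only arms returning "continue" contribute to the next stage's sample-size requirement, which is how the MAMS design saves subjects relative to running every arm to full enrollment.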

Conclusions.

The SPAN network agreed to a common experimental protocol and SOPs, completed pilot studies, and rapidly enrolled a pre-specified number of subjects into Stage 1 during a global pandemic and rolling lockdown. SPAN Stage 1 demonstrated feasibility with excellent protocol adherence, remarkable data completion and low rates of subject loss in a large, multi-laboratory network. Labs performed MCAo surgery to a common protocol, with expected heterogeneity in outcomes. Treatment packaging and masking, randomization, distributed and blinded assessment, and data capture all worked well. The infrastructure and approach used in SPAN Stage 1 may serve as a model system or blueprint for multi-laboratory preclinical development in other disease areas.

Supplementary Material

Supplemental Publication Material

Acknowledgments:

Roxan Ara and Asamoah Bosomtwi of the Core Imaging Facility for Small Animal (CIFSA) at Augusta University, for their efforts in obtaining MR images.

Novartis for a gift of fingolimod.

Sources of Funding:

NIH funding to Medical College of Georgia/Augusta University: R01NS099455, 1U01NS113356, and R01NS112511 (DCH); R01NS110378 and R01NS117565 (KD); 19TPA34850076 (ASA).

NIH funding to Johns Hopkins University: U01NS113444, R01NS102583, and R01NS105894 (RCK)

NIH funding to Massachusetts General Hospital: U01NS113443 (CA)

NIH National Center for Advancing Translational Science (NCATS) UCLA CTSI Grant Number, Grant UL1 TR001881–01 (A.R. and M.A.D)

NIH funding to Yale University U01NS113445 (LHS)

NIH funding to Carver College of Medicine/University of Iowa: R35HL139926, R01NS109910, and U01NS113388 (AKC); U01NS113388 and U24NS107247 (ECL).

NIH funding to University of Texas: U01NS113451 (LDM and JA)

NIH funding to University of Southern California U24NS113452 (PDL)

The Laboratory of Neuro Imaging Resource (LONIR) at USC is supported in part by National Institutes of Health (grant number P41EB015922). Author RPC is supported in part by grant number 2020-225670 from the Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation.

Non-Standard Abbreviations

ADC: apparent diffusion coefficient

CC: Coordinating Center

DICOM: Digital Imaging and Communications in Medicine

DWI: diffusion-weighted imaging

EAB: External Advisory Board

IACUC: Institutional Animal Care and Use Committee

IDA: Image Data Archive

IL-6: interleukin-6

ITT: intention to treat

LONI: Laboratory of NeuroImaging

MAMS: Multi-Arm Multi-Stage

MCAo: middle cerebral artery occlusion

mITT: modified intention to treat

MRI: magnetic resonance imaging

NINDS: National Institute of Neurological Disorders and Stroke

PARP: poly(ADP-ribose) polymerase

PP: per protocol

REDCap: Research Electronic Data Capture

RIC: remote ischemic conditioning

ROCK: rho-associated protein kinase

SPAN: Stroke Preclinical Assessment Network

SOP: Standard Operating Procedure

TB: tocilizumab

TTC: 2,3,5-triphenyltetrazolium chloride

Appendix

The SPAN Investigators.

Name Affiliation
Shahneela Siddiqui, Department of Neurology, Augusta University
Kevin Sheth, MD Department of Neurology, Yale School of Medicine, New Haven, CT
Charles Matouk, MD Department of Neurosurgery, Yale School of Medicine, New Haven, CT
Charles Dela Cruz, MD, PhD Department of Medicine, Yale School of Medicine, New Haven, CT
Jiangbing Zhou, PhD Department of Neurosurgery, Yale School of Medicine, New Haven, CT
Valina L. Dawson Department of Neurology, Institute of Cell Engineering, Johns Hopkins University, Baltimore, MD
Ted M. Dawson Department of Neurology, Institute of Cell Engineering, Johns Hopkins University, Baltimore, MD
Jian Liang Department of Neurology, Institute of Cell Engineering, Johns Hopkins University, Baltimore, MD
Peter C.M. van Zijl Department of Radiology, Johns Hopkins University, Baltimore, MD
Steven R. Zeiler Department of Neurology, Johns Hopkins University, Baltimore, MD
W. Taylor Kimberly Center for Genomic Medicine and Division of Neurocritical Care, Department of Neurology, Massachusetts General Hospital
Taylan Erdogan, BS Neurovascular Research Unit, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
Lili Yu, PhD Neurovascular Research Unit, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
Joseph Mandeville Department of Radiology, Harvard Medical School, Massachusetts General Hospital, Charlestown, MA, United States
Jonah Patrick Weigand Whittier Department of Radiology, Harvard Medical School, Massachusetts General Hospital, Charlestown, MA, United States

Footnotes

Disclosures:

Authors from MGH/AU/IW/JHU/Yale/UT/NINDS declare they have no disclosures

PL, JL, and KN (from USC) declare they have no disclosures

References and Notes

  • 1.O’Collins VE, Macleod MR, Donnan GA, Horky LL, van der Worp BH, Howells DW. 1,026 experimental treatments in acute stroke. Ann Neurol. 2006;59:467–477 [DOI] [PubMed] [Google Scholar]
  • 2.Bix GJ, Fraser JF, Mack WJ, Carmichael ST, Perez-Pinzon M, Offner H, Sansing L, Bosetti F, Ayata C, Pennypacker KR. Uncovering the rosetta stone: Report from the first annual conference on key elements in translating stroke therapeutics from pre-clinical to clinical. Transl Stroke Res. 2018;9:258–266 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Bosetti F, Koenig JI, Ayata C, Back SA, Becker K, Broderick JP, Carmichael ST, Cho S, Cipolla MJ, Corbett D, et al. Translational stroke research: Vision and opportunities. Stroke. 2017;48:2632–2637 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Sayre F, Riegelman A. The reproducibility crisis and academic libraries. College & Research Libraries. 2018;79:2 [Google Scholar]
  • 5.Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2:e124. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Steckler T. Editorial: Preclinical data reproducibility for r&d - the challenge for neuroscience. Springerplus. 2015;4:1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.van der Worp HB, Howells DW, Sena ES, Porritt MJ, Rewell S, O’Collins V, Macleod MR. Can animal models of disease reliably inform human studies? PLoS Med. 2010;7:e1000245. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Macleod MR, Fisher M, O’Collins V, Sena ES, Dirnagl U, Bath PM, Buchan A, van der Worp HB, Traystman RJ, Minematsu K, et al. Reprint: Good laboratory practice: Preventing introduction of bias at the bench. Int J Stroke. 2009;4:3–5 [DOI] [PubMed] [Google Scholar]
  • 9.Sena E, van der Worp HB, Howells D, Macleod M. How can we improve the pre-clinical development of drugs for stroke? Trends Neurosci. 2007;30:433–439 [DOI] [PubMed] [Google Scholar]
  • 10.Voelkl B, Vogt L, Sena ES, Wurbel H. Reproducibility of preclinical animal research improves with heterogeneity of study samples. PLoS Biol. 2018;16:e2003693. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Llovera G, Hofmann K, Roth S, Salas-Perdomo A, Ferrer-Ferrer M, Perego C, Zanier ER, Mamrak U, Rex A, Party H, et al. Results of a preclinical randomized controlled multicenter trial (prct): Anti-cd49d treatment for acute brain ischemia. Sci Transl Med. 2015;7:299ra121. [DOI] [PubMed] [Google Scholar]
  • 12.Maysami S, Wong R, Pradillo JM, Denes A, Dhungana H, Malm T, Koistinaho J, Orset C, Rahman M, Rubio M, et al. A cross-laboratory preclinical study on the effectiveness of interleukin-1 receptor antagonist in stroke. Journal of Cerebral Blood Flow & Metabolism. 2016;36:596–605 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Sena E. Multicentre preclinical animal research team project final report. 2016
  • 14.Sena ES, van der Worp HB, Bath PM, Howells DW, Macleod MR. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol. 2010;8:e1000344. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Lyden P, Buchan A, Boltze J, Fisher M, STAIR XI Consortium. Top priorities for cerebroprotective studies-a paradigm shift: Report from STAIR XI. Stroke. 2021;52:3063–3071 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Usui T, Macleod MR, McCann SK, Senior AM, Nakagawa S. Meta-analysis of variation suggests that embracing variability improves both replicability and generalizability in preclinical research. PLoS Biol. 2021;19:e3001009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.McNutt M. Journals unite for reproducibility. Science. 2014;346:679. [DOI] [PubMed] [Google Scholar]
  • 18.Jaki TF, Pallmann PS, Magirr D. The r package mams for designing multi-arm multi-stage clinical trials. Journal of Statistical Software. 2017 [Google Scholar]
  • 19.Wason JM, Trippa L. A comparison of bayesian adaptive randomization and multi-stage designs for multi-arm clinical trials. Stat Med. 2014;33:2206–2221 [DOI] [PubMed] [Google Scholar]
  • 20.Jaki T, Magirr D, Pallmann P. MAMS: Designing multi-arm multi-stage studies. R package version 0.3, 2014. URL http://CRAN.R-project.org/package=MAMS [Google Scholar]
  • 21.Harris PA, Taylor R, Minor BL, Elliott V, Fernandez M, O’Neal L, McLeod L, Delacqua G, Delacqua F, Kirby J. The redcap consortium: Building an international community of software platform partners. Journal of biomedical informatics. 2019;95:103208. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (redcap)—a metadata-driven methodology and workflow process for providing translational research informatics support. Journal of biomedical informatics. 2009;42:377–381 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Dirnagl U, Members of the MCAO-SOP Group. Standard operating procedures (sop) in experimental stroke research: Sop for middle cerebral artery occlusion in the mouse. Nature Precedings. 2012 [Google Scholar]
  • 24.Diéras V, Han HS, Kaufman B, Wildiers H, Friedlander M, Ayoub J-P, Puhalla SL, Bondarenko I, Campone M, Jakobsen EH. Veliparib with carboplatin and paclitaxel in brca-mutated advanced breast cancer (brocade3): A randomised, double-blind, placebo-controlled, phase 3 trial. The Lancet Oncology. 2020;21:1269–1282 [DOI] [PubMed] [Google Scholar]
  • 25.Kovarik JM, Schmouder R, Barilla D, Wang Y, Kraus G. Single-dose fty720 pharmacokinetics, food effect, and pharmacological responses in healthy subjects. British journal of clinical pharmacology. 2004;57:586–591 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Hess DC, Blauenfeldt RA, Andersen G, Hougaard KD, Hoda MN, Ding Y, Ji X. Remote ischaemic conditioning—a new paradigm of self-protection in the brain. Nature Reviews Neurology. 2015;11:698–710 [DOI] [PubMed] [Google Scholar]
  • 27.Hoda MN, Siddiqui S, Herberg S, Periyasamy-Thandavan S, Bhatia K, Hafez SS, Johnson MH, Hill WD, Ergul A, Fagan SC. Remote ischemic perconditioning is effective alone and in combination with intravenous tissue-type plasminogen activator in murine model of embolic stroke. Stroke. 2012;43:2794–2799 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Ren C, Gao M, Dornbos D, Ding Y, Zeng X, Luo Y, Ji X. Remote ischemic post-conditioning reduced brain damage in experimental ischemia/reperfusion injury. Neurological research. 2011;33:514–519 [DOI] [PubMed] [Google Scholar]
  • 29.Rewell SS, Churilov L, Sidon TK, Aleksoska E, Cox SF, Macleod MR, Howells DW. Evolution of ischemic damage and behavioural deficit over 6 months after mcao in the rat: Selecting the optimal outcomes and statistical power for multi-centre preclinical trials. PLoS One. 2017;12:e0171688. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Trueman RC, Diaz C, Farr TD, Harrison DJ, Fuller A, Tokarczuk PF, Stewart AJ, Paisey SJ, Dunnett SB. Systematic and detailed analysis of behavioural tests in the rat middle cerebral artery occlusion model of stroke: Tests for long-term assessment. J Cereb Blood Flow Metab. 2017;37:1349–1361 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Magirr D, Jaki T, Whitehead J. A generalized dunnett test for multi-arm multi-stage clinical studies with treatment selection. Biometrika. 2012;99:494–501 [Google Scholar]
  • 32.Manwani B, Liu F, Xu Y, Persky R, Li J, McCullough LD. Functional recovery in aging mice after experimental stroke. Brain Behav Immun. 2011;25:1689–1700 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Dunnett CW. Selection of the best treatment in comparison to a control with an application to a medical trial. In: Santer TJTA, ed. Design of experiments: Ranking and selection. New York: Marcel Dekker; 1984:47–66. [Google Scholar]
  • 34.Whitehead J. The design and analysis of sequential clinical trials. John Wiley & Sons; 1997. [Google Scholar]
  • 35.Li X, Blizzard KK, Zeng Z, DeVries AC, Hurn PD, McCullough LD. Chronic behavioral testing after focal ischemia in the mouse: Functional recovery and the effects of gender. Exp Neurol. 2004;187:94–104 [DOI] [PubMed] [Google Scholar]
  • 36.Leithner C, Füchtemeier M, Jorks D, Mueller S, Dirnagl U, Royl G. Infarct volume prediction by early magnetic resonance imaging in a murine stroke model depends on ischemia duration and time of imaging. Stroke. 2015;46:3249–3259 [DOI] [PubMed] [Google Scholar]
  • 37.Peace KE. Statistical issues in drug research and development. CRC Press; 1989. [Google Scholar]
  • 38.Zille M, Farr TD, Przesdzing I, Muller J, Sommer C, Dirnagl U, Wunder A. Visualizing cell death in experimental focal cerebral ischemia: Promises, problems, and perspectives. J Cereb Blood Flow Metab. 2012;32:213–231 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Crawford KL, Neu SC, Toga AW. The image and data archive at the laboratory of neuro imaging. Neuroimage. 2016;124:1080–1083 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Sena ES, Currie GL, McCann SK, Macleod MR, Howells DW. Systematic reviews and meta-analysis of preclinical studies: Why perform them and how to appraise them critically. J Cereb Blood Flow Metab. 2014;34:737–742 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.O’Collins VE, Macleod MR, Cox SF, Van Raay L, Aleksoska E, Donnan GA, Howells DW. Preclinical drug evaluation for combination therapy in acute stroke using systematic review, meta-analysis, and subsequent experimental testing. J Cereb Blood Flow Metab. 2011;31:962–975 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Crossley NA, Sena E, Goehler J, Horn J, van der Worp B, Bath PM, Macleod M, Dirnagl U. Empirical evidence of bias in the design of experimental stroke studies: A metaepidemiologic approach. Stroke. 2008;39:929–934 [DOI] [PubMed] [Google Scholar]
  • 43.Macleod MR, O’Collins T, Horky LL, Howells DW, Donnan GA. Systematic review and metaanalysis of the efficacy of fk506 in experimental stroke. J Cereb Blood Flow Metab. 2005;25:713–721 [DOI] [PubMed] [Google Scholar]
  • 44.Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, Pitkin R, Rennie D, Schulz KF, Simel D, et al. Improving the quality of reporting of randomized controlled trials. The consort statement. JAMA. 1996;276:637–639 [DOI] [PubMed] [Google Scholar]
