Abstract
Objective
Poor accrual rates impede clinical trial efficiency and significantly contribute to development costs for new interventions. Many providers recognize investigational treatments are their patients’ best opportunities for improvement, but operational clinical burdens impede providers’ awareness of, and ability to leverage, such opportunities. We aimed to develop a new workflow for non-intrusively apprising providers of trial opportunities for their patients and enabling providers to efficiently refer potential trial candidates to study teams for preliminary eligibility review.
Materials and methods
We developed a small information system to monitor institutional systems, identify patients potentially eligible for ongoing clinical trials, and give providers a non-intrusive, one-click method to refer such patients to study teams for preliminary eligibility vetting.
Results
In 18 months of pilot experience, providers invited study teams to vet 11% of 1844 patients found potentially eligible for 38 trials registered with the system. Seventy-nine patients were conservatively estimated to be accrued. Accrual rates were boosted for several trials. Results of a survey indicated most users were satisfied with the system.
Discussion
Providers’ time constraints impede their pursuit of investigational opportunities for their patients. In pilot experience, our novel approach to facilitating such pursuits yielded improved accrual, benefiting trials and presumably patients, too. Our approach may bear particular fruit for cross-disciplinary referrals for screening.
Conclusion
Systems for assisting providers in making investigational opportunities available to their patients may benefit from careful attention to provider workflow and time constraints. Our system might further benefit from improved patient/trial matching and shorter messages.
Keywords: Clinical trials, Clinical trial accrual, Clinical trial information systems, Clinical research workflow
1. Introduction
Poor accrual rates impede clinical trial efficiency [1] and significantly increase research costs. Development costs of new pharmaceuticals have been estimated over the last decade to average $1B [2–6], and re-estimated recently at $4–12B [7,8], roughly half attributable to clinical development [4].
Most studies of accrual have focused on oncology. Despite public willingness to participate (32% of American adults), or at least consider enrolling (38%), in cancer clinical trials [9], only about 3% of newly diagnosed adult cancer patients enroll [1,10,11] (10–20% at academic institutions [12–14]). This rate has not improved despite increased resources such as mass media campaigns [12] and third-party payer funding of trial participation [12,15,16]. Accrual rates to non-oncology trials may not be much higher [17].
Many factors contribute to poor accrual, but providers missing opportunities for their patients is key [15,18,19]. Studies report 18–50% of newly diagnosed adult cancer patients at both community and academic cancer centers were not considered by their oncologists for clinical trials [10,16,20]. Many providers recognize investigational treatments offer their patients the best opportunities for improvement, but operational clinical burdens impede providers’ awareness of, and ability to leverage, such opportunities [15,19,21,22]. In one study more than half (53%) of the patients approached to enter a cancer treatment trial reported the provider or coordinator spent 16–30 min presenting the trial [23]. In times of increasing pressure for clinical productivity [24,25], such allocation of time for research is challenging and sometimes outright infeasible. Another reported accrual challenge is the referring physician’s fear of loss of involvement with the patient.
Furthermore, as the population ages and medical science advances, patient acuity and complexity inevitably increase [26–28], but providers’ awareness of trials, especially in disciplines other than their own, is decreasingly likely. Poor cross-disciplinary trial awareness particularly impedes trials requiring accrual soon after diagnosis.
Automated systems for alerting providers to trial opportunities for their patients have been developed [29,30]. Barriers to wider use included custom programming effort not scalable to multiple trials [29] and perceived excessive alert intrusiveness [31].
We developed a new workflow to address these problems, supported by a small custom-developed information system interfaced with existing institutional systems, and easily scalable to the full range of an institution’s trials. Our goal was to non-intrusively apprise providers of trial opportunities for their patients and to enable providers, at their discretion, to access trial information and refer candidates to study teams for preliminary eligibility review. We report 18-month pilot phase experience.
2. Materials and methods
Deriving from a central premise that providers’ time (especially their in-clinic time) is sacrosanct, we conceived a new workflow, modified from Embi et al. [30], to address the above-noted goals. The system we developed – Medical University System for Accelerating Clinical Trials (MUSACT) – is illustrated in Fig. 1. A web-based application, with access secured by institutional credentials, allows study teams at the Medical University of South Carolina (MUSC) to register their studies with MUSACT. Information entered for each trial includes title, study team members, diagnoses of interest (by ICD-9 code, as presently used by the billing system), and, optionally, a link to a web-based resource providing information about the trial. Each morning, a query runs on MUSC’s enterprise data warehouse to review providers’ bills submitted in the last day and build a table identifying patients newly assigned any of the registered diagnoses of interest. Shortly, another application e-mails these patients’ providers, informing them of the relevant trial opportunities (Fig. 2). If a provider has more than one patient in a given day’s run who may be eligible for registered trials, all such patients are listed so that the provider never receives more than one e-mail per day from MUSACT. Each trial listed in the e-mail is hyperlinked to descriptive trial information if such a link was provided when the study was registered with MUSACT. “Inquire” links are provided in the e-mail for the provider to invite eligibility vetting across all listed patients and studies or just specific patients, or even just a specific study for a specific patient.
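As a sketch of this matching-and-grouping step, the following assumes hypothetical record layouts (`billed`, `trials`) and illustrative ICD-9 codes; MUSACT's actual warehouse query and schema are not described in this paper:

```python
from collections import defaultdict

# Hypothetical records; in MUSACT these would come from the enterprise
# data warehouse (last day's bills) and the trial-registration tables.
billed = [
    {"patient": "P1", "provider": "dr_a", "icd9": "710.0"},  # SLE
    {"patient": "P2", "provider": "dr_a", "icd9": "162.9"},  # lung cancer
    {"patient": "P3", "provider": "dr_b", "icd9": "401.9"},  # no registered trial
]
trials = [
    {"title": "SLE Database Project", "icd9_codes": {"710.0"}},
    {"title": "Thoracic Cancer Trial", "icd9_codes": {"162.9"}},
]

def daily_matches(billed, trials):
    """Group each day's trial matches by provider so that each provider
    receives at most one notification e-mail per day."""
    per_provider = defaultdict(list)
    for rec in billed:
        # A patient matches a trial if the billed diagnosis code is
        # among the trial's registered diagnoses of interest.
        hits = [t["title"] for t in trials if rec["icd9"] in t["icd9_codes"]]
        if hits:
            per_provider[rec["provider"]].append((rec["patient"], hits))
    return dict(per_provider)
```

Grouping by provider before composing messages is what guarantees the "never more than one e-mail per day" property described above: the e-mail body then simply enumerates that provider's matched patients and trials.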
Fig. 1.
MUSACT Workflow. Study teams use the MUSACT website to register their studies, listing diagnoses of interest. Enterprise data warehouse receives nightly updates of clinical and billing data. Warehouse is then reviewed nightly to identify patients newly assigned diagnoses of interest to MUSACT-registered studies. Clinicians who assigned new diagnoses are notified by e-mail of trial opportunities and can click on “Inquire” links to invite study teams to access charts and preliminarily vet eligibility. Study team contacts clinician to discuss how a potentially eligible patient will be approached. MUSACT cannot be used by study teams to “trawl” through the EHR looking for trial candidates.
Fig. 2.
Sample MUSACT e-mail to a clinician advising of three potential trial opportunities for each of two patients recently newly diagnosed with, in this case, systemic lupus erythematosus.
MUSACT’s e-mails to providers also explain the system’s workflow, describing how clicking “Inquire” causes MUSACT to send e-mails to the study team (Fig. 3), inviting their review of the patient’s chart to preliminarily vet eligibility. Providers are assured that study teams are cautioned in these invitations to not contact the patients and instead contact the providers to negotiate how patients will be approached regarding further vetting of eligibility, as some providers may want to control who (provider, provider’s staff, or study staff) presents which studies to which patients. Providers also are given an opportunity to “opt out” of MUSACT for a pre-determined time or permanently.
Fig. 3.
Sample MUSACT e-mail to study team advising them of a potential candidate for their trial. E-mail was generated by the MUSACT application when the clinician clicked on the trial’s “Inquire” link in the e-mail sent him by MUSACT (see Fig. 2) advising him of the patient’s potential eligibility for the trial.
When a provider clicks “Inquire,” MUSACT also launches a web browser presenting the provider’s MUSACT click statistics (Fig. 4) and offering an opportunity to identify others (e.g., department chair) to be e-mailed quarterly a summary of the provider’s MUSACT-based contributions to the institution’s research mission.
Fig. 4.
Sample confirmatory web page clinician sees upon inviting a study team to preliminarily review his patient’s eligibility for their trial. Page was generated by the MUSACT application when the clinician clicked on the trial’s “Inquire” link in the e-mail sent him by MUSACT (see Fig. 2) advising him of the patient’s potential eligibility for the trial.
MUSACT’s e-mails to study teams contain links to be clicked to notify MUSACT of screening and accrual.
Prior to implementation, MUSACT’s design not only was vetted with selected study teams drawn from multiple domains but also was reviewed with the chairs of MUSC’s three institutional review boards (IRBs), who unanimously opined that since MUSACT simply accelerates the existing processes of provider discovery of trials and referral of patients to study teams for initial eligibility review, no IRB review or approval was needed for its operation. MUSACT does not provide study staff (or any other personnel) any means of discovering patients potentially eligible for trials without the patients’ providers explicitly bringing those patients’ identities to the awareness of study staff.
3. Results
3.1. System operation
MUSACT was activated on December 12, 2011, when the College of Medicine at MUSC had 178 clinical trial awards funded for accrual in the Departments of Medicine, Neurosciences, Obstetrics and Gynecology, Ophthalmology, Otolaryngology, Pediatrics, Psychiatry, Surgery, and Urology. From 2006 to 2013, 1128 awards were active, 76% of which were corporate. Many studies during this time period did not meet projected enrollment: 53% achieved 0–25%, 10% achieved 76–100%, and 6% achieved >100% of projected enrollment. Initially, 24 thoracic cancer trials and one polycythemia vera trial were registered in MUSACT. In its first 19 months of use, an additional six lupus-related trials, three pulmonary hypertension-related trials, one colon cancer-related trial, two myelofibrosis trials, one additional polycythemia vera trial, and one trial for mast cell disease were also registered with the system. In this pilot phase, principally aimed at exploring the feasibility/acceptability of MUSACT’s workflow, MUSACT was publicized neither to providers nor to investigators. Providers’ awareness of the system came solely through the e-mails sent them by MUSACT informing them about trial opportunities for their patients. All study teams beyond the testbed program of thoracic oncology began using MUSACT after their investigators became aware of the system either via word of mouth or via e-mails the system had sent them indicating that their patients might be eligible for trials already listed in the system.
Through July 21, 2013, MUSACT found 1844 patients who were seen in MUSC clinics and newly diagnosed with any of the diagnoses of interest to MUSACT-registered studies. The diagnosing providers for 1754 (95%) of these patients were notified by e-mail of their patients’ potential eligibility (based solely on the diagnosis) for trials in progress at MUSC. In the other 90 cases, MUSACT was unable to identify an e-mail address for the diagnosing provider in MUSC’s directory systems. In 196 (11%) of these 1754 cases, the diagnosing clinician invited the study teams to preliminarily vet these patients’ eligibility for trial.
Study screening and accrual statistics available from MUSACT’s database are necessarily incomplete because there is no way to require study staff to inform MUSACT of having screened or accrued a subject. Despite the voluntary nature of this reporting, study staff notified MUSACT they had screened 121 (62%) of these 196 cases for potential eligibility and had accrued 79 patients to these MUSACT-registered trials (40% of the cases referred for eligibility screening, or approximately 4% of the patients initially suspected by MUSACT as potential trial candidates based solely on diagnosis). In 34 of the 79 cases (17% of the 196 cases referred by clinicians for screening, or 43% of all accruals posted to MUSACT), study staff indicated the patient was accrued but did not also flag the patient as having been screened, underscoring the incompleteness of the screening and accrual statistics.
Informal surveys of the study teams making pilot use of MUSACT indicated the system was easy to use. Study teams’ opinions on the utility of the system for identifying potential candidates and improving accrual varied, as discussed below.
Lupus-related diagnoses were most commonly found by MUSACT (714, 39% of the 1844 cases overall), followed by 580 thoracic cancer-related diagnoses (31%), 200 colon cancer-related diagnoses (11%), 147 mast cell-related diagnoses (8%), 54 polycythemia vera diagnoses (3%), 49 pulmonary hypertension-related diagnoses (3%), and 18 myelofibrosis diagnoses (1%).
Of the 196 cases referred by clinicians for preliminary eligibility screening, 98 (50%) were for the mast cell disease study, 73 (37%) were for the lupus studies, 10 (5%) were for the colon cancer study, 5 (3%) were for the pulmonary hypertension studies, 5 (3%) were for the polycythemia vera studies, 3 (2%) were for the myelofibrosis studies, and 2 (1%) were for the thoracic cancer-related studies. Of note, the mast cell disease study was largely an internal control for monitoring proper system operation, as author L.A. was both the investigator for that study and the provider seeing most of the mast cell disease patients at MUSC.
Of the 714 lupus-related new diagnoses found by MUSACT, 73 (10%) were referred by providers for preliminary eligibility screening. Similarly, 2/580 (0.3%) of the new thoracic cancer cases were referred, 10/200 (5%) of the new colon cancer cases, 5/54 (9%) of the new polycythemia vera cases, 3/18 (17%) of the myelofibrosis cases, and 5/49 (10%) of the pulmonary hypertension cases. The large majority of the 98/147 (67%) of the mast cell disorder cases that were referred were “internal controls” (validating continued smooth operation of MUSACT processes) referred by author L.A., though a few such mast cell disorder cases were referred by other providers, too.
Of the 79 cases voluntarily reported by study teams to MUSACT as having been accrued to trials, 74 were for the mast cell study and 5 were for one particular lupus study.
The lupus research group was the only MUSACT-using research group which received significant clinician referrals through MUSACT and for which accrual records before and after trial registration on MUSACT were available. The mean monthly rates of screening and accrual for the three MUSACT-registered lupus trials which also were open prior to MUSACT go-live are shown in Table 1 and reflect the incomplete reporting of screenings and accruals to MUSACT.
Table 1.
Mean monthly rates of screening and accrual for the three MUSACT-registered lupus trials which also were open prior to MUSACT go-live.
| Trial (recruitment start date) | Before: mean monthly screening rate | Before: mean monthly accrual rate | After: mean monthly screening rate | After: mean monthly accrual rate |
|---|---|---|---|---|
| SLE in Gullah Health (February 2003) | 6.2 | 5.3 | 8.6 | 0.5* |
| SLE Database Project (February 2005) | 5.8 | 3.8 | 10.8 | 6.7 |
| ACCESS Trial for Lupus Nephritis (January 2011) | 2 | 0.2 | 3 | 0.3 |
* Drop in accrual during this time period attributed to gap in study funding.
A pulmonary hypertension study team began using MUSACT in April 2012 to assist with accrual to two of its studies. One study, with an accrual goal of 60 patients over 3 years, had been open for 6 months, during which time 6 patients had been screened and 2 accrued. On the first day MUSACT began notifying providers about the availability of this study, a patient was referred through MUSACT and was screened and accrued. The next screening and accrual to this study took place three days later. Before these studies were then terminated 3 months later (due to the investigator’s departure), MUSACT had found 49 new diagnoses of pulmonary hypertension, 10% of which were referred by the diagnosing providers to the study team. All of these referrals were cross-disciplinary.
Only two providers clicked on the “opt-out” links in the e-mails sent them by MUSACT – but both chose various time-limited, rather than permanent, opt-out options.
3.2. Provider experience
A survey of providers who had had at least one opportunity to use MUSACT (i.e., had received at least one MUSACT e-mail offering trial opportunities for their patients since system go-live) was issued in March 2013 (15 months after go-live). For the same reason that MUSACT was engineered to make it very quick for providers to learn about trials and to refer their patients for preliminary eligibility screening, the survey was kept very short, just one or two questions for each provider. For providers who had responded (i.e., invited preliminary eligibility screening) to fewer than 100% of the e-mails MUSACT had sent them, the first question asked the provider to indicate the reason(s) why his/her response rate had not been higher, by checking off reasons from a supplied list and/or writing other reasons. All providers were also asked for their suggestions on how MUSACT could be re-engineered to make it more likely that the providers would click on the eligibility inquiry links.
A total of 336 providers were found to have received MUSACT’s e-mails since go-live, but the e-mail addresses of 11 were no longer known, another 14 surveys bounced back due to out-of-office auto-replies, and 2 bounced due to full mailboxes. Out of the 309 surveys delivered, 86 responses (28%) were received. Twenty-six respondents (30%) did not remember receiving any MUSACT e-mails. Sixteen respondents (19%) remembered seeing the e-mails but could not remember why they had not clicked on any of the eligibility inquiry links. Fifteen respondents (17%) knew the patient(s) listed in the e-mail(s) would not be interested in, or eligible for, the listed trial(s). Twelve respondents (14%) remembered seeing the e-mails but were too busy to review them. Interestingly, only three respondents (3%) indicated they had not clicked on the eligibility inquiry links because they did not feel they had enough information about the trials, and the least frequently cited reason for not clicking on the eligibility inquiry links – just two respondents (2%) – was concern that referring a patient for preliminary eligibility screening might lead to too much time consumption later. Additional reasons cited for not clicking on the eligibility inquiry links included belief that other physicians or staff would pursue the listed trial opportunities (10 respondents (12%)), perceived system error (e.g., informed about a patient the provider did not remember seeing; 2 respondents (2%)), insufficient duration of relationship with the patient (e.g., an emergency physician or hospitalist; 3 respondents (3%)), and uninteresting trials (1 respondent (1%)).
Providers’ suggestions for enhancing the likelihood that they would click on the eligibility inquiry links included publicity about the system (6 respondents (7%)), integrating the links into MUSC’s electronic medical record system (1 respondent (1%)), sending repeat reminders (2 respondents (2%)), sending MUSACT notices via pager instead of e-mail (3 respondents (3%)), including the patient name and diagnosis in the message subject line (a violation of MUSC policy; 3 respondents (3%)), “somehow” enhancing MUSACT’s e-mail subject line to make it stand out better against the “noise” of other e-mails (2 respondents (2%)), and more precise identification of potentially relevant trials (i.e., using more parameters than just diagnosis code; 2 respondents (2%)). Nine respondents (10%) provided additional comments emphasizing insufficient time for reviewing MUSACT e-mails and assisting with accrual efforts. Altogether, 23 respondents (27%) indicated in one fashion or another on the survey that concerns about time consumption were a key factor in not responding to MUSACT’s notices.
4. Discussion
Many factors contributing to poor clinical trial accrual have been identified, but poor rates of provider consideration of trial opportunities is a significant factor. However, given increasing demands providers face for engagement in compensated activities, it is hard to find fault with providers who do not invest the (typically uncompensated) time required to pursue trials for their patients. Furthermore, absent any specific incentive to spur behavior to the contrary, a provider’s willingness to invest effort to accrue a patient to a trial would be expected to diminish as the degree of social connectivity between the provider and investigator diminishes. (For example, a provider would be most willing to invest effort to accrue a patient to a trial for which the provider himself is the investigator, but less willing to accrue to the trials of colleagues in his own discipline at his own institution, even less willing to accrue to trials in his own discipline at other institutions, and perhaps least willing to accrue to trials in other disciplines at his own or other institutions.) It is a frustrating irony that as modern patients seek more care, are subjected to more tests, are accorded more diagnoses, and become potentially eligible for more trials, they are decreasingly likely to be referred by their providers even for intra-disciplinary trials, let alone extra-disciplinary trials.
A rational approach, then, to spurring providers to refer more patients to trials would focus on decreasing the provider’s time required to find relevant trials and refer patients to them. MUSACT was designed to address these points of leverage. Rather than burdening the provider with finding relevant trials, MUSACT delivers concise notice about such trials to the provider, within a familiar workflow (e-mail), to be addressed when the provider is not burdened with higher priority issues such as direct patient care. MUSACT requires no provider training and minimal study staff training. Furthermore, MUSACT gives the provider easy access to a range of information about each trial. MUSACT makes the referral process, too, quite speedy. Ordinarily, a provider interested in referring a patient to a trial needs to identify the study team and their contact information and then contact them by the time-consuming acts of writing an e-mail or telephoning. MUSACT, though, eliminates the need to find the study team’s contact information and replaces the initial e-mail or telephone conversation with a single click to refer one or more patients to one or more trials.
Furthermore, MUSACT mitigates provider concern about “loss of control” by leaving to the provider’s discretion whether to refer a patient and how a potentially eligible patient will be approached about the trial. Providers usually know which patients, being considered for which trials, would be best approached by which personnel (provider, provider’s staff, or study staff) to minimize patient concern and maximize accrual.
In this pilot phase, focused on exploring the feasibility/acceptability of MUSACT’s workflow, and in which an interface with MUSC’s EMR was not available, MUSACT’s “matching” algorithm used billing ICD-9 codes as the sole parameter for identifying studies of potential relevance to a provider’s patients. Improved matching based on additional clinical parameters will be helpful. Even with multi-parameter matching, though, accurate identification of diagnoses and relevant trials may still be challenging. A few providers reported some diagnoses to be wrongly listed in the e-mails sent them by MUSACT, but as the system was merely echoing the diagnoses reported to the billing system by the provider or billing staff, this situation highlights the potential for data entry errors and the long-recognized limitations of ICD-9 coding with respect to diagnostic accuracy [32–34]. It is unclear whether the forthcoming transition to the far more granular ICD-10 system will improve diagnostic coding accuracy [34]. Determination of diagnoses from natural language processing of clinical documents may prove to be a productive complement to diagnoses in administrative systems [35].
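A minimal sketch of the multi-parameter matching suggested above; the criterion fields and their names (`age_range`, `min_egfr`) are hypothetical illustrations, not part of MUSACT’s actual design:

```python
# Hypothetical multi-parameter match: diagnosis code plus simple
# structured criteria (an age range, a lab-value threshold).
# Field names here are illustrative, not MUSACT's actual schema.
def matches_trial(patient, trial):
    """Return True if the patient satisfies the trial's structured criteria."""
    # Diagnosis code remains the primary filter, as in the pilot system.
    if patient["icd9"] not in trial["icd9_codes"]:
        return False
    # Additional parameters narrow the candidate pool and could reduce
    # the volume of irrelevant notifications sent to providers.
    lo, hi = trial.get("age_range", (0, 200))
    if not (lo <= patient["age"] <= hi):
        return False
    egfr_min = trial.get("min_egfr")
    if egfr_min is not None and patient.get("egfr", 0) < egfr_min:
        return False
    return True
```

Each additional criterion trades sensitivity for specificity: a patient missing a lab value would be excluded here, so a production system would need a policy for incomplete data.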
MUSACT also was purposely limited in this pilot phase (again out of a primary interest in exploring feasibility/ acceptability) to notifying providers about trial opportunities only for patients with new diagnoses rather than both new and pre-existing diagnoses. Revising the matching algorithm to include pre-existing diagnoses likely would dramatically increase the volume of notifications to providers about trial opportunities. While an increased volume of notifications might increase referrals and accrual, it also might antagonize some providers. Allowing providers (and study teams) an option to include or exclude patients with pre-existing diagnoses might be a useful compromise.
MUSC’s thoracic oncology program agreed to be the testbed for MUSACT, and though MUSACT’s processes worked smoothly upon launch, few referrals were made through MUSACT to the thoracic oncology trial program before the thoracic oncology trials were de-listed from the system a year later. This result was expected: the great majority of new diagnoses of thoracic malignancy are made by the thoracic oncologists who also are the thoracic cancer study investigators, so they are already familiar with their program’s trials and likely have already presented relevant trial opportunities to their patients by the time the bill for the encounter is processed (the point at which MUSACT becomes aware of the new diagnosis). MUSACT’s e-mails advising these provider-investigators that each newly diagnosed thoracic cancer patient they had recently seen was potentially eligible for the large slate of active thoracic oncology trials (all listed in the e-mail) were therefore unhelpful to them. However, this testbed permitted proof of proper system operation and suggested feasibility and provider acceptability.
The subsequent experience in the lupus research program suggested that referrals generated by MUSACT’s e-mails came roughly equally from rheumatology colleagues as well as extra-disciplinary providers. MUSACT was found helpful by rheumatologists and other providers practicing outside of the dedicated lupus clinics who often diagnose lupus but may be unaware of the lupus clinical trial opportunities available for their patients. Among the lupus trials, the largest increase in patient screenings and enrollment when comparing pre-to post-MUSACT implementation was in the SLE Database Project, which increased from an average of 5.8 to 10.8 patients screened and 3.8 to 6.7 patients enrolled each month. This increase is not surprising given that the SLE Database Project is an observational study requiring only a diagnosis of lupus with very few exclusion criteria, making the MUSACT system ideal for identifying potential participants. An additional benefit of the increased enrollment into the SLE Database Project is that patients provide consent to be contacted for future studies, leading to additional recruitment opportunities.
The pulmonary hypertension study team’s experience with MUSACT, while short-lived, also underscored MUSACT’s potential for boosting cross-discipline referrals in particular.
Although time consumption in general was the most frequently cited concern in the provider survey, only 2% of respondents specifically cited concern about referral leading to too much time consumption later. Clearly, concern about the time required simply to review a MUSACT e-mail was greater than concern about the time required to facilitate accrual once preliminary screening suggests potential eligibility. Strategies to reduce the length of MUSACT’s e-mails – from publicity about the system, so that the e-mail’s explanatory preface can be reduced, to dynamic adjustment of the e-mail (paring it to the bare necessities for providers more experienced with the system) – may be helpful. While many providers expressed concern about the volume of their total incoming e-mail traffic, only 2 (0.6%) of the 336 providers e-mailed by MUSACT opted out (and only temporarily at that), and a few providers even suggested MUSACT send them reminder e-mails. However, since most providers want to see fewer e-mails, a potential strategy is to include in each provider-directed MUSACT e-mail not just the cases newly diagnosed in the last 24 h that match MUSACT-registered trials but rather the cases newly diagnosed in the last several days.
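The several-day digest proposed above could be sketched as a simple rolling window over matched cases; the tuple layout and the window length are assumptions for illustration:

```python
from datetime import date, timedelta

# Sketch of the proposed batching strategy: instead of e-mailing only
# cases diagnosed in the last 24 h, fold the last several days of
# matches into one digest per provider. The window length is a tunable
# assumption, not an actual MUSACT parameter.
def digest_window(matches, today, days=3):
    """Keep only matches diagnosed within the last `days` days,
    grouped by provider. `matches` is a list of
    (provider, patient, diagnosed_on) tuples."""
    cutoff = today - timedelta(days=days)
    out = {}
    for provider, patient, diagnosed_on in matches:
        if diagnosed_on >= cutoff:
            out.setdefault(provider, []).append(patient)
    return out
```

A longer window would let a provider who ignores one day’s message still see the case in the next digest, at the cost of longer e-mails, which is the tension the survey responses highlight.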
5. Conclusion
MUSACT’s e-mail-based “one-click” streamlined workflow for enabling providers to refer their patients to potentially relevant trials still presents a concerning amount of time consumption for some providers. However, pilot use of the system suggests the workflow is feasible and acceptable to most providers and study teams. Furthermore, MUSACT’s workflow appears to have potential for boosting accrual rates for at least some types of trials. In particular, cross-disciplinary referrals might be the system’s greatest potential for boosting accrual. Useful enhancements will include improving the matching of patients to trials and reducing the time providers must spend processing MUSACT’s messages.
Summary points.
What was already known
Patient accrual rate is the principal factor limiting clinical trial speed, contributing significantly to the extraordinary development costs for new interventions.
Accrual rates are low for many reasons, but the clinician’s awareness of available trials and willingness to invest the effort required to facilitate accrual are key factors.
What this study added to our knowledge
It appears feasible to develop a system for automatically notifying clinicians of investigational opportunities for their patients, and when such a system is integrated in non-interruptive and low-burden fashion into existing workflow, it may result in improved rates of trial candidate referrals, screenings, and accrual.
Some trials not feasible via traditional accrual mechanisms became feasible via the system described here.
Most clinicians notified of accrual opportunities by our system found its approach acceptable. Very few clinicians opted out.
Acknowledgments
Funding
No external funding was received for the work described herein.
Footnotes
Authors’ contributions
LBA conceived of the project, performed all software development, and was the principal author of the manuscript. JCO and DLK were key users who contributed to the writing of the manuscript.
Disclosures
The authors declare they have no competing interests.
Conflicts of interest
The authors are aware of no conflicts of interest.
REFERENCES
- 1. Joseph R. Viewpoints and concerns of a clinical trial participant. Cancer. 1994;74:2692–2693. doi:10.1002/1097-0142(19941101)74:9+<2692::aid-cncr2820741818>3.0.co;2-m.
- 2. DiMasi JA, Hansen RW, Grabowski HG. The price of innovation: new estimates of drug development costs. J. Health Econ. 2003;22:151–185. doi:10.1016/S0167-6296(02)00126-1.
- 3. Adams CP, Brantner VV. Estimating the costs of new drug development: is it really $802 million? Health Aff. 2006;25:420–428. doi:10.1377/hlthaff.25.2.420.
- 4. Vernon JA, Golec JH, DiMasi JA. Drug development costs when financial risk is measured using the Fama-French three-factor model. Health Econ. 2010;19:1002–1005. doi:10.1002/hec.1538.
- 5. Morgan S, Grootendorst P, Lexchin J, Cunningham C, Greyson D. The cost of drug development: a systematic review. Health Policy. 2011;100:4–17. doi:10.1016/j.healthpol.2010.12.002.
- 6. Mestre-Ferrandiz J, Sussex J, Towse A. The R&D Cost of a New Medicine. London, UK: Office of Health Economics; December 2012 [accessed 04.07.14]. Available at: http://news.ohe.org/wp-content/uploads/2012/11/RD-Cost-Exec-Sum-+-Contents.pdf.
- 7. Herper M. The truly staggering cost of inventing new drugs. Forbes. February 10, 2012 [accessed 04.07.14]. http://www.forbes.com/sites/matthewherper/2012/02/10/the-truly-staggering-cost-of-inventing-new-drugs/.
- 8. Munos B. Lessons from 60 years of pharmaceutical innovation. Nat. Rev. Drug Discov. 2009;8:959–968. doi:10.1038/nrd2961.
- 9. Comis RL, Miller JD, Aldigé CR, Krebs L, Stoval E. Public attitudes toward participation in cancer clinical trials. J. Clin. Oncol. 2003;21:830–835. doi:10.1200/JCO.2003.02.105.
- 10. Lara PN Jr, Higdon R, Lim N, Kwan K, Tanaka M, Lau DH, Wun T, Welborn J, Meyers FJ, Christensen S, O’Donnell R, Richman C, Scudder SA, Tuscano J, Gandara DR, Lam KS. Prospective evaluation of cancer clinical trial accrual patterns: identifying potential barriers to enrollment. J. Clin. Oncol. 2001;19:1728–1733. doi:10.1200/JCO.2001.19.6.1728.
- 11.Sateren WB, Trimble EL, Abrams J, Brawley O, Breen N, Ford L, McCabe M, Kaplan R, Smith M, Ungerleider R, Christian MC. How sociodemographics, presence of oncology specialists, and hospital cancer programs affect accrual to cancer treatment trials. J. Clin. Oncol. 2002;20:2109–2117. doi: 10.1200/JCO.2002.08.056. http://dx.doi.org/10.1200/JCO.2002.08.056. [DOI] [PubMed] [Google Scholar]
- 12.Umutyan A, Chiechi C, Beckett LA, Paterniti DA, Turrell C, Gandara DR, Davis SW, Wun T, Chen MS, Jr, Lara PN., Jr Overcoming barriers to cancer clinical trial accrual. Cancer. 2008;112:212–219. doi: 10.1002/cncr.23170. http://dx.doi.org/10.1002/cncr.23170. [DOI] [PubMed] [Google Scholar]
- 13.Simon MS, Brown DR, Du W, LoRusso P, Kellogg CM. Accrual to breast cancer clinical trials at a university-affiliated hospital in metropolitan Detroit. Am. J. Clin. Oncol. 1999;22:42–46. doi: 10.1097/00000421-199902000-00011. [DOI] [PubMed] [Google Scholar]
- 14.Kotwall CA, Mahoney LJ, Myers RE, Decoste L. Reasons for non-entry in randomized clinical trials for breast cancer: a single institutional study. J. Surg. Oncol. 1992;50:125–129. doi: 10.1002/jso.2930500215. [DOI] [PubMed] [Google Scholar]
- 15.Ellis PM. Attitudes towards and participation in randomised clinical trials in oncology: a review of the literature. Ann. Oncol. 2000;11:939–945. doi: 10.1023/a:1008342222205. [DOI] [PubMed] [Google Scholar]
- 16.Martel CL, Li Y, Beckett L, Chew H, Christensen S, Davies A, Lam KS, Lau DH, Meyers FJ, O’Donnell RT, Richman C, Scudder S, Tanaka M, Tuscano J, Welborn J, Wun T, Gandara DR, Lara PN., Jr An evaluation of barriers to accrual in the era of legislation requiring insurance coverage of cancer clinical trial costs in California. Cancer J. 2004;10:294–300. doi: 10.1097/00130404-200409000-00006. [DOI] [PubMed] [Google Scholar]
- 17.Baquet CR, Commiskey P, Mullins CD, Mishra SI. Recruitment and participation in clinical trials: socio-demographic, rural/urban, and health care access predictors. Cancer Detect. Prev. 2006;30:24–33. doi: 10.1016/j.cdp.2005.12.001. http://dx.doi.org/10.1016/j.cdp.2005.12.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Hunter CP, Frelick RW, Feldman AR, Bavier AR, Dunlap WH, Ford L, Henson D, Macfarlane D, Smart CR, Yancik R, et al. Selection factors in clinical trials: results from the community clinical oncology program physician’s patient log. Cancer Treat. Rep. 1987;71:559–565. [PubMed] [Google Scholar]
- 19.Lovato LC, Hill K, Hertert S, Hunninghake DB, Probstfield JL. Recruitment for controlled clinical trials: literature summary and annotated bibliography. Control. Clin. Trials. 1997;18:328–357. doi: 10.1016/s0197-2456(96)00236-x. http://dx.doi.org/10.1016/S0197-2456(96)00236-X. [DOI] [PubMed] [Google Scholar]
- 20.Go RS, Frisby KA, Lee JA, Mathiason MA, Meyer CM, Ostem JL, Walther SM, Schroeder JE, Meyer LA, Umberger KE. Clinical trial accrual among new cancer patients at a community-based cancer center. Cancer. 2006;106:426–433. doi: 10.1002/cncr.21597. http://dx.doi.org/10.1002/cncr.21597. [DOI] [PubMed] [Google Scholar]
- 21.Mansour EG. Barriers to clinical trials. Part III: Knowledge and attitudes of health care providers. Cancer. 1994;74(Suppl.):2672–2675. doi: 10.1002/1097-0142(19941101)74:9+<2672::aid-cncr2820741815>3.0.co;2-x. [DOI] [PubMed] [Google Scholar]
- 22.Ross S, Grant A, Counsell C, Gillespie W, Russell I, Prescott R. Barriers to participation in randomised controlled trials: a systematic review. J. Clin. Epidemiol. 1999;52:1143–1156. doi: 10.1016/s0895-4356(99)00141-9. http://dx.doi.org/10.1016/S0895-4356(99)00141-9. [DOI] [PubMed] [Google Scholar]
- 23.Wright JR, Whelan TJ, Schiff S, Dubois S, Crooks D, Haines PT, DeRosa D, Roberts RS, Gafni A, Pritchard K, Levine MN. Why cancer patients enter randomized clinical trials: exploring the factors that influence their decision. J. Clin. Oncol. 2004;22:4312–4318. doi: 10.1200/JCO.2004.01.187. http://dx.doi.org/10.1200/JCO.2004.01.187. [DOI] [PubMed] [Google Scholar]
- 24.Dyrbye LN, Shanafelt TD. Physician burnout: a potential threat to successful health care reform. JAMA. 2011;305:2009–2010. doi: 10.1001/jama.2011.652. http://dx.doi.org/10.1001/jama.2011.652. [DOI] [PubMed] [Google Scholar]
- 25.Morrison I. The future of physicians’ time. Ann. Intern. Med. 2000;132:80–84. doi: 10.7326/0003-4819-132-1-200001040-00013. http://dx.doi.org/10.7326/0003-4819-132-1-200001040-00013. [DOI] [PubMed] [Google Scholar]
- 26.Weiss KB. Managing complexity in chronic care: an overview of the VA state-of-the-art (SOTA) conference. J. Gen. Intern. Med. 2007;22(Suppl. 3):374–378. doi: 10.1007/s11606-007-0379-x. http://dx.doi.org/10.1007/s11606-007-0379-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Safford MM, Allison JJ, Kiefe CI. Patient complexity: more than comorbidity. The vector model of complexity. J. Gen. Intern. Med. 2007;22(Suppl. 3):382–390. doi: 10.1007/s11606-007-0307-0. http://dx.doi.org/10.1007/s11606-007-0307-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Derlet RW, Richards JR, Kravitz RL. Frequent overcrowding in U.S. emergency departments. Acad. Emerg. Med. 2001;8:151–155. doi: 10.1111/j.1553-2712.2001.tb01280.x. http://dx.doi.org/10.1111/j.1553-2712.2001.tb01280.x. [DOI] [PubMed] [Google Scholar]
- 29.Afrin LB, Oates JC, Boyd CK, Daniels MS. Leveraging of open EMR architecture for clinical trial accrual; AMIA Annu Symp Proc 2003; 2003. [accessed: 04.07.14]. pp. 16–20. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1480210/ [PMC free article] [PubMed] [Google Scholar]
- 30.Embi PJ, Jain A, Clark J, Harris CM. Development of an electronic health record-based clinical trial alert system to enhance recruitment at the point of care; AMIA Annu Symp Proc 2005; 2005. [accessed: 4.07.14]. pp. 231–235. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1560758/ [PMC free article] [PubMed] [Google Scholar]
- 31.Embi PJ, Jain A, Harris CM. Physicians’ perceptions of an electronic health record-based clinical trial alert approach to subject recruitment: a survey. [accessed: 04.07.14];BMC Med. Inform. Decis. Mak. 2008 8:13. doi: 10.1186/1472-6947-8-13. http://dx.doi.org/10.1186/1472-6947-8-13, Available at: http://www.biomedcentral.com/content/pdf/1472-6947-8-13.pdf. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Fisher ES, Whaley FS, Krushat WM, Malenka DJ, Fleming C, Baron JA, Hsia DC. The accuracy of Medicare’s hospital claims data: progress has been made, but problems remain. [accessed 04.07.14];Am. J. Public Health. 1992 82:243–248. doi: 10.2105/ajph.82.2.243. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1694279/pdf/amjph00539-0093.pdf. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Jollis JG, Ancukiewicz M, DeLong ER, Pryor DB, Muhlbaier LH, Mark DB. Discordance of databases designed for claims payment versus clinical information systems: implications for outcomes research. Ann. Intern. Med. 1993;119:844–850. doi: 10.7326/0003-4819-119-8-199310150-00011. http://dx.doi.org/10.7326/0003-4819-119-8-199310150-00011. [DOI] [PubMed] [Google Scholar]
- 34.Quan H, Li B, Saunders LD, Parsons GA, Nilsson CI, Alibhai A, Ghali WA. IMECCHI Investigators, Assessing validity of ICD-9-CM and ICD-10 administrative data in recording clinical conditions in a unique dually coded database. Health Serv. Res. 2008;43:1424–1441. doi: 10.1111/j.1475-6773.2007.00822.x. http://dx.doi.org/10.1111/j.1475-6773.2007.00822.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Demner-Fushman D, Chapman WW, McDonald CJ. What can natural language processing do for clinical decision support? J. Biomed. Inform. 2009;42:760–772. doi: 10.1016/j.jbi.2009.08.007. http://dx.doi.org/10.1016/j.jbi.2009.08.007. [DOI] [PMC free article] [PubMed] [Google Scholar]




