Keywords: Scale-up, Innovation, Training, Evaluation, Commercialization, Behavioral interventions
Abstract
The National Cancer Institute established a dissemination and implementation accelerator program called SPeeding Research-tested INTerventions (SPRINT) to improve the uptake of evidence-based interventions. The purpose of this study was to describe the origin, structure, and goals of the SPRINT program, and present evaluation results from the first two cohorts of the program. Qualitative (focus group) and quantitative (survey) data collected from participants were used to evaluate the course and inform program improvement efforts. The majority of the participants (over 90% in both cohorts) rated the course highly, and over 80% would recommend the course to other researchers. Most participants indicated knowing relatively little about business model concepts before SPRINT, but after SPRINT, nearly all respondents felt that they had at least “some” knowledge of each business model component. Participants also indicated that they learned about the product–market fit of their intervention and gained insights from customer discovery interviews that would enable them to make their intervention more “stakeholder focused”. Participants also indicated that they plan to use the knowledge and skills they gained during the SPRINT program in their future work. Feedback from participants was used by the management team to implement various program improvements to better serve the next cohort of participants. While behavioral researchers face significant barriers to commercializing their interventions, they recognize the importance of translating their research into practice. Training researchers to consider scale-up, implementation, and commercialization from the outset can help reduce the number of proven interventions that are never used in practice.
Implications
Practice: Practitioners can be key partners in the development and testing of interventions and should be engaged early in the process to increase the likelihood that the intervention will ultimately be taken to scale.
Policy: Agencies that fund behavioral research should explore ways to structure awards to facilitate dissemination and implementation and/or commercialization efforts.
Research: Future intervention research should incorporate a “designing for implementation” approach with input from real-world practitioners incorporated continuously through the development and testing process so that the final intervention is more likely to be put into practice.
BACKGROUND
Increasing the uptake of evidence-based cancer control interventions has been a long-standing mission of the National Cancer Institute’s Division of Cancer Control and Population Sciences (NCI/DCCPS). NCI’s Research-tested Intervention Programs (RTIPs) website (https://rtips.cancer.gov/rtips/index.do), established in 2003, features nearly 200 searchable evidence-based cancer control interventions (including those focused on diet, human papillomavirus vaccination, survivorship, and informed decision making) that are available at little to no cost. The website provides access to intervention materials, allowing clinicians and public health practitioners to easily adapt the programs and avoid having to develop materials from scratch. Providing access to evidence-based programs and associated programmatic materials through the RTIPs website was a good first step toward increasing the use of evidence-based interventions (EBIs), but NCI felt that more could be done to facilitate and speed the translation of research into practice.
To this end, in the summer of 2015 a small team from NCI participated in the Department of Health and Human Services (HHS) Ignite Accelerator, an innovation start-up program designed for HHS staff members who want to improve the way their program, office, or agency works. The team joined the Ignite Accelerator with the goal of finding ways to improve the uptake of RTIPs programs by creating mobile apps. Over the course of the program, the NCI team interviewed 34 researchers, clinicians, and public health practitioners to determine whether any RTIPs programs could be turned into mobile apps to increase uptake. However, the interviews revealed that mobile apps were not a high priority for these stakeholders. Instead, the team learned that researchers did not have a plan or process for implementing the programs they had tested. The majority of researchers interviewed also stated that they did not have the skillset needed to market their programs. Realizing that addressing these barriers would be a more efficient way to improve the uptake of evidence-based interventions than turning RTIPs interventions into apps, the team decided to focus on creating a dissemination and implementation (D&I) accelerator program for behavioral researchers.
In 2015, a few federal accelerator programs were available to scientists. The National Science Foundation’s I-Corps program, based on Stanford University’s “Lean LaunchPad” course developed by Steve Blank [1–3], was the first accelerator program specifically designed for scientists. The NCI Small Business Innovation Research (SBIR) program then started a similar program for SBIR grantees, with the aim of supporting mentoring and networking opportunities for investigators interested in commercializing promising biotechnologies. Although the I-Corps and SBIR programs are designed specifically for scientists, the NCI team determined that they did not adequately serve behavioral researchers, who have unique needs when it comes to commercializing their work. Therefore, NCI staff worked with VentureWell, the contractor for both NSF I-Corps and SBIR I-Corps, to adapt the program specifically for cancer control researchers. The result was the SPeeding Research-tested INTerventions (SPRINT) training program.
SPRINT program description
NCI-funded researchers are eligible to participate in SPRINT if they have a currently active (or recently completed) R01 grant that focuses on developing or testing a behavioral intervention. In order to apply to the program, eligible investigators have to assemble a team for their project. At a minimum, the team has to have a Principal Investigator (PI), who is the primary researcher behind the behavioral intervention, and an Entrepreneurial Lead (EL), often a graduate student or a postdoctoral researcher who helps the PI execute the work required for SPRINT (e.g., both the PI and EL are expected to conduct customer discovery interviews and prepare the weekly presentations). PIs can also decide to include an additional co-investigator on their team if they wish. PIs are also strongly encouraged—but not required—to have a Mentor as part of their team. Ideally, chosen Mentors are experienced entrepreneurs or business executives.
Once they have their team in place, PIs fill out a relatively brief application form on the SPRINT website (https://www.nci-sprint.com/how-to-apply.html), which asks them to describe their team, their intervention, and why they are interested in participating in the SPRINT program. SPRINT applicants then participate in a 20-min interview with SPRINT program staff. Teams selected to participate receive a small award to help defray the cost of travel to in-person training sessions as well as any expenses incurred during the “discovery” phase of the training.
After teams are accepted to the program, they participate in an “introductory webinar” that orients participants to the goals and expectations of the program and introduces them to the SPRINT course instructors. A few weeks after the introductory webinar, the teams come to the NCI Shady Grove campus in Rockville, Maryland for a 3-day kick-off meeting, which consists of in-person training, “out of the building” time for participants to conduct their first set of customer discovery interviews, and presentations from each team on their project and the progress they have made on their business model canvas to date.
Osterwalder’s Business Model Canvas (a template that organizes the fundamental components of a business model in a clear and systematic way) is designed to help teams brainstorm, articulate, and frame “hypotheses” regarding their business venture, which they then test and update by conducting customer discovery interviews. The Business Model Canvas synthesizes a wide range of different business model conceptualizations into one comprehensive reference model and organizes the key components of a business into a simple visual diagram that encourages entrepreneurs to consider all the essential aspects of their business, as well as the ways these pieces fit together. The hypotheses teams generate while constructing and modifying their business model canvas can help inform both the targets and content of customer discovery interviews. The hypotheses reveal the questions that teams need to answer to verify different components of their business model, which in turn helps teams identify the individuals they need to interview in order to obtain the information they need. To test their hypotheses, teams generally need to interview a range of different stakeholders in the health ecosystem, including patients, health service providers, insurance representatives, clinic or hospital executives, staff at nonprofit and faith-based organizations, and government employees.
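To make this workflow concrete, the hypothesis-testing loop described above can be pictured as a simple bookkeeping structure. The Python sketch below is purely illustrative (SPRINT itself involves no software, and all names here are hypothetical): it pairs canvas components with hypotheses and tallies how customer discovery interviews bear on each one.

```python
# Illustrative sketch only (hypothetical names; SPRINT involves no software):
# pairing business model canvas components with testable hypotheses and
# tallying what customer discovery interviews reveal about each one.
from dataclasses import dataclass

CANVAS_COMPONENTS = [
    "key partners", "key activities", "key resources", "value proposition",
    "customer relationships", "channels", "customer segments",
    "cost structure", "revenue streams",
]

@dataclass
class Hypothesis:
    component: str          # the canvas component this hypothesis belongs to
    statement: str          # the assumption to test, phrased as a claim
    supporting: int = 0     # interviews consistent with the claim
    contradicting: int = 0  # interviews that contradict it

    def __post_init__(self) -> None:
        # Guard against typos: component must be one of the nine canvas cells.
        assert self.component in CANVAS_COMPONENTS, self.component

    def record_interview(self, supports: bool) -> None:
        """Log the outcome of one customer discovery interview."""
        if supports:
            self.supporting += 1
        else:
            self.contradicting += 1

    def suggests_pivot(self) -> bool:
        """True when the evidence is running against the hypothesis."""
        return self.contradicting > self.supporting

# Example: testing an assumption about the customer segments component.
h = Hypothesis(
    component="customer segments",
    statement="Community clinics, not individual patients, will adopt the program.",
)
h.record_interview(supports=True)
h.record_interview(supports=False)
print(h.suggests_pivot())  # False: evidence is mixed, so keep interviewing
```

In this framing, a “pivot” corresponds to revising a hypothesis, and the associated cell of the canvas, once the interview evidence runs against it.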
After the in-person kick-off meeting, SPRINT teams participate in weekly webinars, which consist of a lecture by the course instructors, followed by presentations from each team on the development of their business model and what they have learned from interviews conducted during the previous week. Office hours with instructors are also offered every week for any team that wants further mentoring and project coaching. Over the course of the SPRINT program, each team has to interview over 30 stakeholders. Participants spend an average of 12 hr per week on the course, although this can vary widely, from less than 5 to more than 20 hr per week, depending on the individual’s role on the team, the way the team divides up the work, and how many interviews the team conducts over the course of the program.
At the end of the program, SPRINT teams travel back to the NCI campus for a 2-day close-out meeting, where each team gives their final presentation along with a “journey video” that provides an overview of their experience in the SPRINT program. During the close-out meeting SPRINT teams also participate in focus groups and have the chance to set up meetings with NCI staff to further discuss their projects. Table 1 provides an overview of the SPRINT course schedule. After the course ends, SPRINT program staff continue to keep in touch with teams via an alumni listserv, create opportunities for past participants to present their work at various venues (such as the Annual Conference on the Science of Dissemination and Implementation in Health and the Annual Meeting of the Society of Behavioral Medicine), and keep researchers informed about ways to further their commercialization or dissemination work (e.g., by setting up webinars about funding available through the SBIR and Small Business Technology Transfer (STTR) programs).
Table 1.
SPRINT course schedule
| Week | Topic | Interviews and deliverables |
|------|-------|-----------------------------|
| 1 | Business Models, Mission Models, and Customer Discovery | Initial stakeholder interviews (5 initially required; increased to 10 after the second cohort); preliminary elevator pitch on value proposition and customer segments |
| 2 | Stakeholder Ecosystems | Interview a minimum of five stakeholders; revised elevator pitch delivered during webinar |
| 3 | Distribution Channels | Interview a minimum of five stakeholders; revised elevator pitch delivered during webinar |
| 4 | Customer/Stakeholder Relationships | Interview a minimum of five stakeholders; revised elevator pitch delivered during webinar |
| 5 | Revenue Streams | Interview a minimum of five stakeholders; revised elevator pitch delivered during webinar |
| 6 | Key Activities and Key Partners | Interview a minimum of five stakeholders; revised elevator pitch delivered during webinar |
| 7 | Key Resources and Costs | Interview a minimum of five stakeholders; revised elevator pitch delivered during webinar; begin creating 2-min team journey video to present at close-out meeting |
| 8 (Close-out Meeting) | Lessons Learned and Next Steps | Interview a minimum of five stakeholders; final elevator pitch; 2-min team journey video |
SPRINT logic model
Figure 1 describes the SPRINT logic model, including the relevant contextual factors, inputs, activities, reach, outputs, and goals of the program. For the purposes of this evaluation, we focused on the following short-term outcomes: (i) investigators gain an understanding of business concepts, (ii) teams learn about the product–market fit of their intervention, and (iii) teams gain the knowledge/skills needed to commercialize and make their program more “stakeholder focused.” We also discuss preliminary evidence for progress toward select medium-term outcomes, including (i) participants experience a paradigm shift in the way they approach the development and implementation of evidence-based interventions, (ii) behavioral scientists have greater capacity to translate their research into practice, and (iii) more interventions are designed with implementation/commercialization in mind from the outset. Achieving these short- and medium-term goals is expected to lead to an eventual increase in the number of evidence-based interventions in the marketplace, which should in turn lead to the ultimate goal of the program: improved population health.
Fig 1.
SPRINT logic model. The logic model outlines the inputs, processes, outputs and goals of the SPRINT training program, as well as relevant contextual factors.
METHODS
SPRINT program improvement efforts draw from the “Plan-Do-Study-Act (PDSA) cycle,” a blueprint for testing a program change by developing a plan to test the change, carrying out the test, observing the consequences, and determining what modifications should be made to the program [4]. Following this model, the SPRINT program is implemented using an iterative approach, informed by evaluation results as well as real-time feedback and observation about specific components of the program, to determine whether desired results (as outlined in the logic model) are being achieved. Three rounds of the SPRINT program have been completed to date, with the first cohort completing the program in the summer of 2016, the second cohort in the spring of 2017, and the third cohort in the spring of 2018. A mixed-methods approach that collects both qualitative (focus group) and quantitative (survey) data from participants is used to evaluate the program. The evaluation results from the first two cohorts are presented here. This analysis was reviewed and deemed exempt from IRB review by the Chesapeake IRB (Pro00030081).
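As a schematic illustration only, one PDSA iteration can be sketched as a simple loop; the functions below are hypothetical stand-ins, not actual SPRINT tooling, and exist solely to show how the plan, do, study, and act steps connect.

```python
# A minimal sketch of one Plan-Do-Study-Act iteration, assuming hypothetical
# stand-in callables for running a cohort and collecting feedback.

def pdsa_iteration(planned_changes, run_cohort, collect_feedback):
    """Run one PDSA cycle: apply planned changes (Do), gather evaluation
    data (Study), and return the changes worth keeping (Act)."""
    observations = run_cohort(planned_changes)       # Do: pilot the changes
    feedback = collect_feedback(observations)        # Study: evaluate results
    return [c for c in planned_changes if feedback.get(c)]  # Act: keep what worked

# Illustrative usage with stand-in callables:
changes = ["one-on-one instructor sessions", "alumni panel at close-out"]
kept = pdsa_iteration(
    changes,
    run_cohort=lambda cs: {c: "piloted" for c in cs},
    collect_feedback=lambda obs: {c: True for c in obs},  # pretend both worked
)
print(kept)  # both changes carried into the next cohort's plan
```

The output of one iteration (the changes worth keeping) feeds the planning step of the next, mirroring how feedback from each cohort informs the one that follows.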
Data collection
SPRINT participants filled out a baseline web survey at the beginning of the course that asked them to provide basic demographic information and describe their experience with, and knowledge of, commercialization. During the close-out meeting, participants were invited to participate in focus groups, where they were asked to discuss their general experience in the course, the role of each team member in the process, lessons learned around conducting customer interviews, and their thoughts regarding the possibility of commercializing their intervention. After the course ended, a final web survey was sent out asking participants to rate various aspects of the course, suggest course improvements, describe the current status of their project and potential next steps, assess the knowledge they gained about business model components as a result of the course, and describe the impact they expect the course to have on their future work.
RESULTS
The results of the baseline surveys revealed that most SPRINT participants were female (82% in Cohort I and 59% in Cohort II) and white (86% in Cohort I and 88% in Cohort II). The data also indicated that most participants had very little experience with commercialization activities prior to their participation in the SPRINT program, although a few participants did have limited experience with interviewing potential customers about a product, service, or technology; participating in industry funded research; contributing ideas utilized by a company to improve a product, service, or process; and conducting market research.
The response rate for the postcourse evaluation survey was high in both cohorts (81% and 78%, respectively). Overall, SPRINT received very high ratings from both cohorts in the postcourse evaluation survey, with over 90% of Cohort I respondents rating the course as “Excellent” or “Very good” and over 92% of Cohort II respondents rating the course as “Very good” or “Good” (the response options were slightly different in the two surveys). A few respondents in each cohort rated the course as “Fair” but none felt that the course was “Not good.” Respondents also overwhelmingly agreed or strongly agreed that they gained new skills or new knowledge as a result of SPRINT, and that they felt confident applying what they learned from the course (with over 90% of respondents in each cohort endorsing those statements). Most participants in both cohorts also agreed or strongly agreed that course activities were appropriate, and that the course was relevant to their needs/interests.
Although more than 90% of Cohort I participants felt that the teaching strategies and instructor feedback were appropriate, agreement with these items was lower among Cohort II participants, at 80% for teaching strategies and 67% for instructor feedback. Still, the majority of respondents in both cohorts (86% in Cohort I and 84% in Cohort II) indicated that they would recommend the course to other researchers, with one participant writing “Absolutely. We need more researchers to think like this.” The participants who said they were unsure whether they would recommend the course cited either (i) the significant time commitment required or (ii) the fact that the course might not be a good fit for some researchers as the main reason for their reluctance.
Short-term outcome 1: investigators gain an understanding of business concepts
Scientists who wish to commercialize their work need to understand business concepts because a proven, effective intervention is not enough on its own; a business is more than the product or service it sells. Factors such as the pricing model or the distribution channels used can also determine whether a business ultimately succeeds, so it is vital for researchers to have a deep understanding of these critical components [5]. The lectures in the SPRINT program are focused on the nine components of Osterwalder’s Business Model Canvas: key partners, key activities, the value proposition, customer relationships, customer segments, key resources, channels, cost structure, and revenue streams [6]. Most participants indicated knowing relatively little about business model concepts prior to SPRINT (especially in regard to value propositions, revenue streams, and cost structures). After SPRINT, almost all respondents felt that they had at least “some” knowledge of each business model component, and most felt they had a “moderate amount” or “great deal” of knowledge about each component. However, even though knowledge of each business model component increased substantially, survey results also indicate that participants could use more information regarding cost structure, revenue streams, and key resources (Fig. 2).
Fig 2.
Business model knowledge gain in Cohorts I and II. This figure shows participants’ self-reported assessment of their knowledge of individual business model components before and after participation in SPRINT.
Short-term outcome 2: investigators learn about the product–market fit of their intervention
Survey results suggest that the program was successful in increasing researchers’ ability to assess the product–market fit of their interventions. Participants in both cohorts indicated that they learned a great deal about identifying partners to help commercialize their program/product, assessing their program or product’s readiness for implementation, and identifying and validating key customer segments for their program or product (with Cohort I participants also indicating they learned a lot about identifying and validating the market for their program or product). However, both cohorts indicated that they could still use more information on getting investors interested in supporting their program or product, developing a scalable business model for their program/product, and identifying a viable commercialization path for their program or product.
Short-term outcome 3: teams gain knowledge/skills needed to make their intervention more “stakeholder focused”
Teams were required to conduct at least 30 customer discovery interviews; most teams in both cohorts conducted between 30 and 45, with two teams in each cohort conducting 50 or more. Interviews are a central component of the course, and even when they do not yield insights into how an aspect of the business model can be improved or lead to specific “pivots” in the commercialization approach, they still have value in getting participants comfortable with conducting these kinds of interviews and talking to different stakeholders. Although the interview requirement was demanding, participants noted during the focus group sessions that interviewing people outside their fields “was transformative,” that it taught them how to “engage these stakeholders and figure out what questions [researchers] should be asking,” and that it was “critical for making sure [the intervention] actually meets the needs and demands of practitioners and patients.” In light of this feedback, a decision was made to require future cohorts to interview at least 40 stakeholders.
By going through the customer discovery process, all teams were able to identify specific changes they could make to improve their intervention and make it more responsive to the needs of stakeholders, whether in terms of intervention content or in terms of delivery and implementation. For example, one team realized that they needed to “adapt the intervention to be simpler and more flexible” in order to meet user needs, while another team decided to switch from a “business model to a mission-based model,” and a third team concluded that they would need to tier their pricing structure “to allow for more consumer choice,” noting that the interviews they did in the SPRINT program led them “to develop more targeted models for different price points.”
Medium-term goals
Evaluation data indicate that the SPRINT program is also well positioned to achieve several of its medium-term goals, including increasing the number of evidence-based interventions in the marketplace. For instance, four teams in Cohort I and three teams in Cohort II indicated that they plan to seek additional funding (e.g., through an SBIR grant) to continue commercializing their intervention, and one team from Cohort I has already established a limited liability company (LLC) to deliver its intervention in the marketplace.
Data also indicate that the SPRINT program is building capacity among behavioral scientists for translating research into practice and beginning to create a paradigm shift in the way researchers approach intervention development, which in turn may lead to more interventions being designed with implementation and commercialization in mind from the outset. When asked how SPRINT might affect their future work, most respondents in Cohort I agreed or strongly agreed that SPRINT enhanced their capabilities to plan and design interventions for implementation, and that they plan to use what they learned from the course in the design of future interventions (as well as in their career and research program more generally). Fewer respondents agreed that the program increased their interest in working with a technology-based start-up or that they will use information and ideas from the course in their teaching (however, some participants, particularly mentors, do not teach and indicated that the question was not applicable). The responses to these items for Cohort II looked generally similar, in that most respondents planned to use ideas from the course in their future work and felt that the program enhanced their ability to plan and design interventions for implementation (Fig. 3).
Fig 3.
Perceived longer-term impact of SPRINT. This figure shows participants’ responses to survey questions asking them to assess how they will use SPRINT training in their work and how they feel SPRINT has impacted their plans, abilities, and interests.
Comments from the focus groups provide evidence to suggest that completing the SPRINT program inspires a paradigm shift in the way investigators approach their research. For example, one participant stated, “I learned a lot - it kind of reshaped the way I look at interventions and delivery of services in general in public health. I never saw it [before] as selling a product… I look forward to applying some of the things I learned in my personal research”, while another participant shared that their “dissemination plans are going to be a whole lot different now. And that’s exciting.”
Impact of evaluation
Following the “PDSA cycle” model for quality improvement, the SPRINT management team engages in an ongoing process of data collection and gradual process improvement. Feedback gathered from participants during and after each cohort is discussed during the planning phase for the next cohort, and changes to address identified issues are tested in the subsequent cohort, after which feedback is again gathered to determine whether the issue persists. For instance, after receiving feedback from the first cohort that communication during the application and initial interview process could be improved, program staff and instructors attempted to provide the second cohort with much more information regarding course logistics and expectations ahead of time. Feedback from Cohort II suggests that although these efforts ameliorated the issue to an extent, there is still room for improvement in communication processes, and that participants would appreciate even greater lead time on course logistics so they can better plan their travel and work schedules.
The feedback received from Cohorts I and II also indicated that participants felt that there was a lack of time for networking and working with members of other teams. To address what participants felt was a “missed opportunity,” time for informal socializing was built into the first day of the kick-off meeting for Cohort III in order to give participants a chance to talk to one another and learn more about each other’s projects. Additional changes made for Cohort III based on feedback from the first two cohorts included restructuring the kick-off meeting in order to reduce the time participants spent traveling to in-person interviews in the Washington D.C. area, and providing participants with some resources and ideas for setting up interviews with relevant stakeholders in the D.C. metro area ahead of the kick-off meeting. Participants expressed a need for assistance with setting up interviews because many of them were based in other states and were not as familiar with organizations in the D.C. metro area.
Additionally, after some participants confessed that they did not find the weekly webinars (which consist of a lecture and group presentations) engaging, the instructors and management team began exploring different formats for the weekly sessions to make them more effective. One design that was piloted with Cohort III participants replaced a few of the group presentation sessions (where each team only gets a few minutes to present and time for feedback is limited) with dedicated one-on-one meetings between the instructors and each team, which allowed for more in-depth discussion and guidance. The SPRINT management team received some positive feedback from participants via email after the first session that used this format and decided to apply the change to two additional weekly sessions.
After receiving feedback from the first two cohorts that it would be helpful to hear about the experiences of behavioral scientists who are engaging in this type of work, the management team also decided to add a panel featuring alumni from the first two cohorts to the close-out session for Cohort III where former participants could discuss their experience and what they are now doing with their interventions. The panel was well-received and helped generate ideas and facilitate discussion between current and former participants, so the management team formally added the alumni panel into the course syllabus for future cohorts. In addition to making SPRINT more responsive to the needs of the participants (who felt they could benefit from learning about the experiences of researchers who are similar to themselves), these alumni panels also help the program make progress toward its goals of developing a strong alumni network and establishing a cadre of scientists who are interested in translating research into practice. The program also bolsters the alumni network and sustains interest in commercialization topics among past participants by maintaining an alumni listserv that keeps members informed of relevant opportunities (funding, training, etc.), as well as by organizing panels of SPRINT trainees at scientific conferences (which has the additional benefit of increasing interest in and awareness of SPRINT among the larger research community).
Future efforts
In addition to the changes that have already been implemented, the SPRINT management team recognizes that additional efforts are needed to improve both the program itself and the way in which it is assessed. For example, although the initial rounds of the program were pilots intended to demonstrate the feasibility and potential impact of this type of training, expanding the reach of the program and ensuring greater participant diversity will be priorities for the management team going forward. Additionally, there is a need to assess program outcomes in a more concrete manner. Currently, evidence is only available regarding the SPRINT program’s impact on proximal outcomes, such as changes in knowledge, attitudes, and intentions. However, there are plans to follow up with each cohort once a reasonable amount of time has passed (approximately 3 years) in order to assess actual changes in participants’ behaviors, career plans, research activities, or teaching/mentoring as a result of the program. The planned survey will ask participants to describe any steps they have taken to secure additional funding or otherwise pursue the dissemination or commercialization of their intervention. It will also ask respondents to provide specific examples of the ways in which they have used information from the course in their work.
Thoughts for the field
The SPRINT program has become an important new tool in NCI’s effort to increase the dissemination and implementation of evidence-based behavioral interventions. Not all projects are suitable or ready for commercialization, so there is no expectation that every team that completes the SPRINT program will decide to pursue commercialization. However, even when researchers conclude that commercialization is not a viable path for their intervention, they still walk away from the training program with a new vocabulary, a new set of skills, and a different way of conceptualizing their research. They are also better oriented to the larger context in which their intervention would ultimately be deployed, and have an appreciation for the fact that whether their intervention gets adopted and used depends not only on its effectiveness, but also on the extent to which it addresses the needs of multiple stakeholders—including payors, service providers, and other important actors in the implementation process. By training researchers to design interventions with scale-up, implementation, and commercialization in mind from the outset, NCI aims to reduce the number of interventions that are shown to be effective but are never used in practice because they lack product–market fit and are not responsive to the needs of decision makers.
Some of the feedback we received from SPRINT participants during the course evaluation has implications for attempts to commercialize effective, evidence-based interventions as well as for D&I efforts more generally. For example, as valuable as the participants felt the training was for them, several noted that the course would be even more helpful for investigators earlier on—both earlier in their careers and earlier in the intervention development process. They felt that thinking about commercialization and scale-up, and talking to potential stakeholders, would be most valuable before researchers have started developing and testing their intervention, so they can design their intervention and implementation strategy with the needs of their customers in mind. Universities could consider integrating commercialization training into their PhD programs or offering SPRINT-like programs as part of their early-career development initiatives to better prepare their students and staff for implementing, disseminating, and scaling their research.
Participants also discussed the challenges they would face in trying to pursue commercialization activities after the SPRINT course was over. One theme that emerged was the reality of time constraints and competing priorities. Participants noted that it would be very difficult to pursue commercialization, or even D&I activities, while fulfilling their responsibilities as researchers or professors. They felt that in order to pursue commercialization they would either have to “quit their day job” to focus on a business full-time, or they would have to squeeze in their commercialization efforts between other tasks, such that any progress on the commercialization work would necessarily be slow and incremental.
Recognizing these challenges and finding ways to enable researchers to pursue commercialization and D&I work in conjunction with their other professional responsibilities will be critical for getting more evidence-based interventions to market. Participants stated that the ways academia and research funding are structured make it difficult to engage in commercialization, even for researchers who are motivated to do this work. Researchers are expected to focus on getting grants and publishing papers, while commercialization activities and even D&I efforts are not explicitly supported, encouraged, or incentivized (e.g., they do not count toward tenure). In addition, several participants noted that although the technology transfer offices at their institutions were very willing to talk to their teams about the interventions they were attempting to commercialize and gave them some helpful advice regarding intellectual property protection, they did not seem to be interested in the types of products behavioral interventionists have to offer and were unwilling to put resources behind them. The way research funding is structured might also be creating barriers to commercialization efforts. For example, a siloed funding environment that supports research on just one disease or condition at a time may work against market demands for products that serve multiple patient groups or address comorbid conditions. Similarly, traditional research designs—and traditional research grants—usually do not allow researchers the flexibility to significantly change their protocols or interventions based on feedback from their end users or other stakeholders, as would be encouraged in a business context.
SPRINT participants reported that despite their desire to have their interventions put into practice and to see their research have a real-world impact, lack of time, competing priorities, and limitations on what can be done with research funding make it very difficult to engage in the work that would be necessary to bring their products to market. Participants felt that to be able to incorporate commercialization and intervention scale-up into their jobs, their institutions would have to start recognizing the value of these activities, and funders would need to start structuring their awards to facilitate D&I and/or commercialization (e.g., by including funding for an additional year that would be used solely for dissemination activities).
However, aside from these structural barriers, the evaluation data revealed an even more profound obstacle to commercialization efforts in the behavioral intervention context: some participants felt a dissonance between their identity as researchers/academics and the focus on costs that is at the center of commercialization efforts. Participants simply do not think of themselves as entrepreneurs; they do not see themselves running a business, and they feel that on some level commercialization does not fit who they are. As one participant put it, “we’ve invested so many years being academics… do you want us to abandon the long pursuit of an academic career to build a business? [that] sounds threatening actually”. However, even with this reluctance to think about the business and cost aspects of their work, there was a realization among participants that if they really wanted their work to have a population-level impact, they would need to consider expanding their view of what their role is and the kind of work they need to engage in. As one of the participants noted, “[starting a business] isn’t why I got into this- at the same time I spent [a long time] developing this intervention. I would like it to make it out to the real world to help people”.
CONCLUSION
Evaluation results indicate that the SPRINT training program effectively increases behavioral researchers’ knowledge of commercialization concepts and improves their ability to make their interventions more “stakeholder focused,” thereby increasing the likelihood that those interventions will be adopted and implemented in real-world practice. Although there are significant challenges to commercializing behavioral interventions, teaching researchers commercialization concepts and encouraging them to consider scale-up, implementation, and commercialization from the outset could help reduce the number of interventions that are shown to be effective but never end up being widely used.
Acknowledgments
This study was funded by ICF, Inc. under contract with the National Cancer Institute (HHSN261201400002B; HHSN26100011).
Compliance with Ethical Standards
Conflicts of Interest: The authors declare that they have no conflicts of interest.
Human Rights: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Activities reported in this manuscript were deemed exempt from IRB oversight under clause 45 CFR 46.101(b)(4) of the Department of Health and Human Services regulation (Chesapeake IRB; Pro00030081).
Informed Consent: Exempt protocol.
Welfare of Animals: This article does not contain any studies with animals performed by any of the authors.
References
1. Blank S. Why the lean start-up changes everything. Harvard Bus Rev. 2013;91(5):63–72.
2. Blank S, Dorf B. The Startup Owner’s Manual: The Step-by-Step Guide for Building a Great Company. Pescadero, CA: K&S Ranch; 2012.
3. Ries E. The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. New York, NY: Crown Publishing Group; 2011.
4. Institute for Healthcare Improvement. Plan-Do-Study-Act (PDSA) Worksheet. http://www.ihi.org/knowledge/Pages/HowtoImprove/ScienceofImprovementTestingChanges.aspx. Accessed 18 November 2018.
5. Committee on Science, Space, and Technology. Innovation Corps: A Review of a New National Science Foundation Program to Leverage Research Investments. House of Representatives, Second Session. Washington, DC: U.S. Government Publishing Office; 2012.
6. Osterwalder A, Pigneur Y. Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers. Hoboken, NJ: John Wiley & Sons; 2010.