Author manuscript; available in PMC: 2015 Jun 1.
Published in final edited form as: Am J Prev Med. 2014 Jun;46(6):653–659. doi: 10.1016/j.amepre.2014.02.008

Pricing Health Behavior Interventions to Promote Adoption

Lessons from the Marketing and Business Literature

Kurt M Ribisl 1, Jennifer Leeman 1, Allison M Glasser 1
PMCID: PMC4447483  NIHMSID: NIHMS690532  PMID: 24842743

Abstract

The relatively high cost of delivering many public health interventions limits their potential for broad public impact by reducing their likelihood of adoption and maintenance over time. Practitioners identify cost as the primary factor in selecting which interventions to implement, yet researchers rarely disseminate cost information or consider its importance when developing new interventions. A new approach is proposed, whereby intervention developers assess what the individuals and agencies adopting their interventions are willing to pay and then design interventions that are responsive to this price range. The ultimate goal is to develop effective and affordable interventions, called lean interventions, that are widely adopted and have greater public health impact.

Introduction

Public health practitioners have been slow to adopt evidence-based prevention interventions (EBIs).1,2 To help address this problem, a growing number of scholars advocate applying lessons from the marketing and business literature.3–6 The private sector has developed a number of sophisticated approaches for pricing and developing products that researchers might apply to interventions to promote their adoption. The purpose of this paper is to (1) describe the role of cost as a barrier to adoption of EBIs; (2) detail approaches the private sector uses to understand consumer willingness to pay for products; and (3) recommend ways that intervention developers can apply these approaches to design interventions that are both efficacious and affordable.

Role of Cost in Public Health Interventions

An EBI's potential to improve health is determined not only by its efficacy but also by the extent to which it is adopted and implemented in practice.7 To ensure adoption and implementation, interventions need to be designed with the needs and constraints of practitioners and settings in mind.8 Although both researchers and practitioners place a high priority on intervention efficacy, researchers often give less attention to other factors that are important to practitioners. With the goal of achieving large effect sizes, researchers have created complex, costly interventions that are not affordable in practice.9 Although the focus on efficacy may be central to the discovery stage of the research continuum, as researchers progress to studies of effectiveness, they also need to consider factors such as cost, which are critical to ensuring broad-scale adoption and implementation.10,11

The costs of delivering interventions include the price of purchasing the program, if there is such a cost, and also the costs of supplies, equipment, space, and staff time. Some of these costs such as equipment and supplies translate into line-item costs in an agency's budget. Other costs, such as staff time and space, often represent opportunity costs––resources diverted from delivering different EBIs or performing other activities related to the organization's mission.

Failing to create interventions that target organizations can afford may be a central contributor to the underuse of EBIs in practice.10,12–14 In numerous studies, public health and other decision makers have identified the high cost and resource intensity of interventions as one of the primary barriers to their use.15,16 In one study, state-level public health practitioners ranked availability of adequate resources as the top priority in selecting interventions, with evidence of scientific effectiveness as the second highest.17

An EBI's cost affects not only its potential for widespread adoption but also organizations' ability to implement the intervention with fidelity and maintain it over time.7,18,19 It is remarkable that practitioners identify cost as a central criterion for selecting EBIs, yet researchers rarely measure cost and almost never query their potential customers about the affordability of the interventions they are developing. To promote greater adoption of interventions, researchers must become savvier about assessing intervention costs, understanding what the organizations adopting these interventions are willing to pay, and keeping interventions affordable. In the business world, lean manufacturing is used to focus on customer value, eliminate processes that do not add value, and create more efficient operations.20 There is a growing movement in healthcare to adopt “lean thinking” to control costs and improve quality.21,22 Public health organizations face funding constraints that demand a similar approach to making better use of available funding, necessitating a movement toward lean interventions.

Current Use of Cost Information in Intervention Development

When researchers do report on costs, they most frequently report the findings of cost-effectiveness analyses (CEAs) rather than the costs adopting organizations incurred in delivering the intervention. A well-done CEA compares interventions to identify those that yield the greatest benefit per unit of cost or, in other words, the biggest “bang for the buck.” Many CEAs are conducted from the perspective of society or a third-party payer, as opposed to the perspective of the implementing organization.23 Analyses performed from a societal or third-party perspective typically include costs incurred over an extended period of time and may report an intervention's cost effectiveness in relation to the widely accepted willingness-to-pay threshold of $50,000 per quality-adjusted life year (QALY) saved. Although this perspective may be relevant for some customers, such as those making decisions about what Medicare will cover, most customers want answers to a different question. They want to know what an intervention will cost their organization over the short term so they can assess whether it can be implemented within the constraints of their practice setting.24,25 When researchers do estimate an organization's actual costs to implement an intervention, they rarely conduct sensitivity analyses to assess the variation in costs that occurs when interventions are delivered in different settings or formats.26 To promote the adoption of EBIs in practice, researchers need to specify the full costs of implementing an intervention and develop a greater understanding of the price that organizations are willing to pay to implement it. In the business world, venture capitalists and other investors ask, Will the dogs eat the dog food?––that is, will the product sell at the recommended retail price? Similarly, considering the full price of an EBI, researchers should be asking, Are we offering something that organizations want and can reasonably afford to buy?
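To make this contrast concrete, the minimal Python sketch below computes a societal-perspective cost-per-QALY ratio alongside the short-term budget question an adopting organization actually faces. All of the figures (per-person cost, QALY gains, enrollment, agency budget) are hypothetical assumptions chosen only for illustration; an intervention can clear the $50,000-per-QALY threshold and still exceed a local agency's annual budget.

```python
# A minimal sketch, assuming hypothetical costs, QALY gains, enrollment, and
# budget figures (none are from the article or any published analysis).

WTP_THRESHOLD = 50_000  # widely cited willingness-to-pay threshold, $ per QALY

program_cost_per_person = 400.0   # assumed cost to deliver the program to one person
qalys_gained_per_person = 0.02    # assumed long-term health gain per person

# Societal / third-party payer question: is the cost per QALY below the threshold?
cost_per_qaly = program_cost_per_person / qalys_gained_per_person
verdict = "below" if cost_per_qaly < WTP_THRESHOLD else "above"
print(f"Cost per QALY gained: ${cost_per_qaly:,.0f} ({verdict} the ${WTP_THRESHOLD:,} threshold)")

# Adopting organization's question: what will this cost us this year, given our budget?
participants = 250            # assumed annual enrollment
annual_budget = 60_000.0      # assumed program budget of the adopting agency
total_cost = program_cost_per_person * participants
print(f"Annual delivery cost for {participants} participants: ${total_cost:,.0f} "
      f"(available budget: ${annual_budget:,.0f})")
```

With these assumed numbers, the program costs $20,000 per QALY gained yet still exceeds the agency's $60,000 annual budget, which is exactly the gap between the two perspectives described above.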

Researchers often conduct extensive formative research on the target audience and rely upon formal program planning models to identify and meticulously shape intervention components. However, they rarely build “ability to afford” assessments into this early phase. Thus, the current practice is comparable to an architect designing a house for a couple and asking them whether they would like various features such as a gourmet kitchen, wine cellar, patio, swimming pool, and detached three-car garage––without ever asking them about their construction budget. It is easy to design a house that the homeowners will love if there are no cost constraints, but that does not reflect reality for most homebuyers. To promote EBI adoption, evidence on what is efficacious needs to be balanced with evidence on what organizations are willing to pay. In other words, an approach is needed that helps strike the proper balance between the “wish list” and the “wallet.”

Lessons from the Marketing and Business Literature

Recognizing the need to balance EBI costs with adopters' budgets, the key question is, What is the best way to do this? Businesses around the world face this challenge every day as they develop products with the goal of neither overcharging nor undercharging consumers. If they overcharge, they will have low sales and high inventory; if they undercharge, they leave potential profit on the table. In response to this challenge, businesses have developed a number of marketing approaches to identify who makes the decision to use the product (the customers) and the types of products they need and want.27 We suspect that many, if not most, currently available interventions are not affordable to end users. A company that perennially creates overpriced products will eventually go out of business. However, researchers who create health behavior change interventions that have high efficacy but are too costly to be viable in the real world often have great success in securing grants, publishing their findings, and advancing in their fields, regardless of whether practitioners actually adopt their overpriced interventions.

Pricing from a Business Perspective

Businesses set prices by focusing on the product or the customers.28 In the product-led approach, engineering and manufacturing departments design and build what they think is a superior product because of its advanced technology or capability. Nagle and colleagues28 describe an example of a company that built very durable, high-quality office furniture that could last for 20 years but nevertheless had low market share. The company was not successful at selling its product because, although durability is a desirable quality in a piece of furniture, it is not what consumers in its target market valued. Its consumers were start-up companies that hoped to be acquired in the near future and cared more about the short-term balance sheet than owning long-lasting furniture. In the product-led model, product development leads to a certain cost, which leads to a price, and consumers purchase the product if they value it enough to pay the price (Figure 1). Nagle et al.28 argue that this approach is problematic because of the potential for consumers to undervalue and therefore under-purchase the product, resulting in wasted resources and low impact on the consumer market. However, this is the approach taken by most intervention researchers, who develop their intervention products, disseminate them without regard to price, and then wait for consumers (e.g., public health decision makers) to decide whether or not to expend the resources required to implement the intervention.

Figure 1. Alternative approaches to value creation.

Adapted from O'Connell JM, Griffin S. Overview of methods in economic analyses of behavioral interventions to promote oral health. J Public Health Dent 2011;71(1S):S101–S118.

The alternative, customer-led approach reverses this process.28 It starts with customers and an understanding of the value they place on the features of a product, which is used to set a price, and the product is then designed with these parameters in mind. Nagle and colleagues28 note that the goal is to make only those products that are profitable given their value to the target consumers. Thus, the company develops many promising product ideas but scraps those for which the perceived value, and therefore the price consumers are willing to pay, is too low. Researchers who apply this customer-led approach would spend more time during the formative stage of intervention development to better understand (1) the value of interventions to the individuals or organizations adopting them and (2) the price they are willing to pay. Researchers would then develop interventions that fit the organizations' budgets.

The business literature describes several approaches to understanding what people will pay for goods and services. Private-sector methods to determine price include cost-plus, competition-driven, and value-based pricing.28 Cost-plus, or cost-based, pricing is the most common and simplest form of pricing. Using this strategy, prices are set based on the cost of the product plus some percentage of the cost that serves as the profit. However, this strategy does not reflect consumer demand and can lead to overpricing when demand is low and underpricing when demand is high. Competition-driven, or share-driven, pricing involves considering competitors' prices in setting the price of one's own product. This generally involves lowering the price of the product to match the competition, even though its value could exceed the value of others' products.

A more strategic method, value-based pricing, starts with customers to determine the value they place on a product, and subsequently prices the product according to value.28 Unlike businesses, which set prices to maximize revenue and profits, researchers want to increase intervention impact by maximizing reach and adoption. Although a company might not care if the number of customers is very low as long as profit margins are high (called skim pricing in the business literature), a public health researcher would worry about this low population reach because it is unlikely to move the needle on population health.
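As a rough illustration of how these three strategies diverge, the minimal Python sketch below computes a candidate price under each rule. The delivery cost, competitor price, surveyed value, markup, and discounts are all hypothetical assumptions, not figures from the pricing literature.

```python
# A minimal sketch contrasting the three pricing strategies described above.
# The delivery cost, competitor price, surveyed value, markup, and discounts
# are hypothetical assumptions chosen only for illustration.

delivery_cost = 300.0     # assumed cost to deliver one intervention package
competitor_price = 350.0  # assumed price of a comparable existing program
surveyed_value = 450.0    # assumed value reported by surveyed decision makers

cost_plus = delivery_cost * 1.25              # cost plus a 25% markup
competition_driven = competitor_price * 0.95  # slightly undercut the competition
value_based = surveyed_value * 0.90           # priced just below perceived value

for label, price in [("Cost-plus", cost_plus),
                     ("Competition-driven", competition_driven),
                     ("Value-based", value_based)]:
    print(f"{label:<20s} ${price:,.2f}")
```

Only the value-based price is anchored to what customers report the product is worth to them; the other two ignore demand entirely, which is the weakness described above.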

An example of a customer-led approach that uses value-based pricing is the Kickstarter initiative (kickstarter.com). Kickstarter employs crowdfunding, a type of crowdsourcing that involves recruiting backers to fund mostly small-scale, promising, innovative projects within a specified period of time.29 Through this method, product developers gauge potential consumers' interest in the product and pursue only those projects that meet their fundraising goal. Like EBI developers, product developers using the Kickstarter platform have already generated ideas and planned for implementation and are ready to make their work available to those who will use it. For example, Air Quality Egg is a product that a community of designers launched on Kickstarter; it assesses air quality in the vicinity of one's home and allows the user to share the captured data with others on the web (kickstarter.com/projects/edborden/air-quality-egg). The project exceeded its fundraising goal of $39,000 by $105,592 after launching in March 2012. Although some comments from backers were critical of the concept, it is clear that many people highly valued the mission of the product and the capability of the developers to create a quality product. Applied to intervention development, determining whether decision makers believe that an intervention is worthwhile and feasible given the amount of money available can increase the intervention's viability and likelihood of adoption. The website quirky.com uses a similar process to vet new product ideas. The first step is a product evaluation, in which the concept is described and critiqued. The next step is engineering, in which the product must be successfully prototyped. Once the product passes the evaluation and engineering phases, quirky.com visitors participate in a pricing exercise that determines how much they would pay; if the gap between the perceived value and the production cost is too large, the product is stalled.30

Price Sensitivity and Willingness to Pay

Businesses assess both price sensitivity and willingness to pay to determine the viability of new products and their pricing. Price sensitivity is a market's responsiveness to a change in price, with a large response indicating a sensitive market and a small response an insensitive market.31 Although there are a variety of methods for assessing price sensitivity, the most common involves surveying consumers of a particular product on their preferences and intentions.28 Survey methods have advantages over experimental approaches and real-world field tests, in which the product is offered at various price points and the resulting sales are tallied: surveys are much cheaper, offer rapid results, and can be conducted before a product is designed.28 Buy-response survey methods aim to capture a consumer's willingness to pay for a product. Businesses rarely ask consumers an open-ended question such as, How much would you pay for this product? Instead, they describe the product and sometimes randomize consumers to view one of three to five different prices. Responses are reported on a scale of likelihood of purchase or by category of purchase intention and are then converted into estimates of the number of people in each response category who would actually purchase the product. These estimates can be used to predict the total number of products purchased and the revenue that would be generated at various price points.
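A minimal sketch of this conversion follows. The survey shares, the weights for translating stated intention into actual purchase, and the market size are all hypothetical assumptions, not data from the article.

```python
# A minimal sketch of a buy-response analysis. The survey shares, the weights
# for converting stated intention into actual purchase, and the market size
# are hypothetical assumptions, not data from the article.

# Share of respondents at each price point choosing each intention category.
survey = {
    250:  {"definitely": 0.40, "probably": 0.35, "maybe": 0.15, "no": 0.10},
    500:  {"definitely": 0.30, "probably": 0.30, "maybe": 0.20, "no": 0.20},
    750:  {"definitely": 0.20, "probably": 0.25, "maybe": 0.25, "no": 0.30},
    1000: {"definitely": 0.10, "probably": 0.20, "maybe": 0.25, "no": 0.45},
}

# Assumed conversion weights: how often each stated intention becomes a purchase.
conversion = {"definitely": 0.8, "probably": 0.4, "maybe": 0.1, "no": 0.0}

market_size = 2000  # assumed number of organizations in the target market

for price, shares in survey.items():
    purchase_prob = sum(share * conversion[cat] for cat, share in shares.items())
    adopters = purchase_prob * market_size
    revenue = adopters * price
    print(f"${price}: predicted adoption {purchase_prob:.0%}, "
          f"~{adopters:.0f} adopters, ~${revenue:,.0f} in revenue")
```

Plotting the predicted adoption share against price yields a purchase probability curve like the one shown in Figure 2.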

Figure 2 displays the results of a buy-response survey comparing two different interventions priced between $250 and $1,000. Intervention A remains fairly popular among potential buyers at all of the price points, and demand decreases linearly and fairly gradually with each higher price. Intervention B is less popular than Intervention A at all price points, and demand decreases dramatically when it is priced above $500. This tells potential intervention developers that they can price Intervention A at multiple levels and still anticipate strong demand, whereas Intervention B should be priced at $500 or less if they want it to be adopted widely. In addition, if only one intervention could be offered, Intervention A would be the better choice. Conducting buy-response surveys such as these would be useful for EBIs because it would allow researchers to determine what price range would be acceptable to different subgroups of decision makers. Researchers could then create a variety of customized packages/bundles of EBI components matched to the prices that different customers are willing to pay.6 Large private companies might be able to afford EBIs with more features than a local health department, for instance. The private company may want a worksite health-promotion program involving measurement of height and weight, blood draws for lipid testing, and one-on-one risk factor counseling. In contrast, the health department might prefer an online risk assessment with self-reported height and weight and tailored messages based on self-reported risk factors. Finally, the buy-response example focuses on a dollar cost but could also be adapted to describe varying amounts of staffing resources needed to deliver and sustain an intervention.

Figure 2. Purchase probability curve.

Adapted from O'Connell JM, Griffin S. Overview of methods in economic analyses of behavioral interventions to promote oral health. J Public Health Dent 2011;71(1S):S101–S118.

In addition to surveying decision makers on willingness to pay, researchers should prospectively collect cost data during intervention development to make this information transparent to potential adopters. O'Connell and Griffin32 suggest conducting a cost analysis to capture all the economic costs required to deliver an intervention, an approach also referred to as micro-costing. Understanding the costs associated with different intervention components can better equip researchers to anticipate and respond to decision makers' willingness to pay and to price interventions at levels that real-world customers can afford. In addition, having cost information helps customers make the case for the intervention's return on investment to higher-level decision makers. Table 1 provides a summary of existing challenges and recommendations to guide researchers in addressing cost in their intervention development.
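The minimal Python sketch below shows what such a micro-costing tally might look like; the line items, quantities, and unit costs are hypothetical placeholders rather than values from any published cost analysis.

```python
# A minimal micro-costing sketch: tally every resource an adopting organization
# would expend to deliver an intervention. Line items, quantities, and unit
# costs are hypothetical placeholders.

cost_items = [
    # (description, quantity, unit cost in dollars)
    ("Health educator time (hours)",        60,  35.00),
    ("Program coordinator time (hours)",    20,  30.00),
    ("Participant materials (per person)",  50,  12.00),
    ("Meeting room rental (sessions)",       6,  75.00),
    ("Screening equipment (units)",          2, 120.00),
]

participants = 50  # assumed cohort size

total = 0.0
for name, qty, unit_cost in cost_items:
    line_total = qty * unit_cost
    total += line_total
    print(f"{name:<38s} {qty:>4} x ${unit_cost:>7.2f} = ${line_total:>9.2f}")

print(f"{'Total delivery cost':<38s} {'':>17s} ${total:>9.2f}")
print(f"Cost per participant: ${total / participants:.2f}")
```

Reporting a component-level tally like this, alongside willingness-to-pay findings, gives adopting organizations the short-term budget picture they need.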

Table 1. Status quo and proposed solutions for fixing the disconnect between practitioners and intervention developers

Practitioners

Status quo: Have limited budgets, staffing, and other resources
Proposed solution for researchers: Engage in customer research to identify the resources available to practitioners and their willingness to pay for EBIs3

Status quo: Must accommodate varying priorities, budgets, and staffing levels
Proposed solution for researchers: Provide a pool of EBIs that vary in cost and resource requirements4

Status quo: Need EBIs that are easy to adapt to the local context and implement in practice
Proposed solution for researchers: Design EBIs to be lean and maximize reach

Status quo: Need to know the costs of using an EBI
Proposed solution for researchers: Collect data on the costs of adapting, implementing, and delivering EBIs24

Intervention developers/researchers

Status quo: Are rewarded with publications, grants, and promotions for developing maximally effective EBIs regardless of EBI cost, staffing resources, or implementation feasibility
Proposed solution for academic institutions/funders: Reward work that contributes to all phases of the research spectrum: initial discovery, refinement, further development, and delivery to practice7

Status quo: Have few incentives to develop EBIs that meet practitioner needs and become widely adopted
Proposed solution for academic institutions/funders: Give priority to proposals that design interventions with scalability in mind8,33

Status quo: Have limited information about practitioners' resource constraints
Proposed solution for academic institutions/funders: Fund customer research5 to understand end-user budgets and resources, including buy-response studies

EBI, evidence-based intervention

Diabetes Self-Management Example

Diabetes self-management (DSM) interventions provide an example of how researchers might use value-based pricing strategies to increase adoption. Many researcher-developed DSM interventions involve multiple in-person contacts over extended periods of time. The resources required to implement these interventions are a poor fit with the settings that provide diabetes care.9

If the goal is to improve DSM, then researchers need to rethink how they design interventions so as to increase their adoption in practice. Researchers first need to identify the agencies and organizations that provide care to individuals with diabetes, particularly those with the greatest need––that is, their potential customers. Potential customers may include, for example, health departments, Federally Qualified Health Centers, health insurers, disease management organizations, or physician practices. Researchers then need to identify the individuals within these organizations who are responsible for making the decision to adopt new programs. They would then survey representative decision makers from one or more targeted organizations to identify the amount they are willing to pay for specific DSM interventions and the factors that influence that amount, such as the evidence supporting intervention effects.33,34 Surveys might present a menu of DSM interventions with varying resource requirements––differing in mode of delivery, type and number of professionals involved, included materials, and number of contacts––and ask respondents to rate their affordability on a Likert scale.

For organizations that account for per-unit costs, such as health insurers and disease management organizations, surveys could ask directly about the amount the organization is willing to pay per client or patient. For other organizations, the survey may need to approach costs less directly by asking about the resources expended on existing disease management programs. Researchers could use findings from these surveys to design interventions that better fit the price their customers will pay. They may design an intervention for a single type of customer or customize it to fit the resources of different customers. For example, they might design a DSM intervention for health departments that a health educator could deliver through six group sessions and educational materials. For physician offices, they might package the same content plus a personal DSM coach and access to tailored materials on a website, as sketched below.
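A minimal sketch of this kind of bundle-to-budget matching follows; the component costs and willingness-to-pay budgets are hypothetical illustrations, not estimates from the DSM literature.

```python
# A minimal sketch of matching intervention bundles to what different customers
# can pay. Component costs and budget figures are hypothetical illustrations.

component_costs = {
    "six group sessions": 1800,
    "educational materials": 300,
    "personal DSM coach": 2500,
    "tailored web materials": 900,
}

bundles = {
    "health department": ["six group sessions", "educational materials"],
    "physician practice": ["six group sessions", "educational materials",
                           "personal DSM coach", "tailored web materials"],
}

# Assumed willingness to pay, e.g., from a buy-response survey of each customer type.
budgets = {"health department": 2500, "physician practice": 6000}

for customer, parts in bundles.items():
    price = sum(component_costs[p] for p in parts)
    verdict = "fits" if price <= budgets[customer] else "exceeds"
    print(f"{customer}: bundle cost ${price:,} {verdict} the ${budgets[customer]:,} budget")
```

The same survey-derived budgets that drive this comparison would also tell researchers which components to drop to keep a bundle lean enough for a given customer.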

Conclusions

The current approach to developing EBIs favors high-efficacy, high-cost programs. Unfortunately, many organizations cannot afford to pay for these complex and intensive interventions, and as a result many EBIs are not adopted as widely as they could be. Fortunately, the business literature contains pricing approaches that can be used to survey the people who decide whether to adopt or purchase an intervention or program. Even before the intervention is fully developed, decision makers can be presented with several programs at specific prices and asked to indicate their interest. We are not aware of anyone using these approaches to develop EBIs but believe they have potential, especially given recent calls in the literature for applying lessons from the marketing and business literature.3–6 Willingness-to-pay and cost data should be published to fuel the development of more realistic and scalable interventions. Researchers will likely be astonished at how meager real-world intervention budgets are, which could trigger the development of lean interventions that conform to the limited budgets of many health organizations.

Acknowledgments

The authors would like to thank Alexis Moore and Marissa Hall for their thoughtful comments on an earlier version of this paper. The authors would also like to thank Noel Brewer for many insightful suggestions that helped clarify our thinking and improve the arguments presented in this paper. 4CNC: Moving Evidence into Action, a collaborating site in the Cancer Prevention and Control Research Network (Grant U48/DP001944) from the CDC and the National Cancer Institute, provided funding for this article.

Footnotes

No financial disclosures were reported by the authors of this paper.

References

1. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health. 2009;30:175–201. doi: 10.1146/annurev.publhealth.031308.100134.
2. Hannon PA, Fernandez ME, Williams RS, et al. Cancer control planners' perceptions and use of evidence-based programs. J Public Health Manag Pract. 2010;16(3):e1–8. doi: 10.1097/PHH.0b013e3181b3a3b1.
3. Dearing JW, Kreuter MW. Designing for diffusion: how can we increase uptake of cancer communication innovations? Patient Educ Couns. 2010;81(S):S100–S110. doi: 10.1016/j.pec.2010.10.013.
4. Dearing JW, Maibach EW, Buller DB. A convergent diffusion and social marketing approach for disseminating proven approaches to physical activity promotion. Am J Prev Med. 2006;31(4S):S11–S23. doi: 10.1016/j.amepre.2006.06.018.
5. Kreuter MW, Bernhardt JM. Reframing the dissemination challenge: a marketing and distribution perspective. Am J Public Health. 2009;99(12):2123–27. doi: 10.2105/AJPH.2008.155218.
6. Maibach EW, Van Duyn MAS, Bloodgood B. A marketing perspective on disseminating evidence-based approaches to disease prevention and health promotion. Prev Chronic Dis. 2006;3(3):A97.
7. Glasgow RE, Lichtenstein E, Marcus AC. Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. Am J Public Health. 2003;93(8):1261–7. doi: 10.2105/ajph.93.8.1261.
8. Klesges LM, Estabrooks PA, Dzewaltowski DA, Bull SS, Glasgow RE. Beginning with the application in mind: designing and planning health behavior change interventions to enhance dissemination. Ann Behav Med. 2005;29(S):S66–S75. doi: 10.1207/s15324796abm2902s_10.
9. Leeman J, Mark B. The chronic care model versus disease management programs: a transaction cost analysis approach. Health Care Manage Rev. 2006;31(1):18–25. doi: 10.1097/00004010-200601000-00004.
10. Glasgow RE, Emmons KM. How can we increase translation of research into practice? Types of evidence needed. Annu Rev Public Health. 2007;28:413–33. doi: 10.1146/annurev.publhealth.28.021406.144145.
11. Wilson KM, Brady TJ, Lesesne C. An organizing framework for translation in public health: the knowledge to action framework. Prev Chronic Dis. 2011;8(2):A46.
12. Baker EA, Ramirez LKB, Claus JM, Land G. Translating and disseminating research- and practice-based criteria to support evidence-based intervention planning. J Public Health Manag Pract. 2008;14(2):124–30. doi: 10.1097/01.PHH.0000311889.83380.9b.
13. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. doi: 10.1186/1748-5908-4-50.
14. Green LW, Ottoson JM, García C, Hiatt RA. Diffusion theory and knowledge dissemination, utilization, and integration in public health. Annu Rev Public Health. 2009;30:151–74. doi: 10.1146/annurev.publhealth.031308.100049.
15. Armstrong R, Waters E, Crockett B, Keleher H. The nature of evidence resources and knowledge translation for health promotion practitioners. Health Promot Int. 2007;22(3):254–60. doi: 10.1093/heapro/dam017.
16. Baker EA, Brownson RC, Dreisinger M, McIntosh LD, Karamehic-Muratovic A. Examining the role of training in evidence-based public health: a qualitative study. Health Promot Pract. 2009;10(3):342–8. doi: 10.1177/1524839909336649.
17. Brownson RC, Ballew P, Dieffenderfer B, et al. Evidence-based interventions to promote physical activity: what contributes to dissemination by state health departments. Am J Prev Med. 2007;33(1S):S66–S78. doi: 10.1016/j.amepre.2007.03.011.
18. Thaker S, Steckler A, Sánchez V, Khatapoush S, Rose J, Hallfors DD. Program characteristics and organizational factors affecting the implementation of a school-based indicated prevention program. Health Educ Res. 2008;23(2):238–48. doi: 10.1093/her/cym025.
19. Veniegas RC, Kao UH, Rosales R, Arellanes M. HIV prevention technology transfer: challenges and strategies in the real world. Am J Public Health. 2009;99(1S):S124–S30. doi: 10.2105/AJPH.2007.124263.
20. Anvari A, Ismail Y, Hojjati SMH. A study on total quality management and lean manufacturing: through lean thinking approach. World Applied Sciences Journal. 2011;12(9):1585–96.
21. Holden RJ. Lean thinking in emergency departments: a critical review. Ann Emerg Med. 2011;57(3):265–78. doi: 10.1016/j.annemergmed.2010.08.001.
22. Mazzocato P, Savage C, Brommels M, Aronsson H, Thor J. Lean thinking in healthcare: a realist review of the literature. Qual Saf Health Care. 2010;19(5):376–82. doi: 10.1136/qshc.2009.037986.
23. Weatherly H, Drummond M, Claxton K, et al. Methods for assessing the cost-effectiveness of public health interventions: key challenges and recommendations. Health Policy. 2009;93(2–3):85–92. doi: 10.1016/j.healthpol.2009.07.012.
24. Chen HT. The bottom-up approach to integrative validity: a new perspective for program evaluation. Eval Program Plann. 2010;33(3):205–14. doi: 10.1016/j.evalprogplan.2009.10.002.
25. Kessler R, Glasgow RE. A proposal to speed translation of healthcare research into practice: dramatic change is needed. Am J Prev Med. 2011;40(6):637–44. doi: 10.1016/j.amepre.2011.02.023.
26. Ritzwoller DP, Sukhanova A, Gaglio B, Glasgow RE. Costing behavioral interventions: a practical guide to enhance translation. Ann Behav Med. 2009;37(2):218–27. doi: 10.1007/s12160-009-9088-5.
27. Storey D, Saffitz G, Rimon J. Social marketing. In: Glanz K, Rimer B, Vishwanath K, editors. Health behavior and health education: theory, research and practice. 4th ed. San Francisco, CA: Jossey-Bass; 2008.
28. Nagle T, Hogan J, Zale J. The strategy and tactics of pricing: a guide to growing more profitably. 5th ed. Upper Saddle River, NJ: Prentice Hall; 2011.
29. Kärkkäinen H, Jussila J, Multasuo J. Can crowdsourcing really be used in B2B innovation? MindTrek 2012: Proceedings of the 16th International Academic MindTrek Conference; 2012. pp. 134–41.
30. Kuang C. Quirky's combination for off-the-shelf success: crowdsourced test market and innovative inventors. fastcompany.com/3004345/quirkys-combination-shelf-success-crowdsourced-test-market-and-innovative-inventors.
31. Schindler RM. Pricing strategies: a marketing approach. Los Angeles, CA: SAGE Publications, Inc; 2012.
32. O'Connell JM, Griffin S. Overview of methods in economic analyses of behavioral interventions to promote oral health. J Public Health Dent. 2011;71(1S):S101–S118. doi: 10.1111/j.1752-7325.2011.00236.x.
33. Denis JL, Hébert Y, Langley A, Lozeau D, Trottier LH. Explaining diffusion patterns for complex health care innovations. Health Care Manage Rev. 2002;27(3):60–73. doi: 10.1097/00004010-200207000-00007.
34. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7(1):50. doi: 10.1186/1748-5908-7-50.
