Crowdsourcing is the act of outsourcing tasks, traditionally performed by an employee, a company, or a particular agency, to an outside group of people or community, through an open call for participation.1 Crowdsourcing is commonly used within business industries for a variety of tasks such as marketing, product design, and development. Only recently has crowdsourcing been used to generate ideas and to collect data for clinical research.2,3
As with many healthcare subspecialties, manual therapy would benefit from improved, more transferable clinical research. Unfortunately, manual therapy faces the same challenges to performing clinical research as other specialties: patient access,4 maintenance of methodological integrity, and costs. An appropriately generalizable population for manual therapy requires patients with similar levels of disability, pain, and functional loss. Recruiting subjects from colleges, drawing them into research labs from the general population, or using other convenience methods often yields populations dissimilar to clinical subjects, with differences large enough to affect the final result of the study. Crowdsourcing clinical research to manual therapists has a number of potential benefits, most notably the ability to provide quick and efficient patient access. It is also likely that the effects of interventions in a ‘real-world’ environment better represent clinical practice than those gathered in an artificial care situation.5
Methodological integrity includes elements such as external and spectrum validity, as well as control of the biases associated with clinician and patient expectations. A previous editorial outlined the challenges associated with mode of administration bias,6 many of which involved research participants attempting to function or respond in ways that support the trial at hand. For the purist who may claim that total internal control is needed for any clinical trial, a few facts are worth noting. First, most treatment guidelines use data from systematic reviews or meta-analyses, and many are either dated with respect to the current literature or advocated in the absence of supportive evidence.7 Second, many clinicians are not aware of healthcare guidelines,8 and getting clinicians involved on the front line of research narrows the gap between research creation and clinical action. Lastly, fewer than half of the medical treatments used today are supported by evidence. Rigidly controlling all elements of a clinical trial, then, has been only marginally effective.
Costs may be the biggest deterrent to manual therapy clinical research, as funding paylines have been reduced while the expenses of managing clinical trials have increased exponentially.9 The pharmaceutical industry has recognized the value of crowdsourcing and has attempted to use it to reduce research and development costs.3 Since the rising costs associated with a large trial are crippling for researchers, crowdsourcing may be an important mechanism for increasing trial sizes across more diverse geographic regions without incurring the costs of dedicated oversight teams in multiple locations.
Over the last year, the American Physical Therapy Association and the American Academy of Orthopaedic Manual Physical Therapists have sponsored initiatives to foster research associated with crowdsourcing. It is my impression that this is the research model of the future, and when it comes to having clinicians drive the success of dedicated research projects, I cannot think of a better group to do it.
References
- 1. Johnston SC, Hauser SL. Crowdsourcing scientific innovation. Ann Neurol 2009;65:A7–8.
- 2. Bradley JC, Lancashire RJ, Lang AS, Williams AJ. The spectral game: leveraging open data and crowdsourcing for education. J Cheminform 2009;1:9.
- 3. Ekins S, Williams AJ. Reaching out to collaborators: crowdsourcing for pharmaceutical research. Pharm Res 2010;3:393–5.
- 4. Froud R, Eldridge S, Lall R, Underwood M. Estimating the number needed to treat from continuous outcomes in randomized controlled trials: methodological challenges and worked example using data from the UK Back Pain Exercise and Manipulation (BEAM) trial. BMC Med Res Methodol 2009;9:35.
- 5. Dreyer NA, Tunis SR, Berger M, Ollendorf D, Mattox P, Gliklich R. Why observational studies should be among the tools used in comparative effectiveness research. Health Aff (Millwood) 2010;29:1818–25.
- 6. Cook C. Mode of administration bias. J Man Manip Ther 2010;18:61–3.
- 7. Ortiz E, Eccles M, Grimshaw J, Woolf S. Current validity of AHRQ clinical practice guidelines. Rockville, MD: Agency for Healthcare Research and Quality (US); 2002.
- 8. Di Iorio D, Henley E, Doughty A. A survey of primary care physician practice patterns and adherence to acute low back problem guidelines. Arch Fam Med 2000;9:1015–21.
- 9. Grignolo A. The Clinical Trials Transformation Initiative (CTTI). Ann Ist Super Sanita 2011;47:14–8.
