Abstract
Objectives.
This paper describes the methods of a single study that incorporated data from chiropractic clinics into an evidence-based investigation of the appropriateness of manipulation for chronic back pain.
Methods.
A cluster sample of 125 clinics from 6 sites across the US was chosen for this observational study. Patients with chronic low back and neck pain were recruited using iPads, completed a series of online questionnaires, and gave permission for their patient records to be scanned. Patient records for a random sample of each clinic's patients were also obtained. RAND staff and clinic personnel collected the record data.
Results.
We obtained survey data from 2024 patients with chronic low back pain, chronic neck pain, or both. We obtained patient record data from 114 of the 125 clinics. These included the records of 1475 of the individuals who had completed surveys (prospective sample) and a random sample of 2128 patients. Across the 114 clinics, 22% had patient records that were fully electronic; 32% had paper files; and 46% used a combination. Of the 114 clinics, about 47% scanned the records themselves with training from RAND. We obtained a total of 3,603 scanned records. The patient survey data were collected from June 2016 to February 2017, the provider surveys from June 2016 to March 2017, and the chart pull took place from April 2017 to December 2017.
Conclusions.
Clinics can be successfully recruited for practice-based studies, and patients can be recruited using iPads. Obtaining patient records presents considerable challenges; clinics varied in whether they kept electronic files, paper records, or a mixture. Clinic staff can be trained to select and scan samples of charts in compliance with randomization and data protection protocols for transferring records for research purposes.
MeSH terms: Chiropractic, Pain, Surveys and Questionnaires, Chronic pain, Low back pain, Neck pain, Complementary Therapies
INTRODUCTION
Appropriateness-of-care decisions have been based on the published literature on safety and efficacy and on the judgments of experts (both clinical and scientific).[1] What has been missing is the voice of patients and data from real clinics.[2]
The Center of Excellence for Research in Complementary and Integrative Health (CERC)[1] was established at RAND specifically to develop a method for appropriateness studies in complementary and integrative health (CIH) that included patient preferences and costs.[3] We also wished to study the appropriateness of the manipulation and mobilization used in chiropractic clinics and to relate that to the outcomes of care. Achieving both objectives required the recruitment of clinics and patients. In the last decade, considerable attention has been given to Evidence-Based Practice (EBP) as representing the most appropriate care.[4–7] But to evaluate the efficacy and effectiveness of therapies, and to understand patients' experiences and beliefs in chiropractic and other areas of CIH, researchers need to collect data from real practices.
This article describes how chiropractic clinics were recruited, trained, and incorporated, along with their patients, in our study to ensure that real-world patient data about preferences and experience are included in appropriateness decisions. In the broader context, the study was based on the premise that EBP requires practice-based evidence; that is, evidence from real clinical practice needs to inform EBP.
THE PROBLEM
Within EBP a hierarchy of evidence has been established that places randomized controlled trials (RCTs) at the pinnacle and expert opinion at the bottom; in between are such designs as observational studies.[8] The result is a hierarchy of evidence based on which body of evidence is considered superior (the gold standard). Usually this ranking rests on methodological criteria, so that a double-blinded RCT is considered better than an observational study. The problem, however, is that it depends on what you want the evidence for. If you are interested in efficacy and causal inference, the RCT is the superior research design; but if your interest is in effectiveness (what works in real practice), it may not be the most relevant design.[2] Many RCTs exclude patients with comorbidities, but in the real world patients come to the clinic with numerous comorbidities. The strongest evidence comes from systematic reviews, particularly when a series of RCTs is included in a meta-analysis, which can only be done if the studies are reasonably homogeneous. But basing EBP on systematic reviews ignores the fact that whole areas of health care have too few reviews, or none at all, to systematize or use to develop a meta-analysis. This is especially true for CIH.[9–11] Examples of misleading meta-analyses have already been documented in the literature.[12] Furthermore, studies with negative results are less likely to be published, which itself has a tremendous impact on the "evidence".[13,14]
We are left with a dilemma. True efficacy studies are not based on normal practice but on trials that in many important features do not resemble practice; yet the studies that are truly based on practice cannot determine efficacy. Since EBP is primarily based on efficacy studies, EBP is not strictly based on true practice. RCTs are usually based on new low back pain/neck pain patients who have not already been treated, while most patients in a practice have been receiving treatment for a long time, and chiropractic patients are a self-selected group. Pragmatic trials, such as comparative effectiveness trials, have tried to overcome this problem by making normal practice the focus and simply measuring the outcomes.[16]
Part of the dilemma of trying to create EBP is that practice is essentially case-based.[17] Case studies are the lowest rank of evidence within EBP and have been critiqued within such fields as ethics as being potentially very misleading.[18] But as noted by Godlee,[18] from the point of view of practice the "research literature is poorly organized, largely of poor quality and irrelevant to clinical practice, often conflicting, and often not there at all" (p1621). Even when evidence does exist, it may come from a patient sample quite dissimilar from the one treated by the provider. A strong case can therefore be made that practice-based research is also required for true EBP. This requires rethinking the methodological approaches to what evidence should count in EBP, and rethinking the "house of evidence" as "houses of evidence".[19,20]
In summary, the problem is this: how do you make evidence practice-based in a way that ensures rigorous methods are applied and valid, reliable data are collected? It is this challenge that CERC confronted with regard to appropriateness.
THE SOLUTION
While it might appear self-evident that EBP should rest on practice-based evidence, implementing this in a project requires a whole range of strategies: recruiting chiropractors, recruiting clinics, training clinic staff, and collecting data from the clinics. Each step presented distinct challenges in this study. Some of these may be specific to this study and its research question about appropriateness, but many of the solutions could be relevant to other chiropractic studies and to other health professions.
METHODS
This study was approved by RAND’S Institutional Review Board, referred to as the Human Subjects Protection Committee (HSPC). This study was registered as an observational study on ClinicalTrials.gov ID:
In the following sections, we outline the steps taken to incorporate the chiropractic clinics in the data collection process for our study. It involved the following steps:
Selecting the study sites
Recruiting the chiropractors and the practices
Training the staff
Recruiting the patients
Obtaining the patient files
1. Selecting the Study Sites
We selected six regions for the study, in and around the following cities: San Diego, CA; Dallas/Fort Worth, TX; Minneapolis/St. Paul, MN; Seneca Falls, NY; Tampa, FL; and Portland, OR. This was a convenience sample, which served the purpose of an observational study like ours: we did not intend to apply inferential statistics to this sample, that is, the aim of the study was not to draw conclusions about the entire population. However, we still attempted to make the sample geographically representative of the country.
2. Recruiting the Practices
Pilot Study
Before launching the national study, we conducted a pilot study in and around Los Angeles. Of the 83 patients in the pilot, only 3 chose not to participate online and requested paper questionnaires. We therefore decided to offer only online participation in the full study and to recruit using an iPad given to the clinics. Our success in enrolling more patients than we had planned suggests this is an efficient method of recruiting patients in practices, a result with important implications for future studies. In addition to informing the online patient survey, the pilot study was used to test the burden of data collection on the clinic and the patient and to refine our data collection instruments. The pilot study is described in Part 3 of the JMPT series on this study.[21]
Recruiting Chiropractors (Selection Criteria and Recruitment Methods)
Recruitment of chiropractors (and their patients) for the study was a multi-stage process. First, we decided on the geographic study locations. Next, in these locations, we recruited clinics (chiropractors) using various recruitment methods. In the third stage, we visited the selected clinics and instructed them on patient recruitment and scanning patient files. The clinics recruited their patients for the study during the final stage. We will briefly describe each stage below.
Selection Criteria
Our target was to recruit at least 20 chiropractic clinics in each of the study locations. We met the target in each region and enrolled a total of 125 clinics, recruiting one chiropractor per clinic. We included licensed chiropractors who: 1) worked in a stand-alone (community) chiropractic practice; 2) were in a solo or multi-provider practice; 3) were part of a chiropractic-only or integrative practice; 4) saw a minimum of 25 patients per week; and 5) had at least 5 years of experience post-training. We excluded clinics based in chiropractic schools, hospitals, or Department of Veterans Affairs or Department of Defense facilities. Another exclusion criterion was the percentage of a chiropractor's patients who were on workers' compensation or personal injury claims: if it was higher than 50%, we did not enroll the chiropractor in the study.
Recruitment Methods
We used multiple methods to recruit chiropractors for the study. We briefly describe these recruitment methods, as well as their advantages and disadvantages.
Advertising in professional magazines and newsletters
We made announcements about our study in professional journals (e.g., the Journal of Manipulative and Physiological Therapeutics [JMPT]) and newsletters. Our call for participants was placed in Dynamic Chiropractic, a nationwide chiropractic publication with a circulation of 55,000 U.S. chiropractors. The call was also circulated in the newsletters and on the websites of some state chiropractic associations.
According to our rough estimates, these announcements generated about 70 responses nationwide within the first three months. About half of these responses were from chiropractors who were either outside the study regions or did not meet our inclusion criteria. The other half were from chiropractors who qualified for the study, although several of them later declined participation for various reasons. In all, we recruited about 30 chiropractors (about a quarter of the participating chiropractors) through these announcements. The key advantage of such announcements is their capacity to reach many potential participants at low cost. However, they may have low response rates, which can be improved by running multiple waves of announcements. Furthermore, our experience shows that some responses arrive late, so it is a good idea to start early and send the calls well before fieldwork. In our case, we made the recruitment announcements 6 months before we started setting the clinics up for the study.
Recruiting at professional events such as conventions and district meetings
To recruit clinics, we attended three state chiropractic conventions and one national meeting of the American Chiropractic Association in Washington, DC. In the other states, either our recruitment was going well or the state meetings did not fit our schedule (occasionally they had already been held before we started recruitment). At each of these conventions, the organizers kindly provided us with an exhibition stand or table, where we talked to interested chiropractors about the study and signed them up. We also had the opportunity to make the call for participation during the plenary sessions. This proved to be one of the most efficient ways of recruiting chiropractors: we were able to sign up at least 20 chiropractors (our target) within two days at each convention. The disadvantage of recruiting at conventions is that it can be relatively costly, the main cost items being registration fees, exhibition stand costs, and travel expenditures. Also, the sample is biased in favor of those who attend conventions.
We also attended local society meetings. The yield depends, however, on how active the society is and how many members attend, and the numbers are always smaller than at conventions.
Getting assistance from individuals with extensive networks
During the recruitment period, we relied heavily on individuals who had extensive personal and professional networks and influence among chiropractors. We reached out to and actively sought support from: a) former and current leaders of national and state chiropractic associations; b) heads of the local chapters of the associations; c) chiropractic college presidents and professors; d) former participants of our previous chiropractic studies; and e) other persons well connected with the chiropractors in the study regions. These individuals spread the word about the study among their networks by talking to, emailing, and calling people, and they actively encouraged chiropractors to participate. They also provided us with valuable intelligence during recruitment, such as whom to contact, membership lists of local chiropractic associations, and information on important upcoming events. In a way, they were our research champions, and the crucial role of such champions in any CAM research like ours cannot be overstated.
Using social media
The research champions also circulated the study announcement through social media, posting it on the websites and Facebook pages of various chiropractic groups. Additionally, we developed several special versions of the call for participants: blurbs specifically designed for personal Facebook and Twitter posts, which we asked our research champions to disseminate through their social network accounts. Finally, there was the option to contact potential participants directly via Facebook or LinkedIn, though we rarely used it.
Social media allowed us to reach specific online communities at almost no cost. However, this recruitment method was limited to chiropractors who actively used social media. Another issue was that we had limited control over the dissemination of the announcement: we did not know when and how often our blurbs were posted, re-posted, re-tweeted, or shared, and we were usually unaware of who saw the posts and who responded to them. Because of this information gap, we were unable to evaluate the effectiveness of recruitment via social media.
Sending emails and making follow-up calls
In all study regions except San Diego and Dallas, we searched Google for the contact information of active chiropractors, then emailed them to solicit their participation in our study. The initial response rate was quite low: on average, about 4 responses per 100 emails sent. To improve the success rate, we sometimes made follow-up phone calls. In this way, we recruited about one fifth of our total sample by sending emails and making follow-up calls. This recruitment method can be time consuming, especially when one does not have a ready-to-use database of contacts, and it may not have a great success rate either.
Snowball sampling
We used snowball sampling extensively during the recruitment period. In our case, snowball sampling mostly happened once we started to visit the clinics and set them up for patient recruitment. We asked already-participating chiropractors to recommend colleagues who were potentially interested in and qualified for the study. Quite often, these chiropractors directly contacted potential participants on our behalf and asked them to join the study. In some study regions, this proved an effective way of recruiting chiropractors: using snowball sampling, we enlisted almost half of our sample within a week in the Minneapolis, MN, and Seneca Falls, NY, areas.
3. Training clinic staff
Visiting Clinics
Whenever possible, we asked the chiropractors to inform their front office personnel about the study and to notify them in advance that we would contact the clinics. This facilitated the scheduling of clinic visits, as well as the visits themselves. A team member emailed consent forms to the chiropractors, conducted short phone interviews to formally confirm their eligibility for the study, and scheduled the clinic visits. When a clinic needed additional information about the study or was having second thoughts about participating, IDC, who is quite well known in the profession, made personal calls. If a visit was not logistically feasible, a clinic could forgo the training visit; in those cases, the clinic participated in a telephone briefing using a protocol manual and a PowerPoint presentation sent to them.
Training Staff
To set up the clinics for patient recruitment, at least four of the study team members traveled to each study region for a week. During these weeks, each team member visited on average one clinic per day. They briefed the chiropractors and the clinic staff about the study and provided instructions for patient recruitment. They also trained the staff on how to use our patient recruitment tool: a WiFi-enabled iPad with a preloaded online Patient Recruitment Form. The Patient Recruitment Form screened out the first level of ineligible patients (younger than 21, workers' compensation or personal injury patients, and patients who did not have low back or neck pain) and gathered contact information (emails and phone numbers) from those who qualified and agreed to participate in our study.
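To make the screening logic concrete, here is a minimal sketch in Python of the first-level screen as described above; the function and field names are hypothetical illustrations, not those of the actual online form.

```python
# Minimal sketch of the Patient Recruitment Form's first-level screen.
# Function and field names are hypothetical illustrations of the criteria above.

def first_level_eligible(age, workers_comp, personal_injury,
                         low_back_pain, neck_pain):
    """Return True if a patient passes the first-level screen."""
    if age < 21:
        return False                      # younger than 21: screened out
    if workers_comp or personal_injury:
        return False                      # claim-type exclusions
    return low_back_pain or neck_pain     # must report low back or neck pain

# Example: a 45-year-old neck pain patient with no excluded claims qualifies.
assert first_level_eligible(45, False, False, False, True)
assert not first_level_eligible(19, False, False, True, False)
```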
After the briefings and the training, the team members usually stayed at the clinics and observed patient recruitment by the staff or the chiropractors, providing feedback, as appropriate, on how the clinics could make the process more efficient. A few of the clinics were briefed via phone: we first emailed the recruitment materials to these clinics and sent the iPads, and team members then called them and conducted the training on patient recruitment. Each training session used both presentation slides (PowerPoint) and protocol documents that we walked the clinic staff through. In terms of the number of recruited patients, these clinics did as well as the others.
4. Recruiting Patients
Patients and Following Up with the Clinics
Participating clinics were allocated exactly four weeks to recruit patients. We extended the patient recruitment period if we knew beforehand that the clinic would be closed for a significant number of days during the four weeks. During that period, each clinic enrolled its patients in our study using an iPad, which we provided and which the clinic retained after the study.
The iPad held the online Patient Recruitment Form, which patients were asked to fill out. The clinics were provided with study "talking points" for patients, brochures, and posters, and some chiropractors took it upon themselves to introduce the study. The URL of the Patient Recruitment Form was unique to each clinic/iPad, which allowed us to track the number of patients who signed up for our study at every clinic. Once a patient touched the iPad, the clinic had no further involvement with that patient with regard to the patient survey; from that point on, the patient interacted directly with RAND online. While the patient population has been described elsewhere [3] and the type of data collected is described in Part 3 of this series in JMPT,[21] we collected online questionnaire data from 2024 individuals. Participants responded to up to eight questionnaires per person during a three-month follow-up period.
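As an illustration of how unique URLs enable per-clinic tracking, the sketch below tallies sign-ups by a clinic identifier embedded in each form URL; the URL scheme shown is hypothetical, not the study's actual one.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical per-clinic recruitment URLs (the study's real URLs differed).
submissions = [
    "https://survey.example.org/recruit/clinic-017",
    "https://survey.example.org/recruit/clinic-017",
    "https://survey.example.org/recruit/clinic-042",
]

# Tally sign-ups by the clinic identifier at the end of each URL's path.
signups = Counter(urlparse(url).path.rsplit("/", 1)[-1] for url in submissions)
print(signups)  # Counter({'clinic-017': 2, 'clinic-042': 1})
```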
Patients were paid an incentive to participate. The payment schedule is shown in Table 1.
Table 1.
| Action | Incentive |
|---|---|
| Touched iPad but did not qualify | $0 |
| Completed iPad Recruitment Form and screened in | $5 |
| Completed online screener | $5 |
| Completed first online questionnaire | $10 |
| Completed second questionnaire (longest survey in study, called "Baseline") | $25 |
| Completed five biweekly follow-ups | $75 ($15 each) |
| Completed final survey ("Endline") | $50 |
| Bonus for completing all tools | $25 |
| Total possible | $200 |
We provided generous incentives to encourage both patient and chiropractor participation. During our pilot study, multiple chiropractors commented that while they did not have strong feelings about the value of the incentives provided to the clinic, it did matter to them that patients were well incentivized; they wanted the study to be a positive experience for any patient who opted to participate. Note that the value of the incentives increased over the follow-up period to encourage patient retention through the full three months.
5. Obtaining the patient records
Here we discuss how the records were selected and delivered to RAND. We obtained two different samples of patient records. The first comprised the records of the patients in the study who answered the surveys and gave consent for us to access their records; we used these data to connect their care with measured outcomes. A total of 1702 patients gave us consent to access their records, but we could obtain records for only 1475 of them. The decrease from 1702 to 1475 is largely attributable to clinics that dropped out of that part of the study (these clinics accounted for 182 patient charts). In other cases, the clinics could not find the records within the set time frame, or the records were stored off-site because of their age.
The second set was a random selection of each clinic's patient records. This sample was used to determine three things: the amount of chronicity being seen in chiropractic clinics; whether our sample of enrolled chronic patients was in any way different from those not included in the study (to assess possible bias in our patient sample); and the rate of appropriate care.
A clinic could decide to pull all records itself, or RAND staff could visit the clinic and either do it or assist in doing it. The first determination was whether the patient charts were electronic, paper, or a combination. If the clinic had paper records and the technology to scan them in-house, it would do so; if not, RAND loaned the clinic a scanner. If the clinic had electronic patient records, it could copy the records directly onto an encrypted hard drive. In either case, the patient records were then transferred via an encrypted hard drive or uploaded directly to a secure, dedicated RAND website. If needed, RAND supplied the clinic with the encrypted hard drive, a laptop computer, a scanner, and a mouse.
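The capture paths just described can be summarized in a short sketch; the labels are ours, and the actual protocol was conveyed through the study's manuals rather than code.

```python
def capture_plan(record_type, clinic_has_scanner):
    """Sketch of the record-capture path for a clinic (labels are illustrative)."""
    if record_type == "electronic":
        return "copy records directly onto an encrypted hard drive"
    if record_type == "paper":
        if clinic_has_scanner:
            return "clinic scans records in-house"
        return "RAND loans the clinic a scanner"
    # Mixed paper/electronic clinics used both paths.
    return "combine in-house/loaned scanning with direct electronic copying"

print(capture_plan("paper", clinic_has_scanner=False))
```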
We collected complete patient record data from 114 of the 125 clinics. Across these 114 clinics, 22% had patient records that were fully electronic; 32% had paper files; and 46% used a combination.
Of the 114 clinics, 47% (54 clinics) could scan the records themselves with training and assistance from RAND. This involved selecting and scanning a random sample and a prospective sample of patient records and complying with data protection protocols in transferring them. We obtained a total of 3,603 scanned records: 2,128 randomly selected patient records (random sample) and 1,475 records of those who took our surveys (prospective sample). For the enrolled survey patients, RAND informed the clinic which patients had given informed consent, and we provided the clinic with evidence of the HIPAA form those patients had signed authorizing us to copy their records. We also provided evidence that all RAND staff had completed HIPAA training and were HIPAA compliant.[22] The more complicated process was the randomly selected sample. If the clinic had electronic records, we obtained an encrypted patient list and generated a random sample from that list, which was then returned to the clinic online through Accellion or on an encrypted hard drive; the clinic would then pull that sample and scan the files. If the clinic had only paper files, we used a technique developed in our earlier study of chiropractic,[23] which involves using a measuring tape to measure the physical storage space. We asked the clinic to visualize the records as occupying space as though they were on a bookcase and then calculate, or measure if they were organized that way, the amount of linear space the records occupied. On a chart provided by RAND, they entered the total space used for records in the clinic, numbered from 1 up to the total space in inches. RAND then generated a list of random numbers indicating at which inch marks, located with a measuring tape RAND supplied, to select files from the space. If a clinic had a combination of electronic and paper records, it used a combination of both methods.
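To illustrate both sampling branches, here is a minimal Python sketch, assuming the clinic supplies either a patient ID list (electronic records) or a total number of linear inches of charts (paper records); RAND's actual random-number generation procedure may have differed in its details.

```python
import random

def sample_from_patient_list(patient_ids, n_draws, seed=None):
    """Electronic branch: draw a random sample of IDs from the clinic's list."""
    return random.Random(seed).sample(list(patient_ids), n_draws)

def sample_shelf_positions(total_inches, n_draws, seed=None):
    """Paper branch: draw random inch marks along the measured record space.

    The clinic numbers its linear chart space from 1 to total_inches; each
    draw tells staff where to place the measuring tape and pull the file
    found at that mark.
    """
    rng = random.Random(seed)
    # Sample without replacement so the same inch mark is not drawn twice.
    return sorted(rng.sample(range(1, total_inches + 1), n_draws))

# Example: a clinic reporting 600 inches of charts gets 40 inch marks to pull.
print(sample_shelf_positions(600, 40, seed=1))
```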
Either the clinic staff or RAND staff would then pull records according to the appropriate randomization scheme until they had 10 chronic low back pain records and 12 chronic neck pain records, based on the clinic staff's determination of those conditions. If a patient had both low back and neck pain, we put them in the neck pain group, because low back pain is known to be much more common in chiropractic practices than neck pain. A Random Sample Patient Log entry was created for every record that was pulled, whether or not that chart was deemed chronic low back or neck pain.
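The pull itself amounts to a quota loop over the randomized order. A minimal sketch of the rule described above, with hypothetical record fields:

```python
# Sketch of the quota-based pull; record fields are hypothetical.
NECK_QUOTA, LOW_BACK_QUOTA = 12, 10

def pull_until_quotas(records_in_random_order):
    """Log every pulled chart; keep charts until both condition quotas are met."""
    neck, low_back, log = [], [], []
    for rec in records_in_random_order:
        log.append(rec["id"])  # every pull enters the Random Sample Patient Log
        if rec["chronic_neck"]:
            # Patients with both conditions are assigned to the neck pain group.
            if len(neck) < NECK_QUOTA:
                neck.append(rec)
        elif rec["chronic_low_back"] and len(low_back) < LOW_BACK_QUOTA:
            low_back.append(rec)
        if len(neck) == NECK_QUOTA and len(low_back) == LOW_BACK_QUOTA:
            break
    return neck, low_back, log
```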
Each clinic that participated (i.e., sent samples of its patient records) received a $200 Amazon gift card, whether the clinic or RAND did the actual record selection and delivery. In another article [24] we document the process used to extract data from the patient files.
DISCUSSION
The success of enrolling the clinics, the staff, and the patients in this study was very encouraging. In many ways, this study could have been seen by the chiropractors as threatening: it is, after all, a study about the appropriateness of care. But in both this study and the previous one done by RAND on acute low back pain,[23] RAND was able to enroll a high percentage of the clinics approached. This success did not come primarily from cold calls or emails; snowballing and personal contacts worked best. In this study, we were also able to enroll a large number of patients. Our experience was that if the chiropractor was supportive of the study, the staff was as well; and if both were supportive, patients seemed not only willing but enthusiastic to participate.
A major logistical challenge in this study was how to collect data from sites with a mix of electronic and paper records; data collection at these "mixed" clinics took more time and effort. Chiropractic is clearly in transition toward electronic medical records, and as these become more widespread, sampling records for studies will be easier. Working with clinics that had recently changed to electronic systems also proved difficult, as some of their patient files had not yet been transferred to the new system. Paper-only files were the most time-intensive: they meant dealing with staples, tape, and carbon-type paper; shelving and reshelving files; and moving across various rooms and storage units, since providers often kept only the active files in the front office while the rest could be in storage, the basement, or a variety of other locations.
The other challenge came from the way chiropractors kept records. Sometimes there were legibility issues; some chiropractors used their own cryptic codes; and some kept various parts of their patients' records in different files, so piecing the information together was difficult. The research team often had to learn each provider's unique annotation style. Electronic (or even hybrid) files, for the most part, did not present this type of problem.
This is only the second study of appropriateness in chiropractic. The previous one was also conducted by RAND but was done over 20 years ago, so there are no comparative data other than that earlier study. We can say that the participation rate of chiropractors and patients in this study was considerably higher. But this study had features the earlier one did not: it focused on chronic back pain, and so dealt with patients with a much longer experience of chiropractic and pain, and it followed patients over time, collecting data at numerous points in their care. Getting providers to participate in a study that comes into their practice and measures the extent to which the care provided is appropriate is itself a significant achievement, since such a study could be seen as very threatening to a provider. So our results bode well for future studies in chiropractic. But as we have tried to show in this paper, it takes a great deal of effort. We hope that by sharing how it was done, and the results, both chiropractic researchers and chiropractors will be encouraged to continue doing practice-based research.
LIMITATIONS
One limitation is that we did not use random samples, which makes generalizing from the data collected problematic. But this is a limitation of the results, not of the methods themselves. While the extent of our resources was not a limitation for this study, it does pose a problem for replicating the study, at least on the same scale; the methods, however, are applicable even to much smaller projects. We were able to try numerous approaches to recruiting, and future studies could choose the most successful ones.
CONCLUSION
In this paper we have outlined a way to make evidence practice-based that ensures rigorous methods are applied and valid, reliable data are collected.
Through this process, we learned that clinic staff are essential. The study demonstrated that, at least in chiropractic and we think in CIH generally, there is a strong desire among practitioners to be involved in research, and therefore a good basis for putting the practice into EBP. If the chiropractor supported the study, so did the staff; and if the staff and chiropractor supported the study, so did the patients. Another lesson from this study was the amount of effort needed to obtain a substantial and engaged sample. RAND was helped by its earlier studies in chiropractic and its positive reputation in the chiropractic community, but it was also helped by the responsiveness of the profession to engaging in research.
FUNDING SOURCES AND CONFLICTS OF INTEREST
This study was funded by the NIH’s National Center for Complementary and Integrative Health Grant No: 1U19AT007912-01. All authors report that they were funded by a grant from the National Center for Complementary and Integrative Health during the study. No conflicts of interest were reported for this study.
Contributor Information
Ian D. Coulter, RAND Corporation, Health, Santa Monica, California, USA.
Gursel R. Aliyev, RAND Corporation, Health, Santa Monica, California, USA.
Margaret D. Whitley, RAND Corporation, Health, Santa Monica, California, USA.
Lisa S. Kraus, RAND Corporation, Health, Santa Monica, California, USA.
Praise O. Iyiewuare, RAND Corporation, Health, Santa Monica, California, USA.
Gery W. Ryan, RAND Corporation, Health, Santa Monica, California, USA.
Lara G. Hilton, Deloitte Consulting, Los Angeles, California, USA.
Patricia M. Herman, RAND Corporation, Health, Santa Monica, California, USA.
References
- [1]. Coulter ID, Herman PM, Ryan GW, Hays RD, Hilton LG, Whitley MD, CERC. Researching the Appropriateness of Care in the Complementary and Integrative Health (CIH) Professions: Part I. JMPT (in press).
- [2]. Coulter ID. Putting the practice into evidence-based dentistry. CDA Journal 2007;35(1):45–9.
- [3]. Herman PM, Kommareddi M, Sorbero ME, Rutter CM, Hays RD, Hilton LG, Ryan GW, Coulter ID. Characteristics of chiropractic patients being treated for chronic low back and chronic neck pain. Journal of Manipulative and Physiological Therapeutics 2018.
- [4]. Brook RH. Appropriateness: the next frontier. British Medical Journal 1994;308:218–219.
- [5]. Coulter ID. Evidenced-based practice and appropriateness of care studies. Journal of Evidence-Based Dental Practice 2001;1(3):222–6.
- [6]. Coulter ID. Expert panels and evidence: the RAND alternative. J Evid Base Dent Pract 2001;1:142–48.
- [7]. Sackett DL. Evidence-based medicine (editorial). Spine 1998;23(10):1085–6.
- [8]. Coulter ID. Evidence based complementary and alternative medicine: promises and problems. Forsch Komplementarmed 2007 Apr;14(2):102–8. Epub 2007 Apr 23.
- [9]. Linde K, Coulter ID. Systematic reviews and meta-analyses. In: Lewith G, Jonas W, Walach H (eds). Clinical Research in Complementary Therapies, 2nd edition. Oxford, England: Elsevier; 2011:119–134.
- [10]. Coulter ID. Evidence Summaries and Synthesis: Necessary but Insufficient Approach for Determining Clinical Practice of Integrated Medicine? Integrative Cancer Therapies 2006;5(4):282–. Lau J, Ioannidis JP, Schmid CH. Quantitative Synthesis in Systematic Reviews. Annals of Internal Medicine 1997;127(9):820–26.
- [11]. Bland CJ, Meurer LN, Maldonado G. A Systematic Approach to Conducting a Non-Statistical Meta-Analysis of Research Literature. Academic Medicine 1995;70(7):642–653.
- [12]. Egger M, Smith GD. Misleading Meta-analysis [editorial]. British Medical Journal 1995;310(6982):752–54.
- [13]. Eskinazi D, Muehsam D. Is the Scientific Publishing of Complementary and Alternative Medicine Objective? Journal of Alternative & Complementary Medicine 1999;5(6):587–94.
- [14]. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication Bias in Clinical Research. Lancet 1991;337(8746):867–72.
- [15]. Coulter ID, Khorsan R, Crawford C, Hsiao AF. Integrative health care under review: an emerging field. J Manipulative Physiol Ther 2010 Nov-Dec;33(9):690–710.
- [16]. Coulter ID. Comparative Effectiveness Research: Does the Emperor Have Clothes? Alternative Therapies Health Med 2011;17(2):8–15.
- [17]. Glasziou P, Guyatt GH, Dans AL, Dans LF, Straus S, Sackett DL. Applying the Results of Trials and Systematic Reviews to Individual Patients [editorial]. American College of Physicians Journal Club 1998;129(3):A15–16.
- [18]. Godlee F. Applying research to individual patients. Evidence based case reports will help [editorial]. British Medical Journal 1998;316:1621–22.
- [19]. Coulter I, Elfenbaum P, Jain S, Jonas W. SEaRCH™ Expert Panel Process: Streamlining the Link Between Evidence and Practice. BMC Research Notes 2016;9:16. doi:10.1186/s13104-015-1802-8.
- [20]. Jonas WB. The Evidence House: How to Build an Inclusive Base for Complementary Medicine. The Western Journal of Medicine 2001;175(2):79–80.
- [21]. Whitley MD, Coulter ID, Ryan GW, Hays RD, Sherbourne C, Herman PM. Researching the Appropriateness of Care in the Complementary and Integrative Health Professions: Part 3: Designing Instruments with Patient Input. JMPT (in press).
- [22]. Iyiewuare P, Coulter ID, Whitley MD, Herman PM. Researching the Appropriateness of Care in the Complementary and Integrative Health Professions: Part 2: HIPAA and Practice-Based Research: What Every Practitioner and Practice Should Know. Journal of Manipulative and Physiological Therapeutics 2018 (in press).
- [23]. Shekelle PG, Coulter ID, Hurwitz EL, Genovese B, Adams AH, Mior SA, Brook RH. Congruence Between Decisions to Initiate Chiropractic Spinal Manipulation for Low Back Pain and Appropriateness Criteria in North America. Ann Intern Med 1998;129(1):9–17.
- [24]. Roth CP, Coulter ID, Kraus LS, Ryan GW, Jacob G, Marks JS, Hurwitz EL, Vernon H, Shekelle PG, Herman PM. Researching the Appropriateness of Care in the Complementary and Integrative Health Professions: Part 5: Using Patient Records: Selection, Protection and Abstraction. JMPT (in press).