Published in final edited form as: J Addict Med. 2014 Mar-Apr;8(2):96–101. doi: 10.1097/ADM.0000000000000018

Evaluation of an Electronic Medical Record System at an Opioid Agonist Treatment Program

Lawrence S Brown Jr 1, Steven Kritz 1, Melissa Lin 1, Roberto Zavala 1
PMCID: PMC3962043  NIHMSID: NIHMS561141  PMID: 24562402

Abstract

Objectives

The Addiction Research and Treatment Corporation evaluated the impact of an electronic medical record system.

Methods

A prospective pre- and post-implementation design was used to examine the domains of quality, productivity, satisfaction, risk management, and financial performance.

Results

There were highly statistically significant improvements in the timely completion of Annual Medical and 30-Day, 90-Day, and Annual Multidiscipline assessments. There was no statistically significant change in obtaining hepatitis C viral load for hepatitis C antibody-positive patients. The prevalence of risk management events was too low to detect statistically meaningful changes. Patient satisfaction was unchanged pre- and post-implementation, while staff satisfaction trended upward post-implementation. Productivity significantly declined for counseling staff; there was a non-significant productivity decline for medical services staff and a non-significant productivity increase for case manager staff. Revenue per capita staff increased by 0.6%, while cost per patient visit increased by 5.7%.

Conclusions

Although the results were less robust than expected, had we not implemented the electronic system, recent changes in documentation and reimbursement requirements for services would have paralyzed our agency.

Keywords: electronic medical record system, opioid agonist treatment program, pre and post-implementation study design, hierarchy of corporate objectives

INTRODUCTION

Despite decades of predictions that the electronic medical record (EMR) revolution is coming, most healthcare organizations still use paper medical charts and manual processes. The transformation to an electronic platform has been promoted as a way to reduce costs, provide better patient care and services, and dramatically improve outcomes. A 2008 survey of EMR implementation published in the New England Journal of Medicine indicated that only 4% of physicians had a fully functional system and 13% had a basic system (DesRoches et al., 2008). These figures have since increased to 27% and 69%, respectively (Schoen et al., 2012).

There are many good reasons why EMRs have not proliferated. First were the many vendors and the daunting interoperability issues. Then there were the transition issues. Finally, as with the adoption of technology in most spheres of business, Moore's Law, which holds that computing capacity doubles roughly every 18 months, created the temptation to hold out for newer, better, faster, and cheaper products.

The most significant roadblock to EMR implementation was financial. Executives were reluctant to commit millions of dollars unless assured of positive cash flows within a reasonable period of time, that is, the return-on-investment issue. Unfortunately, demonstrating this return can be challenging, as many EMR benefits are intangible, non-financial, or difficult to quantify. It is possible, however, to establish a sound business justification using realistic assumptions and verifiable data.

Because published evaluations of the implementation of integrated EMRs in substance abuse treatment programs are virtually non-existent, we report the findings from a study evaluating an EMR at the Addiction Research and Treatment Corporation (ARTC), funded by the National Institute on Drug Abuse (NIDA). In the wake of the Mental Health Parity and Addiction Equity Act of 2008 and the Affordable Care Act of 2010, both of which place mental health more prominently within the US healthcare system, and the latter of which mandates electronic information systems, the findings from this study have the potential to inform decision-making for both providers and policy-makers.

Aside from the policy and research implications, ARTC was interested in an EMR for numerous reasons discussed comprehensively in a prior publication (Louie et al., 2012). Briefly, ARTC was interested in whether an EMR would enhance the agency's compliance with state and federal regulations while improving productivity (increasing the volume of services), quality of care, patient satisfaction, and staff satisfaction. This paper reports on the impact of an EMR at ARTC, a community-based, medication-assisted substance abuse treatment agency that provides on-site primary medical care and HIV-related services at locations in Brooklyn and Manhattan in New York City.

METHODS

Setting and Population

In operation since 1969, ARTC is a community-based, free-standing, outpatient not-for-profit corporation, the most common corporate structure for substance abuse treatment programs (N-SSATS, 2011). ARTC is one of the largest substance abuse treatment organizations in the nation, and the largest non-hospital based Opioid Agonist Treatment Program (OATP) in New York State, serving more than 3,000 patients annually, 60% of whom are male, 51% Hispanic, 44% African American, and 4% other. The mean age of the patients is 52 years and Medicaid is the payor for over 90% of the patients.

ARTC’s seven OATP clinics are CARF accredited, and dually licensed by the New York State Office of Alcoholism and Substance Abuse Services for substance abuse treatment and the New York State Department of Health for primary medical services, including HIV/AIDS care.

Despite this history and current capacities, there were considerable challenges in 2006. The only major components of ARTC’s operations stored in an electronic database were selected counseling and medical services, methadone administration/dispensing data, and billing. Even these areas were not thoroughly integrated, and any assessment of the quality or integrity of these information sub-systems was limited.

Study Design

This is a prospective, comparative study utilizing a pre- and post-implementation design to determine whether there were improvements post-implementation. eClinicalWorks was chosen as the EMR and was interfaced with an in-house developed electronic dispensing and behavioral health program. The specifications of the EMR are available at www.eClinicalWorks.com.

The pre-implementation period was from July 1, 2006 to June 30, 2007, chosen to ensure that a sufficient number of patients were enrolled prior to EMR implementation. The post-implementation period was from November 1, 2009 to October 31, 2010, reflecting the 12-month period following installation and training of all staff at clinical and administrative sites.

To increase buy-in for this project, agency stakeholders (patients, direct-care providers, and supervisors/managers) participated in needs assessment meetings to choose the specific aims in the domains of: (1) quality; (2) productivity; (3) satisfaction; (4) risk management; and, (5) financial performance. Every effort was made to control for extraneous variables, such as staff turnover and patient demographic changes between the pre and post-implementation periods that might confound the analysis.

For the quality specific aim, we proposed five hypotheses. Post-implementation, there would be improvement in the timeliness of completion of a) annual medical assessments for patients with length of stay ≥ 365 days, b) 30-day multidiscipline assessments for patients with length of stay ≥ 30 days, c) 90-day multidiscipline assessments for patients with length of stay ≥ 90 days, d) annual multidiscipline assessments for patients with length of stay ≥ 365 days, and e) assessments of hepatitis C (HCV) viral load in patients with a positive HCV antibody test and length of stay ≥ 60 days. The denominator for each of these measures was the total number of eligible patients, calculated separately for the pre- and post-implementation periods.

For the productivity specific aim, three hypotheses were advanced. Post-implementation, the annual number of patient visits would increase for: a) individual addiction counseling, b) primary medical care, and c) HIV-related case management. The productivity hypotheses were based on meetings with stakeholders (patients and clinicians) with knowledge of the shortcomings of our pre-EMR paper records, which were separated by discipline. This resulted in time wasted hunting for pertinent records, which often were not located. It was believed that implementation of the EMR would remove this barrier, reducing the time needed to provide at least the same level of care. We also found literature support for this measure and potential outcome (Bingham, 1997; Zdon, Middleton, 1999).

For the satisfaction-related hypotheses, we proposed that post-implementation overall satisfaction would increase for: a) patients and b) clinical and management staff. For risk management, the hypothesis was that following implementation, there would be a decrease in the annual combined rate of patient complaints, incidents, and medication errors. For the financial performance specific aim, we hypothesized that post-implementation a) revenue per capita staff per annum would increase and b) costs per visit per annum would decrease.

Patients were recruited in proportion to the census at each of the seven clinics, using a convenience sampling technique, resulting in 1,000 participants from the nearly 2,800 patients available for both the pre- and post-implementation surveys. Patients received a $4 MetroCard (for public transportation) for their time and inconvenience in completing satisfaction surveys. To investigate the satisfaction-related hypotheses, 148 staff members (direct care and supervisors/managers) from the seven clinical and central administrative sites were eligible to participate during the pre-implementation period; of these, 99 (66.9%) participated. For the post-implementation period, 155 staff members were eligible, of whom 92 (59.4%) participated.

Data Sources, Collection, and Entry

Data sources varied according to the specific aim studied. However, for all data sources, the investigators developed case report forms utilized by trained research assistants to collect data. Trained staff performed the computer entry of all data and quality assurance of data collection and entry was performed by supervisory staff and the Project Manager, who is one of the authors (SK).

For the quality specific aim, paper patient charts provided pre-implementation data and electronic patient charts provided the data post-implementation. For the productivity specific aim, various clinical logs and spreadsheets in mixed paper and electronic formats provided pre-implementation data, while this same information was provided in only electronic format post-implementation. For the specific aims of risk and financial performance, there were no changes in data collection media pre and post-implementation.

Patient, clinician, and management stakeholders participated in completing an anonymous written survey for the satisfaction specific aim. For the patients, the pre and post-implementation surveys contained the same 6 questions, using a 1–5 Likert scale (not satisfied, slightly satisfied, somewhat satisfied, satisfied, and very satisfied). The pre and post-implementation surveys for clinicians and managers contained the same 17 questions, using the same 1–5 Likert scale as used with the patient survey.
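
As a minimal illustration of how individual Likert responses can be summarized both as a continuous score and in the "collapsed" format described below under Statistical Analysis, the following Python sketch uses an invented response vector; it is not study data.

    import numpy as np

    # Hypothetical 1-5 Likert responses to a single survey question (not study data)
    responses = np.array([5, 4, 4, 3, 5, 2, 4, 1, 5, 4])

    mean_score = responses.mean()                    # continuous summary (mean score)
    pct_satisfied = 100 * np.mean(responses >= 4)    # "collapsed": percent Satisfied or Very Satisfied
    print(round(mean_score, 2), round(pct_satisfied, 1))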

Statistical Analysis

For continuous outcomes, the anticipated sample sizes were sufficiently large for a minimal effect size to be detected with 80% power at two-sided alpha = 0.05 and 0.01. For binary outcomes, the anticipated sample sizes were sufficiently large for a minimal difference to be detected with 80% power at two-sided alpha = 0.05 and 0.01. For categorical outcomes, chi-square tests with p-value < 0.05 were used to determine statistically significant differences between the two time intervals. McNemar's test on discordant pairs (matching before and after for each measure), conditional logistic regression, and other approaches for binary outcomes were used for analysis. The satisfaction surveys, as noted above, utilized a 1–5 Likert scale. The findings were reported in "collapsed" format as a percent for Satisfied and Very Satisfied, and as a continuous variable. Thus, the study was well powered to observe even small differences when comparing pre- and post-intervention data. As the due date for quality measures varied based upon patient admission dates, the denominators for these quality measures were not the same in the pre- and post-implementation periods (see Table 1). Admission rates, length of stay (or drop-out rate), and other patient demographics did not vary significantly between the pre- and post-implementation periods.
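
A minimal sketch of the kinds of calculations described above, assuming Python with statsmodels; the proportions and paired counts are placeholders chosen for illustration, not study data.

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.contingency_tables import mcnemar

    # Sample size per group needed to detect a shift from 80% to 90% on a binary
    # measure with 80% power at two-sided alpha = 0.05 (illustrative proportions only)
    effect_size = proportion_effectsize(0.80, 0.90)
    n_per_group = NormalIndPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.80)
    print(round(n_per_group))

    # McNemar's test on discordant pairs when the same patients are matched pre/post.
    # Rows: pre-implementation outcome (met, not met); columns: post-implementation outcome.
    paired = [[300, 10],   # hypothetical paired counts
              [40, 20]]
    print(mcnemar(paired, exact=True).pvalue)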

Table 1.

Annual Medical and 30-Day, 90-Day & Annual Multidiscipline Assessments

Measure | Study Period | Number Due | On Time # (%) | Late # (%) | Not Completed # (%) | P-value (exact test)
Annual Medical Assessments* | Pre-implementation | 420 | 350 (83%) | 48 (12%) | 22 (5%) | <0.001
Annual Medical Assessments* | Post-implementation | 423 | 411 (97%) | 12 (3%) | 0 (0%)
30-Day Multidiscipline Assessments** | Pre-implementation | 613 | 441 (72%) | 168 (27%) | 4 (1%) | <0.001
30-Day Multidiscipline Assessments** | Post-implementation | 704 | 614 (87%) | 90 (13%) | 0 (0%)
90-Day Multidiscipline Assessments*** | Pre-implementation | 576 | 242 (42%) | 311 (54%) | 23 (4%) | <0.001
90-Day Multidiscipline Assessments*** | Post-implementation | 608 | 423 (70%) | 185 (30%) | 0 (0%)
Annual Multidiscipline Assessments**** | Pre-implementation | 420 | 294 (70%) | 88 (21%) | 38 (9%) | <0.001
Annual Multidiscipline Assessments**** | Post-implementation | 423 | 407 (96%) | 16 (4%) | 0 (0%)
* Within 30 days of the 1-year anniversary; length of stay ≥ 365 days
** ≤ 30 days after admission; length of stay ≥ 30 days
*** ≤ 90 days after admission; length of stay ≥ 90 days
**** ≤ 365 days after admission; length of stay ≥ 365 days

Because the collected data did not involve clinical interventions or protected health information, HIPAA authorization was not required. The ARTC Institutional Review Board approved the study protocol, surveys (including payment for patient participants), and case report forms via expedited review and waiver of informed consent. This project was also exempt from the regulatory requirements for human subjects research under 45 CFR 46.101(b)(2).

RESULTS

Quality

As shown in Table 1, only 83% (350 of 420) of annual medical assessments were completed on time (within 30 days of the patient's anniversary) during the pre-implementation period. Post-implementation, 97% were completed on time (p<0.001; exact test). Seventy-two percent of 30-day multidiscipline assessments were completed on time (on or prior to the due date) during the pre-implementation period, while 87% were completed on time post-implementation (p<0.001; exact test). Forty-two percent of 90-day multidiscipline assessments were completed on time during the pre-implementation period; post-implementation, 70% were completed on time (p<0.001; exact test). Seventy percent (294 of 420) of annual multidiscipline assessments were completed on time during the pre-implementation period; during the post-implementation period, 96% were completed on time (p<0.001; exact test). For each assessment type, a percentage (5%, 1%, 4%, and 9% of the annual medical, 30-day, 90-day, and annual multidiscipline assessments, respectively) was not done at all during the pre-implementation period, while none of the assessments were missed during the post-implementation period.
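
As an illustration, a minimal Python sketch (assuming scipy) of an exact test on the annual medical assessment counts from Table 1, collapsing late and missed assessments into a single "not on time" category; the authors' exact test may have been computed differently.

    from scipy.stats import fisher_exact

    # Annual medical assessments (Table 1): on time vs. not on time (late or missed)
    table = [[350, 70],    # pre-implementation:  350 of 420 on time
             [411, 12]]    # post-implementation: 411 of 423 on time
    odds_ratio, p = fisher_exact(table)
    print(p)               # far below 0.001, consistent with the reported p < 0.001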

Table 2 provides the findings for HCV viral load determination. During the pre-implementation period, 85% of eligible patients had an HCV viral load performed; during the post-implementation period, 81% of eligible patients had an HCV viral load performed, a non-significant difference. The number of eligible patients was derived by subtracting, from the total number of patients with a positive HCV antibody, those patients with an outside primary care provider and those who refused HCV viral load determination. The total number of eligible patients was considerably lower in the post-implementation period because of a major increase in the number of patients with an outside primary care provider between the pre- and post-implementation periods.

Table 2.

Performance of HCV Viral Load

Study Period | HCV VL Done / Eligible Patients (%) | P-value (exact test)
Pre-implementation | 151/178 (85%) | NS
Post-implementation | 64/79 (81%)

Productivity

Tables 3a, 3b, and 3c provide productivity results by clinic (two clinics did not have case managers during the study period). Corporate-wide, during the 12-month pre-implementation period, there were 64,345 addiction-related counseling visits; during the post-implementation period, 52,652 visits occurred, a statistically significant decline (p=0.0003; paired t-test). Corporate-wide, 5,221 primary medical care visits occurred pre-implementation as compared with 4,028 visits post-implementation, a non-statistically significant decline (p=0.057; paired t-test). Corporate-wide, during the pre-implementation period, 2,680 case manager HIV counseling visits occurred, whereas post-implementation, 3,058 visits occurred, a non-statistically significant increase (p=0.72; paired t-test).

Table 3a.

Human Services Productivity

Clinic | Pre (# of Visits) | Post (# of Visits) | Change | Paired t-test (p-value) | Sign test (p-value)
#1 | 10791 | 8652 | −2139 | 0.0003 | 0.016
#2 | 9984 | 8440 | −1544
#3 | 12298 | 11012 | −1286
#4 | 8682 | 5926 | −2756
#5 | 6707 | 5668 | −1039
#6 | 8401 | 6722 | −1679
#7 | 7482 | 6232 | −1250

Table 3b.

Medical Services Productivity

Clinic | Pre (# of Visits) | Post (# of Visits) | Change | Paired t-test (p-value) | Sign test (p-value)
#1 | 833 | 748 | −85 | 0.057 | 0.11
#2 | 921 | 443 | −478
#3 | 507 | 369 | −138
#4 | 809 | 547 | −262
#5 | 820 | 548 | −272
#6 | 599 | 737 | 138
#7 | 732 | 636 | −96

Table 3c.

Case Manager Productivity

Clinic | Pre (# of Visits) | Post (# of Visits) | Change | Paired t-test (p-value) | Sign test (p-value)
#1 | 429 | 447 | 18 | 0.72 | 1
#2 | 188 | 981 | 793
#3 | 852 | 533 | −319
#4 | 844 | 690 | −154
#5 | 367 | 407 | 40
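
A minimal verification sketch, assuming Python with scipy, applying the paired t-test and sign test to the per-clinic counseling visit counts in Table 3a.

    from scipy.stats import ttest_rel, binomtest

    # Per-clinic individual addiction counseling visits (Table 3a)
    pre  = [10791, 9984, 12298, 8682, 6707, 8401, 7482]
    post = [ 8652, 8440, 11012, 5926, 5668, 6722, 6232]

    t_stat, p = ttest_rel(pre, post)   # paired t-test across the seven clinics
    print(round(p, 4))                 # approximately 0.0003, as reported

    # Sign test: visits declined at all seven clinics, so the two-sided binomial
    # p-value is 2 * 0.5**7 = 0.0156, matching the reported 0.016.
    n_declines = sum(a > b for a, b in zip(pre, post))
    print(binomtest(n_declines, n=7, p=0.5).pvalue)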

Satisfaction

The mean score across all 6 questions on the pre-implementation patient survey was 3.78 (SD 0.750), whereas the mean score on the post-implementation patient survey was 3.74 (SD 0.775), a non-significant difference. There was a non-significant difference in the score for each of the 6 patient survey questions evaluated individually. The length of stay for patients taking the satisfaction survey during the pre- and post-implementation periods was not significantly different.

The staff pre-implementation survey mean score across all 16 questions was 3.11 (SD 0.819), whereas post-implementation the mean score was 3.32 (SD 0.728), a non-significant difference. As shown in Table 4, the post-implementation mean score a) increased statistically significantly for 5 questions, b) increased, but not statistically significantly, for 9 questions, and c) decreased, but not statistically significantly, for 2 questions. Overall, 33% of staff were satisfied or very satisfied with the record system in place pre-implementation, while 42% were satisfied or very satisfied with the EMR post-implementation.

Table 4.

Staff Satisfaction Survey Findings

Survey Questions Improved Post-implementation? p-value
Q1: How satisfied are you with the ability to access needed reports or obtain information for needed reports? Yes p=0.008
Q2: How satisfied are you with the user friendliness of this system? No NS
Q3: How satisfied are you with the reliability of this system? Yes NS
Q4: How satisfied are you with the overall integrity of the information that flows to or from you or your staff with regard to clinical records and/or billing system? Yes NS
Q5: How satisfied are you with the efficiency of the system in managing your clinical and/or business operations? Yes NS
Q6: How satisfied are you with the system overall? Yes NS
Q7: How satisfied are you with the ability of the system to track your productivity and/or your staff? Yes p=0.005
Q8: How satisfied are you with the organization of the patient records and/or reports? Yes p=0.03
Q9: How satisfied are you with your ability to access the patient’s medical record and/or reports? Yes NS
Q10: How satisfied are you with your ability to track/find test results, consultant reports and/or management reports? Yes NS
Q11: How satisfied are you that the patient record and/or management report format helps to prevent you from overlooking information? Yes p=0.03
Q12: How satisfied are you with your ability to communicate patient and/or administrative information to and from other clinical staff? Yes NS
Q13: How satisfied are you that you can communicate patient and/or administrative information to and from administrative staff? Yes p=0.03
Q14: How satisfied are you with your ability to communicate clinical and/or administrative information to and from patients? Yes NS
Q15: How satisfied are you with the overall quality of care provided based on your experience at the clinic and/or the records/reports you review? Yes NS
Q16: How satisfied are you with the quality of your work experience? No NS

Risk Management

There were 64 patient-related incident reports and 15 patient complaints pre-implementation, and 79 patient-related incident reports and 28 patient complaints post-implementation. With an average daily census of 2,782 pre-implementation and 2,733 post-implementation, the differences in event rates were not significant (chi-square = 1.67, 1 df, two-sided p = 0.20 and chi-square = 3.59, 1 df, two-sided p = 0.06, respectively). There were 8 medication error reports during administration/dispensing of 584,126 medication doses pre-implementation and 7 medication error reports during administration/dispensing of 586,766 medication doses post-implementation. The difference in event rates was not significant (chi-square = 0.00, 1 df, two-sided p = 1.00).
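
A minimal sketch, assuming Python with scipy, of one plausible way the incident-report comparison could be formed, treating the average daily census as the denominator and applying the Yates continuity correction; whether this exactly reproduces the authors' calculation is an assumption.

    from scipy.stats import chi2_contingency

    # Patient-related incident reports relative to average daily census
    pre_events,  pre_census  = 64, 2782
    post_events, post_census = 79, 2733

    table = [[pre_events,  pre_census  - pre_events],
             [post_events, post_census - post_events]]
    chi2, p, dof, expected = chi2_contingency(table, correction=True)
    print(round(chi2, 2), round(p, 2))   # roughly 1.67 and 0.20, in line with the reported values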

Financial Performance

During the pre-implementation period, revenue per capita staff was $66,900; whereas during the post-implementation period, it was $67,280, a non-significant 0.6% increase. During the pre-implementation period, cost per patient visit was $28.09; whereas during the post-implementation period, it was $29.68, a non-significant 5.7% increase.
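
For reference, the percent changes follow directly from the reported figures: (67,280 − 66,900) / 66,900 ≈ 0.57%, rounded to 0.6%, for revenue per capita staff, and (29.68 − 28.09) / 28.09 ≈ 5.66%, rounded to 5.7%, for cost per patient visit.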

DISCUSSION

To better conceptualize the various clinical and management issues involved in implementing an EMR, a hierarchy of corporate objectives was devised, consisting of (from most to least important): compliance with regulations; financial performance; quality of care; patient satisfaction; and staff satisfaction (Louie et al., 2012). Each of the five specific aims of this research was related to at least one of these objectives, reflecting critical elements of the program that needed to be addressed in developing the EMR.

The pre-implementation findings yielded both expected and unexpected information. The expected findings were 1) the relatively high timely completion rates of the annual medical and multidiscipline assessments; 2) the reasonably high rate of offering HCV viral load testing; 3) that patients were more satisfied with their care than staff were with the record system then in place for documenting that care; and 4) that risk management events were relatively infrequent. The unexpected findings were 1) a higher than expected number of missed medical and multidiscipline assessments during the pre-implementation period, and 2) a relatively low timely completion rate for the 90-day multidiscipline behavioral assessments, especially during the pre-implementation period.

Post-implementation, there was a statistically significant improvement in completion rates of the annual medical and multidiscipline behavioral assessments and there were no missed assessments. Given that these quality measures are also regulatory requirements, elimination of missed assessments meets the most important item in our hierarchy of corporate objectives described previously.

The finding for the other quality measure (obtaining HCV viral load) was statistically unchanged when comparing pre and post-implementation findings. For the other domains (productivity, satisfaction, risk management, and financial performance) the findings were less robust than expected, and in the case of productivity of human services and medical services staff, were the opposite of what was expected. However, the overall trend for staff satisfaction with the EMR was positive, and in some of the more important measures was significantly improved.

Staff-related issues are an internal factor: staff receptivity and computer-related skills changed during the intervening years of this study, which we suspect contributed to the productivity findings. While the staff satisfaction results may reflect the preparatory time spent prior to implementation, we suspect that the greater scrutiny of documentation quality in the post-implementation period may have contributed to the productivity decline among the human services (counseling) staff. A review of the literature on the impact of EMRs indicates that these challenges are universal (Marshall, Chin, 1998; Strasberg et al., 1998; Zdon, Middleton, 1999; Likourezos et al., 2004; Pizziferi et al., 2005; Valenti, 2005). We recommend that future investigators design their studies to systematically analyze these factors.

Limitations and Confounders

Admittedly, some may take issue with the study design or the specific measures chosen; argue that there was potential bias in data collection or in the data sources; contend that more attention should have been given to evaluating the specific EMR utilized or to other contextual issues and confounders; or question whether the findings are generalizable to other addiction treatment settings.

The pre- and post-implementation design of this study avoided ethical issues that may arise when withholding an intervention with a potential benefit, and the specific measures chosen for this study were substantiated by New York State regulations and/or their use in many published studies (Title 14 NYCRR Part 828, section 828.9; Title 14 NYCRR Part 828, section 828.13; Kian et al., 1995; Bingham, 1997; Bates et al., 1998; Marshall, Chin, 1998; Strasberg et al., 1998; Zdon, Middleton, 1999; NIH Consensus Statement on Management of Hepatitis C, 2002; Wang et al., 2003; Likourezos et al., 2004; Pizziferi et al., 2005; Valenti, 2005). The number of measures selected in this study was large compared with the other studies cited.

The literature does not validate any particular type of EMR or any particular setting; and most EMR vendors at the time of this study chose not to focus on both medical and behavioral settings of care. It is hoped that this study stimulates other investigators to replicate our study in other combined medical and behavioral settings, as there are few OATPs like ARTC that provide onsite primary medical care and HIV-related services to a largely disenfranchised population that experiences significant disparities in access and quality of healthcare services.

Finally, cost considerations and training logistics add to the amount of time needed to implement any EMR. This enhances the potential for the introduction of additional confounders, but all healthcare institutions must grapple with the allocation of scarce financial and human resources. Similarly, investigations in this area of health care delivery are not without limitations, as investigators must make decisions regarding which questions/hypotheses to pose and the merits of different study designs in doing so.

CONCLUSIONS

Despite results that were somewhat less robust than expected in some of the domains examined, this study revealed invaluable lessons. Not all measures in healthcare will be equally affected, and no study design is without limitations, especially since unanticipated intervening factors may be just as important as those that are anticipated. Prior to this study, and beyond the scope of the research reported in this paper, there were substantive efforts to prepare for and evaluate an EMR (Louie et al., 2012). These lessons are particularly relevant for the Affordable Care Act of 2010, which will utilize EMRs as the backbone of the healthcare infrastructure. Of note, since the study described here ended, we have been able to exploit system capabilities to measure and impact patient outcomes in ways that were not possible before implementation of the EMR.

Acknowledgments

We acknowledge the Addiction Research and Treatment Corporation (ARTC) patients, clinicians, managers, senior executives, and Board of Trustees. We also acknowledge the expertise and contributions of Crystal Fuller, PhD, Mailman School of Public Health, Columbia University, who provided study design and statistical consultation; John Kimberly, PhD, The Wharton School, University of Pennsylvania, who provided business management consultation; and Donald Hoover, PhD, Rutgers University, who provided statistical consultation. Of note, as of the date of acceptance of this manuscript, ARTC is well along in the process of rebranding itself to START Treatment and Recovery Centers (START). The changeover should be completed by February 2014.

Funding: This study was supported by the National Institute on Drug Abuse (R01DA022030).

References

1. Bates DW, Leape LL, Cullen DJ, et al. Effect of Computerized Physician Order Entry and a Team Intervention on Prevention of Serious Medication Errors. JAMA. 1998;280:1311–1316. doi:10.1001/jama.280.15.1311
2. Bingham A. Computerized patient records benefit physician offices. Healthcare Financial Management. 1997;51(9):68–71.
3. DesRoches CM, Campbell EG, Rao SR, et al. Electronic Health Records in Ambulatory Care — A National Survey of Physicians. NEJM. 2008;359(1):50–60. doi:10.1056/NEJMsa0802005
4. Kian LA, Stewart MW, Bagby C, et al. Justifying the cost of a computer-based patient record. Healthcare Financial Management. 1995;49(7):58.
5. Likourezos A, Chalfin DB, Murphy DG, et al. Physician and Nurse Satisfaction with an Electronic Medical Record System. J Emerg Med. 2004;27(4):419–424. doi:10.1016/j.jemermed.2004.03.019
6. Louie B, Kritz SA, Brown LS, et al. Electronic health information system at an opioid treatment programme: roadblocks to implementation. Journal of Evaluation in Clinical Practice. 2012;18(4):734–738. doi:10.1111/j.1365-2753.2011.01663.x
7. Pizziferi L, Kittler AF, Volk LA, et al. Primary care physician time utilization before and after implementation of an electronic health record: A time motion study. J Biomed Inform. 2005;38:176–188. doi:10.1016/j.jbi.2004.11.009
8. Marshall PD, Chin HL. The Effects of an Electronic Medical Record on Patient Care: Clinician Attitudes in a Large HMO. Proc AMIA Symposium. 1998:150–154.
9. NIH Consensus Statement on Management of Hepatitis C. Vol. 19, No. 3. NIH/NIDA; 2002.
10. National Survey of Substance Abuse Treatment Services (N-SSATS 2011). Substance Abuse and Mental Health Services Administration. Data on Substance Abuse Treatment Facilities. 2012. BHSIS Series S-64, HHS Publication No. (SMA) 12–4730.
11. Schoen C, Osborn R, Squires D, et al. A Survey Of Primary Care Doctors In Ten Countries Shows Progress In Use Of Health Information Technology, Less In Other Areas. Health Aff. 2012;31(12):2805–2816. doi:10.1377/hlthaff.2012.0884
12. Strasberg HR, Tudiver F, Holbrook AM, et al. Moving towards an electronic patient record: A survey to assess the needs of community family physicians. AMIA Fall Symposium. 1998:230–234.
13. Title 14 NYCRR Part 828, section 828.9. Requirements for the Operation of Chemotherapy Substance Abuse Programs.
14. Title 14 NYCRR Part 828, section 828.13. Requirements for the Operation of Chemotherapy Substance Abuse Programs.
15. Valenti WM (Committee Chair). Clinical Guidelines for the Medical Management of Hepatitis C. New York State Dept of Health; 2005.
16. Wang SJ, Middleton B, Prosser LA, et al. A Cost-Benefit Analysis of Electronic Medical Records in Primary Care. Am J Med. 2003;114(5):397–403. doi:10.1016/s0002-9343(03)00057-3
17. Zdon L, Middleton B. Ambulatory Electronic Records Implementation Cost Benefit: An Enterprise Case Study. Health Information Manage and Syst Soc. 1999;4:97–117.
