Author manuscript; available in PMC 2011 Apr 27.
Published in final edited form as: Acad Emerg Med. 2009 Nov;16(11):1124–1131. doi: 10.1111/j.1553-2712.2009.00557.x

Study Designs and Evaluation Models for Emergency Department Public Health Research

Kerry B Broderick 1, Megan L Ranney 1, Federico E Vaca 1, Gail D’Onofrio 1, Richard E Rothman 1, Karin V Rhodes 1, Bruce Becker 1, Jason S Haukoos 1
PMCID: PMC3082772  NIHMSID: NIHMS284779  PMID: 20053232

Abstract

Public health research requires sound design and thoughtful consideration of potential biases that may influence the validity of results. It also requires careful implementation of protocols and procedures that are likely to translate from the research environment to actual clinical practice. This article is the product of a breakout session from the 2009 Academic Emergency Medicine consensus conference, “Public Health in the ED: Screening, Surveillance, and Intervention”; it describes in detail aspects of performing emergency department (ED)-based public health research and serves as a resource for current and future researchers. In doing so, the authors describe methodologic features of study design, participant selection and retention, and measurements and analyses pertinent to public health research. In addition, a number of recommendations related to research methods and future investigations related to public health work in the ED are provided. Public health investigators are poised to make substantial contributions to this important area of research, but this will only be accomplished by employing sound research methodology in the context of rigorous program evaluation.

Keywords: public health, clinical research, study design, program evaluation, models, validity, assessment


Emergency departments (EDs) are highly complex medical environments that serve as our society’s primary medical safety net. Emergency medicine researchers are well positioned to design and implement projects that produce generalizable knowledge to improve the public’s health. The design and execution of robust ED-based public health studies are challenging. Whether the focus is alcohol, tobacco, or other drug use, human immunodeficiency virus (HIV) or sexually transmitted infections (STIs), intimate partner violence, or other areas of health promotion and injury prevention, several common themes associated with both study design and model evaluation affect the quality of public health research in the ED.

This article was developed in the context of the 2009 Academic Emergency Medicine consensus conference entitled “Public Health in the ED: Surveillance, Screening, and Intervention” held on May 13, 2009. We report on the findings of a conference workshop intended to review study designs and evaluation models specific to ED-based public health research. This article reviews concepts related to ED-based public health research, including 1) study designs, 2) participant selection and retention, and 3) measurement and analyses. It frames the key concepts raised in discussion of these topics during the consensus workshop, with a broader goal of educating researchers and providing a more focused foundation for performing high-quality ED-based public health research.

STUDY DESIGNS

The choice of a valid study design is critical. Selection of a design and its features appropriate to the study’s context will minimize threats to internal validity by providing unbiased estimates of effect measures (all italicized words are defined in Data Supplement S1, available as supporting information in the online version of this paper). Research is generally grouped into four general categories: experimental, quasi-experimental, preexperimental, and observational designs. Previous ED-based research has utilized all of these designs to achieve the goal of maximizing the internal validity of research performed to broadly improve the public’s health.

Emergency department–based public health research may also be separated into four specific categories that include surveillance, screening or testing, interventions, and economic evaluation. Ideal study designs depend primarily on the study question being asked and the type of investigation being conducted; however, some universal principles that increase the methodologic quality of the studies do apply.

Surveillance

Surveillance is a term that describes the systematic collection of population-based information, primarily to report disease occurrence and etiology. Generally, surveillance has been divided into research- and nonresearch-related categories. Surveillance is likely to be nonresearch when it involves the regular, ongoing collection and analysis of health-related data, conducted to monitor the frequency of occurrence and distribution of disease or injury in a population. As such, these systems typically are under the purview of governmental and international organizations, including the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO).1,2

Surveillance conducted under a research purview, on the other hand, involves the collection and analysis of health-related data conducted either to generate knowledge that is applicable to populations and settings other than those from which the data were collected or to contribute new knowledge about the health condition. Surveillance research in EDs is becoming increasingly common.3,4

Surveillance research, by definition, is observational, and the most robust observational design includes prospective data collection. Surveillance research may also involve linking data collected in the ED to other sources of data, including those from national organizations, state health departments, or legal or welfare systems. Additionally, the validity and generalizability of surveillance research in the ED may be increased by use of multiple centers, standardized definitions of cases and outcomes, and around-the-clock recruitment. Excellent examples of such research include defining the prevalence of violence-related injury and STIs.5–7

A large, high-impact ED-based surveillance study conducted near the beginning of the HIV epidemic in the ED at Johns Hopkins University used a then-novel identity-unlinked testing approach to characterize the prevalence of, and demographic factors associated with, HIV infection.8 This approach has since been widely adopted to help characterize various other public health issues in EDs and other health care settings.9

Linking data collected in the ED with other data sources related to major public health issues may improve our understanding of and perspective on these health issues. For example, Davidson et al.10 in 1997 linked ED visits for alcohol-related complaints with the Colorado Death Registry to learn that 5-year mortality rates among alcohol-intoxicated patients were 2.4 times that of an age- and sex-matched comparison group and that these alcohol-related visits were a significant predictor of increased morbidity and mortality. Similarly, a study of men with injuries inflicted by a female partner found that over 50% of the injured men had previous histories of arrest for domestic violence.11

Real-time surveillance research has the potential for immediate reporting to the public health community, providing alerts of incipient and ongoing threats to the public’s health, and it can also be used to assess the effectiveness of ongoing interventions.3 ED-based infectious disease surveillance for HIV and methicillin-resistant Staphylococcus aureus (MRSA), for example, was critical in early identification of these national epidemics and helped inform public health strategies directed at curtailing their spread.12

Audit Studies

Audit studies can be used to track access to outpatient care for vulnerable populations.13–15 Lack of access to outpatient primary and specialty care is both a major public health concern and a contributor to ED crowding. Auditing, used with experimental designs, is a research tool that employs an intentionally deceptive approach to uncover intentional or unintentional discrimination in a variety of markets (e.g., housing, credit, employment).16,17 Asplin et al.14 incorporated this approach in the health care sector by having the same person call the same clinic twice with the same scripted request for an appointment following a visit to the ED. The investigators varied the person’s insurance status, however, to assess the impact of insurance status on access to care, demonstrating that patients with private insurance were more likely to receive an appointment than those with Medicaid or those without insurance.
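
To make the analysis of such a paired audit design concrete, the minimal sketch below applies an exact McNemar-style comparison to hypothetical data; the counts and the simple two-call structure are illustrative assumptions, not figures from Asplin et al.

```python
from scipy.stats import binomtest

# Hypothetical discordant pairs from 200 clinics, each called twice with
# identical scripts that differ only in stated insurance status:
# b = appointment offered only to the privately insured caller,
# c = appointment offered only to the Medicaid caller.
b, c = 46, 12

# Exact McNemar test: under the null hypothesis of no insurance effect,
# discordant pairs split 50/50 between the two callers.
result = binomtest(b, n=b + c, p=0.5)
print(f"{b + c} discordant pairs, exact McNemar p = {result.pvalue:.4f}")
```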

Capture–Recapture

Capture–recapture is a relatively uncommon surveillance technique originally described for population biology. In recent years, however, it has been extended to epidemiologic investigations to estimate incidence or prevalence.18,19 Although several capture–recapture approaches have been used, the general methodology involves examining the overlap in identification of cases from different data sources or populations. By calculating the expected number of persons in each combination of data sources, capture–recapture methodology allows for estimation of the number of persons identified by no data source. Adding the persons identified by no source to the number of unique persons already identified provides an estimate of disease prevalence.20–22
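
As a concrete illustration, the following sketch applies the classic two-source Lincoln–Petersen estimator with the Chapman correction to hypothetical counts; the source names and numbers are assumptions for illustration only.

```python
def chapman_estimate(n1: int, n2: int, m: int) -> float:
    """Chapman-corrected Lincoln-Petersen estimate of the total case count."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts: n1 cases in ED records, n2 in a health department
# registry, and m cases appearing in both sources.
n1, n2, m = 320, 410, 150
total = chapman_estimate(n1, n2, m)
unseen = total - (n1 + n2 - m)   # estimated cases identified by no source
print(f"Estimated total cases: {total:.0f} ({unseen:.0f} seen by neither source)")
```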

Interventions

An intervention is broadly defined as anything introduced into a research environment that is specifically controlled by the investigator. Examples of interventions in ED-based public health research may include a unique approach to screening, use of different types of counselors or different approaches to providing specific content to patients, or a multifaceted screening or testing program. Studies that attempt to evaluate the efficacy or effectiveness of an intervention typically use experimental designs; however, quasi-experimental and preexperimental designs have been used with some success. Experimental designs, also often referred to as randomized controlled trials, are considered the highest quality and most valid approach to assessing the impact of an intervention. This design provides the opportunity to create two or more study groups that are theoretically balanced across all measured and unmeasured characteristics with the exception of the intervention. As such, investigators are better able to assess the effect of the intervention while minimizing bias. There are a number of well-designed randomized controlled trials described or discussed in the ED-based public health research literature.23–28

Randomized controlled trials are subject to certain biases, two of the more common being selection bias and measurement bias. Selection bias occurs when the study sample does not represent the population from which it was selected. This may occur when the sample size is too small, if refusal rates are high, or if eligibility criteria are too stringent. As an example, ED-based studies evaluating brief interventions for unhealthy alcohol use generally have strict inclusion and exclusion criteria. Many of these studies exclude patients with alcohol dependence, other drug use, or psychiatric illness, thus selecting a sample that is not representative of the broader group of patients with alcohol dependence. Also, choosing an ED as the site for recruitment limits the sample population and creates a potential bias relative to other clinical venues.

There are a number of quasi-experimental designs that have been used in public health ED research, often when the performance of a randomized controlled trial is too expensive, is premature relative to the maturation of the content area, or is simply impractical or not feasible. Specific quasi-experimental designs include nonequivalent control group, interrupted time–series, and equivalent time–samples designs. Equivalent time–samples designs, for example, in which the intervention is alternated sequentially or in randomized fashion with the control condition, allow the investigator to approximate true randomization. This approach attempts to balance two or more study groups (depending on the number of times the intervention and control are alternated, the length of each time period, and the number of subjects enrolled in each period). All quasi-experimental designs use some form of quasi-randomization to assess the effect of an intervention.
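
A minimal sketch of how an equivalent time–samples schedule might be generated is shown below; the week-long unit and the number of blocks are assumptions for illustration.

```python
import random

def time_samples_schedule(n_blocks: int = 12, seed: int = 42) -> list[str]:
    """Randomly order an equal number of intervention and control blocks."""
    blocks = ["intervention"] * (n_blocks // 2) + ["control"] * (n_blocks // 2)
    random.Random(seed).shuffle(blocks)
    return blocks

# One possible 12-week schedule; each week the whole ED runs one condition.
for week, condition in enumerate(time_samples_schedule(), start=1):
    print(f"Week {week:2d}: {condition}")
```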

Preexperimental designs are considered the weakest experimental designs in terms of ensuring internal validity. In general, two unique preexperimental designs exist: the one-group pretest–posttest design (also commonly referred to as “before–after” or “pre–post”) and the static group comparison design (also commonly referred to as a “historical control group” or “nonconcurrent control group”). These designs do not allow for control of secular trends, and the groups being compared are likely to be dissimilar. The dissimilarity is most apparent when the physician selects patients for an intervention. When a preexperimental design is used, the two groups are likely not comparable at all, thus requiring multivariable modeling to adjust for variation between the two groups while assessing the independent effect of the intervention. This approach can result in erroneous conclusions about the usefulness of an intervention. Regardless of the study’s design, duplicate findings by different groups of investigators in different settings are required to provide evidence that an intervention is effective.
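
As a rough illustration of the multivariable adjustment such designs force on the analyst, the sketch below fits a logistic regression on simulated data in which a historical control period differs from the intervention period at baseline; all variables and effect sizes are invented for illustration, and unmeasured differences would remain unaddressed.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
period = rng.integers(0, 2, n)                   # 0 = historical, 1 = intervention era
age = rng.normal(45, 15, n)
severity = rng.normal(0, 1, n) + 0.3 * period    # baseline imbalance between eras
log_odds = -1.0 + 0.4 * period + 0.02 * age + 0.8 * severity
outcome = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

# Adjust for measured baseline differences while estimating the period effect.
X = sm.add_constant(np.column_stack([period, age, severity]))
fit = sm.Logit(outcome, X).fit(disp=False)
print(fit.summary(xname=["const", "period", "age", "severity"]))
```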

Economic Evaluations

Economic analyses are important components of program evaluation, providing integrated financial considerations related to public health interventions. Cost-effectiveness analysis, which estimates costs per outcome (e.g., per quality-adjusted life-year), must be distinguished from cost–benefit analysis, in which both costs and outcomes are expressed in monetary terms. In cost-effectiveness research, the denominator can be identified and enumerated; in cost–benefit analysis, the benefit is often nebulous and difficult to value in dollars. While many cost-effectiveness analyses related to public health interventions exist, there is still a need for critical economic evaluation of many specific ED interventions.29–35 Most cost-effectiveness analyses are also constructed using theoretical models (i.e., combining data from multiple unique sources), which is useful, but use of actual clinical trial data to inform the economic evaluation is essential. Economic analyses performed concurrently with controlled clinical trials may provide a strong basis for understanding the financial impact of the intervention being studied and can be used to leverage funding to support research. A cost–benefit analysis, however, identifying actual costs of an intervention (the numerator in the analysis), is complex and generally requires the input of a health care economist.
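
For concreteness, the sketch below computes an incremental cost-effectiveness ratio (ICER), the incremental cost per additional quality-adjusted life-year, for a hypothetical ED screening program; all dollar and QALY values are invented for illustration.

```python
def icer(cost_new: float, cost_usual: float,
         qaly_new: float, qaly_usual: float) -> float:
    """Incremental cost divided by incremental effectiveness (cost per QALY)."""
    return (cost_new - cost_usual) / (qaly_new - qaly_usual)

# Hypothetical per-patient values for an ED screening program vs. usual care.
ratio = icer(cost_new=1850.0, cost_usual=1600.0, qaly_new=8.42, qaly_usual=8.40)
print(f"ICER: ${ratio:,.0f} per QALY gained")   # -> $12,500 per QALY gained
```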

PARTICIPANT SELECTION AND RETENTION

There are multiple methodologic challenges that influence results and limit generalizability, as has been seen in many of the public health ED studies reported to date.36–38 Participant selection and retention may introduce bias, and small sample sizes may reduce the precision of estimates; both are particularly important considerations when designing clinical research. The number of subjects included in public health research has, in some instances, been relatively small compared to efficacy research for other medical conditions.38 Recent studies of alcohol use, in particular, routinely report high refusal rates, especially among adolescents.39

Participant Selection

It is critical that researchers clearly define the types of participants for inclusion in the study and make every attempt to recruit all such study participants presenting to the ED during the study period. Adequate assessment of unintentional selection bias requires comparisons of those who were included versus those who were missed.

Unintentional selection bias commonly results in a convenience study sample. Because EDs are open 24 hours per day, the limited time periods when researchers or assistants are present may exclude a significant subsection of the population, representing an important methodologic limitation. To adequately assess this potential source of bias, it is crucial to report the number and characteristics of potential participants from time periods in which sampling was not performed.

Limitation of recruitment to specific risk groups is another problem with ED-based public health research. Although this strategy allows for easier recruitment and permits detection of modest intervention effects with smaller sample sizes, recruiting only a selected sample limits generalizability. In addition, certain groups may have a greater inclination to participate than others. Without assessing this propensity to participate, results may be biased by baseline openness to the intervention or screening method.40

Finally, adequate enrollment is important in the context of a clinically meaningful outcome effect. While some public health research has the benefit of having large effect sizes, many others have modest to small effect sizes or suffer from confounding (which may mask a relatively larger effect size).41 In an attempt to identify modest effect sizes in the context of confounding, or within subgroups, there is a need for larger sample sizes to reduce the probability of type II error. Pilot testing and enrollment may serve as an important component of improving the precision of sample size estimates for larger, more definitive research.
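
To illustrate why modest effect sizes demand large samples, the sketch below implements the standard normal-approximation sample-size formula for comparing two proportions; the 30% and 25% event rates are hypothetical.

```python
import math
from scipy.stats import norm

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Two-sided, two-proportion sample size per group (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical: hazardous drinking drops from 30% (control) to 25% (intervention).
print(n_per_group(0.30, 0.25))   # roughly 1,250 patients per group
```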

Participant Retention

Loss to follow-up of ED patients has been a major challenge for ED researchers42–45 and a critique of emergency medicine public health research.46 Study enrollment and follow-up protocols must be intentionally and thoughtfully designed and followed rigorously as the study proceeds. Use of proven tracking techniques (e.g., gift cards, multiple sources of contact) has demonstrated that the majority of ED patients can be located at 12 months.46,47 Some studies have achieved follow-up as high as 95% at 3 months through use of multiple locators for their patients;48 others report use of telephone cards and innovative compensation methods to increase follow-up.46,49 These techniques must be replicated and further developed to ensure consistent, maximal follow-up for patients enrolled in prospective observational or experimental research.

MEASUREMENT AND ANALYSES

Assessment Effects

Assessment effect continues to plague screening and intervention research. Although difficult, there is a significant need to disentangle assessment from treatment effect. One particular problem with public health research is that control groups often receive a level of care exceeding that of true “usual care,” and as a result may do better than expected, falsely attenuating differences between groups. The screening or assessment can act as a type of intervention. McCambridge and Day50 used an experimental design and the Alcohol Use Disorders Identification Test (AUDIT) questionnaire to demonstrate that administering the AUDIT instrument alone decreased alcohol use in college students. Daeppen et al.51 included a control group without any assessments in their study of brief alcohol intervention among injured ED patients, yet still reported no difference in alcohol consumption between study groups.

An ongoing study by D’Onofrio et al. (unpublished data) to determine the efficacy of a brief intervention performed by ED practitioners for harmful and hazardous drinkers includes the evaluation of “assessment effect.” One-half of the control group received assessments similar to the two intervention arms; the other half received no assessments and will only be contacted at 12 months.

Limiting assessments and contacts with patients must be balanced against the ability to contact patients over time. Greater attention to the characteristics of the control group, its selection, and quantification of the assessment exposure will assist in accurately measuring outcomes and describing the magnitude of intervention effects. More research is needed on limiting assessment effects, given conflicting study results regarding the minimal dose of the intervention. This issue is extremely important and has the potential for large economic impact: if screening alone is sufficient to produce behavior change, effort and personnel could be redirected to other important patient care issues.

Accurate and reliable measures of social desirability may differ across racial and ethnic groups.52–58 While computer-based interviewing facilitates greater patient anonymity and less social desirability bias, some assessment methods (e.g., face-to-face interviews) may considerably influence measurable outcomes in study participants, reflecting a bias toward greater social desirability. This may be of particular importance when non-English-speaking study participants are interviewed in their native language.57,58

Consenting Effects

Great variability likely occurs in the consent processes that are used across public health research conducted in the ED. The process of consenting study participants may, in and of itself, introduce bias. To our knowledge, this is an area that has never been studied in the ED. While standardizing the consent or its process among various institutions may not be feasible, publishing the specific study consent, as well as describing the consenting process, may assist other investigators in assessing its potential effects. Attempts by a consortium of ED public health investigators toward standardizing consents and consenting protocols may assist in minimizing this as a moderating effect among public health research studies.59,60 Publishing the consent forms as part of the electronic journal should be strongly considered.

Outcome Measures

Inclusion of the most meaningful, objective, and valid outcomes should take priority when designing a study. Too often researchers include variables without adequately considering how these will be used in the analyses and whether they are necessary to investigate the primary aims of the study. In most cases less is better; accordingly, evaluation tools should be thoroughly validated before beginning the study to ensure that they do not contain motivational or intervention content (i.e., content that has the potential to create an assessment effect, as previously discussed). Standardization of outcome measures is also important, as this provides comparability across studies. The National Emergency Department HIV Testing Consortium recently published consensus-based nomenclature and definitions related to reporting of such programmatic results.59

For alcohol-related research, outcomes other than alcohol and drug consumption may be equally important. Examples include alcohol- or drug-related injuries, number of ED visits, or contacts with legal authorities. Uniformity in outcome measurement and reporting would also help to broaden the understanding of how intervention results could be effectively and broadly applied to population health.6164

Threats to External Validity

Using sound study design to minimize threats to internal validity is important, but larger-scale research to enhance external validity and generalizability is required before widespread dissemination. Because researchers have established relationships with colleagues at other institutions, most studies are conducted at the same clinical sites where previous studies have been performed. As such, “contamination” of the study site and of the data are practical and important concerns. Examples of contamination in this context include 1) additional education of staff related to previous studies; 2) direct involvement of staff in previous studies at the same institution; and 3) differences in care provided above the standard as a result of these factors. Accordingly, the “negative” effects found in some studies may be accounted for, in part, by the increase in the quality of “usual care” among patients enrolled in the control groups. While difficult, ED public health researchers might consider conducting sequential studies at different clinical sites to avoid this potential threat to validity. This may also improve collaboration and public health education in the community and generate interest in these important topics at nonacademic sites.

External validity is also threatened by the unique demographic characteristics of the patient populations at certain institutions. What is successful at an urban tertiary care hospital may not work in a suburban community ED. To address this concern, multicenter studies, including nonacademic institutions, should be encouraged.

Screening Instruments and Assessment Tools

Just as a universal definition of what is being studied helps in interpretation of study design, agreement on common screening and assessment tools could improve comparability of public health intervention studies.59,61–64 The wide variety of topics and populations studied by ED-based public health research makes the task of choosing standardized screening and assessment instruments challenging. Still, examination of the problems encountered in screening, brief intervention, and referral to treatment (SBIRT) studies can offer some instructive lessons for future researchers.

The SBIRT studies to date have used a wide range of screening and assessment instruments. While the AUDIT is the most frequently used screening instrument, there are many others.52–54,65,66 Several studies use a prescreen or brief screen to identify study participants who may meet inclusion criteria for further screening or assessment. As already mentioned, the screening alone can act as an intervention, thus modifying the results.50

Choosing the “best-practice” screening approach can be challenging. Screening tools have variable sensitivities and specificities and are influenced by factors such as culture and sex.52–54,66 Computerized screens could theoretically assist with this, as they could be preprogrammed to choose the most appropriate screen for an individual based on a few preliminary epidemiologic questions such as sex, race/ethnicity, or age, as in the sketch below.55,56,67–71
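
A minimal sketch of such rule-based instrument selection follows; the age and sex branch points, and the instruments chosen at each branch, are illustrative assumptions, not validated recommendations.

```python
def select_screen(age: int, sex: str) -> str:
    """Pick a screening instrument from preliminary demographic answers."""
    if age < 21:
        return "CRAFFT"   # adolescent-oriented screen (illustrative choice)
    if sex == "female":
        return "TWEAK"    # sometimes favored in female samples (illustrative)
    return "AUDIT"        # common default adult screen

print(select_screen(age=17, sex="male"))    # -> CRAFFT
print(select_screen(age=34, sex="female"))  # -> TWEAK
```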

SUMMARY

Several common themes resulted from the discussions related to study designs and program evaluation conducted as part of the 2009 Academic Emergency Medicine consensus conference, and several recommendations were formulated to improve the quality and translation of public health research in the ED (Table 1). Public health investigators are poised to make substantial contributions to this important area of research. This will only be accomplished, however, by employing sound research methodology.

Table 1.

Study Design and Program Evaluation Recommendations for Future ED-based Public Health Research

  1. Include multiple centers and maximize use of electronic medical records to increase the validity and generalizability of surveillance, screening, and other public health programs.

  2. Develop approaches to maximize participation of groups of patients who are generally not likely to participate in screening and intervention research.

  3. Develop innovative methods of improving study subject retention and disseminate details related to follow-up methodologies.

  4. Develop simple and reliable protocols and procedures to minimize assessment effects in the context of needed initial screening.

  5. Outcome measurements should be meaningful, objective, and valid and when possible should be standardized within content or topic area.

  6. Standardize screening, assessment, and outcome measurement tools and nomenclature.

  7. Explore alternatives to screening, brief intervention, and referral to treatment (SBIRT) and brief negotiated interventions.

  8. Educate researchers about the importance of techniques to monitor and ensure treatment fidelity within studies.

  9. Train public health and emergency medicine investigators in formal research methods.

  10. Establish an emergency medicine public health research consortium or content-specific consortia.


Acknowledgments

Dr. Broderick is supported, in part, by the Substance Abuse and Mental Health Services Administration (SAMHSA; TI18302). Dr. Haukoos is supported, in part, by an Independent Scientist Award (K02 HS017526) from the Agency for Healthcare Research and Quality. Dr. Rothman is supported, in part, by a Health Sciences Grant from Gilead Sciences. Dr. Rhodes is supported, in part, by the National Institute of Mental Health (K23 MH64572). Dr. D’Onofrio is supported, in part, by SAMHSA (CSATIU79T1020253); the National Institute on Drug Abuse (NIDA; R01DA025991); the National Institute on Alcohol Abuse and Alcoholism (NIAAA; R01AA14963); and the National Heart, Lung, and Blood Institute (NHLBI; R01 HL081153).

Footnotes

This work is the output from a consensus workshop conducted during the May 2009 Academic Emergency Medicine Consensus Conference in New Orleans, LA: “Public Health in the ED: Surveillance, Screening, and Intervention.”

Workshop participants included (in alphabetical order) Daniel Andersen, Judith Bernstein, Steven L. Bernstein, Marian Betz, Chris Buresh, Carlos Camargo, Doris Chan, Ethan Cowan, Cinnamon Dixon, Kathryn Dong, Denise Dowd, John Finnell, Charles Gerardo, Brian Geyer, Adit Ginde, Corita Grudzen, Michael Handrigan, Fred Harchelroad, Jason Haukoos, James Heffelfinger, Nancy Holson, Jeffrey Hom, Yu-Hsiang Hsieh, Nina Joyce, Michael Lyons, Ken Malone, Priya Mammen, Nancy Miertschin, Ward Myers, Matt Prekker, Michael S. Radeos, Junaid Razzak, Lynne Richardson, Matthew Scholer, Carolyn Snider, Kirk Stiffler, Ashley Sullivan, Carolyn Synovitz, Breena Taira, Jeffrey J. Thompson, Stephen Wall, Margaret Warner, Lauren Whiteside, Lee Wilbur, and Leslie Zun.

Supporting Information:

The following supporting information is available in the online version of this paper:

Data Supplement S1. Study designs and evaluation models for emergency department public health research.

The document is in PDF format.

Please note: Wiley Periodicals Inc. is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

References

1. Centers for Disease Control and Prevention. Web-based Injury Statistics Query and Reporting System (WISQARS). Available at: http://www.cdc.gov/injury/wisqars/index.html. Accessed Aug 15, 2009.
2. World Health Organization. World Report on Violence and Health. Available at: http://www.who.int/violence_injury_prevention/violence/world_report/en/. Accessed Aug 15, 2009.
3. Talan DA, Moran GJ, Mower WR, et al. EMERGEncy ID NET: an emergency department-based emerging infections sentinel network. The EMERGEncy ID NET Study Group. Ann Emerg Med. 1998;32:703–711. doi: 10.1016/s0196-0644(98)70071-x.
4. Hakenewerth AM, Waller AE, Ising AI, Tintinalli JE. North Carolina Disease Event Tracking and Epidemiologic Collection Tool (NC DETECT) and the National Hospital Ambulatory Medical Care Survey (NHAMCS): comparison of emergency department data. Acad Emerg Med. 2009;16:261–269. doi: 10.1111/j.1553-2712.2008.00334.x.
5. Sege RD, Kharasch S, Perron C, et al. Pediatric violence-related injuries in Boston: results of a city-wide emergency department surveillance program. Arch Pediatr Adolesc Med. 2002;156:73–76. doi: 10.1001/archpedi.156.1.73.
6. Kelen GD, Fritz S, Qaqish B, et al. Unrecognized human immunodeficiency virus infection in emergency department patients. N Engl J Med. 1988;318:1645–1650. doi: 10.1056/NEJM198806233182503.
7. Macdonald S, Cherpitel CJ, Borges G, Desouza A, Giesbrecht N, Stockwell T. The criteria for causation of alcohol in violent injuries based on emergency room data from six countries. Addict Behav. 2005;30:103–113. doi: 10.1016/j.addbeh.2004.04.016.
8. Kelen GD, DiGiovanna T, Bisson L, Kalainov D, Sivertson KT, Quinn TC. Human immunodeficiency virus infection in emergency department patients. Epidemiology, clinical presentations, and risk to health care workers: the Johns Hopkins experience. JAMA. 1989;262:516–522. doi: 10.1001/jama.262.4.516.
9. Emergency Medicine Network (EMNet). Publications page. Available at: http://www.emnet-usa.org/publicat.htm#or. Accessed Aug 15, 2009.
10. Davidson P, Koziol-McLain J, Harrison L, Timken D, Lowenstein SR. Intoxicated ED patients: a five-year follow-up of morbidity and mortality. Ann Emerg Med. 1997;30:593–597. doi: 10.1016/s0196-0644(97)70074-x.
11. Muelleman RL, Burgess P. Male victims of domestic violence and their history of perpetrating violence. Acad Emerg Med. 1998;5:869–870. doi: 10.1111/j.1553-2712.1998.tb02815.x.
12. Moran GJ, Krishnadasan A, Gorwitz RJ, et al.; EMERGEncy ID Net Study Group. Methicillin-resistant S. aureus infections among patients in the emergency department. N Engl J Med. 2006;355:666–674. doi: 10.1056/NEJMoa055356.
13. Kellermann AL; the Medicaid Access Study Group. Access of Medicaid recipients to outpatient care. N Engl J Med. 1994;330:1426–1430. doi: 10.1056/NEJM199405193302007.
14. Asplin B, Rhodes KV, Levy H, et al. Insurance status and access to urgent ambulatory care follow-up appointments. JAMA. 2005;294:1248–1254. doi: 10.1001/jama.294.10.1248.
15. Rhodes KV, Veith TL, Levy H, Asplin BR. Referral without access: for psychiatric services, wait for the beep. Ann Emerg Med. 2009;54:272–278. doi: 10.1016/j.annemergmed.2008.08.023.
16. Fix M, Struyk RJ. Clear and Convincing Evidence: Measurement of Discrimination in America. Washington, DC: Urban Institute Press; 1993.
17. Bertrand M, Mullainathan S. Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination. Available at: http://www.nber.org/papers/w9873. Accessed Aug 15, 2009.
18. McCarty DJ, Tull ES, Moy CS, Kwoh CK, Laporte RE. Ascertainment corrected rates: applications of the capture-recapture methods. Int J Epidemiol. 1993;22:559–565. doi: 10.1093/ije/22.3.559.
19. Walker NK, Vandal AC, Holden JK, et al. Does the capture-recapture analysis provide more reliable estimates of the incidence and prevalence of leg ulcers in the community? Aust N Z J Pub Health. 2002;26:451–455. doi: 10.1111/j.1467-842x.2002.tb00346.x.
20. Neugebauer R, Wittes J. Annotation: voluntary and involuntary capture-recapture samples–problems in the estimation of hidden and elusive populations. Am J Pub Health. 1994;84:1068–1069. doi: 10.2105/ajph.84.7.1068.
21. Wittes JT, Sidel VW. A generalization of the simple capture-recapture model with applications to epidemiological research. J Chronic Dis. 1968;21:287–301. doi: 10.1016/0021-9681(68)90038-6.
22. Wittes JT, Colton T, Sidel VW. Capture-recapture methods for assessing completeness of case ascertainment when using multiple information sources. J Chronic Dis. 1974;27:25–36. doi: 10.1016/0021-9681(74)90005-8.
23. MacMillan HL, Wathen CN, Jamieson E, et al. Approaches to screening for intimate partner violence in health care settings: a randomized trial. JAMA. 2006;296:530–536. doi: 10.1001/jama.296.5.530.
24. Victora CG, Habicht J, Bryce J. Evidence-based public health: moving beyond randomized trials. Am J Public Health. 2004;94:400–405. doi: 10.2105/ajph.94.3.400.
25. Benson K, Hartz AJ. A comparison of observational studies and randomized controlled trials. N Engl J Med. 2000;342:1878–1886. doi: 10.1056/NEJM200006223422506.
26. Rhodes KV, Drum M, Anliker EA, Frankel R, Howes DS, Levinson W. Lowering the threshold for discussions of domestic violence: a randomized controlled trial of computer screening. Arch Intern Med. 2006;165:1–8. doi: 10.1001/archinte.166.10.1107.
27. D’Onofrio G, Pantalon MV, Degutis LC, et al. Brief intervention for hazardous and harmful drinkers in the emergency department. Ann Emerg Med. 2008;51:742–750. doi: 10.1016/j.annemergmed.2007.11.028.
28. Academic ED SBIRT Research Collaborative. The impact of screening, brief intervention and referral for treatment (SBIRT) on emergency department patients’ alcohol use. Ann Emerg Med. 2007;50:699–710. doi: 10.1016/j.annemergmed.2007.06.486.
29. Kraemer KL. The cost-effectiveness and cost-benefit of screening and brief intervention for unhealthy alcohol use in medical settings. Subst Abuse. 2007;28:67–77. doi: 10.1300/J465v28n03_07.
30. Paltiel AD, Weinstein MC, Kimmel AD. Expanded screening for HIV in the United States–an analysis of cost-effectiveness. N Engl J Med. 2005;352:586–595. doi: 10.1056/NEJMsa042088.
31. Walensky RP, Freedberg KA, Weinstein MC, Paltiel AD. Cost-effectiveness of HIV testing and treatment in the United States. Clin Infect Dis. 2007;45:S248–S254. doi: 10.1086/522546.
32. Sanders GD, Bayoumi AM, Sundaram V, et al. Cost-effectiveness of screening for HIV in the era of highly active antiretroviral therapy. N Engl J Med. 2005;352:570–585. doi: 10.1056/NEJMsa042657.
33. Fleming MF, Mundt MP, French MT, Manwell LB, Stauffacher EA, Barry KL. Benefit-cost analysis of brief physician advice with problem drinkers: long-term efficacy and benefit-cost analysis. Alcohol Clin Exp Res. 2002;26:36–43.
34. Gentilello LM, Ebel BE, Wickizer TM, Salkever DS, Rivara FP. Alcohol interventions for trauma patients treated in emergency departments and hospitals: a cost benefit analysis. Ann Surg. 2005;241:541–550. doi: 10.1097/01.sla.0000157133.80396.1c.
35. Maciosek MV, Coffield AB, Edwards NM, Flottenmesch TJ, Goodman MJ, Solberg LI. Priorities among effective clinical preventive services: results of a systematic review and analysis. Am J Prev Med. 2006;31:52–61. doi: 10.1016/j.amepre.2006.03.012.
36. Kypri K. Methodological issues in alcohol screening and brief intervention research. Subst Abus. 2007;28:31–42. doi: 10.1300/J465v28n03_04.
37. Saitz R, Svikis D, D’Onofrio G, Kraemer KL, Perl H. Challenges applying alcohol brief intervention in diverse practice settings: populations, outcomes, and costs. Alcohol Clin Exp Res. 2006;30:332–338. doi: 10.1111/j.1530-0277.2006.00038.x.
38. D’Onofrio G, Degutis LC. Preventive care in the emergency department: screening and brief intervention for alcohol problems in the emergency department: a systematic review. Acad Emerg Med. 2002;9:627–638. doi: 10.1111/j.1553-2712.2002.tb02304.x.
39. Spirito A, Monti PM, Barnett NP, et al. A randomized clinical trial of a brief motivational intervention for alcohol-positive adolescents treated in an emergency department. J Pediatr. 2004;145:396–402. doi: 10.1016/j.jpeds.2004.04.057.
40. Austin PC, Grootendorst P, Anderson GM. A comparison of the ability of different propensity score models to balance measured variables between treated and untreated subjects: a Monte Carlo study. Stat Med. 2007;26:734–753. doi: 10.1002/sim.2580.
41. Calderon Y, Haughey M, Leider J, Bijur PE, Gennis P, Bauman LJ. Increasing willingness to be tested for human immunodeficiency virus in the emergency department during off-hour tours: a randomized trial. Sex Transm Dis. 2007;34:1025–1029. doi: 10.1097/OLQ.0b013e31814b96bb.
42. Field DL, Hedges JR, Arnold K, Goldstein-Wayne B, Rouan GW. Limitations of chest pain follow-up from an urban teaching hospital emergency department. J Emerg Med. 1988;1:362–368. doi: 10.1016/0736-4679(88)90002-9.
43. Jones J, Clark W, Bradford J, Dougherty J. Efficacy of a telephone follow-up system in the emergency department. J Emerg Med. 1988;1:249–254. doi: 10.1016/0736-4679(88)90336-8.
44. Magnussen AR, Hedges JR, Vanko M, McCarten K, Moorhead JC. Follow-up compliance after emergency department evaluation. Ann Emerg Med. 1993;22:560–569. doi: 10.1016/s0196-0644(05)81942-0.
45. Fletcher SW, Appel FA, Bourgois M. Improving emergency-room subject follow-up in a metropolitan teaching hospital. N Engl J Med. 1974;291:385–388. doi: 10.1056/NEJM197408222910804.
46. Woolard RH, Carty K, Wirtz P, et al. Research fundamentals: follow-up of subjects in clinical trials: addressing subject attrition. Acad Emerg Med. 2004;11:859–866. doi: 10.1111/j.1553-2712.2004.tb00769.x.
47. Mello MJ, Longabaugh R, Baird J, Nirenberg T, Woolard R. DIAL: a telephone brief intervention for high-risk alcohol use with injured emergency department patients. Ann Emerg Med. 2008;51:755–764. doi: 10.1016/j.annemergmed.2007.11.034.
48. Cottler LB, Compton WM, Ben-Abdallah A. Achieving a 96.6 percent follow-up rate in a longitudinal study of drug abusers. Drug Alcohol Depend. 1996;41:209–217. doi: 10.1016/0376-8716(96)01254-9.
49. UCLA Integrated Substance Abuse Programs. Center for Advancing Longitudinal Drug Abuse Research. Available at: http://www.caldar.org. Accessed Aug 15, 2009.
50. McCambridge J, Day M. Randomized controlled trial of the effects of completing the Alcohol Use Disorders Identification questionnaire on self-reported hazardous drinking. Addiction. 2007;103:241–248. doi: 10.1111/j.1360-0443.2007.02080.x.
51. Daeppen JB, Gaume J, Bady P, et al. Brief alcohol intervention and alcohol assessment do not influence alcohol use in injured patients treated in the emergency department: a randomized controlled clinical trial. Addiction. 2007;102:1224–1233. doi: 10.1111/j.1360-0443.2007.01869.x.
52. Cherpitel CJ. Screening for alcohol problems in the emergency department. Ann Emerg Med. 1995;26:158–166. doi: 10.1016/s0196-0644(95)70146-x.
53. Cherpitel CJ. Analysis of cut points for screening instruments for alcohol problems in the emergency department. J Stud Alcohol. 1995;56:695–700. doi: 10.15288/jsa.1995.56.695.
54. Cherpitel CJ. Comparison of screening instruments for alcohol problems between black and white emergency room patients from two regions of the country. Alcohol Clin Exp Res. 1997;21:1391–1397.
55. Tourangeau R, Smith TW. Asking sensitive questions: the impact of data collection mode, question format, and question context. Public Opin Q. 1996;60:275–304.
56. De Leeuw E, Hox J, Kef S. Computer-assisted self-interviewing tailored for special populations and topics. Field Methods. 2003;15:223–251.
57. Marin G, Marin BV. Hispanics: who are they? In: Research with Hispanic Populations. Newbury Park, CA: Sage Publications, Inc.; 1991:12–13.
58. Triandis HC, Marin G, Lisansky J, Betancourt H. Simpatia as a cultural script of Hispanics. J Pers Soc Psychol. 1984;47:1363–1375.
59. Lyons MS, Lindsell CJ, Haukoos JS, et al. Nomenclature and definitions for emergency department human immunodeficiency virus (HIV) testing: report from the 2007 conference of the National Emergency Department HIV Testing Consortium. Acad Emerg Med. 2009;16:168–177. doi: 10.1111/j.1553-2712.2008.00300.x.
60. Sachs GA, Hougham GW, Sugarman J, et al. Conducting empirical research on informed consent: challenges and questions. IRB Human Res. 2003;25:1–7.
61. European Resuscitation Council. Recommended guidelines for uniform reporting of data from out-of-hospital cardiac arrest. Br Heart J. 1992;67:325–333. doi: 10.1136/hrt.67.4.325.
62. Idris AH, Becker LB, Ornato JP, et al. Utstein-style guidelines for reporting of laboratory CPR research. Circulation. 1996;94:2324–2336. doi: 10.1161/01.cir.94.9.2324.
63. Dick WF, Baskett PJ, Grande C, et al. Recommendations for uniform reporting of data following major trauma–the Utstein style. An International Trauma Anaesthesia and Critical Care Society (ITACCS) initiative. Br J Anaesth. 2000;84:818–819. doi: 10.1093/oxfordjournals.bja.a013601.
64. Idris AH, Berg RA, Bierens J, et al. Recommended guidelines for uniform reporting of data from drowning: the “Utstein style”. Resuscitation. 2003;59:45–57. doi: 10.1016/j.resuscitation.2003.09.003.
65. Bernstein E, Bernstein J, Levenson S. Project ASSERT: an ED-based intervention to increase access to primary care, preventive services, and the substance abuse treatment system. Ann Emerg Med. 1997;30:181–189. doi: 10.1016/s0196-0644(97)70140-9.
66. Caviness CM, Hatgis C, Anderson BJ, et al. Three brief screens for detecting hazardous drinking in incarcerated women. J Stud Alcohol Drugs. 2009;70:50–54. doi: 10.15288/jsad.2009.70.50.
67. Maio RF, Shope JT, Blow FC, et al. A randomized controlled trial of an emergency department-based interactive computer program to prevent alcohol misuse among injured adolescents. Ann Emerg Med. 2005;45:420–429. doi: 10.1016/j.annemergmed.2004.10.013.
68. Weisband S, Kiesler S. Self-disclosure on computer forms: meta-analysis and implications. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Common Ground. Available at: http://sigchi.org/chi96/proceedings/papers/Weisband/sw_txt.htm. Accessed Aug 15, 2009.
69. Turner CF, Forsyth BH, O’Reilly JM, et al. Automated self-interviewing and the survey measurement of sensitive behavior. In: Couper MP, Baker RP, Bethlehem J, et al., editors. Computer-Assisted Survey Information Collection. New York, NY: John Wiley; 1998.
70. Newman JC, Des Jarlais DC, Turner CF, Gribble J, Cooley P, Paone D. The differential effects of face-to-face and computer interview modes. Am J Public Health. 2002;92:294–297. doi: 10.2105/ajph.92.2.294.
71. Neumann T, Neuner B, Weiss-Gerlach E, et al. The effect of computerized tailored brief advice on at-risk drinking in subcritically injured trauma study participants. J Trauma. 2006;61:805–814. doi: 10.1097/01.ta.0000196399.29893.52.
