Author manuscript; available in PMC 2010 Dec 30.
Published in final edited form as: Health Promot Pract. 2010 Mar;11(2):161–165. doi: 10.1177/1524839909353023

Appraising Quantitative Research in Health Education: Guidelines for Public Health Educators

Leonard Jack Jr 1, Sandra C Hayes 2, Jeanfreau G Scharalda 3, Barbara Stetson 4, Nkenge H Jones-Jack 5, Matthew Valliere 6, William R Kirchain 7, Michael Fagen 8, Cris LeBlanc 9
PMCID: PMC3012621  NIHMSID: NIHMS257122  PMID: 20400654

Abstract

Many practicing health educators do not feel they possess the skills necessary to critically appraise quantitative research. This article is designed to provide practicing health educators with basic tools that facilitate a better understanding of quantitative research. It describes the major components of quantitative research reports: the title, introduction, methods, analyses, results, and discussion sections. Readers are introduced to the various types of study designs and to seven key questions health educators can use to facilitate the appraisal process. After reading this article, health educators will be in a better position to determine whether research studies are well designed and executed.

Keywords: health education, quantitative research, study designs, research methods

Appraising the Quality of Quantitative Research in Health Education

Practicing health educators often find themselves with little time to read published research in great detail. Health educators with limited time to read scientific papers may become frustrated as they get bogged down trying to understand research terminology, methods, and approaches. The purpose of appraising a scientific publication is to assess whether the study's research questions (hypotheses), methods, and results (findings) are sufficiently valid to produce useful information (Fowkes and Fulton, 1991; Donnelly, 2004; Greenhalgh and Taylor, 1997; Johnson and Onwuegbuzie, 2004; Greenhalgh, 1997; Yin, 2003; Hennekens and Buring, 1987). The ability to deconstruct and reconstruct scientific publications is a critical skill in a results-oriented environment marked by increasing demands for improved program outcomes and strong justifications for program focus and direction. Health educators must not rely solely on the opinions of researchers but, rather, should increase their confidence in their own abilities to discern the quality of published scientific research. Health educators with little experience reading and appraising scientific publications may find this task less difficult if they: 1) become more familiar with the key components of a research publication, and 2) use the questions presented in this article to critically appraise the strengths and weaknesses of published research.

Key Components of a Scientific Research Publication

The key components of a research publication provide the information needed to assess the strengths and weaknesses of the research. Key components typically include the publication title, abstract, introduction, research methods used to address the research question(s) or hypothesis, statistical analyses used, results, and the researcher's interpretation and conclusion or recommended use of results to inform future research or practice. A brief description of these components follows:

Publication Title

A general heading or description should provide immediate insight into the intent of the research. Titles may include information regarding the focus of the research, population or target audience being studied, and study design.

Abstract

An abstract provides the reader with a brief description of the overall research, how it was done, statistical techniques employed, key results, and relevant implications or recommendations.

Introduction

This section elaborates on the content mentioned in the abstract and provides a better idea of what to anticipate in the manuscript. The introduction provides a succinct presentation of previously published literature, thus offering a purpose (rationale) for the study.

Methods

This component of the publication provides critical information on the type of research methods used to conduct the study. Common examples of study designs used to conduct quantitative research include the cross-sectional study, cohort study, case-control study, and controlled trial. The methods section should also contain information on the inclusion and exclusion criteria used to identify participants in the study.

Analyses

Quantitative data consist of quantifiable information, often collected through surveys, that is analyzed using statistical tests to determine whether the results could have happened by chance. Two types of statistical analyses are used: descriptive and inferential (Johnson and Onwuegbuzie, 2004). Descriptive statistics are used to describe the basic features of the study data and provide simple summaries about the sample and measures. With inferential statistics, researchers try to reach conclusions that extend beyond the immediate data alone; that is, they use inferential statistics to make inferences from the data to more general conditions.
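To make this distinction concrete, the short Python sketch below computes descriptive summaries for two groups of hypothetical survey scores and then applies one common inferential test, a two-sample t-test from the SciPy library. The data values and group names are illustrative assumptions, not taken from any study discussed here:

    import statistics
    from scipy import stats  # SciPy's independent two-sample t-test

    # Hypothetical survey scores for two groups (illustrative values only)
    intervention = [72, 75, 78, 74, 80, 77, 73, 79]
    control = [70, 68, 74, 71, 69, 72, 70, 73]

    # Descriptive statistics: summarize the sample itself
    for name, scores in [("intervention", intervention), ("control", control)]:
        print(f"{name}: mean={statistics.mean(scores):.1f}, "
              f"sd={statistics.stdev(scores):.1f}, n={len(scores)}")

    # Inferential statistics: generalize beyond the sample. The t-test
    # asks whether the observed difference in means is larger than
    # chance variation would plausibly produce.
    t_stat, p_value = stats.ttest_ind(intervention, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A small p-value (conventionally below .05) suggests the group difference is unlikely to be due to chance alone.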

Results

This section presents the reader with the researcher's data and the results of the statistical analyses described in the methods section. Thus, this section must align closely with the methods section.

Discussion (Conclusion)

This section should explain what the data mean, summarizing the main results and findings for the reader. Important limitations (such as the use of a non-random sample, the absence of a control group, or a short intervention duration) should be discussed. Researchers should discuss how each limitation can affect the applicability and use of the study results. This section also presents recommendations on ways the study can help advance future health education research and practice.

Critically Appraising the Strengths and Weaknesses of Published Research

During careful reading of the analysis, results, and discussion (conclusion) sections, what key questions might you ask yourself in order to critically appraise the strengths and weaknesses of the research? Based on a careful review of the literature (Greenhalgh and Taylor, 1997; Greenhalgh, 1997; Hennekens and Buring, 1987) and our research experiences, we have identified seven key questions to guide your assessment of quantitative research.

1) Is a study design identified and appropriately applied?

Study designs refer to the methodology used to investigate a particular health phenomenon. Becoming familiar with the various study designs will help prepare you to critically assess whether the selected design was adequate for answering the research questions (or hypotheses). As mentioned previously, common examples of study designs frequently used to conduct quantitative research include the cross-sectional study, cohort study, case-control study, and controlled trial. A brief description of each can be found in Table 1.

Table 1.

Definitions of Study Designs

Cross Sectional Study: A cross-sectional study is a descriptive study in which disease, risk factors, or other characteristics are measured simultaneously (at one particular point in time) in a given population (Last, 2001).
Cohort Study: A cohort study is an analytical study in which individuals with differing exposures to a suspected factor are identified and then observed for the occurrence of certain health effects over a period of time (Last, 2001). Comparison may be made with a control group, but interventions are not normally applied in cohort studies.
Case-Control: A case-control study is an analytical study which compares individuals who have a specific condition (“cases”) with a group of individuals without the condition (“controls”) (Last, 2001). A case-control study generally depends on the collection of retrospective data, thus introducing the possibility of recall bias. Recall bias is the tendency of subjects to report events in a manner that is different between the two groups studied.
Controlled Trial: A controlled trial is an experimental study in which an intervention is administered to one group of individuals (also referred to as the treatment, experimental, or study group) and the outcome is compared to that of a similar group (the control group) that did not receive the intervention (Fowkes and Fulton, 1991). A controlled trial may or may not use randomization to assign individuals to groups, and it may or may not use blinding to prevent participants from knowing which treatment they receive. When study participants are randomly assigned (meaning everyone has an equal chance of being placed in either group) to a treatment or control group, the study design is referred to as a randomized controlled trial (a brief illustration follows this table).
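To make random assignment concrete, here is a minimal sketch in Python (the participant identifiers are hypothetical; only the standard library's random module is used). Shuffling the full list and splitting it in half gives every participant an equal chance of entering either group:

    import random

    # Hypothetical participant identifiers (illustrative only)
    participants = [f"P{i:02d}" for i in range(1, 21)]

    # Randomized assignment: shuffle, then split in half, so every
    # participant has an equal chance of entering either group.
    random.seed(42)  # fixed seed so the example is reproducible
    random.shuffle(participants)
    half = len(participants) // 2
    treatment_group = participants[:half]
    control_group = participants[half:]

    print("treatment:", treatment_group)
    print("control:  ", control_group)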

2) Is the study sample representative of the group from which it is drawn?

The study sample must be representative of the group from which it is drawn; that is, it must be typical of the wider target audience to whom the research might apply. Judging whether the study sample is representative requires taking into consideration both the sampling method and the sample size.

Sampling Method

Many sampling methods are used individually or in combination. Keep in mind that sampling methods are divided into two categories: probability sampling and non-probability sampling (Last, 2001). Probability sampling (also called random sampling) is any sampling scheme in which the probability of choosing each individual is the same (or at least known, so that results can be readjusted mathematically to be equal). Non-probability sampling is any sampling scheme in which the probability of an individual being chosen is unknown. Researchers should offer a rationale for using non-probability sampling and, when it is used, acknowledge its limitations. For example, use of a convenience sample (choosing individuals in an unstructured manner) can be justified when collecting pilot data that will inform future studies employing more rigorous sampling methods. A short sketch contrasting the two categories follows.
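The contrast can be shown in a few lines of Python (the sampling frame and sample sizes are hypothetical; only the standard library is used):

    import random

    # Hypothetical sampling frame of 500 clinic patients (illustrative)
    population = [f"patient_{i}" for i in range(500)]

    # Probability sampling: a simple random sample of 50, so each
    # individual has a known, equal chance (50/500) of selection.
    random.seed(7)
    random_sample = random.sample(population, k=50)

    # Non-probability (convenience) sampling: for example, the first 50
    # patients who happen to arrive -- their selection probability is
    # unknown and may be related to characteristics such as appointment time.
    convenience_sample = population[:50]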

Sample Size

Established statistical theory and formulas are used to generate sample size calculations: the recommended number of individuals necessary to have sufficient power to detect meaningful results at a certain level of statistical significance. In the methods section, look for a statement or two confirming whether steps were taken to obtain the appropriate sample size.
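As an illustration of the kind of calculation behind such a statement, the sketch below applies a standard normal-approximation formula for comparing two means at 80% power and a two-sided alpha of .05 (the effect size and standard deviation are hypothetical assumptions):

    import math
    from statistics import NormalDist

    def n_per_group(delta, sigma, alpha=0.05, power=0.80):
        # Approximate sample size per group for a two-sample comparison
        # of means (two-sided test, normal approximation):
        #   n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta) ** 2
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
        z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
        return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

    # Hypothetical inputs: detect a 5-point difference in mean scores,
    # assuming a standard deviation of 10 points.
    print(n_per_group(delta=5, sigma=10))  # about 63 participants per group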

3) In research studies using a control group, is this group adequate for the purpose of the study?

Source of controls

In case-control and cohort studies, the source of controls should be such that the distribution of characteristics not under investigation is similar to that in the cases or study cohort.

Matching

In case-control studies, cases and controls are often matched on certain characteristics such as age, sex, income, and race. The criteria used for including and excluding study participants must be adequately described and examined carefully. Inclusion and exclusion criteria may include ethnicity, age at diagnosis, length of time living with a health condition, geographic location, and presence or absence of complications. You should critically assess whether matching across these characteristics actually occurred; a simple sketch of one matching rule follows.
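This is a minimal sketch of 1:1 matching in Python (the records and the matching rule are hypothetical; real studies use more rigorous procedures): each case is paired with an unused control of the same sex whose age falls within five years.

    # Hypothetical case and control records (illustrative values only)
    cases = [{"id": "C1", "age": 54, "sex": "F"},
             {"id": "C2", "age": 61, "sex": "M"}]
    controls = [{"id": "K1", "age": 52, "sex": "F"},
                {"id": "K2", "age": 60, "sex": "M"},
                {"id": "K3", "age": 35, "sex": "M"}]

    # Pair each case with the first unused control of the same sex
    # whose age is within 5 years -- a simple 1:1 matching rule.
    used = set()
    for case in cases:
        match = next((c for c in controls
                      if c["id"] not in used
                      and c["sex"] == case["sex"]
                      and abs(c["age"] - case["age"]) <= 5), None)
        if match:
            used.add(match["id"])
        print(case["id"], "->", match["id"] if match else "no match")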

4) What is the validity of measurements and outcomes identified in the study?

Validity is the extent to which a measurement captures what it claims to measure. Measurements might take the form of questions contained in a survey, questionnaire, or instrument. Researchers should address one or more of the following types of validity: face, content, criterion-related, and construct (Last, 2001; Trochim and Donnelly, 2008).

Face validity

Face validity means that, on inspection, the variable of interest appears to measure what it is intended to measure. If the researcher has chosen to study a variable that has not been studied before, he or she usually will need to start with face validity.

Content validity

Content validity involves comparing the content of the measurement technique to the known literature on the topic and verifying that the tool (e.g., survey, questionnaire) represents that literature accurately.

Criterion-related validity

Criterion-related validity involves showing that the measures within a survey, when tested, prove effective in predicting a criterion or indicators of a construct.

Construct validity

Construct validity deals with the validation of the construct that underlies the research. Here, researchers test the theory that underlies the hypothesis or research question.

5) To what extent is a common source of bias, lack of blinding, taken into account?

During data collection, a common source of bias is that subjects and/or those collecting the data are not blind to the purpose of the research. For example, researchers who know which participants are in the experimental group may go the extra mile to make sure those participants benefit from the intervention (Fowkes and Fulton, 1991). Inadequate blinding can be a problem in studies utilizing all types of study designs. While total blinding is not always possible, it is essential to appraise whether steps were taken to ensure adequate blinding.

6) To what extent is the study considered complete with regard to drop outs and missing data?

Drop outs

Regardless of the study design employed, one must assess not only the proportion of drop outs in each group but also why participants dropped out. High or differing drop-out rates may point to possible bias, and readers should also consider what efforts were made to retain participants in the study.

Missing data

Missing data are a part of almost all research, but they should still be appraised. There are several reasons why data may be missing. The nature and extent of missing data should be explained; a brief sketch of one such check follows.
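A reader might hope to find that the researchers performed, and reported, a check like the following Python sketch (the dataset and variable names are hypothetical):

    # Hypothetical records, with None marking missing values
    records = [
        {"age": 45, "bmi": 27.1, "activity_score": None},
        {"age": 52, "bmi": None, "activity_score": 14},
        {"age": 38, "bmi": 24.3, "activity_score": 18},
        {"age": None, "bmi": 30.2, "activity_score": 11},
    ]

    # Report the extent of missingness for each variable
    for var in ["age", "bmi", "activity_score"]:
        n_missing = sum(1 for r in records if r[var] is None)
        print(f"{var}: {n_missing}/{len(records)} missing "
              f"({100 * n_missing / len(records):.0f}%)")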

7) To what extent are study results influenced by factors that negatively impact their credibility?

Contamination

In research studies comparing the effectiveness of a structured intervention, contamination occurs when the control group makes changes based on learning what those participating in the intervention are doing. Although researchers typically do not report the extent to which contamination occurred, you should nevertheless try to assess whether contamination compromised the credibility of the study results.

Confounding factors

A confounding factor is a variable that is related to both the exposure (or intervention) and the outcome measures defined in a study. A confounding factor may mask an actual association or falsely demonstrate an apparent association between the study variables where no real association exists. If confounding factors are not measured and accounted for, study results may be biased and compromised. The worked sketch below shows how this can happen.
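In this small, entirely hypothetical Python example, the crude comparison makes the intervention look effective, but stratifying by age, which is related to both group membership and the outcome, reveals identical outcome rates within each age stratum:

    # Hypothetical counts: (good outcomes, total) per group and age stratum
    data = {
        "younger": {"intervention": (90, 100), "control": (45, 50)},
        "older":   {"intervention": (10, 20),  "control": (50, 100)},
    }

    # Crude (unstratified) comparison -- ignores age entirely
    for group in ["intervention", "control"]:
        good = sum(data[s][group][0] for s in data)
        total = sum(data[s][group][1] for s in data)
        print(f"crude {group}: {good / total:.0%}")  # 83% vs 63%

    # Stratified comparison -- within each age stratum the groups have
    # identical rates (90% vs 90%, 50% vs 50%), so age, not the
    # intervention, explains the crude difference.
    for stratum, groups in data.items():
        for group, (good, total) in groups.items():
            print(f"{stratum} {group}: {good / total:.0%}")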

Conclusion

The guidelines and questions presented in this article are by no means exhaustive. However, when applied, they can help health education practitioners obtain a deeper understanding of the quality of published research. While no study is 100% perfect, we do encourage health education practitioners to pause before taking researchers at their word that study results are both accurate and impressive. If you find yourself answering ‘no’ to a majority of the key questions provided, then it is probably safe to say that, from your perspective, the quality of the research is questionable.

Over time, as you repeatedly apply the guidelines presented in this article, you will become more confident and interested in reading research publications from beginning to end. While this article is geared to health educators, it can help anyone interested in learning how to appraise published research. Table 2 lists additional reading resources that can help improve one’s understanding and knowledge of quantitative research. This article and the reading resources identified in Table 2 can serve as useful tools to frame informative conversations with your peers regarding the strengths and weaknesses of published quantitative research in health education.

Table 2.

Publications on How to Read, Write and Appraise Quantitative Research

  1. Hancock DR, Algozzine B. Doing Case Study Research: A Practical Guide for Beginning Researchers. New York, NY: Teachers College Press; 2006.

  2. Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today. 2004 Feb;24(2):105–12.

  3. Hodges B. Writing for publication: a personal view. Paediatric Nursing. 2007 Mar;19(2):35–6.

  4. Jackson N, Waters E, The Guidelines for Systematic Reviews of Health Promotion and Public Health Intervention Taskforce. The challenge of systematically reviewing public health interventions. Journal of Public Health. 2004 Sept;26(3):303–7.

  5. Lee P. Understanding the basic aspects of research papers. Nursing Times. 2006;102(27):28–30.

  6. Morgan D, Morgan RK. Single Case Research Methods for the Behavioral and Health Sciences. Thousand Oaks, CA: Sage Publications; 2008.

  7. Stang A, Schmidt-Pokrzywniak A. Submissions of scientific papers should not become sophistication. Journal of Clinical Epidemiology. 2007 May;60(5):535.

Contributor Information

Leonard Jack, Jr., Email: ljack@xula.edu, Associate Dean for Research and Endowed Chair of Minority Health Disparities, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, New Orleans, Louisiana 70125; Telephone: 504-520-5345; Fax: 504-520-7971.

Sandra C. Hayes, Email: shayes@tougaloo.edu, Central Mississippi Area Health Education Center, 350 West Woodrow Wilson, Suite 3320, Jackson, MS 39213; Telephone: 601-987-0272; Fax: 601-815-5388.

Jeanfreau G. Scharalda, Email: sjeanf@lsuhsc.edu, Louisiana State University Health Sciences Center School of Nursing, 1900 Gravier Street, New Orleans, Louisiana 70112; Telephone: 504-568-4140; Fax: 504-568-5853.

Barbara Stetson, Email: Barbara.stetson@louisville.edu, Department of Psychological and Brain Sciences, 317 Life Sciences Building, University of Louisville, Louisville, KY 40292; Telephone: 502-852-2540; Fax: 502-852-8904.

Nkenge H. Jones-Jack, Email: nkenge@mac.com, Epidemiologist & Evaluation Consultant, Metairie, Louisiana 70002. Telephone: 678-524-1147; Fax: 504-267-4080.

Matthew Valliere, Email: mtvallie@dhh.la.gov, Chronic Disease Prevention and Control, Bureau of Primary Care and Rural Health, Office of the Secretary, 628 North 4th Street, Baton Rouge, LA 70821-3118; Telephone: 225-342-2655; Fax: 225-342-2652.

William R. Kirchain, Email: wkirchai@xula.edu, Division of Clinical and Administrative Sciences, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, Room 121, New Orleans, Louisiana 70125; Telephone: 504-520-5395; Fax: 504-520-7971.

Michael Fagen, Email: mfagen1@uic.edu, Co-Associate Editor for the Evaluation and Practice section of Health Promotion Practice, Department of Community Health Sciences, School of Public Health, University of Illinois at Chicago, 1603 W. Taylor St., M/C 923, Chicago, IL 60608-1260, Telephone: 312-355-0647; Fax: 312-996-3551.

Cris LeBlanc, Centers of Excellence Scholar, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, New Orleans, Louisiana 70125; Telephone: 504-520-5345; Fax: 504-520-7971.

References

  1. Fowkes FG, Fulton PM. Critical appraisal of published research: introductory guidelines. British Medical Journal. 1991;302:1136–1140. doi: 10.1136/bmj.302.6785.1136.
  2. Donnelly RA. The Complete Idiot's Guide to Statistics. New York, NY: Alpha Books; 2004. pp. 6–7.
  3. Greenhalgh T, Taylor R. How to read a paper: Papers that go beyond numbers (qualitative research). British Medical Journal. 1997;315:740–743. doi: 10.1136/bmj.315.7110.740.
  4. Greenhalgh T. How to read a paper: Assessing the methodological quality of published papers. British Medical Journal. 1997;315:305–308. doi: 10.1136/bmj.315.7103.305.
  5. Johnson RB, Onwuegbuzie AJ. Mixed methods research: A research paradigm whose time has come. Educational Researcher. 2004;33:14–26.
  6. Hennekens CH, Buring JE. Epidemiology in Medicine. Boston, MA: Little, Brown and Company; 1987. pp. 106–108.
  7. Last JM. A Dictionary of Epidemiology. 4th ed. New York, NY: Oxford University Press; 2001.
  8. Trochim WM, Donnelly J. Research Methods Knowledge Base. 3rd ed. Mason, OH: Atomic Dog; 2008. pp. 6–8.
