SYNOPSIS
Objectives
Our objectives were to describe the methodology of the Pregnancy Risk Assessment Monitoring System (PRAMS), examine recent response rates, determine characteristics associated with response, and track response patterns over time.
Methods
PRAMS is a mixed-mode surveillance system, using mail and telephone surveys. Rates for response, contact, cooperation, and refusal were computed for 2001. Logistic regression was used to examine the relationship between maternal and infant characteristics and the likelihood of response. Response patterns from 1996 to 2001 were compared for nine states.
Results
The median response rate for the 23 states in 2001 was 76% (range: 49% to 84%). Cooperation rates ranged from 86% to 97% (median 91%); contact rates ranged from 58% to 93% (median 82%). Response rates were higher for women who were older, white, married, more educated, first-time mothers, recipients of early prenatal care, and mothers of normal birthweight infants. Education level was the most consistent predictor of response, followed by marital status and maternal race. From 1996 to 2001, response to the initial mailing decreased in all states compared, but the decrease was offset by increases in mail follow-up and telephone response rates. Overall response rates remained unchanged.
Conclusions
The PRAMS mail/telephone methodology is an effective means of reaching most recent mothers in the 23 states examined, but some population subgroups are more difficult to reach than others. Through more intensive follow-up efforts, PRAMS states have been able to maintain high response rates over time despite decreases in response to the initial mailing.
Survey researchers are finding it increasingly difficult to attain high response rates.1 Many factors have been identified as contributing to the decline, including the proliferation of telemarketing, the increased use of caller ID and other call-blocking technologies, heightened concerns about confidentiality, and busier lifestyles. Response rates for most telephone and face-to-face surveys have declined. Mail surveys, in general, have not experienced the same extent of decline, but some have seen response rates fall.2,3 To maintain historical levels of response, the U.S. Census and many other surveys have had to incorporate operational changes to enhance contacts.4,5 It is unclear how robust mixed-mode surveys are in this changing survey research environment.
In light of these trends, we examined the response rates for the Pregnancy Risk Assessment Monitoring System (PRAMS), a mixed-mode surveillance system designed to provide states with ongoing, population-based, state-specific information on selected maternal behaviors and experiences that occur before and during pregnancy and during a child’s early infancy. Although the basic PRAMS methodology, which consists of a mail survey with telephone follow-up for nonrespondents, has not changed, many states have intensified their efforts to locate and contact sampled women in order to maintain adequate levels of response. PRAMS is part of the Centers for Disease Control and Prevention (CDC) initiative to reduce infant mortality and low birthweight. Since the inception of PRAMS in 1987, the number of participating states (a count that includes New York City) has increased from six to 32, with 10 of those states joining between 1999 and 2001. PRAMS is open to any independent entity that reports vital records, including the 50 states, U.S. territories, New York City, and the District of Columbia; the term “states” is traditionally used to describe participating jurisdictions and is used in this report. PRAMS surveillance now covers 62% of all U.S. births.
This article has four purposes. First, we describe the basic PRAMS methodology and the measures that have been introduced to improve response. Second, we examine the most recently available response rates, which are from the 2001 birth cohort. Third, we identify the characteristics associated with response in 2001. Finally, we examine trends in response rates from 1996 to 2001. We discuss how factors such as respondent behaviors, more intensive efforts to locate sampled women, and enhanced follow-up have contributed to the observed changes. This information will be of interest to other researchers using a mixed-mode data collection approach.
METHODOLOGY
PRAMS methodology
As part of the PRAMS methodology, each state can tailor various data collection strategies to meet its unique needs. We describe the standard procedures used by most states and give examples of state-specific options, all of which are based on the CDC model protocol.6 The details of the PRAMS surveillance methodology have been described elsewhere.7,8
The population of interest for PRAMS is all mothers who give birth within their state of residence to a live-born infant during the surveillance period. A state’s birth-certificate file serves as the sampling frame for identifying new mothers. Women are sampled between two and six months after giving birth. The PRAMS sample is stratified so that subpopulations of particular public health interest are oversampled, such as mothers of low birthweight infants and racial/ethnic minority groups. Annual sample sizes range from 1,325 to 3,270, with larger sample sizes for states with more complex stratification schemes (Table 1).
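To make the stratified design concrete, here is a minimal sketch of oversampling from a birth-certificate frame. The pandas DataFrame, the stratum labels, and the take sizes are invented for illustration; they do not represent any state's actual strata or allocations.

```python
import pandas as pd

# Hypothetical birth-certificate frame; in practice this is the state's
# vital records file for the surveillance period.
births = pd.DataFrame({
    "mother_id": range(1, 40001),
    "birthweight_g": [1400, 3200, 3400, 2100] * 10000,
})

# Illustrative stratification: oversample low birthweight (<2,500 g)
# relative to its share of the frame.
births["stratum"] = births["birthweight_g"].lt(2500).map(
    {True: "low_bw", False: "normal_bw"}
)
take = {"low_bw": 600, "normal_bw": 1400}  # assumed annual allocation

sample = (
    births.groupby("stratum", group_keys=False)
    .apply(lambda g: g.sample(n=min(take[g.name], len(g)), random_state=42))
)

# Inverse-probability-of-selection weights are retained so that estimates
# represent all births rather than the oversampled mix.
stratum_sizes = births["stratum"].value_counts()
weights = {s: stratum_sizes[s] / n for s, n in take.items()}
print(sample["stratum"].value_counts())
print(weights)
```

Because oversampled strata are overrepresented by design, the selection weights computed at sampling time are what make later prevalence estimates population-based.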
Table 1.
PRAMS stratification variables, annual sample sizes, and incentives and rewards, 2001
Baby items are not sent to women whose babies have died.
Michigan 2001 data only available for July through December.
New York excludes New York City.
PRAMS, a mixed-mode surveillance system based on Dillman’s Tailored Design Method,9 incorporates many techniques designed to enhance response. These include a personalized mailing package, use of incentives and rewards, and repeated but varied contact attempts. The primary data collection mode is mail, with telephone contact for nonresponders. Multiple attempts to contact sampled women are made within each mode. The contact attempts via mail include a preletter, three separate mailings containing the survey, and a tickler (a thank you/reminder note sent between the first and second survey mailings). Since 1996, several minor modifications to the methodology have been introduced to improve response. The most significant change is that the third mailing, which was previously optional, is now strongly encouraged by the PRAMS protocol.
Telephone contact begins one to two weeks after the last questionnaire packet is mailed. A variety of sources is used to obtain one or more valid telephone numbers for a mother. Increasingly, telephone numbers are captured on the birth certificate file. Many states use other health department program databases, such as those for the Special Supplemental Nutrition Program for Women, Infants and Children (WIC), newborn screening, and immunization, to locate mothers who may also be participating in or tracked by those programs. Internet telephone directories, which have recently become available, are another popular source. Up to 15 call attempts, staggered over different times of the day and different days of the week, are made to each telephone number.
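As an illustration of the staggered calling rule only, the sketch below spreads up to 15 attempts across different weekdays and times of day. The day and slot definitions are assumptions made for this example; the protocol itself does not prescribe them.

```python
import random
from itertools import product

# Assumed calling windows; the protocol requires only that the 15 attempts
# be staggered over different times of day and days of the week.
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
SLOTS = ["morning", "afternoon", "evening"]

def call_schedule(max_attempts: int = 15, seed: int = 0) -> list:
    """Return up to max_attempts distinct (day, slot) pairs in varied order."""
    combos = list(product(DAYS, SLOTS))  # 18 distinct day/slot pairs
    random.Random(seed).shuffle(combos)  # deterministic but varied ordering
    return combos[:max_attempts]

for attempt, (day, slot) in enumerate(call_schedule(), start=1):
    print(f"Attempt {attempt:2d}: {day} {slot}")
```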
All PRAMS data are collected by the states, usually by health department staff members. Typically, two full-time staff members are needed to carry out most aspects of PRAMS data collection: a program coordinator and a data manager.6 In an effort to allocate more intensive resources to the telephone follow-up, however, more states are choosing to contract out the telephone portion to professional survey research organizations. A few states contract out all data collection activities. The only staffing requirement is that telephone interviewers must be female.
Currently, the PRAMS questionnaire is in its fifth revision (each revision is used for approximately four years). The current version was implemented with the 2004 birth cohort and will remain in use through the 2007 birth cohort. Each state’s questionnaire consists of a set of core questions common to all PRAMS states and a state-specific section. For the latter, states may choose from a standard library of questions or develop their own. Optional questions may be inserted among core questions, resulting in a unique survey for each state. The questionnaire takes approximately 20 minutes to complete. Mail and telephone versions are available in English and Spanish.
All states have adopted some type of participation incentive (sent to all women selected for the survey) or reward (sent to all women who return a completed survey). The incentive or information about the reward is included in the initial mail packet. The use of incentives and rewards as a strategy to increase response rates has been tested and proven to be effective.10 In recent years, many PRAMS states have enhanced their incentives and rewards to encourage response. Many states now offer multiple incentives or a combination of an incentive and a reward (Table 1).
Response rate analyses
We examined response rates and a variety of maternal and infant characteristics that might be related to a woman’s likelihood of responding to the PRAMS questionnaire. Response rates were calculated for the 23 states that collected at least six months of data in 2001: Alaska, Alabama, Arkansas, Florida, Hawaii, Illinois, Louisiana, Maine, Maryland, Michigan, Nebraska, New Mexico, New York, New York City, North Carolina, Ohio, Oklahoma, South Carolina, Texas, Utah, Vermont, Washington, and West Virginia. Response rates for 2001 are based on the total number of women sampled during the year. Respondents were defined as eligible women who were selected for the survey and completed at least 95% of the questions on the questionnaire within nine months of giving birth. Data for Michigan are based on six months of sampling representing the last half of 2001.
We examined response rates by mode (mail, telephone) and by type of contact (Mail 1, 2, 3) for each of the 23 states. In 2001, three states (Florida, Washington, and West Virginia) did not conduct a third mailing. Response rates by type of contact were calculated to represent the marginal contribution of each contact type and thus sum to the response rate (“overall” in Table 2). We also calculated contact rates and cooperation rates for each state. The contact rate was defined as the percentage of selected women who were contacted during data collection. Contact was defined as any direct communication with the sampled woman. The cooperation rate was defined as the percentage of contacted women who completed a questionnaire. The response rate and cooperation rate differ in terms of the denominator: the response rate denominator is all sampled women, whereas the cooperation rate denominator is all contacted women. We calculated refusal rates as the percentage of women who refused a telephone interview or who returned a mailed questionnaire marked “refused.” All rates were defined according to the standards set by the American Association for Public Opinion Research (AAPOR).11
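To make these definitions concrete, a minimal worked example follows, using invented disposition counts for a hypothetical state sample of 2,000 women; the full AAPOR definitions involve more disposition categories than are shown here.

```python
# Invented dispositions for a hypothetical annual sample of 2,000 women.
sampled = 2000
contacted = 1640   # any direct communication with the sampled woman
completed = 1500   # questionnaire completed within 9 months of giving birth
refused = 58       # phone refusal, or mail survey returned marked "refused"

response_rate = completed / sampled        # denominator: all sampled women
contact_rate = contacted / sampled         # denominator: all sampled women
cooperation_rate = completed / contacted   # denominator: contacted women only
refusal_rate = refused / sampled

print(f"Response:    {response_rate:.1%}")    # 75.0%
print(f"Contact:     {contact_rate:.1%}")     # 82.0%
print(f"Cooperation: {cooperation_rate:.1%}") # 91.5%
print(f"Refusal:     {refusal_rate:.1%}")     # 2.9%
```

The example also shows why cooperation rates can exceed response rates: the two share a numerator, but the cooperation rate's denominator excludes women who were never reached.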
Table 2.
PRAMS response rates by state and type of contact, 2001
Florida, Washington, and West Virginia do not send a third mailing.
Response rate represents the percentage of all women in the sample who participated. Rates by type of contact may not add to overall rate due to rounding.
Contact rate represents the percentage of women who were contacted, regardless of whether they participated.
Cooperation rate represents the percentage of contacted women who participated.
Michigan 2001 data only available for July through December.
New York excludes New York City.
Response rates were calculated by state for a variety of maternal characteristics, including age, race, ethnicity, marital status, education, parity, and initiation of prenatal care, and for infant characteristics including birthweight, age at PRAMS initial contact, and death status. All maternal and infant characteristics were obtained for both responders and nonresponders from birth certificate records, except for data on infant deaths, which were obtained from death certificate records or from the mother. Because of delays in processing death certificate information and incomplete information from mothers, data on whether some infants are alive or dead may be missing from PRAMS.
For each state, we fit logistic regression models to examine the relationships between each maternal and infant characteristic and the likelihood of response, controlling for the presence of all other variables in the model. The variables we examined were birthweight (<2500 grams [low] or ≥2500 grams [normal]); mother’s age (<20, 20–29, ≥30 years); race (black, white, or other); ethnicity (Hispanic origin or not of Hispanic origin); marital status (married or not married); education (<12, 12, or >12 years); parity (1, >1); and initiation of prenatal care (first trimester, >first trimester [late], or not at all). We included each of these variables in the model for each state and then used backward selection to reduce the models to only those variables significantly (p<0.05) related to the likelihood of response. No interaction terms were included in the models. For dichotomous variables, the Wald test was used to assess significance. For variables with more than two categories, both the Wald test and the likelihood ratio test were used to determine significant differences in response. In cases where the tests gave conflicting conclusions, results from the likelihood ratio test, which assesses the overall effect of the variable, took precedence over those from the Wald test.
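A minimal sketch of this model-reduction step is shown below, assuming a hypothetical analysis file with a binary response indicator and a few of the covariates named above. The variable names and data are invented, and the sketch uses likelihood-ratio tests for every predictor rather than reproducing the paper's exact Wald/likelihood-ratio decision rule.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Invented analysis file: one row per sampled woman, responded = 0/1.
rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "responded": rng.integers(0, 2, n),
    "education": rng.choice(["<12", "12", ">12"], n),
    "married": rng.integers(0, 2, n),
    "race": rng.choice(["white", "black", "other"], n),
    "parity_gt1": rng.integers(0, 2, n),
})

def backward_select(data, outcome, predictors, alpha=0.05):
    """Drop, one at a time, the predictor whose removal is least harmful
    by a likelihood-ratio test, until all remaining predictors are
    significant at alpha."""
    current = list(predictors)
    while current:
        full = smf.logit(f"{outcome} ~ " + " + ".join(current), data).fit(disp=0)
        pvals = {}
        for var in current:
            reduced_terms = [v for v in current if v != var] or ["1"]
            reduced = smf.logit(
                f"{outcome} ~ " + " + ".join(reduced_terms), data
            ).fit(disp=0)
            lr = 2 * (full.llf - reduced.llf)        # likelihood-ratio statistic
            df_diff = full.df_model - reduced.df_model
            pvals[var] = chi2.sf(lr, df_diff)
        worst, p = max(pvals.items(), key=lambda kv: kv[1])
        if p < alpha:
            return full  # every remaining predictor is significant; stop
        current.remove(worst)
    return None  # nothing significant

model = backward_select(df, "responded",
                        ["education", "married", "race", "parity_gt1"])
print(model.params if model is not None else "no significant predictors")
```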
We examined trends in response rates from 1996 to 2001 for the nine states that had response data available over this time period. We examined overall response, contact, cooperation, and refusal rates as well as response rates by each mode of administration. Tests for linear trends were conducted using least squares estimation. Prior to conducting the analysis, we examined demographic distributions of the 1996 and 2001 samples for the states to ensure they were similar. Six states had similar sample characteristics between 1996 and 2001. Three states modified their sampling scheme during this period, which significantly altered the distribution of one or more of the demographic characteristics examined. We included those states in the analysis because it did not appear that the sampling changes would significantly affect response comparisons.
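For the trend analysis, a minimal sketch of the least squares test follows, using invented yearly rates for one hypothetical state; the slope's t-test p-value serves as the test for linear trend.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented yearly Mail 1 response rates for one hypothetical state.
trend = pd.DataFrame({
    "year": [1996, 1997, 1998, 1999, 2000, 2001],
    "mail1_rate": [0.48, 0.47, 0.45, 0.44, 0.43, 0.42],
})

# Ordinary least squares fit of rate on year; the p-value on the year
# coefficient tests for a linear trend over the period.
fit = smf.ols("mail1_rate ~ year", trend).fit()
print(f"Slope: {fit.params['year']:+.4f} per year, "
      f"p = {fit.pvalues['year']:.3f}")
```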
RESULTS
Over the 23 states, the median response rate was 76%, ranging from 49% in New York City to 84% in Vermont (Table 2). Of the 23 states, 21 achieved response rates of 70% or greater in 2001. (CDC has established a minimum response rate threshold of 70% for a state’s data to be of sufficient quality to be included in published results.)
The first mailing garnered the greatest response, with diminishing response to each successive mailing. The complete mail phase of PRAMS data collection achieved response rates ranging from 41% (New York City) to 76% (Vermont). Response rates across the states improved by 3 to 25 percentage points after telephone follow-up. Every state except Arkansas, Maine, and Vermont needed the responses received via telephone to achieve overall response rates of at least 70%.
The contact rate ranged from 58% to 93% (Table 2). For all 23 states, the cooperation rates were high (86% to 97%), suggesting that once a sampled woman was contacted, she was usually willing to complete the survey. In general, cooperation rates were higher than the contact rates (Table 2). The refusal rate for contacts by mail or by telephone averaged 2.9% (range: 0.7% to 6.1%) across the states (data not shown).
In all of the 23 states where comparisons could be made, response rates were higher for white women than for black women, for married women than for unmarried women, for first-time mothers than for multiparous women, and for women who initiated prenatal care in the first trimester than for women with late or no prenatal care; in addition, response rates increased with higher levels of maternal education (Table 3). Response rates for races categorized as “Other” tended to be higher than those for black women but lower than those for white women. In general, the response rate either remained the same or increased as the mother’s age increased. Response rates for non-Hispanic women were the same or higher than those for Hispanic women in most states with sizable Hispanic populations. Response rates for women who delivered a normal birthweight infant were the same or higher than those for women who delivered a low birthweight infant in most states. More than 90% of the infants were between two and three months of age at the time of the first mailing. Response rates for women first contacted two months after delivery were comparable with those for women first contacted three months after delivery (76% vs. 74%; data not shown). When we examined response rates by infant status, we found that response was somewhat lower for women whose baby had died than for women whose baby was alive (overall 66% vs. 74%, data not shown). Because of the sparse number of infant deaths in many states, comparisons within each state were not possible.
Table 3.
PRAMS response rates by state and maternal and infant characteristics, 2001
Indicates less than 10 occurrences per category.
Characteristics that were significantly associated (p<0.05) with the likelihood of response in logistic regression modeling.
The category “Other” race represents various population subgroups in each state; they include Alaska Natives, Native Americans, Asians, and other non-whites.
Michigan 2001 data only available for July through December.
New York excludes New York City.
Logistic regression modeling revealed that several variables were frequently related to response across the states (Table 3). The most consistent predictor of response was maternal education, significantly associated with response in 21 of the 23 states. Marital status and maternal race were significant predictors of response in 18 states each, with parity and initiation of prenatal care significant in 15 and 14 states, respectively. Hispanic ethnicity was a significant predictor of response in about half of the states with sizable Hispanic populations. According to the models, the infant’s birthweight and mother’s age were not good predictors of response in most states.
From 1996 to 2001, the overall response rate significantly increased in two states and significantly decreased in two states (Table 4); the mean response rate over the nine states remained essentially unchanged (Figure). When response rates were compared by each type of contact, some patterns emerged. From 1996 to 2001, every state experienced a declining trend in response rates to the first mailing, with a mean decline of 6 percentage points (range: 3 percentage points in Washington to 12 percentage points in Alaska). This declining trend was statistically significant in four of the nine states. Conversely, the response from follow-up mailings and telephone follow-up generally increased over this period. The increase in response due to follow-up mailings was modest, between 1 and 4 percentage points in the six states experiencing an increase. Six of nine states showed an increase in response over this period for telephone follow-up, ranging from 2 to 13 percentage points. This increase was statistically significant in four states. The other three states exhibited no change or a slight decrease in phone response.
Table 4.
Comparison of response rates by state and type of contact, PRAMS, 1996–2001
Statistically significant (p < 0.05) increasing or decreasing trend over time period from 1996 to 2001.
Trend analysis not conducted on follow-up mail response. Florida, Washington, and West Virginia did not send a third mailing in 1996 through 2001.
Response rate represents the percentage of all women in the sample who participated. Rates by type of contact may not total to overall rate due to rounding.
Contact rate represents the percentage of women who were contacted, regardless of whether they participated.
Cooperation rate represents the percentage of contacted women who participated.
Alaska conducted telephone follow-up in 1997 through 2001, but not in 1996. The Alaska telephone response trend analysis was conducted on years 1997–2001.
New York excludes New York City.
Oklahoma breakout of mail information unavailable in 1996.
Simple average over states. Mail 1 and follow-up mail response rate averages exclude Oklahoma.
Figure.
Response, contact, and cooperation rates, PRAMS 1996–2001 (average over nine states)
The mean contact rate increased by 3 percentage points from 1996 to 2001; the increasing trend was statistically significant (Figure). Four states experienced an increase in the contact rate (mean: 10 percentage points), four states had a decrease (mean: 3 percentage points), and one state had no change (Table 4).
From 1996 to 2001, the mean cooperation rate decreased by 2 percentage points; the decreasing trend was statistically significant (Figure). Six states experienced a drop (range: 1 to 8 percentage points, average: 4 percentage points). Among the remaining three states, the cooperation rate remained the same in one and increased 1 and 3 percentage points in the other two (Table 4). Although refusals do not constitute a sizeable portion of PRAMS nonresponse, the mean refusal rate nearly doubled, increasing significantly from 1.5% in 1996 to 2.9% in 2001 (data not shown).
The three states that modified their sampling scheme between 1996 and 2001 were carefully examined. Florida oversampled teens in 2001 but not in 1996. West Virginia stratified by birthweight and adequacy of prenatal care in 1996 and by birthweight and maternal age in 2001. South Carolina oversampled very low birthweight infants in 2001 but not in 1996. Only the South Carolina change involved oversampling a group that was significantly associated with the likelihood of response in their state regression model. As a result, response rates could be expected to decrease there. Based on the nature of the sampling change alone, response rates would also be expected to decrease in Florida. Florida did experience a decrease in response over this period but response rates increased slightly in South Carolina.
DISCUSSION
Our study findings indicate that overall response rates for the 23 states in 2001 were good, ranging from 68% to 84% (with the exception of New York City at 49%). Twenty-one of the 23 states achieved response rates of 70% or higher. Although mail is the primary mode of response, telephone follow-up was an essential component of the overall response rate, contributing, on average, 15 percentage points to the overall response rate. Only three states were able to achieve at least a 70% response rate from mail response alone. These results are consistent with or slightly better than other studies that have used a mixed-mode methodology.12,13 The contact rates ranged from 76% to 93% with the exception of New York City at 58%. Cooperation rates were high, 86% to 97%, indicating that sampled women were generally willing to participate in the survey once they were contacted.
Although we are aware of no other U.S. population-based health surveys that use the same methodology as PRAMS, we compared PRAMS response rates with those of population-based surveys that have similar target populations or use similar methods. The State and Local Area Integrated Telephone Survey (SLAITS) National Survey of Early Childhood Health is a random-digit dial, U.S. population-based telephone survey of households with children between 4 and 35 months old.14 SLAITS was conducted in 2000. SLAITS response rates are not directly comparable to PRAMS rates because households must first be screened to determine if they have a child of eligible age. The response rate among eligible households was 79.2%, but this rate does not include eligible households that refused to go through the initial screening process. A population-based survey of Puerto Rican women who recently gave birth in the mainland U.S. employed sampling methodology similar to PRAMS but conducted face-to-face interviews with sampled women.15 As face-to-face interviews generally elicit higher response than mail or telephone surveys, the PRAMS Hispanic response rate of 69% compares favorably with the 74% response rate achieved in that study. Finally, the Behavioral Risk Factor Surveillance System (BRFSS) is a state, population-based, random-digit dial telephone surveillance system in all 50 states that targets households with adults aged 18–65.16 The median BRFSS response rate over the 23 PRAMS states was 51% in 2001; the median cooperation rate was 54%. From 1996 to 2001, the median BRFSS response rate fell from 63% to 51%. In 2003, the BRFSS began experimenting with mail modes of administration to counteract declining response rates to the phone survey.17
We found that the characteristics of women most likely to respond to the PRAMS survey were consistent across states. In each of the 23 states, response rates were higher for women with 12 or more years of education, married women, white women, first-time mothers, and women who initiated prenatal care in the first trimester. For all states, maternal education was the most consistent predictor of response, followed by marital status and race. Parity, prenatal care initiation, and ethnicity were moderately associated with response status, while birthweight and maternal age were poor predictors. These results are consistent with the findings obtained from a previous analysis of PRAMS response rates in 1996, with one exception.7 Parity, which we found to be associated with response in some states, was the most consistent predictor of response in 1996 (along with maternal education). A postpartum mail survey in the state of Washington also found marital status and race to be significantly associated with response.18
The characteristics of women who are hardest to reach are also the characteristics associated with higher risk of poor birth outcomes. For PRAMS to accomplish its overall goal of reducing infant mortality and low birthweight, it is crucial that adequate response rates be obtained among the groups at highest risk for these outcomes. Nonresponse can introduce bias and affect the validity of epidemiologic analyses of these data. However, knowing which high-risk groups may be poorly represented within the group of PRAMS respondents helps us to focus efforts on improving responses among those subpopulations.
Despite the differences in response between demographic groups, our findings indicate that many high-risk groups are reached in PRAMS states. For example, overall response rates exceeded 70% for women of other races, women who delivered a low birthweight infant, and women with more than one child. Response rates among Hispanic women were 70% or higher in 11 of the 21 states with Hispanic populations. On the other hand, less than half of states achieved 70% response in 2001 for some important high-risk groups, including black women, teens, unmarried women, women with less than a high school education, and women who received late or no prenatal care. Overall, response rates were less than 50% in New York City, a diverse urban area presenting many challenges for survey researchers. The low contact rate in New York City, coupled with its high cooperation rate, suggests that difficulties locating sampled women may be the main barrier to achieving adequate response there. A patient satisfaction study of poor New York City residents who had recently been inpatients at an urban teaching hospital achieved a similar response rate (50%) using a mail survey followed by telephone calls to nonresponders.19 Other researchers have similarly noted lower response rates in urban, densely populated areas.20,21
Based on Dillman’s Tailored Design Method, the PRAMS methodology incorporates many techniques designed to enhance response. These include a personalized mailing package, use of response incentives and rewards, and repeated but varied contact attempts. The latter two techniques have been proven to enhance response in controlled experimental settings.22,23 In particular, use of response incentives has been shown to incrementally increase response rates by 8% to 19%.10 Furthermore, the PRAMS survey has government sponsorship and covers topics of high saliency to new mothers; both of these factors have been shown to be positively associated with response.24,25 We believe all of these factors have contributed to the high response rates and low refusal rates achieved by PRAMS as compared to other health surveys using similar modes of administration.26,27
Trend analyses of response rates achieved by PRAMS states from 1996 to 2001 confirm some of the prevailing trends in survey research. Although overall response rates increased in four of the nine study states and decreased in four, examining response rates by type of contact reveals a more complete picture of the underlying issues. In every state with data available for comparison, response to the initial mailing declined over this time period, with an average decline of 6 percentage points. These declines are consistent with findings suggesting the public is becoming increasingly resistant to unsolicited surveys. Another indicator of this shift toward greater public resistance to surveys is the significant decline in cooperation rates observed between 1996 and 2001. Eight of nine states experienced no change or a drop in the cooperation rate; the average change was a decline of 2.3 percentage points. Furthermore, the average refusal rate across the nine states significantly increased over this time period.
Our results show that initial requests to complete a mail survey are increasingly being ignored. As a result, multiple follow-up attempts have become an essential component of the survey protocol, necessary to maintain historic response rates. Many PRAMS states compensated for the decline in response to the initial mailing by enhancing telephone follow-up efforts and conducting two follow-up mailings if they were not already doing so. In most cases, these enhanced efforts were sufficient to maintain or even increase overall response rates. States that were unable to increase telephone response rates saw their overall response rates fall.
There are some limitations to our analysis of response rates across time. First, although states are essentially conducting data collection in a similar manner, changes occurred between 1996 and 2001 that could not be controlled for, such as use of a different version of the survey instrument, modifications to the sampling scheme, or changes in staffing. It is unclear what impact, if any, these factors may have had on response rates over this period. We were able to examine the impact of modifications to the sampling scheme. In two out of three states that modified their sampling scheme during this time period, we could predict the effect the changes should have on response rates based on population demographics. In one state the change in response rates was as predicted and in the other the change in response rates was in the opposite direction of what was predicted. It appears that the sampling changes had little impact on overall response rates relative to other factors discussed in this article. Second, our multivariate analyses were based on demographic characteristics available from birth certificate records, but the quality of birth certificate data may vary considerably across states. Additionally, there may be other factors associated with response rates that were not available to include in our analyses. Some of the system-related aspects of participating state programs are currently being examined through a comprehensive program evaluation of PRAMS.28
CONCLUSIONS
PRAMS is a stable surveillance system that is achieving good response rates consistently over time in nearly all participating states. Future methodological efforts will be focused on maintaining acceptable response rates overall and further improving response rates among higher-risk groups. For example, some PRAMS states are already beginning to explore using different incentives for different subpopulations, offering substantial cash rewards ($20 or more) to women in high-risk subpopulations, developing culturally sensitive survey materials, and using other avenues (media, faith-based groups) for publicizing PRAMS to improve response in high-risk populations. The challenges faced by survey researchers will no doubt increase and change in the future. Thus, vigilant oversight of each component of survey operations and the continued development of creative approaches will be required to maintain acceptable response rates in the presence of these ever-increasing challenges.
Acknowledgments
The authors appreciate the input from others on the CDC PRAMS team (Christopher Johnson, Brian Morrow, Toyia Austin, Robert Lazo, and Nedra Whitehead) and from the PRAMS Working Group: Albert Woolbright, PhD (Alabama); Kathy Perham-Hester, MS, MPH (Alaska); Gina Redford, MAP (Arkansas); Alyson Shupe, PhD (Colorado); Helen Marshall (Florida); Carol Hoban, MS, MPH (Georgia); Limin Song, MPH, CHES (Hawaii); Theresa Sandidge, MA (Illinois); Joan Wightkin (Louisiana); Martha Henson (Maine); Diana Cheng, MD (Maryland); Yasmina Bouraoui, MPH (Michigan); Jan Jernell (Minnesota); Linda Pendleton, LMSW (Mississippi); JoAnn Dotson (Montana); Jennifer Severe-Oforah (Nebraska); Lakota Kruse, MD (New Jersey); Ssu Weng, MD, MPH (New Mexico); Candace Mulready-Ward, MPH (New York City); Anne Radigan-Garcia (New York); Paul Buescher, PhD (North Carolina); Sandra Anseth, RN (North Dakota); Amy Davis (Ohio); Dick Lorenz (Oklahoma); Ken Rosenberg, MD, MPH (Oregon); Sam Viner-Brown (Rhode Island); Mirela Dobre (South Carolina); Tanya J. Guthrie, PhD (Texas); Lori Baksh (Utah); Peggy Brozicevic (Vermont); Linda Lohdefinck (Washington); Melissa Baker, MA (West Virginia); and the CDC PRAMS Team, Applied Sciences Branch, Division of Reproductive Health.
REFERENCES
- 1. Hox JJ, De Leeuw ED. A comparison of nonresponse in mail, telephone, and face-to-face surveys. Quality Quantity. 1994;28:329–44.
- 2. Steeh C. Trends in non-response rates, 1952–1979. Public Opin Q. 1981;45:40–57.
- 3. Connelly NA, Brown TL, Decker DJ. Factors affecting response rates to natural resource-focused mail surveys: empirical evidence of declining rates over time. Society & Natural Resources. 2003;16:541–9.
- 4. Curtin R, Presser S, Singer E. The effects of response rate changes on the index of consumer sentiment. Public Opin Q. 2000;64:413–28.
- 5. Dillman D, Sinclair M, Clark J. Mail-back response rates for simplified decennial census questionnaire designs. In: American Statistical Association Proceedings of the Survey Research Methods Section; 1992 Aug 9–13; Boston. Alexandria (VA): American Statistical Association; 1992. p. 776–83.
- 6. Centers for Disease Control and Prevention (US). PRAMS model surveillance protocol, version 3; 2003 [cited 2005 Aug 29]. Available from: URL: http://www.cdc.gov/prams
- 7. Adams MM, Shulman HB, Bruce C, Hogue C, Brogan D. The Pregnancy Risk Assessment Monitoring System: design, questionnaire, data collection and response rates. Paediatr Perinat Epidemiol. 1991;5:333–46.
- 8. Gilbert BC, Shulman HB, Fischer LA, Rogers MM. The Pregnancy Risk Assessment Monitoring System (PRAMS): methods and 1996 response rates from 11 states. Matern Child Health J. 1999;3:199–209.
- 9. Dillman DA. Mail and internet surveys: the tailored design method. 2nd ed. New York: John Wiley and Sons; 2000.
- 10. Church A. Estimating the effect of incentives on mail survey response rates: a meta-analysis. Public Opin Q. 1993;57:62–79.
- 11. American Association for Public Opinion Research. Standard definitions: final dispositions of case codes and outcome rates for surveys. 3rd ed. Lenexa (KS): AAPOR; 2004 [cited 2005 Aug 29]. Also available from: URL: http://www.aapor.org/pdfs/standarddefs_ver3.pdf
- 12. Fowler FJ Jr, Gallagher PM, Stringfellow VL, Zaslavsky AM, Thompson JW, Cleary PD. Using telephone interviews to reduce nonresponse bias to mail surveys of health plan members. Med Care. 2002;40:190–200.
- 13. Sibbald B, Addington-Hall J, Brenneman D, Freeling P. Telephone versus postal surveys of general practitioners: methodological considerations. Br J Gen Pract. 1994;44:297–300.
- 14. Blumberg SJ, Olson L, Osborn L, Srinath KP, Harrison H. Design and operation of the National Survey of Early Childhood Health, 2000. Vital Health Stat. 2002;1:1–97.
- 15. Oropesa RS, Landale NS. Nonresponse in follow-back surveys of ethnic minority groups: an analysis of the Puerto Rican Maternal and Infant Health Study. Matern Child Health J. 2002;6:49–58.
- 16. Centers for Disease Control and Prevention (US). Behavioral Risk Factor Surveillance System summary data quality report. Atlanta: CDC; 2001.
- 17. Link M, Mokdad A. Are web and mail feasible options for the Behavioral Risk Factor Surveillance System? In: Cohen SB, Lepkowski JM, editors. Eighth Conference on Health Survey Research Methods; 2004 Feb 20–23; Peachtree City (GA). Hyattsville (MD): National Center for Health Statistics; 2004. p. 149–58. Also available from: URL: http://www.cdc.gov/nchs/data/misc/proceedings_hsrm2004.pdf
- 18. Holt VL, Martin DP, LoGerfo JP. Correlates and effect of non-response in a postpartum survey of obstetrical care quality. J Clin Epidemiol. 1997;50:1117–22.
- 19. Harris LE, Weinberger M, Tierney WM. Assessing inner-city patients’ hospital experiences: a controlled trial of telephone interviews versus mailed surveys. Med Care. 1997;35:70–6.
- 20. Synodinos N, Yamada S. Response rate trends in Japanese surveys. Int J Public Opin Res. 2000;12:48–72.
- 21. Goyder J, Lock J, McNair T. Urbanization effects on survey nonresponse: a test within and across cities. Quality Quantity. 1992;26:39–48.
- 22. Perneger TV, Etter JF, Rougemont A. Randomized trial of use of a monetary incentive and a reminder card to increase the response rate to a mailed health survey. Am J Epidemiol. 1993;138:714–22.
- 23. James J, Bolstein R. The effect of monetary incentives and follow-up mailings on the response rate and response quality in mail surveys. Public Opin Q. 1990;54:346–61.
- 24. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. Increasing response rates to postal questionnaires: systematic review. BMJ. 2002;324:1183.
- 25. Fox R, Crask M, Kim J. Mail survey response rate: a meta-analysis of selected techniques for inducing response. Public Opin Q. 1988;52:467–91.
- 26. Asch DA, Jedrziewski MK, Christakis NA. Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997;50:1129–36.
- 27. Brambilla DJ, McKinlay SM. A comparison of responses to mailed questionnaires and telephone interviews in a mixed mode health survey. Am J Epidemiol. 1987;126:962–71.
- 28. Rogers M, Lansky A, Laswell S, Rojas-Smith L, Hersey J, Moore C. PRAMS program evaluation: learning as we go. Presented at the 18th Annual Conference of the American Evaluation Association; 2004 Nov 3–6; Atlanta.