Health Services Research. 2016 Nov 24;53(1):256–272. doi: 10.1111/1475-6773.12611

Sampling for Patient Exit Interviews: Assessment of Methods Using Mathematical Derivation and Computer Simulations

Pascal Geldsetzer 1, Günther Fink 1, Maria Vaikath 1, Till Bärnighausen 1,2,3
PMCID: PMC5785309  PMID: 27882543

Abstract

Objective

(1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method.

Study Design

Literature review, mathematical derivation, and Monte Carlo simulations.

Principal Findings

Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings.

Conclusion

Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews.

Keywords: Patient exit interview, patient questionnaire, sampling, operational efficiency, selection bias


Patient exit interviews — interviews at the point of patients' exit from a clinical consultation or health care facility — are an important data collection approach in health services research (Turner et al. 2001; Hrisos et al. 2009). They are commonly used to assess patients’ satisfaction with the health care services received (Ejigu, Woldie, and Kifle 2013; Alonge et al. 2014; Asfaw et al. 2014; Chimbindi, Bärnighausen, and Newell 2014; Sando et al. 2014; Islam et al. 2015), patients’ out‐of‐pocket expenditures (Peabody et al. 2010; Chimbindi et al. 2015; Opwora et al. 2015), health care utilization (The Demographic and Health Surveys Program 2015; Etiaba et al. 2015), provider behavior during the clinical consultation (Stange et al. 1998; Ostroff, Li, and Shelley 2014), and patients’ knowledge about their condition (Senarath et al. 2007; Anya, Hydara, and Jaiteh 2008; Israel‐Ballard, Waithaka, and Greiner 2014). A number of standardized patient exit questionnaires have been developed for use by researchers, including the EUROPEP instrument (Wensing 2006), the RAND Patient Satisfaction Questionnaire (RAND Health 2015), the Patient Experiences Questionnaire (Steine, Finset, and Laerum 2001), and the patient exit questionnaires that form part of the Demographic and Health Surveys (DHS) Program's Service Provision Assessments (The Demographic and Health Surveys Program 2015). Patient exit interviews are popular, particularly in low‐ and middle‐income countries, because it is operationally more efficient to identify patients at clinics than through population‐based surveys. Exit interviews also allow researchers to collect data about patients' experiences with health care services with a minimum recall period.

If the group of eligible participants is large, such as in studies interviewing patients who have accessed a common clinical service or patients with a common condition or symptom, it will often not be operationally feasible to interview all patients of interest who are exiting a health care facility. Instead, a subset of patients is interviewed. How this subset is chosen (i.e., the sampling method) is of central importance to achieving both unbiased estimates and a sufficiently large sample size (operational efficiency).

Table 1 provides a summary of possible sampling methods for patient exit interviews. “Simple random sampling” refers to selecting patients for an interview by subjecting all eligible patients to a randomization (e.g., through a coin flip or smartphone application). “Systematic random sampling,” on the other hand, uses a sampling interval (i.e., selecting every xth patient) with a random start point. We elaborate on each of these methods and outline their advantages and disadvantages in the discussion section.

Table 1.

Summary of Sampling Methods for Patient Exit Interviews

| Sampling method | Description of method | Bias | Operational efficiency |
| --- | --- | --- | --- |
| (1) Sampling all eligible patients | All eligible patients exiting the consultation room are interviewed. | Unbiased | Generally only feasible if the ratio of data collectors to eligible patients is high and/or the interview is consistently shorter than the consultation length |
| (2) Sampling the next patient exiting the consultation room | After returning from an interview, the data collector selects the next patient exiting the consultation room. | Biased (a) | Maximum operational efficiency |
| (3) Simple random sampling | All eligible patients are randomized to being interviewed or not. (b) | Unbiased (c) | Logistically complex to implement; generally less operationally efficient than methods (2), (4), and (5) |
| (4) Systematic random sampling | Every xth patient exiting the consultation room is selected for interview. | Unbiased in the absence of a cyclical pattern in the order in which patients exit the consultation room | Difficult to set a feasible interval (d) without pilot testing; generally less operationally efficient than (2) and (5) |
| (5) Sampling the next patient entering the consultation room | After returning from an interview, the data collector selects the next patient entering the consultation room. | Unbiased (e) | Generally more efficient than (3) and (4); somewhat less efficient than (2) |
(a) The probability of selection into the sample is inversely related to the time spent in the consultation room.

(b) Either a research team member or the clinician(s) selects eligible patients for interview using a randomization device, such as a coin flip or a smartphone application.

(c) For this method to be unbiased, all eligible patients at the facility must be subjected to the randomization.

(d) "Interval" refers to x when every xth patient is selected for interview.

(e) This method is unbiased as long as the order in which patients enter the consultation room is random with respect to both the time spent in the consultation room and the time spent with the interviewer.

In this paper, we (1) assess the frequency of use of different sampling methods for patient exit interviews; (2) evaluate each method's operational efficiency using simulation; (3) discuss each method's probability of yielding an unbiased (i.e., representative) sample; and (4) describe a novel method of sampling patients for exit interviews that is both unbiased (under one assumption) and operationally efficient.

Methods

Literature Review

We conducted a review of studies that employed patient exit interviews as one of their data collection methods to gauge the frequency with which different sampling methods were used. We searched Medline via PubMed for studies published between May 23, 2014 and May 23, 2015 using variations of terms for patients, exit, and interview. The abstracts and full-text versions of all retrieved articles were screened against the following inclusion criteria: the interviews were (1) conducted with the users of a health care service; (2) administered after the health care service was used; and (3) performed at a health care facility. We excluded studies that used self-administered questionnaires only. We did not restrict our search to certain geographic regions. All search terms were in the English language.

Simulation Study

We built a simulation in Stata 13.0 to evaluate the operational efficiency of each sampling method for patient exit interviews. A method was judged the most operationally efficient if it (1) maximized the percentage of all eligible participants who were interviewed; and (2) did not result in unacceptably high waiting times for patients until the next interviewer became available.

The simulation assumed that all patients seen at the facility were eligible to be interviewed, entered the consultation room in random order, spent a random length of time in the consultation room and, if selected for interview, a random length of time with the interviewer. We varied the number of consultation rooms and the number of interviewers from one to ten, and ran the simulation for each combination of rooms and interviewers. In addition, we varied the mean consultation length from 5 to 15 minutes, and the mean interview length from 10 to 40 minutes. The standard deviation varied from 18.7 to 74.8 percent of the mean consultation length, and from 8.0 to 60.0 percent of the mean interview length. We varied the threshold for an unacceptably high patient waiting time until an interviewer is available from 0 to 20 minutes; patients whose waiting time exceeded this threshold were not interviewed. Each simulation assumed that a total of 10,000 patients were seen at the facility during the data collection period. The outcomes recorded were (1) the proportion of all patients seen at the health care facility during the data collection period who were interviewed; (2) the mean number of patients interviewed per day; and (3) the percentage of patients who were not interviewed because they waited longer than the threshold time for an interviewer to become available.
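The patient-flow logic described above can be sketched as a small discrete-event simulation. The sketch below is ours, not the authors' Stata code: it simplifies to a single consultation room and a single interviewer, and all function and parameter names are illustrative. It compares the "next patient exiting" and "next patient entering" rules under the typical-scenario parameters.

```python
import random

def simulate(method, n_patients=10_000, mean_consult=10.7, sd_consult=6.7,
             mean_interview=25.0, sd_interview=7.0, seed=0):
    """Fraction of patients interviewed under one sampling rule.

    method: 'exit'  -> interviewer takes the next patient *exiting* the room
            'enter' -> interviewer commits to the next patient *entering* the room
    Times are in minutes; one consultation room, one interviewer.
    """
    rng = random.Random(seed)
    draw = lambda mean, sd: max(0.5, rng.gauss(mean, sd))  # 30-second floor

    room_free_at = 0.0         # time at which the consultation room frees up
    interviewer_free_at = 0.0  # time at which the interviewer frees up
    interviewed = 0
    for _ in range(n_patients):
        enter = room_free_at                            # patient enters the room
        exit_ = enter + draw(mean_consult, sd_consult)  # ...and exits
        room_free_at = exit_
        if method == 'exit' and interviewer_free_at <= exit_:
            # interviewer is idle when this patient exits: interview them
            interviewer_free_at = exit_ + draw(mean_interview, sd_interview)
            interviewed += 1
        elif method == 'enter' and interviewer_free_at <= enter:
            # interviewer committed to this patient as they entered the room
            interviewer_free_at = exit_ + draw(mean_interview, sd_interview)
            interviewed += 1
    return interviewed / n_patients
```

Under these parameters, the 'exit' rule interviews a noticeably larger share of patients than the 'enter' rule, consistent with the ordering of methods reported in the results.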

Simulating a Typical Scenario

While we ran the simulation for a variety of scenarios and assumptions, we defined one particular scenario as typical. For this typical scenario, we chose a mean consultation length of 10.7 minutes (standard deviation of 6.7 minutes), the mean and standard deviation reported by an assessment of primary care consultation lengths across six countries (Deveugele et al. 2002). The interview length for this scenario was 25 minutes (standard deviation of 7 minutes), which is typical of patient exit interviews we have conducted in various primary care settings in sub-Saharan Africa (Chimbindi, Bärnighausen, and Newell 2014; Chimbindi et al. 2015; and several ongoing studies). Keeping track of the interval of patients to be selected with systematic random sampling requires additional time and effort by the data collection team. Because (1) our simulation does not take this additional cost into account, and (2) our hypothesis was that sampling the next patient entering the consultation room is the operationally most efficient unbiased sampling method, we set the interval for systematic random sampling at the highest possible number that would result in a percentage of patients selected for interview at least 10 percent higher than the proportion of patients interviewed when sampling the next patient entering the consultation room. The maximum acceptable patient waiting time until an interviewer is available was set to 5 minutes.

Results

Frequency of Use of Each Sampling Method

The literature search retrieved 56 records; after removal of duplicates, abstract screening, and full-text review, 24 studies were included in this rapid review. Seven studies were excluded because they used self-administered questionnaires; five of these were from high-income countries. Appendix SA2 summarizes the included studies by sampling method used. All included studies were carried out in a low- or middle-income country, with the majority (16 studies) being from sub-Saharan Africa. Nine studies did not describe the sampling methodology used for the patient exit interviews. The remaining studies employed one of four sampling methods: (1) interviewing all eligible participants (seven studies); (2) systematic random sampling (four studies); (3) consecutive sampling (i.e., interviewing all eligible patients at a health care facility until a sample target is met; two studies); or (4) interviewing the next patient exiting the consultation room (one study).

Operational Efficiency of Each Sampling Method

The simulations resulted in the following ranking of sampling methods, ordered by decreasing operational efficiency: (1) sampling the next patient exiting the consultation room; (2) sampling the next patient entering the consultation room; (3) systematic random sampling; and (4) simple random sampling. This order was generally consistent across all scenarios assessed in the simulation. Exceptions were scenarios in which the interview length was considerably shorter than the consultation length and/or both the interview and consultation length had a very small variance. In these settings, systematic random sampling tended to be operationally more efficient than sampling the next patient entering the consultation room, assuming that the selection interval is set at or near the optimal level and ignoring the additional human resources needed to monitor the selection interval. Sampling all patients and consecutive sampling resulted in unfeasibly long waiting times for interviewees, except in scenarios in which the consultation length was consistently longer than the interview length and/or the ratio of the number of data collectors to consultation rooms was high.

Table 2 summarizes the results of the simulations run for the "typical scenario" described in the methods. Across the 16 consultation room-to-interviewer combinations, sampling the next patient entering the clinical consultation room resulted on average in 21.7 percent fewer patients being interviewed than sampling the next patient exiting the consultation room. Using 5 minutes as the maximum acceptable patient waiting time until an interviewer becomes available, systematic random sampling resulted in an average of 9.0 percent of selected patients not being interviewed. This measure decreases to an average of 3.4 percent when a maximum acceptable waiting time of 20 minutes is used. The resulting missingness is not random, because the probability of exceeding the maximum acceptable patient waiting time increased with decreasing time spent in the consultation room. In 11 of the 16 room-to-interviewer combinations, simple random sampling resulted in a higher percentage of selected patients exceeding the maximum acceptable patient waiting time than systematic random sampling.

Table 2.

Simulation Results—The Operational Efficiency of Possible Sampling Methods for Patient Exit Interviewsa

| Consultation rooms (b) | Interviewers | Next exiting: % of all patients interviewed | Next exiting: no./day (c), mean (SD) | Next entering: % of all patients interviewed | Next entering: no./day (c), mean (SD) | Systematic (d): interval (e) | Systematic: % of all patients interviewed | Systematic: no./day (c), mean (SD) | Systematic: % of selected patients missed (d) | Simple random (d, f): % of all patients interviewed | Simple random: no./day (c), mean (SD) | Simple random: % of selected patients missed (d) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 33.6 | 14.9 (1.1) | 25.0 | 11.1 (0.9) | 3 | 28.2 | 12.4 (1.3) | 15.5 | 18.3 | 8.0 (1.7) | 27.6 |
| 1 | 2 | 62.3 | 27.7 (1.9) | 48.7 | 21.4 (1.4) | 1 | 69.9 | 30.9 (1.7) | 30.1 | 42.2 | 18.5 (2.9) | 12.1 |
| 1 | 5 | 98.7 | 43.3 (3.9) | 94.8 | 42.1 (3.2) | 1 | 99.6 | 44.1 (4.3) | 0.4 | 94.2 | 41.5 (4.0) | 0.5 |
| 1 | 10 | 100.0 | 44.1 (4.1) | 100.0 | 44.4 (3.8) | 1 | 100.0 | 43.9 (4.8) | 0.0 | 100.0 | 44.2 (4.1) | 0.0 |
| 2 | 1 | 18.8 | 16.2 (1.3) | 13.6 | 11.9 (1.0) | 6 | 14.7 | 13.0 (1.2) | 12.1 | 9.3 | 8.0 (1.9) | 31.2 |
| 2 | 2 | 36.3 | 31.8 (2.4) | 27.0 | 23.5 (1.5) | 3 | 30.4 | 26.7 (1.8) | 8.8 | 22.2 | 19.1 (3.7) | 17.5 |
| 2 | 5 | 81.0 | 71.1 (4.4) | 64.0 | 55.6 (4.6) | 1 | 87.1 | 76.4 (4.1) | 13.0 | 62.3 | 54.7 (5.6) | 3.1 |
| 2 | 10 | 100.0 | 88.5 (6.4) | 98.6 | 85.7 (7.6) | 1 | 100.0 | 86.9 (9.6) | 0.0 | 98.6 | 85.7 (9.4) | 0.1 |
| 5 | 1 | 8.2 | 17.7 (1.4) | 6.0 | 12.7 (1.6) | 15 | 6.2 | 13.4 (1.8) | 7.7 | 3.9 | 8.4 (2.1) | 32.8 |
| 5 | 2 | 16.2 | 34.4 (4.8) | 11.6 | 25.2 (1.8) | 7 | 13.0 | 27.6 (4.6) | 9.2 | 9.2 | 19.6 (4.4) | 20.5 |
| 5 | 5 | 39.4 | 85.6 (6.5) | 28.5 | 61.9 (5.0) | 3 | 32.5 | 69.1 (10.9) | 2.5 | 26.8 | 57.0 (10.8) | 6.1 |
| 5 | 10 | 74.7 | 162.4 (11.4) | 55.9 | 121.4 (7.6) | 1 | 81.4 | 173.2 (26.5) | 18.6 | 56.0 | 119.1 (21.2) | 1.2 |
| 10 | 1 | 4.3 | 18.5 (1.5) | 3.2 | 13.2 (2.8) | 28 | 3.2 | 14.1 (1.5) | 9.2 | 2.1 | 8.7 (2.5) | 31.8 |
| 10 | 2 | 8.8 | 36.2 (5.3) | 6.1 | 25.4 (4.7) | 14 | 6.7 | 27.7 (5.5) | 7.0 | 4.6 | 19.0 (5.6) | 21.8 |
| 10 | 5 | 21.1 | 87.9 (16.4) | 15.3 | 63.8 (8.7) | 5 | 18.4 | 76.6 (16.5) | 8.3 | 13.6 | 59.1 (6.1) | 9.0 |
| 10 | 10 | 41.6 | 173.2 (33.2) | 29.9 | 124.5 (25.1) | 3 | 33.1 | 137.8 (31.9) | 0.8 | 28.9 | 120.3 (28.2) | 2.6 |
(a) The simulations were run for a total of 10,000 patients being seen at the health care facility, a mean consultation length of 10.7 minutes (SD: 6.7 minutes), and a mean interview length of 25.0 minutes (SD: 7.0 minutes). The minimum consultation and interview lengths are 30 seconds.

(b) This is the number of rooms in which patients are being seen.

(c) The simulation assumes a workday of 8 hours without breaks.

(d) The simulation assumes that the maximum acceptable time for participants to wait until an interviewer becomes available is 5 minutes. If this waiting time is exceeded, the patient will have been missed by the interviewer(s).

(e) This is the interval set for the systematic random sampling (e.g., an interval of three signifies that every third patient is selected for interview). Where possible, the interval was set at the highest number needed to achieve at least a 10% higher proportion of patients interviewed than with sampling the next patient entering the consultation room (assuming no selected patients are missed).

(f) The probability of selecting a given patient for an interview was set at the probability of all patients interviewed with sampling the next patient entering the consultation room.

% = percentage; No. = number; SD = standard deviation.

Discussion

With Table 1 serving as a summary, this section will briefly describe each sampling method, discuss the method's probability of yielding a representative (i.e., unbiased) sample, and elaborate on its operational efficiency using the findings of our simulations.

Interviewing All Eligible Participants and Consecutive Sampling

Consecutive sampling, as used by the studies included in this review, refers to the data collection team interviewing all eligible patients at a facility until a target sample size for the facility is reached. Thus, the approaches of consecutive sampling and interviewing all eligible participants are conceptually similar because they both interview all eligible patients (i.e., a census) during the data collection period.

Bias

This approach results in a sample that is the same as, and therefore with certainty representative of, eligible participants who attended the facility during the data collection period. Thus, the degree to which the results are representative of all patients of interest at the health care facility depends on the degree to which the data collection period is representative of the larger time frame of interest. One means of increasing the representativeness of the data collection period for this larger time frame might be to select a sample of multiple (shorter) data collection periods.

Operational Efficiency

This sampling method will result in eligible patients queuing up to be interviewed, and thus unfeasibly high patient waiting times, when the consultation length is not consistently longer than the interview length. Thus, this approach is generally only feasible in settings with low volumes of eligible patients or if the patient exit interview is very short compared to the consultation length.

Sampling the Next Patient Exiting the Consultation Room

In this sampling method, the data collector arrives at the health care facility, or returns from a previous interview, and selects the next eligible participant exiting the clinical consultation. We suspect that at least some of the studies in our review that did not state what sampling method was used, or that claimed to have sampled all eligible participants, simply selected the next patient exiting the consultation room.

Bias

Sampling the next patient exiting the consultation room results in a nonrepresentative sample. To explain why, we assume that all patients fall into one of two categories, quick patients or slow patients, where slow patients spend more time in the clinical consultation than quick patients. If it takes a clinician, on average, M times as long to see a slow patient as a quick patient, and the proportion of all patients who are quick patients is given by α, then the total treatment time T is given by

T = αNt + (1 − α)NMt

where N equals the total number of patients seen during the workday, and t equals the time required to see a quick patient. The proportion of time clinicians spend seeing quick patients can then be written as:

αNt / [αNt + (1 − α)NMt] = α / [M − α(M − 1)]

If the data collector selects patients for exit interviews at a random time point (arriving in the morning, or after finishing another interview), this proportion must equal the proportion of quick patients in the interview sample in order for the interview sample to be representative of the patient population. In other words, a representative sample of interview participants would require that the share of quick patients in the sample is α, that is, that α / [M − α(M − 1)] = α. This would only be possible if the clinician spent as much time with each slow patient as with each quick patient (M = 1), which by assumption is not the case. For any setting in which some patients take more time (i.e., M > 1), α / [M − α(M − 1)] < α. In this situation, quick patients will always be underrepresented.

The intuition for this result is relatively straightforward: if two patients, one slow and one quick, start at the same point in time, the probability that the slow patient will still be around when the interviewer returns from an interview (or arrives at the facility) is larger than for the quick patient. Quick patients will thus be systematically missed, and average responses systematically biased toward patients with whom the clinician spent more time. An attempt could be made to reduce this bias through sampling weights that account for consultation length. However, this would require that the consultation times are recorded either by a designated study team member (which will usually lead to reduced operational efficiency because the team member could instead conduct interviews) or by the clinical team (which will not be feasible in many cases).
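This result can also be checked numerically. The sketch below is ours (with illustrative names): it builds a day's schedule of quick and slow patients, probes the schedule at uniformly random interviewer return times, records which kind of patient exits next, and compares the share of quick patients in the resulting sample with the closed-form α / [M − α(M − 1)].

```python
import bisect
import random

def quick_share_in_sample(alpha=0.5, M=3.0, t=5.0,
                          n_patients=100_000, n_probes=100_000, seed=1):
    """Share of 'quick' patients among those sampled by taking the next
    patient to exit after a uniformly random interviewer return time."""
    rng = random.Random(seed)
    exits, is_quick, clock = [], [], 0.0
    for _ in range(n_patients):
        quick = rng.random() < alpha    # quick with probability alpha
        clock += t if quick else M * t  # slow patients take M times as long
        exits.append(clock)             # cumulative exit time
        is_quick.append(quick)
    hits = 0
    for _ in range(n_probes):
        probe = rng.random() * clock             # random return time
        nxt = bisect.bisect_right(exits, probe)  # next patient to exit
        hits += is_quick[nxt]
    return hits / n_probes

# Closed-form prediction from the derivation: alpha / (M - alpha * (M - 1)).
# For alpha = 0.5 and M = 3 this is 0.5 / 2 = 0.25, well below alpha = 0.5:
# quick patients make up half the population but only a quarter of the sample.
```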

Operational Efficiency

Our simulations found that this method is almost always the most operationally efficient sampling method, and it excludes the possibility of patients having to wait until an interviewer is available. It is also logistically simple to implement.

Simple Random Sampling

This sampling method was not used by any of the studies identified by our literature review. A sampling frame is usually not available for patient exit interviews as many patients may not have an appointment, and a significant portion of those patients with an appointment may not attend. Thus, a randomization device (e.g., a coin or a smartphone with a randomization application) is likely required to randomly select patients. Table 3 outlines options for selecting patients when using simple (or systematic) random sampling.

Table 3.

Typical Options for Selecting Patients When Using Simple or Systematic Random Sampling

| Who selects patients? | When are patients selected? | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Interviewer | Prior to consultation (in waiting area) | All study team members can conduct interviews*; does not place burden of patient selection on the clinical team | Biased if seating order in the waiting area is not random; possibly biased if interviewer fails to keep track of the patient flow through the waiting area; unethical if enquiring about eligibility criteria in the waiting area violates patient confidentiality |
| Clinician | During the consultation | All study team members can conduct interviews*; may increase clinical team's interest in the study | Biased if clinician fails to reliably conduct the randomization or to adhere to the sampling interval (a); requires buy-in from clinical team |
| Designated study team member (b) | At exit from the consultation | Third person to monitor adherence to patient selection (c); does not place the burden of patient selection on the clinical team | Loss of operational efficiency because the study team member selecting patients could be conducting interviews instead |

* All study team members can both select and interview patients.

(a) A clinician may forget to randomize or fail to correctly execute the randomization process.

(b) Necessary because the interviewer would miss patients leaving the consultation room while he/she is conducting interviews.

(c) The presence of a third person responsible for selecting patients may make it more difficult for the interviewer to skip certain patients (e.g., because they are perceived to be difficult interviewees).

Bias

With the exception of a census (i.e., sampling all eligible participants), this is the most rigorous method of sampling patients for exit interviews because it is the only approach that is entirely independent of the order in which patients wait in the waiting area, or exit the consultation room. For this method to yield an unbiased sample, all eligible patients at the health care facility need to be subjected to the random selection. If only patients who leave the consultation room while an interviewer is available are subject to randomization, the same bias will be introduced as with sampling the next patient exiting the consultation room.

Operational Efficiency

Ensuring that each eligible patient is randomized tends to add considerable operational complexity, the precise nature of which depends on the setting and who (interviewers, clinicians, or a designated study team member) randomizes patients to being interviewed (Table 3). Furthermore, this method is generally less operationally efficient than systematic random sampling and sampling the next patient entering the consultation room.
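As a minimal sketch (ours, with illustrative names), the randomization device amounts to an independent Bernoulli draw for every eligible patient, with the selection probability chosen to target a desired sample size:

```python
import random

def select_for_interview(p, rng):
    """Randomization 'device': select this patient with probability p."""
    return rng.random() < p

# e.g., targeting roughly 200 interviews out of an expected 1,000 eligible
# patients implies a selection probability of 0.2 per patient
rng = random.Random(42)
selected = sum(select_for_interview(0.2, rng) for _ in range(1000))
```

The realized sample size varies around the target, which is one reason this method offers less control over the final sample size than a fixed-interval rule.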

Systematic Random Sampling

In the case of systematic random sampling, the first patient to be interviewed is selected at random, and subsequently every xth patient is interviewed, where the interval (x) is determined prior to data collection. Table 3 outlines typical operational options for ensuring that the interval (x) is maintained.
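The selection rule itself is simple to express; the sketch below is our illustration (not code from the study) of a random start within the first interval followed by every xth patient thereafter:

```python
import random

def systematic_sample(patients, interval, seed=None):
    """Select every `interval`-th patient after a random start point."""
    rng = random.Random(seed)
    start = rng.randrange(interval)  # random start in [0, interval)
    return patients[start::interval]
```

The operational difficulty lies not in this rule but in choosing a feasible interval in advance and in keeping count of patients reliably during data collection.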

Bias

Systematic random sampling will result in a random sample as long as the order in which patients exit the clinical consultation is random. While patterns in the order in which patients exit consultation rooms are fairly likely to exist at most facilities (e.g., patients without appointments are only seen at certain times of the day), the probability of being in the systematic random sample is the same for any one eligible patient. Thus, these patterns will only affect the representativeness of the interview sample if they occur in a periodic way throughout the data collection period, such that the pattern systematically coincides with the interval of the systematic random sample.

Operational Efficiency

Systematic random sampling requires the data collection team to monitor the interval with which patients are selected for interview. This can be accomplished in several ways, each of which has drawbacks (Table 3). Additionally, in most simulation scenarios, systematic random sampling was unable to achieve a higher operational efficiency than sampling the next patient entering the consultation room without resulting in patients having to wait until the next interviewer becomes available (Table 2). These patient waiting times are likely to compromise the representativeness of the sample, because some patients may leave the facility rather than wait for an interviewer.

It is important to bear in mind that the operational efficiency achieved with systematic random sampling in Table 2 assumes that the interval of selection is set at or near the optimal level. However, optimal interval setting is difficult to accomplish without considerable pilot testing. Ignoring the human resource needs to monitor the selection interval and assuming that the interval is set at or near the optimal level, systematic random sampling was the operationally most efficient unbiased sampling method in our simulations when the interview length was substantially shorter than the consultation length and/or the variances of both the consultation and interview lengths were considerably smaller than in the typical scenario shown in Table 2. Systematic random sampling performed poorly when the consultation and/or the interview length had a high variance.

Sampling the Next Patient Entering the Consultation Room

When the interviewer returns from an interview or arrives at the health care facility, he/she does not select the next patient exiting the consultation room, but instead selects the next patient entering the consultation room. In the case of multiple consultation rooms, the interviewer selects the next patient entering any of the consultation rooms.

Bias

We have shown mathematically that patients with longer consultation lengths are more likely to be interviewed when sampling the next patient exiting the consultation room (see the section entitled “Sampling the Next Patient Exiting the Consultation Room”). This bias is eliminated if interviewers do not select the next patient exiting, but rather wait for the next patient entering the consultation room. It is important to note that this sampling method is only unbiased under the assumption that the interviewer's completion time for the previous interview (or arrival time at the facility) is random with respect to the characteristics of the next patient who will enter the consultation room. This will be the case if the order in which patients enter the consultation room is unrelated to the length of time patients spend with the clinician and the interviewer.

In high volume settings where patients exit the consultation room at fairly regular intervals, sampling the next patient entering the consultation room will, in practice, be similar to systematic random sampling with the sampling interval being determined by both the interview and the consultation length. A disadvantage of sampling the next patient entering the consultation room compared to systematic random sampling is that researchers employing the latter method have somewhat more control over their sample size (by adjusting the sampling interval). This can sometimes be leveraged to create a self‐weighting sample, such as when sampling the same number of patients from facilities that were chosen with probability proportional to size. In contrast, researchers employing the method of sampling the next patient entering the consultation room will tend to sample more patients at busier facilities (or those with comparatively shorter consultation lengths) and may therefore need to weight their observations after data collection is completed.

Operational Efficiency

Our simulations demonstrate that sampling the next patient entering the consultation room is, in the majority of scenarios, a more operationally efficient method than systematic and simple random sampling. Important additional advantages of this sampling approach over systematic and simple random sampling are as follows: (1) it can be easily implemented in any setting without pilot testing; (2) it is simple to implement for data collection and clinical teams; (3) it eliminates the possibility of burdening patients with a waiting time until an interviewer is available; and (4) it does not require any time or effort on the part of the clinical team. While sampling the next patient entering the consultation room is operationally less efficient than sampling the next patient exiting the consultation room, this loss of operational efficiency is relatively minor. For instance, across the scenarios shown in Table 2, the mean percentage of patients interviewed is 46.6 percent with sampling the next patient exiting the consultation room, and 39.2 percent with sampling the next patient entering the consultation room. Similarly, across the scenarios, the mean number of patients interviewed per data collection day is 59.6 versus 46.5 for sampling the next patient exiting and the next patient entering the consultation room, respectively.

Conclusions

We have proposed a new, simple sampling method for patient exit interviews (sampling the next patient entering the consultation room) and demonstrated its relative advantages for typical primary health care settings. We show that sampling the next patient entering the consultation room tends to be the most operationally efficient unbiased sampling method as long as one assumption is met: the order in which patients are seen by the clinician is random with respect to the time they spend in the consultation room and with the interviewer.

Our analysis and simulation results also support the following additional conclusions. First, sampling the next patient exiting the consultation room should be used only if one of two conditions is met: (1) the researcher is not concerned about a sample in which patients who spent a longer time in the consultation room are overrepresented, or (2) it is feasible to time consultation lengths so that observations can be weighted. Second, several assumptions must hold for systematic random sampling to be unbiased and operationally more efficient than sampling the next patient entering the consultation room: (1) there is no periodicity in the order in which patients enter the consultation room; (2) the interview length is considerably shorter than the consultation length, or neither the interview nor the consultation length differs substantially between patients; (3) the sampling interval is set at or near its optimal level; and (4) the researchers can monitor the sampling interval reliably without reducing the number of available interviewers. Lastly, simple random sampling (i.e., using a randomization device to assign each eligible patient to be interviewed or not) is the only sampling method that always yields an unbiased sample without any additional assumptions.
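Regarding the second condition for next‐exiting sampling: because a patient's probability of selection under that method is roughly proportional to the time he or she spends in the consultation room, timed consultation lengths allow an approximate correction by weighting each observation by the inverse of its consultation length. A minimal sketch with hypothetical data (consultation lengths in minutes paired with an illustrative outcome score):

```python
# Hypothetical next-exiting sample: (consultation length in minutes, outcome score).
sample = [(30, 4.0), (10, 2.0), (20, 3.0)]

# Selection probability is roughly proportional to consultation length,
# so weight each observation by the inverse of that length.
weights = [1.0 / length for length, _ in sample]
weighted_mean = sum(w * score for w, (_, score) in zip(weights, sample)) / sum(weights)

unweighted_mean = sum(score for _, score in sample) / len(sample)
print(f"unweighted mean: {unweighted_mean:.2f}, weighted mean: {weighted_mean:.2f}")
```

If the outcome is positively associated with consultation length, as in this toy example, the weighted mean falls below the unweighted one, illustrating how the naive estimate overstates outcomes of long‐consultation patients.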

Supporting information

Appendix SA1: Author Matrix.

Appendix SA2: Study Characteristics and Sampling Methodology Used.

Acknowledgments

Joint Acknowledgment/Disclosure Statement: The authors gratefully acknowledge financial support from the Wellcome Trust, NIH (NICHD R01‐HD084233, NIAID R01‐AI124389, R01‐AI112339, NIA P01 AG041710) and the International Initiative for Impact Evaluation (3ie) and the Clinton Health Access Initiative (CHAI).

Disclosures: None.

Disclaimers: None.

References

  1. Alonge, O., Gupta S., Engineer C., Salehi A. S., and Peters D. H. 2014. “Assessing the Pro‐Poor Effect of Different Contracting Schemes for Health Services on Health Facilities in Rural Afghanistan.” Health Policy Plan 30 (10): 1229–42.
  2. Anya, S. E., Hydara A., and Jaiteh L. E. 2008. “Antenatal Care in The Gambia: Missed Opportunity for Information, Education and Communication.” BMC Pregnancy Childbirth 8: 9.
  3. Asfaw, E., Dominis S., Palen J. G., Wong W., Bekele A., Kebede A., and Johns B. 2014. “Patient Satisfaction with Task Shifting of Antiretroviral Services in Ethiopia: Implications for Universal Health Coverage.” Health Policy Plan 29 (Suppl 2): ii50–8.
  4. Chimbindi, N., Bärnighausen T., and Newell M. L. 2014. “Patient Satisfaction with HIV and TB Treatment in a Public Programme in Rural KwaZulu‐Natal: Evidence from Patient‐Exit Interviews.” BMC Health Services Research 14: 32.
  5. Chimbindi, N., Bor J., Newell M. L., Tanser F., Baltusen R., Hontelez J., de Vlas S., Lurie M., Pillay D., and Bärnighausen T. 2015. “Time and Money: The True Costs of Health Care Utilization for Patients Receiving ‘Free’ HIV/TB Care and Treatment in Rural KwaZulu‐Natal.” Journal of Acquired Immune Deficiency Syndromes 70 (2): e52–60.
  6. The Demographic and Health Surveys Program. 2015. “SPA Overview” [accessed on June 29, 2015]. Available at http://dhsprogram.com/What-We-Do/Survey-Types/SPA.cfm
  7. Deveugele, M., Derese A., van den Brink‐Muinen A., Bensing J., and De Maeseneer J. 2002. “Consultation Length in General Practice: Cross Sectional Study in Six European Countries.” British Medical Journal 325 (7362): 472.
  8. Ejigu, T., Woldie M., and Kifle Y. 2013. “Quality of Antenatal Care Services at Public Health Facilities of Bahir‐Dar Special Zone, Northwest Ethiopia.” BMC Health Services Research 13: 443.
  9. Etiaba, E., Onwujekwe O., Uzochukwu B., and Adjagba A. 2015. “Investigating Payment Coping Mechanisms Used for the Treatment of Uncomplicated Malaria to Different Socio‐Economic Groups in Nigeria.” African Health Sciences 15 (1): 42–8.
  10. Hrisos, S., Eccles M. P., Francis J. J., Dickinson H. O., Kaner E. F., Beyer F., and Johnston M. 2009. “Are There Valid Proxy Measures of Clinical Behaviour? A Systematic Review.” Implementation Science 4: 37.
  11. Islam, F., Rahman A., Halim A., Eriksson C., Rahman F., and Dalal K. 2015. “Perceptions of Health Care Providers and Patients on Quality of Care in Maternal and Neonatal Health in Fourteen Bangladesh Government Healthcare Facilities: A Mixed‐Method Study.” BMC Health Services Research 15: 237.
  12. Israel‐Ballard, K., Waithaka M., and Greiner T. 2014. “Infant Feeding Counselling of HIV‐Infected Women in Two Areas in Kenya in 2008.” International Journal of STD and AIDS 25 (13): 921–8.
  13. Opwora, A., Waweru E., Toda M., Noor A., Edwards T., Fegan G., Molyneux S., and Goodman C. 2015. “Implementation of Patient Charges at Primary Care Facilities in Kenya: Implications of Low Adherence to User Fee Policy for Users and Facility Revenue.” Health Policy Plan 30 (4): 508–17.
  14. Ostroff, J. S., Li Y., and Shelley D. R. 2014. “Dentists United to Extinguish Tobacco (DUET): A Study Protocol for a Cluster Randomized, Controlled Trial for Enhancing Implementation of Clinical Practice Guidelines for Treating Tobacco Dependence in Dental Care Settings.” Implementation Science 9: 25.
  15. Peabody, J. W., Florentino J., Shimkhada R., Solon O., and Quimbo S. 2010. “Quality Variation and Its Impact on Costs and Satisfaction: Evidence from the QIDS Study.” Medical Care 48 (1): 25–30.
  16. RAND Health. 2015. “Patient Satisfaction Questionnaire from RAND Health” [accessed on August 19, 2015]. Available at http://www.rand.org/health/surveys_tools/psq.html
  17. Sando, D., Geldsetzer P., Magesa L., Andrew I., Machumi L. M.‐S., Mary N., Li D. M., Spiegelman E., Siril H., Mujinja P., Naburi H., Chalamilla G., Kilewo C., Anna‐Mia E., Fawzi W. W., and Bärnighausen T. 2014. “Evaluation of a Community Health Worker Intervention and the World Health Organization's Option B versus Option A to Improve Antenatal Care and PMTCT Outcomes in Dar es Salaam, Tanzania: Study Protocol for a Cluster‐Randomized Controlled Health Systems Implementation Trial.” Trials 15: 359.
  18. Senarath, U., Fernando D. N., Vimpani G., and Rodrigo I. 2007. “Factors Associated with Maternal Knowledge of Newborn Care among Hospital‐Delivered Mothers in Sri Lanka.” Transactions of the Royal Society of Tropical Medicine and Hygiene 101 (8): 823–30.
  19. Stange, K. C., Zyzanski S. J., Smith T. F., Kelly R., Langa D. M., Flocke S. A., and Jaen C. R. 1998. “How Valid Are Medical Records and Patient Questionnaires for Physician Profiling and Health Services Research? A Comparison with Direct Observation of Patients Visits.” Medical Care 36 (6): 851–67.
  20. Steine, S., Finset A., and Laerum E. 2001. “A New, Brief Questionnaire (PEQ) Developed in Primary Health Care for Measuring Patients’ Experience of Interaction, Emotion and Consultation Outcome.” Family Practice 18 (4): 410–8.
  21. Turner, A. G., Angeles G., Tsui A. O., Wilkinson M., and Magnani R. 2001. Sampling Manual for Facility Surveys. MEASURE Evaluation Manual Series. Chapel Hill, NC: MEASURE Evaluation, Carolina Population Center, University of North Carolina at Chapel Hill.
  22. Wensing, M. 2006. EUROPEP 2006: Revised Europep Instrument and User Manual. Nijmegen: Centre for Quality of Care Research.
