Abstract
PURPOSE We compared the completeness of data collection using paper forms and using electronic forms loaded on handheld computers in an office-based patient interview survey conducted within the American Academy of Family Physicians National Research Network.
METHODS We asked 19 medical assistants and nurses in family practices to administer a survey about pneumococcal immunizations to 60 older adults each, 30 using paper forms and 30 using electronic forms on handheld computers. By random assignment, the interviewers used either the paper or electronic form first. Using multilevel analyses adjusted for patient characteristics and clustering of forms by practice, we analyzed the completeness of the data.
RESULTS A total of 1,003 of the expected 1,140 forms were returned to the data center. The overall return rate was better for paper forms (537 of 570, 94%) than for electronic forms (466 of 570, 82%) because of technical difficulties experienced with electronic data collection and stolen or lost handheld computers. Errors of omission on the returned forms, however, were more common using paper forms. Of the returned forms, only 3% of those gathered electronically had errors of omission, compared with 35% of those gathered on paper. Similarly, only 0.4% of total survey items were missing on the electronic forms, compared with 3.5% of the survey items on paper forms.
CONCLUSIONS Although handheld computers produced more complete data than paper forms among the forms that were returned, they were not superior overall because technical difficulties and loss or theft of the devices resulted in a large amount of missing data. Other hardware solutions, such as tablet computers or cell phones linked via a wireless network directly to a Web site, may be better electronic solutions for the future.
Keywords: Practice-based research; data collection; health surveys; questionnaires; Internet; PDA computers; computers, handheld; office visits; primary care
INTRODUCTION
The collection and management of survey data in office-based clinical research is challenging. Data collection frequently takes place in association with office visits where it is of secondary importance to patient care. Data collection may be performed by regular office personnel who have minimal training in research methods. Once collected, data must be transmitted to the research office and accurately entered, coded, and cleaned before analysis. These are time-consuming and exacting tasks that allow opportunities for errors. Electronic data forms may improve the accuracy and efficiency of office-based surveys. Electronic survey forms may be more accurate (or at least more complete) than paper forms because limits can be imposed on data fields, and respondents can be “forced” to answer each question. If electronic devices are indeed more accurate and efficient for office-based surveys, their added expense compared with paper forms may be justified because of the lower costs for data management downstream.
Hardware options for electronic data management in office-based studies include handheld computers—also referred to as personal digital assistants (PDAs)—tablet computers, notebook computers, and desktop computers. Data can be either stored on the computer for transmission to the research office or entered directly into a remote database using a Web-based interface. Pace and Staton1 reviewed the limited experience of practice-based research networks (PBRNs) with electronic data collection, describing potential advantages and challenges of a variety of approaches. They claimed that “[PDAs] appear to be the best current option for electronic point-of-care data collection.” To address this assertion, we studied the use of handheld computers for administering and transmitting patient survey data. We compared the completeness of data gathered using paper forms vs handheld computers in an office-based study in which adults aged 65 years or older were interviewed regarding pneumococcal immunizations. We also describe the technical difficulties experienced with handheld computers.
METHODS
Recruitment of Study Practices
We sent letters and e-mails in spring 2002 to 164 physician members of the American Academy of Family Physicians National Research Network (AAFP NRN) inviting them to recruit 1 of their nurses or medical assistants to participate in the Pneumococcal Immunization Study of Older Adults. Forty-four physicians expressed an interest. Twenty-two of these did not enroll, either because they lacked the resources to prepare an application for local institutional review board (IRB) approval (75%) or because of institutional computer security barriers such as firewalls (25%), leaving 22 practices that agreed to participate.
One nurse or medical assistant from each participating practice was instructed to interview 60 adult patients aged 65 years or older about pneumococcal immunizations—30 with paper forms and 30 with electronic forms loaded on handheld computers. The nurses and medical assistants (hereafter referred to as practice interviewers) were randomly assigned to use either the paper form or the electronic form first so that one method or the other would not have an unfair advantage due to a potential training effect. Data collection began August 22, 2002, and was completed on September 26, 2003.
One of the authors (T.V.S.) trained each practice interviewer via individual phone meetings. The interviewers were instructed to approach consecutive patients aged 65 years and older who visited the office for any reason. Follow-up contact between AAFP NRN staff and the interviewers continued via telephone calls and e-mails until data collection was completed.
Data Collection Methods and Form Content
The practice interviewers used Sony Clie PEG-T615C handheld computers (Sony Electronics Inc, San Diego, California) with the Palm operating system version 4.1 (Palm, Inc, Sunnyvale, California). The original data collection plan called for use of Pendragon Internet Forms (Pendragon Software Corporation, Libertyville, Illinois), which would have allowed remote transmission of encrypted data from each handheld computer to the Pendragon secure server via the Internet. Because institutional firewalls protecting many practices precluded the use of these forms, however, all but 1 of the participating practices mailed the handheld computers back to the AAFP at the completion of data collection. Pendragon Forms 3.2 software was used to program the form into the handheld computers. Electronic data were downloaded to Microsoft Excel 2000 (Microsoft Corp, Redmond, Washington) for storage. The practice interviewers mailed the paper forms to the AAFP, where data were entered by AAFP NRN staff. Staff double-checked 20% of the paper forms for data entry errors, which occurred in less than 1% of items.
The 40 survey items were identical on both forms. The items asked about patients’ knowledge and beliefs regarding pneumococcal immunizations (7 items); prior immunization (3 items); reasons for having/not having a prior immunization (11 items); preference for an immunization at the current office visit and reasons for wanting/not wanting an immunization (9 items); selected demographics (7 items); and administrative issues (3 items). These last questions asked the interviewer to obtain/supply data on prior pneumococcal immunization from the patient’s chart and asked whether the patient received a pneumococcal immunization at the current visit and the date of the visit. For the electronic forms only, all items were “forced choice” (a response had to be recorded before proceeding) except for the 10 demographic and administrative items.
Outcome Measures
For this study, we counted errors of omission (no response). We defined the survey return rate as the proportion of requested forms (60 per site) that were returned; for example, if 50 of the 60 forms were received, the survey return rate was 83%. We used 2 additional measures based on returned forms only: the proportion of returned forms with 1 or more errors of omission (returned form error rate) and the proportion of returned form items with errors of omission (returned form item error rate). For brevity, we refer to returned forms with any errors of omission as forms with errors and to form items with errors of omission as form items with errors.
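To make these definitions concrete, the sketch below computes the 3 measures for a single site. It is illustrative only (not study code) and assumes each returned form is represented as a list of responses in which None marks an omitted item.

```python
# Minimal sketch of the three outcome measures, assuming each returned form
# is a list of responses in which None marks an omitted item.
REQUESTED_PER_SITE = 60  # forms requested from each practice

def outcome_measures(returned_forms, requested=REQUESTED_PER_SITE):
    """Compute return rate, form error rate, and item error rate."""
    n_returned = len(returned_forms)
    forms_with_errors = sum(1 for f in returned_forms if any(r is None for r in f))
    total_items = sum(len(f) for f in returned_forms)
    missing_items = sum(sum(1 for r in f if r is None) for f in returned_forms)
    return {
        "survey_return_rate": n_returned / requested,
        "returned_form_error_rate": forms_with_errors / n_returned,
        "returned_form_item_error_rate": missing_items / total_items,
    }

# Example: 50 returned forms, 2 of which omit a single item each.
forms = [["yes"] * 20 for _ in range(48)] + [["yes"] * 19 + [None] for _ in range(2)]
print(outcome_measures(forms))  # return rate 0.83, form error rate 0.04, item error rate 0.002
```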
Types of Errors
For this investigation we did not count errors of commission—items that were answered but should not have been. For instance, if a respondent reported not having had the immunization but nonetheless answered the questions about why he/she received that immunization, these items were not counted as errors; we assumed that the response to the first question was correct. We counted errors of omission but not errors of commission because the former are more detrimental to survey results: omitted responses are simply unavailable for analysis, whereas errors of commission can be ignored or recoded as not applicable.
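The distinction can be expressed as a simple classification rule; the sketch below is illustrative only and uses hypothetical labels.

```python
# Illustrative sketch (not study code): classify a single survey item given
# whether the skip pattern made it applicable and whether it was answered.
def classify_item(applicable: bool, answered: bool) -> str:
    if applicable and not answered:
        return "error of omission"    # counted as an error in this study
    if not applicable and answered:
        return "error of commission"  # ignored or recoded as not applicable
    return "no error"
```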
Counting Applicable Items
Where each interviewed respondent is presented with the same number of applicable survey items, calculating the rate of errors of omission from returned forms is straightforward because the denominator is a constant. In forms with “skip patterns,” however, the number of applicable items varies with responses to particular questions (eg, “Have you ever received a pneumonia shot?”). The number of subsequent questions presented to respondents depends on their answers to this and other questions. Across the 2 survey forms, there were 8 possible response patterns, with applicable items per respondent ranging from 18 to 29.
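The sketch below illustrates how a skip pattern produces a varying denominator. The gate questions echo the survey, but the item counts are hypothetical; the actual form had 8 response patterns yielding 18 to 29 applicable items.

```python
# Illustrative sketch: count applicable items under a simple skip pattern.
# Item counts are hypothetical; the real form had 8 response patterns.
BASE_ITEMS = 18  # items asked of every respondent (hypothetical)

def applicable_items(ever_immunized: bool, wants_shot_today: bool) -> int:
    n = BASE_ITEMS
    # "Have you ever received a pneumonia shot?" gates the reason items.
    n += 6 if ever_immunized else 5   # reasons for having/not having one
    if wants_shot_today:
        n += 4                        # reasons for wanting one today
    return n

# The item error rate denominator sums applicable items across respondents
# rather than using a constant number of items per form.
respondents = [(True, False), (False, True), (True, True)]
denominator = sum(applicable_items(e, w) for e, w in respondents)
```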
Statistical Analyses
We conducted statistical analyses using SAS version 9.1 (SAS Institute, Inc, Cary, North Carolina). Rates, frequency distributions, and descriptive statistics (means, SDs) were computed for forms with errors and eligible form items. The intraclass correlation coefficient was computed to assess potential clustering of forms within sites; because the coefficient for returned forms within sites was 22% for forms with errors (yes/no) and 16.6% for form items with errors, we used multilevel methods appropriate for clustered data. We tested the associations of the primary outcomes (percentage of forms with errors and percentage of form items with errors) with form type, with order of administration (first vs second), and with the form type by order interaction. All analyses were adjusted for patient characteristics and clustering of forms within sites.
To determine whether the likelihood of a returned form with errors differed by form type (paper, electronic) or study group (paper first, electronic first), we used generalized linear mixed models (random intercept) with forms with errors (yes vs no) as the outcome (logit link), extending the traditional logistic regression model to accommodate the data’s hierarchical structure (Proc MIXED with the GLIMMIX macro).2 The proportion of form items with errors was analyzed using linear mixed models (random intercept) with SAS Proc MIXED. Patient-level covariates included race/ethnicity, sex, age, and education. We examined variance components after study variables and patient-level covariates had been added to the model and determined that site random effects should be retained. We used .05 as the probability of a type I error for statistical significance. The study was approved by the University of Missouri–Kansas City Social Science Institutional Review Board.
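For readers without SAS, a rough analogue of the random-intercept model for the item-level outcome can be fit in Python with statsmodels. This is a sketch under assumed column names (item_error_rate, form_type, order, race, sex, age, education, site), not the authors’ code, and it fits a linear mixed model rather than the GLIMMIX logit model.

```python
# Rough analogue (not the authors' SAS code) of the random-intercept linear
# mixed model for the proportion of form items with errors. All column
# names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("forms.csv")  # hypothetical file, one row per returned form

model = smf.mixedlm(
    "item_error_rate ~ form_type + order + race + sex + age + education",
    data=df,
    groups=df["site"],  # random intercept for each practice site
)
result = model.fit()
print(result.summary())

# The intraclass correlation can be estimated from the fitted variance
# components: site variance / (site variance + residual variance).
icc = result.cov_re.iloc[0, 0] / (result.cov_re.iloc[0, 0] + result.scale)
```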
RESULTS
Return Rates
Three of the 22 original practices dropped out of the study before data collection: 2 had staffing difficulties, and 1 had its handheld computer stolen before data collection began.
Overall, 1,003 of the expected 1,140 forms were returned to the data center. For the 19 practices returning any forms, the form return rates were 94% (537 of 570) for paper forms and 82% (466 of 570) for electronic forms (P <.001). Sixteen (84%) of the practices returned at least 30 paper forms; 10 (53%) returned at least 30 electronic forms (Table 1). Practice J returned 29 paper forms but only 4 electronic forms because of staffing shortages encountered during its electronic phase of the study. Practices C and N (electronic-first group) returned only paper forms because 1 handheld computer was lost/stolen before data collection and 1 was lost/stolen after data collection. Practice K returned only electronic forms and did not implement the paper form phase because of a staffing shortage after electronic data collection was completed. Data from these 4 sites were included in the statistical analyses.
Table 1. Returned Forms With 1 or More Errors of Omission, by Site and Form Type (Forms With Errors/Forms Returned)

| Site | Paper No. (%) | Electronic No. (%) |
| --- | --- | --- |
| Paper-first group | | |
| A | 3/30 (10) | 2/22 (9) |
| D | 2/30 (7) | 1/30 (3) |
| E | 10/30 (33) | 2/20 (10) |
| F | 6/30 (20) | 1/28 (4) |
| H | 9/30 (30) | 0/30 (0) |
| J | 20/29 (69) | 0/4 (0) |
| O | 3/30 (10) | 2/29 (7) |
| P | 1/30 (3) | 1/31 (3) |
| R | 30/30 (100) | 1/25 (4) |
| S | 6/30 (20) | 1/32 (3) |
| Total | 90/299 (30) | 11/251 (4) |
| Electronic-first group | | |
| B | 4/30 (13) | 0/30 (0) |
| C | 18/32 (56) | NA^a |
| G | 30/30 (100) | 0/30 (0) |
| I | 17/24 (71) | 1/28 (4) |
| K | NA^b | 0/32 (0) |
| L | 8/30 (27) | 0/32 (0) |
| M | 2/30 (7) | 2/33 (6) |
| N | 11/30 (37) | NA^c |
| Q | 6/32 (19) | 1/30 (3) |
| Total | 96/238 (40) | 4/215 (2) |
| Overall total | 186/537 (35) | 15/466 (3) |

NA = not applicable.
Note: Not all sites returned 60 forms as specified in the protocol.
^a Handheld computer was lost/stolen before data collection.
^b No paper forms were returned because of staffing difficulties.
^c Handheld computer was lost/stolen after data collection.
Sixteen (84%) of 19 practice sites returned both paper and electronic forms. For these practices, there was no statistical difference between the mean number of paper forms and electronic forms returned per practice (29.7 vs 27.1; t(15) = 1.45, P = .17), although there were 9.4% more returned paper forms than electronic forms (475 vs 434).
Error Rates
The returned form error rate was 35% (186 of 537) for paper forms and 3% (15 of 466) for electronic forms (P <.001) (Table 1). Across the 18 sites returning paper forms, there were 469 errors of omission; across the 17 sites returning electronic forms, there were 43. The returned form item error rate was 3.5% for paper forms and 0.4% for electronic forms, an absolute difference of 3.1 percentage points favoring electronic forms (P <.001) (Table 2).
Table 2. Returned Form Items With Errors of Omission, by Site and Form Type (Items With Errors/Applicable Items)

| Site | Paper No. (%) | Electronic No. (%) |
| --- | --- | --- |
| Paper-first group | | |
| A | 3/727 (0.4) | 6/547 (1.1) |
| D | 6/760 (0.8) | 2/740 (0.3) |
| E | 14/737 (1.9) | 6/522 (1.1) |
| F | 12/760 (1.6) | 1/714 (0.1) |
| H | 27/767 (3.5) | 0/781 (0.0) |
| J | 55/764 (7.2) | 0/121 (0.0) |
| O | 3/808 (0.4) | 2/735 (0.3) |
| P | 2/752 (0.3) | 6/774 (0.8) |
| R | 34/757 (4.5) | 5/604 (0.8) |
| S | 14/777 (1.8) | 2/876 (0.2) |
| Total | 170/7,609 (2.2) | 30/6,414 (0.5) |
| Electronic-first group | | |
| B | 10/777 (1.3) | 0/771 (0.0) |
| C | 67/774 (8.7) | NA^a |
| G | 100/670 (14.9) | 0/689 (0.0) |
| I | 81/585 (13.8) | 5/699 (0.7) |
| K | NA^b | 0/800 (0.0) |
| L | 9/750 (1.2) | 0/799 (0.0) |
| M | 6/806 (0.7) | 6/864 (0.7) |
| N | 20/764 (2.6) | NA^c |
| Q | 6/823 (0.7) | 2/799 (0.3) |
| Total | 299/5,949 (5.0) | 13/5,421 (0.2) |
| Overall total | 469/13,558 (3.5) | 43/11,835 (0.4) |

NA = not applicable.
Note: Not all sites returned 60 forms as specified in the protocol, and the number of items per form varied because of skip patterns.
^a Handheld computer was lost/stolen before data collection.
^b No paper forms were returned because of staffing difficulties.
^c Handheld computer was lost/stolen after data collection.
In models with main effects for form type (paper vs electronic) and order (first vs second), only the main effect for type was statistically significant, favoring greater completeness of electronic forms in terms of both any omissions on returned forms (Table 3) and omitted items on returned forms (Table 4). Order of administration did not significantly influence these outcomes.
Table 3. Model-Estimated Percentage of Returned Forms With Errors of Omission, by Form Type

| Type of Form | Order of Administration | Forms With Errors, % (95% CI) |
| --- | --- | --- |
| Paper | Either | 26.5 (19.9–34.5) |
| Electronic | Either | 5.1 (3.5–17.6) |

CI = confidence interval.
Note: Type of form: F(1,15) = 156.93, P <.001; order: F(1,17) = 0.71, P = .41.
Table 4. Model-Estimated Percentage of Returned Form Items With Errors of Omission, by Form Type

| Type of Form | Order of Administration | Items With Errors, % (95% CI) |
| --- | --- | --- |
| Paper | Either | 3.00 (2.09–3.91) |
| Electronic | Either | 0.94 (0.01–1.87) |

CI = confidence interval.
Note: Type of form: F(1,15) = 49.43, P <.01; order: F(1,17) = 2.65, P = .12.
Results for Specific Items
Of the 469 errors of omission on paper forms and 43 on electronic forms, the pneumococcal immunization items accounted for 56% and 0%, respectively; the demographic items for 23% and 53%; and the administrative items for 21% and 47%. When the analysis was restricted to the 10 demographic and administrative items that were not forced choice on either form, the electronic forms were again superior to the paper forms: there was a 0.9% error rate (43 errors per 4,537 eligible items) for returned electronic forms, compared with a 4.0% error rate (210 errors per 5,250 eligible items) for returned paper forms (P <.001).
DISCUSSION
Despite the widespread use of handheld computers (PDAs) by physicians for electronic reference and prescription writing, few studies have examined the usefulness of handhelds for gathering and transmitting data in office-based research studies.3–13 Besides the current study, 3 other controlled trials have compared data collection using handheld computers with paper and pencil methods.4–6 In an audit of care in hospital medical wards in the United Kingdom, an electronic system saved time, decreased staffing costs, and reduced errors.4 McBride et al5 found that quality scores on patient satisfaction questionnaires completed by patients using handheld computers were comparable to those obtained with paper and pencil questionnaires, but there was evidence of lower internal consistency and reliability with the handhelds. Missinou et al6 reported comparable data quality and a preference of the researchers for handheld computers over paper forms in a field study in remote Africa. In a nonrandomized study in a primary care setting, the investigators concluded that use of handheld computers resulted in few protocol violations and other incorrect data entries, and saved time.7 None of these studies involved remote data transmission. The International Primary Care Network (IPCN) collected data on otitis media from 131 family physicians and general practitioners in 4 countries using Newtons, an early handheld computer, and transmitted the data to a central server via satellite.13 They experienced a number of technical difficulties, including a shutdown of the satellite network by Apple Computer.
In a review of electronic data management in PBRNs, Pace and Staton1 state that “PDAs work well for collecting defined data elements at the point of care.” Our observation is that they work well sometimes; we cannot give them an unequivocal endorsement. Among returned forms, the practice interviewers in this study obtained more complete data with fewer errors of omission using handheld computers than using paper, but 3 study practices turned in few or no electronic forms because of staffing shortages (1 practice) or loss/theft of the handheld computer (2 practices). Although the electronic method yielded more complete returned forms, the paper method yielded 15.2% more returned forms overall (537 vs 466). (As noted previously, a third practice that had agreed to participate dropped out of the study because its handheld computer was stolen before data collection began.)
In addition, the percentage of items with errors, ostensibly the most important outcome measure for survey research, was low for both methods: 3.5% for paper forms and 0.4% for electronic forms. When restricted to the 10 items that were not forced choice on either form, these omission error rates were 4.0% for paper forms and 0.9% for electronic forms. For both outcomes, the electronic forms were superior.
Our data suggest that nurses and medical assistants, when interviewing and recording responses, can produce complete data using either paper forms or electronic forms on handheld computers. If the most important outcome is that the returned forms have no errors of omission, however, the handheld computers were far superior, with omission-free rates of 97% vs 65%, an absolute difference of 32 percentage points.
The potential advantage of electronic transmission of data to the central research office was not realized in our study because of firewall problems and related institutional computer security issues. Given the growing number of reported security breaches of university, government, and corporate databases across the globe, these problems are likely to increase in the future. Moreover, 5 practices that wanted to participate could not do so because of computer security issues. There does not appear to be an easy solution to these technical issues, especially where the practice is part of a larger organization (eg, university, hospital) that has strict requirements and procedures in place to limit transmission of information between the institution and external Internet Web sites. Attempting to resolve these issues “long distance” often entails a great deal of time and increased frustration on both sides. Issues such as these will need to be resolved if practice-based research is to make maximum use of electronic data collection.
There are limitations to this study. First, we did not attempt to measure the time needed to gather data with each method or the time needed for data entry, cleaning, and coding. It is possible that a time-motion study would reveal clear superiority of one method or the other. We did observe, however, that our staff spent many hours navigating computer security issues and assisting practices with setting up and using the handheld computers. Once practices become more adept at and accustomed to using electronic devices for research studies on a regular basis, these problems may be minimized.
Second, study practices did not keep a systematic count of patients who declined the invitation to be interviewed for the study. We therefore do not know whether study refusal rates might differ between the 2 types of data collection instruments. Third, we did not take full advantage of the electronic technology. In the 466 returned electronic forms, there were no errors of omission for the 30 items that were forced choice. The 43 errors of omission occurred in the remaining 10 items with no forced choices. Had we made all responses forced choice, we would have expected even better results for the electronic method; however, in a recent review of an electronic survey project, the AAFP NRN IRB disallowed items that “required” respondents to select among substantive alternatives (eg, agree, unsure, disagree). Either the respondent had to be free to skip questions, or “no response” had to be listed as one of the forced choices. This IRB ruling appears to lessen the purported advantages of electronic data collection relative to paper forms.
Fourth, the study was limited to survey administration and data entry by trained practice interviewers, and our results cannot be generalized to self-administered surveys of patients, who may vary in their levels of computer literacy. In a recent systematic review of 9 randomized trials of pen and paper vs handheld computers (PDAs) for patient diaries in clinical research, however, Dale and Hagen14 found that, despite some technical difficulties, PDAs outperformed pen and paper for collecting patient diary data.
Finally, we might have used a more stringent outcome measure, the form completeness rate, defined as the proportion of requested forms returned with no errors of omission. This measure counts forms that were never returned against each method, thereby capturing the larger proportion of electronic forms that were not returned. The form completeness rate was 79% (451 of 570) for electronic forms vs 62% (351 of 570) for paper forms (P <.001). Even this more stringent measure resulted in a superior outcome for the electronic forms.
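As a check on the arithmetic, the sketch below recomputes this measure from the counts reported above; it is illustrative only.

```python
# Form completeness rate = returned forms with zero omissions / requested forms.
# Counts are those reported in the Results section (19 sites x 30 per type).
requested = 570
electronic_complete = (466 - 15) / requested   # 451/570 = 0.79
paper_complete = (537 - 186) / requested       # 351/570 = 0.62
print(f"electronic: {electronic_complete:.0%}, paper: {paper_complete:.0%}")
```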
We chose to study the use of handheld computers, believing that this relatively inexpensive and mobile technology might be best suited to office-based research. We now suspect that Web-based approaches offer PBRNs a better electronic solution than handheld computers, which are susceptible to loss/theft and which require docking and downloading or mailing to the research office. For example, in a Colorado Research Network (CaReNet) survey, patients were able and willing to use tablet PCs for data collection in busy primary care offices.15 With the cost of tablet computers and wireless networks decreasing rapidly, it is becoming economical to use tablet computers and Web-based survey tools, such as SurveyMonkey (http://www.surveymonkey.com) and Zoomerang (http://info.zoomerang.com). In addition, the Agency for Healthcare Research and Quality (AHRQ) has a Web-based survey tool, Ultimate Survey, that is powerful, user friendly, and available at no cost to registered PBRNs.
Acknowledgments
The authors wish to express their thanks to the physicians and study coordinators who contributed to this research: Delores Baer, FNP; Alice Bond, BSN; Benjamin Brewer, MD; Penny Carter; Steven Crane, MD; William Crow, Jr, MD; Daniel Doyle, MD; Andrew Eisenberg, MD, MHA; Mickey Ellis; Robert Farron, DO; Sandra Florence; Michael Hatsell, MD; Cindy Hendrickson, RN; Hayat Heriba, MD; Jackie Hodgson, RN; Barri Hoffman; Raj Kachoria, MD; Paul LeBlanc, MD; James Ledwith, Jr, MD; J. Carter Mayberry, MD; Lee Miller, LPN; Adam Miner, MD; Karen Miner, MA; Rebecca McVey, RN; Rose Orologas, MA; Gloria Salonic, LPN; Pat Samargo, RN; Nancy Saunders, LPN; Brian Selius, DO; Rhonda Smith, LPN; Linda Stewart, MD; Betty Swisher; Daniel Triezenberg, MD; June R. Tunstall, MD; Rose Warhank, MD; and Darryl White, MD.
Conflicts of interest: none reported
Funding support: This study was funded in part by a grant (2 U01 HS011182 02) to the American Academy of Family Physicians National Research Network by the Agency for Healthcare Research and Quality (AHRQ).
REFERENCES
1. Pace WD, Staton EW. Electronic data collection options for practice-based research networks. Ann Fam Med. 2005;3(Suppl 1):S21–S29.
2. Littell RC, Milliken GA, Stroup WW, Wolfinger RD. SAS System for Mixed Models. Cary, NC: SAS Institute, Inc; 1996.
3. Burnard P. Data collection using a palm-top computer. Prof Nurse. 1995;11(3):201–202.
4. Curl M, Robinson D. Hand-held computers in clinical audit: a comparison with established paper and pencil methods. Int J Health Care Qual Assur. 1994;7(3):16–20.
5. McBride JS, Anderson RT, Bahnson JL. Using a hand-held computer to collect data in an orthopedic outpatient clinic: a randomized trial of two survey methods. Med Care. 1999;37(7):647–651.
6. Missinou MA, Olola CH, Issifou S, et al. Short report: piloting paperless data entry for clinical research in Africa. Am J Trop Med Hyg. 2005;72(3):301–303.
7. Tattersall AB, Ellis R. The use of a hand-held computer to record clinical trial data in general practice: a pilot study. J Int Med Res. 1989;17(2):185–189.
8. Grand AM, Delisle E, Champagne S, Theroux P. Evaluation of the Newton Pen-Pad as a tool for collecting clinical research data at the bedside. AMIA Annu Symp Proc. 1996:738–741.
9. Keshavjee K, Lawson ML, Malloy M, Hubbard S, Grass M. Technology failure analysis: understanding why a diabetes management tool developed for a Personal Digital Assistant (PDA) didn’t work in a randomized controlled trial. AMIA Annu Symp Proc. 2003:889.
10. Kline JA, Johnson CL, Webb WB, Runyon MS. Prospective study of clinician-entered research data in the emergency department using an Internet-based system after the HIPAA Privacy Rule. BMC Med Inform Decis Mak. 2004;4:17.
11. VanDenKerkhof EG, Goldstein DH, Blaine WC, Rimmer MJ. A comparison of paper with electronic patient-completed questionnaires in a preoperative clinic. Anesth Analg. 2005;101(4):1075–1080.
12. Green LA, Fryer GE, Froom P, Culpepper L, Froom J. Opportunities, challenges, and lessons of international research in practice-based research networks: the case of an international study of acute otitis media. Ann Fam Med. 2004;2(5):429–433.
13. Froom J, Culpepper L, Green LA, et al. A cross-national study of acute otitis media: risk factors, severity, and treatment at initial visit. Report from the International Primary Care Network (IPCN) and the Ambulatory Sentinel Practice Network (ASPN). J Am Board Fam Pract. 2001;14(6):406–417.
14. Dale O, Hagen KB. Despite technical problems personal digital assistants outperform pen and paper when collecting patient diary data. J Clin Epidemiol. 2007;60(1):8–17.
15. Main DS, Quintela J, Araya-Guerra R, Holcomb S, Pace WD. Exploring patient reactions to pen-tablet computers: a report from CaReNet. Ann Fam Med. 2004;2(5):421–424.