Applied Clinical Informatics. 2015 Mar 18;6(1):148–162. doi: 10.4338/ACI-2014-09-RA-0073

Implementation of an Audio Computer-Assisted Self-Interview (ACASI) System in a General Medicine Clinic

Patient Response Burden

WE Trick 1, C Deamant 2, J Smith 1, D Garcia 1, F Angulo 1
PMCID: PMC4377567  PMID: 25848420

Summary

Background

Routine implementation of instruments to capture patient-reported outcomes could guide clinical practice and facilitate health services research. Audio interviews facilitate self-interviews across literacy levels.

Objectives

To evaluate the time burden for patients, and the factors associated with response times, for an audio computer-assisted self-interview (ACASI) system integrated into the clinical workflow.

Methods

We developed an ACASI system, integrated with a research data warehouse. Instruments for symptom burden, self-reported health, depression screening, tobacco use, and patient satisfaction were administered through touch-screen monitors in the general medicine clinic at the Cook County Health & Hospitals System during April 8, 2011-July 27, 2012. We performed a cross-sectional study to evaluate the mean time burden per item and for each module of instruments; we evaluated factors associated with longer response latency.

Results

Among 1,670 interviews, the mean per-question response time was 18.4 (SD, 6.1) seconds. By multivariable analysis, age was most strongly associated with prolonged response time, which increased per decade relative to the <50-year referent group as follows (additional seconds per question; 95% CI): 50–59 years (1.4; 0.7 to 2.1); 60–69 (3.4; 2.6 to 4.1); 70–79 (5.3; 4.2 to 6.3); and 80–89 (5.4; 4.0 to 6.9). Response times were also longer for Spanish language (3.9; 2.9 to 4.9), no home computer use (3.2; 2.7 to 3.8), and low mental self-reported health (0.6; 0.0 to 1.1). However, most interviews were completed within 10 minutes.

Conclusions

An ACASI software system can be included in a patient visit and adds minimal time burden. The burden was greatest for older patients, interviews in Spanish, and for those with less computer exposure. A patient’s self-reported health had minimal impact on response times.

Keywords: Computers, software, quality of life, symptoms, patient-centered outcome research

1. Background

Existing data collected during a patient’s healthcare encounter has enormous potential to improve the delivery of healthcare to the right patient at the right time, and to measure the quality of care delivered [1, 2]. Although the value of using data routinely generated during healthcare encounters has been recognized, such data usually represent physiologic measures, administrative codes, medication prescribing, and radiologic interpretations. There is a void in standardized and systematic measurement of health-related behaviors and patient-reported health status in routine clinical care.

Recently, there has been increased attention on measuring patient-reported outcomes [3]. Because collecting patients’ health-related behaviors and self-reported outcomes has many potential advantages and poses minimal risk, we developed an audio computer-assisted self interview (ACASI) software system that incorporated instruments clinicians endorsed as meaningful to the clinical encounter. Such instruments measured physical and mental health, symptom burden, satisfaction, and health-related behaviors. Although incorporating audio substantially complicated system development, we included audio to improve the accuracy of responses for patients with low levels of literacy [4, 5].

2. Objectives

To describe the development of our system, including design considerations encountered during software development, and to report the time required for interview completion and the patient factors associated with longer response times.

3. Methods

3.1 Participant Recruitment

From April 8, 2011 through July 27, 2012, we directed patients who presented to one section of the general internal medicine clinic of the Cook County Health & Hospitals System (CCHHS) to computer stations enclosed in wall-mounted kiosks. CCHHS is a large, urban public healthcare system serving municipalities surrounding and including Chicago, IL. All English- or Spanish-speaking patients 18 years of age or older were eligible. General medicine clinic patients are distributed among three separate patient care teams (firms) without regard to age or chronic illness burden. We selected one of the three firms because it was geographically separate from the other two co-located firms, which facilitated modification of the workflow and allowed placement of dedicated kiosks for patient interviews. Before their appointment, patients in the waiting area were directed to the computer kiosks. To minimize disruption of clinic workflow, we did not approach patients who were next in the queue to be seen by their provider. Race and ethnicity categories were self-reported at registration. The CCHHS institutional review board approved the project and waived the requirement for participant consent.

3.2 Instruments

Instruments were selected in consultation with clinicians from the general medicine clinic (▶ Table 1). Factors considered during instrument selection included brevity, psychometric properties, and licensing agreements. We included the following instruments in the ACASI system: the National Institutes of Health Patient Reported Outcomes Measurement Information System (PROMIS) self-reported health 10-item short form [6]; the Memorial Symptom Assessment Scale (MSAS) short form, a 17-item somatic symptom inventory [7-9]; the two-item Patient Health Questionnaire (PHQ-2), from which a score is generated to estimate the likelihood of depression [10]; a 1- to 3-item tobacco evaluation adapted from the Fagerström tolerance questionnaire [11]; and an abbreviated 13-item version of the Health Resources and Services Administration's patient satisfaction survey [12]. We modified the MSAS by eliminating the response option that indicated symptom presence without distress. The revised response scale was as follows: 0, absent; 1, present with a little distress; 2, present with some distress; 3, present with quite a bit of distress; 4, present with very much distress. The overall MSAS physical symptom burden was calculated as the mean score of the twelve most prevalent symptoms [7].
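The modified MSAS burden score can be sketched in a few lines. This is an illustrative sketch only: the function name and symptom identifiers are hypothetical stand-ins, not the published MSAS item set or the authors' implementation.

```python
# Illustrative sketch of the modified MSAS physical burden score described
# above; symptom names are hypothetical stand-ins for the MSAS items.
def msas_physical_burden(responses, prevalent_symptoms):
    """Mean distress rating (0-4) over the twelve most prevalent symptoms.

    responses: dict of symptom -> rating (0 absent .. 4 very much distress);
               absent symptoms may simply be omitted.
    prevalent_symptoms: the twelve symptoms included in the summary score.
    """
    ratings = [responses.get(symptom, 0) for symptom in prevalent_symptoms]
    return sum(ratings) / len(prevalent_symptoms)
```

For example, a patient rating one symptom at "some distress" (2) and one at "a little distress" (1) out of twelve tracked symptoms would score (2 + 1)/12 = 0.25, below the 0.5 cut point used later in the analysis.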

Table 1.

Instrument options for patient assessments. Selections could be based on clinic location and patient characteristics. Instruments were administered using an audio computer-assisted self-interview (ACASI) system in waiting-room kiosks.

Instruments and intervention opportunities

Behavioral health
Tobacco use; Substance use (ASSIST)
  • Smoking cessation counseling; a list of smokers willing to quit is sent to health educators
  • Brief interventions for at-risk use; specialty referral for dependent use

Clinical assessments
Symptom burden (MSAS); Depression screen (PHQ-2)
  • Provision of symptom-specific coping strategies
  • Linkage with pharmacy data to generate alerts for common side effects (e.g., cough and ACE inhibitor)
  • Offer of psychiatrist or psychologist evaluation

Patient-reported outcomes
Self-reported health (NIH PROMIS)
  • Augmented care for patients who have decrements in self-reported health
  • Longitudinal assessments available to evaluate intervention strategies
  • Outcomes routinely collected for comparative effectiveness research

Instruments were compiled into modules that could vary based on clinic location and patient demographics. To minimize patient burden, instruments were not repeated if minimal time had elapsed since the prior interview. The sequence of instruments was fixed in the following order: self-reported health, symptom burden, depression screen, tobacco use, and patient satisfaction. The self-reported health and symptom burden assessments were completed at each visit; the depression screen and tobacco use assessments were administered only if there had been no assessment within the prior 60 days; and, to further minimize patient burden, soon after initial deployment the patient satisfaction survey was restricted to a 10% random sample of patients. For this report, instrument selection was not changed based on demographic factors. Because implementation was part of clinical care rather than a research activity, responses were not mandatory.
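The module-assembly rules above can be sketched as a small function. This is a hedged illustration under stated assumptions: the function, instrument labels, and parameter names are invented for this sketch, not taken from the deployed C# system.

```python
import random
from datetime import date

# Hypothetical sketch of the module-assembly rules described above.
def build_module(last_screen_date, visit_date, sample_fraction=0.10, rng=random):
    """Return the ordered instrument list for one clinic visit."""
    # Self-reported health and symptom burden run at every visit, in fixed order.
    module = ["self_reported_health", "symptom_burden"]
    # Depression and tobacco screens repeat only after 60 days without one.
    if last_screen_date is None or (visit_date - last_screen_date).days > 60:
        module += ["phq2", "tobacco"]
    # The satisfaction survey goes to a 10% random sample of patients.
    if rng.random() < sample_fraction:
        module.append("satisfaction")
    return module
```

A first-time patient would draw all four core instruments (plus, with 10% probability, the satisfaction survey), while a patient screened within the prior 60 days would receive only the two per-visit instruments.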

3.3 Audio computer-assisted self-administered interviews (ACASI)

A touch-screen monitor activated the ACASI program. Patients used headphones and selected their preferred language (Spanish or English). They heard studio-quality audio files of a verbatim reading of each survey question, which was simultaneously displayed on the monitor. As each response option became audible, its words were highlighted. Before survey initiation, registration clerks affixed a bar-coded visit label to an index card, which was provided to the patient. Patients were instructed to scan the barcode at the computer kiosk. Real-time data transmission to our research data warehouse enabled joining the visit number to the patient's name and demographics (▶ Figure 1). After optional practice questions, patients completed the module. We captured the total interview time, but not item-level response times. Physicians received a printed copy of survey responses during the proximate appointment (▶ Figure 2). The software was developed in C# on the .NET Framework, and ACASI responses were stored in a SQL Server research data warehouse (Microsoft Corp., Redmond, WA) [13].
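The barcode-to-demographics join described above can be illustrated with a minimal relational sketch. The table layout, column names, and use of SQLite here are assumptions for illustration; the production system used C# and SQL Server, and its schema is not described in the paper.

```python
import sqlite3  # stands in here for the SQL Server warehouse used in production

# Hypothetical schema: registration writes a row into `visits`; the kiosk
# writes responses keyed by the scanned bar-coded visit number.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE visits (visit_id TEXT PRIMARY KEY, patient_name TEXT, birth_date TEXT);
CREATE TABLE acasi_responses (visit_id TEXT, instrument TEXT, item TEXT, response INTEGER);
""")

def record_response(visit_id, instrument, item, response):
    """Store one answer keyed by the scanned visit barcode."""
    conn.execute("INSERT INTO acasi_responses VALUES (?, ?, ?, ?)",
                 (visit_id, instrument, item, response))

def responses_with_demographics(visit_id):
    """Join interview answers to registration demographics via the visit number."""
    return conn.execute(
        "SELECT v.patient_name, r.instrument, r.item, r.response "
        "FROM acasi_responses r JOIN visits v USING (visit_id) "
        "WHERE r.visit_id = ?", (visit_id,)).fetchall()
```

The design point is that the kiosk never needs patient identifiers: the scanned visit number is the only key, and identification happens at join time inside the warehouse.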

Fig. 1. Schematic detailing the process for administering the audio computer-assisted self-interviews (ACASI).

Fig. 2. Example of the display for printed responses for a patient interview.

3.4 Analytic methods

We combined survey responses with patients' date of birth, sex, and self-reported race and ethnicity from our research data warehouse. The PROMIS global physical and mental health scores were calculated as recommended, calibrated to a national mean of 50 with a standard deviation of 10. We excluded incomplete surveys, defined as non-response to more than 10% of the questions in the entire module. We analyzed the data both including and excluding non-response items; because the findings were similar, and because we were also interested in the total time burden for patients, we report per-question response times inclusive of missing responses. To minimize bias from potential survey satisficing, we excluded interviews completed unusually quickly (under 4 minutes), a threshold based on results during pilot testing.
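The two exclusion rules above (suspiciously fast completion and excessive non-response) reduce to a simple predicate. The function and threshold parameter names below are illustrative, not the authors' analysis code.

```python
# Sketch of the analytic exclusions described above; interview fields
# and parameter names are hypothetical.
def retain_interview(duration_minutes, n_items, n_missing,
                     min_minutes=4.0, max_missing_fraction=0.10):
    """Keep an interview unless it was suspiciously fast or too incomplete."""
    if duration_minutes < min_minutes:                 # possible satisficing
        return False
    if n_missing / n_items > max_missing_fraction:     # incomplete survey
        return False
    return True
```

For instance, an 8-minute interview missing 2 of 30 items (6.7% non-response) is retained, while a 3.5-minute interview is dropped regardless of completeness.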

For patient characteristics that had multiple categories, we created a referent group and indicator variables. For age, we evaluated the association between response time and each decade of age. Because there was an inflection point indicating delayed response times at approximately 50 years of age, we compared each decade after 50 years to the age <50 referent group. Similarly, symptom burden was associated with delayed response times. The increase in response time became most apparent beyond a mean symptom burden of 0.5, corresponding, on average, to half of all symptoms being present at a magnitude of "a little distress"; thus, we report response times for MSAS ≥0.5 compared with <0.5. We used the chi-square test for proportions and Student's t-test for continuous variables. To estimate the extra response time by patient characteristic, we constructed linear regression models. For multivariable analyses, we entered all variables into the model with stepwise removal of variables not significant at P ≤ 0.05. Because some patients completed surveys on separate clinic visits, we adjusted for the within-patient correlation structure using generalized estimating equations [14]. Analyses were performed using Stata version 13 (StataCorp, College Station, TX).
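The indicator (dummy) coding described above can be made concrete as follows. The cut points come from the text (age <50 as referent with decade bands, MSAS dichotomized at 0.5); the function names are illustrative.

```python
# Sketch of the indicator-variable coding described above.
def age_indicators(age):
    """One 0/1 indicator per decade past the <50-year referent group."""
    bands = [(50, 59), (60, 69), (70, 79), (80, 89)]
    return [1 if lo <= age <= hi else 0 for lo, hi in bands]

def high_symptom_burden(msas_mean):
    """1 if mean MSAS physical burden is at or above the 0.5 cut point."""
    return 1 if msas_mean >= 0.5 else 0
```

A 45-year-old contributes all-zero age indicators (the referent), while a 63-year-old turns on only the 60–69 indicator; the regression coefficient on each indicator is then the extra seconds per question relative to the referent.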

4. Results

Among 2,299 patient assessments, 1,670 (73%) were retained for analysis; 554 (24%) were excluded for missing responses and 75 (3%) for unusually short interview duration. Most of the 1,527 patient respondents (91%) completed only a single assessment; 8.7% completed two assessments and 0.6% completed three, all on separate clinic visits. Most respondents were female, a substantial minority chose Spanish, and the mean self-reported health score was below the U.S. mean of 50 (▶ Table 2). Among participants who completed the optional practice questions, most did not have a computer in their home.

Table 2.

Characteristics of patients who completed an audio computer-assisted self interview (ACASI) in a general medicine clinic during April 8, 2011 through July 27, 2012.

Patient-level characteristics, categorical N=1,527 %
Female 892 58
Spanish language 302 20
Race/ethnicity
Non-Hispanic black 706 54
Hispanic 379 25
Non-Hispanic white 141 9
Non-Hispanic Asian 94 6
Other/unknown 92 6
Computer in home (practice question)
No 352 23
Yes, used at least once 185 12
Yes, used in prior 9 days 138 9
Chose not to complete any practice questions 852 56
Characteristic, continuous Mean (SD)
Age 57.0 12.4
Response-level characteristics, continuous (N=1,670) Mean (SD)
Physical symptom burdena 1.1 0.7
Self-reported health, mentalb 44.4 8.3
Self-reported health, physicalb 40.5 8.5
Time per question, seconds 18.4 6.1
Number of interviews per patient 1.1 0.4

a Measured with the 12-item physical symptom subscale of the 17-item memorial symptom assessment scale (MSAS).

b Measured using the 10-item NIH PROMIS instrument.

The mean per-question response time was 18 seconds, and it was also 18 seconds in repeated surveys among those who completed more than one assessment. In bivariable analyses, factors associated with longer response times included age >50 years, with a stepwise increase in response time for each succeeding decade; Spanish language; female sex; race/ethnicity other than non-Hispanic white; no computer in the home; a relatively high physical symptom burden; and a lower physical or mental component of self-reported health (▶ Table 3). By multivariable analysis, older age, Spanish language preference, and no home computer had the greatest impact on prolonging response times (▶ Table 4).

Table 3.

Response latency for audio computer-assisted self interview (ACASI) questions.

Characteristic Difference, seconds 95% CI P-value
Language
English Referent -- --
Spanish 3.4 2.7 to 4.1 <0.001
Race-Ethnicity
Non-hispanic black Referent -- --
Non-hispanic white -3.4 -4.4 to -2.3 <0.001
Non-hispanic Asian 1.7 0.5 to 2.9 0.007
Hispanic 1.5 0.8 to 2.2 <0.001
Sex
Male Referent -- --
Female 0.7 0.2 to 1.3 0.013
Computer in Homea
No Referent -- --
Yes, not used -0.4 -1.5 to 0.6 0.39
Yes, used -4.2 -5.2 to -3.2 <0.001
No response -4.9 -5.6 to -4.3 <0.001
Requested Practice
No Referent -- --
Yes 3.7 3.2 to 4.3 <0.001
Physical Symptom Burdenb
Less than 0.5 Referent -- --
≥ 0.5 1.3 0.6 to 2.0 <0.001
Self-Reported Healthc
Better than clinic mean Referent -- --
Less than mean, physical 0.8 0.2 to 1.4 0.01
Less than mean, mental 1.0 0.3 to 1.5 0.002
Age, years
19 to 29 Referent -- --
30 to 39 2.0 -0.2 to 4.2 0.08
40 to 49 1.0 -0.1 to 3.9 0.06
50 to 59 3.5 1.5 to 5.4 <0.001
60 to 69 5.7 3.7 to 7.7 <0.001
70 to 79 7.5 5.4 to 9.6 <0.001
80 to 89 8.2 5.8 to 11 <0.001

P-values and 95% CIs calculated using Student’s t-test.

a Quantified over prior 9 days. Non-respondents included declination of practice questions.

b 12-item physical symptom subscale of the memorial symptom assessment scale (MSAS).

c Measured using the 10-item NIH PROMIS instrument.

Table 4.

Difference in response latency for completion of audio computer-assisted self interview (ACASI) questions, by multivariable analysis.

Characteristic Difference (seconds)a 95% CI P-value
Language
English Referent -- --
Spanish 3.9 2.9 to 4.9 <0.001
Race-ethnicity
Non-hispanic black Referent -- --
Non-hispanic white -2.6 -3.5 to -1.6 <0.001
Hispanic -1.0 -2.0 to 0.0 0.04
Non-hispanic Asian 1.0 -0.1 to 2.2 0.08
Other or unknown -0.9 -2.0 to 0.3 0.13
Computer in Home b
Yes or unknown Referent -- --
No, or not used 3.2 2.7 to 3.8 <0.001
Physical symptom burden c
Less than 0.5 Referent -- --
≥ 0.5 0.6 0.0 to 1.3 0.06
Self-Reported Health, Mental d
Above mean score Referent -- --
Less than or equal to mean 0.6 0.0 to 1.1 0.05
Age, years
19 to 49 Referent -- --
50 to 59 1.4 0.7 to 2.1 <0.001
60 to 69 3.4 2.6 to 4.1 <0.001
70 to 79 5.3 4.2 to 6.3 <0.001
80 to 89 5.4 4.0 to 6.9 <0.001

a Adjusted for repeat observations.

b Quantified over prior 9 days. Non-response included those who declined practice questions.

c 12-item physical symptom subscale of the memorial symptom assessment scale (MSAS).

d Measured using the 10-item NIH PROMIS instrument.

Patients were administered a minimum of two instruments (self-reported health and symptom burden). These two instruments were completed in a mean duration of approximately eight minutes. Adding the depression screen increased the mean interview duration by 80 seconds, and a tobacco assessment added an additional 30 seconds. For all five instruments, the mean interview duration was less than twelve minutes (▶ Table 5).

Table 5.

Duration of interviews based on fixed sequence modules.

Module N Mean duration, (minutes) 95% CI
Self-reported health, symptoms 167 8.2 7.8 to 8.5
Self-reported health, symptoms, satisfaction 7 8.9 5.8 to 11.9
Self-reported health, symptoms, PHQ-2 37 9.5 8.5 to 10.5
Self-reported health, symptoms, PHQ-2, tobacco 1169 10.0 9.8 to 10.2
Self-reported health, symptoms, PHQ-2, tobacco, satisfaction 290 11.5 11.1 to 11.9

Instruments: self-reported health, NIH PROMIS 10-item short form; symptoms, Memorial Symptom Assessment Scale (17-item MSAS); satisfaction, HRSA patient satisfaction survey; PHQ-2, two-item Patient Health Questionnaire depression screen; tobacco, 1- to 3-item conditional-logic screen based on the Fagerström assessment.
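The incremental durations quoted in the text can be checked directly by differencing the mean module durations in Table 5 (values in minutes):

```python
# Worked check of the incremental durations, from Table 5 means (minutes).
base = 8.2            # self-reported health + symptoms
with_phq2 = 9.5       # + depression screen
with_tobacco = 10.0   # + tobacco assessment

print(round((with_phq2 - base) * 60))          # 78 seconds, roughly the 80 quoted
print(round((with_tobacco - with_phq2) * 60))  # 30 seconds for the tobacco screen
```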

5. Discussion

We implemented an audio computer-assisted self-interview (ACASI) system in a general medicine clinic that is part of a safety-net health system for the medically underserved. Using real-time transactional data, the system was integrated with clinical data from the electronic health record (EHR). Integration of data sources allowed instruments to be compiled into modules based on patient characteristics, clinical site, and time since the prior interview. To make the system minimally disruptive to clinic workflow, we focused on minimizing time burden through short-form instruments and design choices. The average administration time for all five instruments (self-reported health, symptom burden, tobacco assessment, depression screen, and satisfaction survey) was under twelve minutes. Response times were longer for older age, with a stepwise increase for each decade over 50 years; for assessments in Spanish; for individuals who did not have a home computer; and for those reporting a low mental self-reported health score.

During ACASI software development, we made several decisions to improve respondents’ efficiency. To allow for rapid responses by highly literate patients, each question and its response options were displayed before the audio file was activated; thus, patients who read faster than the audio recording could select their response without waiting. Also, we allowed patients to skip individual questions by pressing a next button. On average, the response times for our patient population exceeded those previously reported for a self-administered non-audio electronic survey system [15]; however, direct comparisons are complicated by differences in instruments, patient populations, and possibly network speed.

Advantages to building a self-administered system rather than completing a face-to-face intake assessment include increased disclosure of certain behaviors [16, 17], consistency in how interviews are conducted, the opportunity to automate rule-based referrals based on automated calculation of summary scores (e.g., depression and smoking cessation counseling), and freeing up clinic personnel time to focus on other work activities. For quality of life assessments, in a prior study we found good concordance between ACASI and computer-assisted telephone interviews conducted one week later [18]. Despite the challenges of incorporating audio files, we chose to include audio so that the system could be completed with relative independence by patients who had low-literacy levels, which has been reported as common among patients at public institutions and Spanish-speaking patients [19].

As expected, age was strongly associated with delayed response times [15]. Compared with patients <50 years, those over 80 years took 5.4 seconds longer per item, which extrapolates to 2.4 additional minutes to complete the self-reported health (10-item) and symptom burden (17-item) assessments. We identified an inflection point of longer response times at approximately 50 years of age, consistent with the previously reported age of decline in human-computer interaction [20, 21]. Increased age decreases facility with novel computer interfaces and prolongs survey responses for web-based systems and telephone interviews [20-22]. Delayed response times for older respondents have been attributed to physical declines (vision, hearing, hand strength, and coordination) and cognitive changes [20, 23, 24]. Some age-related factors will be difficult to overcome with design modifications, but options to improve usability include minimizing the number of buttons per screen, using a large font size, spatial layouts that tolerate less precise movements, high-contrast color choices, avoidance of scroll bars, and interfaces that minimize distractions [20, 25].
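The 2.4-minute extrapolation follows directly from the per-item difference and the combined item count of the two per-visit instruments:

```python
# Worked check: 5.4 extra seconds per item (ages 80-89 vs <50, Table 4)
# across the 10-item PROMIS and 17-item MSAS instruments.
extra_per_item = 5.4        # seconds per question
n_items = 10 + 17           # self-reported health + symptom burden items
print(round(extra_per_item * n_items / 60, 1))  # 2.4 additional minutes
```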

Longer response times for Spanish-speaking patients have not been well documented. During pilot testing of the ACASI system by bilingual research assistants, we observed that Spanish translations of some items required more words than the English versions; however, when we tested interview duration after standardizing the responses and instruments, the English versions were completed in a similar amount of time. Possible unmeasured factors contributing to longer interview times for Spanish-speaking patients include increased attention to hearing all possible responses, a lack of clarity in culturally vague translations [26], lower education and literacy levels, and less prior exposure to similar technologies.

Although, on average, patients who had a relatively high symptom burden or a low mental self-reported health score took longer to respond to questions, these effects were relatively modest. Administering surveys to populations who report more active symptoms therefore may not impose a meaningfully increased respondent burden. Because assessing self-reported measures may be most useful for patients who have multiple chronic conditions, it is important that response times were minimally affected by symptom burden.

Our findings are limited in that we evaluated patients in a single clinic that cares for the most complicated patients in our health system; we enrolled a convenience sample, because patients next in the queue to be seen by their provider were excluded; and we captured aggregate rather than item-level response times. Finally, although there are many seemingly self-evident potential benefits to incorporating such a system into routine clinical practice, including possibly enhanced patient-provider communication, meaningful improvements in health outcomes have not yet been demonstrated.

6. Conclusions

Capture of patient-reported outcomes can be integrated into the clinical workflow and completed without excessive burden on patients. Given the variability in response times across patients, a major challenge is to design systems that accommodate certain populations, such as older individuals, those with less exposure to technology, and possibly Spanish-speaking patients. Critical to uptake of these systems is demonstrating that such assessments meaningfully improve patient care or the patient experience.

Acknowledgements

We thank Sharon Irons, Eular Brown, and Kina Montgomery for their assistance in setting up the kiosks and working with the clinical staff to re-design the patient flow through the clinic. We thank George Markovski and Yingxu Xiang for developing the software platform. Funded by the Agency for Healthcare Research and Quality, grant number R24 HS19481–01.

Footnotes

Clinical Relevance

Audio computer-assisted self-interviews collect important self-reported behaviors and outcomes. Such assessments can be conducted while patients wait for their provider with relatively little time burden, and their responses can help guide the patient encounter and more fully complete the electronic health record. Interfaces for patient assessment need to be designed to minimize the burden on older patients and those who speak Spanish.

Conflicts of Interest Statement

The authors declare that they have no conflicts of interest in the research.

Protection of Human and Animal Subjects

This project was reviewed by the CCHHS Institutional Review Board and consent for participation was waived.

References

  • 1.Bakken S, Cimino JJ, Hripcsak G. Promoting patient safety and enabling evidence-based practice through informatics. Med Care 2004; 42(2):49–56. [DOI] [PubMed] [Google Scholar]
  • 2.Dorr D, Bonner LM, Cohen AN, Shoai RS, Perrin R, Chaney E, Young AS. Informatics systems to promote improved care for chronic illness: a literature review. J Am Med Inform Assoc 2007; 14(2):156–163. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Bayliss EA, Ellis JL, Shoup JA, Zeng C, McQuillan DB, Steiner JF. Association of patient-centered outcomes with patient-reported and ICD-9-based morbidity measures. Annals of family medicine. 2012; 10(2):126–133. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Sinha M, Khor K, Amresh A, Drachman D, Frechette A. The Use of a Kiosk-Model Bilingual Self-Triage System in the Pediatric Emergency Department. Pediatr Emerg Care 2014; 30(1):63–68. [DOI] [PubMed] [Google Scholar]
  • 5.Al-Tayyib AA, Rogers SM, Gribble JN, Villarroel M, Turner CF. Effect of low medical literacy on health survey measurements. Am J Public Health 2002; 92(9):1478–1480. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Cella D, Riley W, Stone A. Initial Adult Health Item Banks and First Wave testing of the Patient-Reported outcomes Measurement Infomation System (PROMIS) Network: 2005–2008. J Clin Epidemiol 2010; 63(11):1179–1194. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Chang VT, Hwang SS, Feuerman M, Kasimis BS, Thaler HT. The memorial symptom assessment scale short form (MSAS-SF). Cancer 2000; 89(5):1162–1171. [DOI] [PubMed] [Google Scholar]
  • 8.Kris AE, Dodd MJ. Symptom experience of adult hospitalized medical-surgical patients. J Pain Symptom Manage 2004; 28(5):451–459. [DOI] [PubMed] [Google Scholar]
  • 9.Portenoy RK, Thaler HT, Kornblith AB, Lepore JM, Friedlander-Klar H, Kiyasu E, Sobel K, Coyle N, Kemeny N, Norton L, et al. The Memorial Symptom Assessment Scale: an instrument for the evaluation of symptom prevalence, characteristics and distress. Eur J Cancer 1994; 30A(9):1326–1336. [DOI] [PubMed] [Google Scholar]
  • 10.Kroenke K, Spitzer RL, Williams JBW. The Patient Health Questionnaire-2: validity of a two-item depression screener. Med Care 2003; 41(11): 1284. [DOI] [PubMed] [Google Scholar]
  • 11.Heatherton TF, Kozlowski LT, Frecker RC, Fagerstrom K. The Fagerström test for nicotine dependence: a revision of the Fagerstrom Tolerance Questionnaire. Brit J Addict 1991; 86(9):1119–1127. [DOI] [PubMed] [Google Scholar]
  • 12.Health Resources and Services Administration. Patient satisfaction survey form. Available from: http://bphc.hrsa.gov/policiesregulations/performancemeasures/patientsurvey/surveyform.html [Accessed June 17, 2014].
  • 13.Wisniewski MF, Kieszkowski P, Zagorski BM, Trick WE, Sommers M, Weinstein RA. Development of a clinical data warehouse for hospital infection control. J Am Med Inform Assoc 2003; 10(5):454–462. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Rabe-Hesketh S, Skrondal A. Multilevel and Longitudinal Modeling using Stata. 2nd ed College Station, TX: Stata Press; 2008. [Google Scholar]
  • 15.Herrick D, Nakhasi A, Nelson B, Rice S, Abbott P, Saber Tehrani S, Rothman R, Lehmann H, Newman-Toker D. Usability characteristics of self-administered computer-assisted interviewing in the emergency department. Appl Clin Inf 2013; 4: 276–292. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Del Boca FK, Darkes J. The validity of self reports of alcohol consumption: state of the science and challenges for research. Addiction 2003; 98(s2):1–12. [DOI] [PubMed] [Google Scholar]
  • 17.Kurth AE, Martin DP, Golden MR, Weiss NS, Heagerty PJ, Spielberg F, Handsfield HH, Holmes KK. A comparison between audio computer-assisted self-interviews and clinician interviews for obtaining the sexual history. Sex Transm Dis 2004; 31(12):719–726. [DOI] [PubMed] [Google Scholar]
  • 18.Klevens J, Trick W, Kee R, Angulo F, Garcia D, Sadowski LS. Concordance in the measurement of quality of life and health indicators between two methods of computer-assisted interviews: self administered and by telephone. Quality of Life Research 2010; 20(8):1179–1186. [DOI] [PubMed] [Google Scholar]
  • 19.Williams MV, Davis T, Parker RM, Weiss BD. The role of health literacy in patient-physician communication. Fam Med 2002; 34(5):383–389. [PubMed] [Google Scholar]
  • 20.Hawthorn D. Possible implications of aging for interface designers. Interact Comp 2000; 12(5):507–528. [Google Scholar]
  • 21.Fricker S, Galesic M, Tourangeau R, Yan T. An experimental comparison of web and telephone surveys. Public Opin Quart 2005; 69(3):370–392. [Google Scholar]
  • 22.Wagner N, Hassanein K, Head M. Computer use by older adults: A multi-disciplinary review. Comput Hum Behav 2010; 26(5):870–882. [Google Scholar]
  • 23.Ranganathan VK, Siemionow V, Sahgal V, Yue GH. Effects of aging on hand function. J Am Geriatr Soc 2001; 49(11):1478–1484. [DOI] [PubMed] [Google Scholar]
  • 24.Fisher DL, Glaser RA. Molar and latent models of cognitive slowing: Implications for aging, dementia, depression, development, and intelligence. Psychon B Rev 1996; 3(4):458–480. [DOI] [PubMed] [Google Scholar]
  • 25.Dickinson A, Newell AF, Smith MJ, Hill RL. Introducing the Internet to the over-60s: Developing an email system for older novice computer users. Interact Comp 2005; 17(6):621–642. [Google Scholar]
  • 26.Willis G, Lawrence D, Thompson F, Kudela M, Levin K, Miller K, editors. The use of cognitive interviewing to evaluate translated survey questions: Lessons learned. Proceedings of the Federal Committee on Statistical Methodology Research Conference; Arlington, VA; 2005. [Google Scholar]

Articles from Applied Clinical Informatics are provided here courtesy of Thieme Medical Publishers
