Author manuscript; available in PMC: 2012 Mar 12.
Published in final edited form as: J Behav Health Serv Res. 2011 Jul;38(3):414–423. doi: 10.1007/s11414-010-9229-4

Outcome Assessment via Handheld Computer in Community Mental Health: Consumer Satisfaction and Reliability

Lizabeth A Goldstein 1, Mary Beth Connolly Gibbons, Sarah M Thompson 2, Kelli Scott 3, Laura Heintz 4, Patricia Green 5, Donald Thompson 6, Paul Crits-Christoph 7
PMCID: PMC3299491  NIHMSID: NIHMS359396  PMID: 21107916

Abstract

Computerized administration of mental health-related questionnaires has become relatively common, but little research has explored this mode of assessment in “real-world” settings. In the current study, 200 consumers at a community mental health center completed the BASIS-24 via handheld computer as well as paper and pen. Scores on the computerized BASIS-24 were compared with scores on the paper BASIS-24. Consumers also completed a questionnaire which assessed their level of satisfaction with the computerized BASIS-24. Results indicated that the BASIS-24 administered via handheld computer was highly correlated with pen and paper administration of the measure and was generally acceptable to consumers. Administration of the BASIS-24 via handheld computer may allow for efficient and sustainable outcomes assessment, adaptable research infrastructure, and maximization of clinical impact in community mental health agencies.


Psychological measures have many uses in clinical practice, including screening, outcomes assessment, and treatment planning.1–3 In particular, outcomes assessment allows for the measurement of treatment effectiveness and has become increasingly important not only to researchers but also to clinical agencies.3 Intervention research has traditionally used pen and paper for the baseline and follow-up assessments required in outcomes evaluation, but this method of data collection poses potential problems for effectiveness research conducted in the real world of treatment delivery, where efficiency and ease are especially valuable. Firstly, responding to items via traditional pen and paper delays both data reporting and data entry. Paper-and-pen data collection also leaves a broader margin of error, with the possibility of data-entry mistakes as well as time and effort lost to data entry, re-entry, comparison, and cleanup. There is also the issue of missing data in paper-based data collection, which can be attenuated in computerized systems by prompting participants to answer questions they have skipped.4

Accordingly, instantaneous data capture has received greater attention in recent years. Computerized administration of measures allows for cleaner data capture and potential time savings, which benefits not only researchers but also the mental health consumers and clinical staff completing and responding to measures.5 Research has shown that administering measures via computer is less expensive and less time consuming than conducting face-to-face interviews or paper-based assessments.6–8 Furthermore, most consumers do not have difficulty using electronic devices to complete assessments. One study using a touchscreen computer found that between 97% and 99% of consumers were able to correctly answer a series of questions about the proper use of the device.9 Computerized assessment has been shown to be particularly beneficial when consumers were asked about sensitive issues such as substance abuse and suicidal ideation, producing more honest responses and increasing consumers’ comfort with revealing personal information.8

Computerized assessment has proven to be a reliable method of data collection for measures of mental health and general functioning.10,11 In addition, multiple studies have found that the results of computerized assessments are equivalent to those obtained using paper-based versions across a variety of measures.7,8,12–14 Consumers report high levels of satisfaction with computerized assessment, typically preferring computer-based measures to those completed using paper and pen.6,11,13,15 Ease of use and privacy are cited as advantages of computerized assessment.9,13 These studies were conducted using multiple scales across a variety of populations, including inpatient and outpatient populations in both mental health and non-mental health settings.

Although these results suggest that computerized assessment is superior to traditional pen and paper administration of measures, very few studies have utilized mobile forms of technology to assess patient outcomes. The majority of research employing mobile, handheld computers has been conducted in an experience sampling or ecological momentary assessment format.16–22 However, at least one study has utilized handheld computers to administer longer measures, assessing the outcomes of youth with severe emotional disturbances.23 The authors concluded that handheld computers are an efficient, effective means of administering assessments in clinical populations.

Just as computer-based assessments are an improvement over pen and paper, handheld computers offer a number of advantages over desktop models. These advantages are especially relevant in community mental health centers (CMHCs), where space is often limited. The portability and small screen size of handheld computers allow consumers to complete assessments in any location throughout the CMHC while maintaining confidentiality. The small size and comparatively low price of handheld computers also permit CMHCs to buy and store multiple devices, ensuring that several consumers can complete measures simultaneously, a significant asset given the high patient flow that is typical of community clinics. Having multiple devices also guards against missed data collection, since consumers can still complete assessments if individual devices temporarily stop functioning.

Although various studies have used computer-based assessments to administer measures to clinical populations, very few have assessed the feasibility of using these devices in CMHCs, and even fewer have compared consumer satisfaction with assessment via handheld computer versus traditional pen and paper. Wolford et al. and Eisen et al. confirmed the viability of conducting computerized assessments in community mental health populations, but neither study examined consumer satisfaction with handheld computers.6,7 Cook et al. assessed the feasibility of administering the Quick Inventory of Depressive Symptomatology on a tablet computer among outpatients diagnosed with major depressive disorder.13 However, patients were recruited from university-affiliated clinics rather than CMHCs, suggesting that the results may not generalize to those seeking services at CMHCs. For example, consumers at CMHCs are typically less educated than those who seek services at other sites. In addition, the poor and indigent populations seeking services at CMHCs have likely had less exposure to and experience with electronic devices. The current project assessed consumer satisfaction with handheld computers in relation to demographic and clinical variables in order to account for these differences among community populations. The study was designed to evaluate any specific difficulties that CMHC consumers might have in completing assessments via handheld computer, a particularly important investigation given the usefulness of handheld devices in busy community clinics with limited resources.

This project aimed to assess the feasibility of administering the BASIS-24 via handheld computers in CMHCs. The validity of the computerized version of the BASIS-24 has been tested in one small-scale investigation, which did not specifically evaluate the feasibility of administration via handheld computer.7 More specifically, the current study sought to (1) provide descriptive results of consumer satisfaction with completing the BASIS-24 via handheld computer, (2) evaluate consumers’ satisfaction with this assessment delivery method in relation to demographic and clinical variables, and (3) establish the internal consistency reliability of handheld computer administration of the BASIS-24 as well as its concordance with paper administration of the measure. These aims were carried out in a large CMHC outpatient sample.

Method

Participants

Participants were consumers waiting for intake or therapy appointments at an outpatient CMHC in Philadelphia. This CMHC serves over 2,000 consumers per year, most of whom are low-income, African-American adults. The most common diagnoses were schizophrenia and major depressive disorder. Both pharmacotherapy and psychotherapy are offered, with medication managed by psychiatrists and psychotherapy conducted mostly by master’s level clinicians and graduate student interns.

Measures

Revised Behavior and Symptom Identification Scale (BASIS-24).24

The BASIS-24 is a 24-item self-report scale that measures general psychiatric symptoms and has been validated for use in CMHCs. The scale was developed for the assessment of consumers’ mental health outcomes. The BASIS-24 provides an overall symptom score as well as six subscale scores: depression/functioning, interpersonal relationships, psychosis, emotional lability, self-harm, and substance abuse. Responses are rated on a 5-point scale, with higher scores indicating greater distress. The measure shows good internal consistency reliability and validity for use among people who are Caucasian, African American, and Hispanic.25 The scale was administered on paper as well as on handheld computers. The handheld computers were personal digital assistants (PDAs), chosen to maximize both portability and confidentiality: the small screen and text size prevent others from reading consumers’ responses on the PDAs, even when they are sitting or standing in close proximity.

Consumer satisfaction survey

Administered on paper, this survey assessed consumers’ demographic information, satisfaction with the handheld computer assessment, previous use of a handheld computer, and thoughts about privacy with use of a handheld computer. Consumers were also asked which method of assessment administration they found easier to use (paper vs. handheld computer). Satisfaction items were measured on a 1 to 5 scale, with 1 indicating strong dissatisfaction and 5 indicating strong satisfaction. A total satisfaction score was established by calculating the mean of the five satisfaction items for each participant. The internal consistency of the total satisfaction score in the current sample was .93.
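As an illustration, the total satisfaction score described above can be computed directly from the item responses. The sketch below is a minimal example under stated assumptions rather than the study's actual scoring code: the five column names are hypothetical stand-ins for the satisfaction items, and internal consistency is computed as Cronbach's alpha.

```python
import numpy as np
import pandas as pd

# Hypothetical column names for the five 1-5 satisfaction items.
SATISFACTION_ITEMS = [
    "liked_handheld", "easy_to_use", "words_large_enough",
    "directions_clear", "comfortable_future_use",
]

def total_satisfaction(df):
    """Mean of the five satisfaction items for each participant (1-5 scale)."""
    return df[SATISFACTION_ITEMS].mean(axis=1)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example usage (file name is hypothetical):
# df = pd.read_csv("consumer_satisfaction.csv")
# df["overall_satisfaction"] = total_satisfaction(df)
# print(cronbach_alpha(df[SATISFACTION_ITEMS].dropna()))
```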

Data-capture system

The BASIS-24 was programmed as a Flash movie, hosted on a server computer and accessed via the handheld computer. The Flash movie incorporated the guidelines proposed by Palmblad and Tiplady,26 in that it was appropriate for use in this particular population (CMHC consumers), required little to no training for consumer use, required all answers to result from a user’s action (i.e., no default choice), and displayed all needed information on a single screen per question. Participants answered each question on the handheld screen by tapping the stylus on their answer. At the completion of the survey, the data were instantly transferred wirelessly to a server computer, where they were automatically entered into a Microsoft Access database via an encrypted intranet connection.
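The paragraph above describes the study's actual pipeline (a Flash form on a PDA posting to a Microsoft Access database over an encrypted intranet connection). Purely to illustrate the instantaneous-capture idea, the sketch below writes each completed BASIS-24 to a database at the moment of submission, with no paper-to-computer entry step; Python's built-in SQLite stands in for Access here, and the table and column names are hypothetical.

```python
import sqlite3
from datetime import datetime, timezone

def init_db(path="basis24.db"):
    """Create (if needed) and return a connection to the response database."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS basis24_responses (
               participant_id TEXT,
               item_number INTEGER,   -- BASIS-24 item 1..24
               response INTEGER,      -- rating on the measure's 5-point scale
               submitted_at TEXT      -- capture timestamp (UTC, ISO 8601)
           )"""
    )
    return conn

def store_response(conn, participant_id, answers):
    """answers: dict mapping item number (1-24) to the selected rating."""
    timestamp = datetime.now(timezone.utc).isoformat()
    conn.executemany(
        "INSERT INTO basis24_responses VALUES (?, ?, ?, ?)",
        [(participant_id, item, rating, timestamp) for item, rating in answers.items()],
    )
    conn.commit()

if __name__ == "__main__":
    conn = init_db()
    # Hypothetical completed survey for one participant.
    store_response(conn, "P001", {item: 1 for item in range(1, 25)})
```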

Procedure

Following IRB approval of this study by the University of Pennsylvania, data collection began. All consumers who sat in the waiting room at the CMHC over a 4-month period were approached consecutively and asked if they would be willing to participate in a research study. Consumers verbally consented to participation. Before completing the actual BASIS-24 questionnaire, consumers were offered the opportunity to complete practice questions on the handheld computer with the assistance of the research assistant. Half the consumers completed the BASIS-24 via handheld computer first, whereas the other half completed the paper version first, to attenuate order effects. Consumers then completed the other form of the BASIS-24 and the Consumer Satisfaction Survey. Consumer responses were used solely for research purposes and were not made available to treatment providers. Participants were paid $10 for their time.

Data analysis

The overall satisfaction score was computed as the mean of the five satisfaction items (liked using the handheld computer, ease of use, size of words, clarity of directions, and comfort using it in the future) for the purpose of evaluating predictors of satisfaction. These items were scored from 1 (not at all) to 5 (very) in response to each question. Consumers’ privacy concerns were assessed by asking, “Did you feel that filling out the survey on the handheld computer was private enough?” to which consumers could answer “yes” or “no”. Five consumer variables were examined in relation to satisfaction and privacy: gender (male vs. female), age (44 years old or younger vs. 45 years old or older), educational level (11 years of school or less vs. 12 years of school or more), reason for appointment (intake vs. treatment), and prior use of a handheld computer (prior use vs. no prior use). Regression analyses were conducted to predict overall satisfaction and perceptions of privacy from these consumer demographic variables. Satisfaction was examined using multiple regression, while perceptions of privacy were analyzed using logistic regression. Exploratory t tests were used for individual satisfaction items when the overall satisfaction score was significantly related to a consumer variable. To establish equivalence of the handheld computer and paper-and-pen BASIS-24 scores, the intraclass correlation between the two administrations was calculated. In addition, the internal consistency reliability of the computerized BASIS-24 was calculated.
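A minimal sketch of the two regression models described above follows, assuming Python with pandas and statsmodels. The data frame here is synthetic and the column names are hypothetical; in the actual analysis, each row would be one consumer, with the five predictors coded as binary indicators matching the groupings in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per consumer, binary-coded predictors.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "age_45_or_older": rng.integers(0, 2, n),
    "edu_12_plus": rng.integers(0, 2, n),
    "intake_visit": rng.integers(0, 2, n),
    "prior_handheld_use": rng.integers(0, 2, n),
    "overall_satisfaction": rng.uniform(1, 5, n),  # mean of the five 1-5 items
    "privacy_adequate": rng.integers(0, 2, n),     # 1 = "yes, private enough"
})

predictors = "male + age_45_or_older + edu_12_plus + intake_visit + prior_handheld_use"

# Multiple regression: overall satisfaction on the five consumer variables.
ols_fit = smf.ols(f"overall_satisfaction ~ {predictors}", data=df).fit()
print(ols_fit.summary())

# Logistic regression: perceived privacy on the same predictors.
logit_fit = smf.logit(f"privacy_adequate ~ {predictors}", data=df).fit()
print(logit_fit.summary())
```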

Results

Consumer demographics

Two hundred eighteen consumers were approached to participate in the study, yielding a total of 200 participants. Reasons for refusing to participate included not having enough time before the therapy session and not liking surveys. Of the 200 consumers who agreed to participate, most were female (68.72%) and African American (85.95%). The average age was 41.25 years (SD=12.23), and the average number of years of education completed was 11.47 (SD=3.11). Thirty-six percent of participants had used a handheld computer at some point prior to the study, independent of this research project. The mean BASIS-24 scores for the computerized and paper administrations of the measure were very similar (computerized administration: M=1.63, SD=.72, n=181; paper administration: M=1.60, SD=.73, n=170). Full demographic information is available in Table 1.

Table 1.

Demographic characteristics

Demographic n (%) M (SD)
Gender (n=195)
  Female 134 (68.72)
  Male 61 (31.28)
Race (n=185)
  African American 159 (85.95)
  American Indian or Alaskan Native 14 (7.57)
  Caucasian 18 (9.73)
  Other 11 (5.95)
Ethnicity (n=151)
  Hispanic 12 (7.95)
  Not Hispanic 139 (92.05)
Prior use of handheld computer (n=187)
  Yes 68 (36.36)
  No 119 (63.64)
Age (n=191) 41.25 (12.23)
Education (n=176) 11.47 (3.11)

Consumer satisfaction

Consumers generally expressed satisfaction with the handheld method of BASIS-24 administration, finding it easy to use and acceptable as a potential method of data collection for future projects. Table 2 provides the mean score and frequencies for each satisfaction item. Items were scored from 1 (not at all) to 5 (very) in response to each question. Eighty-four percent of consumers liked using the handheld computer (i.e., four or five in response), and 84.64% of consumers found using the handheld computer to be easy. Eighty-nine percent felt that the words were large enough, and 89.39% felt that the directions were clear. Eighty-five percent of consumers indicated they would be comfortable using the handheld computer to fill out surveys and forms in the future. Most consumers (92.71%) thought the handheld computer was sufficiently private.

Table 2.

Satisfaction with BASIS-24 administered via handheld computer

Percentages are of consumers answering each item (item n shown per row); items were scored from 1 (not at all) to 5 (very), and M (SD) is shown where applicable.

Did you like using the handheld computer? (n=197): 1, 4.57%; 2, 3.05%; 3, 8.63%; 4, 12.69%; 5, 71.07%. M (SD)=4.43 (1.06)
Was it easy to use? (n=195): 1, 4.62%; 2, 4.10%; 3, 5.64%; 4, 9.74%; 5, 75.90%. M (SD)=4.48 (1.08)
Were the words large enough? (n=197): 1, 3.55%; 2, 2.03%; 3, 5.08%; 4, 6.60%; 5, 82.74%. M (SD)=4.63 (.93)
Would you be comfortable using the handheld computer to fill out future surveys and forms? (n=198): 1, 4.55%; 2, 4.04%; 3, 6.06%; 4, 4.55%; 5, 80.81%. M (SD)=4.53 (1.06)
Were the directions for using the computer clear? (n=198): 1, 3.03%; 2, 3.03%; 3, 4.55%; 4, 3.03%; 5, 86.36%. M (SD)=4.67 (.95)
Did you feel that filling out the survey on the handheld computer was private enough? (n=192): Yes, 92.71%; No, 7.29%
Which method of filling out the survey was easier? (n=191): Computer much easier, 59.16%; Computer somewhat easier, 10.47%; No difference, 21.99%; Paper somewhat easier, 5.76%; Paper much easier, 3.14%

Most consumers responded that completing the BASIS-24 via handheld computer was easier than completing the measure via paper and pen; 59.16% (n=113) responded that the handheld computer was much easier, while 10.47% (n=20) responded that the handheld computer was somewhat easier. Of the remaining consumers, 3.14% (n=6) indicated the paper was much easier, 5.76% (n=10) indicated paper was somewhat easier, and 21.99% (n=42) felt that there was no difference.

Demographic variables in relation to consumer satisfaction and privacy

Differences in satisfaction and perceived degree of privacy by gender, age, educational level, reason for appointment (intake vs. treatment), and prior use of a handheld computer were examined. In cases where differences in overall satisfaction were found, an assessment of responses to individual satisfaction questions was conducted on an exploratory basis. Table 3 provides a summary of the significant demographic predictors of satisfaction and perceived degree of privacy. In terms of overall satisfaction scores, there were no differences in satisfaction by gender, education level, prior use of a handheld computer, or reason for appointment. Results indicated higher overall satisfaction in younger consumers (F (1, 160)=6.79; p=.01), who demonstrated a stronger liking for the computer (t (163.32)=2.22; p<.05), thought the directions were more clear (t (157.49)=2.33; p<.05), and were more likely to have previously used handheld computers (t (177.17)=2.75; p<.01) than older participants.

Table 3.

Summary of significant demographic predictors of satisfaction and privacy

Each row gives the satisfaction or privacy item, the significant predictor, group means (SD), and the t test result.

Liked using the computer (age): Younger, M=4.61 (SD=0.85); Older, M=4.27 (SD=1.20); t=2.22, df=163.32, p=.03
Thought it was easy (prior use): Prior use, M=4.66 (SD=0.89); No prior use, M=4.34 (SD=1.20); t=2.08, df=171.59, p=.04
Directions were clear (age): Younger, M=4.82 (SD=0.71); Older, M=4.51 (SD=1.11); t=2.33, df=157.49, p=.02
Directions were clear (prior use): Prior use, M=4.84 (SD=0.70); No prior use, M=4.53 (SD=1.07); t=2.37, df=181.37, p=.02
Was private enough, where a response of “Yes” was coded 1 and a response of “No” was coded 2 (education): Fewer than 12 years of education, M=1.15 (SD=0.36); 12 or more years of education, M=1.05 (SD=0.21); t=2.02, df=85.38, p=.05

There were no differences in overall satisfaction by level of education, but participants with fewer than 12 years of education were more likely to state that the survey was not sufficiently private (Wald χ2 (1)=5.70; p=.02). There were no differences in perceptions of privacy by gender, age, prior use of a handheld computer, or reason for appointment.

Reliability of computerized assessment

The computerized administration was highly correlated with the paper administration (ICC (3, 1)=.95). The internal consistency reliability (Cronbach’s alpha) for both the computerized and the paper administration was .89.
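For readers who wish to reproduce these statistics on their own data, the sketch below computes ICC(3,1) (two-way mixed effects, consistency, single measurement) from its standard mean-square formula. It is an illustrative implementation rather than the study's analysis code; `scores` is a hypothetical array of paired computerized and paper overall BASIS-24 scores, and Cronbach's alpha can be computed as in the earlier satisfaction-survey sketch.

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1) for an (n_subjects x k_raters) array of scores."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-administration means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

if __name__ == "__main__":
    # Synthetic paired scores for illustration: two highly concordant administrations.
    rng = np.random.default_rng(0)
    true_level = rng.normal(1.6, 0.7, size=(181, 1))
    scores = true_level + rng.normal(0, 0.15, size=(181, 2))
    print(f"ICC(3,1) = {icc_3_1(scores):.2f}")
```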

Discussion

Consumers at the CMHC were extremely satisfied using the handheld computer to complete the BASIS-24. Their responses indicate that the handheld computer is a feasible method of delivering this measure in a community setting. Most consumers found the handheld computer to be easy to use and sufficiently private. Regarding the specific programming and directions used, consumers found the text large enough to read and the directions clear. Generally, consumers indicated they would be willing to complete other measures and forms using this methodology in the future, which demonstrates good consumer acceptance of this technology in the community setting.

There were some differences in satisfaction and privacy ratings based on consumer demographic variables, namely age and education level. Older consumers and those with less education might need more orientation to the device to increase comfort and satisfaction. Although we incorporated an optional practice module in order to assist consumers who might require extra help learning to use the handheld computer, very few consumers elected to complete this module. Other ways of orienting older and less-educated consumers in the use of this device should be considered to supplement the practice module.

The BASIS-24 administered via handheld computer showed a good level of equivalence with the BASIS-24 administered via traditional paper and pen at a CMHC. Furthermore, the originally published, paper-administered BASIS-24 had a Cronbach’s alpha of .87,24 which is comparable to that of both the computerized (α=.89) and the paper-and-pen (α=.89) administrations in the present study. This underscores the feasibility of using a handheld computerized version of this measure for research and clinical purposes in community settings.

There are a number of limitations that should be noted. Firstly, though these results may generalize to the use of computerized assessment with a very brief instrument in an outpatient CMHC setting, the results do not indicate whether other populations in other settings would have problems with computerized outcome assessment. It is possible that consumers would be less satisfied with computerized outcome assessments incorporating longer measures. We asked several satisfaction questions that seemed most relevant to computerized outcome assessment in the outpatient CMHC setting. Nonetheless, it is possible that these questions did not cover all important information from the consumers’ perspective regarding computerized outcome assessment. The conclusions drawn from the study are also limited due to the large number of analyses conducted to assess satisfaction and perception of privacy in relation to demographic characteristics. This increases the likelihood that false conclusions were drawn due to Type I error.

The generalizability of the results is also limited by the data collection method. Consumers were not sampled randomly, and data collection was limited to a 4-month period. Additionally, consumers were asked to complete the same measure twice, in rapid succession (once via traditional paper and pen and once using a handheld computer). Therefore, it is possible that the high reliability between the two assessment methods is due to memory effects. Finally, data on consumer diagnosis and type of insurance were not collected, limiting the ability to analyze the way in which these variables might affect satisfaction.

Implications for Behavioral Health

Despite the aforementioned limitations, the results indicate that computerized outcome assessment is a reliable and satisfactory assessment method in CMHC settings. Given that this method of data collection does not require subsequent paper-to-computer data entry or data cleaning, the data can be used for reports and other needs instantaneously. Thus, using a handheld computer is an efficient way to collect outcome data for generating reports that would be immediately available for therapists, thereby maximizing clinical impact. It allows for a more streamlined route of information sharing between patient and therapist and can be altered easily to meet the changing needs of the community clinic without significant environmental or training impact. In a healthcare system that is increasingly reliant on outcome measurement as a means of allocating funds, the ability to efficiently measure clients’ symptoms and communicate this information to clinicians can only assist in maximizing the quality of treatment and ensuring that CMHCs receive the funding they deserve. Overall, the results of this study show promise for the role of technology in providing better treatment to community mental health consumers through improved research and clinical capabilities.

Contributor Information

Lizabeth A. Goldstein, University of Pennsylvania, 3535 Market Street, Philadelphia, PA 19104, USA.

Sarah M. Thompson, University of Pennsylvania, 3535 Market Street, Philadelphia, PA 19104, USA.

Kelli Scott, University of Pennsylvania, 3535 Market Street, Philadelphia, PA 19104, USA.

Laura Heintz, University of Pennsylvania, 3535 Market Street, Philadelphia, PA 19104, USA.

Patricia Green, Northwestern Human Services, 27 East Mt. Airy Ave, Philadelphia, PA 19119, USA.

Donald Thompson, Northwestern Human Services, 906 Bethlehem Pike, Erdenheim, PA 19038, USA.

Paul Crits-Christoph, University of Pennsylvania, 3535 Market Street, Philadelphia, PA 19104, USA.

References

1. Beutler LE, Malik M, Talebi H, et al. Use of psychological tests/instruments for treatment planning. In: Maruish ME, editor. The Use of Psychological Testing for Treatment Planning and Outcomes Assessment. Mahwah, NJ: Lawrence Erlbaum Associates; 2004. pp. 111–146.
2. Derogatis LR, Culpepper WJ. Screening for psychiatric disorders. In: Maruish ME, editor. The Use of Psychological Testing for Treatment Planning and Outcomes Assessment. Mahwah, NJ: Lawrence Erlbaum Associates; 2004. pp. 65–110.
3. Lambert MJ, Hawkins EJ. Use of psychological tests for assessing treatment outcomes. In: Maruish ME, editor. The Use of Psychological Testing for Treatment Planning and Outcomes Assessment. Mahwah, NJ: Lawrence Erlbaum Associates; 2004. pp. 171–195.
4. Palen L-A, Graham JW, Smith EA, et al. Rates of missing responses in personal digital assistant (PDA) versus paper assessments. Evaluation Review. 2008;32:257–272. doi: 10.1177/0193841X07307829.
5. Ahluwalia MK. Multicultural issues in computer-based assessment. In: Suzuki LA, Ponterotto JG, editors. Handbook of Multicultural Assessment: Clinical, Psychological, and Educational Applications. 3rd ed. San Francisco, CA: Jossey-Bass; 2008. pp. 92–106.
6. Wolford G, Rosenberg SD, Rosenberg HJ, et al. A clinical trial comparing interviewer and computer-assisted assessment among clients with severe mental illness. Psychiatric Services. 2008;59:769–775. doi: 10.1176/ps.2008.59.7.769.
7. Eisen SV, Toche-Manley LL, Grissom GR. Computer-administered versus paper-and-pencil mental health surveys. Psychiatric Services. 2004;55:1316–1317. doi: 10.1176/appi.ps.55.11.1316-a.
8. Kobak KA, Greist JH, Jefferson JW, et al. Computer-administered clinical rating scales: A review. Psychopharmacology. 1996;127:291–301. doi: 10.1007/s002130050089.
9. Chinman M, Young AS, Schell T, et al. Computer-assisted self-assessment in persons with severe mental illness. Journal of Clinical Psychiatry. 2004;65:1343–1351. doi: 10.4088/jcp.v65n1008.
10. Schmitz N, Hartkamp N, Brinschwitz C, et al. Computerized administration of the Symptom Checklist (SCL-90-R) and the Inventory of Interpersonal Problems (IIP-C) in psychosomatic outpatients. Psychiatry Research. 1999;87:217–221. doi: 10.1016/s0165-1781(99)00060-8.
11. Wijndaele K, Matton L, Duvigneaud N, et al. Reliability, equivalence, and respondent preference of computerized versus paper-and-pencil mental health questionnaires. Computers in Human Behavior. 2007;23:1958–1970.
12. Chan-Pensley E. Alcohol-Use Disorders Identification Test: A comparison between paper and pencil and computerized versions. Alcohol & Alcoholism. 1999;34:882–885. doi: 10.1093/alcalc/34.6.882.
13. Cook IA, Balasubramani GK, Eng H, et al. Electronic source materials in clinical research: Acceptability and validity of symptom self-rating in major depressive disorder. Journal of Psychiatric Research. 2007;41:737–743. doi: 10.1016/j.jpsychires.2006.07.015.
14. Gwaltney CJ, Shields AL, Shiffman S. Equivalence of electronic and paper-and-pencil administration of patient-reported outcome measures: A meta-analytic review. Value in Health. 2008;11:322–333. doi: 10.1111/j.1524-4733.2007.00231.x.
15. Weber B, Schneider B, Fritze J, et al. Acceptance of computerized compared to paper-and-pencil assessment in psychiatric inpatients. Computers in Human Behavior. 2003;19:81–93.
16. Summerville A, Roese NJ. Dare to compare: Fact-based versus simulation-based comparison in daily life. Journal of Experimental Social Psychology. 2008;44:664–671. doi: 10.1016/j.jesp.2007.04.002.
17. Epstein DH, Willner-Reid J, Vahabzadeh M, et al. Real-time electronic diary reports of cue exposure and mood in the hours before cocaine and heroin craving and use. Archives of General Psychiatry. 2009;66:88–94. doi: 10.1001/archgenpsychiatry.2008.509.
18. Muehlenkamp JJ, Engel SG, Wadeson A, et al. Emotional state preceding and following acts of non-suicidal self-injury in bulimia nervosa patients. Behaviour Research and Therapy. 2009;47:83–87. doi: 10.1016/j.brat.2008.10.011.
19. Bernhardt JM, Usdan S, Mays D, et al. Alcohol assessment using wireless handheld computers: A pilot study. Addictive Behaviors. 2007;32:3065–3070. doi: 10.1016/j.addbeh.2007.04.012.
20. Younger J, Mackey S. Fibromyalgia symptoms are reduced by low-dose naltrexone. Pain Medicine. 2009;10:663–672. doi: 10.1111/j.1526-4637.2009.00613.x.
21. Verduyn P, Delvaux E, Van Coillie H, et al. Predicting the duration of emotional experience: Two experience sampling studies. Emotion. 2009;9:83–91. doi: 10.1037/a0014610.
22. Granholm E, Loh C, Swendsen J. Feasibility and validity of computerized ecological momentary assessment in schizophrenia. Schizophrenia Bulletin. 2008;34:507–514. doi: 10.1093/schbul/sbm113.
23. Shannon LM, Walker R, Blevins M. Developing a new system to measure outcomes in a service coordination program for youth with severe emotional disturbance. Evaluation and Program Planning. 2009;32:109–118. doi: 10.1016/j.evalprogplan.2008.09.006.
24. Eisen SV, Normand SL, Belanger AJ, et al. The Revised Behavior and Symptom Identification Scale (BASIS-R): Reliability and validity. Medical Care. 2004;42:1230–1241. doi: 10.1097/00005650-200412000-00010.
25. Eisen SV, Gerena M, Ranganathan G, et al. Reliability and validity of the BASIS-24 mental health survey for Whites, African-Americans, and Latinos. Journal of Behavioral Health Services & Research. 2006;33:304–323. doi: 10.1007/s11414-006-9025-3.
26. Palmblad M, Tiplady B. Electronic diaries and questionnaires: Designing user interfaces that are easy for all patients to use. Quality of Life Research. 2004;13:1199–1207. doi: 10.1023/B:QURE.0000037501.92374.e1.
