Journal of the American Medical Informatics Association (JAMIA)
2012 Jan 11;19(4):545–548. doi:10.1136/amiajnl-2011-000580

Evaluation of computer-based medical histories taken by patients at home

Warner V Slack 1, Hollis B Kowaloff 1, Roger B Davis 1,2, Tom Delbanco 2, Steven E Locke 1, Charles Safran 1, Howard L Bleich 1
PMCID: PMC3384115  PMID: 22237866

Abstract

The authors developed a computer-based general medical history for patients to take in their homes over the internet before their first visit with a primary care doctor, and asked six doctors and their participating patients to assess the history and its effect on the subsequent visit. Forty patients began the history; 32 completed it along with the post-history assessment questionnaire, and their assessments were for the most part positive; 23 went on to complete the post-visit assessment questionnaire and were, again for the most part, positive about the helpfulness of the history and its summary at the time of their visit with the doctor. The doctors, in turn, strongly favored the immediate, routine use of two modules of the history—the family and social histories—for all their new patients. They further suggested that the summaries of the other modules be revised and shortened to make it easier for them to focus on clinical issues in the order of their preference.

Keywords: Clinical informatics, computer, delivering health information and knowledge to the public, demonstrating return on IT investment, improving healthcare workflow and process efficiency, internet, medical history, personal health records and self-care systems, questionnaire, systems supporting patient–provider interaction, supporting practice at a distance (telehealth)


We developed a computer-based medical history to be taken by patients in their homes over the internet, before their first visit with their primary care doctor. We then studied, from the perspective of both the patients and their doctors, their assessments of this history and its effect on the patients' forthcoming visit.

Background

Since first reported in 1966,1 the use of a digital computer to interview patients about their medical histories has been studied in a variety of clinical settings.2–7 As pointed out by John Bachman in a comprehensive review of the literature,6 results thus far have demonstrated the potential for the computer to explore medical and psychological problems in a manner well received by both doctor and patient. Concerns about the computer as a negative influence have proved for the most part unfounded; indeed, many patients have reported being more comfortable with the computer than with the doctor when asked about sensitive, potentially embarrassing subjects.8–13

In spite of favorable reports, however, computer-based medical histories have yet to be widely deployed, as Bachman has also pointed out,6 and most clinics and doctors' offices lack the space and facilities needed for these interviews. The internet, which already enhances communication between patients and their doctors7 14–17 and between patients and the clinical and administrative staff of their healthcare facilities,7 14–17 19–22 now makes it possible for patients to take computer-based medical histories in their homes.18–21 To capitalize on this technology, we developed and studied a detailed, interactive, computer-based medical history designed for use before the patient's first appointment with a primary care doctor. We report here on our early, pilot experience with primary care doctors and patients who were new to the doctors' practices.

Prior evaluation

After completing the development of our computer-based history, we submitted it to colleagues and revised it in response to their criticisms and suggestions. Next, we had 10 volunteer patients read aloud each of the primary questions—those that would be asked of all patients—and offer their comments and criticisms. Then, with patients who volunteered to take the history twice over the internet in their homes, we studied the test–retest reliability of 215 of the primary questions and found 210 to be sufficiently reliable to remain in the interview.21 We then revised the remaining five questions, reworked one of them into two questions,21 and proceeded with the pilot trial described here.

Methods

The computer-based medical history

The computer-based medical history, developed on the basis of our experience over the years with medical histories,1 3–5 7 8 20–22 is presented to patients on their computer screens over a secure website, PatientSite, a portal to the patient's electronic medical record and a component of Beth Israel Deaconess Medical Center's computing system.23 24 Patients respond either by clicking on their answer from a list of choices or by typing on their keyboard for text and numerical entries. (In our preliminary study,21 the human factors of presentation and response were found to work well, as judged both by the patients' assessment of ease of use and by the reliability of the questions upon re-asking.) The patients' responses are stored electronically and are used to determine which of the numerous alternative summary phrases associated with the history questions will be compiled into the summaries of the patients' medical histories.
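
As a rough illustration of this mechanism (the question names and phrases below are hypothetical, not taken from the actual program), a stored response can later be matched against the alternative summary phrases associated with its question:

```python
# Minimal sketch, assuming a simple response-to-phrase mapping; illustrative only.
ALTERNATIVE_PHRASES = {
    "shortness_of_breath": {
        "yes": "has been short of breath",
        "no": "has not been short of breath",
        "uncertain (don't know, maybe)": "is uncertain about shortness of breath",
    },
}

def store_response(record, question_id, answer):
    """Save the patient's answer for later use in summary generation."""
    record[question_id] = answer
    return record

def phrase_for(record, question_id):
    """Pick the summary phrase that matches the stored response, if any."""
    answer = record.get(question_id)
    return ALTERNATIVE_PHRASES.get(question_id, {}).get(answer)

record = store_response({}, "shortness_of_breath", "yes")
print(phrase_for(record, "shortness_of_breath"))  # -> "has been short of breath"
```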

The history is divided into 24 modules—family history, social history, cardiac history, pulmonary history, and the like. So far as possible, it is designed to model the comprehensive, inclusive, general medical history traditionally taken, when time permits, by a primary care doctor seeing a patient for the first time. It contains 233 primary questions asked of all patients about the presence or absence of medical problems. Of these, 216 have the preformatted mutually exclusive responses ‘yes’, ‘no’, ‘uncertain (don't know, maybe)’, ‘don't understand’, and ‘I'd rather not answer’; 10 have other sets of multiple choices, one response permitted; five have multiple choices with more than one response permitted; and two have numerical responses. In addition, more than 6000 questions, explanations, suggestions, and recommendations are available for presentation, as determined by the patient's responses and the branching logic of the program. These questions are available to explore in detail medical problems elicited by one or more of the primary questions. If, for example, a patient responds with ‘yes’ to the question about chest pain, the program branches to multiple qualifying questions about characteristics of the pain, such as onset, location, quality, severity, relationship to exertion, and course.
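
The branching from a primary question to its qualifying questions can be pictured as follows; this is only a sketch of the idea, with hypothetical identifiers rather than the program's actual content:

```python
# Illustrative sketch of the branching logic; names and structure are assumptions.
YES_NO_CHOICES = ["yes", "no", "uncertain (don't know, maybe)",
                  "don't understand", "I'd rather not answer"]

PRIMARY_QUESTIONS = {
    "chest_pain": {
        "text": "Have you had chest pain?",
        "choices": YES_NO_CHOICES,
        # Follow-up questions presented only when the answer is "yes".
        "branch_on_yes": ["chest_pain_onset", "chest_pain_location",
                          "chest_pain_quality", "chest_pain_severity",
                          "chest_pain_exertion", "chest_pain_course"],
    },
}

def next_questions(question_id, answer):
    """Return the follow-up questions triggered by a primary-question answer."""
    question = PRIMARY_QUESTIONS[question_id]
    return question["branch_on_yes"] if answer == "yes" else []

# Example: a "yes" to chest pain queues the six qualifying questions.
print(next_questions("chest_pain", "yes"))
```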

The interview begins with a teaching sequence on how to respond to the questions, followed by an introductory sequence to identify the participating patient. It then inquires about problems in need of early attention, with provision for free text entry, and offers advice on how best to proceed if, in the patient's opinion, the problem could be urgent. The interview then proceeds to the medical history.21

As the patient completes each module, the program displays a summary, presented in easy-to-read phrases based on the patient's responses, which the patient can review and, although not edit directly, qualify by typing messages of amplification, clarification, or criticism (see supplementary figure 1, available online only). The interview concludes with 10 questions, presented with a 10-point scale (1 for most negative, 10 for most positive), for the patient to use to assess the history, followed by a provision for the patient to enter comments and suggestions for improvements. The patient can interrupt the interview at any time and return to it later at the point of interruption. When the patient has finished the interview, the program displays a summary, with prose compiled on the basis of the patient's responses and presented in a legible, but otherwise traditional format (an outline of the summary is available as supplementary figure 2, available online only). Each positive finding in the summary is denoted by an asterisk and listed with its relevant details for review by patient and doctor on the doctor's computer screen at the time of the visit (see supplementary figure 3, available online only). The patient's responses are stored and retained in an electronic file, which is separate from the patient's electronic medical record, but from which a summary of the patient's history, in whole or in part, can be generated for the doctor to use and to incorporate into the patient's record at any time, at his or her discretion.
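
A toy sketch of how such a summary might be formatted, with positive findings asterisked and their relevant details nested beneath, is shown below; the layout and phrases are illustrative assumptions, not the program's actual output:

```python
# Illustrative summary formatting only; not the authors' implementation.
def format_module_summary(module_name, findings):
    """findings: list of (phrase, is_positive, detail_phrases) tuples."""
    lines = [module_name.upper()]
    for phrase, is_positive, details in findings:
        marker = "*" if is_positive else " "
        lines.append(f"{marker} {phrase}")
        if is_positive:
            lines.extend(f"    {detail}" for detail in details)
    return "\n".join(lines)

cardiac = [
    ("Reports chest pain on exertion.", True,
     ["Onset: two months ago", "Location: substernal", "Severity: moderate"]),
    ("Denies palpitations.", False, []),
]
print(format_module_summary("Cardiac history", cardiac))
```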

The trial

We sent email messages to seven primary care doctors affiliated with Beth Israel Deaconess Medical Center who accepted new patients. Six of these doctors agreed to participate and to have us contact patients scheduled to see them for the first time. We had hoped to have our initial contact with patients by email, but Beth Israel Deaconess Medical Center does not obtain email addresses at the time of a patient's initial registration. Accordingly, we sent letters to 620 potentially eligible patients. (We also contacted 55 patients via email addresses collected when they were seen previously at the medical center by a different doctor.) Our letter and email message mentioned the study and asked the patients to respond by email if they had access to the internet at home and wanted to learn more. We then sent email messages to patients who responded, with directions to PatientSite, a link to the online description of the study, a link to the request for informed consent, and for patients who signed the informed consent, a link to the computer-based medical history.

At the time of the office visit, the summary of the computer-based history of those patients who had completed the interview was available on the doctor's computer screen for the doctor and patient to use together on a voluntary basis. At the option of the doctor, the summary could then be edited and incorporated into the patient's online medical record.25

Within a day after the visit, the program sent an email message (followed by a daily reminder when indicated) to the patient and to the doctor, with links to six questions for the doctor and six for the patient, presented with a 10-point scale (1 for most negative, 10 for most positive). These questions asked about the effect of the medical history and its summary on the quality of the visit from the patient's and the doctor's perspectives, with provision for them to record comments and suggestions for improvement. At the conclusion of the study, the doctors were asked, in a final online questionnaire, whether they would like to have their patients continue with the program and whether they would recommend it for other doctors and their patients.
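
The follow-up rule can be expressed roughly as below; any timing details beyond "within a day" and "daily reminder" are assumptions for illustration:

```python
# Toy sketch of the post-visit reminder rule; dates here are hypothetical.
from datetime import date, timedelta

def reminder_due(visit_date, completed, today):
    """Send the first message the day after the visit, then remind daily until completion."""
    return (not completed) and today >= visit_date + timedelta(days=1)

print(reminder_due(date(2011, 5, 2), completed=False, today=date(2011, 5, 3)))  # True
```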

Results

Of the 675 patients initially contacted, 76 signed on to the description of the study, 45 signed the request for informed consent, 40 began the medical history, and 32 took it to completion; eight stopped before completion and, for reasons unknown, did not return. The 32 patients who completed the history (17 women and 15 men between the ages of 21 and 72 years) were presented with a mean of 550 screens and took between 45 and 94 min to complete the interview (based on an estimated 7 s per screen).26
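
For context, at the assumed 7 s per screen the mean of 550 screens corresponds to roughly 64 min, and the reported 45–94 min range corresponds to roughly 390–810 screens; this is our back-of-the-envelope arithmetic, not a figure reported in the paper:

```python
# Rough check of the time estimate; 7 s per screen is the paper's assumption.
SECONDS_PER_SCREEN = 7
mean_minutes = 550 * SECONDS_PER_SCREEN / 60        # ~64 min for the mean of 550 screens
screens_at_45_min = 45 * 60 / SECONDS_PER_SCREEN    # ~386 screens at the reported minimum
screens_at_94_min = 94 * 60 / SECONDS_PER_SCREEN    # ~806 screens at the reported maximum
print(round(mean_minutes), round(screens_at_45_min), round(screens_at_94_min))
```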

Six of the 32 patients who completed the history were excluded from the remainder of the study: one patient's summary could not be viewed by the doctor due to a technical error (later corrected); one patient's visit was canceled; one patient's visit was rescheduled to a non-participating doctor; one patient inadvertently took the history after the visit with the doctor; and for two patients—reasons unknown—neither they nor their doctors completed the post-visit questionnaires. Of the remaining 26 patients (13 women and 13 men between the ages of 21 and 72 years), the results for six were incomplete: three patients completed their post-visit assessments, but their doctor did not, and in the case of the remaining three patients, the doctors completed the post-visit assessments, but the patients did not. Twenty patients (nine women and 11 men between the ages of 21 and 72 years) together with their participating doctors completed all assessment questionnaires.

Patients' post-history questionnaire

The 32 patients who completed the history were generally positive in their assessment, as indicated by their responses to the post-history assessment questionnaire. When asked ‘How helpful were the questions when thinking about your health?’ they responded with a mean score of 8.4 (on a scale from 1 ‘not at all helpful’ to 10 ‘very helpful’); and for all 10 questions, the combined mean score was 8.3 (see supplementary figure 4, available online only). Of the 17 patients who typed in suggestions for improvement, seven focused on the length of the program and ways to make it more efficient, such as by including more than one question per screen and enhancing the ‘back-up’ option to enable more direct access to a previously answered question in need of reconsideration; others asked for more flexibility in the response options, such as more use of ‘sometimes’, and for less detailed and seemingly redundant questioning about what they considered low priority problems. Among additional comments were, ‘Great questionnaire [but] took a lot of time’, ‘This seems like a very good idea. It should save a great deal of time at the office visit’, ‘I like the summaries generated from yes/no answers. Very cool!’ and ‘Thank you’.

Patients' post-visit questionnaire

The 23 patients who completed their post-visit questionnaire were, for the most part, positive in their assessment of the helpfulness of the computer-based history and its summary at the time of their visit with the doctor. When asked ‘How helpful was it for you to have taken the computer interview before seeing your doctor?’ they responded with a mean score of 8.3 (on a scale from 1 ‘not at all helpful’ to 10 ‘very helpful’); and for all six questions, the combined mean score was 7.8 (see supplementary figure 5, available online only). (The three of these patients whose doctors did not complete the post-visit questionnaire, but whose own responses are included here with those of the 20 patients whose doctors did, had a combined mean score of 5.9 for all six questions.) Once again, the patients' suggestions focused on the length of the interview, difficulties with the back-up and editing options, and a disproportionate attention to problems of lesser importance from their perspective—‘Questions sometimes led to a small issue looking like a large issue’. On a positive note, ‘It was nice [the doctor] saw a diagnosis on my [previous record] that wasn't accurate so was able to fix it.’

Doctors' post-visit questionnaire

The doctors were also for the most part positive in their assessment of the usefulness of the history and its summary at the time of their visits with the 23 patients for whom they completed their post-visit questionnaires. When asked ‘How helpful was it for your patient to have taken the computer interview before seeing you?’ they responded with a mean score of 7.7 on the 10-point scale. When asked ‘To what extent do you think the computer summary helped you to provide better care to your patient?’ they responded with a mean score of 7.5. For all six questions, the combined mean score was 7.6 (see supplementary figure 6, available online only). (For the three patients who did not complete their post-visit questionnaire, but whose doctors did complete theirs, the combined mean score for all six questions was 7.2.)

The doctors' comments, however, did indicate that the summaries were too long, and that they needed to differentiate better between medical problems of greater and lesser importance. One doctor commented: ‘It was useful to have the info prior to the visit. However, [it would be more helpful] if the questions can [be] organized so there is not so much redundancy.’

Doctors' post-study questionnaire

When asked at the conclusion of the trial, ‘Would you like to have your patients continue to use this medical history program?’ and ‘Would you recommend this medical history program to other doctors and their patients?’ (on 10-point scales from 1 for ‘never’ to 10 for ‘always’), one doctor, based on experience with one patient, responded with 2 for continuing with the program; three doctors, based on their experience with four, five, and five patients, responded with means of 6 both for continuing with the program and for recommending it to others; one doctor, based on experience with seven patients, responded with 10 both for continuing with the program and for recommending it to others; and one doctor, who had responded with a mean score of 8.3 based on experience with one patient, did not answer the concluding questions.

When asked, ‘Would you like to have individual modules of the medical history, such as family history and social history, available for your patients?’ and ‘Would you recommend individual modules, such as family history and social history to other doctors and their patients?’ the five doctors who responded provided a mean score of 9 for their patients' use of these modules, and a mean score of 8.4 for recommending use of the modules to others.

Discussion

The participating patients and doctors were forthcoming with both positive reinforcement and constructive suggestions for improvement. We are exploring ways to shorten both the history for patients (for example, by making more use of multiple questions per screen) and the summary for doctors, and to make both easier to use.

In response to the doctors' request that we shorten their patients' summaries, we plan to hyperlink the summaries so that doctors can focus on clinical issues in the order of their preference, and to let each doctor specify which links in each section will be open and which closed when the summary is first displayed. Currently, for example, the summary of each clinical module concludes with a list of negatives (see supplementary figure 3, available online only). Although helpful as a reminder of the scope of the problems addressed in the history, this listing consumes space and reading time and would be better as an optional feature, available by hyperlink upon request.
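
One way such per-doctor preferences might work is sketched below; the section names, preference format, and rendering are assumptions for illustration only, not the planned implementation:

```python
# Sketch of a collapsible, per-doctor summary; purely illustrative.
DOCTOR_PREFS = {
    "dr_a": {"open_on_first_display": ["Family history", "Social history"]},
}

def render_summary(doctor_id, sections):
    """sections: dict mapping section name to a list of summary lines."""
    open_sections = DOCTOR_PREFS.get(doctor_id, {}).get("open_on_first_display", [])
    output = []
    for name, lines in sections.items():
        if name in open_sections:
            output.append(f"[-] {name}")                      # expanded on first display
            output.extend(f"    {line}" for line in lines)
        else:
            output.append(f"[+] {name} (expand on request)")  # collapsed behind a link
    return "\n".join(output)

sections = {
    "Family history": ["* Father had a myocardial infarction at age 60."],
    "Review of negatives": ["Denies cough.", "Denies wheezing."],
}
print(render_summary("dr_a", sections))
```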

We do not know why potential participants decided to join the study or not, but we did anticipate a low rate of participation. We were compelled to use regular mail for most of the initial contacts, and some of the patients contacted may not have had ready access to the internet. It was we the investigators, rather than the doctors who would be establishing rapport with the patients, who sent the letters; and we required those who were interested to contact us by email and then to read a detailed description of the study, together with a lengthy request for informed consent, before they could proceed with their medical history. Once the online medical history has moved beyond research to ongoing clinical practice, patients will no longer be required to read and sign these time-consuming documents before being offered—at the discretion of their doctors—the option to take the medical history. Instead, doctors will be free to suggest that their patients take the history, or any of its components, whenever they think doing so will help with patient care.

The results of the pilot trial, together with our discussions with the participating doctors, indicate that the doctors want certain components of the history—particularly the family and social histories—available right away, and we are moving to comply with this request. After we modify the remainder of the history and its summary in accordance with the patients' and doctors' requests, we hope to make all modules—individually and in combination—available for routine use in patient care.

Acknowledgments

The authors are greatly indebted to the participating patients and doctors and want to thank them for their help with this study.

Footnotes

Funding: This study was supported in part by a grant from the National Library of Medicine (1 R01 LM008255-01A1) and by a grant from the Rx Foundation.

Competing interests: WVS has been on the Scientific Advisory Board of the Eliza Corporation, and both WVS and TD hold stock options in the company. SEL is a principal in Veritas Health Solutions LLC, an owner of Veritas Health Associates LLC, a principal in Cognitive Behavioral Technologies LLC, he owns stock in InfoMedics, Inc., and he consults for LifeOptions Group and Mensante Corp. CS is on the board of directors of Intelligent Medical Objects where he holds stock options, and he holds stock in the Allscripts Corporation. HBK, RBD, and HLB have no competing interests.

Ethics approval: This study was conducted with the approval of the Institutional Review Board of the Beth Israel Deaconess Medical Center, Boston, Massachusetts.

Contributors: All of the authors contributed to the design of the study, the performance of the study, and the writing of the paper.

Provenance and peer review: Not commissioned; externally peer reviewed.

Data sharing statement: Participants gave informed consent for data sharing.

References

1. Slack WV, Hicks GP, Reed CE, et al. A computer-based medical history system. N Engl J Med 1966;274:194–8
2. Mayne JG, Weksel W, Sholtz PN. Toward automating the medical history. Mayo Clin Proc 1968;43:1–25
3. Slack WV, Slack CW. Patient–computer dialogue. N Engl J Med 1972;286:1304–9
4. Slack WV. A history of computerized medical interviews. MD Comput 1984;1:52–9
5. Slack WV. Patient–computer dialogue: a review. In: van Bemmel JH, McCray AT, eds. Yearbook of Medical Informatics 2000: Patient-centered Systems. Stuttgart, Germany: Schattauer, 2000:71–8
6. Bachman JW. The patient–computer interview: a neglected tool that can aid the clinician. Mayo Clin Proc 2003;78:67–78
7. Slack WV. A 67-year-old man who e-mails his physician. JAMA 2004;292:2255–61
8. Slack WV, Van Cura LJ. Patient reaction to computer-based medical interviewing. Comput Biomed Res 1968;1:527–31
9. Greist JH, Gustafson DH, Stauss FF, et al. A computer interview for suicide-risk prediction. Am J Psychiatry 1973;130:1327–32
10. Lucas RW, Card WI, Knill-Jones RP, et al. Computer interrogation of patients. BMJ 1976;2:623–5
11. Locke SE, Kowaloff HB, Hoff RG, et al. Computer-based interview for screening blood donors for risk of HIV transmission. JAMA 1992;268:1301–5
12. Turner CF, Ku L, Rogers SM, et al. Adolescent sexual behavior, drug use, and violence: increased reporting with computer survey technology. Science 1998;280:867–73
13. Kurth AE, Martin DP, Golden MR, et al. A comparison between audio computer-assisted self-interviews and clinician interviews for obtaining the sexual history. Sex Transm Dis 2004;31:719–26
14. Gustafson DH, Brennan PF, Hawkins RP, eds. Investing in E-Health. New York: Springer Health Informatics Series, 2007
15. Brennan PF, Casper GR, Burke LJ, et al. Technology-enhanced practice for patients. Heart Lung 2010;39(6 Suppl):S34–46
16. Gustafson DH, Shaw BR, Isham A, et al. Explicating an evidence-based, theoretically informed, mobile technology-based system to improve outcomes for people in recovery for alcohol dependence. Subst Use Misuse 2011;46:96–111
17. Delbanco T, Walker J, Darer JD, et al. Open notes: doctors and patients signing on. Ann Intern Med 2010;153:121–5
18. Brigham J, Lessov-Schlaggar CN, Javitz HS, et al. Test–retest reliability of web-based retrospective self-report of tobacco exposure and risk. J Med Internet Res 2009;11:e35
19. Adamson S, Bachman J. Pilot study providing online care in a primary care setting. Mayo Clin Proc 2010;85:704–10
20. Slack WV. Patient–computer dialogue: a hope for the future. Mayo Clin Proc 2010;85:701–3
21. Slack WV, Kowaloff HB, Davis RB, et al. Test–retest reliability in a computer-based medical history. J Am Med Inform Assoc 2011;18:73–6
22. Slack WV. Cybermedicine as a patient's assistant. In: Slack WV, ed. Cybermedicine: How Computing Empowers Doctors and Patients for Better Health Care. Rev edn. San Francisco, CA: Jossey-Bass, 2001:38–43
23. Bleich HL, Beckley RF, Horowitz G, et al. Clinical computing in a teaching hospital. N Engl J Med 1985;312:756–64
24. Sands DZ, Halamka JD. PatientSite: patient-centered communication, services, and access to information. In: Nelson R, Ball MJ, eds. Consumer Informatics: Applications and Strategies in Cyber Health Care. New York: Springer-Verlag, 2004
25. Safran C, Rury C, Rind DM, et al. A computer-based ambulatory medical record for a teaching hospital. MD Comput 1991;8:291–9
26. Slack WV, Leviton A, Bennett SE, et al. Relation between age, education, and time to respond to questions in a computer-based medical interview. Comput Biomed Res 1988;21:78–84
