Abstract
The informed consent process for research has come under scrutiny, as consent documents have grown increasingly long and difficult to understand. Innovations are needed to improve comprehension and make the consent process truly informed. We report on the development and pilot testing of video clips that could be used during the consent process to better explain research procedures to potential participants. Based on input from researchers and community partners, 15 videos of common research procedures and concepts were produced. The utility of the videos was then tested by embedding them in mock informed consent documents presented via an online electronic consent system designed for delivery on an iPad. Three mock consents were developed, each containing five videos. All participants (n=61) read both a paper version and the video-assisted iPad version of the same mock consent and were randomized to which format they reviewed first. After reviewing the first consent document to which they were exposed, participants completed a competency quiz posing specific questions about the information in the consent. Most participants (78.7%) preferred the video-assisted format over paper (13.1%). Nearly all (96.7%) reported that the videos improved their understanding of the procedures described in the consent document; however, comprehension of the material did not differ significantly by consent format. These results suggest that videos may help provide participants with information about study procedures in a way that is easy to understand. Additional testing of video consents for complex protocols and with lower-literacy populations is warranted.
Keywords: informed consent process, videos, electronic forms, tablet computing
Introduction
Informed consent is the cornerstone of ethical human subjects research. However, the increasing emphasis on regulatory procedures, combined with more complex and highly technical research procedures, has resulted in lengthier informed consent documents that are often difficult to understand. Given these concerns and the ubiquity of alternative communication modalities, it is logical to consider innovative methods of communicating information in the informed consent process. For some research protocols, short videos may better communicate difficult procedures and concepts, be less intimidating, and help potential participants focus on important aspects of the research.
Informed consent documents are more often oriented to regulatory requirements than to participant comprehension. The traditional informed consent process provides potential research participants with written material that requires reading and does not accommodate other styles of learning, such as visual, auditory, or experiential learning. As consent forms increase in length, the likelihood that they will be read and adequately comprehended decreases [1]. Even when documents are prepared at appropriate reading levels, it is often difficult for potential participants to comprehend and retain the most important details [1, 2]. Furthermore, those who are older and more infirm demonstrate lower levels of understanding of research procedures [3]. These issues could jeopardize the goal of obtaining consent that is truly informed.
There have been a number of studies of multimedia interventions to improve potential participants' understanding of a clinical trial, but results have been mixed [4-8]. Jimison and colleagues [9] developed a multimedia consent tool after receiving input from key stakeholders, including prior research participants, researchers, and institutional review board members. The tool used a structured modular approach that contained standard consent language and allowed investigators to add research-specific information. Key components included general information about clinical trials, a printable listing of available resources, interviews with previous study participants, a self-test, and trial-specific information. This prototype was favorably reviewed by research participant stakeholders; however, researchers and IRB members had concerns about methods for reviewing the system for potential biases in presentation. The utility of tools that require a great deal of setup by investigators remains unclear. Several other studies have found that oral and videotaped presentations of consent content may help patients comprehend consent information [10-12]; however, video remains an under-utilized tool in the informed consent process. This may be due, in part, to the types of videos being used. Many studies have merely repackaged the information found in a consent document into a verbal presentation (either audio or a video of a person talking) [13]. Videos of the actual procedures may add to a participant's understanding of what he or she will be asked to do during a clinical trial.
The purpose of this study was to develop short video clips of common research procedures and concepts and to test their acceptability and effect on comprehension within a mock informed consent process.
Methods
This study was conducted in several phases. The Medical University of South Carolina (MUSC) Institutional Review Board (IRB) approved all aspects of the study including focus groups conducted with community members and medical researchers, review of video content by a community advisory panel, and the video consent pilot.
Development of Video Consents
Video Production
In preparation for video production, the research team developed a list of possible research procedures and concepts. From this list, MUSC researchers and community members were asked to rank and prioritize which procedures and concepts might best be communicated in video format. Video scripts for the selected procedures and concepts were developed by an outside production company, under contract with MUSC using grant funding, and were reviewed and approved by the research team. Final draft versions of the videos were reviewed in six focus groups of community members and then edited by the production company based on feedback. Fifteen high-quality videos were produced that described either research procedures or research concepts using visual images with voice-over. Additional on-screen instructional text highlighted and summarized important information about the procedure or concept being viewed. No subtitles were used, so that the voice-over could be re-recorded in additional languages as necessary. The 15 videos were produced over a 6-month period (from script development to final version), at a cost of approximately $3,000 per video.
The videos fell into two general categories: specific procedures and research concepts. Procedural videos included: magnetic resonance imaging (MRI), BodPod, intravenous (IV) infusion, echocardiogram, transcranial direct current stimulation (tDCS), transcranial magnetic stimulation (TMS), biopsy, CT scan, ultrasound, and DEXA scan. Conceptual videos included: genetic research, gene therapy, data de-identification and coding, randomization, and biorepository. Each clip ran approximately 60 to 120 seconds, depending on content, with an average length of 102.7 seconds. It is important to note that the videos focused on what could be expected during a given procedure and did not address the risks of the procedures.
Electronic Consent Platform
The South Carolina Clinical & Translational Research Institute’s bioinformatics team, part of MUSC’s Clinical & Translational Science Award, developed a platform for the electronic informed consent based on a Research Permissions Management System (RPMS). The RPMS was developed and piloted at MUSC in collaboration with Health Sciences South Carolina with funding from the National Library of Medicine. Specifically, the videos were programmed using HTML5 and incorporated into electronic consent forms as hyperlinks. Metrics on video usage were collected to assess usability. Research electronic data capture (REDCap) [14], a web-based database system, was used for participant registration. The Video-Assisted Consent (VAC) system pulled registration information using the REDCap application program interface to display the appropriate consent material.
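To make the platform integration concrete, below is a minimal sketch, in Python, of how an application can pull a participant's registration record through the REDCap application program interface. The endpoint URL, token, and `consent_version` field are hypothetical placeholders; the actual VAC integration details were not published.

```python
# Minimal sketch: fetch a registration record from REDCap over its HTTP API.
# The URL, token, and "consent_version" field below are hypothetical.
import requests

REDCAP_API_URL = "https://redcap.example.edu/api/"   # hypothetical endpoint
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"             # project-specific token

def fetch_registration(record_id: str) -> dict:
    """Return one participant's registration record as a dict."""
    payload = {
        "token": API_TOKEN,
        "content": "record",      # export stored records
        "format": "json",         # JSON rather than CSV/XML
        "type": "flat",           # one row per record
        "records[0]": record_id,  # limit the export to this participant
    }
    response = requests.post(REDCAP_API_URL, data=payload, timeout=10)
    response.raise_for_status()
    records = response.json()
    if not records:
        raise LookupError(f"no registration record found for {record_id}")
    return records[0]

# Usage: route the participant to the appropriate consent material, e.g.
# version = fetch_registration("42")["consent_version"]
```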
The integrated VAC platform reproduced the written consent on the iPad, with videos incorporated at the relevant points. Figure 1 shows an example of an iPad screen that participants saw when reviewing the video-assisted consent. Participants tapped the yellow bar to view the video and could navigate between pages/screens using the forward and back buttons.
Figure 1.
Example of video assisted consent format
Development of Mock Consents
Once the videos were completed, the research team devised three mock consent documents containing elements described in the videos. Each mock consent contained five videos, drawing on fourteen of the fifteen videos produced (the gene therapy video was not used, and the randomization video appeared in two of the mock consents). The resulting mock consent materials were written at a 10th to 11th grade reading level (see Table 1; a sketch of how these readability metrics are computed follows the table).
Table 1.
Characteristics of the three consents
| | Consent A | Consent B | Consent C |
|---|---|---|---|
| Total Words | 1946 | 2066 | 3204 |
| Flesch Reading Ease | 51.3 | 51.6 | 54.5 |
| Flesch-Kincaid Grade Level | 10.8 | 10.5 | 10.0 |
| Videos Included | DEXA scan, BodPod, IV infusion, randomization, biorepository | Biopsy, CT scan, ultrasound, echocardiogram, de-identification | MRI, TMS, tDCS, randomization, genetic research |
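For readers unfamiliar with the readability statistics in Table 1, both are simple functions of average sentence length and average syllables per word. The sketch below computes them from raw text; its vowel-group syllable counter is a common approximation, so its output will differ slightly from word-processor statistics.

```python
# Sketch of the Flesch Reading Ease and Flesch-Kincaid Grade Level formulas.
# Syllables are approximated as runs of vowels, so scores are approximate.
import re

def count_syllables(word: str) -> int:
    """Approximate syllable count as the number of vowel groups (min 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / sentences
    syllables_per_word = syllables / len(words)
    ease = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    grade = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return ease, grade

ease, grade = readability("You may stop the study at any time. "
                          "Your care will not change if you leave.")
print(f"reading ease {ease:.1f}, grade level {grade:.1f}")
```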
Research Design
Participants were recruited through word of mouth and flyers posted in public areas across the MUSC campus. Sixty-one participants were enrolled after providing written informed consent. All participants were 18 years of age or older and were required to pass a 5-item competency quiz about study participation to ensure that they could read and comprehend the information they would subsequently be asked to evaluate. Participants were allowed to ask questions about the study prior to taking the competency quiz; however, they were not allowed to ask questions about the mock consents. Participants were paid $25.00 for their time.
Participants were assigned serially to review one of the three mock consents (Consent A, Consent B, or Consent C). Within each consent stratum (A, B, or C), participants were then randomly assigned to receive either: 1) paper format first, online electronic iPad format second; or 2) online electronic iPad format first, paper format second. This crossover design was chosen to allow participants the opportunity to directly compare one format against the other.
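The report does not specify the exact mechanics of the serial assignment and randomization, so the following Python sketch is only an illustrative assumption: round-robin allocation to the three consents, with a random draw for format order within each stratum (a blocked scheme would additionally balance group sizes).

```python
# Illustrative sketch (assumed mechanics) of the assignment scheme:
# serial (round-robin) allocation to mock consents A/B/C, then a random
# draw within each stratum for which format is reviewed first.
import random

CONSENTS = ["A", "B", "C"]
ORDERS = ["paper first", "video-assisted first"]

def assign(enrollment_index: int, rng: random.Random) -> tuple[str, str]:
    """Return (consent version, format order) for the nth enrollee."""
    consent = CONSENTS[enrollment_index % len(CONSENTS)]  # serial assignment
    order = rng.choice(ORDERS)                            # randomized order
    return consent, order

rng = random.Random(2013)  # fixed seed so the illustration is reproducible
schedule = [assign(i, rng) for i in range(61)]
print(schedule[:6])
```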
Prior to reviewing the consent formats, participants completed the Rapid Estimate of Adult Literacy in Medicine, revised (REALM-R), an 8-item health literacy assessment based on word recognition [15]. Participants were then asked to complete a self-assessment of prior understanding/knowledge of different medical procedures described in the videos. After reviewing the first consent (either the paper version or the electronic version depending on randomization), participants were given a 20-item competency quiz that asked specific questions about the information contained in the consent document. Participants were not quizzed after reading the second consent. Content items covered in the competency quizzes included voluntariness, randomization, FDA status of study intervention, confidentiality, and questions about specific study procedures and risks described in the consent they reviewed. After both consent formats were reviewed, participants were asked to complete an assessment of the two formats and videos they viewed.
Measures
Rapid Estimate of Adult Literacy in Medicine, revised (REALM-R)—This assessment is an 8-item validated version of the original assessment in which higher scores signify better health literacy. It is an effective screening tool for identifying individuals who may have problems with health literacy [15]. Those with a score of 6 or less are considered at risk for poor health literacy.
Participant Self-Report Knowledge—This assessment was developed by the research team. Participants were asked to rate their current knowledge/understanding on a Likert scale (from no previous knowledge to significant previous knowledge) of different research procedures that would be shown in any of the videos. Participants were also asked if they had participated in previous studies.
Competency/Comprehension Quiz—Each consent had twenty True/False questions that covered similar areas, but were customized to reflect the content of each consent (A, B, or C). These quizzes were developed by the research team and covered the required elements of consent as defined by 45 CFR 46.116.
Consent Format Assessment—This assessment was developed by the research team and queried participants regarding which consent format they preferred and the helpfulness of the videos in the electronic consent they reviewed. Specifically, participants were asked to rate the helpfulness of each video reviewed on a Likert scale as being one of the following: 1) the video made the information more confusing, 2) not helpful/no added benefit, 3) slightly helpful/some added benefit, or 4) significantly helpful/significant added benefit.
Analysis Plan
Categorical variables were tested using chi-square analyses. Continuous measures were tested using Student's t-test or ANOVA, as appropriate. Correlations were performed as bivariate correlations with Pearson coefficients. All analyses were performed using SPSS statistical software (version 20). All tests were two-tailed, and p-values less than 0.05 were considered statistically significant.
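As an illustration of this analysis plan, the sketch below reproduces the named tests with scipy in place of SPSS. The contingency table uses the preference counts reported in Table 2; the remaining arrays are simulated stand-ins, not study data.

```python
# Sketch of the analysis plan using scipy instead of SPSS. Only the
# contingency counts come from the paper (Table 2); other arrays are
# simulated placeholders for illustration.
import numpy as np
from scipy import stats

# Chi-square: preference for the video format by randomization group.
contingency = np.array([[24, 7],    # paper first: preferred video / did not
                        [24, 6]])   # video first: preferred video / did not
chi2, p_chi, dof, _ = stats.chi2_contingency(contingency)

# Independent-samples t-test: quiz scores by which format was seen first
# (simulated draws matching the reported group means and SDs).
paper_first = np.random.default_rng(0).normal(17.9, 2.0, 31)
video_first = np.random.default_rng(1).normal(17.2, 2.5, 30)
t, p_t = stats.ttest_ind(paper_first, video_first)

# Pearson correlation between two illustrative ratings.
helpfulness = np.array([4, 3, 4, 2, 4, 3, 1, 4])
willingness = np.array([1, 1, 1, 0, 1, 1, 0, 1])
r, p_r = stats.pearsonr(helpfulness, willingness)

print(f"chi2={chi2:.2f} (p={p_chi:.3f}), t={t:.2f} (p={p_t:.3f}), r={r:.2f}")
```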
Results
Sixty-one participants were enrolled after providing written informed consent. Demographic data and other selected characteristics are presented in Table 2, grouped by the randomization scheme (which consent format was reviewed first). Participants were primarily women (60.7%; n=37), with an average age of 43.0 (SD=14.2); 52.5% were African American, 42.6% Caucasian, 3.3% Asian, and 1.6% other race; 4.9% identified themselves as Hispanic; 41% of participants reported participation in prior research studies. The average educational level was 14.8 years and participants had an average score of 7.03 on the REALM-R. None of the participants were students in the medical field. The majority (86.9%) were employed. There were no statistically significant differences on these variables according to randomization group.
Table 2.
Demographics and Selected Characteristics
| | Total (n=61) | Paper First (n=31) | Video First (n=30) | Test Statistic |
|---|---|---|---|---|
| Mean Age (SD) | 43.0 (14.2) | 42.0 (14.3) | 44.0 (14.2) | NS |
| % Male | 39.3 (n=24) | 29.0 (n=9) | 50.0 (n=15) | NS |
| Race | | | | NS |
| % Caucasian | 42.6 (n=26) | 48.4 (n=15) | 36.7 (n=11) | |
| % African American | 52.5 (n=32) | 45.2 (n=14) | 60.0 (n=18) | |
| % Asian | 3.3 (n=2) | 3.3 (n=1) | 3.3 (n=1) | |
| % Other | 1.6 (n=1) | 3.3 (n=1) | 0 | |
| % Hispanic | 4.9 (n=3) | 6.5 (n=2) | 3.3 (n=1) | NS |
| Mean Education (Years; SD) | 14.8 (2.7) | 14.7 (2.4) | 14.8 (2.9) | NS |
| % Employed | 86.9 (n=53) | 87.1 (n=27) | 86.7 (n=26) | NS |
| Mean REALM-R Score (SD) | 7.03 (1.7) | 6.8 (2.1) | 7.2 (1.2) | NS |
| Total Correct on Competency Quiz (SD) | 17.6 (2.3) | 17.9 (2.0) | 17.2 (2.5) | NS |
| % Preferred Video | 78.7 (n=48) | 77.4 (n=24) | 80.0 (n=24) | NS |
Competence/Comprehension
Those in the paper first condition answered on average 17.9 (SD=2.0) of the 20 questions correctly compared with 17.2 (SD=2.5) for the video-assisted first condition. These differences were not statistically significant.
Format Preference
The video-assisted consent format was preferred by 77.4% (n=24) of those in the paper-first condition and 80.0% (n=24) of those in the video-assisted-first condition; this difference was not statistically significant. Of the 13 participants who did not prefer video, five had no preference between video and paper and eight preferred paper. Those who preferred paper did so primarily because they liked being able to go back and re-read previous information. Table 3 shows a breakout of participant characteristics by preferred consent format. There was a trend (F=2.52; p=0.089) for the group that preferred the video format to have more years of education. There were no differences between genders with respect to format preference, and no other differences were found by format preference.
Table 3.
Participant Characteristics by Preferred Format
| | Preferred Paper (n=8) | Preferred Video (n=48) | No Preference (n=5) | Test Statistic |
|---|---|---|---|---|
| Mean Age (SD) | 45.5 (11.8) | 42.6 (15.0) | 43.4 (9.6) | NS |
| % Male | 25.0 (n=2) | 39.6 (n=19) | 60.0 (n=3) | NS |
| Mean Education (Years; SD) | 13.8 (1.3) | 15.1 (2.7) | 12.8 (2.6) | F=2.52; p=0.089 |
| Mean REALM-R Score (SD) | 6.8 (1.6) | 7.2 (1.6) | 5.8 (2.3) | NS |
| % Video First | 50.0 (n=4) | 50.0 (n=24) | 40.0 (n=2) | NS |
| % No Previous Tablet Experience | 37.5 (n=3) | 14.6 (n=7) | 20.0 (n=1) | NS |
| % Problems with iPad | 25.0 (n=2) | 18.8 (n=9) | 20.0 (n=1) | NS |
| Mean Total Correct on Competency Quiz (SD) | 17.25 (2.4) | 17.6 (2.2) | 17.6 (2.7) | NS |
Participants were also asked whether they would have agreed to participate in the given study had the mock study been “real”. The majority of participants (75.4%, n=46) stated that they would be interested. Additionally, participants were asked if the videos helped them make the decision to participate, and 78.7% (n=48) responded affirmatively. Interestingly, there was a correlation between agreement to participate and whether participants found the videos helpful (r=0.447, p<0.001). Likewise, participants who did not find the videos helpful in making their decision were more likely to decline participation in the mock study (χ2=9.76; p=0.002).
As can be seen in Figure 2, most participants found the videos helpful in explaining the research procedures/concepts. Although most videos were rated helpful by a majority of participants, there was a significant difference between the rated helpfulness of the procedural and conceptual videos (χ2=5.61; p<0.05): participants were more likely to rate the procedural videos as significantly helpful and the conceptual videos as not helpful. These data are shown in Figure 3.
Figure 2.
Ratings of helpfulness of videos
Figure 3.
Helpfulness of Procedural Videos Compared to Conceptual Videos
Technical Outcomes
It is important to note that the videos were supplementary to the written consent. As such, the video-assisted format took slightly more time to review, although the difference was not statistically significant: the paper consents took an average of 896.7 ± 461.9 seconds to review, and the video-assisted consents took an average of 946.3 ± 346.1 seconds.
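Because every participant reviewed both formats, a paired test on the raw times would be the appropriate analysis; still, a rough independent-samples calculation from the reported summary statistics (a simplifying assumption, since it ignores the pairing) illustrates how small the roughly 50-second difference is relative to its variability.

```python
# Rough check from the reported summary statistics only. This ignores the
# within-subject pairing, so it cannot replace the study's own analysis;
# it simply shows the difference is small relative to the spread.
from scipy import stats

t, p = stats.ttest_ind_from_stats(
    mean1=896.7, std1=461.9, nobs1=61,  # paper consents, seconds
    mean2=946.3, std2=346.1, nobs2=61,  # video-assisted consents, seconds
    equal_var=False,                    # Welch correction for unequal SDs
)
print(f"t = {t:.2f}, p = {p:.2f}")      # p far above 0.05
```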
Approximately 20% (n=12) of the participants experienced technical difficulties with the iPads. These included videos not loading properly (n=5), videos stopping and needing to be restarted (n=3), and Internet connection problems that kept the videos from playing (n=3).
Discussion
As medical and research procedures become increasingly technical and complex, it is important to ensure that research participants have a good understanding of the procedures to which they are consenting. In this manuscript, we described the development and pilot testing of video clips depicting fairly common yet sometimes complex research procedures and concepts. These video clips can be easily displayed on iPads and other mobile devices and readily embedded into electronic consent forms.
While other investigators have found adding video presentations to be helpful in the consent process [10-12], we were interested in assessing the utility of the specific videos we produced in order to develop a video library that could be used by others. A library of videos covering common research procedures and concepts would allow investigators to include those videos, with content appropriate to their particular consent(s), in a wide variety of research studies. Furthermore, as IRBs become familiar with the video content, they may be more likely to treat the videos as “boilerplate” material, potentially making the IRB application process more efficient.
Of interest, the majority of participants preferred the video-assisted consent to the paper format and rated the videos as helpful. Participants found video assistance especially helpful for describing procedures (e.g., MRI, TMS), as opposed to explaining conceptual aspects of study participation (e.g., randomization, de-identification). Those who preferred paper did so mainly because they wanted the option of going back to re-read portions of the consent. Although the electronic version allowed re-review of previously viewed screens, better instruction on the iPad's navigation functionality might have been helpful. Providing participants with printed versions of the consent form also remains essential to ensuring that they have adequate opportunity to review the consent and retain a copy for their records.
Most participants (78.7%) indicated that the video clips would have contributed to their decision to participate in the mock study had it been a real study. Further investigation into using video-assisted material as a tool for general study recruitment and retention is warranted. Similar to the findings of other investigators [16], using video clips did not improve overall comprehension of study procedures; however, those authors also concluded that multimedia consent procedures appear to increase information retention, which we did not test.
Other studies that have used multimedia consent tools have often simply repackaged the information in a consent document into a verbal presentation [13]. In the current study, the videos provided information beyond the written consent and showed actors actually undergoing the procedures. Armstrong and colleagues [17] also evaluated videos of actual procedures (biopsies) compared with paper consents. As in the current study, although participants found the videos helpful, there was no difference in knowledge scores between those who saw videos and those who received written information.
Regardless of the impact on comprehension, the video-assisted consent process may give participants a better understanding of what to expect during a clinical trial and has the potential (although not evaluated in this study) to help minimize therapeutic misconception. It also has the advantage of standardizing the information given to study participants, which would be particularly advantageous for multi-site trials, where study procedures must be explained and consents obtained in a consistent manner to minimize site differences. It is also possible that standardized, video-assisted consents could lead to efficiencies in regulatory review. Developing a library of video clips of medical procedures could also be useful in obtaining consent for procedures in the context of clinical care. Furthermore, because the video clips can be embedded in an electronic consent and/or an electronic health record, they could make the tracking and archiving of consents easier and less error-prone.
In spite of the potential advantages of video-assisted consent, there are several cautions. As with any technologic solution, there is always the possibility of unanticipated technologic “glitches”. Using appropriate technology and adequate pretesting procedures is critical to ensuring that the consent process is improved rather than made more cumbersome and frustrating. In addition, adding video clips to the consent process takes considerable effort on the part of the research team and can make the informed consent process more time-consuming; however, creating and sharing video clips of common procedures and concepts across research teams and sites would minimize this effort compared with creating video material de novo for each study. Nevertheless, it will be important for future studies to clarify the participant and procedure characteristics that matter most in making the best use of video-assisted consent.
There are a number of study limitations to be considered. As mentioned above, the competency quizzes were created specifically for this study and were not validated instruments; they therefore may not have provided the most accurate reflection of participants' understanding of the consent materials. In addition, only individuals who could pass an initial competency quiz were eligible to compare the two consent formats, so this study could not address the utility of video-assisted consent in a low-literacy population. The videos were also tested with simulated research studies and would need to be tested in actual studies to understand their full utility. Finally, as this was a pilot study, both the sample size and the number of procedures studied were limited. The videos that were developed focused on research procedures; to fully meet the ethical goals of informed consent, it would be important to have videos discussing: 1) voluntariness and the ability to leave a study at any time; 2) the fact that the decision to participate in research will not affect regular medical care; and 3) possible benefits [18]. The authors debated whether to include risks in the procedural videos but ultimately decided against it, since risks would typically be addressed in a separate section of a consent document.
In conclusion, in this pilot study of informed consent procedures, the majority of participants preferred video-assisted to paper consent formats. While no differences in overall comprehension of study procedures between the two consent formats emerged, the majority of participants found the videos to have significant added benefit in explaining study procedures. Future studies are needed to investigate optimal and perhaps expanded use of video clips in aiding the informed consent process.
Acknowledgments
Grant Support: This publication was supported by the South Carolina Clinical & Translational Research (SCTR) Institute, with an academic home at the Medical University of South Carolina, through NIH Grant Numbers UL1 RR029882 and UL1 TR000062 as well as RC2 LM010796.
References
- [1] Fortun P, West J, Chalkley L, Shonde A, Hawkey C. Recall of informed consent information by healthy volunteers in clinical trials. QJM. 2008;101:625-9. doi:10.1093/qjmed/hcn067.
- [2] Muir KW, Lee PP. Literacy and informed consent: a case for literacy screening in glaucoma research. Arch Ophthalmol. 2009;127:698-9. doi:10.1001/archophthalmol.2009.59.
- [3] Agoritsas T, Perneger TV. Patient-reported conformity of informed consent procedures and participation in clinical research. QJM. 2011;104:151-9. doi:10.1093/qjmed/hcq172.
- [4] Dunn LB, Lindamer LA, Palmer BW, Schneiderman LJ, Jeste DV. Enhancing comprehension of consent for research in older patients with psychosis: a randomized study of a novel consent procedure. Am J Psychiatry. 2001;158:1911-3. doi:10.1176/appi.ajp.158.11.1911.
- [5] Tait AR, Voepel-Lewis T, Moscucci M, Brennan-Martinez CM, Levine R. Patient comprehension of an interactive, computer-based information program for cardiac catheterization: a comparison with standard information. Arch Intern Med. 2009;169:1907-14. doi:10.1001/archinternmed.2009.390.
- [6] Hopper KD, TenHave TR, Hartzel J. Informed consent forms for clinical and research imaging procedures: how much do patients understand? AJR Am J Roentgenol. 1995;164:493-6. doi:10.2214/ajr.164.2.7839996.
- [7] Ryan RE, Prictor MJ, McLaughlin KJ, Hill SJ. Audio-visual presentation of information for informed consent for participation in clinical trials. Cochrane Database Syst Rev. 2008:CD003717. doi:10.1002/14651858.CD003717.pub2.
- [8] Flory J, Emanuel E. Interventions to improve research participants' understanding in informed consent for research: a systematic review. JAMA. 2004;292:1593-601. doi:10.1001/jama.292.13.1593.
- [9] Jimison HB, Sher PP, Appleyard R, LeVernois Y. The use of multimedia in the informed consent process. J Am Med Inform Assoc. 1998;5:245-56. doi:10.1136/jamia.1998.0050245.
- [10] Jeste DV, Palmer BW, Golshan S, Eyler LT, Dunn LB, Meeks T, et al. Multimedia consent for research in people with schizophrenia and normal subjects: a randomized controlled trial. Schizophr Bull. 2009;35:719-29. doi:10.1093/schbul/sbm148.
- [11] Sorrell JM. Effects of writing/speaking on comprehension of information for informed consent. West J Nurs Res. 1991;13:110-22. doi:10.1177/019394599101300108.
- [12] Barbour GL, Blumenkrantz MJ. Videotape aids informed consent decision. JAMA. 1978;240:2741-2.
- [13] Kass NE, Sugarman J, Medley AM, Fogarty LA, Taylor HA, Daugherty CK, et al. An intervention to improve cancer patients' understanding of early-phase clinical trials. IRB. 2009;31:1-10.
- [14] Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377-81. doi:10.1016/j.jbi.2008.08.010.
- [15] Bass PF 3rd, Wilson JF, Griffith CH. A shortened instrument for literacy screening. J Gen Intern Med. 2003;18:1036-8. doi:10.1111/j.1525-1497.2003.10651.x.
- [16] Bass PF 3rd, Wilson JF, Griffith CH. A shortened instrument for literacy screening. J Gen Intern Med. 2003;18:1036-8. doi:10.1111/j.1525-1497.2003.10651.x.
- [17] Armstrong AW, Alikhan A, Cheng LS, Schupp C, Kurlinkus C, Eisen DB. Portable video media for presenting informed consent and wound care instructions for skin biopsies: a randomized controlled trial. Br J Dermatol. 2010;163:1014-9. doi:10.1111/j.1365-2133.2010.10067.x.
- [18] Sugarman J, Lavori PW, Boeger M, Cain C, Edsond R, Morrison V, et al. Evaluating the quality of informed consent. Clin Trials. 2005;2:34-41. doi:10.1191/1740774505cn066oa.



