AEM Education and Training. 2020 Oct 13;5(2):e10505. doi: 10.1002/aet2.10505

Learning Experience Design in Health Professions Education: A Conceptual Review of Evidence for Educators

Joann Pan 1, Jessica Sheu 1, Lauren Massimo 2, Kevin R Scott 3, Andrew W Phillips 4
Editor: John Burkhardt
PMCID: PMC8052999  PMID: 33898909

Abstract

Objectives

The increasing use of online resources in emergency medicine (EM) education has driven demand for higher quality resources. Learning experience design (LED) is the study of how electronic user interfaces impact learner outcomes. We sought to summarize the evidence for LED principles to inform creation of EM educational resources.

Methods

We performed scripted PubMed searches of MeSH terms and keywords, supplemented by hand‐tracing citations. Inclusion criteria were controlled studies using light‐emitting diode or liquid crystal display monitors with Latin‐based languages. Cathode ray tube (CRT) monitors were excluded because of user experience confounders.

Results

Thirty‐two articles met inclusion criteria. Overall, 14‐point size significantly improved legibility compared to smaller font sizes. Similarly, Verdana and Arial typefaces significantly improved legibility compared to Times New Roman typeface. Verdana also significantly decreased subjective mental workload and visibility difficulty ratings and required the least eye movement of any typefaces tested. Positive polarity (dark text on light background) significantly improved reading outcomes across many measurements over negative polarity. There was higher character identification accuracy with higher luminance. Text effects (e.g., italics), interword and interletter spacing, and page presentation are among variables with mixed or minimal evidence.

Conclusion

Learning experience design principles significantly impacted reading and learning outcomes in laboratory settings. No studies evaluated classroom outcomes. Recommendations for electronic learning environments are 14‐point font with Verdana or Arial typeface with positive polarity (dark letters on light background). We recommend increasing screen brightness slightly. EM educators may significantly improve the speed and accuracy of learning written material by adopting evidence‐based LED principles.


Internet‐based learning (IBL) has become increasingly prevalent in health professions education, no area more so than emergency medicine (EM). 1 For example, as of 2014 there were 141 and 42 free blogs and podcasts, respectively, for EM and critical care alone. 2 Moreover, the complexity of IBL resources in EM and other specialties has moved well beyond templated blog sites now with large, custom‐built websites and mobile platforms such as EM Coach, EM:RAP, UpToDate, Pepid, and others that have the potential to fully leverage technology for education. These, among other IBL resources, are also now formal and desired components of EM residency education. 3

Mixed outcomes from several quantitative studies of IBL led to a meta‐analysis showing no significant difference in outcomes between Internet‐based and non–Internet‐based instructional methods. 1 Nonetheless, IBL offers flexibility that non–Internet‐based instructional methods do not, which makes IBL an appealing instructional method for both instructors and learners, especially as distance learning increased drastically during the COVID‐19 outbreak. The rapid and deep embrace of distance learning by EM residency and clerkship programs, in addition to other specialties, only increases the need for effective IBL 4 and will likely usher in greater acceptance and use regardless of future distancing requirements. Success with IBL is multifactorial but heavily dependent on the human–computer interface, including website accessibility, navigation, and attractiveness. 5

Learning experience design (LED), a relatively new concept, formalized in 2007, is a user‐centered approach to educational design that accounts for both content and user preferences to optimize the user interface for learning efficacy. 6 Put simply: LED is an area of study that seeks to improve learning outcomes by improving the human–computer interface. The goal is to design the computer to meet the needs of the learner, rather than (all too commonly) the learner being forced to meet the needs of the computer.

The growing demand for new, effective IBL delivery, coupled with early support for LED in online systems outside of medical education, highlights the importance of LED research to optimize visual text characteristics and designs in online learning platforms. Improved user interfaces may also impact future study findings comparing IBL and traditional learning. The primary objective of this review is to summarize LED concepts and optimal design features for electronic text presentation that can improve EM learning efficacy in its various platforms.

METHODS

We conducted a conceptual literature review with a structured search approach. 7 No institutional review board approval was required because no human or animal subjects were included.

Search Protocol

PubMed was searched between December 26, 2017, and February 11, 2018, for MeSH and title/abstract terms individually and in combination, including “linear text AND computer,” “linear text AND screen,” “visual layout AND computer,” “visual layout AND screen,” “reading AND computer,” “reading AND screen,” “learning experience design AND computer,” “learning experience design AND mobile,” and “reading AND spacing.” Additional studies were found by hand‐tracing citations in returned results (please see Data Supplement S1, Appendix S1 [available as supporting information in the online version of this paper, which is available at http://onlinelibrary.wiley.com/doi/10.1002/aet2.10505/full] for the complete search script and the number of articles found for each query).

Selection Criteria

Articles were included if they quantitatively compared the effects of different LEDs on learning outcomes. Exclusion criteria were studies that used CRT monitors or Kindle PaperWhite devices, studies of learners under the age of 18, and descriptive articles without comparisons. Studies with CRT monitors and Kindle PaperWhite devices were excluded to limit potential confounding because these technologies have markedly different resolutions and graphic interfaces than contemporary common screens. When screen type was not reported, studies published before 2000 were excluded, reflecting the rise of light‐emitting diode and liquid crystal display screens to prominence around that year. 8 Two investigators (JP and AWP) jointly agreed on inclusion and exclusion of articles, discussing any discrepancies until resolution.

RESULTS

A total of 32 articles met search criteria (Figure 1). Results are presented as four different types of visual text characteristics: character size, text effects, screen background, and text presentation, based on the findings reported in the included studies. Tables summarizing all 32 studies, including important methods and findings, are available in Data Supplement S1, Appendix S2. A summary of key design features and learning outcomes is presented in Figure 2.

Figure 1. Flowchart of identified studies.

Character Size

Summary Findings

Larger font size, specifically 14 point, is consistently associated with improved legibility, reading speed, and productivity and is subjectively preferred by readers, based on five studies.

All five studies on the effect of font size found larger fonts preferable on several measures. Two studies showed that larger fonts were more legible, each stopping at 14 point. 9 Larger fonts also reduced required reading time in two additional studies, in which 12‐point font 10 or 14‐point font 11 was the largest size tested. Banerjee et al. 11 also found 14‐point font, the largest tested, to be associated with the highest participant rankings and the lowest overall mental workload (defined as the mean of four scale dimensions). The fifth study found that task productivity (correct clicks/minute) was greatest for the largest font size tested, although that size was only 3.56 mm (approximately 10‐point font). 12 The largest font tested in any of the studies was 14 point.

Text Effects

Typeface Summary Findings

Verdana typeface was generally supported across four studies by increased legibility, highest subjective preferences, and lowest cognitive load. Arial is a less‐studied but reasonable alternative.

Six studies that met inclusion criteria evaluated typeface, of which Verdana was the most commonly studied (see Box for typeface examples). Verdana and Arial typefaces had the greatest legibility when compared to Times New Roman and Franklin Gothic typefaces. 9 Verdana was also more subjectively legible compared to Mistrial AV and Plump MT font types. 13 Another study examining Verdana found it to have the highest subjective preference and lowest subjective mental workload, but Courier New typeface had the fastest reading time. 11 Finally, Verdana also had the lowest cognitive load objectively measured by eye tracking. 14 Arial was subjectively preferred over Times New Roman. 15 One additional study found Frutiger (a “humanist” typeface) to be more legible in glance reading conditions than Eurostile (a square “grotesque” typeface). 16

Box 1. Typeface Examples.

Text Enhancement Summary Findings

Font smoothing with ClearType may improve legibility. There is limited evidence that boldface may improve legibility while case enhancement and italics may worsen reading outcomes.

Five studies evaluated text enhancements, defined as text color (against standard white background), text case, character enhancement (e.g., boldface), and font smoothing. Colored text did not improve reading task accuracy compared to black and gray text. 17 , 18 Case enhancement (all uppercase) was associated with errors of commission in which readers incorrectly reported two different drug names as the same, paradoxically worsening reading task accuracy. 18 The only study that evaluated character enhancement found that boldface text improved legibility, but italicized typeface worsened legibility. 9 Finally, two studies evaluating ClearType, a proprietary subpixel‐rendering technology, found improved legibility 9 and reader preference and lower subjective mental workload. 19

Screen Background

Polarity Summary Findings

Multiple studies support positive (dark text on light background) over negative polarity by both subjective and objective measurements of reading efficacy.

Twelve total studies were identified that matched search criteria and evaluated the effect of screen polarity, several of them with multiple endpoints. Positive polarity is defined as darker text on a lighter background, while negative polarity is conversely lighter text on a darker background.

Findings of subjective measurements are conflicting. One study found improved legibility (especially for smaller font sizes), 20 another improved readability, 21 and yet another subjective preference 22 for positive polarity. However, another study showed that physiologic measures of effort, such as breathing rate, and also self‐reported measurements, such as mood and fatigue, showed no differences between positive and negative polarity. 23

Seven studies examined the effect of polarity on task performance and various measurements of accuracy, and these studies almost uniformly supported positive polarity. Readers demonstrated significantly better proofreading using positive polarity in four different studies, although notably three were from the same primary author. 23 , 24 , 25 , 26 Other quantitative variables that were significantly better with positive than negative polarity included visual acuity (FrACT test) 26 and visual identification performance. 22 In contrast, one study found faster reading speed (number of characters read per minute) with negative polarity when compared to positive polarity text, 27 and another study found no difference in proofreading between positive and negative polarity when screen luminance was controlled. 28

Text‐on‐Screen Color Combination Summary Findings

There is insufficient evidence to conclude the best text‐on‐screen color combination.

One study evaluated reading task accuracy between different text‐on‐screen color combinations. Shieh and Lin 22 found that visual identification task accuracy was highest for a blue‐on‐yellow color combination and lowest for purple on red. The same was true for subjective preference scores.

Screen Glare Summary Findings

There is insufficient evidence to conclude precise effects of glare.

The only study meeting inclusion criteria that evaluated glare found that increased screen glare decreased viewing distance (furthest distance still able to read text on screen) but did not impact reading productivity (correct clicks per minute) or accuracy (percentage of clicks that were correct clicks). 12

Screen Luminance Summary Findings

Limited evidence suggests that brighter screens improve character clarity in some contexts.

Luminance (the amount of light emitted from the screen) must be taken in the context of the maximum and minimum luminance (light background vs. dark text, for example) and of ambient light reflecting off the screen. Two studies by the same lead author found improved character identification performance with higher maximum absolute luminance. A greater screen luminance combination (essentially a contrast ratio that accounts for ambient light) improved visual task accuracy and character identification performance, but only in the setting of low contrast ratios. 18 , 29
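The luminance findings above hinge on the contrast between text and background. As a worked illustration, the sketch below computes the contrast ratio defined in the WCAG 2.x accessibility guidelines, a widely used proxy for the luminance contrast these studies measured with photometers; the formula comes from WCAG, not from the included studies.

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color (channels 0-255), per the WCAG 2.x formula."""
    def channel(c):
        c = c / 255.0
        # Linearize the gamma-encoded sRGB channel
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors: 1:1 (identical) up to 21:1 (black on white)."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Positive polarity at maximal contrast: black text on a white background
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2))  # 21.0
```

Under this metric, raising screen brightness raises the luminance of a light background more than that of dark text, which is one way to interpret the benefit of positive polarity at higher luminance.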

One additional study evaluated the impact of surround lighting (e.g., desk lamp) on screen legibility and found that surround luminance that was equivalent to or slightly less than the maximum screen luminance was empirically (transient adaptation tests) and subjectively preferred. 30

Text Presentation

Spacing Summary Findings

A variety of outcomes and contexts suggest that reading performance and speed are best around the default interletter and interword spacing but only with certain font styles.

Search criteria identified 13 articles that evaluated the effects of interletter or interword spacing. Reading speed was slower as interletter spacing deviated further in either direction (shorter and longer) from the font’s default setting, 31 , 32 whereas reading performance (task reaction time and numeral discrimination) generally improved as the interletter spacing increased. 33 , 34 , 35 , 36 , 37 Performance evaluation studies did not evaluate interletter spacing that was shorter than the default.

Similar to interletter spacing, greater interword spacing was associated with slower reading speed but better reading performance, measured by reading accuracy, and task accuracy. 32 , 38 A single study showed improved accuracy with half‐character spacing between words compared to whole character. 18 A single study also evaluated unsegmented text (in which spaces are replaced with numbers, such as “here4is3an9example”) and found decreased word identification accuracy compared to conventional word separation. 39

Fixed‐ and proportional‐width fonts may be an important confounder, however, for both interletter and interword spacing. (Proportional‐width fonts, including Calibri, Cambria, Georgia, and Verdana, vary the distance between characters in proportion to character size, giving the letter “i” less horizontal space than the letter “W,” whereas fixed‐width fonts, including Courier New and Consolas, assign every character the same width regardless of size.) A single, recent study found that increased interletter spacing decreased reaction time with proportional‐width fonts but increased reaction times with fixed‐width fonts. 33 Moreover, increased word spacing did not change reading speed with proportional‐width fonts but did slow reading speed with fixed‐width fonts. Notably, of the aforementioned studies that reported the font(s) used, all but Paterson and Jordan 32 used proportional‐width fonts. Details of each study are included in Data Supplement S1, Appendix S2.

Line Lengths Summary Findings

There is insufficient evidence to conclude overall effects of the number of characters per line on learning.

The only study meeting inclusion criteria that evaluated line length (characters per line, cpl) found faster reading times but lower reader preference rankings and poorer reading task accuracy when lines were longer (85 to 100 cpl vs. 55 to 70 cpl). 15 (For reference, a default Word document has 68 cpl in 12‐point Times New Roman.)

Dynamic Display Types Summary Findings

There is insufficient evidence to conclude a preferred method of displaying text.

A single study evaluated four different presentation methods: scrolling (vertical scrolling from bottom of screen), leading (text moved from right to left continuously along a single line), teletype (one character is added to the line at a time), and normal format (whole text presented on a single window) and found the fastest reading speeds with scrolling, followed by normal page format. However, comprehension decreased as reading speed increased, with scrolling producing the worst comprehension scores. 40

DISCUSSION

Varying amounts of evidence exist for different components of LED, which remains a young discipline. Currently, best evidence supports use of 14‐point font with Verdana or Arial typeface and a positive polarity (dark text on light background) screen layout. Learners should be encouraged to slightly increase screen brightness. Other factors such as text‐on‐color combinations, screen glare, line length, and scrolling have, to date, been insufficiently studied to draw conclusions.
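The best-evidence defaults above can be encoded directly as stylesheet defaults for an IBL page. The sketch below emits such a CSS rule; the class name and exact hex colors are illustrative assumptions of ours, not values drawn from the underlying studies.

```python
# Encode the review's current best-evidence defaults as a CSS rule.
# The selector and hex colors are illustrative choices, not study values.
LED_DEFAULTS = {
    "font-family": "Verdana, Arial, sans-serif",  # best-supported Sans Serif typefaces
    "font-size": "14pt",                          # largest size tested; best legibility
    "color": "#1a1a1a",                           # positive polarity: dark text...
    "background-color": "#ffffff",                # ...on a light background
}

def css_rule(selector, declarations):
    """Render a dict of CSS declarations as a single rule string."""
    body = " ".join(f"{prop}: {value};" for prop, value in declarations.items())
    return f"{selector} {{ {body} }}"

print(css_rule(".lesson-text", LED_DEFAULTS))
```

Centralizing these choices in one rule makes it easy to revise defaults as the evidence base matures.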

The current evidence provides at least an early foundation for how educational material viewed on personal screens should be presented, and it is important to recognize that the personal screen medium, and its best practices, differs from the traditional paper medium. Educators cannot simply apply to the screen the same strategies they have applied to paper for decades. Moreover, decisions on presentation are not relegated to computer programmers but fall squarely on the educator, because programmers have made so many options available.

Typeface outcomes were particularly striking for their illustration of the ill‐fated history of applying what had been established in the paper world to the electronic one. Serif fonts are the classic typewriter fonts, with crisp, embellished stroke ends, such as Courier New. 11 The defining feature of Verdana and Arial is that, as Sans Serif (i.e., “without serif”) fonts, they lack the embellished ends, 11 which makes no difference in legibility on paper but a marked difference on a screen. Italic typeface, used frequently on paper to accentuate especially important information, may actually make that information more difficult to read on a screen. Thus, future educational material might need to present important text in italics in the printed version but in bold typeface in the e‐book.

Generally speaking, we expected a parabolic relationship between each variable and its effect on learning (essentially a sweet spot), as with interletter and interword spacing, but this was not a consistent finding across the other variables, potentially because the complex relationships between variables are not yet sufficiently delineated. For example, although measurements of learning continued to improve with increasing screen luminance, other literature examining fatigue as a dependent variable has repeatedly shown that increased luminance increases eye fatigue. The “sweet spot” remains elusive.

As with many new inquiries, the early findings presented almost as many new questions as they did answers. A central question that impacts the strength of the aforementioned overall recommendations is the impact of each of these factors as confounders. Studies did not make the same choices for which options would be used for various independent variables, such as font size and type when studying polarity or spacing. Indeed, the finding that some luminance outcomes depended on contrast ratios or that letter and word spacing depended on font properties suggests that blanket recommendations such as Sans Serif fonts at 14‐point size should be adopted cautiously.

Another important question is the issue of polarity in the unique setting of medical imaging. Negative polarity (white text on dark background) is recommended on lecture hall screens when presenting medical imaging, 41 but none of the studies in this review specifically evaluated multimedia learning with images that were predominantly dark background. Our results cannot be applied to lecture hall presentations because we would also have to account for the presentation distance from the viewer as part of the setting. Text presentation may need to change polarity based on subject matter.

These findings are in the context of several important limitations. First, all studies evaluated were in laboratory environments, none in the classroom or clinical rotation settings. Additionally, definitions, such as the commonly evaluated “legibility” and “cognitive load” are not standardized and, as such, were measured by different criteria and scales across studies. Finally, LED, especially in medical education, is an area of inquiry that crosses multiple fields, from computer science to psychology, to education, to medicine, making it very difficult to capture all current knowledge.

Although these findings can be interpreted as purely technical, LED is really a subtype of instructional design, which is a recommended component of EM education fellowships. With the increasing use of electronic interfaces for education, EM educators should incorporate not only broad instructional design principles for traditional teaching but also LED principles into standard curricula. 42

CONCLUSIONS

Studies of learning experience design remain in their infancy but generally support the use of Verdana or Arial font in 14‐point size on a positive polarity screen (dark text on light background) with slightly increased brightness. An important next step in learning experience design research in health professions education is to establish current practices with the goal of converting to current best practices.

Figure 2. Summary of key design features and learning outcomes.

Supporting information

Data Supplement S1. Supplemental material.

AEM Education and Training 2021;5:1–8

The authors have no relevant financial information to disclose.

Dr. Phillips is the founder and editor‐in‐chief of EM Coach, LLC, an Internet‐based learning platform for emergency medicine education.

References

  • 1. Cook D, Levinson AJ, Erwin PJ. Internet‐based learning in the health professions. JAMA 2008;300:1181–96.
  • 2. Cadogan M, Thoma B, Chan TM, Lin M. Free Open Access Meducation (FOAM): the rise of emergency medicine and critical care blogs and podcasts (2002–2013). Emerg Med J 2014;31:e76–7.
  • 3. Mallin M, Schlein S, Doctor S, et al. A survey of the current utilization of asynchronous education among emergency medicine residents in the United States. Acad Med 2014;89:598–601.
  • 4. Virtual Learning Resources. Council of Residency Directors in Emergency Medicine. Available at: https://www.cordem.org/resources/education‐‐curricula/virtual‐learning‐resources/elearning‐during‐covid/. Accessed May 9, 2020.
  • 5. Chumley‐Jones HS, Dobbie A, Alford CL. Web‐based learning: sound educational method or hype? A review of the evaluation literature. Acad Med 2002;77:S86–93.
  • 6. What Is Learning Experience Design? LXD.org. Available at: https://learningexperiencedesign.com/fundamentals‐of‐learning‐experience‐design/what‐is‐learning‐experience‐design/. Accessed May 9, 2020.
  • 7. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J 2009;26:91–108.
  • 8. CRT monitors. PCTechGuide.com. 2011. Available at: https://www.pctechguide.com/crt‐monitors. Accessed Apr 16, 2019.
  • 9. Sheedy JE, Subbaram MV, Zimmerman AB, Hayes JR. Text legibility and the letter superiority effect. Hum Factors 2005;47:797–815.
  • 10. Bernard ML, Chaparro BS, Mills MM, Halcomb CG. Comparing the effects of text size and format on the readability of computer‐displayed Times New Roman and Arial text. Int J Hum Comput Stud 2003;59:823–35.
  • 11. Banerjee J, Bhattacharyya M. Selection of the optimum font type and size interface for on‐screen continuous reading by young adults: an ergonomic approach. J Hum Ergol 2011;40:47–62.
  • 12. Ko P, Mohapatra A, Bailey IL, Sheedy J, Rempel DM. Effect of font size and glare on computer tasks in young and older adults. Optom Vis Sci 2014;91:682–9.
  • 13. Yoshioka H, Matsunobe T, Yamaoka T. Relations between the visibility of a character and the eye movement in the difference of font types. 2007. Available at: https://www.semanticscholar.org/paper/Relations‐between‐the‐visibility‐of‐a‐character‐and‐Yoshioka‐Matsunobe/3e0e8e6ed911114f49ccf81e5e70c683981dc354. Accessed Apr 16, 2019.
  • 14. Banerjee J, Majumdar D, Majumdar D, Pal MS. An eye movement study for identification of suitable font characters for presentation on a computer screen. J Hum Ergol (Tokyo) 2010;39:15–21.
  • 15. Ling J, Van Schaik P. The influence of font type and line length on visual search and information retrieval in web pages. Int J Hum Comput Stud 2006;64:395–404.
  • 16. Dobres J, Chahine N, Reimer B, Gould D, Mehler B, Coughlin JF. Utilising psychophysical techniques to investigate the effects of age, typeface design, size and display polarity on glance legibility. Ergonomics 2016;59:1377–91.
  • 17. Schell KL. Using enhanced text to facilitate recognition of drug names: evidence from two experimental studies. Appl Ergon 2009;40:82–90.
  • 18. Lin CC. Effects of screen luminance combination and text color on visual performance with TFT‐LCD. Int J Ind Ergon 2005;35:229–35.
  • 19. Tyrrell RA, Pasquale TB, Aten T, Francis EL. 47.4: Empirical evaluation of user responses to reading text rendered using ClearType technologies. SID Symposium Digest of Technical Papers 2001:1205–7.
  • 20. Dobres J, Chahine N, Reimer B. Effects of ambient illumination, contrast polarity, and letter size on text legibility under glance‐like reading. Appl Ergon 2017;60:68–73.
  • 21. Hall RH, Hanna P. The impact of web page text‐background colour combinations on readability, retention, aesthetics and behavioural intention. Behav Inf Technol 2004;23:183–95.
  • 22. Shieh K, Lin C. Effects of screen type, ambient illumination, and color combination on VDT visual performance and subjective preference. Int J Ind Ergon 2000;26:527–36.
  • 23. Buchner A, Baumgartner N. Text‐background polarity affects performance irrespective of ambient illumination and colour contrast. Ergonomics 2007;50:1036–63.
  • 24. Piepenbrock C, Mayr S, Buchner A. Positive display polarity is particularly advantageous for small character sizes: implications for display design. Hum Factors 2014;56:942–51.
  • 25. Piepenbrock C, Mayr S, Buchner A. Smaller pupil size and better proofreading performance with positive than with negative polarity displays. Ergonomics 2014;57:1670–7.
  • 26. Piepenbrock C, Mayr S, Mund I, Buchner A. Positive display polarity is advantageous for both younger and older adults. Ergonomics 2013;56:1116–24.
  • 27. Mallick Z, Siddiquee AN, Haleem A. Mobile computing with special reference to readability task under the impact of vibration, colour combination and gender. J Hum Ergol 2008;37:57–66.
  • 28. Buchner A, Mayr S, Brandt M. The advantage of positive text‐background polarity is due to high display luminance. Ergonomics 2009;52:882–6.
  • 29. Lin C, Huang K. Effects of ambient illumination and screen luminance combination on character identification performance of desktop TFT‐LCD monitors. Int J Ind Ergon 2006;36:211–8.
  • 30. Sheedy JE, Smith R, Hayes J, et al. Visual effects of the luminance surrounding a computer display. Ergonomics 2005;48:1114–28.
  • 31. Slattery TJ, Rayner K. Effects of intraword and interword spacing on eye movements during reading: exploring the optimal use of space in a line of text. Atten Percept Psychophys 2013;75:1275–92.
  • 32. Paterson KB, Jordan TR. Effects of increased letter spacing on word identification and eye guidance during reading. Mem Cognit 2010;38:502–12.
  • 33. Slattery TJ, Yates M, Angele B. Interword and interletter spacing effects during reading revisited: interactions with word and font characteristics. J Exp Psychol Appl 2016;22:406–22.
  • 34. Montani V, Facoetti A, Zorzi M. The effect of decreased interletter spacing on orthographic processing. Psychon Bull Rev 2015;22:824–32.
  • 35. Perea M, Gomez P. Increasing interletter spacing facilitates encoding of words. Psychon Bull Rev 2012;19:332–8.
  • 36. Perea M, Panadero V, Moret‐Tatay C, Gómez P. The effects of inter‐letter spacing in visual‐word recognition: evidence with young normal readers and developmental dyslexics. Learn Instr 2012;22:420–30.
  • 37. Huang KC, Yeh PC. Numeral size, spacing between targets, and exposure time in discrimination by elderly people using an LCD monitor. Percept Mot Skills 2007;104:543–6.
  • 38. McGowan VA, White SJ, Paterson KB. The effects of interword spacing on the eye movements of young and older readers. J Cogn Psychol 2015;27:609–21.
  • 39. Sheridan H, Rayner K, Reingold EM. Unsegmented text delays word identification: evidence from a survival analysis of fixation durations. Vis Cogn 2013;21:38–60.
  • 40. Laarni J. Searching for optimal methods of presenting dynamic text on different types of screens. In: Proceedings of the Second Nordic Conference on Human‐Computer Interaction (NordiCHI '02), 2002.
  • 41. Collins J. Education techniques for lifelong learning: making a PowerPoint presentation. Radiographics 2004;24:1177–83.
  • 42. Yarris LM, Coates WC, Lin M, et al. A suggested core content for education scholarship fellowships in emergency medicine. Acad Emerg Med 2012;19:1425–33.
