Abstract
Writing systems vary in many ways, making it difficult to account for cross-linguistic neural differences. For example, orthographic processing of Chinese characters activates the mid-fusiform gyri (mFG) bilaterally, whereas the processing of English words predominantly activates the left mFG. Since Chinese and English vary in visual processing (holistic vs. analytical) and linguistic mapping principle (morphosyllabic vs. alphabetic), either factor could account for mFG laterality differences. We used artificial orthographies representing English to investigate the effect of mapping principle on mFG lateralization. The fMRI data were compared for two groups that acquired foundational proficiency: one for an alphabetic and one for an alphasyllabic artificial orthography. Greater bilateral mFG activation was observed in the alphasyllabic versus alphabetic group. The degree of bilaterality correlated with reading fluency for the learned orthography in the alphasyllabic but not alphabetic group. The results suggest that writing systems with a syllable-based mapping principle recruit bilateral mFG to support orthographic processing. Implications for individuals with left mFG dysfunction are discussed.
Keywords: cross-linguistic, writing systems, reading, VWFA, artificial orthographies
1. Introduction
The role of a proposed visual word form area (VWFA) has been widely investigated for more than a decade. The standard profile of this functionally defined area in the mid-fusiform gyrus (mFG) is that it shows greater selectivity for words or alphabetic strings than other visual stimuli (Cohen et al., 2002; McCandliss, Cohen, & Dehaene, 2003), but the underlying mechanisms that lead to this selectivity remain under debate. For instance, it is unclear whether the specialization of this region is driven by visuo-perceptual (Dehaene & Cohen, 2011) or linguistic/phonological processing demands (Price & Devlin, 2011). Nevertheless, its importance in the reading network has been made clear by a diverse set of evidence. For example, the typical selectivity of this region for words is seen only after reading instruction, dyslexic readers show atypical engagement of the mFG (Richlan, Kronbichler, & Wimmer, 2009; Shaywitz et al., 2002), and acquired damage to the mFG is associated with acquired alexia (Leff, Spitsyna, Plant, & Wise, 2006; Warrington & Shallice, 1980), or ‘word blindness.’ Thus, it remains important to understand more precisely the factors that influence the specialization of this region, because this should enhance our understanding of normal and disordered reading development, and it may lead to improved strategies for remediation.
An infrequently studied question is whether the role of the mFG differs across writing systems, and if so why. During reading tasks, the mFG tends to be left-lateralized for alphabetic languages (Dehaene, Le Clec'H, Poline, Le Bihan, & Cohen, 2002; Nelson, Liu, Fiez, & Perfetti, 2009; Price & Devlin, 2003; Vigneau et al., 2006), with the majority of findings obtained through studies of English. In contrast, bilateral activation of the mFG has been observed for Chinese (Liu, Dunlap, Fiez, & Perfetti, 2007; Nelson et al., 2009; Tan et al., 2001). Some past researchers have attributed the differences in laterality across the two writing systems to the fact that Chinese involves greater visual spatial processing demands (Liu et al., 2007; Tan et al., 2000). In order to test the effect of visual demands on fusiform engagement, a recent study leveraged the fact that faces tend to be processed in the right hemisphere. This motivated the creation of an artificial orthography in which faces were used as letters to represent the consonant and vowel sounds of English. This face-alphabet (termed ‘FaceFont’) was compared to ‘KoreanFont’, a linguistically equivalent alphabetic system in which Korean letters were used to represent English phonemes (Moore, Durisko, Perfetti, & Fiez, 2014). Both were transparent alphabetic systems, in which thirty-five letter-sound pairs were used to represent all of the consonant and vowel sounds of English. The authors reasoned that if fusiform laterality is sensitive to visuo-perceptual processing demands, then FaceFont should engage the right hemisphere to a greater extent than KoreanFont. They found that a region in the left mFG, within the typical territory of the VWFA, responded to both FaceFont and KoreanFont words more strongly in trained versus untrained participants, whereas no differences were observed in the right mFG. 
These results suggest that left-lateralized mFG processing of orthographic stimuli is not restricted to stimuli with particular visual-perceptual features.
Several research groups have used artificial orthographies to explore an alternative hypothesis for the laterality of the mFG: namely, that the mapping principle of a writing system plays a role in determining the laterality of orthographic processing. Instead of varying the visual characteristics of an alphabetic writing system, as was done by Moore et al. (2014), two groups kept the artificial orthography the same, but varied the method of instruction. Some subjects were explicitly taught the letter-sound correspondences of an alphabet, while others were taught whole-word correspondences between printed and spoken word forms (Mei et al., 2012; Yoncheva, Blau, Maurer, & McCandliss, 2010; Yoncheva, Wise, & McCandliss, 2015). Results from these studies suggest that both the left and right mFG contribute to orthographic processing when an alphabetic writing system is taught as a logographic (whole-word) system.
Moore and colleagues, in a second study that used faces as orthographic stimuli, also investigated the impact of mapping principle (Moore, Brendel, & Fiez, 2014). They did so by directly varying the mapping principle of two different artificial proto-orthographies. For an alphabetic system, 15 face-graphemes were mapped onto 15 English phonemes, which could be combined to create words. For a syllabic system, 15 face-graphemes were mapped onto English syllables (which were also words, such as may, be, four), which could be combined to create English words (e.g., maybe, before). They tested the ability of a patient with a left occipitotemporal lesion (due to a stroke) to acquire the two orthographies. Importantly, the patient's lesion encompassed the typical territory of the VWFA and she exhibited the hallmark symptoms of acquired alexia, a loss of fluent word recognition that has been associated with damage to the VWFA. The patient struggled with learning the phoneme-grapheme pairings, but she was able to learn all of the syllable-grapheme pairings and use them to decode novel words. This result provides further evidence that the grain size of the orthographic mapping principle influences the laterality of the neural underpinnings required for reading.
The current study also uses an artificial orthography approach to test the mapping principle account of mFG lateralization during reading. However, it goes beyond the prior work through the use of an artificial orthography in which a corpus of 375 syllable-grapheme mappings can be used to represent any spoken English word. Face images are used as the component graphemes in the system, which is termed "Faceabary" (Hirshorn & Fiez, 2014). The Faceabary system can be regarded as either a syllabic system, because each of the 375 face graphs is mapped onto a single English syllable, or as an alphasyllabic system, because both consonant and vowel information can be identified within the graphs. This is because different face identities systematically map to different consonants, and different facial expressions systematically map to different vowels. These mappings are not explicitly taught to subjects, but subjects do become aware of this general design principle, and the alphasyllabic structure likely facilitates subjects' ability to master the syllable-grapheme correspondences in the Faceabary system. A few simple decoding rules specify how a sequence of represented syllables can be blended together to produce the phonological form of a word printed in Faceabary. For instance, a "vowel-dropping" rule is used to blend together two consonant-vowel (CV) Faceabary graphs to represent a CVC English word. Thus, the graphs for /kæ/ and /tə/ can be combined to represent the word ‘cat’. Using the graphs in the Faceabary system and its decoding rules, it is possible to represent any English word.
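As an illustration, the vowel-dropping rule can be captured in a few lines of code (a hypothetical sketch using toy phoneme strings, not part of the actual training materials; the vowel inventory below is simplified):

```python
# Hypothetical sketch of Faceabary's "vowel-dropping" decoding rule: each
# graph maps to a syllable string, and a schwa (ə) is dropped word-finally
# or before a consonant-initial graph when blending graphs into a word.
SCHWA = "ə"
VOWELS = set("aeiouæɛɪʊɔ")  # simplified vowel inventory for this sketch

def blend(syllables):
    """Blend a sequence of syllable strings into a word's phonological form."""
    word = ""
    for i, syl in enumerate(syllables):
        if syl.endswith(SCHWA):
            at_word_end = (i == len(syllables) - 1)
            next_is_consonant = not at_word_end and syllables[i + 1][0] not in VOWELS
            if at_word_end or next_is_consonant:
                syl = syl[:-1]  # drop the schwa
        word += syl
    return word

print(blend(["kæ", "tə"]))   # kæt  ('cat')
print(blend(["pə", "leɪ"]))  # pleɪ ('play')
```

Under this sketch, graphs without a schwa simply concatenate, so /meɪ/ and /bi/ yield /meɪbi/ ('maybe').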
In this study, participants were trained to read using the Faceabary system. They were first trained to decode words, and then to read whole texts, in an effort to mimic natural reading acquisition. The learning of the syllable-based Faceabary system was compared to the learning of the phoneme-based (alphabetic) FaceFont system reported in the study by Moore et al. (Moore, Durisko, et al., 2014). Functional neuroimaging (fMRI) was used to provide measures of mFG lateralization following training. The mapping principle account of mFG lateralization predicts that a left-lateralized training effect should be observed in the fusiform gyri for the alphabetic system, while a bilateral pattern should be observed for the syllabic system.
2. Method
2.1 Participants
All participants were native English speakers who completed an initial screening in which they reported no history of hearing or vision issues, learning or reading difficulties, drug or alcohol abuse, mental illness, or neurological problem. Additionally, participants were screened for fMRI contraindications (e.g., ferromagnetic material in or on body, not right-handed, claustrophobic, pregnant, etc.). All participants provided informed consent and were compensated for their time.
2.1.1. FaceFont
Eleven participants (5 males) completed the two-week FaceFont training protocol and a subsequent post-training behavioral and neuroimaging session (M Age = 21.1 years, SD = 1.8). Training study participants were recruited through fliers posted around the University of Pittsburgh campus (see Moore, Durisko, et al., 2014).
2.1.2. Faceabary
Fifteen participants (6 males) completed the three-week Faceabary training protocol, in addition to pre- and post-training behavioral and neuroimaging sessions (M Age = 20.6 years, SD = 1.7). Participants were recruited from a database of subjects who had participated in previous behavioral studies and had indicated that they would be interested in participating in future studies.
2.2 Writing Systems
Each participant was assigned to learn either the FaceFont or the Faceabary writing system. Both of these artificial orthographies use faces from the NimStim set (Tottenham et al., 2009) as graphemes, but the faces are mapped to English phonology using different mapping principles.
2.2.1. FaceFont
FaceFont is a transparent alphabetic system with a one-to-one grapheme (face)-to-phoneme correspondence mapping principle. Thirty-five face-sound pairs are used to represent all sounds in English. There are five exceptions in which a single grapheme (i.e., a single face) represents two similar sounds (e.g., /ɔ/ in hawk and /a/ in hot).
2.2.2. Faceabary
The Faceabary system consists of 375 unique face graphemes, each representing an English syllable. The system is internally consistent: the face identity represents the consonant component of the syllable, and the displayed facial expression represents the vowel component. We chose the identities for Faceabary from the NimStim set that included the full range of expressions. There were 13 total expressions, with open- and closed-mouth variants. This resulted in 21 faces used for simple CV graphs, and 16 more for more complex graphs (see Section 2.3.2.1).
2.3. Behavioral Training Procedure
2.3.1. FaceFont Training
FaceFont-trained participants completed nine one- to two-hour training sessions consisting of three components: grapheme (face-phoneme mapping) training (Session 1), word-level training (Sessions 2–5), and story-level training (Sessions 6–9). Progress was monitored at the end of each word-level and story-level training session with a single-word reading test. Training was followed by a final session on the tenth day. During this final session, participants read stories that were transcribed from a standardized reading test designed to assess reading fluency and comprehension (Wiederholt & Bryant, 2001). They also completed an fMRI session to probe the neural basis of FaceFont reading. No printed English was used except for the initial basic instructions of a task. For details beyond those provided below, see Moore, Durisko, Perfetti, and Fiez (2014).
2.3.1.1. Phoneme-Grapheme Mapping Training
In Session 1, participants completed phoneme training. Using the E-prime computer program for psychological experiments (Schneider, Eschman, & Zuccolotto, 2002), a grapheme appeared on the computer screen, and participants pressed a button to elicit the auditory presentation of the associated phoneme. Because the focus was on the participants achieving mastery of all 35 grapheme-phoneme pair associations, the participants could spend as much time as they wanted on each grapheme, and each associated phoneme could be played an unlimited number of times before the participant advanced to the next grapheme. After all 35 pairs were presented in random order, the cycle was repeated four more times, for a total of five cycles of individually paced grapheme-phoneme learning.
2.3.1.2. Phoneme test
Participants took a phoneme test after they completed the phoneme training. Graphemes appeared on the computer screen and participants were asked to say aloud the phoneme associated with each grapheme. Two cycles were administered with random order of grapheme presentation, for 70 total items on the test. Participants were required to score 90% or better on the phoneme test. If they did not meet this criterion, the examiner reviewed their specific errors, and they repeated both the phoneme training and phoneme testing as many times as needed until criterion was reached or until session duration was exceeded. All participants passed the phoneme test in three or fewer attempts.
2.3.1.3. Word-level training
In Sessions 2–5, participants completed word-level training in which they read 400 one-syllable words, 2 to 4 phonemes in length. Words were presented in random order using E-prime. The same 400 words were used in each session in order to build reading fluency through repetition. Participants were encouraged to attempt to read each word when it appeared on the screen, but had the option to hear any of the individual phonemes or to hear the whole word. For example, one of the training words was “beef”, consisting of the phonemes /b/, /i/, /f/. Using a key press, participants could play any of the three phonemes individually or could play the entire word, if necessary. After each word-level and story-level training session (Sessions 2–9), participants took single-word reading tests on the computer consisting of “old” words (words that were in the word training), “new” words (words that were not in the word training), and nonwords (15 each). Participants were presented with items one at a time and asked to read each one aloud as quickly and as accurately as possible. Results are reported in Moore, Durisko, et al. (2014).
2.3.2. Faceabary
Faceabary-trained participants completed training similar to that of FaceFont-trained participants, but it extended over three weeks due to the much larger set of face-syllable mappings to be learned. Additionally, rather than learning all of the graph-sound mappings before commencing word-level training, the large corpus of Faceabary graphs was divided into subsets, and participants progressed through multiple cycles of grapheme training interleaved with word training, yielding the following schedule: grapheme (face-syllable mapping) training (Sessions 1–9), word-level training (Sessions 1–10), and story-level training (Sessions 11–15). Another difference was that Faceabary training was preceded by a functional magnetic resonance imaging (fMRI) session. Faceabary training was similarly followed by an imaging session on the last (fifteenth) day.
2.3.2.1. Syllable-Grapheme Mapping Training
The first nine sessions were used for learning the mappings between face graphemes and syllables. The basic daily procedure was the same as in the FaceFont training: a face grapheme appeared on the screen, and participants pressed a button to elicit the auditory presentation of the associated syllable. Participants could spend as much time as they wanted on each grapheme, and each associated syllable could be played an unlimited number of times before the participant advanced to the next grapheme. During the first week of training, participants learned all simple CV syllables. Due to the large number of graphemes to be learned and the internal structure of Faceabary, training each day focused on 2–3 vowels and all of their respective consonant pairings (e.g., Day 1 focused on C+/i/ and C+/ə/ syllables, for a total of 42 syllables). On Day 2, participants learned CV syllables that focused on /ʊ/ and /ɒ/; Day 3 on /ɪ/, /ɔɪ/, and /eɪ/; Day 4 on /oʊ/, /aɪ/, and /ɛ/; Day 5 on /æ/, /u/, and /aʊ/. Face graphemes were blocked by vowel sound, but within each block, face identities (carrying the consonant component of the syllable) were randomized. For example, participants would learn /mi/, /li/, /ki/, etc., and then /pə/, /rə/, /wə/, etc. After all pairs were presented, the cycle was repeated four more times, for a total of five cycles of individually paced grapheme-syllable learning. After Day 1, each session started with a recap of previously learned face graphemes before new ones were introduced. During the second week (Days 6–9), participants learned non-CV graphemes: CCV graphemes on Day 6, VC graphemes on Day 7, VCC graphemes on Day 8, and faces representing grammatical markers on Day 9.
2.3.2.2. Syllable tests
Participants took a syllable test every day after they completed the grapheme-syllable mapping training. Graphemes appeared on the computer screen in random order and participants were asked to say aloud the syllable associated with each grapheme. Participants were required to score 75% or better. If they did not meet this criterion, the examiner reviewed their specific errors, and they repeated both the syllable training and syllable testing as many times as needed until criterion was reached or until session duration was exceeded. All participants reached criterion within three training-testing cycles.
2.3.2.3. Word-level training
Each day after syllable training, participants completed word-level training in which they read 50 words that focused on newly learned graphemes, but could draw upon previously learned graphemes as well. On the first day, participants were instructed on decoding rules involving ‘dropping’ the schwa at the ends of words (e.g., /ni/ + /tə/ = neat) or between successive consonants (e.g., /pə/ + /leɪ/ = play). Participants were encouraged to attempt to read the word when it appeared on the screen, but received feedback from the experimenter when they incorrectly pronounced a word or struggled with a certain grapheme. After completing all grapheme training, on Day 10 participants reviewed all words (N = 450) that they had read in previous days. After completing the syllable-grapheme mapping training and word-level training, participants took single-word reading tests on the computer (Sessions 11–15), similar to those of FaceFont participants. Stimuli consisted of “old” words (words that were in the word training), “new” words (words that were not in the word training), and nonwords. Participants were presented with items one at a time and asked to read each one aloud as quickly and as accurately as possible. Results are outside the scope of the current paper and are not reported.
2.3.3. Story-level training (FaceFont & Faceabary)
After grapheme and word training, all participants spent the last week of training (Week 2 for FaceFont, Week 3 for Faceabary) reading stories in their respective writing systems. Each day, they read ten early-reader stories from the “Now I’m Reading!” series (Gaydos, 2003) that were transcribed into the training fonts, beginning with ten Level 1 stories and progressing one level each session to longer and more complex stories (up to Level 4). Story-level reading performance was measured in words read per minute (WPM), calculated as the total number of words in each story divided by the time (in minutes) taken to complete the passage.
During the final session for both groups (Session 10 for FaceFont, Session 15 for Faceabary), participants read the first six stories from Form A of the Gray Oral Reading Test – 4 (GORT-4; Wiederholt & Bryant, 2001), which had been transcribed into their respective training font. Participants were asked to read each story aloud as quickly and accurately as possible. The stories were administered and scored according to the standardized test protocol. Standardized GORT scores were not used because the stories had been transcribed into our experimental fonts and because participants in our study exceeded the maximum age for which normative data have been collected for this test. Administration of the multiple-choice comprehension questions deviated from the protocol in that the questions and answers were only read aloud to the participants (in the standardized protocol, the items are presented both visually and aurally). Because the transcribed materials deviate significantly from the published GORT, we refer to this task and its measures as the Face Oral Reading Test (FORT). Raw FORT scores were obtained for accuracy, time, and comprehension. In addition to the raw scores, we computed words per minute (WPM) for each story as a length-normalized measure of reading rate, because the stories varied in length (Table 3). Lastly, we calculated a z-score for each measure (accuracy, WPM, and comprehension) and averaged them to create a composite mean FORT z-score for each participant, with positive scores reflecting greater skill. Because a positive z-score on the deviations-from-print (accuracy) measure reflected poorer performance, accuracy z-scores were multiplied by −1 so that positive scores reflected better performance on all three measures. The mean FORT z-score was used for further analyses.
Table 3.
FORT Behavioral Measures (Mean and Standard Deviation).
| | Words Per Minute (WPM) | Total Number of Words | Deviations from Print (Accuracy) | Comprehension (% Correct) |
|---|---|---|---|---|
| FaceFont | 19.4 (4.14) | 398 | 7.56 (6.67) | 0.88 (.09) |
| Faceabary | 16.6 (4.58) | 398 | 4.45 (5.44) | 0.89 (.03) |
2.4. fMRI Procedure
2.4.1. fMRI Image Acquisition
A 3-Tesla head-only Siemens Allegra magnet and a standard radio frequency coil were used for all MR scanning sessions at the University of Pittsburgh. Prior to functional scanning, structural images were collected using a standard T2-weighted pulse sequence in 38 contiguous slices (3.125 × 3.125 × 3.2 mm voxels) parallel to the anterior commissure-posterior commissure plane. Functional images were collected in the same plane as the anatomical series using a one-shot echo-planar imaging pulse sequence [TR = 2000 ms, TE = 25 ms, FOV = 200 mm, flip angle = 70°].
2.4.2. fMRI Experimental Procedure
Training study participants were shown three types of stimuli: face-words (in FaceFont or Faceabary), KoreanFont words from a previous study in which Korean characters served as graphemes (Moore, Durisko, et al., 2014), and patterns (used as a baseline). Participants were instructed to stay alert and passively view all stimuli. Although they were not explicitly told to read the face-words, all participants reported trying to do so during the scans. Face-words averaged 2.16 graphs in length in Faceabary and 3.25 graphs in FaceFont. Patterns were matched for length to the face-words in both groups. To ensure that participants were not falling asleep, they were visually monitored throughout the scan, with eye-blinks visible to the experimenters. Participants completed two identically designed functional runs: both used the same block-design format, but the order of the blocks and the stimuli within each block were randomized across the two runs for each participant. Each run contained 21 20-second epochs, with 7 epochs of each stimulus type presented in random order. Each epoch consisted of 10 2-second trials of the same stimulus type. Within a trial, a stimulus item was presented for 1500 ms, followed by a ‘+’ for 500 ms. The next trial immediately followed with another stimulus item of the same type, and so on. There was no pause between epochs.
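The run structure reduces to a simple schedule-building routine; the sketch below (assumed condition labels, not the actual E-prime script) generates the trial onsets for one run:

```python
# Illustrative sketch of one functional run: 21 epochs (7 per stimulus type)
# in random order, 10 trials per epoch, each trial = 1500 ms stimulus +
# 500 ms fixation, with no pauses between epochs.
import random

def build_run_schedule(conditions=("face_word", "korean_word", "pattern"),
                       epochs_per_condition=7, trials_per_epoch=10,
                       trial_ms=2000):
    epoch_order = list(conditions) * epochs_per_condition
    random.shuffle(epoch_order)  # epoch order randomized per run
    schedule, t = [], 0
    for cond in epoch_order:
        for _ in range(trials_per_epoch):
            schedule.append((t, cond))  # (onset in ms, stimulus type)
            t += trial_ms
    return schedule

run = build_run_schedule()
print(len(run), run[-1][0] / 1000 + 2)  # 210 trials, 420.0 s total
```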
Within this paradigm, each training-study participant passively viewed 140 English words printed in their specific training font. Both the FaceFont group and the Faceabary group used the same word list. Words were 1 syllable, 2- to 4-phonemes in length. No word was repeated within the fMRI session, nor did the words overlap with the items used in the word training set from the behavioral sessions. Words were presented in random order across both runs. Faceabary participants completed this imaging protocol before and after training, whereas FaceFont participants were only scanned after training.
2.4.3 fMRI Data Analysis
A series of preprocessing steps was conducted prior to data analysis using the integrated NeuroImaging Software package (NIS 3.6; Fissell et al., 2003) in order to correct for artifacts and movement and to account for individual differences in anatomy. Images were reconstructed and then corrected for subject motion with Automated Image Registration (AIR 3.08; Woods, Cherry, & Mazziotta, 1992). For runs in which head motion exceeded 4 mm or 4° in any direction, data from the beginning of the epoch in which the head movement occurred through the end of the run were excluded from the analysis. The images were then corrected to adjust for scanner drift and other linear trends within runs. The structural images of each subject were stripped to remove the skull and co-registered to a common reference brain, chosen from among the participants (Woods, Mazziotta, & Cherry, 1993). Functional images were transformed into the same reference space, normalized by a mean scaling of each image to match global mean image intensity across subjects, and smoothed using a three-dimensional Gaussian filter (8 mm FWHM) to account for anatomical differences between subjects. Finally, images were converted into Talairach space (Talairach & Tournoux, 1988). Specific cerebellar lobules were designated using a cerebellar atlas (Schmahmann, Doyon, Toga, Petrides, & Evans, 2000).
2.4.3.1. Group Comparisons in Literature-Derived Fusiform ROIs
Because FaceFont participants did not complete a localizer task, a regions-of-interest (ROI) approach was used to investigate neural activity in the mFG, at or near the prototypical site of the VWFA and a potential right-hemisphere homologue. Specifically, we used the x-y-z peak coordinate reported by Bolger and colleagues in a meta-analysis that sampled from different writing systems (Bolger, Perfetti, & Schneider, 2005; −45 −57 −12, and its right-hemisphere homologue).
2.4.3.2. Group Comparisons in Whole Brain
NIS 3.6 was used to analyze the fMRI data. We performed a whole-brain, voxel-wise 2 × 2 ANOVA with factors of training group (FaceFont vs. Faceabary) and stimulus type (Face-words vs. Patterns). Because there were no pauses between epochs during task administration, the first four 2-second time periods (T1–T4) of each epoch were excluded from the analysis to allow the carry-over BOLD response to the stimuli in the prior epoch to return to baseline; therefore, only T5–T10 were included in the ANOVA. AFNI 3dClustSim (Cox, 1996) was used to identify a voxel-wise significance threshold corrected to p < .05 (p < .01 uncorrected, cluster size of 42 voxels).
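The timepoint exclusion amounts to simple index selection; a sketch under the stated design (10 two-second TRs per 20-second epoch, written by us for illustration):

```python
# Sketch of retaining only steady-state timepoints: within each 10-TR epoch,
# drop T1-T4 (carry-over BOLD from the prior epoch) and keep T5-T10.
def steady_state_indices(n_epochs, trs_per_epoch=10, drop=4):
    keep = []
    for e in range(n_epochs):
        start = e * trs_per_epoch
        keep.extend(range(start + drop, start + trs_per_epoch))
    return keep

idx = steady_state_indices(n_epochs=21)
print(len(idx))  # 126 retained TRs (21 epochs x 6 TRs each)
```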
2.4.3.3. Faceabary Pre- vs. Post-Scan Comparison
The same analysis approach was used to compare the Faceabary results pre- versus post-training, except the 2 × 2 ANOVA was within the Faceabary group, and the factors were training time point (Pre vs. Post) and stimulus type (Face-words vs. Patterns). Results are reported at p < .05 corrected.
2.4.3.4. Faceabary Pre- vs. Post-Scan Functional Connectivity
An analysis was run in the Faceabary group comparing pre- and post-scan functional connectivity in order to determine if different neural networks were becoming engaged after training. Seeding took place in a functionally-derived right mFG ROI that showed greater activation for face-words than patterns in the Faceabary group compared to the FaceFont group during the post-training scan. We ran analyses on the time-series of only the face-word blocks pre- and post-training using condition-specific psychophysiological interactions (O’Reilly, Woolrich, Behrens, Smith, & Johansen-Berg, 2012). The mean time series for ‘face-word’ trials was extracted for each subject for the ROI seed region. A simple functional correlation analysis was conducted using a standard procedure within AFNI (http://afni.nimh.nih.gov/sscc/gangc/SimCorrAna.html). Time series were used as a regressor in single-subject analyses using standard GLM methods, including nuisance variables that modeled head motion, white matter, CSF, and the global mean. Differences between pre- and post-training scans were then computed using AFNI’s 3dttest, and only results that were significant at the p < .05 corrected level are reported.
Of the areas that showed greater functional connectivity with the functionally-derived right mFG ROI during the post-training compared to pre-training scan, mean beta coefficients were extracted for each subject to run correlations between functional connectivity strength and a behavioral measure of reading fluency.
3. Results
3.1. Behavioral Results
Story reading rate, expressed as WPM, was the main behavioral measure of training progress and final reading achievement. Across the four days of story reading, there was a main effect of story level, such that both groups got faster across the four levels, F(3,72) = 4.07, η2 = .15, p = .01. However, there was no main effect of group, F(1,25) = .01, η2 = .00, p = .95, and no significant interaction between story level and group, F(3,72) = 2.05, η2 = .08, p = .12. There was also no significant group difference in post-training performance on the FORT, t(25) = .99, d = .38, p = .33. These results indicate that once Faceabary subjects acquired the Faceabary grapheme inventory and practiced single-word decoding, their text reading fluency developed at a pace comparable to that observed for the acquisition of the alphabetic FaceFont system.
3.2. fMRI Results Post Training: Group Differences
3.2.1. Group Comparisons in Literature-Derived mFG ROI Selectivity
A measure of the functional response to the learned artificial orthography (Face-words – Patterns) was extracted from the two literature-derived mFG ROIs in each participant. The resulting values were submitted to a 2 × 2 (ROI × Group) ANOVA. There was no main effect of ROI, F(1,25) = 1.74, η2 = .06, p = .20, but there was a main effect of group, F(1,25) = 4.74, η2 = .16, p = .039, such that Faceabary participants had greater activation levels than FaceFont participants. Importantly, there was a significant interaction between ROI and group, F(1,25) = 4.45, η2 = .15, p = .045, such that group differences depended on the ROI (see Figure 2). Follow-up t-tests comparing group differences in each ROI separately revealed greater selectivity in the right mFG ROI in the Faceabary as compared to the FaceFont group, t(25) = 2.74, d = 1.11, p = .01. However, there were no significant differences in selectivity between groups in the left mFG ROI, t(25) = 1.33, d = .53, p = .201.
Figure 2.
Group differences in selectivity for face-words compared to patterns in Left and Right literature-derived mFG ROIs (±45, −57 −12).
These results suggest that the alphabetic FaceFont group is more left-lateralized, whereas the Faceabary group is more bilateral. To confirm this interpretation, a laterality index (LI) was calculated for each subject (Right mFG ROI Selectivity – Left mFG ROI Selectivity). The Faceabary group exhibited a significantly more bilateral response than the FaceFont group, t(25) = 2.11, d = .83, p = .045 (FaceFont LI = −2.74, Faceabary LI = .64). To investigate this result further, the relation between individual differences in mFG laterality and reading skill (mean FORT z-score) was examined. The correlation between LI and mean FORT z-score in the Faceabary group was not significant, but it was in the positive direction, such that individuals with more bilateral fusiform selectivity were faster readers, r(14) = .40, p = .14. The correlation between LI and mean FORT z-score in the FaceFont group was also not significant, but it was in the negative direction, such that individuals with more left-lateralized fusiform selectivity were faster readers, r(11) = −.43, p = .16 (see Figure 3). To test whether the correlations differed significantly across groups, a comparison of two regression models was conducted, with LI as the dependent measure. Model 1 was a main-effects model with two predictor variables, mean FORT z-score and group (FaceFont, Faceabary). Model 2 added an interaction term between mean FORT z-score and group. A significant group × mean FORT z-score interaction term would demonstrate a different relationship between mean FORT z-score and LI in one group compared to the other. On its own, Model 1 accounted for 15% of the variance in LI, R2 = .15, p = .14. Model 2, including the interaction term, accounted for 30% of the variance in LI, R2 = .30, p = .043. A comparison of the two models revealed that the FORT × group interaction term significantly improved the fit of the model, F(1,23) = 4.56, p = .044.
This suggests that the mean FORT z-score differentially predicts LI in the two groups.
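The nested-model comparison described above can be sketched in a few lines of code. This is an illustrative reconstruction rather than the analysis code used in the study: the data below are simulated, and the variable names (`fort`, `li`, `group`) are hypothetical stand-ins for the mean FORT z-scores, laterality indices, and group labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 12 FaceFont and 15 Faceabary participants (hypothetical values).
n_facefont, n_faceabary = 12, 15
group = np.r_[np.zeros(n_facefont), np.ones(n_faceabary)]  # 0 = FaceFont, 1 = Faceabary
fort = rng.normal(size=n_facefont + n_faceabary)           # mean FORT z-scores (simulated)
# Simulate opposite-sign slopes in the two groups, as in Figure 3.
li = np.where(group == 0, -0.4 * fort, 0.4 * fort) + rng.normal(scale=1.0, size=group.size)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit (a column of ones is added for the intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Model 1: main effects only; Model 2: adds the FORT x group interaction term.
X1 = np.column_stack([fort, group])
X2 = np.column_stack([fort, group, fort * group])
r2_1, r2_2 = r_squared(X1, li), r_squared(X2, li)

# F-test for the change in R^2 (1 extra parameter; Model 2 has k2 = 3 predictors).
n, df_diff, k2 = len(li), 1, 3
f_change = ((r2_2 - r2_1) / df_diff) / ((1 - r2_2) / (n - k2 - 1))
print(f"R2 model 1 = {r2_1:.2f}, R2 model 2 = {r2_2:.2f}, F(1,{n - k2 - 1}) = {f_change:.2f}")
```

A significant F for the change in R2 would indicate, as reported, that the FORT-to-LI relationship differs between groups.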
Figure 3.
Individual differences in mFG laterality (using literature-derived ROIs) as a function of reading skill (mean FORT z-score).
3.2.2. Group Comparisons: Whole-Brain Analysis
A whole-brain analysis was used to investigate group differences in face-word processing that might be located outside of the left and right literature-derived mFG ROIs. We conducted a 2 × 2 voxel-wise ANOVA (Group × Stimulus) to identify areas that showed a significant interaction, such that there was relatively greater activity for face-words compared to patterns in the Faceabary group than in the FaceFont group. We detected one voxel cluster in the right mFG that showed a Group × Stimulus interaction (peak voxel: 39, −62, −4; subpeak: 41, −51, −14), F(1,25) = 24.29, η2 = .49, p < .001, confirming a similar result from the literature-derived mFG ROI analysis. The significant interaction reflects a different pattern of activation for face-words vs. patterns in the two groups. For the Faceabary group, there was greater activation for face-words compared to patterns, t(14) = 2.35, d = .96, p = .034. In contrast, for the FaceFont group there was greater activation for patterns compared to face-words, t(11) = −4.85, d = 1.97, p = .001. No regions showed the opposite effect (FaceFont greater than Faceabary).
3.3. Faceabary Group: Pre- vs. Post-training differences
3.3.1. fMRI Differences Pre vs. Post
Both pre-training and post-training scans were available in the Faceabary group. A 2 × 2 voxel-wise ANOVA (Session × Stimulus) was conducted to identify any areas that showed a training effect in the Faceabary group. A similar analysis was not possible for the FaceFont group, since these data came from a previously published study that did not include a pre-training imaging session.
Seven areas showed a significant interaction, such that there was relatively greater activation for face-words than patterns in the post-scan compared to the pre-scan. The areas with this pattern included regions associated with phonological processing, such as the left inferior frontal gyrus (BA 44), bilateral anterior insula, and left supramarginal gyrus (see Table 1). These results suggest that Faceabary participants were engaging in phonological decoding even in the absence of instructions to covertly read words. The opposite pattern, greater activation for face-words compared to patterns in the pre-scan than in the post-scan, was observed in the right amygdala and in other regions shown to be part of the distributed network involved in face processing, such as the superior temporal sulcus and orbital frontal cortex (Fox, Iaria, & Barton, 2009; Ishai, 2008; Ishai, Schmidt, & Boesiger, 2005).
Table 1.
Differences for Face-words vs. Patterns in Pre- or Post-training.
| Location | BA | X | Y | Z | # Voxels |
|---|---|---|---|---|---|
| Post > Pre | |||||
| Left | |||||
| Insula | 13 | −33 | 30 | −5 | 55 |
| Superior Frontal Gyrus | 6 | −2 | 23 | 34 | 21 |
| Inferior Frontal Gyrus/Precentral Gyrus | 44/6 | −45 | 20 | 11 | 94 |
| Inferior Parietal Lobule/Supramarginal Gyrus/Angular Gyrus | 40/39 | −39 | −42 | 24 | 55 |
| Right | |||||
| Insula | 13 | 30 | 33 | −2 | 33 |
| Pre > Post | |||||
| Left | |||||
| Superior Frontal Gyrus | 9 | −5 | 70 | 14 | 74 |
| Cingulate | 31 | −2 | −42 | 11 | 31 |
| Right | |||||
| Amygdala | 14 | 14 | −27 | 24 | |
| Middle Temporal Gyrus/Superior Temporal Sulcus | 21 | 52 | 2 | −24 | 25 |
3.3.2. Functional Connectivity
Unexpectedly, even though group (FaceFont vs. Faceabary) differences were observed in the right mFG (the right literature-derived mFG ROI and a functionally-derived mFG ROI in the whole-brain analysis), the pre- vs. post-training analysis within the Faceabary group did not reveal a significant training effect in this brain area. One hypothesis for this null finding is that during pre-training the right mFG was engaged in face processing, and thus showed selectivity for faces compared to patterns, whereas after training the same area was engaged in the phonological mapping of the face-words. In this scenario the right mFG could show similar selectivity for face-words compared to patterns, but for different reasons. To test this hypothesis, we compared functional connectivity in the pre- vs. post-scan, using the right functionally-derived mFG ROI from the whole-brain analysis as the seed region (Table 2). There was greater connectivity with this right mFG ROI in the post-scan than in the pre-scan in several reading-related areas, such as the left mFG (peak: −49, −55, −14), right parietal lobule (Das, Bapi, Padakannaya, & Singh, 2011; Das, Kumar, Bapi, Padakannaya, & Singh, 2009), and left angular gyrus (Horwitz, Rumsey, & Donohue, 1998; Pugh et al., 2000). Conversely, there was relatively greater functional connectivity between this right mFG ROI and surrounding cortex in the right fusiform gyrus (32, −47, −10) before training.
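The core of a seed-based connectivity analysis can be illustrated with a minimal sketch on simulated data. This is not the pipeline used in the study (which compared connectivity across pre- and post-training scans with the right functionally-derived mFG ROI as the seed); the arrays, sizes, and coupling strength below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: 200 timepoints for a seed ROI and 500 other voxels.
n_timepoints, n_voxels = 200, 500
seed = rng.normal(size=n_timepoints)               # mean time series of the seed ROI
brain = rng.normal(size=(n_timepoints, n_voxels))  # time series for every other voxel
brain[:, 0] += 0.8 * seed                          # one voxel made to covary with the seed

def seed_connectivity(seed_ts, voxel_ts):
    """Pearson correlation of the seed time series with each voxel's time series."""
    s = (seed_ts - seed_ts.mean()) / seed_ts.std()
    v = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    return (s @ v) / len(s)

conn = seed_connectivity(seed, brain)
# The coupled voxel should show clearly higher connectivity than the unrelated voxels.
print(conn[0], np.abs(conn[1:]).max())
```

A training effect like the one reported here would correspond to computing such a map separately for the pre- and post-training scans and testing where the connectivity values differ.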
Table 2.
Differences (pre- vs. post-training) in functional connectivity with the right functionally-derived mFG ROI from whole brain analysis.
| Location | BA | X | Y | Z | # Voxels |
|---|---|---|---|---|---|
| Pre > Post | |||||
| Left | |||||
| Cingulate Gyrus | 25 | −23 | −2 | 33 | 27 |
| Right | |||||
| Cingulate/Precuneus | | 23 | −47 | 39 | 31 |
| Fusiform Gyrus | 37 | 32 | −47 | −10 | 21 |
| Post > Pre | |||||
| Left | |||||
| Precentral/Middle Frontal Gyrus | 6 | −41 | −5 | 57 | 32 |
| Medial Frontal Gyrus | 6 | −5 | −11 | 72 | 32 |
| Cingulate Gyrus | 23 | −2 | −23 | 27 | 26 |
| Inferior Temporal Gyrus | 20/37 | −59 | −44 | −16 | 42 |
| Caudate | | −26 | −44 | 12 | 23 |
| Postcentral Gyrus | 5 | −2 | −44 | 69 | 47 |
| Inferior Temporal Gyrus/Fusiform Gyrus | 37 | −49 | −55 | −14 | 46 |
| Lingual Gyrus | 18 | −1 | −73 | 2 | 90 |
| Middle Occipital Gyrus | 19 | −37 | −76 | 2 | 28 |
| Angular Gyrus | 39 | −34 | −76 | 32 | 175 |
| Right | |||||
| Superior Frontal Gyrus | 8 | 2 | 56 | 36 | 196 |
| Superior Frontal Gyrus | 8 | 26 | 29 | 54 | 59 |
| Anterior Cingulate Gyrus | 25 | 5 | 14 | −7 | 75 |
| Middle Frontal Gyrus | 6 | 38 | −2 | 60 | 35 |
| Inferior Temporal Gyrus | 20 | 50 | −5 | −37 | 22 |
| Cerebellar Tonsil | | 11 | −41 | −31 | 30 |
| Inferior/Superior Parietal Lobule | 40/7 | 44 | −55 | 50 | 22 |
| Precuneus/Cuneus | 19 | 5 | −83 | 39 | 252 |
3.3.3. Functional Connectivity Correlations with Behavior
Next, we examined whether any of the areas that showed greater functional connectivity with the right functionally-derived mFG ROI post-training vs. pre-training were correlated with behavioral measures of reading skill (mean FORT z-score). The only area to show a significant correlation between post-training functional connectivity strength with the right mFG ROI and reading skill was the left mFG (peak: −49, −55, −14; see Figure 4). Before training, there was no correlation between left-right mFG functional connectivity and reading skill. After training, a significant correlation emerged, R2 = .31, F(1,13) = 5.35, p = .04.
Figure 4.
Greater functional connectivity between the right functionally-derived mFG ROI and the left fusiform post-training. After training, a significant positive correlation emerged between left and right fusiform functional connectivity and reading skill (mean FORT z-score), such that greater connectivity was associated with greater reading skill.
4. Discussion
The present study used two artificial orthographies to examine the effect of mapping principle on fusiform laterality. We compared a syllable-based and a phoneme-based (alphabetic) writing system: training in the syllable-based system resulted in a more bilateral mFG response to orthographic stimuli than training in the phoneme-based system. Further, within the group trained on the syllable-based system, the degree to which the learned orthography elicited a bilateral response was correlated with greater reading fluency. Greater bilateral mFG processing in the syllable-based system is consistent with studies comparing Chinese, which has a larger mapping-principle grain size, to alphabetic writing systems (English: Nelson et al., 2009; French: Szwed, Qiao, Jobert, Dehaene, & Cohen, 2014). Both Nelson et al. (2009) and Szwed et al. (2014) found that Chinese and alphabetic scripts elicit left mFG word selectivity, whereas only Chinese elicited right mFG word selectivity. Furthermore, an increase in right mFG functional connectivity with the left mFG was observed in the syllable-based group after training, demonstrating that the right mFG can become engaged with the typically left-lateralized reading network after syllable-based training. These results support the notion that phonological demands vary with the grain size of orthographic-phonological mapping, and that these differences can lead to cross-linguistic differences in the lateralization of orthographic processing in the mFG.
As one caveat, we acknowledge that our analysis approach, which involved literature-derived ROIs based upon a VWFA coordinate drawn from the literature, did not ensure that we fully captured the VWFA (and its right hemisphere homologue) in each participant. This is because there is subject-specific variation in the location of the VWFA (Glezer & Riesenhuber, 2013). Nevertheless, the same pattern of results was observed across ROIs centered on alternative loci in the same vicinity (see Footnote 1), which suggests that the results of the present study are robust and likely representative of the results that would have emerged from an analysis involving individually localized VWFA ROIs.
It is also important to note that participants did not perform an explicit task in the scanner. Consequently, potential behavioral differences between the two groups, such as differences in decoding effort or strategies, cannot be ruled out. A related concern is that, due to the different number of graphs to be learned in the two behavioral training paradigms, there were necessary differences in the training protocol that could have elicited different decoding strategies. For example, the FaceFont group practiced the same words over the course of four days, which could have encouraged the adoption of a whole-word decoding strategy. However, several pieces of evidence suggest that this is unlikely, or that it would not have affected our behavioral or neural outcome measures. First, even after many exposures to the same words, FaceFont participants’ word decoding did not show the fluency that would be expected from a sight-word or whole-word decoding strategy. While FaceFont reading latencies became faster, they averaged around 5000 ms after training (Moore, Durisko, et al., 2014), which suggests a more labored decoding strategy. Second, a whole-word reading strategy would not have been efficient or sufficient for story reading, since most words had never been seen before. Lastly, the ‘face-words’ shown in the scanning sessions did not overlap with any of the words used in the behavioral training for either group, so it is unlikely that those particular words would have been decoded using a whole-word ‘sight word’ strategy. Thus, we interpret group differences in mFG laterality as reflecting the use of a writing system with a phoneme- vs. syllable-based mapping principle.
Our results converge with and extend past findings. Mounting evidence supports the notion that visuo-perceptual processing alone cannot explain laterality differences in mFG activation across writing systems. This further supports the proposal that phonological mapping to orthography helps shape the selectivity for words compared to other kinds of visual stimuli in the mFG, and the extent to which the right mFG also shows this selectivity (Hsiao & Lam, 2013). This proposal also provides a framework for reinterpreting past work comparing different writing systems. For example, Duncan and colleagues compared two real writing systems that varied in grain size (morphosyllabic Kanji vs. syllabic Hiragana) and found greater connectivity between the left and right fusiform gyri in the system with the larger grain size of mapping (Kanji) (Duncan et al., 2013). They attributed this finding to greater visual demands imposed by the more visually complex Kanji, which in turn requires greater right fusiform gyrus integration into the left language network. However, two studies have elegantly controlled for visual content (Mei et al., 2012; Yoncheva et al., 2010) while varying whether subjects received alphabetic vs. logographic instruction. Both studies found greater fusiform bilaterality for logographic-trained participants. This suggests that it is the grain size of the linguistic mapping principle, rather than the visuo-perceptual qualities of a graphemic inventory, that determines the laterality of the fusiform gyri for reading. Like others, we find that the mapping principle or unit of instruction of a writing system can change the laterality of orthographic processing within the mFG (Mei et al., 2012; Yoncheva et al., 2010; Yoncheva et al., 2015).
We extend this past work by showing that laterality effects can also be seen with a linguistic mapping principle smaller than that in a logographic system, but larger than a phonemic system: a syllable-based system. Unlike a logographic system, our Faceabary syllable-based system requires phonological decoding as in a phoneme-based system; the basic units are just larger.
Although both FaceFont and Faceabary used a similar set of stimuli, there were visual differences across groups that could have affected fusiform gyrus laterality. One example is that the face-graphemes in the syllable-based Faceabary had emotional content that corresponded with vowel information. The emotional content of the faces in Faceabary may have elicited greater right mFG activation, since emotional valence has been shown to modulate the right amygdala as well as the right-lateralized mFG face area (Adolphs, 2002; Ishai, 2008; Ishai et al., 2005; Vuilleumier & Pourtois, 2007). However, we observed opposite training patterns in the right fusiform gyrus and the right amygdala. There was greater activation in the right amygdala for faces than patterns before training, when the unlearned faces should have been processed as faces, whereas there was less activation for faces than patterns after training. This suggests that any visuo-affective differences between FaceFont and Faceabary are unlikely to have caused the observed difference in mFG laterality after training.
Another difference between the Faceabary and FaceFont stimuli is that the Faceabary group may have needed to attend more closely to fine facial detail. However, one neuroimaging study revealed a right-fusiform advantage for processing faces as wholes and a left-fusiform advantage for processing facial features (Rossion et al., 2000). Similarly, another study reported that successful episodic memory encoding of faces relied on the left fusiform cortex because of the involvement of feature/part information processing (Mei et al., 2010). Thus, if greater featural face-processing demands drove the group differences, we would have expected the opposite result: greater left-fusiform selectivity for Faceabary. The fact that we observed greater selectivity in the right fusiform suggests that the larger mapping-principle grain size is a more plausible explanation than differences in attention to visual features.
Past work also speaks to the converse hypothesis, that the facial expressions in Faceabary would require more holistic, configural processing. For example, Moore et al. (2014) kept the decoding training constant across two alphabetic writing systems that differed in visual processing demands (faces vs. ‘KoreanFont’ line segments). They reasoned that if holistic visual processing drives right mFG engagement, greater right mFG activation should be observed for FaceFont. They observed similar left hemisphere lateralization for both writing systems, which led them to rule out visual processing demands as a major determinant of VWFA lateralization. Mei et al. (2012) and Yoncheva et al. (2010) used exactly the same visual stimuli but varied the decoding training (phoneme vs. whole word), and found greater right mFG engagement for the whole-word-trained participants. Taken together, these results lead us to interpret the current group differences as reflecting differences in linguistic mapping principle rather than in holistic visual processing.
As further evidence, the increase in functional connectivity between the right and left mFG after training demonstrates how the functional underpinnings of the engagement of right mFG can change. It also offers insight into why we observed group differences in the selectivity of the right mFG. Whereas the right mFG ROI, defined from group differences, showed greater connectivity with the neighboring right mFG region before training, when the face-syllable mappings had not yet been learned, it showed greater connectivity with the left mFG after training, when the linguistic mappings had been learned. Finally, the fact that individual differences in right mFG activation and in functional connectivity to the left mFG correlated with a reading outcome measure clearly supports the idea that the right mFG became involved in Faceabary reading.
While the focus of the current paper was to disentangle two potential hypotheses (visuo-spatial processing demands vs. mapping principle) regarding cross-linguistic differences in mFG laterality, the results also contribute to an ongoing debate about what determines the location and function of the VWFA within the left mFG. One hypothesis suggests that the VWFA is fundamentally sensitive to visual statistics, and that its location results from neuronal recycling of inferotemporal cortex used for the detection of shapes in foveal vision (Dehaene & Cohen, 2007). In contrast, a competing hypothesis proposes that the VWFA is sensitive to linguistic and orthographic properties of words (Price & Devlin, 2011), and owes its location to connectivity between the visual and linguistic systems within the mFG. We suggest that these two proposals need not be mutually exclusive. Certainly any neural functioning related to reading must result from some form of neuronal recycling, as reading is a relatively recent invention on an evolutionary timescale. Interestingly, there are laterality differences in the acoustic processing of speech, such that phonemes are left-lateralized, whereas syllable-level acoustics (measured as theta-oscillations) are right-lateralized (Giraud et al., 2007; Hickok & Poeppel, 2007; Morillon, Liégeois-Chauvel, Arnal, Bénar, & Giraud, 2012; Poeppel, Idsardi, & van Wassenhove, 2008). We therefore propose that neuronal recycling is driven by the binding of visual and speech-based information, with the degree of laterality determined by the units of speech associated with the decoding of the orthographic forms.
One last important component of the current study is that participants in both groups learned to read a productive orthography at the story level. The development of a syllable-based orthographic system for English may have practical benefits. Past research with naturally occurring and artificial orthographies supports the idea that learning a syllable-based writing system can improve alphabetic decoding in typically developing children (Asfaha, Kurvers, & Kroon, 2009; Gleitman & Rozin, 1973). Improved decoding due to biphone-bigraph learning, in which the focus is on the syllable level of representation (e.g., ‘pa+at=pat’ is more effective than ‘p+a+t=pat’), has also been observed in alexic and dyslexic individuals (Bowes & Martin, 2007; Friedman & Lott, 2002). Further support comes from a patient with pure alexia and left mFG damage, who had impaired phoneme-grapheme mapping but relatively preserved syllable-grapheme mapping using artificial orthographies (Moore, Brendel, et al., 2014). An interesting possibility for future research is that a syllable-based orthography may encourage decoding strategies that rely to a lesser extent on left hemisphere reading areas. This could be especially relevant to dyslexic readers, who have been shown to have less left-lateralized reading networks (Maisog, Einbinder, Flowers, Turkeltaub, & Eden, 2008; Shaywitz et al., 2002). Thus, future research could test whether dyslexic readers benefit more from a syllable-based orthography than typical readers do, opening a new avenue for investigating remediation strategies for at-risk readers.
Figure 1.
Example stimuli for fMRI experiment. Each group exclusively saw face-words in their learned artificial orthography, in addition to identical KoreanFont and Pattern stimuli.
Footnotes
The same pattern of results was observed using ROIs derived from a ‘quasi-localizer’ defined as KoreanFont vs. Patterns (an approach successfully deployed in our previous work; Moore, Durisko, et al., 2014), as well as ROIs centered on alternative VWFA coordinates reported in published work (Cohen et al., 2002; Dehaene et al., 2001; McCandliss et al., 2003).
References
- Adolphs R. Neural systems for recognizing emotion. Current Opinion in Neurobiology. 2002;12(2):169–177. doi: 10.1016/s0959-4388(02)00301-x.
- Asfaha YM, Kurvers J, Kroon S. Grain size in script and teaching: Literacy acquisition in Ge'ez and Latin. Applied Psycholinguistics. 2009;30(4):709.
- Bolger DJ, Perfetti CA, Schneider W. Cross-cultural effect on the brain revisited: Universal structures plus writing system variation. Human Brain Mapping. 2005;25(1):92–104. doi: 10.1002/hbm.20124.
- Bowes K, Martin N. Longitudinal study of reading and writing rehabilitation using a bigraph–biphone correspondence approach. Aphasiology. 2007;21(6–8):687–701.
- Cohen L, Lehéricy S, Chochon F, Lemer C, Rivaud S, Dehaene S. Language-specific tuning of visual cortex? Functional properties of the Visual Word Form Area. Brain. 2002;125(5):1054. doi: 10.1093/brain/awf094.
- Cox RW. AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical Research. 1996;29:162–173. doi: 10.1006/cbmr.1996.0014.
- Das T, Bapi RS, Padakannaya P, Singh NC. Cortical network for reading linear words in an alphasyllabary. Reading and Writing. 2011;24(6):697–707.
- Das T, Kumar U, Bapi R, Padakannaya P, Singh N. Neural representation of an alphasyllabary—The story of Devanagari. Current Science. 2009;97(7):1033–1038.
- Dehaene S, Cohen L. Cultural recycling of cortical maps. Neuron. 2007;56(2):384–398. doi: 10.1016/j.neuron.2007.10.004.
- Dehaene S, Cohen L. The unique role of the visual word form area in reading. Trends in Cognitive Sciences. 2011;15(6):254–262. doi: 10.1016/j.tics.2011.04.003.
- Dehaene S, Le Clec'H G, Poline J, Le Bihan D, Cohen L. The visual word form area: A prelexical representation of visual words in the fusiform gyrus. Neuroreport. 2002;13(3):321–325. doi: 10.1097/00001756-200203040-00015.
- Dehaene S, Naccache L, Cohen L, Bihan D, Mangin JF, Poline JB, Rivière D. Cerebral mechanisms of word masking and unconscious repetition priming. Nature Neuroscience. 2001;4(7):752–758. doi: 10.1038/89551.
- Duncan KJK, Twomey T, Jones ŌP, Seghier ML, Haji T, Sakai K, Devlin JT. Inter- and intrahemispheric connectivity differences when reading Japanese Kanji and Hiragana. Cerebral Cortex. 2013. doi: 10.1093/cercor/bht015.
- Fox CJ, Iaria G, Barton JJ. Defining the face processing network: Optimization of the functional localizer in fMRI. Human Brain Mapping. 2009;30(5):1637–1651. doi: 10.1002/hbm.20630.
- Friedman RB, Lott SN. Successful blending in a phonological reading treatment for deep alexia. Aphasiology. 2002;16(3):355–372.
- Giraud A-L, Kleinschmidt A, Poeppel D, Lund TE, Frackowiak RS, Laufs H. Endogenous cortical rhythms determine cerebral specialization for speech perception and production. Neuron. 2007;56(6):1127–1134. doi: 10.1016/j.neuron.2007.09.038.
- Gleitman LR, Rozin P. Teaching reading by use of a syllabary. Reading Research Quarterly. 1973:447–483.
- Glezer LS, Riesenhuber M. Individual variability in location impacts orthographic selectivity in the “visual word form area”. The Journal of Neuroscience. 2013;33(27):11221–11226. doi: 10.1523/JNEUROSCI.5002-12.2013.
- Hickok G, Poeppel D. The cortical organization of speech processing. Nature Reviews Neuroscience. 2007;8(5):393–402. doi: 10.1038/nrn2113.
- Hirshorn EA, Fiez JA. Using artificial orthographies for studying cross-linguistic differences in the cognitive and neural profiles of reading. Journal of Neurolinguistics. 2014;31:69–85. doi: 10.1016/j.jneuroling.2014.06.006.
- Horwitz B, Rumsey JM, Donohue BC. Functional connectivity of the angular gyrus in normal reading and dyslexia. Proceedings of the National Academy of Sciences. 1998;95(15):8939–8944. doi: 10.1073/pnas.95.15.8939.
- Hsiao JH, Lam SM. The modulation of visual and task characteristics of a writing system on hemispheric lateralization in visual word recognition—A computational exploration. Cognitive Science. 2013;37(5):861–890. doi: 10.1111/cogs.12033.
- Ishai A. Let's face it: It's a cortical network. Neuroimage. 2008;40(2):415–419. doi: 10.1016/j.neuroimage.2007.10.040.
- Ishai A, Schmidt CF, Boesiger P. Face perception is mediated by a distributed cortical network. Brain Research Bulletin. 2005;67(1):87–93. doi: 10.1016/j.brainresbull.2005.05.027.
- Leff A, Spitsyna G, Plant G, Wise R. Structural anatomy of pure and hemianopic alexia. Journal of Neurology, Neurosurgery & Psychiatry. 2006;77(9):1004–1007. doi: 10.1136/jnnp.2005.086983.
- Liu Y, Dunlap S, Fiez J, Perfetti C. Evidence for neural accommodation to a writing system following learning. Human Brain Mapping. 2007;28(11):1223–1234. doi: 10.1002/hbm.20356.
- Maisog JM, Einbinder ER, Flowers DL, Turkeltaub PE, Eden GF. A meta-analysis of functional neuroimaging studies of dyslexia. Annals of the New York Academy of Sciences. 2008;1145(1):237–259. doi: 10.1196/annals.1416.024.
- McCandliss BD, Cohen L, Dehaene S. The visual word form area: Expertise for reading in the fusiform gyrus. Trends in Cognitive Sciences. 2003;7(7):293–299. doi: 10.1016/s1364-6613(03)00134-7.
- Mei L, Xue G, Lu Z-L, He Q, Zhang M, Xue F, Dong Q. Orthographic transparency modulates the functional asymmetry in the fusiform cortex: An artificial language training study. Brain and Language. 2012. doi: 10.1016/j.bandl.2012.01.006.
- Moore MW, Brendel PC, Fiez JA. Reading faces: Investigating the use of a novel face-based orthography in acquired alexia. Brain and Language. 2014;129:7–13. doi: 10.1016/j.bandl.2013.11.005.
- Moore MW, Durisko C, Perfetti CA, Fiez JA. Learning to read an alphabet of human faces produces left-lateralized training effects in the fusiform gyrus. Journal of Cognitive Neuroscience. 2014;26(4):896–913. doi: 10.1162/jocn_a_00506.
- Morillon B, Liégeois-Chauvel C, Arnal LH, Bénar C-G, Giraud A-L. Asymmetric function of theta and gamma activity in syllable processing: An intra-cortical study. Frontiers in Psychology. 2012;3. doi: 10.3389/fpsyg.2012.00248.
- Nelson JR, Liu Y, Fiez J, Perfetti CA. Assimilation and accommodation patterns in ventral occipitotemporal cortex in learning a second writing system. Human Brain Mapping. 2009;30:810–820. doi: 10.1002/hbm.20551.
- O’Reilly JX, Woolrich MW, Behrens TE, Smith SM, Johansen-Berg H. Tools of the trade: Psychophysiological interactions and functional connectivity. Social Cognitive and Affective Neuroscience. 2012;7(5):604–609. doi: 10.1093/scan/nss055.
- Poeppel D, Idsardi WJ, van Wassenhove V. Speech perception at the interface of neurobiology and linguistics. Philosophical Transactions of the Royal Society B: Biological Sciences. 2008;363(1493):1071–1086. doi: 10.1098/rstb.2007.2160.
- Price CJ, Devlin JT. The myth of the visual word form area. Neuroimage. 2003;19(3):473–481. doi: 10.1016/s1053-8119(03)00084-3.
- Price CJ, Devlin JT. The interactive account of ventral occipitotemporal contributions to reading. Trends in Cognitive Sciences. 2011;15(6):246–253. doi: 10.1016/j.tics.2011.04.001.
- Pugh KR, Mencl WE, Shaywitz BA, Shaywitz SE, Fulbright RK, Constable RT, Fletcher JM. The angular gyrus in developmental dyslexia: Task-specific differences in functional connectivity within posterior cortex. Psychological Science. 2000;11(1):51. doi: 10.1111/1467-9280.00214.
- Richlan F, Kronbichler M, Wimmer H. Functional abnormalities in the dyslexic brain: A quantitative meta-analysis of neuroimaging studies. Human Brain Mapping. 2009;30(10):3299–3308. doi: 10.1002/hbm.20752.
- Shaywitz BA, Shaywitz SE, Pugh KR, Mencl WE, Fulbright RK, Skudlarski P, Lyon GR. Disruption of posterior brain systems for reading in children with developmental dyslexia. Biological Psychiatry. 2002;52(2):101–110. doi: 10.1016/s0006-3223(02)01365-3.
- Szwed M, Qiao E, Jobert A, Dehaene S, Cohen L. Effects of literacy in early visual and occipitotemporal areas of Chinese and French readers. Journal of Cognitive Neuroscience. 2014;26(3):459–475. doi: 10.1162/jocn_a_00499.
- Tan LH, Liu H, Perfetti CA, Spinks JA, Fox PT, Gao J. The neural system underlying Chinese logograph reading. Neuroimage. 2001;13:836–846. doi: 10.1006/nimg.2001.0749.
- Tottenham N, Tanaka JW, Leon AC, McCarry T, Nurse M, Hare TA, Nelson C. The NimStim set of facial expressions: Judgments from untrained research participants. Psychiatry Research. 2009;168(3):242. doi: 10.1016/j.psychres.2008.05.006.
- Vigneau M, Beaucousin V, Herve P, Duffau H, Crivello F, Houde O, Tzourio-Mazoyer N. Meta-analyzing left hemisphere language areas: Phonology, semantics, and sentence processing. Neuroimage. 2006;30(4):1414–1432. doi: 10.1016/j.neuroimage.2005.11.002.
- Vuilleumier P, Pourtois G. Distributed and interactive brain mechanisms during emotion face perception: evidence from functional neuroimaging. Neuropsychologia. 2007;45(1):174–194. doi: 10.1016/j.neuropsychologia.2006.06.003. [DOI] [PubMed] [Google Scholar]
- Warrington EK, Shallice T. Word-form dyslexia. Brain: a journal of neurology. 1980;103(1):99–112. doi: 10.1093/brain/103.1.99. [DOI] [PubMed] [Google Scholar]
- Yoncheva YN, Blau VC, Maurer U, McCandliss BD. Attentional focus during learning impacts N170 ERP responses to an artificial script. Developmental neuropsychology. 2010;35(4):423–445. doi: 10.1080/87565641.2010.480918. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yoncheva YN, Wise J, McCandliss B. Hemispheric specialization for visual words is shaped by attention to sublexical units during initial learning. Brain and Language. 2015;145:23–33. doi: 10.1016/j.bandl.2015.04.001. [DOI] [PMC free article] [PubMed] [Google Scholar]