NeuroImage. 2019 Oct 1;199:325–335. doi: 10.1016/j.neuroimage.2019.06.003

Table 1.

Previous studies reporting left vOT activation during reading: A MEDLINE search was conducted (from January 2000 to October 2018) using the keywords (i) ‘Reading’, (ii) ‘fMRI’ or ‘magnetic resonance imaging’ and (iii) ‘occipitotemporal’, ‘occipito-temporal’, or ‘visual word form area’ to identify papers that had reported activation during reading in left vOT. Relevant references within these articles also directed us to other papers that were considered in the literature review. Altogether, we identified 213 articles. We then excluded: (i) reviews and meta-analyses (i.e. those not reporting original research), (ii) effects from subjects who were not neurologically or psychiatrically “normal” adults, or who had atypical learning, (iii) effects that were not related to visually presented words or pseudowords, (iv) effects not reported in standardized coordinates, (v) results of contrasts that compared visual stimuli to rest or fixation (because it was impossible to determine the level of cognitive processing that was driving activation), (vi) single case studies, (vii) coordinates related to laterality indices, (viii) effects in predefined regions of interest (region-based analyses), and (ix) studies published in non-English journals.

Where appropriate, stereotactic Talairach coordinates were converted into Montreal Neurological Institute (MNI) space. For each study, we reported the location of the left vOT activation peak. The median of all vOT peaks is [x = −43 mm, y = −58 mm, z = −14.5 mm]; these per-axis medians can be recomputed from the coordinates listed in the table (see the sketch after the table).

Activation contrasts were categorised as being related to: (1) changes in task demands, where subjects performed different tasks with the same set of stimuli, or (2) changes in stimulus demands, where subjects performed the same task with different sets of stimuli. Task-driven contrasts were further categorised into those primarily driven by visual (e.g. letter detection versus phoneme detection), semantic (e.g. semantic versus identity one-back matching), or general demands (e.g. one-back matching versus passive viewing). Stimulus-driven contrasts were further categorised into those primarily driven by visual differences (e.g. written words versus pictures of objects), linguistic content (e.g. words versus false fonts), a combination of visual differences and linguistic content (e.g. words versus checkerboards), semantic content (e.g. high versus low imageable words), general demands (e.g. unfamiliar versus familiar words), or stimulus primes (i.e. less activation when stimuli were preceded by identical ones). In some papers, superior peaks at z ≥ −12 mm were labelled as inferior occipital gyrus instead of vOT.
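The caption notes that Talairach coordinates were converted to MNI space where appropriate, but does not name a specific transform. As an illustration only, the sketch below uses one widely circulated option, the inverse of Matthew Brett's piecewise affine mni2tal approximation; the function name tal2mni and the example coordinate are ours and should not be read as the authors' exact procedure.

```python
# Illustrative Talairach -> MNI conversion: invert Brett's piecewise affine
# mni2tal approximation. This is not necessarily the transform used in the
# paper; it is shown only to make the conversion step concrete.
import numpy as np

# mni2tal matrices: the z scaling differs above vs. below the AC plane.
_ABOVE_AC = np.array([[0.9900,  0.0000, 0.0000],
                      [0.0000,  0.9688, 0.0460],
                      [0.0000, -0.0485, 0.9189]])
_BELOW_AC = np.array([[0.9900,  0.0000, 0.0000],
                      [0.0000,  0.9688, 0.0420],
                      [0.0000, -0.0485, 0.8390]])

def tal2mni(x, y, z):
    """Approximate MNI coordinate (mm) for a Talairach coordinate (mm)."""
    tal = np.array([x, y, z], dtype=float)
    # Pick the branch by the sign of z, then invert the mni2tal mapping.
    mat = _BELOW_AC if z < 0 else _ABOVE_AC
    return np.round(np.linalg.solve(mat, tal), 1)

# Hypothetical example: a ventral occipitotemporal Talairach peak.
print(tal2mni(-42, -54, -12))   # -> roughly [-42.4 -55.  -17.5]
```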

Study  x  y  z  Factor driving activation
(x, y, z: MNI coordinates in mm)
Cohen et al. (2002) −39 −57 −9 Stimuli: visual/linguistic content
Binder et al. (2005a) −42 −55 −10 Stimuli: general demands
Chee et al. (2000)* −43 −56 −10 Task: semantic demands
Xu et al. (2015) −40 −56 −10 Stimuli & task: visual/linguistic
Bruno et al. (2008) −46 −56 −11 Stimuli: general demands
Xue et al. (2006) −42 −53 −12 Task: general demands
Nosarti et al. (2010) −44 −54 −12 Stimuli: general demands
Dehaene et al. (2010) −45 −57 −12 Stimuli: primes
Stevens et al. (2017) −52 −49 −13 Stimuli: visual content
Weiss and Booth (2017) −36 −48 −14 Stimuli: general demands
Hayashi et al. (2014) −48 −54 −14 Stimuli: visual/linguistic content
Purcell et al. (2011) −40 −56 −14 Stimuli: visual/linguistic content



Quinn et al. (2017) −42 −60 −8 Stimuli: general demands
Dehaene et al. (2004) −44 −64 −8 Stimuli: primes
Peng et al. (2003)* −43 −66 −9 Stimuli: linguistic content
Guo and Burgund (2010)* −43 −70 −9 Task: visual demands
Schurz et al. (2010) −44 −60 −10 Stimuli: general demands
Woollams et al. (2011) −44 −62 −10 Stimuli: general demands
Weiss and Booth (2017) −46 −62 −10 Stimuli: general demands
Cohen et al. (2008) −42 −70 −10 Stimuli: linguistic content
Sussman et al. (2018) −40 −62 −10 Stimuli: visual/linguistic content
Twomey et al. (2013) −45 −58 −11 Stimuli: general demands
Kiehl et al. (1999) −41 −60 −12 Stimuli & task: visual/linguistic
Carreiras et al. (2007) −40 −66 −12 Stimuli: visual/linguistic content
Wimmer et al. (2016) −48 −58 −14 Stimuli: general demands
Wright et al. (2008) −48 −58 −14 Task: semantic demands



Yarkoni et al. (2008)* −44 −52 −15 Stimuli: general demands
Mongelli et al. (2017) −44 −55 −15 Stimuli: visual content
Cohen et al. (2002) −42 −57 −15 Stimuli: visual/linguistic content
Sandak et al. (2004) −45 −50 −16 Stimuli: semantic content
Richardson et al. (2011) −40 −54 −16 Stimuli: linguistic content
Danelli et al. (2013) −40 −56 −16 Stimuli: visual/linguistic content
Binder et al. (2005b) −42 −52 −17 Stimuli: general demands
Kronbichler et al. (2004) −42 −50 −18 Stimuli: general demands
Szwed et al. (2014) −42 −54 −18 Stimuli: linguistic content
Schuster et al. (2015) −39 −46 −20 Stimuli: visual/linguistic content
Dehaene et al. (2001) −44 −52 −20 Stimuli: primes
Thesen et al. (2012) −46 −52 −20 Stimuli: linguistic content



Chee et al. (2000)* −43 −60 −15 Task: semantic demands
Kronbichler et al. (2009) −48 −60 −15 Stimuli: general demands
Cohen et al. (2003) −42 −63 −15 Stimuli: visual/linguistic content
Xue and Poldrack (2007) −39 −66 −15 Stimuli: primes
Kao et al. (2010) −41 −58 −16 Stimuli: linguistic content
Kherif et al. (2011) −40 −58 −16 Stimuli: primes
Cohen et al. (2004) −44 −64 −16 Task: visual demands
Mechelli et al. (2003) −44 −64 −16 Stimuli: general demands
Wang et al. (2018) −48 −64 −16 Stimuli: semantic content
Chee et al. (2003)* −45 −58 −17 Stimuli: general demands
Booth et al. (2002) −42 −60 −18 Stimuli: visual/linguistic content
Devlin et al. (2006) −42 −60 −18 Stimuli: linguistic content
Kronbichler et al. (2007) −48 −60 −18 Stimuli: general demands
Danelli et al. (2015) −40 −64 −18 Stimuli: visual/linguistic content
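As a quick check on the reported median peak, the per-axis medians of the 52 coordinates transcribed from the table above can be recomputed in a few lines of Python (NumPy assumed); this reproduces [x = −43, y = −58, z = −14.5].

```python
# Per-axis medians of the 52 left vOT peaks listed in Table 1.
import numpy as np

peaks = np.array([
    (-39, -57,  -9), (-42, -55, -10), (-43, -56, -10), (-40, -56, -10),
    (-46, -56, -11), (-42, -53, -12), (-44, -54, -12), (-45, -57, -12),
    (-52, -49, -13), (-36, -48, -14), (-48, -54, -14), (-40, -56, -14),
    (-42, -60,  -8), (-44, -64,  -8), (-43, -66,  -9), (-43, -70,  -9),
    (-44, -60, -10), (-44, -62, -10), (-46, -62, -10), (-42, -70, -10),
    (-40, -62, -10), (-45, -58, -11), (-41, -60, -12), (-40, -66, -12),
    (-48, -58, -14), (-48, -58, -14),
    (-44, -52, -15), (-44, -55, -15), (-42, -57, -15), (-45, -50, -16),
    (-40, -54, -16), (-40, -56, -16), (-42, -52, -17), (-42, -50, -18),
    (-42, -54, -18), (-39, -46, -20), (-44, -52, -20), (-46, -52, -20),
    (-43, -60, -15), (-48, -60, -15), (-42, -63, -15), (-39, -66, -15),
    (-41, -58, -16), (-40, -58, -16), (-44, -64, -16), (-44, -64, -16),
    (-48, -64, -16), (-45, -58, -17), (-42, -60, -18), (-42, -60, -18),
    (-48, -60, -18), (-40, -64, -18),
])

print(np.median(peaks, axis=0))   # expected: [-43.  -58.  -14.5]
```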