
Table 1: Cross-situational word learning (CSWL) models in the literature.

Each entry below lists the original table columns as labelled fields: Input, Formalism, Key features, Constraints/Biases, Captures (split into Data and Behaviour), Implications, and Limitations.

Hypothesis Testing Models
Siskind (1996)
Input: Symbolic
Formalism: Discrete rule-based inference
Key features: Incremental; pre-defined rules detect noise and homonymy; heuristic functions disambiguate word senses under homonymy (a minimal sketch follows this entry)
Constraints/Biases: Mutual exclusivity (ME) and coverage, to narrow down the set of candidate meanings for a word; composition
Data: Artificially generated corpus
Behaviour: Learns under variable vocabulary size and degree of referential uncertainty; fast mapping; bootstrapping from partial knowledge
Implications: Incremental CSWL systems with mutual exclusivity can solve lexical acquisition problems like those children face
Limitations: Cannot revise the meaning of a word once it is considered learned; sensitive to noise and missing data

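Siskind’s published system is much richer than any short excerpt can show, but its core cross-situational move can be conveyed in a minimal sketch: each exposure intersects a word’s candidate meanings with the referents currently in view, and mutual exclusivity strips out meanings already uniquely claimed by another word. All identifiers below are illustrative, not from the paper.

```python
# Minimal sketch of cross-situational narrowing plus mutual exclusivity,
# loosely in the spirit of Siskind (1996); identifiers are illustrative.

def update_hypotheses(hypotheses, utterance, referents):
    """Intersect each word's candidate meanings with the referents in view."""
    for word in utterance:
        if word not in hypotheses:
            hypotheses[word] = set(referents)   # first exposure: anything present
        else:
            hypotheses[word] &= set(referents)  # keep meanings seen every time

def apply_mutual_exclusivity(hypotheses):
    """Remove meanings uniquely claimed by one word from all other words."""
    resolved = {next(iter(s)) for s in hypotheses.values() if len(s) == 1}
    for word, s in hypotheses.items():
        if len(s) > 1:
            hypotheses[word] = (s - resolved) or s  # never empty a candidate set

hyps = {}
update_hypotheses(hyps, ["ball", "dog"], {"BALL", "DOG", "TABLE"})
update_hypotheses(hyps, ["ball"], {"BALL", "CUP"})
apply_mutual_exclusivity(hyps)
print({w: sorted(s) for w, s in hyps.items()})
# -> {'ball': ['BALL'], 'dog': ['DOG', 'TABLE']}
```

Note the limitation flagged above is visible even here: once a set shrinks, evidence lost to noise cannot restore a discarded meaning.
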
Frank, Goodman & Tenenbaum (2009)
Input: Sub-symbolic audiovisual
Formalism: Bayesian
Key features: Batch processing; uses the speaker’s intention to derive mappings
Constraints/Biases: Speaker’s intent is known
Data: Small corpus of mother-infant interactions from the CHILDES dataset
Behaviour: ME; fast mapping; generalizes from social cues
Implications: Some language-specific constraints, such as ME, may not be necessary
Limitations: Learns only a small lexicon; arbitrary representation of the speaker’s intention

Trueswell et al. (2013)
Input: Symbolic
Formalism: Mathematical
Key features: Incremental; retains one referent per word at a time; minimal free parameters (a propose-but-verify sketch follows this entry)
Constraints/Biases: Some degree of failure to recall
Data: Trueswell et al. (2013)
Behaviour: Captures participants’ knowledge (or lack thereof) of referents from preceding trials
Implications: Adults retain only one hypothesis about a word’s meaning at each learning instance
Limitations: Arbitrary assumptions about recall probability

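The single-hypothesis account these data motivated is often summarized as propose-but-verify: keep one conjecture per word, retain it only if it is recalled and confirmed by the next scene, and re-propose at random otherwise. A minimal sketch, with an assumed recall parameter ALPHA and a simplified re-sampling rule:

```python
# Minimal "propose but verify" learner in the spirit of the single-hypothesis
# model of Trueswell et al. (2013); ALPHA and the re-sampling rule are
# illustrative simplifications, not the paper's exact parameterization.
import random

ALPHA = 0.6    # assumed probability of recalling the previous hypothesis
lexicon = {}   # word -> single hypothesized referent

def trial(word, referents):
    prev = lexicon.get(word)
    remembered = prev is not None and random.random() < ALPHA
    if remembered and prev in referents:
        return prev                                    # verified: keep it
    lexicon[word] = random.choice(sorted(referents))   # otherwise re-propose
    return lexicon[word]

random.seed(1)
for scene in [{"DOG", "CUP"}, {"DOG", "HAT"}, {"DOG", "SHOE"}]:
    print(trial("dax", scene))   # the single current guess for "dax"
```
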
Sadeghi, Scheutz & Krause (2017)
Input: Sub-symbolic audiovisual
Formalism: Bayesian, embodied
Key features: Incremental; uses the speaker’s referential intentions; robotic implementation; robust to noise
Constraints/Biases: Speaker’s intent; ME; limited memory
Data: Simple utterances from a human to the robot
Behaviour: Learns under referential uncertainty
Implications: Incremental models help avoid the need for excessive memory
Limitations: Tested only on a very limited data set and an artificial experiment

Najnin & Banerjee (2018)
Input: Symbolic
Formalism: Connectionist
Key features: Integrates socio-pragmatic theory; batch processing; deep reinforcement learning; uses four reinforcement algorithms
Constraints/Biases: Novel-Noun Novel-Category (N3C); attentional and prosodic cues
Data: Artificial experiments on two transcribed video clips of mother-infant interaction from the CHILDES corpus
Behaviour: Referential uncertainty; N3C bias
Implications: Reinforcement learning models are well suited to word learning
Limitations: Learns one-to-one mappings only; no modelling of empirical experiments

Associative Learning Models
Yu & Ballard (2007)
Input: Sub-symbolic audio; symbolic visual
Formalism: Probabilistic
Key features: Batch processing; uses expectation maximization; adds the speaker’s visual attention and social cues in speech (a toy EM sketch follows this entry)
Constraints/Biases: Visual attentional and social cues
Data: 600 mother utterances from the CHILDES corpus
Behaviour: Referential uncertainty; role of prosodic and visual cues
Implications: Speakers’ attentional and prosodic cues guide CSWL
Limitations: No modelling of empirical results

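Yu & Ballard frame word-referent mapping as a translation problem estimated with expectation maximization; their full model additionally weights the input by attentional and prosodic cues, which the toy loop below omits. The corpus here is invented for illustration:

```python
# Toy EM estimation of word-referent translation probabilities, in the spirit
# of the translation-model formulation Yu & Ballard (2007) build on.
from collections import defaultdict

corpus = [(["look", "dog"], ["DOG", "FLOOR"]),   # (utterance, referents in view)
          (["the", "dog"], ["DOG"]),
          (["look", "cup"], ["CUP", "FLOOR"])]

t = defaultdict(lambda: 1.0)   # t[(word, ref)], implicitly uniform at start

for _ in range(20):            # EM iterations
    counts = defaultdict(float)
    totals = defaultdict(float)
    for words, refs in corpus:
        for r in refs:
            z = sum(t[(w, r)] for w in words)   # E-step: softly align r to words
            for w in words:
                p = t[(w, r)] / z
                counts[(w, r)] += p
                totals[w] += p
    for (w, r), c in counts.items():            # M-step: renormalize per word
        t[(w, r)] = c / totals[w]

for r in ["DOG", "FLOOR"]:                      # "dog" aligns to DOG, not FLOOR
    print("dog ->", r, round(t[("dog", r)], 3))
```
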
Fazly, Alishahi & Stevenson (2010)
Input: Sub-symbolic audio; symbolic visual
Formalism: Probabilistic
Key features: Incremental; calculates and accumulates a probability for each word-object pair (a simplified update rule follows this entry)
Constraints/Biases: Prior knowledge bias
Data: Artificial experiments on the CHILDES corpus
Behaviour: Referential uncertainty; ME bias
Implications: Inbuilt biases such as ME are not necessary; development is shaped primarily by the input
Limitations: Basic CSWL; no modelling of empirical results

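The mechanism can be sketched compactly: for each referent in a scene, alignment weight is shared among the utterance’s words in proportion to their current meaning probabilities, and the resulting soft counts accumulate into the association table. The smoothing constants below are stand-ins, not the published parameterization:

```python
# Simplified incremental alignment-and-accumulate rule in the spirit of
# Fazly, Alishahi & Stevenson (2010); LAMBDA and N_REFS are illustrative.
from collections import defaultdict

assoc = defaultdict(float)   # accumulated word-referent evidence
LAMBDA = 1e-4                # assumed smoothing constant
N_REFS = 10                  # assumed size of the referent vocabulary

def meaning_prob(word, ref):
    """Current (smoothed) probability that `word` means `ref`."""
    total = sum(v for (w, _), v in assoc.items() if w == word)
    return (assoc[(word, ref)] + LAMBDA) / (total + LAMBDA * N_REFS)

def process_utterance(words, referents):
    for ref in referents:
        weights = {w: meaning_prob(w, ref) for w in words}   # alignment step
        z = sum(weights.values())
        for w in words:
            assoc[(w, ref)] += weights[w] / z                # accumulate evidence

process_utterance(["look", "dog"], ["DOG", "FLOOR"])
process_utterance(["the", "dog"], ["DOG"])
print(round(meaning_prob("dog", "DOG"), 3))   # ~0.73 and rising with evidence
```

No ME constraint appears anywhere in the update; the competition in the alignment step is what produces ME-like behaviour, which is the point the authors press.
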
Yu & Smith (2011)
Input: Eye-tracking data
Formalism: Mathematical
Key features: Incremental; moment-by-moment modelling; uses eye fixations to build associations
Constraints/Biases: Selective visual attention
Data: Yu & Smith (2011)
Behaviour: Learning under referential uncertainty in infants; selective attention
Implications: Visual attention drives learning; learners actively select which word–object mappings to store
Limitations: Mathematical treatment of infant gaze data; not a model of audiovisual input

Nematzadeh, Fazly & Stevenson (2012)
Input: Sub-symbolic audio; symbolic visual
Formalism: Probabilistic
Key features: Extends Fazly et al. (2010) with forgetting and attention to novelty
Constraints/Biases: Prior knowledge; attention to novelty; memory
Data: Artificial experiments on a small corpus from the CHILDES database
Behaviour: Referential uncertainty; spacing effect
Implications: Memory and attention processes underlie spacing-effect behaviours
Limitations: No modelling of empirical results

Kachergis, Yu & Shiffrin (2012, 2013, 2017)
Input: Symbolic
Formalism: Mathematical
Key features: Incremental; learned associations and novel items compete for attention; associations decay; working memory supports successive repeated associations (a toy update rule follows this entry)
Constraints/Biases: Familiarity/prior knowledge; novelty/entropy for attentional shifting
Data: Kachergis, Yu & Shiffrin (2012, 2013, 2017)
Behaviour: ME as well as its relaxation; sensitivity to variance in input frequency and contextual diversity
Implications: ME can arise in associative mechanisms through attentional shifting and memory decay
Limitations: The bias to associate uncertain words with uncertain objects is only similar to ME; unexplained parametric variations

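A caricature of this model family, assuming illustrative parameter values: associations decay every trial, and a fixed budget of new strength is divided among the word-object pairs present in proportion to their prior strength (familiarity) and the entropy (uncertainty) of the items involved:

```python
# Toy biased-association update in the spirit of Kachergis, Yu & Shiffrin
# (2012); ALPHA, CHI, LAM and the exact bias form are illustrative.
import math
from collections import defaultdict

M = defaultdict(lambda: 0.01)      # association strengths, small baseline
ALPHA, CHI, LAM = 0.95, 1.0, 1.0   # assumed decay, trial budget, entropy bias

def entropy(strengths):
    z = sum(strengths)
    return -sum((s / z) * math.log(s / z) for s in strengths)

def trial(words, objects, all_words, all_objects):
    for key in list(M):
        M[key] *= ALPHA                                   # memory decay
    h_w = {w: entropy([M[(w, o)] for o in all_objects]) for w in words}
    h_o = {o: entropy([M[(w, o)] for w in all_words]) for o in objects}
    bias = {(w, o): M[(w, o)] * math.exp(LAM * (h_w[w] + h_o[o]))
            for w in words for o in objects}
    z = sum(bias.values())
    for pair, b in bias.items():
        M[pair] += CHI * b / z                            # share out new strength

WORDS, OBJECTS = ["dog", "cup", "hat"], ["DOG", "CUP", "HAT"]
trial(["dog", "cup"], ["DOG", "CUP"], WORDS, OBJECTS)
trial(["dog", "hat"], ["DOG", "HAT"], WORDS, OBJECTS)
print(round(M[("dog", "DOG")], 2), round(M[("dog", "HAT")], 2))   # ~1.09 ~0.05
```

With these toy numbers the familiar dog-DOG pairing absorbs most of the trial’s budget, while the novel word hat is drawn toward the novel object HAT, illustrating how ME-like behaviour can emerge from purely associative machinery.
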
Yurovsky, Fricker, Yu & Smith (2014)
Input: Symbolic
Formalism: Mathematical, Bayesian
Key features: Compares the roles of full and partial knowledge in generating mutual exclusivity behaviour
Constraints/Biases: Prior knowledge bias
Data: Yurovsky et al. (2014)
Behaviour: ME; bootstrapping from partial knowledge
Implications: Partial knowledge can help disambiguate word meanings
Limitations: Specific to the analysis of the role of prior knowledge reported in that work

Räsänen & Rasilo (2015)
Input: Sub-symbolic audiovisual
Formalism: Probabilistic
Key features: Transition-probability based; joint learning of word segmentation and word-object mappings from continuous speech
Constraints/Biases: Transition probability (TP) analysis
Data: Yu & Smith (2007); Yurovsky, Yu & Smith (2013); Caregiver Y2 UK corpus
Behaviour: ME; sensitivity to varying degrees of referential uncertainty
Implications: CSWL can aid the bootstrapping of speech segmentation and vice versa
Limitations: Hard allocation of TPs into disjoint referential contexts; no experiments on development

Bassani & Araujo (2019)
Input: Sub-symbolic audiovisual
Formalism: Modular connectionist
Key features: Incremental trial-by-trial learning; raw images of objects and streams of phonemes as input
Constraints/Biases: Time-Varying Self-Organizing Maps
Data: Yurovsky et al. (2013); Yu & Smith (2007); Trueswell et al. (2013)
Behaviour: Referential uncertainty; local/global competition; context-sensitive association learning
Implications: Time-Varying Self-Organizing Maps capture co-variations better than Hebbian learning
Limitations: Does not benefit from prior knowledge when forming new associations

Mixed Models
Fontanari, Tikhanoff, Cangelosi & Perlovsky (2009)
Input: Symbolic
Formalism: Neural Modelling Fields (NMF)
Key features: Batch processing of input; NMF categorization mechanism
Constraints/Biases: Noise/clutter detection; parametric models
Data: Small artificial dataset
Behaviour: Referential uncertainty
Implications: Fuzziness in noise can be exploited to find the correct associations
Limitations: The number of models is chosen a priori; no modelling of any empirical data

Kachergis & Yu (2018)
Input: Symbolic
Formalism: Mathematical
Key features: Extends Kachergis et al. (2012) with uncertain responses during training
Constraints/Biases: Uncertain response probability
Data: Kachergis & Yu (2018)
Behaviour: Captures participant accuracy and uncertainty across learning trials
Implications: Neither pure hypothesis-testing (HT) nor extreme associative-learning (AL) models can account for CSWL behaviours
Limitations: The processes and representations that generate uncertain responses are not specified

Smith, Smith & Blythe (2011)
Input: Symbolic
Formalism: Probabilistic analysis
Key features: Comparison of different possible strategies in an associative model
Constraints/Biases: None listed
Data: Smith et al. (2011)
Behaviour: Learning under varying referential uncertainty and interleaving of trials
Implications: Learners have a continuum of possible strategies, modulated by task difficulty
Limitations: The mathematical treatment is specific to the authors’ own empirical work

Stevens, Gleitman, Trueswell & Yang (2017)
Input: Sub-symbolic audio; symbolic visual
Formalism: Probabilistic
Key features: Incremental; combines selection, ME, reward-based learning and associative learning (a toy pursuit-style sketch follows this entry)
Constraints/Biases: Mutual exclusivity
Data: CHILDES; Cartmill et al. (2013); Yu & Smith (2007); Trueswell et al. (2013); Koehne et al. (2013)
Behaviour: CSWL under varying uncertainty
Implications: Adults can retain multiple associations but always a single favoured hypothesis
Limitations: Does not account for retaining multiple hypotheses for one word

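The selection-plus-reward mechanism resembles a pursuit procedure: the favoured referent for a word is rewarded when it appears in the scene and penalized otherwise, at which point a new candidate is pursued. A minimal sketch with an assumed learning rate GAMMA:

```python
# Toy pursuit-style learner in the spirit of Stevens, Gleitman, Trueswell &
# Yang (2017); GAMMA and the exact reward/penalty rules are illustrative.
import random
from collections import defaultdict

P = defaultdict(dict)   # P[word][referent] -> association value in [0, 1]
GAMMA = 0.2             # assumed learning rate

def trial(word, referents):
    hyp = max(P[word], key=P[word].get, default=None)   # favoured hypothesis
    if hyp in referents:
        P[word][hyp] += GAMMA * (1 - P[word][hyp])      # confirmed: reward
    else:
        if hyp is not None:
            P[word][hyp] *= 1 - GAMMA                   # disconfirmed: penalize
        new = random.choice(sorted(referents))          # pursue a new candidate
        P[word][new] = P[word].get(new, 0.0) + GAMMA * (1 - P[word].get(new, 0.0))

random.seed(2)
for scene in [{"DOG", "CUP"}, {"DOG", "HAT"}, {"DOG", "SHOE"}, {"DOG", "CUP"}]:
    trial("dax", scene)
print(max(P["dax"], key=P["dax"].get))   # current favoured referent for "dax"
```

Because DOG is present in every scene, once it is pursued it can only be rewarded, which is how a single favoured hypothesis stabilizes while weaker associations linger in the table.
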
Taniguchi, Taniguchi & Cangelosi (2017)
Input: Sub-symbolic audiovisual
Formalism: Embodied, Bayesian, generative
Key features: Unsupervised machine learning based on a Bayesian generative model; robotic implementation; word learning for objects and actions
Constraints/Biases: Mutual exclusivity; taxonomic bias
Data: Artificial experiment on a limited word-referent set
Behaviour: Learning under referential uncertainty; learning of objects and actions
Implications: The mutual exclusivity constraint is effective for lexical acquisition in cross-situational learning
Limitations: Does not deal with the major issues in CSWL

Yurovsky & Frank (2015)
Input: Symbolic
Formalism: Probabilistic
Key features: Incremental; sharing intention/attention across referents creates a spectrum from AL to HT (a one-parameter illustration follows this entry)
Constraints/Biases: Intention distribution and memory decay
Data: Yurovsky & Frank (2015)
Behaviour: CSWL under varying within-trial and between-trial uncertainty
Implications: CSWL is distributional but modulated by limited attention and memory
Limitations: The even distribution of attention among non-hypothesized referents is arbitrary

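The spectrum idea can be conveyed with a one-parameter toy: each trial grants a share RHO of attention (and hence associative strength) to the currently favoured referent and spreads the remainder evenly over the rest, so RHO close to 1/|scene| behaves like pure associative learning while RHO = 1 behaves like strict hypothesis testing. The update rule below is an illustration, not the published model:

```python
# One-parameter attention-sharing toy in the spirit of Yurovsky & Frank
# (2015); RHO and the update rule are illustrative simplifications.
from collections import defaultdict

A = defaultdict(float)   # attention-weighted association strengths
RHO = 0.8                # assumed attention share for the favoured referent

def trial(word, referents):
    favoured = max(referents, key=lambda r: A[(word, r)])   # current best guess
    others = [r for r in referents if r != favoured]
    A[(word, favoured)] += RHO
    for r in others:
        A[(word, r)] += (1 - RHO) / len(others)   # spread the remainder evenly

for scene in [["DOG", "CUP"], ["DOG", "HAT"], ["DOG", "SHOE"]]:
    trial("dax", scene)
print(max(["DOG", "CUP", "HAT", "SHOE"], key=lambda r: A[("dax", r)]))  # DOG
```

The entry’s stated limitation shows up directly in the last line of the update: the non-favoured referents split the leftover attention evenly, an assumption the model does not justify.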