Published in final edited form as: Cogn Sci. 2008;32(4):741–754. doi:10.1080/03640210802066964

Fig. 1.

Diagram of the sequence encoder architecture, with the numbers of units used in Simulation 1 shown for each group. Note: These numbers were determined by trial and error to be sufficient to support near-asymptotic performance on the training sequences. Solid arrows denote full connectivity with learned weights, and dashed arrows denote one-to-one copy connections. Rectangular groupings denote external (prescribed) representations coded over localist units, and oval groupings denote internal (learned) representations distributed over hidden units. SRN = simple recurrent network.
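To make the connectivity conventions in the caption concrete, the sketch below shows one time step of an Elman-style simple recurrent network: solid arrows correspond to full, learned weight matrices, and the dashed arrows correspond to a one-to-one copy of the previous hidden state into the context units. This is only an illustrative sketch; the layer sizes, sigmoid activation, and weight initialization are placeholder assumptions, not the unit counts or training details from the figure or the paper.

```python
# Minimal SRN sketch (assumed details; sizes are NOT the figure's unit counts).
import numpy as np

rng = np.random.default_rng(0)

N_INPUT, N_HIDDEN, N_OUTPUT = 20, 50, 20   # hypothetical group sizes

# Solid arrows: fully connected, learned weights.
W_ih = rng.normal(scale=0.1, size=(N_HIDDEN, N_INPUT))    # input  -> hidden
W_ch = rng.normal(scale=0.1, size=(N_HIDDEN, N_HIDDEN))   # context -> hidden
W_ho = rng.normal(scale=0.1, size=(N_OUTPUT, N_HIDDEN))   # hidden -> output
b_h = np.zeros(N_HIDDEN)
b_o = np.zeros(N_OUTPUT)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def srn_step(x_t, context):
    """One time step: context is an exact copy of the previous hidden state."""
    hidden = sigmoid(W_ih @ x_t + W_ch @ context + b_h)
    output = sigmoid(W_ho @ hidden + b_o)
    return output, hidden

def run_sequence(inputs):
    """Process a sequence of localist (one-hot) input vectors."""
    context = np.zeros(N_HIDDEN)              # context starts empty
    outputs = []
    for x_t in inputs:
        y_t, context = srn_step(x_t, context)  # dashed arrow: one-to-one copy
        outputs.append(y_t)
    return outputs

# Usage: a short sequence of one-hot "item" vectors.
sequence = [np.eye(N_INPUT)[i] for i in (3, 7, 1)]
outputs = run_sequence(sequence)
```

The copy connection carries no learned parameters, which is why the caption distinguishes it (dashed) from the trained projections (solid); the localist input and output groups correspond to the rectangular groupings, and the distributed hidden/context representations to the oval groupings.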