(A) As in Figure 6E–G of the main text, but for (left column) and (right column). In addition, plots showing the KL divergence (in bits) for the distributions of field sizes and number of fields per cell are shown. (B) As in (A), but for FF-TD. (C) As in Figure 6H of the main text, but for FF-TD with , and (D) FF-TD with .
(E) Total KL divergence across for RNN-S, FF-TD, the random network from Figure 6D (‘Shuffle’), and the split-half noise floor from the Payne et al. dataset (‘Data’). This noise floor is calculated by randomly splitting the neurons from Payne et al. into two halves and measuring the KL divergence between the place field statistics computed from each half. This procedure is repeated 500 times, and the resulting value represents a lower bound on the achievable KL divergence. Intuitively, a model should not be able to fit the data of Payne et al. better than the dataset fits itself.