eLife. 2021 Jul 1;10:e66175. doi: 10.7554/eLife.66175

Figure 11. Recent history constrains the mouse’s decisions.

(A) The mouse’s trajectory through the maze produces a sequence of states s_t = node occupied after step t. From each state, up to three possible actions lead to the next state (end nodes allow only one action). We want to predict the animal’s next action, a_{t+1}, based on the prior history of states or actions. (B–D) Three possible models for making such a prediction. (B) A fixed-depth Markov chain in which the probability of the next action depends only on the current state s_t and the preceding state s_{t−1}. The branches of the tree represent all 3×127 possible histories (s_{t−1}, s_t). (C) A variable-depth Markov chain in which only certain branches of the tree of histories contribute to the action probability. Here one history contains only the current state, while some others reach back three steps. (D) A biased random walk model, as defined in Figure 9, in which the probability of the next action depends only on the preceding action, not on the state. (E) Performance of the models in (B–D) when predicting the animal’s decisions at T-junctions. In each case we show the cross-entropy between the predicted action probability and the animal’s actual actions (lower values indicate better prediction; perfect prediction would yield zero). The dotted line represents an unbiased random walk with probability 1/3 for each action.
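The fixed-depth model in (B) amounts to counting, for each observed history of states, how often each action follows, and scoring the resulting predictor by its cross-entropy. A minimal sketch of that idea in Python (the function name, the toy sequences, and the convention that actions[t] is the action taken from states[t] are all illustrative; the paper’s actual fitting procedure may use pseudocounts or node symmetries):

```python
from collections import defaultdict
import math

def cross_entropy_markov(states, actions, depth=2):
    """Fit P(next action | last `depth` states) by counting, then return
    the cross-entropy (bits per decision) of its predictions on the same
    sequence. actions[t] is the action taken from states[t]."""
    counts = defaultdict(lambda: defaultdict(int))
    for t in range(depth - 1, len(actions)):
        history = tuple(states[t - depth + 1 : t + 1])
        counts[history][actions[t]] += 1
    # Score the fitted model: average surprise of the observed actions
    h, n = 0.0, 0
    for t in range(depth - 1, len(actions)):
        history = tuple(states[t - depth + 1 : t + 1])
        p = counts[history][actions[t]] / sum(counts[history].values())
        h -= math.log2(p)
        n += 1
    return h / n

# Toy sequence: node labels as states, actions in {'L', 'R', 'S'}
states = [0, 1, 2, 1, 0, 1, 2, 1, 0, 1]
actions = ['L', 'R', 'L', 'R', 'L', 'R', 'L', 'R', 'L', 'R']
print(cross_entropy_markov(states, actions, depth=2))  # → 0.0
```

Here the toy sequence is perfectly predictable from a depth-2 history, so the cross-entropy is zero, matching the caption’s note that perfect prediction would yield zero; a 1/3-1/3-1/3 random walk would instead score log2(3) ≈ 1.585 bits.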


Figure 11—figure supplement 1. Markov model fits.


Fitting Markov models of behavior. (A) Results of fitting the node sequence of a single animal (C3) with Markov models of fixed depth (‘fix’) or variable depth (‘var’). The cross-entropy of the model’s prediction is plotted as a function of the average depth of history. In both cases we compare the results obtained on the training data (‘train’) with those on separate testing data (‘test’). Note that at larger depths the ‘test’ and ‘train’ estimates diverge, a sign of over-fitting the limited data available. (B) As in (A), but to combat the data limitation we pooled the counts obtained at all nodes that were equivalent under the symmetry of the maze (see Materials and methods). Note the considerably smaller divergence between ‘train’ and ‘test’ results, and a slightly lower cross-entropy during ‘test’ than in (A). (C) The minimal cross-entropy (circles in (B)) produced by the variable- vs fixed-history models for each of the 19 animals. Note that the variable-history model always produces a better fit to the behavior.