PLOS Computational Biology. 2012 Sep 27;8(9):e1002691. doi: 10.1371/journal.pcbi.1002691

Spike-based Decision Learning of Nash Equilibria in Two-Player Games

Johannes Friedrich, Walter Senn*
Editor: Olaf Sporns
PMCID: PMC3459907  PMID: 23028289

Abstract

Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change in time due to the co-adaptation of the other agents' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for the blackjack and the inspector game. It performs optimally according to a pure (deterministic) and mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.

Author Summary

Socio-economic interactions are captured in a game theoretic framework by multiple agents acting on a pool of goods to maximize their own reward. Neuroeconomics tries to explain the agents' behavior in neuronal terms. Classical models in neuroeconomics use temporal-difference (TD) learning. This algorithm incrementally updates values of state-action pairs, and actions are selected according to a value-based policy. In contrast, policy gradient methods do not introduce values as intermediate steps, but directly derive an action selection policy which maximizes the total expected reward. We consider a decision making network consisting of a population of neurons which, upon presentation of a spatio-temporal spike pattern, encodes binary actions by the population output spike trains and a subsequent majority vote. The action selection policy is parametrized by the strengths of the synapses projecting to the population neurons. A gradient learning rule is derived which modifies these synaptic strengths and which depends on four factors: the pre- and postsynaptic activities, the action, and the reward. We show that for classical game-theoretical tasks our decision making network endowed with the four-factor learning rule leads to Nash-optimal action selections. It also mimics human decision learning for these same tasks.

Introduction

Neuroeconomics is an interdisciplinary research field that tries to explain human decision making in neuronal terms. Behavioral outcomes are construed as results of brain activity, and the neuronal correlates of the quantities relevant for the decision making process are identified. Humans, as economic agents, attempt to optimize some reward function by participating in the production, exchange and maintenance of goods. The reward for an individual will in general depend not merely upon his own actions but also on those of the other players, who will furthermore adapt their own strategies.

Classical models in neuroeconomics are based on temporal difference (TD) learning [1], an algorithm to maximize the total expected reward [2] with potential neuronal implementations [3], [4]. It assumes that the environment can be described as a Markov decision process (MDP), i.e. by a finite number of states with fixed transition probabilities [5]. Multi-agent games, however, are not Markovian, as the evolution of the environment typically depends not only on the current state, but also on the history and on the adaptation of the other agents. Such games can be described as partially observable Markov decision processes (POMDPs, [6]) by embedding the sequences and the learning strategies of the other agents into a large state space. We have presented a policy gradient method for population reinforcement learning which, unlike TD-learning, can cope with POMDPs and can be implemented in neuronal terms [7]. Yet, since a human learner would need to successfully explore the large state space of the POMDP, this appears to be an unrealistic scenario for explaining decision making in a multi-agent environment. A more realistic learning scenario is that humans transiently conceive the other players as following a fixed strategy, and try to find their optimal counter strategy under this stationarity approximation. Maximizing one's own payoff while assuming stationarity of the opponent's strategy is called fictitious play, and conditions have been studied under which this play effectively converges to a stationary (Nash) equilibrium [8].

Here we show that for classical two-player games [9] a simplified population reinforcement learning approach [7], which is policy gradient under the stationarity approximation, can reproduce human data. We consider two games, blackjack [10] and the inspector game [11], as examples for which the optimal strategy is either deterministic or stochastic, respectively. Optimality is expressed in terms of the Nash equilibrium, a solution concept for games involving two or more players. It is reached when no player has anything to gain by changing its strategy unilaterally. Each player is making the best decision it can, taking into account the decisions of the other(s), hence the Nash equilibrium constitutes an optimum. Our algorithm is consistent with behavioral experiments for these games [10], [11] while performing optimally according to the Nash equilibrium. We also show that TD-learning as well as covariance learning fail to find the stochastic Nash equilibrium for the inspector game.

The current paper follows a long tradition of explaining human and animal behavior by simple models of reward-based learning, starting from Thorndike's law of effect [12] and Pavlovian conditioning paradigms [13], [14] up to more recent theories of reinforcement learning [1], [2], [15], [16]. Basic reinforcement learning with simple models of a few free parameters has also been applied to games. It has been shown for a wide set of two-player games that these simple algorithms approximate human performance well [17]. Yet, we show that basic reinforcement learning does not follow the reward gradient, and in fact it does not fit human data on the inspector game as well as our gradient rule. Obviously, playing games involves cognitive reasoning, as for instance captured by the theory of ‘adaptive control of thought–rational’ (ACT-R, [18]). Within such a theory, our model represents a neuronal implementation of a ‘production rule’ which initiates a behavioral pattern in response to sensory and possibly cognitive input.

Results

Model

The network representing a player consists of a population of N Spike-Response-Model (SRM) neurons with escape noise [19], driven by a common presynaptic stimulus encoding the current state of the game (the player's hand value in blackjack, and a fixed stimulus for the inspector game). Each input spike pattern x is composed of afferent spike trains generated once by independent Poisson processes of fixed duration, and then repeatedly presented with the same fixed spike timings. The population neurons integrate the afferent presynaptic spike trains and produce an output spike pattern y (see Fig. 1). The decision of an individual postsynaptic neuron is denoted by a_i and indicates whether the considered neuron emitted at least one spike or remained silent. Behavioral decisions D are made stochastically based on the population activity A, defined as the sum of the individual decisions a_i across the N population neurons: the smaller A is, the more likely is the one behavioral decision, and the larger A is, the more likely is the other (e.g. drawing another card). At the end of a game involving either a single decision (as in the inspector game) or a sequence of decisions (as in blackjack), a reward signal R is delivered by an external critic which either informs about winning or losing (R = ±1 as in blackjack) or delivers a specific payoff (as in the inspector game).

Figure 1. Neuronal architecture implementing the two players.


Each of the two players is represented by a population of decision making neurons (only a few of each are shown) which receive an input spike pattern x and generate an output spike pattern y. The population decision D is represented by a readout unit, with the corresponding action becoming more likely the more decision making neurons fire at least one output spike. The synaptic weights w_ij are adapted as a function of x, y, D and the reward signal R delivered by a critic.

The synapses feeding the stimulating spike pattern to the population neurons are updated according to a multi-factor plasticity rule involving the reward, the behavioral decision, the single neuron decision and the eligibility trace, which depends on the pre- and postsynaptic activity:

Δw_ij = δR · f_D · a_i · e_ij .   (1)

Here, δR is the reward signal encoding the reward prediction error [20] (see Eq. 5 in Methods), f_D is the global feedback signal informing the synapses about the population decision D weighted by the population activity (Eq. 6 in Methods), and a_i is the neuronal decision (spike/no spike). The eligibility trace e_ij is a synaptic buffer roughly encoding the covariance between the past pre- and postsynaptic activity relevant for learning (Eq. 8 in Methods). Technically, e_ij is the derivative of the log-likelihood of producing the postsynaptic spike train. The learning rule can be shown to perform gradient ascent on the expected reward (Supporting Text S2).
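As a concrete illustration, the following minimal sketch applies an update of this form once per game. It assumes that the four factors combine multiplicatively and uses hypothetical variable names (delta_R, f_D, a, e) for the quantities defined in the Methods; it is a sketch of the idea, not a reproduction of the paper's implementation.

    import numpy as np

    def prl_update(w, delta_R, f_D, a, e):
        # w:       (N, n_afferents) synaptic weights
        # delta_R: scalar reward prediction error (learning rate absorbed, Eq. 5)
        # f_D:     scalar decision feedback for the taken population decision (Eq. 6)
        # a:       (N,) binary neuronal decisions (spike / no spike)
        # e:       (N, n_afferents) eligibility traces accumulated during the game (Eq. 8)
        # The purely multiplicative combination of the four factors is an assumption of this sketch.
        return w + delta_R * f_D * a[:, None] * e

    # Illustrative call with random placeholders:
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.1, size=(100, 50))
    w = prl_update(w, delta_R=0.02, f_D=0.3,
                   a=rng.integers(0, 2, size=100),
                   e=rng.normal(0.0, 1.0, size=(100, 50)))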

While most of the terms in Eq. 1 may have their standard biological counterpart [16], there is less experimental evidence for assigning the decision feedback f_D to one specific neuromodulator. Yet, be the population neurons recurrently connected [21] or not, decision learning based on a population always requires that a global population signal is fed back to the individual neurons, as otherwise learning would quickly degrade with increasing population size [7], [22]. For the same performance reasons it is not possible to replace the remaining activity-dependent factors in Eq. 1 by a classical spike-timing dependent plasticity (STDP) implementation endowed with the multiplicative reward signal δR [23]. In fact, reward-modulated STDP is only able to learn multiple stimulus-response associations when the reward factor averages out to zero for each stimulus individually, requiring an additional reward-prediction network [16].

Our neuronal implementation is as simple as possible while providing the required computational properties. The lack of feedback connectivity avoids issues relating to population spike correlations [24], and the neural mechanisms supporting the readout of the decision and the population feedback signal are not considered here. Similarly, the fixed spike trains representing an input pattern are a biological simplification which does not fundamentally restrict the suggested approach.

Blackjack

The simplified version of blackjack considered here was played in 18th century France and is the precursor of the version played in casinos nowadays. The card decks used consist of 52 cards. An ace counts eleven points, jack, queen and king count ten points, and the numbers two to ten count according to their written value. The player (gambler) draws one card after the other, starting with an initial two-card hand, with the objective of bringing the hand value (the total across drawn cards) as close as possible to 21, but stopping early enough so that it does not exceed this number, in which case he immediately loses. Afterwards the croupier does the same for the bank. The player wins if his score is higher than that of the croupier or if the croupier exceeds 21; otherwise the croupier wins. The winner's payoff is +1, the loser's −1. We assume that both player and croupier base their decision whether to draw another card only on their current hand value. Player and bank thus follow a strategy defined by the hand value s_G and s_B, respectively, from which on they stop drawing another card.
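These rules can be condensed into a short simulation. The sketch below, with assumed helper names draw_card, play_hand and play_game, plays one game for a given pair of stopping values and estimates the bank's average payoff by Monte Carlo; it is an illustration of the rules, not the analytic calculation used for Table 1.

    import random

    CARD_VALUES = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]  # ace counts 11; jack, queen, king count 10

    def draw_card():
        # With (effectively) infinitely many decks, each of the 13 ranks is equally likely.
        return random.choice(CARD_VALUES)

    def play_hand(stop_value):
        # Draw cards until the hand value reaches the stopping value (or busts).
        hand = draw_card() + draw_card()        # initial two-card hand
        while hand < stop_value:
            hand += draw_card()
        return hand

    def play_game(s_gambler, s_bank):
        # Returns the bank's payoff (+1 bank wins, -1 gambler wins); ties go to the bank.
        gambler = play_hand(s_gambler)
        if gambler > 21:
            return 1
        bank = play_hand(s_bank)
        if bank > 21 or gambler > bank:
            return -1
        return 1

    payoffs = [play_game(15, 16) for _ in range(200_000)]
    print(sum(payoffs) / len(payoffs))   # should be close to the corresponding Table 1 entry (0.1555)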

The described rules of the game result in the payoff matrix (Table 1) comprising the average payoff of the bank as a function of the strategies s_G and s_B of the player and bank, respectively (Methods). The gambler loses whatever the bank wins; the game is therefore an example of a zero-sum game. For zero-sum games a Nash equilibrium corresponds to a minimax solution [25]. If the payoff matrix has a saddle point (an entry which is the maximum in its row and the minimum in its column), the corresponding strategy pair is a minimax solution which represents a pure Nash equilibrium. In blackjack there is a unique such pair, (s_G, s_B) = (15, 16), and hence a unique Nash equilibrium. For this optimal strategy pair the gambler stops drawing another card as soon as he has 15 points or more, while the croupier stops at 16 or more. The corresponding entry represents the lowest loss for the gambler given the strategy of the bank (minimum in the column), and the maximal payoff obtainable by the bank given the strategy of the gambler (maximum in the row). The Nash equilibrium is asymmetric because in the case of a standoff (equal final hand values) the croupier always obtains reward +1 and the player −1. For hand values smaller than 11 it is safe to draw another card, whereas for hand values above 19 drawing another card leads to certain loss due to exceeding 21. While we do not model these trivial actions, we address the learning problem for hand values between 11 and 19.

Table 1. Average bank payoff for our version of blackjack.

s_G \ s_B 13 14 15 16 17 18 19
11 0.2982 0.3164 0.3027 0.2544 0.1689 0.0436 −0.1237
12 0.1635 0.2015 0.2076 0.1791 0.1130 0.0066 −0.1427
13 0.1052 0.1587 0.1806 0.1679 0.1176 0.0266 −0.1077
14 0.0438 0.1134 0.1536 0.1597 0.1282 0.0560 −0.0598
15 0.0119 0.0706 0.1289 0.1555 0.1450 0.0940 −0.0008
16 0.0143 0.0607 0.1085 0.1557 0.1685 0.1411 0.0702
17 0.0543 0.0893 0.1254 0.1628 0.1989 0.1980 0.1539
18 0.1349 0.1598 0.1854 0.2120 0.2394 0.2651 0.2509

The values show the calculated mean gains of the bank (equivalently, losses of the gambler) as a function of the strategy of the gambler (rows: stopping to draw a card at hand values equal to or larger than s_G) and of the croupier (columns: stopping at s_B). The strategies are described by the hand values from which on the players stop to draw another card. The Nash equilibrium (s_G = 15, s_B = 16; bank payoff 0.1555) is the saddle point of the matrix.
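The pure Nash equilibrium can be read off Table 1 mechanically. The sketch below searches the matrix for a saddle point (maximal in its row, minimal in its column), with rows indexing the gambler's stopping value and columns the croupier's, as in Table 1.

    import numpy as np

    # Average bank payoff from Table 1; rows: gambler stops at 11..18, columns: croupier stops at 13..19.
    payoff = np.array([
        [0.2982, 0.3164, 0.3027, 0.2544, 0.1689, 0.0436, -0.1237],
        [0.1635, 0.2015, 0.2076, 0.1791, 0.1130, 0.0066, -0.1427],
        [0.1052, 0.1587, 0.1806, 0.1679, 0.1176, 0.0266, -0.1077],
        [0.0438, 0.1134, 0.1536, 0.1597, 0.1282, 0.0560, -0.0598],
        [0.0119, 0.0706, 0.1289, 0.1555, 0.1450, 0.0940, -0.0008],
        [0.0143, 0.0607, 0.1085, 0.1557, 0.1685, 0.1411, 0.0702],
        [0.0543, 0.0893, 0.1254, 0.1628, 0.1989, 0.1980, 0.1539],
        [0.1349, 0.1598, 0.1854, 0.2120, 0.2394, 0.2651, 0.2509],
    ])
    gambler_strategies = range(11, 19)    # s_G
    croupier_strategies = range(13, 20)   # s_B

    for r, s_G in enumerate(gambler_strategies):
        for c, s_B in enumerate(croupier_strategies):
            entry = payoff[r, c]
            if entry == payoff[r, :].max() and entry == payoff[:, c].min():
                print(f"saddle point: s_G = {s_G}, s_B = {s_B}, bank payoff = {entry}")
    # prints: saddle point: s_G = 15, s_B = 16, bank payoff = 0.1555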

pRL and TD-learning converge to a pure Nash equilibrium

We first simulated two neural networks playing against each other. Each hand value between 11 and 19 was represented by a fixed spatio-temporal spike pattern generated by 6 Hz Poisson processes, with different (but fixed) patterns distinguishing the different numbers as well as gambler and croupier. Since initially no ordering information is associated with the Poisson spike train encoding of the hand values, the learning process has to assign this information by trial and error. The drawing probabilities for each hand value learned by the gambler at an early and a late stage of training, averaged over ten runs, are shown in Fig. 2A. The colored dashed lines in the plot indicate the decision boundaries of the Nash equilibrium above which no further card is drawn by the gambler or croupier, respectively. Initially, both players decide randomly, drawing with a 50% chance (black dashed line). Early in training both players have already learned not to exceed 21 and do not draw further cards for high hand values, while being still undetermined about what action to take for low hand values. After further training both have successfully learned to draw another card for low hand values and tend to play according to the Nash equilibrium.

Figure 2. Playing blackjack with pRL converges toward pure Nash equilibrium.


(A) Average drawing probability as a function of the hand value at an early (open circles) and a late (filled circles) stage of training, where both the gambler (blue) and the croupier (black) are neural nets. The dotted vertical lines left of 15 and 16 show the separation between drawing and not drawing another card for the optimal Nash strategy pair. (B) Average strategy at the end of training for a neural net as gambler playing against a croupier that follows a fixed strategy stopping below (blue), at (red) or above (green) the Nash value s_B = 16. The colored dotted lines show the separation between drawing and not drawing another card for the optimal counter strategy given the croupier's stopping value (from left to right). (C) Average reward of the gambler for the scenario described in (B). The colored dotted lines show the maximally reachable average reward. (D) Average strategy over the last games of training for a neural net (red) or a human (green) as gambler playing against a croupier that follows a fixed strategy. The initial weights of the network were chosen such that the strategy in the first trials (blue) mimics the strategy of humans instructed about the game rules (black).

We next simulated the gambler by a neural net and the croupier by a computer algorithm which follows a fixed strategy right from the beginning. The resulting drawing probabilities at the end of training are shown in Fig. 2B for three different strategies of the croupier: one stopping below, one at, and one above the Nash value s_B = 16. The gambler learns the corresponding best-response strategies. The neural net exploits deviations of the croupier from the Nash equilibrium s_B = 16 by also deviating and thus increasing its reward. If the croupier stops earlier, the gambler continues drawing up to higher hand values, trying to exceed the croupier's hand value (blue), whereas if the croupier stops later, the gambler already stops at lower hand values (green), taking advantage of the fact that the croupier likely exceeds 21. For the case s_B = 16 (red), the gambler does not learn the optimal strategy as precisely as for the two others. This is because the true mean rewards for the strategies s_G = 14 and s_G = 16 are close to the one for the optimal s_G = 15 (cf. Table 1) and cannot be distinguished based on a limited number of samples. Instead of just looking at the strategy we hence consider the maximally possible and the actually obtained reward for the three croupier strategies. Fig. 2C depicts the low-pass filtered reward, which approaches the theoretical optimum indicated by the dashed lines and read out from the corresponding columns of Table 1. For the non-optimal croupier strategies the maximally possible reward is significantly higher.

Hence, from playing against another network and against the croupier, we conclude that a neuronal population endowed with the plasticity rule (1) is able to learn the optimal strategies in blackjack, determined either by the pure Nash equilibrium, or by the croupier's fixed strategy. Replacement of the neural net by a TD-learner yields similar results for both scenarios (Supporting Text S1).

pRL fits human data on blackjack

Human behavior in blackjack was studied in [10], [26], though with a different card deck. Instructing the subjects about the rules of the game already induces a prior on the drawing behavior, with a preference for drawing or stopping in the case of low or high hand values, respectively. Neither pRL nor TD can reproduce this type of learning by insight. Moreover, the network first needs to learn the ordering of the stimuli by trial and error, and hence many more learning trials are required for the network. To still allow for a comparison with the human data, we used the same deck as in the experiment and simulated a neural network with initial weights chosen in such a way that the initial strategy mimics that of the humans (Fig. 2D). After playing the same number of games as the humans did, the network's final strategy agrees with the experimental data, showing the same shift to a slightly less risky drawing behavior.

Inspector game

The inspector game [27] has been widely studied in neuroeconomics [28]. The economic story surrounding the game is that a lazy employee prefers not to work. An employer knows this and sometimes ‘inspects’, but has to pay some cost i for the inspection. The payoffs for employee and employer are shown in Table 2. The inspector game has only a mixed Nash equilibrium in which decisions are taken stochastically with a fixed probability. At the equilibrium, the players mix pure strategies which each yield the same payoff: had these pure strategies different payoffs, it would be better to just follow the pure strategy with the highest expected payoff.

Table 2. Payoff matrix of the inspector game.

employee \ employer | inspect | don't inspect
work | 0.5, 2−i | 0.5, 2
shirk | 0, 1−i | 1, 0

The first entry in each cell is the employee's payoff and the second entry the employer's payoff for the corresponding combination of the players' responses; i is the cost of inspection to the employer.

For each value of the inspection cost i there is a unique mixed Nash equilibrium in which the probability with which the employee is shirking just corresponds to the inspection cost, P(shirk) = i, and the probability with which the employer is inspecting is 0.5. In this case, neither player can improve its expected payoff by unilaterally changing its strategy. In fact, for P(shirk) = i, the expected payoff for the employer is always

2 − 2i,

independently of the inspection rate. Likewise, if P(inspect) = 0.5, the expected payoff for the employee is always 0.5, independently of the shirk rate.
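These equilibrium values follow directly from the indifference conditions. The sketch below solves them from the payoff matrix in Table 2; the inspection costs in the example call are illustrative and not taken from the experiment.

    def mixed_nash(i):
        # Mixed Nash equilibrium of the inspector game of Table 2 for inspection cost i.
        # The employee's shirk probability p_s makes the employer indifferent between
        # inspecting (payoff 2 - i - p_s) and not inspecting (payoff 2 - 2*p_s),
        # and the employer's inspection probability p_i makes the employee indifferent
        # between working (payoff 0.5) and shirking (payoff 1 - p_i).
        p_shirk = i        # from 2 - i - p_s = 2 - 2*p_s
        p_inspect = 0.5    # from 0.5 = 1 - p_i
        employer_payoff = 2 - 2 * p_shirk
        employee_payoff = 0.5
        return p_shirk, p_inspect, employer_payoff, employee_payoff

    for cost in (0.1, 0.3, 0.5):   # example inspection costs (not taken from the experiment)
        print(cost, mixed_nash(cost))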

pRL reproduces inspector game data and Nash equilibria

We played with our neural network as employee against the algorithm presented in [11] as employer, and we also simulated two neural nets playing against each other. Fig. 3A shows a running average of the shirk and inspection rates for the neural-net employee (green) and the neural-net employer (red), overlaid with the corresponding data for a human employee (black) playing against a human employer (grey; data from [11]). The inspection cost was held constant during each of three blocks of trials and stepped to a new value from block to block. The averaged shirk rate of the neural-net employee is in striking agreement with that of the human employee across the whole range of inspection costs (Fig. 3B). There is also good agreement with the experimental data for the employee's reward (Fig. 3C).

Figure 3. pRL but not TD-learning fits data and follows a mixed Nash equilibrium.


(A) Choice behavior for pRL versus pRL (employee green, employer red) and human versus human (employee black, employer gray) [11]. The cost of inspection was stepped between three values across the blocks, and these values also correspond to the shirk rate in the Nash equilibrium (thick black lines). The inspection rate in the Nash equilibrium would always be 0.5. (B) Average choice behavior of pRL vs pRL (dark green circles) and TD vs TD (light green circles), pRL as employee vs the computer algorithm as employer (blue squares), human vs human (black), human as employee vs the computer algorithm (orange) and monkey vs the computer algorithm (cyan) for the short blocks, as a function of the inspection cost. The solid line indicates the Nash equilibrium. (C) Reward as a function of the inspection cost for the short blocks. Coloring as in (B). The pRL simulations are more similar to the experimental data than the TD simulations. (D) Average choice behavior as in (B) but for the extended blocks. The inspection rates for pRL vs pRL and TD vs TD (dark and light red circles) and pRL vs the computer algorithm (purple squares) are shown too. The lines indicate the Nash equilibrium for the employee (diagonal) and the employer (horizontal). pRL behaves according to the Nash equilibrium, whereas TD does not. (E) Time course of the probability to shirk at a fixed inspection cost for pRL vs the algorithm (blue line) and for pRL vs pRL and TD vs TD (dark and light green lines). For the latter two, the probability of the employer to inspect is shown as well (dark and light red lines). pRL oscillates around the Nash equilibrium (drawn lines), whereas TD completely deviates from Nash. (F) Time course of the probability to shirk or inspect, respectively, at a fixed inspection cost for pRL vs pRL (green and red, solid) as in (E), but shifted upwards for clarity and overlaid with the negative change in the shirk rate (green dashed) and the change in the inspection rate (red dashed) to show the counteracting behavior.

To check whether the good fit of the human data is due to the gradient property alone, or whether the population boost is also necessary, we considered single neurons playing against each other (by setting the number of neurons in the population to N = 1, without changing the learning rule). In this case our learning rule becomes equivalent to the policy gradient rule for single escape-rate neurons [29], [30]. With only a single neuron, learning turns out to be too slow to match the transient behavior in the human data (Fig. 3A), even after optimizing the learning rate (data not shown). We have previously shown that speeding up learning in a population of spiking neurons is only possible with an additional population signal modulating synaptic plasticity [22], [31]. We conclude that population learning is necessary, and that other spike-based gradient rules which do not exploit a population signal [15], [32], [33] will also be too slow.

During the trials of an experimental block, the shirk rates (but not the inspection rates) tended towards the corresponding value of the Nash equilibrium (diagonal line in Fig. 3B), and so did the employee's reward (horizontal line in Fig. 3C), although without reaching them. In the simulations we extended the block size to check the asymptotic behavior. We found that for sufficiently extended blocks, the average shirk and inspection rates closely reached the Nash equilibrium (match of the simulation points with the two lines in Fig. 3D).

Despite the match of the average rates with the mixed Nash equilibria, the running means oscillate around the corresponding equilibria (shown in Fig. 3E for one fixed inspection cost), as predicted by the theory [34], [35]. In the asymmetric case, when our neuronal employee plays against the (apparently not optimal) computer algorithm, the oscillations vanish and the employee's shirk rate reaches the optimal Nash equilibrium (blue), as expected for a neuronal network endowed with synaptic modifications following the reward gradient (Supporting Text S2). When two reward-maximizing networks play against each other, however, each tries to exploit any deviation of the other from its Nash equilibrium, pushing it even further away. This leads to oscillations around the Nash equilibrium where a change in strategy of one player is directed oppositely to the deviation of the other player from the Nash equilibrium. In fact, when superimposing one rate with the negative change of the other rate, a close match is observed (Fig. 3F).

As we are studying a policy gradient algorithm, one may ask how robust the described properties are with respect to a possibly imperfect biological implementation. The eligibility trace e_ij (Eq. 8 in Methods) depends on the presynaptic spike timing and contains a positive term that itself depends on the postsynaptic spike timing and a negative term depending on the postsynaptic potential. Due to the policy gradient property, the two terms are balanced and the eligibility trace is zero on average. To check for robustness we performed simulations where this balance is perturbed. The above results still hold qualitatively if the negative term in the eligibility trace is made twice as large, or even if it is neglected completely, yielding STDP with plasticity for pre-post spike pairings only. The robustness can be attributed to the factor δR in the learning rule (Eq. 1), which averages out to zero due to the subtraction of the reward prediction (see Eq. 5 in Methods) and hence neutralizes any bias in the estimate of the eligibility trace [16].

TD-learning is inconsistent with data and Nash

To assess the match generated by our synaptic plasticity rule with the experimental and theoretical data, we also trained a TD-learner on the inspector game. The parameter-optimized TD algorithm roughly reproduces the humans' average shirk rate (Fig. 3B), but fits the subjects' rewards less well (Fig. 3C). More strikingly, two opposing TD-learners simulated across the extended trial blocks do not behave according to the Nash equilibrium (Fig. 3D), but adopt a deterministic strategy within such a block (Fig. 3E). When simulating even longer, oscillations emerge as well, but without convergence of the long-term average to the Nash equilibrium.

There are principled reasons why TD-learning must generally fail to find an optimal solution, and in particular a mixed Nash equilibrium. First, TD-learning assumes that the underlying process is Markovian, but for multi-player games this assumption is in general not satisfied. In fact, because the policy of the other player may change in time during learning, the optimal decision probabilities depend on past actions. Values which are assumed to be a function of the current action only may therefore be incorrectly estimated. Second, for a mixed Nash equilibrium, TD-learning may also fail to correctly map values to decision probabilities at the steady state after learning. This is because each policy imposes some predefined mapping from action values to decision probabilities, and this ad hoc mapping may not reflect the true relationship between expected payoffs and decision probabilities defining the Nash equilibrium. For the inspector game with inspection cost i, for instance, the softmax policy, which selects action a with probability proportional to exp(βQ_a), where Q_a is the value of action a and β a parameter (inverse temperature) regulating the amount of stochasticity in the decision making, never reflects the Nash probabilities P(shirk) = i and P(inspect) = 0.5, whatever the parameter β is: if the values correctly reflect the equal expected payoffs at the mixed equilibrium, softmax assigns probability 0.5 to both actions (see Supporting Text S1).
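This argument can be made concrete in a few lines: if the action values equal the (identical) expected payoffs at the mixed equilibrium, softmax returns 0.5 for any inverse temperature, whereas the Nash shirk rate equals the inspection cost. The parameter values below are illustrative only.

    import numpy as np

    def softmax(q, beta):
        z = np.exp(beta * np.asarray(q, dtype=float))
        return z / z.sum()

    i = 0.2                     # illustrative inspection cost; the Nash shirk rate equals i
    q_employee = [0.5, 0.5]     # expected payoffs of (work, shirk) when the employer inspects with rate 0.5

    for beta in (0.1, 1.0, 10.0, 100.0):
        p_work, p_shirk = softmax(q_employee, beta)
        print(f"beta = {beta:6.1f}: softmax shirk rate = {p_shirk:.2f}, Nash shirk rate = {i}")
    # For equal values the softmax shirk rate is always 0.50, independent of beta.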

In other learning tasks, animals and humans transiently match their choice probabilities to the average payoffs of the corresponding actions (for a discussion of probability matching see [36]–[38]). In these cases too, TD-learning with softmax will not be able to find the solutions either, unless again all action values and average payoffs are each the same, or, alternatively, the choice policy is redefined such that the choice probabilities become proportional to the action values (see [39]). For other examples where TD-learning fails in one of the two ways, either in learning the values or in inferring the choice probabilities, see [7].

Not all covariance-rules lead to Nash equilibria

Covariance-based learning rules change the synaptic strengths w_ij according to the covariance between the reward prediction error and some measure ν_i of neuronal activity, Δw_ij ∝ ⟨(R − ⟨R⟩)(ν_i − ⟨ν_i⟩)⟩, see e.g. [37], where ⟨·⟩ denotes the expectation. pRL, which follows the stochastic gradient of the expected reward ⟨R⟩, is a special instance of a covariance rule in which the activity measure corresponds to the eligibility trace e_ij (Eq. 1). Steady states of covariance rules satisfy Herrnstein's matching law [37], [40]. This law states that the number of times an action is chosen is proportional to the reward accumulated from choosing that action. Formally, n_a = c R_a, where n_a is the number of times an action a is chosen, c is the action-independent proportionality constant, and R_a is the reward accumulated by action a across its n_a choices.

Pure strategies, where only a single action is chosen, trivially satisfy the matching property since for non-chosen actions both n_a and R_a vanish. In contrast, stochastic strategies only satisfy matching if the selected options provide the same average reward per choice, R_a/n_a. The mixed (and trivially the pure) Nash equilibrium represents a special case of matching. If, in a two-player game, player 2 adopts a mixed Nash strategy then, by definition, player 1 receives the same average reward from any action (see also [38]).
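The matching property itself is easy to check empirically from a sequence of choices and rewards, as in the sketch below; the helper name matching_ratios and the example data are made up for illustration.

    import numpy as np

    def matching_ratios(actions, rewards):
        # For each action, return (number of choices n_a, accumulated reward R_a, reward per choice R_a/n_a).
        # Herrnstein's matching law n_a = c * R_a holds when R_a/n_a is the same for all chosen actions.
        actions = np.asarray(actions)
        rewards = np.asarray(rewards, dtype=float)
        out = {}
        for a in np.unique(actions):
            n_a = int(np.sum(actions == a))
            R_a = float(rewards[actions == a].sum())
            out[int(a)] = (n_a, R_a, R_a / n_a)
        return out

    # Illustrative data: two actions with equal average payoff per choice satisfy matching.
    actions = [0, 1, 1, 0, 1, 0, 1, 1]
    rewards = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
    print(matching_ratios(actions, rewards))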

Both the steady state of a covariance rule and the Nash equilibrium imply matching. But a steady state of the covariance rule does not necessarily need to be a Nash equilibrium. This can be seen by generalizing the classical covariance rule [37] to population learning and applying this rule to the inspector game. To do so we replaced the eligibility trace e_ij in the pRL rule (Eq. 1) by the deviation of the neuronal response a_i from its mean ā_i. The emerging population covariance (pCOV) learning rule (Eq. 2) multiplies the reward prediction error R − R̄ with the decision feedback f_D and the response deviation a_i − ā_i; with positive learning rate γ and with R̄ (see Eq. 5 in Methods) and ā_i being estimates of ⟨R⟩ and ⟨a_i⟩, respectively, this rule does not follow the reward gradient and does not reliably converge to the unique mixed Nash equilibrium of the inspector game (Fig. 4). To keep the simulation time short we did not use spiking neurons but binary neurons whose output a_i is drawn with a probability given by a sigmoidal function of the summed synaptic input for a given binary input pattern; this directly yields the expected neural response ā_i. Notice that the reward signal effectively acting on an individual neuron can be considered as a personalized reward: if the sign of the neuron's response deviation and that of the population feedback coincide, the population reward signal δR elicited in response to the population decision can be taken as a personal reward signal for that neuron; otherwise the neuron's personal reward has the reversed sign of δR. Personalizing the neuronal rewards within the population in this way solves the spatial credit-assignment problem and boosts learning [22]. Fig. 4 (A–D) shows the results for the pCOV rule applied to the inspector game, once with only the employee playing according to pCOV against an employer playing according to the algorithm presented in [11], and once for pCOV versus pCOV. Only in a fraction of the simulated runs is the mixed Nash equilibrium reached, while in the other runs a deterministic (non-Nash) strategy pair emerges.
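A minimal pCOV-style simulation with binary neurons could look as follows. The sigmoidal response probability, the centering of the population activity, the exact form of the decision feedback, the presynaptic factor and the stationary opponent are all assumptions of this sketch, not taken verbatim from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    N, n_in = 50, 20                      # population size and input dimension (illustrative)
    x = rng.integers(0, 2, size=n_in)     # one fixed binary input pattern
    w = rng.normal(0.0, 0.1, size=(N, n_in))
    gamma, R_bar = 0.05, 0.0

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for trial in range(1000):
        p_spike = sigmoid(w @ x)                        # expected responses (mean of a_i)
        a = (rng.random(N) < p_spike).astype(float)     # binary neuronal responses
        A = a.sum() - N / 2.0                           # centered population activity (assumption)
        p_shirk = sigmoid(A / np.sqrt(N))               # logistic readout of the decision
        D = 1 if rng.random() < p_shirk else 0          # D = 1 interpreted as 'shirk' (assumption)
        f_D = (1 - p_shirk) if D == 1 else -p_shirk     # assumed decision feedback, sign carries D
        inspect = rng.random() < 0.5                    # stationary employer, for illustration
        R = (0.0 if inspect else 1.0) if D == 1 else 0.5   # employee payoffs from Table 2
        R_bar += 0.05 * (R - R_bar)                     # running mean of the reward
        w += gamma * (R - R_bar) * f_D * np.outer(a - p_spike, x)   # pCOV-style update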

Figure 4. Covariance learning rules may lead to a mixed Nash equilibrium, but also to deterministic non-Nash strategies. pRL fits data better than basic reinforcement models.


Time course of the probability to shirk (A,C) and to inspect (B,D) at a fixed inspection cost for pCOV vs the algorithm (A,B) and pCOV vs pCOV (C,D). In each panel the horizontal lines depict the Nash equilibrium, and inspection and shirk rates are shown for 10 simulation runs (the same color in (A,B) and in (C,D), respectively, corresponds to the same run). Only a small fraction of all runs converge to or oscillate around the Nash equilibrium, while the other runs result in a deterministic strategy pair. The initial synaptic weights were drawn from a Gaussian distribution; varying the learning rate did not change the proportion of runs converging to the pure strategy. (E) Average choice behavior of pRL vs pRL (green), RE1 vs RE1 (blue), RE3 vs RE3 (red) and human vs human (black) for the short blocks as a function of the inspection cost. The light red circles show the average choice behavior for RE3 vs RE3 with extended blocks. Individual runs converged to a pure strategy, hence the shown averages over 200 runs reflect the percentage of runs converging to a pure shirk strategy. (F) Reward as a function of the inspection cost for the short blocks. Coloring as in (E). The solid lines indicate the Nash equilibrium.

As a check for robustness we performed further simulations showing that these negative results also hold true for other (non-gradient) covariance rules. We considered the version where the mean ā_i is not calculated analytically but determined as a running average (as done for R̄), and the version where the neuronal activity in the covariance rule is equal to the binary output a_i without mean subtraction (yielding a correspondingly simpler update rule). Moreover, considering only a single neuron and taking its response as the behavioral decision did not qualitatively change the results, demonstrating that the failure is not due to the population framework (data not shown).

In [41] the author further elaborates on the relationship of covariance-based rules to the replicator dynamics. The latter is described by a differential equation in which the probability of choosing an action changes in proportion to an effective learning rate times the difference between the average reward for choosing that action and the overall average reward. The effective learning rate depends on the details of the decision making network and is given in Eq. (14) of [41]. If synaptic changes are driven by the covariance of reward and neural activity, then according to the average velocity approximation the learning behavior is described by this differential equation, but the effective learning rate is not guaranteed to be positive. Indeed, for the binary neurons the effective learning rate can be negative, due to negative components in the coding of the inputs. Hence, the convergence statements in [41] do not apply to our decision making network, and there is no contradiction with the finding that the covariance rule fails to reproduce the mixed Nash equilibrium. Nevertheless, in the special case of an input coding for which the effective learning rate becomes positive, and for the specific covariance rule with a single postsynaptic neuron, we can also fit the human data (for the experimental number of learning trials) and obtain the oscillations around the Nash equilibrium (for longer simulations).

Whereas we do not consider the neural mechanisms supporting the readout of the decision, such a mechanism has been studied in the context of matching [42]. There, the probability for decision making is well described by a logistic function, which is also our choice (Eq. 4). For an increasing amount of stochasticity in the decision making the authors report an increasing deviation from matching towards more uniform choice probabilities. For choice probabilities larger than 0.5 this leads to lower choice probabilities than predicted by the matching law, a phenomenon called undermatching. We also varied the amount of stochasticity by changing the slope of the logistic readout as a function of the population activity. We find that increasing or decreasing the slope by a factor of two still robustly leads to matching for the considered inspection costs, but when decreasing it by a factor of ten we also observe undermatching. Due to the stochasticity in the decision making, the range of choice probabilities our network can represent is limited. With an increasing amount of stochasticity the range becomes smaller, and extreme probabilities close to 0 or 1 predicted by the matching law cannot be represented. Instead, choice probabilities lie closer to uniform randomness, i.e. undermatching occurs.

pRL fits data and Nash better than basic reinforcement models

The replicator equation is widely used in evolutionary game theory and provides a good phenomenological description of choice behavior in many repeated-choice experiments. A different phenomenological description has been suggested in [17]. Starting from Luce's basic reinforcement learning (RE1, see also [43]), the authors adapt this rule to take account of a generalization and a recency effect (RE3, Methods). They show that both the basic and the extended reinforcement learning reproduce the behavior of humans in many different games. However, we find that these rules poorly match human behavior for the inspector game. In fact, for the experimental block length both models of Erev and Roth fit the experimental data significantly worse than pRL (Fig. 4E and F). After more trials RE3 converged to a pure (non-Nash) strategy, and for RE1 the inspection rate diverged away from the Nash equilibrium of 0.5. This non-optimal equilibrium performance is consistent with the fact that RE1 and RE3 are not gradient procedures (see Supporting Text S3). Whether pRL also fits the behavioral data of humans better in the other games Erev and Roth considered remains to be tested. In any case, the models have to cope with the fact that humans show a variety of behaviors in two-player matrix games, although in many settings they eventually play according to Nash (for a discussion see [44]).

Discussion

We considered a population of spiking neurons which represents an adaptive agent in a dynamic environment including other adaptive agents. The agent's adaptation was implemented as a population reinforcement learning algorithm (pRL) which was previously shown to perform stochastic gradient ascent in the reward for partially observable Markov decision processes (POMDPs) [7]. Here we showed with blackjack and the inspector game that pRL can also cope with a dynamic multi-agent environment and that its performance is comparable to human data in both these games. In fact, when two neuronal populations play against each other, they learn to behave according to the optimal (but unstable) Nash equilibrium. By definition, no further increase in an agent's expected payoff is possible in the Nash equilibrium by only changing its own strategy while the environment remains stationary. In these steady-state conditions – where the opponent's strategy is assumed to be stationary – pRL is proven to maximize the expected reward (Supporting Text S2). The simulations show that the equilibrium is indeed reached by two pRL agents playing against each other, with a pure (deterministic) Nash equilibrium in blackjack and a mixed (stochastic) Nash equilibrium in the inspector game. As predicted by the theory [34], [35], the strategies oscillated around the mixed Nash equilibrium when both players used the same gradient algorithm based on the assumption that the other is stationary, i.e. when one network played against another, both using pRL (with a small learning rate). Averaging over time windows that are long compared to the oscillation period yields the Nash equilibrium values. However, when implementing only the employee by a gradient pRL network and the employer by a non-gradient computer algorithm [11], the two players do not play exactly equally well. In this case no oscillations occurred, and both converged to and stayed at the optimal Nash equilibrium.

For mathematical clarity we presented the spike-based pRL for an episodic learning scenario. But a biologically plausible implementation of a fully online scheme is also possible: to avoid an explicit separation of stimuli in time, the rectangular window function used to temporally integrate the eligibility trace (Eq. 8 in Methods) can be replaced by an exponentially decaying window function to obtain a low-pass filtered eligibility trace, and concentrations of neuromodulators can be used to encode feedback about the population decision and the global reward signal (e.g. acetylcholine or dopamine) [22]. We considered reward delivery immediately after stimulus presentation, but reward could also be substantially delayed when considering a further eligibility trace incorporating the population decision [7]. Moreover, since learning in general speeds up with population size (up to one-shot learning for stimulus-response associations [31]), we expect that the convergence of pRL towards the Nash equilibrium can be much faster than in our examples, where parameters were fitted to reproduce human data.

The mixed Nash equilibrium represents a special case of Herrnstein's matching law [40], according to which the number of times an action is chosen is proportional to the reward accumulated from choosing that action. This is true both for the pure and the mixed Nash optimum. In the special case that the current reward only depends on the current action, but not on past actions, reward maximization always implies matching. (In fact, if one action yielded a higher average payoff per choice, then this action must be chosen with probability 1 to maximize the expected reward, and matching (n_a = c R_a) is trivially satisfied, since for the non-chosen action both n_a and R_a vanish. If both actions yield the same payoff per choice, then matching is again trivially satisfied.) In turn, a reward-based learning rule which only empirically maximizes reward leads in this case only to approximate matching [42]. Choice probabilities which maximize the expected reward are trivially also fixed points of any learning rule defined by the covariance between reward and neuronal activity. (In fact, at the reward maximum there is no change in neuronal activity which, on average, would lead to an increase of the expected reward, and in the opposite direction to a decrease, and hence the covariance between activity and reward must vanish.) The converse, again, is not true: a covariance-based rule does not necessarily lead to reward maximization or a Nash equilibrium [37], [38]. Indeed, our simulations of the inspector game with the canonical covariance-based plasticity rules show that these rules do not necessarily lead to the mixed Nash equilibrium, but instead can result in deterministic (non-Nash) strategies. Similarly, the basic reinforcement rules studied in the context of economics and human decision making [17] are not compatible with the mixed Nash equilibrium for the inspector game either.

The performance of spike-based pRL is also superior to TD-learning [2], which is often discussed in the neuroeconomic context [1]. With the parameter values for which the TD-learners came closest to the human data (although without matching them as closely as pRL), the mixed Nash equilibrium in the inspector game was not reached within the long learning times. Instead, the TD-learners first adopted a deterministic strategy, transiently switched their behavior, and swapped back to the same deterministic strategy. We attribute this mismatch to a general failure of TD-learning to correctly map action values to choice probabilities in probabilistic decision making tasks. TD-learning with the softmax choice policy, in particular, fails when matching of choice probabilities with average payoffs is required [40].

Different generalizations have been considered to approach the shortcomings of such algorithms in socio-economic games. TD-learning has been extended to assign values not only to its own decisions, but to pairs of own and opponent decisions. This enables the learning of minimax strategies where reward is maximized for the worst of the opponent's actions [45]. While for zero-sum games minimax may realize a mixed Nash equilibrium, it results in a deterministic strategy in the inspector game: minimizing the maximal loss implies for the employee to always work (to prevent being caught while shirking), and for the employer to always inspect (to prevent undetected shirking). Another approach is to separately learn one's own and the opponent's action values and then calculate the Nash equilibrium [46], but such explicit calculations do not seem to be typical for human behavior in socio-economic interactions. Instead, it is tempting to consider pRL with long eligibility traces which, as it performs policy gradient in POMDPs [7], should find cooperative strategies with, on average, higher than Nash payoffs for all agents. For the inspector game such a cooperative strategy is that the employer lets the employee sporadically shirk (with some small probability) without inspection, but with the common agreement that shirking will not prevail, leading to average payoffs above the Nash values for both the employee and the employer.

Although under the specific experimental conditions of the inspector game humans did not show cooperation, they often do so in other game-theoretic paradigms, e.g. in the prisoner's dilemma, and hence deviate from the Nash equilibrium (for a review see [47]). It remains a challenge for future modeling work to capture such cooperative behavior. Likely, this will involve modeling the prediction of the other players' reactions in response to one's own actions, as considered in the theory of mind [48] and as a hallmark of successful socio-economic behavior.

Given the difficulties of modeling genuine social behavior, and the difficulties humans effectively have in stacked reflexive reasoning, the assumption of the opponent's stationarity considered here appears as a reasonable approximation for decision making even in complex situations. In view of its success in matching behavioral and theoretical data we may ask how far human decision making is in fact determined by cognitive reasoning, or whether decisions should rather be attributed to automated neuronal processes steered e.g. by pRL (which can also encompass input from a cognitive module as it is suggested for the production rules in the ACT-R theory, [18]). In fact, daily experience tells us that decisions are often more appropriate when we listen to our gut feeling, while we tend to merely add justifications post-hoc. Or put in Schopenhauer's words, “that in spite of all his resolutions and reflections he does not change his conduct” [49].

Methods

Model details

Focusing on one neuron, we denote by x its input, which is a spike pattern made up of the afferent spike trains x_j, and by y its output spike train. The membrane potential can be written as

u_i(t) = u_rest + Σ_j w_ij Σ_{t_j^f ∈ x_j} ε(t − t_j^f) + Σ_{t_i^f ∈ y} κ(t − t_i^f).   (3)

The postsynaptic potential kernel ε and the reset kernel κ vanish for negative time arguments; for non-negative arguments they are smooth kernels shaped by the membrane and synaptic time constants introduced below.

The resting potential (given in arbitrary units), the membrane time constant and the synaptic time constant are fixed parameters. Action potential generation is controlled by an instantaneous firing rate ρ(u) which increases with the membrane potential u. So, at each point t in time, the neuron fires with probability ρ(u(t)) δt, where δt represents an infinitesimal time window (a small finite time step is used in the simulations). The firing rate function and its two parameters were taken from [29] (see also [19]).
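In discrete time, such an escape-noise neuron can be simulated as sketched below. The double-exponential postsynaptic kernel, the reset kernel, the exponential escape rate and all parameter values are stand-ins, since the exact forms used in the paper are given in the references rather than reproduced here.

    import numpy as np

    dt, T = 0.001, 0.5               # time step and decision period in seconds (illustrative)
    tau_m, tau_s = 0.02, 0.005       # membrane and synaptic time constants (illustrative)
    u_rest = 0.0

    def eps_kernel(t):
        # Assumed double-exponential postsynaptic potential kernel (zero for t < 0).
        return np.where(t >= 0, (np.exp(-t / tau_m) - np.exp(-t / tau_s)) / (tau_m - tau_s), 0.0)

    def kappa_kernel(t):
        # Assumed exponential reset kernel (zero for t < 0).
        return np.where(t >= 0, -2.0 * np.exp(-t / tau_m), 0.0)

    def rho(u):
        # Assumed exponential escape rate; the paper takes the function and its parameters from [29].
        return 20.0 * np.exp(u)

    def simulate(w, input_spikes, rng):
        # w: weight per afferent; input_spikes: list (per afferent) of presynaptic spike times.
        times = np.arange(0.0, T, dt)
        u = np.full_like(times, u_rest)
        for j, spikes in enumerate(input_spikes):            # synaptic input
            for t_f in spikes:
                u += w[j] * eps_kernel(times - t_f)
        out_spikes = []
        for k, t in enumerate(times):                        # escape noise with on-line reset
            if rng.random() < min(1.0, rho(u[k]) * dt):
                out_spikes.append(t)
                u += kappa_kernel(times - t)
        return out_spikes

    rng = np.random.default_rng(1)
    afferents = [list(rng.uniform(0, T, size=3)) for _ in range(10)]
    print(simulate(np.full(10, 0.02), afferents, rng))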

We consider a population of N neurons and an input layer of afferents for each player that is represented by a neural net. Each afferent in the input layer connects to each population neuron with a fixed probability, leading to many shared input spike trains between the neurons. The population response is read out by the decision making unit based on a spike/no-spike code. We introduce the coding function a_i = g(y_i), which indicates whether neuron i fired at least one spike during the decision period. The population activity A being read out by the decision making unit is the sum of these individual neuronal decisions,

A = Σ_{i=1}^{N} a_i.

Note that such a formal summation could be implemented in terms of a neuronal integrator (forming a ‘line attractor’) which continuously integrates excitatory and inhibitory input and keeps the neuronal activity at a constant level in the absence of input [50]. Using this activity readout, the behavioral decision D is made probabilistically: the likelihood of one of the two decisions is given by a logistic function of the population activity normalized by the square root of the population size (Eq. 4), and the counter probability is assigned to the alternative decision. The normalization of the activity A by √N ensures that its typical fluctuations are of order one, thus being of the same order as the noise in the decision readout.
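A possible implementation of this readout is sketched below; the exact argument of the logistic (here the centered activity divided by √N) is an assumption of the sketch.

    import numpy as np

    def population_decision(a, rng):
        # a: array of N binary neuronal decisions (1 = at least one spike, 0 = no spike).
        # Returns the stochastic binary population decision D and its probability.
        N = len(a)
        A = a.sum() - N / 2.0                    # centered population activity (assumption)
        p = 1.0 / (1.0 + np.exp(-A / np.sqrt(N)))
        D = 1 if rng.random() < p else 0
        return D, p

    rng = np.random.default_rng(2)
    a = rng.integers(0, 2, size=100)
    print(population_decision(a, rng))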

We now describe the terms modulating synaptic plasticity in Eq. (1). The reward feedback δR encodes the reward prediction error, as observed in experiments [20],

δR = η (R − R̄),   (5)

where R̄ is a running mean estimate of the expected reward with a fixed averaging time constant. The parameter η is the positive learning rate which, for notational convenience, we absorb into the reward signal; the same value of η was used in all pRL simulations. The (rounded) values of η and of the averaging time constant were chosen to minimize the mean squared error (MSE) between the average model and human data, quantified for the shirk rate and the employee's reward in the inspector game. All other parameter values were taken from [7].

The decision feedback f_D (Eq. 6) is the derivative of the decision probability of Eq. (4) with respect to the population activity A, evaluated for the actually taken decision D; the decision feedback thus measures how sensitive the decision is to changes in the activity and informs the synapses about the population decision weighted by the population activity.

As shown in [29], the probability density P_w(y|x) that a neuron actually produces the output spike train y in response to the stimulus x during a decision period lasting from 0 to T satisfies

log P_w(y|x) = Σ_{t^f ∈ y} log ρ(u(t^f)) − ∫_0^T ρ(u(s)) ds.   (7)

The derivative of log P_w(y|x) with respect to the strength w_ij of synapse j is known as the characteristic eligibility in reinforcement learning [51]. For the last term in (1) one thus obtains

e_ij = ∂ log P_w(y|x) / ∂w_ij = ∫_0^T [ρ′(u(t)) / ρ(u(t))] [Y_i(t) − ρ(u(t))] PSP_j(t) dt,   (8)

where Y_i(t) denotes the postsynaptic spike train written as a sum of delta functions and PSP_j(t) is the postsynaptic potential elicited by the spikes of afferent j.

In all simulations, initial values for the synaptic strengths were picked from a Gaussian distribution with mean zero and a small standard deviation, independently for each afferent and each neuron. In the Supporting Text S2 we show that the plasticity rule (1), composed of the factors (5, 6, 8) and the neuronal decision a_i, follows the stochastic gradient of the expected reward.
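For a discrete-time simulation, the integral in Eq. (8) becomes a sum. The sketch below accumulates the eligibility trace for one synapse, assuming an exponential escape rate for which the factor ρ′/ρ is constant; the traces passed in the example call are made up.

    import numpy as np

    def eligibility_trace(psp, out_spikes, rho_values, dt, rho_prime_over_rho=1.0):
        # Discrete-time version of Eq. (8) for one synapse.
        # psp:         array with the postsynaptic potential PSP_j(t) caused by afferent j
        # out_spikes:  boolean array, True where the neuron emitted a spike
        # rho_values:  array with the instantaneous firing rate rho(u(t))
        # The constant factor rho'/rho corresponds to an exponential escape rate (assumption).
        spike_density = out_spikes.astype(float) / dt        # sum of delta functions, discretized
        integrand = rho_prime_over_rho * (spike_density - rho_values) * psp
        return np.sum(integrand) * dt

    # Illustrative call with made-up traces:
    T, dt = 1000, 0.001
    psp = np.exp(-np.arange(T) * dt / 0.02)
    out = np.zeros(T, dtype=bool); out[200] = True
    rho_values = np.full(T, 5.0)
    print(eligibility_trace(psp, out, rho_values, dt))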

TD-learning

For TD-learning we used the SARSA control algorithm [2], which estimates the values Q(s, a) of state-action pairs. At each point in time, the value estimates are updated according to

Q(s, a) ← Q(s, a) + α [ r + Q(s′, a′) − Q(s, a) ],

where (s′, a′) denotes the subsequent state-action pair. Here α plays the role of a learning rate and takes values between 0 and 1, and r is the reward immediately obtained after performing action a. In the case of blackjack, r is defined as zero if the game is not over and the player chooses to draw another card; otherwise it is determined by the payoffs of the considered game. When in state s′, the next action a′ is chosen using softmax, i.e. with probability proportional to exp(β Q(s′, a′)), where β is the inverse temperature. In all simulations we used the rounded values of α and β that minimized the MSE between the averaged model and human data (shirk rate and employee's reward in the inspector game). Note that in both TD-learning and pRL we adapted the same number of free parameters (TD: α and β; pRL: the learning rate η and the averaging time constant), making it possible to directly compare the quality of the fits.
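For a single-state game like the inspector game this TD scheme reduces to a few lines. In the sketch below the opponent, the reward function (employee side, Table 2 payoffs), the treatment of each round as a one-step episode, and all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    alpha, beta = 0.02, 5.0          # illustrative learning rate and inverse temperature
    i_cost = 0.2                     # illustrative inspection cost (Nash shirk rate would be 0.2)
    Q = np.zeros(2)                  # employee action values: index 0 = work, 1 = shirk

    def softmax(q, beta):
        z = np.exp(beta * (q - q.max()))
        return z / z.sum()

    for trial in range(20000):
        a = rng.choice(2, p=softmax(Q, beta))
        inspect = rng.random() < 0.5                        # stationary employer, for illustration
        r = 0.5 if a == 0 else (0.0 if inspect else 1.0)    # employee payoffs from Table 2
        Q[a] += alpha * (r - Q[a])   # SARSA with each round treated as a one-step episode (assumption)

    print(Q, softmax(Q, beta))
    # Both action values approach the equal expected payoff 0.5, so the softmax shirk rate
    # stays near 0.5 for any beta, rather than matching the Nash rate i_cost.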

Basic reinforcement models

In both Roth-Erev models [17] the probabilistic choice rule is parametrized using propensities Inline graphic. The probability Inline graphic that a specific player (whose index is omitted) plays his Inline graphicth pure strategy is Inline graphic.

RE1 model

The propensity of the chosen action Inline graphic is incremented by the received reward Inline graphic, Inline graphic, where Inline graphic denotes the Kronecker delta. The initial propensities are set to Inline graphic, where Inline graphic is the average reward for that player under uniformly distributed Inline graphic's, and the strength parameter Inline graphic is the one parameter that is optimized.
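A minimal sketch of the RE1 model as described above; the function and variable names and the illustrative numbers in the initialization are not taken from the paper.

```python
import numpy as np

def re1_step(q, k, r):
    # RE1 update: only the propensity of the chosen pure strategy k is
    # incremented by the received reward r (the Kronecker-delta update).
    q[k] += r
    return q

def choice_probabilities(q):
    # Probabilistic choice rule shared by both Roth-Erev models: the
    # probability of playing strategy k is its propensity relative to the
    # sum of all propensities.
    return q / q.sum()

# Initial propensities q_k(0) = s * r_avg, with strength parameter s and
# average reward r_avg under uniform play (illustrative numbers).
s, r_avg, n_strategies = 9.0, 0.5, 2
q = np.full(n_strategies, s * r_avg)
```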

RE3 model

In addition to the strength parameter Inline graphic, the generalization and forgetting parameters Inline graphic and Inline graphic are introduced. The propensities are updated according to

graphic file with name pcbi.1002691.e332.jpg

The parameters were chosen to minimize the MSE between the averaged model and human data (Inline graphic (RE1) and Inline graphic (RE3) for the shirk rate, and Inline graphic (RE1) and Inline graphic (RE3) for the employee's reward in the inspector game).
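Since the update equation is not reproduced above, the sketch below implements the standard three-parameter Erev-Roth update [17] with forgetting phi and generalization epsilon; the exact variant used in the paper may differ in detail.

```python
import numpy as np

def re3_update(q, k, r, epsilon, phi):
    # Standard three-parameter Roth-Erev update [17]: all propensities are
    # decayed by the forgetting parameter phi, the chosen strategy k
    # receives the fraction (1 - epsilon) of the reward r, and the
    # remaining fraction epsilon is spread over the other strategies
    # (generalization).  The paper's exact variant may differ.
    n = len(q)
    reinforcement = np.full(n, r * epsilon / (n - 1))
    reinforcement[k] = r * (1.0 - epsilon)
    return (1.0 - phi) * q + reinforcement
```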

Blackjack details

In blackjack we assume an infinite number of card decks. Independently of the history, the drawing probabilities therefore remain constant: the probability of drawing a card with value Inline graphic is Inline graphic, and the probability of drawing any other value from 2 to 11 is Inline graphic. For a strategy determined by the stopping value Inline graphic we analytically calculated the probability distribution of hand values Inline graphic after drawing the last card. The drawing process is iterated for those hand values that are smaller than Inline graphic until all probability mass lies on hand values greater than or equal to Inline graphic. Because the lowest card value in the deck is always Inline graphic, drawing Inline graphic times in a row yields a lowest possible hand value of Inline graphic. Hence up to Inline graphic cards are drawn in order to obtain a hand value Inline graphic greater than or equal to Inline graphic. Let us denote the value of the Inline graphicth card by Inline graphic and its probability distribution by Inline graphic,

graphic file with name pcbi.1002691.e353.jpg

To obtain the probability distribution Inline graphic we sum the probabilities of all possible combinations of drawing Inline graphic cards that yield the hand value Inline graphic, subject to the condition that the sum of the first Inline graphic drawn cards is smaller than Inline graphic, so that a Inline graphicth card is actually drawn under the stopping strategy.

graphic file with name pcbi.1002691.e360.jpg

where II denotes the indicator function, which is one if its argument is true and zero otherwise. The product of the Inline graphic is the joint probability that the first card has value Inline graphic, the second Inline graphic, and so on. The first indicator function ensures that all Inline graphic drawn cards sum up to Inline graphic; the second ensures that Inline graphic cards are indeed drawn, i.e. that the sum of the first Inline graphic cards is smaller than the stopping value Inline graphic, since otherwise no further card would be drawn. For instance, in the case of Inline graphic and Inline graphic one obtains the distributions in Table 3.

Table 3. Probability distribution of hand values Inline graphic after drawing the last card.

Inline graphic <15 15 16 17 18 19 20 21 >21
Inline graphic 0 0.1206 0.1247 0.1194 0.1138 0.1078 0.1546 0.0944 0.1648
Inline graphic 0 0 0.1247 0.1287 0.1231 0.1170 0.1638 0.1036 0.2390
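The iterated drawing process described above can also be written as a short dynamic program, which is equivalent to the explicit sum over admissible card sequences and, under the stated assumptions, can be checked against the bins of Table 3. The infinite-deck card probabilities used here (1/13 for each of the values 2-9 and 11, with the ace counted as 11, and 4/13 for the value 10) are assumptions, since the exact values are not reproduced above.

```python
from collections import defaultdict

# Assumed infinite-deck card probabilities: values 2-9 and 11 (ace counted
# as 11) each with probability 1/13, value 10 (tens and face cards) with 4/13.
CARD_PROB = {v: 1.0 / 13.0 for v in list(range(2, 10)) + [11]}
CARD_PROB[10] = 4.0 / 13.0

def hand_value_distribution(stop):
    """Distribution of the final hand value under the stopping strategy
    'draw another card while the hand value is below stop'.  This dynamic
    program iterates the drawing process described above."""
    dist = {0: 1.0}
    while any(h < stop for h in dist):
        new_dist = defaultdict(float)
        for h, p in dist.items():
            if h >= stop:                       # stopping value reached
                new_dist[h] += p
            else:                               # draw one more card
                for c, pc in CARD_PROB.items():
                    new_dist[h + c] += p * pc
        dist = dict(new_dist)
    return dist

# Example: probability of going bust for the stopping value 15.
d = hand_value_distribution(15)
bust = sum(p for h, p in d.items() if h > 21)
```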

Denoting the hand value of the gambler by Inline graphic and that of the croupier by Inline graphic, the payoff of the bank is

graphic file with name pcbi.1002691.e377.jpg

Averaging Inline graphic with respect to the joint distribution Inline graphic yields the entry of the average payoff matrix in Table 1 for the strategy pair (Inline graphic). For instance, for (Inline graphic), Inline graphic.
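Building on hand_value_distribution from the sketch above, the expectation over the product of the two independent hand-value distributions can be computed as follows. The concrete payoff rule inside bank_payoff (the bank wins when the gambler busts, the higher non-busted hand wins otherwise, and equal hands are a draw) is an assumption for illustration, not the paper's definition.

```python
def bank_payoff(t_gambler, t_bank):
    # Assumed payoff rule, for illustration only: the bank wins one unit if
    # the gambler busts; otherwise the side with the higher non-busted hand
    # wins, and equal hands are a draw.
    if t_gambler > 21:
        return 1.0
    if t_bank > 21 or t_bank < t_gambler:
        return -1.0
    return 1.0 if t_bank > t_gambler else 0.0

def average_bank_payoff(stop_gambler, stop_bank):
    # Entry of the average payoff matrix for the strategy pair: expectation
    # of the payoff over the product of the two independent hand-value
    # distributions (hand_value_distribution is defined in the sketch above).
    dg = hand_value_distribution(stop_gambler)
    db = hand_value_distribution(stop_bank)
    return sum(pg * pb * bank_payoff(tg, tb)
               for tg, pg in dg.items() for tb, pb in db.items())
```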

We defined the drawing probabilities in Fig. 2 for a hand value at a certain game number Inline graphic as the frequency with which another card has been drawn over the last Inline graphic presentations of the corresponding stimulus prior to Inline graphic. The time course of the average reward Inline graphic in Fig. 2C is the low-pass filtered reward sequence, Inline graphic, where Inline graphic is the reward in the Inline graphic-th game and Inline graphic was used. The initial value Inline graphic was calculated assuming random Inline graphic choice behavior prior to learning.
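The exact recursion and the filter constant are not reproduced above; the following sketch shows one common form of such an exponential running mean, with placeholder parameter names.

```python
import numpy as np

def low_pass(x, gamma, x0):
    """One common form of an exponential running mean,
    y[n] = (1 - gamma) * y[n-1] + gamma * x[n], started at x0 (here the
    average reward expected under random choice before learning).  The
    recursion used in the paper may be parametrized differently."""
    y = np.empty(len(x))
    prev = x0
    for n, xn in enumerate(x):
        prev = (1.0 - gamma) * prev + gamma * xn
        y[n] = prev
    return y
```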

The initial weights mimicking the prior strategy of instructed humans were obtained by training our network to make a decision with a certain probability. This is possible by adapting pRL to perform regression (as will be published elsewhere).

Inspector game details

The time course of the rates in Fig. 3E is the low-pass filtered decision sequence, e.g. Inline graphic, where Inline graphic if the employee shirks in trial Inline graphic and Inline graphic otherwise. We used a value of Inline graphic and again assumed initial random Inline graphic choice behavior. The rate change in Fig. 3F was determined by binning the obtained time course of the rate into bins of width Inline graphic, calculating the mean of each bin, and taking the differences between successive bins. The result was then low-pass filtered once more with an exponential running mean (Inline graphic) to reduce the noise.
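A sketch of the binning and differencing procedure for the rate change; the bin width, the filter constant and the array handling are placeholders, since the actual values are not reproduced above.

```python
import numpy as np

def binned_rate_change(rate, bin_width, gamma=0.1):
    """Bin the low-pass filtered rate into bins of the given width, take the
    mean of each bin, difference successive bin means, and smooth the result
    once more with an exponential running mean (gamma is a placeholder)."""
    rate = np.asarray(rate, dtype=float)
    n_bins = len(rate) // bin_width
    bin_means = rate[:n_bins * bin_width].reshape(n_bins, bin_width).mean(axis=1)
    diffs = np.diff(bin_means)          # rate change between succeeding bins
    if diffs.size == 0:
        return diffs
    smoothed = np.empty_like(diffs)
    prev = diffs[0]
    for i, d in enumerate(diffs):
        prev = (1.0 - gamma) * prev + gamma * d
        smoothed[i] = prev
    return smoothed
```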

Supporting Information

Text S1

We present further results for temporal-difference learning and elaborate on its failure to learn mixed Nash equilibria.

(PDF)

Text S2

We show how the plasticity rule presented in the main text is based on a gradient ascent procedure maximizing the average reward.

(PDF)

Text S3

We demonstrate that the heuristic rules of Erev and Roth [17] are not gradient procedures.

(PDF)

Acknowledgments

We thank Robert Urbanczik for helpful discussions, Michael C. Dorris for providing the code of the computer algorithm used in [11], Johannes Hewig for the data of humans playing blackjack [10], and Yonatan Loewenstein for helpful feedback on the covariance rules.

Funding Statement

This work was supported by a grant of the Swiss SystemsX.ch initiative (Neurochoice, evaluated by the SNSF). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Dayan P, Daw ND (2008) Decision theory, reinforcement learning, and the brain. Cogn Affect Behav Neurosci 8: 429–453.
  • 2. Sutton RS, Barto AG (1998) Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
  • 3. Seymour B, O'Doherty JP, Dayan P, Koltzenburg M, Jones AK, et al. (2004) Temporal difference models describe higher-order learning in humans. Nature 429: 664–667.
  • 4. Potjans W, Morrison A, Diesmann M (2009) A spiking neural network model of an actor-critic learning agent. Neural Comput 21: 301–339.
  • 5. Howard RA (1960) Dynamic Programming and Markov Processes. Cambridge, MA: MIT Press.
  • 6. Smallwood RD, Sondik EJ (1973) The optimal control of partially observable Markov processes over a finite horizon. Oper Res 21: 1071–1088.
  • 7. Friedrich J, Urbanczik R, Senn W (2011) Spatio-temporal credit assignment in neuronal population learning. PLoS Comput Biol 7: e1002092.
  • 8. Fudenberg D, Levine DK (1998) Theory of Learning in Games. Cambridge, MA: MIT Press.
  • 9. Von Neumann J, Morgenstern O (1944) Theory of Games and Economic Behavior. Princeton: Princeton University Press.
  • 10. Hewig J, Trippe R, Hecht H, Coles GH, Holroyd CB, et al. (2007) Decision-making in blackjack: An electrophysiological analysis. Cereb Cortex 17: 865–877.
  • 11. Dorris MC, Glimcher PW (2004) Activity in posterior parietal cortex is correlated with the relative subjective desirability of action. Neuron 44: 365–378.
  • 12. Thorndike EL (1898) Animal Intelligence: An Experimental Study of the Associative Processes in Animals. Psychol Monogr 2: 321–330.
  • 13. Rescorla R, Wagner A (1972) A theory of Pavlovian conditioning: variations in the effectiveness of reinforcement and nonreinforcement. In: Black AH, Prokasy WF, editors. Classical Conditioning II: Current Research and Theory. New York: Appleton-Century-Crofts. pp. 64–99.
  • 14. Dayan P, Long T (1998) Statistical models of conditioning. Adv Neural Inf Process Syst 10: 117–123.
  • 15. Fiete IR, Seung HS (2006) Gradient learning in spiking neural networks by dynamic perturbation of conductances. Phys Rev Lett 97: 048104.
  • 16. Frémaux N, Sprekeler H, Gerstner W (2010) Functional requirements for reward-modulated spike-timing-dependent plasticity. J Neurosci 30: 13326–13337.
  • 17. Erev I, Roth AE (1998) Predicting how people play games: reinforcement learning in experimental games with unique, mixed strategy equilibria. Amer Econ Rev 88: 848–881.
  • 18. Anderson JR, Bothell D, Byrne MD, Douglass S, Lebiere C, et al. (2004) An integrated theory of the mind. Psychol Rev 111: 1036–1060.
  • 19. Gerstner W, Kistler WM (2002) Spiking Neuron Models. Cambridge, UK: Cambridge University Press.
  • 20. Schultz W, Dayan P, Montague PR (1997) A neural substrate of prediction and reward. Science 275: 1593–1599.
  • 21. Wang XJ (2008) Decision making in recurrent neuronal circuits. Neuron 60: 215–234.
  • 22. Urbanczik R, Senn W (2009) Reinforcement learning in populations of spiking neurons. Nat Neurosci 12: 250–252.
  • 23. Izhikevich EM (2007) Solving the distal reward problem through linkage of STDP and dopamine signaling. Cereb Cortex 17: 2443–2452.
  • 24. Averbeck B, Latham PE, Pouget A (2006) Neural correlations, population coding and computation. Nat Rev Neurosci 7: 358–366.
  • 25. Von Neumann J (1928) Zur Theorie der Gesellschaftsspiele. Math Ann 100: 295–320.
  • 26. Hewig J, Trippe R, Hecht H, Coles GH, Holroyd CB, et al. (2008) An electrophysiological analysis of coaching in blackjack. Cortex 44: 1197–1205.
  • 27. Avenhaus R, Von Stengel B, Zamir S (2002) Inspection games. In: Aumann RJ, Hart S, editors. Handbook of Game Theory with Economic Applications. pp. 1947–1987.
  • 28. Glimcher PW, Camerer C, Fehr E, Poldrack R, editors (2008) Neuroeconomics: Decision Making and the Brain. Amsterdam: Elsevier.
  • 29. Pfister J, Toyoizumi T, Barber D, Gerstner W (2006) Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Comput 18: 1318–1348.
  • 30. Florian RV (2007) Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Comput 19: 1468–1502.
  • 31. Friedrich J, Urbanczik R, Senn W (2010) Learning spike-based population codes by reward and population feedback. Neural Comput 22: 1698–1717.
  • 32. Seung HS (2003) Learning in spiking neural networks by reinforcement of stochastic synaptic transmission. Neuron 40: 1063–1073.
  • 33. Werfel J, Xie X, Seung HS (2005) Learning curves for stochastic gradient descent in linear feedforward networks. Neural Comput 17: 2699–2718.
  • 34. Crawford VP (1985) Learning behavior and mixed-strategy Nash equilibria. J Econ Behav Organ 6: 69–78.
  • 35. Stahl DO (1988) On the instability of mixed-strategy Nash equilibria. J Econ Behav Organ 9: 59–69.
  • 36. Shanks DR, Tunney RJ, McCarthy JD (2002) A re-examination of probability matching and rational choice. J Behav Decis Making 15: 233–250.
  • 37. Loewenstein Y, Seung HS (2006) Operant matching is a generic outcome of synaptic plasticity based on the covariance between reward and neural activity. Proc Natl Acad Sci U S A 103: 15224–15229.
  • 38. Loewenstein Y, Prelec D, Seung HS (2009) Operant matching as a Nash equilibrium of an intertemporal game. Neural Comput 21: 2755–2773.
  • 39. Sakai Y, Fukai T (2008) The actor-critic learning is behind the matching law: matching versus optimal behaviors. Neural Comput 20: 227–251.
  • 40. Herrnstein RJ (1961) Relative and absolute strength of response as a function of frequency of reinforcement. J Exp Anal Behav 4: 267–272.
  • 41. Loewenstein Y (2010) Synaptic theory of replicator-like melioration. Front Comput Neurosci 4: 17.
  • 42. Soltani A, Wang XJ (2006) A biophysically based neural model of matching law behavior: melioration by stochastic synapses. J Neurosci 26: 3731–3744.
  • 43. Luce R (1959) Individual Choice Behavior: A Theoretical Analysis. New York: Wiley.
  • 44. Shanks DR, Tunney RJ, McCarthy JD (1998) Do people play Nash equilibrium? Lessons from evolutionary game theory. J Econ Lit 36: 1–28.
  • 45. Littman ML (1994) Markov games as a framework for multi-agent reinforcement learning. In: Proceedings of the International Conference on Machine Learning. San Francisco, CA. pp. 157–163.
  • 46. Hu J, Wellman MP (2003) Nash Q-learning for general-sum stochastic games. J Mach Learn Res 4: 1039–1069.
  • 47. Holt CA, Roth AE (2004) The Nash equilibrium: a perspective. Proc Natl Acad Sci U S A 101: 3999–4002.
  • 48. Yoshida W, Dolan RJ, Friston KJ (2008) Game theory of mind. PLoS Comput Biol 4: e1000254.
  • 49. Schopenhauer A (1890) The Wisdom of Life. London: S. Sonnenschein. 147 pp.
  • 50. Seung HS (1996) How the brain keeps the eyes still. Proc Natl Acad Sci U S A 93: 13339–13344.
  • 51. Williams RJ (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach Learn 8: 229–256.
