Significance
Diversity of information and expertise among group members has been identified as a crucial ingredient of collective intelligence. However, many factors tend to reduce the diversity of groups, such as herding, groupthink, and conformity. We show why the individual incentives operating in financial and prediction markets and in the scientific community reduce the diversity of information, and how these incentives can be changed to improve the accuracy of collective forecasting. Our results, therefore, suggest ways to remedy the poor performance of collective forecasting seen in recent political events and to change career rewards so as to make scientific research more successful.
Keywords: collective intelligence, game theory, democracy, diversity, markets
Abstract
Collective intelligence is the ability of a group to perform more effectively than any individual alone. Diversity among group members is a key condition for the emergence of collective intelligence, but maintaining diversity is challenging in the face of social pressure to imitate one’s peers. Through an evolutionary game-theoretic model of collective prediction, we investigate the role that incentives may play in maintaining useful diversity. We show that market-based incentive systems produce herding effects, reduce information available to the group, and restrain collective intelligence. Therefore, we propose an incentive scheme that rewards accurate minority predictions and show that this produces optimal diversity and collective predictive accuracy. We conclude that real world systems should reward those who have shown accuracy when the majority opinion has been in error.
The financial crisis and its aftermath have reopened long-standing debates about the collective wisdom of our societal organizations (1–3). Financial and prediction markets seem unable to foresee major economic and political upheavals, such as the credit crunch or Brexit. This lack of collective foresight could be the result of insufficient diversity among decision-making individuals (4). Diversity has been identified as a key ingredient of successful groups across many facets of collective behavior (5–7). It is a crucial condition for collective intelligence (6–10) that can be more important than the intelligence of individuals within a group (11). Because collective behavior ultimately results from individual actions, incentives play a major role in shaping diversity and collective performance (12, 13). Although most previous research has focused on explaining how collective intelligence emerges (14), less is known about how to optimize the wisdom of crowds in a quantitative sense.
Harnessing collective wisdom is important. Global systems of communication, governance, trade, and transport grow rapidly in complexity every year. Many of these real world problems have a large number of contributing factors. For example, predicting future economic fluctuations requires integrating knowledge about credit markets and supply chains across the world as well as the ramifications of political developments in different countries and the shifting sentiments of individual investors and consumers. Political developments are themselves the result of many factors: both direct (e.g., political parties’ strategies) and indirect (e.g., technological change). Scientific questions are also increasingly complex. For instance, building a complete model of an ecosystem requires bringing together expertise on many scales from individual animal behavior to complex networks of predation and codependency (15). In each case, knowledge about the diverse contributing factors is dispersed. For these high-dimensional problems, it is becoming impossible for any single individual or agency to gather and process enough data to understand the entire system (16). In many cases, we do not even have full knowledge of what the potential causal factors are, let alone a full understanding of them.
Attention is, therefore, shifting toward distributed systems as a means of bringing together the local knowledge and private expertise of many individuals (12, 17). In machine learning, researchers have found that a pluralistic modeling approach maximizes prediction accuracy (18). In politics, the forecasts of prediction markets (19, 20) are now commonly reported alongside opinion polls during elections. Scientists are also turning to crowdsourcing collective wisdom as a validation tool (21–23). However, as highlighted by the failure of financial and prediction markets to foresee the results of recent elections in the United Kingdom and the United States, collective wisdom is not a guaranteed property of a distributed system (2), partly because of herding effects (24, 25). In science as well, the incentive structure undervalues diversity: low-risk projects with assured outcomes are more likely to be funded than highly novel or interdisciplinary work (26, 27). Rewards for conformity with institutional cultures can severely limit useful diversity (28). Previous work (29) has investigated mechanisms to elicit truthful minority views to counter herding effects in expressed opinion. This work raises the question: how can minority viewpoints be fostered in the first place to enhance diversity and its potential benefits for collective intelligence?
Here, we analyze an evolutionary game-theoretic model of collective intelligence among unrelated agents motivated by individual rewards. We show that previously proposed incentive structures (13) are suboptimal from the standpoint of collective intelligence and in particular, produce too little diversity between individuals. We propose an incentive system that we term “minority rewards,” wherein agents are rewarded for expressing accurate minority opinions, and show that this produces stable, near-optimal collective intelligence at equilibrium. Our results show that common real world reward structures are unlikely to produce optimal collectively intelligent behavior, and we present a superior alternative that can inform the design of reward systems.
Results
To investigate the effect of incentives on collective intelligence, we use an abstract model of collective information gathering and aggregation (13). Complex outcomes are modeled as a result of independent causal factors. A large population of individual agents gathers information in a decentralized fashion, each being able to pay attention to just one of these factors at any given time. Collective prediction is achieved by aggregation of individual predictions via simple voting. Agents are motivated by an incentive scheme that offers rewards for making accurate predictions. It is assumed that the accuracy of an individual’s prediction can be judged after the event. We exclude cases where either the ground truth is never discoverable or no such ground truth exists (for instance, in questions regarding taste or voter preferences). Instead, we consider questions such as the prediction of future events (which are known after they occur) or scientific questions (which may be resolved at some later point in time). For example, one might consider whether national gross domestic product (GDP) will rise above trend in the coming year, whether a certain party will win an election, or whether global temperatures will change by more than 1 °C in the next decade. The proportion of agents attending to different sources of information evolves depending on the rewards that they receive, where less successful agents tend to imitate their more successful peers.
Consider a binary outcome, $Y \in \{-1, +1\}$, which is the result of many factors, $x_1, x_2, \ldots, x_n$. We model this outcome as the sign of a weighted sum of the contributing factors, with weight $a_i$ attached to factor $x_i$:

\[ Y = \operatorname{sign}\!\left( \sum_{i=1}^{n} a_i x_i \right). \qquad [1] \]
For simplicity, we assume that each contributing factor takes binary values, such that $x_i \in \{-1, +1\}$, and that the values of these factors are uncorrelated (SI Appendix discusses instances with correlated factors). Without loss of generality, $a_i > 0$ for all factors.
An individual attending to factor $i$ observes the value of $x_i$. Having observed the value of $x_i$, this individual then votes in line with that observation. Thus, if the proportion of individuals attending to factor $i$ is $p_i$, the collective prediction is given by

\[ \hat{Y} = \operatorname{sign}\!\left( \sum_{i=1}^{n} p_i x_i \right). \qquad [2] \]
Collective accuracy, $A$, is the probability that the collective vote agrees with the ground truth, given the distribution, $\mathbf{p} = (p_1, \ldots, p_n)$, of agents attending to each factor:

\[ A = P\!\left( \hat{Y} = Y \mid \mathbf{p} \right). \qquad [3] \]
The reward given to an agent for an accurate vote depends on the proportion of other correct votes in any given collective decision. Let $q_i$ be the proportion of agents who will vote identically to those attending to factor $i$ (i.e., the proportion of agents attending to factors whose values match $x_i$): $q_i = \sum_j p_j \delta_{x_i x_j}$, where $\delta$ is the Kronecker delta. Then, the reward is determined by a function, $R(q)$, such that an agent receives a reward proportional to $R(q_i)$ if and only if his/her prediction is accurate. We will investigate three potential reward systems for deciding how each agent is rewarded for his/her accurate votes, the first two of which are taken from previous work by Hong et al. (13). The first of these is “binary rewards”: agents receive a fixed reward if they make an accurate prediction, corresponding to the reward function $R(q) = 1$. The second is “market rewards”: a fixed total reward is shared equally among all agents who vote accurately, corresponding to the reward function $R(q) = 1/q$. This reward scheme adds an incentive to be accurate when others are not and closely mimics the reward system of actual prediction markets. Finally, we introduce minority rewards: agents are rewarded for an accurate prediction when fewer than one-half of the other agents also vote accurately, corresponding to the reward function $R(q) = \Theta(1/2 - q)$, where $\Theta$ is the Heaviside step function. This system explicitly rewards agents who hold accurate minority opinions and incentivizes agents to be accurate on questions where the majority prediction is wrong.
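For concreteness, the three reward functions can be written compactly in code. The sketch below is our own Python illustration (the function names are ours, not part of the original implementation):

```python
import numpy as np

def binary_reward(q):
    """Binary rewards: a fixed reward for any accurate prediction, R(q) = 1."""
    return np.ones_like(np.asarray(q, dtype=float))

def market_reward(q):
    """Market rewards: a fixed total reward shared equally among accurate voters, R(q) = 1/q."""
    return 1.0 / np.asarray(q, dtype=float)

def minority_reward(q):
    """Minority rewards: reward only when fewer than half of all agents vote the same way."""
    return np.where(np.asarray(q, dtype=float) < 0.5, 1.0, 0.0)
```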
The expected reward that a player receives by attending to factor $i$ is determined by the expected value of $R(q_i)$, conditioned on voting accurately (Eq. 8). Players adapt their behavior in response to the rewards that they and others receive. In line with previous evolutionary game theory work, we model changes in individual attention to factors as the result of imitation; agents who are observed to be gaining greater rewards are imitated by those gaining fewer. This model leads to the classic replicator equation (30), describing the evolution of the proportion of agents, $p_i$, who pay attention to factor $i$ (Eq. 6).
We studied the behavior of the model under the three incentive schemes described above. We initialized the model by assigning uniform proportions of agents to each factor, with the values of $a_i$ drawn randomly from a uniform distribution (the absolute scale of the $a_i$ does not affect the model). We followed the evolutionary dynamics described by the replicator equation until the population converged to equilibrium. This calculation was repeated over a wide range of problem dimensionalities $n$. Expected rewards were calculated either by exhaustive search over all possible values of $x_1, \ldots, x_n$ (for small $n$) or using appropriate normal distribution limits for large numbers of factors (Materials and Methods).
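The following Monte Carlo sketch illustrates this procedure. It is an independent Python illustration, not the R implementation used for the results reported here; the coefficient sampling, sample sizes, and step size are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_rewards(a, p, reward_fn, n_samples=20_000):
    """Monte Carlo estimate of the expected reward r_i for attending to each factor i."""
    n = len(a)
    x = rng.choice([-1.0, 1.0], size=(n_samples, n))             # binary factor values
    y = np.sign(x @ a)                                            # ground truth outcome per sample
    p_plus = (x == 1).astype(float) @ p                           # fraction of agents voting +1
    q = np.where(x == 1, p_plus[:, None], 1.0 - p_plus[:, None])  # q_i: fraction voting like factor i's observers
    accurate = (x == y[:, None])                                  # attending to factor i yields a correct vote
    return (accurate * reward_fn(q)).mean(axis=0)                 # r_i = E[R(q_i) when the vote is accurate]

def evolve(a, reward_fn, steps=2_000, dt=0.1):
    """Discrete-time replicator dynamics from a uniform initial allocation of attention."""
    p = np.full(len(a), 1.0 / len(a))
    for _ in range(steps):
        r = expected_rewards(a, p, reward_fn)
        rbar = p @ r
        if rbar > 0:                                              # normalise so the mean reward per agent is one
            r, rbar = r / rbar, 1.0
        p = np.clip(p + dt * p * (r - rbar), 1e-12, None)         # replicator update: dp_i = p_i (r_i - rbar) dt
        p /= p.sum()
    return p

# Example: equilibrium allocation under minority rewards for 20 factors with uniform coefficients.
a = np.sort(rng.uniform(size=20))[::-1]
p_eq = evolve(a, lambda q: np.where(q < 0.5, 1.0, 0.0))
```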
Fig. 1 shows how collective accuracy and diversity evolve toward equilibrium under the three reward systems of binary, market, and minority rewards in simulations with small, intermediate, and large numbers of independent factors. Note the logarithmic scale on the x axis, which better illustrates the early evolution. For each reward system, two initial allocations of agents’ attention are used: (i) a uniform allocation to each factor and (ii) an allocation where one-half of all agents attend to the single most important factor, with the others allocated uniformly across the remaining factors. This figure shows that the equilibrium distribution of attention is the same whether agents initially attend to arbitrary factors or initially favor the most obvious ones. The convergence time to equilibrium depends on the magnitude of rewards; in our simulations, we normalize rewards such that the mean reward per agent is one at each time step.
Fig. 1.
Evolution of (A, C, and E) collective accuracy and (B, D, and F) diversity for binary rewards (black lines and squares), market rewards (blue lines and circles), and minority rewards (red lines and triangles) in simulations with (A and B) a small, (C and D) an intermediate, and (E and F) a large number of independent factors. Solid lines indicate results from a uniform initial allocation of agents over factors, whereas dashed lines indicate an initial allocation of 50% of agents to the single most important factor, with the remainder allocated uniformly over the remaining factors. Note that the number of time steps is plotted on a logarithmic scale.
Fig. 2 shows how the resulting collective accuracy at equilibrium varies with the problem dimensionality $n$ for the three reward systems and a uniform allocation of agents to factors. For simple problems (small $n$), all reward schemes produce high collective accuracy (over 90%). In these cases, the strong predictive power of only one or two meaningful independent factors means that individual accuracy is high, and collective aggregation leads to only relatively small increases in collective accuracy. However, even for these small-$n$ problems, we observe that minority rewards outperform the other schemes. The differences in collective accuracy become more substantial as $n$ increases. As Fig. 1 shows, these differences become apparent after only a few iterations, well before equilibrium is reached. Consistent with ref. 13, we find that market rewards increase diversity and collective accuracy relative to binary rewards. However, collective accuracy under market rewards declines rapidly with increasing $n$. For comparison, we also show the accuracy achieved under a uniform allocation of agents, which reaches a stable value of ∼80% for large $n$. Market rewards, therefore, produce lower accuracy than a uniform allocation for all but the lowest values of $n$. In contrast, minority rewards lead to far higher accuracy than any of the alternative reward systems investigated, regardless of system complexity, and achieve close to 100% accuracy across the full range of problem sizes that we tested. Our mathematical analysis shows that minority rewards will continue to produce near-perfect accuracy for any problem size if the population of agents is large enough (SI Appendix). Our analysis of finite group sizes shows that minority rewards outperform other reward schemes for problem dimensions up to 10 times larger than the population size, assuming best response dynamics (SI Appendix, Fig. S1).
Fig. 2.
Collective accuracy at equilibrium as a function of the number of independent factors, $n$, across the different reward systems. Lines and shaded regions show the mean and SD of 10 independent simulations with different randomly generated values of the factor coefficients. Points on each curve show the precise values of $n$ for which simulations were carried out, equally spaced within each multiple of 10.
The different levels of collective accuracy across reward systems reflect the differing equilibrium distributions of the proportion of agents attending to each factor. Minority rewards outperform both market rewards and unweighted approaches because attention is automatically redirected if the collective prediction would otherwise be wrong; only those outcomes where the majority opinion is wrong contribute to agents’ rewards. Under minority rewards, the system converges toward a state where the number of agents paying attention to any factor is proportional to that factor’s importance. This optimal distribution is both a stationary and a stable state of the minority rewards system (our mathematical analysis is in SI Appendix). Additional analysis (SI Appendix, Fig. S2) shows that varying the cutoff value for minority rewards (for example, rewarding those voting with less than 40% or less than 60% of the group) invariably reduces collective accuracy. In Fig. 3, we plot the equilibrium distribution for each reward system for a high-dimensional problem. Using binary rewards, almost all agents attend to the single most important factor. Under market rewards, agents distribute themselves in proportion to the predictive value of the factors but only among the top 10% of factors; the remaining 90% of factors receive essentially no attention at all (this proportion decreases as $n$ increases and is, therefore, larger for smaller values of $n$). By comparison, under minority rewards, the proportion of agents paying attention to a factor is also proportional to its importance, but agents cover the full range of factors down to the least important ones, thereby providing more information to the group and improving predictions. The evolution of this distribution toward equilibrium is shown in detail in SI Appendix, Fig. S3.
Fig. 3.
Equilibrium proportions of agents paying attention to each factor as a function of the coefficient associated with that factor. Results are shown for simulations on a high-dimensional problem under the three reward systems of (A) binary rewards, (B) market rewards, and (C) minority rewards as well as (D) the uniform allocation. Binary rewards drive almost all agents to the single most important factor (that with the greatest coefficient). Market rewards create a distribution proportional to coefficient size across only the most important 10% of factors, whereas minority rewards distribute agents almost perfectly in proportion to the magnitude of the coefficient.
Discussion
We proposed a reward system, minority rewards, that incentivizes individual agents in their choice of which informational factors to pay attention to when operating as part of a group. This system rewards agents for both making accurate predictions and being in the minority of their peers or conspecifics. As such, it encourages a balance between seeking useful information that has substantive predictive value for the ground truth and seeking information that is currently underutilized by the group. Conversely, where the collective opinion is already correct, no rewards are offered, and therefore, no agent is motivated to change their strategy. Over time, therefore, agents are motivated to change their behavior only in ways that benefit collective accuracy.
The poor performance of market rewards relative to a uniform, unweighted allocation for all but the smallest problems shows that a market reward system incentivizes herding behavior and suppresses useful diversity, as illustrated by the equilibrium distribution in Fig. 3B. This result suggests that stock markets and prediction markets tend to systematically underweight a large pool of informational factors that individually have limited predictive power but that can contribute powerfully to aggregate predictions if agents can be persuaded to pay attention to them. This finding casts doubt on the accuracy of existing markets as a tool for aggregating dispersed knowledge to predict future profits or events and motivates additional work on how to design collectively more accurate market mechanisms. The relatively high performance of uniform allocations of attention supports work showing that models with equally weighted predictors can match or even improve on more closely fitted prediction models (31, 32). Including all relevant predictors is often more important than determining their appropriate weights; too much diversity is less harmful than too little, especially for complex problems.
Incentives are a fundamental part of any effort to harness the potential of collective intelligence. In this paper, we have presented evidence that rewarding accurate minority opinions can induce near-optimal collective accuracy within a model of collective prediction. Therefore, to maximize the collective wisdom of a group, we suggest that individuals should not be rewarded simply for having made successful predictions or findings, nor should a total reward be distributed equally among those who have been successful or accurate. Instead, rewards should be directed primarily toward those who have made successful predictions in the face of majority opposition from their peers. This proposal can be understood intuitively as rewarding those who contribute information with the potential to change the collective opinion, because it contradicts the current mainstream view. In our model, groups rapidly converge to an equilibrium with very high collective accuracy, after which the rewards for each agent become less frequent. We anticipate that, after this occurs, agents would move on to new unsolved problems. This movement would produce a dynamic system in which agents are incentivized not only to solve problems collectively but also to address issues where collective wisdom is currently weakest. Future work should investigate how our proposed reward system can best be implemented in practice, from scientific career schemes to funding and reputation systems (33) to prediction markets and democratic procedures (34). We suggest experiments to determine how humans respond to minority rewards and additional theoretical work to determine the effects of stochastic rewards, agent learning, and finite group dynamics. In conclusion, how best to foster collective intelligence is an important problem that we need to solve collectively.
Materials and Methods
Terminology.
Throughout this paper, we use the following conventions for describing probability distributions.
$E[X]$ denotes the expectation of the random variable $X$.

$\phi(x; \mu, \sigma^2)$ denotes the normal probability density function with mean $\mu$ and variance $\sigma^2$, evaluated at $x$.

$\phi(\mathbf{x}; \boldsymbol{\mu}, \Sigma)$, for vector-valued $\mathbf{x}$ and $\boldsymbol{\mu}$ and matrix $\Sigma$, denotes the multivariate normal probability density function with mean $\boldsymbol{\mu}$ and covariance matrix $\Sigma$, evaluated at $\mathbf{x}$.

$\Phi(z)$ denotes the standard normal cumulative distribution function (mean = 0 and SD = 1), evaluated at $z$.
Ground Truth and Voting.
We consider a binary outcome, $Y$, that is the result of many independent factors, $x_1, \ldots, x_n$ (correlated factors are treated in SI Appendix). We model this outcome as being determined by the sign of $V$: a weighted sum of the contributing factors,

\[ Y = \operatorname{sign}(V), \qquad V = \sum_{i=1}^{n} a_i x_i. \qquad [4] \]
In the computational implementation of this model, we sample the values of the coefficients $a_i$ independently from a uniform distribution (the scale of which is arbitrary and does not influence the analysis). We assume without loss of generality that factors are ordered such that $a_1 \geq a_2 \geq \cdots \geq a_n > 0$, and furthermore, we normalize the values of the coefficients, which does not affect the value of $Y$. Our analytical results (SI Appendix) do not depend on the exact distribution of the $a_i$. Any sampling distribution for the $a_i$ that has a finite moment of order $2 + \delta$ (for some $\delta > 0$) will obey the Ljapunov and Lindeberg conditions (35), guaranteeing convergence in distribution of the standardized sum $V$ to a normal distribution, from which our results are obtained.
Each individual attends to one factor at a given time; an individual attending to factor $i$, therefore, observes the value of $x_i$. Having observed the value of $x_i$, this individual then votes in line with that observation. The collective prediction, $\hat{Y}$, is given by the sign of the collective vote $\hat{V}$, which is a sum over the contributing factors weighted by the proportion of individuals attending to each factor:

\[ \hat{Y} = \operatorname{sign}(\hat{V}), \qquad \hat{V} = \sum_{i=1}^{n} p_i x_i. \qquad [5] \]
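As a minimal illustration of Eqs. 4 and 5, the following Python sketch (our own, with the uniform coefficient sampling described above as an assumption) draws one realization of the factors and computes the ground truth and the collective prediction:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 100
a = np.sort(rng.uniform(size=n))[::-1]   # coefficients ordered a_1 >= ... >= a_n > 0 (assumed uniform sampling)
p = np.full(n, 1.0 / n)                  # uniform allocation of attention across factors

x = rng.choice([-1.0, 1.0], size=n)      # one realization of the binary factor values
Y = np.sign(a @ x)                       # ground truth outcome, Eq. 4
Y_hat = np.sign(p @ x)                   # collective prediction, Eq. 5
```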
Evolutionary Dynamics.
We model changes in individual attention to factors as being motivated by imitation; agents who are observed to be gaining greater rewards are imitated by those gaining fewer (30), leading to the classic replicator equation (36–38) describing the evolution of $p_i$, the proportion of agents attending to factor $i$:

\[ \frac{dp_i}{dt} = p_i \left( r_i - \bar{r} \right), \qquad [6] \]

where $\bar{r} = \sum_j p_j r_j$ by definition. The expected reward $r_i$ is the mean reward that an agent attending to factor $i$ will receive, averaging over all possible values of both $x_i$ and the other factors. It is, thus, determined by both the proportion of times that the agent will vote correctly (when $x_i = Y$) and the magnitude of the reward received on those occasions (determined by the reward system). To calculate this expectation, we either exhaustively enumerate all possibilities (for small $n$) or numerically evaluate an approximation based on the normally distributed limiting behavior (see below). When solving these equations (one for each factor) numerically, we normalize the rewards given to all agents such that $\bar{r} = 1$. This normalization is equivalent to an adaptive variation of the time step and does not change the relative rewards between options or the final steady state, but it ensures smoother convergence to that state. This normalization also mimics a real constraint on any practical reward system, where the total reward available may be fixed. In our model, we assume that agents reliably receive the expected reward for the factor to which they attend. Similar models with stochastic rewards (13) may show slower convergence to equilibrium. In our simulation of the collective dynamics of the system, we used the Runge–Kutta order 2(3) algorithm as implemented in R by Soetaert et al. (39).
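A hedged Python analogue of this setup is sketched below (the published results used the R package deSolve; here `expected_rewards` stands for any assumed routine that returns the vector of expected rewards $r_i$, such as the exact enumeration or normal approximation described later):

```python
import numpy as np
from scipy.integrate import solve_ivp

def replicator_rhs(t, p, a, reward_fn):
    """Right-hand side of Eq. 6: dp_i/dt = p_i (r_i - rbar), with rewards normalised so rbar = 1."""
    p = np.clip(p, 0.0, None)
    p = p / p.sum()
    r = expected_rewards(a, p, reward_fn)   # assumed helper returning the expected reward for each factor
    rbar = p @ r
    if rbar > 0:
        r, rbar = r / rbar, 1.0             # normalisation: mean reward per agent equals one
    return p * (r - rbar)

# Example usage (p0, a, and minority_reward defined elsewhere); RK23 mirrors the Runge-Kutta 2(3) scheme:
# sol = solve_ivp(replicator_rhs, (0.0, 200.0), p0, args=(a, minority_reward), method="RK23")
```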
The Three Reward Schemes.
We present three possible systems for rewarding agents for making accurate predictions. Each reward scheme corresponds to a choice of reward function, $R(q)$, which determines the magnitude of the reward when an agent makes an accurate prediction as a function of the proportion, $q$, of other agents who also do so. These reward schemes are

i) binary rewards: $R(q) = 1$;

ii) market rewards: $R(q) = 1/q$; and

iii) minority rewards: $R(q) = \Theta\!\left(\tfrac{1}{2} - q\right)$, where $\Theta$ is the Heaviside step function.
The expected reward, $r_i$, that an agent receives for attending to factor $i$ is, therefore, the expected value of $R(q_i)$ over those outcomes in which his/her vote is accurate:

\[ r_i = \int_{\epsilon}^{1} R(q)\, P\!\left( q_i = q,\ x_i = Y \mid \mathbf{p} \right) dq, \qquad [7] \]

where $q_i$ is the proportion of agents voting identically to those attending to factor $i$: $q_i = \sum_j p_j \delta_{x_i x_j}$, where $\delta$ is the Kronecker delta. The lower limit, $\epsilon$, of the integral above accounts for the limiting case of a single individual attending to the factor. As the population size tends to infinity, $\epsilon$ tends to zero; in our implementation, we take $\epsilon$ to be a small positive constant.
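For small $n$, the expectation in Eq. 7 can be evaluated exactly by enumeration. A small illustrative Python sketch (our own construction, not the paper's code) is:

```python
import numpy as np
from itertools import product

def exact_expected_rewards(a, p, reward_fn):
    """Exact expected reward r_i per factor, enumerating all 2^n equally likely factor configurations."""
    n = len(a)
    r = np.zeros(n)
    for xs in product([-1.0, 1.0], repeat=n):
        x = np.array(xs)
        y = np.sign(a @ x)                     # ground truth for this configuration
        prob = 0.5 ** n                        # each configuration is equally likely
        for i in range(n):
            if x[i] == y:                      # agent attending to factor i votes accurately
                q_i = p[x == x[i]].sum()       # proportion of agents voting identically, q_i
                r[i] += prob * float(reward_fn(q_i))
    return r
```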
Normal Approximation for Expected Rewards.
For large $n$, an exhaustive search over all combinations of $x_1, \ldots, x_n$ is computationally infeasible. Instead, we use the Central Limit Theorem to approximate the expected reward received for attending to any given factor. Focusing on a single individual who attends to factor $i$, we can calculate the expected reward received by the individual as follows. We assume, without loss of generality by symmetry, that the focal individual observes $x_i = 1$. The expected reward, $r_i$, is then

\[ r_i = E\!\left[ R(q_i)\, \Theta(V) \mid x_i = 1 \right], \qquad [8] \]

where the Heaviside function $\Theta(V)$ selects those outcomes in which the vote $x_i = 1$ is accurate (i.e., $Y = 1$).
Given the independence of the individual values of $x_j$, the mean and variance of $V$ (conditional on $x_i = 1$) can be determined from the linearity of expectations and the sum rule for variances of independent variables:

\[ \mu_V \equiv E[V \mid x_i = 1] = a_i, \qquad \sigma_V^2 \equiv \operatorname{Var}(V \mid x_i = 1) = \sum_{j \neq i} a_j^2. \qquad [9] \]
In the case of binary rewards, where $R(q) = 1$, the value of $q_i$ does not impact the reward for attending to any factor. In this case, the expected reward is calculated directly from the distribution of $V$:

\[ r_i = P(V > 0 \mid x_i = 1) = \Phi\!\left( \frac{\mu_V}{\sigma_V} \right) = \Phi\!\left( \frac{a_i}{\sqrt{\sum_{j \neq i} a_j^2}} \right). \qquad [10] \]
For other reward schemes, where the value of $q_i$ affects the reward, we also require an approximation for $q_i$. Again, we calculate the mean and variance of $q_i$ (conditional on $x_i = 1$):

\[ \mu_q \equiv E[q_i \mid x_i = 1] = \frac{1 + p_i}{2}, \qquad \sigma_q^2 \equiv \operatorname{Var}(q_i \mid x_i = 1) = \frac{1}{4} \sum_{j \neq i} p_j^2. \qquad [11] \]
The convergence of $q_i$ in distribution to a normal distribution depends on the values of $p_j$ meeting the Lindeberg condition (35). In practice, this condition means that all elements of $\mathbf{p}$ should tend to zero as the number of dimensions, $n$, tends to infinity (i.e., the distribution should not be dominated by a small subset of elements). As illustrated in Fig. 1, when the system is initialized in a state conforming to these requirements, it will remain so for the market and minority reward systems but not for the binary reward system. Because the binary reward system does not depend on the value of $q_i$, the failure of this approximation in that case has no repercussions for our results.
$V$ and $q_i$ are correlated because of their shared dependence on the values of the $x_j$, with a covariance of

\[ c \equiv \operatorname{Cov}(V, q_i \mid x_i = 1) = \frac{1}{2} \sum_{j \neq i} a_j p_j. \qquad [12] \]
In the normal distribution limit, the joint distribution of $V$ and $q_i$ may be approximated as

\[ P(V = v,\ q_i = q \mid x_i = 1) \simeq \phi\!\left( \begin{pmatrix} v \\ q \end{pmatrix};\ \boldsymbol{\mu},\ \Sigma \right), \qquad [13] \]

with

\[ \boldsymbol{\mu} = \begin{pmatrix} \mu_V \\ \mu_q \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} \sigma_V^2 & c \\ c & \sigma_q^2 \end{pmatrix}. \]
Using standard relations for conditional normal distributions, we, therefore, have

\[ P(q_i = q \mid V = v,\ x_i = 1) \simeq \phi\!\left( q;\ \mu_q + \frac{c}{\sigma_V^2}(v - \mu_V),\ \sigma_q^2 - \frac{c^2}{\sigma_V^2} \right). \qquad [14] \]
Combining the above expressions gives the complete equation for the expected reward of attending to factor $i$, conditioned on the values of the coefficients $a_j$, the current distribution of attention $\mathbf{p}$, and the reward function $R$:

\[ r_i \simeq \int_{0}^{\infty} \phi\!\left( v;\ \mu_V,\ \sigma_V^2 \right) \int_{\epsilon}^{1} R(q)\ \phi\!\left( q;\ \mu_q + \frac{c}{\sigma_V^2}(v - \mu_V),\ \sigma_q^2 - \frac{c^2}{\sigma_V^2} \right) dq\ dv. \qquad [15] \]
This integral may be evaluated numerically to give the expected reward for any general reward modulation function $R(q)$.
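The double integral in Eq. 15 can be handled with standard quadrature. The following Python sketch is an illustration under our own choices of tolerances and lower cutoff $\epsilon$ (not the authors' implementation):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import dblquad

def expected_reward_normal(i, a, p, reward_fn, eps=1e-6):
    """Normal approximation (Eq. 15) to the expected reward for attending to factor i."""
    mask = np.arange(len(a)) != i
    mu_v, var_v = a[i], np.sum(a[mask] ** 2)                        # Eq. 9
    mu_q, var_q = (1.0 + p[i]) / 2.0, np.sum(p[mask] ** 2) / 4.0    # Eq. 11
    cov = np.sum(a[mask] * p[mask]) / 2.0                           # Eq. 12

    def integrand(q, v):                                            # inner variable q, outer variable v
        cond_mean = mu_q + cov / var_v * (v - mu_v)                 # Eq. 14
        cond_sd = np.sqrt(max(var_q - cov ** 2 / var_v, 1e-30))
        return (norm.pdf(v, mu_v, np.sqrt(var_v))
                * float(reward_fn(q))
                * norm.pdf(q, cond_mean, cond_sd))

    value, _ = dblquad(integrand, 0.0, np.inf, eps, 1.0)            # v in (0, inf), q in (eps, 1)
    return value
```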
Calculating Collective Accuracy.
The collective accuracy, $A$, is the probability that the collective vote will correctly predict the ground truth, conditioned on the current distribution of attention to different factors. For small numbers of factors, this probability can be determined exactly by exhaustive search over all possible combinations of the values of $x_1, \ldots, x_n$. For larger values of $n$, we use the following normal approximation (defined similarly to the above) for the joint distribution of the latent ground truth function $V$ and the collective vote $\hat{V}$:

\[ P(V = v,\ \hat{V} = \hat{v}) \simeq \phi\!\left( \begin{pmatrix} v \\ \hat{v} \end{pmatrix};\ \mathbf{0},\ \Sigma' \right), \qquad [16] \]

where

\[ \Sigma' = \begin{pmatrix} \sum_i a_i^2 & \sum_i a_i p_i \\ \sum_i a_i p_i & \sum_i p_i^2 \end{pmatrix}, \qquad [17] \]

implying the following conditional probability distribution for $\hat{V}$ given $V = v$:

\[ P(\hat{V} = \hat{v} \mid V = v) \simeq \phi\!\left( \hat{v};\ \frac{\sum_i a_i p_i}{\sum_i a_i^2}\, v,\ \sum_i p_i^2 - \frac{\left( \sum_i a_i p_i \right)^2}{\sum_i a_i^2} \right). \qquad [18] \]

Considering without loss of generality the case where $V > 0$ (i.e., $Y = 1$),

\[ A = P\!\left( \hat{V} > 0 \mid V > 0 \right) = 2 \int_{0}^{\infty} \phi\!\left( v;\ 0,\ \sum_i a_i^2 \right) \Phi\!\left( \frac{ \frac{\sum_i a_i p_i}{\sum_i a_i^2}\, v }{ \sqrt{ \sum_i p_i^2 - \frac{\left( \sum_i a_i p_i \right)^2}{\sum_i a_i^2} } } \right) dv, \qquad [19] \]
which can be evaluated numerically. The normal approximation becomes invalid when the distribution of attention $\mathbf{p}$ is concentrated on very few elements; in these cases (which we identify as 99% of the distribution mass being concentrated on fewer than 10 elements), we use exhaustive search over the values of $x_i$ corresponding to the remaining factors with nonnegligible values of $p_i$.
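As an illustration of how Eq. 19 might be evaluated, the following Python sketch (ours, under the normal approximation above) performs the one-dimensional quadrature:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def collective_accuracy(a, p):
    """Normal approximation (Eq. 19) to the collective accuracy A = P(Y_hat = Y)."""
    var_v = np.sum(a ** 2)                                 # Var(V)
    var_vhat = np.sum(p ** 2)                              # Var(V_hat)
    cov = np.sum(a * p)                                    # Cov(V, V_hat)
    cond_sd = np.sqrt(max(var_vhat - cov ** 2 / var_v, 1e-30))

    def integrand(v):
        cond_mean = cov / var_v * v                        # E[V_hat | V = v], Eq. 18
        return norm.pdf(v, 0.0, np.sqrt(var_v)) * norm.cdf(cond_mean / cond_sd)

    value, _ = quad(integrand, 0.0, np.inf)
    return 2.0 * value                                     # conditioning on V > 0, which has probability 1/2
```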
Acknowledgments
Cédric Beaume, Viktoria Spaiser, and Jochen Voss provided valuable feedback on the manuscript. This work was supported by European Research Council Advanced Investigator Grant “Momentum” 324247.
Footnotes
The authors declare no conflict of interest.
This article is a PNAS Direct Submission.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1618722114/-/DCSupplemental.
References
- 1. Galton F. Vox populi. Nature. 1907;75:450–451.
- 2. Mackay C. Extraordinary Popular Delusions and the Madness of Crowds. Start Publishing LLC; New York: 2012.
- 3. Hertwig R. Tapping into the wisdom of the crowd – with confidence. Science. 2012;336:303–304. doi:10.1126/science.1221403.
- 4. Shefrin H. How psychological pitfalls generated the global financial crisis. In: Siegel LB, editor. Voices of Wisdom: Understanding the Global Financial Crisis of 2007-2009. Research Foundation of CFA Institute; Charlottesville, VA: 2010.
- 5. Santos FC, Pinheiro FL, Lenaerts T, Pacheco JM. The role of diversity in the evolution of cooperation. J Theor Biol. 2012;299:88–96. doi:10.1016/j.jtbi.2011.09.003.
- 6. Zafeiris A, Vicsek T. Group performance is maximized by hierarchical competence distribution. Nat Commun. 2013;4:2484. doi:10.1038/ncomms3484.
- 7. Aplin LM, Farine DR, Mann RP, Sheldon BC. Individual-level personality influences social foraging and collective behaviour in wild birds. Proc Biol Sci. 2014;281:20141016. doi:10.1098/rspb.2014.1016.
- 8. Surowiecki J. The Wisdom of Crowds. Random House; New York: 2005.
- 9. Page SE. The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. Princeton Univ Press; Princeton: 2008.
- 10. Page SE. Where diversity comes from and why it matters? Eur J Soc Psychol. 2014;44:267–279.
- 11. Woolley AW, Chabris CF, Pentland A, Hashmi N, Malone TW. Evidence for a collective intelligence factor in the performance of human groups. Science. 2010;330:686–688. doi:10.1126/science.1193147.
- 12. Pickard G, et al. Time-critical social mobilization. Science. 2011;334:509–512. doi:10.1126/science.1205869.
- 13. Hong L, Page SE, Riolo M. Incentives, information, and emergent collective accuracy. MDE Manage Decis Econ. 2012;33:323–334.
- 14. Couzin ID. Collective cognition in animal groups. Trends Cogn Sci. 2009;13:36–43. doi:10.1016/j.tics.2008.10.002.
- 15. Purves D, et al. Ecosystems: Time to model all life on earth. Nature. 2013;493:295–297. doi:10.1038/493295a.
- 16. Helbing D, et al. Saving human lives: What complexity science and information systems can contribute. J Stat Phys. 2015;158:735–781. doi:10.1007/s10955-014-1024-9.
- 17. Lämmer S, Helbing D. Self-control of traffic lights and vehicle flows in urban road networks. J Stat Mech. 2008;2008:P04019.
- 18. Bell RM, Koren Y. Lessons from the Netflix prize challenge. SIGKDD Explor. 2007;9:75–79.
- 19. Wolfers J, Zitzewitz E. Prediction Markets in Theory and Practice. National Bureau of Economic Research; Cambridge, MA: 2006. Working Paper 12083.
- 20. Arrow KJ, et al. The promise of prediction markets. Science. 2008;320:877–878. doi:10.1126/science.1157679.
- 21. Oprea TI, et al. A crowdsourcing evaluation of the NIH chemical probes. Nat Chem Biol. 2009;5:441–447. doi:10.1038/nchembio0709-441.
- 22. Morgan MG. Use (and abuse) of expert elicitation in support of decision making for public policy. Proc Natl Acad Sci USA. 2014;111:7176–7184. doi:10.1073/pnas.1319946111.
- 23. Herbert-Read JE, Romenskyy M, Sumpter DJ. A Turing test for collective motion. Biol Lett. 2015;11:20150674. doi:10.1098/rsbl.2015.0674.
- 24. Lorenz J, Rauhut H, Schweitzer F, Helbing D. How social influence can undermine the wisdom of crowd effect. Proc Natl Acad Sci USA. 2011;108:9020–9025. doi:10.1073/pnas.1008636108.
- 25. Moussaïd M. Opinion formation and the collective dynamics of risk perception. PLoS One. 2013;8:e84592. doi:10.1371/journal.pone.0084592.
- 26. Young NS, Ioannidis JP, Al-Ubaydli O. Why current publication practices may distort science. PLoS Med. 2008;5:e201. doi:10.1371/journal.pmed.0050201.
- 27. Stephan PE. Research efficiency: Perverse incentives. Nature. 2012;484:29–31. doi:10.1038/484029a.
- 28. Duarte JL, et al. Political diversity will improve social psychological science. Behav Brain Sci. 2015;38:e130. doi:10.1017/S0140525X14000430.
- 29. Prelec D. A Bayesian truth serum for subjective data. Science. 2004;306:462–466. doi:10.1126/science.1102081.
- 30. Helbing D. A stochastic behavioral model and a ‘microscopic’ foundation of evolutionary game theory. Theory Decis. 1996;40:149–179.
- 31. Dawes RM. The robust beauty of improper linear models in decision making. Am Psychol. 1979;34:571–582.
- 32. Graefe A. Improving forecasts using equally weighted predictors. J Bus Res. 2015;68:1792–1799.
- 33. Conte R, et al. Manifesto of computational social science. Eur Phys J Spec Top. 2012;214:325–346.
- 34. Helbing D. Why we need democracy 2.0 and capitalism 2.0 to survive. Jusletter IT. 2016;2016:65–74.
- 35. Feller W. An Introduction to Probability Theory and Its Applications, Volume 1. 3rd Ed. Wiley; New York: 1970.
- 36. Schuster P, Sigmund K. Replicator dynamics. J Theor Biol. 1983;100:533–538.
- 37. Hofbauer J, Sigmund K. Evolutionary game dynamics. Bull Am Math Soc. 2003;40:479–519.
- 38. Nowak MA. Evolutionary Dynamics. Belknap Press of Harvard Univ Press; Cambridge, MA: 2006.
- 39. Soetaert K, Petzoldt T, Setzer RW. Solving differential equations in R: Package deSolve. J Stat Softw. 2010;33:1–25.