Abstract
Many interactions between searching agents and their elusive targets are composed of a succession of steps, whether in the context of immune systems, predation or counterterrorism. In the simplest case, a two-step process starts with a search-and-hide phase, also called a hide-and-seek phase, followed by a round of pursuit–escape. Our aim is to link these two processes, usually analysed separately and with different models, in a single game theory context. We define a matrix game in which a searcher, looking for a hider, inspects each of a fixed number of discrete locations at most once; the hider can escape with varying probabilities according to its location. The value of the game is the overall probability of capture after k looks. The optimal search and hiding strategies are described. If a searcher looks only once into any of the locations, an optimal hider chooses its hiding place randomly so as to make all locations equally attractive. This optimal strategy remains true as long as the number of looks is below an easily calculated threshold; above this threshold, however, the optimal position for the hider is the location where it has the highest probability of escaping once spotted.
Keywords: search games, game theory, behavioural ecology, foraging, optimization
1. Introduction
Many biological interactions between a hider and a searcher consist of a succession of hide-and-seek bouts, also called search-and-hide bouts, followed by pursuit–evasion sequences. For example, at the physiological scale, immune cells patrol the body and destroy invaders once identified; at the ecological scale, there are long sequences of interactions between game animals and large predators, as commonly broadcast in wildlife documentaries. Quantitative descriptions of the exact position and behaviour of the two protagonists, including a detailed map of the arena in which the interaction takes place, are however exceedingly rare in the scientific literature. The focus is indeed generally on the escape strategies of the prey or on the foraging strategy of the predator, but very seldom on both. Hunting cheetahs are a good example [1]: we now know much about the kinematics and biomechanics of the predator during the chase, but information regarding the prey's whereabouts is nearly non-existent, and information about the terrain in which the interaction occurs is systematically absent. The same applies to other predator–prey interactions with very different animals in very different environments: penguins chasing krill, for example [2]. The lack of comprehensive data for the major variables means that the attack strategies of predators cannot be truly understood in the only relevant context, that of the escape strategies developed by prey, and vice versa [3,4].
Game theory is the tool of choice for analysing the strategies of both predator and prey, and two sub-disciplines are particularly relevant: search theory and search games. Search theory [5], which deals with the optimal way of looking for a hidden object with no opponent, has recently again proved its enormous power and relevance to society: after two years of unsuccessful searching, application of this theory enabled the wreckage of Air France flight AF 447 to be found in the Atlantic within one week [6]. The two-sided search problems of hide-and-seek situations that we all know from our childhood are the realm of search games. These situations are common in military and anti-terror activity and also in many other fields, especially predator–prey situations, as described by Gal [7]. Pursuit–evasion situations, for example an aircraft or a missile trying to intercept another aircraft, were modelled and solved by Isaacs in his classic book [8] as a new type of game: differential games. The book also introduced continuous search games in its last chapter. These games were subsequently developed by Gal [9] and updated by Alpern & Gal [10]. Search theory in general, and search games in particular, have stimulated much research, with applications to computer science (e.g. [11]), economics and biology, culminating in several recent books [12,13]. The two types of models, pursuit–evasion and hide–search models, were developed separately from the 1960s onwards. The main emphasis in pursuit–evasion games has been to find minmax controls, such as accelerations or angles, which influence the trajectories of the players, each of them usually being aware of the movement of the other player. In hide–search games, the players have, by contrast, little information about the movement of the other player and the main emphasis is to find optimal probabilistic choices of strategies for the players.
The search space considered in our work is a set of discrete locations. An optimal one-sided (i.e. no opponent) search for an object hidden in one of several locations with different characteristics is a classical problem solved by Blackwell, as reported by [14] and extended by Ross [15,16] to an optimal search and stop problem. The classical result was also obtained by Gittins [17] within the context of his well-known Gittins index. Gittins and Roberts analysed this model as a search game, the solution of which is significantly complicated even for only two locations [18,19]. In this game, the hider would like to choose the hiding probabilities so as to make both locations equally attractive for the searcher but this is a difficult task for an immobile hider. For example, if the two locations were equally attractive for the searcher at the first step but the hider was not found at that step, then the posterior probability would have made the unvisited location distinctly more favourable to visit next. Thus, the hider cannot make both locations equally attractive all the time. What the hider can achieve is an appropriate balance so that during the search process both locations will be as equally attractive as possible. Discrete search games of this and other types have also been analysed in [20–23].
Real predator–prey strategies have been analysed using elements of game theory, as described above, on only a few occasions. Such analysis has been applied more to defence by prey than to attacks by predators [24,25]. The vast majority of studies in this area deal with pursuit–evasion games in a featureless environment, from hunting dragonflies and bats in the air [26,27] to sighthound dogs on land [28], and fish and copepods in water [29]. Pursuit–evasion games, in their pure form, assume that complete information is available to both antagonists; both have full visibility, for example. Most behavioural studies on predator attack strategies deal with this kind of game, as reviewed by Alpern et al. [30]. Search games in their pure form assume no knowledge of the other party's whereabouts. However, partial information is sometimes available, in which case the game is referred to as a 'noisy' search game. This information enables one or both players to 'hear' the other and gain partial information about the other's movement pattern, but without full information about its location [30,31,32]. Sit-and-wait strategies, which have been well studied for predators [33,34], are another form of search game in which the predator has partial information about the prey, whereas the prey has virtually no information about the predator's position and movements. Two biological systems, one involving parasitic wasps attacking leaf-mining moths and the other involving wolf spiders attacking wood crickets, have been analysed as search games [35,36]. There is, however, still no model encompassing both types of games, search games and pursuit–evasion games.
We have combined these two types of games, which until now have been developed separately owing to differing assumptions about the level of information available to the two players; we thus obtain a single model representing the complex interactions as they unfold in reality. We model a succession of a search-and-hide phase followed by a pursuit–escape phase over multiple locations and identify the optimal strategies for both players. The hider has a varying probability of being captured depending on where it is. The search effort determines which of the two types of games makes the largest contribution to the ultimate capture probability. We define a matrix game in which a searcher, looking for a hider, inspects each of a fixed number of discrete locations at most once; the hider can escape with varying probabilities according to its location. The value of the game is the overall probability of capture after k looks. The optimal search and hiding strategies are described. If a searcher looks only once into any of the locations, an optimal hider chooses its hiding place randomly so as to make all locations equally attractive. This optimal strategy remains true as long as the number of looks is below an easily calculated threshold; above this threshold, however, the optimal strategy for the hider is to choose a fixed location, the one at which it has the highest probability of escaping once spotted. The emphasis thus shifts above the threshold from the search-and-hide phase to the pursuit–escape phase. An extension to a random number of looks and to repeated games is proposed, and the implications in terms of giving-up time by the searcher are considered.
2. Searching among heterogeneous locations
We analyse the following model. A predator (searcher) looks for a prey (hider) in a search space consisting of n locations. The hider chooses a location and the searcher inspects k different locations, where k is a parameter of the game (the ‘giving-up time’ for the continuous version). If the predator visits a location i where the prey is hiding, capture is not certain but occurs with probability pi and the game ends. This probability can originate from the second phase of the game—pursuit–evasion—and represents the probability of capture in that phase.
We assume n locations as possible hiding places for the prey. For each location, i = 1, 2, … , n, the probability of capture is pi. For convenience, we assume that the locations are numbered so that

p1 ≤ p2 ≤ ⋯ ≤ pn.
The number of (different) locations, k, that the searcher inspects before he gets ‘tired’ is a parameter of the problem, 1 ≤ k ≤ n.
Hiding strategy (mixed): a vector of hiding probabilities (h1, h2, … , hn), where hi is the probability that the hider hides at location i. Thus,

h1 + h2 + ⋯ + hn = 1,  hi ≥ 0.
Search strategy: a pure search strategy is a set {S1, S2, … , Sk} of k different locations that are visited.
A mixed search strategy is a probabilistic choice of these sets.
We will use an equivalent definition for a mixed search strategy.
Definition 2.1. —
A mixed search strategy is a vector of probabilities R = (r1, r2, … , rn), where ri is the probability that the searcher visits location i during the k visits.
r1 + r2 + ⋯ + rn = k,  0 ≤ ri ≤ 1 for all i. (2.1)
For any vector R, a possible way to construct the corresponding mixed search strategy is the following simple algorithm.
— Arrange n non-overlapping intervals, L1, L2, … , Ln, where Li is associated with location i and has length ri, in the line segment [1, k + 1]. These intervals will be denoted L-intervals.
— Take a sample of one observation, θ, from the uniform distribution in [0, 1].
— Construct the following k points in the line segment [1, k + 1]: {j + θ, j = 1, 2, … , k} and for each j = 1, 2, … , k pick the L-interval containing the point j + θ. The k locations associated with these L-intervals will be visited by the searcher.
We now show that for each location i = 1, 2, … , n, the probability that the interval Li (i.e. location i) will be chosen by the algorithm is indeed ri. Let Li = [j + a, j + a + ri], where 0 ≤ a ≤ 1, for some j, and consider two cases.
If a + ri ≤ 1, then location i will be chosen if and only if a ≤ θ ≤ a + ri, which occurs with probability ri.
If a + ri > 1, then location i will be chosen if and only if a ≤ θ ≤ 1 or 0 ≤ θ ≤ a + ri − 1, which also occurs with probability ri.
This algorithm is in the spirit of systematic sampling [37] and Monte Carlo simulation techniques. An alternative approach, suggested by Steve Alpern, is to use the fact that the set of all R is convex and compact so that any point in this set is a convex combination of the extreme points of the set. However, this direction leads to a large linear programming problem with about nk variables for producing the corresponding mixed search strategy.
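The construction above can be sketched in a few lines of code (a Python sketch; the function and variable names are ours, not the paper's). Because every ri ≤ 1, each L-interval has length at most one and so contains at most one of the k points j + θ, which is why the k chosen locations are always distinct:

```python
import random

def choose_search_set(r, k):
    """Systematic-sampling construction of a mixed search strategy.

    r : visit probabilities r_1, ..., r_n with sum(r) == k and each r_i <= 1.
    Returns k distinct locations; location i is included with probability r[i].
    """
    # Lay the L-intervals end to end over [0, k) (the text uses [1, k + 1]).
    bounds, right = [], 0.0
    for ri in r:
        bounds.append((right, right + ri))
        right += ri
    # One uniform offset theta determines the k points j + theta, j = 0..k-1.
    theta = random.random()
    chosen = []
    for j in range(k):
        x = j + theta
        for i, (lo, hi) in enumerate(bounds):
            if lo <= x < hi:
                chosen.append(i)
                break
        else:
            chosen.append(len(bounds) - 1)  # guard against float round-off
    return chosen

# Empirical check: with r = (2/3, 2/3, 1/3, 1/3) and k = 2, location 0
# should appear in roughly 2/3 of the draws.
r = [2/3, 2/3, 1/3, 1/3]
counts = [0] * 4
for _ in range(30000):
    for i in choose_search_set(r, 2):
        counts[i] += 1
print([c / 30000 for c in counts])  # close to r
```

Each call draws a single uniform variate, so the k inclusion events are deliberately correlated; only the marginal probabilities ri are guaranteed, which is all the game-theoretic argument requires.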
The value of the game (the probability of capture) is denoted by vk.
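These definitions can be made concrete numerically (a Python sketch with our own variable names). When the hider uses h and the searcher visits location i with probability ri, the overall capture probability is the sum of hi ri pi over all i, since capture requires both a visit to the hider's location and a successful pursuit there:

```python
def capture_probability(h, r, p):
    """Overall capture probability: the hider is at location i with
    probability h[i], the searcher visits i with probability r[i], and
    a visit to the hider's location yields capture with probability p[i]."""
    return sum(hi * ri * pi for hi, ri, pi in zip(h, r, p))

# Four locations with capture probabilities (0.5, 0.5, 1, 1), as in the
# example below, and k = 2 looks.
p = [0.5, 0.5, 1.0, 1.0]
lam = 1 / sum(1 / pi for pi in p)    # lambda = 1/6
h = [lam / pi for pi in p]           # equally attractive hiding places
r = [2 * lam / pi for pi in p]       # visit probabilities summing to k = 2
print(capture_probability(h, r, p))  # 2*lambda = 1/3
```

With these strategies every location contributes the same amount to the sum, which is the equalization property exploited in the proof below.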
Let us first consider a hiding strategy h*, which makes all the locations equally attractive to search for k = 1:

hi* = λ/pi,  i = 1, 2, … , n. (2.2)

By (2.2) and the requirement that the hi* sum to one, λ satisfies

λ = 1/(1/p1 + 1/p2 + ⋯ + 1/pn). (2.3)
The following two statements are easy to prove:
— for k = 1, the optimal hiding strategy is h* and v = λ and
— for k = n, the optimal hiding strategy is h1 = 1 and v = p1.
In general, if k < p1/λ, then the optimal hiding strategy is h* and vk = kλ. We will show that the hider can make all the locations equally attractive to the searcher and that the searcher can make all the locations equally attractive for hiding, i.e. that for each i the searcher visits i with probability proportional to 1/pi. We now present a formal proof of the result.
Consider a search plan that involves visiting each location i with probability ri = kλ/pi.
This search strategy guarantees kλ because for each hiding location i, the probability of capture is pi × ri = kλ.
However, the hiding strategy h* guarantees a probability of capture of at most kλ against any pure search strategy S1, S2, … , Sk because for each visit Si the probability of capture is

h*Si pSi = (λ/pSi) pSi = λ,

so the probability of capture by one of these visits is the sum of the individual probabilities, i.e. kλ.
The above optimal strategies are unique: if rj < kλ/pj for some j, then the hider can hold the probability of capture down to rjpj < kλ by hiding at j. Hence, an optimal search strategy has to satisfy ri ≥ kλ/pi for all i, which implies equality by (2.1).
Similarly, if hj > λ/pj for some j, then the searcher can get more than kλ by using a vector R with ri = (k − d)λ/pi for all i ≠ j and rj = (k − d)λ/pj + d , where d is a small positive number. Hence, an optimal hiding strategy has to satisfy hi ≤ λ/pi for all i, which implies equality.
If k ≥ p1/λ, then the hider can guarantee at most p1 by always hiding at location 1. The optimality of this strategy is proved as follows:
The searcher can guarantee at least p1 by choosing r1 = 1 ≤ kλ/p1 and ri ≥ min(kλ/pi, 1) for all 2 ≤ i ≤ n. This is possible because

min(kλ/p2, 1) + ⋯ + min(kλ/pn, 1) ≤ kλ/p2 + ⋯ + kλ/pn = k − kλ/p1 ≤ k − 1.
As kλ ≥ p1, a response of hiding at any location i ≥ 2 leads to a probability of capture ≥ p1.
It is easy to see that for k > p1/λ, optimal search strategies are not unique.
So we have proved:
Theorem 2.2. —
If k < p1/λ, then the optimal search strategy satisfies ri = kλ/pi and the optimal hiding strategy satisfies hi = λ/pi for all i = 1, 2, … , n.
If k ≥ p1/λ, then an optimal search strategy satisfies r1 = 1 and ri ≥ kλ/pi for i = 2, … , n and an optimal hiding strategy satisfies h1 = 1.
The value of the game satisfies

vk = min(kλ, p1).
We now present a simple example that illustrates the result.
Example 2.3. —
Assume n = 4 and that the pi are (0.5, 0.5, 1, 1).
By (2.3), λ = 1/6 and p1/λ = 3, so
for k = 1, 2, the optimal hiding strategy is h* = (1/3, 1/3, 1/6, 1/6), and
for k = 4, an optimal hiding strategy is to hide at location number 1 (or 2).
For k = 3, both types of strategies are optimal. This is because p1/λ is an integer.
The value vk for k = 1, 2, 3 and 4 is 1/6, 1/3, 1/2 and 1/2, respectively.
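The numbers in this example are easy to reproduce (a Python sketch with our own names, using the closed form vk = min(kλ, p1) from the theorem):

```python
p = [0.5, 0.5, 1.0, 1.0]           # capture probabilities, sorted ascending
lam = 1 / sum(1 / pi for pi in p)  # lambda = 1/6 by (2.3)
threshold = p[0] / lam             # p1/lambda = 3 (up to round-off)

def game_value(k):
    """Value of the game after k looks: k*lambda below the threshold,
    p1 at and above it."""
    return min(k * lam, p[0])

h_star = [lam / pi for pi in p]    # (1/3, 1/3, 1/6, 1/6)
print(threshold)
print([game_value(k) for k in range(1, 5)])  # approx. 1/6, 1/3, 1/2, 1/2
```

The printed sequence reproduces the linear rise kλ up to the threshold k = 3 and the plateau at p1 = 1/2 beyond it.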
Figure 1 describes the behaviour of the value as a function of k. The value increases linearly but becomes a constant as soon as the threshold is reached. This behaviour is true in general, as stated in the Theorem.
Figure 1.

Ultimate probability of capture v of a hider hiding in one of four locations as a function of the number of looks, k, by the searcher.
3. Discussion
We start the discussion by commenting on two major assumptions of our model, the fixed number of looks and the lack of repetition, and propose some modifications to broaden its scope. The final paragraph places the work within the larger context of behavioural ecology.
The searcher's strategy assumes that the number of looks into potential hiding sites, k, is fixed and known to both players. In reality, k might be a variable. The problem of k being a random variable with a distribution known only to the searcher is more complex and requires further analysis. If k < p1/λ with high probability, then h* is optimal and v = kλ, whereas if k > p1/λ with high probability, then hiding at location number 1 is optimal and v = p1. In general, however, the optimal strategy can differ from the strategies described above. For example, suppose the pi are (0.5, 0.51, 1) and k is a random variable that equals either 1 or 3, each with a probability of 50%. Hiding according to h* is bad for k = 3, whereas hiding only at 1 is bad for k = 1. The optimal hiding strategy involves hiding at either 1 or 2 with a probability of about 1/2 at each of these locations and hiding at 3 with a small probability.
The second assumption, which sets our work apart from other work on searching different sites, is the absence of repetition. In all of the models of searching several locations described in the introduction, there is generally some probability of a searcher visiting a location at which the hider hides but not seeing it. Thus, the searcher may benefit from going back to a visited location more than once. By contrast, the assumption in the model we developed here is that the searcher always finds the hider if it is hiding at a location visited; however, the searcher does not necessarily catch the hider. Thus, each location is visited at most once. The search plan consists of a random sequence of sites to visit, which is set before the game starts; there is no update during the game itself. Its dynamic nature therefore stems from the two-phase succession of hide–search and pursuit–evasion phases. In reality, most interactions contain several such successions in series, and our model is the simplest prototype of such interactions. Fabre [38], for example, described in vivid detail the intermingled successions of hunting bouts by dauber wasps locating and pursuing their spider prey. The spider drops onto the ground from the flower it is sitting on and thereby escapes for a while. The wasp, even though disoriented at first, does not give up and starts exploring the surroundings with great care, sometimes at a substantial distance from the flower, before relocating its prey. If found again, the spider runs as fast as possible in an attempt to escape once more. The hunt ends sooner or later, usually with a wasp's sting in the spider's body. Our model can be extended to cover such cases as a repeated game with a constant sum of pay-offs [39]. Let us suppose that a searcher is looking for a hider in a larger region consisting of distinct clumps of locations, called patches in the behavioural ecology literature.
During the k looks among the locations within a single patch, one of three events can occur. First, if the searcher does not find the hider, then the game ends with zero pay-off to the searcher and a pay-off of one to the hider. Second, if the searcher finds the hider and catches it, then the game ends with a pay-off of one to the searcher and a pay-off of zero to the hider. Finally, if the searcher finds the hider but the hider escapes to another patch, then the process restarts.
In our model, the hider's optimal strategy is simple because it consists of a single choice. If the number of looks is small, the hider should make all locations equally attractive. If it is large, it should hide at the location with the smallest probability of capture once spotted. The strategy of the searcher is more complex, because it consists of a plan for looking into a random sequence of locations selected by a rather involved mechanism, as described in the Theorem. An optimal searcher should not waste time by looking more often than needed to reach the threshold beyond which the capture probability stays constant. The increase in the chance of capturing the hider is linear before that threshold, and the overall relationship thus approaches the law of diminishing returns, well known in optimal foraging theory in behavioural ecology [40,41,42]. The deeper relationship between our minimax results based on game theory and the one-sided search theory developed for stationary prey items will be considered in a sequel work.
References
- 1. Wilson A, Lowe J, Roskilly K, Hudson P, Golabek K, McNutt J. 2013. Locomotion dynamics of hunting in wild cheetahs. Nature 498, 185–189. (10.1038/nature12295)
- 2. Watanabe YY, Takahashi A. 2013. Linking animal-borne video to accelerometers reveals prey capture variability. Proc. Natl Acad. Sci. USA 110, 2199–2204. (10.1073/pnas.1216244110)
- 3. Abrams PA. 2000. The evolution of predator–prey interactions: theory and evidence. Annu. Rev. Ecol. Syst. 31, 79–105. (10.1146/annurev.ecolsys.31.1.79)
- 4. Lima SL. 2002. Putting predators back into behavioral predator–prey interactions. Trends Ecol. Evol. 17, 70–75. (10.1016/S0169-5347(01)02393-X)
- 5. Stone LD. 1975. Theory of optimal search. New York, NY: Academic Press.
- 6. Stone LD, Keller CM, Kratzke TM, Strumpfer JP. In press. Search for the wreckage of Air France Flight AF 447. Stat. Sci.
- 7. Gal S. 2011. Search games. In Wiley encyclopedia of operations research and management science. New York, NY: John Wiley & Sons.
- 8. Isaacs R. 1965. Differential games: a mathematical theory with applications to warfare and pursuit, control and optimization. New York, NY: John Wiley & Sons.
- 9. Gal S. 1980. Search games. New York, NY: Academic Press.
- 10. Alpern S, Gal S. 2003. The theory of search games and rendezvous. Dordrecht, The Netherlands: Kluwer Academic Publishers.
- 11. Megiddo N, Hakimi SL, Garey MR, Johnson DS, Papadimitriou CH. 1988. The complexity of searching a graph. J. ACM 35, 18–44. (10.1145/42267.42268)
- 12. Alpern S, Fokkink R, Gąsieniec L, Lindelauf R, Subrahmanian V (eds). 2013. Search theory: a game theoretic perspective. New York, NY: Springer.
- 13. Ben-Gal I, Kagan E. 2013. Information search after static or moving targets: theory and modern applications. New York, NY: Wiley.
- 14. Matula D. 1964. A periodic optimal search. Am. Math. Mon. 71, 15–21. (10.2307/2311296)
- 15. Ross SM. 1969. A problem in optimal search and stop. Oper. Res. 17, 984–992. (10.1287/opre.17.6.984)
- 16. Ross SM. 1983. Introduction to stochastic dynamic programming. New York, NY: Academic Press.
- 17. Gittins J. 1989. Multi-armed bandit allocation indices. London, UK: Wiley.
- 18. Gittins J, Roberts D. 1979. Search for an intelligent evader concealed in one of an arbitrary number of regions. Nav. Res. Logist. Q. 26, 657–666. (10.1002/nav.3800260410)
- 19. Roberts D, Gittins J. 1978. The search for an intelligent evader: strategies for searcher and evader in the two-region problem. Nav. Res. Logist. Q. 25, 95–106. (10.1002/nav.3800250108)
- 20. Garnaev A. 2000. Search games and other applications of game theory. New York, NY: Springer.
- 21. Iida K. 1992. Studies on the optimal search plan. New York, NY: Springer.
- 22. Ruckle WH. 1983. Geometric games and their applications. London, UK: Pitman Advanced Publishing Program.
- 23. Zoroa N, Fernández-Sáez MJ, Zoroa P. 2011. A foraging problem: sit-and-wait versus active predation. Eur. J. Oper. Res. 208, 131–141. (10.1016/j.ejor.2010.08.001)
- 24. Ruxton GD, Sherratt TN, Speed MP. 2004. Avoiding attack: the evolutionary ecology of crypsis, warning signals, and mimicry. Oxford, UK: Oxford University Press.
- 25. Stephens DW, Brown JS, Ydenberg RC. 2007. Foraging: behavior and ecology. Chicago, IL: University of Chicago Press.
- 26. Combes S, Salcedo M, Pandit M, Iwasaki J. 2013. Capture success and efficiency of dragonflies pursuing different types of prey. Integr. Comp. Biol. 53, 787–798. (10.1093/icb/ict072)
- 27. Kalko EKV. 1995. Insect pursuit, prey capture and echolocation in pipistrelle bats (Microchiroptera). Anim. Behav. 50, 861–880. (10.1016/0003-3472(95)80090-5)
- 28. Shubkina A, Severtsov A, Chepeleva K. 2012. Factors influencing the hunting success of the predator: a model with sighthounds. Biol. Bull. 39, 65–76. (10.1134/S1062359012010074)
- 29. Kiørboe T. 2013. Attack or attacked: the sensory and fluid mechanical constraints of copepods' predator–prey interactions. Integr. Comp. Biol. 53, 821–831. (10.1093/icb/ict021)
- 30. Alpern S, Fokkink R, Timmer M, Casas J. 2011. Ambush frequency should increase over time during optimal predator search for prey. J. R. Soc. Interface 8, 1665–1672. (10.1098/rsif.2011.0154)
- 31. Broom M, Ruxton GD. 2005. You can run or you can hide: optimal strategies for cryptic prey against pursuit predators. Behav. Ecol. 16, 534–540. (10.1093/beheco/ari024)
- 32. Alpern S, Fokkink R, Gal S, Timmer M. 2013. On search games that include ambush. SIAM J. Control Optim. 51, 4544–4556. (10.1137/110845665)
- 33. Schoener TW. 1971. Theory of feeding strategies. Annu. Rev. Ecol. Syst. 2, 369–404. (10.1146/annurev.es.02.110171.002101)
- 34. Helfman GS. 1990. Mode selection and mode switching in foraging animals. Adv. Stud. Behav. 19, 249–298. (10.1016/S0065-3454(08)60205-3)
- 35. Meyhöfer R, Casas J, Dorn S. 1997. Vibration-mediated interactions in a host–parasitoid system. Proc. R. Soc. Lond. B 264, 261–266. (10.1098/rspb.1997.0037)
- 36. Morice S, Pincebourde S, Darboux F, Kaiser W, Casas J. 2013. Predator–prey pursuit–evasion games in structurally complex environments. Integr. Comp. Biol. 53, 767–779. (10.1093/icb/ict061)
- 37. Madow WG, Madow LH. 1944. On the theory of systematic sampling. Ann. Math. Stat. 15, 1–24. (10.1214/aoms/1177731312)
- 38. Fabre JH. 1891. Souvenirs entomologiques: études sur l'instinct et les mœurs des insectes, vol. 4. Paris, France: C. Delagrave.
- 39. Mertens JF, Sorin S, Zamir S. 1994. Repeated games. Louvain-la-Neuve, Belgium: Center for Operations Research and Econometrics, Université Catholique de Louvain.
- 40. Stephens DW, Krebs JR. 1986. Foraging theory. Princeton, NJ: Princeton University Press.
- 41. Stephens D, Brown J, Ydenberg R. 2007. Foraging: behavior and ecology. Chicago, IL: University of Chicago Press.
- 42. Wajnberg E. 2006. Time allocation strategies in insect parasitoids: from ultimate predictions to proximate behavioral mechanisms. Behav. Ecol. Sociobiol. 60, 589–611. (10.1007/s00265-006-0198-9)
