Journal of the Royal Society Interface
. 2015 Mar 6;12(104):20141158. doi: 10.1098/rsif.2014.1158

Optimal Lévy-flight foraging in a finite landscape

Kun Zhao 1,, Raja Jurdak 1, Jiajun Liu 1, David Westcott 2, Branislav Kusy 1, Hazel Parry 1, Philipp Sommer 1, Adam McKeown 2
PMCID: PMC4345481  PMID: 25631566

Abstract

We present a simple model to study Lévy-flight foraging with a power-law step-size distribution P(l) ∝ l^(−μ) in a finite landscape with countable targets. We find that different optimal foraging strategies characterized by a wide range of power-law exponents μopt, from ballistic motion (μopt → 1) to Lévy flight (1 < μopt < 3) to Brownian motion (μopt ≥ 3), may arise in adaptation to the interplay between the termination of foraging, which is regulated by the number of foraging steps, and the environmental context of the landscape, namely the landscape size and the number of targets. We further demonstrate that stochastic returning can be another significant factor that affects the foraging efficiency and the optimality of the foraging strategy. Our study provides a new perspective on Lévy-flight foraging, opens new avenues for investigating the interaction between foraging dynamics and the environment, and offers a realistic framework for analysing animal movement patterns from empirical data.

Keywords: Lévy flight, random search, optimal foraging

1. Introduction

Understanding animal movement is crucial for understanding ecological and evolutionary processes in nature and has a wide range of applications, such as ecosystem management, species conservation and disease control [1–5].

In the past decade, it has been widely observed that the movements of many animal species, from albatrosses [6,7] to spider monkeys [8], honeybees [9] to deer [10] and marine predators [11] to human foragers [12,13], appear to exhibit Lévy-flight patterns, i.e. the step-size distribution can be approximated by a power-law P(l) ∝ l^(−μ) with 1 < μ ≤ 3. Despite this apparent similarity, there is an ongoing debate in the scientific community over the existence of Lévy flights in animal movement and the methodology for verifying Lévy flights from empirical data [14–16]. Meanwhile, scientists, especially theorists, ask why animals perform Lévy flights, a question that fascinates researchers from various disciplines, from ecology to physics [17–22].

One common approach to the origin of animal movement patterns is the scheme of optimizing random search [23–25]. In a random search model, single or multiple individuals search a landscape to locate targets whose locations are not known a priori; this is usually adopted to describe the scenario of animals foraging for food, prey or resources. The locomotion of the individual has a certain degree of freedom, characterized by a specific search strategy such as a type of random walk, and is also subject to other external or internal constraints, such as the environmental context of the landscape or the physical and psychological conditions of the individual. It is assumed that a strategy that optimizes the search efficiency can evolve in response to such constraints, and that the observed movement is a consequence of this optimization of random search.

A seminal work by Viswanathan et al. [26] first studied Lévy-flight foraging through the scheme of optimizing random search. In their model, a forager searches for targets using a random walk with the aforementioned power-law step-size distribution. The forager will keep moving until a target is 'encountered', i.e. a target lies within its limited perception range. The search efficiency is defined as the encounter rate of targets, namely the number of visited targets per unit moving distance. The model considers two scenarios: (i) non-destructive foraging, in which the targets are revisitable, and (ii) destructive foraging, in which the targets are depleted once visited. For non-destructive foraging, when the targets are sparsely distributed, there exists an optimum exponent μopt ≈ 2 that maximizes the search efficiency, corresponding to the Lévy-flight strategy. For destructive foraging, the optimum solution is μopt → 1, corresponding to ballistic motion. It is worth noting that this model only captures an idealized scenario in which learning and prey–predator relationships are ignored [26], and other random search strategies, such as intermittent random search, can outperform the Lévy-flight strategy in some cases [27].

Recent studies have also turned attention to the substantial variation in the value of μopt observed in empirical data. In fact, μopt among different species can range from μopt ≈ 1.59 for human beings [28] to μopt ≈ 2.4 for bigeye tuna [11]. Moreover, μopt varies significantly among individuals within the same species; e.g. μopt can range from 1.18 to 2.9 in jellyfish [29]. It has been shown that intermediate values of the optimal exponent 1 < μopt ≤ 2 can emerge in the crossover regime between non-destructive and destructive foraging, in which targets are regenerated a period τ after being depleted [30], or can arise in response to landscape heterogeneity [31]. A recent study by Palyulin et al. [32] shows that, in the case of searching for a single target in an infinite one-dimensional space, when an external drift (e.g. underwater current, atmospheric wind, etc.) is present, μopt can also vary in the interval 2 ≤ μopt ≤ 3.

In this paper, we propose a simple model to study Lévy-flight foraging in a finite two-dimensional landscape with countable targets. The model considers foraging to be a step-based exploratory search process for distinct targets subject to termination. The forager can revisit targets, and the foraging efficiency is defined as the total number of discovered targets per unit moving distance in this process. We find that different optimal search strategies can emerge in adaptation to the interplay between the termination condition and the environmental context of the landscape. In particular, different optimum foraging strategies with various exponents μopt can emerge when the termination is regulated by a finite number of steps N. In this case, the value of N, along with landscape features such as the landscape size and the number of targets, can play an important role in determining the value of μopt. When termination is instead regulated by a finite moving distance ℒc, the best strategy is always ballistic motion, corresponding to μopt → 1. To capture more complex foraging dynamics, we also consider stochastic returning (e.g. home-return behaviour) in our model through an exploration–return mechanism [33] and demonstrate that this returning can be another factor that affects the foraging efficiency. Our study not only sheds new light on the understanding of Lévy-flight foraging, but also provides an expanded modelling framework for studying animal movement patterns.

2. Model

The foraging takes place in a finite two-dimensional L × L square landscape with periodic boundary conditions (when moving across the boundary the forager comes back in from the other side of the landscape). There are K targets distributed uniformly over the landscape, corresponding to a density ρ = K/L². The forager can detect a target within its perception range rv. The mean free path of the system λ is therefore given by λ = (2ρrv)^(−1), which indicates the average straight-line moving distance between detections or 'encounters' of targets in the landscape. Without loss of generality, we set rv = 1, so λ = 1/(2ρ). In this paper, we assume the targets are revisitable, analogous to the case of non-destructive foraging [26]. The goal of the forager is to explore the landscape to find new targets.
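As a quick check on these definitions, the density and mean free path can be computed directly; a minimal sketch in Python (the variable names are ours, and the numbers are just one example configuration):

```python
# Mean free path of an L x L landscape containing K uniformly placed targets,
# each detectable within the perception range r_v (the paper sets r_v = 1).
L = 200.0   # landscape side length
K = 200     # number of targets
r_v = 1.0   # perception range

rho = K / L**2                 # target density rho = K / L^2
lam = 1.0 / (2.0 * rho * r_v)  # mean free path lambda = L^2 / (2 K r_v)

print(lam)  # 100.0
```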

The foraging process is a step-based stochastic process with an exploration–return mechanism. At each step n, the forager first decides the type of the step movement: exploration or return. Let us denote the probability of choosing exploration by pn and the probability of choosing return by qn = 1 − pn. Moreover, we use Sn to denote the number of distinct targets discovered by the forager up to step n, l̃n to denote the accumulated moving distance since the forager left its last visited target, ln to denote the moving distance of step n, and ℒn to denote the total moving distance up to step n. The foraging movement at step n is performed as follows (illustrated in figure 1):

  • (1) If the decision is exploration, the forager will perform a random search in this step. The step-size l and the turning angle θ are drawn randomly from the pre-defined distribution functions P(l) and P(θ). During the step movement, the forager continuously detects targets. If a target is detected during the step movement, the forager moves to the target in a straight line and the step movement is truncated. The actual moving distance ln = l − Δl in this case is smaller than the probabilistic moving distance l, and we reset l̃n = 0 since the forager is again at a target. Two situations can arise when a target is detected. (i) The target is a new target that has not been discovered before: we update Sn by Sn → Sn + 1 and the location of this new target is memorized by the forager. (ii) The target is a previously visited target: in this case, we do not update Sn. One should note that if this step is the first step after the forager leaves a target, the forager will ignore that target in detection to avoid trapping. If no target is detected in this step, we update l̃n by l̃n → l̃n + ln.

  • (2) If the decision is returning, the forager will move to one of the previously visited targets in a straight line. Note that the forager does not attempt to detect targets in a return step, which is analogous to the ‘blind’ phase in intermittent random search [27]. We assume that the forager can memorize the locations of all previously visited targets and randomly decide on the target of the return phase. In this initial model, we focus on this simple approach to modelling memory and leave more complicated memory processes to future work.

Figure 1.

A schematic diagram of the model. This diagram shows a foraging process with N = 4 steps. The red dots represent targets. The cylinder formed by the black dashed boundaries indicates the detection area during an exploration step. In step one, the forager leaves a target and detects no new target during this step. Note that the forager ignores the departure target. In step two, the forager decides to undertake exploration and detects a target during this step. Therefore, the original probabilistic step (the green dash-dotted line) is truncated to a shorter actual step l2 (the green solid line). Step three is similar to step one. In step four, the forager decides to return and flies straight back to the departure target of step one. In a return step, the forager does not attempt to detect targets.

Here, we assume that the foraging process starts at a random target in the landscape. That target can be understood as the base of the forager, and its location is recorded in the forager's initial memory. Therefore, the forager always has at least one location to choose from in the return phase. We use a probabilistic function Θn ∈ [0, 1] to characterize the termination condition of the process, i.e. the process is terminated at step n with probability Θn. When a step is performed, we update n by n → n + 1, and we use N to denote the total number of steps upon termination.

We then define the search efficiency η as the ratio of the total number of distinct targets discovered by the forager to the total moving distance upon termination, which yields

η = SN/ℒN.  (2.1)

One should note that, besides the stochastic returning in the return phase, the forager can also revisit a previously discovered target if it lies within the forager's perception range during the exploration phase. Revisitation during exploration may occur in two scenarios: (i) the forager takes advantage of the chance proximity of a previously discovered target to relieve movement constraints (e.g. to rest or replenish energy) before reinitiating exploration, or (ii) the forager can detect a target within its perception range but cannot assess the resource availability of the detected target without revisiting it. In this context, a target can be understood to be a site that contains resources (e.g. a tree with fruits). The random search dynamics in the exploration phase of our model resembles the non-destructive foraging in [26], while the definition of search efficiency, which only counts distinct targets discovered by the forager, resembles the case of destructive foraging with exploitation.

Finally, we specify the random search dynamics and the exploration–return mechanism. In this paper, we use a power-law step-size distribution for the random search, which yields

P(l) = (μ − 1) l0^(μ−1) l^(−μ),  l ≥ l0,  (2.2)

where 1 < μ ≤ 3 is the power-law exponent, which serves as a control parameter of the random search foraging strategy. The lower bound l0 represents the natural limit of the step-size. The movement converges to Gaussian (Brownian) motion when μ ≥ 3, and to ballistic motion when μ → 1. For simplicity, in this paper we set l0 = rv = 1, and we use a uniform distribution P(θ) = 1/2π for the turning angle.
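Step lengths with this distribution can be drawn by inverse-transform sampling; a minimal sketch (our own helper, assuming the normalized Pareto form of equation (2.2)):

```python
import random

def sample_step(mu, l0=1.0, rng=random):
    """Draw a step length from P(l) = (mu-1) * l0^(mu-1) * l^(-mu), l >= l0,
    via the inverse CDF: l = l0 * (1 - u)^(-1/(mu-1)), u uniform in [0, 1)."""
    u = rng.random()
    return l0 * (1.0 - u) ** (-1.0 / (mu - 1.0))

random.seed(1)
steps = [sample_step(2.0) for _ in range(100_000)]
print(min(steps) >= 1.0)   # True: every step respects the lower bound l0
print(max(steps) > 100.0)  # True: heavy tail (P(l > x) = 1/x for mu = 2)
```

Smaller μ fattens the tail, which is the mechanism behind the longer mean steps discussed in §3.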

To specify pn and qn, we assume that the forager decides between exploration and return according to the following rule: the longer it has travelled since leaving the last visited target, the more likely it is to return. This decision rule reflects the accumulated resistance against the forager's will to explore as the moving distance increases, such as a declining energy level, the accumulated stress of finding no new targets or other behavioural regularities. Therefore, pn can be represented by a non-increasing function of l̃n. Specifically, in this paper we define pn as an exponential function

pn = e^(−β l̃n),  (2.3)

where β is a control parameter for tuning the intensity of such returning. A higher β leads to a higher likelihood of stochastic returning during exploration, which models more conservative foraging behaviour. Similar dynamics have been observed empirically in both animals [34] and humans [35]. The stochastic returning can be used to model home-return patterns [33] or central-place foraging dynamics [36,37]. When β = 0, so that pn = 1, the model has no return phase and the foraging dynamics resembles the non-destructive foraging in [26].

In summary, the foraging in our model is tuned by five parameters, namely μ, N, L, K and β. The power-law exponent μ characterizes the random search strategy. The landscape size L and the number of targets K represent the environmental features. The number of steps N regulated by the probabilistic function Θn specifies the termination of the foraging. The intensity of stochastic returning β is an additional parameter to characterize more complex foraging dynamics.
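The whole process can be prototyped in a few dozen lines. The sketch below is our simplification, not the authors' code: detection is checked only at the step endpoint rather than continuously along the path, truncation is approximated by snapping to the detected target, and the first-step departure-target exclusion is omitted. Absolute efficiencies therefore differ from the paper's, but the five parameters (μ, N, L, K, β) play the same roles.

```python
import math
import random

def simulate(mu, N, L=200.0, K=200, beta=0.0, r_v=1.0, l0=1.0, seed=0):
    """Simplified sketch of the foraging model; returns the search efficiency
    eta = S_N / total moving distance (equation (2.1))."""
    rng = random.Random(seed)
    targets = [(rng.uniform(0, L), rng.uniform(0, L)) for _ in range(K)]
    base = rng.randrange(K)          # foraging starts at a random target
    x, y = targets[base]
    visited = {base}                 # indices of discovered targets
    dist_since = 0.0                 # distance since the last visited target
    total_dist = 0.0
    for _ in range(N):
        if rng.random() < math.exp(-beta * dist_since):  # exploration step
            l = l0 * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x = (x + l * math.cos(theta)) % L            # periodic boundaries
            y = (y + l * math.sin(theta)) % L
            total_dist += l
            dist_since += l
            for i, (tx, ty) in enumerate(targets):       # endpoint detection
                dx = min(abs(x - tx), L - abs(x - tx))
                dy = min(abs(y - ty), L - abs(y - ty))
                if dx * dx + dy * dy <= r_v * r_v:
                    visited.add(i)
                    x, y, dist_since = tx, ty, 0.0
                    break
        else:                                            # "blind" return step
            i = rng.choice(sorted(visited))
            tx, ty = targets[i]
            dx = min(abs(x - tx), L - abs(x - tx))
            dy = min(abs(y - ty), L - abs(y - ty))
            total_dist += math.hypot(dx, dy)
            x, y, dist_since = tx, ty, 0.0
    return len(visited) / total_dist

eta = simulate(mu=2.0, N=2000)
print(0.0 < eta <= 0.1)  # True: at most K targets over at least N*l0 distance
```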

3. Results and discussions

3.1. The case with β = 0

We first discuss the simple case without stochastic returning by setting β = 0, so that pn = 1. To gain better insight into the model, we derive a closed-form expression for the search efficiency η using a mean-field approximation. We then investigate the relationship between η(μ) and the other parameters of the model using a combination of numerical simulation and analytical evaluation.

3.1.1. Approximate mean-field solution

To evaluate equation (2.1), we first calculate the numerator SN. We denote by n_S the mean number of steps from the beginning of the process to the discovery of the Sth new target, and by Δn_S the mean number of steps between the discovery of the Sth target and the (S + 1)th target. Intuitively, since the foraging is confined in a finite landscape with a limited number of targets, as the forager explores more of the landscape, the number of undiscovered targets decreases and discovering new targets becomes more difficult. If the forager detects a target at step n, the probability that the detected target is new, denoted by pnew, should be approximately proportional to the number of undiscovered targets at step n, namely pnew ≈ (K − Sn)/(γK), where γ is a constant coefficient. Therefore, to discover one more new target, the forager has to make 1/pnew detections on average, which gives

Δn_S = nd/pnew ≈ γ nd K/(K − S),  (3.1)

where nd is the mean number of steps between two consecutive detections. The proportionality Δn_S ∝ 1/(K − S) indicated by equation (3.1) is supported by the numerical simulation, as shown in figure 2b. The increment of Sn at each step n can be written as follows:

dSn/dn = 1/Δn_S ≈ (K − Sn)/(γ nd K).  (3.2)

The above differential equation can be solved with initial condition S0 = 1, which yields

Sn = K − (K − 1) e^(−n/(γ nd K)).  (3.3)
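Equation (3.3) is straightforward to evaluate once C = γnd is known; a sketch (the value C = 5 below is an arbitrary illustrative choice, not a fitted one):

```python
import math

def S_mean_field(n, K, C):
    """Mean-field number of discovered targets after n steps, equation (3.3):
    S_n = K - (K - 1) * exp(-n / (C * K)), with C = gamma * n_d."""
    return K - (K - 1) * math.exp(-n / (C * K))

K, C = 200, 5.0
print(S_mean_field(0, K, C))             # 1.0: the initial condition S_0 = 1
print(round(S_mean_field(10**6, K, C)))  # 200: S_n saturates at K for large n
```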
Figure 2.

(a) The number of discovered targets Sn versus step number n. The curves in different colours from top to bottom correspond to various μ ∈ [1.1, 2.5] with an interval of 0.2. Here, we use the landscape size L = 200 and the number of targets K = 200. The dots represent simulation results averaged over 100 realizations, and the solid lines represent the corresponding mean-field solution given by equation (3.3). (b) The mean number of steps between the discovery of the Sth target and the (S + 1)th target, Δn_S, versus the number of undiscovered targets K − S. The dots represent simulation results, and the lines represent the corresponding linear regression fit. (c) The mean number of steps between two consecutive detections nd as a function of μ for different values of L and K. Here, K is adjusted to obtain the corresponding λ given L. The solid lines represent the nonlinear fit nd(μ) = e^(A(μ,λ)). (d) The constant coefficient γ as a function of μ for different values of L and K. The solid lines represent the cubic polynomial fit.

Then we calculate the denominator ℒN. In the mean-field approximation, ℒN can be simply expressed as ℒN = N⟨l⟩, where ⟨l⟩ is the mean step-size given by [26]

⟨l⟩ ≈ [(μ − 1)/(2 − μ)] (λ^(2−μ) − l0^(2−μ)) l0^(μ−1) + λ^(2−μ) l0^(μ−1).  (3.4)

The above equation captures the simulation results well, as shown in figure 7b. Via equations (3.3) and (3.4), we are able to calculate the search efficiency η.
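Equation (3.4), i.e. the mean of a power-law step-size distribution truncated at the mean free path λ [26], can be coded directly; a sketch (our helper, with the μ → 2 logarithmic limit handled explicitly to avoid the removable singularity):

```python
import math

def mean_step_size(mu, lam, l0=1.0):
    """Mean step size of a power-law flight truncated at the mean free path
    lam, following equation (3.4); mu = 2 is evaluated as a limit."""
    if abs(mu - 2.0) < 1e-9:
        return l0 * (math.log(lam / l0) + 1.0)
    return ((mu - 1.0) / (2.0 - mu)) * (lam ** (2.0 - mu) - l0 ** (2.0 - mu)) \
        * l0 ** (mu - 1.0) + lam ** (2.0 - mu) * l0 ** (mu - 1.0)

# Smaller mu gives a much larger mean step, as noted in the text:
print(round(mean_step_size(1.5, 100.0), 2))  # 19.0
print(round(mean_step_size(3.0, 100.0), 2))  # 1.99
```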

Figure 7.

(a) The mean step-size ⟨l⟩ versus the power-law exponent μ. (b) The mean step-size in exploration ⟨l_exp⟩ versus μ. The red solid line represents equation (3.4). (c) The mean step-size in return ⟨l_ret⟩ versus μ. (d) The ratio of the number of return steps Nret to the total number of steps N versus μ. In panels (a–d), the dots represent simulation results, and the dashed lines are a guide to the eye. Different colours correspond to different values of β, as indicated in the legend of panel (a). The results are obtained from numerical simulation with the landscape size L = 1000, the number of targets K = 5000 and the termination condition Θn = δ(n − 50 000), and are averaged over 100 realizations.

To evaluate Sn analytically through equation (3.3), we still need to know nd and γ. Unfortunately, a closed form of nd and γ in equation (3.3) remains mathematically elusive, so we apply curve fitting to the simulation curves of Sn to obtain the value of C = γnd in equation (3.3). Consequently, we can estimate γ using γ = C/nd, where nd is obtained from simulation. The advantage of this approach is that both γ and nd remain approximately constant as n increases. Therefore, we can obtain the value of C by running the simulation for a small number of steps (in our study, until about 20% of the targets in the landscape have been discovered), and then use the analytical solution to extrapolate the results for large n. Sn evaluated from equation (3.3) with a fitted parameter C is in good agreement with the numerical simulation, as shown in figure 2a. With L and K given (λ is determined), we can also use curve fitting to find γ(μ) and nd(μ) by performing numerical simulations for a small number of discrete values of μ, as shown in figure 2c,d. In particular, for a given λ, we use a cubic polynomial function to approximate γ(μ), and an exponential function nd(μ) = e^(A(μ,λ)) [26,38] to approximate nd(μ), where A(μ, λ) is a fifth-degree polynomial in μ determined by polynomial regression. Our results also suggest that, with μ given, γ depends on both L and K, while nd depends only on λ, as shown in figure 2c,d.
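The fit for C can be reproduced with ordinary least squares on the linearized form of equation (3.3), ln((K − Sn)/(K − 1)) = −n/(CK); a sketch with synthetic data (the value C = 5 is an arbitrary illustrative choice):

```python
import math

def fit_C(steps, S_values, K):
    """Estimate C = gamma * n_d by least squares through the origin on
    ln((K - S_n)/(K - 1)) = -n/(C K): slope = sum(x*y)/sum(x^2)."""
    pairs = [(n, math.log((K - S) / (K - 1.0)))
             for n, S in zip(steps, S_values) if S < K]
    slope = (sum(n * y for n, y in pairs) /
             sum(n * n for n, _ in pairs))
    return -1.0 / (slope * K)

# Synthetic check: data generated from equation (3.3) with C = 5 is recovered
K, C_true = 200, 5.0
ns = list(range(100, 2001, 100))
Ss = [K - (K - 1) * math.exp(-n / (C_true * K)) for n in ns]
print(round(fit_C(ns, Ss, K), 6))  # 5.0
```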

We note that both the denominator and the numerator in equation (2.1), the total moving distance ℒN and the total number of discovered targets SN, respectively, are decreasing functions of μ. Intuitively, this is easy to understand. In an N-step truncated Lévy flight, decreasing μ not only leads to a larger mean step-size, as indicated by equation (3.4), but also enlarges the searched area so that more new targets can be discovered by the forager. On the other hand, SN and ℒN are also increasing functions of N, i.e. termination after a larger number of steps leads to more discovered targets and a longer moving distance. In the following, we study how the foraging efficiency and the optimality of the search strategy depend on the number of steps N, the landscape size L and the number of targets K.

3.1.2. Number of steps N

To study the influence of the number of steps N, we use the probabilistic function Θn = δ(n − N) to terminate the foraging process at step n = N for any given value of μ. Here δ(·) is the Kronecker delta function, i.e. δ(x) = 1 if x = 0 and δ(x) = 0 otherwise. We perform numerical simulation with L = 10³, K = 5000 and λ = 100. The search efficiency η is then fully determined by the number of foraging steps N and the power-law exponent μ, which yields η = η(μ, N). Surprisingly, for any given value of N, there exists an optimal exponent μopt which maximizes η. We find that μopt shifts substantially as N varies and is overall an increasing function of N, as shown in figure 3a. The corresponding η(μ, N) calculated by the mean-field approach using equations (3.2) and (3.3), as shown in figure 3b, aligns with the results from numerical simulation in figure 3a.

Figure 3.

(a) The rescaled search efficiency η/ηmax versus the power-law exponent μ for different values of the total number of steps N from numerical simulation with the intensity of stochastic returning β = 0, the landscape size L = 1000, the number of targets K = 5000 and the termination condition Θn = δ(n − N). The results are averaged over 100 realizations. The rescaling factor ηmax = η(μ = μopt, N = 5000) is the overall maximum search efficiency. (b) η/ηmax versus μ from the mean-field calculation. The black dots indicate the peaks of the curves. The inset of panel (b) shows μopt versus N. The curves in panels (a,b) with different colours from top to bottom correspond to various N ∈ [5 × 10³, 5 × 10⁴] with an interval of 5000. Note that the discrepancy between the simulation and the mean-field approach here is mostly due to the slight deviation of equation (3.4), as shown in figure 7b. This can be improved by better calibrating the form of equation (3.4), e.g. by curve fitting, as we do to obtain continuous forms for nd(μ) and γ(μ).

A simple explanation for the presence of such optimality relates to the increased difficulty of discovering new targets in the later stage of the foraging process, particularly for smaller μ. Recall that the forager can discover new targets rapidly in the beginning (the high-efficiency stage) but then enters a stage in which discovering new targets is difficult (the low-efficiency stage) as the number of undiscovered targets drops. A random search with smaller μ enters this low-efficiency stage earlier in the foraging process, as shown in figure 2a. The gain in SN from reducing μ therefore declines steadily and eventually no longer compensates for the increase in ℒN; at that point, η reaches its maximum.

Here μopt as a function of N can be obtained by numerically solving ∂η/∂μ = 0 using the continuous η(μ, N) from the mean-field approach, as shown in the inset of figure 3b. We note that when N → 1 the optimal exponent approaches μopt → 1, corresponding approximately to ballistic motion. The optimal exponent can also approach μ = 3 for sufficiently large N, which means the optimal random search can go through a transition from Lévy flight (1 < μ < 3) to Brownian motion (μ ≥ 3) as N increases.
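The optimization can equally be done by a grid search over the smooth mean-field curve; a sketch with a toy single-peaked stand-in for η(μ, N) (the real curve would come from equations (3.3) and (3.4)):

```python
def argmax_mu(eta, lo=1.01, hi=3.0, num=2000):
    """Grid search for mu_opt = argmax eta(mu) on (1, 3], a simple alternative
    to solving d(eta)/d(mu) = 0 on the continuous mean-field curve."""
    best_mu, best_val = lo, eta(lo)
    for i in range(1, num + 1):
        mu = lo + (hi - lo) * i / num
        val = eta(mu)
        if val > best_val:
            best_mu, best_val = mu, val
    return best_mu

# Toy curve peaked at mu = 2 (illustrative only, not the model's eta):
mu_opt = argmax_mu(lambda mu: -(mu - 2.0) ** 2)
print(abs(mu_opt - 2.0) < 1e-2)  # True
```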

3.1.3. Landscape size L and number of targets K

We then study how the foraging efficiency depends on the landscape size L and the number of targets K under the termination condition Θn = δ(n − N). We evaluate the search efficiency by varying L and K in three different combinations: (1) varying L and K simultaneously with the mean free path λ = L²/(2K) kept constant; (2) varying L with K kept constant; and (3) varying K with L kept constant. As shown in figure 4, the optimality in η(μ) remains present, but μopt varies across the different combinations of L and K.

Figure 4.

Panels (a–c) show the rescaled search efficiency η/ηmax versus the power-law exponent μ for different values of the landscape size L and the number of targets K from numerical simulation with the intensity of stochastic returning β = 0 and the termination condition Θn = δ(n − 20 000). The results are averaged over 100 realizations. The curves in panel (a) from bottom to top correspond to (L, K) = (100, 50), (200, 200), (400, 800), (800, 3200) such that the mean free path λ = 100 remains constant. The curves in panel (b) from top to bottom correspond to L = 200, 400, 600, 800 with K = 800 kept constant. The curves in panel (c) from top to bottom correspond to K = 800, 1600, 2400, 3200 with L = 400 kept constant. Panels (d–f) show η/ηmax evaluated by the mean-field solution, corresponding to the results from numerical simulation in panels (a–c), respectively. The black dots indicate the peaks of the curves.

Intuitively, with N and λ given, as K increases, the optimal exponent μopt shifts to a smaller value, since the foraging can stay in the high-efficiency stage over a longer total moving distance; this accounts for combination (1). Similarly, with N and K given, as λ decreases, μopt shifts to a larger value, since the average distance required to discover new targets becomes shorter and the low-efficiency stage arrives earlier for smaller μ; this accounts for combination (2). The coupled effect of increasing K and decreasing λ is studied through combination (3). In this case, as K increases, μopt still shifts to a smaller value, but the shift is less pronounced than in combination (1).

The results here suggest that the foraging efficiency and optimal foraging strategy characterized by μopt can be highly sensitive to environmental context and landscape features under some termination conditions.

3.1.4. Termination condition Θn

The foraging defined in our model is considered to be an individual search process in a finite area subject to termination. This setting can account for two real-world scenarios: (1) foraging within a restricted area during a certain foraging season; (2) foraging in a fractal landscape (where resources are clustered in patches), such that searching each patchy area can be viewed as a single process. In scenario (2), owing to the depletion of new targets, the forager has to decide when to terminate the foraging. The probabilistic function Θn, which characterizes the termination condition and regulates the number of steps N, can reflect the animal's prior experience, its physical condition or behavioural regularity, or other environmental conditions, such as seasonal variation, that can influence the forager's decision to terminate foraging or change foraging area. In this context, various optimal foraging strategies may evolve in response to different termination conditions, coupled with the environmental context of the foraging area, to achieve the highest search efficiency.

The results displayed in figures 3 and 4 present the optimality of foraging strategy for the special case with Θn = δ(n − N), in which the termination is completely regulated by a finite number of steps. For comparison, we study another special case with a termination condition Θn = θ(ℒn − ℒc), where θ(·) is the step function such that θ(x) = 0 if x < 0 and θ(x) = 1 if x ≥ 0. In this case, foraging is terminated once the accumulated moving distance ℒn exceeds some threshold ℒc, and the best strategy is always ballistic motion corresponding to μopt → 1, as shown in figure 5. The results here imply that, if the forager always prefers exploring the landscape with a prefixed moving distance ℒc (or a certain amount of energy for exploration, if we consider energy expenditure to be proportional to distance moved), the optimal search strategy will evolve to ballistic motion regardless of the choice of ℒc. On the other hand, if the forager explores the landscape with a random moving distance ℒN (or a random amount of energy) regulated by a certain number of steps N, various optimal search strategies from ballistic motion (μ → 1) to Lévy flight (1 < μ < 3) to Brownian motion (μ ≥ 3) can emerge depending on landscape features. This situation is more likely to occur in scenario (1), in which the animal may be subject to daily regularity during the foraging season. For instance, the animal may only perform a regular number of foraging steps (trips/bouts) per day, with the moving distance in each step randomly distributed. It is also interesting to note that the optimality under the termination condition Θn = δ(n − N) can emerge before the landscape has been fully exploited. Therefore, the optimality we observe here remains valid even if we introduce an additional condition to the termination, i.e. Θn = δ(n − N) + δ(Sn − K), such that the foraging will be terminated once the forager discovers all targets (assuming that the forager has prior knowledge of the number of targets) and will not fall into the 'zero-gain' regime.
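The two termination conditions compared here differ only in which quantity they threshold; a minimal sketch of both as predicates (our own naming):

```python
def terminate_by_steps(n, total_dist, N_max):
    """Theta_n = delta(n - N_max): stop exactly at the N_max-th step."""
    return n == N_max

def terminate_by_distance(n, total_dist, dist_max):
    """Theta_n = theta(total_dist - dist_max): stop once the accumulated
    moving distance reaches the threshold."""
    return total_dist >= dist_max

print(terminate_by_steps(50, 123.4, 50))        # True
print(terminate_by_distance(50, 123.4, 200.0))  # False
```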

Figure 5.

(a) The search efficiency η versus the power-law exponent μ for different values of the threshold moving distance ℒc from numerical simulation with the intensity of stochastic returning β = 0, the landscape size L = 1000, the number of targets K = 5000 and the termination condition Θn = θ(ℒn − ℒc), where ℒn is the accumulated moving distance. The results are averaged over 100 realizations. (b) η evaluated by the mean-field solution, setting the total number of steps N = ℒc/⟨l⟩ in equation (3.3).

The two special cases discussed above demonstrate that different optimalities of foraging strategy can exist under alternative termination conditions. Generally speaking, termination can be triggered by more complex mechanisms, and Θn can be a function of multiple variables, such as N, ℒn, Sn and other observable or hidden quantities. The optimality should be considered subject to the detailed termination mechanism.

3.2. The general case with β > 0

We now turn our attention to the case with β > 0, in which stochastic returning is present. We briefly study this case with the termination condition Θn = δ(n − N); it would be interesting to study other stochastic-returning mechanisms and their interplay with the termination decision in future work. We perform numerical simulation with various intensities β and exponents μ, as shown in figure 6a. As might be expected, β has a significant impact on the search efficiency. When β increases, the optimal exponent μopt shifts to a smaller value.

Figure 6.

(a) η/ηmax versus μ for various intensities of stochastic returning β from numerical simulation with the landscape size L = 1000, the number of targets K = 5000 and the termination condition Θn = δ(n − 50 000). The results are averaged over 100 realizations. The dots represent simulation results, and the dashed lines are a guide to the eye. (b) The total number of discovered targets SN versus μ. The curves in panels (a,b) with different colours from top to bottom correspond to various β, with increasing values as indicated in the legend of panel (b).

The result here indicates that the optimal search strategy can be tuned by stochastic returning, which is widely observed in animals. Note that stochastic returning here is an additional constraint on the random search, which can be associated with a wide range of intrinsic physical or psychological features of the forager, or with the external environmental context of the landscape. For example, a high value of β can represent a harsh foraging environment that drives the forager to return more frequently. A higher intensity of stochastic returning leads to lower gain (fewer targets discovered) and higher cost (longer moving distance) in foraging, and hence reduces the search efficiency.

Finally, we discuss some interesting characteristics of Inline graphic under the current stochastic returning mechanism. In the presence of stochastic returning, Inline graphic is composed of the moving distances of the exploration and return phases, respectively, namely Inline graphic, where the superscript denotes the corresponding phase. In particular, we can write Inline graphic and Inline graphic with Inline graphic, where the superscripted N and Inline graphic denote the number of steps and the mean step size in each phase. These quantities are shown as functions of μ and β in figures 7 and 8. Interestingly, we find that Inline graphic, as well as the fraction of return steps Inline graphic, is not a monotonic function of μ but exhibits a maximum. A possible explanation for this peak is that μ sets the balance between stochastic returning and accidental rediscovery of a known target. A lower μ results in longer steps, leading to a higher likelihood of stochastic returning. On the other hand, a higher μ results in shorter steps, causing the forager to revisit the same target many times consecutively through accidental rediscovery (the trapping phenomenon in the model). Following this reasoning, the peaks in Inline graphic may reflect the value of μ that maximizes the aggregated return distance due to the stochastic and accidental returns. A full understanding of these phenomena is beyond the scope of this paper.
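The decomposition of the total moving distance into its exploration and return parts is straightforward to track in simulation. The sketch below illustrates the bookkeeping on a deliberately simplified geometry (a 1-D line with home at the origin); the function name and parameters are ours, chosen only to show how the exploration distance, the return distance and the fraction of return steps are accumulated separately.

```python
import random

def phase_decomposition(mu, beta, N=20_000, lmin=1.0, seed=0):
    """Sketch: forager on a line with home at the origin.  With probability
    beta a step is a return (jump home, covering distance |x|); otherwise
    it is an exploration step of power-law size in a random direction.
    Returns (L_exploration, L_return, fraction_of_return_steps), so that
    the total moving distance is L_exploration + L_return."""
    rng = random.Random(seed)
    x, L_exp, L_ret, n_ret = 0.0, 0.0, 0.0, 0
    for _ in range(N):
        if rng.random() < beta:          # return phase
            L_ret += abs(x)
            n_ret += 1
            x = 0.0
        else:                            # exploration phase
            # inverse-CDF sample from p(l) ~ l^(-mu), l >= lmin (mu > 1)
            step = lmin * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))
            x += step if rng.random() < 0.5 else -step
            L_exp += step
    return L_exp, L_ret, n_ret / N
```

Sweeping μ at fixed β > 0 in this sketch and plotting the return distance and return-step fraction is the kind of measurement summarized in figures 7 and 8, although the paper's results are obtained on the full 2-D landscape model.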

Figure 8.


(a) The total moving distance Inline graphic versus the power-law exponent μ. (b) The total moving distance in exploration Inline graphic versus μ. (c) The total moving distance in return Inline graphic versus μ. (d) The ratio of Inline graphic to the total number of steps Inline graphic versus μ. In panels (a–d), the dots represent simulation results and the dashed lines are a guide to the eye. Different colours correspond to different values of β, as indicated in the legend of panel (a). The results are obtained from numerical simulation with the landscape size L = 1000, the number of targets K = 5000 and the termination condition Θn = δ(n − 50 000), and are averaged over 100 realizations.

4. Conclusion

In this paper, we have presented a simple model to study Lévy-flight foraging in a finite landscape with countable targets. The first important message from our results is that the interplay between the termination of foraging and environmental features, such as the landscape size and target density, can have a significant influence on the development of an optimal foraging strategy. This contrasts with most previous models, which are not sensitive to the termination condition, such as the 'non-destructive' truncated Lévy-flight foraging [26] or other random-search models that focus on the search for a single target [32]. The termination condition in our model can account for the decision-making mechanism that drives the animal to stop foraging in a certain area. In particular, when the number of steps (or trips/bouts in foraging) plays a role in the termination condition, a wide range of μopt, capturing foraging dynamics from ballistic motion (μopt → 1) to Lévy flight (1 < μopt < 3) to Brownian motion (μopt ≥ 3), can evolve in response to other constraints such as the landscape size and number of targets, as well as stochastic returning (e.g. home-return behaviour).

This idealized model allows a variety of future extensions. In the current approach, we do not consider energy intake and consumption, and simply assume that the forager always has enough energy to perform long flights. One important improvement would be to incorporate an adaptive foraging strategy based on available energy. As a supplement to this work, one could also study other termination conditions to capture more complex internal regulation, memory mechanisms or decision-making in foraging [39–43], as well as the coevolution of the foraging dynamics and the termination mechanism. Moreover, one could consider time-varying and heterogeneous distributions of targets in the landscape, leading to a study of how the foraging strategy adapts to environmental changes in the presence of cognition and memory. It is also worth extending the model to multiple foragers, to study the collective processes of competition and cooperation in foraging [44].

It would be interesting to test our model against empirical data. One directly observable consequence suggested by our study is that the value of μopt may depend on quantities such as the total moving distance (energy expenditure) or the number of steps (trips/bouts) of a foraging process, as well as the size or resource distribution of the foraging area. This dependence may be tested by measuring the correlation between μopt and these quantities across individuals. Finally, although initially proposed to study animal foraging, our model can be applied more generally to other random-search or biological encounter processes.

Acknowledgements

We thank the anonymous reviewers for their insightful comments and valuable suggestions on the original manuscript.

Funding statement

This research was conducted within the CSIRO Biosecurity Flagship, supported by grants from CSIRO's Sensor and Sensor Networks Transformational Capability Platform, CSIRO's OCE Postdoctoral Scheme, the Australian Government Department of the Environment's National Environmental Research Program and the Rural Industries Research and Development Corporation (through funding from the Commonwealth of Australia, the State of New South Wales and the State of Queensland under the National Hendra Virus Research Program).

References

  • 1. Smouse PE, Focardi S, Moorcroft PR, Kie JG, Forester JD, Morales JM. 2010. Stochastic modelling of animal movement. Phil. Trans. R. Soc. B 365, 2201–2211. (doi:10.1098/rstb.2010.0078)
  • 2. Bartumeus F. 2007. Lévy processes in animal movement: an evolutionary hypothesis. Fractals 15, 151–162. (doi:10.1142/S0218348X07003460)
  • 3. Ortiz-Pelaez A, Pfeiffer D, Soares-Magalhaes R, Guitian F. 2006. Use of social network analysis to characterize the pattern of animal movements in the initial phases of the 2001 foot and mouth disease (FMD) epidemic in the UK. Prev. Vet. Med. 76, 40–55. (doi:10.1016/j.prevetmed.2006.04.007)
  • 4. Caro T. 1999. The behaviour–conservation interface. Trends Ecol. Evol. 14, 366–369. (doi:10.1016/S0169-5347(99)01663-8)
  • 5. Bousquet F, Le Page C. 2004. Multi-agent simulations and ecosystem management: a review. Ecol. Model. 176, 313–332. (doi:10.1016/j.ecolmodel.2004.01.011)
  • 6. Viswanathan G, Afanasyev V. 1996. Lévy flight search patterns of wandering albatrosses. Nature 381, 413–415. (doi:10.1038/381413a0)
  • 7. Humphries NE, Weimerskirch H, Queiroz N, Southall EJ, Sims DW. 2012. Foraging success of biological Lévy flights recorded in situ. Proc. Natl Acad. Sci. USA 109, 7169–7174. (doi:10.1073/pnas.1121201109)
  • 8. Ramos-Fernandez G, Mateos JL, Miramontes O, Cocho G, Larralde H, Ayala-Orozco B. 2004. Lévy walk patterns in the foraging movements of spider monkeys (Ateles geoffroyi). Behav. Ecol. Sociobiol. 55, 223–230. (doi:10.1007/s00265-003-0700-6)
  • 9. Reynolds AM, Smith AD, Menzel R, Greggers U, Reynolds DR, Riley JR. 2007. Displaced honey bees perform optimal scale-free search flights. Ecology 88, 1955–1961. (doi:10.1890/06-1916.1)
  • 10. Focardi S, Montanaro P, Pecchioli E. 2009. Adaptive Lévy walks in foraging fallow deer. PLoS ONE 4, e6587. (doi:10.1371/journal.pone.0006587)
  • 11. Sims DW, et al. 2008. Scaling laws of marine predator search behaviour. Nature 451, 1098–1102. (doi:10.1038/nature06518)
  • 12. Hills TT, Kalff C, Wiener JM. 2013. Adaptive Lévy processes and area-restricted search in human foraging. PLoS ONE 8, e60488. (doi:10.1371/journal.pone.0060488)
  • 13. Raichlen DA, Wood BM, Gordon AD, Mabulla AZ, Marlowe FW, Pontzer H. 2014. Evidence of Lévy walk foraging patterns in human hunter–gatherers. Proc. Natl Acad. Sci. USA 111, 728–733. (doi:10.1073/pnas.1318616111)
  • 14. Benhamou S. 2007. How many animals really do the Lévy walk? Ecology 88, 1962–1969. (doi:10.1890/06-1769.1)
  • 15. Edwards AM, et al. 2007. Revisiting Lévy flight search patterns of wandering albatrosses, bumblebees and deer. Nature 449, 1044–1048. (doi:10.1038/nature06199)
  • 16. Edwards AM. 2011. Overturning conclusions of Lévy flight movement patterns by fishing boats and foraging animals. Ecology 92, 1247–1257. (doi:10.1890/10-1182.1)
  • 17. Viswanathan G, Raposo E, Da Luz M. 2008. Lévy flights and superdiffusion in the context of biological encounters and random searches. Phys. Life Rev. 5, 133–150. (doi:10.1016/j.plrev.2008.03.002)
  • 18. Viswanathan GM. 2011. The physics of foraging: an introduction to random searches and biological encounters. Cambridge, UK: Cambridge University Press.
  • 19. James A, Plank MJ, Edwards AM. 2011. Assessing Lévy walks as models of animal foraging. J. R. Soc. Interface 8, 1233–1247. (doi:10.1098/rsif.2011.0200)
  • 20. Bartumeus F, Catalan J. 2009. Optimal search behavior and classic foraging theory. J. Phys. A 42, 434002. (doi:10.1088/1751-8113/42/43/434002)
  • 21. Viswanathan G, Afanasyev V, Buldyrev SV, Havlin S, Da Luz M, Raposo E, Stanley HE. 2000. Lévy flights in random searches. Physica A 282, 1–12. (doi:10.1016/S0378-4371(00)00071-6)
  • 22. Bartumeus F, Raposo EP, Viswanathan GM, da Luz MG. 2013. Stochastic optimal foraging theory. In Dispersal, individual movement and spatial ecology, pp. 3–32. Berlin, Germany: Springer.
  • 23. Bartumeus F, da Luz MGE, Viswanathan G, Catalan J. 2005. Animal search strategies: a quantitative random-walk analysis. Ecology 86, 3078–3087. (doi:10.1890/04-1806)
  • 24. da Luz MG, Grosberg A, Raposo EP, Viswanathan GM. 2009. The random search problem: trends and perspectives. J. Phys. A 42, 430301. (doi:10.1088/1751-8121/42/43/430301)
  • 25. Reynolds A, Rhodes C. 2009. The Lévy flight paradigm: random search patterns and mechanisms. Ecology 90, 877–887. (doi:10.1890/08-0153.1)
  • 26. Viswanathan G, Buldyrev SV, Havlin S, Da Luz M, Raposo E, Stanley HE. 1999. Optimizing the success of random searches. Nature 401, 911–914. (doi:10.1038/44831)
  • 27. Benichou O, Loverdo C, Moreau M, Voituriez R. 2007. A minimal model of intermittent search in dimension two. J. Phys. 19, 065141.
  • 28. Gonzalez MC, Hidalgo CA, Barabasi AL. 2008. Understanding individual human mobility patterns. Nature 453, 779–782. (doi:10.1038/nature06958)
  • 29. Hays GC, et al. 2012. High activity and Lévy searches: jellyfish can search the water column like fish. Proc. R. Soc. B 279, 465–473. (doi:10.1098/rspb.2011.0978)
  • 30. Santos M, Raposo E, Viswanathan G, Da Luz M. 2004. Optimal random searches of revisitable targets: crossover from superdiffusive to ballistic random walks. Europhys. Lett. 67, 734. (doi:10.1209/epl/i2004-10114-9)
  • 31. Raposo E, Bartumeus F, Da Luz M, Ribeiro-Neto P, Souza T, Viswanathan G. 2011. How landscape heterogeneity frames optimal diffusivity in searching processes. PLoS Comput. Biol. 7, e1002233. (doi:10.1371/journal.pcbi.1002233)
  • 32. Palyulin VV, Chechkin AV, Metzler R. 2014. Lévy flights do not always optimize random blind search for sparse targets. Proc. Natl Acad. Sci. USA 111, 2931–2936. (doi:10.1073/pnas.1320424111)
  • 33. Song C, Koren T, Wang P, Barabási AL. 2010. Modelling the scaling properties of human mobility. Nat. Phys. 6, 818–823. (doi:10.1038/nphys1760)
  • 34. Niv Y, Joel D, Meilijson I, Ruppin E. 2002. Evolution of reinforcement learning in uncertain environments: a simple explanation for complex foraging behaviors. Adapt. Behav. 10, 5–24. (doi:10.1177/10597123020101001)
  • 35. Zhao K, Stehlé J, Bianconi G, Barrat A. 2011. Social network dynamics of face-to-face interactions. Phys. Rev. E 83, 056109. (doi:10.1103/PhysRevE.83.056109)
  • 36. Reynolds A. 2008. Optimal random Lévy-loop searching: new insights into the searching behaviours of central-place foragers. Europhys. Lett. 82, 20001. (doi:10.1209/0295-5075/82/20001)
  • 37. Steingrímsson SÓ, Grant JW. 2008. Multiple central-place territories in wild young-of-the-year Atlantic salmon Salmo salar. J. Anim. Ecol. 77, 448–457. (doi:10.1111/j.1365-2656.2008.01360.x)
  • 38. Buldyrev S, Havlin S, Kazakov AY, da Luz M, Raposo E, Stanley H, Viswanathan G. 2001. Average time spent by Lévy flights and walks on an interval with absorbing boundaries. Phys. Rev. E 64, 041108. (doi:10.1103/PhysRevE.64.041108)
  • 39. Pérez-Reche FJ, Ludlam JJ, Taraskin SN, Gilligan CA. 2011. Synergy in spreading processes: from exploitative to explorative foraging strategies. Phys. Rev. Lett. 106, 218701. (doi:10.1103/PhysRevLett.106.218701)
  • 40. Janson CH. 2007. Experimental evidence for route integration and strategic planning in wild capuchin monkeys. Anim. Cogn. 10, 341–356. (doi:10.1007/s10071-007-0079-2)
  • 41. Cheng K, Spetch ML. 1998. Mechanisms of landmark use in mammals and birds. Oxford, UK: Oxford University Press.
  • 42. Volchenkov D, Helbach J, Tscherepanow M, Kühnel S. 2013. Exploration–exploitation trade-off features a saltatory search behaviour. J. R. Soc. Interface 10, 20130352. (doi:10.1098/rsif.2013.0352)
  • 43. Dukas R. 1998. Cognitive ecology: the evolutionary ecology of information processing and decision making. Chicago, IL: University of Chicago Press.
  • 44. Giraldeau LA, Caraco T. 2000. Social foraging theory. Princeton, NJ: Princeton University Press.

Articles from Journal of the Royal Society Interface are provided here courtesy of The Royal Society
