Abstract
A path planning method for unmanned underwater vehicle (UUV) homing and docking in an environment with moving obstacles is proposed in this paper. Firstly, a cost function for path planning is constructed. Then, a novel particle swarm optimization (NPSO) is proposed and applied to find the waypoint with the minimum value of the cost function. Next, a strategy is proposed for the UUV to enter the mother vessel at a fixed angle. Finally, benchmark test functions are introduced to analyze the performance of NPSO and to compare it with basic particle swarm optimization (BPSO), inertia weight particle swarm optimization (LWPSO, EPSO), and time-varying acceleration coefficient PSO (TVAC). The results show that, for unimodal functions, NPSO achieves better searching accuracy and stability than the other algorithms, and, for multimodal functions, the performance of NPSO is similar to that of TVAC. A simulation of UUV path planning is then presented, showing that, with the strategy proposed in this paper, the UUV can dodge obstacles and threats and find an efficient path.
1. Introduction
Unmanned underwater vehicles (UUV) were first designed for military purposes. Compared with ordinary vehicles, they are better suited to stealth and scouting missions. Recent advances in technology have driven the development of unmanned vehicles to a new level, and UUV are now widely applied in civilian applications such as surveying, landscape mapping, and life rescue [1]. A major challenge in the development of UUV is the realization of a real-time path planning and obstacle avoidance strategy that can effectively guide the vehicle in unstructured environments [2].
In recent years, many researchers have developed various approaches to solve path planning problems, such as genetic algorithms (GAs), linear programming, potential fields, probabilistic sampling methods like rapidly exploring random trees (RRTs), and artificial intelligence (AI) methods like A*, which guarantees path optimality. Cheng et al. [1] proposed a GA-inspired UUV path planner based on dynamic programming (DP). In the proposed path planner, the random crossover operator in GA is replaced with a deterministic crossover operator, so the planner can always provide the best combination of crossover points from the available path segments. Li et al. [3] proposed a novel artificial bee colony (ABC) algorithm for unmanned combat aerial vehicle (UCAV) path planning; simulation results confirm that the algorithm is more competent for the UCAV path planning scheme than other ABC algorithms. Mansury et al. [4] proposed a solution to the path planning problem using the ABC algorithm and cubic Ferguson splines: a path for robot movement is first described by Ferguson splines, and the ABC algorithm is then used to optimize the spline parameters to find an optimal path between the start and goal points while considering the obstacles between them. Khelchandra and Jie [5] proposed a path planning method based on random sampling. Their method shows a high probability of finding collision-free paths in a short time; however, because of its randomized nature, it may produce infeasible solutions. Fernández-Perdomo et al. [6] proposed a novel path planning algorithm for gliders using ocean currents. It is based on the A* family of algorithms and incorporates a probabilistic framework; instead of discretizing the search space, a set of bearing angles is sampled at each surfacing point and the glider trajectory is integrated. Like other exact algorithms, the computational complexity of an A* algorithm increases with the size of the search domain. Li et al. [7] redefined potential functions to eliminate oscillations and local minima and used improved wall-following methods so that robots can escape nonreachable-target problems. They also developed a regression search method to optimize the planned path, which is calculated by connecting the sequential points produced by the improved artificial potential field (APF) method. The simulation results confirm that the proposed approach yields a shorter, more nearly optimal path than the general APF. Jaillet et al. [8] presented a sampling-based algorithm to compute paths in problems involving high-dimensional cost spaces. The method combines the exploratory strength of the rapidly exploring random tree (RRT) algorithm with the efficiency of stochastic optimization methods and integrates an adaptive mechanism that helps to ensure good performance over a large set of problems. By its nature, path planning is a constrained optimization problem, and many optimization algorithms have been proposed to solve it. Brand et al. [9] proposed the application of ant colony optimization and compared two different pheromone reinitialization schemes.
Operating in an unknown semistructured environment, UUV may encounter obstacles which are not described in the electronic chart. In order to ensure global and real-time path planning, Yun et al. [10] divided the path planning algorithm into two levels: firstly, employing global path planning based on geometric method and then carrying out local path planning based on one new artificial field potential function, which uses sector as the parameter. In order to ensure real-time path planning, the information of sonar is combined to the static global map. Dolgov et al. [11] described a practical path-planning algorithm for an autonomous vehicle operating in an unknown semistructured environment, where obstacles are detected online by the robot's sensors. Firstly, they use a variant of A* search to obtain a kinematically feasible trajectory and then improve the quality of the solution via numeric nonlinear optimization, leading to a local optimum. Further, they extend algorithm to use prior topological knowledge of the environment to guide path planning, leading to faster search and final trajectories better suited to the structure of the environment.
UUV must have homing and docking functions in order to be completely autonomous. Homing is an operation in which the UUV returns to the vicinity of the launcher after a mission; path planning and tracking control of the planned path are also included in this operation. Docking is another operation in which the UUV is fixed to the launcher exactly when it is close to the launcher. Docking is more difficult than homing, since the position of the launcher is easily changed by ocean currents and waves. Hence, a much more precise path that takes the future position of the launcher into account is needed [12]. Many docking methods have been studied, but most of them concern towing or underwater structures designed for docking with the docking station [13, 14], and many articles concern providing accurate measurement of the UUV position and orientation or providing a control strategy for UUV docking [15–18]. However, planning the homing and docking path is the first step of UUV homing and docking. Sujit et al. [19] presented a navigation function based approach for docking onto a moving submarine. Their motion planning system is based on potential fields but avoids the problem of local minima around no-fly zones by using a Koditschek and Rimon navigation function. The authors represented the no-fly zones with circles and ellipses; these zones have tangential potentials, so when the AUV reaches them, it is deflected away depending on the direction of the tangential potentials. Using these directional potentials of the no-fly zones, the navigation function based controller guides the AUV towards the dock, but the resulting path is not an optimal rendezvous.
In this paper, we propose a strategy to plan the UUV homing and docking path in an environment with moving obstacles, and the application of the particle swarm optimization (PSO) algorithm is investigated [20, 21]. PSO is considered one of the modern heuristic algorithms for optimization, first proposed by Kennedy and Eberhart in 1995 [22]. The motivation for the development of this method was the simulation of simplified animal social behaviors [23]. The PSO algorithm works on the social behavior of particles in the swarm [24]. In PSO, the population dynamics simulates a bird flock's behavior, where social sharing of information takes place and individuals can profit from the discoveries and previous experience of all other companions during the search for food. That is, the global best solution is found by simply adjusting the trajectory of each individual towards its own best location and towards the best particle of the entire swarm at each time step [22, 23, 25]. Owing to its reduced memory requirement, computational efficiency, and convenient implementation, PSO has gained much attention in various optimal control system applications compared to other evolutionary algorithms [20, 21, 26]. Several studies have been carried out to analyze the performance of PSO with different settings; for example, Shi and Eberhart [27] indicated that the optimal solution can be improved by varying the value of ω from 0.9 at the beginning of the search to 0.4 at the end of the search for most problems, and they introduced a method named TVIW with a linearly varying inertia weight over the generations. Guimin et al. [28] introduced an exponential inertia weight strategy, which is found to be a very effective variant of TVIW. Ratnaweera et al. [23] proposed time-varying acceleration coefficients as a parameter automation strategy for PSO, named TVAC, which reduces the cognitive component and increases the social component by changing the acceleration coefficients.
The contributions of this paper are summarized as follows. Firstly, the static obstacle avoidance cost function, the dynamic obstacle avoidance cost function, and the target approaching cost function are built to construct the path planning cost function. Secondly, we propose a new strategy to plan the path for UUV homing and docking. Thirdly, we propose a novel particle swarm optimization algorithm and use it to minimize the cost function for each path point.
The rest of this paper is organized as follows. In Section 2, the path planning cost function and a new strategy to plan path for UUV homing and docking are introduced. In Section 3, the basic PSO methodology and previous developments are summarized. In Section 4, a novel particle swarm optimization algorithm is proposed. In Section 5, the experimental settings for the benchmarks and simulation strategies are explained, and the conclusion is drawn based on the comparison analysis. In Section 6, the simulation of UUV path planning is explained. In Section 7, we present some concluding remarks.
2. UUV Path Planning for Homing and Docking
In this section, UUV path planning for mother vessel recovery using PSO in a dynamic obstacle environment is presented, and a reasonable path is studied. The path planning cost function has three parts: a cost function for static obstacles, a cost function for dynamic obstacles, and a cost function for approaching the target. To simplify the issue, the UUV sails in the mission zone at a fixed velocity, so it covers a fixed distance in each step. Since each path planning step spans a fixed length of time, once we obtain the heading angle for the UUV towards the next waypoint, we can obtain the position of the next waypoint. Each particle of the particle swarm is therefore set as a heading angle towards the next waypoint. Using NPSO, we can obtain the heading angle towards the optimal next waypoint, that is, the one with minimum cost.
2.1. UUV Path Planning Cost Function
Firstly, define the initial position P_0(x_0, y_0), the waypoints P_i(x_i, y_i), i = 1, 2, …, n, and the target P_g(x_g, y_g).
2.1.1. Cost Function for Static Obstacles
In the underwater environment, UUV have to avoid static obstacles such as submarine ridges, reef rocks, and hidden poles. Static obstacle avoidance for UUV is illustrated in Figure 1, where the obstacles are represented by circles.
Figure 1. Schematic diagram of static obstacles avoidance for UUV.
If the position of the UUV is P_i, the next candidate waypoint is P_{i+1}′. As shown in Figure 1, the straight-line distance L is the range from P_i to the obstacle along the direction towards P_{i+1}′. The cost function for static obstacles can be written as
F_1 = N · V / L  (1)
where N is an adjusting coefficient, which is constant and given, and V is the given velocity of the UUV. The shorter the time from the UUV to the obstacle, the larger the value of F_1, and vice versa. The candidate waypoint with the minimum value of F_1 is set as the next waypoint.
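As an illustration, the following sketch evaluates the static obstacle term for one candidate heading, assuming circular obstacles and the reconstructed form F_1 = N·V/L given above; the function name, the ray-circle intersection test, and the zero-cost fallback for a free heading are illustrative assumptions, not taken verbatim from the paper.

```python
import math

def static_obstacle_cost(p_i, psi, obstacles, N=1.0, V=4.0):
    """Sketch of F1 for one candidate heading psi (assumed form F1 = N*V/L).

    p_i       -- current waypoint (x, y)
    psi       -- candidate heading angle (rad) towards the next waypoint
    obstacles -- circular static obstacles given as (cx, cy, radius) tuples
    L is the range from P_i to the nearest obstacle boundary along psi;
    the cost is 0 if no obstacle lies on the candidate heading.
    """
    dx, dy = math.cos(psi), math.sin(psi)
    L = float("inf")
    for cx, cy, r in obstacles:
        ox, oy = p_i[0] - cx, p_i[1] - cy
        # Ray-circle intersection: s^2 + 2*s*(o.d) + |o|^2 - r^2 = 0
        b = 2.0 * (ox * dx + oy * dy)
        c = ox * ox + oy * oy - r * r
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue                       # this obstacle is not on the heading
        for s in ((-b - math.sqrt(disc)) / 2.0, (-b + math.sqrt(disc)) / 2.0):
            if s > 0.0:                    # nearest intersection ahead of the UUV
                L = min(L, s)
                break
    if math.isinf(L):
        return 0.0                         # free heading
    return N * V / max(L, 1e-9)            # shorter time L/V gives a larger cost
```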
2.1.2. Cost Function for Dynamic Obstacles
UUV also have to avoid dynamic obstacles in the mission zone. In Figure 2, the point M is the cross point of the line P_i P_{i+1}′ and the movement direction of the dynamic obstacle. Assume that the time interval for the UUV to move from P_i to M is t_1 and the time interval for the dynamic obstacle to reach M is t_2. If t_2 > t_1, the UUV is faster than the obstacle in arriving at M, and when t_2 − t_1 is large, the UUV sails safely. Hence, the avoiding cost function e^{−β(t_2−t_1)^2} is chosen. Moreover, the longer the time for the UUV to reach M, the more safety is obtained, and vice versa. Therefore, the cost function for dynamic obstacles is adjusted to
| (2) |
Figure 2. Schematic diagram of dynamic obstacles avoidance for UUV.
If t_2 < t_1, the dynamic obstacle arrives at point M earlier than the UUV, and the UUV can avoid it safely. Combining the two cases, the cost function for dynamic obstacles F_2 is
| (3) |
Considering the sizes of the UUV and the dynamic obstacle, F_2 in (3) can be rewritten as
| (4) |
where α and β are adjusting coefficients, which are constant and given, τ_1 is the ratio of the UUV length to its velocity, and τ_2 is the ratio of the dynamic obstacle length to its velocity. If the cross point M does not exist, the UUV trajectory and the dynamic obstacle trajectory have no intersection, and F_2 = 0.
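Since the full expressions of (2)–(4) are not reproduced above, the following sketch only illustrates the crossing-point construction of Figure 2: it locates M, computes t_1 and t_2, and uses the exponential term e^{−β(t_2−t_1)^2} named in the text. The way α, τ_1, and τ_2 enter, and the treatment of multiple dynamic obstacles, are assumptions of this sketch rather than the paper's exact formulas.

```python
import math

def dynamic_obstacle_cost(p_i, psi, v_uuv, obs_pos, obs_dir, v_obs,
                          alpha=1.0, beta=1.0, tau1=0.0, tau2=0.0):
    """Hedged sketch of the dynamic-obstacle cost F2 (interpretation of (2)-(4)).

    M is the intersection of the UUV candidate track (heading psi from p_i)
    and the obstacle track (unit direction obs_dir from obs_pos); t1 and t2
    are the travel times of the UUV and the obstacle to M.
    """
    d1 = (math.cos(psi), math.sin(psi))
    d2 = obs_dir
    # Solve p_i + s1*d1 = obs_pos + s2*d2 for s1, s2 (2x2 linear system).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        return 0.0                                # parallel tracks: no cross point M
    rx, ry = obs_pos[0] - p_i[0], obs_pos[1] - p_i[1]
    s1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    s2 = (d1[0] * ry - d1[1] * rx) / det
    if s1 <= 0.0 or s2 <= 0.0:
        return 0.0                                # cross point lies behind one of them
    t1 = s1 / v_uuv + tau1                        # UUV time to clear M (size correction, assumed)
    t2 = s2 / v_obs - tau2                        # obstacle time to reach M (size correction, assumed)
    if t2 < t1:
        return 0.0                                # obstacle passes M first: safe
    # Crossing-time term from the text; the extra e^{-alpha*t1} factor is an assumption.
    return math.exp(-beta * (t2 - t1) ** 2) * math.exp(-alpha * t1)
```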
2.1.3. Cost Function for Approaching Target
To drive the UUV towards the target, a cost function for approaching the target is designed. The schematic diagram of approaching the target is shown in Figure 3.
Figure 3. Schematic diagram of approaching target.
In Figure 3, θ_i denotes the angle of the line connecting P_i and P_g. Because P_i and P_g are known, θ_i is easy to calculate. ψ_i′ is the optimization parameter, which represents the heading angle towards the candidate waypoint P_{i+1}′. When ψ_i′ = θ_i, the UUV moves directly towards the target, and the cost function for approaching the target F_3 takes its strongest value. Hence, a normal distribution is introduced to represent F_3. Consider
| (5) |
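The exact normal-distribution form in (5) is not reproduced above, so the sketch below only illustrates the idea: a Gaussian of the heading error ψ_i′ − θ_i centred at zero. The spread σ and the "1 − Gaussian" reading (so that a smaller value means a better heading when the total cost is minimized) are assumptions of this sketch.

```python
import math

def approach_target_cost(p_i, p_g, psi_candidate, sigma=math.pi / 6):
    """Hedged sketch of the approach-target term around eq. (5).

    theta_i is the bearing from the current waypoint P_i to the target P_g.
    The Gaussian peaks when psi_candidate == theta_i; sigma and the
    complementary form returned here are illustrative assumptions.
    """
    theta_i = math.atan2(p_g[1] - p_i[1], p_g[0] - p_i[0])
    err = math.atan2(math.sin(psi_candidate - theta_i),
                     math.cos(psi_candidate - theta_i))   # wrap error to [-pi, pi]
    gaussian = math.exp(-err ** 2 / (2.0 * sigma ** 2))   # 1 when heading at the target
    return 1.0 - gaussian
```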
2.1.4. UUV Path Planning Cost Function
The UUV path planning cost function combines the three cost functions and is described as
F = k_1 F_1 + k_2 F_2 + k_3 F_3  (6)
where k_i (k_i > 0) are weight coefficients, which are constant and given. The UUV sails from the initial waypoint at a fixed velocity, and each path planning step covers a fixed length of time. Each particle of the swarm encodes a heading angle for the UUV towards the next waypoint. Using NPSO, we obtain the heading angle towards the optimal next waypoint, that is, the one with minimum cost F.
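Putting the pieces together, one planning step can be sketched as follows: each candidate heading is scored with the weighted sum of the three terms, and the UUV advances V·τ along the best heading. The `env` dictionary keys and the summation of F_2 over several dynamic obstacles are hypothetical; the helper functions are the sketches given earlier in this section, and `optimizer` stands for any 1-D minimizer over the heading angle (NPSO in the paper).

```python
import math

def total_cost(psi, p_i, env, k=(1.0, 1.0, 1.0)):
    """Weighted sum F = k1*F1 + k2*F2 + k3*F3 (reconstructed eq. (6)) for one
    candidate heading psi; 'env' is a hypothetical bundle of scenario data."""
    f1 = static_obstacle_cost(p_i, psi, env["static_obstacles"], env["N"], env["V"])
    f2 = sum(dynamic_obstacle_cost(p_i, psi, env["V"], o["pos"], o["dir"], o["speed"])
             for o in env["dynamic_obstacles"])          # summed over obstacles (assumption)
    f3 = approach_target_cost(p_i, env["target"], psi)
    return k[0] * f1 + k[1] * f2 + k[2] * f3

def next_waypoint(p_i, env, optimizer, tau):
    """One planning step: the optimizer returns the heading with minimum cost,
    and the UUV advances V*tau along it to the next waypoint."""
    psi_best = optimizer(lambda psi: total_cost(psi, p_i, env))
    step = env["V"] * tau
    return (p_i[0] + step * math.cos(psi_best),
            p_i[1] + step * math.sin(psi_best)), psi_best
```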
2.1.5. The Admittance Angle for Homing and Docking
For homing and docking, the UUV enters the mother vessel (launcher) at a fixed angle (Figure 4). To obtain an effective path at the end of the planned path, let L denote the mother vessel hatch center line, and construct a circle of radius r that is tangent to the line L at one point. The distance from the tangent point to the hatch is larger than the vessel length (generally the length of two UUV is chosen), and r is larger than the minimum turning radius.
Figure 4. The recovery admittance angle.
In the path planning for homing and docking, the objective P_g is the tangent point between the circle and the line through the current UUV path point P_n. When |P_n P_g| < vτ (where v is the UUV velocity and τ is the time slot for one step), the path planning is finished and the UUV enters the mother vessel at the end of the arc.
Influenced by ocean currents and waves, the mother vessel is moving, and the UUV is also moving while avoiding obstacles, so the UUV may switch between the right side and the left side of the mother vessel. In any case, we only need to ensure that the circle lies on the same side of the mother vessel as the UUV.
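One reading of this construction is sketched below: the admittance circle is placed tangent to the hatch center line on the UUV's side, and the target P_g is the point where a line from the current path point P_n touches that circle. The placement parameters (`hatch_dir`, `side_normal`, `offset`) and the choice of which tangent point to use are assumptions consistent with Figure 4, not an exact reproduction of the paper.

```python
import math

def admittance_circle(hatch, hatch_dir, side_normal, r, offset):
    """Place the admittance circle: tangent to the hatch center line at a point
    'offset' metres ahead of the hatch (offset > vessel length), with its centre
    pushed a radius r to the UUV's side of the line (assumed parameterisation)."""
    tangent_pt = (hatch[0] + offset * hatch_dir[0], hatch[1] + offset * hatch_dir[1])
    centre = (tangent_pt[0] + r * side_normal[0], tangent_pt[1] + r * side_normal[1])
    return centre, tangent_pt

def tangent_point_target(p_n, centre, r):
    """Tangent points of the lines through the current path point P_n touching
    the circle (centre, r); the planner keeps the one on the approach side as P_g."""
    dx, dy = p_n[0] - centre[0], p_n[1] - centre[1]
    d = math.hypot(dx, dy)
    if d <= r:
        return None                        # P_n already inside the admittance circle
    phi = math.atan2(dy, dx)
    half = math.acos(r / d)                # angle at the centre between C->P_n and C->tangent
    return [(centre[0] + r * math.cos(phi + s * half),
             centre[1] + r * math.sin(phi + s * half)) for s in (+1, -1)]
```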
3. Some Previous Work of PSO
Since its introduction by Kennedy and Eberhart in 1995, PSO has become a strong competitor in the field of numerical optimization, and a considerable amount of work has been done on developing the original version of PSO. In this section, we summarize some significant previous developments.
3.1. Basic Particle Swarm Optimization (BPSO)
In PSO, each solution, called a "particle," flies in the search space looking for the optimal position to land. The PSO system combines a local search method (through individual experience) with a global search method (through neighboring experience), attempting to balance exploration and exploitation [29]. Each particle has a position vector x_i(k), a velocity vector v_i(k), the position with the best fitness encountered by the particle, and the index of the best particle in the swarm. The position vector and the velocity vector of the ith particle in the d-dimensional search space can be represented as x_i = (x_{i1}, x_{i2}, x_{i3}, …, x_{id}) and v_i = (v_{i1}, v_{i2}, v_{i3}, …, v_{id}), respectively. The best position of each particle (pbest) is p_i = (p_{i1}, p_{i2}, p_{i3}, …, p_{id}), and the best position found by the swarm so far at generation k (gbest) is p_g = (p_{g1}, p_{g2}, …, p_{gd}). In each generation, each particle is updated by the following two equations:
v_{id}(k + 1) = v_{id}(k) + c_1 r_1 (p_{id} − x_{id}(k)) + c_2 r_2 (p_{gd} − x_{id}(k))  (7)
x_{id}(k + 1) = x_{id}(k) + v_{id}(k + 1)  (8)
The parameters c_1 and c_2 are constants known as acceleration coefficients, and r_1 and r_2 are random values in the range from 0 to 1, drawn anew at every iteration. Kennedy and Eberhart [22] suggested setting both acceleration coefficients to 2, in order to make the mean of both stochastic factors in (7) unity, so that particles would overfly the target about half the time. The first equation shows that, in PSO, the search towards the optimum solution is guided by the previous velocity, the cognitive component, and the social component.
Since the introduction of particle swarm optimization, numerous variations of the algorithm have been developed in the literature. Eberhart and Shi showed that PSO searches wide areas effectively but tends to lack local search precision, and they proposed a solution by introducing an inertia factor ω. In this paper, we refer to this variant as basic particle swarm optimization (BPSO):
v_{id}(k + 1) = ω v_{id}(k) + c_1 r_1 (p_{id} − x_{id}(k)) + c_2 r_2 (p_{gd} − x_{id}(k))  (9)
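As a concrete illustration of the update in (7)–(9), the following sketch performs one BPSO iteration; the function name and the list-based data layout are illustrative, and fitness evaluation with the pbest/gbest bookkeeping is left to the caller.

```python
import random

def bpso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One BPSO iteration over all particles: inertia, cognitive, and social
    terms form the new velocity (eq. (9)); the position update follows eq. (8).
    x, v, pbest are lists of equal-length coordinate lists; gbest is one list."""
    for i in range(len(x)):
        for d in range(len(x[i])):
            r1, r2 = random.random(), random.random()
            v[i][d] = (w * v[i][d]
                       + c1 * r1 * (pbest[i][d] - x[i][d])
                       + c2 * r2 * (gbest[d] - x[i][d]))
            x[i][d] += v[i][d]
    return x, v
```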
3.2. Time-Varying Inertia Weight (TVIW)
The role of the inertia weight ω is considered very important for PSO convergence behavior. The inertia weight controls the impact of the previous history of velocities on the current velocity: a large inertia weight facilitates global exploration, while a small one tends to facilitate local exploitation. In order to ensure that the particles converge to the best point in the course of the search, Shi and Eberhart [30] found that a time-varying inertia weight (TVIW) significantly improves the performance of PSO and proposed linear decreasing inertia weight PSO (LWPSO), with a linearly decreasing value of ω. This modification increases the exploration of the parameter space during the initial search iterations and increases the exploitation of the parameter space during the final steps of the search [31]. The mathematical representation of the inertia weight is given as follows:
ω = ω_2 + (ω_1 − ω_2) · (MAXITER − k) / MAXITER  (10)
where ω_1 and ω_2 are the initial and final values of the inertia weight, respectively, k is the current iteration number, and MAXITER is the maximum number of allowable iterations. Shi and Eberhart [27] indicated that the optimal solution can be improved by varying the value of ω from 0.9 at the beginning of the search to 0.4 at the end of the search for most problems.
Guimin et al. [28] proposed a natural exponential (base e) inertia weight strategy, named EPSO and expressed as
ω = ω_2 + (ω_1 − ω_2) · e^{−k/(MAXITER/10)}  (11)
3.3. Time-Varying Acceleration Coefficient (TVAC)
In PSO, each particle is updated according to the cognitive component and the social component; therefore, proper control of these two components is very important to find the optimum solution accurately and efficiently. Ratnaweera et al. [23] introduced time-varying acceleration coefficients (TVAC), which reduce the cognitive component and increase the social component by changing the acceleration coefficients c_1 and c_2 with the time evolution. The objective of this development is to enhance the global search in the early part of the optimization and to encourage the particles to converge towards the global optimum at the end of the search. TVAC is represented by the following equations:
c_1 = c_max − (c_max − c_min) · k / MAXITER,  c_2 = c_min + (c_max − c_min) · k / MAXITER  (12)
where c_min and c_max are constants, k is the current iteration number, and MAXITER is the maximum number of allowable iterations.
Simulations were carried out with numerical benchmarks to find the best ranges of values for c_1 and c_2. From the results, it was observed that the best solutions were obtained when changing c_1 from 2.5 to 0.5 and c_2 from 0.5 to 2.5 over the full search range.
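The three parameter schedules above can be summarized in a few lines of code. The linear and TVAC schedules follow (10) and (12) directly; the decay constant MAXITER/10 used for the EPSO weight is one common form of the natural exponential strategy and should be treated as an assumption.

```python
import math

def lwpso_weight(k, max_iter, w1=0.9, w2=0.4):
    """Linearly decreasing inertia weight, eq. (10): w goes from w1 down to w2."""
    return w2 + (w1 - w2) * (max_iter - k) / max_iter

def epso_weight(k, max_iter, w1=0.9, w2=0.4):
    """Natural exponential inertia weight (EPSO); the decay constant
    max_iter/10 is an assumed form of eq. (11)."""
    return w2 + (w1 - w2) * math.exp(-k / (max_iter / 10.0))

def tvac_coefficients(k, max_iter, c_min=0.5, c_max=2.5):
    """TVAC, eq. (12): c1 decreases from c_max to c_min while c2 increases
    from c_min to c_max over the run."""
    c1 = c_max - (c_max - c_min) * k / max_iter
    c2 = c_min + (c_max - c_min) * k / max_iter
    return c1, c2
```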
4. Proposed New Developments
In the particle swarm algorithm, the trajectory of each individual in the search space is adjusted by dynamically altering the velocity of each particle, according to its own flying experience and the flying experience of the other particles in the search space [23]. Kennedy and Eberhart [22] indicate that a relatively high value of the cognitive component, compared with the social component, will result in excessive wandering of individuals through the search space. In contrast, a relatively high value of the social component may lead particles to rush prematurely towards a local optimum.
Considering those concerns, a novel and effective approach to the PSO algorithm is proposed in this paper. Particles are divided into several groups, and each group contains two particles. One particle in the group, denoted number 1, is developed according to its own flying experience and the flying experience of the swarm, with a high individual-experience acceleration coefficient and a low group-experience acceleration coefficient; number 1 is therefore encouraged to wander through the entire search space without clustering around local optima. The other particle in the group, denoted number 2, is developed according to the flying experience of the swarm and the flying experience of the group (both number 1 and number 2). Whenever the flying experience (best position) of the group changes, the position of number 2 is reset to the group best position and its velocity is set to zero. Number 2 is encouraged to converge towards the global optimum, with a small cognitive component and a large social component. Consider
| (13) |
where r_1^t, r_2^t, r_3^t, and r_4^t are random values, uniformly distributed between zero and one and drawn anew at every iteration, x_{id}^t(k) is the dth dimension position of subparticle i of the tth group after k iterations, ω_i^t is the inertia weight of subparticle i of the tth group, c_max is the maximum value of the acceleration coefficients, c_min is the minimum value of the acceleration coefficients, v_{id}^t(k) is the dth dimension velocity of subparticle i of the tth group during the kth iteration, p_{id}^t(k) is the dth dimension of the optimal position of subparticle i of the tth group after k iterations, p_{ud}^t(k) is the dth dimension of the optimal position of the tth group after k iterations, and p_{gd}(k) is the dth dimension of the swarm optimal position after k iterations.
Remark 1 . —
In this paper, two subparticles are generated for each group; therefore, i = 1,2.
Define n as the number of particles in the swarm, M as the maximum number of iterations, and fitness(x_i^t(k)) as the value of the cost function for subparticle i of the tth group after k iterations.
The detailed steps are shown as in Algorithm 1.
Algorithm 1.
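Since the update equations in (13) and the pseudocode of Algorithm 1 are not reproduced above, the following is a hedged sketch of the grouped two-particle scheme described in this section. The coefficient assignments (c_max for the explorer's cognitive term, c_max for the exploiter's social term), the boundary clamping, and the way the group best enters the update are interpretations, not the authors' exact formulation.

```python
import random

def npso_minimize(cost, dim, bounds, n_groups=15, iters=1000,
                  w=0.7, c_min=0.5, c_max=2.5):
    """Hedged sketch of grouped NPSO: each group holds an explorer ("number 1",
    guided by its own best and the swarm best with a large cognitive and small
    social coefficient) and an exploiter ("number 2", guided by the group best
    and the swarm best with a small cognitive and large social coefficient,
    reset to the group best with zero velocity whenever that best improves)."""
    lo, hi = bounds
    rand = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    x = [[rand(), rand()] for _ in range(n_groups)]
    v = [[[0.0] * dim, [0.0] * dim] for _ in range(n_groups)]
    pbest = [[list(x[t][0]), list(x[t][1])] for t in range(n_groups)]
    pbest_f = [[cost(x[t][0]), cost(x[t][1])] for t in range(n_groups)]
    gbest_group = [list(pbest[t][0]) if pbest_f[t][0] < pbest_f[t][1] else list(pbest[t][1])
                   for t in range(n_groups)]
    gbest_group_f = [min(pbest_f[t]) for t in range(n_groups)]
    g = min(range(n_groups), key=lambda t: gbest_group_f[t])
    gbest, gbest_f = list(gbest_group[g]), gbest_group_f[g]

    for _ in range(iters):
        for t in range(n_groups):
            for i in (0, 1):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    if i == 0:   # explorer: own best + swarm best
                        v[t][i][d] = (w * v[t][i][d]
                                      + c_max * r1 * (pbest[t][i][d] - x[t][i][d])
                                      + c_min * r2 * (gbest[d] - x[t][i][d]))
                    else:        # exploiter: group best + swarm best
                        v[t][i][d] = (w * v[t][i][d]
                                      + c_min * r1 * (gbest_group[t][d] - x[t][i][d])
                                      + c_max * r2 * (gbest[d] - x[t][i][d]))
                    x[t][i][d] = min(hi, max(lo, x[t][i][d] + v[t][i][d]))
                f = cost(x[t][i])
                if f < pbest_f[t][i]:
                    pbest_f[t][i], pbest[t][i] = f, list(x[t][i])
                if f < gbest_group_f[t]:
                    gbest_group_f[t], gbest_group[t] = f, list(x[t][i])
                    x[t][1], v[t][1] = list(gbest_group[t]), [0.0] * dim   # reset "number 2"
                if f < gbest_f:
                    gbest_f, gbest = f, list(x[t][i])
    return gbest, gbest_f
```

For example, `npso_minimize(lambda x: sum(xi * xi for xi in x), dim=10, bounds=(-100.0, 100.0))` would minimize the sphere benchmark; with n = 30 particles as in Table 1, n_groups = 15.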
5. Experimental Settings and Simulation for Benchmark Testing
Simulations were carried out to observe the rate of convergence and the quality of the optimum solution of the new method introduced in this investigation by comparing it with BPSO, LWPSO, EPSO, and TVAC. From the standard set of benchmark problems available in the literature, 6 important functions are considered to test the efficacy of the proposed method. All of the test functions reflect different degrees of complexity.
5.1. Functions Introduction
The functions are as follows.
5.1.1. Sphere Function
Consider
f_1(x) = \sum_{i=1}^{D} x_i^2  (14)
With the search space {x_i ∣ −100 < x_i < 100}, the global minimum is located at x = [0, …, 0]^D with f(x) = 0. It is a very simple, convex, unimodal function with only one local optimum.
5.1.2. Rotated Hyperellipsoid Function: Schwefel's Problem 1.2
Consider
f_2(x) = \sum_{i=1}^{D} ( \sum_{j=1}^{i} x_j )^2  (15)
With the search space {x_i ∣ −100 < x_i < 100}, the global minimum is located at x = [0, …, 0]^D with f(x) = 0. It is continuous, convex, and unimodal. With respect to the coordinate axes, this function produces rotated hyperellipsoids.
5.1.3. Rosenbrock Function
Consider
f_3(x) = \sum_{i=1}^{D−1} [ 100 (x_{i+1} − x_i^2)^2 + (x_i − 1)^2 ]  (16)
With the search space {x_i ∣ −100 < x_i < 100}, the global minimum is located at x = [1, …, 1]^D with f(x) = 0. It is a unimodal function, and the global optimum lies inside a long, narrow, parabolic-shaped flat valley. Finding the valley is trivial; however, converging to the global optimum is difficult.
5.1.4. Rastrigin Function
Consider
f_4(x) = \sum_{i=1}^{D} [ x_i^2 − 10 cos(2π x_i) + 10 ]  (17)
With the search space {x_i ∣ −100 < x_i < 100}, the global minimum is located at x = [0, …, 0]^D with f(x) = 0. It is highly multimodal. However, the locations of the minima are regularly distributed.
5.1.5. Griewank Function
Consider
f_5(x) = (1/4000) \sum_{i=1}^{D} x_i^2 − \prod_{i=1}^{D} cos(x_i / \sqrt{i}) + 1  (18)
With the search space {x_i ∣ −100 < x_i < 100}, the global minimum is located at x = [0, …, 0]^D with f(x) = 0. It is a multimodal function and has many widespread local minima. However, the locations of the minima are regularly distributed.
5.1.6. Sum of Different Powers Function
Consider
f_6(x) = \sum_{i=1}^{D} |x_i|^{i+1}  (19)
The sum of different powers is a commonly used unimodal test function. With the search space {x_i ∣ −100 < x_i < 100}, the global minimum is located at x = [0, …, 0]^D with f(x) = 0.
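For reference, the six benchmarks (14)–(19) can be written in code using their standard definitions; the function names below are illustrative.

```python
import math

def sphere(x):                   # eq. (14)
    return sum(xi ** 2 for xi in x)

def rotated_hyperellipsoid(x):   # eq. (15), Schwefel's Problem 1.2
    return sum(sum(x[: i + 1]) ** 2 for i in range(len(x)))

def rosenbrock(x):               # eq. (16)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):                # eq. (17)
    return sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def griewank(x):                 # eq. (18)
    s = sum(xi ** 2 for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0

def sum_of_different_powers(x):  # eq. (19): sum of |x_i|^(i+1), i starting at 1
    return sum(abs(xi) ** (i + 2) for i, xi in enumerate(x))
```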
5.2. The Coefficients Setting
The parameters for the simulation are listed in Table 1.
Table 1. Parameters for simulation.

| Method | Parameters |
|---|---|
| NPSO | ω_i^t = 0.7, c_max = 2.5, c_min = 0.5 |
| BPSO | c_1 = c_2 = 2.0, ω = 0.7 |
| LWPSO | c_1 = c_2 = 2.0, ω_1 = ω_max = 0.9, ω_2 = ω_min = 0.4 |
| EPSO | c_1 = c_2 = 2.0, ω_1 = ω_max = 0.9, ω_2 = ω_min = 0.4 |
| TVAC | c_max = 2.5, c_min = 0.5, ω = 0.7 |
| All methods | n = 30, D = 10, x_i ∈ [−100, 100], v_i ∈ [−100, 100] |
In this table, c_(·) denotes the acceleration coefficients, ω denotes the inertia weight, D is the dimension, and x_(·) and v_(·) are the ranges of the search space and the velocity space, respectively. If the current position is outside the search space, the position of the particle is set to the boundary value and its velocity is set to zero. If the velocity of the particle is outside its boundary, its value is set to the boundary value. The maximum number of iterations is set to 1000. For each function, 100 trials were carried out, and the average optimal value and the standard deviation are presented. To verify the performance of the algorithms at different dimensions, the dimension D is increased from 10 to 70, and the optimal mean and standard deviation of the 6 test functions are calculated for the 5 algorithms.
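The boundary handling described above amounts to a small helper like the following sketch (the function name is illustrative):

```python
def clamp_particle(x, v, lo=-100.0, hi=100.0):
    """Boundary handling used in the experiments: a position outside the search
    space is set to the boundary and its velocity is zeroed; a velocity outside
    the velocity range is set to the boundary value."""
    for d in range(len(x)):
        if x[d] < lo or x[d] > hi:
            x[d] = lo if x[d] < lo else hi
            v[d] = 0.0
        if v[d] < lo:
            v[d] = lo
        elif v[d] > hi:
            v[d] = hi
    return x, v
```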
5.3. The Results Comparing NPSO with the Previous Developments of PSO
The simulation results are given in Table 2. The comparison results show that, for the unimodal functions, the searching accuracy and stability, ranked from lowest to highest, are BPSO, LWPSO, EPSO, TVAC, and NPSO. For the multimodal functions, the performance of NPSO is comparable to that of TVAC, but, for the highly multimodal Rastrigin function, TVAC is superior to NPSO.
Table 2. Comparison between BPSO, LWPSO, EPSO, TVAC, and NPSO. Entries are the average (standard deviation).

| F | D | BPSO | LWPSO | EPSO | TVAC | NPSO |
|---|---|---|---|---|---|---|
| f_1 | 10 | 4.7979e−11 (1.1343e−10) | 3.2219e−24 (1.0706e−23) | 8.7747e−51 (3.408e−50) | 1.4826e−54 (1.4770e−53) | 2.2681e−78 (2.1874e−77) |
| f_1 | 30 | 5.3768 (4.3459) | 4.0137e−05 (5.926e−05) | 2.3219e−12 (1.1495e−11) | 1.1062e−10 (7.3302e−10) | 2.0175e−23 (2.0172e−22) |
| f_1 | 50 | 2.8412e+01 (4.5047) | 0.2149 (0.1634) | 2.2215e−05 (3.5696e−05) | 0.0019 (0.0079) | 7.6676e−09 (5.7490e−08) |
| f_1 | 70 | 4.4685e+01 (4.3454) | 1.1727e+01 (1.2113e+01) | 0.0192 (0.0238) | 0.0609 (0.1033) | 0.0010 (0.0082) |
| f_2 | 10 | 0.0013 (0.0018) | 4.0624e−08 (1.10659e−07) | 3.1542e−15 (1.9686e−14) | 3.6072e−27 (1.6376e−26) | 3.1897e−27 (1.0275e−26) |
| f_2 | 30 | 2.0647e+01 (7.9831) | 3.7596 (2.0255) | 0.5809 (0.3373) | 0.0091 (0.0197) | 0.0012 (0.0101) |
| f_2 | 50 | 7.2982e+01 (2.8263e+01) | 2.6189e+01 (1.3125e+01) | 1.0295e+01 (5.1265) | 0.4883 (0.3021) | 0.1684 (0.7178) |
| f_2 | 70 | 1.4821e+02 (4.6281e+01) | 6.9336e+01 (3.3717e+01) | 2.9336e+01 (1.0417e+01) | 2.7039 (1.0853) | 0.8808 (1.3369) |
| f_3 | 10 | 5.6418 (1.092) | 4.3037 (1.2401) | 3.1701 (1.2637) | 0.7262 (0.9490) | 0.6790 (1.0522) |
| f_3 | 30 | 1.2322e+03 (9.4693e+02) | 4.2313e+01 (2.6596e+01) | 3.1052e+01 (1.7699e+01) | 2.3370e+01 (1.9238) | 2.2096e+01 (5.8478) |
| f_3 | 50 | 6.5575e+03 (1.3919e+03) | 2.9054e+02 (1.6229e+02) | 7.3894e+01 (3.6633e+01) | 4.6563e+01 (2.5861) | 4.6325e+01 (1.2231e+01) |
| f_3 | 70 | 1.1834e+04 (1.9184e+03) | 5.2635e+03 (4.0246e+03) | 2.0414e+02 (7.4873e+01) | 7.5240e+01 (1.2063e+01) | 8.0415e+01 (2.9457e+01) |
| f_4 | 10 | 5.1090 (3.0667) | 3.6806 (1.7685) | 3.8703 (1.7882) | 0.9750 (0.9693) | 2.0911 (1.2360) |
| f_4 | 30 | 1.5696e+02 (3.9880e+01) | 3.2291e+01 (1.0059e+01) | 2.9948e+01 (7.1988) | 1.7282e+01 (5.4892) | 2.0219e+01 (6.9868) |
| f_4 | 50 | 3.8006e+02 (3.5258e+01) | 7.5142e+01 (2.5871e+01) | 6.1896e+01 (1.5107e+01) | 3.6575e+01 (9.4137) | 4.3956e+01 (1.1176e+01) |
| f_4 | 70 | 5.8512e+02 (3.6592e+01) | 1.6259e+02 (6.2498e+01) | 8.8994e+01 (1.7553e+01) | 5.4198e+01 (1.1482e+01) | 7.3153e+01 (1.4861e+01) |
| f_5 | 10 | 9.8646e−05 (0.0009) | 0.0041 (0.0096) | 0.0017 (0.0055) | 0 (0) | 0 (0) |
| f_5 | 30 | 0.1366 (0.1090) | 0.0008 (0.0045) | 0.0013 (0.0071) | 1.0210e−10 (1.0021e−09) | 2.3425e−16 (8.3751e−16) |
| f_5 | 50 | 0.5856 (0.0891) | 0.0027 (0.0021) | 9.9034e−05 (0.0009) | 0.0001 (0.0007) | 4.3243e−10 (2.4063e−09) |
| f_5 | 70 | 0.6941 (0.0646) | 0.0329 (0.0190) | 0.0002 (0.0010) | 0.0011 (0.0017) | 1.1807e−05 (4.5670e−05) |
| f_6 | 10 | 5.8516e−18 (2.5897e−17) | 2.2299e−40 (1.6366e−39) | 2.0832e−84 (1.6585e−83) | 2.5229e−87 (1.1045e−86) | 1.6901e−96 (1.6901e−95) |
| f_6 | 30 | 1.9895e+02 (6.2708e+02) | 1.0418e−06 (3.7845e−06) | 7.3163e−18 (2.7837e−17) | 1.0483e−39 (7.2123e−39) | 4.6820e−50 (4.6820e−49) |
| f_6 | 50 | 1.9927e+07 (7.1603e+07) | 1.8348e+01 (4.8704e+01) | 0.0017 (0.0067) | 6.5668e−25 (3.1648e−24) | 4.3136e−21 (4.2561e−20) |
| f_6 | 70 | 1.5637e+13 (8.3656e+13) | 2.6886e+08 (1.6837e+09) | 1.4973e+04 (1.0807e+05) | 4.9509e−19 (2.6306e−18) | 7.7603e−13 (7.5801e−12) |
As the dimension of the test functions increases, the searching accuracy and stability of each algorithm decrease, and the performance of NPSO remains superior to that of the other algorithms. With regard to the highly multimodal function, the performance of TVAC is superior to NPSO. Figure 5 shows that the searching speed, ordered from high to low, is BPSO (green solid curve), LWPSO (red circle), EPSO (blue triangle), TVAC (black dotted curve), and NPSO (blue dash-dot). It is obvious that the performance of NPSO is more effective than that of the other algorithms. This is because, in NPSO, two particles evolve cooperatively within the same group: one particle is encouraged to wander through the entire search space without clustering around local optima, while the other flies between the group experience and the swarm experience and is encouraged to converge towards the global optimum.
Figure 5. Variation of average optimum value with time.
6. The Simulation of UUV Path Planning
The simulation is designed using the proposed algorithm for UUV path planning in a dynamic obstacle environment. The UUV starts at the initial point, the candidate waypoint for the next step is searched using NPSO within the turning angle limitation, and the next path point is the one at which the cost function reaches its minimum value. The simulation uses a map of size 1000 m × 1000 m, shown in Figure 6. The initial point is A (50, 50), and the dock hatch point is D (900, 900). The maximum turning limitation is |Δψ| ≤ π/3, with k_1 = k_2 = k_3 = 1 and N = 1. To simplify the issue, the velocity is set to a constant 4 m/s, and the UUV encounters dynamic obstacle B, which moves parallel to the y-axis with sway coordinate 200 m, and dynamic obstacle C, which moves parallel to the x-axis with surge coordinate 800 m.
Figure 6. Path planning in dynamic obstacles environment.
In Figure 6, the UUV avoids obstacle B by steering to the left and avoids obstacle C by changing its course towards its starboard side. The simulation results demonstrate that the proposed algorithm effectively implements path planning in a dynamic obstacle environment.
To compare the performance of NPSO with BPSO, LWPSO, EPSO, and TVAC for path planning, a simulation is designed for path planning without dynamic obstacles. Table 3 shows the parameters used for the different algorithms, and Table 4 shows the simulation results.
Table 3. Parameters for simulation.

| Method | Parameters |
|---|---|
| NPSO | ω_i^t = 0.7, c_max = 2.5, c_min = 0.5 |
| BPSO | c_1 = c_2 = 2.0, ω = 0.7 |
| LWPSO | c_1 = c_2 = 2.0, ω_1 = ω_max = 0.9, ω_2 = ω_min = 0.4 |
| EPSO | c_1 = c_2 = 2.0, ω_1 = ω_max = 0.9, ω_2 = ω_min = 0.4 |
| TVAC | c_max = 2.5, c_min = 0.5, ω = 0.7 |
| All methods | n = 30, D = 3, x_i ∈ [−π/3, π/3], v_i ∈ [−π/3, π/3] |
Table 4. Comparison between BPSO, LWPSO, EPSO, TVAC, and NPSO for UUV path planning. Entries are the length of the path (m).

| M | BPSO | LWPSO | EPSO | TVAC | NPSO |
|---|---|---|---|---|---|
| 10 | 1290.2548 | 1289.9196 | 1289.4264 | 1288.8112 | 1288.0532 |
| 50 | 1287.8672 | 1287.8556 | 1287.8548 | 1287.8544 | 1287.8544 |
| 100 | 1287.8596 | 1287.8544 | 1287.8544 | 1287.8544 | 1287.8544 |
| 200 | 1287.8544 | 1287.8544 | 1287.8544 | 1287.8544 | 1287.8544 |
| 500 | 1287.8544 | 1287.8544 | 1287.8544 | 1285.7200 | 1284.8792 |
| 1000 | 1287.8544 | 1287.8544 | 1287.8544 | 1285.7200 | 1284.8792 |
From Table 4, NPSO and TVAC show better performance than the other PSO variants.
To compare the performance of the algorithm proposed in this paper with the traditional APF and A* methods, a complex environment map is introduced in Figure 7.
Figure 7. UUV path planning under complex environment.
As shown in Figure 7, the proposed algorithm obtains better performance than the traditional APF for UUV path planning in a complex environment. If the starting point is A (50, 50) and the target is B (950, 950), the traditional APF easily falls into local minima and cannot finish the path planning. To further analyze the performance, we set starting point C (50, 950) and target D (950, 50); from Figure 7, the traditional APF tends to oscillate in front of the obstacle. The A* algorithm discretizes the search space with a uniform grid, so the possible bearings are discretized too; as a consequence, the time between consecutive surfacings is nonconstant. From Table 5, the proposed algorithm obtains a shorter path than the other algorithms; it also obtains a smoother path and is superior to the traditional APF and A* algorithms. Table 6 shows the computation time of the different algorithms: the traditional APF takes the shortest time, and NPSO takes the longest.
Table 5. Comparison of different algorithms for path planning. Entries are the length of the path (m).

| Path | NPSO | APF | A* |
|---|---|---|---|
| A → B | 1593.79 | — | 1716.81 |
| C → D | 1307.15 | 1547.84 | 1401.66 |
Table 6. Computation time of different algorithms for path planning. Entries are the computation time (s).

| Path | NPSO | APF | A* |
|---|---|---|---|
| A → B | 94.83 | — | 11.70 |
| C → D | 54.55 | 0.27 | 10.17 |
7. Conclusion
In this paper, UUV path planning for homing and docking in a dynamic obstacle environment using NPSO is presented, and an appropriate strategy for mother vessel recovery is explored. The simulation results demonstrate the feasibility of NPSO: the UUV can avoid the dynamic obstacles and navigate along a reasonable path to implement the recovery.
The simulation of UUV path planning in a complex environment shows that the proposed algorithm can obtain a better path than the traditional APF and A* algorithms and is less likely to be trapped in local minima. In order to completely escape from local minima or U-shaped obstacles, it is necessary to design escape rules, such as wall-following or random escaping.
NPSO is introduced and tested on a set of 6 benchmark functions, and the statistical analyses of the simulated results are compared with BPSO, LWPSO, EPSO, and TVAC. The test of the proposed method on the unimodal benchmark functions indicates its superiority over the other methods; however, for the highly multimodal Rastrigin function, TVAC shows better performance than NPSO.
Acknowledgments
This work is partially supported by the Natural Science Foundation of China (51179038; 51109043; and 51309067/E091002) and the Foundation of Heilongjiang Province, China (E201123).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
- 1. Cheng C-T, Fallahi K, Leung H, Tse CK. A genetic algorithm-inspired UUV path planner based on dynamic programming. IEEE Transactions on Systems, Man and Cybernetics C: Applications and Reviews. 2012;42(6):1128–1134.
- 2. Kim MH, Sur J. A stream function approach to design obstacle avoidance path planning algorithm for autonomous underwater vehicles. Proceedings of the 21st International Offshore and Polar Engineering Conference (ISOPE '11); June 2011; pp. 285–290.
- 3. Li B, Gong L, Yang W. An improved artificial bee colony algorithm based on balance evolution strategy for unmanned combat aerial vehicle (UCAV) path planning. The Scientific World Journal. 2014;2014:10 pages. doi: 10.1155/2014/232704.
- 4. Mansury E, Nikookar A, Salehi ME. Artificial Bee Colony optimization of Ferguson splines for soccer robot path planning. Proceedings of the 1st RSI/ISM International Conference on Robotics and Mechatronics (ICRoM '13); February 2013; pp. 85–89.
- 5. Khelchandra T, Jie H. A recursive sampling based method for path planning. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC '07); October 2007; pp. 2408–2412.
- 6. Fernández-Perdomo E, Cabrera-Gámez J, Hernández-Sosa D, et al. Adaptive bearing sampling for a constant-time surfacing A* path planning algorithm for gliders. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11); 2011; pp. 2350–2355.
- 7. Li G, Tamura Y, Yamashita A, et al. Effective improved artificial potential field-based regression search method for autonomous mobile robot path planning. International Journal of Mechatronics and Automation. 2013;3(3):141–170.
- 8. Jaillet L, Cortés J, Siméon T. Sampling-based path planning on configuration-space costmaps. IEEE Transactions on Robotics. 2010;26(4):635–646.
- 9. Brand M, Masuda M, Wehner N, Yu X-H. Ant colony optimization algorithm for robot path planning. Proceedings of the International Conference on Computer Design and Applications (ICCDA '10); June 2010; pp. V3436–V3440.
- 10. Yun G, Zhiqiang W, Feixiang G, et al. Dynamic path planning for underwater vehicles based on modified artificial potential field method. Proceedings of the IEEE 4th International Conference on Digital Manufacturing and Automation (ICDMA '13); 2013; pp. 518–521.
- 11. Dolgov D, Thrun S, Montemerlo M, Diebel J. Path planning for autonomous vehicles in unknown semi-structured environments. International Journal of Robotics Research. 2010;29(5):485–501.
- 12. Oh KH, Kim JY, Park IW, et al. A study on the control of AUV's homing and docking. Proceedings of the IEEE Conference on Mechatronics and Machine Vision in Practice; 2002; Chiang Mai, Thailand.
- 13. Fukasawa T, Noguchi T, Kawasaki T, Baino M. 'Marine Bird', a new experimental AUV with underwater docking and recharging system. Proceedings of OCEANS 2003; September 2003; pp. 2195–2200.
- 14. McEwen RS, Hobson BW, Bellingham JG, McBride L. Docking control system for a 54-cm-diameter (21-in) AUV. IEEE Journal of Oceanic Engineering. 2008;33(4):550–562.
- 15. Feezor MD, Sorrell FY, Blankinship PR, Bellingham JG. Autonomous underwater vehicle homing/docking via electromagnetic guidance. IEEE Journal of Oceanic Engineering. 2001;26(4):515–521.
- 16. Martins A, Almeida JM, Ferreira H, et al. Autonomous surface vehicle docking manoeuvre with visual information. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07); April 2007; pp. 4994–5000.
- 17. Batista P, Silvestre C, Oliveira P. A two-step control strategy for docking of autonomous underwater vehicles. Proceedings of the American Control Conference (ACC '12); June 2012; pp. 5395–5400.
- 18. Breivik M, Loberg J-E. A virtual target-based underway docking procedure for unmanned surface vehicles. Proceedings of the 18th IFAC World Congress; September 2011; pp. 13630–13635.
- 19. Sujit PB, Healey AJ, Sousa JB. AUV docking on a moving submarine using a K-R navigation function. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11); September 2011; pp. 3154–3159.
- 20. Abdulameer MH, Abdullah SNHS, Othman ZA. Support vector machine based on adaptive acceleration particle swarm optimization. The Scientific World Journal. 2014;2014:8 pages. doi: 10.1155/2014/835607.
- 21. Li H, Jiang H, Li B. Reflection reduction on DDR3 high-speed bus by improved PSO. The Scientific World Journal. 2014;2014:11 pages. doi: 10.1155/2014/257972.
- 22. Kennedy J, Eberhart R. Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks; December 1995; pp. 1942–1948.
- 23. Ratnaweera A, Halgamuge SK, Watson HC. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on Evolutionary Computation. 2004;8(3):240–255.
- 24. Ni Q, Deng J. A new logistic dynamic particle swarm optimization algorithm based on random topology. The Scientific World Journal. 2013;2013:8 pages. doi: 10.1155/2013/409167.
- 25. Clerc M, Kennedy J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation. 2002;6(1):58–73.
- 26. Arumugam MS, Rao MVC, Tan AWC. A novel and effective particle swarm optimization like algorithm with extrapolation technique. Applied Soft Computing Journal. 2009;9(1):308–320.
- 27. Shi Y, Eberhart RC. Empirical study of particle swarm optimization. Proceedings of the IEEE Congress on Evolutionary Computation (CEC '99); 1999.
- 28. Guimin C, Xinbo H, Jianyuan J, Zhengfeng M. Natural exponential inertia weight strategy in particle swarm optimization. Proceedings of the 6th World Congress on Intelligent Control and Automation (WCICA '06); June 2006; pp. 3672–3675.
- 29. Jiao B, Lian Z, Chen Q. A dynamic global and local combined particle swarm optimization algorithm. Chaos, Solitons and Fractals. 2009;42(5):2688–2695.
- 30. Shi Y, Eberhart R. A modified particle swarm optimizer. Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), IEEE World Congress on Computational Intelligence; May 1998; pp. 69–73.
- 31. Schwaab M, Biscaia EC Jr., Monteiro JL, Pinto JC. Nonlinear parameter estimation through particle swarm optimization. Chemical Engineering Science. 2008;63(6):1542–1552.