Computational Intelligence and Neuroscience. 2017 Aug 15;2017:6573623. doi: 10.1155/2017/6573623

A Novel Strategy for Minimum Attribute Reduction Based on Rough Set Theory and Fish Swarm Algorithm

Yuebin Su 1,2,*, Jin Guo 1
PMCID: PMC5574250  PMID: 28894462

Abstract

For data mining, removing unnecessary redundant attributes, a process known as attribute reduction (AR), and in particular finding reducts with minimal cardinality, is an important preprocessing step. In this paper, using a coding method for the combination subsets of an attribute set, a novel search strategy for minimal attribute reduction based on rough set theory (RST) and the fish swarm algorithm (FSA) is proposed. The method first identifies the core attributes by the discernibility matrix, and all subsets of the noncore attribute set with the same cardinality are encoded into integers as the individuals of the FSA. The coding method then restricts the evolutionary direction of each individual to a certain extent. The fitness function of an individual is defined based on the attribute dependency of RST, and the FSA is used to find the optimal set of reducts. In each loop, if the maximum attribute dependency equals the attribute dependency of the condition attribute set, the algorithm terminates; otherwise, a single attribute is added for the next loop. Some well-known datasets from UCI were selected to verify this method. The experimental results show that the proposed method finds the minimal attribute reduction set effectively and has excellent global search ability.

1. Introduction

Data mining, also known as knowledge discovery in databases, includes extracting knowledge, discovering new patterns, and predicting future trends from large amounts of data. Nowadays, with an increasing number of applications in different fields producing massive volumes of very high-dimensional data, data mining faces great challenges. As is well known, many datasets contain unnecessary redundant attributes, which not only occupy extensive computing resources but also seriously impair the decision-making process. Removing these redundant attributes is therefore essential for data mining [1]. Attribute reduction (AR) in rough set theory (RST) removes redundant or insignificant knowledge while keeping the classification ability of the information system unchanged. It was proposed by Pawlak and Slowinski [2]. Now, RST is widely used in many fields such as machine learning, data mining, and knowledge discovery [3–6].

AR is one of the core problems in RST. In particular, the minimal reduction problem, in which the cardinality of the attribute subset is the smallest among all possible reductions, is an important part of AR in RST and has received much attention from researchers. One basic approach to finding minimal reducts is to construct a discernibility function from the dataset via the discernibility matrix and simplify it [7–9]. Unfortunately, it has been shown that minimal reduct generation is NP-hard and that the run time of generating all reducts is exponential [10]. Because many kinds of NP-hard problems can be solved by heuristic algorithms at acceptable computational cost, heuristic attribute reduction algorithms have become the main research direction in the field of AR [11].

In general, swarm intelligence algorithms are a class of heuristic approaches widely used for solving the attribute reduction problem, including the genetic algorithm (GA) [12–14], particle swarm optimization (PSO) [15–18], ant colony optimization (ACO) [19, 20], and the fish swarm algorithm (FSA) [11, 21, 22]. FSA is an evolutionary algorithm inspired by the natural schooling behaviors of fish, such as random, swarming, following, and preying behaviors, which are used to generate candidate solutions for optimization problems. It has a strong ability to avoid local minima and thereby achieve global optimization [23]. Due to these abilities, FSA has received much attention in recent years.

In this paper, a new coding method for the subsets of an attribute set is proposed. Based on this coding method, a novel strategy for minimal attribute reduction using FSA and RST is developed. It first identifies the core attributes by the discernibility matrix. All subsets not containing the core attributes are then encoded into integers by the proposed coding method, and an initial population is generated for the FSA, which is used to find the optimal set of reducts. The fitness function of a subset is defined based on the attribute dependency of the corresponding rough set. In each loop, the evolutionary direction of each individual is restricted to a certain extent by the coding method. If the maximum attribute dependency equals the attribute dependency of the condition attribute set, the algorithm terminates; otherwise, a single attribute is added for the next loop. Different benchmark datasets are used to compare the numerical results; the comparison shows that our proposed method is robust and economical in its calls to the fitness function.

The rest of the paper is organized as follows. In Section 2, we introduce some basic concepts of rough sets and the fish swarm algorithm. In Section 3, we focus on the coding method for combinations. In Section 4, a novel attribute reduction algorithm based on the fish swarm algorithm and rough sets is proposed. In Section 5, some well-known datasets are used to test the performance of the proposed method. Finally, Section 6 concludes the paper and outlines areas of further research.

2. Background

2.1. Base Notions of Rough Set Theory

In this section, some basic notions and propositions of rough set theory are reviewed.

A decision table can be represented as S = {U, A, V, f}, where U = {x_1, x_2,…, x_n} is a nonempty finite set of objects; A = C ∪ D, where C is the set of condition attributes and D is the set of decision attributes; V is the domain of the attributes in A; and f : U × A → V is a function assigning attribute values to the objects in U.

For any R ⊆ C ∪ D, there is an associated indiscernibility relation IND(R):

$$\mathrm{IND}(R) = \{(x, y) \in U \times U \mid f(x, a) = f(y, a),\ \forall a \in R\}. \tag{1}$$

Let X ⊆ U; the R-lower approximation of X is defined as $\underline{R}X = \{x \in U : [x]_R \subseteq X\}$, where $[x]_R$ denotes the equivalence class of IND(R) determined by object x. The R-positive region is given by $\mathrm{POS}_R(D) = \bigcup_{X \in U/\mathrm{IND}(D)} \underline{R}X$. The R-approximation quality with respect to the decision attribute set D is defined as follows:

$$\gamma_R = \frac{|\mathrm{POS}_R(D)|}{|U|} \tag{2}$$

and the core attribute set is defined as

$$\mathrm{Core} = \{a \in R \mid \gamma_{R \setminus \{a\}} \neq \gamma_R\}. \tag{3}$$
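To make these notions concrete, the following sketch (ours, not code from the paper) computes the dependency γ_R of (2) and the core of (3) for a small hypothetical decision table; the data and helper names are illustrative only.

```python
# Hypothetical decision table: each object is (condition attribute values, decision).
U = [((0, 1, 0), 'yes'), ((0, 1, 1), 'yes'), ((1, 0, 0), 'no'),
     ((1, 0, 1), 'no'), ((0, 0, 1), 'yes'), ((1, 1, 0), 'no')]
C = [0, 1, 2]  # indices of the condition attributes

def partition(attrs):
    """Equivalence classes of IND(attrs): objects with equal values on attrs."""
    blocks = {}
    for i, (cond, _) in enumerate(U):
        blocks.setdefault(tuple(cond[a] for a in attrs), set()).add(i)
    return blocks.values()

def gamma(attrs):
    """gamma_R = |POS_R(D)| / |U| per (2): fraction of objects whose
    equivalence block lies inside a single decision class."""
    pos = sum(len(b) for b in partition(attrs)
              if len({U[i][1] for i in b}) == 1)
    return pos / len(U)

# Core per (3): attributes whose removal changes the dependency.
core = [a for a in C if gamma([b for b in C if b != a]) != gamma(C)]
print(gamma(C), core)  # 1.0 [0] for this toy table
```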

2.2. The Principle of FSA

FSA is a bionic optimization algorithm that simulates fish swarm behaviors such as preying, swarming, and following, and records the maximum fitness value on a bulletin board. In FSA, let N be the population size. Artificial fish (AF) are generated by a random function, each represented by a D-dimensional position X_i = (x_{i1}, x_{i2},…, x_{iD}), and X_i|next is the updated value of X_i. The food satisfaction of X_i is represented by the fitness function value Y_i = F(X_i). The Euclidean distance d_{ij} = ‖X_i − X_j‖ denotes the relationship between X_i and X_j. Other parameters include step (the maximum step length), visual (the visual distance of a fish), rand (a random number in [0, 1]), and σ (a crowd factor).

Preying is the basic behavior of FSA. As shown in (4), for X_i we randomly select an X_j within the current visual scope. If Y_i < Y_j, X_i moves a step toward X_j; otherwise, another random X_j is selected and tested. After a number of trials, if no X_j satisfying Y_i < Y_j is found, X_i is moved directly to a random position within the visual scope. This allows FSA to escape from local optima. The prey(X_i) function is defined as in (4).

Swarming behavior is described by (5). It captures the attraction of the swarm center to an individual. Let n_f be the number of AFs within the current visual scope of X_i, and let X_{ic} be the center position of those neighbors. If the food satisfaction at the swarm center X_{ic} is greater and the area is not too crowded (i.e., σY_i < Y_c/n_f), X_i moves a step toward X_{ic}; otherwise, preying behavior is performed to determine the next position.

Following behavior is described by (6). Let X_{imax} be the AF with the greatest food concentration among the AFs in the current visual scope. If its food satisfaction is greater and the area is not too crowded (i.e., σY_i < Y_{max}/n_f), X_i moves a step toward X_{imax}; otherwise, preying behavior is performed to determine the next position.

$$X_i|_{next} = \begin{cases} X_i + \mathrm{rand} \times \mathrm{step} \times \dfrac{X_j - X_i}{\|X_j - X_i\|} & Y_i < Y_j \\ X_i + \mathrm{rand} \times \mathrm{step} & \text{else}, \end{cases} \tag{4}$$

$$X_i|_{next} = \begin{cases} X_i + \mathrm{rand} \times \mathrm{step} \times \dfrac{X_{ic} - X_i}{\|X_{ic} - X_i\|} & \sigma Y_i < \dfrac{Y_c}{n_f} \\ \mathrm{prey}(X_i) & \text{else}, \end{cases} \tag{5}$$

$$X_i|_{next} = \begin{cases} X_i + \mathrm{rand} \times \mathrm{step} \times \dfrac{X_{imax} - X_i}{\|X_{imax} - X_i\|} & \sigma Y_i < \dfrac{Y_{max}}{n_f} \\ \mathrm{prey}(X_i) & \text{else}. \end{cases} \tag{6}$$

In addition, the step and visual parameters play an important role in FSA. They determine the convergence speed of FSA and help it escape from local optima. They are updated as follows [11]:

$$\mathrm{visual} = \mathrm{visual} \times f\!\left(4 \times \frac{gen}{GEN},\, 0,\, 2\right), \qquad \mathrm{step}_{i+1} = \mathrm{step}_i \times g\!\left(\frac{gen}{GEN}\right), \tag{7}$$

where f(·) is the Lorentzian function, g(·) is the normal distribution function, and gen and GEN denote the current and maximum generation numbers, respectively.
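As a minimal illustration of these behaviors (our sketch under assumed parameter names, not the authors' implementation), preying and following in continuous space might look like the following; swarming is analogous, with the neighbor centroid in place of the best neighbor.

```python
import math
import random

def move_toward(x, target, step):
    """One step from x toward target along the unit direction, per (4)-(6)."""
    d = math.dist(x, target)                      # Euclidean distance
    r = random.random() * step                    # rand * step
    return tuple(xi + r * (ti - xi) / d for xi, ti in zip(x, target))

def prey(x, fitness, swarm, visual, step, tries=5):
    """Preying (4): move toward a random better neighbor; after `tries`
    failures, jump to a random position within the visual scope."""
    for _ in range(tries):
        xj = random.choice(swarm)
        if 0 < math.dist(x, xj) <= visual and fitness(xj) > fitness(x):
            return move_toward(x, xj, step)
    # no better neighbor found: random move to escape local optima
    return tuple(xi + (2 * random.random() - 1) * visual for xi in x)

def follow(x, fitness, swarm, visual, step, sigma=0.8):
    """Following (6): move toward the best neighbor if it is better and
    not too crowded; otherwise fall back to preying."""
    neighbors = [xj for xj in swarm if 0 < math.dist(x, xj) <= visual]
    if neighbors:
        best = max(neighbors, key=fitness)
        if sigma * fitness(x) < fitness(best) / len(neighbors):
            return move_toward(x, best, step)
    return prey(x, fitness, swarm, visual, step)
```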

3. A Coding Method for Combination

Let C = {1, 2,…, n} be a set of n integers. The number of permutations of C is n!. Sorted in lexicographic order, the Cantor expansion and its inverse give a one-to-one correspondence between the set of full permutations of C and {1, 2,…, n!}. Converting a full permutation into a decimal number in this way can be used to solve the TSP by heuristic algorithms. Unlike the TSP, rough set attribute reduction concerns combinations of the attribute set, so it is necessary to discuss the rank of a combination in the sequence of combinations.

Let C(n, m) = {α ∣ α ⊆ C, |α| = m}; then the cardinality of C(n, m) is $C_n^m$. For α ∈ C(n, m) with α = (a_1, a_2,…, a_m), we have α ⊆ C. Sort the elements of α in ascending order; that is,

$$\text{if } 1 \le i < j \le m, \text{ then } a_i < a_j. \tag{8}$$

By lexicographic order, C(n, m) can be regarded as a sequence.
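This lexicographic sequence is exactly the order produced by standard combination enumeration; a two-line check (ours) against Table 1 below:

```python
from itertools import combinations

# Enumerate C(7, 4) in lexicographic order; the 22nd entry should be 2346.
seq = list(combinations(range(1, 8), 4))
print(len(seq), seq[21])  # 35 (2, 3, 4, 6)
```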

Example 1.

Let C = {1, 2, 3, 4, 5, 6, 7} and m = 4. All the elements of C(7, 4) are shown in Table 1.

Table 1.

C(7,4).

1 1234 8 1256 15 1357 22 2346 29 2467
2 1235 9 1257 16 1367 23 2347 30 2567
3 1236 10 1267 17 1456 24 2356 31 3456
4 1237 11 1345 18 1457 25 2357 32 3457
5 1245 12 1346 19 1467 26 2367 33 3467
6 1246 13 1347 20 1567 27 2456 34 3567
7 1247 14 1356 21 2345 28 2457 35 4567

From Table 1, we can see that combination number 22 is 2346. Under lexicographic order, the following proposition is apparent.

Proposition 2.

Let α = (a_1, a_2,…, a_m), β = (b_1, b_2,…, b_m) ∈ C(n, m). Then α precedes β if and only if there exists k (1 ≤ k ≤ m) such that a_i = b_i for all i (1 ≤ i < k) and a_k < b_k.

Proposition 3.

Let α = (a_1, a_2,…, a_m) ∈ C(n, m); then, for all i (1 ≤ i ≤ m), i ≤ a_i ≤ n + i − m.

According to the properties of combinations, let α = (a_1, a_2,…, a_m) ∈ C(n, m) with a_i = j. Then, by Proposition 3, the number of combinations in C(n, m) that agree with α in their first i entries is $C_{n-j}^{m-i}$. Thus we can construct an m × n matrix M(T(i, j)) for C(n, m), where

$$T(i, j) = \begin{cases} C_{n-j}^{m-i} & 1 \le i \le m,\ i \le j \le n + i - m \\ 0 & \text{otherwise}. \end{cases} \tag{9}$$

The following identity is apparent: if T(i, j) ≠ 0, then

$$T(i, j) = \sum_{k=j+1}^{n} T(i+1, k). \tag{10}$$
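A short sketch (ours) that builds the matrix from (9) and spot-checks the identity (10) for C(7, 4):

```python
from math import comb

def build_T(n, m):
    """1-based T[i][j] = C(n-j, m-i) for 1 <= i <= m, i <= j <= n+i-m, else 0."""
    T = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(i, n + i - m + 1):
            T[i][j] = comb(n - j, m - i)
    return T

T = build_T(7, 4)
# Identity (10): T(i, j) equals the sum of row i+1 to the right of column j.
assert all(T[i][j] == sum(T[i + 1][k] for k in range(j + 1, 8))
           for i in range(1, 4) for j in range(i, i + 4) if T[i][j])
print(T[1][1:])  # first row: [20, 10, 4, 1, 0, 0, 0]
```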

Example 4.

For C(7, 4), the 4 × 7 matrix M(T(i, j)) is as follows:

$$M = \begin{pmatrix} 20 & 10 & 4 & 1 & 0 & 0 & 0 \\ 0 & 10 & 6 & 3 & 1 & 0 & 0 \\ 0 & 0 & 4 & 3 & 2 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{pmatrix}. \tag{11}$$

Since |{1, 2,…, $C_n^m$}| = |C(n, m)| = $C_n^m$, define a mapping h from {1, 2,…, $C_n^m$} to C(n, m) as follows:

$$h : \{1, 2, \ldots, C_n^m\} \to C(n, m), \quad h(x) = \alpha = (a_1, a_2, \ldots, a_m), \tag{12}$$

where x ∈ {1, 2,…, $C_n^m$} and α = (a_1, a_2,…, a_m) ∈ C(n, m). We give Algorithm 1 for calculating the combination from the matrix M(T(i, j)).

Algorithm 1. DetoCo(x, n, m).

For any x ∈ {1, 2,…, $C_n^m$}, according to Algorithm 1 together with (9) and (10), we can calculate h(x) = α = (a_1, a_2,…, a_m); thus we can decode an integer not exceeding $C_n^m$ into a combination in C(n, m).

Example 5.

$C_7^4$ = 35; let x = 22 ∈ {1, 2,…, 35} and h(22) = (a_1, a_2, a_3, a_4). By Algorithm 1, we have

  • 20 = T(1, 1) < x = 22 ≤ T(1, 1) + T(1, 2) = 30; then a_1 = 2, x = 22 − T(1, 1) = 2;

  • x = 2 ≤ T(2, 2 + 1) = 6; then a_2 = 3, x = 2 − 0 = 2;

  • x = 2 ≤ T(3, 3 + 1) = 3; then a_3 = 4, x = 2 − 0 = 2;

  • 1 = T(4, 4 + 1) < x = 2 ≤ T(4, 4 + 1) + T(4, 4 + 2) = 2; then a_4 = 6;

  • h(22) = (2, 3, 4, 6);

  • in Table 1, the 22nd combination is (2, 3, 4, 6), the same result.
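The hand computation above can be automated. Below is our reconstruction of Algorithm 1, read off from Example 5; the function names are ours.

```python
from math import comb

def T(i, j, n, m):
    """Entry of the matrix M from (9)."""
    return comb(n - j, m - i) if 1 <= i <= m and i <= j <= n + i - m else 0

def deto_co(x, n, m):
    """Decode x in {1, ..., C(n, m)} into the x-th combination of C(n, m)
    in lexicographic order (our reading of DetoCo)."""
    alpha, prev = [], 0
    for i in range(1, m + 1):
        j, acc = prev + 1, 0
        # advance a_i while the cumulative count stays below x
        while acc + T(i, j, n, m) < x:
            acc += T(i, j, n, m)
            j += 1
        alpha.append(j)
        x -= acc        # rank within combinations sharing the prefix a_1..a_i
        prev = j
    return tuple(alpha)

print(deto_co(22, 7, 4))  # (2, 3, 4, 6), matching Example 5 and Table 1
```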

4. RFSA-RST Algorithm

In this section, to find minimal reducts of a dataset, we propose an algorithm based on RST and FSA, named RFSA-RST. It first uses the notion of the core to find the core attributes; then FSA is employed in a restrained manner to find reducts, which ensures that the result has minimal length.

4.1. Encoding Method

Essentially, AR is a combinatorial optimization problem whose solution space is the power set of the attribute set. Let C = {1, 2,…, n} be the condition attribute set, 2^C the power set of C, and C(n, m) = {α ∣ α ⊆ C, |α| = m}; then C(n, m) ⊆ 2^C. Clearly, ⋃_{i=0}^{n} C(n, i) = 2^C and C(n, i) ∩ C(n, j) = ∅ (i ≠ j). Thus, by the cardinality of its subsets, 2^C is split into n + 1 disjoint parts. According to Algorithm 1, every integer not exceeding $C_n^m$ can be decoded into a combination in C(n, m), so an integer encoding is adopted: each integer represents a combination in C(n, m), and the solution space is of integer type. Let Δ denote the increment of X_i. By (4)–(6), Δ may be fractional, which does not fit the integer solution space. To address this, the new increment of X_i is computed as follows [11]:

$$X_i|_{next} = \begin{cases} X_i + \mathrm{round}(\Delta) + 1 & \Delta > 0 \\ X_i + \mathrm{round}(\Delta) - 1 & \text{else}. \end{cases} \tag{13}$$
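Read literally, (13) rounds the increment and pushes the position at least one unit in the direction of Δ; a one-function sketch of our reading:

```python
def next_position(x_i, delta):
    """Integer-adapted move of (13): round the FSA increment and force a
    nonzero integer step in the direction of delta (assumed reading)."""
    return x_i + round(delta) + 1 if delta > 0 else x_i + round(delta) - 1
```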

4.2. Fitness Function

In FSA, the quality of an AF is determined by the fitness function. For the AR problem, the selected subset should retain the classification ability and contain as few attributes as possible; in general, the fitness function must reflect both requirements. In addition, it should be as simple as possible to keep the computation efficient. In this paper, every subset not containing the core attributes is encoded into an integer representing an AF. Since the cardinality of each combination in C(n, m) is fixed, FSA is restricted to fixed-length subsets of the condition attribute set, and an AF is judged by the dependency value of the attribute subset represented by its integer, as explained below.

For an AF at position X, calculate the corresponding combination h(X) in C(n, m) by Algorithm 1. The fitness value of X is defined as follows:

$$F(X) = \gamma_{h(X) \cup \mathrm{Core}}. \tag{14}$$
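A hedged sketch of (14), reusing the `gamma` and `deto_co` helpers sketched earlier; the `noncore` list (our name) maps decoded positions back to actual attribute ids, since the encoded combinations range over noncore attributes only.

```python
def fitness(x, noncore, m, core, gamma, deto_co):
    """F(X) = gamma_{h(X) union Core}, per (14): decode the integer position,
    map it to noncore attribute ids, add the core, evaluate dependency."""
    positions = deto_co(x, len(noncore), m)            # 1-based positions
    subset = {noncore[p - 1] for p in positions} | set(core)
    return gamma(sorted(subset))
```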

4.3. Algorithm Description

In FSA, an initial population of size N must be provided, representing multiple points in the search space. If the fitness value of some AF equals γ_C, the process terminates and outputs the corresponding attribute subset of that AF. If the change in average fitness value between two successive generations is 0.001 or less, no further generations are created and the current loop terminates. A new initial population is then created for the next loop, whose attribute subsets contain one more attribute; that is, the initial individuals in the ith loop have one more attribute than those in the (i − 1)th loop. The whole process is repeated on each new initial population until a minimal set of reduct(s) is found. The detailed process of RFSA-RST is shown in Algorithm 2.

Algorithm 2. RFSA-RST(DS).
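The outer loop described above might be sketched as follows (ours, not the paper's code); the inner FSA search is abstracted behind a hypothetical `run_fsa` that evolves one population of integer-coded AFs and returns the best decoded subset with its fitness.

```python
from math import comb

def rfsa_rst(C, core, gamma, run_fsa, pop_size=30):
    """Outer loop of RFSA-RST (sketch): grow the noncore subset length m
    until some AF reaches the dependency of the full condition set C."""
    target = gamma(sorted(C))                  # gamma_C, the stopping criterion
    if gamma(sorted(core)) == target:          # the core alone may be a reduct
        return set(core)
    noncore = [a for a in C if a not in core]
    for m in range(1, len(noncore) + 1):
        # hypothetical FSA search over the integers {1, ..., C(|noncore|, m)}
        best_subset, best_fit = run_fsa(noncore, m, pop_size,
                                        comb(len(noncore), m))
        if best_fit == target:                 # reduct of size |core| + m found
            return set(core) | set(best_subset)
    return set(C)
```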

5. Performance Comparison

To evaluate the effectiveness of the proposed algorithm, RFSA-RST, we carried out experiments as follows. Two other algorithms were used for comparison: one proposed in [12], denoted here by RGA-RST, and the other proposed in [17], denoted here by PSO-RST. A PC running Windows XP with a 3.1 GHz CPU and 1 GB of main memory was used to run all three algorithms. The test datasets were obtained from the UCI machine learning repository, from which 6 datasets were chosen. Basic information about the datasets is listed in Table 2, where n and m are the numbers of objects and (condition) attributes, respectively.

Table 2.

Basic information about the datasets.

Data set n m
Lung-cancer 26 52
Soybean-small 47 36
Sponge 76 45
Zoo 101 17
Balance 625 5
Spect 187 23

To make the comparison fair and reasonable, the three algorithms were each run independently 50 times on each dataset with the same parameter settings. For each run, three values were recorded: the length of the output, the run time, and whether the output is a reduction. If the result is a reduction, the run is said to be normal; otherwise, the run is said to be unsuccessful. If the result corresponds to a minimum reduct, the run is not only normal but also successful. Let STL, AVL, and AVT denote the shortest length, average length, and average running time, respectively, over the 50 runs. The ratios of successful and normal runs are denoted by s1 and s2, respectively. The performance of the three algorithms on the datasets is reported in Tables 3 and 4.

Table 3.

Performance of three algorithms.

Dataset PSO-RST RGA-RST RFSA-RST
STL AVL STL AVL STL AVL
Lung-cancer 3 4.14 3 4.02 3 3.92
Soybean-small 2 2.29 2 2.32 2 2.38
Sponge 8 8.7 8 8.4 8 8.02
Zoo 5 5.32 5 5.05 5 5
Balance 4 4.02 4 4 4 4
Spect 16 16 16 16 16 16

Table 4.

Performance of three algorithms.

Dataset PSO-RST RGA-RST RFSA-RST
s1/s2 AVT(s) s1/s2 AVT(s) s1/s2 AVT(s)
Lung-cancer 0.06/1 0.2073 0.08/1 0.8103 0.22/1 0.2255
Soybean-small 0.7/1 0.7117 0.78/1 1.0177 0.72/1 0.7163
Sponge 0.6/0.92 6.4159 0.62/0.98 16.1129 0.78/0.98 6.8089
Zoo 0.94/1 1.3892 0.96/1 2.1648 1/1 1.4584
Balance 0.98/1 0.5726 1/1 1.9935 1/1 0.6045
Spect 1/1 12.0372 1/1 36.0937 1/1 12.9012

From Table 3, the proposed algorithm performs the same as the other two algorithms in terms of the shortest output length, but outperforms them in terms of the average length of output solutions except on the Soybean-small dataset. This indicates that the stability of the proposed algorithm is higher than that of the other two algorithms.

From Table 4, the ratios of normal runs of the three algorithms are roughly the same, but the proposed algorithm outperforms the other two in terms of the ratio of successful runs except on the Soybean-small dataset. Therefore, if minimum attribute reduction is required, the proposed algorithm is the best of the three. This is also reflected in the average lengths in Table 3. As far as average running time is concerned, the proposed algorithm is slightly slower than PSO-RST but faster than RGA-RST.

Since the proposed algorithm and RGA-RST operate in the same mode, another experiment was conducted to evaluate the time efficiency of the two. We modified both algorithms so that the core attribute set is no longer identified, and set n in RGA-RST (corresponding to the parameter m in the proposed algorithm) from 1 to the cardinality of the minimal attribute reduction for each dataset, recording the running time of each loop. For clarity and due to length limitations, we show only the results for the Spect and Sponge datasets, as given in Figure 1. From Figure 1, we find that the running time per loop of the proposed algorithm increases with m, while that of RGA-RST is relatively stable. This is because the search process of the proposed algorithm focuses on attribute subsets of fixed length. Therefore, the time efficiency of the algorithm is higher in each loop and the convergence rate is faster, showing that the algorithm proposed in this paper is more efficient.

Figure 1. Average running time.

Summing up the experimental results, we see that the proposed algorithm is more efficient than the other two typical algorithms on the test datasets, although its running time is slightly worse than that of PSO-RST.

6. Conclusions

In this paper, we derived a new method for minimum attribute reduction based on rough set theory and the fish swarm algorithm. In this method, using a coding method for the combination subsets of the attribute set, FSA is used to search for a minimum attribute reduction, and attribute dependency is applied to calculate the fitness values of the attribute subsets. The FSA is restrained so that every integer corresponds to an attribute subset of a specified length in each search process. The cardinality of the attribute subset represented by an AF starts from the length of the core and is incremented by one in each loop.

Extensive test results show that the method improves not only the accuracy of minimal attribute reduction but also the convergence rate.

Future enhancements to this work include confirming whether the starting length is reasonable and reducing redundant search steps to improve search time efficiency.

Acknowledgments

This work is supported by the Opening Project of Sichuan Province University Key Laboratory of Bridge Non-Destruction Detecting and Engineering Computing (2014QYJ02).

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

  • 1. Liu H., Motoda H. Feature Selection for Knowledge Discovery and Data Mining. Springer Science & Business Media; 1998.
  • 2. Pawlak Z., Slowinski R. Rough set approach to multi-attribute decision analysis. European Journal of Operational Research. 1994;72(3):443–459. doi: 10.1016/0377-2217(94)90415-4.
  • 3. Yu Y., Pedrycz W., Miao D. Neighborhood rough sets based multi-label classification for automatic image annotation. International Journal of Approximate Reasoning. 2013;54(9):1373–1387. doi: 10.1016/j.ijar.2013.06.003.
  • 4. Zhou J., Pedrycz W., Miao D. Shadowed sets in the characterization of rough-fuzzy clustering. Pattern Recognition. 2011;44(8):1738–1749. doi: 10.1016/j.patcog.2011.01.014.
  • 5. Polkowski L., Skowron A. Rough sets: a perspective. In: Rough Sets in Knowledge Discovery 1. Studies in Fuzziness and Soft Computing, vol. 18. Heidelberg: Physica; 1998. pp. 31–56.
  • 6. Hu Q., Yu D., Xie Z., Liu J. Fuzzy probabilistic approximation spaces and their information measures. IEEE Transactions on Fuzzy Systems. 2006;14(2):191–201. doi: 10.1109/tfuzz.2005.864086.
  • 7. Starzyk J., Nelson D. E., Sturtz K. Reduct generation in information systems. Engineering Letters. 1999;2(2):36–41.
  • 8. Starzyk J. A., Nelson D. E., Sturtz K. A mathematical foundation for improved reduct generation in information systems. Knowledge and Information Systems. 2000;2(2):131–146. doi: 10.1007/s101150050007.
  • 9. Skowron A., Rauszer C. The discernibility matrices and functions in information systems. Theory and Decision Library. 1992;11:331–362.
  • 10. Skowron A. In: Slowinski R., editor. Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory. Dordrecht, The Netherlands: Kluwer Academic Publishers; 1992. pp. 311–362.
  • 11. Luan X.-Y., Li Z.-P., Liu T.-Z. A novel attribute reduction algorithm based on rough set and improved artificial fish swarm algorithm. Neurocomputing. 2016;174:522–529. doi: 10.1016/j.neucom.2015.06.090.
  • 12. Wroblewski J. Finding minimal reducts using genetic algorithms. Proceedings of the Second Annual Joint Conference on Information Sciences; 1995; Wrightsville Beach, NC, USA. pp. 186–189.
  • 13. Das A. K., Chakrabarty S., Pati S. K., Sahaji A. H. Applying restrained genetic algorithm for attribute reduction using attribute dependency and discernibility matrix. Communications in Computer and Information Science. 2012;292:299–308. doi: 10.1007/978-3-642-31686-9_36.
  • 14. Hedar A.-R., Omar M. A., Sewisy A. A. Rough sets attribute reduction using an accelerated genetic algorithm. Proceedings of the 16th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2015); June 2015.
  • 15. Abdul-Rahman S., Mohamed-Hussein Z.-A., Bakar A. A. Integrating rough set theory and particle swarm optimisation in feature selection. Proceedings of the 10th International Conference on Intelligent Systems Design and Applications (ISDA'10); December 2010. pp. 1009–1014.
  • 16. Cervante L., Xue B., Shang L., Zhang M. Binary particle swarm optimisation and rough set theory for dimension reduction in classification. Proceedings of the 2013 IEEE Congress on Evolutionary Computation (CEC 2013); June 2013. pp. 2428–2435.
  • 17. Wang X., Yang J., Teng X., Xia W., Jensen R. Feature selection based on rough sets and particle swarm optimization. Pattern Recognition Letters. 2007;28(4):459–471. doi: 10.1016/j.patrec.2006.09.003.
  • 18. Anuradha J., Tripathy B. K. Improved intelligent dynamic swarm PSO algorithm and rough set for feature selection. Communications in Computer and Information Science. 2012;270(II):110–119. doi: 10.1007/978-3-642-29216-3_13.
  • 19. Chen Y., Miao D., Wang R. A rough set approach to feature selection based on ant colony optimization. Pattern Recognition Letters. 2010;31(3):226–233. doi: 10.1016/j.patrec.2009.10.013.
  • 20. Forsati R., Moayedikia A., Jensen R., Shamsfard M., Meybodi M. R. Enriched ant colony optimization and its application in feature selection. Neurocomputing. 2014;142:354–371. doi: 10.1016/j.neucom.2014.03.053.
  • 21. Chen Y., Zhu Q., Xu H. Finding rough set reducts with fish swarm algorithm. Knowledge-Based Systems. 2015;81:22–29. doi: 10.1016/j.knosys.2015.02.002.
  • 22. Wang F., Xu J., Li L. A novel rough set reduct algorithm to feature selection based on artificial fish swarm algorithm. Lecture Notes in Computer Science. 2014;8795:24–33.
  • 23. Li X., Shao Z., Qian J. An optimizing method based on autonomous animates: fish swarm algorithm. Systems Engineering - Theory & Practice. 2002;22:32–38.
