Computational Intelligence and Neuroscience. 2021 Nov 26;2021:2392642. doi: 10.1155/2021/2392642

A SAR Target Recognition Method via Combination of Multilevel Deep Features

Junhua Wang 1, Yuan Jiang 2
PMCID: PMC8642017; PMID: 34868287

Abstract

To address synthetic aperture radar (SAR) image target recognition, this paper proposes a method that combines multilevel deep features. The residual network (ResNet) is used to learn the multilevel deep features of SAR images. Based on a similarity measure, the multilevel deep features are clustered to obtain several feature sets. Each feature set is then characterized and classified by the joint sparse representation (JSR), producing a corresponding output. Finally, the results of the different feature sets are combined through weighted fusion to obtain the target recognition result. The proposed method effectively combines the advantages of ResNet in feature extraction and JSR in classification and improves overall recognition performance. Experiments and analysis are carried out on the sample-rich MSTAR dataset. The results show that the proposed method achieves superior performance for 10 types of targets under the standard operating condition (SOC) as well as under noise interference and occlusion, which verifies its effectiveness.

1. Introduction

By processing high-resolution images obtained by synthetic aperture radar (SAR), analysis and interpretation of focus areas or targets of interest can be achieved. SAR target recognition technology can be used for reconnaissance and intelligence interpretation [1–3]. Since the 1990s, SAR target recognition methods have been enriched by the development of pattern recognition and artificial intelligence technology and have made considerable progress. Mainstream SAR target recognition methods usually use a two-stage process of feature extraction and classification to confirm the target label of unknown samples. Typical target features of SAR images include geometric shapes [4–7], projection transformations [8–12], and electromagnetic scattering [13–16]. Target contours, regions, and shadows are representative shape features with the ability to distinguish different categories. Projection transformation algorithms include mathematical projection and transform-domain decomposition: the former includes matrix decomposition and manifold learning, and the latter includes wavelet, monogenic signal, and mode decomposition. Electromagnetic scattering features reflect the backscattering characteristics of the target, such as peak value, scattering centers, and polarization. The classification stage is closely coupled with feature extraction, and the differences between features are used to determine the category of the input sample. Nearest neighbor classifiers [17–19], the support vector machine (SVM) [20–24], and sparse representation-based classification (SRC) [25–30] are the most widely used classifiers in existing SAR target recognition methods. With the rapid development of deep learning in recent years, deep learning models represented by the convolutional neural network (CNN) [31–38] have also been employed in SAR target recognition.

Based on the existing research, this paper proposes a SAR target recognition method combining multilevel deep features. In the feature learning stage, the deep residual network (ResNet) [39–43] is used to learn multilevel feature maps of the target. Compared with traditional handcrafted features, the feature maps obtained from ResNet have stronger descriptive ability and can provide more sufficient discriminative information for the decision-making stage [44, 45]. Considering the possible correlation between multilevel deep features, this paper uses vector correlation as the basic criterion to perform cluster analysis on the different deep features and obtain multiple deep feature sets. Afterwards, the joint sparse representation (JSR) is used to characterize and classify the different feature sets, so as to further exploit their internal relations. Finally, the results of the different feature sets are linearly weighted and fused to obtain reliable recognition results. In the experiments, the standard operating condition (SOC) and typical extended operating conditions (EOCs) are set up on the MSTAR dataset to validate the method, and the results show its effectiveness and robustness.

2. Learning of Deep Features by ResNet

ResNet was proposed by Kaiming He et al. and has been thoroughly validated in a number of image detection and segmentation competitions [43]. As the number of network layers increases, the learned features become richer and can better reflect the multifaceted characteristics of the target of interest in the image. At the same time, however, deeper networks suffer from a serious vanishing-gradient problem. ResNet therefore introduces residual learning to overcome this optimization difficulty. Assuming that H(x) denotes the desired underlying mapping, the stacked nonlinear layers are instead used to fit the residual mapping F(x) = H(x) − x, so that the original mapping is recast as H(x) = F(x) + x. The term F(x) + x can be realized by adding a "shortcut connection" to the feedforward network. This operation is efficient and robust and introduces no additional computational complexity.
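To make the shortcut operation concrete, the following is a minimal PyTorch sketch of a residual block. The channel count, kernel sizes, and use of batch normalization are illustrative assumptions, not the exact configuration of the network used in this paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: output = ReLU(F(x) + x)."""

    def __init__(self, channels: int):
        super().__init__()
        # Two stacked nonlinear layers fit the residual mapping F(x) = H(x) - x.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        # Shortcut connection: add the block input back, giving F(x) + x.
        return self.relu(residual + x)
```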

Existing research has verified the effectiveness of ResNet in image processing tasks such as target detection and recognition. This paper therefore introduces it into SAR target recognition, mainly for learning and extracting multilevel deep features. The ResNet structure used in this paper contains 20 layers in total. Compared with a general CNN, ResNet realizes direct connections between a layer's input and subsequent nonadjacent layers, thereby minimizing information loss. ResNet reduces the difficulty of network learning and improves overall training efficiency. The designed network can learn multilevel feature maps of SAR images with rich descriptive power. These features reflect the characteristics of the target from different aspects and provide effective discriminative information for target recognition.
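As an illustration of how such multilevel feature maps can be collected in practice, the sketch below registers forward hooks on intermediate layers of a trained network and pools each feature map into a vector. The layer names, pooling choice, and the 128 × 128 input size are assumptions for illustration, not the paper's exact procedure.

```python
import torch

def extract_multilevel_features(model, image, layer_names):
    """Return one pooled feature vector per requested intermediate layer.

    image: tensor of shape (1, C, 128, 128); layer_names: names from
    model.named_modules() whose outputs are 4-D feature maps.
    """
    features, handles = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            # Global average pooling turns each (1, C, H, W) map into a C-vector.
            features[name] = output.mean(dim=(2, 3)).squeeze(0).detach()
        return hook

    named = dict(model.named_modules())
    for name in layer_names:
        handles.append(named[name].register_forward_hook(make_hook(name)))
    model.eval()
    with torch.no_grad():
        model(image)  # one forward pass fills the features dict via the hooks
    for h in handles:
        h.remove()
    return [features[n] for n in layer_names]
```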

3. Clustering of Deep Features Based on the Correlation Principle

The deep features extracted from the same SAR image may exhibit locality in their intrinsic correlation, so a correlation analysis of the multilevel deep features is necessary. This paper uses the traditional vector correlation as the criterion to design a deep feature clustering algorithm. Assuming that the multilevel deep features obtained through ResNet are V = {I1, I2, …, IN}, the correlation between every pair of feature vectors is first calculated and recorded in the correlation matrix shown in Table 1. The clustering procedure is then summarized in Algorithm 1.

Table 1.

Correlation matrix of deep feature vectors.

      I1     I2     ⋯     IN
I1    c11    c12    ⋯     c1N
I2    c21    c22    ⋯     c2N
⋮     ⋮      ⋮      ⋱     ⋮
IN    cN1    cN2    ⋯     cNN

Algorithm 1.

Clustering algorithm for deep features.

In the above steps, the symbol "\" denotes the set-difference (remainder) operation, and the condition c(I1, St) > Tc indicates that the correlation coefficient between I1 and every feature in St is higher than the threshold Tc. In general, empirical analysis and testing can be used to select a proper threshold. With normalized similarity, the threshold generally tends toward the middle of the interval to balance feature correlation against feature independence. After the clustering algorithm, the original N feature vectors are repartitioned into several feature sets; the members of any subset containing multiple feature vectors share relatively high internal correlation.
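Since the printed listing of Algorithm 1 appears only as an image in the original article, the following Python sketch gives one plausible reading of the described procedure: greedily grow a feature set as long as every candidate's correlation with all current members exceeds Tc. It is a reconstruction from the text, not the authors' verbatim algorithm.

```python
import numpy as np

def cluster_features(V, Tc=0.4):
    """Greedy correlation-threshold clustering.

    V: list of N deep feature vectors (1-D numpy arrays);
    Tc: correlation threshold; returns a list of index clusters.
    """
    remaining = list(range(len(V)))
    clusters = []
    while remaining:
        seed = remaining.pop(0)          # start a new set with the first leftover feature
        cluster = [seed]
        for j in remaining[:]:
            # Admit feature j only if it correlates above Tc with every member.
            if all(abs(np.corrcoef(V[i], V[j])[0, 1]) > Tc for i in cluster):
                cluster.append(j)
                remaining.remove(j)      # "\": remove clustered features from the pool
        clusters.append(cluster)
    return clusters
```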

4. Recognition Method via Combination of Multilevel Deep Features

4.1. Principle of JSR

JSR is a multitask learning algorithm designed for multiple related sparse representation problems [10–13]. For the deep feature vectors within the same feature set, this paper adopts JSR for characterization and classification. Let the M feature vectors be y = [y^{(1)}, y^{(2)}, …, y^{(M)}]; their independent sparse representation problems are formulated as

\[ y^{(k)} = D^{(k)} \alpha^{(k)} + \varepsilon^{(k)}, \quad k = 1, 2, \ldots, M, \tag{1} \]

where D^{(k)}, α^{(k)}, and ε^{(k)} denote the dictionary, sparse coefficient vector, and representation error of the kth feature, respectively.

The problem of sparse representation of the M features can be jointly investigated, and the model is obtained as follows:

\[ \min_{\beta} g(\beta) = \sum_{k=1}^{M} \left\| y^{(k)} - D^{(k)} \alpha^{(k)} \right\|_2^2, \tag{2} \]

where β = [α^{(1)}, α^{(2)}, …, α^{(M)}] is the matrix collecting all the sparse coefficient vectors.

The joint representation model in formula (2) is unified only in form and does not exploit the correlation between different features. The JSR model improves overall solution accuracy by appropriately constraining the sparse coefficient matrix β:

\[ \min_{\beta} \; g(\beta) + \lambda \left\| \beta \right\|_{2,1}, \tag{3} \]

where ‖·‖2,1 denotes the ℓ1/ℓ2 mixed norm, that is, the ℓ1 norm of the vector of row-wise ℓ2 norms of β. From the sparse coefficient matrix obtained by solving formula (3), the reconstruction errors of the different categories can be calculated, where D_i^{(k)} and α_i^{(k)} denote the ith class's sub-dictionary for the kth feature and the corresponding coefficients, and the target category is decided as

\[ \operatorname{identity}(y) = \arg\min_{i} \sum_{k=1}^{M} \left\| y^{(k)} - D_{i}^{(k)} \alpha_{i}^{(k)} \right\|_2. \tag{4} \]
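A compact numerical sketch of (1)–(4) follows. The ℓ2,1-regularized problem (3) is solved here by proximal gradient descent with row-wise soft thresholding, and the decision rule (4) accumulates class-wise reconstruction errors over the M features. The solver choice, step size, and regularization value are assumptions; the paper does not specify its optimizer.

```python
import numpy as np

def jsr_solve(ys, Ds, lam=0.01, n_iter=200):
    """Solve min_beta sum_k ||y_k - D_k b_k||^2 + lam * ||beta||_{2,1}.

    ys: list of M feature vectors; Ds: list of M dictionaries (d_k x A),
    all sharing the same atom ordering over the training samples.
    """
    M, A = len(ys), Ds[0].shape[1]
    beta = np.zeros((A, M))                       # one coefficient column per feature
    step = 1.0 / max(np.linalg.norm(D, 2) ** 2 for D in Ds)  # crude Lipschitz step
    for _ in range(n_iter):
        grad = np.column_stack(
            [Ds[k].T @ (Ds[k] @ beta[:, k] - ys[k]) for k in range(M)])
        z = beta - step * grad
        # Row-wise soft thresholding: proximal operator of lam * ||.||_{2,1}.
        norms = np.linalg.norm(z, axis=1, keepdims=True)
        beta = z * np.maximum(1.0 - step * lam / np.maximum(norms, 1e-12), 0.0)
    return beta

def jsr_classify(ys, Ds, beta, labels):
    """Decision rule (4): pick the class with the smallest joint error."""
    errors = {}
    for c in set(labels):
        idx = [a for a, lbl in enumerate(labels) if lbl == c]
        errors[c] = sum(
            np.linalg.norm(ys[k] - Ds[k][:, idx] @ beta[idx, k])
            for k in range(len(ys)))
    return min(errors, key=errors.get), errors
```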

4.2. Target Recognition via Decision Fusion

This paper uses multilevel deep feature clustering to investigate both the independence and the relevance of these features. JSR is then used to independently analyze each internally correlated feature set and obtain its reconstruction errors. Denote the reconstruction error output by the tth feature set as f_t(i), t = 1, 2, …, P; linear weighting is employed to fuse them as follows:

\[ e(i) = \omega_1 f_1(i) + \omega_2 f_2(i) + \cdots + \omega_P f_P(i), \tag{5} \]

where ω_t (t = 1, 2, …, P) denotes the weight coefficients.

This paper determines the weights according to the number of features in each feature set, setting ω_t = p_t/P, where p_t is the number of features in the tth feature set. Finally, the target category is determined as the one with the minimum weighted reconstruction error.
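The fusion step of (5) reduces to a few lines; the sketch below follows the stated weighting ω_t = p_t/P and picks the class with the minimum fused error.

```python
import numpy as np

def fuse_errors(per_set_errors, set_sizes):
    """Linear decision fusion of (5).

    per_set_errors: list of P arrays f_t(i) of per-class reconstruction errors;
    set_sizes: list of P feature counts p_t. Returns (decided class, fused errors).
    """
    P = len(per_set_errors)
    weights = [p / P for p in set_sizes]                 # omega_t = p_t / P, as in the text
    fused = sum(w * np.asarray(f) for w, f in zip(weights, per_set_errors))
    return int(np.argmin(fused)), fused                  # class with minimum fused error
```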

Figure 1 shows the basic flow of the method in this paper with several main steps, including the deep feature clustering, JSR, and decision fusion. The final recognition performance is improved by examining the independence and correlation of multilevel deep features.

Figure 1. Flowchart of the proposed method.

5. Experiments and Analysis

5.1. MSTAR Dataset

The MSTAR dataset is used to test and analyze the performance of the proposed method. The dataset contains the 10 types of targets shown in Figure 2, and related information about the SAR images is listed in Table 2. Table 3 gives the training and test sets used in the experiments, including the categories, configurations, numbers of samples, and depression angles of the 10 targets.

Figure 2. Images of targets to be classified. (a) BMP2. (b) BTR70. (c) T72. (d) T62. (e) BRDM2. (f) BTR60. (g) ZSU23/4. (h) D7. (i) ZIL131. (j) 2S1.

Table 2.

Relevant information about MSTAR dataset.

Azimuth (°)   Depression angle (°)   Resolution (m)   Size (pixels)
0–360         15, 17, 30, 45         0.3 × 0.3        128 × 128

Table 3.

Training and test sets for the 10-class recognition problem.

Class     Training (17° depression):        Test (15° depression):
          configuration (samples)           configuration (samples)
BMP2      9563 (233)                        9563 (195), 9566 (196), C21 (196)
BTR70     C71 (233)                         C71 (196)
T72       132 (232)                         132 (196), 812 (195), s7 (191)
T62       A51 (299)                         A51 (273)
BRDM2     E-71 (298)                        E-71 (274)
BTR60     7532 (256)                        7532 (195)
ZSU23/4   d08 (299)                         d08 (274)
D7        13015 (299)                       13015 (274)
ZIL131    E12 (299)                         E12 (274)
2S1       B01 (299)                         B01 (274)

In the experiments, the focus is a comparative analysis of the proposed method against four existing SAR target recognition methods, denoted "ResNet," "A-ConvNet," "JSR-Mono," and "JSR-Deep." ResNet and A-ConvNet are deep learning-based methods that use specific network structures for SAR target recognition. JSR-Mono and JSR-Deep both use JSR as the classifier but differ in the features employed: the monogenic signal and deep features, respectively.

5.2. Results and Analysis

5.2.1. SOC

According to the settings in Table 3, the original samples in the MSTAR dataset are used for validation. In this case the experimental scenario can be considered SOC; that is, the overall similarity between the test and training samples is relatively high. The correlation threshold Tc is set to 0.4 in this experiment. Figure 3 shows the recognition results of the proposed method; the diagonal elements of the confusion matrix are the correct recognition rates of the corresponding targets. As Table 3 shows, BMP2 and T72 have more test configurations than training configurations, which leads to their relatively low recognition rates among the 10 targets. Aggregating the results over the 10 targets, Table 4 compares the average recognition rates of the different methods in this scenario. The proposed method achieves the highest accuracy under these conditions, reflecting its effectiveness. Compared with the ResNet method, it further improves recognition performance through the comprehensive use of multilevel deep features. Compared with the JSR-Deep method, it improves the final performance by introducing the clustering analysis of deep features and the per-set decision fusion.

Figure 3. Confusion matrix achieved by the proposed method.

Table 4.

Average recognition rates under the standard operating condition.

Method      Average recognition rate (%)
Proposed    99.28
ResNet      99.02
A-ConvNet   98.75
JSR-Mono    98.68
JSR-Deep    99.14

According to the feature clustering algorithm, the threshold has an important influence on the final clustering result, so selecting an appropriate value is essential. Table 5 shows the average recognition rate of the proposed method at different thresholds; the best result is achieved at 0.4. If the threshold is too small, the correlation constraint between features is too weak; that is, features with large differences are clustered together. Conversely, if the threshold is too large, the correlation constraint is too strong, and individual features tend to form singleton clusters, losing the value of the cluster analysis. Based on this result, the cluster correlation threshold is set to 0.4 in the subsequent experiments.

Table 5.

Average recognition rates of the proposed method at different thresholds.

Tc                             0.1     0.2     0.3     0.35    0.4     0.45    0.5     0.55
Average recognition rate (%)   99.08   99.12   99.18   99.24   99.28   99.22   99.15   99.10

5.2.2. Noise Interference

Whether optical or radar, images are inevitably contaminated by noise during acquisition. In practical recognition systems, the training samples are often carefully selected and preprocessed, so they have high image quality and signal-to-noise ratio (SNR). The test samples, however, come from relatively uncontrolled acquisition conditions and may have poor image quality and low SNRs. The noise robustness of the recognition algorithm is therefore very important. In this experiment, on the basis of the training and test sets in Table 3, noise is added to the test samples of the 10 targets to obtain multiple test sets with different SNRs [5], and the various methods are then tested separately. Table 6 shows the recognition rates in this scenario. Compared with the results under SOC, the performance of all methods degrades under noise interference. At each SNR, the proposed method achieves the highest average recognition rate, reflecting its noise robustness.

Table 6.

Recognition rates under noise corruption.

Method       SNR = −10 dB   −5 dB   0 dB    5 dB    10 dB
Proposed     70.58          81.32   88.14   93.56   98.94
ResNet       63.42          74.42   83.43   87.53   98.42
A-ConvNet    62.74          73.46   82.81   86.78   98.02
JSR-Mono     64.92          75.08   85.09   89.02   98.13
JSR-Deep     66.57          76.82   85.49   91.82   98.36

According to [10–13], sparse representation has a certain robustness to noise interference, which is reflected in the stronger noise robustness of the sparse representation-based methods in Table 6. On the one hand, the proposed method uses complementary multilevel deep features to improve its adaptability to noise; on the other hand, the use of JSR in the classification stage further enhances noise robustness.
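For reproducibility, a common way to corrupt a test image at a prescribed SNR is additive white Gaussian noise scaled to the image's signal power, as sketched below; the exact noise model of [5] is not reproduced here and may differ.

```python
import numpy as np

def add_noise(image, snr_db, seed=0):
    """Corrupt an image with white Gaussian noise at the given SNR (dB)."""
    rng = np.random.default_rng(seed)
    signal_power = np.mean(np.asarray(image, dtype=np.float64) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))   # SNR = 10*log10(Ps/Pn)
    noise = rng.normal(0.0, np.sqrt(noise_power), size=np.shape(image))
    return image + noise
```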

5.2.3. Partial Occlusion

Similar to the case of noise interference, the target in an actual sample to be identified may also be partially occluded. In this case, only part of the target's characteristics is reflected in the test sample and available for classification. Following the algorithm described in [5], the target region in the test set of Table 3 is partially occluded to obtain test sets at different occlusion ratios, and the performance of the various methods is then evaluated. Figure 4 shows the recognition rate curve of each method; the proposed method is clearly more robust in this scenario. As with noise interference, the JSR-based methods are more robust than the other comparison methods. The proposed method combines the advantages of multilevel deep features and JSR, improving overall performance under target occlusion.

Figure 4. Average recognition rates under target occlusions.
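A simple way to emulate partial occlusion is to zero out a fraction of the image from one side, as sketched below; the occlusion model in [5] may differ in shape and placement, so this is only illustrative.

```python
import numpy as np

def occlude(image, ratio, direction="left"):
    """Zero out `ratio` of the image width from the given side."""
    out = np.asarray(image).copy()
    h, w = out.shape
    cut = int(round(w * ratio))          # number of occluded columns
    if direction == "left":
        out[:, :cut] = 0
    else:
        out[:, w - cut:] = 0
    return out
```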

6. Conclusion

This paper proposes a SAR target recognition method combining multilevel deep features. The method first applies ResNet to SAR images to learn multilevel deep feature vectors. The feature vectors are then clustered under the correlation criterion to obtain multiple feature sets. On this basis, the different feature sets are characterized and classified by JSR, yielding per-set reconstruction errors. Finally, linear weighted fusion is performed on the results from the different feature sets to determine the target category. The proposed method effectively combines the advantages of ResNet and JSR to improve recognition performance. Validation experiments on the MSTAR dataset show that the method achieves superior performance compared with existing methods under SOC and typical EOCs.

Acknowledgments

This work was supported by Major Scientific Research Projects in Guangdong Province (nos. 2018KTSCX331 and 2018KQNCX378) and Ministry of Education Cooperative Education Project (nos. 201802123151 and 201902084029).

Data Availability

The dataset used in this paper can be accessed upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. El-Darymli K., Gill E. W., Power D., Moloney C. Automatic target recognition in synthetic aperture radar imagery: a state-of-the-art review. IEEE Access. 2016;4:6014–6058. doi: 10.1109/access.2016.2611492.
2. Amoon M., Rezai-Rad G. Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zernike moments features. IET Computer Vision. 2014;8(2):77–85. doi: 10.1049/iet-cvi.2013.0027.
3. Ding B., Wen G., Ma C., Yang X. Target recognition in synthetic aperture radar images using binary morphological operations. Journal of Applied Remote Sensing. 2016;10(4):046006. doi: 10.1117/1.jrs.10.046006.
4. Shan C., Huang B., Li M. Binary morphological filtering of dominant scattering area residues for SAR target recognition. Computational Intelligence and Neuroscience. 2018;2018:9680465. doi: 10.1155/2018/9680465.
5. Jin L., Chen J., Peng X. Synthetic aperture radar target classification via joint sparse representation of multi-level dominant scattering images. Optik. 2019;186:110–119. doi: 10.1016/j.ijleo.2019.04.014.
6. Tan J., Fan X., Wang S., et al. Target recognition of SAR images by partially matching of target outlines. Journal of Electromagnetic Waves and Applications. 2019;33(7):865–881. doi: 10.1080/09205071.2018.1495580.
7. Papson S., Narayanan R. M. Classification via the shadow region in SAR imagery. IEEE Transactions on Aerospace and Electronic Systems. 2012;48(2):969–980. doi: 10.1109/taes.2012.6178042.
8. Mishra A. K. Validation of PCA and LDA for SAR ATR. Proceedings of the IEEE TENCON Conference; November 2008; Hyderabad, India. pp. 1–6.
9. Mishra A. K., Motaung T. Application of linear and nonlinear PCA to SAR ATR. Proceedings of the IEEE 25th International Conference Radioelektronika (RADIOELEKTRONIKA); April 2015; Pardubice, Czech Republic. pp. 349–354.
10. Cui Z., Cao Z., Yang J., Feng J., Ren H. Target recognition in synthetic aperture radar images via non-negative matrix factorisation. IET Radar, Sonar & Navigation. 2015;9(9):1376–1385. doi: 10.1049/iet-rsn.2014.0407.
11. Yu M., Dong G., Fan H., Kuang G. SAR target recognition via local sparse representation of multi-manifold regularized low-rank approximation. Remote Sensing. 2018;10(2):211. doi: 10.3390/rs10020211.
12. Huang Y., Pei J., Yang J., Wang B., Liu X. Neighborhood geometric center scaling embedding for SAR ATR. IEEE Transactions on Aerospace and Electronic Systems. 2014;50(1):180–192. doi: 10.1109/taes.2013.110769.
13. Xiong W., Cao L., Hao Z. Combining wavelet invariant moments and relevance vector machine for SAR target recognition. Proceedings of the IET International Radar Conference; April 2009; Guilin, China. pp. 1–4.
14. Dong G., Kuang G., Wang N., Zhao L., Lu J. SAR target recognition via joint sparse representation of monogenic signal. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2015;8(7):3316–3328. doi: 10.1109/jstars.2015.2436694.
15. Zhou Y., Chen Y., Gao R., Feng J., Zhao P., Wang L. SAR target recognition via joint sparse representation of monogenic components with 2D canonical correlation analysis. IEEE Access. 2019;7:25815–25826. doi: 10.1109/access.2019.2901317.
16. Chang M., You X., Cao Z. Bidimensional empirical mode decomposition for SAR image feature extraction with application to target recognition. IEEE Access. 2019;7:135720–135731. doi: 10.1109/access.2019.2941397.
17. Potter L. C., Moses R. L. Attributed scattering centers for SAR ATR. IEEE Transactions on Image Processing. 1997;6(1):79–91. doi: 10.1109/83.552098.
18. Ding B., Wen G., Zhong J., Ma C., Yang X. A robust similarity measure for attributed scattering center sets with application to SAR ATR. Neurocomputing. 2017;219:130–143. doi: 10.1016/j.neucom.2016.09.007.
19. Ding B., Wen G., Huang X., Ma C., Yang X. Target recognition in synthetic aperture radar images via matching of attributed scattering centers. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2017;10(7):3334–3347. doi: 10.1109/jstars.2017.2671919.
20. Zhao Q., Principe J. C. Support vector machines for SAR automatic target recognition. IEEE Transactions on Aerospace and Electronic Systems. 2001;37(2):643–654. doi: 10.1109/7.937475.
21. Tison C., Pourthie N., Souyris J. Target recognition in SAR images with support vector machines (SVM). Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium; July 2007; Barcelona, Spain. pp. 456–459.
22. Demirhan M. E., Salor Ö. Classification of targets in SAR images using SVM and k-NN techniques. Proceedings of the 2016 24th Signal Processing and Communication Application Conference (SIU); May 2016; Zonguldak, Turkey. pp. 1581–1584.
23. Liu H., Li S. Decision fusion of sparse representation and support vector machine for SAR image target recognition. Neurocomputing. 2013;113:97–104. doi: 10.1016/j.neucom.2013.01.033.
24. Thiagarajan J. J., Ramamurthy K. N., Knee P., Spanias A., Berisha V. Sparse representations for automatic target classification in SAR images. Proceedings of the 4th International Symposium on Communications, Control and Signal Processing; March 2010; Limassol, Cyprus. pp. 1–4.
25. Song H., Ji K., Zhang Y., Xing X., Zou H. Sparse representation-based SAR image target classification on the 10-class MSTAR data set. Applied Sciences. 2016;6(1):26. doi: 10.3390/app6010026.
26. Ding B., Wen G. Sparsity constraint nearest subspace classifier for target recognition of SAR images. Journal of Visual Communication and Image Representation. 2018;52:170–176. doi: 10.1016/j.jvcir.2018.02.012.
27. Li W., Yang J., Ma Y. Target recognition of synthetic aperture radar images based on two-phase sparse representation. Journal of Sensors. 2020;2020:2032645. doi: 10.1155/2020/2032645.
28. Yu L., Wang L., Xu Y. Combination of joint representation and adaptive weighting for multiple features with application to SAR target recognition. Scientific Programming. 2021;2021:9063419. doi: 10.1155/2021/9063419.
29. Zhu X. X., Tuia D., Mou L., et al. Deep learning in remote sensing: a comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine. 2017;5(4):8–36. doi: 10.1109/mgrs.2017.2762307.
30. Kang M., Ji K., Leng X., Xing X., Zou H. Synthetic aperture radar target recognition with feature fusion based on a stacked autoencoder. Sensors. 2017;17(1):192. doi: 10.3390/s17010192.
31. Morgan D. E. Deep convolutional neural networks for ATR from SAR imagery. Proceedings of the SPIE; May 2015; Baltimore, MD, USA. pp. 1–13.
32. Chen S., Wang H., Xu F., Jin Y.-Q. Target classification using the deep convolutional networks for SAR images. IEEE Transactions on Geoscience and Remote Sensing. 2016;47(6):1685–1697. doi: 10.1109/tgrs.2016.2551720.
33. Zhao J., Zhang Z., Yu W., Truong T.-K. A cascade coupled convolutional neural network guided visual attention method for ship detection from SAR images. IEEE Access. 2018;6:50693–50708.
34. Min R., Lan H., Cao Z., Cui Z. A gradually distilled CNN for SAR target recognition. IEEE Access. 2019;7:42190–42200.
35. Kechagias-Stamatis O., Aouf N. Fusing deep learning and sparse coding for SAR ATR. IEEE Transactions on Aerospace and Electronic Systems. 2019;55(2):785–797. doi: 10.1109/taes.2018.2864809.
36. Jiang C., Zhou Y. Hierarchical fusion of convolutional neural networks and attributed scattering centers for robust SAR ATR. Remote Sensing. 2018;10(6):819. doi: 10.3390/rs10060819.
37. Dong G., Wang N., Kuang G., Zhang Y. Kernel linear representation: application to target recognition in synthetic aperture radar images. Journal of Applied Remote Sensing. 2014;8(1):083613. doi: 10.1117/1.jrs.8.083613.
38. Xin Y., Kuan L., Jiao L. SAR automatic target recognition based on classifiers fusion. Proceedings of the International Workshop on Multi-Platform/Multi-Sensor Remote Sensing and Mapping; January 2011; Xiamen, China. pp. 1–5.
39. Cui Z., Cao Z., Yang J., Feng J. A hierarchical propelled fusion strategy for SAR automatic target recognition. EURASIP Journal on Wireless Communications and Networking. 2013;39:1–8. doi: 10.1186/1687-1499-2013-39.
40. Srinivas U., Monga V. Meta-classifiers for exploiting feature dependence in automatic target recognition. Proceedings of the IEEE Radar Conference; May 2011; Kansas City, MO, USA. pp. 147–151.
41. Huan R., Pan Y. Decision fusion strategies for SAR image target recognition. IET Radar, Sonar & Navigation. 2011;5(7):747–755. doi: 10.1049/iet-rsn.2010.0319.
42. Chang C., Lin C. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology. 2011;2(3):296–389. doi: 10.1145/1961189.1961199.
43. He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition; June 2016; Las Vegas, NV, USA. pp. 770–778.
44. Gao M., Song P., Wang F., Liu J., Mandelis A., Qi D. W. A novel deep convolutional neural network based on ResNet-18 and transfer learning for detection of wood knot defects. Journal of Sensors. 2021;2021:4428964. doi: 10.1155/2021/4428964.
45. Jing E., Zhang H., Li Z., Liu Y., Ji Z., Ganchev I. ECG heartbeat classification based on an improved ResNet-18 model. Computational and Mathematical Methods in Medicine. 2021;2021:6649970. doi: 10.1155/2021/6649970.
