PLOS One
. 2025 Jan 13;20(1):e0317355. doi: 10.1371/journal.pone.0317355

Modulation pattern recognition method of wireless communication automatic system based on IABLN algorithm in intelligent system

Ting Xie 1, Xing Han 2,*
Editor: Salim Heddam3
PMCID: PMC11729949  PMID: 39804877

Abstract

The aim of this study is to address the limitations of convolutional networks in recognizing modulation patterns. These networks are unable to utilize temporal information effectively for feature extraction and modulation pattern recognition, resulting in inefficient modulation pattern recognition. To address this issue, a signal modulation recognition method based on a two-way interactive temporal attention network algorithm has been developed. A two-way interactive temporal network is designed on the basis of the long short-term memory network with the objective of enhancing the contextual connection of the temporal network. The output of the temporal network is attentively weighted using the soft attention mechanism. The proposed algorithm exhibited enhanced overall, average, and maximum recognition rates at varying signal-to-noise ratios, with increases of 10.34%, 8.33%, and 3.33%, respectively, in comparison to other algorithms on the Radio Machine Learning (RML) 2016.10b dataset. Furthermore, the modulated signal recognition accuracy was as high as 92.84%, with an average increase in the Kappa coefficient of 12.28%. The Kappa coefficient on the Communication Signal Processing Benchmark for Machine Learning 2018 (CSPB.ML.2018) dataset was 0.62, representing an average increase of 10.32% over other algorithms. The results demonstrate that the proposed recognition method can enhance the network’s accuracy in recognizing modulated signals. Moreover, it has potential applications in modulation pattern recognition in automatic systems for wireless communications.

1. Introduction

As science and technology develop, wireless communication technology is constantly applied in daily life, facilitating people’s material and cultural life [1]. Among the related fields, automatic modulation pattern recognition technology represents a significant area of wireless communication automation [2]. This technology is capable of automatically recognizing the modulation mode of wireless communication signals, which plays a pivotal role in determining the radio’s ability to perceive the spectrum space. However, with the exponential growth of communication data, the allocation and utilization of the limited spectrum resources have become a pressing concern in the current wireless communication field [3]. Traditional modulation pattern recognition methods are mainly achieved through likelihood ratios or feature extraction; their computational steps are complex and their recognition accuracy is poor [4]. The advent of deep learning techniques thus opens up new avenues for modulation pattern recognition. The application of deep learning techniques to modulation pattern recognition has the potential to enhance the accuracy of recognition based on feature extraction, as evidenced by studies [5,6]. However, in modulation pattern recognition for wireless communication, algorithms built on Convolutional Neural Networks (CNNs) struggle to utilize time-series information. Meanwhile, recurrent networks are slow to train on long sequences. Accordingly, the study proposes a two-way Interactive Attention Bi-LSTM Network (IABLN) algorithm with the objective of resolving these issues. The contextual linking ability of Long Short-Term Memory (LSTM) networks is enhanced, and the number of chain cells in the LSTM is reduced by losslessly compressing the information length using convolutional networks.
Finally, the attention mechanism weights the LSTM, and the IABLN algorithm is constructed to improve modulation pattern recognition and classification effectiveness for the automatic wireless communication system.

A two-way interactive temporal network based on LSTM is studied and designed to enhance the contextualization capability of the temporal network. By introducing multiple rounds of interactive operations, the ability to extract modulation pattern information for wireless communication systems is improved. The application of a soft attention mechanism within the IBLSTM network enables the model to prioritize information pertinent to the current task, thereby enhancing its overall performance. The modulation pattern recognition method for wireless communication automation systems, which is based on the IABLN algorithm developed in the study, addresses the limitations of temporal networks on long input sequences and the lack of temporal information sensitivity in convolutional networks. Overall, the recognition method improves the contextualization capability of the temporal network, speeds up the inference process, and improves the recognition accuracy of the network, giving it good application prospects in the field of wireless communication. The proposed recognition method has the potential to enhance the capabilities of temporal networks in summarizing past and future information, thereby facilitating the advancement of modulation pattern recognition technology for automatic systems. Furthermore, it offers theoretical and technical support for the development of intelligent communications.

The overall framework of the study can be divided into five sections. In Section 1, a synthesis is presented of the domestic and international achievements and shortcomings pertaining to modulation pattern recognition methods for the development of automatic wireless communication systems. In Section 2, the study proposes the IABLN algorithm and, based on this, designs the modulation pattern recognition method based on the IABLN algorithm. In Section 3, the experimental simulation and analysis are carried out. In Section 4, the experimental results are discussed and analyzed. In Section 5, the research findings are summarized and directions for further research are pointed out.

2. Related works

Communication development promotes the improvement and enhancement of people’s living standards, and modulation pattern recognition technology is significant in wireless communication. Scholars have achieved much in research on modulation pattern recognition for wireless communication systems. F. Liu et al. proposed a method combining feature extraction and deep learning to address the low accuracy of automatic recognition of wireless communication signals. By optimizing the gated recurrent unit, signals were fed into a CNN and a parallel gated recurrent unit for identification, achieving a high recognition rate at low Signal-to-Noise Ratio (SNR) [7]. To improve automatic modulation classification in the development of cognitive radio, Q. Zheng et al. proposed a two-level data enhancement method using spectrum interference. By converting the original signal to the frequency domain, the frequency domain information was used to enhance the radio signal to help modulation classification [8]. To improve multi-class classification of modulated signals, R. Khan et al. augmented the modulated signals in the frequency and spatial domains by deploying three data enhancement methods: stochastic scaling/reduction, random shift, and random weak Gaussian blurring. Hyper-parameter selection based on cross-validation was used for statistics, and it was found that the learning efficiency in the spatial domain was better than that in the frequency domain [9]. To strengthen the analysis of the differences and characteristics of in-phase/quadrature and amplitude/phase representations, S. Chang et al. introduced CNN and Recurrent Neural Network (RNN) architectures into automatic modulation recognition for modulation type recognition of received signals. Through the proposed network, the effective use of all outputs was realized [10]. To address the low effectiveness of modulation classification in systems working at low SNR, Y. Sun et al. 
proposed a modulation type classification model for different received SNRs using machine learning. Firstly, constellation images and image classification technology were used for modulation type detection. Secondly, feature graphic representation was used to represent the statistical features as a spider map for machine learning. An overall classification accuracy of 59.00% was obtained at 0 dB SNR [11]. N. Rashvand et al. proposed a natural language processing Transformer network-based modulation recognition method with the objective of enhancing the accuracy of automatic modulation recognition of input signals. The RF signals were embedded with markers through real-time edge computing on IoT devices, resulting in an accuracy of 65.75% on the RML2016 dataset [12]. H. S. Ghanem et al. proposed a CNN-based modulation classification algorithm to address the problem of low modulation recognition efficiency due to point deformation and dispersion of constellation maps caused by noisy channels. The generation of modulation type trajectory graphs and the reliance on Radon transforms for training and testing enabled effective modulation classification under fading channel conditions [13].

Deep learning technologies such as CNN and LSTM are widely used in modulation classification in the field of wireless communication, and they have strong potential for processing and analyzing large data volumes by obtaining raw data and finding representations for different tasks such as classification and detection. Therefore, the value of deep learning technology in the modulation classification of intelligent communication systems has attracted many researchers. To mitigate the gradient vanishing problem in the process of modulation pattern recognition, J. N. Njoku et al. proposed a cost-effective hybrid neural network. A pooling layer, small filter sizes, a Gaussian dropout layer, and skip connections were used to increase network capacity, so as to enhance its feature extraction process. Moreover, a recognition accuracy of 93.50% was achieved on the Deep-Sig dataset [14]. To address the strong influence of channel noise on modulation recognition results for modulated radio signals, S. Lin et al. proposed a time-frequency attention mechanism based on CNN, which addressed the problem of learning channel, frequency, and time information [15]. To promote deep learning in radio signal recognition, Y. Tu et al. created a real-world radio signal dataset, and used deep learning methods and machine learning methods to compare recognition benchmarks, so as to realize the automatic collection and labeling of data [16]. The computational speed of deep neural networks also has limitations. Ashtiani et al. proposed an integrated deep neural network. Sub-nanosecond image classification was performed by directly processing the light waves propagating across the chip’s pixel array, eliminating the need for large memory modules [17]. To improve the classification accuracy of higher-order modulations in the polar plane, A. H. Shah et al. 
proposed to use phase ordinariness and polarity as combined inputs to the feature proposal in CNNs, and to divide the CNN into four blocks, each consisting of a set of symmetric and asymmetric filters. The extended input of the network was improved by adding features inside the network, thus improving network performance [18]. M. Venkatramanan and M. Chinnadurai developed a deep learning arithmetic optimization algorithm based on an augmented modulation classification method to address the suboptimal efficiency of multiple-input multiple-output orthogonal frequency division multiplexing systems. Modulation classification by an LSTM-based CNN and hyperparameter selection of the LSTM-CNN using enhanced modulation led to reasonable validation results in simulation tests [19].

Based on the above, it can be seen that scholars have carried out various studies on modulation pattern recognition of wireless communication systems using deep learning, most of which focus on CNNs or temporal recurrent neural networks. However, the lack of global information and temporal characteristics in CNNs leads to a weak contextual connection of modulated signals. In addition, the serial computation characteristics of time series networks limit the training scale and result in inefficient inference. These limitations continue to impede the advancement of deep learning-based modulation pattern recognition for wireless communication systems. To address this challenge, the study proposes a modulation pattern recognition method based on the IABLN algorithm. While enhancing LSTM context connection, the attention mechanism innovatively weights the network output, and the IABLN bidirectional interaction algorithm is designed to improve modulation pattern recognition. In the field of wireless communications, the temporal characteristics of modulated signals are key to identifying different modulation patterns. The IABLN algorithm is designed based on these temporal characteristics to ensure that the different modulation patterns can be effectively captured and distinguished.

3. Design of modulation pattern recognition method using IABLN algorithm

Firstly, the contextual connection ability of LSTM is enhanced. Secondly, a convolutional network is employed to losslessly compress the information length, reducing the number of chain cells in the LSTM. Thirdly, a Bidirectional LSTM (BiLSTM) is introduced to construct an Interactive Bidirectional Timing Sequence Network (IBLSTM). Finally, an attention mechanism is introduced to weight the output of the IBLSTM, thereby forming the IABLN algorithm.

3.1. LSTM-based IBLSTM network

Automatic modulation recognition in intelligent communications represents a crucial aspect of determining the radio’s capacity to perceive the spectral space. This is a fundamental and universal aspect of several fields. The current common method for automatic modulation pattern recognition is information modulation recognition using deep neural networks. However, modulation recognition based on convolutional networks lacks global information and temporal features, which can result in a weak contextual connection to the signal. Furthermore, temporal networks exhibit serial computation characteristics, which results in slower speeds for both large-scale training and inference. This makes modulation recognition with deep neural networks based purely on convolutional networks sub-optimal. Consequently, research is conducted to investigate alternative modulation pattern recognition methods. To improve the ability to extract modulation mode information from wireless communication systems, the context connection ability of LSTM is optimized. Throughout the execution of the LSTM, input elements and the hidden state from the previous time step interact within the LSTM, which leads to insufficient representation of the context [20,21]. Therefore, a deformed LSTM is introduced to enhance the context connection. Before the input $x_t$ at the current moment enters the cell of the LSTM, $x_t$ interacts with the other LSTM input $h_{t-1}$ over multiple rounds, thus enhancing the contextual modeling capability. The specific expression is shown in Eq (1).

$$\begin{cases} x^{i} = 2\,\sigma\!\left(Q^{i}\, h_{prev}^{\,i-1}\right) \odot x^{i-2}, & \text{for odd } i \in [1, r] \\[4pt] h_{prev}^{\,i} = 2\,\sigma\!\left(R^{i}\, x^{i-1}\right) \odot h_{prev}^{\,i-2}, & \text{for even } i \in [1, r] \end{cases} \qquad (1)$$

In Eq (1), $r$ represents the number of interaction rounds, $\sigma$ represents the Sigmoid activation function, and $Q^{i}$ and $R^{i}$ represent trainable parameter matrices. $h_{prev}^{\,i}$ represents the other input obtained after interaction, and $\odot$ denotes element-wise multiplication. Among them, $x^{-1} = x_t$ and $h_{prev}^{0} = h_{t-1}$ denote the current moment’s input and the previous moment’s output hidden state, respectively. A schematic diagram of the multi-round interaction between the input and the other input of the LSTM is shown in Fig 1.
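To make the round-by-round indexing of Eq (1) concrete, the interaction can be sketched in NumPy as below. The function name `interact` is illustrative, and a single $Q$ and $R$ are shared across rounds for brevity, whereas Eq (1) uses per-round parameters $Q^{i}$, $R^{i}$; this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def interact(x_t, h_prev, Q, R, r=5):
    """Multi-round interaction of Eq (1): before entering the LSTM cell,
    the current input and the previous hidden state repeatedly gate each
    other. Odd rounds update x, even rounds update h_prev."""
    x = {-1: x_t}       # x^{-1} = x_t
    h = {0: h_prev}     # h_prev^0 = h_{t-1}
    for i in range(1, r + 1):
        if i % 2 == 1:  # odd round: gate x with the latest h
            x[i] = 2.0 * sigmoid(Q @ h[i - 1]) * x[i - 2]
        else:           # even round: gate h with the latest x
            h[i] = 2.0 * sigmoid(R @ x[i - 1]) * h[i - 2]
    return x[max(x)], h[max(h)]
```

With zero parameter matrices the gates evaluate to one, so both vectors pass through unchanged, which is a convenient sanity check of the indexing.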

Fig 1. Schematic diagram of multi-round interaction.


Fig 1 shows the two inputs interacting over five rounds. The entire stage is interactive, i.e., an interaction between the input vector and the hidden state. Through this interaction, the LSTM can encode sequence information from the current moment to future moments [22,23]. However, it cannot encode information from future moments back to the current moment. Therefore, Bi-LSTM is further introduced for optimization. BiLSTM is a network model combining a forward LSTM and a backward LSTM; it better captures long-distance dependencies and realizes back-to-front information encoding, with superior global synthesis capabilities [24,25]. Fig 2 shows the BiLSTM network structure.

Fig 2. BiLSTM network structure.


In Fig 2, the BiLSTM has a two-layer LSTM structure. The first layer is the forward LSTM, which propagates the input information sequence forward; the second layer is the backward LSTM, which propagates the input information backwards. The expression for the output of the BiLSTM at a given time step is shown in Eq (2) [26].

$$\begin{cases} h_{t}^{F} = f_{ac}\!\left(U^{F} h_{t-1}^{F} + W^{F} x_{t} + b^{F}\right) \\[4pt] h_{t}^{B} = f_{ac}\!\left(U^{B} h_{t+1}^{B} + W^{B} x_{t} + b^{B}\right) \\[4pt] h_{t} = h_{t}^{F} \oplus h_{t}^{B} \end{cases} \qquad (2)$$

In Eq (2), $h_{t}^{F}$ is the forward LSTM output, $h_{t}^{B}$ is the backward LSTM output, $f_{ac}(\cdot)$ is the activation function, and $U$ and $W$ are the weight matrices. The superscripts $F$ and $B$ denote the forward and backward LSTM, respectively. $x_t$ is the input, $b$ is the offset term, and $h_t$ is the output. $\oplus$ is the vector concatenation operation. Combined with the above, the IBLSTM structure is shown in Fig 3.
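The bidirectional recurrence of Eq (2) can be sketched as follows. The LSTM cell internals are abstracted into the generic cell $f_{ac}(Uh + Wx + b)$ exactly as Eq (2) writes it; the function name `bilstm_outputs` and the use of `tanh` as $f_{ac}$ are illustrative assumptions, not the authors' code.

```python
import numpy as np

def bilstm_outputs(X, UF, WF, bF, UB, WB, bB, fac=np.tanh):
    """Eq (2): a forward pass over the sequence, a backward pass in
    reverse order, then concatenation h_t = h_t^F (+) h_t^B at every
    time step. X has shape (T, input_dim)."""
    T = X.shape[0]
    d = bF.shape[0]
    hF = np.zeros((T, d))
    hB = np.zeros((T, d))
    prev = np.zeros(d)
    for t in range(T):                  # forward direction
        prev = fac(UF @ prev + WF @ X[t] + bF)
        hF[t] = prev
    prev = np.zeros(d)
    for t in reversed(range(T)):        # backward direction
        prev = fac(UB @ prev + WB @ X[t] + bB)
        hB[t] = prev
    return np.concatenate([hF, hB], axis=1)
```

Note that the concatenated output at each time step has twice the hidden dimension, which is why the forward and backward states can jointly summarize past and future context.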

Fig 3. IBLSTM network structure.


In Fig 3, the current input first interacts with the previous input to improve the contextual connection capability of the LSTM. BiLSTM is then used for bidirectional interaction to integrate input and output and increase network performance.

3.2. IABLN recognition algorithm based on IBLSTM network

In order to realize the design of the IABLN algorithm, an attention mechanism is introduced to weight the output of the IBLSTM. The attention mechanism finds correlations within the original data and introduces a weight score according to the correlation to measure the output [27,28]. Attention mechanisms are mainly divided into soft and hard attention; since hard attention is non-differentiable in deep learning, the simpler soft attention mechanism is used to weight the output. The principle of the soft attention mechanism can be summarized as weighting different parts by assigning different weights to the existing input information. This weighting process improves model performance by allowing the model to focus more on information relevant to the task at hand. In contrast to hard attention mechanisms, soft attention models the attention weight distribution as a probability distribution, enabling the model to generate attention for all input positions rather than just one location [29,30]. The specific expression is shown in Eq (3).

$$Attn(X, q) = \sum_{i=1}^{N} \alpha_i x_i \qquad (3)$$

In Eq (3), $Attn(\cdot)$ represents the output of the sequence after attention weighting, $N$ represents the input length, $X$ represents the input sequence of length $N$, $q$ represents the input query vector, $x_i$ represents the $i$-th element in the sequence, and $\alpha_i$ represents the weight value. The weighting formula of the attention mechanism is shown in Eq (4) [31].

$$\alpha_i = p(z = i \mid X, q) = \mathrm{Softmax}\left(s(x_i, q)\right) \qquad (4)$$

In Eq (4), $p$ represents the weighting probability, $z$ is the index position of the selected information, and $s(\cdot)$ is the scoring function, which is calculated as shown in Eq (5) [32].

$$s(x_i, q) = x_i^{T} q \qquad (5)$$

In Eq (5), $T$ denotes the transpose operation. To more effectively illustrate the implementation steps of the soft attention mechanism, this study describes them using an English-to-German translation task. Fig 4 shows the details.
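Eqs (3)–(5) compose into a single small routine: score each element against the query, normalize the scores into a probability distribution, and take the weighted sum. The following is a minimal NumPy sketch; the function name `soft_attention` is illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract max for numerical stability
    return e / e.sum()

def soft_attention(X, q):
    """Soft attention over a sequence X of shape (N, d) with query q:
    dot-product scores s(x_i, q) = x_i^T q (Eq 5), softmax weights
    alpha_i (Eq 4), and the weighted sum Attn(X, q) (Eq 3)."""
    scores = X @ q              # one score per sequence element
    alpha = softmax(scores)     # weights form a probability distribution
    return alpha @ X, alpha     # weighted sum and the weights themselves
```

Because the weights are a probability distribution, every position contributes to the output, which is exactly the property that distinguishes soft attention from hard attention in the text above.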

Fig 4. Steps in the execution of translation tasks incorporating soft attention mechanisms.


In Fig 4, when translating the word "machine", the attention mechanism first computes an input. The current input, representing the input at the current moment, is obtained by scoring the hidden state of the previous unit against the output of each encoder unit. The current-moment input is the weighted average of these scores with the encoder outputs, combined with the output of the previous moment. The specific function expression is shown in Eq (6).

$$\begin{cases} [\alpha_1, \alpha_2, \alpha_3, \alpha_4] = \mathrm{Softmax}\left[s(q_2, h_1), s(q_2, h_2), s(q_2, h_3), s(q_2, h_4)\right] \\[4pt] context = \sum_{i=1}^{4} \alpha_i h_i \end{cases} \qquad (6)$$

In Eq (6), context represents the input at this moment. The two inputs of the attention mechanism are both from the output of the BiLSTM, which is conducive to increasing the influence of the output in the LSTM unit on the modulation pattern recognition results to a certain extent, so as to improve network performance. Combined with the above information, the network structure of the IABLN algorithm based on IBLSTM is shown in Fig 5.

Fig 5. Schematic diagram of the network structure of the IABLN recognition algorithm.


In Fig 5, the input data is the modulation data of the original automatic wireless communication system. The input signal is an in-phase/quadrature decomposition sequence signal with a length of 128 and 2 channels. The input channel of the convolutional layer is 2, the output channel is 128, and the convolution kernel size is 5. The output of the convolutional kernel is activated and sent to the pooling layer; the kernel size and stride of the pooling layer are both set to 3, and the pooling layer input is zero-padded at the edges. The time series network contains 42 bidirectional interactive time series units, with an input dimension and hidden dimension of 128. The flow of the modulation pattern recognition method based on the IABLN algorithm proposed in the study is shown in Fig 6.
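The stated layer sizes can be cross-checked with the standard 1-D convolution/pooling output-length formula. The short sketch below (the helper name `conv1d_out` is illustrative) shows how a length-128 input yields the 42 time-series units mentioned above, assuming unit stride and no padding for the convolution, and stride 3 with zero padding of 1 for the pooling layer.

```python
def conv1d_out(length, kernel, stride=1, padding=0):
    """Standard 1-D convolution/pooling output-length formula:
    floor((L + 2p - k) / s) + 1."""
    return (length + 2 * padding - kernel) // stride + 1

seq = 128                                             # length-128 I/Q sequence, 2 channels
seq = conv1d_out(seq, kernel=5)                       # conv layer, kernel 5 -> 124
seq = conv1d_out(seq, kernel=3, stride=3, padding=1)  # pooling, kernel 3, stride 3 -> 42
print(seq)                                            # 42 bidirectional time-series units
```

This agrees with the 42 bidirectional interactive time-series units stated for the temporal network, which suggests the pooling layer is the step that compresses the sequence without discarding channel information.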

Fig 6. Flow of the modulation pattern recognition method based on the IABLN algorithm.


In Fig 6, the IABLN algorithm first passes the input signal through a one-dimensional convolutional layer for feature extraction, expanding the signal from two dimensions to a higher dimensionality. The down-sampled signal is passed to the BiLSTM for temporal feature extraction, and the soft attention mechanism weights the output. Finally, the output is passed to the classification network, which outputs the modulation class of the modulated signal. The forward and backward output hidden states of the BiLSTM are concatenated and used as the hidden information of the attention layer, so as to weight the output of the time series network layer. Then, through two fully connected layers, the mapping dimension of the output features is determined according to the number of modulation categories. After obtaining the class probabilities of the signal through the Softmax layer, the category with the largest probability value is selected as the final modulation class judgment. All outputs of the convolutional and fully connected layers are passed through activation functions to fit the nonlinear model.

The IABLN algorithm effectively exploits the temporal properties of modulated signals by combining the BiLSTM and the attention mechanism. The bidirectional structure of the BiLSTM allows the algorithm to take into account both past and future information of the signal, which is crucial for understanding long-term dependencies in the signal. The attention mechanism further enhances the model’s ability to identify critical time segments in the signal, improving the accuracy of modulation identification. In this study, the Parametric Rectified Linear Unit (PReLU) is used as the activation function [33]. The main expression is shown in Eq (7).

$$f_{PReLU}(y_i) = \begin{cases} y_i, & y_i \geq 0 \\ a_i y_i, & y_i < 0 \end{cases} \qquad (7)$$

In Eq (7), $f_{PReLU}$ represents the PReLU activation function, $y_i$ is the input of the nonlinear activation function, and $a_i$ represents the slope of the negative half-axis. PReLU can activate all output features of the convolutional computation layer in a nonlinear manner. Without adding significant extra parameters, it can improve the fitting ability of the module while reducing the probability of overfitting. When the negative half-axis slope is set to 0, PReLU reduces to the Rectified Linear Unit (ReLU) [34,35]. ReLU is a commonly used activation function, usually referring to the ramp function, while PReLU is a ReLU with learnable parameters. In addition, when the model is trained, the cross-entropy loss function calculates the loss [36,37]. The specific expression is shown in Eq (8).
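Eq (7) is a two-branch piecewise function, so it maps directly onto a vectorized `where`. In the sketch below the slope `a` is a fixed illustrative value; in an actual PReLU layer this slope is a learned parameter.

```python
import numpy as np

def prelu(y, a=0.25):
    """Eq (7): identity on the non-negative half-axis, slope a on the
    negative half-axis. With a = 0 this reduces to the standard ReLU."""
    return np.where(y >= 0, y, a * y)
```

The `a = 0` special case is exactly the ReLU reduction described in the text, which makes the relationship between the two activations easy to verify numerically.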

$$Loss = -\frac{1}{m} \sum_{j=1}^{m} \sum_{i=1}^{n} y_{ji} \log\left(\hat{y}_{ji}\right) \qquad (8)$$

In Eq (8), $Loss$ is the cross-entropy loss, $m$ is the amount of data input in a batch, $n$ is the number of classes, and $y_{ji}$ is the label probability that a sample belongs to a certain class, with a value of 1 or 0. $\hat{y}_{ji}$ represents the probability the network predicts for the sample to be a certain modulation type. Finally, to verify the modulation pattern recognition method using the proposed IABLN algorithm, a simulation experiment is designed. The whole phase consists of three modules: dataset division, network training parameter configuration, and network performance evaluation.
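Eq (8) averages, over the batch of $m$ samples, the negative log-likelihood of the one-hot labels under the predicted class probabilities. A minimal sketch (the function name and the `eps` guard against $\log 0$ are illustrative additions):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Eq (8): y_true holds one-hot labels y_ji, y_pred holds predicted
    class probabilities yhat_ji; both have shape (m, n). Returns the
    batch-averaged cross-entropy loss."""
    m = y_true.shape[0]
    return -np.sum(y_true * np.log(y_pred + eps)) / m
```

A perfect prediction gives a loss near zero, while a uniform prediction over $n$ classes gives $\log n$, a useful baseline when reading training curves.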

In Fig 7, a dataset consisting of multiple sampled modulation signals and corresponding modulation type labels is first partitioned. A number of samples are randomly selected as the training data for each class under each SNR of a single modulation type, and multiple samples are randomly selected from the remaining data of a single SNR as the validation set. The last remaining samples are used as the test set. The training, validation, and test data of each class are combined into a training set, validation set, and test set, respectively. The K-fold cross-test is used to divide all data except the independent test set into K non-overlapping parts [38,39]. Each time, K-1 parts are selected as the training set, and the remaining part is used as the test set. The test sets selected across the K experiments do not overlap with each other, and the average value is taken as the experimental result. Since the types of each modulation mode are consistent with the number of samples, a non-stratified K-fold cross-test is used, and the K-value is set to 5. Secondly, the network training parameters are configured, mainly including the number of iterations, the size of the data batch sent to the network for parameter training, and the initial learning rate. In this study, the Adam optimizer is used as the optimization algorithm of the model, and an early stopping strategy monitors the validation-set loss. Meanwhile, the stochastic gradient descent optimizer with momentum is adjusted by the cosine annealing algorithm, with an initial learning rate of 0.001. Training is stopped when the validation-set loss does not decrease for more than ten generations. Finally, the network performance is evaluated. After the network finishes training, the effectiveness of this modulation pattern recognition method is tested on an independent test set.
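The non-stratified K-fold division described above can be sketched as an index generator; the function name `k_fold_indices` and the fixed random seed are illustrative assumptions, not the authors' code.

```python
import numpy as np

def k_fold_indices(n_samples, k=5, seed=0):
    """Non-stratified K-fold: shuffle the indices once, split them into
    k non-overlapping parts, then yield (train, test) index pairs where
    each part serves as the test fold exactly once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

Because the folds partition the shuffled indices, the k test folds are mutually disjoint and together cover the whole dataset, matching the non-overlap property the text requires.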
The Overall Accuracy (OA), Average Accuracy (AA), Max Accuracy (MA), and Kappa Coefficient (KC) are mainly used, where the OA calculation is shown in Eq (9) [40–42].

Fig 7. Experimental steps of modulation pattern recognition method based on IABLN algorithm.


$$OA = \frac{\sum_{i=1}^{N_{class}} T_{ii}}{\sum_{i=1}^{N_{class}} C_i} \qquad (9)$$

In Eq (9), $N_{class}$ is the number of categories, $T_{ii}$ is the number of signals of class $i$ correctly identified as class $i$, and $C_i$ is the total number of samples of class $i$ in the test set. AA is calculated as shown in Eq (10).

$$AA = \frac{1}{N_{class}} \sum_{i=1}^{N_{class}} \frac{T_{ii}}{C_i} \qquad (10)$$

KC is a quantitative expression of the confusion matrix, which indicates the degree of consistency between the predicted and actual results; the closer the KC is to 1, the closer the predicted results are to the actual results. The specific calculation is shown in Eq (11).

$$Kappa = \frac{OA - \sum_{i=1}^{N_{class}} \frac{C_i \times N_i}{N \times N}}{1 - \sum_{i=1}^{N_{class}} \frac{C_i \times N_i}{N \times N}} \qquad (11)$$

In Eq (11), $Kappa$ represents the KC, $N_i$ is the number of samples predicted as class $i$, and $N$ is the total number of samples.
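The three headline metrics of Eqs (9)–(11) can all be computed from a single confusion matrix. In the sketch below, rows are assumed to be true classes (row sums give $C_i$) and columns predicted classes (column sums give $N_i$); the function name is illustrative.

```python
import numpy as np

def oa_aa_kappa(conf):
    """OA (Eq 9), AA (Eq 10), and Kappa (Eq 11) from a confusion matrix
    whose rows are true classes and columns are predicted classes."""
    conf = np.asarray(conf, dtype=float)
    N = conf.sum()                      # total number of samples
    C = conf.sum(axis=1)                # true samples per class, C_i
    Np = conf.sum(axis=0)               # predicted samples per class, N_i
    oa = np.trace(conf) / N             # Eq (9): correct / total
    aa = np.mean(np.diag(conf) / C)     # Eq (10): mean per-class accuracy
    pe = np.sum(C * Np) / (N * N)       # chance agreement term of Eq (11)
    kappa = (oa - pe) / (1.0 - pe)      # Eq (11)
    return oa, aa, kappa
```

A perfectly diagonal confusion matrix yields OA = AA = Kappa = 1, while a prediction indistinguishable from chance drives Kappa to 0 even when OA stays well above zero, which is why KC complements the accuracy metrics.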

4. Verification and analysis of modulation pattern recognition method using IABLN algorithm

To verify the modulation pattern recognition method using the IABLN algorithm, performance verification of the LSTM-based IBLSTM network was carried out first. Secondly, the effectiveness of the attention mechanism was verified. Finally, performance analysis of the IABLN algorithm and comparison of modulation pattern recognition methods were carried out.

4.1. LSTM-based IBLSTM network validation

In order to validate the effectiveness of the IBLSTM network proposed by the study, validation was conducted on RML2016.10b, analyzing the effect of the number of interaction rounds on performance. The RML2016.10b dataset contains about 1.2 million data samples, and the modulation types mainly include the eight most commonly used digital modulations and two analog modulations. At the same time, a one-layer LSTM, a two-layer LSTM, a bidirectional LSTM, and a two-layer Gated Recurrent Unit (GRU), a variant of LSTM, were introduced for performance comparison. Five-fold cross-validation took the optimal weights of each model across 5 rounds to perform 5 inferences on the independent test set, and the average value was taken. It should be mentioned that all validation experiments in the study were trained and tested on a Linux server with an Intel i7 7800 processor and two Nvidia Titan Xp graphics processors. The deep learning framework was PyTorch 1.10 in a CUDA 11.5 environment, and the runtime was Python 3.8. The test results on the test set after training each of the different time series networks to convergence are shown in Fig 8.

Fig 8. Comparison of recognition performance using different temporal networks at different signal-to-noise ratios.


In Fig 8A, the accuracy of the proposed IBLSTM time series network was the highest at each SNR. Compared with other time series networks, its recognition accuracy was superior; the improvement was most obvious in the SNR interval [−12, −2] dB. When the SNR was 18 dB, the proposed IBLSTM time series network improved by 2.34% on average compared with the other methods. Fig 8B shows a comparison of F1 values for the different time series networks. The overall difference in F1 values among the five networks was small, but IBLSTM was still superior. When the SNR was 18 dB, the F1 value of IBLSTM was as high as 0.95. Table 1 shows the overall performance evaluation results of the five time series networks.

Table 1. Comparison of indicator evaluation results for different time series networks.

Performance index	Two-layer GRU	One-layer LSTM	Two-layer LSTM	Bidirectional LSTM	IBLSTM
OA	57.23%	58.23%	59.74%	62.01%	63.92%
AA	58.45%	59.38%	60.90%	62.07%	64.57%
MA	87.60%	89.09%	91.97%	91.10%	92.80%
KC	0.54	0.58	0.56	0.60	0.62

In Table 1, the OA of the proposed IBLSTM network was 63.92%, the highest among the five time series networks. OA represents the percentage of correct predictions over the overall amount of data; the higher the value, the higher the overall recognition rate. AA represents the average of the recognition accuracy of each category, and the proposed IBLSTM had the highest AA value. Compared with the two-layer GRU, the AA value of IBLSTM increased by 10.47%. Compared to standard LSTMs, IBLSTM increased by an average of 7.39%. This indicated that the recognition performance of IBLSTM was superior. The MA value of IBLSTM was 92.80%, which was 1.87% higher than that of the bidirectional LSTM. In the comparison of KCs, IBLSTM still showed better recognition results. On the whole, compared with other time series networks, the proposed IBLSTM significantly improved the recognition performance of modulation modes, which confirmed the feasibility and effectiveness of the improved LSTM.

4.2. Modulation pattern recognition performance verification based on IABLN algorithm

To verify the rationality and superiority of the IABLN algorithm in modulation pattern recognition, the attention mechanism was verified first. An IBLSTM without attention-weighted output was employed as a control: the outputs of all units were summed with equal weights and fed into the subsequent classification sub-network, with all remaining network structures and positions unchanged. After five-fold cross-training, the weights with the best performance on the validation set were selected for each network and evaluated on the independent test set. Fig 9 shows the overall recognition accuracy and F1 value comparison results of the two weighting methods.
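The five-fold cross-training and best-weight selection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `train_and_validate` is a placeholder that would train the network and report validation accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, k = 100, 5

# Shuffle once, then split the sample indices into k folds
indices = rng.permutation(n_samples)
folds = np.array_split(indices, k)

def train_and_validate(train_idx, val_idx):
    # Placeholder: a real run would train the network on train_idx and
    # measure accuracy on val_idx; here the accuracy is simulated.
    weights = {"trained_on": len(train_idx)}
    val_acc = float(rng.uniform(0.5, 0.9))
    return weights, val_acc

best_acc, best_weights = -1.0, None
for i in range(k):
    val_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    weights, acc = train_and_validate(train_idx, val_idx)
    if acc > best_acc:                 # keep the best-validated weights
        best_acc, best_weights = acc, weights
# best_weights would then be evaluated once on the independent test set
```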

Fig 9. The recognition accuracy of various signal-to-noise ratios with and without attention mechanism in the network.


Fig 9A shows the recognition accuracy of the temporal network under different SNRs with and without the attention mechanism. The accuracy of the temporal network improved more quickly after the attention mechanism was added, and the recognition accuracy stabilized at an SNR of -2 dB. This indicated that the attention mechanism further improved the convergence of the temporal network. As shown in Fig 9B, the F1 value of IABLN was 3.26% higher than that of IBLSTM, showing that the attention mechanism effectively improved the network's modulation pattern recognition accuracy. Fig 10 shows the performance indicators of the two methods.

Fig 10. Performance evaluation results of network with and without attention mechanism.


In Fig 10A and 10B, the performance of the IBLSTM network without attention weighting was inferior to that with attention weighting. After adding the attention mechanism, OA increased by 10.34%, AA by 8.33%, and MA by 3.33%. In addition, the KC of the IABLN network was 0.06 higher than that of IBLSTM. These results showed that the attention mechanism effectively improved the network's recognition of modulation patterns and enhanced temporal network performance. To better understand this enhancement, the output scores generated by the attention unit were analyzed for a PAM4 modulated signal with an input SNR of 14 dB from the RML2016 dataset. The normalized weighted results are shown in Fig 11.

Fig 11. The weighting of attention mechanism for each output unit when inputting a 14 dB PAM4 signal.


In Fig 11, the attention mechanism assigned different weights to the output of each bidirectional interactive time series unit, indicating that the outputs at some locations were filtered and traded off. The weights of output units 6, 12 and 38 were as high as 1.00, indicating that the attention mechanism attached more importance to these three elements and that their outputs had a greater influence on the overall modulation pattern recognition result. Finally, to further demonstrate the effectiveness of the proposed IABLN algorithm for modulation pattern recognition, the test set was evaluated using the best network weights from the validation set. Fig 12 shows the modulation pattern recognition results of the IABLN algorithm on the test set at SNRs of -12 dB and 0 dB.
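A soft attention mechanism of the kind described, which scores each time-step output, normalizes the scores with a softmax, and weights the outputs accordingly, can be sketched as below. The dot-product scoring and all names are assumptions for illustration; the paper's exact scoring function may differ. The max-normalization at the end mirrors how peak weights of 1.00, such as those reported for units 6, 12, and 38, would appear in a plot.

```python
import numpy as np

def soft_attention(outputs, query):
    # outputs: (T, D) time-step outputs of the temporal network
    # query:   (D,) learned scoring vector (dot-product scoring assumed)
    scores = outputs @ query
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over time
    context = weights @ outputs                      # attention-weighted sum
    return context, weights

rng = np.random.default_rng(1)
T, D = 40, 16                      # e.g. 40 output units, 16-dim features
outputs = rng.normal(size=(T, D))
query = rng.normal(size=D)
context, weights = soft_attention(outputs, query)
visualized = weights / weights.max()   # max-normalized, peak weight = 1.00
```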

Fig 12. Modulation pattern recognition results with SNR of -12dB and 0dB in the test set.


In Fig 12A, the diagonal values of the confusion matrix were lower at an SNR of -12 dB, indicating lower recognition accuracy at that SNR. Comparison with Fig 12B shows that the recognition accuracy of the IABLN algorithm improved as the SNR increased. Among the 10 modulated signals, the recognition accuracy of the CPFSK modulated signal reached 1.00, while that of the WBFM modulated signal was the lowest at only 0.32. Therefore, the confusion matrix of the IABLN algorithm was further analyzed at test-set SNRs of 6 dB and 10 dB, as shown in Fig 13.

Fig 13. Modulation pattern recognition results with SNR of 6dB and 10dB in the test set.


In Fig 13A, the vast majority of modulation mode signals could be accurately identified at an SNR of 6 dB; the recognition accuracy of the CPFSK, GFSK and PAM4 modulated signals reached 1.00. Fig 13B shows the recognition results at an SNR of 10 dB, where recognition of the modulated signals was better than at the three lower SNRs. However, there was some misjudgment among QAM-type signals, possibly because the QAM16 constellation is a subset of the QAM64 constellation, causing the IABLN algorithm to classify QAM16 signals as QAM64. The recognition accuracy of WBFM modulated signals was the lowest at all SNRs, indicating that this mode depended only weakly on SNR. This may be because WBFM signals, like AM-DSB signals, are analog modulated from a continuous speech source, whose model is affected by silent time periods; different SNRs therefore differentiate them less, so the IABLN's recognition accuracy for such signals was significantly lower than for digitally modulated signals. The accuracy of most modulation modes increased with SNR, with QAM16 the most affected by SNR.

4.3 Performance validation of different methods

To further illustrate the superiority of the IABLN algorithm, the study introduces currently common modulation pattern recognition methods, as well as the methods of references [12,18], for comparison with IABLN. The model proposed in reference [12] is a Transformer, and the model proposed in reference [18] is a CNN. The study adopts the optimal parameter settings identified by the original authors for the comparison methods. First, the recognition accuracy and F1 values of the six methods are compared on the RML2016.10b dataset. The specific comparison results are shown in Fig 14.

Fig 14. Recognition results of IABLN and comparative methods on the test set under different SNRs.


Comparing the recognition accuracies in Fig 14A, it can be seen that at an SNR of -2 dB the rate of increase in recognition accuracy of the methods gradually stabilized, with the proposed method improving the fastest. When the SNR exceeded 6 dB, the highest recognition accuracy of IABLN reached 92.84%. In Fig 14B, the F1 value of LSTM was the worst among the compared methods, while the F1 value of the proposed IABLN was 3.33% higher than that of LSTM. The study also evaluates the six methods on different hardware, reporting model parameters and inference speed. The specific results are presented in Table 2.

Table 2. Performance evaluation results of IABLN and comparative algorithms.

Performance index            LSTM       DesNet     ResNet     IABLN      Transformer   CNN
Total number of parameters   220.00k    300.00k    200.00k    300.00k    250.00k       240.00k
Inference time               22.33 ms   24.12 ms   17.88 ms   1.57 ms    2.45 ms       17.88 ms
OA                           52.10%     55.76%     60.34%     62.88%     57.89%        60.02%
AA                           54.34%     57.44%     61.33%     63.89%     55.28%        59.64%
KC                           0.52       0.54       0.60       0.63       0.55          0.58

Table 2 shows that the IABLN-based modulation recognition method achieved higher recognition accuracy than the existing methods. The total number of parameters of IABLN was 300,000, giving it greater model capacity than most other methods. In terms of inference time, IABLN responded in 1.57 ms, 90.73% faster on average than the other methods. Compared with the OA of LSTM, the recognition accuracy of IABLN was notably enhanced; compared with DesNet and ResNet, the OA of IABLN increased by 12.77% and 4.21%, respectively. The KC of IABLN was 0.63, an average increase of 12.90% over the other methods. Although ResNet had fewer parameters than the IABLN algorithm, it was less efficient in inference time. This may be because the trained IABLN algorithm can be deployed on an offline device, enabling real-time responses. These validation results demonstrated that the proposed IABLN exhibits notable advantages on the RML2016.10b dataset.

On this basis, the study further used the CSPB.ML2018 and RML2016.09a datasets for performance checking. The CSPB.ML2018 dataset is an improvement on the RML2016.10b dataset that addresses the known issues and survey errors of RML2016; it contains 8 different digital modulation modes and 3,584,000 signal samples. The RML2016.09a dataset is an earlier version of the RML2016 dataset that also contains signal samples of multiple modulation types. The performance evaluation results of the six methods on the CSPB.ML2018 dataset are shown in Table 3.

Table 3. Performance evaluation results of the six methods in the CSPB.ML2018 dataset and RML2016.09a dataset.

Method       Dataset        Total parameters   Inference time   OA       AA       KC     p-value
LSTM         CSPB.ML2018    220.00k            30.24 ms         50.55%   51.02%   0.50   0.001
             RML2016.09a    200.00k            19.87 ms         65.55%   54.65%   0.46
DesNet       CSPB.ML2018    300.00k            28.56 ms         56.87%   55.46%   0.53   0.001
             RML2016.09a    250.00k            20.09 ms         61.02%   60.88%   0.50
ResNet       CSPB.ML2018    200.00k            21.88 ms         64.72%   63.25%   0.61   0.001
             RML2016.09a    180.00k            15.68 ms         70.75%   68.48%   0.64
IABLN        CSPB.ML2018    300.00k            4.95 ms          65.07%   64.21%   0.62   -
             RML2016.09a    280.00k            3.86 ms          78.65%   74.21%   0.68
Transformer  CSPB.ML2018    250.00k            4.54 ms          65.60%   64.33%   0.64   0.098
             RML2016.09a    200.00k            4.01 ms          80.09%   75.51%   0.67
CNN          CSPB.ML2018    240.00k            22.45 ms         59.78%   56.54%   0.53   0.003
             RML2016.09a    200.00k            12.33 ms         68.78%   70.23%   0.62

Table 3 indicates that the proposed IABLN algorithm had a slightly longer inference time and slightly lower recognition accuracy than the Transformer on the CSPB.ML2018 dataset, although the discrepancy was marginal. Nevertheless, the inference time of the IABLN algorithm was reduced by an average of 79.63% compared with DesNet, ResNet, and CNN. The Transformer's advantage on CSPB.ML2018 may be attributed to the model-size and performance optimizations made to its architecture during the experimental phase of reference [12]. Even so, the modulation pattern recognition method based on the IABLN algorithm remained demonstrably superior in overall recognition accuracy and recognition efficiency. Comparing the validation results on the RML2016.09a dataset, the proposed method was superior in response time, and the IABLN algorithm significantly enhanced the recognition accuracy of modulation patterns. The superiority of the study is also supported by significance tests obtained by repeating each experiment three times with the different methods (p < 0.05).

4.4 Time complexity analysis

Finally, the study performs a time complexity analysis. Assume the input signal length is N, the convolution kernel size is K, the number of input channels is Cin, and the number of output channels is Cout. The computational complexity of each output feature map is O(N·Cin·K), so the total complexity of the convolutional layer is O(Cout·N·Cin·K). For each direction of the LSTM, the time complexity is O(N·D), where D is the hidden layer dimension of the LSTM cell; since the LSTM used in the study is bidirectional, the total time complexity of the BiLSTM is O(2·N·D). The attention mechanism scores and normalizes each element of the sequence, at a complexity of O(N·D) per output time step and a total of O(N²·D). With a fully connected output dimension of M, the complexity of the fully connected layer is O(N·D·M). Combining the above, the total time complexity of the proposed IABLN algorithm is O(N·(Cout·Cin·K + 2·D + D·M) + N²·D).
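A quick sanity check of these terms can be written as a leading-order operation count. This only illustrates the stated formula (constant factors that big-O notation would drop are kept so each term is visible); the function name and example sizes are invented:

```python
def iabln_ops(N, K, Cin, Cout, D, M):
    """Leading-order operation counts for each stage named in the text."""
    conv = Cout * N * Cin * K   # convolutional layer: O(Cout*N*Cin*K)
    bilstm = 2 * N * D          # bidirectional LSTM:  O(2*N*D)
    attention = N * N * D       # attention over all steps: O(N^2*D)
    fc = N * D * M              # fully connected layer: O(N*D*M)
    return conv + bilstm + attention + fc

# For a moderate sequence length, the quadratic attention term dominates
total = iabln_ops(N=128, K=3, Cin=2, Cout=64, D=64, M=10)
print(total)  # 1196032, of which 1048576 comes from the attention term
```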

The computational complexity of a traditional one-way LSTM is lower than that of the IABLN, since each time step is processed only once. CNNs, by contrast, extract features mainly through convolutional and pooling layers and typically have lower computational complexity, particularly on high-dimensional data such as images; however, CNNs do not usually model temporal information. The Transformer processes data through a self-attention mechanism that handles all elements of a sequence in parallel, with computational complexity related to the sequence length and the number of attention heads. Although the Transformer performed well in some tasks, its computational complexity is usually higher than that of the LSTM, especially for long sequences. The preceding experimental results demonstrated that the IABLN algorithm achieved high recognition accuracy and rapid response times on diverse datasets, suggesting that it is an effective approach for modulation recognition tasks despite its high computational complexity.

5. Discussion

The study proposed a signal modulation recognition method based on the IABLN algorithm and validated its performance on the RML2016 and CSPB.ML2018 datasets. The IBLSTM network was trained on the RML2016 training set, and its test results exhibited an average improvement of 2.34% over those of other networks. M. A. Hamza et al. likewise employed a BiLSTM to investigate the modulated signals of communication systems with deep learning models, and their outcomes aligned with the findings of this study. This indicated the rationality and feasibility of using a BiLSTM to enhance the contextual connection of temporal networks. The introduction of the BiLSTM network, coupled with an attention mechanism to weight the network output, was demonstrated to enhance the network's recognition efficiency, as corroborated by the study of M. Tian et al. [43]. Therefore, the modulation pattern recognition method for wireless communication automated systems based on the IABLN algorithm proposed in the study is both reasonable and effective.

The performance comparison of the various methods on the RML2016 and CSPB.ML2018 datasets indicated that the number of parameters of a method was not consistently correlated with its actual inference time. Compared with LSTM, ResNet, and the methods of references [12] and [18], the IABLN algorithm had more parameters yet demonstrated a notable advantage in inference time. Y. Zang et al. studied data-driven fiber optic models using a BiLSTM with an attention mechanism and found that, despite a high parameter count and a complex modulation format, their model achieved a faster prediction speed. This indicated that strengthening the contextual links of the temporal network enhances the recognition speed of the IABLN, while the soft attention mechanism's weighting of the temporal network output accelerates the algorithm's training, thereby improving the overall performance of the modulation pattern recognition method.

It is important to note that both the RML2016 and CSPB.ML2018 datasets used in the study are fully labeled. Deep learning algorithms generally depend on the sample size of the data, and limited training data may degrade an algorithm's performance. Given the limited data volumes in practical applications, the study proposes introducing data augmentation in subsequent work, extending the data volume by adding noise or generating new samples so that the algorithm performs more effectively. To address the imbalanced distribution of actual modulation patterns, the study proposes introducing machine learning algorithms to enhance and optimize the IABLN algorithm in subsequent work; this could enable the identification and classification of imbalanced modulation patterns with multiple models or variants. Furthermore, because labeled samples are difficult to acquire in real-world environments, the study primarily conducted performance validation on publicly available datasets. Given the intricate nature of automatic modulation recognition, numerous avenues remain for further research and optimization. These shortcomings are presented to facilitate the advancement of wireless communication technology, to the benefit of human society, through an in-depth exploration of signal modulation identification methods.

6. Conclusion

To improve the accuracy of modulation pattern recognition in automatic wireless communication systems, a signal modulation recognition method using the IABLN algorithm was designed. First, a BiLSTM was introduced on the basis of the LSTM to enhance the contextual connection of the time series network, and a convolutional layer fit the input signal features to reduce the length of the input stream, constructing the IBLSTM time series network. Then, a soft attention mechanism was used to weight the output of the temporal network. Experimental verification on the RML2016 modulation dataset showed that the recognition accuracy of the proposed IBLSTM time series network increased by 2.34% on average compared with other methods. After adding the attention mechanism, the F1 value of the time series network increased by 3.26%, OA by 10.34%, AA by 8.33%, and MA by 3.33%. Compared with other recognition methods, the recognition accuracy of the proposed IABLN was as high as 92.84%, its response time was only 1.57 ms, and its KC increased by 12.90% on average. On the CSPB.ML2018 dataset, the inference time of the IABLN algorithm was also reduced by an average of 79.63% compared with DesNet, ResNet, and CNN. The results showed that the proposed IABLN-based method achieved higher recognition accuracy at each SNR and faster inference with a modest number of parameters, and its recognition performance was superior to that of other recognition methods. However, some shortcomings remain: the simulation experiments were mainly verified on public datasets, and modulation signal experiments in a real environment were not carried out.
In the future, this study will further explore how to obtain more accurate data from real environments for robustness verification of the recognition method, so as to improve the application value of the modulation pattern recognition method.

Supporting information

S1 Dataset. Minimal data set definition.

(DOC)


Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1.Singh R., Goel A., and Raghuvanshi D. K., "Computer-aided diagnostic network for brain tumor classification employing modulated Gabor filter banks," Visual Comput., vol. 37, no. 8, pp. 2157–2171, Aug. 2021, doi: 10.1007/s00371-020-01977-4 [DOI] [Google Scholar]
  • 2.Zhang H., Nie R., Lin M., Wu R., Xian G., Gon X., and Luo R., "A deep learning based algorithm with multi-level feature extraction for automatic modulation recognition," Wirel. Netw., vol. 27, no. 7, pp. 4665–4676, Oct. 2021, doi: 10.1007/s11276-021-02758-0 [DOI] [Google Scholar]
  • 3.Lv Z., Singh A. K., and Li J., "Deep learning for security problems in 5G heterogeneous networks," IEEE Netw., vol. 35, no. 2, pp. 67–73, Apr. 2021, doi: 10.1109/MNET.011.2000229 [DOI] [Google Scholar]
  • 4.Ghanem H. S., Al-Makhlasawy R. M., El-Shafai W., Elsabrouty M., Hamed H. F., Salama G. M., and El-Samie F. E. A., "Wireless modulation classification based on radon transform and convolutional neural networks," J. Ambient Intell. Hum. Comput., vol. 14, no. 5, pp. 6263–6272, May. 2023, doi: 10.1007/s12652-021-03650-7 [DOI] [Google Scholar]
  • 5.Wright L. G., Onodera T., Stein M. M., Wang T., Schachter D. T., Hu Z., and McMahon P. L., "Deep physical neural networks trained with backpropagation," Nature, vol. 601, no. 7894, pp. 549–555, Jan. 2022, doi: 10.1038/s41586-021-04223-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Bhosle K. and Musande V., "Evaluation of deep learning CNN model for recognition of devanagari digit," Artif. Intell. Appl., vol. 1, no. 2, pp. 114–118, Feb. 2023, doi: 10.47852/bonviewAIA3202441 [DOI] [Google Scholar]
  • 7.Liu F., Zhang Z., and Zhou R. "Automatic modulation recognition based on CNN and GRU," Tsinghua Sci. Technol., vol. 27, no. 2, pp. 422–431, Apr. 2022, doi: 10.26599/TST.2020.9010057 [DOI] [Google Scholar]
  • 8.Zheng Q., Zhao P., Li Y., Wang H., and Yang Y., "Spectrum interference-based two-level data augmentation method in deep learning for automatic modulation classification," Neural. Comput. Appl., vol. 33, no. 13, pp. 7723–7745, Jul. 2021, doi: 10.1007/s00521-020-05514-1 [DOI] [Google Scholar]
  • 9.Khan R., Yang Q., Ullah I., Rehman A. U., Tufail A. B., Noor A., and Cengiz K., "3D convolutional neural networks based automatic modulation classification in the presence of channel noise," IET Commun., vol. 16, no. 5, pp. 497–509, Aug. 2022, doi: 10.1049/cmu2.12269 [DOI] [Google Scholar]
  • 10.Chang S., Huang S., Zhang R., Feng Z., and Liu L., "Multitask-learning-based deep neural network for automatic modulation classification," IEEE Internet Things J., vol. 9, no. 3, pp. 2192–2206, Feb. 2021, doi: 10.1109/JIOT.2021.3091523 [DOI] [Google Scholar]
  • 11.Sun Y. and Ball E. A., "Automatic modulation classification using techniques from image classification," IET Commun., vol. 16, no. 11, pp. 1303–1314, Jan. 2022, doi: 10.1049/cmu2.12335 [DOI] [Google Scholar]
  • 12.Rashvand N., Witham K., Maldonado G., Katariya V., Marer Prabhu N., Schirner G., and Tabkhi H., "Enhancing automatic modulation recognition for IoT applications using transformers," IoT, vol. 5, no. 2, pp. 212–226, Apr. 2024, doi: 10.3390/iot5020011 [DOI] [Google Scholar]
  • 13.Ghanem H. S., Al-Makhlasawy R. M., El-Shafai W., Elsabrouty M., Hamed H. F., Salama G. M., and El-Samie F. E. A., "Wireless modulation classification based on Radon transform and convolutional neural networks," J Amb Intel Hum Comp, vol. 14, no. 5, pp. 6263–6272, May. 2023, doi: 10.1007/s12652-021-03650-7 [DOI] [Google Scholar]
  • 14.Njoku J. N., Morocho-Cayamcela M. E., and Lim W., "CGDNet: Efficient hybrid deep learning model for robust automatic modulation recognition," IEEE Net Le., vol.3, no. 2, pp. 47–51, Jun. 2021, doi: 10.1109/LNET.2021.3057637 [DOI] [Google Scholar]
  • 15.Lin S., Zeng Y., and Gong Y., "Learning of time-frequency attention mechanism for automatic modulation recognition," IEEE Wireless Commun. Le., vol. 11, no. 4, pp. 707–711, Apr. 2022, doi: 10.1109/LWC.2022.3140828 [DOI] [Google Scholar]
  • 16.Tu Y., Lin Y., Zha H. R., Zhang J., Wang Y., Gui G., and Mao S. W., "Large-scale real-world radio signal recognition with deep learning," Chin. J. Aeronaut., vol. 35, no. 9, pp. 35–48, Sep. 2022, doi: 10.1016/j.cja.2021.08.016 [DOI] [Google Scholar]
  • 17.Ashtiani F., Geers A. J., and Aflatouni F., "An on-chip photonic deep neural network for image classification," Nature, vol. 606, no. 7914, pp. 501–506, Jun. 2022, doi: 10.1038/s41586-022-04714-0 [DOI] [PubMed] [Google Scholar]
  • 18.Shah A. H., Miry A. H., and Salman T. M., "Automatic modulation classification using deep learning polar feature," J. Eng. Sust. Dev., vol. 27, no. 4, pp. 477–486, Jul. 2023, doi: 10.31272/jeasd.27.4.5 [DOI] [Google Scholar]
  • 19.Venkatramanan M. and Chinnadurai M., "Modelling an enhanced modulation classification approach using arithmetic optimization with deep learning for MIMO-OFDM systems," Meas. Sci. Rev., vol. 24, no. 2, pp. 47–53, Apr. 2024, doi: 10.2478/msr-2024-0007 [DOI] [Google Scholar]
  • 20.Hamayel M. J. and Owda A. Y., "A novel cryptocurrency price prediction model using GRU, LSTM and bi-LSTM machine learning algorithms," AI, vol.2, no. 4, pp. 477–496, Oct. 2021, doi: 10.3390/ai2040030 [DOI] [Google Scholar]
  • 21.ArunKumar K. E., Kalaga D. V., Kumar C. M. S., Kawaji M., and Brenza T. M., "Comparative analysis of gated recurrent units (GRU), long short-term memory (LSTM) cells, autoregressive integrated moving average (ARIMA), seasonal autoregressive integrated moving average (SARIMA) for forecasting COVID-19 trends," Alex. Eng. J., vol. 61, no. 10, pp. 7585–7603, Oct. 2022, doi: 10.1016/j.aej.2022.01.011 [DOI] [Google Scholar]
  • 22.Li Y., Qi Y., Shi Y., Chen Q., Cao N., and Chen S., "Diverse interaction recommendation for public users exploring multi-view visualization using deep learning," IEEE T. Vis. Comput. Gr., vol. 29, no. 1, pp. 95–105, Jan. 2023, doi: 10.1109/TVCG.2022.3209461 [DOI] [PubMed] [Google Scholar]
  • 23.Liu Y., Wu H., Rezaee K., Khosravi M. R., Khalaf O. I., Khan A. A., and Qi L. "Interaction-enhanced and time-aware graph convolutional network for successive point-of-interest recommendation in traveling enterprises," IEEE T. Ind. Infor., vol. 19, no. 1, pp. 635–643, Jan. 2023, doi: 10.1109/TII.2022.3200067 [DOI] [Google Scholar]
  • 24.Dileep P., Rao K. N., Bodapati P., Gokuruboyina S., Peddi R., Grover A., and Sheetal A., "An automatic heart disease prediction using cluster-based bi-directional LSTM (C-BiLSTM) algorithm," Neural. Comput. Appl., vol. 35, no. 10, pp. 7253–7266, Apr. 2023, doi: 10.1007/s00521-022-07064-0 [DOI] [Google Scholar]
  • 25.Ma C., Lin C., Samuel O. W., Guo W., Zhang H., Greenwald S., and Li G. "A bi-directional LSTM network for estimating continuous upper limb movement from surface electromyography," IEEE Robot. Autom. Lett., vol. 6, no. 4, pp. 7217–7224, Oct. 2021, doi: 10.1109/LRA.2021.3097272 [DOI] [Google Scholar]
  • 26.Rhanoui M., Mikram M., Yousfi S., and Barzali S. "A CNN-BiLSTM model for document-level sentiment analysis," Mach. Learn. Knowl. Extr., vol. 1, no. 3, pp. 832–847, Jul. 2019, doi: 10.3390/make1030048 [DOI] [Google Scholar]
  • 27.Yang Y., Xiong Q., Wu C., Zou Q., Yu Y., Yi H., and Gao M., "A study on water quality prediction by a hybrid CNN-LSTM model with attention mechanism," Environ. Sci. Pollut. R., vol. 28, no. 39, pp. 55129–55139, Oct. 2021, doi: 10.1007/s11356-021-14687-8 [DOI] [PubMed] [Google Scholar]
  • 28.Lin L., Li W., Bi H., and Qin L., "Vehicle trajectory prediction using LSTMs with spatial–temporal attention mechanisms," IEEE Intel. Transp. Sy., vol. 14, no. 2, pp. 197–208, Apr. 2021, doi: 10.1109/MITS.2021.3049404 [DOI] [Google Scholar]
  • 29.Guo M. H., Xu T. X., Liu J. J., Liu Z. N., Jiang P. T., Mu T. J., and Hu S. M., "Attention mechanisms in computer vision: A survey," Comput. Vis. Media, vol. 8, no. 3, pp. 331–368, Sep. 2022, doi: 10.1007/s41095-022-0271-y [DOI] [Google Scholar]
  • 30.Zhang L., Wang B., Yuan X., and Liang P., "Remaining useful life prediction via improved CNN, GRU and residual attention mechanism with soft thresholding," IEEE Sen. J., vol. 22, no. 15, pp. 15178–15190, Aug. 2022, doi: 10.1109/JSEN.2022.3185161 [DOI] [Google Scholar]
  • 31.Brauwers G. and Frasincar F., "A general survey on attention mechanisms in deep learning," IEEE T. Knowl. Data. En., vol. 35, no. 4, pp. 3279–3298, Apr. 2023, doi: 10.1109/TKDE.2021.3126456 [DOI] [Google Scholar]
  • 32.Su M., Yang Q., Du Y., Feng G., Liu Z., Li Y., and Wang R., "Comparative assessment of scoring functions: the CASF-2016 update," J. Chem. Inf. Model., vol. 59, no. 2, pp. 895–913, Nov. 2018, doi: 10.1021/acs.jcim.8b00545 [DOI] [PubMed] [Google Scholar]
  • 33.Jahan I., Ahmed M. F., Ali M. O., and Jang Y. M., “Self-gated rectified linear unit for performance improvement of deep neural networks,” ICT Express, vol. 9, no. 3, pp. 320–325, Jun. 2023, doi: 10.1016/j.icte.2021.12.012 [DOI] [Google Scholar]
  • 34.Crnjanski J., Krstić M., Totović A., Pleros N., and Gvozdić D., "Adaptive sigmoid-like and PReLU activation functions for all-optical perceptron," Opt. Lett., vol. 46, no. 9, pp. 2003–2006, Sep. 2021, doi: 10.1364/OL.422930 [DOI] [PubMed] [Google Scholar]
  • 35.Daubechies I., DeVore R., Foucart S., Hanin B., and Petrova G., "Nonlinear approximation and (deep) ReLU networks," Constr. Approx., vol. 55, no. 1, pp. 127–172, Feb. 2022, doi: 10.1007/s00365-021-09548-z [DOI] [Google Scholar]
  • 36.Liu D., Tian Y., Zhang Y., Gelernter J., and Wang X., "Heterogeneous data fusion and loss function design for tooth point cloud segmentation," Neural. Comput. Appl., vol. 34, no. 20, pp. 17371–17380, May, 2022, doi: 10.1007/s00521-022-07379-y [DOI] [Google Scholar]
  • 37.Luo X., Li J., Chen M., Yang X., and Li X., "Ophthalmic disease detection via deep learning with a novel mixture loss function," IEEE J. Biomed. Health Inf., vol. 25, no. 9, pp. 3332–3339, Sep. 2021, doi: 10.1109/JBHI.2021.3083605 [DOI] [PubMed] [Google Scholar]
  • 38.Shehzad F., Islam M., Omar M., Shah S., Ahmed R., and Sohail N., "Optimizations of modified machine learning algorithms using k-fold cross validations for wheat productivity: A hyper parametric approach," SJA, vol. 38, no. 5, pp. 271–278, Nov. 2022, doi: 10.17582/journal.sja/2022/38.5.271.278 [DOI] [Google Scholar]
  • 39.Marcot B. G. and Hanea A. M., "What is an optimal value of k in k-fold cross-validation in discrete Bayesian network analysis?" Computa. Stat., vol. 36, no. 3, pp. 2009–2031, Sep. 2021, doi: 10.1007/s00180-020-00999-9 [DOI] [Google Scholar]
  • 40.Xiong J., Luo J., Bian J., and Wu J. "Overall diagnostic accuracy of different MR imaging sequences for detection of dysplastic nodules: a systematic review and meta-analysis," Eur. Radiol., vol. 32, no. 2, pp. 1285–1296, Feb. 2022, doi: 10.1007/s00330-021-08022-5 [DOI] [PubMed] [Google Scholar]
  • 41.Kolesnyk A. S. and Khairova N. F., "Justification for the use of cohen’s kappa statistic in experimental studies of NLP and text mining," Cybernet. Syst. Ana, vol. 58, no. 2, pp. 280–288, March, 2022, doi: 10.1007/s10559-022-00460-3 [DOI] [Google Scholar]
  • 42.Mahmood Y., Kama N., Azmi A., Khan A. S., and Ali M. "Software effort estimation accuracy prediction of machine learning techniques: A systematic performance evaluation," SPE, vol. 52, no. 1, pp. 39–65, 2022, doi: 10.1002/spe.3009 [DOI] [Google Scholar]
  • 43.Tian M., Dong H., Cao X., and Yu K., "Temporal convolution network with a dual attention mechanism for φ-OTDR event classification," Appl. Optics., vol. 61, no. 20, pp. 5951–5956, Apr. 2022, doi: 10.1364/AO.458736 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Salim Heddam

15 Aug 2024

PONE-D-24-21497: Modulation Pattern Recognition Method of Wireless Communication Automatic System Based on IABLN Algorithm in Intelligent System (PLOS ONE)

Dear Dr. Han,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 29 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Salim Heddam

Academic Editor

PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. We note that your Data Availability Statement is currently as follows: "All relevant data are within the manuscript and its Supporting Information files."

Please confirm at this time whether or not your submission contains all raw data required to replicate the results of your study. Authors must share the “minimal data set” for their submission. PLOS defines the minimal data set to consist of the data required to replicate all study findings reported in the article, as well as related metadata and methods (https://journals.plos.org/plosone/s/data-availability#loc-minimal-data-set-definition).

For example, authors should submit the following data:

- The values behind the means, standard deviations and other measures reported;

- The values used to build graphs;

- The points extracted from images for analysis.

Authors do not need to submit their entire data set if only a portion of the data was used in the reported study.

If your submission does not contain these data, please either upload them as Supporting Information files or deposit them to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories.

If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. If data are owned by a third party, please indicate how others may request data access.

4. We notice that your supplementary figures are included in the manuscript file. Please remove them and upload them with the file type 'Supporting Information'. Please ensure that each Supporting Information file has a legend listed in the manuscript after the references list.

5. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Additional Editor Comments:

Reviewer 1#:(1)The added network model must be analyzed for performance evaluation.

(2)The contribution of this work must be clearly stated in the introduction section.

(3)In the last paragraph of Section 1, the section number has a problem.

(4)The computational complexity of the proposed method should be analyzed and compared with that of other representative methods.

(5)Use different datasets to verify your method.

Reviewer 2#:1 Clarify your writing, e.g., define IABLN at its first use and specify the exact version of the dataset.

2 Focus on introducing and elaborating further on the interactive operation part.

3 The comparison with state-of-the-art methods needs to be improved.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (1)The added network model must be analyzed for performance evaluation.

(2)The contribution of this work must be clearly stated in the introduction section.

(3)In the last paragraph of Section 1, the section number has a problem.

(4)The computational complexity of the proposed method should be analyzed and compared with that of other representative methods.

(5)Use different datasets to verify your method.

Reviewer #2: 1 Clarify your writing, e.g., define IABLN at its first use and specify the exact version of the dataset.

2 Focus on introducing and elaborating further on the interactive operation part.

3 The comparison with state-of-the-art methods needs to be improved.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2025 Jan 13;20(1):e0317355. doi: 10.1371/journal.pone.0317355.r002

Author response to Decision Letter 0


2 Oct 2024

The manuscript has been revised according to the comments.

Attachment

Submitted filename: Response to reviewers comments.doc

pone.0317355.s002.doc (40.1KB, doc)

Decision Letter 1

Salim Heddam

16 Oct 2024

PONE-D-24-21497R1

Modulation Pattern Recognition Method of Wireless Communication Automatic System Based on IABLN Algorithm in Intelligent System

PLOS ONE

Dear Dr. Han,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Nov 30 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Salim Heddam

Academic Editor

PLOS ONE

Additional Editor Comments:

Reviewer 1#:The authors have completed the paper revisions according to the reviewers' comments. It can be accepted for publication.

Reviewer 2#:Justify the rationale of the proposed method, i.e., what are the temporal characteristics of modulation data? (Do not illustrate with voice data as in the paper.)


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have completed the paper revisions according to the reviewers' comments. It can be accepted for publication.

Reviewer #2: Justify the rationale of the proposed method, i.e., what are the temporal characteristics of modulation data? (Do not illustrate with voice data as in the paper.)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2025 Jan 13;20(1):e0317355. doi: 10.1371/journal.pone.0317355.r004

Author response to Decision Letter 1


10 Dec 2024

The manuscript has been modified according to comments.

Thank you very much!

Attachment

Submitted filename: Response to comments.docx

pone.0317355.s003.docx (10.9KB, docx)

Decision Letter 2

Salim Heddam

27 Dec 2024

Modulation Pattern Recognition Method of Wireless Communication Automatic System Based on IABLN Algorithm in Intelligent System

PONE-D-24-21497R2

Dear Dr. Han,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager at Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Salim Heddam

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewer 2#: A two-way interactive temporal network is designed on the basis of the long and short-term memory network with the objective of enhancing the contextual connection of the temporal network. The output of the temporal network is attentively weighted using the soft attention mechanism. The proposed algorithm exhibited enhanced overall, average, and maximum recognition rates at varying signal-to-noise ratios.

I suggest accept.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: A two-way interactive temporal network is designed on the basis of the long and short-term memory network with the objective of enhancing the contextual connection of the temporal network. The output of the temporal network is attentively weighted using the soft attention mechanism. The proposed algorithm exhibited enhanced overall, average, and maximum recognition rates at varying signal-to-noise ratios.

I suggest accept.
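As an illustration only (not the authors' implementation, whose parameters and shapes are not given in this letter), the soft-attention weighting the reviewer summarizes can be sketched in NumPy: `H` stands in for the Bi-LSTM's per-time-step output vectors, and `W` and `v` are hypothetical learned attention parameters.

```python
import numpy as np

def soft_attention(hidden_states, W, v):
    """Soft attention over temporal hidden states.

    hidden_states: (T, d) array, e.g. concatenated Bi-LSTM outputs.
    Returns the attention-weighted context vector (d,) and weights (T,).
    """
    scores = np.tanh(hidden_states @ W) @ v          # one score per time step, shape (T,)
    scores = scores - scores.max()                   # subtract max for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over time steps
    context = weights @ hidden_states                # weighted sum of hidden states
    return context, weights

rng = np.random.default_rng(0)
T, d = 16, 8                       # 16 time steps, 8-dim hidden states (illustrative sizes)
H = rng.normal(size=(T, d))        # stand-in for Bi-LSTM outputs
W = rng.normal(size=(d, d))        # hypothetical attention projection
v = rng.normal(size=(d,))          # hypothetical attention query vector
context, weights = soft_attention(H, W, v)
```

The softmax guarantees the weights are non-negative and sum to one, so the context vector is a convex combination of the temporal hidden states; this is the general soft-attention pattern, not the paper's exact layer.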

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

**********

Acceptance letter

Salim Heddam

3 Jan 2025

PONE-D-24-21497R2

PLOS ONE

Dear Dr. Han,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Salim Heddam

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Dataset. Minimal data set definition.

    (DOC)

    pone.0317355.s001.doc (182KB, doc)
    Attachment

    Submitted filename: Response to reviewers comments.doc

    pone.0317355.s002.doc (40.1KB, doc)
    Attachment

    Submitted filename: Response to comments.docx

    pone.0317355.s003.docx (10.9KB, docx)

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting Information files.


    Articles from PLOS ONE are provided here courtesy of PLOS
